2019-10-08 Neal Gompa <ngompa13@gmail.com>

	* Fix confusing license header to clarify licensing
	* Fix Python 3 compatibility with urlgrabber-ext-down
	* Support HTTP CONNECT with reget. BZ 1585596
	* Fix for usage of _levelNames from logging module
	* Fix issue when URLGRABBER_DEBUG is not an integer on Python 3
	* Revise setup.py to remove need for extra setup-time dependencies
	* setuptools: Update Development Status to "Production/Stable"
	* Bump version to 4.1.0

2019-02-25 Neal Gompa <ngompa13@gmail.com>

	* Port to Python 3
	* Add curl_obj option to grabber
	* Throw an obvious error message when urlgrabber-ext-down
	  is missing when attempting to use external downloader
	* Use setuptools for setup.py instead of distutils
	* bump version to 4.0.0

2017-02-02 Valentina Mukhamedzhanova <vmukhame@redhat.com>

	* Add no_cache and retry_no_cache options.
	* Work around pycurl dependency in setup.py.
	* Don't set speed=0 on a new mirror that 404'd.
	* Add a comprehensive error message to pycurl error 77.
	* Don't crash on timedhosts parsing error.
	* bump version to 3.10.2

2013-10-09  Zdenek Pavlas <zpavlas@redhat.com>

	* lots of enhancements and bugfixes
	  (parallel downloading, mirror profiling, new options)
	* updated authors, url
	* updated unit tests
	* bump version to 3.10

2009-09-25  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/__init__.py: bump version to 3.9.1

2009-09-25  Seth Vidal <skvidal@fedoraproject.org>

	* makefile: clean up everything in make clean

2009-09-25  Seth Vidal <skvidal@fedoraproject.org>

	* test/runtests.py, test/test_grabber.py, test/test_keepalive.py,
	urlgrabber/__init__.py, urlgrabber/byterange.py,
	urlgrabber/grabber.py, urlgrabber/keepalive.py,
	urlgrabber/mirror.py, urlgrabber/progress.py,
	urlgrabber/sslfactory.py: clean up all the old urlgrabber urllib
	code that's not being used; delete sslfactory and keepalive; fix up
	the unit tests to match the existing code

2009-09-24  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: update documentation for ssl options and
	size/max_header_size options

2009-09-23  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- fix the reget test cases (and regets in general) with the max
	  size check
	- make the error code more obvious when we go over the range
	- obviously don't do the check if all of our max values are None
	  (or even 0, since that is a silly number for a max)

2009-09-22  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: handle endless-data problems safely: "A
	malicious server could cause libcurl to download an infinite amount
	of data, potentially causing all of memory or disk to be filled.
	Setting the CURLOPT_MAXFILESIZE_LARGE option is not sufficient to
	guard against this. Instead, the app should monitor the amount of
	data received within the write or progress callback and abort once
	the limit is reached." Had to restructure a good bit of the error
	handling to do this but it works for both endless headers and
	endless content.

2009-09-21  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: make sure the value we get back from
	parse150 and other calls is converted to an int before we make it
	'size'. rhbug: #524705

2009-09-02  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: make file:// url not found msgs clearer and
	hopefully fix a couple of ctrl-c issues.

2009-08-27  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: make proxy=_none_ properly disable all
	proxies as per the docs

2009-08-14  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- add full contingent of ssl options: client keys, client certs,
	  capath/cainfo, client key passwords, client key and cert types,
	  verifypeer/verifyhost
	- add a number of common errors to do_perform()
	- when an error is unknown and doesn't make sense, report the
	  complete pycurl error code
	- when the filename is '' and not None and we're doing a urlgrab(),
	  try to open the file anyway rather than silently swallowing the
	  data into a StringIO and discarding it

2009-08-13  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: add _to_utf8() method to
	PyCurlFileObject; make sure postfield data is to_utf8'd before
	setting the option, otherwise pycurl is unhappy if the postfield
	data is a unicode object instead of a string object. closes rh bug
	https://bugzilla.redhat.com/show_bug.cgi?id=515797

2009-08-12  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: initial pass at setting more advanced ssl
	options. verify peer and verify host work as expected.

2009-08-07  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: keep from making tmpfiles all over /tmp on
	any local file:// urlopen() by doing it in StringIO instead of
	mkstemp().  Sort of fixes
	https://bugzilla.redhat.com/show_bug.cgi?id=516178

2009-08-06  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- fix interrupt handler and document why KeyboardInterrupt is going
	  to be so weird in pycurl
	- disable signals and make sure we don't handle/intercept any in
	  the pycurl code
	- set 'check_timestamp' regets as NotImplemented. The workaround is
	  multiple connections. It is possible but not immediately useful
	  since, afaict, NOTHING uses the check_timestamp regets.

2009-08-05  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- make sure regets work when our filename is unicode
	- make sure we are not resetting self.append = False when we don't
	  need to

2009-08-05  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- make sure we tell pycurl to get the filetime when downloading
	- set a couple of options as True/False instead of 1/0, for
	  readability
	- make sure the option passed to timeout is an int, not a string

2009-08-04  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: missed setting the value from opts.timeout
	- doesn't really HURT what will happen because if your connect
	takes longer than 5 minutes then you're SCREWED

2009-08-04  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: handle timeouts more correctly (with the
	exception) and set timeouts to be connect timeouts since libcurl
	seems to actually honor timeouts - as opposed to urllib. closes rh
	bug # 515497

2009-07-31  Seth Vidal <skvidal@fedoraproject.org>

	* ChangeLog, makefile, urlgrabber/__init__.py: changelog + release
	date touchup

2009-07-31  Seth Vidal <skvidal@fedoraproject.org>

	* makefile: add a few more things to be cleaned out

2009-07-31  Seth Vidal <skvidal@fedoraproject.org>

	* ChangeLog: update changelog

2009-07-31  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- make readlines() work for mirrorlists in yum (which probably
	  shouldn't be using it anyway)
	- do a _do_grab() in _do_open(), which may or may not be a good
	  idea - I could also make the _do_grab() happen when someone
	  attempts to hit a method beyond the file object open

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: - make basic posts work

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* maint/git2cl: add git2cl

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: when I first started this I hacked
	something into URLGrabberFileObject - this reverts that hack

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* ChangeLog, maint/cvs2cl.pl, maint/usermap, test/runtests.py,
	urlgrabber/__init__.py:
	- clean up some unused files
	- update the changelog
	- bump the version
	- update the copyright in a couple of places

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* MANIFEST.in, makefile:
	- make makefile work again without using cvs
	- add makefile to MANIFEST.in

2009-07-30  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- make simple/most proxies work
	- remove unnecessary 'have_range' check for pycurl obj

2009-07-29  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py:
	- add range support
	- get rid of the .part file thing - it makes range-regets harder
	  than they need to be
	- make sure regets behave

2009-07-29  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: implement throttle/bandwidth controls in
	pycurl; tested with the progress callback - seems to work very well

2009-07-29  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: get the content-length/size for ftp pkgs
	too - steals parse150 from ftplib. Should work for a lot of ftp
	servers, but not all of them. Add self.scheme for which protocol
	we're using here.

2009-07-29  James Antill <james@and.org>

	* urlgrabber/byterange.py: Import fix for ftp ports in old urllib
	code (probably worthless now, but meh)

2009-07-29  James Antill <james@and.org>

	* urlgrabber/progress.py: Import progress patches from Fedora.
	These were done over a couple of years:
	- clean up UI
	- dynamic terminal widths
	- deal with serial console
	- total download stuff

2009-07-28  Seth Vidal <skvidal@fedoraproject.org>

	* test/runtests.py, urlgrabber/grabber.py: implement
	PyCurlFileObject. This makes it the default and forklifts all the
	code to pycurl. This is not finished but is functional for a
	significant number of the tests. Things known to be broken:
	- proxies
	- http POST
	- non-header-based byte-ranges
	- certain types of read operations when downloading a file to
	  memory instead of to a filename

2009-05-15  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: make it use *args instead of silly if
	statements

2009-05-15  Seth Vidal <skvidal@fedoraproject.org>

	* urlgrabber/grabber.py: modify URLGrabError so it has a url
	attribute and includes the url in all error messages.

2006-12-12  mstenner <mstenner>

	* urlgrabber/grabber.py: more debugging code to expose options

2006-12-08  mstenner <mstenner>

	* scripts/urlgrabber, test/test_grabber.py, urlgrabber/grabber.py,
	urlgrabber/keepalive.py: lots of changes... improved clarity of
	cached objects, improved debugging and logging, more options to the
	urlgrabber script.

2006-12-07  mstenner <mstenner>

	* scripts/urlgrabber, urlgrabber/grabber.py: Minor doc updates and
	error handling in grabber.py.  Complete rewrite of the urlgrabber
	script.

2006-12-05  mstenner <mstenner>

	* Minor fix to make byteranges work with some servers.  _do_grab now
	only reads as much as it needs to, rather than reading until the
	server sends EOF.