Commit log: message | author | date | files changed | lines changed (-/+)
* test for https://github.com/eventlet/eventlet/pull/485 [485-https-noverify-env] (Sergey Shepelev, 2020-10-19, 2 files, -2/+39)
* Ensure eventlet SSL HTTPS contexts allow HTTPS verify disabled (Sam Betts, 2020-10-19, 1 file, -1/+11)

  Python SSL supports a couple of different ways to disable HTTPS
  verification: via an environment variable, or via the methods defined in
  PEP 493. To ensure these work, we must call the original
  _create_default_https_context function so that we use the correct default
  HTTPS context (verified or unverified) as set by the HTTPS context
  factory. Fixes #484
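The mechanism behind this fix can be sketched with the stdlib alone: Python chooses between a verifying and a non-verifying default HTTPS context by swapping the `ssl._create_default_https_context` factory attribute (this is what `PYTHONHTTPSVERIFY=0` and the PEP 493 hooks toggle). A minimal illustration, assuming CPython's private `_create_unverified_context` is available:

```python
import ssl

# The stdlib keeps two context constructors; ssl._create_default_https_context
# points at one of them, and env-var / PEP 493 hooks flip that pointer.
verified = ssl.create_default_context()
unverified = ssl._create_unverified_context()

print(verified.verify_mode == ssl.CERT_REQUIRED)  # True
print(unverified.verify_mode == ssl.CERT_NONE)    # True
print(unverified.check_hostname)                  # False
```

Because eventlet wraps this factory, it has to delegate to whichever constructor is currently installed rather than hard-coding the verifying one.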
* Clean up TypeError in __del__ (Tim Burke, 2020-09-29, 1 file, -1/+4)

  At the end of the py2 tests, we would see (ignored) errors like:

      Exception TypeError: "'NoneType' object is not callable"
      in <bound method _SocketDuckForFd.__del__ of _SocketDuckForFd:14> ignored
* v0.28.0 release [v0.28.0] (Sergey Shepelev, 2020-09-23, 2 files, -1/+5)
* Make remove more explicit (Clay Gerrard, 2020-09-23, 1 file, -16/+19)

  There was some concern about fixing hub.remove for secondary listeners
  because tests seemed to rely on the implicit robustness of hub.remove
  when a listener wasn't being tracked. It was discovered that socket
  errors can bubble up in the poll and select hubs, which results in all
  listeners being removed for that fileno before the listener is alerted
  (which would then *also* trigger a remove). Rather than making remove
  robust to being called twice, this change makes it be called only once.
* Always remove the right listener from the hub (Samuel Merritt, 2020-09-23, 2 files, -10/+63)

  When in hubs.trampoline(fd, ...), a greenthread registers itself as a
  listener for fd, switches to the hub, and then calls hub.remove(listener)
  to deregister itself. hub.remove(listener) removes the primary listener.
  If the greenthread awoke because its fd became ready, then it is the
  primary listener, and everything is fine. However, if the greenthread was
  a secondary listener and awoke because a Timeout fired, then it would
  remove the primary and promote a random secondary to primary. This commit
  makes hub.remove(listener) check that listener is the primary and, if it
  is not, remove the listener from the secondaries.
* v0.27.0 release [v0.27.0] (Sergey Shepelev, 2020-09-02, 2 files, -1/+6)
* Clean up threading book-keeping at fork when monkey-patched (Tim Burke, 2020-08-28, 3 files, -0/+83)

  Previously, if we patched threading then forked (or, in some cases, used
  the subprocess module), Python would log an ignored exception like:

      Exception ignored in: <function _after_fork at 0x7f16493489d8>
      Traceback (most recent call last):
        File "/usr/lib/python3.7/threading.py", line 1335, in _after_fork
          assert len(_active) == 1
      AssertionError:

  This comes down to threading in Python 3.7+ having an import side effect
  of registering an at-fork callback. When we re-import threading to patch
  it, the old (but still registered) callback still points to the old
  thread-tracking dict rather than the new dict that's actually doing the
  tracking. Now, register our own at-fork hook that will fix up the dict
  reference before threading's _after_fork runs and put it back afterwards.
  Closes #592
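The at-fork hook mentioned here builds on `os.register_at_fork()` (Python 3.7+), which lets callbacks run before the fork and after it in the parent and child. A minimal, hypothetical sketch of the callback ordering, not eventlet's actual hook:

```python
import os

events = []
os.register_at_fork(
    before=lambda: events.append('before'),           # parent, pre-fork
    after_in_parent=lambda: events.append('parent'),  # parent, post-fork
)

pid = os.fork()
if pid == 0:
    os._exit(0)  # child exits immediately; its own callbacks are separate
os.waitpid(pid, 0)
print(events)  # ['before', 'parent']
```

Eventlet's fix uses the `before`/`after_in_parent` slots to swap the stale thread-tracking dict out and back around threading's own `_after_fork` callback.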
* .travis.yml: Remove superfluous matrix selectors (John Vandenberg, 2020-08-19, 1 file, -3/+3)

  sudo: and dist: are no longer needed.
* Add Python 3.8 testing (John Vandenberg, 2020-08-19, 2 files, -1/+7)
* tests checking output were broken by Python 2 end-of-support warning [py27-warning] (Sergey Shepelev, 2020-08-19, 1 file, -2/+2)

  The previous behavior of ignoring DeprecationWarning is now the default
  in py2.7.
* backdoor: handle disconnects better (Tim Burke, 2020-07-31, 2 files, -8/+32)

  Previously, when a client quickly disconnected (causing a socket.error
  before the SocketConsole greenlet had a chance to switch), it would break
  us out of our accept loop, permanently closing the backdoor. Now, it will
  just break us out of the interactive session, leaving the server ready to
  accept another backdoor client. Fixes #570
* v0.26.1 release [v0.26.1] (Sergey Shepelev, 2020-07-31, 2 files, -1/+5)
* pin dnspython <2.0.0 (Peter Eacmen, 2020-07-31, 1 file, -1/+1)

  https://github.com/eventlet/eventlet/issues/619
  https://github.com/eventlet/eventlet/issues/629
* v0.26.0 release [v0.26.0] (Sergey Shepelev, 2020-07-30, 2 files, -1/+10)
* drop Python 3.4 support [drop-34] (Sergey Shepelev, 2020-07-06, 3 files, -10/+3)

  https://github.com/eventlet/eventlet/issues/623
* wsgi: Fix header capitalization on py3 (Tim Burke, 2020-07-02, 2 files, -2/+29)

  str.capitalize() and str.upper() respect Unicode capitalization rules on
  py3, while py2 just translates a-z to A-Z. At best, this may cause
  confusion and unexpected behaviors, such as when '\xdf' (a
  Latin-1-encoded ß) becomes 'SS'; at worst, it causes UnicodeEncodeErrors
  and the server fails to reply, such as when '\xff' (a Latin-1-encoded ÿ)
  becomes '\u0178', which does not map back into Latin-1. Now, convert
  everything to bytes before capitalizing so only a-z and A-Z are affected
  on both py2 and py3.
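The difference is easy to demonstrate: str methods follow Unicode case rules on py3, while bytes methods only ever touch a-z/A-Z. The helper below is a hypothetical sketch of the bytes-based approach, not eventlet's exact code:

```python
# Unicode rules can grow or remap characters...
print('\xdf'.upper())  # SS
print('\xff'.upper() == '\u0178')  # True -- no longer maps back into Latin-1

# ...while bytes.capitalize() leaves non-ASCII bytes untouched.
def capitalize_header(name):
    # hypothetical helper: capitalize each '-'-separated part as bytes
    parts = name.encode('latin-1').split(b'-')
    return b'-'.join(p.capitalize() for p in parts).decode('latin-1')

print(capitalize_header('content-length'))  # Content-Length
print(capitalize_header('x-\xdf-header'))   # X-ß-Header (the ß byte survives)
```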
* Fix compatibility with SSLContext usage >= Python 3.7 (James Page, 2020-07-02, 2 files, -11/+44)

  For SSL sockets created using the SSLContext class under Python >= 3.7,
  eventlet incorrectly passes the context as '_context' to the top-level
  wrap_socket function in the native ssl module. This causes:

      wrap_socket() got an unexpected keyword argument '_context'

  as the context cannot be passed this way. If a context is provided, use
  the underlying sslsocket_class to wrap the socket, mirroring the
  implementation of the wrap_socket method in the native SSLContext class.

  Fixes issue #526

  Co-authored-by: Tim Burke <tim.burke@gmail.com>
* tests: Increase timeout for test_isolate_from_socket_default_timeout (Michał Górny, 2020-07-01, 1 file, -1/+1)

  Increase the timeout used for test_isolate_from_socket_default_timeout
  from 1 second to 5 seconds. Otherwise, the test can't succeed on hardware
  where Python runs slower. In particular, on our SPARC box importing
  greenlet modules takes almost 2 seconds, so the test program does not
  even start properly. Fixes #614
* tests: Assume that nonblocking mode might set O_NDELAY, to fix SPARC (Michał Górny, 2020-07-01, 1 file, -1/+4)

  Fix test_set_nonblocking() to account for the alternative possible
  outcome that enabling non-blocking mode can set both O_NONBLOCK and
  O_NDELAY, as it does on SPARC. Note that O_NDELAY may be a superset of
  O_NONBLOCK, so we can't just filter it out of new_flags.
* tests: Unset O_NONBLOCK|O_NDELAY to fix SPARC (Michał Górny, 2020-07-01, 1 file, -3/+5)

  Fix TestGreenSocket.test_skip_nonblocking() to unset both O_NONBLOCK and
  O_NDELAY. This is necessary to fix tests on SPARC, where both flags are
  used simultaneously and unsetting only one is ineffective (the flags
  remain the same). This should not affect other platforms, where O_NDELAY
  is an alias for O_NONBLOCK.
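The portable way to clear non-blocking mode, per this commit, is to mask out both flags at once. A small sketch using only the stdlib:

```python
import fcntl
import os
import socket

s = socket.socket()
s.setblocking(False)  # sets O_NONBLOCK (and, on SPARC, O_NDELAY too)

flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)
# Clear both bits together: on SPARC they are distinct bits set jointly;
# elsewhere O_NDELAY is simply an alias for O_NONBLOCK.
fcntl.fcntl(s.fileno(), fcntl.F_SETFL, flags & ~(os.O_NONBLOCK | os.O_NDELAY))

new_flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)
print(new_flags & (os.O_NONBLOCK | os.O_NDELAY))  # 0
s.close()
```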
* tests: F_SETFL does not return flags, use F_GETFL again (Michał Górny, 2020-07-01, 1 file, -1/+2)

  Fix TestGreenSocket.test_skip_nonblocking() to call F_GETFL again to get
  the flags for the socket. Previously, the code wrongly assumed that
  F_SETFL returns the flags, while it always returns 0 (see fcntl(2)).
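fcntl(2) specifies that F_SETFL returns 0 on success, so the flag word must be re-read with F_GETFL. A quick demonstration:

```python
import fcntl
import os
import socket

s = socket.socket()
flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)

# F_SETFL reports success (0), not the new flag word.
ret = fcntl.fcntl(s.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
print(ret)  # 0

# To observe the change, F_GETFL must be called again.
print(bool(fcntl.fcntl(s.fileno(), fcntl.F_GETFL) & os.O_NONBLOCK))  # True
s.close()
```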
* Fix misc SyntaxWarnings under Python 3.8 (James Page, 2020-07-01, 1 file, -1/+1)

  Use '==' instead of 'is':

      eventlet/db_pool.py:78: SyntaxWarning: "is" with a literal. Did you mean "=="?
        if self.max_age is 0 or self.max_idle is 0:
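The warning exists because `is` tests object identity, not value equality; the original check only appeared to work because CPython interns small integers. A float zero, which a computed max_age can easily be, shows the difference:

```python
max_age = 0.0        # e.g. a timeout produced by float arithmetic
print(max_age == 0)  # True  -- value equality, which is what the check intends
print(max_age is 0)  # False -- different object (and a SyntaxWarning on 3.8+)
```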
* Remove unnecessary assignment in _recv_loop (#601) (Gorka Eguileor, 2020-05-15, 1 file, -1/+0)

  In _recv_loop we assign self.fd to the local variable fd but then never
  use it within the method, so we can remove it.
* tests: Fail on timeout when expect_pass=True (#612) (Tim Burke, 2020-05-15, 1 file, -0/+3)

  Previously, a bunch of tests that just call `tests.run_isolated(...)`
  (such as those at the end of patcher_test.py) might time out but not
  actually show any errors.
* Fix #508: Py37 Deadlock ThreadPoolExecutor (#598) (Gorka Eguileor, 2020-05-15, 3 files, -0/+28)

  Python 3.7 and later implement queue.SimpleQueue in C, causing a deadlock
  when using ThreadPoolExecutor with eventlet. To avoid this deadlock we
  now replace the C implementation with the Python implementation on
  monkey_patch for Python versions 3.7 and higher.
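The substitution can be sketched in a few lines; this is an illustration of the idea under the assumption of CPython 3.7+'s module layout, not eventlet's exact patching code:

```python
import queue

# CPython keeps the pure-Python implementation around as _PySimpleQueue,
# even when the C-accelerated version from _queue is importable. The
# Python version uses locks that monkey-patching can green.
if hasattr(queue, '_PySimpleQueue'):
    queue.SimpleQueue = queue._PySimpleQueue

q = queue.SimpleQueue()
q.put('item')
print(q.get())  # item
```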
* v0.25.2 release [v0.25.2] (Sergey Shepelev, 2020-04-09, 2 files, -1/+6)
* green.ssl: redundant set_nonblocking() caused SSLWantReadError (Sergey Shepelev, 2020-03-02, 1 file, -1/+0)

  https://github.com/eventlet/eventlet/issues/308
* release notes and version bump for 0.25.1 [v0.25.1] (David Szotten, 2019-08-21, 2 files, -1/+5)
* workaround for pathlib on py 3.7 (David Szotten, 2019-08-20, 2 files, -0/+20)

  Fixes #534.

  pathlib._NormalAccessor wraps `open` in `staticmethod` for py < 3.7 but
  not for 3.7. That means `Path.open` calls `green.os.open` with `file`
  being a pathlib._NormalAccessor object and the other arguments shifted.
  Fortunately pathlib doesn't use the `dir_fd` argument, so we have space
  in the parameter list. We use some heuristics to detect this and adjust
  the parameters (without importing pathlib).
* Stop using deprecated cgi.parse_qs() to support Python 3.8 (Miro Hrončok, 2019-07-10, 1 file, -1/+2)

  Fixes https://github.com/eventlet/eventlet/issues/580
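The replacement is essentially a rename: `cgi.parse_qs` had been deprecated in favor of the urllib version since Python 2.6, and the cgi module was later removed entirely (Python 3.13):

```python
from urllib.parse import parse_qs

# Same semantics as the deprecated cgi.parse_qs: repeated keys collect
# their values into a list.
print(parse_qs('a=1&a=2&b=3'))  # {'a': ['1', '2'], 'b': ['3']}
```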
* v0.25.0 release [v0.25.0] (Tim Burke, 2019-05-24, 2 files, -1/+20)
* wsgi: minimize API changes for 100-continue fix (#569) (Tim Burke, 2019-05-23, 1 file, -11/+14)

  In commit b9bf369778ec9798b3b6cffe59b7fd15f6159013, we stopped sending
  100 Continue responses in the middle of a response when the application
  was over-eager to start sending back bytes, but I did that by pulling
  the headers_sent state out of handle_one_response(), up into
  handle_one_request(), and plumbing it through to

  - get_environ(),
  - Input.__init__(), and finally
  - handle_one_response().

  This works, but it updates a whole bunch of HttpProtocol APIs in ways
  that consumers may not have been expecting. For example, if someone
  wanted to subclass HttpProtocol and override get_environ(), they may not
  have bothered to include *args and **kwargs to accommodate future API
  changes. That code should certainly be fixed, but we shouldn't break it
  gratuitously. Now, wait to introduce the headers_sent state until
  handle_one_response() once more, and push it directly into the request's
  Input. All the same protections with minimal API disruption.
* wsgi: Only send 100 Continue response if no response has been sent yet (#557) (Tim Burke, 2019-03-21, 2 files, -7/+80)

  Some applications may need to perform a long-running operation during a
  client-request cycle. To keep the client from timing out while waiting
  for the response, the application issues a status pro tempore, dribbles
  out whitespace (or some other filler) periodically, and expects the
  client to parse the final response to confirm success or failure.
  Previously, if the application was *too* eager and sent data before ever
  reading from the request body, we would write headers to the client,
  send that initial data, but then *still send the 100 Continue* when the
  application finally read the request. Since this would occur on a chunk
  boundary, the client could not parse the size of the next chunk, and
  everything went off the rails. Now, only be willing to send the 100
  Continue response if we have not yet sent headers to the client.
* wsgi: Return 400 on negative Content-Length request headers (#537) (Tim Burke, 2019-03-04, 2 files, -1/+10)

  We already return 400 for missing and non-integer Content-Lengths, and
  Input almost certainly wasn't intended to handle negative lengths. Be
  sure to close the connection, too -- we have no reason to think that the
  client's request framing is still good.
* #53: Make a GreenPile with no spawn()s an empty sequence. (#555) (nat-goodspeed, 2019-03-04, 2 files, -17/+29)

  Remove the GreenPile.used flag. This was probably originally intended to
  address the potential problem of GreenPool.starmap() returning a (still
  empty) GreenPile, a problem that has since been addressed by using
  GreenMap, which requires an explicit marker from the producer. Currently
  the only effect of GreenPile.used is to make GreenPile hang when used
  with an empty sequence. Test the empty GreenPile case.

  Even though GreenMap is probably not intended for general use, make it
  slightly more consumer-friendly by adding a done_spawning() method
  instead of requiring the consumer to spawn(return_stop_iteration).
  GreenPool._do_map() now calls done_spawning(). Remove
  return_stop_iteration(). Since done_spawning() merely spawns a function
  that returns StopIteration, any existing consumer that explicitly does
  the same will still work. However, this is potentially a breaking change
  if any consumers specifically reference
  eventlet.greenpool.return_stop_iteration() for that or any other
  purpose.

  Refactor GreenPile.next(), breaking out the bookkeeping detail to a new
  _next() method. Make subclass GreenMap call base-class _next(),
  eliminating the need to replicate that bookkeeping detail.
* Increase Travis slop factor for ZMQ CPU usage. (#542) (nat-goodspeed, 2019-03-04, 1 file, -1/+1)

  The comment in check_idle_cpu_usage() reads:

      # This check is reliably unreliable on Travis, presumably because of CPU
      # resources being quite restricted by the build environment. The workaround
      # is to apply an arbitrary factor that should be enough to make it work nicely.

  Empirically -- it's not. Over the last few months there have been many
  Travis "failures" that boil down to this one spurious error. Increase
  the slop factor from 1.2 to 5. If that's still unreliable, we may bypass
  this test entirely on Travis.
* wsgi: fix Input.readlines when dealing with chunked input (Tim Burke, 2019-02-28, 2 files, -1/+24)

  Previously, we pretended the input wasn't chunked and hoped for the
  best. On py2, this would give the caller the raw, chunk-encoded data;
  for some reason, on py3, this would hang. Now, readlines() behaves as
  expected.
* wsgi: fix Input.readline on Python 3 (Tim Burke, 2019-02-28, 2 files, -2/+15)

  Previously, we would compare the last item of a byte string with a
  newline in a native string. On Python 3, getting a single item from a
  byte string gives you an integer (which will never be equal to any
  string), so readline would return the entire request body. While we're
  at it, fix the return type when the caller requests that zero bytes be
  read.
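The py3 pitfall here is that indexing a bytes object yields an int; slicing is the type-preserving alternative:

```python
line = b'GET / HTTP/1.1\r\n'
print(line[-1])            # 10 -- an int (the byte value of '\n')
print(line[-1] == '\n')    # False -- an int never equals a str
print(line[-1:] == b'\n')  # True -- slicing keeps the bytes type
```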
* wsgi: Stop replacing invalid UTF-8 on py3 (Tim Burke, 2019-02-28, 2 files, -17/+19)

  For more context, see #467 and #497.

  On py3, urllib.parse.unquote() defaults to decoding via UTF-8 and
  replacing invalid UTF-8 sequences with "\N{REPLACEMENT CHARACTER}". This
  causes a few problems:

  - Since WSGI requires that bytes be decoded as Latin-1 on py3, we have
    to do an extra re-encode/decode cycle in encode_dance().
  - Applications written for Latin-1 are broken, as there are valid
    Latin-1 sequences that get mangled by the replacement.
  - Applications written for UTF-8 cannot differentiate between a
    replacement character that was intentionally sent by the client and an
    invalid byte sequence.

  Fortunately, unquote() allows us to specify the encoding that should be
  used. By specifying Latin-1, we can drop encode_dance() entirely and
  preserve as much information from the wire as we can.
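The key behavior is `unquote()`'s `encoding` parameter. With the documented defaults (UTF-8, errors='replace'), an invalid UTF-8 byte is lost to the replacement character; with Latin-1, every byte value round-trips:

```python
from urllib.parse import unquote

# Default: decode percent-escapes as UTF-8, replacing invalid sequences.
print(unquote('%ff') == '\ufffd')                    # True -- information lost
# Latin-1 maps every byte 0x00-0xff to a code point, so nothing is lost.
print(unquote('%ff', encoding='latin-1') == '\xff')  # True
```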
* Fix compatibility with Python 3.7 ssl.SSLSocket (#531) (Junyi, 2019-01-30, 2 files, -38/+52)
* [bug] reimport submodule as well in patcher.inject (#540) (Junyi, 2019-01-23, 6 files, -0/+37)

  - [dev] add unit test
  - [dev] move unit test to isolated tests
  - improve unit test
* maintainers list (Sergey Shepelev, 2018-12-24, 1 file, -1/+6)
* Issue #535: use Python 2 compatible syntax for keyword-only args. (#536) (nat-goodspeed, 2018-12-20, 1 file, -3/+13)

  Validate that encode_chunked is the *only* keyword argument passed.
* wsgi: Catch and swallow IOErrors during discard() (#532) (Tim Burke, 2018-12-20, 2 files, -0/+33)
* Fix for Python 3.7 (#506) (Marcel Plch, 2018-09-28, 5 files, -13/+59)

  - Remove redundant piece of code.
  - Put back do_handshake_on_connect kwarg.
  - Use Python 3.7 instead of 3.7-dev.
  - Fix buildbot failing permissions with 3.7.
  - tests: env_tpool_zero assert details.
  - setup: Python 3.7 classifier.
* IMPORTANT: late import in `use_hub()` + thread race caused using epoll even when it is unsupported on the current platform (Sergey Shepelev, 2018-09-12, 11 files, -188/+182)

  Solution: eagerly import all built-in hubs, explicitly check support
  later.

  https://github.com/eventlet/eventlet/issues/466
* wsgi: make Expect 100-continue field-value case-insensitive. (Julien Kasarherou, 2018-09-09, 2 files, -22/+24)

  See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
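Per RFC 2616 §14.20 the Expect field-value is case-insensitive, so the comparison has to normalize case before matching. A hypothetical sketch of such a check (not eventlet's exact code; the `expects_continue` helper is invented for illustration):

```python
def expects_continue(environ):
    # WSGI exposes the request header as HTTP_EXPECT; normalize its case.
    return environ.get('HTTP_EXPECT', '').lower() == '100-continue'

print(expects_continue({'HTTP_EXPECT': '100-Continue'}))  # True
print(expects_continue({'HTTP_EXPECT': '100-continue'}))  # True
print(expects_continue({}))                               # False
```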
* greenthread: optimize _exit_funcs getattr/del dance; thanks to Alex Kashirin (Sergey Shepelev, 2018-08-27, 1 file, -5/+8)

  Origin: https://github.com/eventlet/eventlet/pull/517

  Benchmarks were performed on an old laptop running many other programs,
  on OSX (no way to set CPU affinity), so don't look at the exact values,
  only the comparison. Baseline: 96fccf3dd88fb60368c5d1ab49d8b9eb85e99502

  CPython 2.7:

      Benchmark_hub_timers             5867319   5733979   -2.27%
      Benchmark_pool_spawn               20666     20319   -1.68%
      Benchmark_pool_spawn_n             12697     12022   -5.32%
      Benchmark_sleep                    21982     20385   -7.27%
      Benchmark_pool_spawn               20878     20025   -4.09%
      Benchmark_spawn                    53915     52598   -2.44%
      Benchmark_spawn_link1              59215     56062   -5.32%
      Benchmark_spawn_link5              69550     67660   -2.72%
      Benchmark_spawn_link5_unlink3      72435     68624   -5.26%
      Benchmark_pool_spawn_n             12223     12058   -1.35%
      Benchmark_spawn_n                   7394      7585   +2.58%
      Benchmark_spawn_n_kw                7798      7473   -4.17%
      Benchmark_spawn_nowait             11309     11510   +1.78%

  Again, the benchmark environment is not clean and the machine is used by
  many other processes. Read these numbers as "no significant change in
  performance", while explicit attribute initialisation is better than
  getattr/del games.
* New benchmarks runner (Sergey Shepelev, 2018-08-27, 6 files, -190/+340)

  Output is compatible with golang.org/x/tools/cmd/benchcmp, which allows
  easy automated performance comparison.

  Usage:

  - git checkout -b newbranch
  - change code
  - bin/bench-compare -python ./venv-27
  - bin/bench-compare -python ./venv-36
  - copy benchmark results
  - git commit, include benchmark results