Commit messages
Python SSL supports a couple of different ways to disable HTTPS
verification, either via an environment variable or via methods defined
in PEP 493. To ensure these work, we must call the original
_create_default_https_context function so that we use the right default
HTTPS context (verified or unverified) as set by the HTTPS context
factory.
Fixes #484
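A minimal sketch of the delegation (hypothetical function names, not eventlet's actual code): capture whatever factory is installed and call through it, rather than hard-coding a verified context.

```python
import ssl

# Capture whatever factory is currently installed; PEP 493 or the
# environment may have pointed it at _create_unverified_context
# instead of the verified default.
_original_factory = ssl._create_default_https_context


def green_create_default_https_context(*args, **kwargs):
    # Delegate rather than hard-coding a verified context, so any
    # PEP 493 / environment-variable opt-out is still honored.
    return _original_factory(*args, **kwargs)


ctx = green_create_default_https_context()
```

Because the wrapper defers to the factory, flipping the factory to the unverified creator changes the wrapper's behavior with no further patching.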
At the end of the py2 tests, we would see (ignored) errors like
Exception TypeError: "'NoneType' object is not callable" in
<bound method _SocketDuckForFd.__del__ of _SocketDuckForFd:14> ignored
There was some concern about fixing hub.remove for secondary listeners,
because tests seemed to rely on the implicit robustness of hub.remove
when a listener wasn't being tracked.
It was discovered that socket errors can bubble up in the poll and
select hubs, which results in all listeners being removed for that
fileno before the listener is alerted (which would then *also* trigger
a remove).
Rather than making remove robust to being called twice, this change
makes it be called only once.
When in hubs.trampoline(fd, ...), a greenthread registers itself as a
listener for fd, switches to the hub, and then calls
hub.remove(listener) to deregister itself. hub.remove(listener)
removes the primary listener. If the greenthread awoke because its fd
became ready, then it is the primary listener, and everything is
fine. However, if the greenthread was a secondary listener and awoke
because a Timeout fired then it would remove the primary and promote a
random secondary to primary.
This commit makes hub.remove(listener) check to make sure listener is
the primary, and if it's not, remove the listener from the
secondaries.
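A toy model of the primary/secondary bookkeeping this commit enforces (a hypothetical class for illustration, not eventlet's actual hub):

```python
class ToyHub:
    """One primary listener per fileno; extras wait as secondaries."""

    def __init__(self):
        self.listeners = {}    # fd -> primary listener
        self.secondaries = {}  # fd -> [listeners waiting behind primary]

    def add(self, fd, listener):
        if fd in self.listeners:
            self.secondaries.setdefault(fd, []).append(listener)
        else:
            self.listeners[fd] = listener

    def remove(self, fd, listener):
        # Only demote the primary when *this* listener is the primary;
        # a secondary woken by a Timeout must not displace it.
        if self.listeners.get(fd) is listener:
            del self.listeners[fd]
            rest = self.secondaries.get(fd)
            if rest:
                self.listeners[fd] = rest.pop(0)
        elif listener in self.secondaries.get(fd, []):
            self.secondaries[fd].remove(listener)
```

The key branch is the `is` check: without it, a secondary removing itself would evict the primary and promote an arbitrary secondary in its place.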
Previously, if we patched threading then forked (or, in some cases, used
the subprocess module), Python would log an ignored exception like
Exception ignored in: <function _after_fork at 0x7f16493489d8>
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 1335, in _after_fork
assert len(_active) == 1
AssertionError:
This comes down to threading in Python 3.7+ having an import side-effect
of registering an at-fork callback. When we re-import threading to patch
it, the old (but still registered) callback still points to the old
thread-tracking dict, rather than the new dict that's actually doing the
tracking.
Now, register our own at_fork hook that will fix up the dict reference
before threading's _after_fork runs and put it back afterwards.
Closes #592
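A sketch of the hook registration (hypothetical helper name; the real fix lives in eventlet's patcher), using the stdlib's `os.register_at_fork`:

```python
import os


def register_threading_fixup(old_threading, live_active):
    """Swap the stale module's _active dict for the live one around
    fork, so the _after_fork callback it registered at import time
    asserts against real thread-tracking data."""
    stale_active = old_threading._active

    def use_live():
        old_threading._active = live_active

    def restore():
        old_threading._active = stale_active

    # 'before' runs pre-fork; the after_* callbacks put things back
    # once threading's own at-fork handling has run.
    os.register_at_fork(before=use_live,
                        after_in_parent=restore,
                        after_in_child=restore)
```

Registering is a no-op until a fork actually happens, so it is safe to install the hook eagerly at patch time.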
sudo: and dist: are no longer needed.
The previous behavior of ignoring DeprecationWarning is now the default in py2.7
Previously, when a client quickly disconnected (causing a socket.error
before the SocketConsole greenlet had a chance to switch), it would
break us out of our accept loop, permanently closing the backdoor.
Now, it will just break us out of the interactive session, leaving the
server ready to accept another backdoor client.
Fixes #570
https://github.com/eventlet/eventlet/issues/619
https://github.com/eventlet/eventlet/issues/629
https://github.com/eventlet/eventlet/issues/623
str.capitalize() and str.upper() respect unicode capitalization rules on
py3, while py2 just translates a-z to A-Z.
At best, this may cause confusion and unexpected behaviors, such as
when '\xdf' (a Latin1-encoded ß) becomes 'SS'; at worst, this causes
UnicodeEncodeErrors and the server fails to reply, such as when '\xff'
(a Latin1-encoded ÿ) becomes '\u0178' which does not map back into
Latin1.
Now, convert everything to bytes before capitalizing so just a-z and A-Z
are affected on both py2 and py3.
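The difference is easy to demonstrate with plain Python semantics (not eventlet code):

```python
# str methods apply Unicode case rules on py3...
assert '\xdf'.upper() == 'SS'        # Latin1 ß becomes 'SS'
assert '\xff'.upper() == '\u0178'    # Latin1 ÿ becomes Ÿ, which...
try:
    '\u0178'.encode('latin-1')       # ...cannot round-trip to Latin1
    raised = False
except UnicodeEncodeError:
    raised = True
assert raised

# ...while bytes methods only touch a-z/A-Z:
assert b'\xdf'.upper() == b'\xdf'
assert b'content-length'.capitalize() == b'Content-length'
```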
For SSL sockets created using the SSLContext class under Python >= 3.7,
eventlet incorrectly passes the context as '_context' to the top
level wrap_socket function in the native ssl module.
This causes:
wrap_socket() got an unexpected keyword argument '_context'
as the context cannot be passed this way.
If a context is provided, use the underlying sslsocket_class to
wrap the socket, mirroring the implementation of the wrap_socket
method in the native SSLContext class.
Fixes issue #526
Co-authored-by: Tim Burke <tim.burke@gmail.com>
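A sketch of the delegation (hypothetical function name; `SSLContext.sslsocket_class` and `SSLSocket._create` are the CPython 3.7+ hooks that `SSLContext.wrap_socket()` itself uses, the latter being a private API):

```python
import ssl


def green_wrap_socket(sock, context=None, **kwargs):
    if context is not None:
        # Mirror SSLContext.wrap_socket(): build the SSL socket via the
        # context's sslsocket_class instead of passing _context to the
        # module-level wrap_socket().
        return context.sslsocket_class._create(
            sock=sock, context=context, **kwargs)
    # No context supplied: fall back to a default one.
    return ssl.create_default_context().wrap_socket(sock, **kwargs)
```

On an unconnected socket the handshake is deferred, as with the stdlib's own wrapping, so the wrapper can be applied before `connect()`.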
Increase the timeout used for test_isolate_from_socket_default_timeout
from 1 second to 5 seconds. Otherwise, the test can't succeed
on hardware where Python runs slower. In particular, on our SPARC box
importing greenlet modules takes almost 2 seconds, so the test program
does not even start properly.
Fixes #614
Fix test_set_nonblocking() to account for the alternative possible
outcome that enabling non-blocking mode can set both O_NONBLOCK
and O_NDELAY as it does on SPARC. Note that O_NDELAY may be a superset
of O_NONBLOCK, so we can't just filter it out of new_flags.
Fix TestGreenSocket.test_skip_nonblocking() to unset both O_NONBLOCK
and O_NDELAY. This is necessary to fix tests on SPARC where both flags
are used simultaneously, and unsetting one is ineffective (flags remain
the same). This should not affect other platforms where O_NDELAY
is an alias for O_NONBLOCK.
Fix TestGreenSocket.test_skip_nonblocking() to call F_GETFL again
to get the flags for the socket. Previously, the code wrongly assumed
F_SETFL would return the flags, while it always returns 0 (see fcntl(2)).
Use '==' instead of 'is':
eventlet/db_pool.py:78: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.max_age is 0 or self.max_idle is 0:
In _recv_loop we assign self.fd to local variable fd, but then we don't
use it within the method, so we can remove it.
Previously, a bunch of tests that just call `tests.run_isolated(...)`
(such as those at the end of patcher_test.py) might time out but not
actually show any errors.
Python 3.7 and later implement queue.SimpleQueue in C, causing a
deadlock when using ThreadPoolExecutor with eventlet.
To avoid this deadlock we now replace the C implementation with the
Python implementation on monkey_patch for Python versions 3.7 and
higher.
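The substitution itself is small; `queue._PySimpleQueue` is CPython's pure-Python fallback (a private name, so this is a sketch of the idea rather than a stability guarantee):

```python
import queue

if hasattr(queue, '_PySimpleQueue'):
    # Swap the C-implemented SimpleQueue for the pure-Python version,
    # whose locks eventlet's monkey-patching can make cooperative.
    queue.SimpleQueue = queue._PySimpleQueue

q = queue.SimpleQueue()
q.put('item')
assert q.get() == 'item'
```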
https://github.com/eventlet/eventlet/issues/308
fixes #534
pathlib._NormalAccessor wraps `open` in `staticmethod` for py < 3.7 but
not on 3.7. That means `Path.open` calls `green.os.open` with `file`
being a pathlib._NormalAccessor object, and the other arguments shifted.
Fortunately pathlib doesn't use the `dir_fd` argument, so we have space
in the parameter list. We use some heuristics to detect this and adjust
the parameters (without importing pathlib).
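A sketch of that heuristic as a standalone helper (hypothetical name; eventlet's real logic lives inside its green `os.open`):

```python
def normalize_open_args(path, flags, mode=0o777, dir_fd=None):
    # A genuine path argument is str, bytes, or an int fd. Anything
    # else is likely pathlib's accessor object shifted into the path
    # slot, so shift every argument one place left and drop the
    # unused dir_fd.
    if not isinstance(path, (str, bytes, int)):
        path, flags, mode, dir_fd = flags, mode, dir_fd, None
    return path, flags, mode, dir_fd


class FakeAccessor:  # stands in for pathlib._NormalAccessor
    pass


# Normal call passes through unchanged:
assert normalize_open_args('/tmp/x', 0) == ('/tmp/x', 0, 0o777, None)
# Shifted call (accessor in the path slot) gets realigned:
assert normalize_open_args(FakeAccessor(), '/tmp/x', 0, 0o777) == \
    ('/tmp/x', 0, 0o777, None)
```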
Fixes https://github.com/eventlet/eventlet/issues/580
In commit b9bf369778ec9798b3b6cffe59b7fd15f6159013, we stopped sending
100 Continue responses in the middle of a response when the application
was over-eager to start sending back bytes, but did that by pulling
the headers_sent state out of handle_one_response(), up into
handle_one_request(), and plumbing it through to
* get_environ(),
* Input.__init__(), and finally
* handle_one_response().
This works, but updates a whole bunch of HttpProtocol APIs in ways that
consumers may not have been expecting. For example, if someone wanted to
subclass HttpProtocol and override get_environ(), they may not have
bothered to include *args and **kwargs to accommodate future API changes.
That code should certainly be fixed, but we shouldn't break them
gratuitously.
Now, wait to introduce the headers_sent state until
handle_one_response() once more, and push it directly into the request's
Input. All the same protections with minimal API disruption.
Some applications may need to perform some long-running operation during
a client-request cycle. To keep the client from timing out while waiting
for the response, the application issues a status pro tempore, dribbles
out whitespace (or some other filler) periodically, and expects the
client to parse the final response to confirm success or failure.
Previously, if the application was *too* eager and sent data before ever
reading from the request body, we would write headers to the client,
send that initial data, but then *still send the 100 Continue* when the
application finally read the request. Since this would occur on a chunk
boundary, the client cannot parse the size of the next chunk, and
everything goes off the rails.
Now, only be willing to send the 100 Continue response if we have not
sent headers to the client.
We already 400 missing and non-integer Content-Lengths, and Input almost
certainly wasn't intended to handle negative lengths.
Be sure to close the connection, too -- we have no reason to think that
the client's request framing is still good.
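The validation boils down to something like this (hypothetical helper name, not the wsgi module's actual code):

```python
def parse_content_length(value):
    """Return a non-negative int, or None for anything a 400 should cover."""
    try:
        length = int(value)
    except (TypeError, ValueError):
        return None  # missing or non-integer
    return length if length >= 0 else None


assert parse_content_length('10') == 10
assert parse_content_length(None) is None    # missing header
assert parse_content_length('ten') is None   # non-integer
assert parse_content_length('-5') is None    # negative length
```

When the helper returns None the server responds 400 and closes the connection, since the request framing can no longer be trusted.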
* #53: Make a GreenPile with no spawn()s an empty sequence.
Remove the GreenPile.used flag. This was probably originally intended to
address the potential problem of GreenPool.starmap() returning a (still empty)
GreenPile, a problem that has since been addressed by using GreenMap which
requires an explicit marker from the producer. Currently the only effect of
GreenPile.used is to make GreenPile hang when used with an empty sequence.
Test the empty GreenPile case.
Even though GreenMap is probably not intended for general use, make it
slightly more consumer-friendly by adding a done_spawning() method instead of
requiring the consumer to spawn(return_stop_iteration). GreenPool._do_map()
now calls done_spawning(). Remove return_stop_iteration().
Since done_spawning() merely spawns a function that returns StopIteration, any
existing consumer that explicitly does the same will still work. However, this
is potentially a breaking change if any consumers specifically reference
eventlet.greenpool.return_stop_iteration() for that or any other purpose.
Refactor GreenPile.next(), breaking out the bookkeeping detail to new _next()
method. Make subclass GreenMap call base-class _next(), eliminating the need
to replicate that bookkeeping detail.
* Issue #535: use Python 2 compatible syntax for keyword-only args.
* Validate that encode_chunked is the *only* keyword argument passed.
* Increase Travis slop factor for ZMQ CPU usage.
The comment in check_idle_cpu_usage() reads:
# This check is reliably unreliable on Travis, presumably because of CPU
# resources being quite restricted by the build environment. The workaround
# is to apply an arbitrary factor that should be enough to make it work nicely.
Empirically -- it's not. Over the last few months there have been many Travis
"failures" that boil down to this one spurious error. Increase from a slop
factor of 1.2 to 5. If that's still unreliable, may bypass this test entirely
on Travis.
Previously, we pretended the input wasn't chunked and hoped for the best. On
py2, this would give the caller the raw, chunk-encoded data; for some reason,
on py3, this would hang.
Now, readlines() will behave as expected.
Previously, we would compare the last item of a byte string
with a newline in a native string. On Python 3, getting a
single item from a byte string gives you an integer (which
will not be equal to any string), so readline would return
the entire request body.
While we're at it, fix the return type when the caller requests
that zero bytes be read.
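The py3 pitfall in isolation:

```python
body = b'hello\n'

# Indexing bytes yields an int on py3...
assert body[-1] == 10            # ord('\n'), not '\n'
assert body[-1] != '\n'          # an int is never equal to any str

# ...so compare against a bytes value, e.g. by slicing:
assert body[-1:] == b'\n'
```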
For more context, see #467 and #497.
On py3, urllib.parse.unquote() defaults to decoding via UTF-8 and
replacing invalid UTF-8 sequences with "\N{REPLACEMENT CHARACTER}".
This causes a few problems:
- Since WSGI requires that bytes be decoded as Latin-1 on py3, we
have to do an extra re-encode/decode cycle in encode_dance().
- Applications written for Latin-1 are broken, as there are valid
Latin-1 sequences that are mangled because of the replacement.
- Applications written for UTF-8 cannot differentiate between a
replacement character that was intentionally sent by the client
versus an invalid byte sequence.
Fortunately, unquote() allows us to specify the encoding that should
be used. By specifying Latin-1, we can drop encode_dance() entirely
and preserve as much information from the wire as we can.
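A quick demonstration of the difference:

```python
from urllib.parse import unquote

# Default UTF-8 decoding mangles a valid Latin-1 byte:
assert unquote('%ff') == '\ufffd'  # REPLACEMENT CHARACTER

# Decoding as Latin-1 is lossless, one character per byte:
assert unquote('%ff', encoding='latin-1') == '\xff'
assert unquote('%ff', encoding='latin-1').encode('latin-1') == b'\xff'
```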
* [bug] reimport submodule as well in patcher.inject
* [dev] add unit-test
* [dev] move unit test to isolated tests
* improve unit test
* Issue #535: use Python 2 compatible syntax for keyword-only args.
* Validate that encode_chunked is the *only* keyword argument passed.
* Fix for Python 3.7
* Remove redundant piece of code.
* Put back do_handshake_on_connect kwarg
* Use Python 3.7 instead of 3.7-dev
* Fix buildbot failing permissions with 3.7
* tests: env_tpool_zero assert details
* setup: Python 3.7 classifier
when it is unsupported on the current platform
Solution: eagerly import all built-in hubs, explicitly check support later
https://github.com/eventlet/eventlet/issues/466
See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
Origin: https://github.com/eventlet/eventlet/pull/517
Benchmarks performed on **some old laptop** running many other programs,
on OSX (no way to set CPU affinity), so don't look at exact values, only the comparison.
Baseline: 96fccf3dd88fb60368c5d1ab49d8b9eb85e99502
CPython 2.7:
Benchmark_hub_timers 5867319 5733979 -2.27%
Benchmark_pool_spawn 20666 20319 -1.68%
Benchmark_pool_spawn_n 12697 12022 -5.32%
Benchmark_sleep 21982 20385 -7.27%
Benchmark_pool_spawn 20878 20025 -4.09%
Benchmark_spawn 53915 52598 -2.44%
Benchmark_spawn_link1 59215 56062 -5.32%
Benchmark_spawn_link5 69550 67660 -2.72%
Benchmark_spawn_link5_unlink3 72435 68624 -5.26%
Benchmark_pool_spawn_n 12223 12058 -1.35%
Benchmark_spawn_n 7394 7585 +2.58%
Benchmark_spawn_n_kw 7798 7473 -4.17%
Benchmark_spawn_nowait 11309 11510 +1.78%
Again, the benchmark environment is not clean; the machine is used by many other processes.
I invite you to read these numbers as "no significant change in performance",
while explicit attribute initialisation is better than getattr/del games.
Output compatible with golang.org/x/tools/cmd/benchcmp,
which allows easy automated performance comparison.
Usage:
- git checkout -b newbranch
- change code
- bin/bench-compare -python ./venv-27
- bin/bench-compare -python ./venv-36
- copy benchmark results
- git commit, include benchmark results