Commit message | Author | Age | Files | Lines

The previous behavior of ignoring DeprecationWarning is now the default in py2.7
Previously, when a client quickly disconnected (causing a socket.error
before the SocketConsole greenlet had a chance to switch), it would
break us out of our accept loop, permanently closing the backdoor.
Now, it will just break us out of the interactive session, leaving the
server ready to accept another backdoor client.
Fixes #570
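The shape of the fix can be sketched as follows. `serve_backdoor`, `run_console`, and the fake listener are hypothetical names for illustration, not eventlet's actual API; the point is that catching socket.error per client keeps the accept loop alive:

```python
import socket

def serve_backdoor(listener, run_console):
    # Hypothetical sketch: contain a client's socket.error inside the
    # loop body so the server keeps accepting new backdoor clients.
    while True:
        conn, addr = listener.accept()
        try:
            run_console(conn)       # one client's interactive session
        except socket.error:
            pass                    # client disconnected early; keep serving
        finally:
            conn.close()

# Minimal simulation: two clients crash mid-session, then the demo stops.
class _Done(Exception):
    pass

class _FakeConn:
    def close(self):
        pass

class _Listener:
    def __init__(self):
        self.accepted = 0
    def accept(self):
        self.accepted += 1
        if self.accepted > 2:
            raise _Done()           # end the demo after two clients
        return _FakeConn(), ('127.0.0.1', 0)

def _crashy_console(conn):
    raise socket.error('client went away')

listener = _Listener()
try:
    serve_backdoor(listener, _crashy_console)
except _Done:
    pass                            # both crashes were survived
```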
https://github.com/eventlet/eventlet/issues/619
https://github.com/eventlet/eventlet/issues/629
https://github.com/eventlet/eventlet/issues/623
str.capitalize() and str.upper() respect unicode capitalization rules on
py3, while py2 just translates a-z to A-Z.
At best, this may cause confusion and unexpected behaviors, such as
when '\xdf' (a Latin1-encoded ß) becomes 'SS'; at worst, this causes
UnicodeEncodeErrors and the server fails to reply, such as when '\xff'
(a Latin1-encoded ÿ) becomes '\u0178' which does not map back into
Latin1.
Now, convert everything to bytes before capitalizing so just a-z and A-Z
are affected on both py2 and py3.
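The difference is easy to demonstrate on py3; this snippet only illustrates the behavior described above:

```python
# str methods apply Unicode case rules on py3:
ss = '\xdf'.upper()            # ß capitalizes to 'SS' (length changes!)
ydia = '\xff'.upper()          # ÿ becomes '\u0178' (Ÿ)
try:
    ydia.encode('latin-1')
    encodable = True
except UnicodeEncodeError:     # '\u0178' has no Latin-1 byte
    encodable = False

# bytes methods only translate a-z to A-Z, leaving high bytes alone:
b_high = b'\xff'.upper()
b_ascii = b'content-type'.upper()
```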
For SSL sockets created using the SSLContext class under Python >= 3.7,
eventlet incorrectly passes the context as '_context' to the top-level
wrap_socket function in the native ssl module.
This causes:
wrap_socket() got an unexpected keyword argument '_context'
as the context cannot be passed this way.
If a context is provided, use the underlying sslsocket_class to
wrap the socket, mirroring the implementation of the wrap_socket
method in the native SSLContext class.
Fixes issue #526
Co-authored-by: Tim Burke <tim.burke@gmail.com>
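A sketch of the context path, assuming CPython >= 3.7 where SSLContext exposes sslsocket_class and SSLSocket has a _create() classmethod; `wrap_with_context` is a hypothetical name, not eventlet's actual function:

```python
import ssl

def wrap_with_context(sock, ctx, server_side=False, server_hostname=None):
    # Mirror SSLContext.wrap_socket(): instantiate the context's
    # sslsocket_class via its _create() classmethod, which carries the
    # context along instead of smuggling it through '_context'.
    return ctx.sslsocket_class._create(
        sock=sock,
        server_side=server_side,
        server_hostname=server_hostname,
        context=ctx,
    )
```

Calling it needs a real connected socket and a handshake, so this sketch only shows the dispatch.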
Increase the timeout used for test_isolate_from_socket_default_timeout
from 1 second to 5 seconds. Otherwise, the test can't succeed
on hardware where Python runs slower. In particular, on our SPARC box
importing greenlet modules takes almost 2 seconds, so the test program
does not even start properly.
Fixes #614
Fix test_set_nonblocking() to account for the alternative possible
outcome that enabling non-blocking mode can set both O_NONBLOCK
and O_NDELAY as it does on SPARC. Note that O_NDELAY may be a superset
of O_NONBLOCK, so we can't just filter it out of new_flags.
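The tolerant check can be sketched like this (Unix-only; assumes the stdlib fcntl module is available):

```python
import fcntl
import os
import socket

s = socket.socket()
orig = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)
s.setblocking(False)
new_flags = fcntl.fcntl(s.fileno(), fcntl.F_GETFL)

# Accept either outcome: O_NONBLOCK alone, or (as on SPARC) O_NONBLOCK
# together with O_NDELAY.  O_NDELAY may be a superset of O_NONBLOCK,
# so masking it out of new_flags would be wrong.
expected = orig | os.O_NONBLOCK
ok = new_flags in (expected, expected | os.O_NDELAY)
s.close()
```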
Fix TestGreenSocket.test_skip_nonblocking() to unset both O_NONBLOCK
and O_NDELAY. This is necessary to fix tests on SPARC where both flags
are used simultaneously, and unsetting one is ineffective (flags remain
the same). This should not affect other platforms where O_NDELAY
is an alias for O_NONBLOCK.
Fix TestGreenSocket.test_skip_nonblocking() to call F_GETFL again
to get the socket's flags. Previously, the code wrongly assumed that
F_SETFL returns the flags, while it always returns 0 (see fcntl(2)).
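The fcntl(2) behavior in question, illustrated (Unix-only):

```python
import fcntl
import os
import socket

s = socket.socket()
fd = s.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)

# F_SETFL reports success as 0 -- it does NOT return the new flags:
ret = fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# To observe the effect, F_GETFL must be issued again:
new_flags = fcntl.fcntl(fd, fcntl.F_GETFL)
s.close()
```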
Use '==' instead of 'is':
eventlet/db_pool.py:78: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.max_age is 0 or self.max_idle is 0:
In _recv_loop we assign self.fd to local variable fd, but then we don't
use it within the method, so we can remove it.
Previously, a bunch of tests that just call `tests.run_isolated(...)`
(such as those at the end of patcher_test.py) might time out but not
actually show any errors.
Python 3.7 and later implement queue.SimpleQueue in C, causing a
deadlock when using ThreadPoolExecutor with eventlet.
To avoid this deadlock we now replace the C implementation with the
Python implementation on monkey_patch for Python versions 3.7 and
higher.
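The substitution amounts to something like this; it relies on the CPython implementation detail that the pure-Python fallback is exposed as queue._PySimpleQueue:

```python
import queue

# The C-accelerated SimpleQueue blocks inside C code that eventlet cannot
# green; the pure-Python fallback uses threading primitives that
# monkey-patching can replace.
if hasattr(queue, '_PySimpleQueue'):      # CPython >= 3.7
    queue.SimpleQueue = queue._PySimpleQueue

q = queue.SimpleQueue()
q.put('item')
value = q.get()
```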
https://github.com/eventlet/eventlet/issues/308
fixes #534
pathlib._NormalAccessor wraps `open` in `staticmethod` for py < 3.7 but
not in 3.7. That means `Path.open` calls `green.os.open` with `file`
being a pathlib._NormalAccessor object, and the other arguments shifted.
Fortunately pathlib doesn't use the `dir_fd` argument, so we have space
in the parameter list. We use some heuristics to detect this and adjust
the parameters (without importing pathlib).
Fixes https://github.com/eventlet/eventlet/issues/580
In commit b9bf369778ec9798b3b6cffe59b7fd15f6159013, we stopped sending
100 Continue responses in the middle of a response when the application
was over-eager to start sending back bytes, but I did that by pulling
the headers_sent state out of handle_one_response(), up into
handle_one_request(), and plumbing it through to
* get_environ(),
* Input.__init__(), and finally
* handle_one_response().
This works, but updates a whole bunch of HttpProtocol APIs in ways that
consumers may not have been expecting. For example, if someone wanted to
subclass HttpProtocol and override get_environ(), they may not have
bothered to include *args and **kwargs to accommodate future API changes.
That code should certainly be fixed, but we shouldn't break them
gratuitously.
Now, wait to introduce the headers_sent state until
handle_one_response() once more, and push it directly into the request's
Input. All the same protections with minimal API disruption.
Some applications may need to perform some long-running operation during
a client-request cycle. To keep the client from timing out while waiting
for the response, the application issues a status pro tempore, dribbles
out whitespace (or some other filler) periodically, and expects the
client to parse the final response to confirm success or failure.
Previously, if the application was *too* eager and sent data before ever
reading from the request body, we would write headers to the client,
send that initial data, but then *still send the 100 Continue* when the
application finally read the request. Since this would occur on a chunk
boundary, the client cannot parse the size of the next chunk, and
everything goes off the rails.
Now, only be willing to send the 100 Continue response if we have not
sent headers to the client.
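The guard itself is tiny; this sketch uses hypothetical names (`maybe_send_continue`, `headers_sent`) rather than eventlet's actual wsgi internals:

```python
import io

def maybe_send_continue(wfile, headers_sent):
    # Only emit the interim 100 Continue if the real response headers
    # have not already been written to the client.
    if headers_sent:
        return False
    wfile.write(b'HTTP/1.1 100 Continue\r\n\r\n')
    return True

buf = io.BytesIO()
sent_first = maybe_send_continue(buf, headers_sent=False)
sent_late = maybe_send_continue(buf, headers_sent=True)
```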
We already 400 missing and non-integer Content-Lengths, and Input almost
certainly wasn't intended to handle negative lengths.
Be sure to close the connection, too -- we have no reason to think that
the client's request framing is still good.
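A sketch of that validation policy; `parse_content_length` is a hypothetical helper, not eventlet's actual parser:

```python
def parse_content_length(value):
    # Missing, non-integer, or negative values all mean the request
    # framing cannot be trusted: the caller responds 400 and closes
    # the connection.
    try:
        length = int(value)
    except (TypeError, ValueError):
        return None
    if length < 0:
        return None
    return length

ok = parse_content_length('42')
missing = parse_content_length(None)
garbage = parse_content_length('ten')
negative = parse_content_length('-1')
```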
* #53: Make a GreenPile with no spawn()s an empty sequence.
Remove the GreenPile.used flag. This was probably originally intended to
address the potential problem of GreenPool.starmap() returning a (still empty)
GreenPile, a problem that has since been addressed by using GreenMap which
requires an explicit marker from the producer. Currently the only effect of
GreenPile.used is to make GreenPile hang when used with an empty sequence.
Test the empty GreenPile case.
Even though GreenMap is probably not intended for general use, make it
slightly more consumer-friendly by adding a done_spawning() method instead of
requiring the consumer to spawn(return_stop_iteration). GreenPool._do_map()
now calls done_spawning(). Remove return_stop_iteration().
Since done_spawning() merely spawns a function that returns StopIteration, any
existing consumer that explicitly does the same will still work. However, this
is potentially a breaking change if any consumers specifically reference
eventlet.greenpool.return_stop_iteration() for that or any other purpose.
Refactor GreenPile.next(), breaking out the bookkeeping detail to new _next()
method. Make subclass GreenMap call base-class _next(), eliminating the need
to replicate that bookkeeping detail.
* Issue #535: use Python 2 compatible syntax for keyword-only args.
* Validate that encode_chunked is the *only* keyword argument passed.
* Increase Travis slop factor for ZMQ CPU usage.
The comment in check_idle_cpu_usage() reads:
# This check is reliably unreliable on Travis, presumably because of CPU
# resources being quite restricted by the build environment. The workaround
# is to apply an arbitrary factor that should be enough to make it work nicely.
Empirically -- it's not. Over the last few months there have been many Travis
"failures" that boil down to this one spurious error. Increase from a slop
factor of 1.2 to 5. If that's still unreliable, may bypass this test entirely
on Travis.
Previously, we pretended the input wasn't chunked and hoped for the best. On
py2, this would give the caller the raw, chunk-encoded data; for some reason,
on py3, this would hang.
Now, readlines() will behave as expected.
Previously, we would compare the last item of a byte string
with a newline in a native string. On Python 3, getting a
single item from a byte string gives you an integer (which
will not be equal to any string), so readline would return
the entire request body.
While we're at it, fix the return type when the caller requests
that zero bytes be read.
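The py3 pitfall in one snippet:

```python
body = b'first line\nsecond'

# Indexing bytes on py3 yields an int, so comparing against a str
# newline is always False:
tail_is_str_newline = (body[-1] == '\n')

# Compare against the byte value, or slice to stay in bytes:
newline_by_ord = (body[10] == ord('\n'))
newline_by_slice = (body[10:11] == b'\n')
```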
For more context, see #467 and #497.
On py3, urllib.parse.unquote() defaults to decoding via UTF-8 and
replacing invalid UTF-8 sequences with "\N{REPLACEMENT CHARACTER}".
This causes a few problems:
- Since WSGI requires that bytes be decoded as Latin-1 on py3, we
have to do an extra re-encode/decode cycle in encode_dance().
- Applications written for Latin-1 are broken, as there are valid
Latin-1 sequences that are mangled because of the replacement.
- Applications written for UTF-8 cannot differentiate between a
replacement character that was intentionally sent by the client
versus an invalid byte sequence.
Fortunately, unquote() allows us to specify the encoding that should
be used. By specifying Latin-1, we can drop encode_dance() entirely
and preserve as much information from the wire as we can.
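The difference between the two decodings, for illustration:

```python
from urllib.parse import unquote

raw = '%FF%20path'

# Default UTF-8 decoding mangles the lone 0xFF byte into U+FFFD:
utf8_result = unquote(raw)

# Latin-1 maps every byte to a code point, so nothing is lost and the
# original wire bytes round-trip exactly:
latin1_result = unquote(raw, encoding='latin-1')
round_trip = latin1_result.encode('latin-1')
```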
* [bug] reimport submodule as well in patcher.inject
* [dev] add unit-test
* [dev] move unit test to isolated tests
* improve unit test
* Issue #535: use Python 2 compatible syntax for keyword-only args.
* Validate that encode_chunked is the *only* keyword argument passed.
* Fix for Python 3.7
* Remove redundant piece of code.
* Put back do_handshake_on_connect kwarg
* Use Python 3.7 instead of 3.7-dev
* Fix buildbot failing permissions with 3.7
* tests: env_tpool_zero assert details
* setup: Python 3.7 classifier
when it is unsupported on the current platform
Solution: eagerly import all built-in hubs, explicitly check support later
https://github.com/eventlet/eventlet/issues/466
See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20
Origin: https://github.com/eventlet/eventlet/pull/517
Benchmarks were performed on **some old laptop** running many other programs,
on OSX (no way to set CPU affinity), so don't look at exact values, only the comparison.
Baseline: 96fccf3dd88fb60368c5d1ab49d8b9eb85e99502
CPython 2.7:
Benchmark_hub_timers 5867319 5733979 -2.27%
Benchmark_pool_spawn 20666 20319 -1.68%
Benchmark_pool_spawn_n 12697 12022 -5.32%
Benchmark_sleep 21982 20385 -7.27%
Benchmark_pool_spawn 20878 20025 -4.09%
Benchmark_spawn 53915 52598 -2.44%
Benchmark_spawn_link1 59215 56062 -5.32%
Benchmark_spawn_link5 69550 67660 -2.72%
Benchmark_spawn_link5_unlink3 72435 68624 -5.26%
Benchmark_pool_spawn_n 12223 12058 -1.35%
Benchmark_spawn_n 7394 7585 +2.58%
Benchmark_spawn_n_kw 7798 7473 -4.17%
Benchmark_spawn_nowait 11309 11510 +1.78%
Again, the benchmark environment is not clean; the machine is used by many more processes.
Read these numbers as "no significant change in performance",
while explicit attribute initialisation is better than getattr/del games.
Output compatible with golang.org/x/tools/cmd/benchcmp,
which allows easy automated performance comparison.
Usage:
- git checkout -b newbranch
- change code
- bin/bench-compare -python ./venv-27
- bin/bench-compare -python ./venv-36
- copy benchmark results
- git commit, include benchmark results
Origin: https://github.com/eventlet/eventlet/pull/517
Related:
https://github.com/eventlet/eventlet/pull/388
https://github.com/eventlet/eventlet/pull/303
https://github.com/eventlet/eventlet/issues/270
https://github.com/eventlet/eventlet/issues/132
Newer OpenSSL requires an RSA key of at least 2048 bits
Signed-off-by: Lon Hohberger <lon@metamorphism.com>
getaddrinfo() behaves differently from the standard implementation as
it will try to contact nameservers if only one (IPv4 or IPv6) entry
is returned from the /etc/hosts file.
This patch avoids getaddrinfo() querying nameservers if at least one
entry is fetched through the hosts file to match the behavior of
the original socket.getaddrinfo() implementation.
Closes #515
Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
Tests:
- normal operation
- no reply (timeout)
- unexpected source address (w/ timeout &
ignore_unexpected set)
- differing number of zeroes in the IPv6 address string
- unexpected address
Signed-off-by: Lon Hohberger <lon@metamorphism.com>
If the source address for a packet did not match the address we sent to,
the udp() function would spin in an infinite loop and the timer
would never expire, causing the process to hang.
Signed-off-by: Lon Hohberger <lon@metamorphism.com>
|