| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Since [1] we monkey patch the subprocess module, which uncovered an issue:
the previous patched_function implementation produced functions whose
optional arguments lacked their default values, making them required.
This is a problem on Python 3, where at least one of the functions
(check_call) has an explicit optional argument; on Python 2 the function
signatures use *args and **kwargs, so it wasn't an issue.
This patch is contributed by Smarkets Limited.
[1] 614a20462aebfe85a54ce35a7daaf1a7dbde44a7
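The fix described above can be sketched as follows. This is a simplified stand-in, not eventlet's actual implementation: building the wrapper with *args/**kwargs (as the Python 2 signature already did) keeps every optional argument of the wrapped function optional.

```python
import functools
import subprocess

def patched_function(original):
    # *args/**kwargs forwards everything, so defaults of the wrapped
    # function are never re-declared (and therefore never lost).
    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        return original(*args, **kwargs)
    return wrapper

# e.g. wrapping the function mentioned in the message above:
check_call = patched_function(subprocess.check_call)
```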
|
| |
| |
The Eventlet patcher and the way we were patching the multi-level http
package don't work well together[1][2]. I spent a lot of time trying to
make them work, but every solution I came up with broke something else
and made the patching, and providing a green http, even more
complicated; I wouldn't envy anyone having to debug it in the future.
After a lot of thinking I decided that keeping our own copy of http with
the necessary modifications applied is the most straightforward and
the most reliable solution, even considering its downsides (we need to
keep it up to date ourselves, and the API won't 100% match the regular
http module API on older Python 3 versions, as our bundled version is
the most recent one and has bug fixes and extra features implemented).
The code introduced by this commit comes from the following Python
commit (development branch):
commit 6251d66ba9a692d3adf5d2e6818b29ac44130787
Author: Xavier de Gaye <xdegaye@users.sourceforge.net>
Date: 2016-06-15 11:35:29 +0200
Issue #26862: SYS_getdents64 does not need to be defined on android
API 21.
Changes to the original http package code involve:
* Removing unnecessary import(s)
* Replacing some regular imports with eventlet.green imports
* Replacing fullmatch()[3] usage with match() so we stay Python 3.3
compatible
I left urllib.parse imports intact as nothing there performs IO.
The green httplib module is also modified because it used to import
http.client using the patcher, which was breaking things the same way.
A new dependency, enum-compat, is added to ensure that the enum module
is present on Python 3.3 (the http package code comes from the latest
Python development branch and uses enum).
[1] https://github.com/getsentry/raven-python/issues/703
[2] https://github.com/eventlet/eventlet/issues/316
[3] https://docs.python.org/3/library/re.html#re.fullmatch
This patch is contributed by Smarkets Limited.
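The fullmatch()-to-match() substitution mentioned above can be sketched like this (fullmatch_compat is an illustrative name, not the bundled module's actual helper): re.fullmatch() appeared in Python 3.4, but the same check can be written with match() and an end-of-string anchor on Python 3.3.

```python
import re

def fullmatch_compat(pattern, string, flags=0):
    # Wrap the pattern in a non-capturing group and anchor it with \Z so
    # the match must consume the whole string, like re.fullmatch().
    return re.match(r'(?:%s)\Z' % pattern, string, flags)
```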
|
| |
| |
On PyPy, `[Errno 9] Bad file descriptor` is raised because the `makefile()` `dup`/`_drop` dance leaves the socket with PyPy's special reference counter at zero, so it may be garbage collected.
https://github.com/eventlet/eventlet/issues/318
(@temoto) Maybe the proper fix is to remove the `dup` in `makefile()`.
|
| |
|
|
|
|
| |
Now you can upgrade a connection with eventlet.websocket.WebSocketWSGI when the app runs under `gunicorn application:app -k eventlet`.
https://github.com/eventlet/eventlet/pull/331
|
| |
|
| |
https://github.com/eventlet/eventlet/pull/314
|
| | |
|
| | |
|
| |
|
|
|
| |
GreenSocket's signature was `(family_or_realsock=AF_INET, ...)`, which breaks code like `socket(family=...)`.
https://github.com/eventlet/eventlet/issues/319
|
| | |
|
| |
| |
green select: Delete unpatched poll once again
https://github.com/eventlet/eventlet/pull/317
Previously attempted in f63165c, had to be reverted in 8ea9df6 because
subprocess was failing after monkey patching.
Turns out we haven't been monkey patching the subprocess module at all;
this patch adds that so the tests pass.
This part is changed because otherwise instantiating the Popen class
would cause an infinite loop when monkey patching is applied:
-subprocess_orig = __import__("subprocess")
+subprocess_orig = patcher.original("subprocess")
This patch is contributed by Smarkets Limited.
* green subprocess: Provide green check_output
This patch is contributed by Smarkets Limited.
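Why the `__import__` call recurses: after monkey patching, `sys.modules["subprocess"]` is the green module, so importing "subprocess" from inside it returns the green module itself. A minimal sketch of what patcher.original() does (assumed behaviour, not eventlet's exact code):

```python
import importlib
import sys

def original(modname):
    # Temporarily hide the (possibly green) entry in sys.modules so a
    # fresh, unpatched copy of the module is imported.
    saved = sys.modules.pop(modname, None)
    try:
        mod = importlib.import_module(modname)
    finally:
        if saved is not None:
            sys.modules[modname] = saved
    return mod
```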
|
| | |
|
| |
| |
The issue can be demonstrated by running the following piece of code:
# t.py
from __future__ import print_function
from eventlet.support import greendns
import sys
if sys.argv[1] == 'yes':
    import dns.resolver
print(sys.argv[1], issubclass(greendns.HostsAnswer, greendns.dns.resolver.Answer))
The results:
# Python 2.7.11
% python t.py yes
yes False
% python t.py no
no True
# Python 3.5.1
% python t.py yes
yes False
% python t.py no
no True
Interestingly enough, this particular test issue only affected Python
3.5+ before 861d684. Why?
* The issue appears to be caused by importing the green version of a
package followed by importing the non-green version of the same package.
* When we run tests using nose, it first imports the main tests module
(tests/__init__.py), which imports eventlet; that imports
eventlet.convenience, which then imports eventlet.green.socket.
* Before 861d684, on Python < 3.5, the eventlet.green.socket import
mentioned above would fail to import greendns (because of an import
cycle), so when running those tests greendns was only correctly imported
*after* the regular dns.
* Since 861d684 (or on Python 3.5+) the green socket module correctly
imports greendns, which means that when the regular dns subpackages are
imported in this test file, greendns is already imported and the
patching issue demonstrated by the code above is in effect.
The patching/greening weirdness is reported[1] now.
Fixes https://github.com/eventlet/eventlet/issues/267
This patch is contributed by Smarkets Limited.
[1] https://github.com/eventlet/eventlet/issues/316
|
| |
| |
The green socket module seemed to have only blocking DNS resolution
methods even with dnspython installed, which is inconsistent with the
documentation.
This patch has a few consequences:
* an import cycle is eliminated
* if an import cycle reappears here, it'll be visible
Note: eliminating the import cycle revealed an issue related to monkey
patching and the way we perform greendns tests (the test failures were
already present on Python 3.5[1], as that version has some import cycle
handling changes). The failures look like this:
======================================================================
FAIL: test_query_ans_types (tests.greendns_test.TestHostsResolver)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kuba/projects/eventlet/tests/greendns_test.py", line 97, in test_query_ans_types
assert isinstance(ans, greendns.dns.resolver.Answer)
AssertionError
======================================================================
FAIL: test_query_unknown_no_raise (tests.greendns_test.TestHostsResolver)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kuba/projects/eventlet/tests/greendns_test.py", line 129, in test_query_unknown_no_raise
assert isinstance(ans, greendns.dns.resolver.Answer)
AssertionError
This issue will be addressed in a separate commit.
This patch is contributed by Smarkets Limited.
[1] https://github.com/eventlet/eventlet/issues/267
|
| |
|
|
|
|
|
|
| |
This list of module dependencies is already present in the code three
times, and I intend to make a change that will require the same
dependencies to be reused even more.
This patch is contributed by Smarkets Limited.
|
| |
|
|
| |
This patch is contributed by Smarkets Limited.
|
| | |
|
| |
| |
pyopenssl 0.13 cannot be installed on my Fedora 23. It fails with:
building 'OpenSSL.crypto' extension
(...)
OpenSSL/crypto/crl.c:6:23: error: static declaration of ‘X509_REVOKED_dup’ follows non-static declaration
static X509_REVOKED * X509_REVOKED_dup(X509_REVOKED *orig) {
^
In file included from /usr/include/openssl/ssl.h:156:0,
from OpenSSL/crypto/x509.h:17,
from OpenSSL/crypto/crypto.h:30,
from OpenSSL/crypto/crl.c:3:
/usr/include/openssl/x509.h:751:15: note: previous declaration of ‘X509_REVOKED_dup’ was here
X509_REVOKED *X509_REVOKED_dup(X509_REVOKED *rev);
^
error: command 'gcc' failed with exit status 1
The bug is known: https://github.com/pyca/pyopenssl/issues/276
The workaround is simple: use a more recent version.
|
| | |
|
| |
| |
In projects which dynamically determine whether to activate eventlet,
it can be hard to avoid importing a low-level module like logging
before eventlet. When logging is imported, it initialises a
threading.RLock which it uses to protect the logging configuration. If
two greenthreads attempt to claim this lock, the second one will block
the /native/ thread, not just itself. As green systems usually have
only one native thread, this freezes the whole system.
Search the GC for unsafe RLocks and replace their internal locks with
safe ones while monkey patching.
The tests pass, but if they were to fail, the test process would never
return. To deal with this, I've added a test dependency on
subprocess32, a backport of the stdlib subprocess module from
Python 3. It offers a timeout option on Popen#communicate, which I've
arbitrarily set at 30 seconds.
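The timeout mechanism referred to above, sketched with the stdlib: on Python 3, subprocess.Popen.communicate() accepts a timeout directly; subprocess32 backports the same API to Python 2. The child command here is just a placeholder.

```python
import subprocess
import sys

# Run a trivial child process; communicate() raises
# subprocess.TimeoutExpired if the child takes longer than 30 seconds.
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("ok")'],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate(timeout=30)
```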
|
| |
|
|
| |
Signed-off-by: Steven Erenst <stevenerenst@gmail.com>
|
| |
| |
A real fix must work without any sleep(), even with a 0 delay.
With all these sync events the code makes perfect sense and execution is properly organized,
so I am 99% sure there is a problem in green/zmq.
Hint for the person willing to take this on in the future:
send/recv seems not to switch to zmq trampolining.
|
| |
|
|
| |
https://github.com/eventlet/eventlet/pull/278
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Credit to Garth Mollett.
Default behaviour: accept - handshake (blocking) - return new connection.
Now: accept - return new connection; some time later the handshake will be done implicitly.
The usual server code:
while server.alive:
    conn, addr = listener.accept()
    server.pool.spawn(server.process, conn, addr)
is vulnerable to a simple DoS attack, where an evil client connects to the HTTPS socket and does not perform the handshake,
thus blocking the server in `accept()` so no other clients can be accepted.
|
| |
|
|
|
| |
Mention that `Logger` objects can be supplied to wsgi server instances
(work done in #75)
|
| | |
|
| |
| |
Also, setsockopt(TCP_NODELAY) sometimes failed (UNIX sockets?).
https://github.com/eventlet/eventlet/issues/301
https://news.ycombinator.com/item?id=10607422
|
| | |
|
| |
| |
Fix the root cause of makefile().writelines() data loss.
https://github.com/eventlet/eventlet/issues/295
Also, wsgi.log.write() could break the file-object API by not returning the number of bytes written.
https://github.com/eventlet/eventlet/issues/296
|
| | |
|
| |
|
|
|
| |
Fixes surprising errors in artificial IO-dry scenarios (like tests).
Won't hurt in a real environment.
|
| | |
|
| | |
|
| | |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Closes https://github.com/eventlet/eventlet/issues/295 (the wsgi module
now uses a custom writelines implementation).
Those write() calls might write only part of the data (and even if they
don't, it's more readable to make sure all data is written explicitly).
I changed the test code so that the write() implementation returns the
number of characters logged; it now cooperates nicely with writeall().
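The "write until done" loop implied above can be sketched as follows; `writeall` and the `write` callable are illustrative names, not the wsgi module's actual helpers.

```python
def writeall(write, data):
    # `write` is any file-object-style callable that returns the number
    # of bytes it accepted; keep calling it until the buffer is consumed.
    written = 0
    while written < len(data):
        written += write(data[written:])
```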
|
| | |
|
| | |
|
| |
|
|
|
|
| |
.communicate()
https://github.com/eventlet/eventlet/issues/290
|
| |
| |
The sendto() interface as defined in the Python documentation:
socket.sendto(string, address)
socket.sendto(string, flags, address)
I didn't catch the fact that [1] broke this; this patch fixes it and adds
a sendto/recvfrom test to make sure it doesn't happen again (it turns out
we didn't have any).
GitHub issue: https://github.com/eventlet/eventlet/issues/290
Fixes: bc4d1b5 - gh-274: Handle blocking I/O errors in GreenSocket
[1] bc4d1b5d362e5baaeded35b1e339b9db08172dd2
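Both documented sendto() call shapes can be exercised with plain stdlib sockets over loopback UDP, roughly like the test this commit adds (names and payloads here are illustrative):

```python
import socket

# A receiver bound to an ephemeral loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))
recv_sock.settimeout(5)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b'one', addr)        # sendto(string, address)
send_sock.sendto(b'two', 0, addr)     # sendto(string, flags, address)

first, _ = recv_sock.recvfrom(64)
second, _ = recv_sock.recvfrom(64)
send_sock.close()
recv_sock.close()
```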
|
| | |
|
| |
|
|
| |
I hope I didn't miss anything/anyone.
|
| |
| |
The original commit[1] was not enough because we run tests using tox,
and tox by default doesn't pass all environment variables to the
environments in which tests run. This patch whitelists a subset of the
environment variables set by Travis, including the one that we read and use.
[1] 74f418bcf38ac128cadad641d59d7ecd8f127340
|
| |
| |
When a socket timeout was set, the sendall-like semantics resulted in
losing information about sent data and raising a socket.timeout exception.
The previous GreenSocket.send() implementation called fd.send() in a
loop until all data was sent. The issue would manifest if at least one
fd.send() call succeeded and was followed by an fd.send() call that would
block and a trampoline that timed out. The client code would see
socket.timeout being raised and would rightfully assume that no data was
sent, which would be incorrect.
Discussed at https://github.com/eventlet/eventlet/issues/260.
The original commit[1] was reverted because it broke the
test_multiple_readers test. After some debugging I believe I more or
less know what's going on:
The test uses sendall(), which calls send() in a loop if send() reports
only part of the data sent. Previously sendall() would call send() only
once anyway, because send() had sendall-like semantics; send() has an
internal loop of its own and, previously, it would call the underlying
socket object's send() and, in case of a partial write, invoke the
trampoline mechanism, which would switch to another greenthread ready to run.
After the change, partial writes no longer trigger the trampoline
mechanism, which means that during this test it's likely that
sendall() doesn't yield control to the hub even once. Similarly with the
current recv() implementation: it attempts to read from the socket first
and only yields control to the hub if there's nothing to read at the
moment; when one of the readers obtained control, it would likely manage
to read all the data from the socket without yielding control to the hub
and letting the other reader receive any data.
The test changes the sending code so that it not only yields to the hub
but also waits a bit, so that both readers have to yield to the hub
when trying to read with no data available; the tests confirmed
this lets both readers receive some data from the socket, which is
the purpose of the test.
[1] 4656eadfa5ae1237036a63ad4004dbee4572debf
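The corrected division of labour can be sketched as below (green_send/green_sendall are illustrative stand-ins, not eventlet's code): send() makes a single attempt and reports partial writes to the caller; only sendall() loops until done, so a timeout can no longer hide bytes that were already written.

```python
import socket

def green_send(sock, data):
    # Single attempt: the return value tells the caller how much was
    # actually written, matching stdlib socket.send() semantics.
    return sock.send(data)

def green_sendall(sock, data):
    # The looping lives here, in sendall(), not in send().
    view = memoryview(data)
    while len(view):
        sent = green_send(sock, view)
        view = view[sent:]
```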
|
| | |
|
| |
|
|
|
| |
Newest pip versions don't support it anymore and it's been deprecated
for a while.
|
| |
|
|
| |
fixes https://github.com/eventlet/eventlet/issues/285
|
| |
|
|
|
| |
All those code blocks do the same thing, so I decided to extract the
logic into one place.
|
| |
|
|
|
|
| |
I completely missed those two.
Fixes: 24d283cfdb3cb9e6f0b4cad3d676940b4a3ce252
|
| |
|
|
|
| |
BitBucket: https://github.com/eventlet/eventlet/issues/157
GitHub: https://github.com/eventlet/eventlet/pull/56
|
| |
| |
The original patch[1] missed one thing which I just discovered - there's
selectors.DefaultSelector alias for the "the most efficient implementation
available on the current platform"[2].
Before this patch a non-green selector class would be obtained when
DefaultSelector was used.
[1] 0d509ef7d2eea3ed4f8ade35d5d09918c811b56c
[2] https://docs.python.org/3.5/library/selectors.html#selectors.DefaultSelector
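The alias problem can be sketched like this: selectors.DefaultSelector is a plain module-level name bound at import time, so patching the concrete selector classes alone is not enough; the alias itself must be repointed (GreenSelector here is a hypothetical stand-in for a green implementation).

```python
import selectors

class GreenSelector(selectors.SelectSelector):
    # Stand-in for a green selector implementation.
    pass

# Repoint the alias so code using DefaultSelector gets the green class.
_unpatched = selectors.DefaultSelector
selectors.DefaultSelector = GreenSelector
```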
|
| |
|
|
|
|
|
|
| |
devpoll is another polling method, added in Python 3.3. We don't have
a green version of it, so we had better remove it too (see [1] for more
details).
[1] f63165c0e3c85699ebdb454878d1eaea13e90553
|