author     Sergey Shepelev <temotor@gmail.com>  2014-04-23 13:27:49 +0400
committer  Sergey Shepelev <temotor@gmail.com>  2014-04-23 15:15:22 +0400
commit     1b9f0f0edb285be01bc570b1f64ee2aa2cba49db (patch)
tree       90dff04380efcae8250730f9832c4ab2c786f338 /doc
parent     92d12567b20dbe159a1a8a8db4d2cd05d15e6588 (diff)
download   eventlet-1b9f0f0edb285be01bc570b1f64ee2aa2cba49db.tar.gz
python3 compatibility: print function
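The pattern this commit applies across the docs is the Python-3-compatible print call; a minimal sketch (the `body` value here is a made-up placeholder, not data from the commit):

```python
# Python 2's print *statement* is a SyntaxError on Python 3:
#     print "got body", 1234
# The call form used throughout this commit works on 2.6+ and 3.x
# (on Python 2 you would also add: from __future__ import print_function).
body = b"x" * 1234             # placeholder for a fetched response body
print("got body", len(body))   # prints: got body 1234
```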
Diffstat (limited to 'doc')
-rw-r--r--  doc/design_patterns.rst  | 41
-rw-r--r--  doc/index.rst            | 12
-rw-r--r--  doc/modules/wsgi.rst     | 12
-rw-r--r--  doc/real_index.html      |  2
-rw-r--r--  doc/ssl.rst              | 14
-rw-r--r--  doc/threading.rst        |  4
6 files changed, 42 insertions, 43 deletions
diff --git a/doc/design_patterns.rst b/doc/design_patterns.rst
index aa60d43..f27f37d 100644
--- a/doc/design_patterns.rst
+++ b/doc/design_patterns.rst
@@ -10,22 +10,21 @@ Client Pattern
The canonical client-side example is a web crawler: given a list of URLs, it retrieves their bodies for later processing. Here is a very simple example::
-    urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
-        "https://wiki.secondlife.com/w/images/secondlife.jpg",
-        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]
-
-    import eventlet
-    from eventlet.green import urllib2
+    import eventlet
+    from eventlet.green import urllib2
+
+    urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
+            "https://wiki.secondlife.com/w/images/secondlife.jpg",
+            "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

-    def fetch(url):
-        return urllib2.urlopen(url).read()
-
-    pool = eventlet.GreenPool()
-    for body in pool.imap(fetch, urls):
-        print "got body", len(body)
+    def fetch(url):
+        return urllib2.urlopen(url).read()
+
+    pool = eventlet.GreenPool()
+    for body in pool.imap(fetch, urls):
+        print("got body", len(body))
-There is a slightly more complex version of this in the :ref:`web crawler example <web_crawler_example>`. Here's a tour of the interesting lines in this crawler.
+There is a slightly more complex version of this in the :ref:`web crawler example <web_crawler_example>`. Here's a tour of the interesting lines in this crawler.
``from eventlet.green import urllib2`` is how you import a cooperatively-yielding version of urllib2. It is the same in all respects to the standard version, except that it uses green sockets for its communication. This is an example of the :ref:`import-green` pattern.
@@ -40,15 +39,15 @@ Server Pattern
--------------------
Here's a simple server-side example, a simple echo server::
-
+
import eventlet
-
+
def handle(client):
while True:
c = client.recv(1)
if not c: break
client.sendall(c)
-
+
server = eventlet.listen(('0.0.0.0', 6000))
pool = eventlet.GreenPool(10000)
while True:
@@ -59,7 +58,7 @@ The file :ref:`echo server example <echo_server_example>` contains a somewhat mo
``server = eventlet.listen(('0.0.0.0', 6000))`` uses a convenience function to create a listening socket.
-``pool = eventlet.GreenPool(10000)`` creates a pool of green threads that could handle ten thousand clients.
+``pool = eventlet.GreenPool(10000)`` creates a pool of green threads that could handle ten thousand clients.
``pool.spawn_n(handle, new_sock)`` launches a green thread to handle the new client. The accept loop doesn't care about the return value of the ``handle`` function, so it uses :meth:`spawn_n <eventlet.greenpool.GreenPool.spawn_n>`, instead of :meth:`spawn <eventlet.greenpool.GreenPool.spawn>`.
@@ -74,13 +73,13 @@ Here's a somewhat contrived example: a server that receives POSTs from clients t
import eventlet
feedparser = eventlet.import_patched('feedparser')
-
+
pool = eventlet.GreenPool()
-
+
def fetch_title(url):
d = feedparser.parse(url)
return d.feed.get('title', '')
-
+
def app(environ, start_response):
pile = eventlet.GreenPile(pool)
for url in environ['wsgi.input'].readlines():
@@ -110,4 +109,4 @@ Note that in line 1, the Pile is constructed using the global pool as its argume
Line 3 is just a spawn, but note that we don't store any return value from it. This is because the return value is kept in the Pile itself. This becomes evident in the next line...
-Line 4 is where we use the fact that the Pile is an iterator. Each element in the iterator is one of the return values from the fetch_title function, which are strings. We can use a normal Python idiom (:func:`join`) to concatenate these incrementally as they happen. \ No newline at end of file
+Line 4 is where we use the fact that the Pile is an iterator. Each element in the iterator is one of the return values from the fetch_title function, which are strings. We can use a normal Python idiom (:func:`join`) to concatenate these incrementally as they happen.
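The Pile-as-iterator idea described above can be sketched without eventlet; here a stdlib `ThreadPoolExecutor` stands in for the GreenPile, and `fetch_title` is a hypothetical stub rather than the feedparser call from the docs:

```python
# Rough stand-in for a GreenPile: spawn tasks, then iterate results in
# submission order and join them incrementally, as the docs describe.
from concurrent.futures import ThreadPoolExecutor

def fetch_title(url):
    return "title-of-%s" % url   # stub; the real docs parse a feed here

urls = ["a", "b", "c"]
with ThreadPoolExecutor() as pool:
    pile = [pool.submit(fetch_title, u) for u in urls]  # like pile.spawn(...)
    body = "".join(f.result() for f in pile)            # iterate + join

print(body)  # prints: title-of-atitle-of-btitle-of-c
```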
diff --git a/doc/index.rst b/doc/index.rst
index 16d7c1c..b05be08 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -6,16 +6,16 @@ Code talks! This is a simple web crawler that fetches a bunch of urls concurren
urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
"https://wiki.secondlife.com/w/images/secondlife.jpg",
"http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]
-
+
import eventlet
- from eventlet.green import urllib2
-
+ from eventlet.green import urllib2
+
def fetch(url):
return urllib2.urlopen(url).read()
-
+
pool = eventlet.GreenPool()
for body in pool.imap(fetch, urls):
- print "got body", len(body)
+ print("got body", len(body))
Contents
=========
@@ -35,7 +35,7 @@ Contents
environment
modules
-
+
authors
history
diff --git a/doc/modules/wsgi.rst b/doc/modules/wsgi.rst
index 401ea99..aa011d7 100644
--- a/doc/modules/wsgi.rst
+++ b/doc/modules/wsgi.rst
@@ -1,7 +1,7 @@
:mod:`wsgi` -- WSGI server
===========================
-The wsgi module provides a simple and easy way to start an event-driven
+The wsgi module provides a simple and easy way to start an event-driven
`WSGI <http://wsgi.org/wsgi/>`_ server. This can serve as an embedded
web server in an application, or as the basis for a more full-featured web
server package. One such package is `Spawning <http://pypi.python.org/pypi/Spawning/>`_.
@@ -10,11 +10,11 @@ To launch a wsgi server, simply create a socket and call :func:`eventlet.wsgi.se
from eventlet import wsgi
import eventlet
-
+
def hello_world(env, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello, World!\r\n']
-
+
wsgi.server(eventlet.listen(('', 8090)), hello_world)
@@ -55,14 +55,14 @@ For example::
import eventlet
def hook(env, arg1, arg2, kwarg3=None, kwarg4=None):
- print 'Hook called: %s %s %s %s %s' % (env, arg1, arg2, kwarg3, kwarg4)
-
+ print('Hook called: %s %s %s %s %s' % (env, arg1, arg2, kwarg3, kwarg4))
+
def hello_world(env, start_response):
env['eventlet.posthooks'].append(
(hook, ('arg1', 'arg2'), {'kwarg3': 3, 'kwarg4': 4}))
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello, World!\r\n']
-
+
wsgi.server(eventlet.listen(('', 8090)), hello_world)
The above code will print the WSGI environment and the other passed function
diff --git a/doc/real_index.html b/doc/real_index.html
index 0747cf5..35248d3 100644
--- a/doc/real_index.html
+++ b/doc/real_index.html
@@ -131,7 +131,7 @@ def fetch(url):
pool = eventlet.GreenPool()
for body in pool.imap(fetch, urls):
- print "got body", len(body)
+ print("got body", len(body))
</code></pre>
diff --git a/doc/ssl.rst b/doc/ssl.rst
index 2b3bca5..0d47364 100644
--- a/doc/ssl.rst
+++ b/doc/ssl.rst
@@ -10,9 +10,9 @@ In either case, the ``green`` modules handle SSL sockets transparently, just
bodies = [coros.execute(urllib2.urlopen, url)
for url in ("https://secondlife.com","https://google.com")]
for b in bodies:
- print b.wait().read()
-
-
+ print(b.wait().read())
+
+
With Python 2.6
----------------
@@ -34,7 +34,7 @@ Here's an example of a server::
from eventlet.green import socket
from eventlet.green.OpenSSL import SSL
-
+
# insecure context, only for example purposes
context = SSL.Context(SSL.SSLv23_METHOD)
context.set_verify(SSL.VERIFY_NONE, lambda *x: True)
@@ -42,15 +42,15 @@ Here's an example of a server::
# create underlying green socket and wrap it in ssl
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection = SSL.Connection(context, sock)
-
+
# configure as server
connection.set_accept_state()
connection.bind(('127.0.0.1', 80443))
connection.listen(50)
-
+
# accept one client connection then close up shop
client_conn, addr = connection.accept()
- print client_conn.read(100)
+ print(client_conn.read(100))
client_conn.shutdown()
client_conn.close()
connection.close()
diff --git a/doc/threading.rst b/doc/threading.rst
index ed24b72..3a0486e 100644
--- a/doc/threading.rst
+++ b/doc/threading.rst
@@ -1,7 +1,7 @@
Threads
========
-Eventlet is thread-safe and can be used in conjunction with normal Python threads. The way this works is that coroutines are confined to their 'parent' Python thread. It's like each thread contains its own little world of coroutines that can switch between themselves but not between coroutines in other threads.
+Eventlet is thread-safe and can be used in conjunction with normal Python threads. The way this works is that coroutines are confined to their 'parent' Python thread. It's like each thread contains its own little world of coroutines that can switch between themselves but not between coroutines in other threads.
.. image:: /images/threading_illustration.png
@@ -19,7 +19,7 @@ The simplest thing to do with :mod:`~eventlet.tpool` is to :func:`~eventlet.tpoo
>>> import thread
>>> from eventlet import tpool
>>> def my_func(starting_ident):
- ... print "running in new thread:", starting_ident != thread.get_ident()
+ ... print("running in new thread:", starting_ident != thread.get_ident())
...
>>> tpool.execute(my_func, thread.get_ident())
running in new thread: True
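The tpool doctest above still imports Python 2's ``thread`` module (renamed ``_thread`` on Python 3). A hedged sketch of the same new-thread identity check using only the stdlib — plain ``threading`` stands in for eventlet's tpool here:

```python
try:
    import thread              # Python 2 name
except ImportError:
    import _thread as thread   # renamed in Python 3

import threading

def my_func(starting_ident):
    # True when this runs on a different OS thread than the caller
    print("running in new thread:", starting_ident != thread.get_ident())

t = threading.Thread(target=my_func, args=(thread.get_ident(),))
t.start()
t.join()   # prints: running in new thread: True
```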