Diffstat (limited to 'doc')
-rw-r--r--  doc/build/content/dbengine.myt        43
-rw-r--r--  doc/build/content/docstrings.myt       2
-rw-r--r--  doc/build/content/document_base.myt    2
-rw-r--r--  doc/build/content/metadata.myt       201
-rw-r--r--  doc/build/content/pooling.myt         36
5 files changed, 235 insertions(+), 49 deletions(-)
diff --git a/doc/build/content/dbengine.myt b/doc/build/content/dbengine.myt
index 0c54c3a7c..ea811296b 100644
--- a/doc/build/content/dbengine.myt
+++ b/doc/build/content/dbengine.myt
@@ -13,17 +13,24 @@
<p>
Engines exist for SQLite, Postgres, MySQL, and Oracle, using the Pysqlite, Psycopg (1 or 2), MySQLdb, and cx_Oracle modules. Each engine imports its corresponding module, which is required to be installed. For Postgres and Oracle, an alternate module may be specified at construction time as well.
</p>
- <p>An example of connecting to each engine is as follows:</p>
+ <p>The string-based argument names for connecting are translated to the appropriate names when the connection is made; argument names include "host" or "hostname" for the database host, "database", "db", or "dbname" for the database name (also "dsn" for Oracle), "user" or "username" for the user, and "password", "pw", or "passwd" for the password. SQLite expects "filename" or "file" for the filename, or if None it defaults to ":memory:".</p>
+ <p>The connection arguments can be specified as a string + dictionary pair, or a single URL-encoded string, as follows:</p>
<&|formatting.myt:code&>
from sqlalchemy import *
# sqlite in memory
sqlite_engine = create_engine('sqlite', {'filename':':memory:'}, **opts)
+
+ # via URL
+ sqlite_engine = create_engine('sqlite://', **opts)
# sqlite using a file
sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'}, **opts)
+ # via URL
+ sqlite_engine = create_engine('sqlite://filename=querytest.db', **opts)
+
# postgres
postgres_engine = create_engine('postgres',
{'database':'test',
@@ -31,6 +38,9 @@
'user':'scott',
'password':'tiger'}, **opts)
+ # via URL
+ postgres_engine = create_engine('postgres://database=test&amp;host=127.0.0.1&amp;user=scott&amp;password=tiger')
+
# mysql
mysql_engine = create_engine('mysql',
{
@@ -49,20 +59,17 @@
</&>
<p>Note that the general form of connecting to an engine is:</p>
- <&|formatting.myt:code&>
+ <&|formatting.myt:code &>
+ # separate arguments
engine = create_engine(
<enginename>,
{<named DBAPI arguments>},
- <sqlalchemy options>
+ <sqlalchemy options>
)
+
+ # url
+ engine = create_engine('&lt;enginename&gt;://&lt;named DBAPI arguments&gt;', <sqlalchemy options>)
</&>
- <p>The second argument is a dictionary whose key/value pairs will be passed to the underlying DBAPI connect() method as keyword arguments. Any keyword argument supported by the DBAPI module can be in this dictionary.</p>
- <p>Engines can also be loaded by URL. The above format is converted into <span class="codeline"><% '<enginename>://key=val&key=val' |h %></span>:
- <&|formatting.myt:code&>
- sqlite_engine = create_engine('sqlite://filename=querytest.db')
- postgres_engine = create_engine('postgres://database=test&user=scott&password=tiger')
- </&>
- </p>
</&>
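The URL form above can be pictured as a simple key=value parser. Below is a rough, hypothetical sketch (not SQLAlchemy's actual parser) of how a `'<enginename>://key=val&key=val'` string could be split into an engine name plus DBAPI keyword arguments:

```python
# Hypothetical sketch of splitting an engine URL into an engine name
# plus a dictionary of DBAPI connect arguments. Illustrative only;
# this is not SQLAlchemy's real create_engine implementation.
def parse_engine_url(url):
    enginename, _, args = url.partition('://')
    opts = {}
    if args:
        for pair in args.split('&'):
            key, _, value = pair.partition('=')
            opts[key] = value
    return enginename, opts

name, opts = parse_engine_url(
    'postgres://database=test&host=127.0.0.1&user=scott&password=tiger')
# name == 'postgres'
# opts == {'database': 'test', 'host': '127.0.0.1',
#          'user': 'scott', 'password': 'tiger'}
```

Note that an argument-less URL such as `'sqlite://'` yields an empty dictionary, which for SQLite means the `:memory:` default described above.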
<&|doclib.myt:item, name="methods", description="Database Engine Methods" &>
<p>A few useful methods off the SQLEngine are described here:</p>
@@ -95,7 +102,18 @@
<&|doclib.myt:item, name="options", description="Database Engine Options" &>
<p>The remaining arguments to <span class="codeline">create_engine</span> are keyword arguments that are passed to the specific subclass of <span class="codeline">sqlalchemy.engine.SQLEngine</span> being used, as well as the underlying <span class="codeline">sqlalchemy.pool.Pool</span> instance. All of the options described in the previous section <&formatting.myt:link, path="pooling_configuration"&> can be specified, as well as engine-specific options:</p>
<ul>
- <li>pool=None : an instance of <span class="codeline">sqlalchemy.pool.DBProxy</span> to be used as the underlying source for connections (DBProxy is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.</li>
+ <li><p>pool=None : an instance of <span class="codeline">sqlalchemy.pool.Pool</span> to be used as the underlying source for connections, overriding the engine's connect arguments (pooling is described in the previous section). If None, a default Pool (QueuePool or SingletonThreadPool as appropriate) will be created using the engine's connect arguments.</p>
+ <p>Example:</p>
+ <&|formatting.myt:code&>
+ from sqlalchemy import *
+ import sqlalchemy.pool as pool
+ import MySQLdb
+
+ def getconn():
+ return MySQLdb.connect(user='ed', db='mydb')
+
+ engine = create_engine('mysql', pool=pool.QueuePool(getconn, pool_size=20, max_overflow=40))
+ </&></li>
<li>echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engines logger, which defaults to sys.stdout. A SQLEngine instances' "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.</li>
<li>logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.</li>
<li>module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, it's cx_Oracle.</li>
@@ -103,7 +121,8 @@
<li>use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases, that the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of using <% "<column1>(+)=<column2>" |h%> must be used in order to achieve a LEFT OUTER JOIN. It's advised that the Oracle database be configured to have full ANSI support instead of using this feature.</li>
<li>use_oids=False : used only by Postgres, will enable the column name "oid" as the object ID column. Postgres as of 8.1 has object IDs disabled by default.</li>
<li>convert_unicode=False : if set to True, all String/character based types will convert Unicode values to raw byte values going into the database, and all raw byte values to Python Unicode coming out in result sets. This is an engine-wide method to provide unicode across the board. For unicode conversion on a column-by-column level, use the Unicode column type instead.</li>
- <li>encoding='utf-8' : the encoding to use when doing unicode translations.</li>
+ <li>encoding='utf-8' : the encoding to use for Unicode translations - passed to all encode/decode functions.</li>
+ <li>echo_uow=False : when True, logs unit of work commit plans to the standard output.</li>
</ul>
</&>
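The convert_unicode and encoding options above amount to encoding on the way into the database and decoding on the way out. A minimal conceptual sketch (the real conversion lives inside the engine's type system, not in free functions like these):

```python
# Conceptual sketch of what convert_unicode=True does engine-wide:
# unicode strings are encoded to raw bytes going into the database,
# and raw bytes are decoded back to unicode in result rows.
# to_db/from_db are hypothetical helpers, not SQLAlchemy API.
encoding = 'utf-8'

def to_db(value):
    # bind parameter: unicode -> raw bytes
    return value.encode(encoding)

def from_db(value):
    # result row value: raw bytes -> unicode
    return value.decode(encoding)

raw = to_db(u'caf\xe9')
assert raw == b'caf\xc3\xa9'
assert from_db(raw) == u'caf\xe9'
```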
<&|doclib.myt:item, name="proxy", description="Using the Proxy Engine" &>
diff --git a/doc/build/content/docstrings.myt b/doc/build/content/docstrings.myt
index 9ab1146d8..91513fcb9 100644
--- a/doc/build/content/docstrings.myt
+++ b/doc/build/content/docstrings.myt
@@ -14,7 +14,7 @@
<& pydoc.myt:obj_doc, obj=schema &>
<& pydoc.myt:obj_doc, obj=engine, classes=[engine.SQLEngine, engine.ResultProxy, engine.RowProxy] &>
-<& pydoc.myt:obj_doc, obj=sql, classes=[sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
+<& pydoc.myt:obj_doc, obj=sql, classes=[sql.ClauseParameters, sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
<& pydoc.myt:obj_doc, obj=pool, classes=[pool.DBProxy, pool.Pool, pool.QueuePool, pool.SingletonThreadPool] &>
<& pydoc.myt:obj_doc, obj=mapping &>
<& pydoc.myt:obj_doc, obj=mapping.objectstore, classes=[mapping.objectstore.Session, mapping.objectstore.Session.SessionTrans, mapping.objectstore.UnitOfWork] &>
diff --git a/doc/build/content/document_base.myt b/doc/build/content/document_base.myt
index a8b091015..f37857e25 100644
--- a/doc/build/content/document_base.myt
+++ b/doc/build/content/document_base.myt
@@ -23,7 +23,7 @@
onepage='documentation'
index='index'
title='SQLAlchemy Documentation'
- version = '0.1.3'
+ version = '0.1.4'
</%attr>
<%method title>
diff --git a/doc/build/content/metadata.myt b/doc/build/content/metadata.myt
index e35d205d7..5b03ee647 100644
--- a/doc/build/content/metadata.myt
+++ b/doc/build/content/metadata.myt
@@ -107,15 +107,11 @@
>>> othertable is news_articles
True
</&>
-
-
-
</&>
<&|doclib.myt:item, name="creating", description="Creating and Dropping Database Tables" &>
<p>Creating and dropping is easy, just use the <span class="codeline">create()</span> and <span class="codeline">drop()</span> methods:
<&|formatting.myt:code&>
- <&formatting.myt:poplink&>
- employees = Table('employees', engine,
+ <&formatting.myt:poplink&>employees = Table('employees', engine,
Column('employee_id', Integer, primary_key=True),
Column('employee_name', String(60), nullable=False, key='name'),
Column('employee_dept', Integer, ForeignKey("departments.department_id"))
@@ -126,18 +122,162 @@ CREATE TABLE employees(
employee_name VARCHAR(60) NOT NULL,
employee_dept INTEGER REFERENCES departments(department_id)
)
-
{} </&>
-
- <&formatting.myt:poplink&>
- employees.drop() <&|formatting.myt:codepopper, link="sql" &>
+
+ <&formatting.myt:poplink&>employees.drop() <&|formatting.myt:codepopper, link="sql" &>
DROP TABLE employees
{} </&>
</&>
</&>
+
+ <&|doclib.myt:item, name="defaults", description="Column Defaults and OnUpdates" &>
+ <p>SQLAlchemy includes flexible constructs for creating default values for columns upon the insertion of rows, as well as upon update. These defaults can take several forms: a constant, a Python callable to be pre-executed before the SQL is executed, a SQL expression or function to be pre-executed, a pre-executed Sequence (for databases that support sequences), or a "passive" default, which is a default triggered by the database itself upon insert, the value of which can then be post-fetched by the engine, provided the row has a primary key with which to locate it.</p>
+ <&|doclib.myt:item, name="oninsert", description="Pre-Executed Insert Defaults" &>
+ <p>A basic default is most easily specified by the "default" keyword argument to Column:</p>
+ <&|formatting.myt:code&>
+ # a function to create primary key ids
+ i = 0
+ def mydefault():
+     global i
+     i += 1
+     return i
+
+ t = Table("mytable", db,
+ # function-based default
+ Column('id', Integer, primary_key=True, default=mydefault),
+
+ # a scalar default
+ Column('key', String(10), default="default")
+ )
+ </&>
+ <p>The "default" keyword can also take SQL expressions, including select statements or direct function calls:</p>
+ <&|formatting.myt:code&>
+ t = Table("mytable", db,
+ Column('id', Integer, primary_key=True),
+
+ # define 'create_date' to default to now()
+ Column('create_date', DateTime, default=func.now()),
+
+ # define 'key' to pull its default from the 'keyvalues' table
+ Column('key', String(20), default=keyvalues.select(keyvalues.c.type=='type1', limit=1))
+ )
+ </&>
+ <p>The "default" keyword argument is shorthand for using a ColumnDefault object in a column definition. This syntax is optional, but is required for other types of defaults, further described below:</p>
+ <&|formatting.myt:code&>
+ Column('mycolumn', String(30), ColumnDefault(func.get_data()))
+ </&>
+ </&>
+
+ <&|doclib.myt:item, name="onupdate", description="Pre-Executed OnUpdate Defaults" &>
+ <p>Similar to an on-insert default is an on-update default, which is most easily specified by the "onupdate" keyword to Column; this can also be a constant, plain Python function, or SQL expression:</p>
+ <&|formatting.myt:code&>
+ t = Table("mytable", db,
+ Column('id', Integer, primary_key=True),
+
+ # define 'last_updated' to be populated with current_timestamp (the ANSI-SQL version of now())
+ Column('last_updated', DateTime, onupdate=func.current_timestamp()),
+ )
+ </&>
+ <p>To use a ColumnDefault explicitly for an on-update, use the "for_update" keyword argument:</p>
+ <&|formatting.myt:code&>
+ Column('mycolumn', String(30), ColumnDefault(func.get_data(), for_update=True))
+ </&>
+ </&>
+
+ <&|doclib.myt:item, name="passive", description="Inline Default Execution: PassiveDefault" &>
+ <p>A PassiveDefault indicates a column default or on-update value that is executed automatically by the database. This construct specifies a SQL snippet to be rendered as the "DEFAULT" clause when creating tables, and also indicates the presence of new data that is available to be "post-fetched" after an insert or update execution.</p>
+ <&|formatting.myt:code&>
+ t = Table('test', e,
+ Column('mycolumn', DateTime, PassiveDefault("sysdate"))
+ )
+ </&>
+ <p>A create call for the above table will produce:</p>
+ <&|formatting.myt:code&>
+ CREATE TABLE test (
+ mycolumn datetime default sysdate
+ )
+ </&>
+ <p>PassiveDefaults also send a message to the SQLEngine that data is available after update or insert. The object-relational mapper system uses this information to post-fetch rows after insert or update, so that instances can be refreshed with the new data. Below is a simplified version:</p>
+ <&|formatting.myt:code&>
+ # table with passive defaults
+ mytable = Table('mytable', engine,
+     Column('my_id', Integer, primary_key=True),
+
+     Column('name', String(30)),
+
+     # an on-insert database-side default
+     Column('data1', Integer, PassiveDefault("d1_func")),
+
+     # an on-update database-side default
+     Column('data2', Integer, PassiveDefault("d2_func", for_update=True))
+ )
+ # insert a row
+ mytable.insert().execute(name='fred')
+
+ # ask the engine: were there defaults fired off on that row?
+ if mytable.engine.lastrow_has_defaults():
+     # postfetch the row based on primary key.
+     # this only works for a table with primary key columns defined
+     primary_key = mytable.engine.last_inserted_ids()
+     row = mytable.select(mytable.c.my_id == primary_key[0]).execute().fetchone()
+ </&>
+ <p>Tables reflected from the database that have default values set on them will receive those defaults as PassiveDefaults.</p>
+
+ <&|doclib.myt:item, name="postgres", description="The Catch: Postgres Primary Key Defaults always Pre-Execute" &>
+ <p>Current Postgres support does not rely upon OIDs to determine the identity of a row. This is because the usage of OIDs has been deprecated with Postgres, and they are disabled by default for table creates as of PG version 8. Psycopg2's "cursor.lastrowid" attribute only returns OIDs. Therefore, when inserting a new row which has passive defaults set on the primary key columns, the default function is <b>still pre-executed</b>, since SQLAlchemy would otherwise have no way of retrieving the row just inserted.</p>
+ </&>
+ </&>
+ <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
+ <p>A table with a sequence looks like:</p>
+ <&|formatting.myt:code&>
+ table = Table("cartitems", db,
+ Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
+ Column("description", String(40)),
+ Column("createdate", DateTime())
+ )
+ </&>
+ <p>The Sequence is used with Postgres or Oracle to indicate the name of a Sequence that will be used to create default values for a column. When a table with a Sequence on a column is created by SQLAlchemy, the Sequence object is also created. Similarly, the Sequence is dropped when the table is dropped. Sequences are typically used with primary key columns. When using Postgres, if an integer primary key column defines no explicit Sequence or other default method, SQLAlchemy will create the column with the SERIAL keyword, and will pre-execute a sequence named "tablename_columnname_seq" in order to retrieve new primary key values. Oracle, which has no "auto-increment" keyword, requires that a Sequence be created for a table if automatic primary key generation is desired. Note that for all databases, primary key values can always be explicitly stated within the bind parameters for any insert statement as well, removing the need for any kind of default generation function.</p>
+
+ <p>A Sequence object can be defined on a Table that is then used for a non-sequence-supporting database. In that case, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
+
+ <p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
+ </&>
+ </&>
+ <&|doclib.myt:item, name="indexes", description="Defining Indexes" &>
+ <p>Indexes can be defined on table columns: named or anonymous, unique or non-unique, single-column or multi-column. Indexes are included along with table create and drop statements. They are not used for any kind of run-time constraint checking; SQLAlchemy leaves that job to the expert on constraint checking, the database itself.</p>
+ <&|formatting.myt:code&>
+ mytable = Table('mytable', engine,
+
+ # define a unique index
+ Column('col1', Integer, unique=True),
+
+ # define a unique index with a specific name
+ Column('col2', Integer, unique='mytab_idx_1'),
+
+ # define a non-unique index
+ Column('col3', Integer, index=True),
+
+ # define a non-unique index with a specific name
+ Column('col4', Integer, index='mytab_idx_2'),
+
+ # pass the same name to multiple columns to add them to the same index
+ Column('col5', Integer, index='mytab_idx_2'),
+
+ Column('col6', Integer),
+ Column('col7', Integer)
+ )
+
+ # create the table. all the indexes will be created along with it.
+ mytable.create()
+
+ # indexes can also be specified standalone
+ i = Index('mytab_idx_3', mytable.c.col6, mytable.c.col7, unique=False)
+
+ # which can then be created separately (will also get created with table creates)
+ i.create()
+
+ </&>
+ </&>
<&|doclib.myt:item, name="adapting", description="Adapting Tables to Alternate Engines" &>
- <p>Occasionally an application will need to reference the same tables within multiple databases simultaneously. Since a Table object is specific to a SQLEngine, an extra method is provided to create copies of the Table object for a different SQLEngine instance, which can represent a different set of connection parameters, or a totally different database driver:
+ <p>A Table object created against a specific engine can be re-created against a new engine using the <span class="codeline">toengine</span> method:</p>
<&|formatting.myt:code&>
# create two engines
@@ -153,7 +293,7 @@ DROP TABLE employees
pg_users = users.toengine(postgres_engine)
</&>
- <p>You can also create tables using a "database neutral" engine, which can serve as a starting point for tables that are then adapted to specific engines:</p>
+ <p>Also available is the "database neutral" ansisql engine:</p>
<&|formatting.myt:code&>
import sqlalchemy.ansisql as ansisql
generic_engine = ansisql.engine()
@@ -162,30 +302,27 @@ DROP TABLE employees
Column('user_id', Integer),
Column('user_name', String(50))
)
-
- sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'})
- sqlite_users = users.toengine(sqlite_engine)
- sqlite_users.create()
</&>
-
- </&>
+ <p>Flexible "multi-engined" tables can also be achieved via the proxy engine, described in the section <&formatting.myt:link, path="dbengine_proxy"&>.</p>
- <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
- <P>A table with a sequence looks like:</p>
- <&|formatting.myt:code&>
- table = Table("cartitems", db,
- Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
- Column("description", String(40)),
- Column("createdate", DateTime())
- )
- </&>
- <p>The Sequence is used when a Postgres or Oracle database schema defines a sequence of a specific name which must be used to create integer values. If a Sequence is not defined, Postgres will default to regular SERIAL access. Oracle currently has no default primary key method; so explicit primary key values or Sequence objects are required to insert new rows.</p>
-
-<p>Defining a Sequence means that it will be created along with the table.create() call, and that the sequence will be explicitly used when inserting new rows for this table, for databases that support sequences. If the Table is connected to a database that doesnt support sequences, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
+ <&|doclib.myt:item, name="primitives", description="Non-engine primitives: TableClause/ColumnClause" &>
-<p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
-
-<p>More docs TODO in this area include the ColumnDefault and PassiveDefault objects which provide more options to automatic generation of column values.</p>
+ <p>TableClause and ColumnClause are "primitive" versions of the Table and Column objects which don't use engines at all; applications that just want to generate SQL strings without directly communicating with a database can use TableClause and ColumnClause objects, which are non-singleton and serve as the "lexical" base classes of Table and Column:</p>
+ <&|formatting.myt:code&>
+ tab1 = TableClause('table1',
+ ColumnClause('id'),
+ ColumnClause('name'))
+
+ tab2 = TableClause('table2',
+ ColumnClause('id'),
+ ColumnClause('email'))
+
+ tab1.select(tab1.c.name == 'foo')
+ </&>
+
+ <p>TableClause and ColumnClause are strictly lexical. This means they are fully supported within the full range of SQL statement generation, but they don't support schema concepts like creates, drops, primary keys, defaults, nullable status, indexes, or foreign keys.</p>
+ </&>
</&>
+
</&>
diff --git a/doc/build/content/pooling.myt b/doc/build/content/pooling.myt
index 0bec28083..b9f5ecb14 100644
--- a/doc/build/content/pooling.myt
+++ b/doc/build/content/pooling.myt
@@ -27,10 +27,40 @@
</p>
<ul>
<li>echo=False : if set to True, connections being pulled and retrieved from/to the pool will be logged to the standard output, as well as pool sizing information.</li>
- <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.</li>
- <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm.</li>
+ <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using. Note that once the connection is returned to the pool, it then may be used by another thread. To guarantee a single unique connection per thread that <b>never</b> changes, use the option <span class="codeline">poolclass=SingletonThreadPool</span>, in which case the use_threadlocal parameter is automatically set to False.</li>
+ <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm. Also included is the SingletonThreadPool, which provides a single distinct connection per thread and is required with SQLite.</li>
<li>pool_size=5 : used by QueuePool - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.</li>
- <li>max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
+ <li>max_overflow=10 : used by QueuePool - the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
</ul>
</&>
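The pool_size/max_overflow arithmetic described above can be sketched as follows; this is a rough, hypothetical model of the checkout accounting, not the real QueuePool:

```python
# Sketch of pool_size/max_overflow accounting: up to pool_size
# connections sleep in the pool, up to max_overflow extras may exist
# while checked out, and overflow connections are discarded on return.
# SketchPool is hypothetical, not SQLAlchemy's QueuePool.
import queue

class SketchPool:
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self._pool = queue.Queue(pool_size)   # at most pool_size sleeping
        self._creator = creator
        self._max = pool_size + max_overflow  # hard concurrency ceiling
        self.checked_out = 0

    def connect(self):
        if self.checked_out >= self._max:
            raise RuntimeError("pool limit reached")
        self.checked_out += 1
        try:
            return self._pool.get_nowait()    # reuse a sleeping connection
        except queue.Empty:
            return self._creator()            # or create a new one

    def return_conn(self, conn):
        self.checked_out -= 1
        try:
            self._pool.put_nowait(conn)       # keep up to pool_size
        except queue.Full:
            pass                              # overflow: discard
```

With pool_size=1 and max_overflow=1, two connections can be out at once, a third checkout fails, and only one connection survives being returned.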
+
+ <&|doclib.myt:item, name="custom", description="Custom Pool Construction" &>
+ <p>One level below using a DBProxy to make transparent pools is creating the pool yourself. The pool module comes with two implementations of connection pools: <span class="codeline">QueuePool</span> and <span class="codeline">SingletonThreadPool</span>. While QueuePool uses Queue.Queue to provide connections, SingletonThreadPool provides a single per-thread connection which SQLite requires.</p>
+
+ <p>Constructing your own pool involves passing a callable used to create a connection. Through this method, custom connection schemes can be made, such as connections that automatically execute initialization commands when first created. The options from the previous section can be used as they apply to QueuePool or SingletonThreadPool.</p>
+ <&|formatting.myt:code, title="Plain QueuePool"&>
+ import sqlalchemy.pool as pool
+ import psycopg2
+
+ def getconn():
+     c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
+     # execute an initialization function on the connection before returning
+     c.cursor().execute("select setup_encodings()")
+     return c
+
+ p = pool.QueuePool(getconn, max_overflow=10, pool_size=5, use_threadlocal=True)
+ </&>
+
+ <&|formatting.myt:code, title="SingletonThreadPool"&>
+ import sqlalchemy.pool as pool
+ import sqlite
+
+ def getconn():
+ return sqlite.connect(filename='myfile.db')
+
+ # SQLite connections require the SingletonThreadPool
+ p = pool.SingletonThreadPool(getconn)
+ </&>
+
+ </&>
 </&>
\ No newline at end of file