Diffstat (limited to 'lib/sqlalchemy/dialects/mssql')
 lib/sqlalchemy/dialects/mssql/base.py               | 102
 lib/sqlalchemy/dialects/mssql/information_schema.py |  54
 lib/sqlalchemy/dialects/mssql/mxodbc.py             |   2
 lib/sqlalchemy/dialects/mssql/pymssql.py            |   3
 lib/sqlalchemy/dialects/mssql/pyodbc.py             |  22
 lib/sqlalchemy/dialects/mssql/zxjdbc.py             |   3
 6 files changed, 104 insertions(+), 82 deletions(-)
diff --git a/lib/sqlalchemy/dialects/mssql/base.py b/lib/sqlalchemy/dialects/mssql/base.py
index 9ba427458..f3628d62a 100644
--- a/lib/sqlalchemy/dialects/mssql/base.py
+++ b/lib/sqlalchemy/dialects/mssql/base.py
@@ -13,9 +13,9 @@
Auto Increment Behavior
-----------------------
-SQL Server provides so-called "auto incrementing" behavior using the ``IDENTITY``
-construct, which can be placed on an integer primary key. SQLAlchemy
-considers ``IDENTITY`` within its default "autoincrement" behavior,
+SQL Server provides so-called "auto incrementing" behavior using the
+``IDENTITY`` construct, which can be placed on an integer primary key.
+SQLAlchemy considers ``IDENTITY`` within its default "autoincrement" behavior,
described at :paramref:`.Column.autoincrement`; this means
that by default, the first integer primary key column in a :class:`.Table`
will be considered to be the identity column and will generate DDL as such::
@@ -52,24 +52,25 @@ specify ``autoincrement=False`` on all integer primary key columns::
An INSERT statement which refers to an explicit value for such
a column is prohibited by SQL Server, however SQLAlchemy will detect this
and modify the ``IDENTITY_INSERT`` flag accordingly at statement execution
- time. As this is not a high performing process, care should be taken to set
- the ``autoincrement`` flag appropriately for columns that will not actually
- require IDENTITY behavior.
+ time. As this is not a high performing process, care should be taken to
+ set the ``autoincrement`` flag appropriately for columns that will not
+ actually require IDENTITY behavior.
Controlling "Start" and "Increment"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific control over the parameters of the ``IDENTITY`` value is supported
-using the :class:`.schema.Sequence` object. While this object normally represents
-an explicit "sequence" for supporting backends, on SQL Server it is re-purposed
-to specify behavior regarding the identity column, including support
-of the "start" and "increment" values::
+using the :class:`.schema.Sequence` object. While this object normally
+represents an explicit "sequence" for supporting backends, on SQL Server it is
+re-purposed to specify behavior regarding the identity column, including
+support of the "start" and "increment" values::
from sqlalchemy import Table, Integer, Sequence, Column
Table('test', metadata,
Column('id', Integer,
- Sequence('blah', start=100, increment=10), primary_key=True),
+ Sequence('blah', start=100, increment=10),
+ primary_key=True),
Column('name', String(20))
).create(some_engine)
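As a rough sketch of what the above would be expected to render under the
stock MSSQL DDL compiler (the exact type and NULL rendering may vary by
version), the ``start``/``increment`` values translate into an
``IDENTITY(start, increment)`` clause::

    CREATE TABLE test (
        id INTEGER NOT NULL IDENTITY(100,10),
        name VARCHAR(20) NULL,
        PRIMARY KEY (id)
    )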
@@ -88,10 +89,10 @@ optional and will default to 1,1.
INSERT behavior
^^^^^^^^^^^^^^^^
-Handling of the ``IDENTITY`` column at INSERT time involves two key techniques.
-The most common is being able to fetch the "last inserted value" for a given
-``IDENTITY`` column, a process which SQLAlchemy performs implicitly in many
-cases, most importantly within the ORM.
+Handling of the ``IDENTITY`` column at INSERT time involves two key
+techniques. The most common is being able to fetch the "last inserted value"
+for a given ``IDENTITY`` column, a process which SQLAlchemy performs
+implicitly in many cases, most importantly within the ORM.
The process for fetching this value has several variants:
@@ -106,9 +107,9 @@ The process for fetching this value has several variants:
``implicit_returning=False``, either the ``scope_identity()`` function or
the ``@@identity`` variable is used; behavior varies by backend:
- * when using PyODBC, the phrase ``; select scope_identity()`` will be appended
- to the end of the INSERT statement; a second result set will be fetched
- in order to receive the value. Given a table as::
+ * when using PyODBC, the phrase ``; select scope_identity()`` will be
+ appended to the end of the INSERT statement; a second result set will be
+ fetched in order to receive the value. Given a table as::
t = Table('t', m, Column('id', Integer, primary_key=True),
Column('x', Integer),
@@ -121,17 +122,18 @@ The process for fetching this value has several variants:
INSERT INTO t (x) VALUES (?); select scope_identity()
* Other dialects such as pymssql will call upon
- ``SELECT scope_identity() AS lastrowid`` subsequent to an INSERT statement.
- If the flag ``use_scope_identity=False`` is passed to :func:`.create_engine`,
- the statement ``SELECT @@identity AS lastrowid`` is used instead.
+ ``SELECT scope_identity() AS lastrowid`` subsequent to an INSERT
+ statement. If the flag ``use_scope_identity=False`` is passed to
+ :func:`.create_engine`, the statement ``SELECT @@identity AS lastrowid``
+ is used instead.
A table that contains an ``IDENTITY`` column will prohibit an INSERT statement
that refers to the identity column explicitly. The SQLAlchemy dialect will
detect when an INSERT construct, created using a core :func:`.insert`
construct (not a plain string SQL), refers to the identity column, and
-in this case will emit ``SET IDENTITY_INSERT ON`` prior to the insert statement
-proceeding, and ``SET IDENTITY_INSERT OFF`` subsequent to the execution.
-Given this example::
+in this case will emit ``SET IDENTITY_INSERT ON`` prior to the insert
+statement proceeding, and ``SET IDENTITY_INSERT OFF`` subsequent to the
+execution. Given this example::
m = MetaData()
t = Table('t', m, Column('id', Integer, primary_key=True),
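Assuming the (truncated) example above defines ``id`` as the IDENTITY column
and ``x`` as a plain integer column, an INSERT that names ``id`` explicitly
would be expected to emit roughly the following sequence, wrapping the
statement in the flag toggles described above::

    SET IDENTITY_INSERT t ON
    INSERT INTO t (id, x) VALUES (?, ?)
    SET IDENTITY_INSERT t OFF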
@@ -250,7 +252,8 @@ To generate a clustered primary key use::
which will render the table, for example, as::
- CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL, PRIMARY KEY CLUSTERED (x, y))
+ CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
+ PRIMARY KEY CLUSTERED (x, y))
Similarly, we can generate a clustered unique constraint using::
@@ -272,7 +275,8 @@ for :class:`.Index`.
INCLUDE
^^^^^^^
-The ``mssql_include`` option renders INCLUDE(colname) for the given string names::
+The ``mssql_include`` option renders INCLUDE(colname) for the given string
+names::
Index("my_index", table.c.x, mssql_include=['y'])
@@ -763,11 +767,12 @@ class MSExecutionContext(default.DefaultExecutionContext):
not self.executemany
if self._enable_identity_insert:
- self.root_connection._cursor_execute(self.cursor,
- "SET IDENTITY_INSERT %s ON" %
- self.dialect.identifier_preparer.format_table(
- tbl),
- (), self)
+ self.root_connection._cursor_execute(
+ self.cursor,
+ "SET IDENTITY_INSERT %s ON" %
+ self.dialect.identifier_preparer.format_table(tbl),
+ (),
+ self)
def post_exec(self):
"""Disable IDENTITY_INSERT if enabled."""
@@ -775,11 +780,14 @@ class MSExecutionContext(default.DefaultExecutionContext):
conn = self.root_connection
if self._select_lastrowid:
if self.dialect.use_scope_identity:
- conn._cursor_execute(self.cursor,
- "SELECT scope_identity() AS lastrowid", (), self)
+ conn._cursor_execute(
+ self.cursor,
+ "SELECT scope_identity() AS lastrowid", (), self)
else:
conn._cursor_execute(self.cursor,
- "SELECT @@identity AS lastrowid", (), self)
+ "SELECT @@identity AS lastrowid",
+ (),
+ self)
# fetchall() ensures the cursor is consumed without closing it
row = self.cursor.fetchall()[0]
self._lastrowid = int(row[0])
@@ -963,7 +971,8 @@ class MSSQLCompiler(compiler.SQLCompiler):
(field, self.process(extract.expr, **kw))
def visit_savepoint(self, savepoint_stmt):
- return "SAVE TRANSACTION %s" % self.preparer.format_savepoint(savepoint_stmt)
+ return "SAVE TRANSACTION %s" % \
+ self.preparer.format_savepoint(savepoint_stmt)
def visit_rollback_to_savepoint(self, savepoint_stmt):
return ("ROLLBACK TRANSACTION %s"
@@ -971,7 +980,8 @@ class MSSQLCompiler(compiler.SQLCompiler):
def visit_column(self, column, add_to_result_map=None, **kwargs):
if column.table is not None and \
- (not self.isupdate and not self.isdelete) or self.is_subquery():
+ (not self.isupdate and not self.isdelete) or \
+ self.is_subquery():
# translate for schema-qualified table aliases
t = self._schema_aliased_table(column.table)
if t is not None:
@@ -1169,7 +1179,8 @@ class MSDDLCompiler(compiler.DDLCompiler):
preparer.format_table(index.table),
', '.join(
self.sql_compiler.process(expr,
- include_table=False, literal_binds=True) for
+ include_table=False,
+ literal_binds=True) for
expr in index.expressions)
)
@@ -1177,7 +1188,9 @@ class MSDDLCompiler(compiler.DDLCompiler):
if index.dialect_options['mssql']['include']:
inclusions = [index.table.c[col]
if isinstance(col, util.string_types) else col
- for col in index.dialect_options['mssql']['include']]
+ for col in
+ index.dialect_options['mssql']['include']
+ ]
text += " INCLUDE (%s)" \
% ', '.join([preparer.quote(c.name)
@@ -1482,7 +1495,8 @@ class MSDialect(default.DefaultDialect):
@reflection.cache
@_db_plus_owner
- def get_view_definition(self, connection, viewname, dbname, owner, schema, **kw):
+ def get_view_definition(self, connection, viewname,
+ dbname, owner, schema, **kw):
rp = connection.execute(
sql.text(
"select definition from sys.sql_modules as mod, "
@@ -1607,13 +1621,16 @@ class MSDialect(default.DefaultDialect):
@reflection.cache
@_db_plus_owner
- def get_pk_constraint(self, connection, tablename, dbname, owner, schema, **kw):
+ def get_pk_constraint(self, connection, tablename,
+ dbname, owner, schema, **kw):
pkeys = []
TC = ischema.constraints
C = ischema.key_constraints.alias('C')
# Primary key constraints
- s = sql.select([C.c.column_name, TC.c.constraint_type, C.c.constraint_name],
+ s = sql.select([C.c.column_name,
+ TC.c.constraint_type,
+ C.c.constraint_name],
sql.and_(TC.c.constraint_name == C.c.constraint_name,
TC.c.table_schema == C.c.table_schema,
C.c.table_name == tablename,
@@ -1630,7 +1647,8 @@ class MSDialect(default.DefaultDialect):
@reflection.cache
@_db_plus_owner
- def get_foreign_keys(self, connection, tablename, dbname, owner, schema, **kw):
+ def get_foreign_keys(self, connection, tablename,
+ dbname, owner, schema, **kw):
RR = ischema.ref_constraints
C = ischema.key_constraints.alias('C')
R = ischema.key_constraints.alias('R')
diff --git a/lib/sqlalchemy/dialects/mssql/information_schema.py b/lib/sqlalchemy/dialects/mssql/information_schema.py
index 19d59387d..371a1edcc 100644
--- a/lib/sqlalchemy/dialects/mssql/information_schema.py
+++ b/lib/sqlalchemy/dialects/mssql/information_schema.py
@@ -54,7 +54,8 @@ tables = Table("TABLES", ischema,
Column("TABLE_SCHEMA", CoerceUnicode, key="table_schema"),
Column("TABLE_NAME", CoerceUnicode, key="table_name"),
Column(
- "TABLE_TYPE", String(convert_unicode=True), key="table_type"),
+ "TABLE_TYPE", String(convert_unicode=True),
+ key="table_type"),
schema="INFORMATION_SCHEMA")
columns = Table("COLUMNS", ischema,
@@ -75,42 +76,43 @@ columns = Table("COLUMNS", ischema,
constraints = Table("TABLE_CONSTRAINTS", ischema,
Column("TABLE_SCHEMA", CoerceUnicode, key="table_schema"),
Column("TABLE_NAME", CoerceUnicode, key="table_name"),
- Column(
- "CONSTRAINT_NAME", CoerceUnicode, key="constraint_name"),
+ Column("CONSTRAINT_NAME", CoerceUnicode,
+ key="constraint_name"),
Column("CONSTRAINT_TYPE", String(
convert_unicode=True), key="constraint_type"),
schema="INFORMATION_SCHEMA")
column_constraints = Table("CONSTRAINT_COLUMN_USAGE", ischema,
- Column(
- "TABLE_SCHEMA", CoerceUnicode, key="table_schema"),
- Column(
- "TABLE_NAME", CoerceUnicode, key="table_name"),
- Column(
- "COLUMN_NAME", CoerceUnicode, key="column_name"),
- Column(
- "CONSTRAINT_NAME", CoerceUnicode, key="constraint_name"),
+ Column("TABLE_SCHEMA", CoerceUnicode,
+ key="table_schema"),
+ Column("TABLE_NAME", CoerceUnicode,
+ key="table_name"),
+ Column("COLUMN_NAME", CoerceUnicode,
+ key="column_name"),
+ Column("CONSTRAINT_NAME", CoerceUnicode,
+ key="constraint_name"),
schema="INFORMATION_SCHEMA")
key_constraints = Table("KEY_COLUMN_USAGE", ischema,
- Column(
- "TABLE_SCHEMA", CoerceUnicode, key="table_schema"),
- Column("TABLE_NAME", CoerceUnicode, key="table_name"),
- Column(
- "COLUMN_NAME", CoerceUnicode, key="column_name"),
- Column(
- "CONSTRAINT_NAME", CoerceUnicode, key="constraint_name"),
- Column(
- "ORDINAL_POSITION", Integer, key="ordinal_position"),
+ Column("TABLE_SCHEMA", CoerceUnicode,
+ key="table_schema"),
+ Column("TABLE_NAME", CoerceUnicode,
+ key="table_name"),
+ Column("COLUMN_NAME", CoerceUnicode,
+ key="column_name"),
+ Column("CONSTRAINT_NAME", CoerceUnicode,
+ key="constraint_name"),
+ Column("ORDINAL_POSITION", Integer,
+ key="ordinal_position"),
schema="INFORMATION_SCHEMA")
ref_constraints = Table("REFERENTIAL_CONSTRAINTS", ischema,
- Column(
- "CONSTRAINT_CATALOG", CoerceUnicode, key="constraint_catalog"),
- Column(
- "CONSTRAINT_SCHEMA", CoerceUnicode, key="constraint_schema"),
- Column(
- "CONSTRAINT_NAME", CoerceUnicode, key="constraint_name"),
+ Column("CONSTRAINT_CATALOG", CoerceUnicode,
+ key="constraint_catalog"),
+ Column("CONSTRAINT_SCHEMA", CoerceUnicode,
+ key="constraint_schema"),
+ Column("CONSTRAINT_NAME", CoerceUnicode,
+ key="constraint_name"),
# TODO: is CATLOG misspelled ?
Column("UNIQUE_CONSTRAINT_CATLOG", CoerceUnicode,
key="unique_constraint_catalog"),
diff --git a/lib/sqlalchemy/dialects/mssql/mxodbc.py b/lib/sqlalchemy/dialects/mssql/mxodbc.py
index b6749aa2a..ffe38d8dd 100644
--- a/lib/sqlalchemy/dialects/mssql/mxodbc.py
+++ b/lib/sqlalchemy/dialects/mssql/mxodbc.py
@@ -91,7 +91,7 @@ class MSDialect_mxodbc(MxODBCConnector, MSDialect):
# this is only needed if "native ODBC" mode is used,
# which is now disabled by default.
- #statement_compiler = MSSQLStrictCompiler
+ # statement_compiler = MSSQLStrictCompiler
execution_ctx_cls = MSExecutionContext_mxodbc
diff --git a/lib/sqlalchemy/dialects/mssql/pymssql.py b/lib/sqlalchemy/dialects/mssql/pymssql.py
index 5e50b96ac..8f76336ae 100644
--- a/lib/sqlalchemy/dialects/mssql/pymssql.py
+++ b/lib/sqlalchemy/dialects/mssql/pymssql.py
@@ -9,7 +9,8 @@
.. dialect:: mssql+pymssql
:name: pymssql
:dbapi: pymssql
- :connectstring: mssql+pymssql://<username>:<password>@<freetds_name>?charset=utf8
+ :connectstring: mssql+pymssql://<username>:<password>@<freetds_name>?\
+charset=utf8
:url: http://pymssql.org/
pymssql is a Python module that provides a Python DBAPI interface around
diff --git a/lib/sqlalchemy/dialects/mssql/pyodbc.py b/lib/sqlalchemy/dialects/mssql/pyodbc.py
index 86d896f8b..2e05eada4 100644
--- a/lib/sqlalchemy/dialects/mssql/pyodbc.py
+++ b/lib/sqlalchemy/dialects/mssql/pyodbc.py
@@ -55,9 +55,9 @@ Examples of pyodbc connection string URLs:
DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass;port=123
-* ``mssql+pyodbc://user:pass@host/db?driver=MyDriver`` - connects using a connection
- string that includes a custom
- ODBC driver name. This will create the following connection string::
+* ``mssql+pyodbc://user:pass@host/db?driver=MyDriver`` - connects using a
+ connection string that includes a custom ODBC driver name. This will create
+ the following connection string::
DRIVER={MyDriver};Server=host;Database=db;UID=user;PWD=pass
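A hedged usage sketch of such a URL (``MyDriver`` is a placeholder; a real
driver name containing spaces would be URL-escaped in the query string)::

    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://user:pass@host/db?driver=MyDriver")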
@@ -85,14 +85,14 @@ Unicode Binds
-------------
The current state of PyODBC on a unix backend with FreeTDS and/or
-EasySoft is poor regarding unicode; different OS platforms and versions of UnixODBC
-versus IODBC versus FreeTDS/EasySoft versus PyODBC itself dramatically
-alter how strings are received. The PyODBC dialect attempts to use all the information
-it knows to determine whether or not a Python unicode literal can be
-passed directly to the PyODBC driver or not; while SQLAlchemy can encode
-these to bytestrings first, some users have reported that PyODBC mis-handles
-bytestrings for certain encodings and requires a Python unicode object,
-while the author has observed widespread cases where a Python unicode
+EasySoft is poor regarding unicode; different OS platforms and versions of
+UnixODBC versus IODBC versus FreeTDS/EasySoft versus PyODBC itself
+dramatically alter how strings are received. The PyODBC dialect attempts to
+use all the information it knows to determine whether or not a Python unicode
+literal can be passed directly to the PyODBC driver or not; while SQLAlchemy
+can encode these to bytestrings first, some users have reported that PyODBC
+mis-handles bytestrings for certain encodings and requires a Python unicode
+object, while the author has observed widespread cases where a Python unicode
is completely misinterpreted by PyODBC, particularly when dealing with
the information schema tables used in table reflection, and the value
must first be encoded to a bytestring.
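Where such misinterpretation is observed, the dialect's unicode bind behavior
can be pinned down explicitly rather than detected; a sketch, assuming the
``supports_unicode_binds`` argument is accepted by :func:`.create_engine` for
the pyodbc dialect in the version being patched::

    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://user:pass@some_dsn",
        supports_unicode_binds=False)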
diff --git a/lib/sqlalchemy/dialects/mssql/zxjdbc.py b/lib/sqlalchemy/dialects/mssql/zxjdbc.py
index c14ebb70a..b23a010e7 100644
--- a/lib/sqlalchemy/dialects/mssql/zxjdbc.py
+++ b/lib/sqlalchemy/dialects/mssql/zxjdbc.py
@@ -9,7 +9,8 @@
.. dialect:: mssql+zxjdbc
:name: zxJDBC for Jython
:dbapi: zxjdbc
- :connectstring: mssql+zxjdbc://user:pass@host:port/dbname[?key=value&key=value...]
+ :connectstring: mssql+zxjdbc://user:pass@host:port/dbname\
+[?key=value&key=value...]
:driverurl: http://jtds.sourceforge.net/