Fixed regression caused by the fix for :ticket:`9618` where floating point
values would lose precision when inserted in bulk, using either the
psycopg2 or psycopg drivers.
Implemented the :class:`_sqltypes.Double` type for SQL Server, having it
resolve to ``REAL``, or :class:`_mssql.REAL`, at DDL rendering time.
Fixed issue in Oracle dialects where ``Decimal``-returning types such as
:class:`_sqltypes.Numeric` would return floating point values, rather than
``Decimal`` objects, when these columns were used in the
:meth:`_dml.Insert.returning` clause to return INSERTed values.
Fixes: #9701
Change-Id: I8865496a6ccac6d44c19d0ca2a642b63c6172da9
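For illustration only (not part of the commit), a minimal sketch of how a :class:`_sqltypes.Double` column would be expected to compile against the SQL Server dialect; the table and column names are made up:

    from sqlalchemy import Column, Double, Integer, MetaData, Table
    from sqlalchemy.dialects import mssql
    from sqlalchemy.schema import CreateTable

    metadata = MetaData()
    measurements = Table(
        "measurements",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("value", Double()),  # expected to emit REAL in SQL Server DDL
    )
    print(CreateTable(measurements).compile(dialect=mssql.dialect()))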
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is used with the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client side parameters, and in some cases downgrading to
running row-at-a-time, sorting will be applied to each batch of returned
rows using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal, as nearly all common primary
key scenarios are suitable for parameter-ordered batching on
all backends other than SQLite, while "row-at-a-time"
mode operates with a bare minimum of Python overhead compared to the very
heavyweight approaches used in the 1.x series. For SQLite, there is no
difference in performance when "row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later
be more easily generalized to third party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
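A rough sketch, not from the commit, of the new parameter in use; the connection URL and table are placeholders:

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

    metadata = MetaData()
    users = Table(
        "user_account", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")  # placeholder DSN

    with engine.begin() as conn:
        result = conn.execute(
            insert(users).returning(users.c.id, sort_by_parameter_order=True),
            [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}],
        )
        # returned ids correspond row-for-row to the parameter dictionaries above
        print(result.all())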
Change-Id: If6f2efd7cd443593a8e7ca06109e51cfd07ed020
We will keep trying to find workarounds; however, this
patch is the "turn it off" patch.
Due to a critical bug identified in SQL Server, the SQLAlchemy
"insertmanyvalues" feature, which allows fast INSERT of many rows while also
supporting RETURNING, unfortunately needs to be disabled for SQL Server. SQL
Server is apparently unable to guarantee that the order of rows inserted
matches the order in which they are sent back by OUTPUT inserted when
table-valued rows are used with INSERT.
We are trying to see if Microsoft is able to confirm this undocumented
behavior; however, there is no known workaround, other than that it is not
safe to use table-valued expressions with OUTPUT inserted for now.
Fixes: #9603
Change-Id: I4b932fb8774390bbdf4e870a1f6cfe9a78c4b105
Changed the bulk INSERT strategy used for SQL Server "executemany" with
pyodbc when ``fast_executemany`` is set to ``True`` by using
``fast_executemany`` / ``cursor.executemany()`` for bulk INSERT that does
not include RETURNING, restoring the same behavior as was used in
SQLAlchemy 1.4 when this parameter is set. For INSERT statements that use
RETURNING, the "insertmanyvalues" strategy continues to be used, as it is
the only current strategy that supports RETURNING with bulk INSERT.
Previously, SQLAlchemy 2.0 would use "insertmanyvalues" for all INSERT
statements when ``use_insertmanyvalues`` was left at its default of
``True``, ignoring whether ``fast_executemany`` was set.
New performance details from end users have shown that ``fast_executemany``
is still much faster for very large datasets, as it uses ODBC commands that
can receive all rows in a single round trip, allowing for much larger
data sizes than the batches that can be sent by the current
"insertmanyvalues" strategy.
Fixes: #9586
Change-Id: I85955a10ba77c26cdc0c22e362a827d7aaef2852
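Illustrative only (not from the commit): the restored pyodbc path, with a placeholder DSN and a made-up table:

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@some_dsn",  # placeholder DSN
        fast_executemany=True,
    )

    metadata = MetaData()
    log = Table(
        "log", metadata,
        Column("id", Integer, primary_key=True),
        Column("msg", String(200)),
    )

    rows = [{"msg": f"line {i}"} for i in range(100_000)]
    with engine.begin() as conn:
        # no RETURNING here, so the pyodbc fast_executemany path is used
        conn.execute(insert(log), rows)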
pymssql seems to be maintained again and appears to be working
fully, so let's try re-enabling it.
Fixed issue in the new :class:`.Uuid` datatype which prevented it from
working with the pymssql driver. As pymssql seems to be maintained again,
restored testing support for pymssql.
Tweaked the pymssql dialect to take better advantage of
RETURNING for INSERT statements in order to retrieve last inserted primary
key values, in the same way as occurs for the mssql+pyodbc dialect right
now.
Identified that the ``sqlite`` and ``mssql+pyodbc`` dialects are now
compatible with the SQLAlchemy ORM's "versioned rows" feature, since
SQLAlchemy now computes rowcount for a RETURNING statement in this specific
case by counting the rows returned, rather than relying upon
``cursor.rowcount``. In particular, the ORM versioned rows use case
(documented at :ref:`mapper_version_counter`) should now be fully
supported with the SQL Server pyodbc dialect.
Change-Id: I38a0666587212327aecf8f98e86031ab25d1f14d
References: #5321
Fixes: #9414
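As a hedged example, not from this change, a minimal sketch of the ORM versioned-rows mapping referenced above, usable with mssql+pyodbc now that rowcount is derived from the RETURNING rows:

    from sqlalchemy import Integer, String
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "user_account"

        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(50))
        version_id: Mapped[int] = mapped_column(Integer, nullable=False)

        # the ORM bumps this column on UPDATE and verifies the affected row count
        __mapper_args__ = {"version_id_col": version_id}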
The newly added comment reflection and rendering capability of the MSSQL
dialect, added in :ticket:`7844`, will now be disabled by default if it
cannot be determined that comments are supported, since an unsupported
backend such as Azure Synapse may be in use; this backend does not support
table and column comments and does not support the SQL Server routines used
to generate them as well as to reflect them. A new parameter
``supports_comments`` is added to the dialect which defaults to ``None``,
indicating that comment support should be auto-detected. When set to
``True`` or ``False``, comment support is either enabled or disabled
unconditionally.
Fixes: #9142
Change-Id: Ib5cac31806185e7353e15b3d83b580652d304b3b
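A rough sketch, not from the commit, assuming the flag is accepted as a create_engine() keyword like other mssql dialect options; the DSN is a placeholder:

    from sqlalchemy import create_engine

    # force comment support off (or on with True); None leaves autodetection active
    engine = create_engine(
        "mssql+pyodbc://scott:tiger@some_dsn",
        supports_comments=False,
    )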
Fixed bug where a schema name given with brackets, but with no dots inside
the name, for parameters such as :paramref:`_schema.Table.schema`, would not
be interpreted within the context of the SQL Server dialect's documented
behavior of interpreting explicit brackets as token delimiters (first added
in 1.2 for #2626) when referring to the schema name in reflection
operations. The original assumption for #2626's behavior was that the
special interpretation of brackets was only significant if dots were
present; in practice, however, the brackets are not included as part of the
identifier name in any SQL rendering operations, since they are not valid
characters within regular or delimited identifiers. Pull request courtesy
Shan.
Fixes: #9133
Closes: #9134
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9134
Pull-request-sha: 5dac87c82cd3063dd8e50f0075c7c00330be6439
Change-Id: I7a507bc38d75a04ffcb7e920298775baae22c6d1
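For illustration only (not part of the commit), a minimal sketch of the now-equivalent bracketed form; names are made up:

    from sqlalchemy import Column, Integer, MetaData, Table

    metadata = MetaData()

    t = Table(
        "some_table",
        metadata,
        Column("id", Integer, primary_key=True),
        # with the fix, treated the same as schema="MySchema" on SQL Server
        schema="[MySchema]",
    )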
Fixed regression caused by the combination of :ticket:`8177` (re-enable
setinputsizes for SQL Server unless fast_executemany + DBAPI executemany is
used for a statement) with :ticket:`6047` (implement
"insertmanyvalues", which bypasses DBAPI executemany in favor of a custom
DBAPI execute for INSERT statements). setinputsizes would incorrectly not be
used for a multiple-parameter-set INSERT statement that used
"insertmanyvalues" if fast_executemany were turned on, as the check would
incorrectly assume this was a DBAPI executemany call. The "regression"
is that the "insertmanyvalues" statement format is apparently
slightly more sensitive to multiple rows that don't use the same types
for each row, so in such a case setinputsizes is especially needed.
The fix repairs the fast_executemany check so that it only disables
setinputsizes if true DBAPI executemany is to be used.
Fixes: #8917
Change-Id: I78895606a99848d4f92ecf38ded92dc5d6d48c6f
Command run is "pyupgrade --py37-plus --keep-runtime-typing --keep-percent-format <files...>".
pyupgrade will change assert_ to assertTrue. That was reverted since assertTrue does not
exist in SQLAlchemy fixtures.
Change-Id: Ie1ed2675c7b11d893d78e028aad0d1576baebb55
The RETURNING clause now renders columns using the same labeling routine
as that of :class:`.Select` to generate labels, which will include
disambiguating labels; additionally, a SQL function surrounding a named
column will be labeled using the column name itself. This is a more
comprehensive change than a similar one made for the 1.4 series that
adjusted the function label issue only.
Includes 1.4's changelog for the backported version, which also
fixes an Oracle issue independently of the 2.0 series.
Fixes: #8770
Change-Id: I2ab078a214a778ffe1720dbd864ae4c105a0691d
Fixed regression caused by SQL Server pyodbc change :ticket:`8177` where we
now use ``setinputsizes()`` by default; for VARCHAR, this fails if the
character size is greater than 4000 (or 2000, depending on data) characters
as the incoming datatype is NVARCHAR, which has a limit of 4000 characters,
despite the fact that VARCHAR can handle unlimited characters. Additional
pyodbc-specific typing information is now passed to ``setinputsizes()``
when the datatype's size is > 2000 characters. The change is also applied
to the :class:`.JSON` type which was also impacted by this issue for large
JSON serializations.
Fixes: #8661
Change-Id: I07fa873e95dbd2c94f3d286e93e8b3229c3a9807
The :class:`.Sequence` construct restores itself to the DDL behavior it
had prior to the 1.4 series, where creating a :class:`.Sequence` with
no additional arguments will emit a simple ``CREATE SEQUENCE`` instruction
**without** any additional parameters for "start value". For most backends,
this is how things worked previously in any case; **however**, for
MS SQL Server, the default value on this database is
``-2**63``; to prevent this generally impractical default
from taking effect on SQL Server, the :paramref:`.Sequence.start` parameter
should be provided. As usage of :class:`.Sequence` is unusual
for SQL Server which for many years has standardized on ``IDENTITY``,
it is hoped that this change has minimal impact.
Fixes: #7211
Change-Id: I1207ea10c8cb1528a1519a0fb3581d9621c27b31
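A minimal sketch, not from the commit, showing an explicit start value so SQL Server does not apply its own default:

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()

    orders = Table(
        "orders",
        metadata,
        # explicit start=1 avoids SQL Server's -2**63 default start value
        Column("order_id", Integer, Sequence("order_id_seq", start=1), primary_key=True),
    )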
The feature is enabled for all built-in backends
when RETURNING is used,
except for Oracle, which doesn't need it; on
psycopg2 and mssql+pyodbc it is used for all INSERT statements,
not just those that use RETURNING.
Third party dialects would need to opt in to the new feature
by setting use_insertmanyvalues to True.
Also adds dialect-level guards against using RETURNING
with executemany where we don't have an implementation to
suit it. A single execute w/ RETURNING still defers to the
server without us checking.
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
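A rough sketch, not from the commit, of how a third party dialect might opt in; the flag names follow the commit message, the dialect name is made up, and a real dialect needs far more than this:

    from sqlalchemy.engine.default import DefaultDialect

    class MyThirdPartyDialect(DefaultDialect):
        name = "mythirdparty"                # illustrative name only
        supports_multivalues_insert = True   # INSERT .. VALUES (..), (..) is available
        insert_returning = True              # the backend supports INSERT .. RETURNING
        use_insertmanyvalues = True          # opt in to the new batching feature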
Fixed yet another regression in SQL Server isolation level fetch (see
:ticket:`8231`, :ticket:`8475`), this time with "Microsoft Dynamics CRM
Database via Azure Active Directory", which apparently lacks the
``system_views`` view entirely. Error catching has been extended so that
under no circumstances will this method ever fail, provided database
connectivity is present.
Fixes: #8525
Change-Id: I76a429e3329926069a0367d2e77ca1124b9a059d
Fixed regression caused by the fix for :ticket:`8231` released in 1.4.40
where the connection would fail if the user does not have permission to
query the dm_exec_sessions or dm_pdw_nodes_exec_sessions system views when
trying to determine the current transaction isolation level.
Fixes: #8475
Change-Id: Ie2bcda92f2ef2d12360ddda47eb6e896313c71f2
Implemented reflection of the "clustered index" flag ``mssql_clustered``
for the SQL Server dialect. Pull request courtesy John Lennox.
Fixes: #8288
Closes: #8289
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8289
Pull-request-sha: 1bb57352e3e31d8fb7de69ab5e60e5464949f640
Change-Id: Ife367066328f9e47ad823e4098647964a18e21e8
Added support for table and column comments on MSSQL when
creating a table. Added support for reflecting table comments.
Thanks to Daniel Hall for the help in this pull request.
Fixes: #7844
Closes: #8225
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8225
Pull-request-sha: 540f4eb6395f9feed4b4240e3d22f539021948e9
Change-Id: I69f48c6dda4e00ec3d82fdeff13f3df9d735b7b0
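Illustrative only (not from the commit): the kind of comment DDL the dialect can now render and reflect; names are made up:

    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()

    account = Table(
        "account",
        metadata,
        Column("id", Integer, primary_key=True, comment="surrogate key"),
        Column("name", String(100), comment="display name"),
        comment="customer accounts",
    )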
Fixed issue where the SQL Server dialect's query for the current isolation
level would fail on Azure Synapse Analytics, due to the way in which this
database handles transaction rollbacks after an error has occurred. The
initial query has been modified to no longer rely upon catching an error
when attempting to detect the appropriate system view. Additionally, to
better support this database's very specific "rollback" behavior,
implemented the new parameter ``ignore_no_transaction_on_rollback``,
indicating that a rollback should ignore the Azure Synapse error 'No
corresponding transaction found. (111214)', which is raised if no
transaction is present, in conflict with the Python DBAPI.
Fixes: #8231
Closes: #8233
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8233
Pull-request-sha: c48bd44a9f53d00e5e94f1b8bf996711b6419562
Change-Id: I6407a03148f45cc9eba8fe1d31d4f59ebf9c7ef7
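A rough sketch, not from the commit, of the new parameter, assuming it is passed through create_engine() like other mssql dialect options; the DSN is a placeholder:

    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@synapse_dsn",
        ignore_no_transaction_on_rollback=True,
    )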
Just in my own testing: if I say insert().return_defaults()
and stringify it, I should see it, so make sure all the dialects
default to "insert_returning" etc., with a downgrade on
server version check.
Change-Id: Id64e78fcb03c48b5dcb0feb21cb9cc495edd15e9
The ``use_setinputsizes`` parameter for the ``mssql+pyodbc`` dialect now
defaults to ``True``; this is so that non-unicode string comparisons are
bound by pyodbc to pyodbc.SQL_VARCHAR rather than pyodbc.SQL_WVARCHAR,
allowing indexes against VARCHAR columns to take effect. In order for the
``fast_executemany=True`` parameter to continue functioning, the
``use_setinputsizes`` mode now skips the ``cursor.setinputsizes()`` call
specifically when ``fast_executemany`` is True and the specific method in
use is ``cursor.executemany()``, which doesn't support setinputsizes. The
change also adds appropriate pyodbc DBAPI typing to values that are typed
as :class:`_types.Unicode` or :class:`_types.UnicodeText`, as well as
altering the base :class:`_types.JSON` datatype to consider JSON string
values as :class:`_types.Unicode` rather than :class:`_types.String`.
Fixes: #8177
Change-Id: I6c8886663254ae55cf904ad256c906e8f5e11f48
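As a hedged example (not from this change), the previous behavior can still be restored explicitly; the DSN is a placeholder:

    from sqlalchemy import create_engine

    # the new default is use_setinputsizes=True; opt out if the old path is needed
    engine = create_engine(
        "mssql+pyodbc://scott:tiger@some_dsn",
        use_setinputsizes=False,
    )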
Fixed issue where :class:`.Table` objects that made use of IDENTITY columns
with a :class:`.Numeric` datatype would produce errors when attempting to
reconcile the "autoincrement" column, preventing construction of the
:class:`.Column` from using the :paramref:`.Column.autoincrement` parameter
as well as emitting errors when attempting to invoke an :class:`.Insert`
construct.
Fixes: #8111
Change-Id: Iaacc4eebfbafb42fa18f9a1a4f43cb2b6b91d28a
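Illustrative only (not from the commit): the kind of mapping that previously failed; names are made up:

    from sqlalchemy import Column, Identity, MetaData, Numeric, Table

    metadata = MetaData()

    invoice = Table(
        "invoice",
        metadata,
        # a Numeric IDENTITY column previously confused autoincrement detection
        Column("id", Numeric(18, 0), Identity(start=1), primary_key=True),
    )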
Added new backend-agnostic :class:`_types.Uuid` datatype generalized from
the PostgreSQL dialects to now be a core type, as well as migrating
:class:`_types.UUID` from the PostgreSQL dialect. Thanks to Trevor Gross
for the help on this.
also includes:
* corrects some missing behaviors in the suite literal fixtures
test where row round trips weren't being correctly asserted.
* fixes some of the ISO literal date rendering added in
952383f9ee0 for #5052 to truncate datetime strings for date/time
datatypes in the same way that drivers typically do for bound
parameters; this was not working fully and wasn't caught by the
broken test fixture
Fixes: #7212
Change-Id: I981ac6d34d278c18281c144430a528764c241b04
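A minimal sketch, not from the commit, of the new core type with a Python-side default; the table is made up:

    import uuid

    from sqlalchemy import Column, MetaData, Table, Uuid

    metadata = MetaData()

    token = Table(
        "token",
        metadata,
        Column("id", Uuid(), primary_key=True, default=uuid.uuid4),
    )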
Fix issue where a password with a leading "{" would
result in login failure.
Fixes: #8062
Change-Id: If91c2c211937b5eac89b8d525c22a19b0a94c5c4
This test is failing on CI with "##foo does not exist",
so the hypothesis is that there's some kind of race condition with
global temp table names.
Change-Id: I8c6c26a7fda70f67735ce20af67373c311e48731
Explicitly specify the collation when reflecting table columns using
MSSQL to prevent "collation conflict" errors.
Fixes: #8035
Change-Id: I4239a5ca8b041f56d7b3bba67b3357c176db31ee
All modules in sqlalchemy.engine are strictly
typed with the exception of cursor, default, and
reflection. cursor and default pass with non-strict
typing, reflection is waiting on the multi-reflection
refactor.
Behavioral changes:
* create_connect_args() methods return a tuple of list,
dict, rather than a list of list, dict
* removed allow_chars parameter from
pyodbc connector ._get_server_version_info()
method
* the parameter list passed to do_executemany is now
a list in all cases. previously, this was being run
through dialect.execute_sequence_format, which
defaults to tuple and was only intended for individual
tuple params.
* broke up dialect.dbapi into dialect.import_dbapi
class method and dialect.dbapi module object. added
a deprecation path for legacy dialects. it's not
really feasible to type a single attr as a classmethod
vs. module type. The "type_compiler" attribute also
has this problem with greater ability to work around,
left that one for now.
* lots of constants changing to be Enum, so that we can
type them. for fixed tuple-position constants in
cursor.py / compiler.py (which are used to avoid the
speed overhead of namedtuple), using Literal[value]
which seems to work well
* some tightening up in Row regarding __getitem__, which
we can do since we are on full 2.0 style result use
* altered the set_connection_execution_options and
set_engine_execution_options event flows so that the
dictionary of options may be mutated within the event
hook, where it will then take effect as the actual
options used. Previously, changing the dict would
be silently ignored which seems counter-intuitive
and not very useful.
* A lot of DefaultDialect/DefaultExecutionContext
methods and attributes, including underscored ones, move
to interfaces. This is not fully ideal as it means
the Dialect/ExecutionContext interfaces aren't publicly
subclassable directly, but their current purpose
is more one of documentation for dialect authors, who should
still be (and certainly are) subclassing the DefaultXYZ
versions in all cases
Overall, Result was the most difficult class
hierarchy to type here, as this hierarchy passes through
largely amorphous "row" datatypes throughout, which
can in fact be all kinds of different things, like
raw DBAPI rows, or Row objects, or "scalar"/Any; but
at the same time these types have meaning, so I tried to
maintain some level of semantic markings for them.
It highlights how complex Result is now, as it's trying
to be extremely efficient and inlined while also being
very open-ended and extensible.
Change-Id: I98b75c0c09eab5355fc7a33ba41dd9874274f12a
Python string values for which a SQL type is determined from the type of
the value, mainly when using :func:`_sql.literal`, will now apply the
:class:`_types.String` type, rather than the :class:`_types.Unicode`
datatype, for Python string values that test as "ascii only" using Python
``str.isascii()``. If the string is not ``isascii()``, the
:class:`_types.Unicode` datatype will be bound instead, which was used in
all string detection previously. This behavior **only applies to in-place
detection of datatypes when using ``literal()`` or other contexts that have
no existing datatype**, which is not usually the case under normal
:class:`_schema.Column` comparison operations, where the type of the
:class:`_schema.Column` being compared always takes precedence.
Use of the :class:`_types.Unicode` datatype can determine literal string
formatting on backends such as SQL Server, where a literal value (i.e.
using ``literal_binds``) will be rendered as ``N'<value>'`` instead of
``'value'``. For normal bound value handling, the :class:`_types.Unicode`
datatype also may have implications for passing values to the DBAPI, again
in the case of SQL Server, the pyodbc driver supports the use of
:ref:`setinputsizes mode <mssql_pyodbc_setinputsizes>` which will handle
:class:`_types.String` versus :class:`_types.Unicode` differently.
Fixes: #7551
Change-Id: I4f8de63e36532ae8ce4c630ee59211349ce95361
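A rough sketch, not from the commit, of the rendering difference; the output noted in comments is the expected form, not captured program output:

    from sqlalchemy import literal
    from sqlalchemy.dialects import mssql

    ascii_expr = literal("hello")    # ascii-only -> String
    unicode_expr = literal("héllo")  # non-ascii -> Unicode

    print(ascii_expr.compile(
        dialect=mssql.dialect(), compile_kwargs={"literal_binds": True}))
    # expected: 'hello'
    print(unicode_expr.compile(
        dialect=mssql.dialect(), compile_kwargs={"literal_binds": True}))
    # expected: N'héllo'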
Fixes: #7243
Change-Id: I99880f429dbaac525bdf7d44438aaab6bc8d0ca6
Black's `target-version` was still set to `['py27', 'py36']`. Set it to `[py37]` instead.
Also update Black and other pre-commit hooks and re-format with Black.
Closes: #7536
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7536
Pull-request-sha: b3aedf5570d7e0ba6c354e5989835260d0591b08
Change-Id: I8be85636fd2c9449b07a8626050c8bd35bd119d5
This patch adds new warnings for all elements that
don't indicate their caching behavior, including user-defined
ClauseElement subclasses and third party dialects.
It additionally adds new documentation to discuss an apparent
performance degradation in 1.4 when caching is disabled, as a
result of the significant expense incurred by ORM
lazy loaders, which in 1.3 used BakedQuery and so were actually
cached.
As a result of adding the warnings, a fair number of
lesser-used SQL expression objects were identified that did not
define caching behavior and so would have been producing
``[no key]``, including the PostgreSQL constructs ``hstore``
and ``array``. These have been amended to use ``inherit_cache``
where appropriate. "on conflict" constructs in
PostgreSQL, MySQL, and SQLite still explicitly don't generate
a cache key at this time.
The change also adds a test for all constructs via
assert_compile() to assert they will not generate cache
warnings.
Fixes: #7394
Change-Id: I85958affbb99bfad0f5efa21bc8f2a95e7e46981
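For illustration only (not part of the commit), a minimal sketch of declaring caching behavior on a user-defined construct so the new warning is not emitted:

    from sqlalchemy.sql.expression import ColumnClause

    class MyColumn(ColumnClause):
        # reuse the parent class's cache key generation unchanged
        inherit_cache = True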
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
References: #4600
Change-Id: I2a62ddfe00bc562720f0eae700a497495d7a987a
The :paramref:`_sa.create_engine.implicit_returning` parameter is
deprecated on the :func:`_sa.create_engine` function only; the parameter
remains available on the :class:`_schema.Table` object. This parameter was
originally intended to enable the "implicit returning" feature of
SQLAlchemy when it was first developed and was not enabled by default.
Under modern use, there's no reason this parameter should be disabled, and
it has been observed to cause confusion as it degrades performance and
makes it more difficult for the ORM to retrieve recently inserted server
defaults. The parameter remains available on :class:`_schema.Table` to
specifically suit database-level edge cases which make RETURNING
infeasible, the sole example currently being SQL Server's limitation that
INSERT RETURNING may not be used on a table that has INSERT triggers on it.
Also removed from the Oracle dialect some logic that would upgrade
an Oracle 8/8i server version to use implicit returning if the
parameter were explicitly passed; these versions of Oracle
still support RETURNING so the feature is now enabled for all
Oracle versions.
Fixes: #6962
Change-Id: Ib338e300cd7c8026c3083043f645084a8211aed8
Removed here includes:
* convert_unicode parameters
* encoding create_engine() parameter
* description encoding support
* "non-unicode fallback" modes under Python 2
* String symbols regarding Python 2 non-unicode fallbacks
* any concept of DBAPIs that don't accept unicode
statements, unicode bound parameters, or that return bytes
for strings anywhere except an explicit Binary / BLOB
type
* unicode processors in Python / C
Risk factors:
* Whether all DBAPIs do in fact return Unicode objects for
all entries in cursor.description now
* There was logic for mysql-connector trying to determine
description encoding. A quick test shows Unicode coming
back but it's not clear if there are still edge cases where
they return bytes. if so, these are bugs in that driver,
and at most we would only work around it in the mysql-connector
DBAPI itself (but we won't do that either).
* It seems like Oracle 8 was not expecting unicode bound parameters.
I'm assuming this was all Python 2 stuff and does not apply
for modern cx_Oracle under Python 3.
* third party dialects relying upon built-in unicode encoding/decoding,
but it's hard to imagine any non-SQLAlchemy database driver not
dealing exclusively in Python unicode strings in Python 3
Change-Id: I97d762ef6d4dd836487b714d57d8136d0310f28a
References: #7257
Adjusted the compiler's generation of "post compile" symbols including
those used for "expanding IN" as well as for the "schema translate map" to
not be based directly on plain bracketed strings with underscores, as this
conflicts directly with SQL Server's quoting format of also using brackets,
which produces false matches when the compiler replaces "post compile" and
"schema translate" symbols. The issue created easy to reproduce examples
both with the :meth:`.Inspector.get_schema_names` method when used in
conjunction with the
:paramref:`_engine.Connection.execution_options.schema_translate_map`
feature, as well in the unlikely case that a symbol overlapping with the
internal name "POSTCOMPILE" would be used with a feature like "expanding
in".
Fixes: #7300
Change-Id: I6255c850b140522a4aba95085216d0bca18ce230
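A rough sketch, not from the commit, of the schema translate map usage that previously collided with SQL Server's bracket quoting; the DSN and target schema are placeholders:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

    with engine.connect().execution_options(
        schema_translate_map={None: "reporting"}
    ) as conn:
        print(inspect(conn).get_schema_names())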
Fixes: #6960
Even though a default driver still exists for
each dialect, remove most usages of `dialect://`
to encourage users to explicitly specify
`dialect+driver://`
Change-Id: I0ad42167582df509138fca64996bbb53e379b1af
The major action here is to lift and move future.Connection
and future.Engine fully into sqlalchemy.engine.base. This
removes lots of engine concepts, including:
* autocommit
* Connection running without a transaction, autobegin
is now present in all cases
* most "autorollback" is obsolete
* Core-level subtransactions (i.e. MarkerTransaction)
* "branched" connections, copies of connections
* execution_options() returns self, not a new connection
* old argument formats, distill_params(), simplifies calling
scheme between engine methods
* before/after_execute() events (oriented towards compiled constructs)
don't emit for exec_driver_sql(). before/after_cursor_execute()
is still included for this
* old helper methods superseded by context managers: connection.transaction(),
engine.transaction(), engine.run_callable()
* ancient engine-level reflection methods: has_table(), table_names()
* sqlalchemy.testing.engines.proxying_engine
References: #7257
Change-Id: Ib20ed816642d873b84221378a9ec34480e01e82c
References: #4600
Change-Id: I61e35bc93fe95610ae75b31c18a3282558cd4ffe
Fixes: #7258
Change-Id: I3577f665eca04f2632b69bcb090f0a4ec9271db9
Also implement reflection of ON DELETE, ON UPDATE
as the data is right there.
Fixes: #7160
Change-Id: Ifff871a8cb1d1bea235616042e16ed3b5c5f19f9
Fixed issue with :meth:`.Inspector.has_table` where it would return False
if a local temp table with the same name from a different session happened
to be returned first when querying tempdb. This is a continuation of
:ticket:`6910` which accounted for the temp table existing only in the
alternate session and not the current one.
Fixes: #7168
Change-Id: I19dbb71a63184c6d41822b0e882b7b284ac83786
Fixed bug in SQL Server ``DATETIMEOFFSET`` where the ODBC implementation
would not generate the correct DDL for cases where the type was converted
using the ``dialect.type_descriptor()`` method, the usage of which is
illustrated in some documented examples for :class:`.TypeDecorator`, though
not necessary for most datatypes. The regression was introduced by
:ticket:`6366`. As part of this change, the full list of SQL Server date
types has been amended to return a "dialect impl" that generates the same
DDL name as the supertype.
Fixes: #7129
Change-Id: I7d9bea54c0c38e16d1a6ad978cca996006a1b624
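For illustration only (not part of the commit), a minimal sketch of the :class:`.TypeDecorator` pattern referenced above; the decorator name is made up:

    from sqlalchemy import DateTime
    from sqlalchemy.dialects.mssql import DATETIMEOFFSET
    from sqlalchemy.types import TypeDecorator

    class TZDateTime(TypeDecorator):
        impl = DateTime
        cache_ok = True

        def load_dialect_impl(self, dialect):
            # dialect.type_descriptor() is the lookup that previously
            # produced the wrong DDL name for SQL Server date types
            if dialect.name == "mssql":
                return dialect.type_descriptor(DATETIMEOFFSET())
            return dialect.type_descriptor(DateTime())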
* fix: lib/sqlalchemy/sql/lambdas.py
* fix: lib/sqlalchemy/sql/compiler.py
* fix: lib/sqlalchemy/sql/selectable.py
* fix: lib/sqlalchemy/orm/relationships.py
* fix: lib/sqlalchemy/dialects/mssql/base.py
* fix: lib/sql/test_compiler.py
* fix: lib/sqlalchemy/testing/requirements.py
* fix: lib/sqlalchemy/orm/path_registry.py
* fix: lib/sqlalchemy/dialects/postgresql/psycopg2.py
* fix: lib/sqlalchemy/cextension/immutabledict.c
* fix: lib/sqlalchemy/cextension/resultproxy.c
* fix: ./lib/sqlalchemy/dialects/oracle/cx_oracle.py
* fix: examples/versioned_rows/versioned_rows_w_versionid.py
* fix: examples/elementtree/optimized_al.py
* fix: test/orm/test_attribute.py
* fix: test/sql/test_compare.py
* fix: test/sql/test_type_expression.py
* fix: capitalization in test/dialect/mysql/test_compiler.py
* fix: typos in test/dialect/postgresql/test_reflection.py
* fix: typo in tox.ini comment
* fix: typo in /lib/sqlalchemy/orm/decl_api.py
* fix: typo in test/orm/test_update_delete.py
* fix: self-induced typo
* fix: typo in test/orm/test_query.py
* fix: typos in test/dialect/mssql/test_types.py
* fix: typo in test/sql/test_types.py