Fixed reflection of covering indexes to report ``include_columns`` as part
of the ``dialect_options`` entry in the reflected index dictionary, thereby
enabling round trips from reflection->create to be complete. Included
columns continue to also be present under the ``include_columns`` key for
backwards compatibility.
Fixes: #7382
Change-Id: I4f16b65caed3a36d405481690a3a92432b5efd62
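A minimal sketch of the round trip this enables, assuming a reflected PostgreSQL covering index; the engine URL, table name, and the exact dialect-prefixed key shown in ``dialect_options`` are illustrative assumptions::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)

    for idx in insp.get_indexes("some_table"):
        # included columns remain available under "include_columns"
        # for backwards compatibility ...
        legacy = idx.get("include_columns")
        # ... and now also appear within "dialect_options", so the
        # reflected dictionary can drive an equivalent CREATE INDEX
        options = idx.get("dialect_options", {})
        print(idx["name"], legacy, options)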
Change-Id: I7aaeb5bc130271624335b79cf586581d6c6c34c7
References: #4600
This patch adds new warnings for all elements that
don't indicate their caching behavior, including user-defined
ClauseElement subclasses and third party dialects.
It additionally adds new documentation to discuss an apparent
performance degradation in 1.4 when caching is disabled, a
result of the significant expense incurred by ORM
lazy loaders, which in 1.3 used BakedQuery and so were actually
cached.
As a result of adding the warnings, a fair number of
lesser-used SQL expression objects were identified that did not
define caching behavior and so would have been producing
``[no key]``, including the PostgreSQL constructs ``hstore``
and ``array``. These have been amended to use ``inherit_cache``
where appropriate. "on conflict" constructs in
PostgreSQL, MySQL, and SQLite still explicitly don't generate
a cache key at this time.
The change also adds a test for all constructs via
assert_compile() to assert they will not generate cache
warnings.
Fixes: #7394
Change-Id: I85958affbb99bfad0f5efa21bc8f2a95e7e46981
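For third party or user-defined constructs, the warning is silenced by declaring caching behavior explicitly; a minimal sketch of a custom element opting in (the class itself is hypothetical)::

    from sqlalchemy.sql.expression import ColumnClause

    class MySpecialColumn(ColumnClause):
        """A user-defined ClauseElement subclass."""

        # declare that this construct adds no new state affecting SQL
        # compilation, so the superclass cache key logic can be reused
        inherit_cache = True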
| |
This is so that dialect methods that are called within init
can assume the same argument structure as when they are called
in other places; we can nail down the type of object as well.
This change seems to mostly impact the isolation level routines
in the dialects, as these are called during initialize()
as well as on established connections. these methods can now
assume a non-proxied DBAPI connection object in all cases,
as it is commonly required that attributes like ".autocommit"
are set on the object which don't work well in a proxied
situation.
Other changes:
* adds an interface for the "connectionfairy" concept
called PoolProxiedConnection.
* Removes ``Connectable`` superclass of Connection.
``Connectable`` was originally meant to provide for the
"method which accepts connection or engine" theme. As this
pattern is greatly reduced in 2.0 and Engine no longer extends
from it, the ``Connectable`` superclass doesnt serve any real
purpose.
Leading from that, to set this in I also applied pep 484 annotations
to the Dialect base, and then in the interests of seeing some
of the typing information show up in my IDE did a little bit for Engine,
Connection and others. I hope that it's feasible that we can
add annotations to specific classes and attributes ahead of when we
actually try to mass-populate the whole library. This was
the original spirit of pep-484 that we can apply annotations
gradually. I do of course want to try to do a mass-populate
although i think even in that case we will end up doing a lot
of manual work anyway (in particular for the changes here which
are distinct from what the stubs have).
Fixes: #7122
Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
References: #4600
Change-Id: I2a62ddfe00bc562720f0eae700a497495d7a987a
Generalized the :paramref:`_sa.create_engine.isolation_level` parameter to
the base dialect so that it is no longer dependent on individual dialects
to be present. This parameter sets up the "isolation level" setting to
occur for all new database connections as soon as they are created by the
connection pool, where the value then stays set without being reset on
every checkin.
The :paramref:`_sa.create_engine.isolation_level` parameter is essentially
equivalent in functionality to using the
:paramref:`_engine.Engine.execution_options.isolation_level` parameter via
:meth:`_engine.Engine.execution_options` for an engine-wide setting. The
difference is that the former setting assigns the isolation level just
once when a connection is created, while the latter sets and resets the given
level on each connection checkout.
Fixes: #6342
Change-Id: Id81d6b1c1a94371d901ada728a610696e09e9741
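A short sketch contrasting the two forms (the connection URL is a placeholder)::

    from sqlalchemy import create_engine

    # applied once per connection as the pool creates it; not reset on checkin
    eng = create_engine(
        "postgresql://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ",
    )

    # engine-wide equivalent, but the level is set and reset on each checkout
    eng_opt = eng.execution_options(isolation_level="REPEATABLE READ")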
Removed here includes:
* convert_unicode parameters
* encoding create_engine() parameter
* description encoding support
* "non-unicode fallback" modes under Python 2
* String symbols regarding Python 2 non-unicode fallbacks
* any concept of DBAPIs that don't accept unicode
statements, unicode bound parameters, or that return bytes
for strings anywhere except an explicit Binary / BLOB
type
* unicode processors in Python / C
Risk factors:
* Whether all DBAPIs do in fact return Unicode objects for
all entries in cursor.description now
* There was logic for mysql-connector trying to determine
description encoding. A quick test shows Unicode coming
back, but it's not clear if there are still edge cases where
it returns bytes. If so, these are bugs in that driver,
and at most we would only work around them in the mysql-connector
DBAPI itself (but we won't do that either).
* It seems like Oracle 8 was not expecting unicode bound parameters.
I'm assuming this was all Python 2 stuff and does not apply
for modern cx_Oracle under Python 3.
* Third party dialects relying upon built-in unicode encoding/decoding,
but it's hard to imagine any non-SQLAlchemy database driver not
dealing exclusively in Python unicode strings in Python 3.
Change-Id: I97d762ef6d4dd836487b714d57d8136d0310f28a
References: #7257
Adjusted the compiler's generation of "post compile" symbols including
those used for "expanding IN" as well as for the "schema translate map" to
not be based directly on plain bracketed strings with underscores, as this
conflicts directly with SQL Server's quoting format of also using brackets,
which produces false matches when the compiler replaces "post compile" and
"schema translate" symbols. The issue created easy to reproduce examples
both with the :meth:`.Inspector.get_schema_names` method when used in
conjunction with the
:paramref:`_engine.Connection.execution_options.schema_translate_map`
feature, as well as in the unlikely case that a symbol overlapping with the
internal name "POSTCOMPILE" would be used with a feature like "expanding
in".
Fixes: #7300
Change-Id: I6255c850b140522a4aba95085216d0bca18ce230
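For reference, a sketch of the affected execution option, which now renders correctly against SQL Server's bracketed quoting (schema names are hypothetical)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("mssql+pyodbc://scott:tiger@mydsn")

    # map a "logical" schema name to the real one at execution time
    translated = engine.execution_options(
        schema_translate_map={"per_user": "user_schema_one"}
    )

    with translated.connect() as conn:
        # bracketed SQL Server identifiers no longer false-match the
        # internal post-compile / schema-translate placeholder tokens
        print(inspect(conn).get_schema_names())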
The :meth:`_engine.Inspector.has_table` method will now consistently check
for views of the given name as well as tables. Previously this behavior was
dialect dependent, with PostgreSQL, MySQL/MariaDB and SQLite supporting it,
and Oracle and SQL Server not supporting it. Third party dialects should
also seek to ensure their :meth:`_engine.Inspector.has_table` method
searches for views as well as tables for the given name.
Fixes: #7161
Change-Id: I9e523c76741b19596c81ef577dc6f0823e44183b
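A minimal sketch of the now-consistent behavior (names are hypothetical)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("mssql+pyodbc://scott:tiger@mydsn")
    insp = inspect(engine)

    print(insp.has_table("user_account"))       # True for a plain table
    print(insp.has_table("user_account_view"))  # now also True for a view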
Add Mike's example to docs
Change-Id: I96a79084cccca5c792bee697338422f3de0884fb
Also implement reflection of ON DELETE and ON UPDATE,
as the data is right there.
Fixes: #7160
Change-Id: Ifff871a8cb1d1bea235616042e16ed3b5c5f19f9
Fixed issue with :meth:`.Inspector.has_table` where it would return False
if a local temp table with the same name from a different session happened
to be returned first when querying tempdb. This is a continuation of
:ticket:`6910` which accounted for the temp table existing only in the
alternate session and not the current one.
Fixes: #7168
Change-Id: I19dbb71a63184c6d41822b0e882b7b284ac83786
Fixed bug in SQL Server ``DATETIMEOFFSET`` where the ODBC implementation
would not generate the correct DDL for cases where the type was converted
using the ``dialect.type_descriptor()`` method, the usage of which is
illustrated in some documented examples for :class:`.TypeDecorator`, though
not necessary for most datatypes. The regression was introduced by
:ticket:`6366`. As part of this change, the full list of SQL Server date
types has been amended to return a "dialect impl" that generates the same
DDL name as the supertype.
Fixes: #7129
Change-Id: I7d9bea54c0c38e16d1a6ad978cca996006a1b624
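The usage pattern referred to, roughly as illustrated in the :class:`.TypeDecorator` documentation; the class below is a hypothetical sketch, not part of the change itself::

    from sqlalchemy import types
    from sqlalchemy.dialects.mssql import DATETIMEOFFSET

    class TZDateTime(types.TypeDecorator):
        impl = types.DateTime
        cache_ok = True

        def load_dialect_impl(self, dialect):
            # dialect.type_descriptor() returns the dialect-level impl; with
            # this fix the SQL Server date types keep their proper DDL name
            if dialect.name == "mssql":
                return dialect.type_descriptor(DATETIMEOFFSET())
            return dialect.type_descriptor(types.DateTime())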
* fix: lib/sqlalchemy/sql/lambdas.py
* fix: lib/sqlalchemy/sql/compiler.py
* fix: lib/sqlalchemy/sql/selectable.py
* fix: lib/sqlalchemy/orm/relationships.py
* fix: lib/sqlalchemy/dialects/mssql/base.py
* fix: lib/sql/test_compiler.py
* fix: lib/sqlalchemy/testing/requirements.py
* fix: lib/sqlalchemy/orm/path_registry.py
* fix: lib/sqlalchemy/dialects/postgresql/psycopg2.py
* fix: lib/sqlalchemy/cextension/immutabledict.c
* fix: lib/sqlalchemy/cextension/resultproxy.c
* fix: ./lib/sqlalchemy/dialects/oracle/cx_oracle.py
* fix: examples/versioned_rows/versioned_rows_w_versionid.py
* fix: examples/elementtree/optimized_al.py
* fix: test/orm/test_attribute.py
* fix: test/sql/test_compare.py
* fix: test/sql/test_type_expression.py
* fix: capitalization in test/dialect/mysql/test_compiler.py
* fix: typos in test/dialect/postgresql/test_reflection.py
* fix: typo in tox.ini comment
* fix: typo in /lib/sqlalchemy/orm/decl_api.py
* fix: typo in test/orm/test_update_delete.py
* fix: self-induced typo
* fix: typo in test/orm/test_query.py
* fix: typos in test/dialect/mssql/test_types.py
* fix: typo in test/sql/test_types.py
Fixes: #6910
Change-Id: I9986566e1195d42ad7e9a01f0f84ef2074576257
To service #6718 and #6710, the system by which columns are
given labels in a SELECT statement, as well as the system that
gives them keys in a .c or .selected_columns collection, have
been refactored to provide a single source of truth for
both, in contrast to the previous approach that included
similar logic repeated in slightly different ways.
Main ideas:
1. ColumnElement attributes ._label, ._anon_label, ._key_label
are renamed to include the letters "tq", meaning
"table-qualified" - these labels are only used when rendering
a SELECT that has LABEL_STYLE_TABLENAME_PLUS_COL for its
label style; as this label style is primarily legacy, the
"tq" names should be isolated so that in a 2.0 style application
these aren't being used at all.
2. The means by which the "labels" and "proxy keys" for the elements
of a SELECT are generated has been centralized to a single source of truth;
previously, _generate_columns_plus_names,
_generate_fromclause_column_proxies, and _column_naming_convention
all had duplicated rules between them, and there was also
a little bit of labeling logic in compiler._label_select_column;
by this we mean that the various "anon_label" / "anon_key"
methods on ColumnElement were called by all four of these methods,
and there were many cases where it was necessary that one method
come up with the same answer as another of the methods. This
has all been centralized into _generate_columns_plus_names
for all the names except the "proxy key", which is generated
by _column_naming_convention.
3. compiler._label_select_column has been rewritten to make
neither naming decisions nor "proxy key" decisions, only whether
to label or not to label; the _generate_columns_plus_names method
gives it the information, while the proxy keys come from
_column_naming_convention; previously, these proxy keys were matched
based on restatement of similar (but not really the same) logic in
two places. The heuristics of "whether to label or not to label"
are also reorganized to be much easier to read and understand.
4. A new method compiler._label_returning_column is added for dialects
to use in their "generate returning columns" methods. A
github search reveals a small number of third party dialects also
doing this using the prior _label_select_column method, so we
try to make sure _label_select_column continues to work the
exact same way for that specific use case; for the "SELECT" use
case it now needs
5. After some attempts to do it different ways, for the case where
_proxy_key is giving us some kind of anon label, we are hard
changing it to "_no_label" right now, as there's not currently
a way to fully match anonymized labels from stmt.c or
stmt.selected_columns to what will be in the result map. The
idea of "_no_label" is to encourage the user to use label('name')
for columns they want to be able to target by string name that
don't have a natural name.
Change-Id: I7a92a66f3a7e459ccf32587ac0a3c306650daf11
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Fixed bug where the "schema_translate_map" feature would fail to function
correctly in conjunction with an INSERT into a table that has an IDENTITY
column, where the value of the IDENTITY column was specified in the values
of the INSERT, thus triggering SQLAlchemy's feature of setting IDENTITY
INSERT to "on"; it was within this directive that the schema translate map
failed to be honored.
Fixes: #6658
Change-Id: I8235aa639dd465d038a2ad48e7a669f3e5c5c37c
Change-Id: I488c9557eda390e4a88319affd4c8813ee274f80
Fixes: #6345
Change-Id: I2bdccc88e85c94d87519f58e474689ca7896f063
The :paramref:`_types.DateTime.timezone` parameter when set to ``True``
will now make use of the ``DATETIMEOFFSET`` column type with SQL Server
when used to emit DDL, rather than ``DATETIME`` where the flag was silently
ignored.
Fixes: #6306
Change-Id: I4def8337046e8c190d424fa4a259ab24d5f9039e
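A sketch of the resulting DDL difference (table and column names are hypothetical)::

    from sqlalchemy import Column, DateTime, Integer, MetaData, Table
    from sqlalchemy.dialects import mssql
    from sqlalchemy.schema import CreateTable

    t = Table(
        "event",
        MetaData(),
        Column("id", Integer, primary_key=True),
        # with this change the column renders as DATETIMEOFFSET, not DATETIME
        Column("created_at", DateTime(timezone=True)),
    )

    print(CreateTable(t).compile(dialect=mssql.dialect()))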
Fixed an additional regression in the same area as that of :ticket:`6184`,
where using a value of 0 for OFFSET in conjunction with LIMIT with SQL
Server would create a statement using "TOP", as was the behavior in 1.3,
however due to caching would then fail to respond accordingly to other
values of OFFSET. If the "0" wasn't first, then it would be fine. For the
fix, the "TOP" syntax is now only emitted if the OFFSET value is omitted
entirely, that is, :meth:`_sql.Select.offset` is not used. Note that this
change now requires that if the "with_ties" or "percent" modifiers are
used, the statement can't specify an OFFSET of zero; the OFFSET now needs to
be omitted entirely.
Fixes: #6265
Change-Id: If30596b8dcd9f2ce4221cd87c5407fa81f5f9a90
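Roughly, the behavior after the change, sketched with a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, Table, select
    from sqlalchemy.dialects import mssql

    t = Table("t", MetaData(), Column("id", Integer))

    # no .offset() at all -> the "TOP" syntax is emitted
    stmt = select(t).order_by(t.c.id).limit(5)
    print(stmt.compile(dialect=mssql.dialect()))

    # any .offset(), including 0 -> the OFFSET / FETCH form is used instead
    stmt = select(t).order_by(t.c.id).limit(5).offset(0)
    print(stmt.compile(dialect=mssql.dialect()))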
The :meth:`_engine.Dialect.has_table` method now raises an informative
exception if a non-Connection is passed to it, as this incorrect behavior
seems to be common. This method is not intended for external use outside
of a dialect. Please use the :meth:`.Inspector.has_table` method
or for cross-compatibility with older SQLAlchemy versions, the
:meth:`_engine.Engine.has_table` method.
Fixes: #5780
Fixes: #6062
Fixes: #6260
Change-Id: I9b2439675167019b68d682edee3dcdcfce836987
Added a new flag to the :class:`_engine.Dialect` class called
:attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
directly on a dialect class in order for SQLAlchemy's
:ref:`query cache <sql_caching>` to take effect for that dialect. The
rationale is based on discovered issues such as :ticket:`6173` revealing
that dialects which hardcode literal values from the compiled statement,
often the numerical parameters used for LIMIT / OFFSET, will not be
compatible with caching until these dialects are revised to use the
parameters present in the statement only. For third party dialects where
this flag is not applied, the SQL logging will show the message "dialect
does not support caching", indicating the dialect should seek to apply this
flag once it has verified that no per-statement literal values are being
rendered within the compilation phase.
Fixes: #6184
Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
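For a third party dialect, opting in is a single class-level attribute once the dialect has been audited; a sketch using a hypothetical dialect class::

    from sqlalchemy.engine import default

    class MyThirdPartyDialect(default.DefaultDialect):
        name = "mythirdparty"

        # assert that compiled statements contain no per-statement literal
        # values, so SQLAlchemy's statement cache may be used with this dialect
        supports_statement_cache = True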
Fixed a regression in MSSQL 2012+ that prevented the ORDER BY clause
from being rendered when ``offset=0`` is used in a subquery.
Fixed critical regression where the Oracle compiler would not maintain the
correct parameter values in the LIMIT/OFFSET for a select due to a caching
issue.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Fixes: #6163
Fixes: #6173
Change-Id: Ieb12354271d09ad935d684ee0db4fa0128837215
Change-Id: I08d150f1780a0f3a848c0edcd40013b5593d18f0
The reflection of filtered indexes broke index reflection
for MSSQL 2005, which does not support them.
Fixes #5930
Change-Id: I5d1f4fa8ba5bca31e91981076e4ee476ddfba49a
The ORM used in :term:`2.0 style` can now return ORM objects from the rows
returned by an UPDATE..RETURNING or INSERT..RETURNING statement, by
supplying the construct to :meth:`_sql.Select.from_statement` in an ORM
context.
Change-Id: I59c9754ff1cb3184580dd5194ecd2971d4e7f8e8
References: #5940
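A rough sketch of the pattern, assuming a mapped ``User`` class and an existing ``session``::

    from sqlalchemy import select, update

    stmt = (
        update(User)
        .where(User.name == "squidward")
        .values(name="spongebob")
        .returning(*User.__table__.columns)
    )

    # wrap the RETURNING statement so the ORM builds User objects from its rows
    orm_stmt = (
        select(User)
        .from_statement(stmt)
        .execution_options(populate_existing=True)
    )

    for user in session.execute(orm_stmt).scalars():
        print(user)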
Fixed bug where the "schema_translate_map" feature failed to be taken into
account for the use case of direct execution of
:class:`_schema.DefaultGenerator` objects such as sequences, which included
the case where they were "pre-executed" in order to generate primary key
values when implicit_returning was disabled.
Fixes: #5929
Change-Id: I3fed1d0af28be5ce9c9bb572524dcc8411633f60
Fixes: #5834
Change-Id: I1f207b84751e7e3425aa9e8e393787eeb9b595b7
To allow the "connection" pytest fixture and others work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single Postgresql engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked, is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we now can use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and assert there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate for asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions,
many more tests have moved from legacy patterns into the newest.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as if they are set up by
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, as well as if the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues; this can be reset on a per-test basis
rather than setting it to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine, when commit/rollback fails in a begin()
context manager, the connection is explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
Decimal accuracy and behavior has been improved when extracting floating
point and/or decimal values from JSON strings using the
:meth:`_sql.sqltypes.JSON.Comparator.as_float` method, when the numeric
value inside of the JSON string has many significant digits; previously,
MySQL backends would truncate values with many significant digits and SQL
Server backends would raise an exception due to a DECIMAL cast with
insufficient significant digits. Both backends now use a FLOAT-compatible
approach that does not hardcode significant digits for floating point
values. For precision numerics, a new method
:meth:`_sql.sqltypes.JSON.Comparator.as_numeric` has been added which
accepts arguments for precision and scale, and will return values as Python
``Decimal`` objects with no floating point conversion assuming the DBAPI
supports it (all but pysqlite).
Fixes: #5788
Change-Id: I6eb51fe172a389548dd6e3c65efec9f1f538012e
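A minimal sketch of the two accessors (table and key names are hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, Table, select
    from sqlalchemy.types import JSON

    orders = Table(
        "orders",
        MetaData(),
        Column("id", Integer, primary_key=True),
        Column("data", JSON),
    )

    # FLOAT-compatible extraction; no hardcoded significant digits
    stmt = select(orders.c.data["price"].as_float())

    # precision extraction returning Python Decimal, where the DBAPI supports it
    stmt2 = select(orders.c.data["price"].as_numeric(10, 2))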
Fixed string compilation when both mssql_include and mssql_where are used
Fixes: #5751
Closes: #5752
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5752
Pull-request-sha: aa57ad5d93cd69bf7728d864569c31c7e59b54fb
Change-Id: I1201170affd9911c252df5c9b841e538bb577085
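The combination being fixed, sketched with hypothetical names::

    from sqlalchemy import Column, Index, Integer, MetaData, String, Table

    t = Table(
        "account",
        MetaData(),
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
        Column("balance", Integer),
    )

    # a filtered covering index on SQL Server: WHERE clause plus INCLUDE columns
    idx = Index(
        "ix_account_name_positive",
        t.c.name,
        mssql_where=t.c.balance > 0,
        mssql_include=["balance"],
    )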
Dialect-specific constructs such as
:meth:`_postgresql.Insert.on_conflict_do_update` can now stringify in-place
without the need to specify an explicit dialect object. The constructs,
when called upon for ``str()``, ``print()``, etc. now have internal
direction to call upon their appropriate dialect rather than the
"default" dialect, which doesn't know how to stringify these. The approach
is also adapted to generic schema-level create/drop such as
:class:`_schema.AddConstraint`, which will adapt its stringify dialect to
one indicated by the element within it, such as the
:class:`_postgresql.ExcludeConstraint` object.
This is mostly towards being able to provide doctest-style
examples for "on conflict" constructs using print statements.
Change-Id: I4b855516fe6dee2df77744c1bb21a373d7fbab93
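The doctest-style usage this enables, sketched with a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.dialects.postgresql import insert

    t = Table(
        "users",
        MetaData(),
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    stmt = insert(t).values(id=1, name="spongebob")
    stmt = stmt.on_conflict_do_update(
        index_elements=[t.c.id], set_=dict(name=stmt.excluded.name)
    )

    # stringifies using the PostgreSQL dialect; no explicit dialect required
    print(stmt)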
The operator changes are:
* `isfalse` is now `is_false`
* `isnot_distinct_from` is now `is_not_distinct_from`
* `istrue` is now `is_true`
* `notbetween` is now `not_between`
* `notcontains` is now `not_contains`
* `notendswith` is now `not_endswith`
* `notilike` is now `not_ilike`
* `notlike` is now `not_like`
* `notmatch` is now `not_match`
* `notstartswith` is now `not_startswith`
* `nullsfirst` is now `nulls_first`
* `nullslast` is now `nulls_last`
Because these are core operators, the internal migration strategy for this
change is to support legacy terms for an extended period of time -- if not
indefinitely -- but update all documentation, tutorials, and internal usage
to the new terms. The new terms are used to define the functions, and
the legacy terms have been deprecated into aliases of the new terms.
Fixes: #5435
Change-Id: Ifbd7cb1cdda5981990243c4fc4b4ff467dc132ac
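A few of the new spellings in use; a sketch with a hypothetical column::

    from sqlalchemy import column, select

    name = column("name")

    stmt = (
        select(name)
        .where(name.not_like("%spam%"))     # formerly notlike()
        .order_by(name.asc().nulls_last())  # formerly nullslast()
    )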
Fixes: #5661
Fixes reflection of composite primary keys to maintain the correct column order in the MSSQL
and SQLite dialects.
Closes: #5662
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5662
Pull-request-sha: b568dec7070b4f3ee46a528bdf16fb237baade2a
Change-Id: I452b23cbf7f389c4a0a34cffce5c32498efe37d2
Add support for ``FETCH {FIRST | NEXT} [ count ] {ROW | ROWS}
{ONLY | WITH TIES}`` in SELECT for the supported backends,
currently PostgreSQL, Oracle and MSSQL.
Fixes: #5576
Change-Id: Ibb5871a457c0555f82b37e354e7787d15575f1f7
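A sketch of the corresponding Core API (table is hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, Table, select

    t = Table("scores", MetaData(), Column("points", Integer))

    # renders FETCH FIRST 5 ROWS WITH TIES on the supporting backends
    stmt = select(t).order_by(t.c.points.desc()).fetch(5, with_ties=True)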
Added support for reflecting "identity" columns, which are now returned
as part of the structure returned by :meth:`_reflection.Inspector.get_columns`.
When reflecting full :class:`_schema.Table` objects, identity columns will
be represented using the :class:`_schema.Identity` construct.
Fixed compilation error on Oracle for sequence and identity column
``nominvalue`` and ``nomaxvalue`` options, which require no space in them.
Improved test compatibility with Oracle 18.
As part of the support for reflecting :class:`_schema.Identity` objects,
the method :meth:`_reflection.Inspector.get_columns` no longer returns
``mssql_identity_start`` and ``mssql_identity_increment`` as part of the
``dialect_options``. Use the information in the ``identity`` key instead.
The mssql dialect will assume that at least MSSQL 2005 is used.
There is no hard exception raised if a previous version is detected,
but operations may fail for older versions.
Fixes: #5527
Fixes: #5324
Change-Id: If039fe637c46b424499e6bac54a2cbc0dc54cb57
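A sketch of reading the new reflection information (engine URL and table name are hypothetical)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("mssql+pyodbc://scott:tiger@mydsn")

    for col in inspect(engine).get_columns("account"):
        # the "identity" key replaces the old mssql_identity_start /
        # mssql_identity_increment dialect options
        if col.get("identity"):
            print(col["name"], col["identity"])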
It's better; the majority of these changes look more readable to me.
Also found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
Fixes: #5597
Fixes the issue where :meth:`_reflection.has_table` always returns
``False`` for temporary tables.
Change-Id: I03ab04c849a157ce8fd28c07ec3bf4407b0f2c94
On my machine, the owner for a temp table comes out as
dbo, and I am testing against a CI machine. I'm not sure
what happens on a CI machine, except perhaps that the fact that it
provisions new databases is changing things. In any case, since we
are searching tempdb for the name, get the schema/owner also.
Also refines the test to use a single connection and a transaction
that rolls back; this doesn't hang here, but let's see what CI does.
Change-Id: I522596ccc526cdab14c516b9a566ff666ac57dd6
Improved support for covering indexes (with INCLUDE columns). Added the
ability for PostgreSQL to render CREATE INDEX statements with an INCLUDE
clause from Core. Index reflection also reports INCLUDE columns separately
for both mssql and postgresql (11+).
Fixes: #4458
Change-Id: If0b82103fbc898cdaeaf6a6d2d421c732744acd6
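A sketch of the Core side on PostgreSQL (names are hypothetical)::

    from sqlalchemy import Column, Index, Integer, MetaData, String, Table

    t = Table(
        "account",
        MetaData(),
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
        Column("balance", Integer),
    )

    # renders CREATE INDEX ... (name) INCLUDE (balance) on PostgreSQL 11+
    idx = Index("ix_account_name", t.c.name, postgresql_include=["balance"])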
Make optional sequences render as identity in mssql
Remove unused dialect option sequence_default_column_type
Change-Id: I821eeffcb442f8d1b69186a9b798b15c3d8d6ff3