Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This continues the change in
2.0.9 which disabled the SQL Server version of the feature due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is used on the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client-side parameters, and in some cases downgrading to
running row-at-a-time, sorting will be applied to each batch of returned
rows using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal, as nearly all common primary
key scenarios are suitable for parameter-ordered batching on
all backends other than SQLite, while "row-at-a-time"
mode operates with a bare minimum of Python overhead compared to the very
heavyweight approaches used in the 1.x series. For SQLite, there is no
difference in performance when "row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later
be more easily generalized to third-party backends that include RETURNING
support but not necessarily easy ways to guarantee a correspondence
with parameter order.
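A minimal sketch of the new parameter in use (table, columns and
connection URL here are hypothetical)::

    from sqlalchemy import column, create_engine, insert, table

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    user = table("user", column("id"), column("name"))

    with engine.connect() as conn:
        # rows come back in the same order as the parameter dictionaries,
        # even if the backend batches or reorders the INSERT internally
        result = conn.execute(
            insert(user).returning(user.c.id, sort_by_parameter_order=True),
            [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}],
        )
        ids = [row.id for row in result]
        conn.commit()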
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
Change-Id: I5cd7e9e9ab8a1dae2bd467a1e4299d7f26183301
Added new PostgreSQL type :class:`_postgresql.CITEXT`. Pull request
courtesy Julian David Rath.
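A short usage sketch (table and column names hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, Table
    from sqlalchemy.dialects.postgresql import CITEXT

    metadata = MetaData()
    user = Table(
        "user",
        metadata,
        Column("id", Integer, primary_key=True),
        # case-insensitive text; requires the citext extension on the server
        Column("email", CITEXT),
    )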
Fixes: #9416
Closes: #9417
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/9417
Pull-request-sha: 23a83a342ad6d820ee5749ebccda04e54c373f7d
Change-Id: I54699b9457426c20afbdc0acaa41dc57644b0536
* ORM Insert now includes "bulk" mode that will run
essentially the same process as session.bulk_insert_mappings;
interprets the given list of values as ORM attributes for
key names
* ORM UPDATE has a similar feature, without RETURNING support,
for session.bulk_update_mappings
* Added support for upserts to do RETURNING ORM objects as well
* ORM UPDATE/DELETE with a list of parameters + WHERE criteria
is not implemented; use a Core connection instead
* ORM UPDATE/DELETE defaults to "auto" synchronize_session;
use "fetch" if RETURNING is present, "evaluate" if not, as
"fetch" is much more efficient (no expired object SELECT problem)
and less error prone if RETURNING is available
UPDATE: however this is inefficient! please continue to
use "evaluate" for simple cases; "auto" can move to "fetch"
if the criteria are not evaluable
* "Evaluate" criteria will now not preemptively
unexpire and SELECT attributes that were individually
expired. Instead, if evaluation of the criteria indicates that
the necessary attrs were expired, we expire the object
completely (delete) or expire the SET attrs unconditionally
(update). This keeps the object in the same unloaded state
where it will refresh those attrs on the next pass, for
this generally unusual case. (originally #5664)
* Core change! update/delete rowcount comes from len(rows)
if RETURNING was used. SQLite, at least, did not otherwise
support this. adjusted test_rowcount accordingly
* ORM DELETE with a list of parameters at all is also not
implemented, as this would imply "bulk", and there is no
bulk_delete_mappings (there could be, but we don't have that)
* ORM insert().values() with single or multi-values translates
key names based on ORM attribute names
* ORM returning() implemented for insert, update, delete;
explicit returning clauses now interpret rows in an ORM
context, with support for qualifying loader options as well
* session.bulk_insert_mappings() assigns polymorphic identity
if not set.
* explicit RETURNING + synchronize_session='fetch' is now
supported with UPDATE and DELETE.
* expanded return_defaults() to work with DELETE also.
* added support for composite attributes to be present
in the dictionaries used by bulk_insert_mappings and
bulk_update_mappings, which is also the new ORM bulk
insert/update feature, that will expand the composite
values into their individual mapped attributes the way they'd
be on a mapped instance.
* bulk UPDATE supports "synchronize_session=evaluate", which is the
default. this does not apply to session.bulk_update_mappings,
just the new version
* both bulk UPDATE and bulk INSERT, the latter with or without
RETURNING, support *heterogeneous* parameter sets.
session.bulk_insert/update_mappings did this, so this feature
is maintained. now cursor result can be both horizontally
and vertically spliced :)
This is now a long story with a lot of options, which is in
itself a problem to document in a way that makes sense.
raising exceptions for use cases we haven't supported is
pretty important here too; the tradition of letting
unsupported things just not work is likely not a good idea
at this point, though there are still many cases that
aren't easily avoidable
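As an illustration of the bulk INSERT with RETURNING portion of the
above (assuming a mapped ``User`` class and an ``engine`` are already
set up)::

    from sqlalchemy import insert
    from sqlalchemy.orm import Session

    with Session(engine) as session:
        # heterogeneous parameter sets are allowed; rows are
        # interpreted in an ORM context and come back as User objects
        users = session.scalars(
            insert(User).returning(User),
            [
                {"name": "spongebob", "fullname": "Spongebob Squarepants"},
                {"name": "sandy"},
            ],
        )
        for user in users:
            print(user)
        session.commit()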
Fixes: #8360
Fixes: #7864
Fixes: #7865
Change-Id: Idf28379f8705e403a3c6a937f6a798a042ef2540
The feature is enabled for all built-in backends
when RETURNING is used,
except for Oracle, which doesn't need it; on
psycopg2 and mssql+pyodbc it is used for all INSERT statements,
not just those that use RETURNING.
Third-party dialects would need to opt in to the new feature
by setting use_insertmanyvalues to True.
Also adds dialect-level guards against using RETURNING
with executemany where we don't have an implementation to
suit it. A single execute with RETURNING still defers to the
server without us checking.
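A sketch of the third-party opt-in, assuming the dialect's
INSERT..RETURNING support is otherwise in place (dialect class
hypothetical)::

    from sqlalchemy.engine.default import DefaultDialect

    class MyDialect(DefaultDialect):
        # opt in to the "insertmanyvalues" batching feature
        use_insertmanyvalues = True
        # the dialect also declares that INSERT..RETURNING is available
        insert_returning = True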
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
To simplify pyproject.toml, change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. For the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case, mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
To simplify ongoing typing work, use inline mypy config
for the remaining files that are "done" for now, indicating the
level of type checking they currently have.
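For reference, the inline configuration is a comment at the top of each
module (module contents illustrative)::

    # mypy: ignore-errors

for not-yet-typed modules, vs. per-flag inline config such as::

    # mypy: disallow-untyped-defs

for modules that are "done" at a given level of strictness.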
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
Improvements to the test suite's integration with pytest such that the
"warnings" plugin, if manually enabled, will not interfere with the test
suite; third parties can enable the warnings plugin or make use
of the ``-W`` parameter and SQLAlchemy's test suite will continue to pass.
Additionally, modernized the detection of the "pytest-xdist" plugin so that
plugins can be globally disabled using PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
without breaking the test suite if xdist were still installed. Warning
filters that promote deprecation warnings to errors are now localized to
SQLAlchemy-specific warnings, or within SQLAlchemy-specific sources for
general Python deprecation warnings, so that non-SQLAlchemy deprecation
warnings emitted from pytest plugins should also not impact the test suite.
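As a sketch of the localization approach, only SQLAlchemy's own warning
categories are promoted to errors, rather than all deprecation warnings
globally (illustrative only)::

    import warnings

    from sqlalchemy import exc as sa_exc

    # SQLAlchemy deprecation warnings become errors; DeprecationWarnings
    # emitted by pytest plugins or other libraries remain warnings
    warnings.filterwarnings("error", category=sa_exc.SADeprecationWarning)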
Fixes: #7599
Change-Id: Ibcf09af25228d39ee5a943fda82d8a9302433726
The major action here is to lift and move future.Connection
and future.Engine fully into sqlalchemy.engine.base. This
removes many engine concepts, including:
* autocommit
* Connection running without a transaction; "autobegin"
is now present in all cases
* most "autorollback" is obsolete
* Core-level subtransactions (i.e. MarkerTransaction)
* "branched" connections, copies of connections
* execution_options() returns self, not a new connection
* old argument formats, distill_params(); simplifies the calling
scheme between engine methods
* before/after_execute() events (oriented towards compiled constructs)
don't emit for exec_driver_sql(). before/after_cursor_execute()
is still included for this
* old helper methods superseded by context managers: connection.transaction(),
engine.transaction(), engine.run_callable()
* ancient engine-level reflection methods has_table(), table_names()
* sqlalchemy.testing.engines.proxying_engine
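With autobegin in place everywhere, the single expected usage pattern
is (a minimal sketch)::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        # a transaction begins implicitly on first execution ("autobegin")
        conn.execute(text("create table t (x integer)"))
        conn.execute(text("insert into t values (1)"))
        conn.commit()  # explicit; no "autocommit" execution option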
References: #7257
Change-Id: Ib20ed816642d873b84221378a9ec34480e01e82c
We haven't had any real cases of the PG "can't drop tables"
condition since this error was first introduced; instead we
seem to get it for a non-critical query during pool reconnect
tests, and I have not been able to isolate what is causing it.
Therefore turn the error into a new class of warning that can
emit within the test suite without failing the test, so that
if we do get a real PG drop timeout, the warning will show
us the query in which it was stuck.
Change-Id: I1a9b3c4f7a25b7b9c1af722a721fc44ad5575b0f
Continuing with producing a SQLAlchemy 1.4.0b2 that internally
does not emit any of its own 2.0 deprecation warnings,
migrate the *args and **kwargs passed to execute() methods,
which now must be a single list or dictionary.
Alembic 1.5 is again waiting on this internal consistency to
be present so that it can pass all tests with no 2.0
deprecation warnings.
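The calling-convention change, sketched::

    from sqlalchemy import text

    # legacy forms, now migrated away from internally:
    #   conn.execute(stmt, a=1)
    #   conn.execute(stmt, 1, 2, 3)
    # consistent form: a single dictionary, or a list of
    # dictionaries for executemany
    conn.execute(text("select * from t where a = :a"), {"a": 1})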
Change-Id: If6b792e57c8c5dff205419644ab68e631575a2fa
To allow the "connection" pytest fixture and others work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
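A simplified sketch of the autouse-fixture approach (names illustrative;
the real flow lives in plugin_base and pytestplugin)::

    import pytest

    class TestBase:
        @pytest.fixture(autouse=True)
        def _test_lifecycle(self):
            # our own setup/teardown_test, run so that function-scoped
            # fixtures can execute within them
            self.setup_test()
            yield
            self.teardown_test()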
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single PostgreSQL engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked, is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we can now use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and asserting there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions;
many more tests have moved from legacy patterns to the newest ones.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as fixtures set up with
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, or whether the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
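A sketch of the now-working zero-timeout configuration (URL
hypothetical)::

    from sqlalchemy.ext.asyncio import create_async_engine

    engine = create_async_engine(
        "postgresql+asyncpg://scott:tiger@localhost/test",
        pool_size=1,
        # now raises exc.TimeoutError immediately when the pool is empty,
        # matching regular QueuePool behavior
        pool_timeout=0,
    )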
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues; this can be reset on a per-test basis
rather than being set to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine: when commit/rollback fails in a begin()
context manager, the connection is now explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
Importantly, this means we can remove bound metadata from
the fixtures that are used by Alembic's test suite;
hopefully this is the last change that has to happen to allow
Alembic to be fully 1.4/2.0.
Start moving from @testing.provide_metadata to a pytest
metadata fixture. This does not seem to have any negative
effects even though TablesTest uses a "self.metadata" attribute.
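A sketch of what such a metadata fixture can look like (teardown
simplified; the real fixture coordinates with the connection fixtures)::

    import pytest

    from sqlalchemy import MetaData

    @pytest.fixture
    def metadata(connection):
        meta = MetaData()
        yield meta
        # drop any tables the test created, on the same connection fixture
        meta.drop_all(connection)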
Change-Id: Iae6ab95938a7e92b6d42086aec534af27b5577d3
Ensure no autocommit warnings occur internally or
within tests.
Also includes fixes for the SQL Server full-text tests,
which apparently have not been working at all for a long
time, as they used long-removed APIs. CI has not had
full-text search running for some years; it is now installed.
Change-Id: Id806e1856c9da9f0a9eac88cebc7a94ecc95eb96
Fixed regression where a connection pool event specified with a keyword,
most notably ``insert=True``, would be lost when the event was set up.
This would prevent startup events that need to fire before dialect-level
events from working correctly.
The internal mechanics of the engine connection routine have been altered
such that it's now guaranteed that a user-defined event handler for the
:meth:`_pool.PoolEvents.connect` handler, when established using
``insert=True``, will allow an event handler to run that is definitely
invoked **before** any dialect-specific initialization starts up, most
notably when it does things like detect default schema name.
Previously, this would occur in most cases but not unconditionally.
A new example is added to the schema documentation illustrating how to
establish the "default schema name" within an on-connect event
(upcoming as part of I882edd5bbe06ee5b4d0a9c148854a57b2bcd4741)
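A sketch of such a handler (schema name and URL illustrative)::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    @event.listens_for(engine, "connect", insert=True)
    def set_current_schema(dbapi_connection, connection_record):
        # with insert=True this runs before dialect-level first-connect
        # initialization, so default schema detection sees the change
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION search_path='my_schema'")
        cursor.close()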
Additional changes to support setting the default schema name:
The Oracle dialect now uses
``select sys_context( 'userenv', 'current_schema' ) from dual`` to get
the default schema name, rather than ``SELECT USER FROM DUAL``, to
accommodate for changes to the session-local schema name under Oracle.
Added a read/write ``.autocommit`` attribute to the DBAPI-adaptation layer
for the asyncpg dialect. This is so that when working with DBAPI-specific
schemes that need to use "autocommit" directly with the DBAPI connection,
the same ``.autocommit`` attribute that works with both psycopg2 as well
as pg8000 is available.
Fixes: #5716
Fixes: #5708
Change-Id: I7dce56b4345ffc720e25e2aaccb7e42bb29e5671
The pg8000 dialect has been revised and modernized for the most recent
version of the pg8000 driver for PostgreSQL. Changes to the dialect
include:
* All data types are now sent as text rather than binary.
* Using adapters, custom types can be plugged in to pg8000.
* Previously, named prepared statements were used for all statements.
Now unnamed prepared statements are used by default, and named
prepared statements can be used explicitly by calling the
Connection.prepare() method, which returns a PreparedStatement
object.
Pull request courtesy Tony Locke.
Notes by Mike: to get this all working it was necessary to break
up JSONIndexType into "str" and "int" subtypes; this will be
needed for any dialect that depends on setinputsizes().
Also includes @caselit's idea to include query params
in the dbdriver parameter.
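The explicit prepared-statement API mentioned above, roughly (pg8000's
native interface; connection parameters illustrative)::

    import pg8000.native

    con = pg8000.native.Connection("scott", password="tiger", database="test")
    # a named prepared statement, prepared once and run many times
    ps = con.prepare("SELECT cast(:v as int)")
    print(ps.run(v=42))
    ps.close()
    con.close()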
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Closes: #5451
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5451
Pull-request-sha: 639751ca9c7544801b9ede02e6cbe15a16c59c82
Change-Id: I2869bc52c330916773a41d11d12c297aecc8fcd8
We want TOX_POSTGRESQL and similar to be the fixed variables
that are configured from the CI environment. These variables should refer
to database servers, but individual drivers like asyncpg, mysqlconnector,
etc. should come from the local tox.ini. Adds a new system to generate
per-driver URLs from a simple list of hostname-based URLs delivered
from the CI environment.
Change-Id: I4267b4a70742765388c7e7c4432c1da9d9adece2
Execution of a literal SQL string is deprecated in
:meth:`.Connection.execute`; a warning is raised when one is used,
stating that it will be coerced to :func:`.text` in a future release.
To execute a raw SQL string, the new connection method
:meth:`.Connection.exec_driver_sql` was added, which retains the previous
behavior, passing the string to the DBAPI driver unchanged.
Usage of scalar or tuple positional parameters in :meth:`.Connection.execute`
is also deprecated.
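The two forms, side by side (a minimal sketch)::

    from sqlalchemy import text

    # explicit textual SQL, the form execute() will coerce to
    conn.execute(text("select * from t where a = :a"), {"a": 1})

    # raw string passed to the DBAPI driver unchanged, using the
    # driver's own paramstyle (pyformat shown here)
    conn.exec_driver_sql("select * from t where a = %(a)s", {"a": 1})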
Fixes: #4848
Fixes: #5178
Change-Id: I2830181054327996d594f7f0d59c157d477c3aa9
1. ensure provision.py loads dialect implementations when running
reap_dbs.py. Reapers haven't been working since
598f2f7e557073f29563d4d567f43931fc03013f .
2. add some exclusion rules to allow the sqlite_file target to work;
add to tox.
3. add reap dbs target for SQLite, repair SQLite drop_db routine
which also wasn't doing the right thing for memory databases
etc.
4. Fix logging in provision files: as the main provision logger
is the one that's enabled by reap_dbs and maybe others, have all
the provision files use the provision logger.
Fixes: #5180
Fixes: #5168
Change-Id: Ibc1b0106394d20f5bcf847f37b09d185f26ac9b5
Move dialect-specific provisioning code to dialect-level copies of provision.py.
Fixes: #5085
Closes: #5092
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5092
Pull-request-sha: 25b9b7a9800549fb823576af8674e8d33ff4b2c1
Change-Id: Ie0b4a69aa472a60bdbd825e04c8595382bcc98e1