| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Repaired a major shortcoming which was identified in the
:ref:`engine_insertmanyvalues` performance optimization feature first
introduced in the 2.0 series. This was a continuation of the change in
2.0.9 which disabled the SQL Server version of the feature, due to a
reliance in the ORM on apparent row ordering that is not guaranteed to take
place. The fix applies new logic to all "insertmanyvalues" operations,
which takes effect when the new parameter
:paramref:`_dml.Insert.returning.sort_by_parameter_order` is passed to the
:meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults`
methods; through a combination of alternate SQL forms, direct
correspondence of client-side parameters, and in some cases downgrading to
running row-at-a-time, sorting is applied to each batch of returned rows
using correspondence to primary key or other unique values in each row
which can be correlated to the input data.
Performance impact is expected to be minimal, as nearly all common primary
key scenarios are suitable for parameter-ordered batching on all backends
other than SQLite, while "row-at-a-time" mode operates with a bare minimum
of Python overhead compared to the very heavyweight approaches used in the
1.x series. For SQLite, there is no difference in performance when
"row-at-a-time" mode is used.
It's anticipated that with an efficient "row-at-a-time" INSERT with
RETURNING batching capability, the "insertmanyvalues" feature can later be
more easily generalized to third-party backends that include RETURNING
support but do not necessarily offer easy ways to guarantee a
correspondence with parameter order.
Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
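A minimal sketch of the new parameter in use; the URL, table, and data here
are illustrative only::

    from sqlalchemy import (
        Column, Integer, MetaData, String, Table, create_engine, insert
    )

    metadata = MetaData()
    user = Table(
        "user_account",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.begin() as conn:
        result = conn.execute(
            insert(user).returning(user.c.id, sort_by_parameter_order=True),
            [{"name": f"user {i}"} for i in range(1000)],
        )
        # rows come back in the same order as the parameter dictionaries,
        # so returned primary keys line up with the input data
        ids = [row.id for row in result]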
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :class:`.Sequence` construct restores itself to the DDL behavior it
had prior to the 1.4 series, where creating a :class:`.Sequence` with
no additional arguments will emit a simple ``CREATE SEQUENCE`` instruction
**without** any additional parameters for "start value". For most backends,
this is how things worked previously in any case; **however**, for
MS SQL Server, the default start value on this database is
``-2**63``; to prevent this generally impractical default
from taking effect on SQL Server, the :paramref:`.Sequence.start` parameter
should be provided. As usage of :class:`.Sequence` is unusual
for SQL Server, which for many years has standardized on ``IDENTITY``,
it is hoped that this change has minimal impact.
Fixes: #7211
Change-Id: I1207ea10c8cb1528a1519a0fb3581d9621c27b31
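A minimal sketch of the recommended usage on SQL Server; names are
illustrative::

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()

    data = Table(
        "data",
        metadata,
        Column(
            "id",
            Integer,
            # provide an explicit start so SQL Server does not use -2**63
            Sequence("data_id_seq", start=1),
            primary_key=True,
        ),
    )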
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The feature is enabled for all built-in backends
when RETURNING is used,
except for Oracle, which doesn't need it; on
psycopg2 and mssql+pyodbc it is used for all INSERT statements,
not just those that use RETURNING.
Third-party dialects would need to opt in to the new feature
by setting use_insertmanyvalues to True.
Also adds dialect-level guards against using RETURNING
with executemany where we don't have an implementation to
suit it. Single execute w/ RETURNING still defers to the
server without us checking.
Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
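A rough sketch of how a third-party dialect might opt in; the dialect class
and name are hypothetical, and a real dialect may need further capability
flags::

    from sqlalchemy.engine.default import DefaultDialect


    class AcmeDBDialect(DefaultDialect):
        name = "acmedb"

        # opt in to the batched INSERT..VALUES..RETURNING implementation
        use_insertmanyvalues = True

        # the batched form relies on multi-VALUES INSERT support
        supports_multivalues_insert = True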
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For improved security, the :class:`_url.URL` object will now use password
obfuscation by default when ``str(url)`` is called. To stringify a URL with
its cleartext password, the :meth:`_url.URL.render_as_string` method may be
used, passing the :paramref:`_url.URL.render_as_string.hide_password`
parameter as ``False``. Thanks to our contributors for this pull request.
Fixes: #8567
Closes: #8563
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/8563
Pull-request-sha: d1f1127f753849eb70b8d6cc64badf34e1b9219b
Change-Id: If756c8073ff99ac83876d9833c8fe1d7c76211f9
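A minimal sketch of the new behavior; the URL is illustrative::

    from sqlalchemy.engine import make_url

    url = make_url("postgresql://scott:tiger@localhost/test")

    # str() now obfuscates the password
    print(str(url))

    # render the cleartext password explicitly
    print(url.render_as_string(hide_password=False))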
|
|
|
|
|
| |
Change-Id: Iee14750ba20422931bde4d61eaa570af482c7d8b
References: #8147
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rearchitected the schema reflection API to allow some dialects to make use
of high-performing batch queries to reflect the schemas of many tables at
once, using far fewer queries. The new performance features are targeted
first at the PostgreSQL and Oracle backends, and may be applied to any
dialect that makes use of SELECT queries against system catalog tables to
reflect tables. (This currently omits the MySQL and SQLite dialects, which
instead parse the "CREATE TABLE" statement; these dialects do not have a
pre-existing performance issue with reflection. MS SQL Server is still a
TODO.)
The new API is backwards compatible with the previous system, and should
require no changes to third party dialects to retain compatibility;
third party dialects can also opt into the new system by implementing
batched queries for schema reflection.
Along with this change is an updated reflection API that is fully
:pep:`484` typed, features many new methods and some changes.
Fixes: #4379
Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
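A minimal sketch of reflection that benefits from the batched queries on
backends that implement them; the URL is illustrative::

    from sqlalchemy import MetaData, create_engine, inspect

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    # reflect many tables at once; dialects with batched reflection queries
    # can fetch the whole schema in a handful of SELECT statements
    metadata = MetaData()
    metadata.reflect(engine)

    inspector = inspect(engine)
    print(inspector.get_table_names())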
|
|
|
|
|
| |
* Fixes: #7902 - Use tuple instead of raw url in string formatting
* Fix lint error
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To simplify pyproject.toml, change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. For the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
To simplify ongoing typing work, use inline mypy config
for remaining files that are "done" for now, indicating the
level of type checking they currently have.
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
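A sketch of the two comment styles involved; the particular flags in the
second line are illustrative of the idea, not a specific module's settings::

    # Style used for modules not yet typed -- mypy still reads the existing
    # annotations in the file but suppresses errors, unlike a blanket
    # "# type: ignore":
    # mypy: ignore-errors

    # Style used for modules that are "done" for now, recording the level of
    # checking they currently have:
    # mypy: allow-untyped-defs, allow-untyped-calls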
|
|
|
|
|
|
|
|
|
|
| |
__future__.annotations mode allows us to use non-string
annotations for argument and return types in most cases,
but more importantly it removes a large amount of runtime
overhead that would be spent in evaluating the annotations.
Change-Id: I2f5b6126fe0019713fc50001be3627b664019ede
References: #6810
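A minimal sketch of the pattern; the function is illustrative::

    from __future__ import annotations

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # imported for type checking only; not evaluated at runtime
        from sqlalchemy.engine import Engine


    def engine_name(engine: Engine) -> str:
        # with the __future__ import, the "Engine" annotation is stored as a
        # plain string and never evaluated when this module is imported
        return engine.name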
|
|
|
|
| |
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
|
|
|
|
|
| |
References: #4600
Change-Id: I2a62ddfe00bc562720f0eae700a497495d7a987a
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Current effort is around the stub package, and having typing in
two places makes things worse, since the types here are usually
outdated compared to the version in the stubs.
Once v2 gets under way we can start consolidating the types
here.
Fixes: #6461
Change-Id: I7132a444bd7138123074bf5bc664b4bb119a85ce
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug where the "schema_translate_map" feature failed to be taken into
account for the use case of direct execution of
:class:`_schema.DefaultGenerator` objects such as sequences, which included
the case where they were "pre-executed" in order to generate primary key
values when implicit_returning was disabled.
Fixes: #5929
Change-Id: I3fed1d0af28be5ce9c9bb572524dcc8411633f60
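A minimal sketch of the feature involved; schema and table names are
illustrative::

    from sqlalchemy import (
        Column, Integer, MetaData, String, Table, create_engine
    )

    metadata = MetaData(schema="per_user")
    accounts = Table(
        "accounts",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String(50)),
    )

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.connect() as conn:
        conn = conn.execution_options(
            schema_translate_map={"per_user": "account_one"}
        )
        with conn.begin():
            # with this fix, a pre-executed sequence / default generator
            # also observes the translated schema name
            conn.execute(accounts.insert(), {"data": "value"})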
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To allow the "connection" pytest fixture and others work
correctly in conjunction with setup/teardown that expects
to be external to the transaction, remove and prevent any usage
of "xdist" style names that are hardcoded by pytest to run
inside of fixtures, even function level ones. Instead use
pytest autouse fixtures to implement our own
r"setup|teardown_test(?:_class)?" methods so that we can ensure
function-scoped fixtures are run within them. A new more
explicit flow is set up within plugin_base and pytestplugin
such that the order of setup/teardown steps, which there are now
many, is fully documented and controllable. New granularity
has been added to the test teardown phase to distinguish
between "end of the test" when lock-holding structures on
connections should be released to allow for table drops,
vs. "end of the test plus its teardown steps" when we can
perform final cleanup on connections and run assertions
that everything is closed out.
From there we can remove most of the defensive "tear down everything"
logic inside of engines which for many years would frequently dispose
of pools over and over again, creating a broken and expensive
connection flow. A quick test shows that running test/sql/ against
a single Postgresql engine with the new approach uses 75% fewer new
connections, creating 42 new connections total, vs. 164 new
connections total with the previous system.
As part of this, the new fixtures metadata/connection/future_connection
have been integrated such that they can be combined together
effectively. The fixture_session(), provide_metadata() fixtures
have been improved, including that fixture_session() now strongly
references sessions which are explicitly torn down before
table drops occur after a test.
Major changes have been made to the
ConnectionKiller such that it now features different "scopes" for
testing engines and will limit its cleanup to those testing
engines corresponding to end of test, end of test class, or
end of test session. The system by which it tracks DBAPI
connections has been reworked, is ultimately somewhat similar to
how it worked before but is organized more clearly along
with the proxy-tracking logic. A "testing_engine" fixture
is also added that works as a pytest fixture rather than a
standalone function. The connection cleanup logic should
now be very robust, as we now can use the same global
connection pools for the whole suite without ever disposing
them, while also running a query for PostgreSQL
locks remaining after every test and assert there are no open
transactions leaking between tests at all. Additional steps
are added that also accommodate for asyncio connections not
explicitly closed, as is the case for legacy sync-style
tests as well as the async tests themselves.
As always, hundreds of tests are further refined to use the
new fixtures where problems with loose connections were identified,
largely as a result of the new PostgreSQL assertions; many more tests
have moved from legacy patterns to the newest ones.
An unfortunate discovery during the creation of this system is that
autouse fixtures (as well as if they are set up by
@pytest.mark.usefixtures) are not usable at our current scale with pytest
4.6.11 running under Python 2. It's unclear if this is due
to the older version of pytest or how it implements itself for
Python 2, as well as if the issue is CPU slowness or just large
memory use, but collecting the full span of tests takes over
a minute for a single process when any autouse fixtures are in
place and on CI the jobs just time out after ten minutes.
So at the moment this patch also reinvents a small version of
"autouse" fixtures when py2k is running, which skips generating
the real fixture and instead uses two global pytest fixtures
(which don't seem to impact performance) to invoke the
"autouse" fixtures ourselves outside of pytest.
This will limit our ability to do more with fixtures
until we can remove py2k support.
py.test is still observed to be much slower in collection in the
4.6.11 version compared to modern 6.2 versions, so add support for new
TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that
will run the suite for fewer backends under Python 2. For Python 3
pin pytest to modern 6.2 versions where performance for collection
has been improved greatly.
Includes the following improvements:
Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would
be raised rather than :class:`.exc.TimeoutError`. Also repaired the
:paramref:`_sa.create_engine.pool_timeout` parameter set to zero when using
the async engine, which previously would ignore the timeout and block
rather than timing out immediately as is the behavior with regular
:class:`.QueuePool`.
For asyncio the connection pool will now also not interact
at all with an asyncio connection whose ConnectionFairy is
being garbage collected; a warning that the connection was
not properly closed is emitted and the connection is discarded.
Within the test suite the ConnectionKiller is now maintaining
strong references to all DBAPI connections and ensuring they
are released when tests end, including those whose ConnectionFairy
proxies are GCed.
Identified cx_Oracle.stmtcachesize as a major factor in Oracle
test scalability issues; this can be reset on a per-test basis
rather than setting it to zero across the board. The addition
of this flag has resolved the long-standing Oracle "two task"
error problem.
For SQL Server, changed the temp table style used by the
"suite" tests to be the double-pound-sign, i.e. global,
variety, which is much easier to test generically. There
are already reflection tests that are more finely tuned
to both styles of temp table within the mssql test
suite. Additionally, added an extra step to the
"dropfirst" mechanism for SQL Server that will remove
all foreign key constraints first as some issues were
observed when using this flag when multiple schemas
had not been torn down.
Identified and fixed two subtle failure modes in the
engine, when commit/rollback fails in a begin()
context manager, the connection is explicitly closed,
and when "initialize()" fails on the first new connection
of a dialect, the transactional state on that connection
is still rolled back.
Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
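A generic sketch of the autouse-fixture approach described above, not the
actual plugin_base / pytestplugin code::

    import pytest


    @pytest.fixture(autouse=True)
    def _setup_teardown_test(request):
        # drive explicitly named setup/teardown methods so that
        # function-scoped fixtures are guaranteed to run within them
        instance = request.instance
        if instance is not None and hasattr(instance, "setup_test"):
            instance.setup_test()
        yield
        if instance is not None and hasattr(instance, "teardown_test"):
            instance.teardown_test()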
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Importantly, this means we can remove bound metadata from
the fixtures that are used by Alembic's test suite;
hopefully this is the last change that has to happen to allow
Alembic to be fully 1.4/2.0 compatible.
Start moving from @testing.provide_metadata to a pytest
metadata fixture. This does not seem to have any negative
effects even though TablesTest uses a "self.metadata" attribute.
Change-Id: Iae6ab95938a7e92b6d42086aec534af27b5577d3
|
|
|
|
|
|
|
| |
Still having non-reproducible failures where "user_tmp" cannot
be dropped; try isolating the table name around config.ident.
Change-Id: I17e0a9674b22d246f0d52943b850e8f6de223305
|
|
|
|
| |
Change-Id: I4940d184a4dc790782fcddfb9873af3cca844398
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This seems to be raising errors lately, which was not the case earlier;
SQL Server seems to produce name conflicts
for named constraints against temp tables in different databases.
I can reproduce the error every time here by running two tests
from ComponentReflectionTest with -n2. It's unknown why the issue
is happening now when it didn't occur for several months.
https://www.arbinada.com/en/node/1645 has some background.
Change-Id: I8854dfd88503fb855a7e12622ebe97c08915e5bb
|
|
|
|
| |
Change-Id: I4968aa3bde3c4d11d7fe84f18b4a846ba357d16a
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed regression where a connection pool event specified with a keyword,
most notably ``insert=True``, would be lost when the event was set up.
This would prevent startup events that need to fire before dialect-level
events from working correctly.
The internal mechanics of the engine connection routine have been altered
such that it's now guaranteed that a user-defined event handler for the
:meth:`_pool.PoolEvents.connect` hook, when established using
``insert=True``, is invoked **before** any dialect-specific
initialization starts up, most notably when it does things like detect the
default schema name.
Previously, this would occur in most cases but not unconditionally.
A new example is added to the schema documentation illustrating how to
establish the "default schema name" within an on-connect event
(upcoming as part of I882edd5bbe06ee5b4d0a9c148854a57b2bcd4741).
Additional changes to support setting the default schema name:
The Oracle dialect now uses
``select sys_context( 'userenv', 'current_schema' ) from dual`` to get
the default schema name, rather than ``SELECT USER FROM DUAL``, to
accommodate for changes to the session-local schema name under Oracle.
Added a read/write ``.autocommit`` attribute to the DBAPI-adaptation layer
for the asyncpg dialect. This so that when working with DBAPI-specific
schemes that need to use "autocommit" directly with the DBAPI connection,
the same ``.autocommit`` attribute which works with both psycopg2 as well
as pg8000 is available.
Fixes: #5716
Fixes: #5708
Change-Id: I7dce56b4345ffc720e25e2aaccb7e42bb29e5671
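A minimal sketch of an on-connect handler that is guaranteed to run ahead of
dialect initialization; the URL and search_path setting are illustrative::

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")


    @event.listens_for(engine, "connect", insert=True)
    def set_search_path(dbapi_connection, connection_record):
        # with insert=True this fires before the dialect's first-connect
        # initialization, e.g. before the default schema name is detected
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION search_path TO my_schema")
        cursor.close()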
|
|
|
|
|
|
|
| |
Py2k coverage tests were failing because of this API
usage that seems to pass on Py3k but not py2k.
Change-Id: Ic7ceb03c2660f411f487fce73ce5c2fa2c752031
|
|
|
|
|
|
|
| |
It's better; the majority of these changes look more readable to me.
Also found some docstrings that had formatting / quoting issues.
Change-Id: I582a45fde3a5648b2f36bab96bad56881321899b
|
|
|
|
|
| |
Fixes: #5506
Change-Id: I718474d76e3c630a1b71e07eaa20cefb104d11de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It's not really correct that URL is mutable and doesn't do
any argument checking; propose replacing it with an immutable
named tuple with rich copy-and-mutate methods.
At the moment this makes a hard change to the CreateEnginePlugin
docs that previously recommended url.query.pop(). I can't find
any plugins on GitHub other than my own that are using this
feature, so see if we can just make a hard change on this one.
Fixes: #5526
Change-Id: I28a0a471d80792fa8c28f4fa573d6352966a4a79
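A minimal sketch of the copy-and-mutate style that replaces in-place
mutation; the URL is illustrative::

    from sqlalchemy.engine import URL

    url = URL.create(
        "postgresql+psycopg2",
        username="scott",
        password="tiger",
        host="localhost",
        database="test",
    )

    # methods return new URL objects rather than mutating in place
    url = url.set(database="other_db")
    url = url.update_query_dict({"sslmode": "require"})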
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The pg8000 dialect has been revised and modernized for the most recent
version of the pg8000 driver for PostgreSQL. Changes to the dialect
include:
* All data types are now sent as text rather than binary.
* Using adapters, custom types can be plugged in to pg8000.
* Previously, named prepared statements were used for all statements.
Now unnamed prepared statements are used by default, and named
prepared statements can be used explicitly by calling the
Connection.prepare() method, which returns a PreparedStatement
object.
Pull request courtesy Tony Locke.
Notes by Mike: to get this all working it was necessary to break
up JSONIndexType into "str" and "int" subtypes; this will be
needed for any dialect that is dependent on setinputsizes().
Also includes @caselit's idea to include query params
in the dbdriver parameter.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Closes: #5451
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5451
Pull-request-sha: 639751ca9c7544801b9ede02e6cbe15a16c59c82
Change-Id: I2869bc52c330916773a41d11d12c297aecc8fcd8
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
MySQL dialect's server_version_info tuple is now all numeric. String
tokens like "MariaDB" are no longer present so that numeric comparison
works in all cases. The .is_mariadb flag on the dialect should be
consulted for whether or not mariadb was detected. Additionally removed
structures meant to support extremely old MySQL versions 3.x and 4.x;
the minimum MySQL version supported is now version 5.0.2.
In addition, as the "MariaDB" name goes away from server version,
expand upon the change in I330815ebe572b6a9818377da56621397335fa702
to support the name "mariadb" throughout the dialect and test suite
when mariadb-only mode is used. This changes the "name" field
on the MariaDB dialect to "mariadb", which then implies a change
throughout the testing requirements system as well as all the
dialect-specific DDL argument names such as "mysql_engine" is
now specified as "mariadb_engine", etc. Make use of the
recent additions to test suite URL provisioning so that we can
force MariaDB databases to have a "mariadb-only" dialect which
allows us to test this name change fully.
Update documentation to refer to MySQL / MariaDB explicitly
as well as indicating the "mariadb_" prefix used for options.
It seems likely that MySQL and MariaDB version numbers are going to
start colliding at some point so having the "mariadb" name
be available as a totally separate dialect name should give us
some options in this regard.
Currently also includes a date-related fix to a test for
the postgresql dialect that was implicitly assuming a
non-UTC timezone.
Fixes: #4189
Change-Id: I00e76d00f62971e1f067bd61915fa6cc1cf64e5e
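A minimal sketch of the "mariadb-only" naming; the URL and table are
illustrative::

    from sqlalchemy import Column, Integer, MetaData, Table, create_engine

    # the "mariadb" dialect name enforces MariaDB-only behavior
    engine = create_engine("mariadb+pymysql://scott:tiger@localhost/test")

    metadata = MetaData()
    data = Table(
        "data",
        metadata,
        Column("id", Integer, primary_key=True),
        # dialect-specific DDL arguments use the "mariadb_" prefix
        mariadb_engine="InnoDB",
    )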
|
|
|
|
|
|
|
|
|
|
|
| |
We want TOX_POSTGRESQL and similar to be the fixed variables
that are configured from the CI environment. These variables should refer
to database servers, but individual drivers like asyncpg, mysqlconnector,
etc. should come from the local tox.ini. Add a new system to generate
per-driver URLs from a simple list of hostname-based URLs delivered
from the CI environment.
Change-Id: I4267b4a70742765388c7e7c4432c1da9d9adece2
|
|
|
|
| |
Change-Id: I431e1ef41e26d490343204a75a5c097768749768
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. ensure provision.py loads dialect implementations when running
reap_dbs.py. Reapers haven't been working since
598f2f7e557073f29563d4d567f43931fc03013f .
2. add some exclusion rules to allow the sqlite_file target to work;
add to tox.
3. add reap dbs target for SQLite, repair SQLite drop_db routine
which also wasn't doing the right thing for memory databases
etc.
4. Fix logging in provision files; as the main provision logger
is the one that's enabled by reap_dbs and maybe others, have all
the provision files use the provision logger.
Fixes: #5180
Fixes: #5168
Change-Id: Ibc1b0106394d20f5bcf847f37b09d185f26ac9b5
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Removed the imports for provision.py from each dialect
and instead added a call in the central provision.py to
a new dialect level method load_provisioning(). The
provisioning registry works in the same way, so an existing
dialect that is using the provision.py system right now
by importing it as part of the package will still continue to
function. However, to avoid pulling in the testing package when
the dialect is used in a non-testing context, the new hook may be
used. Also removed a module-level dependency
of the testing framework on the orm package.
Revised an internal change to the test system added as a result of
:ticket:`5085` where a testing-related module per dialect would be loaded
unconditionally upon making use of that dialect, pulling in SQLAlchemy's
testing framework as well as the ORM into the module import space. This
would only impact initial startup time and memory to a modest extent,
however it's best that these additional modules aren't reverse-dependent on
straight Core usage.
Fixes: #5180
Change-Id: I6355601da5f6f44d85a2bbc3acb5928559942b9c
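A rough sketch of how a third-party dialect might use the new hook; the
dialect and package names are hypothetical::

    from sqlalchemy.engine.default import DefaultDialect


    class AcmeDBDialect(DefaultDialect):
        name = "acmedb"

        @classmethod
        def load_provisioning(cls):
            # imported only when the test suite requests provisioning, so
            # the testing package is not pulled in during normal use
            try:
                from acmedb_sqlalchemy import provision  # noqa: F401
            except ImportError:
                pass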
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Added pyproject.toml with black arguments
- Updated black version in precommit hook
- Reformatted the code
Fixes: #5100
Closes: #5103
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5103
Pull-request-sha: 795fd5f896be4a07a2b18e6525674b815ac17593
Change-Id: I14eedbaa51fb531cbf90fcefe6a1e07c8a565625
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes: #5085
Move dialect-specific provisioning code to dialect-level copies of provision.py.
Closes: #5092
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5092
Pull-request-sha: 25b9b7a9800549fb823576af8674e8d33ff4b2c1
Change-Id: Ie0b4a69aa472a60bdbd825e04c8595382bcc98e1
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The ``literal_processor`` for the :class:`.Unicode` and
:class:`.UnicodeText` datatypes now render an ``N`` character in front of
the literal string expression as required by SQL Server for Unicode string
values rendered in SQL expressions.
Note that this adds full unicode characters to the standard test suite,
which means we also need to bump MySQL provisioning up to utf8mb4.
Modern installs do not seem to reproduce the 1271 issue locally; if it
reproduces in CI it would be better for us to skip those ORM-centric
tests for MySQL.
Also remove unused _StringType from SQL Server dialect
Fixes: #4442
Change-Id: Id55817b3e8a2d81ddc8b7b27f85e3f1dcc1cea7e
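A minimal sketch of the rendered form; the exact output is illustrative::

    from sqlalchemy import Unicode, literal
    from sqlalchemy.dialects import mssql

    expr = literal("réveillé", Unicode(50))
    print(
        expr.compile(
            dialect=mssql.dialect(),
            compile_kwargs={"literal_binds": True},
        )
    )
    # expected to render the string with the N prefix, e.g. N'réveillé'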
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Applied on top of a pure run of black -l 79 in
I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9, this set of changes
resolves all remaining flake8 conditions for those codes
we have enabled in setup.cfg.
Included are resolutions for all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I4f72d3ba1380dd601610ff80b8fb06a2aff8b0fe
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a straight reformat run using black as is, with no edits
applied at all.
The black run will format code consistently, however in
some cases that are prevalent in SQLAlchemy code it produces
too-long lines. The too-long lines will be resolved in the
following commit that will resolve all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes to the test suite, a few errant imports, and setup.py:
- mysql and postgresql have unused 'json' imports; remove
- postgresql is exporting the 'json' symbol, remove
- make sure setup.py can find __version__ using " or '
- retry logic in provision create database for postgresql fixed
- refactor test_magazine to use cls.tables rather than globals
- remove unused class in test_scoping
- add a comment to test_deprecations that this test suite itself
is deprecated
- don't use mapper() and orm_mapper() in test_unitofwork, just
use mapper()
- remove dupe test_scalar_set_None test in test_attributes
- Python 2.7 and above includes unittest.SkipTest, remove pre-2.7
fallback
- use imported SkipTest in profiling
- declarative test_reflection tests with "reflectable_autoincrement"
already don't run on oracle or firebird; remove conditional logic
for these, which also removes an "id" symbol
- clean up test in test_functions, remove print statement
- remove dupe test_literal_processor_coercion_native_int_out_of_range
in test/sql/test_types.py
- fix psycopg2_hstore ref
Change-Id: I7b3444f8546aac82be81cd1e7b6d8b2ad6834fe6
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug in MSSQL reflection where when two same-named tables in different
schemas had same-named primary key constraints, foreign key constraints
referring to one of the tables would have their columns doubled, causing
errors. Pull request courtesy Sean Dunn.
Fixes: #4228
Change-Id: I7dabaaee0944e1030048826ba39fc574b0d63031
Pull-request: https://github.com/zzzeek/sqlalchemy/pull/457
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed bug in MySQLdb dialect and variants such as PyMySQL where an
additional "unicode returns" check upon connection makes explicit use of
the "utf8" character set, which in MySQL 8.0 emits a warning that utf8mb4
should be used. This is now replaced with a utf8mb4 equivalent.
Documentation is also updated for the MySQL dialect to specify utf8mb4 in
all examples. Additional changes have been made to the test suite to use
utf8mb3 charsets and databases (there seem to be collation issues in some
edge cases with utf8mb4), and to support configuration default changes made
in MySQL 8.0 such as explicit_defaults_for_timestamp as well as new errors
raised for invalid MyISAM indexes.
Change-Id: Ib596ea7de4f69f976872a33bffa4c902d17dea25
Fixes: #4283
Fixes: #4192
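A minimal sketch of the connection style now shown in the documentation; the
URL is illustrative::

    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4"
    )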
|
|
|
|
|
|
|
|
|
| |
Drops support for cx_Oracle prior to version 5.x, reworks
numeric and binary support.
Fixes: #4064
Change-Id: Ib9ae9aba430c15cd2a6eeb4e5e3fd8e97b5fe480
|
|
|
|
| |
Change-Id: Ida0d01ae9bcc0573b86e24fddea620a38c962822
|
|
|
|
|
|
|
| |
PG CREATE DATABASE. As nobody will connect to it, that would
solve the contention issue here.
Change-Id: I00a4d52091876e120faff4a8a5493c53280d96f1
|
|
|
|
| |
Change-Id: Id0cf7ae2223507d413aaa22e5f8df066b7ac2b46
|
|
|
|
| |
Change-Id: Ib36949da8f079594494a482423d96e7509673481
|
|
|
|
| |
Change-Id: Ib6efc465cccd7c7661dd089856edfd4979b53517
|
|
|
|
| |
Change-Id: I9c9d101547f4484af447db924dc06afd0392a03e
|
|
|
|
| |
Change-Id: I475f744f8801bc923d738e466d208d662e707413
|
|
|
|
|
|
| |
supporting custom dburi etc.
Change-Id: Ic0ab0b3b4223e40fd335ee3313fda4dfce942100
|
|
|
|
| |
Change-Id: I1ac16bc77642f4f576195ac10443ed8e641e0d49
|
|
|
|
| |
Change-Id: I14a5dd920d14096aa4401cb8b9d71f47d3915879
|
|
|
|
|
|
|
|
|
|
| |
Improve screen output to illustrate which server version is
running for a particular database config, and additionally
allow full overriding for the backend-specific targets in
tox.ini via environment variables, so that CI can inject
multiple server urls for a particular database such as MySQL/MariaDB.
Change-Id: Ibf443bb9fb82e4563efd1bb66058fa9989aa2fda
|