path: root/lib/sqlalchemy/dialects/sqlite/provision.py
Commit history (newest first); each entry lists the commit message, author, date, and files/lines changed.
* add deterministic imv returning ordering using sentinel columns (Mike Bayer, 2023-04-21; 1 file, -2/+6)
Repaired a major shortcoming which was identified in the :ref:`engine_insertmanyvalues` performance optimization feature first introduced in the 2.0 series. This was a continuation of the change in 2.0.9 which disabled the SQL Server version of the feature due to a reliance in the ORM on apparent row ordering that is not guaranteed to take place.

The fix applies new logic to all "insertmanyvalues" operations, which takes effect when the new parameter :paramref:`_dml.Insert.returning.sort_by_parameter_order` is set on the :meth:`_dml.Insert.returning` or :meth:`_dml.UpdateBase.return_defaults` methods; through a combination of alternate SQL forms, direct correspondence of client-side parameters, and in some cases downgrading to running row-at-a-time, sorting is applied to each batch of returned rows using correspondence to primary key or other unique values in each row which can be correlated to the input data.

Performance impact is expected to be minimal, as nearly all common primary key scenarios are suitable for parameter-ordered batching on all backends other than SQLite, while "row-at-a-time" mode operates with a bare minimum of Python overhead compared to the very heavyweight approaches used in the 1.x series. For SQLite, there is no difference in performance when "row-at-a-time" mode is used.

It's anticipated that with an efficient "row-at-a-time" INSERT with RETURNING batching capability, the "insertmanyvalues" feature can later be more easily generalized to third-party backends that include RETURNING support but not necessarily easy ways to guarantee a correspondence with parameter order.

Fixes: #9618
References: #9603
Change-Id: I1d79353f5f19638f752936ba1c35e4dc235a8b7c
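As an illustrative aside (not part of the commit), here is a minimal sketch of the new parameter in use, assuming SQLAlchemy 2.0.10+ and a SQLite build recent enough for RETURNING (3.35+); the table, columns, and data are invented for the example:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

# Hypothetical table and in-memory engine, purely for illustration.
metadata = MetaData()
user = Table(
    "user_account",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

engine = create_engine("sqlite://")  # requires SQLite 3.35+ for RETURNING
metadata.create_all(engine)

with engine.begin() as conn:
    # sort_by_parameter_order=True asks "insertmanyvalues" to return rows
    # in the same order as the input parameter dictionaries, so generated
    # primary keys can be safely correlated back to the input data.
    result = conn.execute(
        insert(user).returning(user.c.id, sort_by_parameter_order=True),
        [{"name": "spongebob"}, {"name": "sandy"}, {"name": "patrick"}],
    )
    print(result.scalars().all())  # ids in input order, e.g. [1, 2, 3]
```

Without sort_by_parameter_order=True, the order of the returned rows is not guaranteed to match the order of the input dictionaries on all backends.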
* sqlite provisioning is ridiculous (Mike Bayer, 2023-03-10; 1 file, -74/+96)
try to get file naming to be more sane for pysqlite file databases

Change-Id: I68ad8c2f6c6c25930fbffdd79b8d429cd7a7dd9a
* Rewrite positional handling, test for "numeric" (Federico Caselli, 2022-12-05; 1 file, -1/+7)
Changed how the positional compilation is performed. It's rendered by the compiler the same as the pyformat compilation; the string is then processed to replace the placeholders with the correct ones and to obtain the correct order of the parameters. This vastly simplifies the computation of the order of the parameters, which in the case of nested CTEs is very hard to compute correctly.

Reworked how the numeric paramstyle behaves:

- added support for repeated parameters, without duplicating them as normal positional dialects do
- implemented insertmany support. This requires that the dialect supports out-of-order placeholders, since all parameters that are not part of the VALUES clauses are placed at the beginning of the parameter tuple
- support for different identifiers for a numeric parameter. It's for example possible to use postgresql-style placeholders $1, $2, etc.

Added two new dialects based on sqlite to test "numeric" fully, using both :1 style and $1 style. Includes a workaround for SQLite's not-really-correct numeric implementation.

Changed the paramstyle of the asyncpg dialect to use numeric, rendering with its native $ identifiers.

Fixes: #8926
Fixes: #8849
Change-Id: I7c640467d49adfe6d795cc84296fc7403dcad4d6
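For illustration, the numbered-placeholder semantics that the "numeric" paramstyle targets can be seen with the plain sqlite3 DBAPI; this sketch is not the dialect's internal code, just the placeholder behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

# Numbered placeholders (?NNN in SQLite) carry an index into the parameter
# sequence, so values are matched by number rather than by position.
conn.execute("INSERT INTO t (a, b) VALUES (?1, ?2)", (10, 20))

# The same number can be reused without duplicating the value in the
# parameter tuple: ?1 appears twice, the tuple supplies it once.
row = conn.execute(
    "SELECT a, b FROM t WHERE a = ?1 AND b > ?1", (10,)
).fetchone()
print(row)  # (10, 20)
```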
* implement batched INSERT..VALUES () () for executemany (Mike Bayer, 2022-09-24; 1 file, -0/+16)
The feature is enabled for all built-in backends when RETURNING is used, except for Oracle, which doesn't need it; on psycopg2 and mssql+pyodbc it is used for all INSERT statements, not just those that use RETURNING.

Third-party dialects would need to opt in to the new feature by setting use_insertmanyvalues to True.

Also adds dialect-level guards against using RETURNING with executemany where we don't have an implementation to suit it. Single execute with RETURNING still defers to the server without us checking.

Fixes: #6047
Fixes: #7907
Change-Id: I3936d3c00003f02e322f2e43fb949d0e6e568304
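A hypothetical third-party dialect opting in might look like the sketch below; only the use_insertmanyvalues flag comes from this commit, the class and name are placeholders:

```python
from sqlalchemy.engine.default import DefaultDialect


class MyThirdPartyDialect(DefaultDialect):
    """Hypothetical third-party dialect, for illustration only."""

    name = "mythirdparty"

    # opt in to the batched INSERT..VALUES (), () executemany path
    use_insertmanyvalues = True
```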
* inline mypy config; files ignoring type errors for the moment (Mike Bayer, 2022-04-28; 1 file, -0/+2)
To simplify pyproject.toml, change the remaining files that aren't going to be typed on this first pass (unless of course someone wants to type some of these) to include # mypy: ignore-errors. For the moment, only a handful of ORM modules are to have more type checking implemented.

It's important that ignore-errors is used and not "# type: ignore", as in the latter case, mypy doesn't even read the existing types in the file, which makes it impossible to type any files that refer to those modules at all.

To simplify ongoing typing work, use inline mypy config for remaining files that are "done" for now, indicating the level of type checking they currently have.

Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
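For illustration only, a module using the inline config described above might begin like this:

```python
# mypy: ignore-errors

# With the inline configuration comment above, mypy still reads the
# annotations in this module (so modules that import from it see useful
# types), but errors reported inside this file are suppressed.  A
# module-wide "# type: ignore" would instead stop mypy from reading the
# file's types at all.


def load_rows(query: str) -> list:
    # body intentionally loose; type errors here would be ignored
    return [query]
```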
* Repair pysqlcipher and use sqlcipher3 (Mike Bayer, 2021-03-24; 1 file, -22/+44)
The ``pysqlcipher`` dialect now imports the ``sqlcipher3`` module for Python 3 by default. Regressions that left the connection routine not working have been repaired.

To better support the post-connection steps of the pysqlcipher dialect, a new hook Dialect.on_connect_url() is added, which supersedes Dialect.on_connect() and is passed the URL object. The dialect now pulls the passphrase and other cipher args from the URL directly without including them in the "connect" args. This will allow any user-defined extensibility to connecting to work as it would for other dialects.

The commit also builds upon the extended routines in sqlite/provision.py to better support running tests against multiple simultaneous SQLite database files.

Additionally enables the backend for test_sqlite, which was skipping everything for aiosqlite too; fortunately everything there is passing.

Fixes: #5848
Change-Id: I43f53ebc62298a84a4abe149e1eb699a027b7915
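A rough, hypothetical sketch of the new hook on a dialect subclass follows; the class name and the keying step are invented for the example:

```python
from sqlalchemy.dialects.sqlite.pysqlite import SQLiteDialect_pysqlite
from sqlalchemy.engine.url import URL


class EncryptedSQLiteDialect(SQLiteDialect_pysqlite):
    # Unlike on_connect(), the on_connect_url() hook receives the URL
    # object, so post-connect steps can read URL-only options (such as a
    # passphrase) without passing them through to the DBAPI connect() call.
    def on_connect_url(self, url: URL):
        passphrase = url.password or ""

        def do_on_connect(dbapi_connection):
            # sqlcipher-style keying applied right after connect (sketch only)
            dbapi_connection.execute('pragma key="%s"' % passphrase)

        return do_on_connect
```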
* Add support for aiosqlite (Federico Caselli, 2021-03-24; 1 file, -9/+34)
Added support for the aiosqlite database driver for use with the SQLAlchemy asyncio extension.

Fixes: #5920
Change-Id: Id11a320516a44e886a6f518d2866a0f992413e55
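A minimal usage sketch, assuming the aiosqlite driver is installed:

```python
import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine


async def main() -> None:
    # In-memory SQLite via the aiosqlite driver, for brevity.
    engine = create_async_engine("sqlite+aiosqlite://")
    async with engine.connect() as conn:
        result = await conn.execute(text("select sqlite_version()"))
        print(result.scalar())
    await engine.dispose()


asyncio.run(main())
```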
* reinvent xdist hooks in terms of pytest fixtures (Mike Bayer, 2021-01-13; 1 file, -3/+3)
To allow the "connection" pytest fixture and others to work correctly in conjunction with setup/teardown that expects to be external to the transaction, remove and prevent any usage of "xdist"-style names that are hardcoded by pytest to run inside of fixtures, even function-level ones. Instead, use pytest autouse fixtures to implement our own r"setup|teardown_test(?:_class)?" methods so that we can ensure function-scoped fixtures are run within them.

A new, more explicit flow is set up within plugin_base and pytestplugin such that the order of setup/teardown steps, of which there are now many, is fully documented and controllable.

New granularity has been added to the test teardown phase to distinguish between "end of the test", when lock-holding structures on connections should be released to allow for table drops, vs. "end of the test plus its teardown steps", when we can perform final cleanup on connections and run assertions that everything is closed out.

From there we can remove most of the defensive "tear down everything" logic inside of engines, which for many years would frequently dispose of pools over and over again, creating a broken and expensive connection flow. A quick test shows that running test/sql/ against a single Postgresql engine with the new approach uses 75% fewer new connections, creating 42 new connections total, vs. 164 new connections total with the previous system.

As part of this, the new fixtures metadata/connection/future_connection have been integrated such that they can be combined together effectively. The fixture_session() and provide_metadata() fixtures have been improved, including that fixture_session() now strongly references sessions, which are explicitly torn down before table drops occur after a test.

Major changes have been made to the ConnectionKiller such that it now features different "scopes" for testing engines and will limit its cleanup to those testing engines corresponding to end of test, end of test class, or end of test session. The system by which it tracks DBAPI connections has been reworked; it is ultimately somewhat similar to how it worked before but is organized more clearly along with the proxy-tracking logic. A "testing_engine" fixture is also added that works as a pytest fixture rather than a standalone function.

The connection cleanup logic should now be very robust, as we now can use the same global connection pools for the whole suite without ever disposing them, while also running a query for PostgreSQL locks remaining after every test and asserting that there are no open transactions leaking between tests at all. Additional steps are added that also accommodate asyncio connections not explicitly closed, as is the case for legacy sync-style tests as well as the async tests themselves.

As always, hundreds of tests are further refined to use the new fixtures where problems with loose connections were identified, largely as a result of the new PostgreSQL assertions; many more tests have moved from legacy patterns into the newest.

An unfortunate discovery during the creation of this system is that autouse fixtures (as well as fixtures set up by @pytest.mark.usefixtures) are not usable at our current scale with pytest 4.6.11 running under Python 2. It's unclear if this is due to the older version of pytest or how it implements itself for Python 2, as well as if the issue is CPU slowness or just large memory use, but collecting the full span of tests takes over a minute for a single process when any autouse fixtures are in place, and on CI the jobs just time out after ten minutes. So at the moment this patch also reinvents a small version of "autouse" fixtures when py2k is running, which skips generating the real fixture and instead uses two global pytest fixtures (which don't seem to impact performance) to invoke the "autouse" fixtures ourselves outside of pytest. This will limit our ability to do more with fixtures until we can remove py2k support.

py.test is still observed to be much slower in collection in the 4.6.11 version compared to modern 6.2 versions, so add support for new TOX_POSTGRESQL_PY2K and TOX_MYSQL_PY2K environment variables that will run the suite for fewer backends under Python 2. For Python 3, pin pytest to modern 6.2 versions where performance for collection has been improved greatly.

Includes the following improvements:

- Fixed bug in asyncio connection pool where ``asyncio.TimeoutError`` would be raised rather than :class:`.exc.TimeoutError`. Also repaired the :paramref:`_sa.create_engine.pool_timeout` parameter when set to zero with the async engine, which previously would ignore the timeout and block rather than timing out immediately as is the behavior with regular :class:`.QueuePool`.

- For asyncio, the connection pool will now also not interact at all with an asyncio connection whose ConnectionFairy is being garbage collected; a warning that the connection was not properly closed is emitted and the connection is discarded. Within the test suite, the ConnectionKiller now maintains strong references to all DBAPI connections and ensures they are released when tests end, including those whose ConnectionFairy proxies are GCed.

- Identified cx_Oracle.stmtcachesize as a major factor in Oracle test scalability issues; this can be reset on a per-test basis rather than setting it to zero across the board. The addition of this flag has resolved the long-standing oracle "two task" error problem.

- For SQL Server, changed the temp table style used by the "suite" tests to be the double-pound-sign, i.e. global, variety, which is much easier to test generically. There are already reflection tests that are more finely tuned to both styles of temp table within the mssql test suite. Additionally, added an extra step to the "dropfirst" mechanism for SQL Server that will remove all foreign key constraints first, as some issues were observed when using this flag when multiple schemas had not been torn down.

- Identified and fixed two subtle failure modes in the engine: when commit/rollback fails in a begin() context manager, the connection is explicitly closed, and when "initialize()" fails on the first new connection of a dialect, the transactional state on that connection is still rolled back.

Fixes: #5826
Fixes: #5827
Change-Id: Ib1d05cb8c7cf84f9a4bfd23df397dc23c9329bfe
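A generic sketch of the autouse-fixture pattern described above, independent of SQLAlchemy's actual plugin code; the class and hook names here are placeholders:

```python
import pytest


class TestWithExplicitLifecycle:
    # An autouse fixture wraps every test in the class, so the explicit
    # setup_test()/teardown_test() hooks run with function-scoped fixtures
    # already available inside them.
    @pytest.fixture(autouse=True)
    def _per_test(self):
        self.setup_test()
        yield
        self.teardown_test()

    def setup_test(self):
        self.resources = []

    def teardown_test(self):
        self.resources.clear()

    def test_something(self):
        self.resources.append("conn")
        assert self.resources == ["conn"]
```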
* Repair mssql dep tests; have __only_on__ imply __backend__ (Mike Bayer, 2020-12-19; 1 file, -0/+20)
CI missed a few SQL Server tests because we run mssql-backendonly in the gerrit job. As there was a test that was "only on" mssql but didn't have backendonly, it never got run, and then failed in master where we run mssql fully.

Any suite that has an __only_on__ is inherently specific to a backend, so if present this should imply __backend__ so that it definitely runs when we have that backend present.

This in turn meant we had to fix a few sqlite_file tests that weren't cleaning up or sharing well, as they suddenly became backend tests under sqlite_file. Added a sqlite_file cleanup to test class cleanup for now.

Change-Id: I9de1ceabd6596547a65c59059a55b7e5156103fd
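A hypothetical test class following this convention might look like the sketch below; the class body is a placeholder and __backend__ is shown explicitly even though, per this change, __only_on__ now implies it:

```python
from sqlalchemy.testing import fixtures


class MSSQLSpecificTest(fixtures.TestBase):
    # Restrict the suite to one backend; implies it is a backend test.
    __only_on__ = "mssql"
    __backend__ = True

    def test_placeholder(self):
        pass
```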
* Fix tests failing for SQLite file databases; repair provisioning (Gord Thompson, 2020-03-13; 1 file, -4/+25)
1. Ensure provision.py loads dialect implementations when running reap_dbs.py. Reapers haven't been working since 598f2f7e557073f29563d4d567f43931fc03013f.
2. Add some exclusion rules to allow the sqlite_file target to work; add to tox.
3. Add a reap dbs target for SQLite; repair the SQLite drop_db routine, which also wasn't doing the right thing for memory databases etc.
4. Fix logging in provision files: as the main provision logger is the one that's enabled by reap_dbs and maybe others, have all the provision files use the provision logger.

Fixes: #5180
Fixes: #5168
Change-Id: Ibc1b0106394d20f5bcf847f37b09d185f26ac9b5
* Refactor test provisioning to dialect-level files (Gord Thompson, 2020-01-26; 1 file, -0/+54)
Move dialect-specific provisioning code to dialect-level copies of provision.py.

Fixes: #5085
Closes: #5092
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5092
Pull-request-sha: 25b9b7a9800549fb823576af8674e8d33ff4b2c1
Change-Id: Ie0b4a69aa472a60bdbd825e04c8595382bcc98e1