path: root/lib/sqlalchemy/dialects/postgresql
Commit message  Author  Age  Files  Lines
...
| * | normalize execute style for events, 2.0  Mike Bayer  2020-08-20  1  -8/+12
| |/ The _execute_20 and exec_driver_sql methods should wrap up the parameters so that they represent the single-list / single-dictionary style of invocation into the legacy methods. The before_execute / after_execute event handlers should then receive the parameter dictionary as a single dictionary. This requires that we break out distill_params to work differently if event handlers are present. Additionally, add deprecation warnings for the old argument-passing styles. Change-Id: I97cb4d06adfcc6b889f10d01cc7775925cffb116
* | Merge "Implement DDL visitor for PG ENUM with schema translate support"mike bayer2020-08-191-10/+47
|\ \
| * | Implement DDL visitor for PG ENUM with schema translate support  Mike Bayer  2020-08-19  1  -10/+47
| |/ | | | | | | | | | | | | | | | | | | | | | | Fixed issue where the :class:`_postgresql.ENUM` type would not consult the schema translate map when emitting a CREATE TYPE or DROP TYPE during the test to see if the type exists or not. Additionally, repaired an issue where if the same enum were encountered multiple times in a single DDL sequence, the "check" query would run repeatedly rather than relying upon a cached value. Fixes: #5520 Change-Id: I79f46e29ac0168e873ff178c242f8d78f6679aeb
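A rough sketch of the scenario this fix addresses, assuming a reachable PostgreSQL database; the DSN, schema names and enum name are illustrative. The CREATE TYPE / DROP TYPE statements, and the existence check that precedes them, now consult the translated schema:

    from sqlalchemy import Column, Integer, MetaData, Table, create_engine
    from sqlalchemy.dialects.postgresql import ENUM

    metadata = MetaData()
    status = ENUM("draft", "published", name="status_enum",
                  schema="per_tenant", metadata=metadata)
    documents = Table(
        "documents", metadata,
        Column("id", Integer, primary_key=True),
        Column("status", status),
        schema="per_tenant",
    )

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    with engine.connect().execution_options(
        schema_translate_map={"per_tenant": "tenant_1"}
    ) as conn:
        # the enum existence check and CREATE TYPE now target the translated
        # "tenant_1" schema rather than the literal "per_tenant" name
        metadata.create_all(conn)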
* | Merge "Add JSON support for mssql"mike bayer2020-08-191-5/+14
|\ \
| * | Add JSON support for mssql  Gord Thompson  2020-08-19  1  -5/+14
| |/ | | | | | | | | | | | | | | | | | | Added support for the :class:`_types.JSON` datatype on the SQL Server dialect using the :class:`_mssql.JSON` implementation, which implements SQL Server's JSON functionality against the ``NVARCHAR(max)`` datatype as per SQL Server documentation. Implementation courtesy Gord Thompson. Fixes: #4384 Change-Id: I28af79a4d8fafaa68ea032228609bba727784f18
* | Support data types for CREATE SEQUENCE in PostgreSQL  Federico Caselli  2020-08-18  1  -0/+11
|/ | | | | | | | | Allow specifying the data type when creating a :class:`.Sequence` in PostgreSQL by using the parameter :paramref:`.Sequence.data_type`. Fixes: #5498 Change-Id: I2b4a80aa89b1503c56748dc3ecd2cf145faddd8b
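A brief sketch of the new parameter; the sequence and table names are illustrative:

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()
    order_id_seq = Sequence("order_id_seq", data_type=Integer, metadata=metadata)
    orders = Table(
        "orders", metadata,
        Column("id", Integer, order_id_seq, primary_key=True),
    )
    # on PostgreSQL this now emits roughly:
    #   CREATE SEQUENCE order_id_seq AS INTEGER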
* Update dialect for pg8000 version 1.16.0  Tony Locke  2020-08-18  2  -40/+131
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The pg8000 dialect has been revised and modernized for the most recent version of the pg8000 driver for PostgreSQL. Changes to the dialect include: * All data types are now sent as text rather than binary. * Using adapters, custom types can be plugged in to pg8000. * Previously, named prepared statements were used for all statements. Now unnamed prepared statements are used by default, and named prepared statements can be used explicitly by calling the Connection.prepare() method, which returns a PreparedStatement object. Pull request courtesy Tony Locke. Notes by Mike: to get this all working it was needed to break up JSONIndexType into "str" and "int" subtypes; this will be needed for any dialect that is dependent on setinputsizes(). also includes @caselit's idea to include query params in the dbdriver parameter. Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com> Closes: #5451 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5451 Pull-request-sha: 639751ca9c7544801b9ede02e6cbe15a16c59c82 Change-Id: I2869bc52c330916773a41d11d12c297aecc8fcd8
* Create a real type for Tuple() and handle appropriately in compiler  Mike Bayer  2020-08-17  1  -6/+23
| | Improved the :func:`_sql.tuple_` construct such that it behaves predictably when used in a columns-clause context. The SQL tuple is not supported as a "SELECT" columns clause element on most backends; on those that do (PostgreSQL, not surprisingly), the Python DBAPI does not have a "nested type" concept so there are still challenges in fetching rows for such an object. Use of :func:`_sql.tuple_` in a :func:`_sql.select` or :class:`_orm.Query` will now raise a :class:`_exc.CompileError` at the point at which the :func:`_sql.tuple_` object is seen as presenting itself for fetching rows (i.e., if the tuple is in the columns clause of a subquery, no error is raised). For ORM use, the :class:`_orm.Bundle` object is an explicit directive that a series of columns should be returned as a sub-tuple per row and is suggested by the error message. Additionally, the tuple will now render with parentheses in all contexts. Previously, the parenthesization would not render in a columns context, leading to undefined behavior. As part of this change, Tuple receives a dedicated datatype, which appears to allow us the very desirable change of removing the bindparam._expanding_in_types attribute as well as ClauseList._tuple_values (which might already have not been needed due to #4645). Fixes: #5127 Change-Id: Iecafa0e0aac2f1f37ec8d0e1631d562611c90200
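A sketch of the behavior change, assuming a mapped class ``User`` with ``id`` and ``name`` attributes and an existing ``session``; names are illustrative:

    from sqlalchemy import tuple_
    from sqlalchemy.orm import Bundle

    # now raises CompileError: a tuple cannot present itself as a result column
    # session.query(tuple_(User.id, User.name)).all()

    # suggested replacement: Bundle returns a named sub-tuple per row
    pair = Bundle("pair", User.id, User.name)
    for row in session.query(pair):
        print(row.pair.id, row.pair.name)

    # tuple_() remains fine as a comparison value, e.g. a composite IN
    q = session.query(User).filter(tuple_(User.id, User.name).in_([(1, "ed")]))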
* Provision on different drivers dynamically  Mike Bayer  2020-08-14  1  -0/+17
| | | | | | | | | | | We want TOX_POSTGRESQL and similar to be the fixed variable that is configured from CI environment. These variables should refer to database servers but individual drivers like asyncpg mysqlconnector etc. should come from local tox.ini. add a new system to generate per-driver URLs from a simple list of hostname-based URLs delivered from CI environment. Change-Id: I4267b4a70742765388c7e7c4432c1da9d9adece2
* Don't import asyncpg prior to python 3.6  Mike Bayer  2020-08-14  1  -1/+1
| | | | | | | Continuing on 788ba204a43b37d28cc690138b83e6782f8a46da make sure asyncpg doesn't get imported if version < 3.6. Change-Id: Ic070a0c9d3656f7da7ce2eb85033968fc88a64b9
* Implement rudimentary asyncio support w/ asyncpg  Mike Bayer  2020-08-13  3  -1/+797
| | | | | | | | | | | | | | | | | | | | | | | | | | Using the approach introduced at https://gist.github.com/zzzeek/6287e28054d3baddc07fa21a7227904e We can now create asyncio endpoints that are then handled in "implicit IO" form within the majority of the Core internals. Then coroutines are re-exposed at the point at which we call into asyncpg methods. Patch includes: * asyncpg dialect * asyncio package * engine, result, ORM session classes * new test fixtures, tests * some work with pep-484 and a short plugin for the pyannotate package, which seems to have so-so results Change-Id: Idbcc0eff72c4cad572914acdd6f40ddb1aef1a7d Fixes: #3414
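A minimal sketch of the asyncio-facing usage as it appears in the 1.4 series, assuming a reachable PostgreSQL database; the DSN is illustrative:

    import asyncio

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine

    async def main():
        engine = create_async_engine("postgresql+asyncpg://scott:tiger@localhost/test")
        async with engine.connect() as conn:
            result = await conn.execute(text("select 'hello asyncpg'"))
            print(result.scalar())
        await engine.dispose()

    asyncio.run(main())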
* Clean python UUID imports  Vlastimil Zíma  2020-07-30  3  -22/+3
| | | | | | | | | | | Fixes: #5482 All supported python versions provide 'uuid' module. Closes: #5483 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5483 Pull-request-sha: fc32498a8b639ff21d5898100592782826d2c6dd Change-Id: I8b41b811da7576f724353425dad5d6f581641b4b
* Ensure is_comparison passed for PG RANGE op() methods  Jim Bosch  2020-07-26  1  -9/+9
| | | | | | | | | | | | | | | Fixed issue where the return type for the various RANGE comparison operators would itself be the same RANGE type rather than BOOLEAN, which would cause an undesirable result in the case that a :class:`.TypeDecorator` that defined result-processing behavior were in use. Pull request courtesy Jim Bosch. Fixes: #5476 Closes: #5477 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5477 Pull-request-sha: 925b117e0c91cdd67d9ddbd9d65f5ca3e88af91f Change-Id: I52ab4d4362d379c8253990f9d328a40990a64520
* more docs for autocommit isolation level  Mike Bayer  2020-07-12  1  -0/+2
| | The existing docs do not make it clear that we offer real DBAPI autocommit everywhere. Backport to 1.3 with edits as well. Change-Id: I2e8328b7fb6e1cdc5453ab29c94276f60c7ca149
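A short sketch of the "real autocommit" pattern the added documentation covers; the URL and statement are illustrative:

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    # per-connection: no transaction is begun; the driver commits each statement
    with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as conn:
        conn.execute(text("SELECT 1"))

    # or build an engine-wide autocommit variant
    autocommit_engine = engine.execution_options(isolation_level="AUTOCOMMIT")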
* Merge "ORM executemany returning"mike bayer2020-06-281-7/+17
|\
| * ORM executemany returning  Mike Bayer  2020-06-27  1  -7/+17
| | Build on #5401 to allow the ORM to take advantage of executemany INSERT + RETURNING. Implemented the feature and updated tests. To support INSERT DEFAULT VALUES, needed to come up with a new syntax for the compiler, INSERT INTO table (anycol) VALUES (DEFAULT), which can then be iterated out for executemany. Added graceful degrade to plain executemany for PostgreSQL <= 8.2. Renamed EXECUTEMANY_DEFAULT to EXECUTEMANY_PLAIN. Fix issue where unicode identifiers or parameter names wouldn't work with execute_values() under Py2K, because we have to encode the statement and therefore have to encode the insert_single_values_expr too. Correct issue from #5401 to support executemany + return_defaults for a PK that is explicitly pre-generated, meaning we aren't actually getting RETURNING but need to return it from compiled_parameters. Fixes: #5263 Change-Id: Id68e5c158c4f9ebc33b61c06a448907921c2a657
* | Merge "Fix a wide variety of typos and broken links"mike bayer2020-06-261-4/+4
|\ \ | |/ |/|
| * Fix a wide variety of typos and broken links  aplatkouski  2020-06-25  1  -4/+4
| | | | | | | | | | | | | | | | | | | | | | | | Note the PR has a few remaining doc linking issues listed in the comment that must be addressed separately. Signed-off-by: aplatkouski <5857672+aplatkouski@users.noreply.github.com> Closes: #5371 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5371 Pull-request-sha: 7e7d233cf3a0c66980c27db0fcdb3c7d93bc2510 Change-Id: I9c36e8d8804483950db4b42c38ee456e384c59e3
* | Default psycopg2 executemany mode to "values_only"  Mike Bayer  2020-06-25  1  -123/+128
|/ The psycopg2 dialect now defaults to using the very performant ``execute_values()`` psycopg2 extension for compiled INSERT statements, and also implements RETURNING support when this extension is used. This allows INSERT statements that even include an autoincremented SERIAL or IDENTITY value to run very fast while still being able to return the newly generated primary key values. The ORM will then integrate this new feature in a separate change. Implements RETURNING for insert with executemany. Adds support to return_defaults() mode and inserted_primary_key to support multiple INSERTed rows, via return_defaults_rows and inserted_primary_key_rows accessors. Within the default execution context, new cached compiler getters are used to fetch primary keys from rows. inserted_primary_key now returns a plain tuple; this is not yet a row-like object, however this can be added. Adds distinct "values_only" and "batch" modes, as "values" has a lot of benefits but "batch" breaks cursor.rowcount. The psycopg2 minimum version is now 2.7, so we can remove the large number of checks for very old versions of psycopg2. Simplify tests to no longer distinguish between native and non-native JSON. Fixes: #5401 Change-Id: Ic08fd3423d4c5d16ca50994460c0c234868bd61c
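The dialect-level settings involved look roughly like the sketch below; the URL, page sizes and ``some_table`` are illustrative, and ``executemany_mode="values_only"`` is now the default:

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        executemany_mode="values_only",     # new default, uses execute_values()
        executemany_values_page_size=1000,  # rows per VALUES batch
        executemany_batch_page_size=100,    # used when executemany_mode="batch"
    )

    # an executemany-style INSERT now runs through execute_values(), with RETURNING
    with engine.begin() as conn:
        conn.execute(some_table.insert(), [{"x": 1}, {"x": 2}, {"x": 3}])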
* Propose using RETURNING for bulk updates, deletes  Mike Bayer  2020-06-23  1  -4/+7
| | This patch makes several improvements in the area of bulk updates and deletes as well as the new session mechanics. RETURNING is now used for an UPDATE or DELETE statement emitted for a dialect that supports "full returning" in order to satisfy the "fetch" strategy; this currently includes PostgreSQL and SQL Server. The Oracle dialect does not support RETURNING for more than one row, so a new dialect capability "full_returning" is added in addition to the existing "implicit_returning", indicating this dialect supports RETURNING for zero or more rows, not just a single identity row. The "fetch" strategy will gracefully degrade to the previous SELECT mechanics for dialects that do not support RETURNING. Additionally, the "fetch" strategy will attempt to use evaluation for the VALUES that were UPDATEd, rather than just expiring the updated attributes. Values should be evaluatable in all cases where the value is not a SQL expression. The new approach also incurs some changes in the session.execute mechanics, where do_orm_execute() event handlers can now be chained so that each returns results; this is in turn used by the handler to detect on a per-bind basis if the fetch strategy needs to do a SELECT or if it can do RETURNING. A test suite is added to test_horizontal_shard that breaks up a single UPDATE or DELETE operation among multiple backends where some are SQLite and don't support RETURNING and others are PostgreSQL and do. The session event mechanics are corrected in terms of the "orm pre execute" hook, which now receives a flag "is_reentrant" so that the two ORM implementations for this can skip their work if they are being called inside of ORMExecuteState.invoke(), where previously bulk update/delete was calling its SELECT a second time. In order for "fetch" to get the correct identity when called as pre-execute, it also requests the identity_token for each mapped instance, which is now added as an optional capability of a SELECT for ORM columns. The identity_token that's placed by horizontal_sharding is now made available within each result row, so that even when fetching a merged result of plain rows we can tell which row belongs to which identity token. The evaluator that takes place within the ORM bulk update and delete for synchronize_session="evaluate" now supports the IN and NOT IN operators. Tuple IN is also supported. Fixes: #1653 Change-Id: I2292b56ae004b997cef0ba4d3fc350ae1dd5efc1
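A sketch of the "fetch" synchronization strategy described above; ``User`` and ``session`` are assumed to exist:

    session.query(User).filter(User.name.in_(["ed", "wendy"])).update(
        {"status": "archived"},
        synchronize_session="fetch",
    )
    # on PostgreSQL / SQL Server this can emit a single
    #   UPDATE users SET status=... WHERE ... RETURNING users.id
    # rather than a preliminary SELECT of the matching primary keys; other
    # backends degrade gracefully to the previous SELECT-based mechanics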
* Merge "Added reflection method :meth:`.Inspector.get_sequence_names`"mike bayer2020-06-191-32/+33
|\
| * Added reflection method :meth:`.Inspector.get_sequence_names`  Federico Caselli  2020-06-03  1  -32/+33
| | Added new reflection method :meth:`.Inspector.get_sequence_names` which returns all the sequences defined. Support for this method has been added to the backends that support :class:`.Sequence`: PostgreSQL, Oracle, MSSQL and MariaDB >= 10.3. Fixes: #2056 Change-Id: I0949696a39aa28c849edf2504779241f7443778a
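A quick sketch of the new method; the URL and schema name are illustrative:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    inspector = inspect(engine)
    print(inspector.get_sequence_names())                      # default schema
    print(inspector.get_sequence_names(schema="accounting"))   # a specific schema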
* | Merge "Add documentation regarding row constructo in PostgreSQL"mike bayer2020-06-111-1/+45
|\ \
| * | Add documentation regarding row constructor in PostgreSQL  Federico Caselli  2020-05-25  1  -1/+45
| | | | | | | | | | | | | | | Fixes: #5331 Change-Id: Ia795a4d4d8ae4944d8a160d18f8b917177acf0de
* | | Turn on caching everywhere, add logging  Mike Bayer  2020-06-10  1  -1/+1
| |/ |/| A variety of caching issues found by running all tests with statement caching turned on. The cache system now has a more conservative approach where any subclass of a SQL element will by default invalidate the cache key unless it adds the flag inherit_cache=True at the class level, or if it implements its own caching. Add working caching to a few elements that were omitted previously; fix some caching implementations to suit lesser used edge cases such as json casts and array slices. Refine the way BaseCursorResult and CursorMetaData interact with caching; to suit cases like Alembic modifying table structures, don't cache the cursor metadata if it was created against a cursor.description using non-positional matching, e.g. "select *"; if a table re-ordered its columns or added/removed columns, that data is now obsolete. Additionally we have to adapt the cursor metadata _keymap regardless of whether we just processed cursor.description, because if we ran against a cached SQLCompiler we won't have the right columns in _keymap. Other refinements to how and when we do this adaptation, as some weird cases were exposed in the PostgreSQL dialect, such as a text() construct that names just one column that is not actually in the statement. Fixed that also, as it looks like a cut-and-paste artifact that doesn't actually affect anything. Various issues with re-use of compiled result maps and cursor metadata in conjunction with tables being changed, such as a change in the order of columns. Mappers can be cleared but the class remains, meaning a mapper has to use itself as the cache key, not the class. Lots of bound parameter / literal issues, due to Alembic creating a straight subclass of bindparam that renders inline directly. While we can update Alembic to not do this, we have to assume other people might be doing this, so bindparam() implements the inherit_cache=True logic as well, which was a bit involved. Turn on cache stats in logging. Includes a fix to subqueryloader which moves all setup to the create_row_processor() phase and eliminates any storage within the compiled context. This includes some changes to the create_row_processor() signature and a revising of the technique used to determine if the loader can participate in polymorphic queries, which is also applied to selectinloading. DML update.values() and ordered_values() now coerce the keys, as we have tests that pass an arbitrary class here which only includes __clause_element__(), so the key can't be cached unless it is coerced. This in turn changed how composite attributes support bulk update, to use the standard approach of ClauseElement with annotations that are parsed in the ORM context. Memory profiling successfully caught that the Session from Query was getting passed into _statement_20(), so that was a big win for that test suite. Apparently Compiler had .execute() and .scalar() methods stuck on it; these date back to version 0.4 and there was a single test in the PostgreSQL dialect tests that exercised it for no apparent reason. Removed these methods as well as the concept of a Compiler holding onto a "bind". Fixes: #5386 Change-Id: I990b43aab96b42665af1b2187ad6020bee778784
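A sketch of the opt-in flag mentioned above, for third-party constructs that subclass a SQL element without adding new compilation state; the class name is illustrative:

    from sqlalchemy.sql.expression import ColumnClause

    class MyColumnThing(ColumnClause):
        # compiles exactly like ColumnClause, so the parent's cache key applies
        inherit_cache = True

Without ``inherit_cache = True`` (or a custom cache key implementation), statements containing the subclass are conservatively treated as uncacheable.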
* | Add support for "real" sequences in mssql  Gord Thompson  2020-05-29  1  -0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Added support for "CREATE SEQUENCE" and full :class:`.Sequence` support for Microsoft SQL Server. This removes the deprecated feature of using :class:`.Sequence` objects to manipulate IDENTITY characteristics which should now be performed using ``mssql_identity_start`` and ``mssql_identity_increment`` as documented at :ref:`mssql_identity`. The change includes a new parameter :paramref:`.Sequence.data_type` to accommodate SQL Server's choice of datatype, which for that backend includes INTEGER and BIGINT. The default starting value for SQL Server's version of :class:`.Sequence` has been set at 1; this default is now emitted within the CREATE SEQUENCE DDL for all backends. Fixes: #4235 Fixes: #4633 Change-Id: I6aa55c441e8146c2f002e2e201a7f645e667b916
* | callcount reductions and refinement for cached queries  Mike Bayer  2020-05-28  1  -5/+1
|/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit includes that we've removed the "_orm_query" attribute from compile state as well as query context. The attribute created reference cycles and also added method call overhead. As part of this change, the interface for ORMExecuteState changes a bit, as well as the interface for the horizontal sharding extension which now deprecates the "query_chooser" callable in favor of "execute_chooser", which receives the contextual object. This will also work more nicely when we implement the new execution path for bulk updates and deletes. Pre-merge execution options for statement, connection, arguments all up front in Connection. that way they can be passed to the before_execute / after_execute events, and the ExecutionContext doesn't have to merge as second time. Core execute is pretty close to 1.3 now. baked wasn't using the new one()/first()/one_or_none() methods, fixed that. Convert non-buffered cursor strategy to be a stateless singleton. inline all the paths by which the strategy gets chosen, oracle and SQL Server dialects make use of the already-invoked post_exec() hook to establish the alternate strategies, and this is actually much nicer than it was before. Add caching to mapper instance processor for getters. Identified a reference cycle per query that was showing up as a lot of gc cleanup, fixed that. After all that, performance not budging much. Even test_baked_query now runs with significantly fewer function calls than 1.3, still 40% slower. Basically something about the new patterns just makes this slower and while I've walked a whole bunch of them back, it hardly makes a dent. that said, the performance issues are relatively small, in the 20-40% time increase range, and the new caching feature does provide for regular ORM and Core queries that are cached, and they are faster than non-cached. Change-Id: I7b0b0d8ca550c05f79e82f75cd8eff0bbfade053
* Add immutabledict C code  Mike Bayer  2020-05-23  1  -1/+1
| | Start trying to convert fundamental objects to C as we now rely on a fairly small core of things, and 1.4 is having problems with complexity added being slower than the performance gains we are trying to build in. immutabledict here does seem to bench as twice as fast as the Python one, see below. However, it does not appear to be used prominently enough to make any dent in the performance tests. At the very least it may provide us some more lift-and-copy code for more C extensions.

    import timeit

    from sqlalchemy.util._collections import not_immutabledict, immutabledict

    def run(dict_cls):
        for i in range(1000000):
            d1 = dict_cls({"x": 5, "y": 4})
            d2 = d1.union({"x": 17, "new key": "some other value"}, None)
            assert list(d2) == ["x", "y", "new key"]

    print(
        timeit.timeit(
            "run(d)", "from __main__ import run, not_immutabledict as d", number=1
        )
    )
    print(
        timeit.timeit(
            "run(d)", "from __main__ import run, immutabledict as d", number=1
        )
    )

output:

    python: 1.8799766399897635
    C code: 0.8880784640205093

Change-Id: I29e7104dc21dcc7cdf895bf274003af2e219bf6d
* Documentation updates for ResultProxy -> Result  Mike Bayer  2020-05-01  2  -4/+4
| | | | | | | | | This is based off of I8091919d45421e3f53029b8660427f844fee0228 and includes all documentation-only changes as a separate merge, once the parent is merged. Change-Id: I711adea23df0f9f0b1fe7c76210bd2de6d31842d
* Deprecate unsupported dialects and dbapi  Federico Caselli  2020-04-29  2  -0/+27
| | | | | | | | | | | | | | | | | - Deprecate dialects firebird and sybase. - Deprecate DBAPI - mxODBC for mssql - oursql for mysql - pygresql and py-postgresql for postgresql - Removed adodbapi DBAPI for mssql Fixes: #5189 Change-Id: Id9025f4f4de7e97d65aacd0eb4b0c21beb9a67b5
* Merge "Deprecate ``DISTINCT ON`` when not targeting PostgreSQL"mike bayer2020-04-201-0/+2
|\
| * Deprecate ``DISTINCT ON`` when not targeting PostgreSQL  Federico Caselli  2020-04-20  1  -0/+2
| | Deprecate usage of ``DISTINCT ON`` in dialects other than PostgreSQL. Previously this was silently ignored. Deprecate old usage of string distinct in the MySQL dialect. Fixes: #4002 Change-Id: I38fc64aef75e77748083c11d388ec831f161c9c9
* | Support `ARRAY` of `Enum`, `JSON` or `JSONB`  Federico Caselli  2020-04-20  5  -54/+91
|/ Added support for columns of type :class:`.ARRAY` of :class:`.Enum`, :class:`.JSON` or :class:`_postgresql.JSONB` in PostgreSQL. Previously a workaround was required in these use cases. Raise an explicit :class:`.exc.CompileError` when adding a table with a column of type :class:`.ARRAY` of :class:`.Enum` configured with :paramref:`.Enum.native_enum` set to ``False`` when :paramref:`.Enum.create_constraint` is not set to ``False``. Fixes: #5265 Fixes: #5266 Change-Id: I83a2d20a599232b7066d0839f3e55ff8b78cd8fc
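A sketch of the column definitions this enables without workarounds; the table, column and type names are illustrative:

    from sqlalchemy import Column, Enum, Integer, MetaData, Table
    from sqlalchemy.dialects.postgresql import ARRAY, JSONB

    metadata = MetaData()
    document = Table(
        "document", metadata,
        Column("id", Integer, primary_key=True),
        Column("states", ARRAY(Enum("draft", "final", name="doc_state"))),
        Column("revisions", ARRAY(JSONB)),
    )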
* Create initial 2.0 engine implementation  Mike Bayer  2020-04-16  1  -2/+4
| | | | | | | | | | | | | | | | | | | Implemented the SQLAlchemy 2 :func:`.future.create_engine` function which is used for forwards compatibility with SQLAlchemy 2. This engine features always-transactional behavior with autobegin. Allow execution options per statement execution. This includes that the before_execute() and after_execute() events now accept an additional dictionary with these options, empty if not passed; a legacy event decorator is added for backwards compatibility which now also emits a deprecation warning. Add some basic tests for execution, transactions, and the new result object. Build out on a new testing fixture that swaps in the future engine completely to start with. Change-Id: I70e7338bb3f0ce22d2f702537d94bb249bd9fb0a Fixes: #4644
* Set up absolute references for create_engine and related  Mike Bayer  2020-04-14  4  -15/+19
| | | | | | | includes more replacements for create_engine(), Connection, disambiguation of Result from future/baked Change-Id: Icb60a79ee7a6c45ea9056c211ffd1be110da3b5e
* Run search and replace of symbolic module names  Mike Bayer  2020-04-14  7  -129/+170
| | | | | | | | Replaces a wide array of Sphinx-relative doc references with an abbreviated absolute form now supported by zzzeeksphinx. Change-Id: I94bffcc3f37885ffdde6238767224296339698a2
* Enable zzzeeksphinx module prefixes  Mike Bayer  2020-04-14  1  -0/+2
| | zzzeeksphinx 1.1.2 in git can now convert short prefix names in a configured lookup to fully qualified module names, so that we can have succinct and portable pyrefs that still resolve absolutely. It also includes a formatter that will format all pyrefs in a fully consistent way regardless of the package path, by unconditionally removing all package tokens but always leaving class names in place, including for methods, which means we no longer have to deal with tildes in pyrefs. The most immediate goal of the absolute prefixes is that we have lots of "ambiguous" names that appear in multiple places, like select(), ARRAY, ENUM etc. With the incoming future packages there is going to be lots of name overlap, so it is necessary that all names eventually use absolute package paths when Sphinx receives them. In multiple stages, pyrefs will be converted using the zzzeeksphinx tools/fix_xrefs.py tool so that doclinks can be made absolute using symbolic prefixes. For this review, the actual search and replace of symbols is not performed; instead, some general cleanup is done to prepare the docs, as well as a lookup file used by the tool to do the conversion. This relatively small patch will be backported with appropriate changes to 1.3, 1.2, 1.1 and the tool can then be run on each branch individually. We are shooting for almost no warnings at all for master (still a handful I can't figure out which don't seem to have any impact), very few for 1.3, and for 1.2 / 1.1 we hope for a significant reduction in warnings. Overall, for all versions, pyrefs should always point to the correct target, if they are in fact hyperlinked. It's better for a ref to go nowhere and be plain text than go to the wrong thing. Right now, hundreds of API links are pointing to the wrong thing as they are ambiguous names such as refresh(), insert(), update(), select(), join(), JSON etc. and Sphinx sends these all to essentially random destinations among as many as five or six possible choices per symbol. A shorthand system that allows us to use absolute refs without having to type out a full-blown absolute module name is the only way this is going to work, and we should ultimately seek to abandon any use of prefix dot for lookups. Everything should be on an underscore token so that, at the very least, the module spaces can be reorganized without having to search and replace the entire documentation every time. Change-Id: I484a7329034af275fcdb322b62b6255dfeea9151
* Correct ambiguous func / class links  Mike Bayer  2020-03-25  1  -2/+2
| | | | | | | | | :func:`.sql.expression.select`, :func:`.sql.expression.insert` and :class:`.sql.expression.Insert` were hitting many ambiguous symbol errors, due to future.select, as well as the PG/MySQL variants of Insert. Change-Id: Iac862bfc172a7f7f0cbba5353a83dc203bed376c
* Deprecate plain string in execute and introduce `exec_driver_sql`  Federico Caselli  2020-03-21  2  -17/+21
| | Execution of a literal SQL string is deprecated in :meth:`.Connection.execute`, and a warning is raised when one is used, stating that it will be coerced to :func:`.text` in a future release. To execute a raw SQL string, the new connection method :meth:`.Connection.exec_driver_sql` was added, which retains the previous behavior, passing the string to the DBAPI driver unchanged. Usage of scalar or tuple positional parameters in :meth:`.Connection.execute` is also deprecated. Fixes: #4848 Fixes: #5178 Change-Id: I2830181054327996d594f7f0d59c157d477c3aa9
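A sketch of the migration path, assuming an existing ``engine``; the parameter format accepted by exec_driver_sql() is whatever the DBAPI driver itself expects (pyformat for psycopg2 here):

    from sqlalchemy import text

    with engine.connect() as conn:
        conn.execute("SELECT 1")                           # deprecated: warns, coerced to text()
        conn.execute(text("SELECT :x"), {"x": 1})          # preferred Core form
        conn.exec_driver_sql("SELECT %(x)s", {"x": 1})     # string passed to the driver unchanged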
* Don't include PG INCLUDE columns as regular index columns  mike bayer  2020-03-18  1  -4/+22
| | | | | | | | | | | | | | | | Fixed issue where a "covering" index, e.g. those which have an INCLUDE clause, would be reflected including all the columns in INCLUDE clause as regular columns. A warning is now emitted if these additional columns are detected indicating that they are currently ignored. Note that full support for "covering" indexes is part of :ticket:`4458`. Pull request courtesy Marat Sharafutdinov. Fixes: #5205 Closes: #5206 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5206 Pull-request-sha: 512a3817bb21991142add2d192fa7ce9b285369d Change-Id: I3196a2bf77dc5a6abd85b2fbf0ebff1b30d4fb00
* Support inspection of computed column  Federico Caselli  2020-03-15  1  -2/+26
| | | | | | | | | | | | | | | | | | | Added support for reflection of "computed" columns, which are now returned as part of the structure returned by :meth:`.Inspector.get_columns`. When reflecting full :class:`.Table` objects, computed columns will be represented using the :class:`.Computed` construct. Also improve the documentation in :meth:`Inspector.get_columns`, correctly listing all the returned keys. Fixes: #5063 Fixes: #4051 Closes: #5064 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5064 Pull-request-sha: ba00fc321ce468f8885aad23b3dd33c789e50fbe Change-Id: I789986554fc8ac7f084270474d0b2c12046b1cc2
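A short sketch of what the reflected structure looks like for a computed column; the URL and table name are illustrative:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    for col in inspect(engine).get_columns("square"):
        if "computed" in col:
            # e.g. {'sqltext': 'side * side', 'persisted': True}
            print(col["name"], col["computed"])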
* Fix tests failing for SQLite file databases; repair provisioning  Gord Thompson  2020-03-13  1  -4/+1
| | | | | | | | | | | | | | | | | | | | | | 1. ensure provision.py loads dialect implementations when running reap_dbs.py. Reapers haven't been working since 598f2f7e557073f29563d4d567f43931fc03013f . 2. add some exclusion rules to allow the sqlite_file target to work; add to tox. 3. add reap dbs target for SQLite, repair SQLite drop_db routine which also wasn't doing the right thing for memory databases etc. 4. Fix logging in provision files, as the main provision logger is the one that's enabled by reap_dbs and maybe others, have all the provision files use the provision logger. Fixes: #5180 Fixes: #5168 Change-Id: Ibc1b0106394d20f5bcf847f37b09d185f26ac9b5
* Don't import provision.py unconditionally  Mike Bayer  2020-03-03  1  -1/+0
| | | | | | | | | | | | | | | | | | | | | | | | Removed the imports for provision.py from each dialect and instead added a call in the central provision.py to a new dialect level method load_provisioning(). The provisioning registry works in the same way, so an existing dialect that is using the provision.py system right now by importing it as part of the package will still continue to function. However, to avoid pulling in the testing package when the dialect is used in a non-testing context, the new hook may be used. Also removed a module-level dependency of the testing framework on the orm package. Revised an internal change to the test system added as a result of :ticket:`5085` where a testing-related module per dialect would be loaded unconditionally upon making use of that dialect, pulling in SQLAlchemy's testing framework as well as the ORM into the module import space. This would only impact initial startup time and memory to a modest extent, however it's best that these additional modules aren't reverse-dependent on straight Core usage. Fixes: #5180 Change-Id: I6355601da5f6f44d85a2bbc3acb5928559942b9c
* Ensure all nested exception throws have a cause  Mike Bayer  2020-03-02  1  -5/+8
| | | | | | | | | | | | | | | Applied an explicit "cause" to most if not all internally raised exceptions that are raised from within an internal exception catch, to avoid misleading stacktraces that suggest an error within the handling of an exception. While it would be preferable to suppress the internally caught exception in the way that the ``__suppress_context__`` attribute would, there does not as yet seem to be a way to do this without suppressing an enclosing user constructed context, so for now it exposes the internally caught exception as the cause so that full information about the context of the error is maintained. Fixes: #4849 Change-Id: I55a86b29023675d9e5e49bc7edc5a2dc0bcd4751
* While parsing for check constraints, ignore newline characters in the check condition  Eric Borczuk  2020-02-29  1  -2/+8
| | Fixed bug where PostgreSQL reflection of CHECK constraints would fail to parse the constraint if the SQL text contained newline characters. The regular expression has been adjusted to accommodate this case. Pull request courtesy Eric Borczuk. Fixes: #5170 Closes: #5172 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5172 Pull-request-sha: 5701b7f09f723b727bbee95d19d107d6cc1d7717 Change-Id: If727e9140b645e8b685c3476fb0fa4417c1e6526
* Remove print statement in favor of print() function in docs and examples  Albert Tugushev  2020-02-26  1  -4/+4
| | Remove print statements in favor of the print() function in docs and examples; a documentation / typographical fix only. Closes: #5166 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5166 Pull-request-sha: 04a7394f71298322188f0861b4dfe93e5485839d Change-Id: Ib90a59fac929661a18748c6e44966fb87e3978c6
* Result initial introduction  Mike Bayer  2020-02-21  2  -15/+14
| | This builds on cc718cccc0bf8a01abdf4068c7ea4f3 which moved RowProxy to Row, allowing Row to be more like a named tuple. - KeyedTuple in ORM is replaced with Row - ResultSetMetaData broken out into "simple" and "cursor" versions for ORM and Core, as well as LegacyCursor version. - Row now has _mapping attribute that supplies full mapping behavior. Row and SimpleRow both have named tuple behavior otherwise. LegacyRow has some mapping features on the tuple which emit deprecation warnings (e.g. keys(), values(), etc). The biggest change for mapping->tuple is the behavior of __contains__ which moves from testing of "key in row" to "value in row". - ResultProxy breaks into ResultProxy and FutureResult (interim), the latter has the newer APIs. Made available to dialects using execution options. - internal reflection methods and most tests move off of implicit Row mapping behavior and move to row._mapping, result.mappings() method using future result - a new strategy system for cursor handling replaces the various subclasses of RowProxy - some execution context adjustments. We will leave EC in but refined things like get_result_proxy() and out parameter handling. Dialects for 1.4 will need to adjust from get_result_proxy() to get_result_cursor_strategy(), if they are using this method - out parameter handling now accommodated by get_out_parameter_values() EC method. Oracle changes for this. external dialect for DB2 for example will also need to adjust for this. - deprecate case_insensitive flag for engine / result; this feature is not used. Mapping methods on Row are deprecated, and replaced with Row._mapping.<meth>, including: row.keys() -> use row._mapping.keys() row.items() -> use row._mapping.items() row.values() -> use row._mapping.values() key in row -> use key in row._mapping int in row -> use int < len(row) Fixes: #4710 Fixes: #4878 Change-Id: Ieb9085e9bcff564359095b754da9ae0af55679f0
* Deprecate connection branching  Mike Bayer  2020-02-21  1  -3/+0
| | | | | | | | | | | | | | | The :meth:`.Connection.connect` method is deprecated as is the concept of "connection branching", which copies a :class:`.Connection` into a new one that has a no-op ".close()" method. This pattern is oriented around the "connectionless execution" concept which is also being removed in 2.0. As part of this change we begin to move the internals away from "connectionless execution" overall. Remove the "connectionless execution" concept from the reflection internals and replace with explicit patterns at the Inspector level. Fixes: #5131 Change-Id: Id23d28a9889212ac5ae7329b85136157815d3e6f
* Pass DDLCompiler IdentifierPreparer to visit_ENUM  Mike Bayer  2020-02-17  1  -3/+8
| | | | | | | | | | | | Fixed issue where the "schema_translate_map" feature would not work with a PostgreSQL native enumeration type (i.e. :class:`.Enum`, :class:`.postgresql.ENUM`) in that while the "CREATE TYPE" statement would be emitted with the correct schema, the schema would not be rendered in the CREATE TABLE statement at the point at which the enumeration was referenced. Fixes: #5158 Change-Id: I41529785de2e736c70a142c2ae5705060bfed73e
* Fixes for public_factory and mysql/pg dml functions  Mike Bayer  2020-02-08  2  -7/+12
| | | | | | | | | | | | | | | | * ensure that the location indicated by public_factory is importable * adjust all of sqlalchemy.sql.expression locations to be correct * support the case where a public_factory is against a function that has another public_factory already, and already replaced the __init__ on the target class * Use mysql.insert(), postgresql.insert(), don't include .dml in the class path. Change-Id: Iac285289455d8d7102349df3814f7cedc758e639