| Commit message (Collapse) | Author | Age | Files | Lines |
To simplify pyproject.toml, change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. For the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
To simplify ongoing typing work, use inline mypy config
for the remaining files that are "done" for now, indicating the
level of type checking they currently have.
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
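A minimal sketch of the inline-config approach described above (module path hypothetical); errors in the file are suppressed, but unlike "# type: ignore" its annotations remain visible to importing modules::

    # lib/sqlalchemy/orm/some_legacy_module.py  (hypothetical module)
    # mypy: ignore-errors

    from typing import Optional


    def lookup(name: str) -> Optional[int]:
        # annotations here stay readable by mypy when this module is imported,
        # even though errors inside this file are not reported
        ...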
Implemented :attr:`_postgresql.UUID.python_type` attribute for the
:class:`_postgresql.UUID` type object. The attribute will return either
``str`` or ``uuid.UUID`` based on the :paramref:`_postgresql.UUID.as_uuid`
parameter setting. Previously, this attribute was unimplemented. Pull
request courtesy Alex Grönholm.
Fixes: #7943
Closes: #7944
Change-Id: Ic4fbaeee134d586b08339801968e787cc7e14285
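A short sketch of the new attribute in use::

    import uuid
    from sqlalchemy.dialects.postgresql import UUID

    UUID(as_uuid=True).python_type   # <class 'uuid.UUID'>
    UUID(as_uuid=False).python_type  # <class 'str'>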
Implement strict typing for schema.py
This module has lots of public API, lots of old decisions,
and very hard-to-follow construction sequences in many
cases, and is also where we get a lot of new feature requests,
so strict typing should help keep things clean.
Among the improvements here: fixed the pool .info getters,
and also figured out how to get ColumnCollection and
related to be covariant so that we may set them up
as returning Column or ColumnClause without any conflicts.
DDL was affected, noting that superclasses of DDLElement
(_DDLCompiles, added recently) can now be passed into
"ddl_if" callables; reorganized DDL into ExecutableDDLElement
as a new name for DDLElement, with _DDLCompiles renamed to
BaseDDLElement.
Setting up strict typing also located an API use case that
is completely broken: connection.execute(some_default)
returning a scalar value. This case has been deprecated
and new paths have been set up so that connection.scalar()
may be used. This likely wasn't possible in previous
versions because scalar() would assume a CursorResult.
The scalar() change also impacts Session, as we have explicit
support (since someone had reported it as a regression)
for session.execute(Sequence()) to work. Such callers will get the
same deprecation message (which omits the word "Connection",
just uses ".execute()" and ".scalar()") and they can then
use Session.scalar() as well. Getting this to type
correctly while still supporting ORM use cases required
some refactoring, and I also set up a keyword-only delimiter
for Session.execute() and related, as execution_options /
bind_arguments should always be keyword only; applied these
changes to AsyncSession as well.
Additionally, simplify Table.__init__; now that we are Python
3 only, we can finally have positional plus explicit kwargs.
Simplify Column.__init__ as well, again taking advantage
of keyword-only arguments.
Fill in most/all __init__ methods in sqltypes.py, as
the constructor for types is most of the API. Should
likely do this for dialect-specific types as well.
Apply _InfoType for all info attributes, as should have been
done originally, and update descriptor decorators.
Change-Id: I3f9f8ff3f1c8858471ff4545ac83d68c88107527
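A rough sketch of the deprecated pattern and its replacement described above (sequence name and ``connection`` variable hypothetical)::

    from sqlalchemy import Sequence

    seq = Sequence("user_id_seq")

    # deprecated: returns a scalar value directly from .execute()
    value = connection.execute(seq)

    # new spelling; Session.scalar() works the same way for the ORM case
    value = connection.scalar(seq)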
Change-Id: I42ed77f559e3ee5b8c600d98457ee37803ef0ea6
Full "RETURNING" support is implemented for the cx_Oracle dialect, meaning
multiple RETURNING rows are now recived for DML statements that produce
more than one row for RETURNING.
cx_Oracle 7 is now the minimum version for cx_Oracle.
Getting Oracle to do multirow returning took about 5 minutes. however,
getting Oracle's RETURNING system to integrate with ORM-enabled
insert, update, delete, is a big deal because that architecture wasn't
really working very robustly, including some recent changes in 1.4
for FromStatement were done in a hurry, so this patch also cleans up
the FromStatement situation and begins to establish it more concretely
as the base for all ReturnsRows / TextClause ORM scenarios.
Fixes: #6245
Change-Id: I2b4e6007affa51ce311d2d5baa3917f356ab961f
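For illustration, the kind of multi-row RETURNING that now works on cx_Oracle (table and ``connection`` names hypothetical)::

    from sqlalchemy import update

    stmt = (
        update(user_table)
        .where(user_table.c.status == "archived")
        .values(status="purged")
        .returning(user_table.c.id)
    )
    # may contain many rows, one per affected row
    ids = connection.execute(stmt).scalars().all()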
Scaled back a fix made for :ticket:`6581` where "executemany values" mode
for psycopg2 was disabled for all "ON CONFLICT" styles of INSERT, so that
it no longer applies to the "ON CONFLICT DO NOTHING" clause, which does not
include any parameters and is safe for "executemany values" mode. "ON CONFLICT
DO UPDATE" is still blocked from "executemany values", as there may
be additional parameters in the DO UPDATE clause that cannot be batched
(which is the original issue fixed by :ticket:`6581`).
Fixes: #7880
Change-Id: Id3e23a0c6699333409a50148fa8923cb8e564bdc
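A sketch of the statement shape that may again use "executemany values" mode (table and ``connection`` names hypothetical)::

    from sqlalchemy.dialects.postgresql import insert

    # DO NOTHING adds no extra parameters, so batching stays safe
    stmt = insert(my_table).on_conflict_do_nothing(index_elements=["id"])
    connection.execute(
        stmt,
        [{"id": 1, "data": "a"}, {"id": 2, "data": "b"}],
    )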
This just drove me nuts because it didn't include
render_derived() and doesn't run on PG as given.
Change-Id: I5d39336231c97b6cd5477644a718282709db2e1f
Strictly type type_api.py, including TypeDecorator,
NativeForEmulated, etc.
Change-Id: Ib2eba26de0981324a83733954cb7044a29bbd7db
Added :class:`.Double`, :class:`.DOUBLE`, :class:`.DOUBLE_PRECISION`
datatypes to the base ``sqlalchemy.`` module namespace, for explicit use of
double/double precision as well as generic "double" datatypes. Use
:class:`.Double` for generic support that will resolve to DOUBLE/DOUBLE
PRECISION/FLOAT as needed for different backends.
Implemented DDL and reflection support for ``FLOAT`` datatypes which
include an explicit "binary_precision" value. Using the Oracle-specific
:class:`_oracle.FLOAT` datatype, the new parameter
:paramref:`_oracle.FLOAT.binary_precision` may be specified which will
render Oracle's precision for floating point types directly. This value is
interpreted during reflection. Upon reflecting back a ``FLOAT`` datatype,
the datatype returned is one of :class:`_types.DOUBLE_PRECISION` for a
``FLOAT`` for a precision of 126 (this is also Oracle's default precision
for ``FLOAT``), :class:`_types.REAL` for a precision of 63, and
:class:`_oracle.FLOAT` for a custom precision, as per Oracle documentation.
As part of this change, the generic :paramref:`_sqltypes.Float.precision`
value is explicitly rejected when generating DDL for Oracle, as this
precision cannot be accurately converted to "binary precision"; instead, an
error message encourages the use of
:meth:`_sqltypes.TypeEngine.with_variant` so that Oracle's specific form of
precision may be chosen exactly. This is a backwards-incompatible change in
behavior, as the previous "precision" value was silently ignored for
Oracle.
Fixes: #5465
Closes: #7674
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7674
Pull-request-sha: 5c68419e5aee2e27bf21a8ac9eb5950d196c77e5
Change-Id: I831f4af3ee3b23fde02e8f6393c83e23dd7cd34d
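A sketch of the variant approach that the new error message points towards (table and column names hypothetical)::

    from sqlalchemy import Column, Double, Float, MetaData, Table
    from sqlalchemy.dialects import oracle

    metadata_obj = MetaData()
    measurements = Table(
        "measurements",
        metadata_obj,
        # generic Double resolves to DOUBLE / DOUBLE PRECISION / FLOAT per backend
        Column("value", Double()),
        # choose Oracle's binary precision explicitly rather than Float(precision=...)
        Column(
            "reading",
            Float(53).with_variant(oracle.FLOAT(binary_precision=126), "oracle"),
        ),
    )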
Fixes: #7647
Change-Id: I071f1a53714ebb0dc838fddc665640d46666318f
Added compiler support for the PostgreSQL ``NOT VALID`` phrase when rendering
DDL for the :class:`.CheckConstraint`, :class:`.ForeignKeyConstraint`
and :class:`.ForeignKey` schema constructs. Pull request courtesy
Gilbert Gilb's.
Fixes: #7600
Closes: #7601
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7601
Pull-request-sha: 78eecd55fd9fad07030d963f5fd6713c4af60e80
Change-Id: I84bfe84596856eeea2bcca45c04ad23d980a75ec
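A hedged sketch of the dialect keyword involved (assuming the ``postgresql_not_valid`` argument name; table and column names hypothetical)::

    from sqlalchemy import CheckConstraint, Column, Integer, MetaData, Table

    accounts = Table(
        "accounts",
        MetaData(),
        Column("balance", Integer),
        CheckConstraint(
            "balance >= 0",
            name="ck_accounts_balance",
            postgresql_not_valid=True,  # renders "NOT VALID" in PostgreSQL DDL
        ),
    )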
Fixes: #7225
Change-Id: Iddb78bf47ac733300bd12db50e16199cc22e9476
Added string rendering to the :class:`.postgresql.UUID` datatype, so that
stringifying a statement with "literal_binds" that uses this type will
render an appropriate string value for the PostgreSQL backend. Pull request
courtesy José Duarte.
Fixes: #7561
Closes: #7563
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7563
Pull-request-sha: cf6fe73265342d7884a940c4b3a34c9552113ec3
Change-Id: I4b162bdcdce2293a90683e36da54e4a891a3c684
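A sketch of the stringification now supported (table name hypothetical)::

    import uuid
    from sqlalchemy import select
    from sqlalchemy.dialects import postgresql

    stmt = select(user_table).where(user_table.c.uid == uuid.uuid4())
    # the UUID bound value now renders as a string literal for PostgreSQL
    print(
        stmt.compile(
            dialect=postgresql.dialect(),
            compile_kwargs={"literal_binds": True},
        )
    )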
Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
### Description
Black's `target-version` was still set to `['py27', 'py36']`. Set it to `[py37]` instead.
Also update Black and other pre-commit hooks and re-format with Black.
Closes: #7536
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7536
Pull-request-sha: b3aedf5570d7e0ba6c354e5989835260d0591b08
Change-Id: I8be85636fd2c9449b07a8626050c8bd35bd119d5
Fixed reflection of covering indexes to report ``include_columns`` as part
of the ``dialect_options`` entry in the reflected index dictionary, thereby
enabling round trips from reflection->create to be complete. Included
columns continue to also be present under the ``include_columns`` key for
backwards compatibility.
Fixes: #7382
Change-Id: I4f16b65caed3a36d405481690a3a92432b5efd62
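A sketch of the round trip under the new behavior (assuming a PostgreSQL covering index created with ``postgresql_include``; names hypothetical)::

    from sqlalchemy import inspect

    index_info = inspect(engine).get_indexes("mytable")[0]

    # new location, usable directly when re-creating the index
    index_info["dialect_options"]["postgresql_include"]

    # still present for backwards compatibility
    index_info["include_columns"]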
This is so that dialect methods that are called within init
can assume the same argument structure as when they are called
in other places; we can nail down the type of object as well.
This change seems to mostly impact the isolation level routines
in the dialects, as these are called during initialize()
as well as on established connections. These methods can now
assume a non-proxied DBAPI connection object in all cases,
as it is commonly required that attributes like ".autocommit"
are set on the object, which doesn't work well in a proxied
situation.
Other changes:
* adds an interface for the "connection fairy" concept
called PoolProxiedConnection.
* Removes ``Connectable`` superclass of Connection.
``Connectable`` was originally meant to provide for the
"method which accepts connection or engine" theme. As this
pattern is greatly reduced in 2.0 and Engine no longer extends
from it, the ``Connectable`` superclass doesn't serve any real
purpose.
Leading from that, to set this up I also applied PEP 484 annotations
to the Dialect base, and then, in the interests of seeing some
of the typing information show up in my IDE, did a little bit for Engine,
Connection and others. I hope that it's feasible that we can
add annotations to specific classes and attributes ahead of when we
actually try to mass-populate the whole library. This was
the original spirit of PEP 484: that we can apply annotations
gradually. I do of course want to try to do a mass-populate,
although I think even in that case we will end up doing a lot
of manual work anyway (in particular for the changes here, which
are distinct from what the stubs have).
Fixes: #7122
Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
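As a rough sketch of the distinction involved (attribute spellings per the 2.0-style pool API; ``engine`` assumed)::

    with engine.connect() as conn:
        proxied = conn.connection              # PoolProxiedConnection ("connection fairy")
        dbapi_conn = proxied.dbapi_connection  # the raw driver-level connection

        # dialect isolation-level methods now always receive the raw object
        engine.dialect.get_isolation_level(dbapi_conn)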
Both sync and async versions are supported.
Fixes: #6842
Change-Id: I57751c5028acebfc6f9c43572562405453a2f2a4
Add a new system so that PostgreSQL and other dialects have a
reliable way to add casts to bound parameters in SQL statements,
replacing previous use of setinputsizes() for PG dialects.
Rationale:
1. psycopg3 will be using the same SQLAlchemy-side "setinputsizes"
as asyncpg, so we will be seeing a lot more of this
2. the full rendering that SQLAlchemy's compilation is performing
is in the engine log as well as error messages. Without this change,
we have three levels of SQL rendering: the compiler, the
hidden "setinputsizes" in SQLAlchemy, and then whatever the DBAPI
driver does. With this new approach, users reporting bugs etc.
will be less confused by there being as many as two separate
layers of "hidden rendering"; SQLAlchemy's rendering is again
fully transparent
3. calling upon a setinputsizes() method for every statement execution
is expensive. This way, the work is done behind the caching layer
4. for "fast insertmany()", I also want there to be a fast approach
towards setinputsizes. As it was, we were going to be taking
a SQL INSERT with thousands of bound parameter placeholders and
running a whole second pass on it to apply typecasts. This way,
we will at least be able to build the SQL string once without a huge
second pass over the whole string
5. psycopg2 can use this same system for its ARRAY casts
6. the general need for PostgreSQL to have lots of type casts
is now mostly in the base PostgreSQL dialect and works independently
of a DBAPI being present. Dependence on DBAPI symbols that aren't
complete / consistent / hashable is removed
I was originally going to try to build this into bind_expression(),
but it was revealed this worked poorly with custom bind_expression()
as well as empty sets. The current implementation also doesn't need to
run a second expression pass over the POSTCOMPILE sections, which
came out better than I originally thought it would.
Change-Id: I363e6d593d059add7bcc6d1f6c3f91dd2e683c0c
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
The _CompileLabel class included ``__slots__``, but these
had no effect given the slots behavior of its superclasses.
Create a ``__slots__`` superclass for ``ClauseElement``,
creating a new class of compilable SQL elements that don't
include heavier features like caching, annotations and
cloning, which are meant to be used only in an ad-hoc
compiler fashion. Create new ``CompilerColumnElement``
from that which serves in column-oriented contexts, but
similarly does not include any expression operator support
as it is intended to be used only to generate a string.
Apply this to both
``_CompileLabel`` as well as PostgreSQL ``_ColonCast``,
which does not actually subclass ``ColumnElement`` as this
class has memoized attributes that aren't worth changing,
and does not include SQL operator capabilities as these
are not needed for these compiler-only objects.
This allows us to more inexpensively add new ad-hoc
labels / casts etc. at compile time, as we will be seeking
to expand out the typecasts that are needed for PostgreSQL
dialects in a subsequent patch.
Change-Id: I52973ae3295cb6e2eb0d7adc816c678a626643ed
Generalized the :paramref:`_sa.create_engine.isolation_level` parameter to
the base dialect so that it is no longer dependent on individual dialects
to be present. This parameter sets up the "isolation level" setting to
occur for all new database connections as soon as they are created by the
connection pool, where the value then stays set without being reset on
every checkin.
The :paramref:`_sa.create_engine.isolation_level` parameter is essentially
equivalent in functionality to using the
:paramref:`_engine.Engine.execution_options.isolation_level` parameter via
:meth:`_engine.Engine.execution_options` for an engine-wide setting. The
difference is that the former setting assigns the isolation level just
once when a connection is created, whereas the latter sets and resets the given
level on each connection checkout.
Fixes: #6342
Change-Id: Id81d6b1c1a94371d901ada728a610696e09e9741
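The two spellings described above, side by side::

    from sqlalchemy import create_engine

    # set once when each new connection is created by the pool
    eng = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ",
    )

    # set and reset on every connection checkout
    eng = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test"
    ).execution_options(isolation_level="REPEATABLE READ")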
* add a new section to reflection.rst `Schemas and Reflection`.
* this contains some text from the ticket
* migrate some text from `Specifying the Schema Name` to new section
* migrate some text from PostgreSQL dialect to new section
* target text is made more generic
* cross-reference the postgres and new sections to one another, to avoid duplication of docs
* update some docs 'meta' to 'metadata_obj'
Fixes: #4387
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Change-Id: I2b08672753fb2575d30ada07ead77587468fdade
Fixes: #6960
Even though a default driver still exists for
each dialect, remove most usages of `dialect://`
to encourage users to explicitly specify
`dialect+driver://`
Change-Id: I0ad42167582df509138fca64996bbb53e379b1af
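For example, in documentation and examples::

    from sqlalchemy import create_engine

    # preferred: driver named explicitly
    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    # still works, but relies on the dialect's default driver
    engine = create_engine("postgresql://scott:tiger@localhost/test")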
The major action here is to lift and move future.Connection
and future.Engine fully into sqlalchemy.engine.base. This
removes lots of engine concepts, including:
* autocommit
* Connection running without a transaction, autobegin
is now present in all cases
* most "autorollback" is obsolete
* Core-level subtransactions (i.e. MarkerTransaction)
* "branched" connections, copies of connections
* execution_options() returns self, not a new connection
* old argument formats, distill_params(), simplifies calling
scheme between engine methods
* before/after_execute() events (oriented towards compiled constructs)
don't emit for exec_driver_sql(). before/after_cursor_execute()
is still included for this
* old helper methods superseded by context managers, connection.transaction(),
engine.transaction(), engine.run_callable()
* ancient engine-level reflection methods has_table(), table_names()
* sqlalchemy.testing.engines.proxying_engine
References: #7257
Change-Id: Ib20ed816642d873b84221378a9ec34480e01e82c
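A brief sketch of the resulting 2.0-style usage, where a transaction always autobegins and is ended explicitly (``engine`` assumed)::

    from sqlalchemy import text

    with engine.connect() as conn:
        conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": 1})
        conn.commit()  # no autocommit; the autobegun transaction is committed here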
Fixed issue where "expanding IN" would fail to function correctly with
datatypes that use the :meth:`_types.TypeEngine.bind_expression` method,
where the method would need to be applied to each element of the
IN expression rather than the overall IN expression itself.
Fixed issue where IN expressions against a series of array elements, as can
be done with PostgreSQL, would fail to function correctly due to multiple
issues within the "expanding IN" feature of SQLAlchemy Core that was
standardized in version 1.4. The psycopg2 dialect now makes use of the
:meth:`_types.TypeEngine.bind_expression` method with :class:`_types.ARRAY`
to portably apply the correct casts to elements. The asyncpg dialect was
not affected by this issue as it applies bind-level casts at the driver
level rather than at the compiler level.
As part of this commit, the "bind translate" feature has been
simplified and also applies to the names in the POSTCOMPILE tag to
accommodate brackets.
Fixes: #7177
Change-Id: I08c703adb0a9bd6f5aeee5de3ff6f03cccdccdc5
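A sketch of the kind of expression now handled correctly on psycopg2 (column standing in for a real table column)::

    from sqlalchemy import Integer, column, select
    from sqlalchemy.dialects.postgresql import ARRAY

    # "expanding IN" against a series of array elements; each element now
    # receives the cast supplied by ARRAY via bind_expression()
    values = column("values", ARRAY(Integer))
    stmt = select(values).where(values.in_([[1, 2], [3, 4]]))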
Clarify that match() emits `to_tsquery`, which expects input text
to be in PostgreSQL's own format.
Change-Id: Id723032bca2eededc03ac30681c0dd4ddf76c232
Any UPPERCASE datatype refers to that exact type name rendered
in the database, so PG's ENUM must render "ENUM" and is
"native" by definition. Warn if this flag is passed.
The :class:`_postgresql.ENUM` datatype is PostgreSQL-native and therefore
should not be used with the ``native_enum=False`` flag. This flag is now
ignored if passed to the :class:`_postgresql.ENUM` datatype and a warning
is emitted; previously the flag would cause the type object to fail to
function correctly.
Fixes: #6106
Change-Id: I08e0ec6fcfafd068e1eaf6aec13c8010f09ce94a
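A sketch contrasting the PG-specific and generic types::

    from sqlalchemy import Enum
    from sqlalchemy.dialects.postgresql import ENUM

    # PostgreSQL-native by definition; native_enum=False is now ignored with a warning
    status_pg = ENUM("active", "inactive", name="status_enum")

    # to get VARCHAR + CHECK constraint behavior, use the generic Enum instead
    status_generic = Enum("active", "inactive", name="status_enum", native_enum=False)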
Fixes: #6912
Closes: #6920
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/6920
Pull-request-sha: 79af75dfddef25435afd9623698354d280d7c879
Change-Id: Ib6b472452f978378d9f511d17a26988323a89459
Fixed issue where a constraint name that was too long, rendered as part of the
"ON CONFLICT ON CONSTRAINT" element of the :class:`_postgresql.Insert`
construct due to naming convention generation, would not be correctly
truncated in the same way that it normally is within a CREATE TABLE
statement, thus producing a non-matching and too-long constraint name.
Fixes: #6755
Change-Id: Ib27014a5ecbc9cd5861a396f8bb49fbc60bf49fe
To service #6718 and #6710, the system by which columns are
given labels in a SELECT statement as well as the system that
gives them keys in a .c or .selected_columns collection have
been refactored to provide a single source of truth for
both, in contrast to the previous approach that included
similar logic repeated in slightly different ways.
Main ideas:
1. ColumnElement attributes ._label, ._anon_label, ._key_label
are renamed to include the letters "tq", meaning
"table-qualified" - these labels are only used when rendering
a SELECT that has LABEL_STYLE_TABLENAME_PLUS_COL for its
label style; as this label style is primarily legacy, the
"tq" names should be isolated so that in a 2.0 style application
these aren't being used at all
2. The means by which the "labels" and "proxy keys" for the elements
of a SELECT has been centralized to a single source of truth;
previously, the three of _generate_columns_plus_names,
_generate_fromclause_column_proxies, and _column_naming_convention
all had duplicated rules between them, and there was also
a bit of labeling logic in compiler._label_select_column;
by this we mean that the various "anon_label" / "anon_key"
methods on ColumnElement were called by all four of these methods,
and in many cases it was necessary that one method
come up with the same answer as another of the methods. This
has all been centralized into _generate_columns_plus_names
for all the names except the "proxy key", which is generated
by _column_naming_convention.
3. compiler._label_select_column has been rewritten to both not make
any naming decisions nor any "proxy key" decisions, only whether
to label or not to label; the _generate_columns_plus_names method
gives it the information, where the proxy keys come from
_column_naming_convention; previously, these proxy keys were matched
based on restatement of similar (but not really the same) logic in
two places. The heuristics of "whether to label or not to label"
are also reorganized to be much easier to read and understand.
4. a new method compiler._label_returning_column is added for dialects
to use in their "generate returning columns" methods. A
github search reveals a small number of third party dialects also
doing this using the prior _label_select_column method so we
try to make sure _label_select_column continues to work the
exact same way for that specific use case; for the "SELECT" use
case it now needs the information provided by _generate_columns_plus_names.
5. After some attempts to do it different ways, for the case where
_proxy_key is giving us some kind of anon label, we are hard
changing it to "_no_label" right now, as there's not currently
a way to fully match anonymized labels from stmt.c or
stmt.selected_columns to what will be in the result map. The
idea of "_no_label" is to encourage the user to use label('name')
for columns they want to be able to target by string name that
don't have a natural name.
Change-Id: I7a92a66f3a7e459ccf32587ac0a3c306650daf11
Fixed issue where the PostgreSQL ``ENUM`` datatype as embedded in the
``ARRAY`` datatype would fail to emit correctly in create/drop when the
``schema_translate_map`` feature was also in use. Additionally repairs a
related issue where the same ``schema_translate_map`` feature would not
work for the ``ENUM`` datatype in combination with a ``CAST``, that's also
intrinsic to how the ``ARRAY(ENUM)`` combination works on the PostgreSQL
dialect.
Fixes: #6739
Change-Id: I44b1ad4db4af3acbf639aa422c46c22dd3b0d3a6
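The feature combination at issue, roughly (schema, table, and ``engine`` names hypothetical)::

    from sqlalchemy import Column, MetaData, Table
    from sqlalchemy.dialects.postgresql import ARRAY, ENUM

    metadata_obj = MetaData(schema="per_tenant")
    t = Table(
        "t",
        metadata_obj,
        Column("tags", ARRAY(ENUM("a", "b", name="tag_enum", schema="per_tenant"))),
    )

    conn = engine.connect().execution_options(
        schema_translate_map={"per_tenant": "tenant_1"}
    )
    # CREATE TYPE and the CAST used by ARRAY(ENUM) now honor the translated schema
    metadata_obj.create_all(conn)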
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Fixed issue in :meth:`_postgresql.Insert.on_conflict_do_nothing` and
:meth:`_postgresql.Insert.on_conflict_do_update` where the name of a unique
constraint passed as the ``constraint`` parameter would not be properly
quoted if it contained characters which required quoting.
Fixes: #6696
Change-Id: I4ffca9b8c72cef4ed39e2de96831ccc11a620422
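The case at issue, roughly (table and constraint names hypothetical)::

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(user_table).values(id=1, name="x")
    stmt = stmt.on_conflict_do_update(
        constraint="Unique Name Constraint",  # a name requiring quoting is now quoted
        set_={"name": stmt.excluded.name},
    )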
Fixed issue where the ``INTERVAL`` datatype on PostgreSQL and Oracle would
produce an ``AttributeError`` when used in the context of a comparison
operation against a ``timedelta()`` object. Pull request courtesy
MajorDallas.
Fixes: #6649
Closes: #6650
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/6650
Pull-request-sha: dd217a975e5f0d3157e81c731791225b6a32889f
Change-Id: I773caf2673294fdb3c92b42895ad714e944d1bf8
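The comparison that previously raised ``AttributeError`` (table name hypothetical)::

    import datetime
    from sqlalchemy import select

    stmt = select(job_table).where(
        job_table.c.duration > datetime.timedelta(hours=1)
    )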
Change-Id: I9fc638995a7c49a3ca3bf3c439428cae95d3c7b9
References: #6637
Ensure that the MySQL and MariaDB dialect ignore the
:class:`_sql.Identity` construct while rendering the
``AUTO_INCREMENT`` keyword in a create table.
The Oracle and PostgreSQL compilers were updated to not render
:class:`_sql.Identity` if the database version does not support it
(Oracle < 12 and PostgreSQL < 10). Previously it was rendered regardless
of the database version.
Fixes: #6338
Change-Id: I2ca0902fdd7b4be4fc1a563cf5585504cbea9360
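For reference, the construct involved::

    from sqlalchemy import Column, Identity, Integer, MetaData, Table

    t = Table(
        "t",
        MetaData(),
        # MySQL/MariaDB render AUTO_INCREMENT and ignore the Identity options;
        # Oracle < 12 / PostgreSQL < 10 now omit the IDENTITY clause entirely
        Column("id", Integer, Identity(start=1), primary_key=True),
    )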
Fixed an argument error in the default and PostgreSQL compilers that
would interfere with an UPDATE..FROM or DELETE..FROM..USING statement
that was then SELECTed from as a CTE.
The incorrect pattern was also fixed in the MySQL and Sybase dialects.
MySQL supports CTEs but not "returning".
Fixes: #6303
Change-Id: Ic94805611a5ec443749fb6b1fd8a1326b0d83ef7
Fixed regression where the introduction of the INSERT syntax "INSERT...
VALUES (DEFAULT)" was not supported on some backends that do however
support "INSERT..DEFAULT VALUES", including SQLite. The two syntaxes are
now each individually supported or non-supported for each dialect, for
example MySQL supports "VALUES (DEFAULT)" but not "DEFAULT VALUES".
Support for Oracle is still not enabled as there are unresolved issues
in using RETURNING at the same time.
Fixes: #6254
Change-Id: I47959bc826e3d9d2396ccfa290eb084841b02e77
The :meth:`_engine.Dialect.has_table` method now raises an informative
exception if a non-Connection is passed to it, as this incorrect behavior
seems to be common. This method is not intended for external use outside
of a dialect. Please use the :meth:`.Inspector.has_table` method
or for cross-compatibility with older SQLAlchemy versions, the
:meth:`_engine.Engine.has_table` method.
Fixes: #5780
Fixes: #6062
Fixes: #6260
Change-Id: I9b2439675167019b68d682edee3dcdcfce836987
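The recommended spellings (``engine`` assumed)::

    from sqlalchemy import inspect

    # preferred
    inspect(engine).has_table("some_table")

    # cross-compatible with older SQLAlchemy versions
    engine.has_table("some_table")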
Introduce a new parameter :paramref:`_types.Enum.omit_aliases` in
the :class:`_types.Enum` type to allow filtering aliases when using a pep435 Enum.
Previous versions of SQLAlchemy kept aliases in all cases, creating
database enum type with additional states, meaning that they were treated
as different values in the db. For backward compatibility this flag
defaults to ``False`` in the 1.4 series, but will be switched to ``True``
in a future version. A deprecation warning is raised if this flag is not
specified and the passed enum contains aliases.
Fixes: #6146
Change-Id: I547322ffa90d0273d91bb3bf8bfea6ec934d48b9
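A sketch of the new flag with a pep435 enum containing an alias::

    import enum

    from sqlalchemy import Enum


    class Color(enum.Enum):
        RED = 1
        CRIMSON = 1  # alias of RED


    # the database-side enum will contain only RED; if the flag is left
    # unspecified and the enum contains aliases, a deprecation warning is emitted
    color_type = Enum(Color, omit_aliases=True)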
Added a new flag to the :class:`_engine.Dialect` class called
:attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
directly on a dialect class in order for SQLAlchemy's
:ref:`query cache <sql_caching>` to take effect for that dialect. The
rationale is based on discovered issues such as :ticket:`6173` revealing
that dialects which hardcode literal values from the compiled statement,
often the numerical parameters used for LIMIT / OFFSET, will not be
compatible with caching until these dialects are revised to use the
parameters present in the statement only. For third party dialects where
this flag is not applied, the SQL logging will show the message "dialect
does not support caching", indicating the dialect should seek to apply this
flag once it has been verified that no per-statement literal values are being
rendered within the compilation phase.
Fixes: #6184
Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
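What a third-party dialect adds once verified, roughly::

    from sqlalchemy.engine.default import DefaultDialect


    class MyThirdPartyDialect(DefaultDialect):
        # opt in to SQLAlchemy's query cache once the dialect is verified not to
        # hardcode per-statement literal values (e.g. LIMIT / OFFSET) at compile time
        supports_statement_cache = True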
Fixed regression caused by :ticket:`6023` where the PostgreSQL cast
operator applied to elements within an :class:`_types.ARRAY` when using
psycopg2 would fail to use the correct type in the case that the datatype
were also embedded within an instance of the :class:`_types.Variant`
adapter.
Additionally, repairs support for the correct CREATE TYPE to be emitted
when using a ``Variant(ARRAY(some_schema_type))``.
Fixes: #6182
Change-Id: I1b9ba7c876980d4650715a0b0801b46bdc72860d