Fixed the base class for dialect-specific float/double types; Oracle
:class:`_oracle.BINARY_DOUBLE` now subclasses :class:`_sqltypes.Double`,
and internal types for :class:`_sqltypes.Float` for asyncpg and pg8000 now
correctly subclass :class:`_sqltypes.Float`.
Added suite tests to ensure that floating point types, such as
:class:`_types.Float` and :class:`_types.Double`, are not resolved as
:class:`_types.Numeric` in the dialect, since that may not be compatible
in all cases, such as when casting a value.
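As a rough sketch of the casting scenario (the literal value and the choice
of dialect here are arbitrary), a CAST against the generic Double type
should resolve to a floating point type rather than a NUMERIC::

    from sqlalchemy import Double, cast, literal, select
    from sqlalchemy.dialects import postgresql

    # an arbitrary floating point literal cast to the generic Double type
    stmt = select(cast(literal(3.14), Double()))

    # compiling against a dialect shows the dialect-level floating point type
    print(stmt.compile(dialect=postgresql.dialect()))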
Change-Id: I20b814e8e029d57921d9728a55f2570f74c35c87
Change-Id: I625af65b3fb1815b1af17dc2ef47dd697fdc3fb1
Fixes: #8491
Change-Id: I941d2a3cf92e5609e2045a53cec94522340951db
Rearchitected the schema reflection API to allow some dialects to make use
of high-performing batch queries to reflect the schemas of many tables at
once using far fewer queries. The new performance features are targeted
first at the PostgreSQL and Oracle backends, and may be applied to any
dialect that makes use of SELECT queries against system catalog tables to
reflect tables. This currently omits the MySQL and SQLite dialects, which
instead parse the "CREATE TABLE" statement; however, these dialects do not
have a pre-existing performance issue with reflection. MS SQL Server is
still a TODO.
The new API is backwards compatible with the previous system, and should
require no changes to third party dialects to retain compatibility;
third party dialects can also opt into the new system by implementing
batched queries for schema reflection.
Along with this change is an updated reflection API that is fully
:pep:`484` typed, features many new methods and some changes.
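From the calling side, the existing ``inspect()`` and ``MetaData.reflect()``
entry points are unchanged; a minimal sketch (the connection URL, schema and
table names are placeholders)::

    from sqlalchemy import MetaData, create_engine, inspect

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    # reflect many tables at once; dialects implementing the batched
    # queries do this with far fewer round trips
    metadata = MetaData()
    metadata.reflect(bind=engine, schema="public")

    # the Inspector API remains available per table as well
    insp = inspect(engine)
    columns = insp.get_columns("some_table", schema="public")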
Fixes: #4379
Change-Id: I897ec09843543aa7012bcdce758792ed3d415d08
Added new backend-agnostic :class:`_types.Uuid` datatype generalized from
the PostgreSQL dialects to now be a core type, as well as migrated
:class:`_types.UUID` from the PostgreSQL dialect. Thanks to Trevor Gross
for the help on this.
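A minimal sketch of the generalized type in use (the table and column names
are arbitrary)::

    import uuid

    from sqlalchemy import Column, MetaData, String, Table, Uuid

    metadata = MetaData()
    accounts = Table(
        "accounts",
        metadata,
        Column("id", Uuid, primary_key=True, default=uuid.uuid4),
        Column("name", String(50)),
    )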
also includes:
* corrects some missing behaviors in the suite literal fixtures
test where row round trips weren't being correctly asserted.
* fixes some of the ISO literal date rendering added in
952383f9ee0 for #5052 to truncate datetime strings for date/time
datatypes in the same way that drivers typically do for bound
parameters; this was not working fully and wasn't caught by the
broken test fixture
Fixes: #7212
Change-Id: I981ac6d34d278c18281c144430a528764c241b04
To simplify pyproject.toml, change the remaining files
that aren't going to be typed on this first pass
(unless of course someone wants to type some of these)
to include # mypy: ignore-errors. For the moment, only a handful
of ORM modules are to have more type checking implemented.
It's important that ignore-errors is used and
not "# type: ignore", as in the latter case, mypy doesn't even
read the existing types in the file, which makes it impossible to
type any files that refer to those modules at all.
To simplify ongoing typing work, use inline mypy config
for remaining files that are "done" for now, indicating the
level of type checking they currently have.
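A sketch of the difference at the top of a module (the function below is a
made-up placeholder)::

    # mypy: ignore-errors
    #
    # with the inline config above, mypy suppresses errors in this file but
    # still reads and exports its annotations, so modules that import from
    # it keep full type information.  A blanket "# type: ignore" would
    # instead prevent mypy from reading the types in the file at all.

    from typing import Optional


    def lookup(name: str) -> Optional[int]:
        ...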
Change-Id: I98669c1a305c2f0adba85d10b5425541f3fe9533
Added modified ISO-8601 rendering (i.e. ISO-8601 with the T converted to a
space) when using ``literal_binds`` with the SQL compilers provided by the
PostgreSQL, MySQL, MariaDB, MSSQL, Oracle dialects. For Oracle, the ISO
format is wrapped inside of an appropriate TO_DATE() function call.
Previously this rendering was not implemented for dialect-specific
compilation.
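A minimal sketch of reaching this rendering (the datetime value is
arbitrary)::

    import datetime

    from sqlalchemy import literal, select
    from sqlalchemy.dialects import postgresql

    stmt = select(literal(datetime.datetime(2021, 1, 15, 8, 30, 0)))

    # literal_binds renders the datetime inline in the modified ISO-8601
    # form rather than as a bound parameter
    print(
        stmt.compile(
            dialect=postgresql.dialect(),
            compile_kwargs={"literal_binds": True},
        )
    )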
Fixes: #5052
Change-Id: I7af15a51fedf5c5a8e76e645f7c3be997ece35f0
All modules in sqlalchemy.engine are strictly
typed with the exception of cursor, default, and
reflection. cursor and default pass with non-strict
typing, reflection is waiting on the multi-reflection
refactor.
Behavioral changes:
* create_connect_args() methods return a tuple of (list,
dict), rather than a list of [list, dict]; see the sketch
below
* removed allow_chars parameter from
pyodbc connector ._get_server_version_info()
method
* the parameter list passed to do_executemany is now
a list in all cases. Previously, this was being run
through dialect.execute_sequence_format, which
defaults to tuple and was only intended for individual
tuple params.
* broke up dialect.dbapi into the dialect.import_dbapi
class method and the dialect.dbapi module object, and added
a deprecation path for legacy dialects. It's not
really feasible to type a single attr as a classmethod
vs. module type. The "type_compiler" attribute also
has this problem with greater ability to work around;
left that one for now.
* lots of constants changed to be Enum, so that we can
type them. For fixed tuple-position constants in
cursor.py / compiler.py (which are used to avoid the
speed overhead of namedtuple), Literal[value] is used,
which seems to work well
* some tightening up in Row regarding __getitem__, which
we can do since we are on full 2.0 style result use
* altered the set_connection_execution_options and
set_engine_execution_options event flows so that the
dictionary of options may be mutated within the event
hook, where it will then take effect as the actual
options used. Previously, changing the dict would
be silently ignored which seems counter-intuitive
and not very useful.
* A lot of DefaultDialect/DefaultExecutionContext
methods and attributes, including underscored ones, move
to interfaces. This is not fully ideal as it means
the Dialect/ExecutionContext interfaces aren't publicly
subclassable directly, but their current purpose
is more of documentation for dialect authors who should
(and certainly are) still be subclassing the DefaultXYZ
versions in all cases
Overall, Result was by far the most difficult class
hierarchy to type here, as this hierarchy passes
largely amorphous "row" datatypes throughout, which
can in fact be all kinds of different things, like
raw DBAPI rows, Row objects, or "scalar"/Any. At the
same time these types have meaning, so I tried to
maintain some level of semantic markings for them.
This highlights how complex Result is now, as it's trying
to be extremely efficient and inlined while also being
very open-ended and extensible.
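A sketch of the create_connect_args() return format under the new
convention (the dialect class and URL translation shown are hypothetical)::

    from sqlalchemy.engine import URL
    from sqlalchemy.engine.default import DefaultDialect


    class MyDialect(DefaultDialect):
        name = "mydb"
        driver = "mydriver"

        def create_connect_args(self, url: URL):
            opts = url.translate_connect_args(username="user")
            opts.update(url.query)
            # return a (args, kwargs) tuple, not a [args, kwargs] list
            return ([], opts)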
Change-Id: I98b75c0c09eab5355fc7a33ba41dd9874274f12a
Change-Id: I49abf2607e0eb0623650efdf0091b1fb3db737ea
Re-implement the C versions of immutabledict / processors / resultproxy / utils with Cython.
Performance is in general on par with or better than the C version.
Added a collection module that has Cython versions of OrderedSet and IdentitySet.
Added a new test/perf file to compare the implementations.
Run ``python test/perf/compiled_extensions.py all`` to execute the comparison test.
See results here: https://docs.google.com/document/d/1nOcDGojHRtXEkuy4vNXcW_XOJd9gqKhSeALGG3kYr6A/edit?usp=sharing
Fixes: #7256
Change-Id: I2930ef1894b5048210384728118e586e813f6a76
Signed-off-by: Federico Caselli <cfederico87@gmail.com>
This is so that dialect methods that are called within init
can assume the same argument structure as when they are called
in other places; we can nail down the type of object as well.
This change seems to mostly impact the isolation level routines
in the dialects, as these are called during initialize()
as well as on established connections. These methods can now
assume a non-proxied DBAPI connection object in all cases,
as it is commonly required that attributes like ".autocommit"
be set on the object, which doesn't work well in a proxied
situation.
Other changes:
* adds an interface for the "connectionfairy" concept
called PoolProxiedConnection.
* Removes ``Connectable`` superclass of Connection.
``Connectable`` was originally meant to provide for the
"method which accepts connection or engine" theme. As this
pattern is greatly reduced in 2.0 and Engine no longer extends
from it, the ``Connectable`` superclass doesn't serve any real
purpose.
Leading from that, to set this up I also applied pep 484 annotations
to the Dialect base, and then, in the interest of seeing some
of the typing information show up in my IDE, did a little bit for Engine,
Connection and others. I hope that it's feasible that we can
add annotations to specific classes and attributes ahead of when we
actually try to mass-populate the whole library. This was
the original spirit of pep-484, that we can apply annotations
gradually. I do of course want to try to do a mass-populate,
although I think even in that case we will end up doing a lot
of manual work anyway (in particular for the changes here, which
are distinct from what the stubs have).
Fixes: #7122
Change-Id: I5dd7fbff8a7ae520a81c165091af12a6a68826db
Add a new system so that PostgreSQL and other dialects have a
reliable way to add casts to bound parameters in SQL statements,
replacing previous use of setinputsizes() for PG dialects.
rationale:
1. psycopg3 will be using the same SQLAlchemy-side "setinputsizes"
as asyncpg, so we will be seeing a lot more of this
2. the full rendering that SQLAlchemy's compilation is performing
is in the engine log as well as error messages. Without this,
we introduce three levels of SQL rendering, the compiler, the
hidden "setinputsizes" in SQLAlchemy, and then whatever the DBAPI
driver does. With this new approach, users reporting bugs etc.
will be less confused that there are as many as two separate
layers of "hidden rendering"; SQLAlchemy's rendering is again
fully transparent
3. calling upon a setinputsizes() method for every statement execution
is expensive. this way, the work is done behind the caching layer
4. for "fast insertmany()", I also want there to be a fast approach
towards setinputsizes. As it was, we were going to be taking
a SQL INSERT with thousands of bound parameter placeholders and
running a whole second pass on it to apply typecasts. this way,
we will at least be able to build the SQL string once without a huge
second pass over the whole string
5. psycopg2 can use this same system for its ARRAY casts
6. the general need for PostgreSQL to have lots of type casts
is now mostly in the base PostgreSQL dialect and works independently
of a DBAPI being present. dependence on DBAPI symbols that aren't
complete / consistent / hashable is removed
I was originally going to try to build this into bind_expression(),
but it turned out this worked poorly with custom bind_expression()
as well as empty sets. The current impl also doesn't need to
run a second expression pass over the POSTCOMPILE sections, which
came out better than I originally thought it would.
Change-Id: I363e6d593d059add7bcc6d1f6c3f91dd2e683c0c
Change-Id: I8172fdcc3103ff92aa049827728484c8779af6b7
Generalized the :paramref:`_sa.create_engine.isolation_level` parameter to
the base dialect so that it is no longer dependent on individual dialects
to be present. This parameter sets up the "isolation level" setting to
occur for all new database connections as soon as they are created by the
connection pool, where the value then stays set without being reset on
every checkin.
The :paramref:`_sa.create_engine.isolation_level` parameter is essentially
equivalent in functionality to using the
:paramref:`_engine.Engine.execution_options.isolation_level` parameter via
:meth:`_engine.Engine.execution_options` for an engine-wide setting. The
difference is that the former setting assigns the isolation level just
once when a connection is created, while the latter sets and resets the
given level on each connection checkout.
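A minimal sketch of the two approaches (the connection URL is a
placeholder)::

    from sqlalchemy import create_engine

    # applied once to each new connection as it is created by the pool
    eng = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ",
    )

    # roughly equivalent, but set and reset on each connection checkout
    eng = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test"
    ).execution_options(isolation_level="REPEATABLE READ")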
Fixes: #6342
Change-Id: Id81d6b1c1a94371d901ada728a610696e09e9741
References: #6023
Change-Id: I0f6cbc34b3c0bfc0b8c86b3ebe4531e23039b6c0
Improve the interface used by adapted drivers, like the asyncio ones,
to access the actual connection object returned by the driver.
The :class:`_engine._ConnectionRecord` and
:class:`_engine._ConnectionFairy` now have two new attributes:
* ``dbapi_connection`` always represents a DBAPI compatible
object. For pep-249 drivers, this is the DBAPI connection as it always
has been, previously accessed under the ``.connection`` attribute.
For asyncio drivers that SQLAlchemy adapts into a pep-249 interface,
the returned object will normally be a SQLAlchemy adaption object
called :class:`_engine.AdaptedConnection`.
* ``driver_connection`` always represents the actual connection object
maintained by the third party pep-249 DBAPI or async driver in use.
For standard pep-249 DBAPIs, this will always be the same object
as that of the ``dbapi_connection``. For an asyncio driver, it will be
the underlying asyncio-only connection object.
The ``.connection`` attribute remains available and is now a legacy alias
of ``.dbapi_connection``.
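A minimal sketch against a plain pep-249 driver (the URL is a placeholder;
with an asyncio driver the two attributes would differ)::

    from sqlalchemy import create_engine

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    fairy = engine.raw_connection()    # pool-proxied connection
    pep249 = fairy.dbapi_connection    # pep-249 compatible connection
    driver = fairy.driver_connection   # the driver's own connection object
    assert pep249 is driver            # identical for a plain pep-249 driver
    fairy.close()                      # return the connection to the pool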
Fixes: #6832
Change-Id: Ib72f97deefca96dce4e61e7c38ba430068d6a82e
Also replace http://pypi.python.org/pypi with https://pypi.org/project
Change-Id: I84b5005c39969a82140706472989f2a30b0c7685
Change-Id: I417b4d19c2c17a73ba9c95d59f1562ad5dab2d35
Change-Id: I8ead04dd572f0c0020c226254543eb7d93876ee4
### Description
This change adds support for stream_results for the pg8000 dialect by adding a server side cursor. The server-side cursor is a wrapper around a standard DBAPI cursor, and uses the SQL-level cursors. This is being discussed in issue https://github.com/sqlalchemy/sqlalchemy/issues/6198 and this pull request is really to give a concrete example of what I was suggesting.
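A minimal sketch of the feature from the calling side (the URL and table name are placeholders):

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+pg8000://scott:tiger@localhost/test")

with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(
        text("SELECT * FROM big_table")
    )
    for row in result:
        ...  # rows are fetched in batches via the server side cursor
```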
### Checklist
This pull request is:
- [x] A new feature implementation
Closes: #6356
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/6356
Pull-request-sha: 071e118a6b09a26c511b39b0d589ebd2de8d508c
Change-Id: Id1a865adf0ff64294c71814681f5b4d593939db6
Added a new flag to the :class:`_engine.Dialect` class called
:attr:`_engine.Dialect.supports_statement_cache`. This flag now needs to be present
directly on a dialect class in order for SQLAlchemy's
:ref:`query cache <sql_caching>` to take effect for that dialect. The
rationale is based on discovered issues such as :ticket:`6173` revealing
that dialects which hardcode literal values from the compiled statement,
often the numerical parameters used for LIMIT / OFFSET, will not be
compatible with caching until these dialects are revised to use the
parameters present in the statement only. For third party dialects where
this flag is not applied, the SQL logging will show the message "dialect
does not support caching", indicating that the dialect should apply this
flag once it has been verified that no per-statement literal values are
being rendered within the compilation phase.
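A sketch of how a third party dialect would opt in (the dialect class shown
is hypothetical)::

    from sqlalchemy.engine.default import DefaultDialect


    class MyThirdPartyDialect(DefaultDialect):
        name = "mydb"

        # opt in to the SQL compilation cache, after verifying that no
        # per-statement literal values are rendered during compilation
        supports_statement_cache = True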
Fixes: #6184
Change-Id: I6fd5b5d94200458d4cb0e14f2f556dbc25e27e22
For unknown reasons, a space character got into the error message
being tested in #6099. The fix is entirely non-working in 1.4.4.
Fixes: #6099
Change-Id: Ib2929be59d62eb2446fc8f040293c9228df7a531
Modified the ``is_disconnect()`` handler for the pg8000 dialect, which now
accommodates for a new ``InterfaceError`` emitted by pg8000 1.19.0. Pull
request courtesy Hamdi Burak Usul.
Fixes: #6099
Closes: #6150
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/6150
Pull-request-sha: cd58836d3e19489d5203c02f7cc5f2f2d7c82a20
Change-Id: Ief942e53f6d90c48e8d77c70948fd46eb6c90dbd
Change-Id: Ic5bb19ca8be3cb47c95a0d3315d84cb484bac47c
Reworked the "setinputsizes()" set of dialect hooks to be correctly
extensible for any arbitrary DBAPI, by allowing dialects individual hooks
that may invoke cursor.setinputsizes() in the appropriate style for that
DBAPI. In particular this is intended to support pyodbc's style of usage
which is fundamentally different from that of cx_Oracle. Added support
for pyodbc.
Fixes: #5649
Change-Id: I9f1794f8368bf3663a286932cfe3992dae244a10
Due to https://github.com/tlocke/pg8000/commit/3a2e7439ae3613367ec231218d7e0f541466d1e5
we no longer decode the description. Pin to 1.16.6 as the minimum version
so that we don't need to track version changes in code.
Change-Id: I192c851eb2337f37467560a3cbb87f7235884788
Change-Id: I649662d440f83df379922e8c967d28f635f9c85b
Added support for PostgreSQL "readonly" and "deferrable" flags for all of
psycopg2, asyncpg and pg8000 dialects. This takes advantage of a newly
generalized version of the "isolation level" API to support other kinds of
session attributes set via execution options that are reliably reset
when connections are returned to the connection pool.
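A minimal sketch of the new execution options (the URL is a placeholder)::

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.connect().execution_options(
        postgresql_readonly=True, postgresql_deferrable=True
    ) as conn:
        conn.execute(text("SELECT 1"))
    # the READ ONLY / DEFERRABLE session settings are reset when the
    # connection is returned to the pool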
Fixes: #5549
Change-Id: I0ad6d7a095e49d331618274c40ce75c76afdc7dd
The pg8000 dialect has been revised and modernized for the most recent
version of the pg8000 driver for PostgreSQL. Changes to the dialect
include:
* All data types are now sent as text rather than binary.
* Using adapters, custom types can be plugged in to pg8000.
* Previously, named prepared statements were used for all statements.
Now unnamed prepared statements are used by default, and named
prepared statements can be used explicitly by calling the
Connection.prepare() method, which returns a PreparedStatement
object.
Pull request courtesy Tony Locke.
Notes by Mike: to get this all working it was necessary to break
up JSONIndexType into "str" and "int" subtypes; this will be
needed for any dialect that is dependent on setinputsizes().
Also includes @caselit's idea to include query params
in the dbdriver parameter.
Co-authored-by: Mike Bayer <mike_mp@zzzcomputing.com>
Closes: #5451
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5451
Pull-request-sha: 639751ca9c7544801b9ede02e6cbe15a16c59c82
Change-Id: I2869bc52c330916773a41d11d12c297aecc8fcd8
Fixes: #5482
All supported Python versions provide the 'uuid' module.
Closes: #5483
Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/5483
Pull-request-sha: fc32498a8b639ff21d5898100592782826d2c6dd
Change-Id: I8b41b811da7576f724353425dad5d6f581641b4b
includes more replacements for create_engine(), Connection,
disambiguation of Result from future/baked
Change-Id: Icb60a79ee7a6c45ea9056c211ffd1be110da3b5e
Change-Id: I08440dc25e40ea1ccea1778f6ee9e28a00808235
Since we have strong CI for the DBAPIs and dialects that are actively
supported, this indicates that those DBAPIs that aren't in CI are
continuing to fall behind in support, to the point where we can
not address issues that may arise. As such, the Sybase and Firebird
dialects overall are moving into an explicit "not supported" zone
where we would like to eventually remove them. Additionally,
a pass is made through legacy MySQL and PostgreSQL DBAPI dialects
as well as those which we aren't able to include in CI to note
that these DBAPIs aren't actively supported by the project.
Change-Id: I61f1515b97b741b7534b54e434e3e47065df7b5d
Change-Id: I6a71f4924d046cf306961c58dffccf21e9c03911
Applied on top of a pure run of black -l 79 in
I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9, this set of changes
resolves all remaining flake8 conditions for those codes
we have enabled in setup.cfg.
Included are resolutions for all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I4f72d3ba1380dd601610ff80b8fb06a2aff8b0fe
This is a straight reformat run using black as is, with no edits
applied at all.
The black run will format code consistently, however in
some cases that are prevalent in SQLAlchemy code it produces
too-long lines. The too-long lines will be resolved in the
following commit that will resolve all remaining flake8 issues
including shadowed builtins, long lines, import order, unused
imports, duplicate imports, and docstring issues.
Change-Id: I7eda77fed3d8e73df84b3651fd6cfcfe858d4dc9
Change-Id: I3ef36bfd0cb0ba62b3123c8cf92370a43156cf8f
Revise the fix from 03560c4b83308719067ec635662c35f9a437fb7f
to use compat.text_type for py3k compatibility
Change-Id: Ia6807bd4de3bba4b33b5327a1be7e728b45eb093
Enabled UUID support for the pg8000 driver, which supports native Python
uuid round trips for this datatype. Arrays of UUID are still not supported,
however.
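A minimal sketch of a native uuid round trip declaration (the table name is
arbitrary)::

    import uuid

    from sqlalchemy import Column, MetaData, Table
    from sqlalchemy.dialects.postgresql import UUID

    metadata = MetaData()
    widgets = Table(
        "widgets",
        metadata,
        Column("id", UUID(as_uuid=True), primary_key=True, default=uuid.uuid4),
    )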
Change-Id: I44ca323c5d9f2cd87327210233bc36a3556eb050
Fixes: #4016
Fixed bug where the pg8000 driver would fail if using
:meth:`.MetaData.reflect` with a schema name, since the schema name would
be sent as a "quoted_name" object that's a string subclass, which pg8000
doesn't recognize. The quoted_name type is added to pg8000's
py_types collection on connect.
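A minimal sketch of the reflection call that previously failed (the URL and
schema name are placeholders)::

    from sqlalchemy import MetaData, create_engine

    engine = create_engine("postgresql+pg8000://scott:tiger@localhost/test")

    metadata = MetaData()
    # the schema name travels as a quoted_name (a str subclass); the dialect
    # now registers that type with pg8000 on connect
    metadata.reflect(bind=engine, schema="test_schema")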
Change-Id: Id0f838320cb66563685e094e4eae2d5116100d27
Fixes: #4041
Change-Id: I4e8c2aa8fe817bb2af8707410fa0201f938781de
For versions > 1.10.1 pg8000 returns de-serialized JSON objects rather
than a string. SQL parameters are still strings though.
pg8000 uses Versioneer, which means that development versions have
version strings that don't fit into the dotted triple number format.
Released versions will always fit the triple format though.
The pg8000 dialect now supports the setting of the PostgreSQL parameter
client_encoding from create_engine().
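A minimal sketch (the URL is a placeholder)::

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+pg8000://scott:tiger@localhost/test",
        client_encoding="utf8",
    )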
- add pg8000 version detection for the "sane multi rowcount" feature