- add a test for PG reflection of a unique index without any unique
  constraint
- for PG, don't include 'duplicates_constraint' in the index entry
  if the index does not actually mirror a constraint
- use a distinct method for unique constraint reflection within Table reflection
- catch the "unique constraint reflection not implemented" condition; this may
  occur within some dialects and is also expected to be supported by
  Alembic tests
- migration + changelogs for #3184
- add individual doc notes as well to MySQL, PostgreSQL
fixes #3184
https://bitbucket.org/jerdfelt/sqlalchemy into pr30
Calls to reflect a table did not create any UniqueConstraint objects.
The reflection core made no calls to get_unique_constraints(), and as
a result the SQLite dialect would never reflect any unique constraints.
MySQL transparently converts unique constraints into unique indexes;
SQLAlchemy would reflect those as both an Index object and a
UniqueConstraint. The reflection core will now deduplicate the unique
constraints.
PostgreSQL would reflect unique constraints as both an Index object and
a UniqueConstraint object. The reflection core will now deduplicate
the unique indexes.
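
For illustration (a minimal sketch, not part of the commit; the DSN and
table name are hypothetical), the deduplicated results are visible
through the inspector interface::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)

    # unique constraints are now reflected in their own right
    print(insp.get_unique_constraints("some_table"))

    # on PG, an index entry carries 'duplicates_constraint' only when
    # the index actually mirrors a unique constraint
    print(insp.get_indexes("some_table"))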
Fixed bug where a "branched" connection, that is the kind you get
when you call :meth:`.Connection.connect`, would not share transaction
status with the parent. The architecture of branching has been tweaked
a bit so that the branched connection defers to the parent for
all transactional status and operations.
fixes #3190
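
A minimal sketch of the behavior described, assuming the fix (the engine
is illustrative)::

    with engine.connect() as conn:
        trans = conn.begin()
        branched = conn.connect()  # a "branch" of the same DBAPI connection
        # the branch now defers to its parent for transactional state
        assert branched.in_transaction()
        trans.rollback()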
Fixed bug where a "branched" connection, that is the kind you get
when you call :meth:`.Connection.connect`, would not share invalidation
status with the parent. The architecture of branching has been tweaked
a bit so that the branched connection defers to the parent for
all invalidation status and operations.
fixes #3215
Added new methods :meth:`.Inspector.get_temp_table_names` and
:meth:`.Inspector.get_temp_view_names`; currently, only the
SQLite dialect supports these methods. The return of temporary
table and view names has been **removed** from SQLite's version
of :meth:`.Inspector.get_table_names` and
:meth:`.Inspector.get_view_names`; other database backends cannot
support this information (such as MySQL), and the scope of operation
is different in that the tables can be local to a session and
typically aren't supported in remote schemas.
fixes #3204
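
For illustration, a minimal sketch using SQLite::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("sqlite://")
    engine.execute("CREATE TEMPORARY TABLE user_tmp (id INTEGER)")

    insp = inspect(engine)
    print(insp.get_temp_table_names())  # ['user_tmp']
    print(insp.get_table_names())       # temp tables no longer included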
When reflection reports a column that isn't found to be present in the table,
a warning is emitted and the column is skipped. This can occur
in some special system-column situations, as has been observed
with Oracle. fixes #3180
any speed improvements :(. The code is in a much better place to be moved
into C, however.
- The ``proc()`` callable passed to the ``create_row_processor()``
  method of custom :class:`.Bundle` classes now accepts only a single
  "row" argument (see the sketch below).
- Deprecated event hooks removed: ``populate_instance``,
  ``create_instance``, ``translate_row``, ``append_result``
- the getter() idea is somewhat restored; see ref #3175
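
A sketch of the new single-argument ``proc()`` signature (the bundle
subclass itself is illustrative, not from the commit)::

    from sqlalchemy.orm import Bundle

    class DictBundle(Bundle):
        """Illustrative Bundle returning dicts instead of tuples."""

        def create_row_processor(self, query, procs, labels):
            def proc(row):  # note: single "row" argument only
                return dict(zip(labels, (p(row) for p in procs)))
            return proc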
view. So copy collections.OrderedDict and use MutableMapping to set up
keys(), items(), and values() on our own OrderedDict.
Conflicts:
	lib/sqlalchemy/engine/base.py
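
A rough sketch of the approach, not the actual ``sqlalchemy.util``
implementation: keep ordered storage internally and let
``MutableMapping`` supply the generic accessors rather than dict's
C-level views::

    from collections.abc import MutableMapping

    class OrderedDict(MutableMapping):
        """Dictionary that remembers insertion order (condensed sketch)."""

        def __init__(self):
            self._data = {}
            self._order = []

        def __setitem__(self, key, value):
            if key not in self._data:
                self._order.append(key)
            self._data[key] = value

        def __getitem__(self, key):
            return self._data[key]

        def __delitem__(self, key):
            del self._data[key]
            self._order.remove(key)

        def __iter__(self):
            return iter(self._order)

        def __len__(self):
            return len(self._order)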
for an INSERT or UPDATE are now sorted when they contribute towards
the "compiled cache" cache key. These keys were previously not
deterministically ordered, meaning the same statement could be
cached multiple times under equivalent keys, costing both memory
and performance.
fixes #3165
rules, better message formatting
The MySQL dialect will now disable :meth:`.ConnectionEvents.handle_error`
events from firing for those statements which it uses internally
to detect if a table exists or not. This is achieved using an
execution option ``skip_user_error_events`` that disables the handle_error
event for the scope of that execution. In this way, user code
that rewrites exceptions doesn't need to worry about the MySQL
dialect or other dialects that occasionally need to catch
SQLAlchemy-specific exceptions.
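
A sketch of the pattern described (the helper function and probe
statement are illustrative)::

    from sqlalchemy import exc

    def table_exists(connection, name):
        # run the probe with user-level handle_error listeners
        # suppressed, so an expected failure never reaches them
        probe = connection.execution_options(skip_user_error_events=True)
        try:
            probe.execute("DESCRIBE `%s`" % name)
            return True
        except exc.DBAPIError:
            return False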
to get all flake8 passing
in all cases unconditionally, the number of use cases that go beyond what
dbapi_error() is expecting has gone too far for a 0.9 release.
Additionally, the number of things we'd like to track is really a lot
more than the five arguments here, and ExecutionContext is really not
suitable as a totally public API for this. So restore dbapi_error
to its old version, deprecate it, and build out handle_error instead.
This is a lot more extensible and doesn't get in the way of anything
compatibility-wise.
:attr:`.ExecutionContext.is_disconnect` which are meaningful within
the :meth:`.ConnectionEvents.dbapi_error` handler to see both the
original DBAPI error as well as whether or not it represents
a disconnect.
have been enhanced such that the function handler is now capable
of raising or returning a new exception object, which will replace
the exception normally being thrown by SQLAlchemy.
fixes #3076
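
For example (a minimal sketch; the application-level exception type and
matching rule are hypothetical)::

    from sqlalchemy import event

    class MyAppError(Exception):
        pass

    @event.listens_for(engine, "handle_error")
    def translate(context):
        # returning (or raising) an exception here replaces the one
        # SQLAlchemy would otherwise throw
        if "deadlock" in str(context.original_exception):
            return MyAppError(str(context.original_exception))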
the compiled cache is used. That allows us to cache the whole metadata
and save on re-creating it at result time.
- Fixed bug which would occur if a DBAPI exception
  occurred when the engine first connects and does its initial checks,
  and the exception is not a disconnect exception, yet the cursor
  raises an error when we try to close it. In this case the real
  exception would be quashed as we tried to log the cursor-close
  exception via the connection pool and failed, as we were trying
  to access the pool's logger in a way that is inappropriate
  in this very specific scenario. fixes #3063
Documentation fix-up: "its" vs. "it's"
Removed ungrammatical apostrophes from documentation, replacing
"it's" with "its" where appropriate (but in a few cases with "it is"
when that read better).
While doing that, I also fixed a couple of minor typos etc.
as I noticed them.
a connection invalidation could occur within an already critical section
like a connection.close(); ultimately, these conditions are caused
by the change in :ticket:`2907`, in that the "reset on return" feature
calls out to the Connection/Transaction in order to handle it, where
"disconnect detection" might be caught. However, it's possible that
the more recent change in :ticket:`2985` made it more likely for this
to be seen, as the "connection invalidate" operation is much quicker;
the issue is more reproducible on 0.9.4 than 0.9.3.
Checks are now added within any section in which
an invalidate might occur, to halt further disallowed operations
on the invalidated connection. This includes two fixes, both at the
engine level and at the pool level. While the issue was observed
with highly concurrent gevent cases, it could in theory occur in
any kind of scenario where a disconnect occurs within the connection
close operation.
fixes #3043
ref #2985
ref #2907
- add some defensive checks during an invalidate situation:
  1. _ConnectionRecord.invalidate might be called twice within finalize_fairy
     if the _reset() raises an invalidate condition, invalidates, raises, and
     then goes to invalidate the CR; so check for this.
  2. similarly, within Connection, any time we do handle_dbapi_error() we might
     become invalidated, so a following ``finally`` must check self.__invalid
     before dealing with the connection any further.
Found using: https://github.com/intgr/topy
it for DELETE would fail to target the correct row for DELETE.
Then to compound matters, basic "number of rows matched" checks were
not being performed. Both issues are fixed; however, note that the
"rows matched" check requires so-called "sane multi-row count"
functionality; the DBAPI's executemany() method must count up the
rows matched by individual statements, and SQLAlchemy's dialect must
mark this feature as supported. This currently applies to some MySQL
dialects, psycopg2, and SQLite only. fixes #3006
- Enabled "sane multi-row count" checking for the psycopg2 DBAPI, as
  this seems to be supported as of psycopg2 2.0.9.
1. make sure pool._invalidate() sets the timestamp up before
   invalidating the target connection; we can otherwise show how
   conn.invalidate() + pool._invalidate() can lead to an extra connection
   being made.
2. to help with that, soften up the check in connection.invalidate()
   when the connection is already closed; a warning is fine here.
3. add a mutex to test_max_overflow() when we connect, because the way
   we're using mock depends on an iterator that needs to be synchronized.
Added some new event mechanics for dialect-level events; the initial
implementation allows an event handler to redefine the specific mechanics
by which an arbitrary dialect invokes execute() or executemany() on a
DBAPI cursor. The new events, at this point semi-public and experimental,
are in support of some upcoming transaction-related extensions.
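
A sketch assuming the dialect-level hook name ``do_execute`` (the commit
text above does not name the events)::

    from sqlalchemy import event

    @event.listens_for(engine, "do_execute")
    def do_execute(cursor, statement, parameters, context):
        # perform the cursor call ourselves; returning True halts
        # SQLAlchemy's own call to cursor.execute()
        cursor.execute(statement, parameters)
        return True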
Event listeners can now be associated with the :class:`.Engine`
after one or more :class:`.Connection` objects have been created
(such as by an orm :class:`.Session` or via explicit connect),
and the listener will pick up events from those connections.
Previously, performance concerns pushed the event transfer from
:class:`.Engine` to :class:`.Connection` at init-time only, but
we've inlined a bunch of conditional checks to make this possible
without any additional function calls. fixes #2978
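
A minimal sketch of the behavior::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")
    conn = engine.connect()  # connection exists before the listener

    @event.listens_for(engine, "before_cursor_execute")
    def log_sql(conn, cursor, statement, parameters, context, executemany):
        print("SQL:", statement)

    conn.execute("select 1")  # listener fires on the pre-existing connection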
within userland engine.dispose(); as some SQLA tests already failed when the
replace step was removed, due to those conns still being referenced, it's
likely this would create surprises for all those users who incorrectly use
dispose(), and it's not really worth dealing with. This doesn't affect the
change we made for ref #2985.
A major improvement made to the mechanics by which the :class:`.Engine`
recycles the connection pool when a "disconnect" condition is detected;
instead of discarding the pool and explicitly closing out connections,
the pool is retained and a "generational" timestamp is updated to
reflect the current time, thereby causing all existing connections
to be recycled when they are next checked out. This greatly simplifies
the recycle process, removes the need for "waking up" connect attempts
waiting on the old pool, and eliminates the race condition in which many
immediately-discarded "pool" objects could be created during the
recycle operation. fixes #2985
The :meth:`.ConnectionEvents.after_cursor_execute` event is now
emitted for the "_cursor_execute()" method of :class:`.Connection`;
this is the "quick" executor that is used for things like
when a sequence is executed ahead of an INSERT statement, as well as
for dialect startup checks like unicode returns, charset, etc.
The :meth:`.ConnectionEvents.before_cursor_execute` event was already
invoked here. The "executemany" flag is now always set to False
here, as this event always corresponds to a single execution.
Previously the flag could be True if we were acting on behalf of
an executemany INSERT statement.
Added new ``once=True`` argument to :func:`.event.listen`
and :func:`.event.listens_for`. This is a convenience feature which
will wrap the given listener such that it is only invoked once.
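
For example (the listener body is illustrative)::

    from sqlalchemy import event

    @event.listens_for(engine, "connect", once=True)
    def on_first_connect(dbapi_connection, connection_record):
        # invoked for the first connection only, then never again
        print("pool produced its first connection")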
to support dialect-level reflection options for all :class:`.Table`
objects reflected.
- Added a new dialect-level argument ``postgresql_ignore_search_path``;
  this argument is accepted by both the :class:`.Table` constructor
  as well as by the :meth:`.MetaData.reflect` method. When in use
  against PostgreSQL, a foreign-key referenced table which specifies
  a remote schema name will retain that schema name even if the name
  is present in the ``search_path``; the default behavior since 0.7.3
  has been that schemas present in ``search_path`` would not be copied
  to reflected :class:`.ForeignKey` objects. The documentation has been
  updated to describe in detail the behavior of the ``pg_get_constraintdef()``
  function and how the ``postgresql_ignore_search_path`` feature essentially
  determines whether we will honor the schema qualification reported by
  this function. [ticket:2922]
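
A minimal sketch of the option in use (the DSN and table name are
hypothetical)::

    from sqlalchemy import create_engine, MetaData, Table

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    meta = MetaData()

    referring = Table(
        "referring", meta,
        autoload=True, autoload_with=engine,
        postgresql_ignore_search_path=True,
    )
    # ForeignKey objects on the reflected table now retain the remote
    # schema name even when that schema is on the search_path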
would lead to ``TypeError`` when compared to non-tuple types as it attempted
to apply tuple() to the "other" object unconditionally. The
full range of Python comparison operators have now been implemented on
:class:`.RowProxy`, using an approach that guarantees a comparison
system that is equivalent to that of a tuple, and the "other" object
is only coerced if it's an instance of RowProxy. [ticket:2924]
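
For illustration (a minimal sketch)::

    from sqlalchemy import create_engine

    conn = create_engine("sqlite://").connect()
    row = conn.execute("select 1, 2").first()

    assert row == (1, 2)      # equality behaves exactly as a tuple's
    assert row < (1, 3)       # the full set of comparison operators works
    assert not (row == "x")   # non-tuple comparison no longer raises TypeError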
and modernize the engines chapter a bit
MySQL, to correctly render the SET clause among multiple columns
with the same name across tables. This also changes the name used for
the bound parameter in the SET clause to "<tablename>_<colname>" for
the non-primary table only; as this parameter is typically specified
using the :class:`.Column` object directly, this should not have an
impact on applications. The fix takes effect for both
:meth:`.Table.update` as well as :meth:`.Query.update` in the ORM.
[ticket:2912]
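
A sketch of the construct affected (tables are illustrative; the
statement renders only on MySQL)::

    from sqlalchemy import MetaData, Table, Column, Integer, String

    meta = MetaData()
    t1 = Table("t1", meta, Column("id", Integer, primary_key=True),
               Column("name", String(50)))
    t2 = Table("t2", meta, Column("id", Integer, primary_key=True),
               Column("name", String(50)))

    # a multiple-table UPDATE; the bound parameter for t2.name now
    # renders as "t2_name", no longer colliding with t1's "name"
    stmt = t1.update().values(
        {t1.c.name: "x", t2.c.name: "y"}
    ).where(t1.c.id == t2.c.id)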
sent to the primary key constraint; existing tests in the PG dialect
confirm this.
reflection now updates the PKC in place.
- support the use case of an empty PrimaryKeyConstraint in order to specify
  constraint options; the columns marked as primary_key=True will now be
  gathered into the constraint's columns collection, rather than being
  ignored (see the sketch below). [ticket:2910]
- add validation such that column specification should take place only
  in the PrimaryKeyConstraint directly, or by using primary_key=True flags;
  if both are present they have to match exactly, otherwise the condition
  is assumed to be ambiguous and a warning is emitted; the old behavior of
  using the PKC columns only is maintained.
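
A minimal sketch of the empty-PrimaryKeyConstraint use case
(``mssql_clustered`` is just one example of a constraint option)::

    from sqlalchemy import (MetaData, Table, Column, Integer,
                            PrimaryKeyConstraint)

    meta = MetaData()
    t = Table(
        "t", meta,
        Column("id", Integer, primary_key=True),
        Column("version", Integer, primary_key=True),
        # columns flagged primary_key=True above are gathered into this
        # otherwise-empty constraint, which exists to supply options
        PrimaryKeyConstraint(name="pk_t", mssql_clustered=True),
    )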
- test_autoincrement_col still needs reflection overall
- clean up some shenanigans in reflection
arguments; [ticket:2866]
- add dialect-specific kwarg functionality to ForeignKeyConstraint, ForeignKey
double close
- ensure no "iterator changed size" issues in testing.engines
type such as "charset" and "collation". While MySQL wants all character-
based CAST calls to use the CHAR type, we now create a real CHAR
object at CAST time and copy over all the parameters it has, so that
an expression like ``cast(x, mysql.TEXT(charset='utf8'))`` will
render ``CAST(t.col AS CHAR CHARACTER SET utf8)``.
- Added new "unicode returns" detection to the MySQL dialect and
to the default dialect system overall, such that any dialect
can add extra "tests" to the on-first-connect "does this DBAPI
return unicode directly?" detection. In this case, we are
adding a check specifically against the "utf8" encoding with
an explicit "utf8_bin" collation type (after checking that
this collation is available) to test for some buggy unicode
behavior observed with MySQLdb version 1.2.3. While MySQLdb
has resolved this issue as of 1.2.4, the check here should
guard against regressions. The change also allows the "unicode"
checks to log in the engine logs, which was not previously
the case. [ticket:2906]
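
For illustration, a minimal sketch mirroring the example above::

    from sqlalchemy.sql import column, cast
    from sqlalchemy.dialects import mysql

    expr = cast(column("col"), mysql.TEXT(charset="utf8"))
    # renders: CAST(col AS CHAR CHARACTER SET utf8)
    print(expr.compile(dialect=mysql.dialect()))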