Drops support for cx_Oracle prior to version 5.x and reworks
numeric and binary support.
Fixes: #4064
Change-Id: Ib9ae9aba430c15cd2a6eeb4e5e3fd8e97b5fe480
|
Changed the mechanics of :class:`.ResultProxy` to unconditionally
delay the "autoclose" step until the :class:`.Connection` is done
with the object; in the case where PostgreSQL ON CONFLICT with
RETURNING returns no rows, autoclose was occurring within this
previously nonexistent use case, causing the autocommit behavior
that normally occurs unconditionally upon INSERT/UPDATE/DELETE to fail.
Change-Id: I235a25daf4381b31f523331f810ea04450349722
Fixes: #3955
(cherry picked from commit 8ee363e4917b0dcd64a83b6d26e465c9e61e0ea5)
(cherry picked from commit f52fb5282a046d26b6ee2778e03b995eb117c2ee)
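As a rough sketch of the scenario described above (the engine URL, table and
values are placeholders, not part of the change), a PostgreSQL ON CONFLICT
DO NOTHING insert combined with RETURNING may now return zero rows without
disturbing the implicit autocommit::

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
    from sqlalchemy.dialects.postgresql import insert

    engine = create_engine("postgresql://scott:tiger@localhost/test")  # placeholder URL
    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    with engine.connect() as conn:
        stmt = (
            insert(users)
            .values(id=1, name="jack")
            .on_conflict_do_nothing(index_elements=["id"])
            .returning(users.c.id)
        )
        result = conn.execute(stmt)
        # if the row already exists, the INSERT is skipped and RETURNING
        # produces no rows; autoclose now waits for the Connection, so the
        # implicit autocommit for this statement still proceeds
        print(result.fetchall())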
|
Change-Id: I4e8c2aa8fe817bb2af8707410fa0201f938781de
|
Tests illustrate that exceptions like GreenletExit and
even KeyboardInterrupt can corrupt the state of a DBAPI
connection such as those of pymysql and mysqlclient. Intercept
BaseException errors within the handle_error scheme and
invalidate only the connection in this case, but not
the whole pool.
The change is backwards-incompatible with a program that
currently intercepts ctrl-C within a database transaction
and wants to continue working on that transaction. Ensure
the event hook can be used to reverse this behavior.
Change-Id: Ifaa013c13826d123eef34e32b7e79fff74f1b21b
Fixes: #3803
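A minimal sketch of observing these exceptions through the ``handle_error``
hook; the engine URL and the handling shown are placeholders, not part of
the change::

    from sqlalchemy import create_engine, event

    engine = create_engine("mysql+pymysql://scott:tiger@localhost/test")  # placeholder URL

    @event.listens_for(engine, "handle_error")
    def intercept_base_exceptions(context):
        # context.original_exception now also receives BaseException
        # subclasses such as KeyboardInterrupt and GreenletExit
        if isinstance(context.original_exception, KeyboardInterrupt):
            pass  # application-specific handling goes here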
|
Fixed a bug in the result proxy used mainly by Oracle when binary and
other LOB types are in play, such that when query / statement caching
was used, the type-level result processors, notably the one required by
the binary type itself but also any other processor, would become lost
after the first run of the statement due to being removed from the
cached result metadata.
Change-Id: I751940866cffb4f48de46edc8137482eab59790c
Fixes: #3699
|
Fixed bug where when using ``case_sensitive=False`` with an
:class:`.Engine`, the result set would fail to correctly accommodate
duplicate column names in the result set, causing an error
when the statement is executed in 1.0, and preventing the
"ambiguous column" exception from functioning in 1.1.
Change-Id: If582bb9fdd057e4da3ae42f7180b17d1a1a2d98e
Fixes: #3690
|
number of columns we're actually reporting on
- add more tests for negative row index
- changelog/migration
|
handled within visit_select(); this attribute was added in the
1.0 series to accommodate the subquery wrapping behavior of
SQL Server and Oracle while also working with positional
column targeting and no longer relying upon "key fallback"
in order to target columns in such a statement. The IBM DB2
third-party dialect also has this use case, but its implementation
is using regular expressions to rewrite the textual SELECT only
and does not make use of a "wrapped" select at this time.
The logic no longer attempts to reconcile proxy set collections as
this was not deterministic, and instead assumes that the select()
and the wrapper select() match their columns positionally,
at least for the column positions they have in common,
so it is now very simple and safe. fixes #3657.
- as a side effect of #3657 it was also revealed that the
strategy of calling upon a ResultProxy._getter was not
correctly calling into NoSuchColumnError when an expected
column was not present, and instead returned None up to
loading.instances() to produce NoneType failures; added
a raiseerr argument to _getter() which is called when we
aren't expecting None, fixes #3658.
logging, exception, and ``repr()`` purposes now truncate very large
scalar values within each collection, including an
"N characters truncated"
notation, similar to how the display of large multiple-parameter sets
is itself truncated.
fixes #2837
|
and this time also fix the cext itself to properly handle int vs. long
on py2k
|
This reverts commit de0d144a395c31eb74084177df95a4858b830f88.
Apparently the test suite is not using the cextensions correctly at the moment.
|
method, and its interaction with result-row processing, now allows
the columns passed to the method to be positionally matched with the
result columns in the statement, rather than matching on name alone.
The advantage to this includes that when linking a textual SQL statement
to an ORM or Core table model, no system of labeling or de-duping of
common column names needs to occur, which also means there's no need
to worry about how label names match to ORM columns and so forth. In
addition, the :class:`.ResultProxy` has been further enhanced to
map column and string keys to a row with greater precision in some
cases. fixes #3501
- reorganize the initialization of ResultMetaData for readability
and reduced complexity; use the name "cursor_description", define the
task of "merging" cursor_description with compiled column information
as its own function, and also define "name extraction" as a separate task.
- fully change the name we use in the "ambiguous column" error to be the
actual name that was ambiguous, modify the C ext also
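A short sketch of the positional form described above; the tables and the
query are illustrative only::

    from sqlalchemy import MetaData, Table, Column, Integer, String, text

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )
    addresses = Table(
        "addresses", metadata,
        Column("id", Integer, primary_key=True),
        Column("email", String(50)),
    )

    # the three textual columns are matched to users.c.id, addresses.c.id
    # and users.c.name purely by position, so the duplicated "id" name
    # requires no labeling or de-duplication
    stmt = text(
        "SELECT users.id, addresses.id, users.name "
        "FROM users JOIN addresses ON users.id = addresses.id"
    ).columns(users.c.id, addresses.c.id, users.c.name)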
|
Per [PEP 479](https://www.python.org/dev/peps/pep-0479/), the correct way to
terminate a generator is to return (which implicitly raises StopIteration)
rather than to raise StopIteration explicitly.
Without this change, using SQLAlchemy on Python 3.5 or greater emits
warnings of the form:
PendingDeprecationWarning: generator '__iter__' raised StopIteration
This commit removes those warnings.
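For illustration, the pattern PEP 479 asks for looks like this (generic
Python, not SQLAlchemy's actual code)::

    def rows(cursor):
        while True:
            row = cursor.fetchone()
            if row is None:
                # correct under PEP 479: return ends the generator cleanly;
                # "raise StopIteration" here would emit the warning above
                # and eventually become a RuntimeError
                return
            yield row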
|
by the ORM :class:`.Query` object (part of the performance
enhancements of :ticket:`3175`) would not raise the "this result
does not return rows" exception in the case where the driver
(typically MySQL) fails to generate cursor.description correctly;
an AttributeError against NoneType would be raised instead.
fixes #3481
|
un-adjusted internal symbol names for "anonymous" labels, which
are the "foo_1" types of labels we see generated for SQL functions
without labels and similar. This was a side effect of the
performance enhancements implemented as part of references #918.
fixes #3483
|
"max_row_buffer" execution option for BufferedRowResultProxy
- also add documentation, changelog and version notes
- rework the max_row_buffer argument to be interpreted from
the execution options upfront when the BufferedRowResultProxy
is first initialized.
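A minimal sketch of the option as described; the engine URL and table name
are placeholders::

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://scott:tiger@localhost/test")  # placeholder URL

    with engine.connect() as conn:
        result = conn.execution_options(
            stream_results=True, max_row_buffer=100
        ).execute(text("SELECT * FROM big_table"))
        for row in result:
            pass  # at most ~100 rows are buffered at a time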
|
it to prevent excess memory usage with yield_per
|
That is, after exhausting all rows using the fetch methods, the
DBAPI cursor is released as before and the object may be safely
discarded, but the fetch methods may continue to be called for which
they will return an end-of-result object (None for fetchone, empty list
for fetchmany and fetchall). Only if :meth:`.ResultProxy.close`
is called explicitly will these methods raise the "result is closed"
error.
fixes #3330 fixes #3329
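Schematically, with an in-memory SQLite database standing in for a real
one::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        conn.execute(text("CREATE TABLE users (id INTEGER)"))
        conn.execute(text("INSERT INTO users VALUES (1)"))

        result = conn.execute(text("SELECT id FROM users"))
        rows = result.fetchall()  # exhausts the result; cursor is released

        result.fetchone()      # returns None rather than raising
        result.fetchmany(10)   # returns []
        result.fetchall()      # returns []

        result.close()
        # only now would result.fetchone() raise ResourceClosedError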
|
such that they are matched to the received result set positionally,
rather than by name. Originally, this was seen as a way to handle
cases where we had columns returned with difficult-to-predict names,
though in modern use that issue has been overcome by anonymous
labeling. In this version, the approach basically reduces function
call count per-result by a few dozen calls, or more for larger
sets of result columns. The approach still degrades into a modern
version of the old approach if textual elements modify the result
map, or if any discrepancy in size exists between
the compiled set of columns versus what was received, so there's no
issue for partially or fully textual compilation scenarios where these
lists might not line up. fixes #918
- call counts still need to be adjusted down for this, so zoomark
tests won't pass at the moment
|
any speed improvements :(. The code is in a much better place to be
ported to C, however.
- The ``proc()`` callable passed to the ``create_row_processor()``
method of custom :class:`.Bundle` classes now accepts only a single
"row" argument.
- Deprecated event hooks removed: ``populate_instance``,
``create_instance``, ``translate_row``, ``append_result``
- the getter() idea is somewhat restored; see ref #3175
|
to get all flake8 passing
|
the compiled cache is used. That allows us to cache the whole metadata
and save on creating it at result time when the compiled cache is used.
|
would lead to ``TypeError`` when compared to non-tuple types as it attempted
to apply tuple() to the "other" object unconditionally. The
full range of Python comparison operators have now been implemented on
:class:`.RowProxy`, using an approach that guarantees a comparison
system that is equivalent to that of a tuple, and the "other" object
is only coerced if it's an instance of RowProxy. [ticket:2924]
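For illustration, with an in-memory SQLite database used only to produce a
row::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        row = conn.execute(text("SELECT 1 AS id, 'jack' AS name")).first()

        row == (1, "jack")     # True: equality behaves like a tuple
        row < (2, "apple")     # True: ordering follows tuple ordering
        row == "not a tuple"   # False, rather than raising TypeError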
|
[ticket:2852]
|
tuple is; this is accomplished via ensuring tuple() conversion on
both sides within the ``__eq__()`` method as well as
the addition of a ``__lt__()`` method. [ticket:2848]
|
to rely upon server-generated version identifiers, using triggers
or other database-provided versioning features, by passing the value
``False``. The ORM will use RETURNING when available to immediately
load the new version identifier, else it will emit a second SELECT.
[ticket:2793]
- The ``eager_defaults`` flag of :class:`.Mapper` will now allow the
newly generated default values to be fetched using an inline
RETURNING clause, rather than a second SELECT statement, for backends
that support RETURNING.
- Added a new variant to :meth:`.ValuesBase.returning` called
:meth:`.ValuesBase.return_defaults`; this allows arbitrary columns
to be added to the RETURNING clause of the statement without interfering
with the compiler's usual "implicit returning" feature, which is used to
efficiently fetch newly generated primary key values. For supporting
backends, a dictionary of all fetched values is present at
:attr:`.ResultProxy.returned_defaults`.
- add a glossary entry for RETURNING
- add documentation for version id generation, [ticket:867]
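A brief sketch of ``return_defaults()`` as described; the table, server
default, and engine URL are placeholders assuming a RETURNING-capable
backend::

    from sqlalchemy import (
        create_engine, MetaData, Table, Column, Integer, String,
        DateTime, func,
    )

    engine = create_engine("postgresql://scott:tiger@localhost/test")  # placeholder URL
    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
        Column("created_at", DateTime, server_default=func.now()),
    )

    with engine.connect() as conn:
        result = conn.execute(
            users.insert().values(name="jack").return_defaults()
        )
        # on a RETURNING backend, server-generated values are available
        # without a second SELECT
        print(result.returned_defaults)     # row of (id, created_at)
        print(result.inserted_primary_key)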
|
the import structure of many core modules.
``sqlalchemy.schema`` and ``sqlalchemy.types``
remain in the top-level package, but are now just lists of names
that pull from within ``sqlalchemy.sql``. Their implementations
are now broken out among ``sqlalchemy.sql.type_api``, ``sqlalchemy.sql.sqltypes``,
``sqlalchemy.sql.schema`` and ``sqlalchemy.sql.ddl``, the last of which was
moved from ``sqlalchemy.engine``. ``sqlalchemy.sql.expression`` is also
a namespace now which pulls implementations mostly from ``sqlalchemy.sql.elements``,
``sqlalchemy.sql.selectable``, and ``sqlalchemy.sql.dml``.
Most of the "factory" functions
used to create SQL expression objects have been moved to classmethods
or constructors, which are exposed in ``sqlalchemy.sql.expression``
using a programmatic system. Care has been taken such that all the
original import namespaces remain intact and there should be no impact
on any existing applications. The rationale here was to break out these
very large modules into smaller ones, provide more manageable lists
of function names, to greatly reduce "import cycles" and clarify the
up-front importing of names, and to remove the need for redundant
functions and documentation throughout the expression package.
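Illustrating the compatibility claim, the public names continue to resolve
to the same objects as their new locations::

    from sqlalchemy.schema import Table
    from sqlalchemy.sql.schema import Table as SqlTable

    from sqlalchemy.types import Integer
    from sqlalchemy.sql.sqltypes import Integer as SqlInteger

    # the top-level modules are now thin namespaces over sqlalchemy.sql
    assert Table is SqlTable
    assert Integer is SqlInteger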
|
probably be one in test_query or test_unicode...)
- fix up test_unitofwork
|
- went through examples/ and cleaned out excess list() calls
|
a rollback() before re-raising, capturing the stack trace from
sys.exc_info() before entering the rollback. This ensures the
traceback is preserved when using coroutine frameworks which may
have switched contexts before the rollback function returns.
[ticket:2703]
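Schematically, the pattern being described is along these lines (an
illustrative sketch, not the actual implementation)::

    import sys

    def rollback_and_reraise(transaction):
        # capture the traceback *before* rolling back, since a coroutine
        # framework may switch contexts (and replace sys.exc_info())
        # while the rollback is in progress
        exc_type, exc_value, exc_tb = sys.exc_info()
        transaction.rollback()
        raise exc_value.with_traceback(exc_tb)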
|
labeled columns when apply_labels() is used; this mode
produces a SELECT where each column is labeled as in
<tablename>_<columnname>, to remove column name collisions
for a multiple table select. The fix is that if two labels
collide when combined with the table name, i.e.
"foo.bar_id" and "foo_bar.id", anonymous aliasing will be
applied to one of the dupes. This allows the ORM to handle
both columns independently; previously, 0.7
would in some cases silently emit a second SELECT for the
column that was "duped", and in 0.8 an ambiguous column error
would be emitted. The "keys" applied to the .c. collection
of the select() will also be deduped, so that the "column
being replaced" warning will no longer emit for any select()
that specifies use_labels, though the dupe key will be given
an anonymous label which isn't generally user-friendly.
[ticket:2702]
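The collision described can be reproduced with a sketch along these lines
(table names are illustrative only)::

    from sqlalchemy import MetaData, Table, Column, Integer, select

    metadata = MetaData()
    foo = Table(
        "foo", metadata,
        Column("id", Integer, primary_key=True),
        Column("bar_id", Integer),
    )
    foo_bar = Table(
        "foo_bar", metadata,
        Column("id", Integer, primary_key=True),
    )

    # both foo.bar_id and foo_bar.id would label as "foo_bar_id"; one of
    # the two now receives an anonymous label instead of colliding
    stmt = select([foo, foo_bar]).apply_labels()
    print(stmt)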
|
"tuple" rows that contain
types which aren't hashable, by setting the flag
"hashable=False" on the corresponding TypeEngine object
in use. Custom types that return unhashable types
(typically lists) can set this flag to False.
[ticket:2592]
- [bug] Applying a column expression to a select
statement using a label with or without other
modifying constructs will no longer "target" that
expression to the underlying Column; this affects
ORM operations that rely upon Column targeting
in order to retrieve results. That is, a query
like query(User.id, User.id.label('foo')) will now
track the value of each "User.id" expression separately
instead of munging them together. It is not expected
that any users will be impacted by this; however,
a usage that uses select() in conjunction with
query.from_statement() and attempts to load fully
composed ORM entities may not function as expected
if the select() named Column objects with arbitrary
.label() names, as these will no longer target to
the Column objects mapped by that entity.
[ticket:2591]
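A custom type can opt out of hashing as described; a sketch using
``TypeDecorator``::

    from sqlalchemy.types import PickleType, TypeDecorator

    class ListOfValues(TypeDecorator):
        """A type whose Python values are lists, which are not hashable."""

        impl = PickleType

        # tells the ORM not to hash values of this type when de-duplicating
        # result rows
        hashable = False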
|
API to better support highly specialized
systems such as the Akiban database, including
more hooks to allow an execution context to
access type processors.
|
would double up columns if the same constraint/table
existed in multiple schemas.
- force returns_rows to False for inserts where we know rows shouldn't be returned;
allows post_exec() to use the cursor without issue