an INSERT of NULL + pyodbc; pyodbc requires a special
object be passed in order to persist NULL. As the :class:`.VARBINARY`
type is now usually the default for :class:`.LargeBinary` due to
:ticket:`3039`, this issue is partially a regression in 1.0.
The pymssql driver appears to be unaffected.
fixes #3464
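
As a rough sketch of the mechanism involved (an illustration, not the
dialect's actual code), a bind processor can substitute pyodbc's
``BinaryNull`` object when the value is ``None``::

    def binary_bind_processor(dbapi):
        # `dbapi` is assumed to be the imported pyodbc module
        def process(value):
            if value is None:
                # plain None is not enough for pyodbc + VARBINARY;
                # this special object is required to persist NULL
                return dbapi.BinaryNull
            return dbapi.Binary(value)
        return process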
JSONB support once again, as psycopg2cffi suddenly
switched on unconditional decoding of JSONB types in version 2.7.1.
Version detection now specifies 2.7.1 as the point at which we expect
the DBAPI to do JSON decoding for us.
fixes #3439
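
A sketch of what this kind of version gate looks like (illustrative
only; the dialect's real detection differs in detail)::

    import re

    def dbapi_decodes_jsonb(dbapi):
        # assumes the DBAPI module exposes __version__ as "X.Y.Z..."
        match = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", dbapi.__version__)
        if not match:
            return False
        version = tuple(int(group or 0) for group in match.groups())
        # 2.7.1 is where unconditional JSONB decoding switched on
        return version >= (2, 7, 1)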
- changelog
- versionadded + reflink for new pg storage parameters doc
- pep8ing
- add additional tests to definitively check that the Index object
  is created all the way through with the options we want
fixes #3455
Add support for specifying PostgreSQL index storage parameters
(e.g. fillfactor).
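
Usage is via the ``postgresql_with`` dialect keyword on :class:`.Index`;
a brief example (table and values illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer, Index

    metadata = MetaData()
    accounts = Table(
        "accounts", metadata,
        Column("id", Integer, primary_key=True),
    )

    # renders: CREATE INDEX ix_accounts_id ON accounts (id)
    #          WITH (fillfactor = 50)
    Index("ix_accounts_id", accounts.c.id, postgresql_with={"fillfactor": 50})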
features that other objects like :class:`.Index` now support, in that
the column expression may be specified as an arbitrary SQL
expression such as :obj:`.cast` or :obj:`.text`.
fixes #3454
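
The subject line of this entry is truncated here, but the
:class:`.Index`-style capability it refers to looks like the following
(illustrative table)::

    from sqlalchemy import (
        MetaData, Table, Column, Integer, String, Index, cast, text,
    )

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
    )

    # the indexed expression may be an arbitrary SQL expression,
    # not just a plain Column
    Index("ix_users_name_cast", cast(users.c.name, String(20)))
    Index("ix_users_name_lower", text("lower(name)"))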
"max_row_buffer" execution option for BufferedRowResultProxy
- also add documentation, changelog and version notes
- rework the max_row_buffer argument to be interpreted from
the execution options upfront when the BufferedRowResultProxy
is first initialized.
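
A usage sketch — the option is read from the execution options together
with ``stream_results``, which is what selects the buffered-row behavior
on the psycopg2 dialect (URL and table are placeholders)::

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

    with engine.connect() as conn:
        result = conn.execution_options(
            stream_results=True,  # server-side cursor + buffered rows
            max_row_buffer=500,   # cap how large the row buffer may grow
        ).execute(text("SELECT * FROM big_table"))
        for row in result:
            pass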
psycopg2cffi dialect, in particular that the current 2.7.0 version
does not have native support for the JSONB type. The version detection
for psycopg2 features has been tuned to the specific sub-versions
of psycopg2cffi. Additionally, test coverage has been enabled
for the full series of psycopg2 features under psycopg2cffi.
fixes #3439
valid version (Microsoft released the spec late).
| |
:func:`.engine_from_config` were not being parsed correctly;
these included ``pool_threadlocal`` and the psycopg2 argument
``use_native_unicode``. fixes #3435
- add legacy_schema_aliasing config parsing for mssql
- move use_native_unicode config arg to the psycopg2 dialect
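
For illustration, the kind of configuration that now parses correctly
(values are placeholders)::

    from sqlalchemy import engine_from_config

    config = {
        "sqlalchemy.url": "postgresql+psycopg2://scott:tiger@localhost/test",
        # string values from ini-style files must be coerced to bool
        "sqlalchemy.pool_threadlocal": "true",
        "sqlalchemy.use_native_unicode": "false",
    }
    engine = engine_from_config(config, prefix="sqlalchemy.")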
``legacy_schema_aliasing``, which when set to False disables a
very old and obsolete behavior: the compiler's
attempt to turn all schema-qualified table names into alias names,
working around old and no longer locatable issues where SQL
Server could not parse a multi-part identifier name in all
circumstances. The behavior prevented more
sophisticated statements from working correctly, including those which
use hints, as well as CRUD statements that embed correlated SELECT
statements. Rather than continue to repair the feature to work
with more complex statements, it's better to just disable it,
as it should no longer be needed for any modern SQL Server
version. The flag defaults to True for the 1.0.x series, leaving
current behavior unchanged for this version series. In the 1.1
series, it will default to False. For the 1.0 series,
when not set to either value explicitly, a warning is emitted
when a schema-qualified table is first used in a statement,
suggesting that the flag be set to False for all modern SQL Server
versions.
fixes #3424
fixes #3430
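
The flag is passed to :func:`.create_engine`; a minimal sketch (the
URL is a placeholder)::

    from sqlalchemy import create_engine

    # opt out of the legacy aliasing of schema-qualified table names;
    # this will become the default in the 1.1 series
    engine = create_engine(
        "mssql+pyodbc://scott:tiger@my_dsn",
        legacy_schema_aliasing=False,
    )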
non-Integer column types, fixes #2075
pep-249 exception names linked to exception classes of an entirely
different name, preventing SQLAlchemy's own exception wrapping from
wrapping the error appropriately.
The SQLAlchemy dialect in use needs to implement a new
accessor :attr:`.DefaultDialect.dbapi_exception_translation_map`
to support this feature; this is implemented now for the py-postgresql
dialect.
fixes #3421
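
In outline, the accessor is a dict mapping the DBAPI's own exception
class names to standard pep-249 names; the names below are invented
for illustration and are not the py-postgresql dialect's actual
mapping::

    from sqlalchemy.engine.default import DefaultDialect

    class MyDialect(DefaultDialect):
        @property
        def dbapi_exception_translation_map(self):
            # keys: exception names as the DBAPI spells them
            # values: the pep-249 names SQLAlchemy should use for wrapping
            return {
                "SomeDriverIntegrityViolation": "IntegrityError",
                "SomeDriverConnectionLost": "OperationalError",
            }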
(hence becoming two regressions); reports that
SELECT statements would GROUP BY a label name and fail were misconstrued
to mean that certain backends such as SQL Server should not be emitting
ORDER BY or GROUP BY on a simple label name at all, when in fact
we had forgotten that 0.9 was already emitting ORDER BY on a simple
label name for all backends, as described in :ref:`migration_1068`,
since 1.0 had rewritten this logic as part of :ticket:`2992`.
In 1.0.2, the bug is fixed on both counts: SQL Server, Firebird and others
will again emit ORDER BY on a simple label name when passed a
:class:`.Label` construct that is expressed in the columns clause,
and no backend will emit GROUP BY on a simple label name in this case,
as even Postgresql can't reliably do GROUP BY on a simple name
in every case.
fixes #3338, fixes #3385
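
To make the two behaviors concrete, a small sketch (table is
illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer, select, func

    metadata = MetaData()
    t = Table("t", metadata, Column("x", Integer), Column("y", Integer))

    expr = (t.c.x + t.c.y).label("total")

    # ORDER BY again renders the simple label name:  ORDER BY total
    select([expr]).order_by(expr)

    # GROUP BY renders the full expression:  GROUP BY t.x + t.y
    select([expr, func.count()]).group_by(expr)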
with Firebird, so that the values are again rendered inline when
this is selected. Related to :ticket:`3034`.
fixes #3381
Fix TypeError: Boolean value of this clause is not defined
See https://github.com/pymssql/pymssql/commit/e7fb15dd29090e1f1bb570842b53aea1ec32d8f0
with the psycopg2 dialect in conjunction with non-ascii values
and ``native_enum=False`` would fail to decode return results properly.
This stemmed from when the PG :class:`.postgresql.ENUM` type used
to be a standalone type without a "non native" option.
fixes #3354
- corrected the assertsql comparison rule to expect a non-ascii
SQL string
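
A sketch of the affected pattern (values illustrative)::

    # -*- coding: utf-8 -*-
    from sqlalchemy import MetaData, Table, Column, Integer, Enum

    metadata = MetaData()
    t = Table(
        "t", metadata,
        Column("id", Integer, primary_key=True),
        # non-native enum: rendered as VARCHAR + CHECK constraint; the
        # non-ascii values must be decoded on the way back from the driver
        Column("value", Enum(u"réveillé", u"drôle", name="my_enum",
                             native_enum=False)),
    )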
:ticket:`3184` would cause index operations to fail on Postgresql
versions 8.4 and earlier. The enhancements are now
disabled when using an older version of Postgresql.
fixes #3343
is the flag that, per :ticket:`2992`, causes an ORDER BY or GROUP BY
of an expression that's also in the columns clause to be copied by
label, even if referenced as the expression object. MSSQL now
reverts to the old behavior of copying the whole expression in by
default, as MSSQL can be picky about these, particularly in
GROUP BY expressions.
fixes #3338
- Add a test that includes a composed label in a GROUP BY
operation with unicode parameters. SQLAlchemy now passes both
the statement as well as the bound parameters as unicode
objects, as PyMySQL generally uses string interpolation
internally to produce the final statement, and in the case of
executemany does the "encode" step only on the final statement.
fixes #3337
as up-to-date recommendations as possible
out.
- add __backend__ to the dialect suite so that it runs on CI.
- will be 1.0.0b3
Conflicts:
lib/sqlalchemy/dialects/mysql/mysqldb.py
these files are present.
https://bitbucket.org/graingert/sqlalchemy into pr49
That is, after exhausting all rows using the fetch methods, the
DBAPI cursor is released as before and the object may be safely
discarded; the fetch methods may continue to be called, and will
return an end-of-result value (None for fetchone, an empty list
for fetchmany and fetchall). Only if :meth:`.ResultProxy.close`
is called explicitly will these methods raise the "result is closed"
error.
fixes #3330 fixes #3329
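
A sketch of the new behavior (in-memory SQLite for illustration)::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    result = engine.execute(text("SELECT 1"))

    result.fetchall()     # exhausts the result; the DBAPI cursor is released
    result.fetchone()     # returns None instead of raising
    result.fetchmany(10)  # returns []

    result.close()        # explicit close
    result.fetchone()     # now raises the "result is closed" error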
DROP TYPE instruction when a plain ``table.drop()`` is called,
assuming the object is not associated directly with a
:class:`.MetaData` object. In order to accommodate the use case of
an enumerated type shared between multiple tables, the type should
be associated directly with the :class:`.MetaData` object; in this
case the type will only be created at the metadata level, or if
created directly. The rules for create/drop of
Postgresql enumerated types have been highly reworked in general.
fixes #3319
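
A sketch of the metadata-association pattern for a shared type (names
illustrative)::

    from sqlalchemy import MetaData, Table, Column, Enum

    metadata = MetaData()

    # tying the Enum to the MetaData means table-level create()/drop()
    # will not emit CREATE TYPE / DROP TYPE; the type is created and
    # dropped at the metadata level and may be shared between tables
    shared = Enum("a", "b", "c", name="shared_enum", metadata=metadata)

    t1 = Table("t1", metadata, Column("value", shared))
    t2 = Table("t2", metadata, Column("value", shared))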
https://bitbucket.org/iurisilvio/sqlalchemy into pr45
https://bitbucket.org/groner/sqlalchemy into pr42
From https://www.sqlite.org/partialindex.html
> Partial indexes have been supported in SQLite since version 3.8.0.
Reflection does not expose the predicate of partial indexes. The
postgresql dialect does detect such indexes and issue a warning. I
looked into matching this level of support, but the sqlite pragma
index_info does not expose the predicate. Getting this data would
probably require parsing the CREATE INDEX statement from sqlite_master.
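
For context, this is how a partial index is specified on the SQLite
dialect; the ``sqlite_where`` predicate is the part reflection can't
currently recover (table illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer, Index

    metadata = MetaData()
    t = Table("t", metadata, Column("x", Integer))

    # the WHERE clause below is stored in sqlite_master but is not
    # exposed by PRAGMA index_info, so reflection omits it
    Index("ix_t_x_positive", t.c.x, sqlite_where=t.c.x > 0)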
- replace force_result_map with a mini-API for nested result sets, add
coverage
The "wrapping" employed by the mssql and oracle dialects using the
"iswrapper" argument was not being used intelligently by the compiler,
and the result map was being written incorrectly, using
*more* columns in the result map than were actually returned by
the statement, due to "row number" columns that are inside the
subquery. The compiler now writes out the result map fully on the
"top level" select in all cases, and for the mssql/oracle wrapping
case extracts out the "proxied" columns in a second step, which only
includes those columns that are proxied outwards to the top level.
This change might have implications for 3rd party dialects that
imitate oracle's approach. They can safely continue
to use the "iswrapper" kw, which is now ignored, but they may
need to also add the _select_wraps argument as well.
such that they are matched to the received result set positionally,
rather than by name. Originally, this was seen as a way to handle
cases where we had columns returned with difficult-to-predict names,
though in modern use that issue has been overcome by anonymous
labeling. In this version, the approach basically reduces function
call count per-result by a few dozen calls, or more for larger
sets of result columns. The approach still degrades into a modern
version of the old approach if textual elements modify the result
map, or if any discrepancy in size exists between
the compiled set of columns versus what was received, so there's no
issue for partially or fully textual compilation scenarios where these
lists might not line up. fixes #918
- call counts still need to be adjusted down for this, so zoomark
  tests won't pass at the moment