| author | Mike Bayer <mike_mp@zzzcomputing.com> | 2020-06-06 20:40:43 -0400 |
|---|---|---|
| committer | Mike Bayer <mike_mp@zzzcomputing.com> | 2020-06-10 15:29:01 -0400 |
| commit | b0cfa7379cf8513a821a3dbe3028c4965d9f85bd (patch) | |
| tree | 19a79632b4f159092d955765ff9f7e842808bce7 /test/sql/test_compiler.py | |
| parent | 3ab2364e78641c4f0e4b6456afc2cbed39b0d0e6 (diff) | |
Turn on caching everywhere, add logging
Fixes a variety of caching issues found by running
all tests with statement caching turned on.
The cache system now takes a more conservative approach:
any subclass of a SQL element will by default invalidate
the cache key unless it adds the flag inherit_cache=True
at the class level, or implements its own caching.
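The opt-in rule above can be sketched as follows. This is a hypothetical minimal model, not SQLAlchemy's actual internals; the class names and the `_cache_key()` helper are invented for illustration:

```python
# Toy model of the conservative caching rule: any subclass is treated as
# uncacheable unless it opts in with inherit_cache = True at the class level.
# (Hypothetical sketch -- not SQLAlchemy's real implementation.)
class Element:
    inherit_cache = False  # conservative default applied to subclasses

    def _cache_key(self):
        # a subclass that has not opted in invalidates the cache key
        if type(self) is not Element and not self.inherit_cache:
            return None
        return (type(self).__name__,)

class UnsafeVariant(Element):
    pass  # may add cache-relevant state; stays uncacheable by default

class SafeVariant(Element):
    inherit_cache = True  # author asserts no new cache-relevant state
```

A subclass that needs its own key generation would override `_cache_key()` instead of setting the flag.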
Add working caching to a few elements that were
omitted previously; fix some caching implementations
to suit lesser-used edge cases such as JSON casts
and array slices.
Refine the way BaseCursorResult and CursorMetaData
interact with caching; to suit cases like Alembic
modifying table structures, don't cache the
cursor metadata if it was created against a
cursor.description using non-positional matching,
e.g. "select *". If a table re-ordered its columns
or added or removed columns, that data is now obsolete.
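Why name-matched metadata can't be cached can be sketched like this. The example is hypothetical, not SQLAlchemy code: real `cursor.description` entries are 7-tuples and the real `_keymap` is more involved, but the staleness problem is the same:

```python
# Sketch of why a position map built from "select *" output goes stale:
# the order of cursor.description follows the live table, so a cached
# name -> position map is wrong after a migration reorders columns.
# (Hypothetical example; description entries shortened to 1-tuples.)
def build_keymap(description):
    # map column name -> result-row position, as name-based matching must do
    return {name: idx for idx, (name,) in enumerate(description)}

desc_before = [("id",), ("name",), ("email",)]
desc_after = [("id",), ("email",), ("name",)]  # columns reordered by a migration

cached = build_keymap(desc_before)  # what a cache would hold
fresh = build_keymap(desc_after)    # what the live cursor actually returns
```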
Additionally we have to adapt the cursor metadata
_keymap regardless of whether we just processed
cursor.description, because if we ran against
a cached SQLCompiler we won't have the right
columns in _keymap.
Other refinements to how and when we do this
adaptation, as some odd cases were exposed in the
PostgreSQL dialect: a text() construct that names
a single column that is not actually in the
statement. Fixed that as well, as it looks like a
cut-and-paste artifact that doesn't actually
affect anything.
Fixed various issues with re-use of compiled result maps
and cursor metadata when tables are changed, such as a
change in the order of columns.
Mappers can be cleared while the class remains, meaning
a mapper has to use itself as the cache key, not the class.
Fixed lots of bound parameter / literal issues, due to Alembic
creating a straight subclass of bindparam that renders
inline directly. While we can update Alembic to not
do this, we have to assume other people might be doing
the same, so bindparam() now implements the inherit_cache=True
logic as well, which was a bit involved.
Turned on cache stats in logging.
Includes a fix to subqueryloader which moves all setup to
the create_row_processor() phase and eliminates any storage
within the compiled context. This includes some changes
to the create_row_processor() signature and a revision of the
technique used to determine if the loader can participate
in polymorphic queries, which is also applied to
selectin loading.
DML update.values() and ordered_values() now coerce the
keys, as we have tests that pass an arbitrary class here
which only implements __clause_element__(), so the
key can't be cached unless it is coerced. This in turn
changed how composite attributes support bulk update,
which now uses the standard approach of ClauseElement with
annotations that are parsed in the ORM context.
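The key-coercion problem can be sketched as follows. The classes here are hypothetical, not SQLAlchemy's coercion machinery: the point is only that an arbitrary key object exposing `__clause_element__()` hashes by identity and so cannot serve directly as a stable cache key:

```python
# Sketch: a key object that only implements __clause_element__() compares
# by identity, so two logically identical keys look different to a cache;
# coercing them first yields stable, comparable values.
# (Hypothetical classes -- not SQLAlchemy code.)
class ColumnRef:
    """Stands in for an arbitrary user class exposing __clause_element__()."""

    def __init__(self, name):
        self.name = name

    def __clause_element__(self):
        return self.name  # a real implementation would return a Column

def coerce_key(key):
    # unwrap __clause_element__ if present, else pass the key through
    return key.__clause_element__() if hasattr(key, "__clause_element__") else key

k1, k2 = ColumnRef("x"), ColumnRef("x")  # logically the same key
```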
Memory profiling successfully caught that the Session
from Query was getting passed into _statement_20(),
so that was a big win for that test suite.
Apparently Compiler had .execute() and .scalar() methods
stuck on it; these date back to version 0.4, and there
was a single test in the PostgreSQL dialect tests
that exercised them for no apparent reason. Removed
these methods as well as the concept of a Compiler
holding onto a "bind".
Fixes: #5386
Change-Id: I990b43aab96b42665af1b2187ad6020bee778784
Diffstat (limited to 'test/sql/test_compiler.py')
| -rw-r--r-- | test/sql/test_compiler.py | 137 |
1 file changed, 137 insertions, 0 deletions
diff --git a/test/sql/test_compiler.py b/test/sql/test_compiler.py
index 9881d1247..c12543f82 100644
--- a/test/sql/test_compiler.py
+++ b/test/sql/test_compiler.py
@@ -75,6 +75,7 @@ from sqlalchemy.sql import elements
 from sqlalchemy.sql import label
 from sqlalchemy.sql import operators
 from sqlalchemy.sql import table
+from sqlalchemy.sql import util as sql_util
 from sqlalchemy.sql.elements import BooleanClauseList
 from sqlalchemy.sql.expression import ClauseList
 from sqlalchemy.sql.expression import HasPrefixes
@@ -85,7 +86,9 @@ from sqlalchemy.testing import eq_
 from sqlalchemy.testing import eq_ignore_whitespace
 from sqlalchemy.testing import fixtures
 from sqlalchemy.testing import is_
+from sqlalchemy.testing import is_true
 from sqlalchemy.testing import mock
+from sqlalchemy.testing import ne_
 from sqlalchemy.util import u

 table1 = table(
@@ -3493,6 +3496,140 @@ class BindParameterTest(AssertsCompiledSQL, fixtures.TestBase):
             extracted_parameters=s1_cache_key[1],
         )

+    def test_construct_params_w_bind_clones_post(self):
+        """test that a BindParameter that has been cloned after the cache
+        key was generated still matches up when construct_params()
+        is called with an extracted parameter collection.
+
+        This case occurs now with the ORM, as ORM construction will
+        frequently run clause adaptation on elements of the statement within
+        compilation, after the cache key has been generated.  This adaptation
+        hits BindParameter objects, which will change their key as they
+        will usually have unique=True.  So the construct_params() process,
+        when it links its internal bind_names to the cache key binds,
+        must do this based on bindparam._identifying_key, which does not
+        change across clones, rather than .key, which usually will.
+
+        """
+
+        stmt = select([table1.c.myid]).where(table1.c.myid == 5)
+
+        # get the original bindparam.
+        original_bind = stmt._where_criteria[0].right
+
+        # it's anonymous so unique=True
+        is_true(original_bind.unique)
+
+        # cache key against the original param
+        cache_key = stmt._generate_cache_key()
+
+        # now adapt the statement
+        stmt_adapted = sql_util.ClauseAdapter(table1).traverse(stmt)
+
+        # new bind parameter has a different key but same
+        # identifying key
+        new_bind = stmt_adapted._where_criteria[0].right
+        eq_(original_bind._identifying_key, new_bind._identifying_key)
+        ne_(original_bind.key, new_bind.key)
+
+        # compile the adapted statement but set the cache key to the one
+        # generated from the unadapted statement.  this will look like
+        # when the ORM runs clause adaptation inside of visit_select, after
+        # the cache key is generated but before the compiler is given the
+        # core select statement to actually render.
+        compiled = stmt_adapted.compile(cache_key=cache_key)
+
+        # params set up as 5
+        eq_(compiled.construct_params(params={},), {"myid_1": 5})
+
+        # also works w the original cache key
+        eq_(
+            compiled.construct_params(
+                params={}, extracted_parameters=cache_key[1]
+            ),
+            {"myid_1": 5},
+        )
+
+        # now make a totally new statement with the same cache key
+        new_stmt = select([table1.c.myid]).where(table1.c.myid == 10)
+        new_cache_key = new_stmt._generate_cache_key()
+
+        # cache keys match
+        eq_(cache_key.key, new_cache_key.key)
+
+        # ensure we get "10" from construct params.  if it matched
+        # based on .key and not ._identifying_key, it would not see that
+        # the bind parameter is part of the cache key.
+        eq_(
+            compiled.construct_params(
+                params={}, extracted_parameters=new_cache_key[1]
+            ),
+            {"myid_1": 10},
+        )
+
+    def test_construct_params_w_bind_clones_pre(self):
+        """test that a BindParameter that has been cloned before the cache
+        key was generated, and was doubled up just to make sure it has to
+        be unique, still matches up when construct_params()
+        is called with an extracted parameter collection.
+
+        Other ORM features like optimized_compare() end up doing something
+        like this, such as if there are multiple "has()" or "any()" which
+        would have cloned the join condition and changed the values of
+        bound parameters.
+
+        """
+
+        stmt = select([table1.c.myid]).where(table1.c.myid == 5)
+
+        original_bind = stmt._where_criteria[0].right
+        # it's anonymous so unique=True
+        is_true(original_bind.unique)
+
+        b1 = original_bind._clone()
+        b1.value = 10
+        b2 = original_bind._clone()
+        b2.value = 12
+
+        # make a new statement that uses the clones as distinct
+        # parameters
+        modified_stmt = select([table1.c.myid]).where(
+            or_(table1.c.myid == b1, table1.c.myid == b2)
+        )
+
+        cache_key = modified_stmt._generate_cache_key()
+        compiled = modified_stmt.compile(cache_key=cache_key)
+
+        eq_(
+            compiled.construct_params(params={}), {"myid_1": 10, "myid_2": 12},
+        )
+
+        # make a new statement doing the same thing and make sure
+        # the binds match up correctly
+        new_stmt = select([table1.c.myid]).where(table1.c.myid == 8)
+
+        new_original_bind = new_stmt._where_criteria[0].right
+        new_b1 = new_original_bind._clone()
+        new_b1.value = 20
+        new_b2 = new_original_bind._clone()
+        new_b2.value = 18
+        modified_new_stmt = select([table1.c.myid]).where(
+            or_(table1.c.myid == new_b1, table1.c.myid == new_b2)
+        )
+
+        new_cache_key = modified_new_stmt._generate_cache_key()
+
+        # cache keys match
+        eq_(cache_key.key, new_cache_key.key)
+
+        # ensure we get both values
+        eq_(
+            compiled.construct_params(
+                params={}, extracted_parameters=new_cache_key[1]
+            ),
+            {"myid_1": 20, "myid_2": 18},
+        )
+
     def test_tuple_expanding_in_no_values(self):
         expr = tuple_(table1.c.myid, table1.c.name).in_(
             [(1, "foo"), (5, "bar")]
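The `.key` versus `._identifying_key` distinction that these tests exercise can be sketched with a toy model. This is a hypothetical class, not SQLAlchemy's BindParameter: the point is only that a clone gets a fresh anonymous `.key` while its identifying key stays stable:

```python
# Toy model: cloning a unique bind parameter re-anonymizes its .key but
# leaves ._identifying_key unchanged, so a cache-key lookup keyed on the
# identifying key survives post-cache-key clause adaptation.
# (Hypothetical class -- not SQLAlchemy's implementation.)
import itertools

_anon = itertools.count(1)

class ToyBindParam:
    def __init__(self, identifying_key, value):
        self._identifying_key = identifying_key
        # anonymized per instance, like an anonymous unique=True bindparam
        self.key = f"{identifying_key}_{next(_anon)}"
        self.value = value

    def _clone(self):
        # keep the identifying key; generate a fresh anonymous .key
        return ToyBindParam(self._identifying_key, self.value)

orig = ToyBindParam("myid", 5)
clone = orig._clone()
```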
