| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
Analysis: null_value is not set if either argument is NULL, so the function
returns 1.
Fix: when either argument is NULL, set null_value to true so that NULL can
be returned.
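A hypothetical repro, assuming the function involved is JSON_SCHEMA_VALID
(the subject line above is truncated):
SELECT JSON_SCHEMA_VALID('{"type":"number"}', NULL);
-- before the fix: 1; after the fix: NULL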
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
starts with '['
Analysis:
When the type is non-scalar and the json document has a syntax error,
it is not detected while validating the type. And since the other validate
functions take a const argument, the error state is never stored.
Fix:
After running through all schemas from the schema list (when there is no
error during validation), continue going over the json document until there
is a parse error or the document has ended.
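A sketch of the affected case, with JSON_SCHEMA_VALID as an assumed entry
point (the exact repro may differ):
SELECT JSON_SCHEMA_VALID('{"type":"array"}', '[1, 2');
-- the trailing syntax error in the document is now detected instead of
-- being ignored once all schemas have validated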
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
--skip-grant-tables
When the server is started with `--event-scheduler=ON` and with
`--skip-grant-tables` (or built as embedded server which has no grant
tables at all), the event scheduler *appears* to be enabled (`SELECT
@@global.event_scheduler` returns `'ON'`), but attempting to
manipulate it in any way returns a misleading error message:
"Cannot proceed, because event scheduler is disabled"
Possible solutions:
1. Fast-fail: fail immediately on startup if `EVENT_SCHEDULER` is set to
any value other than `DISABLED` when starting up without grant
tables, then prevent `SET GLOBAL event_scheduler` while running.
Problem: there are existing setup scripts and code which start with
this combination and assume it will not fail.
2. Warn and change value: if `EVENT_SCHEDULER` is set to any value
other than `DISABLED` when starting, print a warning and change it
immediately to `DISABLED`.
Advantage: The value of the `EVENT_SCHEDULER` system variable after
startup will be consistent with its functional unavailability.
3. Display a clear error: if `EVENT_SCHEDULER` is enabled, but grant
tables are not enabled, then ensure error messages clearly explain
the fact that the combination is not supported.
Advantage: The error message encountered by the end user when
attempting to manipulate the event scheduler (such as `CREATE
EVENT`) is clear and explicit.
This commit implements the combination of solutions (2) and (3): it
will set `EVENT_SCHEDULER=DISABLED` on startup (reflecting the
functional reality) and it will print a startup warning, *and* it will
print a *distinct* error message each time that an end user attempts to
manipulate the event scheduler, so that the end user will clearly understand
the problem even if the startup messages are not visible at that point.
It also adds an MTR test `main.events_skip_grant_tables` to verify the
expected behavior, and the unmodified `main.events_restart` test
continues to demonstrate no change in the error message when the event
scheduler is non-functional for *different* reasons.
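Roughly, the expected behavior after this change (exact message texts are
illustrative, not verbatim):
mariadbd --skip-grant-tables --event-scheduler=ON
-- startup: a warning is printed and the value is changed to DISABLED
SELECT @@global.event_scheduler;  -- now returns 'DISABLED'
CREATE EVENT e1 ON SCHEDULE EVERY 1 HOUR DO SET @x = 1;
-- fails with a distinct error explaining that the event scheduler cannot
-- be used together with --skip-grant-tables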
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
|
|
|
|
|
|
|
|
|
|
|
| |
object of type 'Item_string' in sql/json_schema.cc
Analysis: make_string_literal() returns a pointer of type
Item_basic_constant, which is converted to a pointer of type Item_string.
Item_basic_constant is the base class of Item_string, so the conversion
is a downcast, hence the error.
Fix: using the Item_string constructor directly instead of
downcasting is more appropriate.
|
|
|
|
|
|
|
|
|
| |
always return 1
Analysis: The implementation used double to store the value; longlong is the
better choice.
Fix: Use longlong for multiple_of and for storing the value, and use num_flag
to check for decimals.
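A sketch of the intended behavior, assuming JSON_SCHEMA_VALID semantics as
described in this series:
SELECT JSON_SCHEMA_VALID('{"multipleOf": 3}', '9');   -- 1
SELECT JSON_SCHEMA_VALID('{"multipleOf": 3}', '10');  -- 0 (previously 1)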
|
|
|
|
|
|
|
|
| |
Analysis: multipleOf must be strictly greater than 0, but the
implementation only disallowed values strictly less than 0.
Fix: check whether the value is less than or equal to 0 instead of less than
0, and return true (error) in that case.
Also fixed the incorrect return value for some other keywords.
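For illustration (assumed behavior): a schema such as '{"multipleOf": 0}' is
now rejected as invalid instead of being accepted, e.g.
SELECT JSON_SCHEMA_VALID('{"multipleOf": 0}', '6');  -- now an error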
|
|
|
|
|
|
|
| |
Analysis: The current implementation does not check the number of elements in
the enum array, nor whether they are unique.
Fix: Add a counter for the number of elements, and before inserting an
element into the enum hash check whether it already exists.
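A sketch of the intended effect (assumed repro):
SELECT JSON_SCHEMA_VALID('{"enum": [1, 2, 1]}', '1');
-- the duplicate value makes the schema invalid, so an error is now raised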
|
|
|
|
|
|
|
|
| |
input json schema
Analysis: In case of a syntax error while scanning the json schema, false was
returned and only a warning was raised, even though this should be an error.
Fix: return true (error) instead of false.
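Assumed repro shape:
SELECT JSON_SCHEMA_VALID('{"type":', '{}');
-- the malformed schema now produces an error instead of only a warning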
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
unevaluatedProperties with properties declared in subschemas
Analysis:
When a key fails to validate against "properties" while "properties" is being
treated as an alternate schema, validation needs to fall back on the alternate
schema for "properties" itself ("unevaluatedProperties" in the context of the
bug). But that does not happen, and we end up returning false (= validated).
Fix:
When "properties" fails to validate as an alternate schema, fall back on the
alternate schema for "properties" itself.
|
|
|
|
|
|
|
|
|
|
| |
regex
Analysis:
When initializing the Regexp_processor_pcre object, we set the PCRE2_CASELESS
flag, which enables case-insensitive comparison.
Fix:
Unset the flag after initializing.
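Assumed repro: pattern matching should be case sensitive, e.g.
SELECT JSON_SCHEMA_VALID('{"pattern": "^[A-Z]+$"}', '"abc"');
-- returns 0 after the fix (previously 1 because of PCRE2_CASELESS)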
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
comment 2)
Analysis:
The flag that checks uniqueness gets reset every time, so unique values
cannot be checked correctly.
Fix:
Do not reset the flag that checks uniqueness for null, true and false
values.
comment 3)
Analysis:
The current implementation checks for an appropriate value but does not
return true.
Fix:
Return true on error.
comment 4)
Analysis:
The current implementation did not check the type of the values
inside the required array.
Fix:
Check the values inside the required array.
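For comment 4, an assumed repro:
SELECT JSON_SCHEMA_VALID('{"required": [1]}', '{"a": 1}');
-- values inside the required array must be strings, so the schema is
-- now rejected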
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implementation:
The implementation follows the JSON Schema validation draft 2020.
A JSON schema has basically the same structure as a json object, consisting
of key-value pairs, so it can be parsed in the same manner as
any json object.
However, none of the keywords are mandatory, so guessing the
json value type based only on the keywords would be incorrect.
Hence we need separate objects denoting each keyword.
So during create_object_and_handle_keyword() we create the appropriate
objects based on the keywords and validate each of them individually against
the json document by calling the respective validate() function if the type
matches.
If any of them fails, return false, else return true.
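A minimal usage sketch of the resulting function (assuming the SQL entry
point is JSON_SCHEMA_VALID(schema, doc)):
SELECT JSON_SCHEMA_VALID('{"type": "object",
                           "properties": {"num": {"type": "number",
                                                  "maximum": 5}}}',
                         '{"num": 4}');
-- returns 1; a document such as '{"num": 7}' would return 0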
|
|
|
|
|
|
|
|
|
| |
DELETE and UPDATE
Add the date conditions transformation to the execution paths
of the DELETE and UPDATE commands.
Strip excessive prepared statements from the test file.
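For example (assuming an indexed DATETIME column dt), a statement like
DELETE FROM t1 WHERE YEAR(dt) = 2023;
can now have its condition rewritten into a sargeable range on dt, so the
index on dt can be used.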
|
|
|
|
|
|
|
|
| |
engine Memory
Fix incorrect approach to determine whether a field is part of an index.
Remove "force index" hints from tests.
Add tests with composite indexes
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rewrite datetime comparison conditions into sargeable. For example,
YEAR(col) <= val -> col <= YEAR_END(val)
YEAR(col) < val -> col < YEAR_START(val)
YEAR(col) >= val -> col >= YEAR_START(val)
YEAR(col) > val -> col > YEAR_END(val)
YEAR(col) = val -> col BETWEEN YEAR_START(val) AND YEAR_END(val)
Do the same with DATE(col), for example:
DATE(col) <= val -> col <= DAY_END(val)
After such a rewrite, an index lookup on column "col" can be employed.
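For instance (hypothetical table), given
CREATE TABLE orders (d DATE, KEY(d));
SELECT * FROM orders WHERE YEAR(d) = 2023;
the condition is rewritten to something equivalent to
d BETWEEN '2023-01-01' AND '2023-12-31', which can use the index on d.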
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
variables
In MariaDB, we have a confusing problem where:
* The transaction_isolation option can be set in a configuration file, but it cannot be set dynamically.
* The tx_isolation system variable can be set dynamically, but it cannot be set in a configuration file.
Therefore, we have two different names for the same thing in different contexts. This is needlessly confusing, and it complicates the documentation. The same thing applies to transaction_read_only.
MySQL 5.7 solved this problem by making them into system variables: https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-20.html
This commit takes a similar approach by adding new system variables and marking the original ones as deprecated. This commit also resolves some legacy problems related to SET STATEMENT and transaction_isolation.
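The intended usage is roughly (deprecation warning text may differ):
SET GLOBAL transaction_isolation = 'READ-COMMITTED';  -- new variable, works
SET GLOBAL tx_isolation = 'READ-COMMITTED';  -- still works, now deprecated
and transaction_isolation can now also be set in a configuration file.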
|
|\ |
|
| |\ |
|
| | |\ |
|
| | | |\ |
|
| | | | |\ |
|
| | | | | |\ |
|
| | | | | | |\ |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
Commit a923d6f49c1ad6fd3f4d6ec02e444c26e4d1dfa8 disabled numeric setting
of character_set_* variables with non-default values:
MariaDB [(none)]> set character_set_client=224;
ERROR 1115 (42000): Unknown character set: '224'
However, the corresponding binlog functionality still writes numeric
values into log events, and this will break binlog replay if the value is
not the default. Now make the server use the 'String' type for
'character_set_client' when generating binlog events.
Before:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=224,@@session.collation_connection=224,@@session.collation_server=33/*!*/;
After:
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=utf8mb4,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
Note: prior to the previous commit, setting '224', '45' or 'utf8mb4' had
the same effect, as they all set the parameter to 'utf8mb4'.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
- agressively -> aggressively
- exising -> existing
- occured -> occurred
- releated -> related
- seperated -> separated
- sucess -> success
- use use -> use
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
* it isn't "pfs" function, don't call it Item_func_pfs,
don't use item_pfsfunc.*
* tests don't depend on performance schema, put in the main suite
* inherit from Item_str_ascii_func
* use connection collation, not utf8mb3_general_ci
* set result length in fix_length_and_dec
* do not set maybe_null
* use my_snprintf() where possible
* don't set m_value.ptr on every invocation
* update sys schema to use the format_pico_time()
* len must be size_t (compilation error on Windows)
* the correct function name for double->double is fabs()
* drop volatile hack
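A usage sketch, assuming the semantics follow MySQL's FORMAT_PICO_TIME:
SELECT format_pico_time(3501);  -- expected: '3.50 ns'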
|
|\ \ \ \ \ \ \ \
| |/ / / / / / / |
|
| | | | | | | | |
|
|\ \ \ \ \ \ \ \
| |/ / / / / / / |
|
| |\ \ \ \ \ \ \
| | |/ / / / / / |
|
| | |\ \ \ \ \ \
| | | |/ / / / / |
|
| | | |\ \ \ \ \
| | | | |/ / / / |
|
| | | | |\ \ \ \
| | | | | |/ / / |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
Do not set any flags in the items for constant subformulas TRUE/FALSE when
checking pushability of a formula into a view. Occurrences of these
subformulas can be ignored when checking pushability of the formula.
At the same time the items used for these constants became immutable
starting from version 10.7.
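A hypothetical illustration of the affected shape:
CREATE VIEW v AS SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a;
SELECT * FROM v WHERE TRUE AND a > 10;
-- the constant TRUE subformula is now ignored, so "a > 10" can still be
-- pushed down into the view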
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
pointer of type 'const struct MY_CHARSET_HANDLER' in my_convert
Type_handler::partition_field_append_value() erroneously
passed the address of my_collation_contextually_typed_binary
to conversion functions copy_and_convert() and my_convert().
This happened because generate_partition_syntax_for_frm()
was called from mysql_create_frm_image() in the stage when
the fields in List<Create_field> can still contain unresolved
contextual collations, like "binary" in the reported crash scenario:
ALTER TABLE t CHANGE COLUMN a a CHAR BINARY;
Fix:
1. Splitting mysql_prepare_create_table() into two parts:
- mysql_prepare_create_table_stage1() iterates through
List<Create_field> and calls Create_field::prepare_stage1(),
which performs basic attribute initialization, including
context collation resolution.
- mysql_prepare_create_table_finalize() - the rest of the
old mysql_prepare_create_table() code.
2. Changing mysql_create_frm_image():
It now calls:
- mysql_prepare_create_table_stage1() in the very
beginning, before the partition related code.
- mysql_prepare_create_table_finalize() in the end,
instead of the old mysql_prepare_create_table() call
3. Adding mysql_prepare_create_table() as a wrapper
for two calls:
mysql_prepare_create_table_stage1() ||
mysql_prepare_create_table_finalize()
so the code stays unchanged in the other places
where mysql_prepare_create_table() was used.
4. Changing prototype for Type_handler::Column_definition_prepare_stage1()
Removing arguments:
- handler *file
- ulonglong table_flags
Adding a new argument instead:
- column_definition_type_t type
This makes it possible to call Column_definition_prepare_stage1(), and
therefore mysql_prepare_create_table_stage1(),
before instantiation of a handler.
This simplifies the code, because in the case of a partitioned table,
mysql_create_frm_image() creates a handler of the underlying
partition first, then frees it and creates a ha_partition
instance instead.
Before the fix, mysql_prepare_create_table() was called with the final
(ha_partition) handler.
5. Moving parts of Column_definition_prepare_stage1() which
need a pointer to handler and table_flags to
Column_definition_prepare_stage2().
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
Test case and minor fixes by Daniel Black
Reviewer: Alexander Barkov
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
The failed assertion was about encountering the same rowid value in
two different partitions.
This wasn't possible with InnoDB previously: InnoDB used a global counter
to produce rowid values for the hidden PK.
After the fix for MDEV-19506, it uses per-table counters, so it is easy
to get the same hidden PK values in different tables.
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
apply_selectivity_for_table on SELECT
The crash happened due to a rows=2 vs rows=1 difference between how the
estimate of the number of rows in a derived table is computed in
TABLE_LIST::fetch_number_of_rows() and in JOIN::add_keyuses_for_splitting().
Made JOIN::add_keyuses_for_splitting() use the result of the computations in
TABLE_LIST::fetch_number_of_rows().
|
| | | | | | | | |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
Created tests for "delete" based on update_use_source.test
For the update_use_source.test tests, data recovery in the table has been changed
from a rollback transaction to a complete delete and re-insert of the data with
optimize table. Cases are now being checked on three engines.
Added tests for update/delete with LooseScan and DuplicateWeedout optimization strategies
Added tests for engine MEMORY on delete and update
Added tests for multi-update with JSON_TABLE
Added tests for multi-update and multi-delete for engine Connect
|
| | | | | | | | |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
The commits for MDEV-30538 and MDEV-30586 could not be cherry-picked into
11.0 separately.
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
This patch allows to use semi-join optimization at the top level of
single-table update and delete statements.
The problem of supporting such optimization became easy to resolve after
processing of single-table update/delete statements started using the JOIN
structure. This made it possible to use JOIN::prepare() not only for
multi-table updates/deletes but for single-table ones as well. This was done
in the patch for mdev-28883:
Re-design the upper level of handling UPDATE and DELETE statements.
Note that JOIN::prepare() detects all subqueries that can be considered
as candidates for semi-join optimization. The code added by this patch
looks for such candidates at the top level, and if any are found
in the processed single-table update/delete, the statement is handled in
the same way as a multi-table update/delete.
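For example, a statement of this shape (hypothetical tables):
DELETE FROM t1 WHERE t1.a IN (SELECT b FROM t2);
can now have its IN subquery converted to a semi-join, as in the multi-table
case.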
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | | | | | | | |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
An ORDER BY clause without a LIMIT clause can be removed from DELETE
statements.
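For example:
DELETE FROM t1 ORDER BY a;           -- ORDER BY has no effect, now removed
DELETE FROM t1 ORDER BY a LIMIT 10;  -- ORDER BY is still honored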
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
This bug caused a crash of the server on the second execution of a stored
function that used a DELETE or UPDATE statement, if the first execution
of this function reported an error encountered after the prepare phase.
This happened because in such cases the executed DELETE/UPDATE statement
remained marked as prepared. As a result, the second execution of the stored
function skipped the prepare phase for the statement altogether, and the
statement could not be executed properly.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
The problem was caused by an assertion that is not valid anymore.
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
This patch fixes not only the assertion failure in the function
Field_iterator_table_ref::set_field_iterator() but also:
- fixes the problem of forced materialization of derived tables used
in subqueries contained in WHERE clauses of single-table and multi-table
UPDATE and DELETE statements
- fixes the problem of MDEV-17954 that prevented execution of multi-table
DELETE statements whose WHERE clauses reference the tables that are
updated.
The patch must be considered a complement to the patch for MDEV-28883.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
|/ / / / / / /
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
This patch introduces a new way of handling UPDATE and DELETE commands at
the top level after the parsing phase. This new way of processing update
and delete statements can be seen in the implementation of the prepare()
and execute() methods of the new Sql_cmd_dml class. This class, derived
from the Sql_cmd class, can be considered an interface class for processing
commands such as SELECT, INSERT, UPDATE, DELETE and other commands
manipulating data in tables.
With this patch processing of update and delete statements after parsing
proceeds by the following schema:
- precheck of the access rights is performed for the used tables
- the used tables are opened
- context analysis phase is performed for the statement
- the used tables are locked
- the statement is optimized and executed
- clean-up is performed for the statement
The implementation of the method Sql_cmd_dml::execute() adheres to this
schema. The virtual functions of the class Sql_cmd_dml used for the precheck
of the access rights, context analysis, optimization and execution make it
possible to adjust this schema for processing data manipulation statements
of any type.
This schema of processing data manipulation statements is taken from the
current MySQL code. Moreover, the definition of the class Sql_cmd_dml
introduced in this patch is almost a full replica of the corresponding class
in MySQL. However, the implementation of the derived classes for update and
delete statements is quite different. This implementation employs the JOIN
class for all kinds of update and delete statements. It allows the main bulk
of context analysis actions to be performed by the function JOIN::prepare().
This guarantees that the characteristics and properties of the statement
tree discovered during context analysis for the optimization phase are the
same for single-table and multi-table updates and deletes.
With this patch the following functions are gone:
mysql_prepare_update(), mysql_multi_update_prepare(),
mysql_update(), mysql_multi_update(),
mysql_prepare_delete(), mysql_multi_delete_prepare(), mysql_delete().
The code within these functions has been reused as much as possible, though.
The functions mysql_test_update() and mysql_test_delete() are also not
needed anymore. The method Sql_cmd_dml::prepare() serves the processing of
- update/delete statement
- PREPARE stmt FROM "<update/delete statement>"
- EXECUTE stmt when stmt is prepared from update/delete statement.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
Removed an old '* 2' from the HASH join cost. This was made obsolete by
a later patch that added cost for copying the data out from the join buffer
to table->record.
I also added some 'echo' to some test cases to make it easier to debug
test case changes.
Test case changes:
- subselect3_jcl6 and subselect_sj2_jcl6 result changes as materialized
tables changed to hash join + first_match
|