| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
For table references to CTEs the field TABLE_LIST::db must be set to
an empty string, as is done for table references to derived tables, so
that CTEs are processed in the same way as derived tables are.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
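As a minimal illustration of the equivalence being aimed for (the table and column names here are hypothetical), the CTE reference below should be resolved the same way as the derived-table reference:
```sql
-- A CTE reference and the equivalent derived-table reference; neither kind
-- of reference carries a schema (db) qualifier of its own.
WITH cte AS (SELECT a FROM t1)
SELECT * FROM cte;

SELECT * FROM (SELECT a FROM t1) AS dt;
```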
|
|
|
|
|
|
| |
truncate check constraint expressions
- Reviewed by: daniel@mariadb.org
|
|
|
|
|
|
|
| |
- MDEV-24177: main.sp2 test fails: Result length mismatch
- MDEV-24178: main.upgrade_MDEV-19650 test fails: Result length mismatch
Reviewed by: serg@mariadb.com
|
|
|
|
|
|
|
|
| |
mergeable derived table
Do not check privileges for derived tables/CTEs and their fields.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
|
|
|
|
|
| |
events.
The log line should be added after the filters have been applied.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
encountered
The new option --log-innodb-page-corruption is introduced.
When this option is set, the backup is not interrupted if a corrupted
InnoDB page is detected. Instead, all corrupted pages found are logged in
the innodb_corrupted_pages file in the backup directory and the backup
finishes with an error.
For incremental backups corrupted pages are also copied to the .delta file,
because we cannot do an LSN check for such pages during backup;
innodb_corrupted_pages will also be created in the incremental backup
directory.
During --prepare, the corrupted pages list is read from the file just after
the redo log is applied, and each page from the list is checked to see whether
it is allocated in its tablespace or not. If it is not allocated, it is zeroed
out, flushed to the tablespace and removed from the list. If all pages are
removed from the list, then --prepare finishes successfully and the
innodb_corrupted_pages file is removed from the backup directory. Otherwise
--prepare finishes with an error message and innodb_corrupted_pages contains
the list of pages that were detected as corrupted during backup and are
allocated in their tablespaces, which means the backup directory contains
corrupted InnoDB pages and the backup cannot be considered consistent.
For an incremental --prepare, corrupted pages from the .delta files are applied
to the base backup, innodb_corrupted_pages is read from both the base and
incremental directories, and the same processing is applied to the corrupted
pages list as for a full --prepare. The innodb_corrupted_pages file is
modified or removed only in the base directory.
If DDL happens during backup, it is also processed at the end of the backup
so that innodb_corrupted_pages contains the correct tablespace names.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction is still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
thd->release_transactional_locks(). The thd function will only call
the mdl_context function if there are no active transactional locks.
In 10.6 there will be a better fix: the return value of some trans_xxx()
functions will be changed to indicate whether the transaction was closed
or not. This will avoid the need for the indirect call.
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of these
do additional work (like close_thread_tables) before calling
release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
select_create::send_eof()
- Fixed wrong indentation in injector::transaction::commit()
|
| |
|
|
|
|
| |
Reviewed by: serg@mariadb.com
|
| |
|
|
|
|
| |
Add test case
|
|
|
|
|
|
|
|
|
|
|
|
| |
A bogus error message was issued when a condition was pushed into a
materialized derived table or view specified as a union of selects with
aggregation when the corresponding columns of the selects had different
names. This happened because the expression pushed into the HAVING clauses
of the selects was adjusted for the column names of the first select of the
union. The easiest solution was to rename the columns of the other selects
to be name-compatible with the columns of the first select.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
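A hypothetical sketch of the scenario (all names invented): the selects of the union name the corresponding column differently, and a condition on the view column is pushed into the HAVING clauses of both selects:
```sql
-- Illustrative only: column `a AS x` in the first select corresponds to
-- column `b` in the second, so the pushed condition has to be re-mapped
-- to the second select's column names.
CREATE TABLE t1 (a INT, b INT);
CREATE VIEW v AS
  SELECT a AS x, COUNT(*) AS cnt FROM t1 GROUP BY a
  UNION
  SELECT b, COUNT(*) FROM t1 GROUP BY b;
SELECT * FROM v WHERE x > 1;
```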
|
|
|
|
|
|
|
|
| |
sequence.
Explicitly set the encoding to UTF-8 when writing to the file and
replace wide characters from MTR_RES_FAILED when writing to the
XML file. Wide characters are not allowed in XML.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
in TABLE_LIST::is_recursive_with_tables
After the patch for MDEV-23619 the code of st_select_lex::cleanup started
using the list st_select_lex::leaf_tables. This list is built for any
query with a FROM clause in the function setup_tables(). If such a query is
used in a stored procedure, it must be ensured that the list is empty
before each new call of the procedure. Otherwise, if the first call of
the procedure succeeds while the second call reports an error before
setup_tables() is invoked, the list st_select_lex::leaf_tables would
point to memory that has already been freed.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
|
|
|
| |
Add wait_condition.
|
|
|
|
| |
Add primary key and wait condition.
|
|
|
|
| |
Add wait_conditions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Due to a premature cleanup of the unit that specified a recursive CTE
used in the second operand of union the server fell into an infinite
loop in the reported test case. In other cases this premature cleanup
could cause other problems.
The bug is the result of a not quite correct fix for MDEV-17024. The
unit that specifies a recursive CTE has to be cleaned only after the
cleanup of the last external reference to this CTE. It means that
cleanups of the unit triggered not by the cleanup of a external
reference to the CTE must be blocked.
Usage of local table chains in selects to get external references to
recursive CTEs was not correct either because of possible merges of
some selects.
Also fixed a minor bug in st_select_lex::set_explain_type() that caused
typing 'RECURSIVE UNION' instead of 'UNION' in EXPLAIN output for external
references to a recursive CTE.
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This bug could manifest itself for a query with a WHERE condition containing
a top-level OR formula such that each conjunct contained a single-range
condition supported by the same index. One of these range conditions must
be fully covered by another range condition that is used later in the OR
formula. Additionally, at least one of these conditions should be ANDed with
a sargable range condition supported by a different index.
There were several attempts to fix related problems for OR conditions after
the backport of range optimizer code from MySQL (commit
0e19f3e36f7842583feb6bead2c2600cd620bced). Unfortunately the first of these
fixes contained a typo that remained unnoticed until recently. This typo led
to the rejection of valid range accesses. This patch fixes that typo.
The fix revealed two more bugs: one in a constructor for SEL_ARG,
the other in the function tree_or(). Both are fixed in this patch.
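For concreteness, a hypothetical query shape matching the description above (table, index, and constant values are made up):
```sql
-- Both OR branches have a range on idx1(a); a < 5 is fully covered by the
-- later range a < 10, and the first branch is ANDed with a sargable
-- condition on a different index, idx2(b).
CREATE TABLE t1 (a INT, b INT, KEY idx1 (a), KEY idx2 (b));
SELECT * FROM t1 WHERE (a < 5 AND b = 3) OR a < 10;
```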
|
| |
| |
| |
| | |
Add a testcase.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Part#1: Revert the patch that caused it:
commit 291be494744abe90f4bdf6b5a35c4c26ee8ddda5
Author: Igor Babaev <igor@askmonty.org>
Date: Thu Sep 24 22:02:00 2020 -0700
MDEV-23811: With large number of indexes optimizer chooses an inefficient plan
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- The patch fixes report generation on warnings.
To repeat the error run single worker:
```
./mtr --mysqld=--lock-wait-timeout=-xx 1st 1st --force --parallel 1
```
or `N` workers with `N+1` tests with failures and `force`
```
./mtr --mysqld=--lock-wait-timeout=-xx 1st 1st grant5 --force --parallel 2
```
- The patch also does a cosmetic fix of the `current_test` log file, which holds the stale
`CURRENT TEST:..` value of the previous test written by `mark_log()` in case of an `unknown option`;
because of that, the logic that uses its content doesn't output valid log content and doesn't
generate a valid `$test->{'comment'}` message.
- Close the socket/handler after removing the handler from IO, for
consistency.
Reviewed by: serg@mariadb.com
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Changed the test so that it does not rely on specific auto increment
ids. With Galera's default wsrep_auto_increment_control setting it is
not guaranteed that auto increments always start from 1. The test was
occasionally failing due to result content mismatch.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
|
| | |
|
|\ \
| |/ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
__memcmp_avx2_movbe from native_compare
The issue here was that the system variable max_sort_length was being applied
to decimals, truncating their values to the number of bytes set by
max_sort_length.
This was leading to a buffer overflow, as the values were written
to the buffer without truncation while the offset was advanced only by
the number of bytes (set by max_sort_length) that are needed for comparison.
The fix is to not apply max_sort_length to fixed-size types like INT and
DECIMAL, and to apply max_sort_length only to CHAR, VARCHAR, TEXT and
BLOB columns.
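A small sketch of the intended behaviour after the fix (table and values are hypothetical):
```sql
-- max_sort_length limits only how many bytes of CHAR/VARCHAR/TEXT/BLOB
-- values take part in sorting; fixed-size types such as INT and DECIMAL
-- are always compared on their full value.
CREATE TABLE t1 (d DECIMAL(65,30), s TEXT);
SET SESSION max_sort_length = 64;
SELECT d, s FROM t1 ORDER BY d, s;
```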
|
| |
| |
| |
| | |
This was missed in commit d6ea03fa94dc008b30932bf1e8ea40c3346f51c8.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add --system={all, users, plugins, udfs, servers, stats, timezones}
This will dump system information from the server in
a logical form like:
* CREATE USER
* GRANT
* SET DEFAULT ROLE
* CREATE ROLE
* CREATE SERVER
* INSTALL PLUGIN
* CREATE FUNCTION
"stats" is the innodb statistics tables or EITS and
these are dumped as INSERT/REPLACE INTO statements
without recreating the table.
"timezones" is the collection of timezone tables
which are important to transfer to generate identical
results on restoration.
Two other options have an effect on the SQL generated by
--system=all. These are mutually exclusive of each other.
* --replace
* --insert-ignore
--replace will include "OR REPLACE" into the logical form
like:
* CREATE OR REPLACE USER ...
* DROP ROLE IF EXISTS (MySQL-8.0+)
* CREATE OR REPLACE ROLE ...
* UNINSTALL PLUGIN IF EXISTS (10.4+) ... (before INSTALL PLUGIN)
* DROP FUNCTION IF EXISTS (MySQL-5.7+)
* CREATE OR REPLACE [AGGREGATE] FUNCTION
* CREATE OR REPLACE SERVER
--insert-ignore uses the construct " IF NOT EXISTS" where
supported in the logical syntax.
'CREATE OR REPLACE USER' includes protection against
being run as the same user that is importing the mysqldump.
Includes experimental support for dumping mysql-5.7/8.0
system tables and exporting logical SQL compatible with MySQL.
Updates the mysqldump man page to include this information
(and removes an obsolete bug reference).
Reviewed-by: anel@mariadb.org
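As a rough illustration (not verbatim tool output; the account and role names are invented), --system=users together with --replace is expected to emit logical SQL along these lines:
```sql
-- Shape of the generated statements only; actual output depends on the
-- server's grant tables and the mysqldump version.
CREATE OR REPLACE USER 'app'@'localhost';
GRANT SELECT, INSERT ON db1.* TO 'app'@'localhost';
CREATE OR REPLACE ROLE 'readers';
SET DEFAULT ROLE 'readers' FOR 'app'@'localhost';
```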
|
|
|
|
| |
The test itself is not deterministic.
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Implement a different fix for
"MDEV-19232: Floating point precision / value comparison problem"
Instead of truncating decimal values after every division,
truncate them for comparison purposes.
This reverts commit 62d73df6b270 but keeps the test.
|
| |
| |
| |
| | |
followup for 3e807d255e0e and eae10a87ff60
|
| | |
|
| |
| |
| |
| |
| | |
and restore the test modified in the same commit
(the non-replication related deadlock will be reported separately)
|
| |
| |
| |
| |
| |
| |
| |
| | |
upon incremental backup
mariabackup deallocated an uninitialized
write_filt_ctxt.u.wf_incremental_ctxt in xtrabackup_copy_datafile() when
some table had to be skipped due to a parsed DDL redo log record.
|
| |
| |
| |
| |
| | |
Remove an unnecessary condition and add a necessary include
for the non-debug Galera library.
|
|\ \
| |/ |
|
| | |
|
| |
| |
| |
| | |
and remove unused files
|
| |
| |
| |
| |
| |
| |
| | |
`LOCK TABLES view_name` should require
* the invoker to have SELECT and LOCK TABLES privileges on the view
* either the invoker or the definer (the latter only with SQL SECURITY DEFINER)
to have SELECT and LOCK TABLES privileges on the used tables/views (see the
grant sketch below).
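A hypothetical grant set satisfying the rule above (database, view, table, and account names are made up); note that the LOCK TABLES privilege is granted at the database level:
```sql
-- The invoker needs SELECT and LOCK TABLES covering the view; with
-- SQL SECURITY DEFINER the definer (otherwise the invoker) needs
-- SELECT and LOCK TABLES covering the underlying tables/views.
GRANT SELECT, LOCK TABLES ON db1.* TO 'invoker'@'%';
GRANT SELECT, LOCK TABLES ON db1.* TO 'definer'@'%';
LOCK TABLES db1.v1 READ;
UNLOCK TABLES;
```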
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The maximum InnoDB key length is 3500, which is hardcoded in
ha_innobase::max_supported_key_length(). The maximum number of InnoDB indexes
is configured with the MAX_INDEXES macro (see also the MAX_KEY definition).
The same is currently implemented for the BLACKHOLE storage engine.
Cherry picked from percona-server 0d90d81c3c507a6b1476246a405504f6e4ef9d4d
Original lp bug 1733049
Reviewed-by: daniel@mariadb.org
|
| |
| |
| |
| | |
The DEFAULT for replicate_do_db is "" (the empty string), as our documentation states.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
problem:
========
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 69:
Assertion text: '@@global.max_connections = @start_max_connections'
Assertion result: '0'
mysqltest: In included file "./include/assert.inc":
included from mysql-test/suite/sys_vars/t/rpl_init_slave_func.test at line 86:
Assertion text: '@@global.max_connections = @start_max_connections + 1'
Assertion result: '0'
Analysis:
=========
A slave SQL thread sets its Running state to Yes very early in its
initialisation, before the majority of initialisation actions, including
executing the init_slave command, are done. Thus the test case has a race
condition: the initial replication setup might finish executing later
than the test case's SET GLOBAL init_slave, making the test case see its
effect where it checks for its absence.
Fix:
===
Include 'sync_slave_sql_with_master.inc' at the beginning of the test to
ensure that the slave applier has completed the execution of the 'init_slave'
command and proceeded to event application. Replace the apparently needless
RESET MASTER / RESET SLAVE etc.
Patch is based on:
https://github.com/percona/percona-server/pull/1464/commits/b91e2e6f90611aa299c302929fb8b068e8ac0dee
Author: laurynas-biveinis
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The patch removes dict_index_t::stats_latch. Table/index statistics are now
protected with dict_sys->mutex. That way statistics computation can
happen in parallel in several threads and dict_sys->mutex will be locked
only for a short period of time.
This patch is joint work with Marko Mäkelä.
dict_index_t::lock: made mutable, which allows passing a const pointer
when only the lock is touched in an object
btr_height_get(), btr_get_size(): make the index argument const for better
type safety
btr_estimate_number_of_different_key_vals(): now returns computed values
instead of setting fields in dict_index_t directly
remove everything related to dict_index_t::stats_latch
dict_stats_index_set_n_diff(): now returns computed values instead
of setting fields in dict_index_t directly
dict_stats_analyze_index(): now returns computed values instead
of setting fields in dict_index_t directly
Reviewed by: Marko Mäkelä
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Let us introduce a dummy variable innodb_max_purge_lag_wait
for waiting until the InnoDB history list length is below
the user-specified limit. Specifically,
SET GLOBAL innodb_max_purge_lag_wait=0;
should wait for all history to be purged. This could be useful
when upgrading from an older version to MariaDB 10.3 or later,
to avoid hitting MDEV-15912.
Note: the history cannot be purged if there exist transactions
that may see old versions.
Reviewed by: Vladislav Vaintroub
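A minimal usage sketch based on the description above (running it before an upgrade is an assumption, not part of the patch):
```sql
-- Block until the purge of committed transaction history has caught up.
SET GLOBAL innodb_max_purge_lag_wait = 0;
-- Optionally verify that the history list has drained.
SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length';
```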
|
| |
| |
| |
| |
| |
| |
| | |
session_track_system_variables and max_relay_log_size.
Lock LOCK_global_system_variables around the get_one_variable() call
in Session_sysvars_tracker::store_variable().
|