Commit message log

This bug was introduced by commit
be00e279c6061134a33a8099fd69d4304735d02e, applied for the task
MDEV-6480, which allowed the removal of top-level disjuncts from
WHERE conditions when the range optimizer evaluated them as always
equal to FALSE/NULL.
If such disjuncts are removed, the WHERE condition may become an AND
formula, and if this formula contains multiple equalities, the field
JOIN::item_equal must be updated to refer to these equalities. The
commit mentioned above forgot to do this, which could cause crashes
for some queries.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
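A minimal sketch of the missing step (illustrative only: conds,
m_cond_equal and the exact call site are assumptions, not necessarily
the real optimizer code):
// After removing always-FALSE disjuncts the condition may have become
// an AND; its multiple equalities must be re-attached to the JOIN.
if (conds && conds->type() == Item::COND_ITEM &&
    ((Item_cond*) conds)->functype() == Item_func::COND_AND_FUNC)
{
  // assumption: the AND item carries the collected multiple equalities
  join->item_equal= &((Item_cond_and*) conds)->m_cond_equal;
}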
Restore the old behavior where, without a debugger, mtr does not
wait for mysqld to start. This was broken in feacc0aaf2.
MDEV-20330 Combination of "," (comma), cross join and left join fails to parse
The bug occurred where a float token containing a dot with an 'e'
notation was dropped from the request completely.
This caused a variety of invalid SQL statements like:
select id 1.e, char 10.e(id 2.e), concat 3.e('a'12356.e,'b'1.e,'c'1.1234e)1.e, 12 1.e*2 1.e, 12 1.e/2 1.e, 12 1.e|2 1.e, 12 1.e^2 1.e, 12 1.e%2 1.e, 12 1.e&2 from test;
to be parsed as if they were:
select id, char(id), concat('a','b','c'), 12*2, 12/2, 12|2, 12^2, 12%2, 12&2 from test.test;
This parsing occurs when the 'e' is followed by any of:
( ) . , | & % * ^ /
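A sketch of the decision the tokenizer has to make (starts_exponent()
and its arguments are illustrative, not the actual sql_lex.cc
identifiers):
// An 'e' after "1." starts a float exponent only when a digit, or a
// sign followed by a digit, comes next; otherwise it begins a new
// identifier token and must not be dropped.
static bool starts_exponent(CHARSET_INFO *cs, const char *p)
{
  // p points at the 'e' that follows the dot
  if (my_isdigit(cs, p[1]))
    return true;
  return (p[1] == '+' || p[1] == '-') && my_isdigit(cs, p[2]);
}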
Currently, SST scripts assume that the filename specified in
the --log-bin-index argument either does not contain an extension
or uses the standard ".index" extension. Similar assumptions are
used for the log_bin_index parameter read from the configuration
file. This commit adds support for arbitrary extensions for the
index file paths.
If the server is started with the --innodb-force-recovery argument
on the command line, then during SST this argument can be passed to
mariabackup only at the --prepare stage, and accordingly it must be
removed from the --mysqld-args list (and it should not be passed to
mariabackup otherwise).
This commit fixes this flaw in the SST scripts and adds a test that
checks the ability to run the joiner node in a configuration that
uses --innodb-force-recovery=1.
statement
This bug led to reporting a bogus message "No database selected" for
DELETE statements that used subqueries in their WHERE conditions when
these subqueries contained references to CTEs.
The bug happened because the grammar rule for the DELETE statement did
not call the function
LEX::check_cte_dependencies_and_resolve_references(), and as a result
references to CTEs were not identified as such.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This bug concerned only CREATE TABLE statements of the form
CREATE TABLE <table name> AS <with clause> <union>.
For such statements not all references to CTEs used in <union> were
resolved, and a bogus error message was reported for the first
unresolved reference.
This happened because for such statements the function resolving
references to CTEs, LEX::check_cte_dependencies_and_resolve_references(),
was called prematurely in the parser.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
result
result
fil_space_decrypt(): change the signature to return the status via
dberr_t only. Also replace an impossible condition with an assertion
and prove it via test cases.
This bug affected queries with two or more references to a CTE that
refers to another CTE, when the definition of the latter contained an
invocation of a stored function that used a base table. The bug could
lead to a bogus error message or to an assertion failure.
For any non-first reference to a CTE cte1, With_element::clone_parsed_spec()
is called; it parses the specification of cte1 to construct the unit
structure for this usage of cte1. If cte1 refers to another CTE cte2
defined outside of the specification of cte1, then
With_element::clone_parsed_spec() has to be called for cte2 as well.
This call is made by the function LEX::resolve_references_to_cte()
within the invocation of With_element::clone_parsed_spec() for cte1.
When the specification of a CTE is parsed, all table references
encountered in it must be added to the global list of table references
for the query. As the specification for a non-first usage of a CTE is
parsed in a recursive call of the parser, the function
With_element::clone_parsed_spec() invoked at this recursive call
should take care of appending the table references encountered in the
specification of cte1 to the list of table references created for the
query. And it should do so after the call of
LEX::resolve_references_to_cte() that resolves references to CTEs
defined outside of the specification of cte1, because this call may
invoke the parser again for the specifications of other CTEs, and the
table references from their specifications must ultimately appear in
the global list of table references of the query.
The code of With_element::clone_parsed_spec() misplaced the call of
LEX::resolve_references_to_cte(). As a result, LEX::query_tables_last,
which is supposed to point to the field 'next_global' of the last
element in the global list of table references, actually pointed to
'next_global' of the previous element.
This inconsistency caused serious problems when table references used
in stored functions invoked in cloned specifications of CTEs were
added to the global list of table references.
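The required ordering, as a schematic sketch (the name of the
list-append helper and the argument list are illustrative):
// Inside the clone of cte1's specification:
// 1) resolve references to CTEs defined outside of cte1 first; this
//    may recursively parse cte2, cte3, ... and grow the global list
if (lex->resolve_references_to_cte(tables, tables_last))
  return NULL;
// 2) only now append cte1's own table references, so that
//    query_tables_last ends up pointing at the true end of the list
append_cloned_table_refs(old_lex, lex);   // stand-in for the append step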
recognized as an internal or external command, operable program or
batch file.
Removed grep from the mysqldump command stream and instead extended
the search_file pattern to search for rows containing binary zeros
rather than any occurrence of '00' in the input.
temporary table
When a transaction creates or drops temporary tables and afterwards
one of its statements faces an error, even the transactional table
statements' cached ROW format events get written into the binlog and
are visible after the transaction's commit.
Fixed with a proper analysis of whether the errored-out statement
needs to be rolled back in the binlog.
For instance, the mere fact that CREATE or DROP for temporary tables
was already cached by previous statements does not by itself cause
the errored-out statement's events to be retained in the cache.
Conversely, if the statement itself creates or drops a temporary
table, it cannot be rolled back - this rule remains.
Use a better error message when KILL fails, even in the case where
TOI fails.
* Fix a NULL-pointer reference in error handling
* Add an mtr suppression for galera_ssl_upgrade
Mutex order violation when the wsrep BF thread kills a conflicting
trx. The stack is:
wsrep_thd_LOCK()
wsrep_kill_victim()
lock_rec_other_has_conflicting()
lock_clust_rec_read_check_and_lock()
row_search_mvcc()
ha_innobase::index_read()
ha_innobase::rnd_pos()
handler::ha_rnd_pos()
handler::rnd_pos_by_record()
handler::ha_rnd_pos_by_record()
Rows_log_event::find_row()
Update_rows_log_event::do_exec_row()
Rows_log_event::do_apply_event()
Log_event::apply_event()
wsrep_apply_events()
and the mutexes are taken in the order
lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data
When a normal KILL statement is executed, the stack is:
innobase_kill_query()
kill_handlerton()
plugin_foreach_with_mask()
ha_kill_query()
THD::awake()
kill_one_thread()
and the mutexes are taken in the order
victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex
This patch is the plan D variant for fixing the potential mutex
locking order violation exercised by BF aborting and KILL command
execution.
In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution in the
first node: there is no concurrent replication applying and no
concurrent DDL executing. Therefore there is no risk of BF aborting
happening in parallel with KILL command execution either, and
potential mutex deadlocks between the different mutex access paths of
KILL command execution and BF aborting cannot occur.
TOI replication is used in this approach purely as a means to provide
isolated KILL command execution in the first node. The KILL command
should not (and must not) be applied in secondary nodes. In this
patch, we make sure of this by skipping KILL execution in secondary
nodes, in the applying phase, where we bail out if an applier thread
is trying to execute a KILL command. This is effective, but the
applying of the KILL command could be skipped much earlier as well.
This patch also fixes unprotected calls to wsrep_thd_abort() that end
up in wsrep_abort_transaction(); these are now made while holding
THD::LOCK_thd_data.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
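The unprotected-abort part of the fix, as a minimal sketch (the
argument list of wsrep_abort_transaction() is simplified here):
// Hold the victim's LOCK_thd_data for the duration of the BF abort.
mysql_mutex_lock(&victim_thd->LOCK_thd_data);
wsrep_abort_transaction(bf_thd, victim_thd, signal);
mysql_mutex_unlock(&victim_thd->LOCK_thd_data);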
When restoring lastinx, last_key.keyinfo must be updated as well. A
good example of this is in _ma_check_index().
The point of failure is extra(HA_EXTRA_NO_KEYREAD) in
ha_maria::get_auto_increment():
1. extra(HA_EXTRA_KEYREAD) saves lastinx;
2. maria_rkey() changes the index, and with it lastinx and
last_key.keyinfo;
3. extra(HA_EXTRA_NO_KEYREAD) restores lastinx, but not
last_key.keyinfo.
So after step 3 there is a discrepancy between lastinx and
last_key.keyinfo.
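A sketch of the repair, following the pattern in _ma_check_index()
that the commit points to (save_lastinx is an illustrative local;
the surrounding code is assumed):
// Restoring the saved index must also restore the key descriptor,
// keeping lastinx and last_key.keyinfo consistent with each other.
info->lastinx= save_lastinx;
info->last_key.keyinfo= info->s->keyinfo + save_lastinx;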
my_copy_fix_mb() passed MIN(src_length, dst_length) to
my_append_fix_badly_formed_tail(). This could break a multi-byte
character in the middle, which put a question mark into the
destination.
Fixed the code to pass the true src_length to
my_append_fix_badly_formed_tail().
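Schematically (the parameter names here are placeholders, not the
real signature of my_append_fix_badly_formed_tail()):
// before (buggy): the tail fixer saw a length truncated to the
// destination size and could split a multi-byte character
my_append_fix_badly_formed_tail(cs, dst, dst_end, src_tail,
                                MY_MIN(src_length, dst_length) - copied);
// after (fixed): pass the remaining true source length
my_append_fix_badly_formed_tail(cs, dst, dst_end, src_tail,
                                src_length - copied);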
There is a server startup option --gdb a.k.a. --debug-gdb that
requests signal handling to be set up for more convenient debugging.
Most notably, SIGINT (ctrl-c) will not be ignored, and you will be
able to interrupt the execution of the server while GDB is attached
to it.
When we are debugging, the signal handlers that would normally
display a terse stack trace are useless.
When we are debugging with rr, the signal handlers may interfere with
a SIGKILL that could be sent to the process by the environment and
ruin the rr replay trace, due to a Linux kernel bug
https://lkml.org/lkml/2021/10/31/311
To be able to diagnose bugs in kill+restart tests, we may really need
both a trace before the SIGKILL and a trace of the failure after a
subsequent server startup. So, we had better avoid hitting the
problem by simply not installing those signal handlers.
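The gist, as a sketch (opt_debugging and the handler name are
assumptions about the mysqld identifiers involved):
// Install the terse stack-trace handlers only when not debugging;
// under --gdb / --debug-gdb leave the default dispositions so that
// the debugger (or rr) observes the raw signals.
if (!opt_debugging)
{
  my_sigset(SIGSEGV, handle_fatal_signal);
  my_sigset(SIGABRT, handle_fatal_signal);
}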
in row_merge_fts_doc_tokenize, stack smashing
strmake() puts one extra 0x00 byte at the end of the string.
The code in my_strnxfrm_tis620[_nopad] did not take this into
account, so in the reported scenario the 0x00 byte was put outside
of a stack variable, which made ASAN crash.
This problem is already fixed in MySQL:
commit 19bd66fe43c41f0bde5f36bc6b455a46693069fb
Author: bin.x.su@oracle.com <>
Date: Fri Apr 4 11:35:27 2014 +0800
But that fix does not seem to be correct, as it breaks when it finds
a zero byte in the source string.
Using memcpy() instead of strmake():
- Unlike strmake(), memcpy() does not write beyond the destination
size passed.
- Unlike the MySQL fix, memcpy() does not stop at the first 0x00
byte found in the source string.
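The behavioral difference, spelled out:
// strmake() copies at most n bytes, stops at the first 0x00 in src,
// and then always appends a terminating 0x00 -- writing up to n+1
// bytes into dst.  That extra byte is what smashed the stack here.
strmake(dst, src, n);
// memcpy() writes exactly n bytes, adds no terminator, and treats
// 0x00 bytes in src as ordinary data.
memcpy(dst, src, n);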
let's not run them under valgrind
The initial test case for MySQL Bug #33053297 is based on
mysql/mysql-server@27130e25078864b010d81266f9613d389d4a229b.
innobase_get_field_from_update_vector() is not a suitable function to
fetch updated row info, and the parent table's update vector is not
always suitable either. For instance, in the case of DELETE it
contains undefined data.
cascade->update vector seems to be good enough to fetch all base
column update data; it is also faster and less error-prone.
The assert inside String::copy() prevents copying from "str" if its
own String::Ptr also points to the same memory.
The idea of the assert is that copy() performs memory reallocation,
and this reallocation can free (and thus invalidate) the memory
pointed to by Ptr, which can lead to further copying from freed
memory.
The assert was incomplete: copy() can free the memory pointed to by
its Ptr only if String::alloced is true!
If the String is not alloced, it is still safe to copy even from the
location pointed to by Ptr.
This scenario demonstrates a safe copy():
const char *tmp= "123";
String str1(tmp, 3);
String str2(tmp, 3);
// This statement is safe:
str2.copy(str1.ptr(), str1.length(), str1.charset(), cs_to, &errors);
Inside copy() the parameter "str" is equal to String::Ptr in this
example. But it is still OK to reallocate the memory for str2,
because str2 was a constant before the copy() call. Thus the
reallocation does not invalidate the memory pointed to by str1.ptr().
Adjusting the assert condition to allow copying for constant strings.
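The adjusted condition might look like this sketch (is_alloced() per
sql_string.h; treat the exact form as illustrative):
// Copying from our own buffer is only dangerous when that buffer can
// be freed by the reallocation, i.e. when the String owns it.
DBUG_ASSERT(!str || str != Ptr || !is_alloced());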
Happens with the InnoDB engine.
Move unlock_locked_table() past drop_open_table(), and roll back the
current statement, so that we can actually unlock the table.
Anything else results in assertions in drop, in unlock, or in
close_table.
The DBUG_ASSERT is removed, as the AUTO_INCREMENT value can actually
be 0 when SET insert_id= 0 was done.
Get rid of the global big_buffer.
$opt_vs_config to $multiconfig to use with other cmake multiconfig generators
Based on mysql/mysql-server@bc9c46bf2894673d0df17cd0ee872d0d99663121
but without sleeps.
The test was verified to hit the debug assertion if the change to
fts_add_doc_by_id() in commit 2d98b967e31623d9027c0db55330dde2c9d1d99a
was reverted.
InnoDB commit fails when the consecutive FTS_DOC_ID value is greater
than 4294967295.
The fix is for InnoDB to remove the limitation on the delta
FTS_DOC_ID value: FTS now encodes 8-byte values, and the
FTS_DOC_ID_MAX_STEP variable is removed. Replaced the fts0vlc.ic
file with fts0vlc.h.
fts_encode_int(): should be able to encode values of up to 10 bytes
fts_get_encoded_len(): should return the length of a value that
takes up to 10 bytes
fts_decode_vlc(): add a debug assertion to verify that the maximum
length allowed is 10
mach_read_uint64_little_endian(): reads a 64-bit value stored in
little-endian format
Added a unit test case which checks the minimum and maximum values
of the fts encoding.
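The 10-byte bound follows from storing 7 payload bits per byte:
ceil(64 / 7) = 10. A minimal sketch of such an encoder (illustrative,
not the exact fts0vlc.h code):
typedef unsigned char byte;
// Encode v at p, 7 bits per byte, most-significant group first; the
// final byte carries the 0x80 stop bit.  At most 10 bytes are written.
static byte *encode_vlc(unsigned long long v, byte *p)
{
  byte groups[10];
  int  n= 0;
  do
  {
    groups[n++]= (byte) (v & 0x7F);
    v>>= 7;
  } while (v);
  while (--n > 0)
    *p++= groups[n];                // continuation bytes
  *p++= (byte) (groups[0] | 0x80);  // last byte carries the stop bit
  return p;
}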
create_table_info_t::innobase_table_flags(): Refuse to create
a PAGE_COMPRESSED table with PAGE_COMPRESSION_LEVEL=0 if also
innodb_compression_level=0.
The parameter value innodb_compression_level=0 was only somewhat
meaningful for testing or debugging ROW_FORMAT=COMPRESSED tables.
For the page_compressed format, it never made any sense, and the
check in dict_tf_is_valid_not_redundant() that was added in
72378a25830184f91005be7e80cfb28381c79f23 (MDEV-12873) would cause
the server to crash.
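A sketch of the refusal (option and variable names approximate;
page_zip_level is the internal name behind innodb_compression_level):
// An effective compression level of 0 makes no sense for a
// page_compressed table, so refuse instead of crashing later.
if (options->page_compressed
    && options->page_compression_level == 0
    && page_zip_level == 0)
{
  my_error(ER_ILLEGAL_HA_CREATE_OPTION, MYF(0), "InnoDB",
           "PAGE_COMPRESSION_LEVEL=0 with innodb_compression_level=0");
  return false;
}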
The assertion is absolutely correct since no data access is possible after
XA PREPARE.
The check is added in mysql_ha_read.
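A sketch of such a guard (illustrative; the exact check in
mysql_ha_read() may differ):
// HANDLER ... READ must be refused once the connection has a
// prepared XA transaction: no data access after XA PREPARE.
if (thd->transaction->xid_state.check_has_uncommitted_xa())
  return true;   // the check itself raises the error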
This is a duplicate of MDEV-18278 89936f11e965, but I will add an
additional assertion.
Description:
The frm corruption should not be reported during CREATE TABLE.
Normally it is not, and the data to fill TABLE is taken by the
open_table_from_share call. However, the vcol data is stored as an
SQL string in table->s->vcol_defs.str and is parsed anyway on each
table open. This is impossible [or hard] to avoid, because it is
hard to clone the expression tree in general (it is easier to parse
it again).
Normally parse_vcol_defs should fail only on semantic errors. If so,
error_reported is set to true. Any other failure is not expected
during table creation: there is either an unhandled/unacknowledged
error, or something went really wrong, like a failed memory
allocation. All of this should be asserted anyway.
Solution:
* Set *error_reported=true for the forward references check;
* Assert for every unacknowledged error during table creation.
We should set the charset in
Item_func_json_format::fix_length_and_dec().
Let us use a minimal-size buffer pool to ensure that page flushing
will be slow enough so that LRU eviction cannot be avoided.
for their definitions
Do not print illegal table field names for non-top-level SELECT
lists; they will not be referred to in any case, but they create
problems for parsing the printed result.
WRITE_CACHE' failed
Problem:
========
This patch addresses two issues.
First, if a CHANGE MASTER command is issued and an error happens
while locating the replica’s relay logs, the logs can be put into an
invalid state where future updates fail and future CHANGE MASTER
calls crash the server. More specifically, right before a replica
purges the relay logs (part of the `CHANGE MASTER TO` logic), the
relay log is temporarily closed with state LOG_TO_BE_OPENED. If the
server errors in-between the temporary log closure and purge, i.e.
during the function find_log_pos, the log should be closed.
MDEV-25284 reveals the log is not properly closed.
Second, upon issuing a RESET SLAVE ALL command, a slave’s GTID
filters are not cleared (DO_DOMAIN_IDS, IGNORE_DOMAIN_IDS,
IGNORE_SERVER_IDS). MySQL had a similar bug report, Bug #18816897,
which fixed this issue by clearing IGNORE_SERVER_IDS after issuing
RESET SLAVE ALL in version 5.7.
Solution:
=========
To fix the first problem, the CHANGE MASTER error handling logic was
extended to transition the relay log state to LOG_CLOSED from
LOG_TO_BE_OPENED.
To fix the second problem, the RESET SLAVE ALL logic is extended to
clear the domain_id filter and ignore_server_ids.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
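Schematically, the two repairs (member and helper names here are
approximations, not the exact patch):
// 1) CHANGE MASTER error path: finish the state transition instead
//    of leaving the relay log as LOG_TO_BE_OPENED
if (error)
  mi->rli.relay_log.close(LOG_CLOSE_INDEX);  // state becomes LOG_CLOSED
// 2) RESET SLAVE ALL: clear the GTID/server-id filters as well
mi->domain_id_filter.clear_ids();
reset_dynamic(&mi->ignore_server_ids);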
Schema and table names in view FRM files are:
- in upper case on Linux
- in lower case on Windows
Use the LOWER() function when displaying the FRM file fragment to
avoid the OS-specific difference.
This happens upon CREATE USER and DROP ROLE.
The underlying problem is that our HASH implementation shuffles
elements around when performing an update or delete. This means that
when scanning through the HASH table by index, in search of elements
to delete or update, one must restart the scan to make sure nothing
is missed if at least one delete / update happened.
More specifically, what happened in this case:
The hash has 131 elements, and DROP ROLE removes element [119]. Its
[119]->next was element [129], so [129] is moved to [119].
Now we need to compact the hash, removing the last element [130]. It
gets one bit taken off its hash value and becomes element [2]. The
existing element [2] is moved to [129], and the old [130] is moved
to [2].
We cannot simply move [130] to [129] and make [2]->next=130; it will
not work if [2] is itself in a collision list and does not belong in
bucket [2].
The handle_grant_struct code assumed that it is safe to continue by
only re-examining the currently modified / deleted element index,
but that is not true.
Failing to delete an element in the hash triggered the assertion in
the test case: DROP ROLE would not clear all necessary role->role or
role->user mappings.
To fix the problem we ensure that the scan is restarted, but only if
an element was deleted / updated, similar to how bubble sort keeps
sorting until it finds no more elements to swap.
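A sketch of the restart pattern (my_hash_element() and
my_hash_delete() are the real HASH helpers; must_drop() is a stand-in
predicate):
bool restart;
do
{
  restart= false;
  for (uint i= 0; i < hash->records; i++)
  {
    uchar *entry= my_hash_element(hash, i);
    if (must_drop(entry))
    {
      my_hash_delete(hash, entry);  // may move surviving elements around
      restart= true;                // so rescan from the beginning
      break;
    }
  }
} while (restart);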
`!alias_arg || strlen(alias_arg->str) == alias_arg->length' failed
with certain connection charset
There were two independent problems which led to the crash and to
non-relevant records being returned in I_S queries:
- The code in the I_S implementation did not handle values with
0x00 bytes safely.
This is fixed by using check_db_name() and check_table_name() inside
make_table_name_list(), and by adding a test for 0x00 inside
check_table_name().
- The code in Item_string::print() did not convert strings without
introducers when restoring the CREATE VIEW statement from an Item
tree. This produced wrong literals in the "query" line of the view
FRM file when, at VIEW parse time,
character_set_client != character_set_connection.
That is fixed by adding a proper conversion.
This change also fixed a similar problem in SHOW PROCEDURE CODE:
the literals were displayed in the wrong character set in SP
instructions when, at SP parse time,
character_set_client != character_set_connection.
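The NUL-byte part of the fix, schematically (the real
check_table_name() takes more arguments):
// Reject identifiers containing embedded 0x00 bytes before they can
// reach the I_S and file-name layers.
if (memchr(name, '\0', name_length))
  return true;   // invalid table name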