| Commit message | Author | Age | Files | Lines |
switch to definer privileges when populating I_S tables
|
InnoDB stores the value synced_doc_id + 1 in the FTS_CONFIG table. So when
reading the synced doc id back from the FTS_CONFIG table after a restart,
InnoDB must subtract 1 from the stored value to get the actual synced
doc id (e.g. if the last synced doc id was 100, FTS_CONFIG stores 101).
|
Using a specially crafted string one could overflow the `shift`
variable and cause a crash by dereferencing d10[-2147483648]
(on a sufficiently old gcc).
This is a correct fix and a test case for
Bug #29723340: MYSQL SERVER CRASH AFTER SQL QUERY WITH DATA ?AST
|
IS NULL or <=> on a unique field does not imply a unique row,
because several NULL values are possible,
so we cannot convert to a normal join in this case.
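A minimal sketch of why this holds (hypothetical table):
  CREATE TABLE t1 (a INT, UNIQUE KEY(a));
  INSERT INTO t1 VALUES (NULL), (NULL);  -- both rows accepted: UNIQUE permits many NULLs
  SELECT * FROM t1 WHERE a IS NULL;      -- matches two rows, not at most one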
|
Data should be sent with length.
|
The test was marked big for no apparent reason.
Use wait_all_purged.inc in the canonical way, and make use of
the sequence engine.
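For reference, a sketch of that canonical pattern (table name and row count are made up):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  INSERT INTO t1 SELECT seq FROM seq_1_to_1000;  -- sequence engine: no insert loop needed
  DROP TABLE t1;
  --source include/wait_all_purged.inc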
|
To diagnose a hang in slow shutdown (innodb_fast_shutdown=0),
let us introduce a Boolean startup option in debug builds
that will cause the contents of the InnoDB change buffer
to be dumped to the server error log at startup.
|
MDEV-18451 Server crashes in maria_create_trn_for_mysql
upon ALTER TABLE
The problem was that when a table was locked many times, not all
instances were removed from the transaction by
_ma_remove_table_from_trnman()
|
The problem was a combination of LOCK TABLES on several Aria
tables followed by an ALTER TABLE. After the ALTER TABLE there
was a forced close + reopen of the altered table. The close failed
because the table was still part of a transaction.
Fixed by calling extra(HA_EXTRA_PREPARE_FOR_FORCED_CLOSE) as
part of closing the table, which ensures that the table is no
longer part of the current transaction.
|
Apply the correct pattern for debug instrumentation:
SET @save_dbug=@@debug_dbug;
SET debug_dbug='+d,...';
...
SET debug_dbug=@save_dbug;
Numerous tests use statements of the form
SET debug_dbug='-d,...';
which inadvertently enable all DBUG tracing output,
wasting resources unnecessarily.
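For instance (using a made-up injection point name):
  -- wrong: '-d,...' turns the DBUG facility on as a side effect
  SET debug_dbug='-d,my_injection_point';
  -- right: restore the previously saved value
  SET debug_dbug=@save_dbug;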
|
The test main.index_merge_innodb takes a very long time,
especially on later versions (10.2 and 10.3).
Some of this can be attributed to the use of INSERT...SELECT,
which spends time creating explicit record locks in InnoDB
for the locking read in the SELECT part.
In 10.3 and later, some slowness can be attributed to MDEV-12288,
which makes the InnoDB purge thread spend time resetting transaction
identifiers in the inserted records. If we prevent purge from
running before all tables are dropped, the test seems to be
10% faster on an unoptimized debug build on 10.5. (A proper fix
would be to implement MDEV-515 and stop writing row-level undo log
records for inserts into an empty table or partition.)
At the same time, it should not hurt to make main.index_merge_myisam
use the sequence engine. Not only could it be a little faster,
but the test would be slightly more readable.
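A sketch of the MyISAM change (hypothetical schema and row count):
  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=MyISAM;
  -- instead of doubling the table with repeated INSERT ... SELECT:
  --   INSERT INTO t1 SELECT * FROM t1;
  -- populate it in one statement from the sequence engine:
  INSERT INTO t1 SELECT seq, seq FROM seq_1_to_1024;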
|
Remove unused variables and a type mismatch that were introduced
in commit b393e2cb0c079b30563dcc87a62002c9c778643c
Also, fix a typo in the documentation of the parameter, and
update the test.
|
The bug was already fixed in MDEV-17005, so only a test case is added now.
|
encrypted binlogs, SHOW BINLOG EVENTS reports the correct one.
Analysis:
mysqlbinlog output for an encrypted binary log:
#Q> insert into tab1 values (3,'row 003')
#190912 17:36:35 server id 10221 end_log_pos 980 CRC32 0x53bcb3d3 Table_map: `test`.`tab1` mapped to number 19
# at 940
#190912 17:36:35 server id 10221 end_log_pos 1026 CRC32 0xf2ae5136 Write_rows: table id 19 flags: STMT_END_F
Here we can see that the Table_map_log_event ends at 980, but the next event
starts at 940. The reason is that we do not send the START_ENCRYPTION_EVENT
to the slave.
Solution:
Send the Start_encryption_log_event as an Ignorable_log_event to the slave
(mysqlbinlog), so that mysqlbinlog can update its log_pos.
Since the slave can request multiple FORMAT_DESCRIPTION_EVENTs that the
master does not have, we only update the slave's master pos when the master
actually has the FORMAT_DESCRIPTION_EVENT. The same logic is applied to the
START_ENCRYPTION_EVENT.
Also added a test case for when a new server reads data from an old server
that does not send the START_ENCRYPTION_EVENT to the slave.
Master/slave upgrade scenario:
When the slave is upgraded first, it has the extra logic for handling the
START_ENCRYPTION_EVENT, but the master will not be sending that event,
so there is no issue.
When the master is upgraded first, it sends the START_ENCRYPTION_EVENT to
the slave, but the slave ignores this event in queue_event.
|
--prepare --export step
When the "--export" mariabackup option is used, mariabackup starts the
server in bootstrap mode to generate *.cfg files for certain InnoDB tables.
The started server instance reads options from the file pointed to
by the "--defaults-file" mariabackup option.
If the server uses the same config file as mariabackup, and the binlog is
switched on in that config file, then "mariabackup --prepare --export"
will create binary log files in the server's binary log directory, which
can cause issues.
The fix is to add "--skip-log-bin" to the mysqld options when the server is
started to generate the *.cfg files.
|
notification from an earlier failed group.
Analysis:
========
In general, suppose there are three groups:
1 - Inserts 32, which fails due to a local entry '32' on the slave.
2 - Inserts 33
3 - Inserts 34
Each group considers itself a waiter, and it waits for the prior group, its
'waitee'. This is done in 'register_wait_for_prior_event_group_commit'. If no
other parallel group is being scheduled, there is no waitee.
Assume the 3 groups are scheduled in parallel:
3 -> waits for 2 -> waits for 1
Upon completion, '1' checks whether there is any registered subsequent
waiter. If so, it wakes up the subsequent waiter with its execution status.
This execution status is stored in wakeup_error.
If '1' failed, it sends the corresponding wakeup_error to '2'. Then '2'
aborts and propagates the error to '3', so all further commits are aborted.
This mechanism works only when all transactions reach the stage where they
are waiting for their prior commit to complete.
In the optimistic case the following scenario occurs.
1, 2, 3 are scheduled in parallel.
3 - Reaches the group commit code and waits for 2 to complete.
1 - Errors out and sets stop_on_error_sub_id=1.
When a group's execution results in an error, its sub_id is stored in
'stop_on_error_sub_id'. Any new groups queued for execution check whether
their sub_id is > stop_on_error_sub_id. If so, their execution is skipped
because a prior group's execution failed, and 'skip_event_group=1' is set.
Since the execution of the SQL thread is about to stop, we just skip the
execution of all following event groups. We still do all the normal waiting
and wakeup processing between the event groups as a simple way to ensure
that everything is stopped and cleaned up correctly.
Upon error, transaction '1' checks for registered waiters. Since there are
none, it simply goes away.
2 - Starts execution and checks whether it has a waitee.
Since wait_commit_sub_id == entry->last_committed_sub_id, no waitee is set.
Secondly, 'entry->stop_on_error_sub_id' was set by the execution of '1'.
The 'handle_parallel_thread' code now checks whether the current group's
'sub_id' is greater than the 'sub_id' stored in 'stop_on_error_sub_id'.
Since this is true, 'skip_event_group=true' is set and
'wait_for_prior_commit' is simply called to wake up all waiters. Group '2'
did not have any waitee and its execution was skipped, hence its
wakeup_error=0. It sends a positive wakeup signal to '3', which commits.
This results in a missed transaction: 33 is missed while 34 is committed.
Fix:
===
When a worker learns that an earlier transaction's execution has failed, and
it should not proceed with its own execution, it should mark its own
execution status as failed so that it alerts its followers to abort as well.
|
The test encryption.innodb-redo-badkey was accidentally disabled
until commit 23657a21018d0b3d0464bbd55236113ebcd3d4b7 enabled
it recently. Once it was enabled, it started failing randomly.
recv_recover_corrupt_page(): Do not assume that any redo log exists
for the page. A page may be read unnecessarily by read-ahead.
When noting the corruption, reset recv_addr->state to RECV_PROCESSED,
so that even if the same page is re-read, we will only
decrement recv_sys->n_addrs once.
|
The test innodb_fts.fulltext_table_evict was creating 1000 tables
with fulltext indexes, only to check that no tables with fulltext
indexes are being evicted.
The reason tables containing fulltext indexes cannot be evicted is
that fts_optimize_init() invokes dict_table_prevent_eviction().
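In other words (hypothetical table):
  CREATE TABLE t1 (a TEXT, FULLTEXT KEY ft(a)) ENGINE=InnoDB;
  -- fts_optimize_init() calls dict_table_prevent_eviction() for t1,
  -- so t1 stays in the dictionary cache until it is dropped.
  DROP TABLE t1;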
|
| | | |
Test innodb_read_only startup (which will be refused after a crash),
and test also innodb_force_recovery=5, and extract some change buffer
merge statistics. Omit any statistics about delete (purge) buffering,
because purge could happen at any time.
Use the sequence storage engine for populating the table.
|
Add a test case. MariaDB Server 10.2 is not affected.
|
Eliminate one InnoDB table with 128*16384 rows, and use
the sequence engine instead. Also, run everything in a single
transaction, to prevent purge from running concurrently
unnecessarily. (Starting with MariaDB Server 10.3, purge would
reset the DB_TRX_ID after INSERT.)
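A sketch of the pattern (hypothetical schema; 128*16384 = 2097152 rows):
  CREATE TABLE t1 (a BIGINT PRIMARY KEY) ENGINE=InnoDB;
  BEGIN;  -- one transaction, so purge has nothing to do in between
  INSERT INTO t1 SELECT seq FROM seq_1_to_2097152;
  COMMIT;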
|
.. SELECT with zerofilled decimal
Also fixes:
MDEV-20560 Assertion `precision > 0' failed in decimal_bin_size upon SELECT with MOD short unsigned decimal
Change the way Item_func_mod calculates its max_length:
it now uses the decimal_precision(), decimal_scale() and unsigned_flag
of its arguments, like all other Item_num_op descendants do.
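A sketch of the kind of statement involved (hypothetical values):
  CREATE TABLE t1 (a DECIMAL(2,1) UNSIGNED ZEROFILL);
  INSERT INTO t1 VALUES (1.5);
  CREATE TABLE t2 AS SELECT MOD(a, 2) AS m FROM t1;  -- previously could hit the precision assertion
  SHOW CREATE TABLE t2;  -- m's type is now derived from a's precision/scale and unsigned_flag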
|
disabling warnings in read_statistics_for_table()
|
In the function test_if_cheaper_ordering we decide whether using an index
is better than using filesort for ordering. If we choose to do range access,
then in test_quick_select we should make sure that the cost of a table scan
is set to DBL_MAX so that it is not picked.
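The kind of query affected looks like this (hypothetical schema):
  CREATE TABLE t1 (a INT, b INT, KEY(a), KEY(b)) ENGINE=InnoDB;
  -- test_if_cheaper_ordering may pick index `a` to avoid the filesort;
  -- if range access on `b` is chosen instead, the table scan cost is
  -- set to DBL_MAX in test_quick_select so a scan cannot be picked.
  SELECT * FROM t1 WHERE b < 10 ORDER BY a LIMIT 5;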
|
Problem:
=======
While an fts index is being dropped, InnoDB waits for
fts_optimize_remove_table() while holding dict_sys->mutex and
dict_operation_lock, even though the table id is not present in the queue.
Meanwhile, fts_optimize_thread waits for dict_sys->mutex in order to
process an unrelated table id from its slot.
Solution:
========
Whenever a table is added to fts_optimize_wq, update the fts_status
of the in-memory fts subsystem to TABLE_IN_QUEUE. Whenever drop index
wants to remove the table from the queue, it can check the fts_status
to decide whether it should send MSG_DELETE_TABLE to the queue.
Removed the following functions because they are all dead code:
dict_table_wait_for_bg_threads_to_exit(),
fts_wait_for_background_thread_to_start(), fts_start_shutdown(), fts_shutdown().
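The affected scenario, roughly (hypothetical table):
  CREATE TABLE t1 (a TEXT, FULLTEXT KEY ft(a)) ENGINE=InnoDB;
  -- Dropping the fts index now only sends MSG_DELETE_TABLE if the
  -- table is actually queued (fts_status == TABLE_IN_QUEUE), instead
  -- of unconditionally waiting for fts_optimize_remove_table().
  ALTER TABLE t1 DROP INDEX ft;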
|
- There is no need to add the table to fts_optimize_wq if there are
no fts indexes associated with it.
|
Adjusted test results.
|