table
A delete-marked record is on the secondary index, and the clustered index
has already purged the corresponding record. We cannot detect whether such
a record is historical, and we should not: the algorithm of
row_ins_check_foreign_constraint() skips such records anyway.
|
MDEV-18734 FIXME: vcol.partition triggers ASAN heap-use-after-free
|
Main idea: don't log-and-crash but propagate the error to the upper layers
of the stack, to handle it there and show it to the user.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Import operation without a .cfg file fails when there is a mismatch of
indexes between the metadata table and the .ibd file. Moreover, MDEV-19022
shows that InnoDB can end up with an index tree where a non-leaf page has
only one child page. So it is unsafe to find the secondary index root page.
This patch does the following when importing a table without a .cfg file:
1) If the metadata contains more than one index then InnoDB stops
the import operation and asks the user to drop all secondary
indexes before doing the import operation.
2) When the metadata contains only the clustered index then InnoDB finds
the index id by reading page 0 & page 3 instead of traversing the
whole tablespace.
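A minimal SQL sketch of the resulting workflow; table and index names here are made up:
  ALTER TABLE t DROP INDEX idx_b;    -- required first: metadata must contain only the clustered index
  ALTER TABLE t DISCARD TABLESPACE;
  -- copy the .ibd file into place (no .cfg file available)
  ALTER TABLE t IMPORT TABLESPACE;
  ALTER TABLE t ADD INDEX idx_b (b); -- recreate secondary indexes afterwards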
|
partitioned table
ha_partition stores records in the m_ordered_rec_buffer array and uses
it for the priority queue in ordered index scans. When the records are
restored from the array, the blob buffers may already be freed or rewritten.
The solution is to take temporary ownership of the cached blob buffers via
String::swap(). When the record is restored from m_ordered_rec_buffer,
the ownership is returned to the table fields.
Cleanups:
init_record_priority_queue(): removed a needless !m_ordered_rec_buffer
check, as there is the same assertion a few lines earlier.
dbug_print_row() for an arbitrary row pointer.
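An illustrative shape of the affected scan, with hypothetical names; the ordered scan merges per-partition streams through the priority queue:
  CREATE TABLE t1 (a INT, b BLOB, KEY (a)) ENGINE=InnoDB
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 VALUES (1,'row1'), (2,'row2');
  SELECT a, b FROM t1 FORCE INDEX (a) ORDER BY a; -- blob fields pass through m_ordered_rec_buffer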
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If the lock type is LOCK_GAP or LOCK_ORDINARY, and the transaction holds
an implicit lock on the record, then an explicit gap lock will not be set
on the record, as lock_rec_convert_impl_to_expl() returns true and its
caller then bypasses the lock_rec_lock() call.
The fix converts the explicit lock to an implicit one if the requested
lock type is not LOCK_REC_NOT_GAP.
The innodb_information_schema test result is also changed, as after the
fix the execution of the following statements:
SET autocommit=0;
INSERT INTO t1 VALUES (5,10);
SELECT * FROM t1 FOR UPDATE;
leads to additional gap lock requests.
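Assuming a session with the statements above still open, the extra gap locks can be inspected, for example, via:
  SET GLOBAL innodb_status_output_locks = ON; -- include per-record lock details
  SHOW ENGINE INNODB STATUS;                  -- gap locks appear in the TRANSACTIONS section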
|
A test case to reproduce the issue. The actual fix is in the Galera
library.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
|
Contains the following fixes:
* allow TOI commands to time out while trying to acquire TOI: override
lock_wait_timeout with LONG_TIMEOUT only after successfully entering TOI
* only ignore lock_wait_timeout on TOI
* fix the galera_split_brain test, as a TOI operation now returns
ER_LOCK_WAIT_TIMEOUT after lock_wait_timeout
* explicitly test for TOI
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
|
Problem:
=======
There are two issues that are addressed in this patch:
1) SHOW BINARY LOGS uses caching to store the binary logs that exist
in the log directory; however, if new events are written to the logs,
the caching strategy is unaware. This is okay for users, as it is
okay for SHOW to return slightly old data. The test, however, can
result in inconsistent data. It runs two connections concurrently,
where one shows the logs, and the other adds a new file. The output
of SHOW BINARY LOGS then depends on when the cache is built, with
respect to the time that the second connection rotates the logs.
2) There is a race condition between RESET MASTER and SHOW BINARY
LOGS. More specifically, while both need the binary log lock to
begin, SHOW BINARY LOGS only needs the lock to build its cache. If
RESET MASTER is issued after SHOW BINARY LOGS has built its cache and
before it has returned the results, the presented data may be
incorrect.
Solution:
========
1) As it is okay for users to see stale data, to make the test
consistent, use DEBUG_SYNC to force the race condition (problem 2) to
make SHOW BINARY LOGS build a cache before RESET MASTER is called.
Then, use additional logic from the next part of the solution to
rebuild the cache.
2) Use an Atomic_counter to keep track of the number of times RESET
MASTER has been called. If the value of the counter changes after
building the cache, the cache should be rebuilt and the analysis
should be restarted.
Reviewed By:
============
Andrei Elkin: <andrei.elkin@mariadb.com>
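A sketch of the racing statements; the test orders them deterministically with DEBUG_SYNC:
  -- connection 1
  SHOW BINARY LOGS;  -- takes the binlog lock only while building its cache
  -- connection 2, concurrently
  FLUSH BINARY LOGS; -- problem 1: rotation makes the cached list stale
  RESET MASTER;      -- problem 2: bumps the counter, so the cache is rebuilt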
|
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes, a problem has surfaced:
since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: copy vcol_info from the table field when it is set up.
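A hypothetical repro in the spirit of the description (the actual test case may differ):
  CREATE TABLE t1 (
    a INT,
    b VARCHAR(30),
    c VARCHAR(30) AS (b) VIRTUAL,
    KEY (c(10), a)
  ) ENGINE=InnoDB PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 (a, b) VALUES (1, 'foo');
  SELECT c FROM t1 WHERE c LIKE 'f%'; -- index_init() needs vcol_info->expr on the prefix field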
|
| | |
| | |
| | |
| | | |
This reverts commit 9b8e207ce03b2ab7a766348738055be9520561bd.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
len was containing garbage, since vctempl->mysql_col_offset still
contained the old value when row_mysql_store_col_in_innobase_format
was called from innobase_get_computed_value().
It was not updated after the first ALTER TABLE call, because its INPLACE
logic considered that there was nothing to update, and exited immediately
from ha_innobase::inplace_alter_table().
However, the vcol metadata needs an update, since the vcol structure in
the MySQL record has changed.
The regression was introduced by 12614af1fe. There, the refcount==1
condition was removed, which turned out to be crucial, though racy. The
idea was to update vc_templ after each (subsequent) ALTER TABLE.
We should achieve the same in another way. There may be plenty of
solutions, but the simplest one is to add the following condition in
prepare_inplace_alter_table(): if the vcol structure has changed, drop
vc_templ; it will be recreated on the next ha_innobase::open() call.
This is safe, since InnoDB inplace changes require at least
HA_ALTER_INPLACE_SHARED_LOCK_AFTER_PREPARE, which guarantees
MDL_EXCLUSIVE at this stage.
alter_templ_needs_rebuild() also has to track the columns that are not
indexed, to keep vc_templ correct.
Note that vc_templ is always kept constructed and available after the
ha_innobase::open() call, even on INSERT, though no virtual columns are
evaluated inside InnoDB during that statement.
In the test case supplied, it will be recreated on the second ALTER TABLE.
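A loose sketch of such a sequence, not the exact regression test:
  CREATE TABLE t (a INT, v INT AS (a) VIRTUAL, KEY (v)) ENGINE=InnoDB;
  ALTER TABLE t ADD COLUMN b INT; -- changes the MySQL record layout
  ALTER TABLE t ADD COLUMN c INT; -- before the fix, vc_templ could keep stale offsets
  INSERT INTO t (a) VALUES (1);   -- vcol evaluation then used the wrong offset/length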
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes, a problem has surfaced:
since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: initialize vcols before key initialization.
Key initialization is also moved into a separate function.
|
|\ \ \
| |/ / |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
MDEV-16026: Forbid global system_versioning_asof in non-default time zone
* store `system_versioning_asof` in unix time;
* both session and global vars are processed in the session time zone;
* setting `default` no longer copies the global variable. Instead, it sets
system_time to SYSTEM_TIME_UNSPECIFIED, which means that no 'AS OF' time
is applied and `now()` can be assumed.
As a regression, we cannot assign values below 1970 (UTC) anymore.
MDEV-16481: set global system_versioning_asof=sf() crashes in specific case
* sys_vars.h: add a `MYSQL_TIME` field to `set_var::save_result`
* sys_vars.ic: get rid of calling `var->value->get_date()` from
`Sys_var_vers_asof::update()`
* versioning.sysvars: add test; remove double warning
Refactor Sys_var_vers_asof:
* inherit from sys_var rather than Sys_var_enum
* remove the junk "DEFAULT" keyword. There is DEFAULT in the SQL grammar for it.
* make all conversions in check() to avoid possible errors
* avoid double var->value evaluation, which could result in undefined
behavior
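For example (the values below are arbitrary):
  SET time_zone = '+03:00';
  SET GLOBAL system_versioning_asof = '2020-01-01 00:00:00'; -- parsed in the session time zone, stored as unix time
  SET GLOBAL system_versioning_asof = DEFAULT;               -- SYSTEM_TIME_UNSPECIFIED: queries see current data again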
|
`PRIMARY` table `schema`.`child_table`
The problem was that not all normal error codes were handled
after the wsrep_row_upd_check_foreign_constraints() call. Furthermore,
the debug assertion did not cover all normal error cases. Changed
ib:: calls to WSREP_ calls to use the wsrep instrumentation.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Problem:
=========
As part of the MDEV-14398 patch, InnoDB added the tablespace to,
and removed it from, the default encrypt list. But InnoDB removes
the tablespace from the default encrypt list too early, due to
i) another encryption thread working on the tablespace;
ii) the tablespace being flushed at the end of key rotation.
InnoDB then fails to decrypt/encrypt the tablespace, since it was
removed too early, and this leads to a test case failure.
Solution:
=========
Avoid removing the tablespace from default_encrypt_list while
1) another active encryption thread is working on the tablespace;
2) it is eligible for tablespace key rotation;
3) it is in the flushing phase.
Removed the workaround in the encryption.innodb_encryption_filekeys test case.
|
Set tests to non-valgrind:
oqgraph.social
encryption.innodb-page_encryption
binlog_encryption.encrypted_master
innodb.innodb-page_compression_lz4
main.lock_multi_bug38499
main.lock_multi_bug38691
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In commit 83d2e0841ee30727c609f23957cc592399a3aca4 (MDEV-24041)
we failed to notice that in addition to the bug with
DELETE and ON DELETE CASCADE, there is another bug with
UPDATE and ON UPDATE CASCADE.
row_ins_foreign_fill_virtual(): Use the correct memory heap
for everything that will be reachable from the cascade->update
that we return to the caller.
Note: It is correct to use the shorter-lived cascade->heap for
rec_get_offsets(), because that memory will be abandoned when
row_ins_foreign_fill_virtual() returns.
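A minimal shape of the affected scenario, with hypothetical names:
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    id INT,
    v INT AS (id) VIRTUAL,
    KEY (v),
    FOREIGN KEY (id) REFERENCES parent (id) ON UPDATE CASCADE
  ) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child (id) VALUES (1);
  UPDATE parent SET id = 2; -- the cascade must rebuild child.v from the correct heap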
|
| | |
| | |
| | |
| | | |
enclosed in single quotes).
|
ha_innobase::prepare_inplace_alter_table(): Unless the table is
being rebuilt, determine the maximum column length based on the
current ROW_FORMAT of the table. When TABLE_SHARE (and the .frm file)
contains no explicit ROW_FORMAT, InnoDB table creation or rebuild
will use innodb_default_row_format.
Based on mysql/mysql-server@3287d33acdc4260806a2a407ca15e9d1e04dddcb
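For instance, assuming the server default changed after the table was created:
  CREATE TABLE t (a VARCHAR(500) CHARACTER SET utf8mb4) ENGINE=InnoDB ROW_FORMAT=COMPACT;
  SET GLOBAL innodb_default_row_format = DYNAMIC;
  ALTER TABLE t ADD INDEX (a); -- must be checked against the table's COMPACT limit (767 bytes), not the 3072-byte DYNAMIC limit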
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
InnoDB tablespace identifiers and page numbers are 32-bit numbers.
Let us use a 32-bit type for them in innochecksum.
The changes in commit 1918bdf32cdbd1f190cc4479f4076ee4a467f25d
broke the build on 32-bit Windows.
Thanks to Vicențiu Ciorbaru for an initial version of this fixup.
|
The columns that are part of a DEFAULT expression were not read-marked
in statements like UPDATE...SET b=DEFAULT.
The problem is that the `F(DEFAULT)` expression depends on the left-hand
side of an assignment. However, setup_fields() accepts only the
right-hand side value. Neither does Item::fix_fields().
Thus, b=DEFAULT(b) works fine, because Item_default_value has
information on what field it is the default of:
  if (thd->mark_used_columns != MARK_COLUMNS_NONE)
    def_field->default_value->expr->update_used_tables();
in Item_default_value::fix_fields().
It is not reasonable to pass a left-hand side to Item::fix_fields(),
because the case is rare, so the rewrite
b= F(DEFAULT) -> b= F(DEFAULT(b))
is made instead.
Both UPDATE and multi-UPDATE are affected; however, INSERT in any form
is not: it marks all the fields in DEFAULT expressions for read in
TABLE::mark_default_fields_for_write().
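A sketch with a hypothetical table (MariaDB allows expression defaults that reference other columns):
  CREATE TABLE t (a INT, b INT DEFAULT (a + 1));
  INSERT INTO t (a) VALUES (10);
  UPDATE t SET b = DEFAULT;              -- a must be read-marked to evaluate the default
  UPDATE t SET b = COALESCE(DEFAULT, 0); -- F(DEFAULT) is rewritten to F(DEFAULT(b))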
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The problem is the same as in MDEV-18166: columns in a virtual field
expression are not marked for read, while the field itself is.
field->register_field_in_read_map() should be called to read-mark all
such fields.
The test reproduces only on 10.4+; however, the fix is applicable to
10.2+.
|
Several different test cases were failing for the same reason: the
fields in a vcol expression were not marked while marking the columns of
a key containing a virtual column for read.
Fix: make marking the columns of a key for read a special case, where
register_field_in_read_map() is done instead of a plain bitmap_set_bit().
Some test cases are only reproducible on 10.4+, but the fix is applicable
to 10.2+.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This is the 10.2+ part of a Jira task.
The following bugs regarding virtual column marking have been fixed:
1. UPDATE of a partitioned table, where the optimizer has chosen a
secondary index to make a filesort;
2. INSERT into a table with a non-blob field generated from a blob, with
binlog enabled and binlog_row_image=noblob;
3. DELETE from a view on a table with a virtual column.
Generally, the assertion fails in the update_virtual_fields() call.
These bugs are root-caused by missing field marking for dependent fields
of a virtual column.
Therefore the fix is: mark all the fields involved in the vcol expression
by calling field->register_field_in_read_map() instead of just setting a
single bit. (Case 2 is sketched below.)
Case 3 was reproducible only on 10.4+; however, the problem might just
have been invisible in the earlier versions. The fix is applicable to
10.2-10.3 as well.
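For example, case 2 has roughly this shape (hypothetical table):
  SET @@session.binlog_format = 'ROW';
  SET @@session.binlog_row_image = 'NOBLOB';
  CREATE TABLE t (b BLOB, g INT AS (LENGTH(b)) STORED) ENGINE=InnoDB;
  INSERT INTO t (b) VALUES (REPEAT('x', 10)); -- g depends on a blob column that NOBLOB row images omit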
|
The reason for the failure was inconsistent truncation rules: the value
of the virtual column could sometimes have been evaluated to '2000'
instead of '0000' for the value 'a'.
The reason why `c YEAR AS ('aaaa')` was not evaluated the same way is
that len=4 is a special case inside Field_year::store().
The correct fix is: always evaluate a bad value to 0000 instead of 2000.
Truncated values should be evaluated as usual.
$support_virtual_index is finally changed to 1 in gcol.gcol_ins_upd_innodb,
which is also enough for testing.
The test from the original bug report is also added.
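Illustration, using the expression from the description:
  CREATE TABLE t (a INT, c YEAR AS ('aaaa') VIRTUAL);
  INSERT INTO t (a) VALUES (1);
  SELECT c FROM t; -- evaluates to 0000 after the fix, not 2000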
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
- Set the innodb_encrypt_tables variable before the timeout happens. It
adds the pending tablespace to the default encrypt list and performs the
encryption/decryption based on innodb_encrypt_tables.
|
- Used single quotes; back quotes are used as of commit
fafb35ee517f309d9e507f6e3908caca5d8cd257 in 10.3 and will be changed.
Reviewed by: serg@mariadb.org
|
Do log_drop_table() in case of failed mysql_prepare_create_table().
|
| | |
| | |
| | |
| | | |
host can be NULL
|
- Proceed with commit fafb35ee517f309d9e507f6e3908caca5d8cd257
Reviewed by: serg@mariadb.com
|
A historical query with an AS OF point after the last history partition
must include the last history partition, because it can be overflown
(contain history rows beyond its right endpoint).
Move the left point back to hist_part->id in that case.
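Schematically, with a hypothetical table:
  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR PARTITIONS 3;
  -- an AS OF point past the last history partition must still scan it,
  -- since that partition may be overflown:
  SELECT * FROM t FOR SYSTEM_TIME AS OF '2038-01-01 00:00:00';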
|
| | |
| | |
| | |
| | | |
Skip system-invisible keypart in get_schema_stat_record().
|
SQLCOM_CREATE_TABLE && !thd->is_current_stmt_binlog_format_row())' failed in void wsrep_commit_empty(THD*, bool)
Using ROLLBACK with `completion_type = CHAIN` results in the start of a
new transaction and an implicit commit before the previous WSREP internal
data is cleared.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
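The triggering pattern, roughly:
  SET SESSION completion_type = 'CHAIN';
  START TRANSACTION;
  INSERT INTO t1 VALUES (5, 10);
  ROLLBACK; -- chains into a new transaction before the previous WSREP internal data is cleared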
|
(thd->lex->sql_command == SQLCOM_CREATE_TABLE && !thd->is_current_stmt_binlog_format_row())
Updates to the transaction registry table shouldn't be replicated in the
cluster, so there is no need to append wsrep keys.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
|