As a first step towards handling InnoDB AIO assertion failures
better, report more detail about the error cases.
This does not resolve MDEV-27593, but it increases the amount of
information in the assertion message.
collation
ha_innobase::check_if_supported_inplace_alter(): Refuse to change the
collation of a column that would become or remain indexed as part of
the ALTER TABLE operation.
In MariaDB Server 10.6, we will allow this type of operation;
that fix depends on MDEV-15250.
- InnoDB mistakenly identifies the non-unique FTS_DOC_ID index as
FTS_DOC_ID_INDEX while loading the table. dict_load_indexes()
should check whether the index is unique before assigning it to
fts_doc_id_index.
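
A minimal sketch of the described check, with invented types rather
than the actual dict_load_indexes() code: the index is only picked up
as the FTS_DOC_ID index if its name matches and it is unique.

    #include <cstring>
    #include <vector>

    struct dict_index_t { const char *name; bool is_unique; };

    // Pick FTS_DOC_ID_INDEX only if the index with that name is also
    // unique; a non-unique index of the same name must be rejected.
    static const dict_index_t *
    find_fts_doc_id_index(const std::vector<dict_index_t> &indexes)
    {
      for (const dict_index_t &index : indexes)
        if (!std::strcmp(index.name, "FTS_DOC_ID_INDEX") && index.is_unique)
          return &index;
      return nullptr;
    }

    int main()
    {
      std::vector<dict_index_t> indexes{{"FTS_DOC_ID_INDEX", false}};
      return find_fts_doc_id_index(indexes) != nullptr; // 0: correctly rejected
    }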
Before version 10, GCC would think that a right shift of an
unsigned char returns int. Let us explicitly cast that back,
to silence a bogus -Wconversion warning.
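
As a standalone illustration of the warning (this is not the MariaDB
source): the integer promotions give the shift expression type int, so
returning it as unsigned char is a narrowing conversion unless it is
cast back explicitly.

    #include <cstdio>

    static unsigned char high_nibble(unsigned char b)
    {
      // (b >> 4) has type int after integer promotion; without the cast,
      // GCC before version 10 emits a -Wconversion warning here.
      return static_cast<unsigned char>(b >> 4);
    }

    int main()
    {
      std::printf("%d\n", high_nibble(0xAB)); // prints 10
    }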
Also, refactor trx_i_s_common_fill_table() to remove dead code.
Warnings about yynerrs in Bison-generated yyparse() will remain for now.
- An InnoDB table in the redundant row format fails to set flags2
while the table is being loaded. This leads to an "Upgrade index
name failure" during an ALTER operation. InnoDB should set flags2
to FTS_AUX_HEX_NAME when FTS is being loaded.
table | Assertion `part_share->auto_inc_initialized || !can_use_for_auto_inc_init()' failed.
The bug is caused by a mechanism similar to that of MDEV-21027.
The function check_insert_or_replace_autoincrement() failed to open
all the partitions on INSERT SELECT statements, and this results in
the assertion failure.
|| !not_redundant()' failed
- In the case of a discarded tablespace, InnoDB cannot read the root
page to assign n_core_null_bytes. A subsequent instant DDL operation
fails because of the non-matching n_core_null_bytes.
Spider mixes the comma join with other join types, and thus
ERROR 1054 occurs. This is a well-known issue caused by the higher
precedence of JOIN over the comma (,): for example, in
"FROM t1, t2 JOIN t3 ON t3.a = t1.a", the ON clause cannot refer to
t1 because "t2 JOIN t3" binds first.
We can fix the problem simply by using JOINs instead of commas.
fil_space_t::acquire_low(): Introduce a parameter that specifies
which flags should be avoided. At all times, referenced() must not
be incremented if the STOPPING flag is set. When fil_system.mutex
is not being held by the current thread, the reference must not be
incremented if the CLOSING flag is set (unless NEEDS_FSYNC is set,
in fil_space_t::flush()).
fil_space_t::acquire(): Invoke acquire_low(STOPPING | CLOSING).
In this way, the reference count cannot be incremented after
fil_space_t::try_to_close() invoked fil_space_t::set_closing().
If the CLOSING flag was set, we must retry acquire_low() after
acquiring fil_system.mutex.
fil_space_t::prepare_acquired(): Replaces prepare(true).
fil_space_t::acquire_and_prepare(): Replaces prepare().
This basically retries fil_space_t::acquire() after
acquiring fil_system.mutex.
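
The following is a minimal, hypothetical sketch of the scheme
described above, not the actual fil_space_t code: a single atomic word
packs the STOPPING and CLOSING flags into the top bits and the
reference count into the remaining bits, so a reference can be refused
atomically while a flag is set.

    #include <atomic>
    #include <cstdint>

    class space_ref
    {
      std::atomic<uint32_t> n_pending{0};
    public:
      static constexpr uint32_t STOPPING= 1U << 31;
      static constexpr uint32_t CLOSING= 1U << 30;

      // Increment the reference count unless one of 'avoid' is set;
      // on failure the caller would retry under fil_system.mutex.
      bool acquire_low(uint32_t avoid)
      {
        uint32_t n= n_pending.load(std::memory_order_relaxed);
        do
          if (n & avoid)
            return false;
        while (!n_pending.compare_exchange_weak(n, n + 1,
                                                std::memory_order_acquire,
                                                std::memory_order_relaxed));
        return true;
      }

      bool acquire() { return acquire_low(STOPPING | CLOSING); }
      void release() { n_pending.fetch_sub(1, std::memory_order_release); }
      void set_closing()
      { n_pending.fetch_or(CLOSING, std::memory_order_relaxed); }
    };

    int main()
    {
      space_ref space;
      if (space.acquire())            // succeeds: no flags set
        space.release();
      space.set_closing();
      return space.acquire() ? 1 : 0; // fails now: CLOSING is set
    }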
freed by spider_commit()
The heap-use-after-free is caused by the following mechanism:
* In the execution of FLUSH TABLES WITH READ LOCK, the function
spider_free_trx_conn() is called and the connections held by
SPIDER_TRX::trx_conn_hash are freed.
* An instance of ha_spider then still refers to the freed connections,
because they are also referenced from ha_spider::conns.
The ha_spider instance is kept in a lock structure until the
corresponding table is unlocked.
* Spider accesses ha_spider::conns on the implicit UNLOCK TABLES
issued by BEGIN.
In the first place, when the connections have been freed, it means
that there really are no remote tables locked by Spider.
Thus, there is no need for Spider to access ha_spider::conns on the
implicit UNLOCK TABLES.
We can fix the bug by removing the above-mentioned access to
ha_spider::conns. We also modified spider_free_trx_conn() so that it
frees the connections only when no table is locked, to reduce the
chance of another heap-use-after-free on ha_spider::conns.
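
An invented, minimal model of the mechanism (not Spider source code):
the same heap object is reachable both from a hash and from a handler,
is freed through the hash, and the stale second reference must not be
dereferenced afterwards.

    #include <cstdio>
    #include <unordered_map>

    struct conn { int id; };

    int main()
    {
      std::unordered_map<int, conn*> trx_conn_hash; // like SPIDER_TRX::trx_conn_hash
      conn *c= new conn{1};
      trx_conn_hash[1]= c;
      conn *handler_conn= c;                        // like ha_spider::conns

      // FLUSH TABLES WITH READ LOCK: free everything held by the hash.
      for (auto &entry : trx_conn_hash)
        delete entry.second;
      trx_conn_hash.clear();

      // The fix: do not touch the stale second reference afterwards.
      // Dereferencing handler_conn at this point would be the reported
      // heap-use-after-free.
      handler_conn= nullptr;

      std::printf("%s\n", handler_conn ? "dangling" : "cleared");
    }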
during ADD COLUMN
prepare_inplace_alter_table_dict(): If the table will not be rebuilt,
preserve all of the original ROW_FORMAT, including the compressed
page size flags related to ROW_FORMAT=COMPRESSED.
hex_to_ascii(): Add #if around the definition to avoid
clang -Wunused-function. Avoid GCC 5 -Wconversion with a cast.
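
A standalone sketch of the pattern; the feature macro is invented for
illustration. The #if keeps clang's -Wunused-function quiet in
configurations that never call the helper, and the cast addresses the
GCC 5 -Wconversion warning on the promoted int expression.

    #include <cstdio>

    #define INCLUDE_HEX_DUMP 1 /* hypothetical configuration macro */

    #if INCLUDE_HEX_DUMP
    static char hex_to_ascii(unsigned char nibble)
    {
      // '0' + nibble is an int after promotion; cast it back to char.
      return static_cast<char>(nibble < 10 ? '0' + nibble
                                           : 'a' + nibble - 10);
    }
    #endif

    int main()
    {
    #if INCLUDE_HEX_DUMP
      std::printf("%c%c\n", hex_to_ascii(0xA), hex_to_ascii(3)); // prints "a3"
    #endif
    }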
buf_page_print(): Dump the buffer page 32 bytes (64 hexadecimal digits)
per line. In this way, the limitation in mtr
("Data too long for column 'line'") will not be triggered.
Also, do not bother decoding the page contents, because everything
is present in the hexadecimal output.
dict_index_find_on_id_low(): Merge to dict_index_get_if_in_cache_low().
The direct call in buf_page_print() was prone to crashing, in case the
table definition was concurrently evicted or dropped from the
data dictionary cache.
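
A minimal sketch of the new dump format, not the actual
buf_page_print() code: the page is printed as hexadecimal, 32 bytes
(64 digits) per output line, so no line exceeds what the mtr result
table can store.

    #include <cstddef>
    #include <cstdio>

    static void dump_page(const unsigned char *page, std::size_t size)
    {
      for (std::size_t i= 0; i < size; i++)
      {
        std::printf("%02x", page[i]);
        if ((i + 1) % 32 == 0)   // 32 bytes = 64 hexadecimal digits per line
          std::putchar('\n');
      }
    }

    int main()
    {
      unsigned char page[64]= {0x41, 0x42, 0x43};
      dump_page(page, sizeof page); // two lines of 64 hex digits each
    }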
Spider supports (or at least allows) INSERT DELAYED, but the
documentation does not list Spider as a storage engine that supports
"INSERT DELAYED".
Also, although not mentioned in the documentation, "INSERT DELAYED" is
not intended to be executed inside a transaction, as can be seen from
the list of supported storage engines.
The current implementation allows executing a delayed insert on a
remote transactional table, and this breaks the consistency ensured by
the transaction.
We also remove "internal_delayed", one of the Spider table parameters.
The documentation says:
> Whether to transmit existence of delay to remote servers when
> executing an INSERT DELAYED statement on local server.
This table parameter is only used for "INSERT DELAYED".
Reviewed by: Nayuta Yanagisawa
!can_use_for_auto_inc_init()' failed in ha_partition::set_auto_increment_if_higher
ha_partition::set_auto_increment_if_higher() expects that
part_share->auto_inc_initialized is true or that
can_use_for_auto_inc_init() is false (but, as the comment on this
method says, the latter returns false only if we use the Spider
engine with a DROP TABLE or ALTER TABLE query).
However, part_share->auto_inc_initialized becomes true only after all
partitions are opened (since 6dce6aecebe6ef78a14cb5c5c5daa8a355551e40).
Therefore, I added a conditional expression in order to open all
partitions when we execute REPLACE on a table that has an
AUTO_INCREMENT column.
Reviewed by: Nayuta Yanagisawa
Reviewed by: Alexey Botchkov
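
An illustrative-only model of the shape of the fix, with invented
types: the shared auto-increment state becomes initialized only once
every partition has been opened, so the path that touches the
AUTO_INCREMENT state first ensures that all partitions are open.

    #include <vector>

    struct part_share_t { bool auto_inc_initialized= false; };

    struct ha_partition_model
    {
      std::vector<bool> opened;
      part_share_t part_share;

      void open_all_partitions()
      {
        for (std::size_t i= 0; i < opened.size(); i++)
          opened[i]= true;
        part_share.auto_inc_initialized= true; // true only after all are open
      }

      void set_auto_increment_if_higher()
      {
        if (!part_share.auto_inc_initialized)  // the added conditional check
          open_all_partitions();
        // ... the original assertion now holds ...
      }
    };

    int main()
    {
      ha_partition_model h{std::vector<bool>(4, false), {}};
      h.set_auto_increment_if_higher();
      return h.part_share.auto_inc_initialized ? 0 : 1;
    }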
Test prior to this change:
CURRENT_TEST: connect.mysql
mysqltest: At line 485: query 'INSERT IGNORE INTO t3 VALUES (5),(10),(30)' failed: ER_GET_ERRMSG (1296): Got error 122 '(1062) Duplicate entry '10' for key 'PRIMARY' [INSERT INTO `t1` (`a`) VALUES (10)]' from CONNECT
So the ignore table option wasn't getting passed to the remote server.
Closes #2008
In commit 73fee39ea62037780c59161507e89dd76c10b7a3 (MDEV-27985)
a regression was introduced that would cause bpage=nullptr to
be dereferenced.
buf_flush_LRU_list_batch(): Always terminate the loop upon
encountering a null pointer.
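
A generic sketch of the loop-termination fix with invented types: walk
the list from the tail and stop as soon as the cursor is a null
pointer instead of assuming a predecessor always exists.

    struct buf_page_model { buf_page_model *prev; bool ready_to_flush; };

    static int flush_batch(buf_page_model *bpage)
    {
      int flushed= 0;
      while (bpage)                        // always terminate on nullptr
      {
        buf_page_model *prev= bpage->prev; // save the link before any work
        if (bpage->ready_to_flush)
          flushed++;                       // stand-in for the actual flush
        bpage= prev;
      }
      return flushed;
    }

    int main()
    {
      buf_page_model a{nullptr, true}, b{&a, false};
      return flush_batch(&b) == 1 ? 0 : 1; // flush_batch(nullptr) is also safe
    }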
PageConverter::update_header(): Remove an unnecessary write.
The field that was originally called FIL_PAGE_FILE_FLUSH_LSN only
made sense for the first page of the system tablespace
(initially, for the first page of each file of the system tablespace).
It never had any meaning for .ibd files, and it lost its original
meaning in MariaDB Server 10.8.1 when
commit b07920b634f455c39e3650c6163bec2a8ce0ffe0 (MDEV-27199)
removed the ability to start without ib_logfile0.
If the most significant 32 bits of the LSN are nonzero, this
unnecessary write would write the wrong encryption key identifier
to the page. The first page of any file is never encrypted,
so normally those bytes should be 0 for any .ibd file.
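
A simplified model of the overlap, with the page layout reduced to the
one field of interest (offsets illustrative only): in an .ibd file the
4-byte key identifier occupies the start of the 8-byte field once
called FIL_PAGE_FILE_FLUSH_LSN, so writing a full 64-bit LSN there
clobbers it whenever the LSN exceeds 32 bits.

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      unsigned char page[34]= {};
      const unsigned FIELD= 26;      // ex-FIL_PAGE_FILE_FLUSH_LSN

      uint64_t lsn= 0x100000000ULL;  // most significant 32 bits nonzero
      for (int i= 0; i < 8; i++)     // the unnecessary big-endian write
        page[FIELD + i]= (unsigned char)(lsn >> (56 - 8 * i));

      // The first 4 bytes of the field are read back as the key identifier:
      uint32_t key_id= 0;
      for (int i= 0; i < 4; i++)
        key_id= key_id << 8 | page[FIELD + i];
      std::printf("key identifier now %u (expected 0)\n", (unsigned) key_id);
    }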
MariaDB never supported this form of preemption via high-priority
transactions. This error code should not have been added in the
first place, in commit 2e814d4702d71a04388386a9f591d14a35980bfe.
ha_innobase::build_template() may initialize m_prebuilt->idx_cond
even if there is no valid pushed_idx_cond_keyno.
This potentially problematic piece of code was found while
working on MDEV-27366.
In any files that were created in the
innodb_checksum_algorithm=full_crc32 format
(commit c0f47a4a58424c621204dacb8016a94b66cb2bce),
any unused data fields will have been zero-initialized
(commit 3926673ce7149aa223103126b6aeac819b10fab5).
and failing spider partition test.
With some small data type changes to the Linux/Solaris my_gethwaddr
implementation, the hardware address can also be returned on AIX.
This is an important aspect in Spider (and UUID).
Spider test change reviewed by Nayuta Yanagisawa.
my_gethwaddr review by Monty in #2081
Reading the last page of a table in the "dynamic page" format would
generate an error when reading past the last row. This was never
noticed because, when Aria is used as a handler, any error messages
generated by _ma_set_fatal_error() were ignored.
If we got a read error from S3, we did not signal threads waiting
to read blocks in the read-range. This caused these threads to
hang forever.
There is still one issue left: the S3 error will be logged as a
"table is crashed" error instead of the IO error. This will be fixed
by a larger patch in 10.6 that improves error reporting from Aria.
There is no test case for this, as it is very hard to repeat.
I tested this with a patch that causes random read failures in S3 and
used a multi-threaded Perl test with 8 threads to simulate reads.
This patch fixes all the hangs found.
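
A generic sketch of the idea behind the fix (ordinary
condition-variable code, not Aria source): waiters for a block read
must be signalled on failure as well as on success, or they block
forever.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex mtx;
    std::condition_variable cond;
    bool read_done= false, read_failed= false;

    static void read_block()              // stands in for the S3 read
    {
      std::lock_guard<std::mutex> lk(mtx);
      read_failed= true;                  // simulate the S3 read error
      read_done= true;                    // the fix: signal even on error
      cond.notify_all();
    }

    int main()
    {
      std::thread reader(read_block);
      {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [] { return read_done; }); // previously hung on error
      }
      reader.join();
      std::printf("%s\n", read_failed ? "read failed" : "read ok");
    }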
mtr_t::modify(): Set the m_made_dirty flag if needed,
so that buf_pool_t::insert_into_flush_list() will be invoked
while holding log_sys.flush_order_mutex.
This is something that should have been part of
commit b212f1dac284cc9b7a060f1eed2cd4604c326966 (MDEV-22107).
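
A generic sketch of the invariant, with invented types: if a
mini-transaction dirties a previously clean page, m_made_dirty is set
so that at commit the page is appended to the flush list while the
flush-order mutex is held.

    #include <mutex>
    #include <vector>

    struct page_t { bool dirty= false; };

    struct mtr_model
    {
      bool m_made_dirty= false;
      std::vector<page_t*> modified;

      void modify(page_t &p)
      {
        if (!p.dirty)
          m_made_dirty= true;    // the fix: set the flag when needed
        modified.push_back(&p);
      }

      void commit(std::mutex &flush_order_mutex,
                  std::vector<page_t*> &flush_list)
      {
        if (!m_made_dirty)
          return;
        std::lock_guard<std::mutex> g(flush_order_mutex); // held for insertion
        for (page_t *p : modified)
          if (!p->dirty)
          {
            p->dirty= true;
            flush_list.push_back(p);
          }
      }
    };

    int main()
    {
      std::mutex flush_order_mutex;
      std::vector<page_t*> flush_list;
      page_t p;
      mtr_model mtr;
      mtr.modify(p);
      mtr.commit(flush_order_mutex, flush_list);
      return flush_list.size() == 1 ? 0 : 1;
    }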
mtr_t::is_empty(): Replaces mtr_t::get_log() and mtr_t::get_memo().
mtr_t::get_log_size(): Replaces mtr_t::get_log().
mtr_t::print(): Remove, unused function.
ReleaseBlocks::ReleaseBlocks(): Remove an unused parameter.
The assert happens in 10.6 with the following command:
./mtr --no-reorder --verbose-restart main.update_ignore_216 main.upgrade_MDEV-19650 main.upgrade_MDEV-23102-1 main.upgrade_MDEV-23102-2 main.upgrade_geometrycolumn_procedure_definer main.upgrade_mdev_24363 main.varbinary sys_vars.aria_log_file_size_basic
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
If the connecting user doesn't have the ALTER TABLE privilege, this
isn't allowed.
This patch removes the enable / disable keys commands, which should
never have been here.
Closes #2002
- InnoDB fails to create an FTS cache while loading an InnoDB FTS
table that is stored in the system tablespace. InnoDB should create
the FTS cache while loading the FTS_DOC_ID column from the system
columns.
- Fixed a wrong DBUG_ASSERT when waiting for a big block read
- Update the S3_pagecache_reads counter when reading a block from S3.
Before this patch the variable value was always 0
Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
The symptom of the bug was that CHECK TABLE on an S3 table when using
--s3_slave-ignore-updates=1 could print "9 when updating keyfile".
srv_do_purge(): In commit edde1f6e0d5f5a0115a5253c9b8d428af132f2d1
when the de-facto 32-bit trx_sys_t::history_size() was replaced with
32-bit trx_sys.rseg_history_len, some more variables were changed
from ulint (size_t) to uint32_t.
The history list length is the number of committed transactions whose
undo logs are waiting to be purged. Each TRX_RSEG_HISTORY list stores
the number of its entries in a 32-bit field, and each transaction
will occupy at least one undo log page. It is conceivable that the
length of each TRX_RSEG_HISTORY list may approach the maximum
representable number. That number cannot be exceeded, because the
rollback segment header is allocated from the same tablespace as
the undo log header pages it points to, and because the page
numbers of a tablespace are stored in 32 bits. In any case, it is
possible that the total number of unpurged committed transactions
cannot be represented in 32 bits, but requires up to 39 bits
(corresponding to 128 rollback segments and undo tablespaces).
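
A quick arithmetic check of the 39-bit claim: 128 (2^7) rollback
segments, each with a 32-bit list length, can in the extreme add up
to 2^39 entries, which a 32-bit counter cannot represent.

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      const uint64_t segments= 128;               // 2^7 rollback segments
      const uint64_t max_per_segment= 1ULL << 32; // 32-bit length field
      uint64_t total= segments * max_per_segment; // 2^7 * 2^32 = 2^39
      std::printf("maximum history length = %llu\n",
                  (unsigned long long) total);    // 549755813888
    }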
page_cur_insert_rec_low(): When checking for common bytes with
the preceding record, exclude the header bytes of next_rec
that could have been updated by this function.
The scenario where this caused corruption was an insert of
a node pointer record. The child page number was written as
0x203 but recovered as 0x103 because the n_owned field of next_rec
was changed from 1 to 2 before the comparison was invoked.
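
An illustrative-only sketch of the principle behind the fix: when
measuring how many leading bytes two records have in common, skip the
first header bytes, because the insert may already have updated header
fields (such as n_owned) of next_rec before the comparison runs.

    #include <cstdio>

    static unsigned common_bytes(const unsigned char *a,
                                 const unsigned char *b,
                                 unsigned len, unsigned header)
    {
      unsigned n= 0;
      // Exclude the first 'header' bytes, which may have been mutated.
      for (unsigned i= header; i < len && a[i] == b[i]; i++)
        n++;
      return n;
    }

    int main()
    {
      //                     header  payload
      unsigned char rec1[]= {1,      2, 3, 4};
      unsigned char rec2[]= {2,      2, 3, 9}; // header already changed 1 -> 2
      // Excluding the header still finds the 2 common payload bytes:
      std::printf("%u\n", common_bytes(rec1, rec2, 4, 1)); // prints 2
    }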