* MDEV-20611: MRR scan over partitioned InnoDB table produces "Out of memory" error (Sergei Petrunia, 2019-11-15; 13 files, -18/+343)
    Fix partitioning and DS-MRR to work together:
    - In ha_partition::index_end(): take into account that ha_innobase (and other
      engines using DS-MRR) will have inited=RND when initialized for a DS-MRR scan.
    - In ha_partition::multi_range_read_next(): if the MRR scan uses
      HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
      handler will store anything into *range_info.
    - In DsMrr_impl::choose_mrr_impl(): ha_partition inquires how much memory each
      partition's MRR implementation needs by passing *buffer_size=0. The DS-MRR
      code did not know about this (it used uint for the buffer size calculation
      and would have underflowed). Returning *buffer_size=0 made ha_partition
      assume that partitions do not need MRR memory and pass the same buffer to
      each of them. Now, if DS-MRR gets *buffer_size=0, it returns the amount of
      buffer space needed, but not more than about @@mrr_buffer_size.
    * Fix ha_{innobase,maria,myisam}::clone(). If ha_partition uses MRR on its
      partitions, and the partitions use DS-MRR, the code calls handler->clone()
      with the TABLE (*not* the partition) name as an argument. DS-MRR has no way
      of knowing the partition name, so the ::clone() function of each affected
      storage engine now ignores the name argument and obtains the name elsewhere.
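A minimal sketch (hypothetical names and numbers, not the actual DsMrr_impl / ha_partition code) of the buffer-size handshake described above: a caller passes *buffer_size == 0 to ask how much memory the MRR implementation would need, while a non-zero value is treated as the actual memory budget.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Hypothetical stand-in for the engine-side memory estimate.
static std::size_t mrr_memory_needed(std::size_t n_ranges)
{
  const std::size_t per_range= 64;           // assumed bookkeeping per range
  const std::size_t mrr_buffer_size= 262144; // stand-in for @@mrr_buffer_size
  // Never ask for more than (about) the configured buffer size.
  return std::min(n_ranges * per_range, mrr_buffer_size);
}

// Returns true if this MRR implementation can be used.
// *buffer_size == 0 means "report how much memory you need";
// a non-zero value is the memory budget actually offered by the caller.
static bool choose_mrr_impl(std::size_t n_ranges, std::size_t *buffer_size)
{
  if (*buffer_size == 0)
  {
    *buffer_size= mrr_memory_needed(n_ranges);  // query mode: report the need
    return true;
  }
  // Normal mode: fall back if the offered budget is too small, instead of
  // letting an unsigned size computation underflow.
  return *buffer_size >= mrr_memory_needed(n_ranges);
}

int main()
{
  std::size_t size= 0;              // 0 == "tell me what you need"
  choose_mrr_impl(1000, &size);
  std::printf("this partition asks for %zu bytes of MRR buffer\n", size);
  return 0;
}
```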
* MDEV-12353 preparation: Replace mtr_x_lock() and friends (Marko Mäkelä, 2019-11-14; 22 files, -163/+148)
    Apart from page latches (buf_block_t::lock), mini-transactions are keeping
    track of at most one dict_index_t::lock and fil_space_t::latch at a time,
    and in a rare case, purge_sys.latch. Let us introduce interfaces for
    acquiring an index latch or a tablespace latch.
    In a later version, we may want to introduce mtr_t members for holding a
    latched dict_index_t* and fil_space_t*, and replace the remaining use of
    mtr_t::m_memo with std::set<buf_block_t*> or with a
    map<buf_block_t*,byte*> pointing to log records.
* MDEV-20949: Merge 10.2 into 10.3 (Marko Mäkelä, 2019-11-14; 16 files, -315/+383)
    In the test innodb.instant_alter,4k we would be flagging an error for too
    large row size. That error was previously only being reported if the table
    was being rebuilt. Thus, this merge is fixing a small omission in
    MDEV-11369 (instant ADD COLUMN).
| * MDEV-20949 Stop issuing 'row size' error on DML (Eugene Kosov, 2019-11-13; 15 files, -264/+332)
    Move the row size check to the early CREATE/ALTER TABLE phase. Stop checking
    on table open.
    dict_index_add_to_cache(): remove the parameter 'strict'; stop checking the
    row size.
    dict_index_t::record_size_info_t: the result of a row size check operation.
    create_table_info_t::row_size_is_acceptable(): performs the row size check,
    issues an error or warning, and writes the first overflow field to the
    InnoDB log.
    create_table_info_t::create_table(): add the row size check.
    dict_index_t::record_size_info(): a refactored version of
    dict_index_t::rec_potentially_too_big(). The new version does not change any
    global state of the program but returns all the interesting information;
    it is the callers who decide how to handle a row size overflow.
    dict_index_t::rec_potentially_too_big(): removed.
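A minimal sketch (hypothetical names, not the actual InnoDB code) of the refactoring pattern described above: the size computation returns a plain result object instead of emitting errors or touching global state, and each caller decides whether an overflow is a hard error, a warning, or ignorable.

```cpp
#include <cstddef>
#include <cstdio>

struct record_size_info        // stand-in for a result object like record_size_info_t
{
  std::size_t max_record_size;
  std::size_t page_limit;
  std::size_t first_overflow_field;   // index of the first field that overflows
  bool overflows() const { return max_record_size > page_limit; }
};

// Purely computational: no error reporting, no global state.
static record_size_info compute_record_size(std::size_t fields,
                                            std::size_t field_size,
                                            std::size_t page_limit)
{
  record_size_info info{fields * field_size, page_limit, 0};
  if (info.overflows())
    info.first_overflow_field= page_limit / field_size;  // illustrative only
  return info;
}

int main()
{
  // CREATE TABLE path: treat overflow as a hard error.
  record_size_info info= compute_record_size(100, 128, 8126);
  if (info.overflows())
    std::fprintf(stderr, "error: row size %zu exceeds limit %zu (field #%zu)\n",
                 info.max_record_size, info.page_limit,
                 info.first_overflow_field);
  // A table-open/DML path could instead log a warning or ignore the overflow.
  return 0;
}
```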
* | Merge 10.2 into 10.3 (Marko Mäkelä, 2019-11-14; 2 files, -130/+69)
| * Clean up mtr_t::commit() further (Marko Mäkelä, 2019-11-13; 1 file, -129/+68)
    memo_block_unfix(), memo_latch_release(): Merge to ReleaseLatches.
    memo_slot_release(), ReleaseAll: Clean up the formatting.
| * MDEV-20934: Correct a debug assertion (Marko Mäkelä, 2019-11-13; 1 file, -1/+1)
    A search with PAGE_CUR_GE may land on the supremum record on a leaf page
    that is not the rightmost leaf page. This could occur when all keys on the
    current page are smaller than the search key, and the smallest key on the
    successor page is larger than the search key.
    ibuf_delete_recs(): Correct the debug assertion accordingly.
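A small illustration (plain C++, not InnoDB code) of the situation described above: a greater-or-equal search confined to one page can land past the last user record (the analogue of the supremum) even though the page is not the rightmost one, because every key on that page is smaller than the search key while the successor page starts with a larger one.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
  // Two adjacent "leaf pages"; page1 is not the rightmost page.
  std::vector<int> page1{10, 20, 30};   // current page
  std::vector<int> page2{50, 60, 70};   // successor page
  int key= 40;                          // 30 < 40 < 50

  // GE search restricted to page1: lands past the last record (the "supremum").
  auto pos= std::lower_bound(page1.begin(), page1.end(), key);
  if (pos == page1.end())
    std::puts("landed on the page-1 supremum, although page 1 is not the rightmost page");

  // The real match is the first record of the successor page.
  auto next= std::lower_bound(page2.begin(), page2.end(), key);
  std::printf("first key >= %d is %d on page 2\n", key, *next);
  return 0;
}
```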
| * Fix a typo in mariadb-plugin-mroonga.prerm (Yasuhiro Horimoto, 2019-11-12; 1 file, -1/+1)
    Closes #1407
* | MDEV-20646: 10.3.18 is slower than 10.3.17 (Sergei Petrunia, 2019-11-13; 3 files, -12/+27)
    Fix an incorrect change introduced in the fix for MDEV-20109. That patch
    tried to compute a more precise estimate for the record_count value in the
    SJ-Materialization-Scan strategy (in Sj_materialization_picker::check_qep).
    However, the new formula is worse, as it produces extremely optimistic
    results in common cases where SJ-Materialization-Scan should be used. The
    old formula produces pessimistic results in cases when
    SJ-Materialization-Scan is unlikely to be a good choice anyway, so the old
    behavior is better.
* | Merge 10.2 into 10.3 (Marko Mäkelä, 2019-11-12; 8 files, -507/+256)
| * MDEV-12353 preparation: Clean up mtr_t (Marko Mäkelä, 2019-11-12; 4 files, -495/+254)
    mtr_t::Impl, mtr_t::Command: Merge into mtr_t.
    MTR_MAGIC_N: Remove.
    MTR_STATE_COMMITTING: Remove. This state was only being set internally
    during mtr_t::commit().
    mtr_t::Command::m_locks_released: Remove (set-and-never-read member).
    mtr_t::Command::m_start_lsn: Replaced with the return value of
    finish_write() and a parameter to release_blocks().
    mtr_t::Command::m_end_lsn: Removed as a duplicate of mtr_t::m_commit_lsn.
    mtr_t::Command::prepare_write(): Replace a switch () with a comparison
    against 0; only two m_log_mode values are allowed.
| * MDEV-14602: Cleanup recv_dblwr_t::find_page() (Marko Mäkelä, 2019-11-12; 1 file, -36/+17)
    Avoid creating std::vector, and use single instead of double traversal.
| * Merge 10.1 into 10.2 (Marko Mäkelä, 2019-11-12; 3 files, -8/+8)
| | * MDEV-20953: binlog_encryption.rpl_corruption failed in buildbot due to wrong error code (Sujatha, 2019-11-12; 3 files, -8/+8)
    Problem:
    ========
    CURRENT_TEST: binlog_encryption.rpl_corruption
    mysqltest: In included file "./include/wait_for_slave_io_error.inc": ...
    At line 72: Slave stopped with wrong error code
    **** Slave stopped with wrong error code: 1743 (expected 1595,1913) ****
    Analysis:
    ========
    The test emulates corruption at the various stages of replication, for
    example in the binlog file, in the network and in the relay log, and
    verifies that all corruption cases are handled through appropriate error
    messages. The test cases which emulate network failure expect the
    following errors:
    --ER_SLAVE_RELAY_LOG_WRITE_FAILURE (1595)
    --ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE (1743)
    Ideally the test should expect error codes 1595 and 1743, but it actually
    waits on the incorrect error code list 1595,1913.
    Fix:
    ===
    Added the appropriate error code for
    'ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE': replaced 1913 with 1743.
| * | rpl_semi_sync_gtid_reconnect results merge (Andrei Elkin, 2019-11-11; 1 file, -0/+12)
* | | merge 10.2->10.3 with conflict resolutions (Andrei Elkin, 2019-11-11; 10 files, -5/+290)
| * | manual merge 10.1->10.2 (Andrei Elkin, 2019-11-11; 10 files, -5/+281)
| | * MDEV-19376 Repl_semi_sync_master::commit_trx assertion failure: ... || !m_active_tranxs->is_tranx_end_pos(trx_wait_binlog_name, trx_wait_binlog_pos) (Andrei Elkin, 2019-11-10; 3 files, -1/+108)
    The assert indicates that the current transaction was left uncleaned in the
    semisync master's cache when it was signaled to proceed upon receiving its
    ack. The missed cleanup turns out to be caused by a flaw in the GTID
    connect mode. The binlog file *name* of the last event received by the
    connecting slave, as submitted by that slave, was adopted into
    Repl_semi_sync_master::m_reply_file_name as part of semisync
    initialization, even though the initialization refines the *position* part
    of the submitted binlog coordinates. This master-side filename:pos
    refinement is specific to the GTID connect mode, for the purpose of
    computing the latest binlog file from which to resume feeding the slave.
    Effectively, in the GTID connect mode the computed resumption filename:pos
    may turn out to be smaller, in which case a new transaction committing
    after the connect may be logged with a filename:pos that is also less than
    the submitted coordinates, and that triggers the assert.
    Fixed by making the semisync initialization use the refined filename:pos,
    which is guaranteed to be less than any newly generated transaction's
    binlog:pos.
| | * bump the VERSION (Daniel Bartholomew, 2019-11-08; 1 file, -1/+1)
| | * MDEV-20981 wsrep_sst_mariabackup fails silently when mariabackup is not installed (#1406) (Hartmut Holzgraefe, 2019-11-08; 1 file, -0/+7)
    Make sure that failure to find the mariabackup binary does not terminate
    the script silently; terminate with a clear error message instead.
| | * MDEV-20519: Query plan regression with optimizer_use_condition_selectivity > 1 (Varun Gupta, 2019-11-07; 6 files, -4/+166)
    The issue here is a wrong estimate of the cardinality of a partial join:
    the cardinality is too high because the function table_cond_selectivity()
    returns an absurd number, 100, while selectivity cannot be greater than 1.
    When accessing table t by outer reference t1.a via an index, we do not
    perform any range analysis for t. Yet TABLE::quick_key_parts[key] and
    TABLE::quick_rows[key] contain non-zero values, though they should have
    remained untouched and equal to 0. Thus the real cause of the problem is
    that TABLE::init does not clear the arrays TABLE::quick_key_parts[] and
    TABLE::quick_rows[]. It should do so because the TABLE structure created
    for any instance of a table can be reused for many queries.
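A minimal sketch (hypothetical types and field names, not the actual TABLE code) of the kind of reset implied above: a structure that is reused across queries must clear its per-query estimate arrays in its init routine, otherwise stale values from an earlier query are mistaken for the results of range analysis that never ran.

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

constexpr std::size_t MAX_KEYS= 4;

struct ReusableTable            // stand-in for a structure reused across queries
{
  std::array<unsigned, MAX_KEYS> quick_key_parts{};
  std::array<double,   MAX_KEYS> quick_rows{};

  // Called at the start of every query that uses this table instance.
  void init_for_query()
  {
    // Without these resets, estimates left over from a previous query would
    // look like the output of range analysis that never actually ran.
    quick_key_parts.fill(0);
    quick_rows.fill(0.0);
  }
};

int main()
{
  ReusableTable t;
  t.init_for_query();
  t.quick_rows[1]= 100.0;       // range analysis ran for key 1 in query #1

  t.init_for_query();           // query #2: no range analysis for key 1
  std::printf("quick_rows[1] = %g (must be 0 for query #2)\n", t.quick_rows[1]);
  return 0;
}
```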
* | | Merge 10.2 into 10.3 (Marko Mäkelä, 2019-11-11; 18 files, -249/+130)
| * | MDEV-21024: Cleanup XDES_CLEAN_BIT (Marko Mäkelä, 2019-11-11; 1 file, -2/+4)
    The XDES_CLEAN_BIT is always set for every element of the page allocation
    bitmap in the extent descriptor pages. Do not bother touching it, to avoid
    redundant writes.
| * | MDEV-21024: Clean up dict_hdr_create() (Marko Mäkelä, 2019-11-11; 1 file, -3/+2)
    The DICT_HDR_MAX_SPACE_ID was already zero-initialized at page allocation.
| * | MDEV-21024: Clean up IMPORT TABLESPACE (Marko Mäkelä, 2019-11-11; 5 files, -57/+26)
    page_rec_write_field(): Remove.
    dict_create_index_tree_step(): If the SYS_INDEXES.PAGE does not change, do
    not update it in the data dictionary. Typically, all index page numbers
    would be unchanged before and after IMPORT TABLESPACE, except if some
    secondary indexes were created after loading some data.
    btr_root_fseg_adjust_on_import(): Remove the redundant mtr_t* parameter.
    Redo logging is disabled during the page adjustments that IMPORT
    TABLESPACE is performing.
| * | MDEV-21024: Clean up btr_root_raise_and_insert() (Marko Mäkelä, 2019-11-11; 1 file, -7/+1)
    The root page must never have any siblings, so it is unnecessary to clear
    those fields.
| * | MDEV-21024: Clean up page allocation (Marko Mäkelä, 2019-11-11; 1 file, -12/+8)
    fsp_alloc_seg_inode_page(): Ever since commit
    3926673ce7149aa223103126b6aeac819b10fab5, all newly allocated pages are
    zero-initialized. Assert that this is the case for the FSEG_ID fields.
| * | MDEV-21024: Optimize writing BTR_EXTERN_LEN (Marko Mäkelä, 2019-11-11; 1 file, -2/+2)
    btr_store_big_rec_extern_fields(): Remove the redundant initialization of
    the most significant 32 bits of BTR_EXTERN_LEN. InnoDB never supported
    BLOBs that are longer than 4GiB. In fact, dtuple_convert_big_rec() would
    emit an error message if a clustered index record tuple were to exceed
    1,000,000,000 bytes in length. The BTR_EXTERN_LEN in the BLOB pointers in
    clustered index leaf page records has been zero-initialized at least since
    commit 41bb3537ba507799ab0143acd75ccab72192931e.
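A small generic illustration (not the actual InnoDB page-format code) of the optimization described above: when an 8-byte big-endian on-page length field is known to be zero-initialized and the stored value never exceeds 32 bits, only the low 4 bytes need to be written.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Write a 32-bit value in big-endian byte order.
static void write_be32(unsigned char *dst, uint32_t v)
{
  dst[0]= static_cast<unsigned char>(v >> 24);
  dst[1]= static_cast<unsigned char>(v >> 16);
  dst[2]= static_cast<unsigned char>(v >> 8);
  dst[3]= static_cast<unsigned char>(v);
}

int main()
{
  unsigned char field[8];               // 8-byte big-endian length field
  std::memset(field, 0, sizeof field);  // the page was zero-initialized

  const uint32_t blob_len= 123456;      // lengths stay below 4 GiB (32 bits)
  // Writing 0 into field[0..3] would be redundant -- it is already zero.
  write_be32(field + 4, blob_len);      // only the low 4 bytes need updating

  std::printf("high half: %02x%02x%02x%02x, low half holds %u\n",
              field[0], field[1], field[2], field[3], blob_len);
  return 0;
}
```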
| * | MDEV-21024: Clean up rtr_adjust_upper_level() (Marko Mäkelä, 2019-11-11; 1 file, -1/+0)
    Remove the unnecessary retrieval and null-modifications of the preceding page.
| * | Cleanup btr_page_get_prev(), btr_page_get_next() (Marko Mäkelä, 2019-11-11; 13 files, -145/+89)
    Remove the redundant parameter mtr_t*. Make use of page_has_prev(),
    page_has_next() whenever possible.
| * | MDEV-21024: Clean up rtr_adjust_upper_level() (Marko Mäkelä, 2019-11-11; 1 file, -20/+0)
    Remove the unnecessary retrieval and null-modifications of the preceding page.
| * | bump the VERSION (Daniel Bartholomew, 2019-11-08; 1 file, -1/+1)
* | | bump the VERSION (Daniel Bartholomew, 2019-11-08; 1 file, -1/+1)
* | | MDEV-20934: Make the test more robust [tag: mariadb-10.3.20] (Marko Mäkelä, 2019-11-06; 2 files, -0/+8)
    Due to MDEV-12288, the slow shutdown in MariaDB 10.3 will include resetting
    the DB_TRX_ID for all inserted records. This might cause the 60-second
    shutdown_server timeout to be exceeded. Let us wait for the purge to
    complete before initiating slow shutdown.
* | | Merge 10.2 into 10.3 (Marko Mäkelä, 2019-11-06; 24 files, -110/+400)
| * | Follow-up to 792c9f9a4977ea428537ca34435d39bd17cec5ff [tag: mariadb-10.2.29] (Marko Mäkelä, 2019-11-06; 4 files, -29/+19)
    dict_index_add_to_cache(): Make the 'index' a reference to a pointer, so
    that the caller will avoid the expensive call to
    dict_index_get_if_in_cache_low().
| * | Merge 10.1 to 10.2 (Marko Mäkelä, 2019-11-06; 18 files, -58/+257)
| | * Merge 5.5 into 10.1 [tag: mariadb-10.1.43] (Marko Mäkelä, 2019-11-06; 2 files, -15/+14)
| | | * bump the VERSION (Daniel Bartholomew, 2019-11-05; 1 file, -1/+1)
| | | * MDEV-20971 ASAN heap-use-after-free in list_delete / heap_close (Sergei Golubchik, 2019-11-04; 2 files, -15/+14)
    Don't save/restore HP_INFO, as it could be changed by a concurrent thread:
    different parts of HP_INFO are protected by different mutexes, and the
    mutex that protects most of HP_INFO does not protect its open_list data.
    As a bonus, make heap_check_heap() take a const HP_INFO* and not make any
    changes there whatsoever.
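A minimal sketch (generic C++ with std::mutex and hypothetical field names, not the actual heap-engine code) of the hazard described above: when different members of one structure are guarded by different mutexes, saving and restoring the whole structure while holding only one of the locks can silently overwrite concurrent updates to the other members.

```cpp
#include <list>
#include <mutex>

struct SharedInfo               // stand-in for a structure like HP_INFO
{
  long records= 0;              // guarded by data_mutex
  std::list<int> open_list;     // guarded by open_list_mutex
};

std::mutex data_mutex;
std::mutex open_list_mutex;

// Unsafe: copies and writes back members it does not hold the lock for.
void check_unsafe(SharedInfo *info)
{
  std::lock_guard<std::mutex> g(data_mutex);
  SharedInfo saved= *info;      // also copies open_list, which is not ours to touch
  /* ... consistency checks that may scribble on *info ... */
  *info= saved;                 // may wipe out a concurrent open_list update
}

// Safer: read-only access, and only to the members the held lock covers.
void check_safe(const SharedInfo *info)
{
  std::lock_guard<std::mutex> g(data_mutex);
  (void) info->records;         // inspect, never modify; leave open_list alone
}

int main()
{
  SharedInfo info;
  check_safe(&info);
  return 0;
}
```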
| | * | MDEV-20987 InnoDB fails to start when fts table has FK relation (Thirunarayanan Balathandayuthapani, 2019-11-06; 4 files, -32/+17)
    InnoDB: Assertion failure in file .../dict/dict0dict.cc line ...
    InnoDB: Failing assertion: table->can_be_evicted
    This fixes a regression that was caused by the fix of MDEV-20621
    (commit a41d429765c7ddb528b9b438c68b25ff55d3bd55).
    MySQL 5.6 (and MariaDB 10.0) introduced eviction of tables from the InnoDB
    data dictionary cache. Tables that are connected to FOREIGN KEY constraints
    or a FULLTEXT INDEX are exempt from eviction. With the problematic change,
    a table that would already be exempt from eviction due to a FOREIGN KEY
    would cause the problem if a FULLTEXT INDEX was also defined on it.
    dict_load_table(): Only prevent eviction if table->can_be_evicted holds.
| | * | bump the VERSION (Daniel Bartholomew, 2019-11-05; 1 file, -1/+1)
| | * | Fix ninja build (Vladislav Vaintroub, 2019-11-05; 1 file, -1/+2)
    Do not rely on the existence of the CMakeFiles/${target}.dir directory.
    It is not there for custom targets in the Ninja build.
| | * | Fix GCC 9.2.1 -Wstringop-truncation (Marko Mäkelä, 2019-11-04; 5 files, -11/+8)
    dict_table_rename_in_cache(): Use strcpy() instead of strncpy(), because
    they are known to be equivalent in this case (the length of old_name was
    already validated).
    mariabackup: Invoke strncpy() with one less than the buffer size, and
    explicitly add NUL as the last byte of the buffer.
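A small sketch (illustrative buffer name and size) of the strncpy() pattern the commit message describes for mariabackup: copy at most one byte less than the buffer size and terminate the buffer explicitly, so the result is always NUL-terminated even when the source is truncated.

```cpp
#include <cstdio>
#include <cstring>

int main()
{
  const char *src= "some possibly very long source string";
  char buf[16];                       // illustrative fixed-size destination

  // Copy at most sizeof(buf) - 1 bytes ...
  std::strncpy(buf, src, sizeof buf - 1);
  // ... and explicitly add NUL as the last byte of the buffer, so the result
  // is always a valid C string even when src was truncated.
  buf[sizeof buf - 1]= '\0';

  std::printf("%s\n", buf);
  return 0;
}
```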
| | * | Fix build on !glibc/powerpc* (pkubaj, 2019-11-02; 1 file, -1/+1)
    Do the same that newer branches do and don't include glibc-related headers
    in a non-glibc environment.
| | * | MDEV-20424: New default value for optimizer_use_condition_selectivity leads to bad plan (Varun Gupta, 2019-11-01; 4 files, -0/+146)
    The function prev_record_reads, which finds the number of different row
    combinations for a subset of a partial join, did not take into account the
    selectivity of the tables involved in that subset.
| | * | MDEV-17896 Assertion `pfs->get_refcount() > 0' failed (Robert Bindar, 2019-11-01; 3 files, -0/+71)
    Unfortunate DROP TEMPORARY..IF EXISTS on a regular table may allow
    subsequent CREATE TABLE statements to steal away the PFS_table_share
    instance from the dropped table.
| | * | List of unstable tests for 10.1.42 release [tag: mariadb-10.1.42] (Elena Stepanova, 2019-10-31; 1 file, -123/+81)
| * | | MDEV-20934 Infinite loop on innodb_fast_shutdown=0 with inconsistent change buffer (Marko Mäkelä, 2019-11-06; 3 files, -8/+106)
    Due to a data corruption bug that may have occurred a long time earlier
    (possibly involving physical backup and MySQL Bug #69122, which was
    addressed in commit f166ec71b78fdf7a08ba413509cf00ad9e003b3c), it seems
    possible that the InnoDB change buffer might end up containing entries,
    while no buffered changes exist according to the change buffer bitmap
    pages in the .ibd files.
    ibuf_delete_recs(): New function, to be invoked on slow shutdown only.
    Remove all buffered changes for a specific page.
    ibuf_merge_or_delete_for_page(): If the change buffer bitmap is clean and a
    slow shutdown is in progress, invoke ibuf_delete_recs(). We do not want to
    do that during normal operation, due to the additional overhead that is
    involved. The bitmap page should be consistent with the change buffer in
    the first place.
| * | | bump the VERSION (Daniel Bartholomew, 2019-11-05; 1 file, -1/+1)