Commit message    Author    Age    Files    Lines
* Roll Forward recovery for binlog tests  [tmp_sachin]  Sachin Kumar  2021-06-07  23  -0/+1007
|
* Fix review comments  Sachin Kumar  2021-05-11  2  -2/+2
|
* MDEV-22179 rr (record and replay) support for mtr  Sachin  2021-05-10  1  -1/+31
    This feature adds support for rr in mtr. The following options are added:
      --rr           run mysqld in rr record mode
      --rr_option=   run rr with a custom record option; for multiple options,
                     use --rr_option= once per option, for example:
                     ./mtr main.view --rr_option=-h --rr_option=-u --rr_option=-c=23
      --boot-rr      run the mysqld performing bootstrap in rr record mode
    Recordings are stored in the mysql-test/var/rr folder. To replay a
    recording, run: rr replay var/rr/mysql-X
    Limitations:
      - A restart will create a new recording.
      - Repeat works on the same recording, so it might be harder to debug.
      - If a test creates multiple instances of MariaDB, all of them are
        stored in var/rr.
* MDEV-21469: Implement crash-safe binary logging of the user XA  [bb-10.5-MDEV_21469]  Sujatha  2020-12-21  24  -46/+1632
    Make XA PREPARE, XA COMMIT and XA ROLLBACK statements crash-safe when
    --log-bin is specified. At execution of XA PREPARE, XA COMMIT and
    XA ROLLBACK, their replication events are written into the binary log
    prior to execution of the commands in the storage engines.

    In case the server crashes after writing to the binary log but before
    all involved engines have processed the command, the following recovery
    will execute the command's replication events to equalize the states of
    the involved engines with that of the binlog. That applies to the whole
    XA PREPARE group of events and to XA COMMIT or ROLLBACK.

    On the implementation level, the recovery-time binary log parsing is
    augmented to pay attention to the user XA xids, to identify the XA
    transactions' states in the binary log and eventually match them against
    their states in the engines; see
    MYSQL_BIN_LOG::recover_explicit_xa_prepare(). In discrepancy cases the
    outdated state in the engines is corrected by resubmitting the
    transaction's prepare group of events, or its completion ones. In the
    multi-engine, partly prepared XA PREPARE case the XA is rolled back
    first.

    The fact that multiple engines are involved is registered in
    Gtid_log_event::flags2 as one bit. The boolean value is sufficient and
    precise for dealing with two engines in an XA transaction. With more
    than two recoverable engines the flag method is still correct, though it
    may be pessimistic, as it treats all recoverable engines as XA
    participants. So when the number of such engines exceeds the number of
    prepared engines of the XA, that XA is treated as partially completed,
    with all that ensues. As an optimization, no new bit is allocated in
    flags2; instead pre-existing ones (of MDEV-742) are reused, observing
    that
      A. XA "COMPLETE" does not require a multi-engine hint for its
         recovery, and
      B. the MDEV-742 XA-completion bit is not used in any way by XA PREPARE
         and its GTID log event.

    Notice that the multi-engine flagging is superseded by the MDEV-21117
    extension of the Gtid log event, so this part should be taken from
    there.
* MDEV-21469 Pre-commit: make sure binlog hton is listed in the head  Andrei Elkin  2020-12-21  4  -15/+84
    of the user XA participants list.

    This patch makes sure that at handling of the user XA the binlog hton
    heads the list of the transaction branches, which was not the case for
    the multi-engine XA. Also a recovered XA's completion (XA COMMIT or
    ROLLBACK through a "foreign" connection) is made to invoke the binlog
    hton's "complete"_by_xid first, before the general hton loop (and
    naturally skip the binlog hton action in the loop).

    The work is a prerequisite for MDEV-21469.
* MDEV-21851: Error in BINLOG_BASE64_EVENT is always error-logged as if it is ↵  Andrei Elkin  2020-05-20  9  -10/+16
    done by Slave

    The prefix of the error log message out of a failed BINLOG applying is
    corrected to be the SQL command name.
* MDEV-22636 XML output for mtr doesn't work with valgrind option  Rasmus Johansson  2020-05-19  1  -0/+1
|
* Merge 10.4 into 10.5  Marko Mäkelä  2020-05-19  5  -45/+88
|\
| * Move c++ code from my_atomic.h to my_atomic_wrapper.h  Monty  2020-05-19  3  -45/+59
    This is because it breaks code that is using extern "C" when including
    my_atomic, which is the case with ha_s3.cc.
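    As a hedged illustration of the underlying problem (invented names, not
    the actual my_atomic headers): C++-only constructs such as templates
    have no C linkage, so a header containing them cannot be pulled into an
    extern "C" block, which is effectively what ha_s3.cc did through its
    includes.

      // Hypothetical sketch, not the real my_atomic/my_atomic_wrapper split.
      #include <atomic>

      // Templates have C++ linkage only; they cannot appear inside extern "C".
      template <typename T>
      T relaxed_load(const std::atomic<T> &a)
      { return a.load(std::memory_order_relaxed); }

      extern "C" {
        // #include "cxx_header_with_the_template_above.h"
        //   -> error: template with C linkage
        // Plain C-compatible declarations are fine here:
        int c_compatible_function(int);
      }

      int c_compatible_function(int x) { return x; }

      int main()
      {
        std::atomic<int> n{41};
        return relaxed_load(n) + c_compatible_function(1) == 42 ? 0 : 1;
      }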
| * MDEV-22610 Crash in INSERT INTO t1 (VALUES (DEFAULT) UNION VALUES (DEFAULT))  Alexander Barkov  2020-05-19  2  -0/+29
    The earlier fix for MDEV-21995 also fixed MDEV-22610. Adding tests only.
* | Merge remote-tracking branch 'origin/10.4' into 10.5  Alexander Barkov  2020-05-19  8  -100/+190
|\ \ | |/
| * Merge remote-tracking branch 'origin/10.3' into 10.4  Alexander Barkov  2020-05-19  8  -106/+191
| |\
| | * MDEV-21995 Server crashes in Item_field::real_type_handler with table value ↵  Alexander Barkov  2020-05-19  8  -106/+190
    constructor

    1. Code simplification:
       Item_default_value handled all these values:
         a. DEFAULT(field)
         b. DEFAULT
         c. IGNORE
       and had various conditions to distinguish (a) from (b) and from (c).
       Introducing a new abstract class
       Item_contextually_typed_value_specification to handle (b) and (c),
       so the hierarchy now looks as follows:

         Item
           Item_result_field
             Item_ident
               Item_field
                 Item_default_value               - DEFAULT(field)
           Item_contextually_typed_value_specification
             Item_default_specification           - DEFAULT
             Item_ignore_specification            - IGNORE

    2. Introducing a new virtual method is_evaluable_expression() to
       determine if an Item is:
       - a normal expression, so its val_xxx()/get_date() methods can be
         called, or
       - just an expression substitute, whose value methods cannot be
         called.

    3. Disallowing Items that are not evaluable expressions in table value
       constructors.
| | * MDEV-22615 system_time_zone may be incorrectly reported as "unknown".  Vladislav Vaintroub  2020-05-18  1  -1/+1
    A TIME_ZONE_ID_UNKNOWN return code from GetDynamicTimeZoneInformation()
    does not mean failure. It only means that the daylight saving dates in
    the returned struct are not valid. TIME_ZONE_ID_INVALID means failure;
    only in that case should "unknown" be returned.
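    A minimal sketch of the distinction described above, calling the Win32
    API directly (illustrative only, not the server's code):

      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
        DYNAMIC_TIME_ZONE_INFORMATION dtzi;
        DWORD rc= GetDynamicTimeZoneInformation(&dtzi);

        if (rc == TIME_ZONE_ID_INVALID)
        {
          // Real failure: only now should the zone be reported as "unknown".
          puts("unknown");
          return 1;
        }
        // TIME_ZONE_ID_UNKNOWN (like _STANDARD and _DAYLIGHT) still returns a
        // usable zone; only the daylight saving dates in dtzi may be invalid.
        printf("time zone key name: %ls\n", dtzi.TimeZoneKeyName);
        return 0;
      }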
* | | largepages: osx compile warning fix  Daniel Black  2020-05-18  1  -2/+2
    my_is_2pow is only used on Linux. Fixes the compile warning:

      mysys/my_largepage.c:48:23: warning: unused function 'my_is_2pow' [-Wunused-function]
      static inline my_bool my_is_2pow(size_t n) { return !((n) & ((n) - 1)); }
                            ^
      1 warning generated.
* | | Merge 10.4 into 10.5  Marko Mäkelä  2020-05-18  89  -872/+1905
|\ \ \ | |/ /
| * | Merge 10.3 into 10.4  Marko Mäkelä  2020-05-18  8  -26/+159
| |\ \ | | |/
| | * Merge 10.2 into 10.3  Marko Mäkelä  2020-05-18  1  -2/+3
| | |\
| | | * MDEV-22611 Assertion btr_search_enabled failed during buffer pool resizing  Marko Mäkelä  2020-05-18  1  -2/+3
    In commit ad6171b91cac33e70bb28fa6865488b2c65e858c (MDEV-22456) we
    removed the acquisition of the adaptive hash index latch from the caller
    of btr_search_update_hash_ref().

    The tests innodb.innodb_buffer_pool_resize_with_chunks and
    innodb.innodb_buffer_pool_resize would occasionally fail starting with
    10.3, due to MDEV-12288 causing more purge activity during the test.

    btr_search_update_hash_ref(): After acquiring the adaptive hash index
    latch, check that the adaptive hash index is still enabled on the page.
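    The fix follows the general check-lock-recheck pattern: a flag observed
    before blocking on a latch can change while the latch is being waited
    for, so it must be validated again afterwards. A generic sketch with
    invented names (not InnoDB code):

      #include <atomic>
      #include <shared_mutex>

      std::atomic<bool> hash_index_enabled{true};   // stand-in for btr_search_enabled
      std::shared_mutex hash_index_latch;           // stand-in for the AHI latch

      void update_hash_ref_for(const void *page)
      {
        if (!hash_index_enabled.load(std::memory_order_relaxed))
          return;                                   // cheap early-out, no latch

        std::unique_lock<std::shared_mutex> latch(hash_index_latch);

        // The index may have been disabled (e.g. by buffer pool resizing)
        // while we were waiting, so the condition must be re-checked.
        if (!hash_index_enabled.load(std::memory_order_relaxed))
          return;

        // ... safe to update the hash reference for `page` here ...
        (void) page;
      }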
| | * | Merge 10.2 into 10.3  Marko Mäkelä  2020-05-18  0  -0/+0
| | |\ \ | | | |/
| | | * MDEV-22554: galera_sst_mariabackup fails with "Failed to start mysqld.2"  Julius Goryavsky  2020-05-18  1  -1/+5
    The problem is caused by the operation of the netcat streamer and does
    not appear on systems where socat is installed. We need to add the "-N"
    option for netcat to call shutdown() on the socket when receiving EOF
    from STDIN.
| | * | MDEV-22554: galera_sst_mariabackup fails with "Failed to start mysqld.2"  Julius Goryavsky  2020-05-18  1  -1/+5
    The problem is caused by the operation of the netcat streamer and does
    not appear on systems where socat is installed. We need to add the "-N"
    option for netcat to call shutdown() on the socket when receiving EOF
    from STDIN.
| | * | Merge 10.2 into 10.3  Marko Mäkelä  2020-05-18  2  -4/+4
| | |\ \ | | | |/
| | | * Merge 10.1 into 10.2  Marko Mäkelä  2020-05-18  2  -4/+4
| | | |\
| | | | * MDEV-21976: mtr main.udf - broaden localhost (#1543)  Daniel Black  2020-05-18  2  -4/+4
    Localhost, depending on the platform, can return any 127.0.0.1/8 address.
| | | * | Travis-CI: Remove builds that always fail to make CI useful again  Otto Kekäläinen  2020-05-17  1  -66/+4
    Also clean away dead code that is not used and will never have any use
    on the 10.2 branch.
| | | * | Travis-CI: Shorten deb build log to keep it under 4 MB  Otto Kekäläinen  2020-05-17  1  -2/+7
    There is a 4 MB hard limit on Travis-CI and build output needs to be
    less than that. Silencing the 'make install' step gets rid of a lot of
    "Installing.." and "Missing..", and removing all mysql-test files will
    make the dh_missing warnings much shorter.

    Cherry-picked from 41952c85f1644690249ce624de7609cbebb93638.
| | | * | Travis-CI: Add missing build dependency dh-exec  Otto Kekäläinen  2020-05-17  1  -0/+1
    Backported from 30b44aaec7120f41ee1383536730947cfa427308.
| | * | | MDEV-21269 Parallel merging of fts index rebuild fails  [bb-10.3-MDEV-21269]  Thirunarayanan Balathandayuthapani  2020-05-17  1  -3/+4
    Problem:
    ========
    During alter rebuild, documents read from the old table are tokenized in
    parallel by innodb_ft_sort_pll_degree threads and stored in their
    respective merge files. While doing the parallel merge, InnoDB wrongly
    skips the root-level selection of merging buffer records, which leads to
    insertion of merge records in non-ascending order.

    Solution:
    =========
    Build the selection tree for the root level as well, so that the root of
    the selection tree always contains the sorted buffer.
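    To illustrate the invariant the fix restores (the root of the merge
    structure must always yield the smallest pending record), here is a
    hedged sketch of a k-way merge. It uses std::priority_queue as a
    stand-in for InnoDB's selection tree; the names and data layout are
    invented for the example.

      #include <cstddef>
      #include <functional>
      #include <queue>
      #include <utility>
      #include <vector>

      // Merge k sorted runs into one sorted output.  The heap root plays the
      // role of the selection tree root: it always holds the smallest record
      // among the current heads of all runs.
      std::vector<int> merge_sorted_runs(const std::vector<std::vector<int>> &runs)
      {
        using Head= std::pair<int, std::size_t>;          // (value, run index)
        std::priority_queue<Head, std::vector<Head>, std::greater<Head>> heap;
        std::vector<std::size_t> pos(runs.size(), 0);

        for (std::size_t r= 0; r < runs.size(); r++)
          if (!runs[r].empty())
            heap.push({runs[r][0], r});                   // seed with each run's head

        std::vector<int> out;
        while (!heap.empty())
        {
          Head h= heap.top();                             // root == global minimum
          heap.pop();
          out.push_back(h.first);
          std::size_t r= h.second;
          if (++pos[r] < runs[r].size())
            heap.push({runs[r][pos[r]], r});              // refill from the same run
        }
        return out;                                       // ascending order guaranteed
      }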
| | * | | Merge remote-tracking branch 'origin/10.2' into 10.3  Alexander Barkov  2020-05-16  4  -16/+147
| | |\ \ \ | | | |/ /
| | | * | Merge remote-tracking branch 'origin/10.1' into 10.2  Alexander Barkov  2020-05-16  4  -14/+149
| | | |\ \ | | | | |/
    Also, adding 10.2-related changes for MDEV-22579.
| | | | * MDEV-22579 No error when inserting DEFAULT(non_virtual_column) into a ↵  Alexander Barkov  2020-05-15  4  -4/+63
    virtual column

    The code erroneously allowed both:
      INSERT INTO t1 (vcol) VALUES (DEFAULT);
      INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));
    The former is OK, but the latter is not.

    Adding a new virtual method in Item:
      virtual bool vcol_assignment_allowed_value() const { return false; }
    Item_null, Item_param and Item_default_value override it.
    Item_default_value overrides it in such a way as to:
    - allow DEFAULT
    - disallow DEFAULT(col)
| * | | | MDEV-22456 after-merge fix: introduce Atomic_relaxed  Marko Mäkelä  2020-05-18  7  -35/+61
    In the merge 9e6e43551fc61bc34152f8d60f5d72f0d3814787 we made
    Atomic_counter a more generic wrapper of std::atomic so that
    dict_index_t would support the implicit assignment operator.

    It is better to revert the changes to Atomic_counter and instead
    introduce Atomic_relaxed as a generic wrapper around std::atomic. Unlike
    Atomic_counter, we will not define operator++, operator+= or similar,
    because we want to make the operations more explicit in the users of
    Atomic_relaxed: unlike loads and stores, atomic read-modify-write
    operations always incur some overhead.
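    A minimal sketch of the idea (invented class, not the actual MariaDB
    Atomic_relaxed): a thin wrapper exposing only relaxed loads, stores and
    assignment, so that any read-modify-write has to be spelled out
    explicitly at the call site.

      #include <atomic>

      template <typename T>
      class relaxed_atomic
      {
        std::atomic<T> v;
      public:
        relaxed_atomic()= default;
        relaxed_atomic(T x) : v(x) {}
        // Copying loads and stores with relaxed ordering; this is what lets
        // enclosing classes keep an implicit assignment operator.
        relaxed_atomic(const relaxed_atomic &o) : v(o.load()) {}
        relaxed_atomic &operator=(const relaxed_atomic &o)
        { store(o.load()); return *this; }

        T load() const { return v.load(std::memory_order_relaxed); }
        void store(T x) { v.store(x, std::memory_order_relaxed); }
        operator T() const { return load(); }
        relaxed_atomic &operator=(T x) { store(x); return *this; }

        // Deliberately no operator++ or operator+=: read-modify-write must
        // be written out, e.g. via fetch_add(), so its cost stays visible.
        T fetch_add(T x) { return v.fetch_add(x, std::memory_order_relaxed); }
      };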
| * | | | MDEV-21483 : Galera MTR tests failed: galera.MW-328A galera.MW-328B  Jan Lindström  2020-05-18  8  -6/+85
    Enable tests with additional Galera output to find out the actual reason
    for the test failures.
| * | | | MDEV-22554: galera_sst_mariabackup fails with "Failed to start mysqld.2"  Julius Goryavsky  2020-05-18  1  -1/+5
    The problem is caused by the operation of the netcat streamer and does
    not appear on systems where socat is installed. We need to add the "-N"
    option for netcat to call shutdown() on the socket when receiving EOF
    from STDIN.
| * | | | MDEV-22556: Incorrect result for window function when using encrypt-tmp-files=ON  Varun Gupta  2020-05-17  4  -2/+47
    The issue here is that end_of_file for an encrypted temporary IO_CACHE
    (used by filesort) is updated using lseek. Encryption adds storage
    overhead and hides it from the caller by recalculating offsets and
    lengths. Two different IO_CACHEs cannot possibly modify the same file,
    because the encryption key is randomly generated and stored in the
    IO_CACHE. So when the tempfiles are encrypted, DO NOT use lseek to
    change end_of_file.

    Further observations about updating end_of_file using lseek:
    1) The end_of_file update is only used for binlog index files.
    2) The whole point is to update the file length when the file was
       modified via a different file descriptor.
    3) The temporary IO_CACHE files can never be modified via a different
       file descriptor.
    4) For an encrypted temporary IO_CACHE, end_of_file should not be
       updated with lseek.
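    A hedged sketch of the underlying pitfall (invented types, not IO_CACHE
    code): when a layer adds hidden per-block overhead, the physical size
    reported by lseek(fd, 0, SEEK_END) is larger than the logical payload
    size, so the logical end_of_file must be tracked separately.

      #include <cstdint>
      #include <cstdio>
      #include <vector>

      // A writer that stores each payload block with a 1-byte "header",
      // the way an encrypting cache adds storage overhead.
      struct block_file
      {
        std::vector<uint8_t> storage;   // stands in for the on-disk temp file
        uint64_t logical_end= 0;        // what end_of_file must track

        void append(const std::vector<uint8_t> &payload)
        {
          storage.push_back((uint8_t) payload.size());  // per-block overhead
          storage.insert(storage.end(), payload.begin(), payload.end());
          logical_end+= payload.size();                 // payload bytes only
        }
      };

      int main()
      {
        block_file f;
        f.append({1, 2, 3});
        f.append({4, 5});
        // The physical size (what lseek on the real file would report)
        // includes the overhead, so it must not be used as end_of_file.
        std::printf("physical=%zu logical=%llu\n",
                    f.storage.size(), (unsigned long long) f.logical_end);
        return 0;
      }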
| * | | | Merge 10.3 into 10.4  Marko Mäkelä  2020-05-16  7  -12/+93
| |\ \ \ \ | | |/ / /
| | * | | Merge 10.2 into 10.3  Marko Mäkelä  2020-05-16  5  -13/+89
| | |\ \ \ | | | |/ /
| | | * | MDEV-13626: Make test more robust  Marko Mäkelä  2020-05-15  1  -0/+4
    In commit b1742a5c951633213d756600ee73ba7bfa7800ff we forgot FLUSH
    TABLES, potentially causing errors for MyISAM system tables.
| | | * | Merge 10.1 into 10.2  Marko Mäkelä  2020-05-15  3  -3/+62
| | | |\ \ | | | | |/
| | | | * MDEV-22498: SIGSEGV in Bitmap<64u>::merge on SELECT  Varun Gupta  2020-05-14  3  -3/+62
    For the case when the optimizer does the IN-EXISTS transformation, the
    equality condition is injected into the WHERE or HAVING clause of the
    subquery. If the select list of the subquery has a reference to the
    parent select, make sure to use the reference and not the original item.
| | | * | MDEV-22544 Inconsistent and Incorrect rw-lock stats  Marko Mäkelä  2020-05-15  1  -9/+22
| | | |\ \
    The rw_lock_stats were incorrectly updated. While global statistics have
    limited usefulness, we cannot remove them from a GA version. This
    contribution slightly improves performance in write workloads.
| | | | * | MDEV-22544: Inconsistent and Incorrect rw-lock stats  Krunal Bauskar  2020-05-14  1  -9/+22
    There are multiple inconsistencies and incorrect ways in which rw-lock
    stats are calculated:

    - Shared rw-lock stats: the "rounds" counter is incremented only once
      for N rounds done in a spin-cycle.

    - All rw-lock stats: if the spin-cycle is short-circuited, attempts are
      re-counted. [If the spin-cycle is interrupted before it completes
      srv_n_spin_wait_rounds (default 30) rounds, spin_count is incremented
      to account for this. If the thread resumes the spin-cycle (due to
      unavailability of the locks) and is again interrupted or completes,
      spin_count is again incremented with the total count, failing to
      adjust for the previous increment.]

    - s/x rw-lock stats: the spin_loop counter is not incremented at all;
      instead it is reported as 0 (in the SHOW ENGINE output) and the
      division used to calculate spin-rounds per spin-loop is adjusted. As
      per the original semantics, the spin_loop counter should be
      incremented once per spin-loop execution.

    - sx rw-lock stats: sx locks increment the spin_loop counter, but
      instead of incrementing it once per spin-loop invocation, they do so
      multiple times, based on how many times the spin-loop flow is repeated
      for the same instance after an os-wait.
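    A hedged sketch of the counting discipline described above (invented
    counters, not the InnoDB code): spin_loops is bumped exactly once per
    spin-wait invocation, spin_rounds once per busy-wait iteration, and a
    short-circuited cycle is not re-counted.

      #include <atomic>
      #include <cstdint>

      struct spin_stats
      {
        std::atomic<uint64_t> spin_loops{0};   // one per spin_wait() call
        std::atomic<uint64_t> spin_rounds{0};  // one per busy-wait iteration
        std::atomic<uint64_t> os_waits{0};     // one per fall-back to blocking
      };

      constexpr unsigned SPIN_WAIT_ROUNDS= 30; // srv_n_spin_wait_rounds default

      // Returns true if try_lock() succeeded while spinning.
      template <typename TryLock>
      bool spin_wait(spin_stats &stats, TryLock &&try_lock)
      {
        stats.spin_loops.fetch_add(1, std::memory_order_relaxed);    // exactly once

        for (unsigned i= 0; i < SPIN_WAIT_ROUNDS; i++)
        {
          stats.spin_rounds.fetch_add(1, std::memory_order_relaxed); // per round
          if (try_lock())
            return true;                       // short-circuit: nothing re-counted
        }

        stats.os_waits.fetch_add(1, std::memory_order_relaxed);
        return false;                          // caller falls back to an os-wait
      }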
| | * | | | MDEV-18100: Clean up test  Marko Mäkelä  2020-05-16  2  -0/+6
| | | | | |
| * | | | | Merge 10.3 into 10.4  Marko Mäkelä  2020-05-16  55  -1082/+606
| |\ \ \ \ \ | | |/ / / /
    We will expose some more std::atomic internals in Atomic_counter, so
    that dict_index_t::lock will support the default assignment operator.
| | * | | | Merge 10.2 into 10.3  Marko Mäkelä  2020-05-15  29  -694/+395
| | |\ \ \ \ | | | |/ / /
| | | * | | MDEV-22456 Dropping the adaptive hash index may cause DDL to lock up InnoDB  Marko Mäkelä  2020-05-15  34  -903/+552
    If the InnoDB buffer pool contains many pages for a table or index that
    is being dropped or rebuilt, and if many of such pages are pointed to by
    the adaptive hash index, dropping the adaptive hash index may consume a
    lot of time. The time-consuming operation of dropping the adaptive hash
    index entries is being executed while the InnoDB data dictionary cache
    dict_sys is exclusively locked.

    It is not actually necessary to drop all adaptive hash index entries at
    the time a table or index is being dropped or rebuilt. We can let the
    LRU replacement policy of the buffer pool take care of this gradually.
    For this to work, we must detach the dict_table_t and dict_index_t
    objects from the main dict_sys cache, and once the last adaptive hash
    index entry for the detached table is removed (when the garbage page is
    evicted from the buffer pool) we can free the dict_table_t and
    dict_index_t object.

    Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE
    skip both the buffer pool eviction and the drop of the adaptive hash
    index. We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP
    TABLE. We can remove the eviction from DROP TABLE. We must retain the
    eviction in the ALTER TABLE...IMPORT TABLESPACE code path, so that in
    case the discarded table is being re-imported with the same tablespace
    identifier, the fresh data from the imported tablespace will replace any
    stale pages in the buffer pool.

    rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can no
    longer be interrupted inside InnoDB.

    fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(),
    fseg_free_page_low(), fseg_free_extent(): Remove the parameter that
    specifies whether the adaptive hash index should be dropped.

    btr_search_lazy_free(): Lazily free an index when the last reference to
    it is dropped from the adaptive hash index.

    buf_pool_clear_hash_index(): Declare static, and move to the same
    compilation unit with the bulk of the adaptive hash index code.

    dict_index_t::clone(), dict_index_t::clone_if_needed(): Clone an index
    that is being rebuilt while adaptive hash index entries exist. The
    original index will be inserted into dict_table_t::freed_indexes and
    dict_index_t::set_freed() will be called.

    dict_index_t::set_freed(), dict_index_t::freed(): Note that or check
    whether the index has been freed. We will use the impossible page number
    1 to denote this condition.

    dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().

    dict_index_t::detach_columns(): Move the assignment n_fields=0 to
    ha_innobase_inplace_ctx::clear_added_indexes(). We must have access to
    the columns when freeing the adaptive hash index. Note:
    dict_table_t::v_cols[] will remain valid. If virtual columns are dropped
    or added, the table definition will be reloaded in
    ha_innobase::commit_inplace_alter_table().

    buf_page_mtr_lock(): Drop a stale adaptive hash index if needed. We will
    also reduce the number of btr_get_search_latch() calls and enclose some
    more code inside #ifdef BTR_CUR_HASH_ADAPT, in order to benefit
    cmake -DWITH_INNODB_AHI=OFF.
| | * | | | Merge 10.2 into 10.3  Marko Mäkelä  2020-05-15  24  -284/+134
| | |\ \ \ \ | | | |/ / /
| | | * | | Amend af784385b4a2b286000fa2c658e34283fe7bba60: Avoid vtable overhead  Marko Mäkelä  2020-05-15  3  -5/+16
    When neither MSAN nor Valgrind is enabled, declare
    Field::mark_unused_memory_as_defined() as an empty inline function,
    instead of declaring it as a virtual function.
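    The general pattern being applied, sketched with invented names and a
    made-up HAVE_MEM_CHECK macro (the real declaration lives in Field): in
    instrumented builds the method stays virtual so subclasses can override
    it; otherwise it collapses to an empty inline no-op that the compiler
    removes entirely.

      // Hypothetical illustration of the pattern, not the MariaDB declaration.
      #if defined(HAVE_MEM_CHECK)             // e.g. an MSAN or Valgrind build
      class field_base
      {
      public:
        // Instrumented builds pay for the vtable so subclasses can mark
        // their unused bytes as defined.
        virtual void mark_unused_memory_as_defined() {}
        virtual ~field_base()= default;
      };
      #else
      class field_base
      {
      public:
        // Regular builds: an empty inline body, no virtual dispatch, no
        // vtable slot; calls are optimized away completely.
        void mark_unused_memory_as_defined() {}
      };
      #endif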
| | | * | | span cleanup  Eugene Kosov  2020-05-15  1  -21/+41
| | | | | |