Commit message  Author  Age  Files  Lines
...
* | | | MDEV-22889: Disable innodb.innodb_force_recovery_rollback  Marko Mäkelä  2020-06-14  1  -0/+1
| | | |
| | | | The test case that was added for MDEV-21217
| | | | (commit b68f1d847f1fc00eed795e20162effc8fbc4119b) should have only two
| | | | possible outcomes for the locking SELECT statement:
| | | |
| | | | (1) The statement is blocked, and the test will eventually fail with a
| | | | lock wait timeout. This is what I observed when the code fix for
| | | | MDEV-21217 was missing.
| | | |
| | | | (2) The lock conflict will ensure that the statement will execute after
| | | | the rollback has completed, and an empty table will be observed. This is
| | | | the expected outcome with the recovery fix.
| | | |
| | | | What occasionally happens (in some of our CI environments only, so far)
| | | | is that the locking SELECT will return all 1,000 rows of the table that
| | | | had been inserted by the transaction that was never supposed to be
| | | | committed. One possibility is that the transaction was unexpectedly
| | | | committed when the server was killed.
| | | |
| | | | Let us disable the test until the reason for the failure has been
| | | | determined and addressed.
* | | | Merge 10.4 into 10.5  Marko Mäkelä  2020-06-14  45  -132/+523
|\ \ \ \
| |/ / /
| * | | MDEV-21560 Assertion `grant_table || grant_table_role' failed in check_grant_all_columns  Sergei Golubchik  2020-06-13  3  -1/+36
| | | |
| | | | With RETURNING it can happen that the user has some privileges on the
| | | | table (namely, DELETE), but later needs different privileges on
| | | | individual columns (namely, SELECT).
| | | |
| | | | Do the same as in check_grant_column(): raise
| | | | ER_COLUMNACCESS_DENIED_ERROR, not an assert.
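| | | |
| | | | A minimal SQL sketch of the failing pattern (object names are
| | | | hypothetical, not the actual regression test):
| | | |
| | | |   CREATE TABLE db1.t1 (a INT, b INT);
| | | |   CREATE USER u1@localhost;
| | | |   GRANT DELETE ON db1.t1 TO u1@localhost;  -- table-level DELETE only
| | | |   -- as u1: DELETE itself is allowed, but RETURNING a needs
| | | |   -- column-level SELECT, which u1 does not have
| | | |   DELETE FROM db1.t1 RETURNING a;  -- ER_COLUMNACCESS_DENIED_ERROR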
| * | | Merge 10.3 into 10.4  Marko Mäkelä  2020-06-13  42  -183/+479
| |\ \ \
| | |/ /
| | * | Merge 10.2 into 10.3  Marko Mäkelä  2020-06-13  34  -135/+350
| | |\ \
| | | |/
| | | * MDEV-21217 innodb_force_recovery=2 may wrongly abort rollback  Marko Mäkelä  2020-06-13  3  -1/+52
| | | |
| | | | trx_roll_must_shutdown(): Correct the condition that detects the start
| | | | of shutdown.
| | | * Fix wrong merge of commit d218d1aa49e848cef2bdbe83bbaf08e474d5209c  Vicențiu Ciorbaru  2020-06-12  3  -9/+4
| | | |
| | | * Merge branch '10.1' into 10.2  Vicențiu Ciorbaru  2020-06-11  19  -61/+207
| | | |\
| | | | * MDEV-22755 CREATE USER leads to indirect SIGABRT in __stack_chk_fail () from fill_schema_user_privileges + *** stack smashing detected *** (on optimized builds)  Alexander Barkov  2020-06-11  3  -16/+58
| | | | |
| | | | | The code erroneously used buff[100] in a few places to build a GRANTEE
| | | | | value in the form: 'user'@'host'
| | | | |
| | | | | Fix:
| | | | | - Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
| | | | | - Adding a DBUG_ASSERT to make sure the buffer is large enough.
| | | | | - Wrapping the code into a class Grantee_str, to make it easier to
| | | | |   reuse in 4 places.
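| | | | |
| | | | | A sketch of the trigger (hypothetical names): the quoted
| | | | | 'user'@'host' string can exceed 100 bytes once both the user name
| | | | | (up to 80 characters) and the host name (up to 60 characters) are
| | | | | long:
| | | | |
| | | | |   CREATE USER 'a_user_name_of_close_to_eighty_characters_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'@'a_host_name_of_close_to_sixty_characters_aaaaaaaaaaaaaaaaa';
| | | | |   -- reading the GRANTEE column exercised the undersized buffer
| | | | |   SELECT GRANTEE FROM information_schema.USER_PRIVILEGES;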
| | | | * MDEV-22834: Disks plugin - change datatype to bigint  Vicențiu Ciorbaru  2020-06-10  2  -9/+10
| | | | |
| | | | | On large hard disks (> 2TB), the plugin won't function correctly,
| | | | | always showing 2 TB of available space due to integer overflow.
| | | | | Upgrade table fields to bigint to resolve this problem.
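| | | | |
| | | | | How the overflow shows up, assuming the plugin is installed (column
| | | | | names as in this plugin's DISKS table; sizes are reported in KiB):
| | | | |
| | | | |   INSTALL SONAME 'disks';
| | | | |   SELECT Disk, Path, Total, Used, Available
| | | | |   FROM information_schema.DISKS;
| | | | |   -- before the fix, Total/Used/Available were 32-bit INTs, so any
| | | | |   -- volume larger than 2 TB was capped at about 2 TB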
| | | | * MDEV-5924: MariaDB could crash after changing the query_cache size  Oleksandr Byelkin  2020-06-10  3  -3/+42
| | | | |
| | | | | The real problem was that the attempt to roll back changes after
| | | | | running out of memory in the query cache was made incorrectly and led
| | | | | to using uninitialized memory. (The bug has nothing to do with the
| | | | | resize operation; it is just a lack-of-resources error processed
| | | | | incorrectly.)
| | | | * Revert "MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL"  Oleksandr Byelkin  2020-06-10  3  -13/+1
| | | | |
| | | | | This reverts commit 443391236d20cd0303fcc9957eb49a6aaf28316e.
| | | | * MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL  rucha174  2020-06-09  3  -1/+13
| | | | |
| | | | | In case of a SELECT without tables which returns either 0 or 1 rows,
| | | | | JOIN::exec_inner() did not check whether the flag representing
| | | | | SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned 0,
| | | | | so SELECT FOUND_ROWS() was giving 0 in the output.
| | | | |
| | | | | Now it checks the flag: if it is set, send_records is set to 1, else
| | | | | to 0. 1 is the number of rows that could have been sent to the client
| | | | | if the SELECT query had SQL_CALC_FOUND_ROWS; it is 0 when no rows were
| | | | | sent because the SELECT query did not have SQL_CALC_FOUND_ROWS.
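| | | | |
| | | | | A sketch of the affected behaviour (LIMIT 0 suppresses the row so
| | | | | that only the calculated count remains):
| | | | |
| | | | |   SELECT SQL_CALC_FOUND_ROWS 'x' FROM DUAL LIMIT 0;
| | | | |   SELECT FOUND_ROWS();  -- expected 1; the bug returned 0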
| | | | * MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long)  Sujatha  2020-06-08  1  -2/+2
| | | | |
| | | | | Fix:
| | | | | ===
| | | | | Initialize the 'number' variable to '0'.
| | | | * Client spelling mistakes  Ian Gilfillan  2020-06-08  7  -32/+32
| | | | |
| | | | * MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds  Varun Gupta  2020-06-07  4  -1/+70
| | | | |
| | | | | For low sort_buffer_size, in the cost calculation of using the Unique
| | | | | object, the number of elements in the tree was evaluated to 0. Make
| | | | | sure to have at least 1 element in the Unique tree.
| | | | |
| | | | | Also, for the function Unique::get, allocate memory for at least
| | | | | MERGEBUFF2 + 1 keys.
| | | * | MDEV-21619 Server crash or assertion failures in my_datetime_to_str  Alexander Barkov  2020-06-11  7  -0/+85
| | | | |
| | | | | Item_cache_datetime::decimals was always copied from example->decimals
| | | | | without limiting to 6 (the maximum possible number of fractional
| | | | | digits), so val_str() later crashed on asserts inside my_time_to_str()
| | | | | and my_datetime_to_str().
| | | * | Remove a stale test  Marko Mäkelä  2020-06-10  2  -58/+0
| | | | |
| | | | | The last traces of the special InnoDB table names were removed in
| | | | | commit 0af52734a790b61690ada63492974b3670116e3f but we forgot to remove
| | | | | the test case.
| | | * | MDEV-22849 Reuse skip_trailing_space() in my_hash_sort_utf8mbX  Alexander Barkov  2020-06-10  1  -15/+11
| | | | |
| | | | | Replacing the slow loop in my_hash_sort_utf8mbX() with the fast
| | | | | skip_trailing_space(), which consumes 8 bytes per iteration and is
| | | | | around 8 times faster on long data.
| | | | |
| | | | | Also, renaming:
| | | | | - my_hash_sort_utf8() to my_hash_sort_utf8mb3()
| | | | | - my_hash_sort_utf8_nopad() to my_hash_sort_utf8mb3_nopad()
| | | | | to make merging to 10.5 easier (automatically?).
| | | * | innodb: dict_mem_table_add_col - compile warning fix argument 1 null where non-null expected (#1584)  Daniel Black  2020-06-09  1  -2/+2
| | | | |
| | | | | cd /build-mariadb-server-10.5-mysql_release/storage/innobase &&
| | | | | /usr/bin/powerpc64le-linux-gnu-g++ -DBTR_CUR_ADAPT -DBTR_CUR_HASH_ADAPT
| | | | |   -DCOMPILER_HINTS -DDBUG_TRACE -DEMBEDDED_LIBRARY -DHAVE_BZIP2=1
| | | | |   -DHAVE_C99_INITIALIZERS -DHAVE_CONFIG_H
| | | | |   -DHAVE_FALLOC_PUNCH_HOLE_AND_KEEP_SIZE=1 -DHAVE_IB_LINUX_FUTEX=1
| | | | |   -DHAVE_LZ4=1 -DHAVE_LZ4_COMPRESS_DEFAULT=1 -DHAVE_LZMA=1
| | | | |   -DHAVE_NANOSLEEP=1 -DHAVE_OPENSSL -DHAVE_SCHED_GETCPU=1
| | | | |   -DLINUX_NATIVE_AIO=1 -DMUTEX_EVENT -DWITH_INNODB_DISALLOW_WRITES
| | | | |   -D_FILE_OFFSET_BITS=64 -Iwsrep-lib/include -Iwsrep-lib/wsrep-API/v26
| | | | |   -I/home/dan/build-mariadb-server-10.5-mysql_release/include
| | | | |   -Istorage/innobase/include -Istorage/innobase/handler
| | | | |   -Ilibbinlogevents/include -Itpool -Iinclude -Isql -pie -fPIC
| | | | |   -Wl,-z,relro,-z,now -fstack-protector --param=ssp-buffer-size=4
| | | | |   -Wconversion -Wno-sign-conversion -O3 -g -static-libgcc
| | | | |   -fno-omit-frame-pointer -fno-strict-aliasing -Wno-uninitialized
| | | | |   -D_FORTIFY_SOURCE=2 -DDBUG_OFF -Wall -Wextra -Wformat-security
| | | | |   -Wno-format-truncation -Wno-init-self -Wno-nonnull-compare
| | | | |   -Wno-unused-parameter -Woverloaded-virtual -Wnon-virtual-dtor -Wvla
| | | | |   -Wwrite-strings -DUNIV_LINUX -D_GNU_SOURCE=1 -fPIC
| | | | |   -fvisibility=hidden -std=gnu++11
| | | | |   -o CMakeFiles/innobase_embedded.dir/dict/dict0load.cc.o
| | | | |   -c storage/innobase/dict/dict0load.cc
| | | | |
| | | | | storage/innobase/dict/dict0load.cc: In function ‘const char* dict_process_sys_columns_rec(mem_heap_t*, const rec_t*, dict_col_t*, table_id_t*, const char**, ulint*)’:
| | | | | storage/innobase/dict/dict0load.cc:1653:26: warning: argument 1 null where non-null expected [-Wnonnull]
| | | | |    dict_mem_table_add_col(table, heap, name, mtype,
| | | | |    ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
| | | | |       prtype, col_len);
| | | | |       ~~~~~~~~~~~~~~~~
| | | | | In file included from storage/innobase/include/dict0dict.h:32:0,
| | | | |                  from storage/innobase/include/btr0pcur.h:30,
| | | | |                  from storage/innobase/dict/dict0load.cc:31:
| | | | | storage/innobase/include/dict0mem.h:323:1: note: in a call to function ‘void dict_mem_table_add_col(dict_table_t*, mem_heap_t*, const char*, ulint, ulint, ulint)’ declared here
| | | | |  dict_mem_table_add_col(
| | | | |  ^~~~~~~~~~~~~~~~~~~~~~
| | * | | MDEV-22268 virtual longlong Item_func_div::int_op(): Assertion `0' failed in Item_func_div::int_op  Alexander Barkov  2020-06-13  4  -7/+61
| | | | |
| | | | | Item_func_div::fix_length_and_dec_temporal() set the return data type
| | | | | to integer in case of @div_precision_increment==0 for temporal input
| | | | | with FSP=0. This caused Item_func_div to call int_op(), which is not
| | | | | implemented, so a crash on DBUG_ASSERT(0) happened.
| | | | |
| | | | | Fixing fix_length_and_dec_temporal() to set the result type to
| | | | | DECIMAL.
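| | | | |
| | | | | A sketch of the kind of statement affected (hypothetical table):
| | | | |
| | | | |   SET @@div_precision_increment = 0;
| | | | |   CREATE TABLE t1 (t TIME);
| | | | |   INSERT INTO t1 VALUES ('10:00:00');
| | | | |   SELECT t / 1 FROM t1;  -- asserted on debug builds; now DECIMAL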
| | * | | MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list  Varun Gupta  2020-06-09  4  -38/+65
| | | | |
| | | | | Backported from MySQL
| | | | | Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
| | | | |
| | | | | Issue:
| | | | | ------
| | | | | The problem occurs when:
| | | | | 1) GROUP_CONCAT (DISTINCT ....) is used in the query.
| | | | | 2) The data size is greater than the value of the system variable
| | | | |    tmp_table_size.
| | | | | The result would contain values that are non-unique.
| | | | |
| | | | | Root cause:
| | | | | -----------
| | | | | An in-memory structure is used to filter out non-unique values. When
| | | | | the data size exceeds tmp_table_size, the overflow is written to disk
| | | | | as a separate file. The expectation here is that when all such files
| | | | | are merged, the full set of unique values can be obtained.
| | | | |
| | | | | But the Item_func_group_concat::add function is in a bit of a hurry.
| | | | | Even as it is adding values to the tree, it wants to decide whether a
| | | | | value is unique and write it to the result buffer. This works fine if
| | | | | the configured maximum size is greater than the size of the data. But
| | | | | since tmp_table_size is set to a low value, the tree is smaller and
| | | | | hence requires the creation of multiple copies on disk.
| | | | | Item_func_group_concat currently has no mechanism to merge all the
| | | | | copies on disk and then generate the result. This results in duplicate
| | | | | values.
| | | | |
| | | | | Solution:
| | | | | ---------
| | | | | In case of the DISTINCT clause, don't write to the result buffer
| | | | | immediately. Do the merge and only then put the unique values in the
| | | | | result buffer. This has been done in Item_func_group_concat::val_str.
| | | | |
| | | | | Note regarding result file changes:
| | | | | -----------------------------------
| | | | | Earlier, when a unique value was seen in Item_func_group_concat::add,
| | | | | it was dumped to the output, so the result is in the order stored in
| | | | | the storage engine. But with this fix, we wait until all the data is
| | | | | read and the final set of unique values is written to the output
| | | | | buffer, so the data appears in sorted order.
| | | | |
| | | | | This only fixes the cases where we have DISTINCT without an ORDER BY
| | | | | clause in GROUP_CONCAT.
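| | | | |
| | | | | A sketch of the trigger (hypothetical sizes and data; relies on the
| | | | | sequence engine for bulk rows, and the server may clamp the variable
| | | | | to its minimum):
| | | | |
| | | | |   SET @@tmp_table_size = 1024;           -- force spilling to disk
| | | | |   SET @@group_concat_max_len = 1048576;  -- avoid truncation
| | | | |   CREATE TABLE t1 (a VARCHAR(100));
| | | | |   INSERT INTO t1 SELECT seq % 50 FROM seq_1_to_10000;
| | | | |   -- before the fix the list could contain duplicates
| | | | |   SELECT GROUP_CONCAT(DISTINCT a) FROM t1;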
| * | | | MDEV-22499 Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed in TABLE_SHARE::init_from_binary_frm_image  Alexander Barkov  2020-06-12  2  -0/+36
| | | | |
| | | | | The patch for MDEV-22111 fixed MDEV-22499 as well.
| | | | | Adding tests only.
| * | | | MDEV-22854 Garbage returned with SELECT CASE..DEFAULT(timestamp_field_with_now_as_default)  Alexander Barkov  2020-06-10  4  -0/+31
| | | | |
| | | | | Item_default_value did not override val_native(), so the inherited
| | | | | Item_field::val_native() was called. As a result,
| | | | | Item_default_value::calculate() was not called and
| | | | | Item_field::val_native() was called on a Field with a non-initialized
| | | | | ptr.
| | | | |
| | | | | Implementing Item_default_value::val_native() properly.
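| | | | |
| | | | | A hypothetical repro in the spirit of the subject line (not the
| | | | | actual regression test):
| | | | |
| | | | |   CREATE TABLE t1 (a TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
| | | | |   INSERT INTO t1 VALUES (NOW());
| | | | |   -- reading DEFAULT(a) through CASE went via val_native()
| | | | |   SELECT CASE WHEN a IS NULL THEN DEFAULT(a) ELSE DEFAULT(a) END
| | | | |   FROM t1;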
| * | | | MDEV-22734 Assertion `mon > 0 && mon < 13' failed in sec_since_epoch  Alexander Barkov  2020-06-08  3  -3/+24
| | | | |
| | | | | When processing a condition like:
| | | | |   WHERE timestamp_column='2010-00-01 00:00:00'
| | | | | don't replace the constant with an Item_datetime_literal if the
| | | | | constant has zeros (in the month or in the day).
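| | | | |
| | | | | A sketch of the affected condition (hypothetical table):
| | | | |
| | | | |   CREATE TABLE t1 (ts TIMESTAMP);
| | | | |   -- the constant has a zero month, so it must not be rewritten
| | | | |   -- into an Item_datetime_literal during condition rewriting
| | | | |   SELECT * FROM t1 WHERE ts = '2010-00-01 00:00:00';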
* | | | | MDEV-22884 Assertion `grant_table || grant_table_role' failed on perfschema  Sergei Golubchik  2020-06-13  3  -0/+20
| | | | |
| | | | | When allowing access via perfschema callbacks, update the cached
| | | | | GRANT_INFO to match.
* | | | | MDEV-22190 InnoDB: Apparent corruption of an index page ... to be written  Marko Mäkelä  2020-06-13  1  -0/+6
| | | | |
| | | | | An InnoDB check for the validity of index pages would occasionally
| | | | | fail in the test encryption.innodb_encryption_discard_import.
| | | | |
| | | | | An analysis of a "rr replay" failure trace revealed that the problem
| | | | | basically is a combination of two old anomalies, and a recently
| | | | | implemented optimization in MariaDB 10.5. MDEV-15528 allows InnoDB to
| | | | | discard buffer pool pages that were freed.
| | | | |
| | | | | PageBulk::init() will disable the InnoDB validity check, because
| | | | | during native ALTER TABLE (rebuilding tables or creating indexes) we
| | | | | could write inconsistent index pages to data files. In the occasional
| | | | | test failure, page 8:6 would have been written from the buffer pool to
| | | | | the data file and subsequently freed. However, fil_crypt_thread may
| | | | | perform dummy writes to pages that have been freed. In case we are
| | | | | causing an inconsistent page to be re-encrypted on page flush, we
| | | | | should disable the check.
| | | | |
| | | | | In the analyzed "rr replay" trace, a fil_crypt_thread attempted to
| | | | | access page 8:6 twice after it had been freed. On the first call,
| | | | | buf_page_get_gen(..., BUF_PEEK_IF_IN_POOL, ...) returned NULL. The
| | | | | second call succeeded, and shortly thereafter, the server
| | | | | intentionally crashed due to writing the corrupted page.
* | | | | when printing Item_in_optimizer, use precedence of wrapped Item  bb-10.5-pr1588  Sidney Cammeresi  2020-06-12  5  -10/+11
| | | | |
| | | | | When Item::print() is called with the QT_PARSABLE flag,
| | | | |   WHERE i NOT IN (SELECT ...)
| | | | | gets printed as
| | | | |   WHERE !i IN (SELECT ...)
| | | | | instead of
| | | | |   WHERE !(i in (SELECT ...))
| | | | | because Item_in_optimizer returns DEFAULT_PRECEDENCE. It should return
| | | | | the precedence of the inner operation.
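| | | | |
| | | | | One way to observe the printed form of the rewritten condition
| | | | | (hypothetical table; the exact output shape depends on the
| | | | | optimizer):
| | | | |
| | | | |   CREATE TABLE t1 (i INT);
| | | | |   EXPLAIN EXTENDED SELECT * FROM t1
| | | | |   WHERE i NOT IN (SELECT i FROM t1);
| | | | |   SHOW WARNINGS;  -- the Note contains the re-printed query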
* | | | | MDEV-22840: JSON_ARRAYAGG gives wrong results with NULL values and ORDER BY clause  Varun Gupta  2020-06-12  4  -5/+131
| | | | |
| | | | | The problem here is similar to the case with DISTINCT: the tree used
| | | | | for ORDER BY needs to also hold the null bytes of the record. This was
| | | | | not done for GROUP_CONCAT, as NULLs are rejected by GROUP_CONCAT.
| | | | |
| | | | | Also introduced a comparator function for the ORDER BY tree to handle
| | | | | null values with JSON_ARRAYAGG.
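| | | | |
| | | | | A sketch (hypothetical data):
| | | | |
| | | | |   CREATE TABLE t1 (a INT);
| | | | |   INSERT INTO t1 VALUES (2), (NULL), (1), (NULL);
| | | | |   -- NULLs must be kept and ordered, e.g. [null,null,1,2]
| | | | |   SELECT JSON_ARRAYAGG(a ORDER BY a) FROM t1;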
* | | | | MDEV-22011: DISTINCT with JSON_ARRAYAGG gives wrong results  Varun Gupta  2020-06-12  5  -10/+200
| | | | |
| | | | | For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
| | | | | that the Unique tree also holds the NULL bytes of a table record
| | | | | inside the node of the tree. This behaviour for JSON_ARRAYAGG is
| | | | | different from GROUP_CONCAT because in GROUP_CONCAT we just reject
| | | | | NULL values for columns.
| | | | |
| | | | | Also introduced a comparator function for the Unique tree to handle
| | | | | null values for DISTINCT inside JSON_ARRAYAGG.
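| | | | |
| | | | | A companion sketch for DISTINCT (hypothetical data):
| | | | |
| | | | |   CREATE TABLE t1 (a INT);
| | | | |   INSERT INTO t1 VALUES (1), (NULL), (1), (NULL);
| | | | |   -- NULL must appear exactly once, e.g. [null,1]
| | | | |   SELECT JSON_ARRAYAGG(DISTINCT a) FROM t1;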
* | | | | MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list  Varun Gupta  2020-06-12  4  -38/+65
| | | | |
| | | | | Backported from MySQL
| | | | | Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
| | | | |
| | | | | Issue:
| | | | | ------
| | | | | The problem occurs when:
| | | | | 1) GROUP_CONCAT (DISTINCT ....) is used in the query.
| | | | | 2) The data size is greater than the value of the system variable
| | | | |    tmp_table_size.
| | | | | The result would contain values that are non-unique.
| | | | |
| | | | | Root cause:
| | | | | -----------
| | | | | An in-memory structure is used to filter out non-unique values. When
| | | | | the data size exceeds tmp_table_size, the overflow is written to disk
| | | | | as a separate file. The expectation here is that when all such files
| | | | | are merged, the full set of unique values can be obtained.
| | | | |
| | | | | But the Item_func_group_concat::add function is in a bit of a hurry.
| | | | | Even as it is adding values to the tree, it wants to decide whether a
| | | | | value is unique and write it to the result buffer. This works fine if
| | | | | the configured maximum size is greater than the size of the data. But
| | | | | since tmp_table_size is set to a low value, the tree is smaller and
| | | | | hence requires the creation of multiple copies on disk.
| | | | | Item_func_group_concat currently has no mechanism to merge all the
| | | | | copies on disk and then generate the result. This results in duplicate
| | | | | values.
| | | | |
| | | | | Solution:
| | | | | ---------
| | | | | In case of the DISTINCT clause, don't write to the result buffer
| | | | | immediately. Do the merge and only then put the unique values in the
| | | | | result buffer. This has been done in Item_func_group_concat::val_str.
| | | | |
| | | | | Note regarding result file changes:
| | | | | -----------------------------------
| | | | | Earlier, when a unique value was seen in Item_func_group_concat::add,
| | | | | it was dumped to the output, so the result is in the order stored in
| | | | | the storage engine. But with this fix, we wait until all the data is
| | | | | read and the final set of unique values is written to the output
| | | | | buffer, so the data appears in sorted order.
| | | | |
| | | | | This only fixes the cases where we have DISTINCT without an ORDER BY
| | | | | clause in GROUP_CONCAT.
* | | | | MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache  Sergei Petrunia  2020-06-12  3  -3/+12
| | | | |
| | | | | Part#2: forgot to commit the adjustments for the testcases.
* | | | | MDEV-22867 Assertion instant.n_core_fields == n_core_fields failed  Marko Mäkelä  2020-06-12  6  -4/+101
| | | | |
| | | | | This is a race condition where a table on which a 10.3-style instant
| | | | | ADD COLUMN was executed is emptied during the execution of
| | | | | ALTER TABLE ... DROP COLUMN ..., DROP INDEX ..., ALGORITHM=NOCOPY.
| | | | |
| | | | | In commit 2c4844c9e76427525e8c39a2d72686085efe89c3 the function
| | | | | instant_metadata_lock() would prevent this race condition. But, it
| | | | | would also hold a page latch on the leftmost leaf page of the
| | | | | clustered index for the duration of a possible DROP INDEX operation.
| | | | |
| | | | | The race could be fixed by restoring the function
| | | | | instant_metadata_lock() that was removed in commit
| | | | | ea37b144094a0c2ebfc6774047fd473c1b2a8658, but it would be more
| | | | | future-proof to prevent the dict_index_t::clear_instant_add() call
| | | | | from being issued at all.
| | | | |
| | | | | We may at some point support DROP COLUMN ..., ADD INDEX ...,
| | | | | ALGORITHM=NOCOPY, and that would spend a non-trivial amount of
| | | | | execution time in ha_innobase::inplace_alter(), making a server hang
| | | | | possible. Currently this is not supported, and our added test case
| | | | | will notice when the support is introduced.
| | | | |
| | | | | dict_index_t::must_avoid_clear_instant_add(): Determine if a call to
| | | | | clear_instant_add() must be avoided.
| | | | |
| | | | | btr_discard_only_page_on_level(): Preserve the metadata record if
| | | | | must_avoid_clear_instant_add() holds.
| | | | |
| | | | | btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete(): Do not
| | | | | remove the metadata record even if the table becomes empty but
| | | | | must_avoid_clear_instant_add() holds.
| | | | |
| | | | | btr_pcur_store_position(): Relax a debug assertion.
| | | | |
| | | | | This is joint work with Thirunarayanan Balathandayuthapani.
* | | | | MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache  Sergei Petrunia  2020-06-12  10  -3/+195
| | | | |
| | | | | Apply this patch from Percona Server (amended for 10.5):
| | | | |
| | | | | commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
| | | | | Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
| | | | | Date:   Tue Nov 14 06:34:19 2017 +0200
| | | | |
| | | | |     Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from
| | | | |     flushing table definition cache)
| | | | |
| | | | |     Make ANALYZE TABLE stop flushing affected tables from the table
| | | | |     definition cache, which has the effect of not blocking any
| | | | |     subsequent new queries involving the table if there's a parallel
| | | | |     long-running query:
| | | | |
| | | | |     - new table flag HA_ONLINE_ANALYZE, return it for InnoDB and
| | | | |       TokuDB tables;
| | | | |     - in mysql_admin_table, if we are performing ANALYZE TABLE, and
| | | | |       the table flag is set, do not remove the table from the table
| | | | |       definition cache, do not invalidate the query cache;
| | | | |     - in the partitioning handler, refresh the query optimizer
| | | | |       statistics after ANALYZE if the underlying handler supports
| | | | |       HA_ONLINE_ANALYZE;
| | | | |     - new testcases main.percona_nonflushing_analyze_debug,
| | | | |       parts.percona_nonflushing_analyze_debug and a supporting debug
| | | | |       sync point.
| | | | |
| | | | |     For TokuDB, this change exposes bug TDB-83 (Index cardinality
| | | | |     stats updated for handler::info(HA_STATUS_CONST), not often
| | | | |     enough for tokudb_cardinality_scale_percent). TokuDB may return
| | | | |     different rec_per_key values depending on the dynamic variable
| | | | |     tokudb_cardinality_scale_percent value. The server does not have
| | | | |     a way of knowing that changing this variable invalidates the
| | | | |     previous rec_per_key values in any opened table shares, and so
| | | | |     does not call info(HA_STATUS_CONST) again. Fix by updating
| | | | |     rec_per_key for both HA_STATUS_CONST and HA_STATUS_VARIABLE.
| | | | |
| | | | |     This also forces a re-record of
| | | | |     tokudb.bugs.db756_card_part_hash_1_pick, with the new output
| | | | |     seeming to be more correct.
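| | | | |
| | | | | A sketch of the blocking scenario this avoids (hypothetical table,
| | | | | three sessions):
| | | | |
| | | | |   -- session 1: a long-running query keeps t1 open
| | | | |   SELECT COUNT(*), SLEEP(100) FROM t1;
| | | | |   -- session 2: previously flushed t1 from the table definition
| | | | |   -- cache; now it does not
| | | | |   ANALYZE TABLE t1;
| | | | |   -- session 3: previously stalled on "Waiting for table flush"
| | | | |   -- until session 1 finished; now it runs immediately
| | | | |   SELECT * FROM t1 LIMIT 1;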
* | | | | MDEV-8139: Fix the MSAN instrumentation  Thirunarayanan Balathandayuthapani  2020-06-12  2  -1/+3
| | | | |
* | | | | MDEV-22877 Avoid unnecessary buf_pool.page_hash S-latch acquisition  Marko Mäkelä  2020-06-12  11  -485/+292
| | | | |
| | | | | MDEV-15053 did not remove all unnecessary buf_pool.page_hash S-latch
| | | | | acquisition. There are code paths where we are holding buf_pool.mutex
| | | | | (which will sufficiently protect buf_pool.page_hash against changes)
| | | | | and unnecessarily acquire the latch. Many invocations of
| | | | | buf_page_hash_get_locked() can be replaced with the much simpler
| | | | | buf_pool.page_hash_get_low().
| | | | |
| | | | | In the worst case the thread that is holding buf_pool.mutex will
| | | | | become a victim of MDEV-22871, suffering from a spurious reader-reader
| | | | | conflict with another thread that genuinely needs to acquire a
| | | | | buf_pool.page_hash S-latch.
| | | | |
| | | | | In many places, we were also evaluating page_id_t::fold() while
| | | | | holding buf_pool.mutex. Low-level functions such as
| | | | | buf_pool.page_hash_get_low() must get the page_id_t::fold() as a
| | | | | parameter.
| | | | |
| | | | | buf_buddy_relocate(): Defer the hash_lock acquisition to the critical
| | | | | section that starts by calling buf_page_t::can_relocate().
* | | | | more mysql_create_view link/unlink woes  Sergei Golubchik  2020-06-12  1  -3/+3
| | | | |
* | | | | MDEV-22878 galera.wsrep_strict_ddl hangs in 10.5 after merge  Sergei Golubchik  2020-06-12  1  -1/+2
| | | | |
| | | | | If mysql_create_view is aborted when `view` isn't unlinked, it should
| | | | | not be linked back on cleanup.
* | | | | MDEV-21851: post-push to fix main.flush_read_lock.  Andrei Elkin  2020-06-12  2  -4/+4
| | | | |
* | | | | MDEV-16470: switch off user variables (and fixes of its support)  bb-10.5-MDEV-22550  Oleksandr Byelkin  2020-06-12  13  -73/+89
| | | | |
* | | | | MDEV-22834: Disks plugin - change datatype to bigint  bb-10.5-mdev1501  Vicențiu Ciorbaru  2020-06-12  2  -9/+10
| | | | |
| | | | | On large hard disks (> 2TB), the plugin won't function correctly,
| | | | | always showing 2 TB of available space due to integer overflow.
| | | | | Upgrade table fields to bigint to resolve this problem.
* | | | | MDEV-21851: Error in BINLOG_BASE64_EVENT is always error-logged as if it is done by Slave  Andrei Elkin  2020-06-12  9  -10/+16
| | | | |
| | | | | The prefix of the error log message produced by a failed BINLOG
| | | | | statement is corrected to be the SQL command name.
* | | | | MDEV-22602 Disable UPDATE CASCADE for SQL constraints  Aleksey Midenkov  2020-06-12  10  -24/+110
| | | | |
| | | | | A CHECK constraint is checked by check_expression(), which walks its
| | | | | items and gets into Item_field::check_vcol_func_processor() to check
| | | | | for conformity with the foreign key list.
| | | | |
| | | | | WITHOUT OVERLAPS is checked for the same conformity in
| | | | | mysql_prepare_create_table().
| | | | |
| | | | | Long uniques are already impossible with InnoDB foreign keys. See
| | | | | ER_CANT_CREATE_TABLE in the test case.
| | | | |
| | | | | 2 accompanying bugs fixed (test main.constraints failed):
| | | | |
| | | | | 1. check->name.str lived on the SP execute mem_root while the "check"
| | | | |    object itself lives on the SP main mem_root. On the second SP
| | | | |    execution check->name.str had garbage data. Fixed by allocating
| | | | |    from thd->stmt_arena->mem_root, which is the SP main mem_root.
| | | | |
| | | | | 2. CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed with VCOL_FIELD_REF.
| | | | |    VCOL_FIELD_REF is assigned in check_expression() and then detected
| | | | |    as CHECK_CONSTRAINT_IF_NOT_EXISTS in handle_if_exists_options().
| | | | |
| | | | | Existing cases for MDEV-16932 in main.constraints cover both fixes.
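| | | | |
| | | | | A sketch of the kind of combination now rejected, per the
| | | | | description above (hypothetical tables): a CHECK constraint on a
| | | | | column that a foreign key may rewrite via ON UPDATE CASCADE:
| | | | |
| | | | |   CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
| | | | |   CREATE TABLE child (
| | | | |     id INT,
| | | | |     CONSTRAINT c CHECK (id > 0),
| | | | |     FOREIGN KEY (id) REFERENCES parent (id) ON UPDATE CASCADE
| | | | |   ) ENGINE=InnoDB;  -- expected to fail with ER_CANT_CREATE_TABLE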
* | | | | MDEV-22119: main.innodb_ext_key fails sporadically  Varun Gupta  2020-06-12  2  -0/+8
| | | | |
| | | | | Made the test stable by adding more rows, so the range scan is cheaper
| | | | | than the table scan.
* | | | | MDEV-8139 Fix Scrubbing  Thirunarayanan Balathandayuthapani  2020-06-12  22  -719/+510
| | | | |
| | | | | fil_space_t::freed_ranges: Store ranges of freed page numbers.
| | | | | fil_space_t::last_freed_lsn: Store the most recent LSN of freeing a
| | | | | page.
| | | | | fil_space_t::freed_mutex: Protects freed_ranges, last_freed_lsn.
| | | | |
| | | | | fil_space_create(): Initialize the freed_range mutex.
| | | | | fil_space_free_low(): Frees the freed_range mutex.
| | | | |
| | | | | range_set: Ranges of page numbers.
| | | | |
| | | | | buf_page_create(): Removes the page from freed_ranges when the page is
| | | | | being reused.
| | | | |
| | | | | btr_free_root(): Remove the PAGE_INDEX_ID invalidation. Because
| | | | | btr_free_root() and dict_drop_index_tree() are executed in the same
| | | | | atomic mini-transaction, there is no need to invalidate the root page.
| | | | |
| | | | | buf_release_freed_page(): Split from buf_flush_freed_page(). Skip any
| | | | | I/O.
| | | | |
| | | | | buf_flush_freed_pages(): Get the freed ranges from the tablespace and
| | | | | write punch-hole or zeroes over the freed ranges.
| | | | |
| | | | | buf_flush_try_neighbors(): Handles the flushing of freed ranges.
| | | | |
| | | | | mtr_t::freed_pages: Variable to store the list of freed pages.
| | | | | mtr_t::add_freed_pages(): To add freed pages.
| | | | | mtr_t::clear_freed_pages(): To clear the freed pages.
| | | | | mtr_t::m_freed_in_system_tablespace: Variable to indicate whether a
| | | | | page has been freed in the system tablespace.
| | | | | mtr_t::m_trim_pages: Variable to indicate whether the space has been
| | | | | trimmed.
| | | | | mtr_t::commit(): Add the freed pages and update the last freed LSN in
| | | | | the tablespace, and clear the tablespace freed ranges if the space is
| | | | | trimmed.
| | | | |
| | | | | file_name_t::freed_pages: Store the freed pages during recovery.
| | | | | file_name_t::add_freed_page(), file_name_t::remove_freed_page(): To
| | | | | add and remove a freed page during recovery.
| | | | |
| | | | | store_freed_or_init_rec(): Store or remove the freed pages while
| | | | | encountering a FREE_PAGE or INIT_PAGE redo log record.
| | | | |
| | | | | recv_init_crash_recovery_spaces(): Add the freed pages encountered
| | | | | during recovery to the respective tablespace.
* | | | | post-fix for #1504  Sergei Golubchik  2020-06-12  18  -23/+30
| | | | |
* | | | | MDEV-22812 "failed to create symbolic link" during the build  Sergei Golubchik  2020-06-12  1  -0/+2
| | | | |
| | | | | As the cmake manual says:
| | | | |
| | | | |   If a sequential execution of multiple commands is required, use
| | | | |   multiple ``execute_process()`` calls with a single ``COMMAND``
| | | | |   argument.
* | | | | MDEV-21831: Assertion `length == pack_length()' failed in Field_inet6::sort_string upon INSERT into RocksDB table  Varun Gupta  2020-06-11  3  -1/+18
| | | | |
| | | | | For INET6 columns the values are stored as BINARY columns and returned
| | | | | to the client in TEXT format. For RocksDB, the indexes store
| | | | | mem-comparable images of columns, so use pack_length() to store the
| | | | | mem-comparable form for INET6 columns. This also remains consistent
| | | | | with CHAR columns.
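| | | | |
| | | | | A sketch of the statement that hit the assertion (hypothetical
| | | | | table):
| | | | |
| | | | |   CREATE TABLE t1 (a INET6, KEY (a)) ENGINE=RocksDB;
| | | | |   -- building the mem-comparable key image for the index asserted
| | | | |   INSERT INTO t1 VALUES ('::1'), ('2001:db8::1');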
* | | | | MDEV-22850 Reduce buf_pool.page_hash latch contention  Marko Mäkelä  2020-06-11  5  -25/+88
| | | | |
| | | | | For reads, the buf_pool.page_hash is protected by buf_pool.mutex or by
| | | | | the hash_lock. There is no need to compute or acquire hash_lock if we
| | | | | are not modifying the buf_pool.page_hash.
| | | | |
| | | | | However, the buf_pool.page_hash latch must be held exclusively when
| | | | | changing buf_page_t::in_file(), or if we desire to prevent
| | | | | buf_page_t::can_relocate() or buf_page_t::buf_fix_count() from
| | | | | changing.
| | | | |
| | | | | rw_lock_lock_word_decr(): Add a comment that explains the polling
| | | | | logic.
| | | | |
| | | | | buf_page_t::set_state(): When in_file() is to be changed, assert that
| | | | | an exclusive buf_pool.page_hash latch is being held. Unfortunately we
| | | | | cannot assert this for set_state(BUF_BLOCK_REMOVE_HASH) because
| | | | | set_corrupt_id() may already have been called.
| | | | |
| | | | | buf_LRU_free_page(): Check buf_page_t::can_relocate() before acquiring
| | | | | the hash_lock.
| | | | |
| | | | | buf_block_t::initialise(): Initialize also page.buf_fix_count().
| | | | |
| | | | | buf_page_create(): Initialize buf_fix_count while not holding any
| | | | | mutex or hash_lock. Acquire the hash_lock only for the duration of
| | | | | inserting the block to the buf_pool.page_hash.
| | | | |
| | | | | buf_LRU_old_init(), buf_LRU_add_block(),
| | | | | buf_page_t::belongs_to_unzip_LRU(): Do not assert
| | | | | buf_page_t::in_file(), because buf_page_create() will invoke
| | | | | buf_LRU_add_block() before acquiring hash_lock and
| | | | | buf_page_t::set_state().
| | | | |
| | | | | buf_pool_t::validate(): Rely on the buf_pool.mutex and do not
| | | | | unnecessarily acquire any buf_pool.page_hash latches.
| | | | |
| | | | | buf_page_init_for_read(): Clarify that we must acquire the hash_lock
| | | | | upfront in order to prevent a race with buf_pool_t::watch_remove().
* | | | | Add information_schema.spider_wrapper_protocols for knowing available wrappers of Spider  Kentoku SHIBA  2020-06-11  11  -12/+262