path: root/sql/handler.h
Commit message | Author | Age | Files | Lines
* MDEV-23805 Make Online DDL to Instant DDL when table is empty [bb-10.4-MDEV-23805] (Thirunarayanan Balathandayuthapani, 2021-11-12, 1 file, -0/+3)
    - In ha_innobase::prepare_inplace_alter_table(), InnoDB should check whether the table is empty. If the table is empty, the server should avoid downgrading the MDL after the prepare phase. This is more like an instant ALTER: it changes only the dictionary and metadata.
    - Changed a few debug test cases to run the DDL on a non-empty table.
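    A minimal sketch of the scenario this targets (table and column names are illustrative, not from the patch):
        CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
        -- t1 is empty, so prepare_inplace_alter_table() can detect this and
        -- complete the ALTER as a dictionary/metadata-only change without
        -- downgrading the MDL after the prepare phase
        ALTER TABLE t1 ADD INDEX idx_b (b), ALGORITHM=INPLACE;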
* Merge 10.3 into 10.4 (Marko Mäkelä, 2021-11-09, 1 file, -0/+1)
| * Merge mariadb-10.3.32 into 10.3 (Marko Mäkelä, 2021-11-09, 1 file, -0/+12)
| * | MDEV-25803 innodb.alter_candidate_key fix (Aleksey Midenkov, 2021-11-02, 1 file, -0/+1)
    There is a case when the implicit primary key may be changed when removing NOT NULL from part of a unique key. In that case we update modified_primary_key, which is then used to not skip key sorting. According to is_candidate_key() there are no other cases when the primary key may be changed implicitly.
* | | Merge branch '10.3' into 10.4 [mariadb-10.4.22] (Oleksandr Byelkin, 2021-11-05, 1 file, -0/+12)
| * | Merge branch '10.2' into 10.3 [mariadb-10.3.32] (Oleksandr Byelkin, 2021-11-05, 1 file, -0/+12)
| | * MDEV-26833 Missed statement rollback in case transaction drops or create temporary table [mariadb-10.2.41] (Andrei Elkin, 2021-11-05, 1 file, -0/+11)
    When a transaction creates or drops temporary tables and a later statement of it then fails, even that failed statement's cached ROW-format events for transactional tables got written to the binlog and were visible after the transaction's commit.
    Fixed with a proper analysis of whether the errored-out statement needs to be rolled back in the binlog. For instance, the mere fact that previous statements already cached a CREATE or DROP of temporary tables does not by itself require retaining the errored-out statement's events in the cache. Conversely, if the statement creates or drops a temporary table itself it cannot be rolled back; this rule remains.
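    A hedged sketch of the kind of transaction described above (table names and the exact failing statement are illustrative; the real test may use a different binlog format and error):
        BEGIN;
        CREATE TEMPORARY TABLE tmp1 (a INT);
        INSERT INTO t1 VALUES (1);   -- cached transactional events
        INSERT INTO t1 VALUES (1);   -- fails, e.g. with a duplicate key error
        COMMIT;
        -- before the fix, the failed statement's cached events could remain in
        -- the binlog cache and show up in the binlog after the commit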
* | | Merge 10.3 -> 10.4 (Sergei Petrunia, 2021-06-30, 1 file, -0/+2)
| * | Merge 10.2 -> 10.3 (Sergei Petrunia, 2021-06-30, 1 file, -0/+2)
| | * MDEV-25129 Add KEYWORDS view to the INFORMATION_SCHEMA (xing-zhi, jiang, 2021-06-29, 1 file, -0/+2)
    Add a KEYWORDS table and an SQL_FUNCTIONS table to INFORMATION_SCHEMA. This commit needs some minor changes when propagated upwards (e.g. func_array in item_create.cc has a termination element that doesn't exist in later versions of MariaDB).
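    Once merged, the new views can be queried like any other INFORMATION_SCHEMA table, for example:
        SELECT * FROM INFORMATION_SCHEMA.KEYWORDS LIMIT 10;
        SELECT COUNT(*) FROM INFORMATION_SCHEMA.SQL_FUNCTIONS;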
* | | Merge 10.3 into 10.4 (Marko Mäkelä, 2021-06-01, 1 file, -2/+0)
| * | Merge 10.2 into 10.3 (Marko Mäkelä, 2021-06-01, 1 file, -2/+0)
| | * Cleanup: Remove handler::update_table_comment() (Marko Mäkelä, 2021-05-27, 1 file, -3/+1)
    The only call of the virtual member function handler::update_table_comment() was removed in commit 82d28fada7dc928564aefac802400c6684c11917 (MySQL 5.5.53), but the implementation was not removed. The only non-trivial implementation was for InnoDB. The information is now returned via handler::get_foreign_key_create_info() and ha_statistics::delete_length.
* | | Merge 10.3 into 10.4 (Marko Mäkelä, 2021-05-18, 1 file, -1/+11)
| * | Merge 10.2 into 10.3, except MDEV-25682 (Marko Mäkelä, 2021-05-18, 1 file, -1/+11)
| | * MDEV-17515: GTID Replication in optimistic mode deadlock (Sujatha, 2021-05-17, 1 file, -0/+10)
    Problem: In a slave_parallel_mode=optimistic configuration, when admin commands and DML operations on the same table are scheduled for execution simultaneously, the result is a lock conflict and the slave server either hangs due to a deadlock or goes down with an assert.
    Analysis: The admin commands OPTIMIZE, REPAIR and ANALYZE are written to the binary log as ordinary transactions. When 'slave_parallel_mode' is 'optimistic', DMLs are allowed to run in parallel, but these locks are not detected by the parallel-replication deadlock detection-and-handling mechanism, so at times they result in a deadlock or an assertion.
    Fix: Flag admin commands as DDL in Gtid_log_event at the time of writing to the binary log. Add a new bit EXECUTED_TABLE_ADMIN_CMD to 'm_unsafe_rollback_flags'. 'mysql_admin_table' accepts a list of tables to be processed and executes them in a loop; upon successful execution it enables the 'EXECUTED_TABLE_ADMIN_CMD' bit in thd->transaction.stmt_unsafe_rollback_flags. The Gtid_log_event constructor notices this flag and marks the current transaction with the 'FL_DDL' flag. Gtid_log_events marked as FL_DDL will not be scheduled for parallel execution on the slave; they execute in isolation to prevent deadlocks.
    Note: Removed the call to 'trans_commit_implicit' from the 'mysql_admin_table' function, as 'mysql_execute_command' takes care of invoking 'trans_commit_implicit'.
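    A sketch of the replication setup the analysis assumes (table name and thread count are illustrative):
        -- on the slave
        STOP SLAVE;
        SET GLOBAL slave_parallel_threads = 4;
        SET GLOBAL slave_parallel_mode = 'optimistic';
        START SLAVE;
        -- on the master: an admin command and DML on the same table are
        -- binlogged close together, e.g.
        OPTIMIZE TABLE t1;
        UPDATE t1 SET a = a + 1;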
| * | MDEV-24758 heap-use-after-poison in innobase_add_instant_try/rec_copy (Marko Mäkelä, 2021-04-26, 1 file, -0/+3)
    This is a backport of commit fd9ca2a742abe2e91b2b77e70915dec7bd3cd7e1 (MDEV-23295) and commit 9a156e1a23046ba3e37bdb1e4e1ad887d3f5829b (MDEV-23345) to 10.3.
    An instant ADD/DROP/reorder column could create a dummy table object with the wrong ROW_FORMAT when innodb_default_row_format was changed between CREATE TABLE and ALTER TABLE.
    prepare_inplace_alter_table_dict(): If we had promised that ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
    The rest of the changes are related to adding Alter_inplace_info::inplace_supported to cache the return value of handler::check_if_supported_inplace_alter().
* | | MDEV-22775 [HY000][1553] Changing name of primary key column with foreign key constraint fails (Alexander Barkov, 2021-04-07, 1 file, -2/+2)
    Problem: The problem happened because of a conceptual flaw in the server code:
    a. The table level CHARSET/COLLATE clause affected all data types, including numeric and temporal ones:
       CREATE TABLE t1 (a INT) CHARACTER SET utf8 [COLLATE utf8_general_ci];
       In the above example, the Column_definition_attributes (and then the FRM record) for the column "a" erroneously inherited "utf8" as its character set.
    b. The "ALTER TABLE t1 CONVERT TO CHARACTER SET csname" statement also erroneously affected Column_definition_attributes::charset for numeric and temporal data types and wrote "csname" as their character set into FRM files.
    So now we have arbitrary non-relevant charset ID values for numeric and temporal data types in all FRM files in the world :) The code in the server and the other engines did not seem to be affected by this flaw. Only InnoDB inplace ALTER was affected.
    Solution: Fix the code so that only character string data types (CHAR, VARCHAR, TEXT, ENUM, SET):
    - inherit the table level CHARSET/COLLATE clause
    - get the charset value according to "CONVERT TO CHARACTER SET csname".
    Numeric and temporal data types now always get &my_charset_numeric in Column_definition_attributes::charset and always write its ID into FRM files, no matter what the table level CHARSET/COLLATE clause is and no matter what "CONVERT TO CHARACTER SET" says.
    Details:
    1. Adding helper classes to pass small parts of HA_CREATE_INFO into Type_handler methods:
       - Column_derived_attributes - to pass the table level CHARSET/COLLATE, so columns that do not have explicit CHARSET/COLLATE clauses can derive them from the table level, e.g. CREATE TABLE t1 (a VARCHAR(1), b CHAR(1)) CHARACTER SET utf8;
       - Column_bulk_alter_attributes - to pass bulk attribute changes generated by the ALTER related code. These bulk changes affect multiple columns at the same time: ALTER TABLE ... CONVERT TO CHARACTER SET csname;
       Note, passing the whole HA_CREATE_INFO directly to Type_handler would not be good: HA_CREATE_INFO is huge and would introduce undesired dependencies in sql_type.h and sql_type.cc. The Type_handler API should use the smallest possible data types.
    2. Type_handler::Column_definition_prepare_stage1() is now responsible for setting Column_definition::charset properly, according to the data type, for example:
       - For string data types, Column_definition_attributes::charset is set from the table level CHARSET/COLLATE clause (if not specified explicitly in the column definition).
       - For numeric and temporal fields, Column_definition_attributes::charset is set to &my_charset_numeric, no matter what the table level CHARSET/COLLATE says.
       - For GEOMETRY, Column_definition_attributes::charset is set to &my_charset_bin, no matter what the table level CHARSET/COLLATE says.
       Previously this code (setting `charset`) was outside of Column_definition_prepare_stage1(), namely in mysql_prepare_create_table(), and was erroneously called for all data types.
    3. Adding Type_handler::Column_definition_bulk_alter(), to handle "ALTER TABLE .. CONVERT TO". Previously this code was inside get_sql_field_charset() and was erroneously called for all data types.
    4. Removing the Schema_specification_st parameter from Type_handler::Column_definition_redefine_stage1(). Column_definition_attributes::charset is now fully and properly initialized by Column_definition_prepare_stage1(), so we don't need access to the table level CHARSET/COLLATE clause in Column_definition_redefine_stage1() any more.
    5. Other changes:
       - Removing the global function get_sql_field_charset().
       - Moving the part of the former get_sql_field_charset() that was responsible for inheriting the table level CHARSET/COLLATE clause to the new methods Column_definition_attributes::explicit_or_derived_charset() and Column_definition::prepare_charset_for_string(). This code is only needed for string data types; previously it was erroneously called for all data types.
       - Moving another part, which was responsible for applying the "CONVERT TO" clause, to Type_handler_general_purpose_string::Column_definition_bulk_alter().
       - Replacing the call to get_sql_field_charset() in sql_partition.cc with sql_field->explicit_or_derived_charset() - it is perfectly enough. The old code was redundant: get_sql_field_charset() was called from sql_partition.cc only when there was no "CONVERT TO CHARACTER SET" clause involved, so its purpose was only to inherit the table level CHARSET/COLLATE clause.
       - Moving the code handling the BINCMP_FLAG flag from mysql_prepare_create_table() to Column_definition::prepare_charset_for_string(). This code resolves the BINARY comparison style into the corresponding _bin collation, to do the following transparent rewrite: CREATE TABLE t1 (a VARCHAR(10) BINARY) CHARSET utf8; -> CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_bin); This code is only needed for string data types; previously it was erroneously called for all data types.
    6. Renaming Table_scope_and_contents_source_pod_st::table_charset to alter_table_convert_to_charset, because the only purpose it's used for is handling "ALTER .. CONVERT". The new name is much more self-descriptive.
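    A short illustration of the resulting rule (table name is illustrative): only character string columns are affected by the table-level charset and by CONVERT TO:
        CREATE TABLE t1 (a INT, b VARCHAR(10)) CHARACTER SET utf8;
        -- b derives utf8; a gets &my_charset_numeric written to the FRM,
        -- regardless of the table-level clause
        ALTER TABLE t1 CONVERT TO CHARACTER SET latin1;
        -- only b is converted; the numeric column a is left untouched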
* | | Merge 10.3 into 10.4 (Marko Mäkelä, 2021-03-05, 1 file, -1/+7)
| * | Merge 10.2 into 10.3 (Marko Mäkelä, 2021-03-03, 1 file, -1/+7)
| | * MDEV-24532 Table corruption ER_NO_SUCH_TABLE_IN_ENGINE .. on table with foreign key (Monty, 2021-03-02, 1 file, -0/+6)
    When doing a TRUNCATE on an InnoDB table under LOCK TABLES, InnoDB would rename the old table to #sql-... and recreate a new 't1' table. The table lock would still be on the #sql table. When doing ALTER TABLE, InnoDB would do the changes on the #sql table (which would disappear on close). When the SQL layer, as part of inplace ALTER TABLE, would close the original t1 table (#sql in InnoDB) and then reopen the t1 table, InnoDB would notice that this does not match its own (old) t1 table and generate an error.
    Fixed by adding code to TRUNCATE TABLE so that if we are under LOCK TABLES and truncating an InnoDB table, we close, reopen and lock the table after the truncate. This removes the #sql table and ensures that LOCK TABLES is using the new empty table.
    Reviewer: Marko Mäkelä
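    The failing pattern, roughly (t1 is an illustrative InnoDB table, not the test from the patch):
        LOCK TABLES t1 WRITE;
        TRUNCATE TABLE t1;               -- old table renamed to #sql-..., new empty t1 created
        ALTER TABLE t1 ADD COLUMN c INT; -- before the fix this could fail with ER_NO_SUCH_TABLE_IN_ENGINE
        UNLOCK TABLES;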
* | | Merge branch 'bb-10.3-release' into bb-10.4-release (Sergei Golubchik, 2021-02-12, 1 file, -1/+1)
    Note: the fix for "MDEV-23328 Server hang due to Galera lock conflict resolution" was null-merged. The 10.4 version of the fix is coming up separately.
| * | Merge branch '10.2' into 10.3 (Sergei Golubchik, 2021-02-01, 1 file, -1/+1)
| | * cleanup: void hton::abort_transaction() (Sergei Golubchik, 2021-01-24, 1 file, -1/+1)
    ... and void wsrep_innobase_kill_one_trx(), as their return values are never used. Also remove a redundant cast and checks that are always true.
* | | MDEV-24522 Assertion `inited==NONE' fails upon UPDATE on versioned table with unique blob (Aleksey Midenkov, 2021-01-26, 1 file, -1/+1)
    Cause: no table->update_handler is cloned at the moment of vers_insert_history_row(). update_handler is needed because there can't be several inited indexes at once in the same handler. The first index is inited by QUICK_RANGE_SELECT::reset(). Then, when the history row is inserted, check_duplicate_long_entry_key() is done and it requires another index.
* | | Merge 10.3 into 10.4 (Marko Mäkelä, 2020-12-01, 1 file, -15/+1)
| * | MDEV-21842 auto_increment does not increment with compound primary key on partitioned table (Alexey Botchkov, 2020-11-23, 1 file, -15/+1)
    The idea of this fix is that it's enough to prevent next_auto_inc_val from incrementing if an error occurs, to fix this problem and also MDEV-17333. So this patch basically reverts the existing fix for MDEV-17333.
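    A sketch of the table shape involved (names, engine and partitioning are illustrative, not the regression test):
        CREATE TABLE t1 (
          a INT NOT NULL AUTO_INCREMENT,
          b INT NOT NULL,
          PRIMARY KEY (a, b)
        ) ENGINE=InnoDB
          PARTITION BY HASH (b) PARTITIONS 4;
        -- the fix keeps next_auto_inc_val from being bumped when an
        -- INSERT into such a partitioned table fails with an error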
* | | Merge 10.3 into 10.4 (Marko Mäkelä, 2020-11-03, 1 file, -2/+4)
| * | Merge 10.2 into 10.3 (Marko Mäkelä, 2020-11-02, 1 file, -2/+4)
| | * MDEV-22387: Do not violate __attribute__((nonnull)) (Marko Mäkelä, 2020-11-02, 1 file, -2/+4)
    This follows up commit 94a520ddbe39ae97de1135d98699cf2674e6b77e and commit 7c5519c12d46ead947d341cbdcbb6fbbe4d4fe1b. After these changes, the default test suites on a cmake -DWITH_UBSAN=ON build no longer fail due to passing null pointers as parameters that are declared to never be null, but plenty of other runtime errors remain.
* | | MDEV-23586 Mariabackup: GTID saved for replication in 10.4.14 is wrong (Monty, 2020-09-25, 1 file, -1/+1)
    MDEV-21953 deadlock between BACKUP STAGE BLOCK_COMMIT and parallel replication
    Fixed by partly reverting MDEV-21953 to put back MDL_BACKUP_COMMIT locking before log_and_order.
    The original problem for MDEV-21953 was that while a thread was waiting for other threads to commit in 'log_and_order', it held the MDL_BACKUP_COMMIT lock. The backup thread was waiting to get the MDL_BACKUP_WAIT_COMMIT lock, which blocks all new MDL_BACKUP_COMMIT locks. This causes a deadlock, as the waited-for thread can never get past the MDL_BACKUP_COMMIT lock in ha_commit_trans.
    The main part of the bug fix is to release the MDL_BACKUP_COMMIT lock while a thread is waiting for other 'previous' threads to commit. This ensures that no transactional thread keeps MDL_BACKUP_COMMIT while waiting, which ensures that there are no deadlocks anymore.
* | | Merge branch '10.3' into 10.4 (Oleksandr Byelkin, 2020-08-03, 1 file, -8/+8)
| * | Merge branch '10.2' into 10.3 (Oleksandr Byelkin, 2020-08-03, 1 file, -8/+8)
| | * Merge branch '10.1' into 10.2 (Oleksandr Byelkin, 2020-08-02, 1 file, -11/+11)
| | | * Code comment spellfixes (Ian Gilfillan, 2020-07-22, 1 file, -11/+11)
* | | | MDEV-23295 ROW_FORMAT mismatch in instant ALTER TABLE (Marko Mäkelä, 2020-07-27, 1 file, -0/+3)
    An instant ADD/DROP/reorder column could create a dummy table object with the wrong ROW_FORMAT when innodb_default_row_format was changed between CREATE TABLE and ALTER TABLE.
    prepare_inplace_alter_table_dict(): If we had promised that ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
    dict_table_t::prepare_instant(): Add debug assertions to catch ROW_FORMAT mismatch.
    The rest of the changes are related to adding Alter_inplace_info::inplace_supported to cache the return value of handler::check_if_supported_inplace_alter().
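    The triggering pattern, roughly (table and column names are illustrative):
        SET GLOBAL innodb_default_row_format = 'redundant';
        CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
        SET GLOBAL innodb_default_row_format = 'dynamic';
        -- instant column add; before the fix the dummy table object could be
        -- built with the new default ROW_FORMAT instead of the table's own
        ALTER TABLE t1 ADD COLUMN c INT, ALGORITHM=INSTANT;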
* | | | MDEV-21953 deadlock between BACKUP STAGE BLOCK_COMMIT and parallel repl. (Monty, 2020-07-21, 1 file, -1/+7)
    The issue was: T1, a parallel slave worker thread, is waiting for another worker thread to commit. While waiting, it holds the MDL_BACKUP_COMMIT lock. T2, working for mariabackup, is doing BACKUP STAGE BLOCK_COMMIT and blocks all commits. This causes a deadlock, as the thread T1 is waiting for can't commit.
    Fixed by moving the locking of MDL_BACKUP_COMMIT from ha_commit_trans() to commit_one_phase_2().
    Other things:
    - Added a new argument to ha_commit_one_phase() to signal if the transaction was a write transaction.
    - Ensured that ha_maria::implicit_commit() is always called under MDL_BACKUP_COMMIT. This code is not needed in 10.5.
    - Ensure that the MDL_Request values 'type' and 'ticket' are always initialized. This makes it easier to check the state of the MDL_Request.
    - Moved thd->store_globals() earlier in handle_rpl_parallel_thread(), as thd->init_for_queries() could use an MDL that could crash if store_globals were not called.
    - Don't call ha_enable_transactions() in THD::init_for_queries() as this is both slow (uses MDL locks) and not needed.
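    For context, the backup side of this deadlock is the BACKUP STAGE interface that mariabackup drives; as plain SQL the sequence looks like:
        BACKUP STAGE START;
        BACKUP STAGE FLUSH;
        BACKUP STAGE BLOCK_DDL;
        BACKUP STAGE BLOCK_COMMIT;  -- takes the backup lock that blocks new commits
        BACKUP STAGE END;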
* | | | Merge remote-tracking branch 'origin/10.3' into 10.4 (Monty, 2020-07-03, 1 file, -1/+2)
| * | | Don't copy uninitialized bytes when copying varstrings (Monty, 2020-07-02, 1 file, -1/+2)
    When using field_conv(), which is called for a field1=field2 copy in fill_records(), the full varstring was copied, including uninitialized bytes. This caused valgrind to complain about the use of uninitialized bytes when using Aria static length records. Fixed by not using memcpy when copying varstrings and instead copying only the real bytes.
* | | | Merge 10.3 into 10.4 (Marko Mäkelä, 2020-05-30, 1 file, -7/+1)
| * | | Merge 10.2 into 10.3 (Marko Mäkelä, 2020-05-27, 1 file, -7/+1)
| | * | Fixed crash in aria recovery when using bulk insert (Monty, 2020-05-26, 1 file, -7/+1)
    MDEV-20578 Got error 126 when executing undo undo_key_delete upon Aria crash recovery
    The crash happens in this scenario:
    - Table with unique keys and non-unique keys
    - Batch insert (LOAD DATA or INSERT ... SELECT) with REPLACE
    - Some inserts succeed, followed by a duplicate key error
    In the above scenario the table gets corrupted. The bug was that we don't generate any undo entry for the failed insert, as the whole insert can be ignored by undo. The code did however not take into account that when bulk insert is used, we would write cached keys to the file on failure, and undo would wrongly ignore these. Fixed by moving the writing of the cached keys to after we write the aborted-insert event to the log.
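    The shape of the failing scenario, as a hedged sketch (table definition and file path are illustrative, not the test from the patch):
        CREATE TABLE t1 (a INT, b INT, UNIQUE KEY (a), KEY (b)) ENGINE=Aria;
        -- a batch insert with REPLACE where some rows go in and a duplicate key
        -- error then aborts the statement; the cached bulk-insert keys were
        -- flushed but not covered by an undo entry before this fix
        LOAD DATA INFILE '/tmp/data.txt' REPLACE INTO TABLE t1;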
* | | | MDEV-21794: Optimizer flag rowid_filter leads to long query (Sergei Petrunia, 2020-05-07, 1 file, -3/+4)
    The Rowid Filter check is just like the Index Condition Pushdown check: before we check the filter, we must check if we have walked out of the range we are scanning. (If we did, we should return and not continue the scan.)
    Consequences of this:
    - Rowid filtering doesn't work for keys that have partially-covered blob columns (just like Index Condition Pushdown)
    - The rowid filter function has three return values: CHECK_POS (passed), CHECK_NEG (filtered out), CHECK_OUT_OF_RANGE
    All of the above is implemented in this patch.
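    The flag named in the title is an ordinary optimizer_switch flag, so plans with and without rowid filtering can be compared like this (query and table are illustrative):
        SET optimizer_switch = 'rowid_filter=off';
        EXPLAIN SELECT * FROM t1 WHERE key1 BETWEEN 1 AND 100 AND key2 < 10;
        SET optimizer_switch = 'rowid_filter=on';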
* | | | Merge 10.3 into 10.4 (Marko Mäkelä, 2020-04-29, 1 file, -1/+4)
| * | | MDEV-19611 INPLACE ALTER does not fail on bad implicit default value (Thirunarayanan Balathandayuthapani, 2020-04-28, 1 file, -2/+6)
    - Inplace ALTER shouldn't set a default date column to '0000-00-00' when the table is not empty. So mysql_inplace_alter_table() copies alter_ctx.error_if_not_empty to a new field of Alter_inplace_info. ha_innobase::check_if_supported_inplace_alter() should check the error_if_not_empty flag and return INPLACE_NOT_SUPPORTED if the table is not empty.
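    The case being rejected, roughly (illustrative table, strict SQL mode assumed):
        CREATE TABLE t1 (a INT) ENGINE=InnoDB;
        INSERT INTO t1 VALUES (1);
        -- the existing row would need the implicit default '0000-00-00',
        -- so inplace ALTER now returns INPLACE_NOT_SUPPORTED instead of
        -- silently writing the bad default
        ALTER TABLE t1 ADD COLUMN d DATE NOT NULL, ALGORITHM=INPLACE;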
* | | | Merge 10.3 into 10.4 (Marko Mäkelä, 2020-04-16, 1 file, -1/+12)
    In main.index_merge_myisam we remove the test that was added in commit a2d24def8cc42d27c72d833abfb39ef24a2b96ba because it duplicates the test case that was added in commit 5af12e463549e4bbc2ce6ab720d78937d5e5db4e.
| * | | Merge 10.2 into 10.3 (Marko Mäkelä, 2020-04-15, 1 file, -1/+12)
| | * | MDEV-21168: Active XA transactions stop slave from working after backup was restored (Vlad Lesin, 2020-04-07, 1 file, -0/+12)
    Optionally roll back prepared XA transactions on "mariabackup --prepare". The fix MUST NOT be ported to 10.5+, as the MDEV-742 fix solves the issue for slaves.
* | | | Removed double records_in_range calls from multi_range_read_info_const (Monty, 2020-03-17, 1 file, -1/+1)
    This was to remove a performance regression between 10.3 and 10.4. In 10.5 we will have a better implementation of records_in_range that will enable us to get more statistics. This change was not done in 10.4 because it will be part of a larger 10.5 change that is not suitable for the GA 10.4 version.
    Other things:
    - Changed the default handler block_size to 8192 to fix statistics for engines that don't set the block size.
    - Fixed a bug in Spider when using multiple-part const ranges (patch from Kentoku).
* | | | cleanup: key parts comparison (Eugene Kosov, 2020-02-18, 1 file, -0/+14)
    Engine-specific code moved to the engine.