author    | Alexey Botchkov <holyfoot@askmonty.org> | 2022-07-16 16:54:03 +0400
committer | Alexey Botchkov <holyfoot@askmonty.org> | 2022-07-17 01:10:43 +0400
commit    | 8911823f65a6557ce66ea5f8aecd55b115a85606 (patch)
tree      | eeeeee73708c22dbf584ca1444be46321a670fa1 /sql/table.cc
parent    | 65cc89ed9eb9367a7e068286ddc9a896c478c012 (diff)
download  | mariadb-git-8911823f65a6557ce66ea5f8aecd55b115a85606.tar.gz
MDEV-26546 SIGSEGV's in spider_db_connect on SHOW TABLE and spider_db_mbase::connect (and SIGSEGV's in check_vcol_forward_refs and inline_mysql_mutex_lock)
Not a SPIDER issue - it happens with INSERT DELAYED.
Field::make_new_field doesn't copy the LONG_UNIQUE_HASH_FIELD
flag to the new field, although Delayed_insert::get_local_table
copies the field->vcol_info for this field. As a result
parse_vcol_defs doesn't create the expression for that column,
so field->vcol_info->expr is NULL, which leads to a crash.
Backported the fix for this from 10.5 - the flag is now set in
Delayed_insert::get_local_table.
Another problem with the USING HASH key is that
parse_vcol_defs modifies the table->keys content, and then the
same parse_vcol_defs is called on a table copy whose keys have
already been modified. Backported the fix for that from 10.5 -
key copying added to Delayed_insert::get_local_table.
Finally, the created copy has to clear the expr_arena, as
this table is not in the thd->open_tables list and so won't be
cleared automatically.
Diffstat (limited to 'sql/table.cc')
-rw-r--r-- | sql/table.cc | 49 |
1 file changed, 49 insertions, 0 deletions
```diff
diff --git a/sql/table.cc b/sql/table.cc
index 00b498fdd77..7957f2da593 100644
--- a/sql/table.cc
+++ b/sql/table.cc
@@ -3682,6 +3682,55 @@ static void print_long_unique_table(TABLE *table)
 }
 #endif

+bool copy_keys_from_share(TABLE *outparam, MEM_ROOT *root)
+{
+  TABLE_SHARE *share= outparam->s;
+  if (share->key_parts)
+  {
+    KEY *key_info, *key_info_end;
+    KEY_PART_INFO *key_part;
+
+    if (!multi_alloc_root(root, &key_info, share->keys*sizeof(KEY),
+                          &key_part, share->ext_key_parts*sizeof(KEY_PART_INFO),
+                          NullS))
+      return 1;
+
+    outparam->key_info= key_info;
+
+    memcpy(key_info, share->key_info, sizeof(*key_info)*share->keys);
+    memcpy(key_part, key_info->key_part, sizeof(*key_part)*share->ext_key_parts);
+
+    my_ptrdiff_t adjust_ptrs= PTR_BYTE_DIFF(key_part, key_info->key_part);
+    for (key_info_end= key_info + share->keys ;
+         key_info < key_info_end ;
+         key_info++)
+    {
+      key_info->table= outparam;
+      key_info->key_part= reinterpret_cast<KEY_PART_INFO*>
+        (reinterpret_cast<char*>(key_info->key_part) + adjust_ptrs);
+      if (key_info->algorithm == HA_KEY_ALG_LONG_HASH)
+        key_info->flags&= ~HA_NOSAME;
+    }
+    for (KEY_PART_INFO *key_part_end= key_part+share->ext_key_parts;
+         key_part < key_part_end;
+         key_part++)
+    {
+      Field *field= key_part->field= outparam->field[key_part->fieldnr - 1];
+      if (field->key_length() != key_part->length &&
+          !(field->flags & BLOB_FLAG))
+      {
+        /*
+          We are using only a prefix of the column as a key:
+          Create a new field for the key part that matches the index
+        */
+        field= key_part->field=field->make_new_field(root, outparam, 0);
+        field->field_length= key_part->length;
+      }
+    }
+  }
+  return 0;
+}
+
 /*
   Open a table based on a TABLE_SHARE
```