path: root/storage/spider
Commit message  [Author, Date, Files changed, Lines -/+]
* MDEV-19866 follow-up (branch bb-10.3-mdev-26333)  [Nayuta Yanagisawa, 2021-10-18, 1 file, -4/+4]
    Cherry-picking the fix for MDEV-19866 changes the behavior of Spider slightly,
    so I modified an existing test to match the new behavior.
* MDEV-19866 With a Spider table, a SELECT with WHERE involving primary key breaks following SELECTs (#1356)  [Kentoku SHIBA, 2021-10-18, 6 files, -30/+310]
    Change the check of which partitions are scanned to use part_info->read_partitions
    instead of part_spec.
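    [Illustration] A sketch of the reported symptom, assuming a hypothetical partitioned Spider table (names, partitioning, and connection details are placeholders, not taken from the actual MDEV-19866 test):
        CREATE TABLE t1 (
          id INT PRIMARY KEY,
          val INT
        ) ENGINE=SPIDER COMMENT='table "t1"'   /* remote connection parameters omitted */
        PARTITION BY RANGE (id) (
          PARTITION p0 VALUES LESS THAN (100),
          PARTITION p1 VALUES LESS THAN MAXVALUE
        );

        SELECT val FROM t1 WHERE id = 150;  -- prunes the scan to partition p1
        SELECT val FROM t1;                 -- before the fix, this following SELECT could return wrong results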
* MDEV-26545 Spider does not correctly handle UDF and stored function in where conds  [Daniel Ye, 2021-09-22, 5 files, -1/+414]
    - Handle stored function conditions correctly, with the same logic as for UDFs.
    - When running queries on the Spider SE, by default we do not push down WHERE
      conditions containing UDFs or stored functions to the remote data nodes,
      unless the user asks for it by setting spider_use_pushdown_udf.
    - Disable direct update/delete when a UDF condition is skipped.
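    [Illustration] A hedged sketch of the pushdown behavior described above; f() stands for any UDF or stored function and t1 is a placeholder Spider table:
        SELECT * FROM t1 WHERE f(val) > 0;   -- by default the f(val) > 0 condition is evaluated locally, not pushed down

        SET spider_use_pushdown_udf = 1;     -- explicitly ask Spider to push such conditions to the data nodes
        SELECT * FROM t1 WHERE f(val) > 0;   -- now the condition may be sent to the remote server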
* MDEV-24523 Execution of JSON_REPLACE failed on Spider  [Yongxin Xu, 2021-08-05, 5 files, -0/+167]
    The JSON_REPLACE() function failed with an error on the Spider SE. This patch fixes
    the problem, and it also fixes MDEV-24541. The problem is that
    Item_func_json_insert::func_name() returns the wrong function name "json_update".
    The Spider SE reconstructs a query based on the return value in some cases, so if
    the return value is wrong, the Spider SE may generate a wrong query.
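    [Illustration] Queries of the kind that failed (schema is a placeholder): since Item_func_json_insert::func_name() returned "json_update", Spider could send a reconstructed query containing a non-existent function to the remote server.
        SELECT JSON_REPLACE(jdoc, '$.age', 21) FROM t1;
        UPDATE t1 SET jdoc = JSON_REPLACE(jdoc, '$.age', 21) WHERE id = 1;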
* MDEV-24020: Trim with remove_str Fails on Spider  [Yongxin Xu, 2021-07-29, 7 files, -9/+366]
    This patch fixes the bug that TRIM(BOTH ... FROM $str), TRIM(LEADING ... FROM $str),
    and TRIM(TRAILING ... FROM $str) failed with errors when executed on Spider.
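    [Illustration] The three TRIM forms listed above, as they would appear in queries on a Spider table (t1 and name are placeholders):
        SELECT TRIM(BOTH 'x' FROM name)     FROM t1;
        SELECT TRIM(LEADING 'x' FROM name)  FROM t1;
        SELECT TRIM(TRAILING 'x' FROM name) FROM t1;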
* MDEV-24517 follow-up: Fix for test with --ps-protocol  [Nayuta Yanagisawa, 2021-07-25, 1 file, -0/+2]
    Tests for the Spider storage engine often use the following idiom:
        --let $command=CREATE TABLE t1 (...);CREATE TABLE t2 (...); ...
        --eval $command
    The idiom works in the normal protocol but fails in the prepared statement (ps)
    protocol. When testing CREATE TABLE statements in the ps protocol, we therefore
    wrap the idiom in --disable_ps_protocol and --enable_ps_protocol.
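    [Illustration] A minimal sketch of that wrapping in mysqltest syntax (the CREATE TABLE statements are placeholders, not the actual test content):
        --disable_ps_protocol
        --let $command=CREATE TABLE t1 (a INT) ENGINE=SPIDER;CREATE TABLE t2 (b INT) ENGINE=SPIDER
        --eval $command
        --enable_ps_protocol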
* MDEV-24517: JSON_EXTRACT as conditions triggers syntax error on Spider (#1839)  [Yongxin Xu, 2021-07-23, 6 files, -0/+230]
    `item_func::JSON_EXTRACT_FUNC` was not handled in previous versions of the Spider
    storage engine, which made queries like
    `SELECT * FROM t1 WHERE json_extract(jdoc, '$.Age')=20` fail with a syntax error.
    This patch adds specific handling for JSON_EXTRACT to the Spider storage engine
    and fixes that bug.
* Merge branch '10.2' into 10.3  [Sergei Golubchik, 2021-07-21, 4 files, -9/+107]
| * MDEV-25985 Spider handle ">=" as ">" in some cases  [Nayuta Yanagisawa, 2021-07-14, 4 files, -9/+107]
    The function spider_db_append_key_where_internal() converts HA_READ_AFTER_KEY to '>'.
    The conversion seems correct for single-column indexes, because HA_READ_AFTER_KEY
    means "read the key after the provided value."
    However, what about multi-column indexes? Assume there is a multi-column index on
    c1 and c2 and we search with the condition 'c1 >= 100 AND c2 > 200'. The
    key_range.flag corresponding to the search condition could be HA_READ_AFTER_KEY.
    In such a case, we cannot simply convert HA_READ_AFTER_KEY to '>'. The correct
    conversion is to convert HA_READ_AFTER_KEY to '>' only for the last column in
    key_part_map and to '>=' for the other columns. A similar discussion also applies
    to the conversion from key_range.flag to a sign of inequality.
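    [Illustration] The example from the explanation above, spelled out (table, index, and connection details are placeholders): before the fix, Spider could send 'c1 > 100 AND c2 > 200' to the data node instead of the correct condition.
        CREATE TABLE t1 (
          c1 INT,
          c2 INT,
          KEY k (c1, c2)
        ) ENGINE=SPIDER COMMENT='table "t1"';  /* remote connection parameters omitted */

        -- HA_READ_AFTER_KEY must map to '>' only for the last used key part (c2)
        -- and to '>=' for the preceding key parts (c1):
        SELECT * FROM t1 WHERE c1 >= 100 AND c2 > 200;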
| * MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed  [Nikita Malyavin, 2021-01-08, 3 files, -39/+38]
    The assertion failed in handler::ha_reset upon SELECT under READ UNCOMMITTED from
    a table with an index on a virtual column. This was a debug-only failure, though
    the problem is much wider:
    * MY_BITMAP is a structure containing my_bitmap_map, the latter is a raw bitmap.
    * read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP.
    * The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE.
    * The pointers to the stored MY_BITMAPs, like orig_read_set etc, and sometimes
      all_set and tmp_set, are assigned to those pointers.
    * Sometimes tmp_use_all_columns is used to substitute the raw bitmap directly with
      all_set.bitmap.
    * Sometimes even the bitmaps are modified directly, like in
      TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.
    The last three bullets in the list, when used together (which is mostly always),
    make the program flow cumbersome and impossible to follow, notwithstanding the
    errors they cause, like this MDEV-17556, where the tmp_set pointer was assigned to
    read_set, write_set and vcol_set, then its bitmap was substituted with
    all_set.bitmap by a dbug_tmp_use_all_columns() call, and then
    bitmap_clear_all(&tmp_set) was applied to all this.
    To untangle this knot, the following rule should be applied:
    * Never substitute bitmaps! This patch is about this. orig_*, all_set bitmaps are
      never substituted already.
    This patch changes the following function prototypes:
    * tmp_use_all_columns, dbug_tmp_use_all_columns: accept MY_BITMAP** and return
      MY_BITMAP* instead of my_bitmap_map*
    * tmp_restore_column_map, dbug_tmp_restore_column_maps: accept MY_BITMAP* instead
      of my_bitmap_map*
    These functions will now substitute read_set/write_set/vcol_set directly and won't
    touch the underlying bitmaps.
| * MDEV-7098 spider/bg.spider_fixes failed in buildbot with safe_mutex: Trying to unlock mutex conn->mta_conn_mutex that wasn't locked at storage/spider/spd_db_conn.cc, line 671  [Kentoku SHIBA, 2020-09-07, 7 files, -539/+1534]
* | MDEV-24760 SELECT..CASE statement syntax error at Spider Engine table  [Nayuta Yanagisawa, 2021-07-19, 4 files, -79/+176]
    The root cause of the bug is in `spider_db_mbase_util::open_item_func()`. The
    function handles an instance of the `Item_func` class based on its
    `Item_func::Functype`. The `Functype` of `CASE WHEN ... THEN` is
    `CASE_SEARCHED_FUNC`. However, the Spider SE does not recognize this `Functype`
    because `CASE_SEARCHED_FUNC` was newly added by 4de0d92. This results in the wrong
    handling of `CASE WHEN ... THEN`. The above also applies to `CASE_SIMPLE_FUNC`.
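    [Illustration] Query shapes that exercise the two Functypes mentioned above (table and columns are placeholders):
        -- CASE_SEARCHED_FUNC: CASE WHEN <condition> THEN ...
        SELECT id, CASE WHEN val > 100 THEN 'high' ELSE 'low' END FROM t1;

        -- CASE_SIMPLE_FUNC: CASE <expr> WHEN <value> THEN ...
        SELECT id, CASE val WHEN 1 THEN 'one' ELSE 'other' END FROM t1;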
* | fix spider tests for --ps  [Sergei Golubchik, 2021-06-19, 4 files, -0/+11]
    Spider tests use
        --let $var= many;sql;statements
        --eval $var
    and this doesn't work in ps.
* | spider tests aren't big  [Sergei Golubchik, 2021-06-19, 4 files, -4/+0]
    ... and *never* disable tests in suite.pm based on $::opt_big_test; this will make
    the test skipped both as too big (for ./mtr) and as too small (for ./mtr --big --big).
* | MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed  [Nikita Malyavin, 2021-01-27, 3 files, -45/+44]
    The assertion failed in handler::ha_reset upon SELECT under READ UNCOMMITTED from
    a table with an index on a virtual column. This was a debug-only failure, though
    the problem is much wider:
    * MY_BITMAP is a structure containing my_bitmap_map, the latter is a raw bitmap.
    * read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP.
    * The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE.
    * The pointers to the stored MY_BITMAPs, like orig_read_set etc, and sometimes
      all_set and tmp_set, are assigned to those pointers.
    * Sometimes tmp_use_all_columns is used to substitute the raw bitmap directly with
      all_set.bitmap.
    * Sometimes even the bitmaps are modified directly, like in
      TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.
    The last three bullets in the list, when used together (which is mostly always),
    make the program flow cumbersome and impossible to follow, notwithstanding the
    errors they cause, like this MDEV-17556, where the tmp_set pointer was assigned to
    read_set, write_set and vcol_set, then its bitmap was substituted with
    all_set.bitmap by a dbug_tmp_use_all_columns() call, and then
    bitmap_clear_all(&tmp_set) was applied to all this.
    To untangle this knot, the following rule should be applied:
    * Never substitute bitmaps! This patch is about this. orig_*, all_set bitmaps are
      never substituted already.
    This patch changes the following function prototypes:
    * tmp_use_all_columns, dbug_tmp_use_all_columns: accept MY_BITMAP** and return
      MY_BITMAP* instead of my_bitmap_map*
    * tmp_restore_column_map, dbug_tmp_restore_column_maps: accept MY_BITMAP* instead
      of my_bitmap_map*
    These functions will now substitute read_set/write_set/vcol_set directly and won't
    touch the underlying bitmaps.
* | MDEV-20502 Queries against spider tables return wrong values for columns following constant declarations  [Kentoku SHIBA, 2021-01-12, 2 files, -0/+31]
    Add test cases.
* | MDEV-20502 Queries against spider tables return wrong values for columns following constant declarations  [Kentoku SHIBA, 2021-01-12, 10 files, -4/+216]
    When executing a query like "select id, 0 as const, val from ...", there are 3
    columns (items) in Query->select at handlerton->create_group_by(). After that,
    MariaDB makes a temporary table with only 2 columns. The skipped items are const
    items, so fix Spider to also skip const items among the items in Query->select.
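    [Illustration] A query of the shape described above (names are placeholders); before the fix the value of val could be wrong because of the constant select item preceding it:
        SELECT id, 0 AS c0, val FROM t1;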
* | MDEV-20100 MariaDB 13.3.9 Crash "[ERROR] mysqld got signal 11 ;"  [Kentoku SHIBA, 2020-10-20, 5 files, -0/+264]
    Some functions of ha_partition call the corresponding functions on all partitions,
    but handler->reset() is called only for the partitions selected by
    m_partitions_to_reset. So Spider did not clear its pointers on the unpruned
    partitions; if those partitions are used by the next query, Spider references a
    pointer that has already been freed.
* | MDEV-7098 spider/bg.spider_fixes failed in buildbot with safe_mutex: Trying to unlock mutex conn->mta_conn_mutex that wasn't locked at storage/spider/spd_db_conn.cc, line 671  [Kentoku SHIBA, 2020-09-07, 8 files, -543/+1622]
* | MDEV-18993 The keep-alive connection (set spider_conn_recycle_mode = 1) in Spider would cause crash in MariaDB (#1269)  [Kentoku SHIBA, 2020-06-27, 2 files, -0/+4]
    Fix the following valgrind error:
    ==94390== Thread 29:
    ==94390== Invalid read of size 8
    ==94390==    at 0x78389D: thd_increment_bytes_sent (sql_class.cc:4265)
    ==94390==    by 0xC8EC46: net_real_write (net_serv.cc:730)
    ==94390==    by 0xC8E0C8: net_flush (net_serv.cc:383)
    ==94390==    by 0xC8E4D0: net_write_command (net_serv.cc:521)
    ==94390==    by 0xADCE61: cli_advanced_command (client.c:468)
    ==94390==    by 0xAE3CAF: mysql_close_slow_part (client.c:3671)
    ==94390==    by 0xAE3D28: mysql_close (client.c:3683)
    ==94390==    by 0x149E69A8: spider_db_mbase::disconnect() (spd_db_mysql.cc:2217)
    ==94390==    by 0x1491EA26: spider_db_disconnect(st_spider_conn*) (spd_db_conn.cc:297)
    ==94390==    by 0x14948EBE: spider_free_conn_alloc(st_spider_conn*) (spd_conn.cc:196)
    ==94390==    by 0x1494B26A: spider_free_conn(st_spider_conn*) (spd_conn.cc:1251)
    ==94390==    by 0x1494941F: spider_free_conn_from_trx(st_spider_transaction*, st_spider_conn*, bool, bool, int*) (spd_conn.cc:315)
    ==94390==  Address 0x1f0e0990 is 4,832 bytes inside a block of size 25,728 free'd
    ==94390==    at 0x4C2ACBD: free (vg_replace_malloc.c:530)
    ==94390==    by 0x13F5545: my_free (my_malloc.c:222)
    ==94390==    by 0x6C75B7: ilink::operator delete(void*, unsigned long) (sql_list.h:618)
    ==94390==    by 0x77B9F6: THD::~THD() (sql_class.cc:1724)
    ==94390==    by 0x1494FCE0: spider_bg_conn_action(void*) (spd_conn.cc:2580)
    ==94390==    by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
    ==94390==    by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
    ==94390==  Block was alloc'd at
    ==94390==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
    ==94390==    by 0x13F4DFA: my_malloc (my_malloc.c:101)
    ==94390==    by 0x1491CF06: ilink::operator new(unsigned long) (sql_list.h:614)
    ==94390==    by 0x1494F7FD: spider_bg_conn_action(void*) (spd_conn.cc:2501)
    ==94390==    by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
    ==94390==    by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
    ==94390== Invalid write of size 8
    ==94390==    at 0x7838AF: thd_increment_bytes_sent (sql_class.cc:4265)
    ==94390==    by 0xC8EC46: net_real_write (net_serv.cc:730)
    ==94390==    by 0xC8E0C8: net_flush (net_serv.cc:383)
    ==94390==    by 0xC8E4D0: net_write_command (net_serv.cc:521)
    ==94390==    by 0xADCE61: cli_advanced_command (client.c:468)
    ==94390==    by 0xAE3CAF: mysql_close_slow_part (client.c:3671)
    ==94390==    by 0xAE3D28: mysql_close (client.c:3683)
    ==94390==    by 0x149E69A8: spider_db_mbase::disconnect() (spd_db_mysql.cc:2217)
    ==94390==    by 0x1491EA26: spider_db_disconnect(st_spider_conn*) (spd_db_conn.cc:297)
    ==94390==    by 0x14948EBE: spider_free_conn_alloc(st_spider_conn*) (spd_conn.cc:196)
    ==94390==    by 0x1494B26A: spider_free_conn(st_spider_conn*) (spd_conn.cc:1251)
    ==94390==    by 0x1494941F: spider_free_conn_from_trx(st_spider_transaction*, st_spider_conn*, bool, bool, int*) (spd_conn.cc:315)
    ==94390==  Address 0x1f0e0990 is 4,832 bytes inside a block of size 25,728 free'd
    ==94390==    at 0x4C2ACBD: free (vg_replace_malloc.c:530)
    ==94390==    by 0x13F5545: my_free (my_malloc.c:222)
    ==94390==    by 0x6C75B7: ilink::operator delete(void*, unsigned long) (sql_list.h:618)
    ==94390==    by 0x77B9F6: THD::~THD() (sql_class.cc:1724)
    ==94390==    by 0x1494FCE0: spider_bg_conn_action(void*) (spd_conn.cc:2580)
    ==94390==    by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
    ==94390==    by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
    ==94390==  Block was alloc'd at
    ==94390==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
    ==94390==    by 0x13F4DFA: my_malloc (my_malloc.c:101)
    ==94390==    by 0x1491CF06: ilink::operator new(unsigned long) (sql_list.h:614)
    ==94390==    by 0x1494F7FD: spider_bg_conn_action(void*) (spd_conn.cc:2501)
    ==94390==    by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
    ==94390==    by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
* | MDEV-21884 MariaDB with Spider crashes on a query (branch bb-10.3-MDEV-21884)  [Kentoku SHIBA, 2020-04-17, 6 files, -0/+293]
* | Fix GCC -Wstringop-truncation  [Marko Mäkelä, 2020-03-30, 1 file, -0/+1]
* | Merge branch '10.2' into 10.3  [Oleksandr Byelkin, 2019-12-04, 1 file, -1/+1]
* | Merge remote-tracking branch 10.2 into 10.3  [Jan Lindström, 2019-12-02, 4 files, -1/+218]
    Conflicts:
        mysql-test/suite/galera/t/galera_binlog_event_max_size_max-master.opt
        mysql-test/suite/innodb/r/innodb-mdev-7513.result
        mysql-test/suite/innodb/t/innodb-mdev-7513.test
        mysql-test/suite/wsrep/disabled.def
        storage/innobase/ibuf/ibuf0ibuf.cc
| * MDEV-17508 Fix bug for spider when using "not like"  [willhan, 2019-11-25, 4 files, -1/+218]
    Fix a bug in Spider when using "not like" (#890).
    Test case: t1 is a Spider engine table:
        CREATE TABLE `t1` (
          `id` int(11) NOT NULL DEFAULT '0',
          `name` char(64) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=SPIDER
    The query "select * from t1 where name not like 'x%'" would dispatch a query with
    "name like 'x%'" (the NOT lost) to the remote mysqld, which is wrong.
| * Merge 10.1 into 10.2  [Eugene Kosov, 2019-07-09, 1 file, -1/+1]
| | * improve clang build  [Eugene Kosov, 2019-06-25, 1 file, -1/+1]
    cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
    Maintainer mode makes all warnings errors. This patch fixes such warnings, mostly
    about the deprecated `register` keyword. Too many warnings came from Mroonga, so I
    gave up on it.
* | | MDEV-19238 Mariadb spider - crashes on where null  [Oleksandr Byelkin, 2019-10-09, 5 files, -0/+200]
    (fix and explanation came with MDEV-20753, a duplicate of this bug)
* | | Fix CMake warning in spider, in Windows ninja build  [Vladislav Vaintroub, 2019-09-12, 1 file, -0/+3]
* | | fix for a compiler warning (#1372)  [Kentoku SHIBA, 2019-08-17, 1 file, -1/+1]
* | | MDEV-20273 Add class Item_sum_min_max  [Alexander Barkov, 2019-08-07, 1 file, -2/+2]
* | | spider_db_init(): Do not return uninitialized error_num  [Marko Mäkelä, 2019-06-14, 1 file, -119/+43]
    If the allocation of spider_table_sts_threads failed, we would DBUG_RETURN(error_num)
    without having initialized it earlier. Pre-initialize error_num to HA_ERR_OUT_OF_MEM
    and remove a lot of assignments that thus became redundant.
    This error was introduced in 207594afac99e5e7de1e639d907ce57c53c02294 (Spider 3.3.13).
* | | Merge 10.2 into 10.3  [Marko Mäkelä, 2019-05-14, 36 files, -36/+36]
| * | Merge 10.1 into 10.2  [Marko Mäkelä, 2019-05-13, 36 files, -36/+36]
| | * Merge branch '5.5' into 10.1  [Vicențiu Ciorbaru, 2019-05-11, 36 files, -36/+36]
* | | Merge branch '10.2' into 10.3  [Sergei Golubchik, 2019-03-29, 1 file, -3/+3]
| * | Merge branch '10.1' into 10.2  [Sergei Golubchik, 2019-03-29, 1 file, -3/+3]
| | * cmake: re-enable -Werror in the maintainer mode  [Sergei Golubchik, 2019-03-27, 1 file, -3/+3]
    Now we can afford it. Fix -Werror errors. Note:
    * old gcc is bad at detecting uninit variables, disable it.
    * time_t is int or long, cast it for printf's.
| * | fixed auto-merge gone bad  [Sergei Golubchik, 2018-09-24, 1 file, -48/+0]
| * | Merge branch '10.1' into 10.2  [Sergei Golubchik, 2018-09-24, 1 file, -0/+48]
| | * MDEV-16912: Spider Order By column[datatime] limit 5 returns 3 rows  [Jacob Mathew, 2018-09-13, 4 files, -0/+175]
    The problem occurs in 10.2 and earlier releases of MariaDB Server because the
    Partition Engine was not pushing the engine conditions down to the underlying
    storage engine of each partition. This caused Spider to return the first 5 rows in
    the table with the data provided by the customer; 2 of the 5 rows did not qualify
    the WHERE clause, so they were removed from the result set by the server.
    To fix the problem, I have back-ported support for engine condition pushdown in
    the Partition Engine from MariaDB Server 10.3.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Cherry-Picked: Commit eb2ca3d on branch bb-10.2-MDEV-16912
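    [Illustration] A query of the reported shape (schema is a placeholder): without condition pushdown to the partitions, Spider fetched only the first LIMIT rows, and the server then discarded the ones that failed the WHERE clause.
        SELECT * FROM t1
        WHERE status = 'active'
        ORDER BY created_at
        LIMIT 5;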
| | * MDEV-12900: spider tests failed in buildbot with valgrind  [Jacob Mathew, 2018-05-21, 3 files, -83/+7]
    The failures with valgrind occur as a result of Spider sometimes using the wrong
    transaction for operations in background threads that send requests to the data
    nodes. The use of the wrong transaction caused the networking to the data nodes to
    use the wrong thread in some cases. Valgrind eventually detects this when such a
    thread is destroyed before it is used to disconnect from the data node by that
    wrong transaction when it is freed.
    I have fixed the problem by correcting the transaction used in each of these cases.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Cherry-Picked: Commit afe5a51 on branch 10.2
| | * MDEV-7914: spider/bg.ha, spider/bg.ha_part crash server sporadically in buildbot  [Jacob Mathew, 2018-05-18, 4 files, -36/+76]
    The crash occurs when a thread that is closing its connection attempts to access
    Spider transaction information when another thread has freed that memory while
    processing Spider plugin deinit. This occurs because Spider does not adjust the
    plugin's reference count when it sets a transaction information pointer for the
    plugin.
    The fix I implemented changes the way Spider sets the transaction information
    pointer to use thd_set_ha_data(), so that Spider's plugin reference counter is
    adjusted as well.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Merged From: Commit ab9d420 on branch 10.2
| * | MDEV-16912: Spider Order By column[datatime] limit 5 returns 3 rows (branch bb-10.2-MDEV-16912)  [Jacob Mathew, 2018-09-11, 4 files, -0/+176]
    The problem occurs in 10.2 and earlier releases of MariaDB Server because the
    Partition Engine was not pushing the engine conditions down to the underlying
    storage engine of each partition. This caused Spider to return the first 5 rows in
    the table with the data provided by the customer; 2 of the 5 rows did not qualify
    the WHERE clause, so they were removed from the result set by the server.
    To fix the problem, I have back-ported support for engine condition pushdown in
    the Partition Engine from MariaDB Server 10.3.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
| * | MDEV-15786: ERROR 1062 (23000) at line 365: Duplicate entry 'spider' for key 'PRIMARY'  [Jacob Mathew, 2018-07-23, 1 file, -3/+21]
    The problem occurs on Ubuntu where a Spider package is installed on the system
    separately from the MariaDB package. MariaDB and Spider upgrades leave the Spider
    plugin improperly installed: Spider is present in the mysql.plugin table but is
    not present in information_schema.
    The problem has been corrected in Spider's installation script. Logic has been
    added to check for Spider entries in both information_schema and mysql.plugin. If
    Spider is present in mysql.plugin but is not present in information_schema, then
    Spider is first removed from mysql.plugin. The subsequent plugin install of Spider
    will insert entries in both mysql.plugin and information_schema.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Cherry-Picked: Commit 0897d81 on branch bb-10.3-MDEV-15786
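    [Illustration] A hedged sketch of the check described above (not the literal statements from the installation script):
        -- Is Spider registered with the server and/or listed in mysql.plugin?
        SELECT PLUGIN_NAME FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'SPIDER';
        SELECT name FROM mysql.plugin WHERE name = 'spider';

        -- If only the mysql.plugin row exists, remove it so the subsequent plugin
        -- install can register Spider cleanly in both places.
        DELETE FROM mysql.plugin WHERE name = 'spider';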
| * | MDEV-12900: spider tests failed in buildbot with valgrind  [Jacob Mathew, 2018-05-21, 3 files, -83/+7]
    The failures with valgrind occur as a result of Spider sometimes using the wrong
    transaction for operations in background threads that send requests to the data
    nodes. The use of the wrong transaction caused the networking to the data nodes to
    use the wrong thread in some cases. Valgrind eventually detects this when such a
    thread is destroyed before it is used to disconnect from the data node by that
    wrong transaction when it is freed.
    I have fixed the problem by correcting the transaction used in each of these cases.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Merged: Commit 4d576d9 on branch bb-10.3-MDEV-12900
| * | MDEV-7914: spider/bg.ha, spider/bg.ha_part crash server sporadically in buildbot  [Jacob Mathew, 2018-05-17, 4 files, -36/+76]
    The crash occurs when a thread that is closing its connection attempts to access
    Spider transaction information when another thread has freed that memory while
    processing Spider plugin deinit. This occurs because Spider does not adjust the
    plugin's reference count when it sets a transaction information pointer for the
    plugin.
    The fix I implemented changes the way Spider sets the transaction information
    pointer to use thd_set_ha_data(), so that Spider's plugin reference counter is
    adjusted as well.
    Author: Jacob Mathew.
    Reviewer: Kentoku Shiba.
    Merged From: Commit eabfadc on branch bb-10.3-MDEV-7914
* | | post-merge: gcc 8 warnings  [Sergei Golubchik, 2019-03-17, 3 files, -6/+6]
* | | MDEV-18313 Supports 'wrapper mariadb' for connection information  [Kentoku, 2019-01-31, 13 files, -932/+1616]
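    [Illustration] A sketch of the connection syntax this refers to (host, port, credentials, and remote table are placeholders): the Spider COMMENT string can now name mariadb as the wrapper in addition to mysql.
        CREATE TABLE t1 (
          id INT PRIMARY KEY
        ) ENGINE=SPIDER
          COMMENT='wrapper "mariadb", host "192.0.2.10", port "3306", user "spider_user", password "spider_pw", database "remote_db", table "t1"';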
* | | Fix an error when using spider_direct_sql with a temporary table  [Kentoku, 2019-01-31, 9 files, -4/+111]
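    [Illustration] A hedged sketch of the SPIDER_DIRECT_SQL UDF used with a temporary table (server definition and remote query are placeholders): the second argument names the local temporary table that receives the result set.
        CREATE TEMPORARY TABLE tmp_res (
          id INT,
          val INT
        ) ENGINE=MEMORY;

        SELECT SPIDER_DIRECT_SQL(
          'SELECT id, val FROM remote_t1',  -- SQL executed on the data node
          'tmp_res',                        -- local temporary table receiving the rows
          'srv "backend1"'                  -- connection info via a CREATE SERVER definition
        );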