path: root/sql/sql_class.h
Commit message | Author | Age | Files | Lines
* Improve documentation of Unique class | Vicențiu Ciorbaru | 2020-01-16 | 1 | -4/+9
    * size represents the size of an element in the Unique class
    * full_size is used when the Unique class counts the number of duplicates stored per element. This requires additional space per Unique element.
* Update FSF Address | Vicențiu Ciorbaru | 2019-05-11 | 1 | -1/+1
    * Update wrong zip-code
* Fix Item tree changes/rollback debug print | Oleksandr Byelkin | 2018-01-23 | 1 | -0/+4
|
* MDEV-14609 XA Transaction unable to ROLLBACK TO SAVEPOINT | Alexander Barkov | 2018-01-15 | 1 | -0/+24
    The function trans_rollback_to_savepoint(), unlike trans_savepoint(), did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT inside an XA transaction incorrectly returned an error "...command cannot be executed ... in the ACTIVE state...".

    Partially merging a MySQL patch: 7fb5c47390311d9b1b5367f97cb8fedd4102dd05. This is WL#7193 (Decouple THD and st_transactions)... The currently merged part includes these changes:
    - Introducing st_xid_state::check_has_uncommitted_xa()
    - Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(), so now both allow XA_ACTIVE.
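
    A minimal C++ sketch of the shared check described above. st_xid_state::check_has_uncommitted_xa() is the name from the commit message; the state enum, method body and error handling below are illustrative assumptions, not the actual server code.

        enum xa_states { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED };

        struct st_xid_state_sketch {
          xa_states xa_state;

          // Report "uncommitted XA" only in states where savepoint statements
          // must not run; XA_ACTIVE is allowed, which is what the fix changes.
          bool check_has_uncommitted_xa() const {
            return xa_state == XA_IDLE || xa_state == XA_PREPARED;
          }
        };

        // Both trans_savepoint() and trans_rollback_to_savepoint() can then start
        // with the same precondition instead of each hard-coding its own state list:
        //   if (xid_state.check_has_uncommitted_xa()) return true;  // error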
* MDEV-13458: Wrong result for aggregate function with distinct clause when the value for tmp_table_size is small | Varun Gupta | 2017-08-09 | 1 | -1/+1
    Fixed by making sure that the sort buffer would have at least MERGEBUFF2 keys. Also fixed MDEV-13457 by making sure that an empty tree is never dumped to disk.
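
    A hedged sketch of the sizing rule: MERGEBUFF2 is the real merge-buffer constant, but the helper function, its parameters and the value 15 used here are illustrative only.

        static const size_t MERGEBUFF2 = 15;   // assumed value, for illustration

        // Number of keys the sort buffer is sized for: never fewer than
        // MERGEBUFF2, even when tmp_table_size is tiny (assumes key_size > 0).
        size_t sort_buffer_keys(size_t tmp_table_size, size_t key_size) {
          size_t keys = tmp_table_size / key_size;
          return keys < MERGEBUFF2 ? MERGEBUFF2 : keys;
        }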
* bugfix: cmp_item_row::alloc_comparators() allocated on the wrong arena | Sergei Golubchik | 2017-01-15 | 1 | -19/+0
    It used current_thd->alloc() and allocated on the thd's execution arena, not on table->expr_arena. Remove THD::arena_for_cached_items, which was temporarily set in update_virtual_fields() and replaced the THD arena in get_datetime_value(). Instead, set the THD arena to table->expr_arena for the whole duration of update_virtual_fields().
* missing element in prelocked_mode_name[] array | Sergei Golubchik | 2016-09-12 | 1 | -1/+2
    Different fix for a63a250d40: BUG#23509275: DBUG_PRINT in THD::decide_logging_format prints incorrectly, out-of-bounds access.
* MDEV-10595 MariaDB daemon leaks memory with specific query | Monty | 2016-08-25 | 1 | -0/+5
    The issue was that in some extreme cases when doing GROUP BY, buffers for temporary blobs were not properly cleared.
* Merge branch 'mysql/5.5' into 5.5 | Sergei Golubchik | 2016-04-20 | 1 | -1/+13
|\
| * BUG#20574550 MAIN.MERGE TEST CASE FAILS IF BINLOG_FORMAT=ROW | Venkatesh Duggirala | 2016-02-26 | 1 | -1/+13
    The main.merge test case was failing when tested using row-based binlog format. While analyzing the issue, the following problems were found:
    a) The server is calling binlog-related code even when a statement will not be binlogged;
    b) The child table list was not present in the table structure by the time the CREATE TABLE statement is generated;
    c) The tables in the child table list will not be opened yet when generating table create info using row-based replication;
    d) CREATE TABLE LIKE TEMP_TABLE does not preserve the original table storage engine when using row-based replication.
    This patch addresses all of the above issues.

    @ sql/sql_class.h
    Added a function to determine if the binary log is disabled for the current session. This is related to issue (a) above.

    @ sql/sql_table.cc
    Added code to skip binary-logging-related code if the statement will not be binlogged. This is related to issue (a) above. Added code to add the children to the query list of the table that will have its CREATE TABLE generated. This is related to issue (b) above. Added code to force the storage engine to be written into the CREATE TABLE. This is related to issue (d) above.

    @ storage/myisammrg/ha_myisammrg.cc
    Added a test to skip getting info about a child table if the child table is not opened. This is related to issue (c) above.
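
    A rough sketch of the kind of per-session check described for issue (a); the class name, member names and flag handling below are simplified assumptions, not the server's actual THD API.

        class THD_sketch {
        public:
          // True when nothing executed in this session will be written to the
          // binary log, so binlog-only code paths can be skipped early.
          bool binlog_is_disabled() const {
            return !binlog_enabled_globally || !sql_log_bin_session;
          }
        private:
          bool binlog_enabled_globally = false;  // stand-in for --log-bin being off
          bool sql_log_bin_session = true;       // stand-in for SET sql_log_bin = 0
        };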
* | Merge branch 'mysql/5.5' into 5.5 | Sergei Golubchik | 2016-02-09 | 1 | -1/+1
    Reverted about half of the commits as either not applicable or outright wrong.
| * Bug#19941403: FATAL_SIGNAL(SIG 6) IN BUILD_EQUAL_ITEMS_FOR_COND | IN SQL/SQL_OPTIMIZER.CC:1657 | Chaithra Gopalareddy | 2015-11-20 | 1 | -0/+16
    Problem: At the end of the first execution, select_lex->prep_where points to a runtime-created object (a temporary table field). As a result, the server exits trying to access an invalid pointer during the second execution.

    Analysis: While optimizing the join conditions for the query, after the permanent transformation, the optimizer makes a copy of the new where conditions in select_lex->prep_where. "prep_where" is what is used as the "where condition" for the query at the start of execution. W.r.t. the query in question, the "where" condition is actually pointing to a field in the temporary table. As a result, for the second execution the pointer is no longer valid, resulting in a server exit.

    Fix: At the end of the first execution, select_lex->where will have the original item of the where condition. Make prep_where the new place where the original item of select->where has to be rolled back. Fixed in 5.7 with WL#7082 - Move permanent transformations from JOIN::optimize to JOIN::prepare.

    The patch for 5.5 includes the following backports from 5.6: the bugfix for Bug12603141, which makes the first execute statement in the testcase pass in 5.5. However it was noted later in Bug16163596 that the above bugfix needed to be modified. Although Bug16163596 is reproducible only with the changes done for Bug12582849, we have decided to include the fix. Considering that Bug12582849 is related to Bug12603141, its fix is also included here. However this results in Bug16317817, Bug16317685 and Bug16739050, so the fix for those three bugs is also part of this patch.
| * Bug#20788853 MUTEX ISSUE IN SQL/SQL_SHOW.CC RESULTING IN SIG6. SOURCE LIKELY FILL_VARIABLES | Marc Alff | 2015-04-08 | 1 | -0/+1
    Prevent mutexes used in SHOW VARIABLES from being locked twice.
* | compilation error on windows | Sergei Golubchik | 2015-07-31 | 1 | -1/+1
| |
* | Fixed memory loss detected on P8. This can happen when we call after_flush but never call after_rollback() or after_commit(). | Monty | 2015-07-25 | 1 | -1/+4
    The old code used pthread_setspecific() to store temporary data used by the thread. This is not safe when used with the thread pool, as the thread may change for the transaction. The fix is to save the data in THD, which is guaranteed to be properly freed. I also fixed the code so that we don't do a malloc() for every transaction.
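
    An abstract sketch of this change, with made-up type and member names: the per-transaction scratch data moves from thread-local storage into the THD, so it survives the thread pool switching OS threads and is freed together with the THD.

        struct Trans_scratch { /* temporary data used between flush and commit/rollback */ };

        struct THD_sketch {
          Trans_scratch trans_scratch;   // owned by the THD, freed with it
        };

        // Before: after_flush() did pthread_setspecific(key, new Trans_scratch),
        // which leaked when after_rollback()/after_commit() ran on a different
        // pooled thread. After: the THD member is reused, which also removes
        // the per-transaction malloc().
        Trans_scratch *scratch_for(THD_sketch *thd) { return &thd->trans_scratch; }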
* | Fix for MDEV-8301; Statistics for a thread could be counted twice in SHOW STATUS while the thread was ending | Monty | 2015-06-26 | 1 | -3/+8
    Fixed by adding a marker recording whether the thread's statistics have already been added to the global counters.
* | Merge remote-tracking branch 'mysql/5.5' into 5.5 | Sergei Golubchik | 2015-04-27 | 1 | -6/+3
|\ \ | |/
| * Bug#20094067: BACKPORT BUG#19683834 TO 5.5 AND 5.6 | Nisha Gopalakrishnan | 2015-01-27 | 1 | -4/+2
    Backporting the patch and the test case fixed as part of BUG#16041903 and BUG#19683834 respectively.
| * BUG#18054998 - BACKPORT FIX FOR BUG#11765785 to 5.5 | Thayumanavar | 2014-01-13 | 1 | -1/+3
    This is a backport of the patch for bug#11765785. The commit message by Prabakaran Thirumalai from bug#11765785 is reproduced below:

    Description: The global query id (global_query_id) is not incremented for PING and statistics commands. These two query types are filtered before incrementing the global query id. This causes a race condition and results in duplicate query ids for different queries originating from different connections.

    Analysis: sql_parse.cc::dispatch_command() is the only place in the code which sets thd->query_id to global_query_id and then increments it based on the query type. In all other places it is incremented first and then assigned to thd->query_id. This is done so that global_query_id is not incremented for PING and statistics commands in dispatch_command().

    Fix: As per the suggestion from Serg, "There is no reason to skip query_id for the PING and STATISTICS command.", the check which filters PING and statistics commands is removed. Instead of using get_query_id() and next_query_id(), which can still cause a race condition if a context switch happens soon after executing get_query_id(), the code is changed to use next_query_id() instead of get_query_id(), as is done in other parts of the code that deal with global_query_id. Removed the get_query_id() function and forced next_query_id() callers to use the return value by specifying the warn_unused_result attribute.
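
    A simplified sketch of the race and the fix; global_query_id and next_query_id() are names from the commit message, but the atomic type, the exact signature and the [[nodiscard]] spelling of warn_unused_result are assumptions.

        #include <atomic>

        std::atomic<unsigned long long> global_query_id{1};

        // One atomic fetch-and-increment: two sessions can never observe the
        // same id, unlike a separate get_query_id() followed by an increment.
        [[nodiscard]] unsigned long long next_query_id() {
          return global_query_id.fetch_add(1);
        }

        // dispatch_command() can then do this for every command, including
        // COM_PING and COM_STATISTICS:
        //   thd_query_id = next_query_id();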
| * merge 5.1 => 5.5 | Tor Didriksen | 2013-11-01 | 1 | -1/+3
| |\
| | * Bug#17617945 BUFFER OVERFLOW IN GET_MERGE_MANY_BUFFS_COST WITH SMALL SORT_BUFFER_SIZE | Tor Didriksen | 2013-11-01 | 1 | -1/+3
    get_cost_calc_buff_size() could return a wrong value for the size of imerge_cost_buff.
| * | WL#7076: Backporting wl6715 to support both formats in 5.5, 5.6, 5.7. | Ashish Agarwal | 2013-08-23 | 1 | -4/+14
| | * | WL#7076: Backporting wl6715 to support both formats in 5.5, 5.6, 5.7 | Ashish Agarwal | 2013-07-02 | 1 | -4/+14
    Backporting wl6715 to mysql-5.5
| * | | Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND "SHOW PROCESSLIST" | Praveenkumar Hulakund | 2013-08-21 | 1 | -1/+8
    Merging from 5.1 to 5.5
| | * | Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND "SHOW PROCESSLIST" | Praveenkumar Hulakund | 2013-08-21 | 1 | -1/+8
    Analysis: The problem is that if one connection changes its default db and at the same time another connection executes "SHOW PROCESSLIST", then when the latter wants to read the db of the other connection there is a chance of accessing invalid memory. The db name stored in THD is not guarded while changing the user DB and while reading the user DB in "SHOW PROCESSLIST". So, if THD.db is freed by the thd "owner" thread while another thread executing "SHOW PROCESSLIST" tries to read and copy THD.db at the same time, we may end up in the issue reported here.

    Fix: Use the mutex "LOCK_thd_data" to guard THD.db while freeing it and while copying it to the processlist.
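
    A minimal sketch of the locking rule, using std::mutex and std::string as stand-ins for the server's mysql_mutex_t and the raw THD::db pointer; the method names are illustrative.

        #include <mutex>
        #include <string>

        struct THD_sketch {
          std::mutex LOCK_thd_data;
          std::string db;   // current default database

          // Owner connection: change the default db only under the lock.
          void set_db(const std::string &new_db) {
            std::lock_guard<std::mutex> guard(LOCK_thd_data);
            db = new_db;
          }

          // SHOW PROCESSLIST thread: copy the db under the same lock, so it
          // can never read memory the owner is freeing at that moment.
          std::string copy_db_for_processlist() {
            std::lock_guard<std::mutex> guard(LOCK_thd_data);
            return db;
          }
        };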
| * | | Bug#17083851 BACKPORT BUG#11765744 TO 5.1, 5.5 AND 5.6 | prabakaran thirumalai | 2013-07-30 | 1 | -0/+10
    Description: The original fix for Bug#11765744 changed a mutex to a read-write lock to avoid recursively acquiring the LOCK_status mutex. On Windows, locking a read-write lock recursively is not safe. Slim read-write locks, which MySQL uses if the Windows version supports them, do not support recursion according to their documentation. For our own implementation of read-write locks, used when the Windows version doesn't support SRW, recursive locking can easily lead to deadlock if there are concurrent lock requests.

    Fix: This patch reverts the previous fix for bug#11765744 that used read-write locks. Instead, the problem of recursive locking of the LOCK_status mutex is solved by tracking the recursion level with a counter in the THD object and acquiring the lock only once, when fill_status() is entered the first time.
| | * Bug#17083851 BACKPORT BUG#11765744 TO 5.1, 5.5 AND 5.6 | prabakaran thirumalai | 2013-07-30 | 1 | -0/+11
    Description: The original fix for Bug#11765744 changed a mutex to a read-write lock to avoid recursively acquiring the LOCK_status mutex. On Windows, locking a read-write lock recursively is not safe. Slim read-write locks, which MySQL uses if the Windows version supports them, do not support recursion according to their documentation. For our own implementation of read-write locks, used when the Windows version doesn't support SRW, recursive locking can easily lead to deadlock if there are concurrent lock requests.

    Fix: This patch reverts the previous fix for bug#11765744 that used read-write locks. Instead, the problem of recursive locking of the LOCK_status mutex is solved by tracking the recursion level with a counter in the THD object and acquiring the lock only once, when fill_status() is entered the first time.
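
    A sketch of the recursion-counter idea; LOCK_status and fill_status() are the server's names, while the counter field, std::mutex and the surrounding structure are simplified assumptions.

        #include <mutex>

        std::mutex LOCK_status;

        struct THD_sketch { int fill_status_recursion_level = 0; };

        void fill_status(THD_sketch *thd) {
          // Only the outermost call locks; nested calls piggyback on it, so
          // LOCK_status is never acquired recursively (safe on Windows too).
          if (thd->fill_status_recursion_level++ == 0)
            LOCK_status.lock();

          // ... collect the status variables ...

          if (--thd->fill_status_recursion_level == 0)
            LOCK_status.unlock();
        }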
| * | Merge from 5.1 to 5.5 | Chaithra Gopalareddy | 2013-04-14 | 1 | -0/+5
| |\ \ | | |/
| | * Updated/added copyright headers. | Murthy Narkedimilli | 2013-02-25 | 1 | -1/+1
| | |
| | * Solve a linkage problem with "libmysqld" on several Solaris platforms: | Kent Boortz | 2012-06-26 | 1 | -2/+1
    a multiple definition of 'THD::clear_error()' in (at least) libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o). Patch provided by Ramil Kalimullin.
| | * Bug#12636001 : deadlock from thd_security_context | Gopal Shankar | 2012-05-17 | 1 | -2/+14
    PROBLEM: Threads end up in a deadlock due to locks acquired as described below.

    con1: Runs a query on a table. It is important that this SELECT must back off while trying to open t1 and enter wait_for_condition(). The SELECT is then blocked trying to lock mysys_var->mutex, which is held by con3. The very significant fact here is that mysys_var->current_mutex will still point to LOCK_open, even if LOCK_open is no longer held by con1 at this point.

    con2: Tries dropping the table used in con1, or queries some table. It will hold LOCK_open and be blocked trying to lock kernel_mutex, held by con4.

    con3: Tries killing the query run by con1. It will hold THD::LOCK_thd_data belonging to con1 while trying to lock mysys_var->current_mutex belonging to con1. But current_mutex points to LOCK_open, which is held by con2.

    con4: Gets InnoDB engine status. It will hold kernel_mutex while trying to lock THD::LOCK_thd_data belonging to con1, which is held by con3.

    So while technically only con2, con3 and con4 participate in the deadlock, con1's mysys_var->current_mutex pointing to LOCK_open is a vital component of the deadlock.

    CYCLE = (THD::LOCK_thd_data -> LOCK_open -> kernel_mutex -> THD::LOCK_thd_data)

    FIX: LOCK_thd_data is responsible for protecting:
    1) thd->query, thd->query_length
    2) VIO
    3) thd->mysys_var (used by the KILL statement and shutdown)
    4) THD during thread delete.
    Among the above responsibilities, 1), 2) and (3,4) seem to be three independent groups. If a different lock owns responsibility (3,4), the above-mentioned deadlock cycle can be avoided. This fix introduces LOCK_thd_kill to handle responsibility (3,4), which eliminates the deadlock.

    Note: The problem is not found in 5.5. Introduction of the MDL subsystem moved metadata locking responsibility from TDC/TC to MDL, which reduced the responsibility of LOCK_open. As the use of LOCK_open is removed in open_table() and mysql_rm_table(), the above-mentioned CYCLE does not form. Revision IDs for those changes: open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz, mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
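
    An abstract sketch of the lock split; LOCK_thd_data and LOCK_thd_kill are the names from the commit message, but the field grouping and the helper function below are simplifications, not the server's code.

        #include <mutex>

        struct THD_sketch {
          std::mutex LOCK_thd_data;   // still guards query text and the VIO
          std::mutex LOCK_thd_kill;   // now guards mysys_var and THD deletion
        };

        // KILL only needs LOCK_thd_kill, so the cycle
        // LOCK_thd_data -> LOCK_open -> kernel_mutex -> LOCK_thd_data
        // can no longer form.
        void kill_connection(THD_sketch *victim) {
          std::lock_guard<std::mutex> guard(victim->LOCK_thd_kill);
          // signal victim's mysys_var / condition variable here
        }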
| * | Fix for Bug 16395495 - OLD FSF ADDRESS IN GPL HEADER | Murthy Narkedimilli | 2013-03-19 | 1 | -1/+1
| | |
| * | BUG#15891524: RLI_FAKE MODE IS NOT UNSET AFTER BINLOG REPLAY | Nuno Carvalho | 2012-11-20 | 1 | -0/+2
    When a binlog is replayed into a server, e.g.:
      $ mysqlbinlog binlog.000001 | mysql
    it sets a pseudo slave mode on the client connection so that the server is able to read binlog events; that is, a format description event is needed to correctly read the following events. This pseudo slave mode also applies to the current connection the replication rules that are needed to correctly apply binlog events. If a binlog dump is sourced on a connection, this pseudo slave mode would remain after it, applying rules that are unexpected from the customer's perspective to the following commands. Added a new SET statement to the binlog dump that unsets the pseudo slave mode at the end of the dump file.
| * | Bug#14466617 - INVALID WRITES AND/OR CRASH WITH USER VARIABLES | Praveenkumar Hulakund | 2012-11-07 | 1 | -1/+0
    Analysis: After executing the query, the new values of the user-defined variables are set in the function "select_dumpvar::send_data". "select_dumpvar::send_data" first calls "Item_func_set_user_var::save_item_result()". This function checks the nullness of the Item_field passed as parameter to it and saves it. The nullness of the item is stored in arg[0]'s null_value flag. Then "select_dumpvar::send_data" calls "Item_func_set_user_var::update()", which notices the saved null result and calls "Item_func_set_user_var::update_hash". But here null_value is not set and args[0] is different from the one given to "Item_func_set_user_var::save_item_result()". This causes "Item_func_set_user_var::update_hash" to believe that it is getting a non-null value. "user_var_entry::length" is set to 0, hence "user_var_entry::value" is made to point to the extra_area allocated in "user_var_entry", and "Item_func_set_user_var::update_hash" tries to write beyond the extra_area for result type DECIMAL. Because of this, an invalid write is reported by Valgrind.

    Before this bug was introduced, we avoided this problem by creating the "Item_func_set_user_var" object with the same Item_field as arg[0] and as parameter to Item_func_set_user_var::save_item_result(). But now they refer to different args[0]. Because of this, the null_value flag set in the parameter Item_field in "Item_func_set_user_var::save_item_result()" is not reflected in the "Item_func_set_user_var" object.

    Fix: This issue is reported on version 5.5.24 and does not exist in 5.5.23, 5.1, 5.6 and trunk. It was introduced by revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for bug #12408412), which was pushed into 5.5 and later releases. That patch was later reverted in 5.6 and trunk by revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for bug #14664077). Backported that patch to 5.5 as well to fix this issue.
| * | Bug#14003080:65104: MAX_USER_CONNECTIONS WITH PROCESSLIST EMPTY | Praveenkumar Hulakund | 2012-05-28 | 1 | -1/+20
    Analysis: If the server is started with limits of MAX_CONNECTIONS and MAX_USER_CONNECTIONS, then only MAX_USER_CONNECTIONS connections from any particular user can be connected to the server, and in total MAX_CONNECTIONS clients can be connected. The server maintains a counter for total connections and a counter for connections per user. Here, MAX_CONNECTIONS connections are made to the server. Among them are connections from a particular user (say USER1), fewer than MAX_USER_CONNECTIONS of them. After that there is one more connection request from USER1. Since USER1 can still create connections, as it hasn't reached MAX_USER_CONNECTIONS, the server increments the per-user connection counter. As the server already has MAX_CONNECTIONS connections, the next check against the total connection count fails. In this case control is returned WITHOUT decrementing the per-user connection counter. So the per-user counter keeps incrementing on each attempt until the current connections are closed, and because of this it reaches MAX_USER_CONNECTIONS. So subsequent connections from USER1 always return the MAX_USER_CONNECTIONS limit error, even when the total number of connections to the server is less than MAX_CONNECTIONS.

    Fix: The issue occurred because the counters were not handled properly in the server. Changed the code to handle the per-user connection counter properly.
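
    A hedged sketch of the counter bug and its fix; the structures, limits and function below are illustrative stand-ins, not the server's connection-management code.

        struct USER_CONN_sketch { unsigned connections = 0; };

        unsigned total_connections = 0;
        const unsigned max_connections = 100;
        const unsigned max_user_connections = 10;

        bool count_new_connection(USER_CONN_sketch *uc) {
          if (uc->connections >= max_user_connections)
            return false;                      // per-user limit reached
          ++uc->connections;                   // optimistic per-user count
          if (total_connections >= max_connections) {
            --uc->connections;                 // the missing step: undo the
            return false;                      // per-user count on failure
          }
          ++total_connections;
          return true;
        }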
| * | Merge 5.5.24 back into main 5.5. | Joerg Bruehe | 2012-05-07 | 1 | -1/+2
    This is a weave merge, but without any conflicts. In 14 source files, the copyright year needed to be updated to 2012.
| | * | Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH | Georgi Kodinov | 2012-03-09 | 1 | -0/+1
    Moved the preparation of the variables that receive the output from SELECT INTO from execution time (JOIN::execute) to compile time (JOIN::prepare). This ensures that if the same variable is used in the SELECT part of SELECT INTO, it will be properly marked as non-const for this query. Test case added. Used a proper fast iterator.
| * | merge bug11754117-45670 fixes from 5.1. | Andrei Elkin | 2012-04-21 | 1 | -0/+2
| |\ \ \ | | |/ / | |/| / | | |/
| | * BUG#11754117 incorrect logging of INSERT into auto-increment | Andrei Elkin | 2012-04-20 | 1 | -0/+2
    BUG#11761686 insert_id event is not filtered. Two issues are covered.

    INSERT into an autoincrement field which is not the first part of the composed primary key is unsafe by autoincrement logging design. The case is specific to the MyISAM engine because InnoDB does not allow such a table definition. However no warning was given and no row-format logging in MIXED mode was done, and that is fixed.

    Int-, Rand- and User-var log events were not filtered along with their parent query, which made it possible for them to corrupt the execution context of the following query. Fixed by deferring their execution until the parent query.

    ****** Bug#11754117 Post review fixes.
| * | BUG#11763882 - 56652: VALGRIND WARNINGS FOR MEMORY LEAK IN ALTER TABLE AND/OR PLUGIN/SEMISYNC | Sergey Vojtovich | 2011-11-10 | 1 | -2/+3
    If a plugin was uninstalled, thread-local values for plugin variables of string type with the PLUGIN_VAR_MEMALLOC flag were not freed. With this patch these variables are freed when the thread is done (like all other variables).
| * | Bug#11763573 - 56299: MUTEX DEADLOCK WITH COM_BINLOG_DUMP, BINLOG PURGE, AND PROCESSLIST/KILL | Andrei Elkin | 2011-10-27 | 1 | -3/+0
    The bug case is similar to the one fixed earlier in bug_49536. A deadlock involving LOCK_log appears to be possible because the purge-running thread holds LOCK_log even though there is no need to do so, a fact that was exploited by the earlier bug fixes. Fixed with a small reengineering of rotate_and_purge(): adding two new methods and setting up a policy to execute those instead of the former rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED). The policy for using rotate() and purge() is that if the caller acquires LOCK_log itself, it should call rotate(), release the mutex and run purge(). A side effect of this patch is refining the error message of bug@11747416 to print the whole path.
| * | 5.1 -> 5.5 merge | Sergey Glukhov | 2011-08-02 | 1 | -7/+0
| |\ \ | | |/
| | * Bug#11766594 59736: SELECT DISTINCT.. INCORRECT RESULT WITH DETERMINISTIC FUNCTION IN WHERE C | Sergey Glukhov | 2011-08-02 | 1 | -7/+0
    There is an optimization of DISTINCT in JOIN::optimize() which depends on the THD::used_tables value. Each SELECT statement inside an SP resets the used_tables value (see mysql_select()), which leads to a wrong result. The fix is to replace THD::used_tables with LEX::used_tables.
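
    A conceptual sketch of the move described above; THD, LEX and table_map are real server names, but the fields and the accessor below are simplified illustrations, not the actual optimization.

        typedef unsigned long long table_map;   // bitmap of referenced tables

        struct LEX_sketch { table_map used_tables = 0; };   // per statement
        struct THD_sketch { LEX_sketch *lex = nullptr; };   // per connection

        // Before: the DISTINCT optimization consulted a per-connection value
        // that every SELECT inside a stored procedure reset, clobbering the
        // outer query's state. After: it consults the per-statement value
        // owned by the LEX of the query being optimized.
        table_map used_tables_for_distinct(const THD_sketch *thd) {
          return thd->lex->used_tables;
        }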
| | * Updated/added copyright headers | Kent Boortz | 2011-07-03 | 1 | -0/+1
| | |\
| | | * Bug#11764633 : 57491: THD->MAIN_DA.IS_OK() ASSERT IN EMBEDDED | Mayank Prasad | 2011-05-18 | 1 | -0/+1
    Issue: While running the embedded server, if the client issues the TEE command (\T foo/bar) and the "foo/bar" directory doesn't exist, it is supposed to give an error, but it was aborting instead. This was happening because the wrong error handler was being called.

    Solution: Modified the calls to use the correct error handler. In the embedded server case there are two error handlers (client and server), which are supposed to be called based on which context the code is in. If it is in client context, the client error handler should be called, otherwise the server one.

    Test case: Test case automation is not possible, as the current (following) code doesn't allow '\T' to be executed from the command line (or a command read from a file):

    [client/mysql.cc]
    ...
    static int
    com_tee(String *buffer __attribute__((unused)),
            char *line __attribute__((unused)))
    {
      char file_name[FN_REFLEN], *end, *param;

      if (status.batch)   << THIS IS TRUE WHILE EXECUTING FROM COMMAND LINE.
        return 0;
    ...

    So, not adding a test case in GA. Will add a test case in mysql-trunk after removing the above code so that this can be properly tested before GA.
| | * | Updated/added copyright headers | Kent Boortz | 2011-06-30 | 1 | -2/+4
| | |\ \ | | | |/ | | |/|
| | | * Updated/added copyright headers | Kent Boortz | 2011-06-30 | 1 | -2/+2
| | | |
| * | | Bug#12727287: Maintainer mode compilation fails with gcc 4.6 | Davi Arnaut | 2011-07-07 | 1 | -8/+0
    GCC 4.6 has a new -Wunused-but-set-variable flag, enabled by -Wall, that causes GCC to emit a warning whenever a local variable is assigned to but otherwise unused (aside from its declaration). Since maintainer mode uses -Wall and -Werror, source code which triggers these warnings is rejected; that is, these warnings become hard errors. The solution is to fix the code which triggers these specific warnings. In most cases this is a welcome cleanup, as code which triggers this warning is probably dead anyway.
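
    A minimal example, not taken from the server source, of the pattern that GCC 4.6 rejects under -Wall -Werror:

        int f(const int *p) {
          int unused = 0;   // -Wunused-but-set-variable: assigned, never read
          unused = *p;
          return *p + 1;    // the cleanup is simply to delete the dead variable
        }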
| * | | Fix for bug #12641342 - "61401: UPDATE PERFORMANCE DEGRADES GRADUALLY IF A TRIGGER EXISTS" | Dmitry Lenev | 2011-06-16 | 1 | -1/+13
    This bug manifested itself in two ways:
    - Firstly, execution of any data-changing statement which required prelocking (i.e. involved a stored function or trigger) as part of a transaction slowed down a bit all subsequent statements in this transaction. So performance in a transaction which periodically involved such statements gradually degraded over time.
    - Secondly, execution of any data-changing statement which required prelocking as part of a transaction prevented a concurrent FLUSH TABLES WITH READ LOCK from proceeding until the end of the transaction instead of the end of the particular statement.

    The problem was caused by incorrect handling of the metadata lock used in the FTWRL implementation for statements requiring prelocked mode. Each statement which changes data acquires a global IX lock with STATEMENT duration. This lock is supposed to block concurrent FTWRL from proceeding until the statement ends. When entering prelocked mode, the durations of all metadata locks acquired so far were changed to EXPLICIT, to prevent substatements from releasing these locks. When prelocked mode was left, the durations of metadata locks were changed to TRANSACTIONAL (with a few exceptions) so they can be properly released at the end of the transaction. Unfortunately, this meant that the global IX lock blocking FTWRL with STATEMENT duration was moved to TRANSACTIONAL duration after execution of a statement requiring prelocking. Since each subsequent statement that required prelocking and tried to acquire the global IX lock with STATEMENT duration got a new instance of MDL_ticket, which was later moved to TRANSACTIONAL duration, this led to unwarranted growth of the number of tickets with TRANSACTIONAL duration in this connection's MDL_context. As a result, searching for other tickets in it became slow and acquisition of other metadata locks by this transaction started to hog the CPU. Moreover, this also meant that after execution of a statement requiring prelocking, concurrent FTWRL was blocked until the end of the transaction instead of the end of the statement.

    This patch solves the problem by not moving locks to EXPLICIT duration when a thread enters prelocked mode (unless it is a real LOCK TABLES mode). This step turned out to be not really necessary, as substatements don't try to release metadata locks. Consequently, the global IX lock blocking FTWRL keeps its STATEMENT duration and is properly released at the end of the statement, and the above issue goes away.
| * | | merge | Mikael Ronström | 2011-05-19 | 1 | -0/+1
| |\ \ \