Commit message (Author, Age, Files changed, Lines removed/added)
* Bug #56709: Memory leaks at running the 5.1 test suite (Alexey Kopytov, 2010-09-22, 8 files, -19/+61)
  Fixed a number of memory leaks discovered by valgrind.
* merge mysql-5.1-bugteam (local) --> mysql-5.1-bugteam (Alfranio Correia, 2010-09-17, 5 files, -15/+19)
  * BUG#55961 Savepoint Identifier should be enclosed with backticks (Alfranio Correia, 2010-09-02, 5 files, -15/+19)
    Added backticks to the savepoint identifier.
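    For reference, MySQL quotes an identifier by wrapping it in backticks and doubling any
    backtick that occurs inside the name. The helper below is a standalone C++ sketch of that
    rule; the function name is hypothetical and it is not the code touched by this patch.

      #include <string>

      // Quote an identifier MySQL-style: wrap it in backticks and double
      // any backtick that appears inside the name.
      std::string quote_identifier(const std::string &name) {
        std::string out;
        out.reserve(name.size() + 2);
        out += '`';
        for (char c : name) {
          if (c == '`') out += '`';   // escape by doubling
          out += c;
        }
        out += '`';
        return out;
      }

      // quote_identifier("my`savepoint") yields `my``savepoint`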
* Bug#50402 Optimizer producing wrong results when using Index Merge on InnoDB (Sergey Glukhov, 2010-09-16, 4 files, -3/+60)
  The subselect executes twice, at the JOIN::optimize stage and at the JOIN::execute stage. At the
  optimize stage the InnoDB prebuilt struct, which is used for the retrieval of column values, is
  initialized in ha_innobase::index_read() and prebuilt->sql_stat_start is true. After
  QUICK_ROR_INTERSECT_SELECT has finished its job it restores the read_set/write_set bitmaps to
  their initial values and deactivates one of the handlers it used in JOIN::cleanup (this is the
  case when the original handler is reused as one of the handlers required by the
  QUICK_ROR_INTERSECT_SELECT object). On the second subselect execution the inactive handler is
  activated in QUICK_RANGE_SELECT::reset, file->ha_index_init(). In ha_index_init the InnoDB
  prebuilt struct is reinitialized with inappropriate read_set/write_set bitmaps; further
  reinitialization in ha_innobase::index_read() does not happen because prebuilt->sql_stat_start
  is false. This leads to partial retrieval of the required field values, and we get a mix of
  field values from different records in the record buffer. The fix is to reset the
  read_set/write_set bitmaps, as these values are required for proper initialization of the
  internal InnoDB struct used for the retrieval of column values (see build_template(),
  ha_innodb.cc).
* Bug #54606 innodb fast alter table + pack_keys=0 prevents adding new indexes (Magne Mahre, 2010-09-16, 3 files, -0/+25)
  A fast ALTER TABLE requires that the existing (old) table and indices are unchanged (i.e. only
  new indices can be added). To verify this, the layout and flags of the old table/indices are
  compared for equality with the new ones. The PACK_KEYS option is a no-op in InnoDB, but the
  flag exists and is used in the table compare. We need to check this (table) option flag before
  deciding whether an index should be packed or not. If the table has explicitly set PACK_KEYS to
  0, the created indices should not be marked as packed/packable.
* Fixed bug#42503 - "Lost connection" errors when using compression protocol (Dmitry Shulga, 2010-09-16, 2 files, -6/+65)
  The loss of connection was caused by a malformed packet sent by the server when the query cache
  was in use. When storing data in the query cache, the query cache memory allocation algorithm
  tends to reduce the number of memory blocks needed to store a result set, up to finally storing
  the entire result set in a single block. With a significant result set, this memory block can
  turn out to be quite large - 30, 40 MB and more. When such a result set was sent to the client,
  the entire memory block was compressed and written to the network as a single network packet.
  However, the length of a network packet is limited to 0xFFFFFF (16MB), since the packet format
  only allows 3 bytes for the packet length. As a result, a malformed, overly large packet with a
  truncated length would be sent to the client and break the client/server protocol.
  The solution is, when sending result sets from the query cache, to ensure that the data is
  chopped into network packets of size <= 16MB, so that the packet length is never corrupted.
  This solution, however, has a shortcoming: since the result set is still stored in the query
  cache as a single block, at the time of sending we have lost the boundaries of the individual
  logical packets (one logical packet = one row of the result set) and thus can end up sending a
  truncated logical packet in a compressed network packet. As a result, the client may need more
  memory than max_allowed_packet to keep both the truncated last logical packet and the
  compressed next packet. This never (or in practice never) happens without compression, since
  without compression it is very unlikely that a) a truncated logical packet would remain on the
  client when it is time to read the next packet, and b) a subsequent logical packet being read
  would be so large that size-of-new-packet + size-of-old-packet-tail > max_allowed_packet. To
  remedy this issue, we send data in 1MB packets: below the current client default of 16MB for
  max_allowed_packet, but large enough to avoid unnecessary overhead from too many syscalls per
  result set.
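  The framing constraint behind this bug is that every packet in the MySQL client/server protocol
  starts with a 3-byte little-endian payload length plus a 1-byte sequence number, so a single
  packet can carry at most 0xFFFFFF bytes. The sketch below is a standalone illustration of
  splitting a large buffer into such packets (1MB chunks, as in the fix); the function name and
  chunk size are illustrative and it is not the server's net_write_buff() code. It also ignores
  protocol corner cases such as payloads that are exact multiples of the maximum packet size.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Split a payload into MySQL-style packets: 3-byte little-endian length,
    // 1-byte sequence number, then the chunk itself.
    std::vector<uint8_t> frame_packets(const std::vector<uint8_t> &payload,
                                       size_t chunk_size = 1024 * 1024) {
      std::vector<uint8_t> wire;
      uint8_t seq = 0;
      for (size_t pos = 0; pos < payload.size(); pos += chunk_size) {
        size_t n = std::min(chunk_size, payload.size() - pos);
        wire.push_back(static_cast<uint8_t>(n & 0xFF));          // length byte 0
        wire.push_back(static_cast<uint8_t>((n >> 8) & 0xFF));   // length byte 1
        wire.push_back(static_cast<uint8_t>((n >> 16) & 0xFF));  // length byte 2
        wire.push_back(seq++);                                   // sequence number
        wire.insert(wire.end(), payload.begin() + pos, payload.begin() + pos + n);
      }
      return wire;
    }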
* Updated build_mccge.sh and added support for more CPUs in check-cpu (Mikael Ronstrom, 2010-09-16, 2 files, -163/+518)
* merge (Mattias Jonsson, 2010-09-13, 3 files, -0/+55)
  * Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and ORDER BY computed col (Martin Hansson, 2010-09-13, 3 files, -0/+55)
    GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an index on the first
    table in the query is used, and that index satisfies ordering according to the GROUP BY
    clause, the query optimizer estimates the number of tuples that need to be read from this
    index. If there is a LIMIT clause, table statistics on tables following this 'sort table' are
    employed. There may be a separate ORDER BY clause, however, which mandates reading the whole
    'sort table' anyway. But the previous estimate was left untouched. Fixed by removing the
    estimate from EXPLAIN output if GROUP BY is used in conjunction with an ORDER BY clause that
    mandates using a temporary table.
* merge (Mattias Jonsson, 2010-09-13, 12 files, -316/+465)
  * merge (Mattias Jonsson, 2010-09-10, 4 files, -15/+108)
    * Bug#55458: Partitioned MyISAM table gets crashed by multi-table update (Mattias Jonsson, 2010-09-07, 2 files, -12/+30)
      Updated according to reviewers' comments.
    * Bug#55458: Partitioned MyISAM table gets crashed by multi-table update (Mattias Jonsson, 2010-08-10, 4 files, -3/+78)
      The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached but the
      ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) call was not. The solution is to also cache the other
      call and forward it when moving to a new partition to scan.
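      The sketch below illustrates the cache-and-replay idea with invented types (it is not the
      partitioning handler's real interface): hints received from the executor are recorded and
      replayed to the next partition's handler when the scan moves on, instead of forwarding only
      the caching hint.

        #include <vector>

        enum ExtraHint { HINT_CACHE, HINT_PREPARE_FOR_UPDATE };

        struct StorageHandler {                    // stand-in for a per-partition handler
          virtual void extra(ExtraHint hint) = 0;
          virtual ~StorageHandler() = default;
        };

        class PartitionScanner {
          std::vector<ExtraHint> cached_hints_;    // every hint seen so far
          StorageHandler *current_ = nullptr;

         public:
          void extra(ExtraHint hint) {             // called once by the executor
            cached_hints_.push_back(hint);
            if (current_) current_->extra(hint);
          }
          void switch_to_partition(StorageHandler *next) {
            current_ = next;
            for (ExtraHint h : cached_hints_)      // replay cached hints to the new handler
              current_->extra(h);
          }
        };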
  * merge (Mattias Jonsson, 2010-09-10, 9 files, -301/+357)
    * Bug#53806: Wrong estimates for range query in partitioned MyISAM table (Mattias Jonsson, 2010-08-27, 6 files, -58/+58)
      Bug#46754: 'rows' field doesn't reflect partition pruning. Update of test results after
      fixing the above bugs (fix in separate commit).
    * Bug#53806: Wrong estimates for range query in partitioned MyISAM table (Mattias Jonsson, 2010-08-26, 3 files, -243/+299)
      Bug#46754: 'rows' field doesn't reflect partition pruning. The EXPLAIN result in the 'rows'
      field was evaluated to the number of rows when the table was opened (not from the table
      cache), and only the partitions left after pruning were updated with their correct number
      of rows. The evaluation of the 'rows' field used handler::records(), which is a potentially
      expensive call and ignores partition pruning. The fix is to use the handler's stats.records
      after updating it with ::info(HA_STATUS_VARIABLE) instead.
* Bug #55779: select does not work properly in mysql server Version "5.1.42 SUSE MySQL RPM" (Gleb Shchepa, 2010-09-13, 3 files, -2/+42)
  When a query used a DATE or DATETIME value formatted differently from "yyyy-mm-dd HH:MM:SS", a
  greater-or-equal '>=' condition matched only greater values in an indexed TIMESTAMP column. The
  problem was introduced by the fix for bug 46362 and partially solved (for DATE and DATETIME
  columns only) by the fix for bug 47925. The stored_field_cmp_to_item function has been modified
  to take TIMESTAMP columns into account in the same way as DATE and DATETIME columns.
* merge 55178, 55413, 56383 (Bjorn Munch, 2010-09-10, 5 files, -6/+50)
  * Bug #56383 provide option to restart mysqld after each mtr test (Bjorn Munch, 2010-08-31, 1 file, -0/+8)
    Added --force-restart.
  * merge 55413 (Bjorn Munch, 2010-08-30, 3 files, -2/+21)
    * Bug #55413 mysqltest gives parse error for lines matching "^let.*\\.*;$" (Bjorn Munch, 2010-08-10, 3 files, -2/+21)
      Allow escaped quotes also in statements not starting with --, but do not support a single
      unescaped ' or `.
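      The sketch below illustrates the parsing behaviour described above; it is a standalone
      stand-in, not mysqltest's actual lexer. A quote preceded by a backslash neither opens nor
      closes a string, so an escaped quote inside a statement no longer confuses the scan for the
      terminating ';'.

        #include <string>

        // Return true if no quoted string is still open at the end of the line,
        // treating a backslash as escaping the following character.
        bool ends_outside_quotes(const std::string &line) {
          char open_quote = '\0';
          for (size_t i = 0; i < line.size(); ++i) {
            char c = line[i];
            if (c == '\\') { ++i; continue; }      // skip the escaped character
            if (open_quote) {
              if (c == open_quote) open_quote = '\0';
            } else if (c == '\'' || c == '"' || c == '`') {
              open_quote = c;
            }
          }
          return open_quote == '\0';
        }

        // ends_outside_quotes("let $x= \\';")  -> true  (escaped quote is ignored)
        // ends_outside_quotes("let $x= ';")    -> false (unescaped quote stays open)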
  * Bug #55178 Set timeout on test-to-test basis (Bjorn Munch, 2010-08-30, 2 files, -4/+21)
    Allow --testcase-timeout=<mins> to be set in a test's .opt file.
* Addendum patch for bug #54190 (Alexey Kopytov, 2010-09-09, 1 file, -1/+4)
  The patch caused some test failures when merged to 5.5 because, unlike 5.1, it utilizes
  Item_cache_row to actually cache row values. The problem was that Item_cache_row::bring_value()
  essentially did nothing. In particular, it did not update its null_value, so all Item_cache_row
  objects always had their null_value set to TRUE. This went unnoticed previously, but now that
  Arg_comparator::compare_row() actually depends on the row's null_value to evaluate the
  comparison, the problem has surfaced. Fixed by calling the underlying item's bring_value() and
  updating null_value in Item_cache_row::bring_value(). Since the problem also exists in 5.1 code
  (albeit hidden, since the relevant code is not used anywhere), the addendum patch is against
  5.1.
* Automerge (Alexey Kopytov, 2010-09-09, 5 files, -3/+55)
  * Bug #54190: Comparison to row subquery produces incorrect result (Alexey Kopytov, 2010-09-09, 5 files, -3/+55)
    Row subqueries producing no rows were not handled as UNKNOWN values in row comparison
    expressions. That was a result of the following two problems:
    1. Item_singlerow_subselect did not mark the resulting row value as NULL/UNKNOWN when no rows
       were produced.
    2. Arg_comparator::compare_row() did not take into account that a whole argument may be NULL
       rather than just individual scalar values.
    Before bug#34384 was fixed, the above problems were hidden because an uninitialized (i.e.
    without any stored value) cached object would appear as NULL for scalar values in a row
    subquery returning an empty result. After the fix, Arg_comparator::compare_row() would try to
    evaluate uninitialized cached objects. Fixed by removing the aforementioned problems.
* Bug#51070: Query with a NOT IN subquery predicate returns a wrong result set (Martin Hansson, 2010-09-07, 4 files, -32/+243)
  The EXISTS transformation has additional switches to catch the known corner cases that appear
  when transforming an IN predicate into EXISTS. Guarded conditions are used which are
  deactivated when a NULL value is seen in the outer expression's row. When the inner query block
  supplies NULL values, however, they are filtered out because no distinction is made between the
  guarded conditions; guarded NOT x IS NULL conditions in the HAVING clause that filter out NULL
  values cannot be deactivated in isolation from those that match values or from the outer
  expression or NULLs. The above problem is handled by making the guarded conditions remember
  whether they have rejected a NULL value or not, and index access methods take this into account
  as well. The bug consisted of:
  1) not resetting the property for every nested-loop iteration on the inner query's result;
  2) not propagating the NULL result properly from the inner query to the IN optimizer;
  3) a hack that may or may not have been needed at some point. According to a comment it was
     aimed to fix #2 by returning NULL when FALSE was actually the result. This caused failures
     when #2 was properly fixed. The hack is now removed.
  The fix resolves all three points.
* Fixed bug #55421 - Protocol::end_statement(): Assertion `0' on multi-table UPDATE IGNORE (Dmitry Shulga, 2010-09-07, 3 files, -0/+78)
  The problem was that if there was an active SELECT statement during trigger execution, an error
  raised during the execution could cause a crash. The fix is to temporarily reset
  LEX::current_select before trigger execution and restore it afterwards. This way errors raised
  during the trigger execution are processed as if there were no active SELECT.
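  The save/reset/restore pattern used by the fix is sketched below in generic form. The guard
  type, its name, and the usage comment are illustrative only; the real code manipulates
  LEX::current_select directly rather than through such a helper.

    #include <utility>

    // RAII guard: remember the current value of a slot, install a temporary
    // value, and restore the original when the scope ends.
    template <typename T>
    class ScopedReset {
      T &slot_;
      T saved_;

     public:
      explicit ScopedReset(T &slot, T temporary_value = T())
          : slot_(slot), saved_(std::move(slot)) {
        slot_ = std::move(temporary_value);
      }
      ~ScopedReset() { slot_ = std::move(saved_); }   // restore on scope exit
    };

    // Hypothetical usage around trigger execution:
    //   { ScopedReset<SELECT_LEX *> guard(lex->current_select, nullptr);
    //     execute_trigger(...); }   // current_select is restored here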
* Bug#54543: update ignore with incorrect subquery leads to assertion failure: inited==INDEX (Martin Hansson, 2010-09-07, 3 files, -10/+49)
  When an error occurred while sending the data in a temporary table, no cleanup was performed.
  This caused a failed assertion when different access methods were used for populating the table
  vs. retrieving the data from it, if IGNORE was specified and sql_safe_updates = 0. In that case
  execution continues, but the handler expects to continue with the access method used for row
  retrieval. Fixed by doing the cleanup even if errors occur.
* Fixed bug #47485 - mysql_store_result returns a not NULL result set for a prepared statement (Dmitry Shulga, 2010-09-07, 4 files, -4/+110)
* Bug#39932 "create table fails if column for FK is in different case than in corr index" (Magne Mahre, 2010-09-01, 3 files, -1/+41)
  The server was unable to find an existing or explicitly created supporting index for a foreign
  key if the corresponding statement clause used field names in a different case than the one
  used in the key specification, and created yet another supporting index. In cases when the name
  of the constraint (and thus the name of the generated index) was the same as the name of an
  existing/explicitly created index, this led to a duplicate key name error. The problem was
  that, unlike all other code, Key_part_spec::operator==() compared field names in a
  case-sensitive fashion. As a result, the routines responsible for getting rid of redundant
  generated supporting indexes for foreign keys were not working properly for versions of field
  names using different cases. (Backported from mysql-trunk.)
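  The point of the fix is that identifiers such as field names must compare equal regardless of
  letter case. The snippet below is only a minimal ASCII sketch of that idea; the server itself
  uses its charset-aware collation routines, not plain tolower() folding.

    #include <cctype>
    #include <string>

    // Case-insensitive comparison of two identifiers (ASCII only, for illustration).
    bool field_names_equal(const std::string &a, const std::string &b) {
      if (a.size() != b.size()) return false;
      for (size_t i = 0; i < a.size(); ++i)
        if (std::tolower(static_cast<unsigned char>(a[i])) !=
            std::tolower(static_cast<unsigned char>(b[i])))
          return false;
      return true;
    }

    // field_names_equal("CustomerId", "customerid") == true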
* Bug#55846: Link tests fail on Windows - my_compiler.h missing (Davi Arnaut, 2010-08-24, 2 files, -2/+4)
  Make the my_compiler.h header, like my_attribute.h, part of the distribution. This is required
  due to the dependency of the former on the latter (which can undefine __attribute__).
* automerge local --> 5.1-bugteam (bug 53034) (Gleb Shchepa, 2010-08-31, 3 files, -1/+39)
  * Bug #53034: Multiple-table DELETE statements not accepting "Access compatibility" syntax (Gleb Shchepa, 2010-08-31, 3 files, -1/+39)
    The "wild" "DELETE FROM table_name.* ... USING ..." syntax for multi-table DELETE statements
    is documented but was lost in the fix for bug 30234. The table_ident_opt_wild parser rule has
    been added to restore the lost syntax.
* Automerge (Ramil Kalimullin, 2010-08-30, 4 files, -21/+25)
  * Automerge (Alexey Kopytov, 2010-08-30, 4 files, -21/+25)
    * Bug #54465: assert: field_types == 0 || field_types[field_pos] == MYSQL_TYPE_LONGLONG (Alexey Kopytov, 2010-08-27, 4 files, -21/+25)
      A MIN/MAX() function with a subquery as its argument could lead to a debug assertion on
      debug builds or wrong data on release ones. The problem was a combination of the following
      factors:
      - Item_sum_hybrid::fix_fields() might use the argument (args[0]) to calculate
        'hybrid_field_type', which was later used to decide how the data should be sent to the
        client.
      - Item_sum::make_field() might use the argument again to calculate the field's type when
        sending result set metadata to the client.
      - The argument could be changed in between these two calls via Item::set_arg(), leading to
        inconsistent metadata being reported.
      Here is what was happening for the bug's test case:
      1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type as MYSQL_TYPE_LONGLONG based
         on args[0], which is an Item::SUBSELECT_ITEM at that time.
      2. A temporary table is created to execute the query. create_tmp_field_from_item() creates
         a Field_long object according to the subselect's max_length.
      3. The subselect item in Item_sum_hybrid is replaced by the Item_field object referencing
         the newly created Field_long.
      4. Item_sum::make_field() rightfully returns the MYSQL_TYPE_LONG type when calculating the
         result set metadata.
      5. When sending the actual data, Item::send() relies on the virtual field_type() function,
         which in our case returns the previously calculated hybrid_field_type ==
         MYSQL_TYPE_LONGLONG.
      It looks like the only solution is to never refer to the argument's metadata after the
      result metadata has been calculated in fix_fields(), since the argument itself may be
      different by then. In this sense, Item_sum::make_field() should never be used, because it
      may rely on the argument's metadata and is only called after fix_fields(). The "default"
      implementation in Item::make_field() should be used instead, as it relies only on
      field_type() but not on the argument's type. Fixed by removing Item_sum::make_field() so
      that the superclass implementation Item::make_field() is always used.
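      The design rule stated above (decide the result metadata once, at fix time, and never
      derive it from the argument again) can be sketched with invented types as follows; this is
      a conceptual illustration, not the Item class hierarchy.

        enum class WireType { LONG, LONGLONG };

        struct Expr {                       // stand-in for an Item argument
          virtual WireType type() const = 0;
          virtual ~Expr() = default;
        };

        class MinMaxExpr {
          Expr *arg_;
          WireType cached_type_{};          // decided once in fix(), reused afterwards

         public:
          explicit MinMaxExpr(Expr *arg) : arg_(arg) {}
          void fix() { cached_type_ = arg_->type(); }           // only read of arg_ metadata
          void replace_arg(Expr *new_arg) { arg_ = new_arg; }   // e.g. with a temp-table field
          WireType metadata() const { return cached_type_; }    // never consults arg_ again
        };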
* Fix for bug #51875: crash when loading data into geometry function polyfromwkb (Ramil Kalimullin, 2010-08-30, 3 files, -2/+21)
  Check the number of line strings in the incoming polygon data (WKB) and the number of points in
  the incoming linestring WKB.
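  The kind of sanity check the fix adds can be illustrated with a standalone WKB walker: before
  trusting the ring and point counts embedded in polygon WKB, verify that they are non-zero and
  that the buffer really contains that much data. Little-endian encoding is assumed for brevity,
  and this is not the server's parser.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    static bool read_u32(const uint8_t *&p, const uint8_t *end, uint32_t &out) {
      if (end - p < 4) return false;
      std::memcpy(&out, p, 4);
      p += 4;
      return true;
    }

    bool polygon_wkb_is_sane(const uint8_t *wkb, size_t len) {
      const uint8_t *p = wkb, *end = wkb + len;
      uint32_t type, num_rings;
      if (end - p < 1) return false;
      ++p;                                                       // byte-order marker
      if (!read_u32(p, end, type) || type != 3) return false;    // 3 = WKB Polygon
      if (!read_u32(p, end, num_rings) || num_rings == 0) return false;
      for (uint32_t r = 0; r < num_rings; ++r) {
        uint32_t num_points;
        if (!read_u32(p, end, num_points) || num_points == 0) return false;
        size_t ring_bytes = static_cast<size_t>(num_points) * 16;   // 2 doubles per point
        if (static_cast<size_t>(end - p) < ring_bytes) return false;
        p += ring_bytes;
      }
      return true;
    }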
* Merge mysql-5.1-innodb -> mysql-5.1-bugteam (Vasil Dimov, 2010-08-28, 20 files, -136/+365)
  * Increment InnoDB Plugin version to 1.0.12 (Vasil Dimov, 2010-08-26, 1 file, -1/+1)
    InnoDB Plugin 1.0.11 has been released with MySQL 5.1.50.
  * Bug#55832: selects crash too easily when innodb_force_recovery>3 (Marko Mäkelä, 2010-08-24, 3 files, -40/+59)
    dict_update_statistics_low(): Create bogus statistics for those indexes that cannot be
    accessed because of the innodb_force_recovery setting.
    ha_innobase::info(): Calculate statistics for each index, even if innodb_force_recovery is
    set. Fill in bogus data for those indexes that are not accessed because of the
    innodb_force_recovery setting.
  * Bug#55832: selects crash too easily when innodb_force_recovery>3 (Marko Mäkelä, 2010-08-23, 2 files, -38/+55)
    dict_update_statistics_low(): Create bogus statistics for those indexes that cannot be
    accessed because of the innodb_force_recovery setting.
    ha_innobase::info(): Calculate statistics for each index, even if innodb_force_recovery is
    set. Fill in bogus data for those indexes that are not accessed because of the
    innodb_force_recovery setting.
  * Fix Bug #54538 - use of exclusive innodb dictionary lock limits performance (Sunny Bains, 2010-08-20, 5 files, -33/+160)
    This patch doesn't get rid of the need to acquire the dict_sys->mutex, but it reduces the
    need to keep the mutex locked for the duration of the query to
    fsp_get_available_space_in_free_extents() from ha_innobase::info(). rb://390.
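    The general pattern behind such a fix is to shrink the critical section: copy the few fields
    that the mutex protects, release it, and only then run the expensive computation on the copy.
    The sketch below uses invented names and std::mutex; it is not the InnoDB code.

      #include <mutex>

      struct TableStats { long n_rows; long data_size; };

      std::mutex dict_mutex;        // stand-in for dict_sys->mutex
      TableStats shared_stats;      // protected by dict_mutex

      // Placeholder for the expensive, lock-free part of the work.
      long slow_free_space_scan(const TableStats &snapshot) {
        return snapshot.data_size - snapshot.n_rows;
      }

      long report_free_space() {
        TableStats snapshot;
        {
          std::lock_guard<std::mutex> guard(dict_mutex);   // short critical section
          snapshot = shared_stats;
        }
        return slow_free_space_scan(snapshot);             // mutex already released
      }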
  * Fix Bug #55027: assertion: mutex_own(&dict_sys->mutex) in dict_table_get_on_id() (Sunny Bains, 2010-08-20, 2 files, -4/+2)
    The callers should indicate whether the dictionary is locked or not using the
    trx->dict_operation_lock_mode == RW_X_LATCH mode. Checking explicitly for system tables is
    unnecessary. Approved by Marko on IRC.
  * Fix bug#55699 - Assertion failure in innodb plugin with large number of threads (Sunny Bains, 2010-08-20, 1 file, -1/+2)
    Fix a debug assertion that was missed in svnrev:2380 (fix for Bug# 35352). Approved by Marko
    on IRC.
  * Bug#55626: MIN and MAX reading a delete-marked record from secondary index (Marko Mäkelä, 2010-08-18, 1 file, -3/+12)
    Remove a bogus debug assertion that triggered the bug. Add assertions precisely where records
    must not be delete-marked, and a comment to clarify when the record is allowed to be
    delete-marked.
  * Merge mysql-5.1-innodb from bk-internal into my local tree (Vasil Dimov, 2010-08-17, 11 files, -15/+71)
    * Address bug #55465 ERROR 1280 (42000): Incorrect index name '<index name>', adding a couple of FK-related messages (Jimmy Yang, 2010-08-06, 3 files, -4/+32)
      rb://409, approved by Sunny Bains.
    * Backport of revno 3148 from mysql-innodb-trunk (Inaam Rana, 2010-08-05, 4 files, -0/+10)
      Currently we do a full validation of the AHI (adaptive hash index) whenever CHECK TABLE is
      run on any table. This patch restricts that full check to debug builds. Bug#55716,
      rb://423, approved by Marko.
    * Backport "NULL pointer check for ut_free()" from mysql-trunk-innodb to the mysql-5.1-innodb plugin (Jimmy Yang, 2010-08-03, 3 files, -4/+13)
      Fixes bug #55627: segv in ut_free / pars_lexer_close / innobase_shutdown with
      innodb-use-sys-malloc=0.
    * Fix Bug #55382 Assignment with SELECT expressions takes unexpected S locks in READ COMMITTED (Jimmy Yang, 2010-08-01, 3 files, -8/+17)
      rb://410, approved by Sunny Bains.