* 5.5 without InnoDB/XtraDB changes
* THD::save_prep_leaf_list was set to true by multi-table UPDATE statements
  with mergeable selects and never reset. Make every statement reset it at the
  start.
* Backport from 5.6 to 5.5.
  This makes filesort robust against miscellaneous variants of ORDER BY /
  GROUP BY on columns/expressions with zero length.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
AVOID DEADLOCK AFTER RESTORE
Analysis
--------
Accessing the restored NDB table in an active multi-statement
transaction was resulting in deadlock found error.
MySQL Server needs to discover metadata of NDB table from
data nodes after table is restored from backup. Metadata
discovery happens on the first access to restored table.
Current code mandates this statement to be the first one
in the transaction. This is because discover needs exclusive
metadata lock on the table. Lock upgrade at this point can
lead to MDL deadlock and the code was written at the time
when MDL deadlock detector was not present. In case when
discovery attempted in the statement other than the first
one in transaction ER_LOCK_DEADLOCK error is reported
pessimistically.
Fix:
---
Removed the constraint as any potential deadlock will be
handled by deadlock detector. Also changed code in discover
to keep metadata locks of active transaction.
Same issue was present in table auto repair scenario. Same
fix is added in repair path also.
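  As an illustration only (not part of the original commit message), a minimal
  sketch of the scenario, assuming a table t_restored that was restored from
  an NDB backup and an unrelated table t_other:

      BEGIN;
      SELECT * FROM t_other;     -- the transaction already has a statement
      SELECT * FROM t_restored;  -- first access triggers metadata discovery;
                                 -- previously this failed with ER_LOCK_DEADLOCK,
                                 -- now discovery works mid-transaction
      COMMIT;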
* 5.5 with our InnoDB changes
* Conflicts:
    client/mysql_upgrade.c
    mysql-test/r/func_misc.result
    mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
    mysql-test/suite/innodb/r/innodb-fk.result
    mysql-test/t/subselect_sj_mat.test
    sql/item.cc
    sql/item_func.cc
    sql/log.cc
    sql/log_event.cc
    sql/rpl_utility.cc
    sql/slave.cc
    sql/sql_class.cc
    sql/sql_class.h
    sql/sql_select.cc
    storage/innobase/dict/dict0crea.c
    storage/innobase/dict/dict0dict.c
    storage/innobase/handler/ha_innodb.cc
    storage/xtradb/dict/dict0crea.c
    storage/xtradb/dict/dict0dict.c
    storage/xtradb/handler/ha_innodb.cc
    vio/viosslfactories.c
* execution of PS
  Correct fix for this bug.
  The problem was that Item_func_group_concat() was calling setup_order(),
  passing args as the second argument, ref_pointer_array, while
  ref_pointer_array should have free space at the end because setup_order()
  can append elements to it. In this particular case args[] elements were
  overwritten when setup_order() pushed new elements into ref_pointer_array.
* Cherry-pick from 10.0:
  commit 126523d1906727254ad0f887db688b60b23ebed6
  Author: Sergei Golubchik <serg@mariadb.org>
  Date:   Mon Feb 23 20:53:41 2015 +0100
  MDEV-6703 Add "mysqlbinlog --binlog-row-event-max-size" support
* executions)
  Alternative fix that doesn't cause a view.test crash in --ps:
  remember when an Item_ref was fixed right in the constructor and did not get
  a full Item_ref::fix_fields() call. Later, in PS/SP, after
  Item_ref::cleanup(), this knowledge is used to avoid doing a full
  fix_fields() for items that were never supposed to be fix_field'ed.
  Simplify the test case.
* execution of PS
  GROUP_CONCAT() with an ORDER BY column position may crash the server on PS
  re-execution. The problem was that the arguments array of GROUP_CONCAT() was
  adjusted to point to temporary elements (resolved ORDER BY fields) during
  the first execution. This patch expands rev. 08763096cb to restore the
  original arguments array as well. (See the illustration below.)
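  As an illustration only (not from the commit; table and column names are
  assumed), the kind of statement affected:

      CREATE TABLE t1 (a INT, b INT);
      INSERT INTO t1 VALUES (1,10),(2,20);
      PREPARE ps FROM 'SELECT GROUP_CONCAT(a ORDER BY 1) FROM t1 GROUP BY b';
      EXECUTE ps;            -- first execution adjusted args[] in place
      EXECUTE ps;            -- re-execution could crash before the fix
      DEALLOCATE PREPARE ps;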
* GET_LOCK() silently accepted negative values and NULL for the timeout.
  Fixed GET_LOCK() to issue a warning and return NULL in such cases (sketch
  below).
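  A minimal illustration of the fixed behaviour (not part of the commit; the
  lock name is arbitrary):

      SELECT GET_LOCK('my_lock', -1);    -- now warns and returns NULL
      SELECT GET_LOCK('my_lock', NULL);  -- now warns and returns NULL
      SHOW WARNINGS;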
* but never call after_rollback() or after_commit().
  The old code used pthread_setspecific() to store temporary data used by the
  thread. This is not safe with the thread pool, as the thread may change
  during the transaction. The fix is to save the data in the THD, which is
  guaranteed to be properly freed. Also fixed the code so that we don't do a
  malloc() for every transaction.
* field.cc
  - Fixed warning about overlapping memory copy (backport from 10.0)
  Item_subselect.cc
  - Fixed core dump in main.view
  - Problem was that thd->lex->current_select->master_unit()->item was not
    set, which caused a crash in mark_as_dependent
  sql/mysqld.cc
  - Got an error on shutdown as we were freeing a mutex before all THD objects
    were freed (~THD uses some mutexes). Fixed by freeing the THDs inside the
    mutex during shutdown.
  sql/log.cc
  - log_space_lock and LOCK_log were locked in an inconsistent order. Fixed by
    not holding log_space_lock around purge_logs.
  sql/slave.cc
  - Remove unnecessary log_space_lock
  - Move cond_broadcast inside the lock to ensure we don't miss the signal
* set to 1
  The fix is that if the slave has a different integer size than the master,
  it will assume the master has the same signed/unsigned modifier as the
  slave. This means that one can safely change, on the slave, an int to a
  bigint or an unsigned int to an unsigned bigint. Changing an unsigned int to
  a signed bigint will cause replication failures when the high bit of the
  unsigned int is set. We can't give an error if the signedness is different
  on the master and slave, as the binary log doesn't contain the signedness of
  the column on the master. (An illustrative sketch follows.)
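  As an illustration only (not from the commit; the table and column are
  assumed), for a table replicated from master to slave:

      -- on the master
      CREATE TABLE t1 (id INT UNSIGNED);
      -- on the slave: widening while keeping the signedness is safe
      ALTER TABLE t1 MODIFY id BIGINT UNSIGNED;
      -- on the slave: changing to a signed BIGINT is not safe; rows whose
      -- unsigned INT value has the high bit set will break replication
      -- ALTER TABLE t1 MODIFY id BIGINT;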
* STATUS while thread was ending
  Fixed by adding a marker that records whether the thread's statistics have
  already been added to the global counters.
* - Removed calls to current_thd
  - More DBUG_PRINT
  - Code style changes
  - Made some local functions static
  Ensure that calls to print_keyuse() are protected by a mutex so that all
  lines end up in the same debug packet.
* SELECT ... WHERE XX IN (SELECT YY)
  was transformed to something like:
  SELECT ... WHERE EXISTS(SELECT ... HAVING XX=YY)
  The bug was that for normal execution XX was fixed in the original outer
  SELECT context, while in PS it was fixed in the subquery context, and this
  confused the optimizer. Fixed by ensuring that XX is always fixed in the
  outer context.
* procedure/trigger that is repeatedly executed.
  This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590,
  MDEV-7581 and MDEV-7589.
  The problem was that select_lex->non_agg_fields was not properly reset for
  re-execution, which caused an overwrite of a random memory position.
  The fix was to move non_agg_fields from select_lex to JOIN, which is
  properly reset.
* 5.6.26
* with plugin-load-add that are already registered in mysql.plugin
  - issue just one error message, without the extra warning
  - don't abuse ER_UDF_EXISTS; instead add a proper error message for plugins
  - report started initialization for each plugin source
* Don't forget to set thd->lex->unit.insert_table_with_stored_vcol in
  mysql_load(); otherwise virtual columns will not be updated (see the sketch
  below).
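  As an illustration only (not from the commit; the table and the data file
  are placeholders):

      CREATE TABLE t1 (a INT, b INT AS (a * 2) PERSISTENT);
      LOAD DATA INFILE 'data.txt' INTO TABLE t1 (a);
      SELECT a, b FROM t1;   -- b must be computed and stored for loaded rows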
* binlog checksums
  The fix was to add a test in Query_log_event::Query_log_event(): if we are
  using CREATE ... SELECT, use the transactional cache, like we do on the
  master. This avoids using the statement cache (which doesn't have a
  checksum).
  Other things:
  - Removed the dummy call my_checksum(0L, NULL, 0)
  - More DBUG_PRINT
  - Cleaned up Log_event::need_checksum() to make it more readable (similar to
    MySQL 5.6)
  - Renamed a variable that was hiding another one in create_table_imp()
* FUNCTIONS/PRIVILEGES DIFFERENTLY'
  Fix for bug#11759114 - '51401: GRANT TREATS NONEXISTENT FUNCTIONS/PRIVILEGES
  DIFFERENTLY'.
  The problem was that an attempt to grant the EXECUTE or ALTER ROUTINE
  privilege on a stored procedure which didn't exist succeeded instead of
  returning an appropriate error, as happens in the similar situation for
  stored functions or tables (see the sketch after the conflict list).
  The code which handles granting of privileges on an individual routine calls
  the sp_exist_routines() function to check whether the routine exists and
  assumes that the 3rd parameter of the latter specifies whether it should
  check for the existence of a stored procedure or a function. In practice,
  this parameter had a completely different meaning and, as a result, the
  check was not done properly for stored procedures.
  This fix addresses the problem by bringing the sp_exist_routines() signature
  and code in line with the expectations of its caller.
  Conflicts:
    mysql-test/r/grant.result
    mysql-test/t/grant.test
    sql/sp.cc
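  As an illustration only (not from the commit; the database, procedure and
  user names are assumed):

      -- before the fix this silently succeeded even though the procedure does
      -- not exist; now it fails with an error, as it already did for
      -- functions and tables
      GRANT EXECUTE ON PROCEDURE test.no_such_proc TO 'someuser'@'localhost';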
* --gtid-ignore-duplicates was comparing sequence numbers as 32-bit, so after
  2**32 transactions things would start to fail.
* DECIMAL
* replication
  The --gtid-ignore-duplicates option was not working correctly with row-based
  replication. When a row event was completed, but before committing, there
  was a small window where another multi-source SQL thread could wrongly try
  to re-execute the same transaction, without properly ignoring the duplicate
  GTID. This would lead to duplicate key error or out-of-order GTID error or
  similar.
  Thanks to Matt Neth for reporting this and giving an easy way to reproduce
  the issue.
* in ha_delete_table()
  - only convert ENOENT and HA_ERR_NO_SUCH_TABLE to warnings
  - only return real error codes (that is, not ENOENT and not
    HA_ERR_NO_SUCH_TABLE)
  - intercept HA_ERR_ROW_IS_REFERENCED to generate the backward-compatible
    ER_ROW_IS_REFERENCED
  in mysql_rm_table_no_locks()
  - no special code to handle HA_ERR_ROW_IS_REFERENCED
  - no special code to handle ENOENT and HA_ERR_NO_SUCH_TABLE
  - return the multi-table error ER_BAD_TABLE_ERROR <table list> only when
    there were many errors, not when there were many tables to drop but only
    one table generated an error (a sketch of this case follows)
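  As an illustration only (not from the commit; table names are assumed):

      CREATE TABLE t1 (a INT);
      DROP TABLE t1, t2;   -- t2 does not exist
      -- t1 is dropped; the reported error now names only the table that
      -- actually failed (t2) rather than the whole list given to DROP TABLE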
* in innobase: compilation error on Windows
  other changes: perfschema merge followup
* When RENAME TABLE is executed, it apparently does not check whether the
  engine is available (unlike ALTER TABLE .. RENAME, which does). This means
  that if the engine in question was not loaded for some reason, the table
  might become unusable, since the engine won't know about the change. With
  this patch RENAME TABLE fails if the storage engine is not available
  (sketch below).
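  As an illustration only (not from the commit), assuming t1 was created with
  a storage engine whose plugin is not loaded in the running server:

      RENAME TABLE t1 TO t2;
      -- before the fix this could succeed and leave the table unusable;
      -- with the fix the statement fails because the engine is unavailable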
* when --bind-address is not specified explicitly (or set to '*')
  MariaDB tries all wildcard addresses. Print a warning (not an error) if a
  socket cannot be created for some of them. Still print an error if a socket
  cannot be created for an address that the user has specified explicitly with
  --bind-address.
* macro in function "any_slave_sql_running"
  Fix a handful of "return" statements that should be DBUG_RETURN in
  sql/rpl_mi.cc.
* existing index of same as column name.
  The default name for the primary key is 'PRIMARY' rather than the indexed
  column name (see the sketch below).
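  As an illustration only (not from the commit; the table is assumed):

      CREATE TABLE t1 (a INT, PRIMARY KEY (a));
      SHOW INDEX FROM t1;
      -- Key_name is 'PRIMARY', not 'a', so code must not assume the primary
      -- key index is named after the column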
* - take into account that example may be NULL
  - use example->safe_charset_converter(), copy-pasted from
    Item::safe_charset_converter() (example might have its own implementation)
  - handle the case when the charset doesn't need conversion (and return
    this)
* semisync plugin and setting rpl_semi_sync_master_enabled
  There was a race condition between INSTALL PLUGIN and SET. It was caused by
  a gap in INSTALL PLUGIN when plugin variables were registered but not fully
  initialized. Accessing such variables concurrently may reference
  uninitialized memory, specifically sys_var_pluginvar::plugin.
  Fixed by initializing sys_var_pluginvar::plugin early, before the variable
  is registered.
* semisync plugin and setting rpl_semi_sync_master_enabled
  Cleanup: removed the my_intern_plugin_lock() and my_intern_plugin_lock_ci()
  wrappers. They were obsoleted by revision f56dd32bf.
* MDEV-7923)
  "Range Checked for Each Record" should only be employed when the other
  option would be a cross-product join (i.e. the other option is so bad that
  we hardly risk anything).
  The previous logic was: use RCfER if there are no possible quick selects, or
  the quick select would read > 100 rows. Also, it didn't always work as
  expected, due to the range optimizer changing table->quick_keys while we
  were looking at sel->quick_keys.
  Another angle is that recent versions have enabled the use of join buffering
  in e.g. outer joins. This further reduces the range of cases where RCfER
  should be used.
  We are still unable to estimate the cost of RCfER with any precision, so we
  now change the condition from "no quick select or quick->records > 100" to a
  hopefully better condition "no quick select or quick would cost more than a
  full table scan".
* type ' 'decimal(10,7)'
  Changing the error message to:
  "...from type 'decimal(0,?)/*old*/' to type 'decimal(10,7)'..."
  so it's now clear that the master data type is the OLD decimal.
* Removing Item_cache::used_table_map, Item_cache::used_tables() and
  Item_cache::set_used_tables(). Using the implementations inherited from
  Item_basic_constant instead.
* ITEM_FIELD::STR_RESULT
* The GEOMETRY field should be handled just like the BLOB field, so that was
  fixed in field_conv. One additional bug was found and fixed meanwhile: the
  geometry field subtypes should also be merged for the UNION command.
* to audit plugin.
  The MYSQL_AUDIT_NOTIFY_CONNECTION_CONNECT() call was moved to the
  login_connection() function so that it is invoked in any thread-handling
  mode.
* Check that the leaf table list is really built before storing it.