* Fix the crash: when the IN-to-EXISTS rewrite causes an error (and so
  JOIN::optimize() fails with an error, too), don't call
  update_used_tables(). Terminate the query execution instead.
* Fix the cause of the error in the IN-to-EXISTS rewrite: don't do
  the rewrite if doing it would cause an error of this kind:
    This version of MariaDB doesn't yet support 'SUBQUERY in ROW in left
    expression of IN/ALL/ANY'
* Fix another issue exposed by this testcase:
  JOIN::setup_subquery_caches() may be invoked before any select has
  saved its query plan, and will crash because none of the SELECTs
  has called create_explain_query_if_not_exists() to create the Explain
  data structure for this SELECT.
TODO: When merging this to 10.2, remove the poorly placed call to
create_explain_query_if_not_exists() made by the fix for MDEV-16153
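A minimal sketch of the control flow described in the first bullet, using invented stand-ins (Session, in_to_exists_rewrite and optimize are illustrative names, not the server's THD/JOIN code): once the rewrite has raised an error, the caller bails out instead of continuing with update_used_tables().

  #include <iostream>

  // Hypothetical stand-ins; the real classes are THD and JOIN.
  struct Session { bool error_raised= false; };

  static bool in_to_exists_rewrite(Session *s)
  {
    // Pretend the rewrite hits the unsupported construct and raises an error.
    s->error_raised= true;
    return true;                     // true == failure, mirroring the server convention
  }

  static bool optimize(Session *s)
  {
    if (in_to_exists_rewrite(s) || s->error_raised)
      return true;                   // terminate: do NOT call update_used_tables()
    // ... update_used_tables() and the rest of optimization would run here ...
    return false;
  }

  int main()
  {
    Session s;
    std::cout << (optimize(&s) ? "query terminated with error\n" : "optimized\n");
  }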
Removed a duplicate.
Also moved the --no-defaults option next to the other "default*" options.
THD proc_info was assigned from a stack-allocated temporary buffer
which went out of scope immediately after the assignment.
Fixed by removing the use of the temporary buffer and assigning
proc_info from a string literal.
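The bug pattern is easy to reproduce outside the server; the snippet below is a self-contained illustration (proc_info here is a plain global, not THD::proc_info): a pointer into a stack buffer dangles as soon as the function returns, while a string literal has static storage duration and stays valid.

  #include <cstdio>

  static const char *proc_info= nullptr;   // stands in for THD::proc_info

  static void set_from_temp_buffer()
  {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "Checking permissions");
    proc_info= buf;                        // BUG: buf dies when this function returns
  }

  static void set_from_literal()
  {
    proc_info= "Checking permissions";     // OK: literal lives for the whole program
  }

  int main()
  {
    set_from_temp_buffer();                // proc_info now dangles; reading it would be UB
    set_from_literal();                    // the fix: point at storage that never dies
    std::printf("%s\n", proc_info);
  }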
Problem:
=======
fts_cache_append_deleted_doc_ids() holds the deleted_lock and tries to
access the size of deleted_doc_ids. In the meantime, fts_cache_clear()
clears the sync_heap before clearing deleted_doc_ids. This leads to an
invalid access of deleted_doc_ids.
Fix:
===
fts_cache_clear() should free the sync_heap only after clearing
deleted_doc_ids.
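The ordering rule can be modelled without InnoDB types. In this hypothetical analogue, a std::pmr arena plays the role of sync_heap and a pmr vector the role of deleted_doc_ids: everything allocated from the arena must be torn down before the arena itself is released.

  #include <memory_resource>
  #include <vector>
  #include <cstdio>

  int main()
  {
    // Hypothetical analogue: the arena is cache->sync_heap,
    // the vector is cache->deleted_doc_ids.
    std::pmr::monotonic_buffer_resource arena;
    auto *deleted_doc_ids= new std::pmr::vector<long>(&arena);
    deleted_doc_ids->assign({1, 2, 3});

    // Correct teardown order: drop every user of the arena first ...
    deleted_doc_ids->clear();
    delete deleted_doc_ids;
    // ... and only then release the arena itself.
    arena.release();

    std::puts("teardown done in the safe order");
  }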
The srv_monitor_event and the srv_monitor_thread would not be
created when InnoDB is in read-only mode. Yet, some code would
unconditionally invoke os_event_set(srv_monitor_event).
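A minimal sketch of the guard, with trivial local stand-ins for srv_read_only_mode, srv_monitor_event and os_event_set (none of this is the InnoDB implementation): an event that was never created must not be signalled.

  #include <cassert>
  #include <cstdio>

  struct os_event { bool is_set= false; };             // stand-in for os_event_t

  static bool      srv_read_only_mode= true;           // as if started read-only
  static os_event *srv_monitor_event= nullptr;         // never created in read-only mode

  static void os_event_set(os_event *e)
  {
    assert(e != nullptr);                              // would crash on nullptr otherwise
    e->is_set= true;
  }

  static void wake_monitor_thread()
  {
    if (srv_read_only_mode)                            // the missing guard
      return;                                          // nothing to signal
    os_event_set(srv_monitor_event);
  }

  int main()
  {
    wake_monitor_thread();                             // safe even though the event is absent
    std::puts("no monitor thread to wake in read-only mode");
  }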
The issue occurs when the subquery_cache is enabled.
On a cache miss the division produced the value with one scale, while on a
cache hit the value came back with scale 9; because the two scales differed,
the WHERE condition evaluated to FALSE and the output was incomplete.
To fix this, round the decimal to the scale given in Item::decimals, which
makes sure the values are always compared with the same scale.
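The effect can be reproduced with plain doubles standing in for DECIMAL values: comparing a freshly computed quotient against a value cached at a fixed scale only works if both sides are rounded to the same number of decimals, which is the role Item::decimals plays in the server. The helper below is purely illustrative.

  #include <cmath>
  #include <cstdio>

  // Round to 'scale' decimal digits, the role Item::decimals plays in the server.
  static double round_to_scale(double v, int scale)
  {
    const double f= std::pow(10.0, scale);
    return std::round(v * f) / f;
  }

  int main()
  {
    const int decimals= 4;                         // declared scale of the expression
    double fresh = 1.0 / 3.0;                      // "cache miss": full precision
    double cached= round_to_scale(fresh, decimals);// "cache hit": stored at scale 4

    std::printf("raw compare:     %s\n", fresh == cached ? "equal" : "NOT equal");
    std::printf("rounded compare: %s\n",
                round_to_scale(fresh, decimals) == cached ? "equal" : "NOT equal");
  }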
When InnoDB is extending a data file, it is updating the FSP_SIZE
field in the first page of the data file.

In commit 8451e09073e8b1a300f177d74a9e3a530776640a (MDEV-11556)
we removed a work-around for this bug and made recovery stricter,
by making it track changes to FSP_SIZE via redo log records, and
extend the data files before any changes are being applied to them.

It turns out that the function fsp_fill_free_list() is not crash-safe
with respect to this when it is initializing the change buffer bitmap
page (page 1, or generally, N*innodb_page_size+1). It uses a separate
mini-transaction that is committed (and will be written to the redo
log file) before the mini-transaction that actually extended the data
file. Hence, recovery can observe a reference to a page that is
beyond the current end of the data file.

fsp_fill_free_list(): Initialize the change buffer bitmap page in
the same mini-transaction.

The rest of the changes are fixing a bug that the use of the separate
mini-transaction was attempting to work around. Namely, we must ensure
that no other thread will access the change buffer bitmap page before
our mini-transaction has been committed and all page latches have been
released.

That is, for read-ahead as well as neighbour flushing, we must avoid
accessing pages that might not yet be durably part of the tablespace.

fil_space_t::committed_size: The size of the tablespace
as persisted by mtr_commit().

fil_space_t::max_page_number_for_io(): Limit the highest page
number for I/O batches to committed_size.

MTR_MEMO_SPACE_X_LOCK: Replaces MTR_MEMO_X_LOCK for fil_space_t::latch.

mtr_x_space_lock(): Replaces mtr_x_lock() for fil_space_t::latch.

mtr_memo_slot_release_func(): When releasing MTR_MEMO_SPACE_X_LOCK,
copy space->size to space->committed_size. In this way, read-ahead
or flushing will never be invoked on pages that do not yet exist
according to FSP_SIZE.
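A reduced model of the committed_size idea, using plain C++ types rather than the real fil_space_t/mtr_t API (the names below are illustrative only): I/O batches are capped at the size most recently made durable, and that size advances only when the mini-transaction that extended the file commits.

  #include <algorithm>
  #include <cstdint>
  #include <cstdio>

  struct TablespaceModel            // hypothetical analogue of fil_space_t
  {
    uint32_t size= 100;             // current size in pages (possibly not durable yet)
    uint32_t committed_size= 100;   // size as persisted by the last mtr commit

    uint32_t max_page_number_for_io(uint32_t requested) const
    { return std::min(requested, committed_size); }
  };

  struct MiniTransactionModel       // hypothetical analogue of an mtr extending the file
  {
    void commit(TablespaceModel &space)
    { space.committed_size= space.size; }   // what releasing MTR_MEMO_SPACE_X_LOCK does
  };

  int main()
  {
    TablespaceModel space;
    MiniTransactionModel mtr;

    space.size= 164;                                    // file extended, not yet committed
    std::printf("read-ahead limit before commit: %u\n", // still the old, durable size
                space.max_page_number_for_io(164));
    mtr.commit(space);
    std::printf("read-ahead limit after commit:  %u\n",
                space.max_page_number_for_io(164));
  }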
The server auto-set the lower_case_file_system value based on the behavior
of the default datadir instead of the directory specified by the user
through the configuration file or command-line options.
This patch fixes the problem.
-Wl,-z,relro,-z,now are linker flags and should
be checked as such.
TODO: perform the module, exe and shared checks separately
rather than a pure linker check.
tables on Windows
An overflow was happening on Windows because sizeof(ulong) is 4 bytes
there, while it is 8 bytes on Linux.
Switched avg_frequency and avg_length for column statistics to ulonglong.
Switched avg_frequency for index statistics to ulonglong.
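A small program that shows the portability trap: unsigned long is 32-bit on 64-bit Windows (LLP64) but 64-bit on 64-bit Linux (LP64), so a statistic that fits in ulonglong silently wraps when stored in ulong. The sample value is just an example.

  #include <cstdio>

  int main()
  {
    std::printf("sizeof(unsigned long)      = %zu\n", sizeof(unsigned long));      // 4 on Win64, 8 on Linux64
    std::printf("sizeof(unsigned long long) = %zu\n", sizeof(unsigned long long)); // 8 on both

    unsigned long long avg_frequency= 5000000000ULL;        // a plausible large statistic
    unsigned long      truncated    = (unsigned long) avg_frequency;

    std::printf("as unsigned long long: %llu\n", avg_frequency);
    std::printf("as unsigned long:      %lu\n",  truncated); // wraps to 705032704 on LLP64
  }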
The only InnoDB changes between Percona XtraDB Server 5.6.47-87.0
and 5.6.48-88.0 are related to InnoDB changes between MySQL 5.6.47
and MySQL 5.6.48, which we had already applied.
There were no InnoDB changes between MySQL 5.6.48 and MySQL 5.6.49.
This issue was originally reported by Fungo Wang, along with a fix, as
MySQL Bug #98990.
His suggested fix was applied as part of
mysql/mysql-server@a003fc373d1adb3ccea353b5d7d83f6c4c552383
and released in MySQL 5.7.31.
i_s_metrics_fill(): Add the missing call to Field::set_notnull(),
and simplify some code.
- i_s_fts_index_cache_fill() should take a shared lock on the fts cache
before accessing the index cache, to avoid reading stale data.
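A generic version of the rule, using std::shared_mutex instead of the InnoDB rw-lock (the IndexCacheModel type is invented for illustration): readers take a shared lock so a concurrent writer cannot leave them looking at a half-updated cache.

  #include <mutex>
  #include <shared_mutex>
  #include <thread>
  #include <vector>
  #include <cstdio>

  struct IndexCacheModel                 // hypothetical analogue of the FTS index cache
  {
    mutable std::shared_mutex lock;      // analogue of the fts cache rw-lock
    std::vector<int> words;

    std::size_t word_count() const
    {
      std::shared_lock<std::shared_mutex> s(lock);   // reader: shared lock
      return words.size();
    }
    void add_word(int w)
    {
      std::unique_lock<std::shared_mutex> x(lock);   // writer: exclusive lock
      words.push_back(w);
    }
  };

  int main()
  {
    IndexCacheModel cache;
    std::thread writer([&]{ for (int i= 0; i < 1000; i++) cache.add_word(i); });
    std::thread reader([&]{ for (int i= 0; i < 1000; i++) (void) cache.word_count(); });
    writer.join(); reader.join();
    std::printf("final word count: %zu\n", cache.word_count());
  }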
number is in the future"
- The problem is that the test case creates new ib_logfile* files, so
existing ibdata pages could point to a future LSN. The fix is to take
a backup of the data before the ib_logfile* files are created and to
apply it before exiting the test case.
S-LOCK / row0quiesce.cc X-LOCK
Problem:
=======
- Read operations are always allowed to hold a secondary index leaf
latch and then look up the corresponding clustered index record.
The flush table operation, however, acquires the secondary index latch
while holding a clustered index latch. This violates the latching order
and can lead to a deadlock.
Fix:
====
- The flush table operation should acquire the secondary index latch
before taking the clustered index latch, so that it cannot deadlock
with concurrent select operations.
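The fix amounts to making every code path take the two latches in the same order. The sketch below models the latches with two std::mutex objects; the names are illustrative only, not the InnoDB latch API.

  #include <mutex>
  #include <thread>
  #include <cstdio>

  std::mutex secondary_latch;   // stand-in for the secondary index latch
  std::mutex clustered_latch;   // stand-in for the clustered index latch

  // Readers: secondary first, then clustered.
  void read_path()
  {
    std::lock_guard<std::mutex> s(secondary_latch);   // 1st: secondary
    std::lock_guard<std::mutex> c(clustered_latch);   // 2nd: clustered
    // ... look up the clustered record for a secondary index entry ...
  }

  // Flush path after the fix: same order, so it cannot deadlock with read_path().
  // Before the fix it took the clustered latch first, opening a deadlock window.
  void flush_path()
  {
    std::lock_guard<std::mutex> s(secondary_latch);   // 1st: secondary
    std::lock_guard<std::mutex> c(clustered_latch);   // 2nd: clustered
    // ... flush/quiesce work ...
  }

  int main()
  {
    std::thread a([]{ for (int i= 0; i < 10000; i++) read_path(); });
    std::thread b([]{ for (int i= 0; i < 10000; i++) flush_path(); });
    a.join(); b.join();
    std::puts("no deadlock: both paths acquire latches in the same order");
  }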
Only install wsrep scripts and links if WSREP_ON is actually set
When setting the PLUGIN_AUTH_PAM variable, mark it as a "CACHE" variable
so it can be overridden by the user.
failed in Diagnostics_area::set_ok_status on FUNCTION replace
When there is REPLACE in the statement, sp_drop_routine_internal() returns
0 (SP_OK) on success, which is then assigned to ret. So ret becomes false
and the error state is lost. The expression inside DBUG_ASSERT() then
evaluates to false, hence the assertion failure.
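The pitfall generalizes: a status code whose success value is 0 must not be folded into a boolean that is later read as "no error was raised". A small stand-alone illustration, with invented names rather than the server's sp_* API:

  #include <cstdio>

  enum sp_result { SP_OK= 0, SP_DROP_FAILED= 1 };

  struct Diagnostics { bool error_reported= false; };

  static sp_result drop_routine(Diagnostics &da, bool fail)
  {
    if (fail) { da.error_reported= true; return SP_DROP_FAILED; }
    return SP_OK;                              // 0 on success
  }

  int main()
  {
    Diagnostics da;
    da.error_reported= true;                   // an earlier step already raised an error

    bool ret= drop_routine(da, /*fail=*/false);  // SP_OK (0) -> ret == false, which by
                                                 // itself would hide the pending error
    // Correct check: consult the diagnostics area, not only the last return code.
    if (ret || da.error_reported)
      std::puts("statement fails, earlier error preserved");
    else
      std::puts("ok");                         // checking only ret would end up here
  }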
While trying to detect the datadir, take into account that the Windows
service name can be used as the section name in the options file.
This historical obscurity is used by WAMP installations.
Filesort_buffer::spaceleft | SIGSEGV in __memmove_avx_unaligned_erms from my_b_write
Make sure that the allocated sort_buffer has space for at least MERGEBUFF2 keys.
The issue here was that the record length was quite high and the sort buffer
size very small, so we ended up with zero keys in the sort buffer.
Sort_param::max_keys_per_buffer was zero in such a case, so we were flushing
an empty sort_buffer to disk.
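The guard itself is small: however tiny the configured buffer, allocate room for at least MERGEBUFF2 keys so max_keys_per_buffer can never become zero. The sketch below uses 15 for MERGEBUFF2 and invented helper names; treat both as illustrative.

  #include <algorithm>
  #include <cstdio>

  // Illustrative stand-in for the server constant: the minimum number of keys
  // a merge step needs to make progress.
  static const std::size_t MERGEBUFF2= 15;

  static std::size_t sort_buffer_bytes(std::size_t requested, std::size_t rec_length)
  {
    // Allocate at least enough room for MERGEBUFF2 keys, however small the
    // configured sort buffer is, so keys-per-buffer can never drop to zero.
    return std::max(requested, MERGEBUFF2 * rec_length);
  }

  int main()
  {
    std::size_t rec_length= 65536, requested= 4 * 1024;   // huge rows, tiny buffer
    std::size_t bytes= sort_buffer_bytes(requested, rec_length);
    std::printf("allocated %zu bytes -> %zu keys per buffer\n",
                bytes, bytes / rec_length);
  }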
accept() might return an error, including SOCKET_EAGAIN /
SOCKET_EINTR. The caller, usually handle_connections_sockets(),
can handle these; however, an invalid file descriptor isn't something
to call fcntl() on.
Thanks to Etienne Guesnet (ATOS) for diagnosis,
sample patch description and testing.
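A portable sketch of the accept loop described above, in plain POSIX rather than the server's socket wrappers: retryable errno values just continue the loop, and fcntl() is only called on a valid descriptor. The listener is made non-blocking so the demo hits the EAGAIN path immediately.

  #include <cerrno>
  #include <cstdio>
  #include <fcntl.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main()
  {
    int listener= socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family= AF_INET;
    addr.sin_addr.s_addr= htonl(INADDR_LOOPBACK);
    addr.sin_port= 0;                               // any free port
    bind(listener, (sockaddr *) &addr, sizeof addr);
    listen(listener, 1);
    fcntl(listener, F_SETFL, O_NONBLOCK);           // so accept() returns immediately

    for (int attempt= 0; attempt < 3; attempt++)
    {
      int fd= accept(listener, nullptr, nullptr);
      if (fd < 0)
      {
        if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
        {
          std::puts("retryable accept() error; not calling fcntl() on an invalid fd");
          continue;                                 // go back to waiting for connections
        }
        std::perror("accept");
        break;
      }
      fcntl(fd, F_SETFL, O_NONBLOCK);               // only reached with a valid fd
      close(fd);
    }
    close(listener);
  }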
This reverts commit e0793d386517f4ff9c0267830d558f91c75263aa.
In idiomatic C++, accessor functions should not discard qualifiers.
- A service not using "--defaults-file" can have any name, not just "MySQL".
- A service with "--defaults-file" but without a datadir in it uses
the default datadir (install_root\data).
and remove it on error
Disallow an existing non-empty datadir for mysql_install_db.exe.
instead PROCESS privilege
Fix a typo in the source code. The actually required privilege now
corresponds to the one mentioned in the documentation.
Documentation states that this table requires PROCESS privilege:
https://mariadb.com/kb/en/information-schema-innodb_tablespaces_encryption-table/
The issue here was that the left and right expressions of the ANY subquery
had different character sets, so the left expression was converted to the
utf8 character set. That conversion was applied to the item inside the
cache, yielding <cache>(convert(t1.l1 using utf8)), which is incorrect.
To fix this, store a reference to the left expression and convert that to
the utf8 character set instead, so that it looks like
convert(<cache>(`test`.`t1`.`l1`) using utf8).
rpl_parallel_conflicts.test
Problem:
========
Relay_log_info::flush reports following MSAN issue.
==17820==WARNING: MemorySanitizer: use-of-uninitialized-value is reported
#5 0x00005584f0981441 in my_write (Filedes=22,
Buffer=0x72500003e818 "5\n./slave-relay-bin.000003\n21385\n
master-bin.000001\n21643\n0\n", '\245' <repeats 141 times>..., Count=118,
MyFlags=532) at /home/sujatha/bug_repo/test-10.5-msan/mysys/my_write.c:49
Analysis:
=========
In parallel replication, at the end of each statement execution the worker's
execution status is updated in the 'relay-log.info' file. When two workers
try to flush the status at the same time, the writes to the cache are not
serialized: both workers write to the same address simultaneously and the
length is incremented twice. Because of this the buffer length is larger
than the actual data. When the flush code reads the buffer beyond the valid
data length, MSAN reports an uninitialized-value error.
Fix:
===
Serialize the relay log flush operation using "rli->data_lock".
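A stripped-down model of the race and of the fix, with std::mutex standing in for rli->data_lock and a std::string for the write cache: the whole append has to happen under the lock, otherwise the recorded length can outrun the bytes actually written.

  #include <mutex>
  #include <string>
  #include <thread>
  #include <cstdio>

  struct RelayLogInfoModel            // hypothetical analogue of Relay_log_info
  {
    std::mutex data_lock;             // the lock the fix uses to serialize flushes
    std::string cache;                // stands in for the write cache behind flush()

    void flush(const char *status_line)
    {
      std::lock_guard<std::mutex> guard(data_lock);  // serialize the whole operation
      cache.append(status_line);                     // write + length update are atomic
      cache.push_back('\n');
    }
  };

  int main()
  {
    RelayLogInfoModel rli;
    auto worker= [&rli](const char *name)
    {
      for (int i= 0; i < 1000; i++)
        rli.flush(name);
    };
    std::thread w1(worker, "worker-1 position"), w2(worker, "worker-2 position");
    w1.join(); w2.join();
    std::printf("flushed %zu bytes without interleaving corruption\n", rli.cache.size());
  }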
https://github.com/codership/mariadb-server into 10.1-MDEV-22763
Backported the support for aborting and replaying stored procedures, and the
fix for trigger key assignments, from the 10.4 version.
Also backported two mtr tests: wsrep_sp_bf_abort and MDEV-20225.
Deadlock in DbugParse, on Linux.
In 10.1, the DBUG recursive mutex was improperly implemented: the
CODE_STATE::locked counter was never updated.
Copy the code around LockMutex/UnlockMutex from 10.2.
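The shape of a correctly counted recursive lock can be shown with standard C++; this models the CODE_STATE::locked idea only and is not the dbug implementation: the counter is updated on every lock/unlock, and the underlying mutex is released only when the outermost level unlocks.

  #include <atomic>
  #include <mutex>
  #include <thread>
  #include <cstdio>

  class CountedRecursiveLock          // illustrative model, not the dbug code
  {
    std::mutex mtx;
    std::atomic<std::thread::id> owner{};
    int locked= 0;                    // the counter that was never maintained
  public:
    void lock()
    {
      if (owner.load() == std::this_thread::get_id()) { ++locked; return; } // re-entry
      mtx.lock();
      owner.store(std::this_thread::get_id());
      locked= 1;
    }
    void unlock()
    {
      if (--locked == 0) { owner.store(std::thread::id{}); mtx.unlock(); }  // outermost
    }
  };

  static CountedRecursiveLock dbug_lock;

  static void inner() { dbug_lock.lock(); /* nested call */ dbug_lock.unlock(); }

  int main()
  {
    dbug_lock.lock();
    inner();                          // would self-deadlock with a non-recursive mutex
    dbug_lock.unlock();
    std::puts("recursive locking with a maintained counter works");
  }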
Analysis:
========
When "Profiling" is enabled, the server collects the resource usage of each
statement executed in the current session. Profiling doesn't support nested
statements: while a statement is being profiled there must not be any other
active query being profiled. The active query information is stored in the
'current' variable. When a nested query arrives, it finds 'current' not NULL
and the server aborts.
When the 'init_connect' and 'init_slave' system variables are set, they
contain a set of statements to be executed. "execute_init_command" is the
function that invokes "dispatch_command" for each statement provided in
'init_connect' and 'init_slave'. "execute_init_command" invokes
"start_new_query" and then passes the statement list to "dispatch_command",
which in turn invokes "start_new_query" again. This leads to nested queries,
hence the '!current' assert is triggered.
Fix:
===
Remove profiling from "execute_init_command" as it will be done within
"dispatch_command" execution.
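The invariant can be modelled in a few lines with a hypothetical Profiler type (not the server's PROFILING class): starting a new profiled query asserts that nothing is already being profiled, which is why the extra start in execute_init_command had to go.

  #include <cassert>
  #include <cstdio>

  struct Profiler                        // hypothetical stand-in for the profiling machinery
  {
    const char *current= nullptr;        // the active query being profiled, or nullptr

    void start_new_query(const char *q)
    {
      assert(!current);                  // profiling does not support nesting
      current= q;
    }
    void finish_current_query() { current= nullptr; }
  };

  static Profiler profiler;

  static void dispatch_command(const char *stmt)
  {
    profiler.start_new_query(stmt);      // profiling belongs here, and only here
    /* ... execute ... */
    profiler.finish_current_query();
  }

  static void execute_init_command(const char *stmts[], int n)
  {
    // Fixed version: no start_new_query() here, so the calls below cannot nest.
    for (int i= 0; i < n; i++)
      dispatch_command(stmts[i]);
  }

  int main()
  {
    const char *init_connect[]= { "SET @a=1", "SELECT @a" };
    execute_init_command(init_connect, 2);
    std::puts("init statements profiled one at a time");
  }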
changes.
This requires Galera commit 065e484144c5999709ae8fd19844da72bb785073
On FreeBSD, perl isn't in /usr/bin; it's in /usr/local/bin or
elsewhere in the path.
Like storage/{maria/unittest/,}ma_test_*, we use /usr/bin/env to
find perl and run it.
Apparently, in Win10, dtrace is available, but it does not work with
MariaDB user probes.
FreeState() zeros init_settings.out_file, which another thread can be using
Dynamic_array<st_mysql_const_lex_string*>::at
The code in fill_schema_schemata() did not take into account that
make_db_list() can leave db_names empty if the requested database
name was too long, so the call to db_names.at(0) crashed on an assert.
- Move the code that tests whether the database directory exists
into a separate function, verify_database_directory_exists().
- Modify the test to check that db_names is not empty.
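The guard is essentially one line; here is a generic illustration with std::vector standing in for Dynamic_array and an invented length limit:

  #include <string>
  #include <vector>
  #include <cstdio>

  // Builds the list of databases to report; it may legitimately come back empty,
  // e.g. when the requested database name is too long to match anything.
  static std::vector<std::string> make_db_list(const std::string &wild)
  {
    std::vector<std::string> db_names;
    if (wild.size() <= 64)               // illustrative NAME_CHAR_LEN-style limit
      db_names.push_back(wild);
    return db_names;
  }

  int main()
  {
    std::vector<std::string> db_names= make_db_list(std::string(300, 'x'));

    if (db_names.empty())                // the missing check: never touch element 0 blindly
    {
      std::puts("no matching databases");
      return 0;
    }
    std::printf("first database: %s\n", db_names[0].c_str());
  }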
Problem:
=======
The "Start binlog_dump" message hasn't been updated to include the slave's
requested GTID position:
20:05:05 139836760311552 [Note] Start binlog_dump to slave_server(2), pos(, 4)
For diagnostic purposes, it would be helpful if the GTID position were
included.
Fix:
===
Improve the "Start binlog_dump" print message to include the "using_gtid"
flag and the GTID position requested by the slave.
Ex:
[Note] Start binlog_dump to slave_server(2), pos(, 4), using_gtid(1),
gtid('1-1-201,2-2-100')
[Note] Start binlog_dump to slave_server(3), pos('mariadb-bin.004142',
507988273), using_gtid(0), gtid('')
Use krb5_xfree if available, otherwise default to
krb5_free_unparsed_name.