| Commit message (Collapse) | Author | Age | Files | Lines |
|
| |
Assertion `thd->mdl_context.is_lock_owner()` fires when a client is
disconnected while it has an active transaction and a table opened
through the `HANDLER` interface.
The reason for the assertion is that when a connection closes, its
ongoing transaction is eventually rolled back in
`Wsrep_client_state::bf_rollback()`. This method also releases explicit
locks, which are expected to survive beyond the transaction lifetime.
This patch also removes calls to `mysql_ull_cleanup()`. User level
locks are not supported in combination with Galera, making these calls
unnecessary.
|
| |
one phase
|
| |
CREATE TABLE AS SELECT is not supported in combination with streaming
replication.
|
| |
The glibc headers declare fallocate only if _GNU_SOURCE is defined.
Without this change, the probe fails with C compilers which do not
support implicit function declarations even if the system does in
fact support the fallocate function.
Upstream rocksdb does not need this because the probe is run with the
C++ compiler, and current g++ versions define _GNU_SOURCE
automatically.
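For reference, a minimal sketch of such a feature probe (an assumed shape, not the actual RocksDB/CMake probe source): with a C compiler that rejects implicit function declarations, this only compiles when _GNU_SOURCE is defined before including the headers.
```c
/* Hypothetical stand-in for the configure-time probe.  Without the
   _GNU_SOURCE define, glibc's <fcntl.h> does not declare fallocate(),
   and a compiler that rejects implicit declarations fails the probe
   even though the system supports fallocate(). */
#define _GNU_SOURCE
#include <fcntl.h>

int main(void)
{
  /* fallocate() and FALLOC_FL_KEEP_SIZE are only visible with _GNU_SOURCE. */
  return fallocate(-1, FALLOC_FL_KEEP_SIZE, 0, 4096);
}
```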
|
| |
Corrections from 1e58b8afc086da755cf9209ed17fc36351da5563.
* Re-add #pragma alloca for AIX - now in my_alloca.h
|
| |
This commit adds a new 'no-sni' option to socat, which is required to
authenticate properly with newer socat versions (1.7.4 and later).
This option is needed to disable the automatic use of the SNI feature
(Server Name Indication), since the SST script specifies the commonname
directly when necessary, and automatic activation of the SNI feature
is unnecessary in such scenarios.
|
| |
This bug could affect multi-update statements as well as single-table
update statements processed as multi-updates when the where condition
contained a range condition over a non-indexed varchar column. The
optimizer calculates the selectivity of such range conditions using
histograms. For each range, the buckets containing the endpoints of the
range are determined with a procedure that stores the values of the
endpoints in the space of the record buffer where values of the columns
are usually stored.
For a range over a varchar column, the value of an endpoint may exceed
the size of the buffer, and in such a case the value is stored with
truncation. This truncation cannot affect the result of the calculation
of the range selectivity, as the calculation employs only the beginning
of the value string. However, it can trigger generation of an unexpected
error on this truncation if an update statement is processed.
This patch prohibits truncation messages when the selectivity of a range
condition is calculated for a non-indexed column.
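As a minimal illustration of why the truncation is harmless (a conceptual C sketch, not the optimizer code): the endpoint is stored into a fixed-size column buffer, and only the stored prefix is ever consulted by the selectivity calculation.
```c
#include <stdio.h>
#include <string.h>

enum { COL_BUF_SIZE = 8 };   /* hypothetical record-buffer size for the column */

/* Store a range endpoint into the record buffer, truncating if needed. */
static size_t store_endpoint(char *buf, const char *endpoint)
{
  size_t len = strlen(endpoint);
  if (len > COL_BUF_SIZE)
    len = COL_BUF_SIZE;          /* the truncation that triggered the warning */
  memcpy(buf, endpoint, len);
  return len;
}

int main(void)
{
  char buf[COL_BUF_SIZE];
  /* The histogram lookup compares only the stored prefix, so truncating
     a long endpoint cannot change the selectivity estimate. */
  size_t len = store_endpoint(buf, "some-very-long-varchar-endpoint");
  printf("stored %zu bytes of the endpoint\n", len);
  return 0;
}
```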
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| |
This is a non-functional change simplifying the code logic:
- removing the global variables ds_data and ds_meta
- passing these variables as parameters to functions instead
- adding helper classes: Datasink_free_list and Backup_datasinks
- moving some functions accepting a ds_ctxt parameter into ds_ctxt
as methods
|
| |
trans_rollback_stmt(THD*)
If we are inside a stored function or trigger, we should not commit
or roll back the current statement transaction.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|
| |
Signed-off-by: lilinjie <1136268146@qq.com>
|
| |
This problem was fixed earlier by MDEV-27653.
Adding MTR tests only.
|
| |
This problem was earlier fixed by MDEV-30034.
Adding MTR tests only.
|
| |
permitted for main PID yyyy" in systemd during SST
The server has systemd support and calls sd_notify() to communicate
its status to systemd.
mariabackup links in the whole server, but it should not notify
systemd, because it is not started or managed by systemd.
|
| |
Spider system tables should be created with wsrep_on=OFF.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|
| |
|
| |
- Adding a new argument "flag" to MY_COLLATION_HANDLER::strnncollsp_nchars()
and a flag MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES.
The flag defines whether strnncollsp_nchars() should emulate trailing
spaces which were possibly trimmed earlier (e.g. in InnoDB CHAR
compression). This is important for NOPAD collations
(see the sketch after this list).
For example, with this input:
- str1= 'a '  (Latin letter a followed by one space)
- str2= 'a  ' (Latin letter a followed by two spaces)
- nchars= 3
if the flag is given, strnncollsp_nchars() will virtually restore
one trailing space to str1 up to nchars (3) characters and compare the
two strings as equal:
- str1= 'a  ' (one extra trailing space emulated)
- str2= 'a  ' (as is)
If the flag is not given, strnncollsp_nchars() does not add virtual
trailing spaces, so in case of a NOPAD collation, str1 will be compared
as less than str2 because it is shorter.
- Field_string::cmp_prefix() now passes the new flag.
Field_varstring::cmp_prefix() and Field_blob::cmp_prefix() do
not pass the new flag.
- The branch in cmp_whole_field() in storage/innobase/rem/rem0cmp.cc
(which handles the CHAR data type) now also passes the new flag.
- Fixing UCA collations to respect the new flag.
Other collations are possibly also affected, however
I had no success in making an SQL script demonstrating the problem.
Other collations will be extended to respect this flag in a separate
patch later.
- Changing the meaning of the last parameter of Field::cmp_prefix()
from "number of bytes" (internal length)
to "number of characters" (user visible length).
The code calling cmp_prefix() from handler.cc was wrong.
After this change, the call in handler.cc became correct.
The code calling cmp_prefix() from key_rec_cmp() in key.cc
was adjusted according to this change.
- Old strnncollsp_nchars() related tests in unittest/strings/strings-t.c
now pass the new flag.
A few new tests also were added, without the flag.
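Below is a minimal C sketch of the comparison semantics the new flag enables; it is an illustration only (single-byte charset assumed, not the real MY_COLLATION_HANDLER interface or its exact parameters).
```c
#include <stddef.h>

/* Compare two byte strings over at most nchars characters as if the
   shorter one had its trimmed trailing spaces restored, which is what
   MY_STRNNCOLLSP_NCHARS_EMULATE_TRIMMED_TRAILING_SPACES asks for.
   Returns <0, 0 or >0, like memcmp(). */
static int cmp_emulating_trimmed_spaces(const char *a, size_t a_len,
                                        const char *b, size_t b_len,
                                        size_t nchars)
{
  for (size_t i= 0; i < nchars; i++)
  {
    /* Past the real end of a value, act as if a trimmed space were there. */
    unsigned char ca= i < a_len ? (unsigned char) a[i] : ' ';
    unsigned char cb= i < b_len ? (unsigned char) b[i] : ' ';
    if (ca != cb)
      return ca < cb ? -1 : 1;
  }
  return 0;
}

/* With the flag semantics: 'a ' vs 'a  ' over nchars=3 compare as equal.
   Without them, a NOPAD comparison ranks 'a ' below 'a  ' because it is
   shorter. */
```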
|
| |
The tests innodb.import_tablespace_race, innodb.restart, and innodb.innodb-wl5522 move
the tablespace file between the data directory and the tmp directory specified by
global environment variables. However, this is risky because it is not unusual for the
configured tmp directory (often under /tmp) to be mounted on another disk partition or
device, in which case the 'move_file' command may fail with "Errcode: 18 'Invalid
cross-device link.'"
For innodb.import_tablespace_race and innodb.innodb-wl5522, moving files
across directories is not necessary. Modify the tests so they rename
files under the same directory. For innodb.restart, instead of moving
between datadir and MYSQL_TMPDIR, move the files under MYSQLTEST_VARDIR.
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
|
| |
The tests innodb.import_tablespace_race, innodb.restart, and innodb.innodb-wl5522 move
the tablespace file between the data directory and the tmp directory specified by
global environment variables. However, this is risky because it is not unusual for the
configured tmp directory (often under /tmp) to be mounted on another disk partition or
device, in which case the 'move_file' command may fail with "Errcode: 18 'Invalid
cross-device link.'"
To stabilize mysqltest in the described scenario and prevent such
behavior in the future, let make_file() check both the source file path
and the destination file path and make sure they are either both under
MYSQLTEST_VARDIR or both under MYSQL_TMP_DIR.
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
|
| |
This is allowed:
STRING_WITH_LEN("string literal")
This is not:
char *str = "pointer to string";
... STRING_WITH_LEN(str) ..
In C++ this is also allowed:
const char str[] = "string literal";
... STRING_WITH_LEN(str) ...
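The restriction follows from the sizeof-based length computation. A sketch of a typical definition (assumed here; the server's actual macro may differ in details):
```c
#include <stddef.h>

/* Sketch: expands to a (pointer, length) pair computed at compile time. */
#define STRING_WITH_LEN(X) (X), ((size_t) (sizeof(X) - 1))

void consume(const char *str, size_t len);

void examples(void)
{
  consume(STRING_WITH_LEN("string literal"));  /* sizeof sees the full literal */

  const char arr[]= "string literal";
  consume(STRING_WITH_LEN(arr));               /* array: sizeof still correct */

  /* const char *ptr= "pointer to string";
     consume(STRING_WITH_LEN(ptr));
     would pass sizeof(char *) - 1 instead of the string length,
     which is why passing a plain pointer is rejected. */
}
```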
|
| |
Test fails sporadically and very rarely on this:
```
let $org_queries= `SHOW STATUS LIKE 'Queries'`;
SELECT f1();
CALL p1();
let $new_queries= `SHOW STATUS LIKE 'Queries'`;
let $diff= `SELECT SUBSTRING('$new_queries',9)-SUBSTRING('$org_queries',9)`;
```
if a COM_QUIT from one of the earlier disconnects in the test happens
between the two SHOW STATUS commands, because COM_QUIT increments
"Queries".
The directly preceding test uses wait_condition to wait for its
disconnects to complete, but there are more disconnects earlier in the
test file and nothing waits for them.
Let's change the wait_condition to wait for *all* disconnects to
complete.
|
| |
|
| |
The MariaDB server prints stack information if a crash happens.
It traverses the stack frames in the function `print_with_addr_resolve`.
For *EACH* frame, it tries to parse the file name and line number of the
frame using `addr2line`, and falls back to `backtrace_symbols_fd` if
`addr2line` fails.
1. The logic in the `addr_resolve` function uses addr2line to get the
file name and line numbers. It has a timeout of 500ms to wait for the
response from addr2line. However, that is not enough on small instances,
especially if the debug information is in a separate file or
compressed.
Increase the timeout to 5 seconds to support some edge cases, as
experiments showed addr2line may take 2-3 seconds on some frames.
2. While parsing a frame inside of a shared library using `addr2line`,
the file name and line numbers could be `??`, empty or `0` if the
debug info is not loaded.
It's easy to reproduce when glibc-debuginfo is not installed.
Instead of printing a meaningless frame like:
:0(__GI___poll)[0x1505e9197639]
...
??:0(__libc_start_main)[0x7ffff6c8913a]
We want to print the frame information using `backtrace_symbols_fd`,
with the shared library name and a hexadecimal offset.
Stacktrace example on a real instance with this commit:
/lib64/libc.so.6(__poll+0x49)[0x145cbf71a639]
...
/lib64/libc.so.6(__libc_start_main+0xea)[0x7f4d0034d13a]
`addr_resolve` already handles the case of a meaningless combination of
file name and line number returned by `addr2line`, e.g. `??:?`.
However, conditions like `:0` and `??:0` were not handled, so the
function will now fall back to `backtrace_symbols_fd` in those cases too.
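A minimal sketch of that fallback path using the glibc backtrace API (illustrative only; the server's addr_resolve()/print_with_addr_resolve() logic is more involved):
```c
#include <execinfo.h>
#include <unistd.h>

/* Print a frame the way backtrace_symbols_fd() formats it, e.g.
   /lib64/libc.so.6(__libc_start_main+0xea)[0x7f4d0034d13a]
   Used when addr2line returns a meaningless file:line such as
   "??:?", ":0" or "??:0". */
static void print_frame_fallback(void *frame_addr)
{
  backtrace_symbols_fd(&frame_addr, 1, STDERR_FILENO);
}

int main(void)
{
  void *frames[16];
  int n= backtrace(frames, 16);
  for (int i= 0; i < n; i++)
    print_frame_fallback(frames[i]);  /* pretend addr2line failed everywhere */
  return 0;
}
```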
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
|
| |
failed in int wsrep::transaction::before_commit()
CREATE [TEMPORARY] SEQUENCE is internally CREATE+INSERT (initial value)
and it is replicated using statement based replication. In Galera
we use either TOI or RSU so we should skip commit time hooks
for it.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|
| |
With binlogs enabled, the debug assertion ut_ad(xid_seqno > wsrep_seqno)
fired in trx_rseg_update_wsrep_checkpoint() when an applier thread
synced the seqno out of order for a write set which had failed
certification. This was caused by releasing the commit order too early
when binlogs were on, allowing group commit to run in parallel and
commit the following transactions too early.
Fixed by extending the commit order critical section to cover the
call to wsrep_set_SE_checkpoint() also when binlogs are on.
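A pthread sketch of the shape of the fix (names and structure are illustrative, not the server code): the SE checkpoint update moves inside the commit-order critical section, so a following transaction can no longer overtake it.
```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t commit_order= PTHREAD_MUTEX_INITIALIZER;
static int64_t se_checkpoint_seqno;   /* must only ever grow */

void commit_one_trx(int64_t xid_seqno)
{
  pthread_mutex_lock(&commit_order);
  /* ... binlog group commit / storage engine commit happen in order ... */

  /* Before the fix, the commit order was released above this point when
     binlogs were on, so a later transaction could update the checkpoint
     first and trip the ut_ad(xid_seqno > wsrep_seqno) assertion. */
  se_checkpoint_seqno= xid_seqno;     /* stands in for wsrep_set_SE_checkpoint() */

  pthread_mutex_unlock(&commit_order);
}
```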
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
|
| |
When the LEFT() function is used with a string that has no character
set, the function crashes. This is because the function assumes that
the string has a character set and tries to use it to calculate the
length of the string.
Two functions, UNHEX and WEIGHT_STRING, returned a string without
the charset being set to a non-null value.
The fix is to set the charset when calling val_str on these two
functions.
Reviewed-by: Alexander Barkov <bar@mariadb.com>
Reviewed-by: Daniel Black <daniel@mariadb.org>
|
| |
to delete the data directory on shutdown
Let us make innodb_buffer_pool_filename a read-only variable
so that a malicious user cannot cause an important file to be
deleted on InnoDB shutdown. An attempt to delete a directory
will fail because it is not a regular file, but what if the
variable pointed to (say) ibdata1, ib_logfile0 or some *.ibd file?
It does not seem to make much sense for this parameter to be
configurable in the first place, but we will not change that in order
to avoid breaking compatibility.
|
| |
Problem:
UNIX_TIMESTAMP() called for an expression of the TIME data type
returned NULL.
Inside Type_handler_timestamp_common::Item_val_native_with_conversion
the call to item->get_date() did not convert TIME to DATETIME
automatically (because it does not have to, by design).
As a result, Type_handler_timestamp_common::TIME_to_native() received
a MYSQL_TIME value with the zero date 0000-00-00 and therefore returned
"true" (indicating an SQL NULL value).
Fix:
Removing the call to item->get_date() and instantiating Datetime(item)
instead. This forces automatic TIME to DATETIME conversion
(unless @@old_mode is zero_date_time_cast).
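Conceptually the fix re-enables this conversion path (a plain C sketch of the semantics, not the server code): the TIME value is first placed on the current date and only then turned into a unix timestamp.
```c
#include <stdio.h>
#include <time.h>

/* Combine a TIME value (hh:mm:ss) with the current date, mirroring the
   automatic TIME -> DATETIME conversion, then convert to a unix timestamp. */
static time_t time_value_to_unix(int hour, int minute, int second)
{
  time_t now= time(NULL);
  struct tm t;
  localtime_r(&now, &t);   /* take today's date ... */
  t.tm_hour= hour;         /* ... with the time-of-day replaced */
  t.tm_min=  minute;
  t.tm_sec=  second;
  t.tm_isdst= -1;
  return mktime(&t);
}

int main(void)
{
  printf("%lld\n", (long long) time_value_to_unix(10, 20, 30));
  return 0;
}
```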
|
| |
|
| |
export on partial backup preparing
The solution is to suppress error messages for missing tablespaces if
mariabackup is launched with "--prepare --export" options.
"mariabackup --prepare --export" invokes itself with --mysqld parameter.
If the parameter is set, then it starts server to feed "FLUSH TABLES ...
FOR EXPORT;" queries for exported tablespaces. This is "normal" server
start, that's why new srv_operation value is introduced.
Reviewed by Marko Makela.
|
| |
show mysql results for mysql_fix_privilege_tables
|
| |
EXPLAIN EXTENDED for an UPDATE/DELETE/INSERT/REPLACE statement did not
produce the warning containing the text representation of the query
obtained after the optimization phase. Such a warning was produced for
SELECT statements, but not for DML statements.
The patch fixes this defect of EXPLAIN EXTENDED for DML statements.
|
| |
This was failing to compile with AppleClang 14.0.0.14000029.
Thanks to Arunesh Choudhary for noticing.
|
| |
All the .inc files that are included from binlog_encryption are refactored.
|
| |
Moved rpl_parallel.inc to rpl_parallel.test
|
| |
- Description:
- Before 10.3.8, semisync was a plugin; it was built into the server with
MDEV-13073, starting with commit cbc71485e24c31fc822277625512e55c2a8b650b.
There is still some usage of `rpl_semi_sync_master` in mtr.
Note:
- To recognize the replica in the `dump_thread`, the replica creates a
local variable `rpl_semi_sync_slave` (the keyword of the plugin) in
the function `request_transmit`, which is caught by the primary in
`is_semi_sync_slave()`. This is a user variable and as such not
related to the obsolete plugin.
- Found usage of the plugins `rpl_semi_sync_master` and
`rpl_semi_sync_slave` in the `sys_vars.all_vars` and
`rpl_semi_sync_wait_point` tests.
The former test is disabled by default (`sys_vars/disabled.def`)
and marked as `obsolete`; however, this patch will remove the queries.
- Add cosmetic fixes to the semisync codebase
Reviewer: <brandon.nesterenko@mariadb.com>
Closes PR #2528, PR #2380
|
| |
row_upd_rec_in_place(): Avoid calling page_zip_write_rec() if we
are not modifying any fields that are stored in compressed format.
btr_cur_update_in_place_zip_check(): New function to check if a
ROW_FORMAT=COMPRESSED record can actually be updated in place.
btr_cur_pessimistic_update(): If the BTR_KEEP_POS_FLAG is not set
(we are in a ROLLBACK and cannot write any BLOBs), ignore the potential
overflow and let page_zip_reorganize() or page_zip_compress() handle it.
This avoids a failure when an attempted UPDATE of a NULL column to 0 is
rolled back. During the ROLLBACK, we would try to move a non-updated
long column to off-page storage in order to avoid a compression failure
of the ROW_FORMAT=COMPRESSED page.
page_zip_write_trx_id_and_roll_ptr(): Remove an assertion that would fail
in row_upd_rec_in_place() because the uncompressed page would already
have been modified there.
Thanks to Jean-François Gagné for providing a copy of a page that
triggered these bugs on the ROLLBACK of UPDATE and DELETE.
A 10.6 version of this was tested by Matthias Leich using
cmake -DWITH_INNODB_EXTRA_DEBUG=ON a.k.a. UNIV_ZIP_DEBUG.
|
| |
mtr uses the group suffix, but some existing .inc and .test files use
server_id for expect files. This patch aims to fix that.
For spider:
With this change we will not have to maintain a separate version of
restart_mysqld.inc for spider, that duplicates code, just because
spider tests use different names for expect files, and shutdown_mysqld
requires magical names for them.
With this change spider tests will also be able to use other features
provided by restart_mysqld.inc without code duplication, like the
parameter $restart_parameters (see e.g. the testcase mdev_29904.test
in commit ef1161e5d4f).
Tests run after this change: default, spider, rocksdb, galera, using
the following command
mtr --parallel=auto --force --max-test-fail=0 --skip-core-file
mtr --suite spider,spider/*,spider/*/* \
--skip-test="spider/oracle.*|.*/t\..*" --parallel=auto --big-test \
--force --max-test-fail=0 --skip-core-file
mtr --suite galera --parallel=auto
mtr --suite rocksdb --parallel=auto
|
| |
buf_LRU_block_remove_hashed(): Ever since
commit 2e814d4702d71a04388386a9f591d14a35980bfe
we could get page_zip_validate() failures after an ALTER TABLE
operation was aborted and BtrBulk::pageCommit() had never been
executed on some blocks.
|
| |
IMPORT TABLESPACE
fil_iterate(): Allocation bitmap pages are never encrypted.
Reviewed by: Thirunarayanan Balathandayuthapani
|
| |
specified as a relative path) during copy-back operation
Make an absolute destination path from the relative one, based on the
mysql data directory.
Reviewed by Alexander Barkov.
|
| |
|
| |
|
| |
The hang could be seen as show slave status displaying an error like
Last_Error: Could not execute Write_rows_v1
along with
Slave_SQL_Running: Yes
accompanied by one of the replication threads in show processlist
characteristically having a status like
2394 | system user | | NULL | Slave_worker | 50852| closing tables
It turns out that the 'closing tables' worker got trapped in an endless
loop in mark_start_commit_inner() across already garbage-collected gco
items. The reclaimed gco links are explained by out-of-order termination
of event groups, which is actually possible due to the Last_Error.
This patch reinforces the correct ordering of finish_event_group's
cleanup actions, including unlinking gco:s from the active list.
|
| |
redundant table rebuild
- InnoDB alter fails to apply the online log during a redundant table
rebuild. The problem is that InnoDB wrongly reads the length flags of
the record while applying the temporary log record.
rec_init_offsets_comp_ordinary(): For finding the n_core_null_bytes,
InnoDB should use the same logic as rec_convert_dtuple_to_rec_comp().
|
| |
header file
Included config file for proper compilation without <my_global.h>
|
| |
to 10.4
convert empty host.db to "%", just as it's done for host.hostname
(in update_hostname())
|
| |
file"
|
| |
and my_getwd(). The cause is the my_errno define, which
depends on my_thread_var being a non-null pointer;
otherwise it will be dereferenced and cause
a SEGV already in the signal handler.
Replace uses of these functions in output_core_info
with posix read/getcwd functions instead.
The getwd fallback in my_getcwd isn't needed as
it has been obsolete for a very long time.
Thanks to Vladislav Vaintroub for the diagnosis and the posix
recommendation.
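A minimal sketch of the replacement approach (illustrative; output_core_info prints more than this): plain posix getcwd()/read()/write() do not go through my_errno, so they cannot dereference a null my_thread_var inside the signal handler.
```c
#include <fcntl.h>
#include <limits.h>
#include <string.h>
#include <unistd.h>

/* Dump the working directory and /proc/version to stderr using plain
   posix calls instead of my_getwd()/my_read(), which touch my_errno. */
static void dump_core_info_sketch(void)
{
  char cwd[PATH_MAX];
  if (getcwd(cwd, sizeof cwd))
  {
    write(STDERR_FILENO, cwd, strlen(cwd));
    write(STDERR_FILENO, "\n", 1);
  }

  int fd= open("/proc/version", O_RDONLY);
  if (fd >= 0)
  {
    char buf[256];
    ssize_t len;
    while ((len= read(fd, buf, sizeof buf)) > 0)
      write(STDERR_FILENO, buf, (size_t) len);
    close(fd);
  }
}
```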
|
| |
--flashback
- Adding a test case for --raw without -R
- Adding the unsupported combination of the --raw and --flashback
parameters, covered with a test case
|
| |
- Update per Monty's review for commit 5328dbdaa237d97e443120f2e08437fb274198c2
|