| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
FAILING ASSERTION: FLEN == LEN
Problem:
A broken invariant was triggered when building a unique index on a
binary column whose input data contains duplicate keys. This occurred
in debug builds only.
Fix:
The fixed length of the binary datatype can be greater than the length
of the shorter prefix on which the index is being created.
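A minimal sketch of the relaxed invariant, with assumed variable names
rather than the actual InnoDB code:

    #include <cassert>

    // For a unique index built on a column prefix, the column's fixed
    // length may legitimately exceed the indexed prefix length, so the
    // old FLEN == LEN check is too strict (names here are assumed).
    void check_key_len(unsigned fixed_len, unsigned prefix_len) {
        assert(fixed_len >= prefix_len);
    }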
|
|
|
|
|
|
|
|
|
| |
Problem: While printing the server version, the mysql client
does not check for buffer overflow in a String variable.
Solution: Use a different print function which checks the
allocated length before writing into the string.
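A minimal sketch of the idea behind the fix (not the client's actual
code): write through a function that knows the buffer size, so a long
version string cannot overrun the allocation.

    #include <cstddef>
    #include <cstdio>

    void print_version(char *buf, size_t buflen, const char *version) {
        // snprintf() never writes more than buflen bytes, truncating
        // the output instead of overflowing the buffer.
        snprintf(buf, buflen, "Server version: %s", version);
    }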
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ACCEPTED BUT PARSED INCORRECTLY
When setting the value of a system variable, we can write:
set sys_var="Iden1.Iden2"; //1
set sys_var='Iden1.Iden2'; //2
set sys_var=Iden1.Iden2; //3
set sys_var=.ident1.ident2; //4
set sys_var=`Iden1.Iden2`; //5
While parsing, cases 1 (when ANSI_QUOTES is enabled) and 2 are taken
as string literals (an item of type Item_string is created).
Cases 3 and 4 are taken as an Item_field, where Iden1 is a table name
and Iden2 is a field name.
Case 5 is again an Item_field, where Iden1.Iden2 as a whole is taken
as the field name.
Now in cases 3 and 4, when assigning a value to a system variable
(which can take string or enumeration data), only the field part is
used. This means only the Iden2 value is set for the system variable,
which produces a wrong result.
Solution:
(for string types) Document that setting a system variable which takes
a string to an unquoted identifier is not allowed, as it results
in unexpected behaviour.
(for enumeration types)
If Iden1.Iden2 is passed, raise the error ER_WRONG_TYPE_FOR_VAR
(Incorrect argument type to variable).
mysql-test/suite/sys_vars/t/general_log_file_basic.test:
Earlier we used to raise ER_WRONG_VALUE_FOR_VAR here, but the patch
for Bug#32748 (Inconsistent handling of assignments to
general_log_file/slow_query_log_file) commented this line out. I am
not able to find any relation between this line and the changes in
that patch, so I think we should raise the error in this case.
mysql-test/suite/sys_vars/t/slow_query_log_file_basic.test:
Same change as in general_log_file_basic.test above.
|
|
|
|
| |
A WRONG PORT NUMBER
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
In the clustered index, when an update operation is done, the overall
scenario (after rb#4479) is as follows:
1. Delete mark the old record that is to be updated.
2. The old record disowns the blobs.
3. Insert the new record into clustered index.
4. For non-updated blobs, the new record must own them (verified by an
assert).
5. Non-updated blobs are marked as inherited in the new record.
Scenario involving DB_LOCK_WAIT:
If step 3 times out, then we will skip 1 and 2 and will continue from
step 3. This skipping is achieved by the UPD_NODE_INSERT_BLOB state.
In this case, step 4 is not correct. Because of step 1, the new
record need not own the blobs. Hence the assert failure.
Solution:
The assert in step 4 is removed. Instead code is added to ensure that
the record owns the blob.
Note:
This is a regression caused by rb#4479.
rb#4571 approved by Marko
|
|
|
|
|
| |
- Enable shared libmysqld by cmake option
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
AUTO_INCREMENT_INCREMENT
Problem:
=======
When the auto_increment_increment system variable is decreased, the
immediate next value of an auto increment column is not affected.
Solution:
========
Get the previously inserted value of the auto increment column by
subtracting the previous auto_increment_increment from the next
auto increment value, then calculate the current autoinc value
using the newly changed auto_increment_increment variable.
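A minimal sketch of the described calculation (variable names assumed,
not the actual InnoDB code):

    #include <cstdint>

    // Recover the last inserted value using the old increment, then
    // step forward using the newly set increment.
    uint64_t next_autoinc(uint64_t next_value, uint64_t old_increment,
                          uint64_t new_increment) {
        uint64_t prev_value = next_value - old_increment;
        return prev_value + new_increment;
    }

For example, if the next value was 11 under auto_increment_increment=5,
then after SET auto_increment_increment=2 the next value becomes
(11 - 5) + 2 = 8 instead of the stale 11.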
Approved by Sunny [rb#4394]
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
It was reported that on the Debian and kFreeBSD platforms, on i386
machines, certain SSL tests were failing (crashing): main.ssl_connect,
rpl.rpl_heartbeat_ssl, rpl.rpl_ssl1, rpl.rpl_ssl, main.ssl_cipher and
main.func_encrypt. The crashes were attributed to the assembly code in
yaSSL.
Solution:
A workaround was initially suggested, i.e. enabling the
-DTAOCRYPT_DISABLE_X86ASM flag, which prevents the crash but at the
expense of a 4X reduction in speed. Since this was unacceptable, the
fix rewrites the functions that use assembly so that they now take
their input variables from the function call via GCC extended inline
assembly, instead of relying on direct assembly code.
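An illustration of the technique (not yaSSL's actual routines): with
GCC extended inline assembly, the inputs come from the function's own
parameters via constraints, so the compiler allocates registers
correctly instead of the code assuming a fixed register layout.

    #include <cstdint>

    uint32_t add32(uint32_t a, uint32_t b) {
    #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
        // "+r"(a): a is both input and output; "r"(b): b is an input.
        asm("addl %1, %0" : "+r"(a) : "r"(b));
        return a;
    #else
        return a + b;  // portable fallback
    #endif
    }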
|
|
|
|
|
| |
Added a new option: WITH_EMBEDDED_SHARED_LIBRARY
|
| |
|
|
|
|
|
| |
Corrected how to find "resolveip"
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
CONFIG FILES CAUSES TEST
The "mysql_upgrade" utility forks "mysql"/"mysqlcheck". Attaching
strace to "mysql_upgrade" shows the following calls after it forks
"mysql" or "mysqlcheck" when configuration file information is passed
as the first argument to "mysql_upgrade":
strace -f ./mysql_upgrade --defaults-file=../pdb/my.cnf --socket=../pdb/mysql.sock -f
[pid 6254] stat("/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/etc/mysql/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/usr/local/mysql/etc/my.cnf", 0x7fff8e772680) = -1 ENOENT (No such file or directory)
[pid 6254] stat("/home/user_name/.my.cnf", {st_mode=S_IFREG|0664, st_size=19, ...}) = 0
[pid 6254] open("/home/user_name/.my.cnf", O_RDONLY) = 3
When the tool forks "mysqlcheck"/"mysql", "--no-defaults" is passed
as the first argument. But before forking, in the function "find_tool"
of "mysql_upgrade", a check is made to verify whether the tool can be
executed, by calling "mysqlcheck --help" and "mysql --help". None of
the arguments "--no-defaults", "--defaults-file" or
"--defaults-extra-file" is passed to "mysql" and "mysqlcheck" there,
so my.cnf is searched for in the default paths.
Fix:
------
Modified the code to pass "--no-defaults" as the first argument to
"mysql" and "mysqlcheck" while checking whether the tool can be
executed.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Performance schema tables are local to a server, and statements
affecting them should not be replayed by the slave from the relay log.
As of 5.6.10, P_S events are not written to the binary log.
But prior to that, from MySQL 5.5 onwards, P_S events were written
to the binary log by the master.
The following are problematic scenarios:
1. Master 5.5 -> Slave 5.5
========================
A) RBR: Slave crashes.
B) SBR: P_S statements are replicated.
2. Master 5.5 -> Slave 5.6
========================
A) RBR: SQL thread generates an error.
B) SBR: P_S statements are replicated.
3. 5.5 binlog executed on a 5.5 server using mysqlbinlog|mysql
=================================================================
A) RBR: Server crash (because of the BINLOG '...' statement).
B) SBR: P_S statements are executed.
4. 5.5 binlog executed on a 5.6 server using mysqlbinlog|mysql
================================================================
A) RBR: SQL error (because of the BINLOG '...' statement).
B) SBR: P_S statements are executed.
The generalized behaviour should be:
a) Slave SQL thread should certainly ignore P_S events read from
the relay log.
b) mysqlbinlog|mysql should replay the binlog successfully.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Problem:
The function row_upd_changes_ord_field_binary() is used to decide whether to
use row_upd_clust_rec_by_insert() or row_upd_clust_rec(). The function
row_upd_changes_ord_field_binary() does not make use of charset information.
Based on binary comparison it decides that r1 and r2 differ in their ordering
fields.
In the function row_upd_clust_rec_by_insert(), an update is done by delete +
insert. These operations internally make use of cmp_dtuple_rec_with_match()
to compare records r1 and r2. This comparison takes place with the use of
charset information.
This means that it is possible for the deleted record to be reused by
the subsequent insert. In the given scenario, the characters 'a' and
'A' are considered equal in my_charset_latin1. When this happens, the
ownership information of externally stored blobs is not correctly
handled.
Solution:
When an update is done by delete followed by insert, disown the relevant
externally stored fields during the delete marking itself (within the same
mtr). If the insert succeeds, then nothing with respect to blob ownership
needs to be done. If the insert fails, then the disown done earlier will be
removed when the operation is rolled back.
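The disagreement can be reproduced with standard C functions standing
in for InnoDB's comparators: a binary comparison says 'a' and 'A'
differ, while a case-insensitive (latin1-like) comparison says they
are equal, so the two layers can disagree about whether the ordering
fields changed.

    #include <cstdio>
    #include <cstring>
    #include <strings.h>  // strcasecmp (POSIX)

    int main() {
        printf("binary differ: %d\n", memcmp("a", "A", 1) != 0);    // 1
        printf("charset differ: %d\n", strcasecmp("a", "A") != 0);  // 0
        return 0;
    }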
rb#4479 approved by Marko.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The maximum value of innodb_thread_sleep_delay was 4294967295 (32-bit)
or 18446744073709551615 (64-bit) microseconds. This is far too big,
since the effective value of innodb_thread_sleep_delay is capped by
innodb_adaptive_max_sleep_delay whenever that variable is set to a
non-zero value (its default is 150,000).
Solution:
The maximum value of innodb_thread_sleep_delay is now the same as
the maximum value of innodb_adaptive_max_sleep_delay, which is 1000000.
Approved by Jimmy, rb#4429
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Backported only the softlink part of the patch,
*not* the bumping of library version.
With this patch, the libmysql/ directory contains:
libmysqlclient.a
libmysqlclient_r.a -> libmysqlclient.a
libmysqlclient_r.so -> libmysqlclient.so*
libmysqlclient_r.so.18 -> libmysqlclient.so.18*
libmysqlclient_r.so.18.0.0 -> libmysqlclient.so.18.0.0*
libmysqlclient.so -> libmysqlclient.so.18*
libmysqlclient.so.18 -> libmysqlclient.so.18.0.0*
libmysqlclient.so.18.0.0*
|
| |
| |
| |
| |
| |
| |
| | |
Bug#68338 RFE: make tmpdir a build-time configurable option
Post-push fix: Windows needs DEFAULT_TMPDIR as well.
|
| |
| |
| |
| |
| |
| |
| |
| | |
Bug#68338 RFE: make tmpdir a build-time configurable option
Post-push fix: 'cmake -LH | grep TMP' showed TMPDIR as a BOOL option,
which was a bit confusing: show it as a PATH instead.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This is a backport of the patch for bug#11765785. The commit message
by Prabakaran Thirumalai from bug#11765785 is reproduced below:
Description:
------------
Global Query ID (global_query_id) is not incremented for the PING and
statistics commands. These two query types are filtered out before
incrementing the global query id. This causes a race condition and
results in duplicate query ids for different queries originating from
different connections.
Analysis:
---------
sql_parse.cc::dispatch_command() is the only place in the code which
sets thd->query_id to global_query_id and then increments it based on
the query type. Everywhere else it is incremented first and then
assigned to thd->query_id.
This is done so that global_query_id is not incremented for PING
and statistics commands in the dispatch_command() function.
Fix:
----
As per the suggestion from Serg ("There is no reason to skip query_id
for the PING and STATISTICS command."), the check which filters out
PING and statistics commands is removed.
Instead of using get_query_id() and next_query_id(), which can still
race if a context switch happens soon after executing get_query_id(),
the code now uses next_query_id() alone, as is done in the other parts
of the code that deal with global_query_id.
Removed the get_query_id() function and forced next_query_id() callers
to use the return value by specifying the warn_unused_result attribute.
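A minimal sketch of why a single atomic fetch-and-increment avoids the
race (simplified, not the server's own atomic wrappers):

    #include <atomic>
    #include <cstdint>

    static std::atomic<uint64_t> global_query_id{1};

    // One indivisible step: two connections can never observe the
    // same id. The attribute mirrors warn_unused_result in the fix.
    __attribute__((warn_unused_result))
    static uint64_t next_query_id() {
        return global_query_id.fetch_add(1);
    }

By contrast, reading the counter and incrementing it in two separate
steps leaves a window in which another thread can read the same value.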
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
FOOT
Description: A typo in create_tailoring() causes the "contraction_flags" to be written
into cs->contractions in the wrong place. This causes two problems:
(1) Anyone relying on `contraction_flags` to decide "could this character be
part of a contraction" is 100% broken.
(2) Anyone relying on `contractions` to determine the weight of a contraction
is mostly broken
Analysis: When preparing the contractions in create_tailoring(), we
corrupt the cs->contractions memory, which is supposed to store the
weights (8K bytes) followed by the contraction information (256
bytes). Due to a logic flaw in the code, the contraction information
was stored starting at the 4K offset instead.
Fix: When creating the contractions, compute the contraction pointer
as (char*) (cs->contractions + 0x40*0x40) instead of
((char*) cs->contractions) + 0x40*0x40. Since the addition now happens
on the typed pointer, this moves the offset to 8K bytes and stores the
contraction information from there. Similarly, for LIKE range queries
the offset must be computed from the 8K boundary onwards, by changing
the logic to (const char*) (cs->contractions + 0x40*0x40). For ucs2
charsets, my_cs_can_be_contraction_head() and
my_cs_can_be_contraction_tail() must likewise point to the 8K+
locations.
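The pointer-arithmetic difference can be demonstrated with a stand-in
uint16_t weight array instead of the real CHARSET_INFO layout:

    #include <cstdint>
    #include <cstdio>

    int main() {
        static uint16_t weights[0x40 * 0x40 + 128];
        uint16_t *contractions = weights;

        // Cast first, then add: advances 0x1000 bytes (4K) -- the bug.
        char *wrong = ((char *) contractions) + 0x40 * 0x40;
        // Add first, then cast: advances 0x1000 16-bit elements,
        // i.e. 8K bytes -- the fix.
        char *right = (char *) (contractions + 0x40 * 0x40);

        printf("wrong offset: %td bytes\n", wrong - (char *) contractions);
        printf("right offset: %td bytes\n", right - (char *) contractions);
        return 0;  // prints 4096 and 8192
    }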
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
OF ROW DATA
Problem:
========
Inserting a row larger than 4G when server uses RBR leads
to crash.
Analysis:
========
Row-based binary logging logs changes in individual table
rows. During the execution of DML statements in RBR the
actual row data will be stored within "m_rows_buf" buffer
and this buffer contents will be written to binary log.
"m_rows_buf" is prepared within the following function
"Rows_log_event::do_add_row_data".
When a huge row is specified, as in this bug scenario where the
row size is 4294971520 > UINT_MAX (4294967295), "m_rows_buf" is
reallocated to accommodate the row data and the row is then copied
into the buffer. During this realloc call, the length is cast to
"uint", which overflows. Because of the overflow, the reallocated
memory is smaller than what was requested, and this results in a
crash while copying the row data into the buffer.
Hence rows of size > 4GB cannot be written to the binary log.
Additionally, the event_length is by default stored within 4 bytes,
which in turn restricts how large an event can grow, so large rows
cannot be replicated using row based replication.
Fix:
===
An error is generated if the row size exceeds 4GB.
sql/log_event.cc:
An error is generated if the row size exceeds 4GB.
Debug simulations are added to test the fix.
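A small sketch of the truncation (not the server's code): casting a
length greater than 4GB to a 32-bit "uint" wraps around, so far less
memory is requested than the row needs.

    #include <cstdint>
    #include <cstdio>

    int main() {
        uint64_t row_size = 4294971520ULL;         // > UINT_MAX, as in the bug
        unsigned truncated = (unsigned) row_size;  // wraps to 4224
        printf("requested %llu, realloc got %u\n",
               (unsigned long long) row_size, truncated);
        return 0;
    }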
|
|\ \
| | |
| | |
| | |
| | |
| | | |
- Automerged from bug branch into latest mysql-5.5.
- Fixed trailing whitespaces.
- Updated the copyright notice year to 2014.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
AUTO_INC COLUMN ONLY ON SLAVE
In RBR, if the slave's table has one additional auto_inc column,
then it will insert the value 0 instead of generating the next
auto_inc number.
We fix this by checking whether an extra auto_inc column exists
compared to the column data of the row event; if so, we explicitly
set it to NULL and flag the engine that a nulled auto_inc column
will be inserted.
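A hypothetical sketch of that handling (simplified types and names,
not the actual replication code): the extra column is nulled and the
engine is flagged so it generates the next sequence value instead of
storing 0.

    struct Field { bool null_flag = false; void set_null() { null_flag = true; } };
    struct Table { Field *extra_autoinc; bool autoinc_field_has_value; };

    void apply_extra_autoinc(Table *t) {
        t->extra_autoinc->set_null();        // no value came from the master
        t->autoinc_field_has_value = false;  // engine generates the number
    }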
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
FROM SUBSELECT
ISSUE: In the function find_all_keys,
if the selected row does not satisfy the condition,
we call unlock_row to release the locked
row. Suppose we have a subquery in the condition
and an innodb error occurs during its execution.
Then we should not call unlock_row: if the error
is a deadlock, innodb will roll back the
transaction, and calling unlock_row without a
transaction is an invalid case, hence the
assertion failure.
SOLUTION: We call unlock_row only if no
error occurred previously.
The solution is backported from 5.6,
defect number 14226481.
sql/filesort.cc:
Now we call unlock_row only if there is no
previous error.
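A minimal sketch of the change (simplified types, not the actual
filesort.cc code):

    struct THD { bool error = false; bool is_error() const { return error; } };
    struct handler { void unlock_row() { /* release current row lock */ } };

    void skip_row(THD *thd, handler *file) {
        // After e.g. a deadlock, InnoDB has already rolled back the
        // transaction, so the row lock must not be touched.
        if (!thd->is_error())
            file->unlock_row();
    }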
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
IN DOCUMENTATION
Problem
-------
The documentation says that we support the 'K' prefix
while specifying a size for an innodb datafile in the
innodb_data_file_path server variable, but the
function srv_parse_megabytes() only handles
'M' (megabytes) and 'G' (gigabytes).
Fix
---
Modify srv_parse_megabytes() to handle kilobytes.
Add to the documentation that a size specified
in KB should be a multiple of 1024; otherwise it
will be rounded down to the nearest
MB (megabyte) boundary (e.g. a size given
as 2313KB will be treated as 2 MB).
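A simplified sketch of size parsing with a 'K' suffix (not the actual
srv_parse_megabytes()); the integer division is what rounds kilobyte
values down to a whole megabyte:

    #include <cstdint>

    uint64_t parse_size_mb(uint64_t value, char suffix) {
        switch (suffix) {
        case 'G': return value * 1024;  // gigabytes to MB
        case 'M': return value;
        case 'K': return value / 1024;  // 2313K -> 2 MB, as noted above
        default:  return value;
        }
    }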
[ Approved by Marko #rb 2387 ]
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
WITH SSL ENABLED
Problem:
It was reported that MySQL community utilities cannot connect to a
MySQL Enterprise 5.6.x server with SSL configured. We can reproduce
the issue when we try to connect to a MySQL Enterprise Server with a
MySQL client using the --ssl-ca parameter.
We get ERROR 2026 (HY000): SSL connection error: unknown error number.
Solution:
The root cause of the problem was determined to be a difference in
the handling of certificates by OpenSSL (Enterprise) and yaSSL
(Community). OpenSSL expects a blank certificate to be sent when a
parameter (ssl-ca, ssl-cert or ssl-key) has not been specified.
yaSSL, on the other hand, does not send any certificate, and since
OpenSSL does not expect this behaviour it returns an unknown SSL
error.
The issue was resolved by adding to yaSSL the capability to send a
blank certificate when any of these parameters is missing.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Analysis
--------
Running 'mysqld --help --verbose' as the root user without
the '--user' option displays the help contents but
aborts at the end with exit code 1.
While starting the server, a validation is performed to
ensure that when the server is started as the root user, it is
done using the '--user' option; otherwise we abort. In the case
of help, we dump the help contents and then abort.
Fix:
---
During the validation, we skip aborting the server in case
we are using the help option under the condition mentioned
above.
NOTE: A test case has not been added since it requires running as
the 'root' user.
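A hypothetical sketch of the changed validation (assumed flag names):

    // Abort when running as root without --user, unless we are only
    // printing the help text.
    bool must_abort(bool running_as_root, bool user_option_given,
                    bool help_requested) {
        return running_as_root && !user_option_given && !help_requested;
    }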
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Problem: DROP TRIGGER succeeds even after setting the read_only
variable to ON.
Fix: Report the standard error
(ER_OPTION_PREVENTS_STATEMENT) when the global read_only variable
is set to ON.
|
| |/
|/| |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
LOCAL TABLE WHEN ONLY 1 LOCAL ROW
Description: When updating a federated table with UPDATE...
JOIN, the server consistently crashes with Signal 11 when
only 1 row exists in the local table involved in the join
and that 1 row can be joined with a row in the federated
table.
Analysis: An interaction between the federated engine and the
optimizer results in the crash. In our scenario, i.e. the local
table having only one row, the program follows a
different path because the table is treated as a constant
table by the join optimizer. So in this scenario
"index_read()" happens in the prepare phase,
since the optimizer plan is different for constant table joins.
In this case, "index_read_idx_map()" (inside handler.cc)
calls "index_read()", and inside "index_read()" the matching
rows are fetched and "stored_result" gets populated by
calling "store_result()". Just after "index_read()",
the "index_end()" function is called, and "index_end()"
frees "stored_result" by calling "free_result()".
So when execution reaches the "position()"
function, we hit the assertion
"DBUG_ASSERT(stored_result);". In all other scenarios (i.e. a
table with more than one row), the optimizer plan is different
and "index_read()" happens in the execution phase.
Fix: Add a separate ha_federated member
function for "index_read_idx_map()" which handles the
federated engine separately, so that position() is
called before the index_end() call in the constant table scenario.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
ERRORS IN THE FK SECTION
ANALYSIS
--------
Any error during the renaming of a table was
incorrectly logged in dict_foreign_err_file,
and it showed up in the foreign key section when
we issue the query "show engine innodb status".
FIX
---
Prevent renaming errors from being logged in the
dict_foreign_err_file section.
[ Approved by marko #rb 2501 ]
|
| |
| |
| |
| |
| |
| |
| | |
IT IS DONE IN-PLACE
Add testcase as innodb-change-buffer-recovery.test
|
| |
| |
| |
| | |
Adding test cases for BUG#16451878.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
FAILS TO DROP CREATED EVENTS:
- Check for triggers should exclude mtr's own
- Move the code to before checksum table, as it might affect the
result of some audit_log tests (it does in 5.6)
- Replace SHOW STATUS LIKE 'slave_open_temp_tables' to be as in 5.6
|
| |
| |
| |
| |
| |
| | |
fix: DROP EVENT e1;
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Bug#68338 RFE: make tmpdir a build-time configurable option
Background: Some distributions use tmpfs for mounting /tmp by
default, which has some advantages but also brings new
issues. Fedora, for example, started using tmpfs on /tmp in
version 18. If not configured otherwise in my.cnf, MySQL uses the
system constant P_tmpdir, which expands to /tmp on Linux. This can
introduce problems with the limited space in /tmp and also
data loss in the case of a replication slave [1].
If a distribution would prefer to use /var/tmp, which should be
better for MySQL's purposes, it has to patch the source or
change the tmpdir option in my.cnf, which is however not updated
if it already exists.
Thus, it is useful to be able to specify the default tmpdir
path using a configure option, while using P_tmpdir in case it is
not defined explicitly.
Based on a contribution from Honza Horak
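A sketch of the resulting fallback chain (macro name as used by the
post-push fixes noted earlier): prefer a build-time DEFAULT_TMPDIR,
then the system's P_tmpdir, then /tmp.

    #include <cstdio>  // defines P_tmpdir on POSIX systems

    #ifndef DEFAULT_TMPDIR
    #ifdef P_tmpdir
    #define DEFAULT_TMPDIR P_tmpdir
    #else
    #define DEFAULT_TMPDIR "/tmp"
    #endif
    #endif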
|
| |
| |
| |
| |
| |
| | |
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)
Post-push fix: fix a -Werror compiler issue.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)
Problem: If the slave receives a corrupted row event,
the slave server crashes.
Analysis: When the slave unpacks the row event, it does
not validate the data before applying the event. If the
data is corrupted, e.g. the length of a field is wrong,
it can end up reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog using the '-v' option. Due to the -v
option, the tool tries to print the values of all the
fields, and a corrupted field length can lead to a crash.
Fix: Before unpacking a field, its length is verified.
Only if it falls within the event range is it unpacked;
otherwise an "ER_SLAVE_CORRUPT_EVENT" error is thrown.
In the mysqlbinlog -v case, the field value is not
printed and processing of the file is stopped (a sketch of
this bounds check follows the per-file notes below).
sql/field.h:
Removed a function which is not required anymore
sql/log_event.cc:
Adding a validation on the field length before
the tool tries to print the value.
sql/log_event.h:
Changing unpack_row call according to the new arguments
sql/log_event_old.h:
Changing unpack_row call according to the new arguments
sql/rpl_record.cc:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before doing 'unpack' field.
sql/rpl_record.h:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before doing 'unpack' field.
sql/rpl_utility.cc:
Now calc_field_size() is required for client too.
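A minimal sketch of the validation (simplified signature, not the
actual unpack_row()):

    #include <cstddef>
    #include <cstdint>

    // True when the field's declared length fits in the bytes that
    // remain before row_end; a corrupt length fails the check and
    // the caller raises ER_SLAVE_CORRUPT_EVENT instead of reading
    // past the event.
    bool field_fits(const uint8_t *ptr, size_t field_len,
                    const uint8_t *row_end) {
        return ptr <= row_end && field_len <= (size_t)(row_end - ptr);
    }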
|
|
|
|
|
|
|
|
| |
MYSQLBUG SCRIPT DURING INSTALLATION
Bug#68742 : Bug#16530527 : OBSOLETE BUGREPORT ADDRESSES
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
DESTRUCTED THD OBJ
Prior to fix, function check_performance_schema() could leave
behind stale pointers in thread local storage, for the following keys:
- THR_THD (used by _current_thd)
- THR_MALLOC (used for memory allocation)
This is an unsafe practice which can potentially cause crashes
and can lead to other bugs when the code is modified during
maintenance.
With this fix, thread local storage keys used temporarily within
function check_performance_schema() are cleaned up after use.
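The cleanup pattern, illustrated with plain pthreads (the server uses
its own wrappers around these keys):

    #include <pthread.h>

    void clear_tls_keys(pthread_key_t thr_thd, pthread_key_t thr_malloc) {
        // Reset the slots after temporary use so no stale pointer
        // survives the function.
        pthread_setspecific(thr_thd, nullptr);
        pthread_setspecific(thr_malloc, nullptr);
    }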
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK
Problem:
COUNT DISTINCT gives an incorrect result when it uses a Unique
tree and its last inserted record has a null value.
Here is how COUNT DISTINCT is processed, given that this query is not
using loose index scan.
When a row is produced as a result of joining tables (there is only
one table here), we store the SELECTed value in a Unique tree. This
allows elimination of any duplicates, and thus implements DISTINCT.
When we have processed all rows like this, we walk the Unique tree,
counting its elements, in Aggregator_distinct::endup() (tree->walk());
for each element we call Item_sum_count::add(). That function wants to
ignore any NULL value, and for that it checks
item_sum->args[0]->null_value. This is a mistake: when walking the
Unique tree, the value to be aggregated is not item_sum->args[0] but
rather table->field[0].
Solution:
Instead of item_sum->args[0]->null_value, use arg_is_null(), which
knows where to look (as in the fix for bug 57932).
As a consequence of this solution, we have to make arg_is_null() a
little more general:
1) Because it was so far only used for AVG() (which always has a
single argument), this function was looking at a single argument; now
that it has to work with COUNT(DISTINCT expression1, expression2), it
must look at all arguments.
2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in
Item_sum_count::add(), it follows that we are also using it for
COUNT(no DISTINCT) (same add()). For COUNT(no DISTINCT), the
nullness to check is that of item_sum->args[0]. But the null_value
of such an item is reliable only if val_*() has been called on it. So
far arg_is_null() was always used after a call to arg_val*(), so it
could rely on null_value; but for COUNT there is no call to
arg_val*(), so arg_is_null() has to call is_null() instead.
Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by
Xiaobin Lin from Taobao.
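A hypothetical sketch of the generalized arg_is_null() (simplified
types, not the actual Item_sum code): for DISTINCT the value sits in
the Unique tree's table fields, and every argument must be inspected,
not just args[0].

    struct Field { bool null_flag = false; bool is_null() const { return null_flag; } };
    struct Item  { bool is_null() { return false; /* may call val_*() */ } };

    bool arg_is_null(bool use_distinct_values, Field **field, Item **args,
                     unsigned arg_count) {
        for (unsigned i = 0; i < arg_count; i++) {
            bool is_null = use_distinct_values ? field[i]->is_null()
                                               : args[i]->is_null();
            if (is_null) return true;
        }
        return false;
    }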
|
|\ |
|