author    | Andrei Elkin <andrei.elkin@mariadb.com> | 2018-07-27 22:55:18 +0300
committer | Andrei Elkin <andrei.elkin@mariadb.com> | 2019-01-24 20:44:50 +0200
commit    | 5d48ea7d07b481ae3930486b4b039e1454273190 (patch)
tree      | 86e9be451b867ad4f4df061c18d0546744e45ee0 /sql/item_func.h
parent    | f9ac7032cbc4b7b9af9c8122e6ebfd91af9fbaf9 (diff)
download  | mariadb-git-5d48ea7d07b481ae3930486b4b039e1454273190.tar.gz
MDEV-10963 Fragmented BINLOG query
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212

The size of a base64-encoded Rows_log_event exceeds its vanilla byte
representation by a factor of about 4/3. When a binlogged event is about
1GB in size, mysqlbinlog generates a BINLOG query that cannot be sent out
due to its size.
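For a feel of the numbers: base64 turns every 3 input bytes into 4 output characters, so the encoded text is roughly 4/3 of the raw event before any SQL wrapping is added. A minimal standalone sketch of that arithmetic (illustrative only, not MariaDB code):

```cpp
#include <cstdint>
#include <cstdio>

// Base64 output size: every 3 input bytes become 4 output characters
// (padding rounds the last group up); newlines and the surrounding
// BINLOG '...' syntax add a little more on top.
static uint64_t base64_encoded_size(uint64_t n)
{
  return (n + 2) / 3 * 4;
}

int main()
{
  const uint64_t one_gb = 1ULL << 30;
  std::printf("1GB of row data -> %llu bytes of base64 text\n",
              static_cast<unsigned long long>(base64_encoded_size(one_gb)));
  return 0;
}
```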
This is fixed by fragmenting the BINLOG argument C-string into (approximate)
halves when the base64-encoded event exceeds 1GB. In that case mysqlbinlog
puts out

SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;

to represent a big BINLOG statement.
For prompt memory release, the BINLOG statement handler resets the argument
user variables in the middle of processing, as if @binlog_fragment_{0,1} = NULL
had been assigned.
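This is also why the diff below makes update_hash() visible through sql/item_func.h: the BINLOG statement handler needs a way to null out each fragment variable as soon as its contents have been consumed. A rough sketch of the idea, assuming the server's internal headers; the helper shown here is illustrative, not the actual function:

```cpp
// Illustrative only: concatenate the two fragments passed to
//   BINLOG @binlog_fragment_0, @binlog_fragment_1;
// and release each one as soon as it has been copied, as if the user had
// executed SET @binlog_fragment_N = NULL.  Assumes sql_class.h/item_func.h.
static bool concat_and_release_fragments(user_var_entry *frag[2], String *dst)
{
  for (int i= 0; i < 2; i++)
  {
    if (dst->append(frag[i]->value, frag[i]->length))
      return true;                                  // out of memory
    // Prompt memory release: reset the user variable to NULL right away
    // instead of keeping ~1GB of text alive until the statement ends.
    update_hash(frag[i], true /* set_null */, NULL, 0,
                STRING_RESULT, &my_charset_bin, false);
  }
  return false;
}
```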
Note that two fragments are enough, though the client and server may still
need to raise their @@max_allowed_packet to accommodate the fragment size
(which they would have to do anyway with a greater number of fragments,
should that ever be desired).
At the lower level the following changes are made:

Log_event::print_base64()
  still calls the encoder and stores the encoded data into a cache, but now
  *without* doing any formatting. Formatting is deferred to the moment the
  cache is copied to an output file (e.g. the mysqlbinlog output). The
  no-formatting behavior is also reflected in a changed meaning of the last
  argument, which now specifies whether to cache the encoded data.
Rows_log_event::print_helper()
  is made to invoke a specialized fragmenting cache-to-file copying function,
  copy_cache_to_file_wrapped(), which takes care of the fragmentation and
  optionally wraps the encoded strings (fragments) into SQL stanzas (see the
  sketch below).
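Taken together, the print path now works roughly like this: print_base64() fills a cache with raw base64 text, and the copy step decides whether to emit it as a single BINLOG '...' statement or as the fragmented SET/BINLOG sequence shown above. The sketch below is a simplified rendering of that wrapping step; the parameter names, exact formatting, and fragment-size threshold are illustrative, not the function added by this commit:

```cpp
// Simplified sketch, error handling omitted: copy a cache of base64 text
// to `file`, wrapping it into SQL.  The cache is assumed to have been
// reinitialized for reading by the caller, and `total` is the number of
// encoded bytes it holds (measured before the switch to read mode).
static void copy_cache_to_file_wrapped(IO_CACHE *cache, FILE *file,
                                       size_t total, bool do_wrap,
                                       size_t frag_limit)
{
  if (!do_wrap)
  {
    my_b_copy_all_to_file(cache, file);  // raw base64, no SQL around it
  }
  else if (total <= frag_limit)
  {
    // Small enough: one ordinary BINLOG statement.
    fprintf(file, "BINLOG '\n");
    my_b_copy_all_to_file(cache, file);
    fprintf(file, "';\n");
  }
  else
  {
    // Over the limit: two fragments are always enough, because a single
    // event is capped at ~1GB and base64 grows it by at most 4/3.
    size_t half= (total + 1) / 2;
    for (int i= 0; i < 2; i++)
    {
      fprintf(file, "SET @binlog_fragment_%d='\n", i);
      my_b_copy_to_file(cache, file, half);   // copies at most `half` bytes
      fprintf(file, "';\n");
    }
    fprintf(file, "BINLOG @binlog_fragment_0, @binlog_fragment_1;\n");
  }
}
```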
my_b_copy_to_file()
  is refactored into my_b_copy_all_to_file(). The former function is
  generalized to accept a limit argument that constrains how much is copied,
  and it no longer reinitializes the cache into reading mode. The limit has
  no effect on a fully read cache (sketched below).
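A sketch of the generalized copy routine described above, assuming the mysys IO_CACHE API (my_b_bytes_in_cache(), my_b_fill(), my_fwrite(), reinit_io_cache()); the actual definitions are in mysys and may differ in detail:

```cpp
// Copy at most `count` bytes from `cache` to `file`.  The caller must have
// already reinitialized the cache for reading; this function no longer does
// reinit_io_cache() itself.  If the cache runs out before `count` bytes have
// been copied, the limit simply has no further effect.
int my_b_copy_to_file(IO_CACHE *cache, FILE *file, size_t count)
{
  size_t bytes_in_cache= my_b_bytes_in_cache(cache);
  do
  {
    size_t chunk= MY_MIN(bytes_in_cache, count);
    if (my_fwrite(file, cache->read_pos, chunk,
                  MYF(MY_WME | MY_NABP)) == (size_t) -1)
      return 1;                                   // write error
    cache->read_pos+= chunk;
    count-= chunk;
  } while (count && (bytes_in_cache= my_b_fill(cache)));
  return 0;
}

// Convenience wrapper: reinitialize the cache for reading and copy all of it.
int my_b_copy_all_to_file(IO_CACHE *cache, FILE *file)
{
  if (reinit_io_cache(cache, READ_CACHE, 0, FALSE, FALSE))
    return 1;
  return my_b_copy_to_file(cache, file, SIZE_T_MAX);
}
```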
Diffstat (limited to 'sql/item_func.h')
-rw-r--r-- | sql/item_func.h | 4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/sql/item_func.h b/sql/item_func.h
index 2c0e3a62f6a..e3eab02f213 100644
--- a/sql/item_func.h
+++ b/sql/item_func.h
@@ -2283,4 +2283,8 @@ bool eval_const_cond(COND *cond);

 extern bool volatile mqh_used;

+bool update_hash(user_var_entry *entry, bool set_null, void *ptr, uint length,
+                 Item_result type, CHARSET_INFO *cs,
+                 bool unsigned_arg);
+
 #endif /* ITEM_FUNC_INCLUDED */