list ${OPENSSL_ROOT_DIR}/lib64 explicitly, because
CMake versions below 3.23.0 won't search there.
Summary of changes:
- MD_CTX_SIZE is increased.
- EVP_CIPHER_CTX_buf_noconst(ctx) does not work anymore; it now points
  to an unknown location. The previous assumption (made because the
  function does not seem to be documented) was that it points to the
  last partial source block. Add our own partial block buffer for NOPAD
  encryption instead (a sketch follows below this list).
- SECLEVEL in CipherString in openssl.cnf is downgraded from 1 to 0 to
  keep TLSv1.0 and TLSv1.1 possible (according to
  https://github.com/openssl/openssl/blob/openssl-3.0.0/NEWS.md this is
  needed, even though the manual for SSL_CTX_get_security_level claims
  that it should not be necessary).
- Work around an Ssl_cipher_list issue: it now returns TLSv1.3 ciphers
  in addition to what was set in --ssl-cipher.
- The ctx_buf buffer must now be aligned to 16 bytes with OpenSSL
  (previously only with WolfSSL), or crashes will happen.
- Updated the aes-t test to be easier to debug, using a function rather
  than a huge multiline macro. Added a test that does "nopad" encryption
  piece-wise, to test the replacement of EVP_CIPHER_CTX_buf_noconst.
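A minimal sketch of that replacement, assuming AES's 16-byte block size;
the names NoPadCtx and nopad_update are illustrative, not the actual
MariaDB code. The caller tracks the stream's trailing partial block on
the side while feeding data through EVP_EncryptUpdate():

  #include <openssl/evp.h>
  #include <string.h>

  struct NoPadCtx
  {
    EVP_CIPHER_CTX *ctx;
    unsigned char tail[16];   /* last partial source block (AES block) */
    size_t tail_len;
  };

  /* Feed one chunk; dst must have room for len + block size - 1 bytes.
     We remember the trailing partial block ourselves instead of reading
     it out of EVP_CIPHER_CTX internals. */
  static int nopad_update(NoPadCtx *c, unsigned char *dst, int *dst_len,
                          const unsigned char *src, size_t len)
  {
    size_t new_tail= (c->tail_len + len) % sizeof c->tail;
    if (new_tail > 0)
    {
      if (new_tail <= len)
        memcpy(c->tail, src + len - new_tail, new_tail); /* all from src */
      else
        memcpy(c->tail + c->tail_len, src, len); /* no wrap: append chunk */
    }
    c->tail_len= new_tail;
    return EVP_EncryptUpdate(c->ctx, dst, dst_len, src, (int) len);
  }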
A query of the form
SELECT DISTINCT expr_that_is_inferred_to_be_const LIMIT 0 OFFSET n
produces one row when it should produce none. The issue was in
JOIN_TAB::remove_duplicates(), in the piece of logic that tried to
avoid duplicate removal for such cases but didn't account for a
possible "LIMIT 0".
Fixed by making Select_limit_counters::set_limit() change OFFSET to 0
when LIMIT is 0.
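A minimal sketch of that fix, assuming member names select_limit_cnt and
offset_limit_cnt (the actual definitions in sql/sql_limit.h may differ):

  void Select_limit_counters::set_limit(ha_rows limit, ha_rows offset)
  {
    if (limit == 0)
      offset= 0;          /* LIMIT 0 returns no rows, so there is nothing
                             for OFFSET to skip; normalize it away */
    offset_limit_cnt= offset;
    select_limit_cnt= limit;
  }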
The test innodb.log_file_size would occasionally fail with
an assertion failure !buf_pool.any_io_pending(). Let us wait
for the page cleaner thread to become idle already in
srv_prepare_to_delete_redo_log_file(), like we used to.
The code was backported from 10.6 commit
bd03c0e51629e1c3969a171137712a6bb854c232. See that commit message for
details.
Apart from the above commit, trx_lock_t::wait_trx was also backported from
MDEV-24738. trx_lock_t::wait_trx is protected with lock_sys.wait_mutex
in 10.6, but that mutex was implemented only in MDEV-24789. As there is no
need to backport MDEV-24789 for MDEV-27025, trx_lock_t::wait_trx is
protected with the same mutexes as trx_lock_t::wait_lock.
This fix should not break innodb-lock-schedule-algorithm=VATS. That
algorithm uses an Eldest-Transaction-First (ETF) heuristic, which prefers
older transactions over newer ones. In this fix we just insert a granted
lock right before the last granted lock of the same transaction, which
does not change the transactions' execution order.
The changes in lock_rec_create_low() should not break Galera Cluster:
there is a big "if" branch for WSREP there. This branch is necessary to
provide the correct transaction execution order, and should not be changed
for the current bug fix.
When a lock is checked for conflict, ignore other locks on the record if
they wait for the requesting transaction.
lock_rec_has_to_wait_in_queue() does not iterate all locks for the page,
but only the locks located before the waiting lock in the queue. So there
is an invariant: any lock in the queue can wait only for a lock located
before it in the queue.
When the conflicting lock waits for the transaction of the requesting
lock, we need to place the requesting lock before that waiting lock in
the queue to preserve the invariant. That is why we look for the first
lock waiting for the requesting transaction, and place the new lock just
after the last granted lock of the requesting transaction that precedes
this first waiting lock (see the sketch below).
Example:
trx1 waiting lock, trx1 granted lock, ..., trx2 lock - waiting for trx1
place new lock here -----------------^
There are also implicit locks which are lazily converted to explicit
ones, and we need to place the newly created explicit lock at the correct
position in the queue. All explicit locks converted from implicit ones are
placed just after the last non-waiting lock of the same transaction that
precedes the first lock waiting for that transaction.
Code review and cleanup was done by Marko Mäkelä.
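As a rough illustration of that queue scan (the lock_t members and the
is_waiting()/waits_for() helpers are hypothetical here, not the actual
lock0lock.cc interfaces):

  /* Find where to insert a new granted lock of transaction trx. */
  lock_t *insert_after= NULL;
  for (lock_t *lock= first_lock_in_queue; lock; lock= lock->next)
  {
    if (lock->is_waiting() && lock->waits_for(trx))
      break;                    /* locks from here on may wait for trx */
    if (lock->trx == trx && !lock->is_waiting())
      insert_after= lock;       /* last granted lock of trx seen so far */
  }
  /* Inserting right after insert_after (or at the queue head when it is
     NULL) keeps the invariant that a lock only waits for locks located
     before it in the queue. */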
GCC does not understand that the variable have_ndv determines
whether the variable ndv_ll is initialized. Let us add a
redundant initialization to pacify GCC.
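The pattern, as a generic illustration; histogram_ndv and read_ndv() are
placeholders, not the actual code:

  longlong histogram_ndv(bool have_ndv)
  {
    longlong ndv_ll= 0;             /* redundant init to pacify GCC */
    if (have_ndv)
      ndv_ll= read_ndv();           /* the only real assignment */
    /* ... intervening work ... */
    return have_ndv ? ndv_ll : 0;   /* without the init, GCC reports
                                       "may be used uninitialized" here */
  }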
MTR still uses JSON_HB as the default.
In Histogram_json_hb::point_selectivity(), do return a selectivity of 0.0
when the histogram says so.
The logic of "do not return a 0.0 estimate as it causes a multiply-by-zero
meltdown in cost and cardinality calculations" is moved into
records_in_column_ranges(), where it is applied *once* per column pair (as
opposed to once per range, which can cause the error to add up to a large
number when there are many ranges).
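A hedged sketch of that placement (the loop shape and the names ranges,
n_ranges and table_rows are illustrative, not the actual
records_in_column_ranges() code):

  double rows= 0.0;
  for (uint i= 0; i < n_ranges; i++)
    rows+= hist->range_selectivity(ranges[i]) * table_rows; /* may add 0.0 */
  set_if_bigger(rows, 1.0);   /* clamp once per column, not once per range,
                                 so the lower bound does not accumulate */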
Followup: remove this line from get_column_range_cardinality()
set_if_bigger(res, col_stats->get_avg_frequency());
and make sure it is only used with the binary histograms.
For JSON histograms, it makes the estimates unnecessarily imprecise.
Added a testcase
Fix special handling for values that are right next to buckets with ndv=1.
Fix the code in Histogram_json_hb::range_selectivity that handles
special cases: a non-inclusive endpoint hitting a bucket boundary...
In read_bucket_endpoint(), handle all possible parser states.
Encode such characters in hex.
Save extra information in the histogram:
"target_histogram_size": nnn,
"collected_at": "(date and time)",
"collected_by": "(server version)",
Also report JSON histogram load errors into error log, like it is already
done with other histogram/statistics load errors.
Add test coverage to see what happens if one upgrades but does NOT run
mysql_upgrade.
The previous JSON parser was using an API which made the parsing
inefficient: the same JSON contents were parsed again and again.
Switch to using a lower-level parsing API which allows the parsing
to be done efficiently.
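A hedged sketch of what scanning with the lower-level json_engine_t API
looks like (error handling omitted; which states the histogram code
actually inspects is an assumption here):

  json_engine_t je;
  json_scan_start(&je, cs, (const uchar *) str, (const uchar *) str_end);
  while (json_scan_next(&je) == 0)      /* one pass over the document */
  {
    switch (je.state)
    {
    case JST_KEY:
      /* read the key here and parse its value in place, instead of
         re-scanning the whole document for every lookup */
      break;
    default:
      break;
    }
  }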
- Make Histogram_json_hb::range_selectivity handle singleton buckets
specially when computing selectivity of the max. endpoint bound.
(for min. endpoint, we already do that).
- Also, fixed comments for Histogram_json_hb::find_bucket
When loading the histogram, use table->field[N], not table->s->field[N].
When we used the latter we would corrupt the field's default value. One
of the consequences of that would be that AUTO_INCREMENT fields would
stop working correctly.
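For context, a simplified view (N is a placeholder; this is not the patch
itself): the TABLE_SHARE's Field objects point into the shared
default-values record, while the TABLE's own Field objects point into
that table's row buffer.

  Field *per_table= table->field[N];    /* ptr inside table->record[0] */
  Field *shared= table->s->field[N];    /* ptr inside s->default_values */
  /* Storing through 'shared' clobbers the defaults that later statements
     (e.g. INSERTs relying on an AUTO_INCREMENT default) are built from. */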
Handle the case where the last value in the table cannot be represented
in utf8mb4.
.. for non-existent values.
Handle this special case.
Fix a bug in position_in_interval(). Do not overwrite one interval endpoint
with another.
The problem was introduced in the fix for MDEV-26724. That patch made it
possible for histogram collection to fail. In particular, it fails for
non-assigned characters.
When histogram construction fails, we also abort the computation of
COUNT(DISTINCT). When we then try to use that value, we get valgrind
failures.
Switched the code to abort the statistics collection in this case.
When computing bucket_capacity= records/histogram->get_width(), round
the value UP, not down.
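A short illustration with the standard round-up idiom (the surrounding
code is not shown): with records = 10 and a width of 4 buckets, rounding
down gives a capacity of 2, which would need 5 buckets, while rounding up
gives 3, which fits in 4.

  /* Round up so that get_width() buckets always suffice for all rows. */
  ha_rows bucket_capacity=
    (records + histogram->get_width() - 1) / histogram->get_width();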
Part#3:
- make json_escape() return different errors on conversion error
and on out-of-space condition.
- Make histogram code handle conversion errors.
Fix the description
Change it to LONGBLOB.
Also, update_statistics_for_table() should not "swallow" an error
from open_stat_tables.
.. part#2: correctly pass the charset to JSON [un]escape functions
Histogram_json_hb::range_selectivity
Add testcase
Item_func_decode_histogram::val_str should correctly set null_value
when "decoding" JSON histogram.
Correctly handle empty string when [un]escaping JSON
Escape values when serializing to JSON. Un-escape when reading back.
Do not put Histogram objects on MEM_ROOT at all
Provide buffer of sufficient size.