| Commit message | Author | Age | Files | Lines |
Scheduler: An error occurred when initializing system tables"
If columns or indexes are modified, renamed, or dropped in an ALTER TABLE,
the statistics tables must be updated accordingly (e.g. all statistics for a
dropped column should be dropped as well). But if a statistics table does not
exist, that is not a reason to fail the whole ALTER TABLE operation; such an
error should be ignored.
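As an illustration only (the database, table, and column names are hypothetical,
and the server performs the equivalent cleanup internally), dropping a column
should leave no stale rows behind in the statistics tables:

  ALTER TABLE t1 DROP COLUMN c1;
  -- conceptually equivalent cleanup of the EITS data for the dropped column:
  DELETE FROM mysql.column_stats
   WHERE db_name='test' AND table_name='t1' AND column_name='c1';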
comments, rename a function to reflect its usage better
Statistics were not read for a table referenced by a CREATE TABLE query.
Enforce reading statistics for the commands CREATE TABLE, SET and DO.
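For example (a sketch with hypothetical table names), the SELECT part of a
CREATE TABLE ... SELECT should now see the engine-independent statistics of the
source table:

  SET use_stat_tables='preferably';
  CREATE TABLE t2 AS SELECT * FROM t1 WHERE a BETWEEN 1 AND 10;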
information schema
Before reading histograms for a table, check whether the statistics structures
have been allocated; if they have not, do not try to read histograms for that
table.
`field->table->stats_is_read' failed.
Fixed the assertion failure by making sure that EITS is not used if the column
statistics were not allocated.
In is_eits_usable(), we disable an assertion that fails due to
MDEV-19334.
The command SHOW INDEXES ignored the setting of the system variable
use_stat_tables to the value 'preferably' and showed statistical data received
from the engine. Similarly, queries over the table STATISTICS from
INFORMATION_SCHEMA ignored this setting. This happened because the function
fill_schema_table_by_open() did not read any data from the statistical tables.
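With the fix, both forms below are expected to report statistics taken from the
statistics tables when those are available (the table name is hypothetical):

  SET use_stat_tables='preferably';
  SHOW INDEXES FROM t1;
  SELECT * FROM information_schema.STATISTICS
   WHERE table_schema='test' AND table_name='t1';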
For single-table and multi-table UPDATE statements, engine-independent
statistics were not being read even if the statistics had been collected.
Fixed it so that when optimizer_use_condition_selectivity > 2, the available
statistics are read for UPDATE queries.
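A sketch of the affected scenario (table and column names are made up):

  SET optimizer_use_condition_selectivity=3;          -- any value > 2
  UPDATE t1 SET b=b+1 WHERE a < 100;                  -- single-table update
  UPDATE t1, t2 SET t1.b=t2.b WHERE t1.a=t2.a AND t2.c < 10;  -- multi-table update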
To fix the crash, the server must store statistical values in the statistics
tables in a multi-byte-safe way.
Also, there is no need to raise warnings if truncation happens while storing
values in statistical fields.
And, again, *don't use thd->clear_error()*.
This fixes the main.sp_notembedded failure on various amd64 platforms
(where ER_STACK_OVERRUN_NEED_MORE happens to fire in open_stat_tables()
under Dummy_error_handler).
When sampling data through ANALYZE TABLE, use the estimator to get a better
estimate of avg_frequency instead of just using the raw sampled data.
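The resulting estimate can be inspected in the statistics tables after a
(possibly sampled) ANALYZE run, e.g. (hypothetical names):

  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  SELECT column_name, avg_frequency
    FROM mysql.column_stats
   WHERE db_name='test' AND table_name='t1';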
The variable controls the amount of sampling ANALYZE TABLE performs.
If ANALYZE TABLE with histogram collection is too slow, one can reduce the time
taken by setting analyze_sample_percentage to a lower percentage of the total
number of rows.
Setting it to 0 makes the server use a formula to compute how many rows to
sample: the number of rows collected is capped to a minimum of 50000 and
increases logarithmically with a coefficient of 4096. The coefficient is chosen
so that we expect an estimation error of less than 3%, according to the paper
"Random Sampling for Histogram Construction: How much is enough?"
by Surajit Chaudhuri, Rajeev Motwani, Vivek Narasayya, ACM SIGMOD, 1998.
The drawback of sampling is that avg_frequency is computed imprecisely and will
yield a smaller number than the real one.
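Typical usage (a sketch; the table name is hypothetical):

  SET SESSION analyze_sample_percentage=5;    -- sample roughly 5% of the rows
  ANALYZE TABLE t1 PERSISTENT FOR ALL;

  SET SESSION analyze_sample_percentage=0;    -- let the server choose the sample size
  ANALYZE TABLE t1 PERSISTENT FOR ALL;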
The add method does not need to be given the row order number. It was only used
to detect whether the minimum/maximum value had already been populated, so as
to force an update on the first encountered value.
Also, apply the MDEV-17957 changes to encrypted page checksums,
and remove error message output from the checksum function,
because these messages would be useless noise when mariabackup
is retrying reads of corrupted-looking pages, and not that
useful during normal server operation either.
The error messages in fil_space_verify_crypt_checksum()
should be refactored separately.
@@use_stat_tables= PREFERABLY
The problem here is that EITS does not calculate statistics for the partitions
of a table. So, as a temporary solution, do not read EITS statistics for
partitioned tables.
Also disable reading of EITS for columns that participate in the partition list
of a table.
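A sketch of the affected case (all names are made up); with EITS enabled for
queries, a partitioned table is now excluded from EITS reads:

  SET use_stat_tables=PREFERABLY;
  CREATE TABLE tp (a INT, b INT) PARTITION BY HASH (a) PARTITIONS 4;
  ANALYZE TABLE tp PERSISTENT FOR ALL;
  SELECT * FROM tp WHERE b < 10;   -- EITS is not consulted for the partitioned table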
- Call delete_statistics_tables() after lock_table_names() in DROP TABLE.
  This avoids a deadlock issue with FTWRL and future backup locks.
- Added some missing clear_error() calls.
- Ensure we don't clear an error caused by the caller.
- Updated function comments.
Added two new values to the server variable use_stat_tables:
COMPLEMENTARY_FOR_QUERIES and PREFERABLY_FOR_QUERIES.
With either of these values, EITS data is not collected by a plain
  analyze table t1;
To collect EITS one has to use the PERSISTENT syntax, e.g.
  analyze table t1 persistent for columns (col1,col2...) indexes (idx1,idx2...) / ALL
Also changed the default value from NEVER to PREFERABLY_FOR_QUERIES.
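A usage sketch under the new default (table, column, and index names are
hypothetical):

  SET use_stat_tables=PREFERABLY_FOR_QUERIES;
  ANALYZE TABLE t1;                                              -- no EITS collected
  ANALYZE TABLE t1 PERSISTENT FOR COLUMNS (a,b) INDEXES (idx1);  -- EITS for a, b, idx1
  ANALYZE TABLE t1 PERSISTENT FOR ALL;                           -- EITS for all columns and indexes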
optimizer_use_condition_selectivity set to 4
There is no need to read statistics for tables that are not user tables.
Memory for the structures that collect statistics is allocated only for user
tables.
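A sketch of the kind of query affected (the schema name is hypothetical);
system and INFORMATION_SCHEMA tables no longer trigger EITS reads:

  SET optimizer_use_condition_selectivity=4;
  SELECT * FROM information_schema.TABLES WHERE table_schema='test';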
for blob column
ANALYZE TABLE <table> does not collect statistical data on min/max values
for BLOB columns of <table>. However, these values can be added into
mysql.column_stats manually by executing the proper statements.
Unfortunately, this led to a memory leak because the memory allocated for these
values was never freed.
This patch provides the server with a function that frees the memory allocated
for min/max statistical values of BLOB types.
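A sketch of such a manual statement (the database, table, and column names and
the min/max literals are made up; it assumes ANALYZE has already created the
row for the BLOB column):

  UPDATE mysql.column_stats
     SET min_value='aaaa', max_value='zzzz'
   WHERE db_name='test' AND table_name='t1' AND column_name='blob_col';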
statistics_for_tables_is_needed
Backport the fix f214d365121 to 10.0:
  Author: Sergei Golubchik <serg@mariadb.org>
  Date:   Tue Apr 17 00:44:34 2018 +0200
  ASAN error in is_stat_table()
  don't memcmp beyond the first argument's end
  Also: use my_strcasecmp(table_alias_charset), like elsewhere, not memcmp
for blob column
ANALYZE TABLE <table> does not collect statistical data on min/max values
for BLOB columns of <table>. However, these values can be added into
mysql.column_stats manually by executing the proper statements.
Unfortunately, this led to a memory leak because the memory allocated for these
values was never freed.
This patch provides the server with a function that frees the memory allocated
for min/max statistical values of BLOB types.
Temporarily changed the test case until MDEV-16711 is fixed, as without that
fix the test case for MDEV-16757 did not fail only on 10.0.