| Commit message | Author | Age | Files | Lines |
|\ |
|
| |\ |
|
| | |\ |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. This change leads to the
following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which opens up
more optimization possibilities. It also enables detection of unused
variables whose default constructor is empty but was not explicitly marked
as such.
One change was required on top of this patch in sql/opt_range.cc:
result_keys, an unused variable of the template class Bitmap, now correctly
triggers an unused-variable warning.
Declaring the Bitmap constructor as defaulted lets the compiler see that
instantiating the class has no side effects. Previously the compiler could
not issue the warning, as it had to assume that Bitmap (being a template)
might do real work in its default constructor, which suppressed the
"unused variable" warning.
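A minimal illustration of that effect (simplified stand-in code, not the
actual Bitmap class): with a user-provided empty constructor the compiler
must assume construction may have side effects, so -Wunused-variable stays
silent; with "= default" the constructor is trivial and the warning can fire.

  template <unsigned width> class Bitmap_demo
  {
    unsigned long long buffer[(width + 63) / 64];
  public:
    Bitmap_demo() = default;   // trivial: unused instances can be reported
    // Bitmap_demo() {}        // user-provided: would suppress the warning
  };

  void probe()
  {
    Bitmap_demo<64> result_keys;   // warning: unused variable 'result_keys'
  }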
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
If the final range restrictions (SEL_ARG tree) over the GROUP BY
columns are single-point, we can compute the number of GROUP BY groups.
Example: for the query
SELECT ... FROM tbl
WHERE keypart1 IN (1,2,3) and keypart2 IN ('foo','bar')
the number of groups is bounded by 3 * 2 = 6.
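A minimal sketch of this estimate (illustrative only, not the MariaDB
implementation): when every GROUP BY key part is restricted to a list of
single-point intervals, the group count is bounded by the product of the
list sizes.

  #include <cstddef>

  static size_t estimate_group_count(const size_t *points_per_keypart,
                                     size_t keyparts)
  {
    size_t groups= 1;
    for (size_t i= 0; i < keyparts; i++)
      groups*= points_per_keypart[i];   // e.g. 3 * 2 = 6 for the query above
    return groups;
  }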
Other things:
- Fixed the cost calculation to more correctly count the number of blocks
  that may be read. The old code could use the total number of blocks in
  the file even when a range was available.
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
calculate_cond_selectivity_for_table() is largely rewritten:
- Process keys in the order of rows found, smaller ranges first. If two
  ranges have an equal number of rows, use the one with more key parts.
  This helps us mark more fields as already handled so that they are not
  used for further selectivity calculations. See cmp_quick_ranges() and
  the comparator sketch after this list.
- Ignore keys with fields that were used by previous keys.
- Don't use rec_per_key[] to calculate selectivity for smaller
  secondary key parts. This does not work because the rec_per_key[] value
  is calculated in the context of the previous key parts, not for the
  key part itself. The one exception is when the previous key parts
  are all constants.
Other things:
- Ensure that select->cond_selectivity is always between 0 and 1.
- Ensure that select->opt_range_condition_rows is never updated to
a higher value. It is initially set to the number of rows in the table.
- We now store in table->opt_range_condition_rows the lowest number of
rows that any row-read-method has found so far. Before it was only done
for QUICK_SELECT_I::QS_TYPE_ROR_UNION and
QUICK_SELECT_I::QS_TYPE_INDEX_MERGE.
Now it is done for a lot more methods. See
calculate_cond_selectivity_for_table() for details.
- Calculate and use selectivity for the first key part of a multi-part key
  if the first key part is a constant.
  Example: WHERE key1_part1=5 and key2_part1=5. If key1 is used, we can
  still use the selectivity of key2.
Changes in test results:
- 'filtered' is slightly changed, usually to something slightly smaller.
- In a few cases the table order changed for GROUP BY queries. This is
  because the number of resulting rows from a GROUP BY query with MIN/MAX
  is now estimated to be smaller.
- A few chosen indexes changed, as we now prefer the index with more key
  parts when the number of resulting rows is the same.
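A minimal sketch of the range ordering described above (hypothetical names,
not the actual cmp_quick_ranges() implementation): sort by the number of
rows found, and on a tie prefer the range with more key parts.

  #include <algorithm>
  #include <vector>

  struct Range_est
  {
    unsigned long long rows;   // estimated rows in the range
    unsigned key_parts;        // number of key parts the range uses
  };

  static void sort_ranges(std::vector<Range_est> &ranges)
  {
    std::sort(ranges.begin(), ranges.end(),
              [](const Range_est &a, const Range_est &b)
              {
                if (a.rows != b.rows)
                  return a.rows < b.rows;          // smaller ranges first
                return a.key_parts > b.key_parts;  // more key parts on a tie
              });
  }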
|
|\ \ \
| |/ / |
|
| |\ \
| | |/ |
|
| | |\ |
|
| | | |
| | | |
| | | |
| | | | |
Signed-off-by: lilinjie <lilinjie@uniontech.com>
|
|\ \ \ \
| |/ / / |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
(addressed review input)
The issue was introduced by the @@optimizer_max_sel_arg_weight code.
key_or() calls SEL_ARG::update_weight_locally(). That function
takes O(tree->elements) time.
Without that call, key_or(big_tree, one_element_tree) would take
O(log(big_tree)) time when one_element_tree doesn't overlap with elements
of big_tree.
This means update_weight_locally() can cause a big slowdown.
The fix:
1. key_or() actually doesn't need to call update_weight_locally().
It calls SEL_ARG::tree_delete() and SEL_ARG::insert(). These functions
update SEL_ARG::weight.
It also manipulates the SEL_ARG objects directly, but these
modifications do not change the weight of the tree.
I've just removed the update_weight_locally() call.
2. and_all_keys() also calls update_weight_locally(). It manipulates the
SEL_ARG graph directly.
Removed that call and added the code to update the SEL_ARG graph weight.
Tests main.range and main.range_not_embedded already contain queries
that provide test coverage for the affected code.
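An illustrative sketch of why the extra call is unnecessary (simplified,
hypothetical types, not the SEL_ARG code): when insert() and tree_delete()
keep the total weight up to date incrementally, the caller sees the correct
weight without an O(N) recount afterwards.

  struct Node
  {
    unsigned weight= 1;     // weight contributed by this element
    // ... key data and tree links omitted ...
  };

  struct Tree
  {
    unsigned weight= 0;     // total weight, maintained incrementally

    void insert(Node *n)      { weight+= n->weight; /* O(log N) linking omitted */ }
    void tree_delete(Node *n) { weight-= n->weight; /* O(log N) unlinking omitted */ }
  };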
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Instead, get the "is_ascending" value from the array of KEY_PART
structures that describes the [pseudo-]index being analyzed.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Make the Range Optimizer support descending index key parts.
We follow the approach taken in MySQL-8.
See HowRangeOptimizerHandlesDescKeyparts for the description.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This reverts commit 2d21917e7db2db0900671aac2e29f49e4ff2acd7.
No explanations, lots of code moved, wrong cmake changes.
|
|/ / / |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The problem was in and_all_keys(), in the MDEV-9759 code that calculates
the new tree weight:
First, it didn't take into account the case when
(next->next_key_part=tmp) == NULL
and dereferenced a NULL pointer when getting tmp->weight.
Second, "if (param->alloced_sel_args > SEL_ARG::MAX_SEL_ARGS) break"
could leave the loop with incorrect value of weight.
Fixed by introducing SEL_ARG::update_weight_locally() and calling it
at the end of the function. This avoids having to handle all of the
above cases separately.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
(Variant #5, full patch, for 10.5)
Do not produce SEL_ARG graphs that would yield huge numbers of ranges.
Introduce a concept of a SEL_ARG graph's "weight". If we are about to
produce a graph whose "weight" exceeds the limit, remove the parts
of the SEL_ARG graph that represent the biggest key parts. Do so until
the graph's weight is within the limit (see the sketch after the list
below).
Includes
- debug code to verify SEL_ARG graph weight
- A user-visible @@optimizer_max_sel_arg_weight to control the optimization
- Logging the optimization into the optimizer trace.
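A minimal sketch of that pruning loop (hypothetical types, not the MariaDB
implementation): drop the graphs for the biggest key parts until the total
weight fits the limit.

  #include <algorithm>
  #include <vector>

  struct Keypart_graph { unsigned keypart_no; unsigned weight; };

  static void enforce_weight_limit(std::vector<Keypart_graph> &parts,
                                   unsigned max_weight)
  {
    std::sort(parts.begin(), parts.end(),
              [](const Keypart_graph &a, const Keypart_graph &b)
              { return a.keypart_no < b.keypart_no; });
    unsigned total= 0;
    for (const Keypart_graph &p : parts)
      total+= p.weight;
    while (!parts.empty() && total > max_weight)
    {
      total-= parts.back().weight;   // remove the biggest key part first
      parts.pop_back();
    }
  }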
|
|\ \ \
| |/ / |
|
| |\ \
| | |/ |
|
| | |\ |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
available free memory to use completely
let thd->killed abort the range optimizer
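An illustrative sketch of the abort pattern (simplified, hypothetical names,
not the range-optimizer code): a long-running range-analysis loop
periodically checks the kill flag and bails out instead of running to
completion.

  struct THD_demo { volatile bool killed= false; };

  static bool build_ranges(THD_demo *thd, unsigned n_parts)
  {
    for (unsigned i= 0; i < n_parts; i++)
    {
      if (thd->killed)
        return true;   // abort range analysis; the caller falls back gracefully
      // ... build range structures for key part i ...
    }
    return false;
  }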
|
|/ / / |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
[Warning] InnoDB: Using a partial-field key prefix in search
For a key with keyparts (k1,k2,k3), when building a range over the keyparts
we must make sure that if the min_value/max_value for a keypart is not added
to the key buffer, then the following keyparts are not added either.
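A minimal sketch of that rule (hypothetical types, not the MariaDB code):
stop appending at the first key part without a usable bound, so that no
later key part ends up in the buffer.

  #include <vector>

  struct Keypart_bound { bool have_value; int value; };

  static void append_key_prefix(const std::vector<Keypart_bound> &parts,
                                std::vector<int> &key_buffer)
  {
    for (const Keypart_bound &p : parts)
    {
      if (!p.have_value)
        break;                      // no bound here: skip all following keyparts too
      key_buffer.push_back(p.value);
    }
  }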
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Changed the function append_range_all_keyparts to use sel_arg_range_seq_init /
sel_arg_range_seq_next to produce ranges.
Also adjusted the print format for the ranges; they are now printed as:
(keypart1_min, keypart2_min,..) OP (keypart1_name,keypart2_name, ..) OP (keypart1_max,keypart2_max, ..)
Also added more tests for range and index merge access in the optimizer trace.
|
|\ \ \
| |/ / |
|
| |\ \
| | |/ |
|
| | |\ |
|
| | | |\ |
|
| | | | |
| | | | |
| | | | |
| | | | | |
* Update wrong zip-code
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
When the chosen execution plan accesses a join table using a range
rowid filter, a quick select to scan this range has to be built. This
quick select is built by a call to SQL_SELECT::test_quick_select().
For this call the function should evaluate only single-index range
scans. To make this possible, a new parameter was added to the
function.
|
|/ / / /
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This task implements the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE statement,
which contains information about decisions taken by the optimizer during
the optimization phase (choice of table access method, various costs,
transformations, etc). This feature helps explain why some decisions were
taken by the optimizer and why some alternatives were rejected.
The trace is session-local, controlled by the @@optimizer_trace variable.
To enable the optimizer trace, run:
SET @@optimizer_trace='enabled=on';
To display the trace one can run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace
introduces a switch optimizer_trace_max_mem_size which limits
the memory used by the optimizer trace. This was implemented by
Sergei Petrunia.
|
|\ \ \ \
| |/ / / |
|
| | | |
| | | |
| | | |
| | | | |
introduced in the patch for MDEV-16934.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
good estimates, but they are pretty expensive. Statistical data on indexes
can be used instead to estimate the number of rows in equality ranges;
this gives rougher estimates, but it is cheap. So if the number of equality
dives required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index (see the sketch below).
As the new system variable is introduced in a stable version, the default
value is a special value meaning there is no limit on the number
of index dives performed by the optimizer.
The patch partially uses the MySQL code for WL 5957
'Statistics-based Range optimization for many ranges'.
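A minimal sketch of that decision (illustrative only; treating 0 as the
"no limit" special value is an assumption of this sketch): skip per-range
index dives once the number of equality ranges exceeds the limit.

  static bool use_index_statistics(unsigned n_equality_ranges,
                                   unsigned eq_range_index_dive_limit)
  {
    if (eq_range_index_dive_limit == 0)   // assumed "no limit" marker
      return false;                       // always do index dives
    return n_equality_ranges > eq_range_index_dive_limit;
  }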
|
|\ \ \ \
| | | | |
| | | | |
| | | | | |
Merge branch '10.3' into trunk
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Other things:
- Cleanup of allocated bitmaps done in open(), which
simplifies init_partition_bitmaps()
- Add needed defines in ha_spider.cc to enable new spider code
- Fixed some DBUG_PRINT() to be consistent with normal code
- Removed end space
- The changes in test cases partition_innodb, partition_range,
  partition_pruning etc. are because partitions can now calculate the
  number of rows in a range more exactly.
Contains spider patches:
014,015,023,033,035,037,040,042,044,045,049,050,051,053,059
|
|/ / / / |
|
|/ / /
| | |
| | |
| | | |
TRUE)
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Move TABLE::key_read into handler, because with index merge and DS-MRR
there can be many handlers per table, and some of them use
key read while others don't. "keyread" is really a per-handler
property, not a per-TABLE one.
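An illustrative sketch of the design point (simplified, hypothetical names,
not the handler API): each handler instance carries its own keyread state,
so two handlers reading the same table (as in index merge or DS-MRR) can
differ.

  struct Table_demo { /* shared table metadata */ };

  struct Handler_demo
  {
    Table_demo *table;
    unsigned keyread;      // per-handler index-only-read state, not per-TABLE

    explicit Handler_demo(Table_demo *t) : table(t), keyread(~0U) {}
    void start_keyread(unsigned index_no) { keyread= index_no; }
    void end_keyread()                    { keyread= ~0U; }
  };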
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
bitmap_is_set(table->read_set, field_index))' failed
Found and fixed 2 problems:
- Filesort addon fields didn't mark virtual columns properly
- Multi-range read calculated the vcol bitmap but was not using it.
  This caused the wrong vcol field to be calculated on read, which triggered the assert.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115)
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter.
- The Unique class declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed.
- filesort_free_buffers() and free_io_cache() calls are removed
- Removed one malloc() when using get_addon_fields()
Other things:
- Added --debug-assert-on-not-freed-memory option to make it easier to
debug some not-freed-memory issues.
|
|/ / |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The crash was caused by the range optimizer using RANGE_OPT_PARAM::min_key
(and max_key) to store keys. The buffer size was a good upper bound for
range analysis and partition pruning, but not for EITS selectivity
calculations.
Fixed by making these buffers variable-size. The sizes are calculated
from the [pseudo-]indexes used for range analysis.
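A minimal sketch of the sizing idea (hypothetical names, not the MariaDB
code): derive the key buffer size from the [pseudo-]indexes that will
actually be analyzed, rather than from a fixed upper bound.

  #include <algorithm>
  #include <vector>

  struct Keypart_demo { unsigned store_length; };
  struct Index_demo   { std::vector<Keypart_demo> parts; };

  static unsigned key_buffer_size(const std::vector<Index_demo> &indexes)
  {
    unsigned needed= 0;
    for (const Index_demo &idx : indexes)
    {
      unsigned len= 0;
      for (const Keypart_demo &kp : idx.parts)
        len+= kp.store_length;        // bytes this key part needs in the buffer
      needed= std::max(needed, len);  // buffer must fit the longest analyzed key
    }
    return needed;
  }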
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
can compile with -Wshadow and get far fewer warnings)
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or if there
  were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some places
  where current_thd is not necessarily defined.
- Removed calls to current_thd when we have access to thd.
  Part of this is optimization (not calling current_thd when not needed),
  but part is bug fixing for error conditions when current_thd is not defined
  (for example during startup and shutdown of mysqld).
Notable renames done as otherwise a lot of functions would have to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
  inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing the THD pointer through whenever it is needed.
The number of sql_alloc() calls dropped from 345 to 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
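An illustrative sketch of the cost difference (simplified, hypothetical
names; std::malloc stands in for real mem_root allocation): the implicit
style must look up the session from thread-local storage on every call,
while passing the mem_root explicitly avoids that lookup and can be inlined.

  #include <cstddef>
  #include <cstdlib>

  struct Mem_root_demo
  {
    void *alloc(size_t size) { return std::malloc(size); }  // stand-in for alloc_root()
  };

  thread_local Mem_root_demo *current_mem_root= nullptr;

  // Old style: implicit, pays for the thread-local lookup on every call.
  static void *sql_alloc_demo(size_t size)
  { return current_mem_root->alloc(size); }

  // New style: the caller passes the mem_root (or THD) it already has.
  static void *alloc_on_root(Mem_root_demo *root, size_t size)
  { return root->alloc(size); }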
|
| | |
|
|\ \ |
|