Commit messages

Fixed a lot of inconsistencies in optimizer cost calculation. The main
objective was to get the cost calculations as consistent (and accurate) as
possible, to make different plans more comparable.
- Replaced the constant 2.0 with the new define TABLE_SCAN_SETUP_COST.
- Added RECORD_COPY_COST, the cost of finding the next row and copying
  it to 'record' for table scans.
- Added INDEX_COPY_COST, the cost of finding the next key and copying it
  to 'record' for index scans.
- Added INDEX_NEXT_FIND_COST, the cost of finding the next index entry and
  checking it against the filter.
- Some scan cost estimates did not take TIME_FOR_COMPARE into account.
  Now all scan costs take it into account. (main.show_explain)
- Ensured that TIME_FOR_COMPARE is not calculated twice for some plans,
  as happened in optimize_straight_join() and greedy_search().
- JOIN_TAB::scan_time() did not take index-only scans into account,
  which produced a wrong cost when an index scan was used. Fixed by
  adding support for covering keys. Also cached the calculated values
  to avoid repeated calls during the optimization phase.
- Most index cost calculations are now done the same way, closer to the
  'range' calculations. The effects of this change are:
  - The cost of an index scan is now lower than before, which causes some
    test results to change. (innodb.innodb, main.show_explain)
  - EQ_REF now takes clustered and covering keys into account.
  - Ensured that index_scan_cost() ==
    range(scan_of_all_rows_in_table_using_one_range) +
    MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there is
    a choice between a full index scan and a range index scan over almost
    the whole table, the index scan will be preferred (no range-read
    setup cost).
- Rowid filter setup cost and filter compare cost now take into account
  fetching and checking the rowid (INDEX_NEXT_FIND_COST).
  (main.partition_pruning heap.heap_btree main.log_state)
- Introduced ha_scan_time(), which takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes table scan costs to increase slightly, and some
  tests changed their plans from ALL to range or from ALL to ref.
  (innodb.innodb_mysql, main.select_pkeycache)
- Introduced ha_scan_and_compare_time(), which is like ha_scan_time() but
  also adds the cost of checking the WHERE clause (TIME_FOR_COMPARE);
  see the sketch below.
- The cost of an index scan was too low before, compared to anything else.
- Introduced ha_keyread_time(rows), which takes into account finding the
  next row and copying the key value to 'record' (INDEX_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over all
  rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to OPT_RANGE.
- The keyread_time() cost is slightly reduced to prefer shorter keys.
- All of the above caused some index_merge combinations to be
  rejected because of cost. (main.index_intersect)
- Added the cost of checking the WHERE clause for the accepted rows to the
  ROR costs in get_best_ror_intersect().
- Fixed a bug in get_best_ror_intersect() where 'min_cost' was not updated,
  and the cost we compared with was not the one that was used.
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the same
  value as the one that was calculated by the optimizer.
- Extended get_range_limit_read_cost() to take cost_for_index_read() into
  consideration when there were no quick keys. This will reduce the
  computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)
- Added INDEX_NEXT_FIND_COST to Range_rowid_filter_cost_info::lookup_cost
  to account for the time to find the next key value and check it against
  the container.
- Changed JOIN_TAB::scan_time() to take clustered and covering keys into
  consideration. The values are now cached, and we only have to call this
  function once; other calls were changed to use the cached values.
  The function was renamed to JOIN_TAB::estimate_scan_time().
Other things:
- Added some 'if (thd->trace_started())' checks to speed up the code.
- Removed the unused function Cost_estimate::is_zero().
- Simplified the testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
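To make the new cost components concrete, here is a minimal sketch of how
the scan-cost helpers compose. This is illustrative only: the constant
values and the _ex suffixes are invented here, not the server's actual
numbers.

  // Illustrative sketch only; the constants are assumed values,
  // not the real server defines.
  static constexpr double TABLE_SCAN_SETUP_COST_EX= 2.0;
  static constexpr double RECORD_COPY_COST_EX= 0.002;
  static constexpr double TIME_FOR_COMPARE_EX= 5.0;

  // Engine I/O plus finding the next row and copying it into 'record'.
  double ha_scan_time_ex(double io_cost, double rows)
  {
    return TABLE_SCAN_SETUP_COST_EX + io_cost + rows * RECORD_COPY_COST_EX;
  }

  // As above, but also charges for evaluating the WHERE clause per row.
  double ha_scan_and_compare_time_ex(double io_cost, double rows)
  {
    return ha_scan_time_ex(io_cost, rows) + rows / TIME_FOR_COMPARE_EX;
  }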
fetch_cost and index_only_cost now take into account clustered keys and
index-only accesses.
Before, the index_only_cost was not correctly taken into account
when considering a filter. When using a filter, non-matching rows will
only do an index-only access.
In the result files, many of the changes go back to close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do everything
in one commit).
Other things:
- cost_for_index_read() now returns both the full cost and the
  index_only_cost (see the sketch below).
- Ensured that access_cost_factor, used by
  best_range_rowid_filter_for_partial_join(), is between 0 and 1
  (as documented).
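A minimal sketch of the two changes, with invented names and formulas
(the real logic lives in the server's cost functions):

  #include <algorithm>

  struct Index_read_cost_ex              // hypothetical result type
  {
    double total_cost;                   // fetch key + fetch row
    double index_only_cost;              // fetch key only
  };

  // Returns both costs, as cost_for_index_read() now does.
  Index_read_cost_ex cost_for_index_read_ex(double key_cost,
                                            double row_cost, double rows)
  {
    Index_read_cost_ex c;
    c.index_only_cost= rows * key_cost;
    c.total_cost= c.index_only_cost + rows * row_cost;
    return c;
  }

  // access_cost_factor is clamped to [0, 1], as documented.
  double clamp_access_cost_factor_ex(double f)
  {
    return std::min(1.0, std::max(0.0, f));
  }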
This patch causes no changes in costs or result files.
Changes:
- Store the row compare cost separately in Cost_estimate::comp_cost.
- Store the cost of fetching rows separately in OPT_RANGE.
- Use range->fetch_cost instead of adjust_quick_cost(total_cost).
This was done to simplify the cost calculation in sql_select.cc:
- We can use range->fetch_cost directly without having to call
  adjust_quick_cost(). adjust_quick_cost() is now removed.
Other things:
- Removed some unused functions from Cost_estimate.
The old code didn't correctly add TIME_FOR_COMPARE for the rows that are
part of the scan and will be compared against the attached WHERE clause.
The cost calculations for hash join and full join cache join are now
identical except for HASH_FANOUT (10%).
The cost for a join with keys is now also uniform.
The total cost of using a key for lookup is calculated in one place as:

  (cost_of_finding_rows_through_key(records) + records/TIME_FOR_COMPARE) *
  record_count_of_previous_row_combinations + startup_cost

startup_cost is the cost of creating a temporary table (if needed).
best_cost now includes the cost of comparing all WHERE clauses and also
the cost of joining with previous row combinations.
Other things:
- The optimizer trace now prints the total costs, including testing the
  WHERE clause (TIME_FOR_COMPARE) and comparing with all previous rows.
- In the optimizer trace, also include the total cost of the query together
  with the final join order. This makes it easier to find out where the
  cost was calculated.
- The old code used a filter even if its cost was higher than not using a
  filter. This is not corrected in this patch.
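As a worked example, the lookup-cost formula above can be read as the
following sketch (illustrative names; the TIME_FOR_COMPARE value is an
assumption):

  static constexpr double TIME_FOR_COMPARE_EX= 5.0;  // assumed value

  double key_lookup_join_cost_ex(double cost_of_finding_rows_through_key,
                                 double records,
                                 double record_count_prev, // previous row
                                                           // combinations
                                 double startup_cost)
  {
    double per_combination= cost_of_finding_rows_through_key +
                            records / TIME_FOR_COMPARE_EX; // WHERE check
    return per_combination * record_count_prev + startup_cost;
  }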
The idea is that when doing a tree dive (once per group), we need to
compare key values, which is fast. For each new group, we have to
evaluate the full WHERE clause for the row.
Compared to the original code, the cost of group_min_max() has increased
slightly, which affects some tests with only a few rows.
main.group_min_max and main.distinct have been modified to show the
effect of the change.
The patch also adjusts the number of groups in the case of quick selects
(sketched below):
- For simple WHERE clauses, ensure that we have at least as many groups
  as we have conditions on the used group-by key parts.
  The assumption is that each condition will create at least one group.
- Ensure that there are no more groups than rows found by the quick select.
Test changes:
- For some small tables the plan changed from
  "Using index for group-by" to "Using index for group-by (scanning)",
  from "Range" to "Index", and from "Using index for group-by" to
  "Using index".
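A minimal sketch of that adjustment, with invented names (the real code
is in the group-min-max optimizer):

  #include <algorithm>

  double adjust_group_count_ex(double groups, double cond_count,
                               double quick_select_rows)
  {
    groups= std::max(groups, cond_count);       // each condition => >= 1 group
    return std::min(groups, quick_select_rows); // never more groups than rows
  }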
Having rows >= 1.0 helps ensure that when we calculate the total rows of
a join, the number of resulting rows will not be smaller after the join.
Changes in test cases:
- The join order changed for some tables with few records.
- 'Filtered' is much higher for tables with few rows, as 1 row is a high
  percentage of a table with few rows.
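The invariant itself is tiny; as a sketch (illustrative name only):

  // Clamp a row estimate to at least one row before it is multiplied
  // into a join's total, so joining can never shrink the estimate.
  double clamp_rows_ex(double rows)
  {
    return rows >= 1.0 ? rows : 1.0;
  }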
Also fixed that the 'with_found_constraint' parameter to
matching_candidates_in_table() is as documented: it is now true only
if there is a reference to a previous table in the WHERE condition for
the currently examined table (as it was originally documented).
Changes in test results:
- 'Filtered' was 25% smaller for some queries (expected).
- Some join orders changed (probably because the tables had very few rows).
- Some more table scans, probably because fewer rows would be returned.
- Some tests expose a bug: if more rows are filtered, the cost for a
  table scan will be higher. This will be fixed in a later commit.
calculate_cond_selectivity_for_table() is largely rewritten:
- Process keys in the order of rows found, smaller ranges first. If two
  ranges have an equal number of rows, use the one with more key parts.
  This helps us mark more used fields as not to be used for further
  selectivity calculations. See cmp_quick_ranges().
- Ignore keys with fields that were used by previous keys.
- Don't use rec_per_key[] to calculate selectivity for smaller
  secondary key parts. This does not work, as the rec_per_key[] value
  is calculated in the context of the previous key parts, not for the
  key part itself. The one exception is if the previous key parts
  are constants.
Other things:
- Ensure that select->cond_selectivity is always between 0 and 1.
- Ensure that select->opt_range_condition_rows is never updated to
  a higher value. It is initially set to the number of rows in the table.
  (Both invariants are sketched below.)
- We now store in table->opt_range_condition_rows the lowest number of
  rows that any row-read method has found so far. Before, this was only
  done for QUICK_SELECT_I::QS_TYPE_ROR_UNION and
  QUICK_SELECT_I::QS_TYPE_INDEX_MERGE.
  Now it is done for a lot more methods. See
  calculate_cond_selectivity_for_table() for details.
- Calculate and use the selectivity of the first key part of a multi-part
  key if the first key part is a constant:
  WHERE key1_part1=5 AND key2_part1=5. If key1 is used, then we can still
  use the selectivity of key2.
Changes in test results:
- 'filtered' is slightly changed, usually to something slightly smaller.
- A few cases where the table order changed for GROUP BY queries. This was
  because the number of resulting rows from a GROUP BY query with MIN/MAX
  is now estimated to be smaller.
- A few indexes were changed, as we now prefer indexes with more key parts
  if the number of resulting rows is the same.
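A minimal sketch of the two invariants mentioned under "Other things"
(invented names; the real code is in
calculate_cond_selectivity_for_table()):

  #include <algorithm>

  void apply_selectivity_invariants_ex(double &cond_selectivity,
                                       double &opt_range_condition_rows,
                                       double new_selectivity,
                                       double rows_found)
  {
    // cond_selectivity stays within [0, 1]
    cond_selectivity= std::min(1.0, std::max(0.0, new_selectivity));
    // opt_range_condition_rows only ever decreases (it starts at the
    // number of rows in the table)
    opt_range_condition_rows= std::min(opt_range_condition_rows, rows_found);
  }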
We were comparing costs when we should have been comparing the number of
rows that will be examined.
- Avoid checking has_transactions() if the killed flag is not checked.
- Simplified code (checked with gcc -O3 that there are improvements).
- Added handler::fast_increment_statistics(), to be used when a handler
  function wants to increase two statistics for one row access.
- Made check_limit_rows_examined() inline (even if it didn't make any
  difference for gcc 7.5.0); still the right thing to do.
This enables optimizer_trace output for the next SQL command.
Identical to what one would get by doing:
- Store the value of @@optimizer_trace
- Set @@optimizer_trace="enabled=on"
- Run the query
- SELECT * FROM OPTIMIZER_TRACE
If the array size is 1, the cost would be 0, which is wrong.
Fixed by adding a small (0.001) base value to the lookup cost.
This causes no changes in any result files.
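A sketch of why the base value matters, assuming a log2-style
binary-search lookup cost (the real formula is more involved):

  #include <cmath>

  // For a container of n (>= 1) elements, log2(1) == 0, so without the
  // small base value a one-element filter would look free.
  double filter_lookup_cost_ex(double n)
  {
    return 0.001 + std::log2(n);
  }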
No code logic changes were done. Variables were renamed:
  a     -> gain
  b     -> cost_of_building_range_filter
  a_adj -> gain_adj
  r     -> row_combinations
Other things:
- Optimized the layout of class Range_rowid_filter_cost_info.
  One effect was that key_no moved to the private section to get
  better alignment, which required introducing a get_key_no() function.
- Indentation changes in rowid_filter.cc to avoid long lines.
- Updated comments.
- Added some extra DBUG output.
- Indentation changes and breaking up of long lines.
- Trivial code changes, like:
  - Combining two statements into one
  - Reordering DBUG lines
  - Using a variable to store a pointer that is used multiple times
- Moved declarations of variables to the start of loops/functions.
- Removed dead or commented-out code.
- Removed wrong DBUG_EXECUTE code in best_extension_by_limited_search().
The result file changes are mainly that the number of rows is one smaller
for some queries with DISTINCT or GROUP BY.
Other changes:
- In test_quick_select(), assume that if table->used_stat_records is 0,
  then the table has 0 rows.
- Fixed prepare_simple_select() to populate table->used_stat_records.
- Ensure that set_statistics_for_tables() doesn't cause used_stat_records
  to be 0 when using stat_tables.
- To get blackhole to work with replication, set stats.records to 2 so
  that test_quick_select() doesn't assume the table is empty (sketched
  below).
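A minimal sketch of that convention, with invented names (blackhole
cannot report a real row count, so it must never look empty):

  unsigned long long
  effective_stat_records_ex(unsigned long long stat_records,
                            bool engine_has_no_real_count)
  {
    if (engine_has_no_real_count)
      return 2;           // e.g. blackhole: never treated as an empty table
    return stat_records;  // 0 means the optimizer treats the table as empty
  }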
This removes the error "mariadbd: O_TMPFILE is not supported on..."
on startup.
- RocksDB's unique error messages were moved to a range starting at 300.
- For RocksDB error messages that also exist in my_base.h, use the same
  error code.
with optimizer trace
In commit 49e2c8f0a6fefdeac50925f758090d6bd099768d (MDEV-25743)
some more use of the printf-style format "%.*s" was added.
The length parameter is of type int, not size_t.
On 64-bit platforms that follow the LLP64 convention (such as
64-bit Microsoft Windows), sizeof(int)==4 and sizeof(size_t)==8.
Let us explicitly cast the lengths to the correct type in order
to avoid any trouble.
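For illustration, the pattern of the fix looks like this (a standalone
example, not the patched server code):

  #include <cstdio>
  #include <cstring>

  int main()
  {
    const char *s= "hello world";
    size_t len= strlen(s);
    // "%.*s" expects an int precision argument. Passing a size_t is
    // wrong where sizeof(size_t) != sizeof(int), e.g. 64-bit Windows
    // (LLP64), so the length is cast explicitly:
    printf("%.*s\n", (int) len, s);
    return 0;
  }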
THE VIEW DEFINITION SELECT
test case only
Now that SLES 12.3 is gone, we can enforce it.
|
| | | | |
|
| | | |\ |
|
| | | | |\ |
|
| | | | | |\ |
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
Per the bug report, 'cycles' was woefully insufficient to
detect any implementation error.
in lf-hash
MariaDB server crashes on ARM (a weak-memory-model architecture) while
concurrently executing l_find to load node->key and add_to_purgatory
to store node->key = NULL. l_find then uses the key (which is NULL) and
passes it to a comparison function.
The specific problem is the out-of-order execution that happens on a
weak-memory-model architecture. Two essential reorderings are possible,
and both need to be prevented.
a) As l_find has no barriers in place between the optimistic read of
   the key field (lf_hash.cc#L117) and the verification of link
   (lf_hash.cc#L124), the processor can reorder the load to happen after
   the while-loop. In that case, a concurrent thread executing
   add_to_purgatory on the same node can be scheduled to store NULL at
   the key field (lf_alloc-pin.c#L253) before key is loaded in l_find.
b) A node is marked as deleted by a CAS in l_delete (lf_hash.cc#L247) and
   taken off the list with a following CAS (lf_hash.cc#L252). Only if both
   CAS operations succeed is the key field written to by add_to_purgatory.
   However, due to a missing barrier, the relaxed store of key
   (lf_alloc-pin.c#L253) can be moved ahead of the two CAS operations,
   which makes the value of the local purgatory list stored by
   add_to_purgatory visible to all threads operating on the list. As the
   node is not marked as deleted yet, the same error occurs in l_find.
This change makes three accesses atomic:
* the optimistic read of key in l_find (lf_hash.cc#L117)
* the read of link for verification (lf_hash.cc#L124)
* the write of key in add_to_purgatory (lf_alloc-pin.c#L253)
Reviewers: Sergei Vojtovich, Sergei Golubchik
Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
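The pattern behind the fix can be sketched with C++11 atomics
(illustrative only: the actual code uses the server's own atomic
primitives on the existing lf_hash structures):

  #include <atomic>

  struct node_ex
  {
    std::atomic<const char *> key;
    std::atomic<node_ex *> link;
  };

  // l_find: optimistic read of key, then verification of link. The
  // acquire loads give load-load ordering, so the key load cannot sink
  // past the link verification.
  const char *read_key_ex(node_ex *n)
  { return n->key.load(std::memory_order_acquire); }

  node_ex *read_link_ex(node_ex *n)
  { return n->link.load(std::memory_order_acquire); }

  // add_to_purgatory: a release store cannot be reordered before the
  // CAS operations that precede it in program order.
  void clear_key_ex(node_ex *n)
  { n->key.store(nullptr, std::memory_order_release); }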
This bug was introduced by commit be00e279c6061134a33a8099fd69d4304735d02e.
That commit was applied for the task MDEV-6480, which allowed removing
top-level disjuncts from WHERE conditions if the range optimizer evaluated
them as always equal to FALSE/NULL.
If such disjuncts are removed, the WHERE condition may become an AND
formula, and if this formula contains multiple equalities, the field
JOIN::item_equal must be updated to refer to these equalities. The
above-mentioned commit forgot to do this, which could cause crashes for
some queries.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Include the groonga and groonga-normalizer-mysql install paths.
During startup, InnoDB must write a FILE_CHECKPOINT record.
However, before MDEV-12353 (in MariaDB Server 10.2, 10.3, 10.4)
the corresponding record MLOG_CHECKPOINT was encoded in a different way.
When we are upgrading from a logically empty 10.2, 10.3, or 10.4 redo log,
we must not write anything to the old log file, because if the server were
killed during the upgrade, we would end up with a corrupted log file, and
both the old and the new server would refuse to start up.
On upgrade, we must simply create a new logically empty log file
and replace the old ib_logfile0 with that.
This is low-hanging fruit. Before this patch, std::map::emplace() was
~50% of the whole recv_sys_t::parse() operation in my test.
After the fix it's only ~20%.
recv_sys_t::parse(): recv_sys_t::pages is a collection of all pages to
recover. Often there are multiple changes for a single page, and often
they come in a row; for such cases, let's avoid the lookup in the
std::map. cached_pages_it serves as a cache of size 1 (sketched below).
recv_sys_t::add(): replace the page_id argument with a std::map::iterator.
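A minimal sketch of the size-1 cache (simplified types; the real code
lives in recv_sys_t):

  #include <cstdint>
  #include <map>
  #include <vector>

  using page_id_ex= uint64_t;
  using map_ex= std::map<page_id_ex, std::vector<int>>;

  struct recv_sys_ex
  {
    map_ex pages;
    map_ex::iterator cached_pages_it= pages.end();

    void add(page_id_ex page_id, int change)
    {
      // Consecutive records for the same page skip the map lookup.
      if (cached_pages_it == pages.end() ||
          cached_pages_it->first != page_id)
        cached_pages_it= pages.emplace(page_id, std::vector<int>()).first;
      cached_pages_it->second.push_back(change);
    }
  };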
[Adjusting Sergei Krivonos's patch]
"duplicates_removal" may contain multiple elements inside it and
so should have a JSON array as its value (not a JSON object).
This reverts commit 2d21917e7db2db0900671aac2e29f49e4ff2acd7.
No explanations, lots of code moved, wrong cmake changes.