Optimize the copy that occurs in ets:lookup_element
Michal Muskala pointed out to me that ets:lookup_element can sometimes be
slower than ets:lookup, even when the value is found. For example:
```
erlperf "ets:lookup(ac_tab, {env, kernel, logger})." "ets:lookup_element(ac_tab, {env, kernel, logger}, 2)."
Code                                                     ||        QPS       Time   Rel
ets:lookup(ac_tab, {env, kernel, logger}).                1    2859 Ki     349 ns  100%
ets:lookup_element(ac_tab, {env, kernel, logger}, 2).     1    2164 Ki     462 ns   75%
```
The reason for this is that lookup can find the size of the allocation
needed directly in the DbTerm and then copy the whole thing efficiently
with copy_shallow_x, whereas lookup_element must first walk the entire
value to be copied in size_object_x, and then walk it again in
copy_struct_x.
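At the API level the two benchmarked calls are interchangeable; a minimal
sketch (with made-up table and key names) of the equivalence:
```
%% Both expressions yield the same value, but lookup_element only copies
%% the requested field to the caller's heap, while lookup copies the
%% whole stored tuple.
T = ets:new(example, [set]),
true = ets:insert(T, {my_key, a, {some, large, value}}),
V = ets:lookup_element(T, my_key, 3),
[{my_key, a, V}] = ets:lookup(T, my_key).
```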
This patch fixes this in three steps:
- Make db_store_term (used by ets:insert among others) use a
specialized version of copy_struct that lays the fields out in
order (instead of interleaving them chaotically as usual)
- Finding the size of the needed allocation can now be done
efficiently (see db_field_size)
- And we can use copy_shallow_x for the copy like lookup (just with a
small change so that it does not assume that it is copying a tuple).
Result:
```
PATH=~/otp/bin:$PATH erlperf "ets:lookup(ac_tab, {env, kernel, logger})." "ets:lookup_element(ac_tab, {env, kernel, logger}, 2)."
Code                                                     ||        QPS       Time   Rel
ets:lookup_element(ac_tab, {env, kernel, logger}, 2).     1    2945 Ki     339 ns  100%
ets:lookup(ac_tab, {env, kernel, logger}).                1    2860 Ki     349 ns   97%
```
I've tried measuring the potential impact on ets:insert performance
by looking at the 2 throughput benchmarks in ets_SUITE.
More specifically, I looked at the 50% insert/50% delete mix
(since it maximizes the impact of insert) and at the 1 process case
(since my change should not affect concurrency in any way).
Here are the results for the long benchmark, with 3 runs in each
configuration, listing all three kinds of table tested:
[set, {write_concurrency, auto}, {read_concurrency, true}]:
Baseline: 3.55 / 3.31 / 2.85
This patch: 3.20 / 3.08 / 3.44
[set, {write_concurrency, true}, {read_concurrency, true}]:
Baseline: 3.31 / 3.11 / 2.78
This patch: 3.13 / 3.08 / 3.42
[ordered_set, {write_concurrency, true}, {read_concurrency, true}]:
Baseline: 1.51 / 1.56 / 1.17
This patch: 1.50 / 1.37 / 1.59
(all running on my M1 MBP)
There is no obvious performance regression, but the noise is
sufficiently high that I may have missed some.
Co-authored-by: Sverker Eriksson <sverker@erlang.org>
After discussion in https://github.com/erlang/otp/issues/6730
Changed `db_term_list_prepend*` to `db_term_list_append*` so that values
stored against a key have their order preserved. Previously, because the
linked list was built by prepending, the values' insertion order was
reversed from OTP 23 onward.
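A minimal sketch (with made-up names) of the order-preserving behaviour
this change restores for multi-valued keys:
```
%% For a duplicate_bag, objects stored under the same key are expected to
%% come back from ets:lookup/2 in insertion order; building the internal
%% list by prepending made them come back reversed.
T = ets:new(example, [duplicate_bag]),
true = ets:insert(T, {k, 1}),
true = ets:insert(T, {k, 2}),
true = ets:insert(T, {k, 3}),
[{k, 1}, {k, 2}, {k, 3}] = ets:lookup(T, k).
```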
OTP-18292
Concurrent ets:rename could cause ets:delete_all_objects
to fail halfway through with badarg
"the table identifier does not refer to an existing ETS table"
sverker/25/erts/ets-insert-trap-fixing
Symptom:
VM crash if the table is deleted or renamed during ets:insert
or ets:insert_new with a long list of tuples.
* Handle table deletion during yielding insert correctly.
Both deletion by ets:delete/1 and owner process exit.
We no longer commit the entire list for insert. If the
table is deleted, the remaining list of uninserted
tuples will be freed.
* Handle table renaming correctly. Do not change table midstream.
Only use the initial BIF_ARG_1 for the first lookup. Then use 'btid'
to identify the table.
If the table is renamed before we go DB_BUSY (and force others to help),
the insert will fail. Otherwise it could be observed that
the insert op happened after the table was renamed.
* Make sure a successful assisted insert op returns 'true' even if table was
deleted by someone else before the initial caller returns.
* Make sure BIF arguments are restored if ets:insert fails after trapping.
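A sketch of the scenario these fixes target (sizes and names are
illustrative; the crash was timing-dependent):
```
%% A helper process inserts a long list while the owner deletes the table.
%% The insert must either complete with 'true' or fail with badarg; it must
%% not crash the VM, and the uninserted tail of the list must be freed.
T = ets:new(example, [set, public]),
Objs = [{N, N} || N <- lists:seq(1, 500000)],
spawn(fun() ->
          Result = (catch ets:insert(T, Objs)),
          io:format("insert finished with: ~p~n", [Result])
      end),
timer:sleep(1),
true = ets:delete(T).
```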
* maint:
Update copyright year
* Make sure that ordered_set tables configured with
{write_concurrency, auto} give auto when
ets:info(T, write_concurrency) is called
* Remove the possibility to explicitly set the number of locks from the
public interface by using the option {write_concurrency, N}. For
performance "debugging" purposes there is still a hidden way to
explicitly set the number of locks for hash tables, by using the
option {write_concurrency, {debug_hash_fixed_number_of_locks, 2048}}
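A short sketch of the resulting behaviour (table names are illustrative):
```
%% A table created with {write_concurrency, auto} now reports 'auto':
T1 = ets:new(example, [set, public, {write_concurrency, auto}]),
auto = ets:info(T1, write_concurrency),
%% The hidden, debug-only way to fix the number of hash-table locks:
T2 = ets:new(example2, [set, public,
                        {write_concurrency,
                         {debug_hash_fixed_number_of_locks, 2048}}]).
```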
This commit makes it possible for users to configure ETS tables with
the {write_concurrency, auto} option. This option makes tables
automatically adjust the number of locks they use at run time,
depending on how much concurrency is detected.
Benchmark results comparing this option with the other ETS
optimization options are available here:
http://winsh.me/bench/ets_config_locks/ets_bench_result_lock_config.html
This commit makes it possible for users to explicitly set the number
of locks that are used for tables of type set, bag and duplicate_bag
by configuring a table with the {write_concurrency, N} option, where N
is an integer in the range [1, 32768].
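A sketch of the option as introduced here (a later change, listed further
up in this log, removed it from the public interface in favour of
{write_concurrency, auto}):
```
%% Hash-based table with an explicitly requested number of locks;
%% N must be an integer in the range [1, 32768].
T = ets:new(example, [set, public, {write_concurrency, 1024}]).
```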
to not use random but instead use a process flag bit to force
trap on every other call.
Move the HAlloc of the cons cell out to the code that actually needs it.
into 'sverker/22/ets-select_replace-compressed/OTP-16874'
into 'sverker/21/ets-select_replace-compressed/OTP-16874'
The temporary uncompressed copy was deallocated too early,
before the replacement objects were constructed as a copy of the
match result, which in turn refers to matched variables
in the temporary copy.
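A sketch of the kind of call that was affected: a compressed table combined
with an ets:select_replace/2 match specification whose replacement object
reuses matched variables (names and values are illustrative):
```
T = ets:new(example, [set, public, compressed]),
true = ets:insert(T, {key, 1, old}),
%% Replace {Key, Val, old} with {Key, Val, new}; the key must stay the same.
1 = ets:select_replace(T, [{{'$1', '$2', old}, [], [{{'$1', '$2', new}}]}]).
```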
* maint:
Update copyright year
The `ets:insert/2` and `ets:insert_new/2` functions did not yield even
when they were given long lists, and a call to one of the functions
only consumed a single reduction. This commit fixes these problems.
The fix uses the "Yielding C Fun" tool to automatically generate
yieldable versions of C functions.
ets tables can only be accessed from normal schedulers, so
when there is only one normal scheduler we don't have to take
any locks.
If we only have one normal scheduler we know that only
one thread can access an ETS table at a time, so we don't
need to take any locks.
This approach reads and checks the global variable
erts_no_schedulers each time; it could be optimized
to use the table flags field instead, though I'm not sure
how much that would help.
With this change, doing ets:lookup/2 on a small value is
about twice as fast.
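For context, this fast path is only relevant when the emulator runs with a
single normal scheduler (e.g. started with `erl +S 1`), which can be
checked from Erlang:
```
%% Number of normal scheduler threads in the running emulator.
1 = erlang:system_info(schedulers).
```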
Doxygen only looks for tags in comments that start with `/**` or
`///`; tags inside plain comments are ignored.
This commit enables decentralized counters by default in all public
ETS tables when the `write_concurrency` option is turned on. Tables of
type `ordered_set` already had support for decentralized counters so
this commit only affects tables of type `set`, `bag` and
`duplicate_bag`.
[Experiments][1] indicate that this change substantially improves the
scalability in ETS table scenarios with frequent `ets:insert/2` and
`ets:delete/2` calls when many processor cores are being utilized.
[1]: http://winsh.me/ets_catree_benchmark/decent_ctrs_hash.html
with unaligned ProcBins.
Similar to process_info(_, binary),
to get all refc binaries referred to by an ETS table.
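A hypothetical usage sketch: the ets:info(Tab, binary) call is what this
change adds, but the exact shape of each list element is an assumption
here, modelled on process_info(Pid, binary) (roughly {Id, Size, RefCount}
per off-heap binary):
```
%% Binaries larger than 64 bytes are stored off-heap (refc binaries),
%% so this table references exactly one such binary.
T = ets:new(example, [set]),
Big = binary:copy(<<0>>, 1024),
true = ets:insert(T, {key, Big}),
BinInfo = ets:info(T, binary),   %% assumed: [{Id, Size, RefCount}, ...]
io:format("~p refc binaries referenced~n", [length(BinInfo)]).
```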
This commit fixes an ETS test case that tests the decentralized memory
counter in tables of type ordered_set with the write_concurrency
option turned on. The test case assumed that the memory consumption of
the table would only grow monotonically when terms are
inserted. However, this was not the case when the emulator was
compiled in debug mode as random splits and joins of CA tree nodes
could happen. This commit fixes the test case by disabling random
splits and joins in the tested table.
Previously, all ETS tables used centralized counter variables to keep
track of the number of items stored and the amount of memory
consumed. These counters can cause scalability problems (especially on
big NUMA systems). This commit adds an implementation of a
decentralized counter and modifies the implementation of ETS so that
ETS tables of type ordered_set with write_concurrency enabled use the
decentralized counter. [Experiments][1] indicate that this change
substantially improves the scalability of ETS ordered_set tables with
write_concurrency enabled in scenarios with frequent `ets:insert/2`
and `ets:delete/2` calls.
The new counter is implemented in the module erts_flxctr
(`erts_flxctr.h` and `erts_flxctr.c`). The module has the suffix
flxctr as it contains the implementation of a flexible counter (i.e.,
counter instances can be configured to be either centralized or
decentralized). Counters that are configured to be centralized are
implemented with a single counter variable which is modified with
atomic operations. Decentralized counters are spread over several
cache lines (how many can be configured with the parameter
`+dcg`). The scheduler threads are mapped to cache lines so that there
is no single point of contention when decentralized counters are
updated. The thread progress functionality of the Erlang VM is
utilized to implement support for linearizable snapshots of
decentralized counters. The snapshot functionality is used by the
`ets:info/1` and `ets:info/2` functions.
[1]: http://winsh.me/ets_catree_benchmark/flxctr_res.html
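A sketch relating this to the observable API: for an ordered_set table with
write_concurrency enabled, the item and memory counters are now
decentralized, and ets:info/1,2 return linearizable snapshots of them:
```
T = ets:new(example, [ordered_set, public, {write_concurrency, true}]),
[ets:insert(T, {N}) || N <- lists:seq(1, 1000)],
1000 = ets:info(T, size),     %% snapshot of the decentralized item counter
Words = ets:info(T, memory),  %% table memory consumption, in words
io:format("~p objects, ~p words~n", [1000, Words]).
```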
Symptom:
ETS table remains fixed after finished ets:select* call.
Problem:
The decision to unfix table after a yielding ets:select*
is based on table ownership, but ownership might have changed
while ets:select* was yielding.
Solution:
Remember and pass along whether table was fixed
when the traversal started.
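A sketch of how fixation can be observed from Erlang; the bug left a table
looking fixed like this even after the yielding ets:select* call had
finished:
```
T = ets:new(example, [set, public]),
true = ets:safe_fixtable(T, true),
{_FixedAt, [{_Pid, _RefCount}]} = ets:info(T, safe_fixed),
true = ets:safe_fixtable(T, false),
false = ets:info(T, safe_fixed).
```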
to more easily generate a routing tree for tests
without having to spend CPU provoking actual repeated lock conflicts.
The current ETS ordered_set implementation can quickly become a
scalability bottleneck on multicore machines when an application updates
an ordered_set table from concurrent processes [1][2]. The current
implementation is based on an AVL tree protected from concurrent writes
by a single readers-writer lock. Furthermore, the current implementation
has an optimization, called the stack optimization [3], that can improve
the performance when only a single process accesses a table but can
cause bad scalability even in read-only scenarios. It is possible to
pass the option {write_concurrency, true} to ets:new/2 when creating an
ETS table of type ordered_set but this option has no effect for tables
of type ordered_set without this commit. The new ETS ordered_set
implementation, added by this commit, is only activated when one passes
the options ordered_set and {write_concurrency, true} to the ets:new/2
function. Thus, the previous ordered_set implementation (from here on
called the default implementation) can still be used in applications
that do not benefit from the new implementation. The benchmark results
on the following web page show that the new implementation is many times
faster than the old implementation in some scenarios and that the old
implementation is still better than the new implementation in some
scenarios.
http://winsh.me/ets_catree_benchmark/ets_ca_tree_benchmark_results.html
The new implementation is expected to scale better than the default
implementation when concurrent processes use the following ETS
operations to operate on a table:
delete/2, delete_object/2, first/1, insert/2 (single object),
insert_new/2 (single object), lookup/2, lookup_element/2, member/2,
next/2, take/2 and update_element/3 (single object).
Currently, the new implementation does not have scalable support for the
other operations (e.g., select/2). However, when these operations are
used infrequently, the new implementation may still scale better than the
default implementation, as the benchmark results at the URL above show.
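A minimal sketch of how an application opts in to the new implementation
(names are illustrative):
```
%% ordered_set together with {write_concurrency, true} selects the
%% CA-tree-based backend; without the option the default AVL-tree-based
%% implementation is used.
T = ets:new(example, [ordered_set, public, {write_concurrency, true}]),
true = ets:insert(T, {some_key, some_value}),
[{some_key, some_value}] = ets:lookup(T, some_key).
```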
Description of the New Implementation
-------------------------------------
The new implementation is based on a data structure which is called the
contention adapting search tree (CA tree for short). The following
publication contains a detailed description of the CA tree:
A Contention Adapting Approach to Concurrent Ordered Sets
Journal of Parallel and Distributed Computing, 2018
Kjell Winblad and Konstantinos Sagonas
https://doi.org/10.1016/j.jpdc.2017.11.007
http://www.it.uu.se/research/group/languages/software/ca_tree/catree_proofs.pdf
A discussion of how the CA tree can be used as an ETS back-end can be
found in another publication [1]. The CA tree is a data structure that
dynamically changes its synchronization granularity based on detected
contention. Internally, the CA tree uses instances of a sequential data
structure to store items. The CA tree implementation contained in this
commit uses the same AVL tree implementation as is used for the default
ordered set implementation. This AVL tree implementation is reused so
that much of the existing code to implement the ETS operations can be
reused.
Tests
-----
The ETS tests in `lib/stdlib/test/ets_SUITE.erl` have been extended to
also test the new ordered_set implementation. The function
ets_SUITE:throughput_benchmark/0 has also been added to this file. This
function can be used to measure and compare the performance of the
different ETS table types and options. This function writes benchmark
data to standard output that can be visualized by the HTML page
`lib/stdlib/test/ets_SUITE_data/visualize_throughput.html`.
[1]
More Scalable Ordered Set for ETS Using Adaptation.
In Thirteenth ACM SIGPLAN workshop on Erlang (2014).
Kjell Winblad and Konstantinos Sagonas.
https://doi.org/10.1145/2633448.2633455
http://www.it.uu.se/research/group/languages/software/ca_tree/erlang_paper.pdf
[2]
On the Scalability of the Erlang Term Storage
In Twelfth ACM SIGPLAN workshop on Erlang (2013)
Kjell Winblad, David Klaftenegger and Konstantinos Sagonas
https://doi.org/10.1145/2505305.2505308
http://winsh.me/papers/erlang_workshop_2013.pdf
[3]
The stack optimization works by keeping one preallocated stack instance
in every ordered_set table. This stack is updated so that it contains
the search path in some read operations (e.g., ets:next/2). This makes
it possible for a subsequent ets:next/2 to avoid traversing some nodes
in some cases. Unfortunately, the preallocated stack needs to be flagged
so that it is not updated concurrently by several threads, which causes
bad scalability.
as it's now only used for empty tables by ets:new/2.
by using a cooperative strategy that will make
any process accessing the table execute delete_all_objects_continue
until the table is empty.
This is not an optimal solution as concurrent threads will still
block on the table lock, but at least thread progress is made.
Cannot read fix->counter safely without table lock.
This refactor was done using the unifdef tool like this:
for file in $(find erts/ -name *.[ch]); do unifdef -t -f defile -o $file $file; done
where defile contained:
#define ERTS_SMP 1
#define USE_THREADS 1
#define DDLL_SMP 1
#define ERTS_HAVE_SMP_EMU 1
#define SMP 1
#define ERL_BITS_REENTRANT 1
#define ERTS_USE_ASYNC_READY_Q 1
#define FDBLOCK 1
#undef ERTS_POLL_NEED_ASYNC_INTERRUPT_SUPPORT
#define ERTS_POLL_ASYNC_INTERRUPT_SUPPORT 0
#define ERTS_POLL_USE_WAKEUP_PIPE 1
#define ERTS_POLL_USE_UPDATE_REQUESTS_QUEUE 1
#undef ERTS_HAVE_PLAIN_EMU
#undef ERTS_SIGNAL_STATE