Commit messages

Lower the thread and arena count to avoid resource exhaustion on 32-bit.
The added hooks hooks.prof_sample and hooks.prof_sample_free are intended to
let advanced users track additional information, enabling new ways of
profiling on top of the jemalloc heap profile and sample features.
The sample hook is invoked after the allocation and backtracing, and forwards
both the allocation and the backtrace to the user hook; the sample_free hook
runs before the actual deallocation, and forwards only the ptr and usz to the
hook.
Summary:
Per issue #2356, some C++ compilers may optimize away the
new/delete operations in stress/cpp/microbench.cpp.
Thus, this commit (1) bumps the time interval to 1 if it is 0, and
(2) marks the pointers in the microbench as volatile.
Added the microbenchmark for operator delete.
Also modified bench.h so that it can be used in C++.
In bench.h, define the ratio as the time-consumption ratio and
adjust how the ratio is displayed.
Previously, if a thread performs only deallocations, it stays on the slow path /
in the minimally initialized state forever. However, dealloc-only is a valid
pattern for dedicated reclamation threads -- it means the thread cache is
disabled (no batched flush) for them, which causes high overhead and contention.
Added a condition to fully initialize TSD once a fair amount of deallocation
activity is observed.
An arena-level name can help identify manual arenas.
Add a sanity check for the double-free issue in the arena, covering the case where the tcache has already been flushed.
Add a new runtime option `debug_double_free_max_scan` that specifies the maximum
number of stack entries to scan in the cache bin when trying to detect the
double-free bug (currently debug build only).
Despite being obsolete, pvalloc is still present in GLIBC and should
work correctly when jemalloc replaces the libc allocator.
To avoid resource exhaustion on 32-bit platforms.
Allow setting the safety check abort hook through mallctl, which avoids abort()
and core dumps.
Signed-off-by: cuishuang <imcusg@gmail.com>
With realloc(ptr, 0) being UB per C23, the option name "strict" makes less sense
now. Rename it to "alloc", which describes the behavior.
In test/integration/rallocx, the full usable size is checked, which may confuse
overflow detection.
Due to a bug in sec initialization, the number of cached size classes
was equal to 198. The bug caused the creation of more than a hundred
unused bins, although it didn't affect the caching logic.
The option makes the process exit with error code 1 if a memory leak
is detected. This is useful for implementing automated tools that rely
on leak detection.
This takes a fair amount of resources. Under high concurrency it was causing
resource exhaustion such as pthread_create and mmap failures.
Under high concurrency / heavy test load (e.g. when using run_tests.sh), the
background thread may not get scheduled for a long period of time. Retry at
most 100 times before bailing out.
The option may be disabled at configure time, which resulted in invalid-option
output from the tests.
On deallocation, sampled pointers (which are specially aligned) get junked and
stashed into the tcache (to prevent immediate reuse). The expected behavior is
that read-after-free is corrupted and stopped by the junk filling, while
write-after-free is checked when flushing the stashed pointers.
Verified with EXTRA_CFLAGS=-Wshadow.
Many profiling-related tests make assumptions about the profiling settings,
e.g. that opt_prof is off by default, and that prof_active defaults to on when
opt_prof is on. However, the default settings can be changed via
--with-malloc-conf at build time. Fix the tests by setting the assumed
options explicitly.
The nstime module guarantees a monotonic clock update within a single nstime_t.
This means that if two separate nstime_t variables are read and updated
separately, nstime_subtract between them may underflow. Fixed by switching to
the time-since utility provided by nstime.
To utilize a separate retained area for guarded extents, use bump alloc
to allocate those extents.
Currently used only for guarding purposes, the hint is used to determine
if the allocation is supposed to be frequently reused. For example, it
might urge the allocator to ensure the allocation is cached.
While this file initially contained helper functions for one particular
test, its usage has since spread across different test files. Its purpose
has shifted toward being a collection of handy arena ctl wrappers.
The new allocator will be used to allocate guarded extents used as slabs
for guarded small allocations.
With prof enabled, the number of page-aligned allocations doesn't match the
number of slab "ends", because prof allocations skew the addresses. This
leads to 'pages' array overflow and hard-to-debug failures.
This prepares the foundation for more sanitizer-related work in the
future.
In order for nstime_update to handle non-monotonic clocks, it requires the
input nstime to be initialized -- when reading the clock for the first time,
zero initialization has to be done. Otherwise a random stack value may be
interpreted as the clock and returned.
To ensure that the free fastpath can tolerate uninitialized tsd, improved the
static initializer for rtree_ctx in tsd.
The Android build has issues with these defines; this change allows the build
to succeed when it doesn't need to build the tests.
As the code evolves, some code paths that have previously assigned
deferred_work_generated may cease being reached. This would leave the value
uninitialized. This change initializes the value for safety.
Darwin has an API similar to Linux/FreeBSD's malloc_usable_size.
Adding guarded extents, which are regular extents surrounded by guard pages
(mprotected). To reduce syscalls, small guarded extents are cached as a
separate eset in ecache, and decay through the dirty / muzzy / retained pipeline
as usual.
This mallctl accepts an arena_config_t structure, which can be used to
customize the behavior of the arena. Right now it contains extent_hooks and
a new option, metadata_use_hooks, which controls whether the extent hooks
are also used for metadata allocation.
The metadata_use_hooks option has two main use cases:
1. In heterogeneous memory systems, to avoid metadata being placed on
potentially slower memory.
2. To avoid virtual memory being leaked as a result of a metadata
allocation failure originating in an extent hook.
If users want to be notified when a heap dump occurs, they can set this hook.
Existing backtrace implementations skip native stack frames from runtimes like
Python. The hook allows augmenting the backtraces to attribute allocations to
native functions in heap profiles.