path: root/test
* More conservative setting for /test/unit/background_thread_enable. (Qi Wang, 2023-02-16, 1 file, -6/+3)
  Lower the thread and arena count to avoid resource exhaustion on 32-bit.
* Implement prof sample hooks "experimental.hooks.prof_sample(_free)". (Qi Wang, 2022-12-07, 1 file, -15/+179)
  The added hooks hooks.prof_sample and hooks.prof_sample_free are intended to allow advanced users to track additional information, to enable new ways of profiling on top of the jemalloc heap profile and sample features. The sample hook is invoked after the allocation and backtracing, and forwards both the allocation and the backtrace to the user hook; the sample_free hook happens before the actual deallocation, and forwards only the ptr and usz to the hook.
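  A minimal sketch of installing the sample hook described above, assuming a profiling-enabled build; the hook typedef below is written out for illustration only and should be taken from jemalloc's headers in real code:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Assumed hook shape for illustration: jemalloc forwards the allocation
 * (pointer, usable size) and the captured backtrace. */
typedef void (*prof_sample_hook_t)(const void *ptr, size_t size,
    void **backtrace, size_t backtrace_len);

static void
my_sample_hook(const void *ptr, size_t size, void **backtrace,
    size_t backtrace_len) {
    (void)backtrace;
    fprintf(stderr, "sampled %zu bytes at %p (backtrace depth %zu)\n",
        size, (void *)ptr, backtrace_len);
}

int
main(void) {
    prof_sample_hook_t hook = my_sample_hook;
    /* Requires running with MALLOC_CONF="prof:true". */
    if (mallctl("experimental.hooks.prof_sample", NULL, NULL, &hook,
        sizeof(hook)) != 0) {
        fprintf(stderr, "failed to install prof_sample hook\n");
        return 1;
    }
    return 0;
}
```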
* Fix dividing 0 error in stress/cpp/microbench (guangli-dai, 2022-12-06, 2 files, -18/+29)
  Summary: Per issue #2356, some CXX compilers may optimize away the new/delete operation in stress/cpp/microbench.cpp. Thus, this commit (1) bumps the time interval to 1 if it is 0, and (2) modifies the pointers in the microbench to volatile.
* Inline free and sdallocx into operator delete (Guangli Dai, 2022-11-21, 1 file, -4/+3)
* Benchmark operator delete (guangli-dai, 2022-11-21, 3 files, -6/+90)
  Added the microbenchmark for operator delete. Also modified bench.h so that it can be used in C++.
* Update the ratio display in benchmark (guangli-dai, 2022-11-21, 1 file, -1/+1)
  In bench.h, specify the ratio as the time consumption ratio and modify the display of the ratio.
* Enable fast thread locals for dealloc-only threads. (Qi Wang, 2022-10-25, 1 file, -0/+56)
  Previously, if a thread did only deallocations, it stayed on the slow path / minimal initialized state forever. However, dealloc-only is a valid pattern for dedicated reclamation threads -- this means the thread cache is disabled (no batched flush) for them, which causes high overhead and contention. Added a condition to fully initialize TSD when a fair amount of dealloc activity is observed.
* Fix a bug in C++ integration test. (Guangli Dai, 2022-09-16, 1 file, -2/+1)
* Add arena-level name. (Guangli Dai, 2022-09-16, 2 files, -2/+45)
  An arena-level name can help identify manual arenas.
* Making jemalloc max stack depth a runtime option (Guangli Dai, 2022-09-12, 3 files, -2/+3)
* Add double free detection using slab bitmap for debug build (Guangli Dai, 2022-09-06, 1 file, -9/+41)
  Add a sanity check for double free issues in the arena, in case the tcache has already been flushed.
* Add double free detection in thread cache for debug build (Ivan Zaitsev, 2022-08-04, 2 files, -8/+42)
  Add new runtime option `debug_double_free_max_scan` that specifies the max number of stack entries to scan in the cache bin when trying to detect the double free bug (currently debug build only).
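  A minimal sketch of exercising the option above with a debug build; the scan limit value here is arbitrary:

```c
#include <stdlib.h>

/* Run with a debug build of jemalloc and, for example:
 *   MALLOC_CONF="debug_double_free_max_scan:32" ./a.out
 * The second free() of the same pointer is expected to be caught while the
 * pointer still sits in the thread cache bin. */
int
main(void) {
    void *p = malloc(64);
    free(p);
    free(p);    /* double free: should trigger the safety check */
    return 0;
}
```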
* Implement pvalloc replacement (Alex Lapenkou, 2022-05-18, 1 file, -0/+14)
  Despite being an obsolete function, pvalloc is still present in GLIBC and should work correctly when jemalloc replaces the libc allocator.
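  For reference, pvalloc behaves like valloc but rounds the request up to a whole number of pages; a quick check, assuming a glibc system with jemalloc preloaded:

```c
#define _GNU_SOURCE
#include <malloc.h>     /* pvalloc() and malloc_usable_size() are GNU extensions */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void) {
    long page = sysconf(_SC_PAGESIZE);
    void *p = pvalloc(100);    /* rounded up to one full page, page-aligned */
    printf("page size %ld, usable size %zu\n", page, malloc_usable_size(p));
    free(p);
    return 0;
}
```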
* Improve the failure message upon opt_experimental_infallible_new. (Qi Wang, 2022-05-17, 1 file, -2/+2)
* Make test/unit/background_thread_enable more conservative. (Qi Wang, 2022-05-04, 1 file, -8/+19)
  To avoid resource exhaustion on 32-bit platforms.
* Avoid abort() in test/integration/cpp/infallible_new_true. (Qi Wang, 2022-04-25, 1 file, -43/+49)
  Allow setting the safety check abort hook through mallctl, which avoids abort() and core dumps.
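  A sketch of installing such an abort hook from a test; the mallctl name "experimental.hooks.safety_check_abort" and the hook signature (receiving the failure message) are assumptions here, not confirmed by the commit text:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Assumed hook shape: called with the safety-check failure message instead of
 * aborting the process. */
static void
record_failure(const char *msg) {
    fprintf(stderr, "safety check failed: %s\n", msg);
}

int
main(void) {
    void (*hook)(const char *) = record_failure;
    if (mallctl("experimental.hooks.safety_check_abort", NULL, NULL, &hook,
        sizeof(hook)) != 0) {
        fprintf(stderr, "hook mallctl not available in this build\n");
    }
    return 0;
}
```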
* fix some typos (cuishuang, 2022-04-25, 2 files, -2/+2)
  Signed-off-by: cuishuang <imcusg@gmail.com>
* Rename zero_realloc option "strict" to "alloc". (Qi Wang, 2022-04-20, 3 files, -5/+5)
  With realloc(ptr, 0) being UB per C23, the option name "strict" makes less sense now. Rename to "alloc" which describes the behavior.
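  A small illustration of the renamed option, assuming it is set via MALLOC_CONF; with "alloc", realloc(ptr, 0) is expected to behave like a minimal-size allocation request rather than an error:

```c
#include <stdio.h>
#include <stdlib.h>

/* Run with: MALLOC_CONF="zero_realloc:alloc" ./a.out */
int
main(void) {
    void *p = malloc(16);
    void *q = realloc(p, 0);    /* with "alloc": treated as a minimal-size request */
    printf("realloc(p, 0) returned %p\n", q);
    free(q);
    return 0;
}
```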
* Use volatile to workaround buffer overflow false positives. (Qi Wang, 2022-04-04, 1 file, -5/+21)
  In test/integration/rallocx, the full usable size is checked, which may confuse overflow detection.
* Add comments and use meaningful vars in sz_psz2ind. (Charles, 2022-03-24, 1 file, -0/+66)
* Fix size class calculation for sec (Alex Lapenkou, 2022-03-22, 1 file, -0/+1)
  Due to a bug in sec initialization, the number of cached size classes was equal to 198. The bug caused the creation of more than a hundred unused bins, although it didn't affect the caching logic.
* Avoid overflow warnings in test/unit/safety_check. (Qi Wang, 2022-01-27, 1 file, -6/+13)
* Add prof_leak_error option (yunxu, 2022-01-21, 1 file, -0/+1)
  The option makes the process exit with error code 1 if a memory leak is detected. This is useful for implementing automated tools that rely on leak detection.
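  A sketch of enabling the option through the malloc_conf global, assuming a profiling-enabled build; combining it with the other prof options shown (to force sampling and a final leak report) is an assumption here:

```c
#include <stdlib.h>

/* Leak reporting happens at exit; with prof_leak_error the process should
 * exit with status 1 when a leak is detected. lg_prof_sample:0 makes every
 * allocation sampled so the small leak below is visible to the profiler. */
const char *malloc_conf =
    "prof:true,prof_final:true,prof_leak:true,prof_leak_error:true,lg_prof_sample:0";

int
main(void) {
    void *leaked = malloc(4096);
    (void)leaked;    /* intentionally never freed */
    return 0;
}
```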
* Lower the num_threads in the stress test of test/unit/prof_recent (Qi Wang, 2022-01-11, 1 file, -1/+1)
  This takes a fair amount of resources. Under high concurrency it was causing resource exhaustion such as pthread_create and mmap failures.
* Add background thread sleep retry in test/unit/hpa_background_thread (Qi Wang, 2022-01-07, 1 file, -4/+12)
  Under high concurrency / heavy test load (e.g. using run_tests.sh), the background thread may not get scheduled for a longer period of time. Retry 100 times max before bailing out.
* Fix test config of lg_san_uaf_align. (Qi Wang, 2022-01-04, 5 files, -5/+17)
  The option may be disabled at configure time, which resulted in "invalid option" output from the tests.
* Rename san_enabled() to san_guard_enabled(). (Qi Wang, 2021-12-29, 2 files, -3/+3)
* Add stats for stashed bytes in tcache. (Qi Wang, 2021-12-29, 3 files, -9/+51)
* Implement use-after-free detection using junk and stash. (Qi Wang, 2021-12-29, 7 files, -21/+380)
  On deallocation, sampled pointers (specially aligned) get junked and stashed into the tcache (to prevent immediate reuse). The expected behavior is that a read-after-free sees junk-filled data, while a write-after-free is detected when the stashed pointers are flushed and their junk fill is verified.
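  A conceptual sketch of the junk-and-stash idea (illustration only, not jemalloc's actual implementation): on free, fill the object with a junk byte and park the pointer instead of reusing it immediately; when the stash is later flushed, verify the junk pattern to catch writes-after-free.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define JUNK_BYTE 0x5a
#define STASH_MAX 64

static void  *stash_ptr[STASH_MAX];
static size_t stash_usz[STASH_MAX];
static int    stash_count;

/* Called instead of returning the object to the allocator right away. */
static void
stash_on_free(void *ptr, size_t usable_size) {
    memset(ptr, JUNK_BYTE, usable_size);    /* reads-after-free now see junk */
    if (stash_count < STASH_MAX) {
        stash_ptr[stash_count] = ptr;
        stash_usz[stash_count] = usable_size;
        stash_count++;
    }
}

/* Called when the stash is flushed back to the allocator. */
static void
stash_flush(void) {
    for (int i = 0; i < stash_count; i++) {
        unsigned char *p = stash_ptr[i];
        for (size_t j = 0; j < stash_usz[i]; j++) {
            assert(p[j] == JUNK_BYTE);    /* a mismatch means write-after-free */
        }
        /* ...here the real allocator would reclaim the object... */
    }
    stash_count = 0;
}

int
main(void) {
    static char obj[32];
    stash_on_free(obj, sizeof(obj));
    /* obj[0] = 1;  <- a write-after-free here would trip the assert above */
    stash_flush();
    return 0;
}
```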
* Fix shadowed variable usage. (Qi Wang, 2021-12-23, 11 files, -58/+58)
  Verified with EXTRA_CFLAGS=-Wshadow.
* Add the profiling settings for tests explicit. (Qi Wang, 2021-12-22, 13 files, -11/+26)
  Many profiling-related tests make assumptions about the profiling settings, e.g. that opt_prof is off by default, and that prof_active defaults to on when opt_prof is on. However, the default settings can be changed via --with-malloc-conf at build time. Fix the tests by adding the assumed settings explicitly.
* Fix the time-since computation in HPA. (Qi Wang, 2021-12-21, 1 file, -0/+7)
  The nstime module guarantees monotonic clock updates only within a single nstime_t. This means that if two separate nstime_t variables are read and updated separately, nstime_subtract between them may result in underflow. Fixed by switching to the time-since utility provided by nstime.
* Add nstime_ns_since which obtains the duration since the input time. (Qi Wang, 2021-12-21, 1 file, -0/+28)
* Fix base_ehooks_get_for_metadata (mweisgut, 2021-12-20, 1 file, -1/+30)
* San: Bump alloc frequently reused guarded allocations (Alex Lapenkou, 2021-12-15, 1 file, -10/+12)
  To utilize a separate retained area for guarded extents, use bump alloc to allocate those extents.
* Pass 'frequent_reuse' hint to PAI (Alex Lapenkou, 2021-12-15, 2 files, -33/+37)
  Currently used only for guarding purposes, the hint is used to determine if the allocation is supposed to be frequently reused. For example, it might urge the allocator to ensure the allocation is cached.
* Rename 'arena_decay' to 'arena_util' (Alex Lapenkou, 2021-12-15, 4 files, -3/+3)
  While this file initially contained helper functions for one particular test, its usage has since spread across different test files. Its purpose has shifted toward being a collection of handy arena ctl wrappers.
* San: Implement bump alloc (Alex Lapenkou, 2021-12-15, 4 files, -16/+127)
  The new allocator will be used to allocate guarded extents used as slabs for guarded small allocations.
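  For context, a bump allocator simply carves allocations sequentially out of one reserved region and never frees them individually; a conceptual sketch (not jemalloc's san_bump code):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uintptr_t base;     /* start of the reserved region */
    size_t    limit;    /* total bytes reserved */
    size_t    used;     /* bytes handed out so far */
} bump_alloc_t;

/* Return a size-byte, align-aligned chunk, or NULL once the region is exhausted. */
void *
bump_alloc(bump_alloc_t *ba, size_t size, size_t align) {
    uintptr_t cur = ba->base + ba->used;
    uintptr_t ret = (cur + (align - 1)) & ~(uintptr_t)(align - 1);
    if (ret + size > ba->base + ba->limit) {
        return NULL;
    }
    ba->used = (size_t)((ret + size) - ba->base);
    return (void *)ret;
}
```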
* San: Avoid running san tests with prof enabled (Alex Lapenkou, 2021-12-15, 1 file, -0/+4)
  With prof enabled, the number of page-aligned allocations doesn't match the number of slab "ends" because prof allocations skew the addresses. This leads to a 'pages' array overflow and hard-to-debug failures.
* San: Rename 'guard' to 'san' (Alex Lapenkou, 2021-12-15, 5 files, -4/+4)
  This prepares the foundation for more sanitizer-related work in the future.
* Fix uninitialized nstime reading / updating on the stack in hpa. (Qi Wang, 2021-11-16, 1 file, -1/+1)
  In order for nstime_update to handle non-monotonic clocks, it requires the input nstime to be initialized -- when reading for the first time, zero init has to be done. Otherwise a random stack value may be interpreted as a clock reading and returned.
* Optimize away the tsd_fast() check on free fastpath. (Qi Wang, 2021-10-28, 1 file, -0/+7)
  To ensure that the free fastpath can tolerate uninitialized tsd, improved the static initializer for rtree_ctx in tsd.
* Redefine functions with test hooks only for tests (Alex Lapenkou, 2021-10-15, 1 file, -1/+1)
  The Android build has issues with these defines; this allows the build to succeed when it doesn't need to build the tests.
* Initialize deferred_work_generated (Alex Lapenkou, 2021-10-07, 3 files, -20/+14)
  As the code evolves, some code paths that previously assigned deferred_work_generated may cease being reached, which would leave the value uninitialized. This change initializes the value for safety.
* Darwin malloc_size override support proposal. (David CARLIER, 2021-10-01, 9 files, -10/+15)
  Darwin has an API (malloc_size) similar to Linux/FreeBSD's malloc_usable_size.
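  A small portability sketch of the two introspection calls; the platform guard below is the usual convention and an assumption, not taken from this commit:

```c
#include <stdio.h>
#include <stdlib.h>

#if defined(__APPLE__)
#include <malloc/malloc.h>
#define USABLE_SIZE(p) malloc_size(p)
#else
#include <malloc.h>
#define USABLE_SIZE(p) malloc_usable_size(p)
#endif

int
main(void) {
    void *p = malloc(100);
    /* jemalloc typically rounds the request up to a size class, so the
     * reported usable size can exceed the 100 bytes requested. */
    printf("requested 100, usable %zu\n", USABLE_SIZE(p));
    free(p);
    return 0;
}
```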
* Small refactors around 7bb05e0. (Qi Wang, 2021-09-27, 2 files, -12/+12)
* Implement guard pages. (Qi Wang, 2021-09-26, 11 files, -178/+434)
  Adding guarded extents, which are regular extents surrounded by guard pages (mprotected). To reduce syscalls, small guarded extents are cached as a separate eset in ecache, and decay through the dirty / muzzy / retained pipeline as usual.
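  A conceptual sketch of the guard-page idea (illustration only, not jemalloc's extent code): reserve one extra page on each side of the usable range and make it inaccessible so out-of-bounds accesses fault immediately.

```c
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map usable_pages of read/write memory with a PROT_NONE guard page on each
 * side; returns a pointer to the usable region, or NULL on failure. */
void *
alloc_guarded(size_t usable_pages) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t total = (usable_pages + 2) * page;
    char *raw = mmap(NULL, total, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) {
        return NULL;
    }
    mprotect(raw, page, PROT_NONE);                /* leading guard */
    mprotect(raw + total - page, page, PROT_NONE); /* trailing guard */
    return raw + page;
}
```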
* add experimental.arenas_create_ext mallctl (Piotr Balcer, 2021-09-24, 7 files, -14/+72)
  This mallctl accepts an arena_config_t structure which can be used to customize the behavior of the arena. Right now it contains extent_hooks and a new option, metadata_use_hooks, which controls whether the extent hooks are also used for metadata allocation. The metadata_use_hooks option has two main use cases:
  1. In heterogeneous memory systems, to avoid metadata being placed on potentially slower memory.
  2. Avoiding virtual memory from being leaked as a result of metadata allocation failure originating in an extent hook.
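  A sketch of calling the new mallctl, assuming arena_config_t carries the two fields named above (extent_hooks and metadata_use_hooks); passing NULL hooks to keep the defaults is also an assumption here:

```c
#include <stdbool.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void) {
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    arena_config_t config = {
        .extent_hooks = NULL,        /* assumption: NULL keeps the default hooks */
        .metadata_use_hooks = true,  /* use the extent hooks for metadata too */
    };
    if (mallctl("experimental.arenas_create_ext", &arena_ind, &sz,
        &config, sizeof(config)) != 0) {
        fprintf(stderr, "arenas_create_ext not available\n");
        return 1;
    }
    printf("created arena %u\n", arena_ind);
    return 0;
}
```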
* Allow setting a dump hook (Alex Lapenkou, 2021-09-22, 1 file, -3/+111)
  If users want to be notified when a heap dump occurs, they can set this hook.
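  A sketch of what installing such a hook could look like; the mallctl name "experimental.hooks.prof_dump" and the hook signature (receiving the dump filename) are assumptions here, not stated in the commit text:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Assumed hook shape: invoked around a heap profile dump with the dump's
 * filename. */
static void
on_prof_dump(const char *filename) {
    fprintf(stderr, "heap profile dumped to %s\n", filename);
}

int
main(void) {
    void (*hook)(const char *) = on_prof_dump;
    if (mallctl("experimental.hooks.prof_dump", NULL, NULL, &hook,
        sizeof(hook)) != 0) {
        fprintf(stderr, "dump hook mallctl not available\n");
    }
    return 0;
}
```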
* Allow setting custom backtrace hook (Alex Lapenkou, 2021-09-22, 3 files, -7/+74)
  Existing backtrace implementations skip native stack frames from runtimes like Python. The hook allows augmenting the backtraces to attribute allocations to native functions in heap profiles.