| Commit message | Author | Age | Files | Lines |
For MSVC, ccache treats all arguments starting with a slash as an
option, which makes it fail to detect the source code file if it's
passed as a Unix-style absolute path.
Fix this by not treating an argument as an option if (a) it is not a
known option and (b) it exists as a file in the file system.
Fixes #1230.
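
The heuristic can be sketched in Python (illustrative only; the known-option set and function name are hypothetical, and the real logic lives in ccache's C++ argument processing):

```python
import os

# Hypothetical subset of the options ccache recognizes in MSVC mode.
KNOWN_OPTIONS = {"/c", "/nologo", "/O2", "/W4"}

def classify_msvc_argument(arg: str) -> str:
    """Classify an MSVC-style argument as "option" or "input".

    An argument starting with a slash is normally an option, but if it
    is (a) not a known option and (b) exists as a file, treat it as an
    input file (e.g. a Unix-style absolute path like /tmp/x.c).
    """
    if arg.startswith("/"):
        if arg not in KNOWN_OPTIONS and os.path.isfile(arg):
            return "input"
        return "option"
    return "input"
```

A path such as `/home/user/main.c` then classifies as an input file because it exists on disk, while `/nologo` stays an option.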
If the process is signaled after SignalHandler::~SignalHandler has run,
this assertion will trigger:
ccache: SignalHandler.cpp:87: static void SignalHandler::on_signal(int): failed assertion: g_the_signal_handler
Fix this by deregistering the signal handler function before destroying
the SignalHandler object.
Fixes #1246.
The content of the Zstd and Hiredis GitHub source archive URLs like
<https://github.com/$X/$Y/archive/$tag.tar.gz> apparently changes from
time to time. Color me surprised. [1] says that this is intentional
(although, at the time of writing, reverted temporarily). Let's use
another URL for Zstd and not verify the checksum for Hiredis (since
there is no release source archive).
[1]: https://github.blog/changelog/2023-01-30-git-archive-checksums-may-change/
Reference:
<https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/>
After PR #1033 and [1], a stat call is made each time a note about an
include file is found in the preprocessed output. Such calls are very
cheap on Linux (and therefore went unnoticed until now), but apparently
costly on Windows.
Fix this by caching the calculation of relative paths in
process_preprocessed_file.
[1]: 9a05332915d2808f4713437006b3f7c812d9fd74
Closes #1245.
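
The fix amounts to memoizing the directory-to-relative-path computation, so that repeated include notes pointing into the same directory do not repeat the expensive work. An illustrative Python sketch (function names are hypothetical):

```python
import os

def make_relative_path_cached(base_dir: str):
    """Return a function mapping absolute paths to paths relative to
    base_dir, caching the (potentially stat-heavy) computation per
    directory so repeated include-file notes stay cheap."""
    cache: dict[str, str] = {}

    def rel(path: str) -> str:
        dirname, basename = os.path.split(path)
        if dirname not in cache:
            cache[dirname] = os.path.relpath(dirname, base_dir)
        return os.path.join(cache[dirname], basename)

    return rel
```

Preprocessed output typically mentions many headers from the same few directories, so the cache hit rate is high.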
Some filesystems, for instance btrfs with compression enabled,
apparently make a posix_fallocate call succeed without actually
allocating the requested space for the file. This means that if the file
is mapped into memory, as the inode cache does, the process can crash
when accessing that memory if the filesystem is full.
This commit implements a workaround: the inode cache is disabled if the
filesystem reports that it has less than 100 MiB free space. The free
space check is valid for one second before it is done again. This should
hopefully make crashes very rare in practice.
Closes #1236.
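
The workaround can be sketched in Python (illustrative; names and the use of `shutil.disk_usage` are assumptions, the real check is C++):

```python
import shutil
import time

MIN_FREE_BYTES = 100 * 1024 * 1024  # 100 MiB threshold from the commit
CHECK_VALIDITY_S = 1.0              # re-check at most once per second

_cached_result = None
_cached_at = 0.0

def inode_cache_usable(path: str) -> bool:
    """Return False if the filesystem holding `path` reports less than
    100 MiB free, caching the answer for one second so the check does
    not run on every lookup."""
    global _cached_result, _cached_at
    now = time.monotonic()
    if _cached_result is None or now - _cached_at > CHECK_VALIDITY_S:
        _cached_result = shutil.disk_usage(path).free >= MIN_FREE_BYTES
        _cached_at = now
    return _cached_result
```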
Before it was possible to force downloading of the Zstandard library
using "-D ZSTD_FROM_INTERNET=ON" and similar for Hiredis. That ability
was lost in 2c742c2c7ca9, so if you for some reason do not want to use
a locally installed library, you're out of luck.
Improve this by letting ZSTD_FROM_INTERNET and HIREDIS_FROM_INTERNET be
tristate variables:
- ON: Always download
- AUTO (default): Download if a local installation is not found
- OFF: Never download
As mentioned in #1240.
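
The tristate resolution is simple enough to sketch (Python, illustrative; the real logic lives in ccache's CMake files):

```python
def should_download(mode: str, found_locally: bool) -> bool:
    """Resolve a tristate *_FROM_INTERNET setting to a decision.

    ON: always download; OFF: never download; AUTO: download only
    when no local installation was found."""
    mode = mode.upper()
    if mode == "ON":
        return True
    if mode == "OFF":
        return False
    if mode == "AUTO":
        return not found_locally
    raise ValueError(f"unknown mode: {mode}")
```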
Fixes #1238.
The cache cleanup mechanism has worked essentially the same ever since
ccache was initially created in 2002:
- The total number and size of all files in one of the 16 subdirectories
(AKA level 1) are kept in the stats file in said subdirectory.
- On a cache miss, the new compilation result file is written (based on
the first digits of the hash) to a subdirectory of one of those 16
subdirectories, and the stats file is updated accordingly.
- Automatic cleanup is triggered if the size of the level 1 subdirectory
becomes larger than max_size / 16.
- ccache then lists all files in the subdirectory recursively, stats
them to check their size and mtime, sorts the file list by mtime and
deletes the oldest 20% of the files.
Some problems with the approach described above:
- (A) If several concurrent ccache invocations result in a cache miss
and write their results to the same subdirectory then all of them will
start cleaning up the same subdirectory simultaneously, doing
unnecessary work.
- (B) The ccache invocation that resulted in a cache miss will perform
cleanup and then exit, which means that an arbitrary ccache process
that happens to trigger cleanup will take a long time to finish.
- (C) Listing all files in a subdirectory of a large cache can be quite
slow.
- (D) stat-ing all files in a subdirectory of a large cache can be quite
slow.
- (E) Deleting many files can be quite slow.
- (F) Since a cleanup by default removes 20% of the files in a
subdirectory, the actual cache size will (once the cache limit is
reached) on average hover around 90% of the configured maximum size,
which can be confusing.
This commit solves or improves on all of the listed problems:
- Before starting automatic cleanup, a global "auto cleanup" lock is
acquired (non-blocking) so that at most one process is performing
cleanup at a time. This solves the potential "cache cleanup stampede"
described in (A).
- Automatic cleanup is now performed in just one of the 256 level 2
directories. This means that a single cleanup on average will be 16
times faster than before. This improves on (B), (C), (D) and (E) since
the cleanup made by a single compilation will not have to access a
large part of the cache. On the other hand, cleanups will be triggered
16 times more often, but the cleanup duty will be more evenly spread
out during a build.
- The total cache size is calculated and compared with the configured
maximum size before starting automatic cleanup. This, in combination
with performing cleanup on level 2, means that the actual cache size
will stay very close to the maximum size instead of about 90%. This
solves (F).
The limit_multiple configuration option has been removed since it is no
longer used.
Closes #417.
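
The core of a single cleanup pass — sort one level 2 subdirectory's files by mtime and delete the oldest until the size target is met — can be sketched like this (Python, illustrative only; the real implementation is C++ and also takes the global auto-cleanup lock first):

```python
def files_to_evict(files, max_size):
    """Given (path, size, mtime) tuples for one level 2 subdirectory
    and that subdirectory's share of the configured cache limit,
    return the paths of the oldest files to delete to get under the
    limit."""
    total = sum(size for _, size, _ in files)
    evict = []
    # Oldest first, so recently used results survive.
    for path, size, _mtime in sorted(files, key=lambda f: f[2]):
        if total <= max_size:
            break
        evict.append(path)
        total -= size
    return evict
```

Because the pass stops as soon as the limit is met rather than always removing 20% of the files, the cache size stays close to the configured maximum.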
If a long-lived lock is stale and has no alive file,
LockFile::try_acquire will never succeed in acquiring the lock. Fix this
by creating the alive file for all lock types and making
LockFile::try_acquire exit when lock activity is seen instead of
immediately after failing to acquire the lock.
Another advantage is that a stale lock can now always be broken right
away if the alive file exists.
If the numerator is 99999 and the denominator is 100000, the percent
function in Statistics.cpp would return "(100.00%)" instead of the
intended "(100.0%)". Fix this by using the alternate format string if the
result string overflows its target size.
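
The fallback can be illustrated in Python (the actual function is in Statistics.cpp; the exact format strings here are assumptions):

```python
def percent(numerator: int, denominator: int) -> str:
    """Format a ratio as e.g. "(99.99%)", falling back to fewer
    decimals when rounding makes the result overflow its target
    width (e.g. 99999/100000 rounding up to 100%)."""
    if denominator == 0:
        return ""
    value = 100.0 * numerator / denominator
    result = f"({value:.2f}%)"          # primary format: two decimals
    if len(result) > len("(99.99%)"):   # overflow, e.g. "(100.00%)"
        result = f"({value:.1f}%)"      # alternate format: one decimal
    return result
```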
This makes it possible to check ordinary log messages when debugging
"ccache -c" and similar options.
Progress bars will now be smoother since the operations are now divided
into 256 instead of 16 "read files + act on files" steps. This is also
in preparation for future improvements related to cache cleanup.
Changed the inode cache implementation to use spinlocks instead of pthread
mutexes. This makes the inode cache work on FreeBSD and other systems where the
pthread mutexes are destroyed when the last memory mapping containing the
mutexes is unmapped.
Also added tmpfs, ufs and zfs to the list of supported filesystems on macOS and
BSDs.
See also ccache discussion #1228.
This fixes a problem where the original umask would be used when storing
a remote cache result in the local cache in from_cache.
Fixes #1235.
The base directory will now match case-insensitively with absolute paths
in preprocessed output, or from /showIncludes in the depend mode case,
when compiling with MSVC.
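
A sketch of case-insensitive base-directory matching (Python, illustrative; the real code is part of ccache's path handling for MSVC):

```python
import os

def path_under_base_dir(path: str, base_dir: str,
                        ignore_case: bool) -> bool:
    """Check whether `path` is under `base_dir`, optionally ignoring
    case, as needed on Windows where path casing can differ between
    the compiler's output and the configured base directory."""
    path = os.path.normpath(path)
    base = os.path.normpath(base_dir)
    if ignore_case:
        path, base = path.lower(), base.lower()
    return path == base or path.startswith(base + os.sep)
```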
lgtm.com has been shut down.