This does a lot of house cleaning, finally bringing the cache
cleanup logic to a level of comprehensibility.

Changes in this commit include:

  o _artifactcache/artifactcache.py: _cache_size, _cache_quota and
    _cache_lower_threshold are now all private variables.

    get_approximate_cache_size() is now get_cache_size().

    Added get_quota_exceeded() for the purpose of safely checking
    whether we have exceeded the quota.

    set_cache_size() now asserts that the passed size is not None;
    it is no longer acceptable to set a None cache size.

  o _artifactcache/cascache.py: No longer set the ArtifactCache
    'cache_size' variable violently in the commit() method.

    Also, the calculate_cache_size() method now unconditionally
    calculates the cache size; that is what it is for.

  o _scheduler/jobs/cachesizejob.py & _scheduler/jobs/cleanupjob.py:
    Now check the success status, and don't try to set the cache
    size in the case that the job was terminated.

  o _scheduler/jobs/elementjob.py & _scheduler/queues/queue.py:
    No longer pass the cache size around from child tasks; this now
    happens only explicitly, not implicitly for all tasks.

  o _scheduler/queues/buildqueue.py & _scheduler/scheduler.py:
    Use the get_quota_exceeded() accessor.

This is a part of #623

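The resulting accessor API can be pictured with a minimal sketch;
the real ArtifactCache carries much more state, the names follow the
commit message, and the lazy computation in get_cache_size() is an
assumption made for illustration:

    class ArtifactCache():
        def __init__(self, cache_quota):
            self._cache_size = None        # Size estimate, in bytes
            self._cache_quota = cache_quota

        def get_cache_size(self):
            # Compute on demand if we have no estimate yet
            if self._cache_size is None:
                self._cache_size = self.calculate_cache_size()
            return self._cache_size

        def get_quota_exceeded(self):
            # Safe to call at any time, never compares against None
            return self.get_cache_size() >= self._cache_quota

        def set_cache_size(self, cache_size):
            # Setting a None cache size is not acceptable anymore
            assert cache_size is not None
            self._cache_size = cache_size
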
The artifact cache is a singleton anyway; the Element itself has an
internal handle on the artifact cache because it is needed very
frequently, but an artifact cache is not element specific and should
not be looked up by surrounding code on a per element basis.

Updated _scheduler/queues/queue.py, _scheduler/queues/buildqueue.py
and _scheduler/jobs/elementjob.py to get the artifact cache directly
from the Platform.

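In practice, callers now reach the cache through the Platform
singleton; a minimal usage sketch (the internal module path is an
assumption):

    from buildstream._platform import Platform

    platform = Platform.get_platform()   # The process-wide singleton
    artifacts = platform.artifactcache   # One cache, not per element
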
This was previously poking directly at Platform._instance.

Also, use the name 'artifacts' to hold the artifact cache, to be
consistent with other parts of the codebase, instead of calling it
'cache'.

This was previously poking directly at Platform._instance.

Also, use the name 'artifacts' to hold the artifact cache, to be
consistent with other parts of the codebase, instead of calling it
'cache'.

There is no justification to hold onto this state here. Instead,
just make `Element._assemble()` return the size of the artifact it
cached, and localize handling of that return value in the BuildQueue
implementation, where the value is observed.

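A rough sketch of the localized handling; the method names here are
assumptions, only the shape is the point:

    class BuildQueue(Queue):

        def process(self, element):
            # Runs in the child task; the artifact size is now just
            # the job's return value rather than shared state
            return element._assemble()

        def done(self, job, element, result, success):
            # Runs in the main process; the returned size is
            # observed only here
            if success:
                self._check_cache_size(job, element, result)
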
Previously, the API contract was to expose the estimated_size
variable on the ArtifactCache instance for all to see; however, it
is only relevant to the ArtifactCache abstract class code.
Subclasses were expected to update the estimated_size variable in
their calculate_cache_size() implementations.

To untangle this and hide away the estimated size, this commit does
the following:

  o Introduces the ArtifactCache.compute_cache_size() API for
    external callers.

  o ArtifactCache.compute_cache_size() calls the abstract method,
    which the CasCache subclass implements.

  o ArtifactCache.compute_cache_size() updates the private
    estimated_size variable.

  o All direct callers of ArtifactCache.calculate_cache_size() have
    been updated to use ArtifactCache.compute_cache_size() instead,
    which takes care of updating anything local to the ArtifactCache
    abstract class code (the estimated_size).

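This is the classic template method shape; a minimal sketch:

    class ArtifactCache():
        def __init__(self):
            self._estimated_size = None    # Now private to this class

        def compute_cache_size(self):
            # Public entry point: delegate the real work to the
            # subclass, and record the result privately here
            self._estimated_size = self.calculate_cache_size()
            return self._estimated_size

        def calculate_cache_size(self):
            # Abstract method, implemented by CasCache
            raise NotImplementedError()
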
The ArtifactCache._local variable used to exist in order to support
a special hack allowing absolute paths to a remote artifact cache,
all for the purpose of testing.

That has all gone away with the introduction of CAS, leaving behind
a stale variable.

Here we have a very private looking _check_cache_size_real()
function which no-one would ever want to call from outside of the
_scheduler; especially given its `_real()` suffix, one would look
for another, outward facing API to use. However, this function is
not private to the scheduler, and is intended to be called by the
`Queue` implementations.

  o Renamed this to check_cache_size()

  o Moved it to the public API section of the Scheduler object

  o Added the missing API documenting comment

  o Also added the missing API documenting comment to the private
    `_run_cleanup()` callback, which runs in response to completion
    of the cache size calculation job.

  o Also placed the cleanup job logs into a cleanup subdirectory,
    for better symmetry with the cache_size jobs, which now have
    their own subdirectory.

The artifact cache provides the following public methods for
external callers, but was hiding them away as if they were private:

  o ArtifactCache.add_artifact_size()

  o ArtifactCache.set_cache_size()

Mark these properly public.

This runs after every pull, and does not need the cache exclusively;
only the cleanup job requires the cache exclusively.

Without this, every time a cache_size job is queued, all pull and
build jobs need to complete before the cache_size job can run
exclusively, which is not good.

This is a part of #623

This means there is no cap on shared resource requests.

Together with the previous commit, this causes the cleanup job and
the pull/build jobs to all require unlimited shared access to the
CACHE resource, while only the cleanup job requires exclusive access
to the resource.

This is a part of #623

the CACHE

This is what the whole resource.py thing was created for: the
cleanup job must have exclusive access to the cache, while the pull
and build jobs, which result in adding new artifacts, must only
require shared access.

This is a part of #623

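The shared/exclusive distinction is essentially a readers/writer
lock over a named resource; a toy model of the idea (not the real
resources.py code):

    class Resource():
        def __init__(self, max_shared=None):
            self._shared_count = 0         # Current shared holders
            self._exclusive = False        # Whether held exclusively
            self._max_shared = max_shared  # None means uncapped

        def try_acquire_shared(self):
            # Pull and build jobs take shared access; any number may
            # hold it at once when uncapped, but never alongside an
            # exclusive holder
            if self._exclusive:
                return False
            if self._max_shared is not None \
               and self._shared_count >= self._max_shared:
                return False
            self._shared_count += 1
            return True

        def try_acquire_exclusive(self):
            # The cleanup job takes exclusive access; it must wait
            # until all shared holders have drained out
            if self._exclusive or self._shared_count > 0:
                return False
            self._exclusive = True
            return True
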
jobs.py: Reduce FD leaks from queues and process objects

See merge request BuildStream/buildstream!778

The garbage collector can take too long to get around to cleaning
up the Queue and Process instances in completed Job instances. As
such, FDs tend to leak, and in very large projects this can result
in running out of FDs before a build, fetch, track, or other process
can complete. This patch reduces the chance of that by only creating
the queue when it is needed, and forcing the queue and process
instances to be deleted when the parent is finished with them.

Signed-off-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk>

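The pattern, reduced to its essentials (a sketch of the approach,
not the actual jobs.py code):

    import multiprocessing

    class Job():

        def _child_action(self, queue):
            # Child process body; real jobs do their work here
            queue.put('result')

        def spawn(self):
            # Create the queue only when the job actually starts,
            # rather than in __init__, so Job objects do not hold
            # pipe FDs open before they need them
            self._queue = multiprocessing.Queue()
            self._process = multiprocessing.Process(
                target=self._child_action, args=(self._queue,))
            self._process.start()

        def _parent_child_completed(self):
            # Drop the references eagerly instead of waiting for
            # the garbage collector to close the underlying FDs
            self._process.join()
            self._queue.close()
            self._queue = None
            self._process = None
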
Improve documentation for artifact cache installation

See merge request BuildStream/buildstream!777

Remove ambiguity about systemd service files being separate.
Add a URL for more information about systemd service files.
Add a note about public keys being mandatory for self-signed certs.
Make cert/key file naming consistent throughout the document.

plugins/git.py: Warn if ref is not in given track

See merge request BuildStream/buildstream!564

Add tests that cover assert_ref_in_track & the configurable
CoreWarnings REF_NOT_IN_TRACK warning token.

Add a helper function assert_ref_in_track to GitMirror() in git.py,
which is used when staging and initing the source workspace. It
checks that the element's ref exists in the track (branch/tag) if
one has been specified, raising a warning if necessary. The warning
makes use of the warning token 'REF_NOT_IN_TRACK' from the
configurable CoreWarnings. If the element has been tracked with bst,
it is assumed that the value of ref exists in the track, as it was
generated from it, and as such is not asserted.

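One way to picture the underlying check, as a sketch against a bare
mirror clone (not the plugin's actual code):

    import subprocess

    def ref_in_track(git_dir, ref, track):
        # `git merge-base --is-ancestor <ref> <track>` exits with 0
        # exactly when <ref> is reachable from <track>, i.e. the ref
        # exists in the history of the tracked branch or tag
        exit_code = subprocess.call(
            ['git', '--git-dir', git_dir,
             'merge-base', '--is-ancestor', ref, track],
            stderr=subprocess.DEVNULL)
        return exit_code == 0
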
Source fetcher changes

See merge request BuildStream/buildstream!772

Plugin.configure()

This cannot test for unaliased URLs, as those can be discovered
later on, outside of user provided element configuration; at least
we assert that if an alias was used, we have seen it at load time.

This will cause a BUG to occur for a plugin which marks an aliased
URL (or attempts to translate one) outside of `Plugin.configure()`,
if that URL was not previously seen.

This is a part of #620

The Source must now mention whether the marked or translated URL is
"primary" or not. Even when a Source may have multiple URLs, the
auxiliary URLs are derived from the primary one; not only is this
true for git, but it is mandated by our tracking API, which assumes
there is a primary URL.

This adjusts the `git` source and the test `fetch_source.py` source
to behave properly and advertise their primary URLs properly.

This is a part of #620

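In plugin code this boils down to something like the following
sketch; the 'url' key and attribute name are invented for
illustration, and the primary keyword follows this commit's
description:

    class FetchSource(Source):

        def configure(self, node):
            # The URL that tracking operates on is the primary one;
            # any mirror or submodule URLs derived from it are not
            self.original_url = self.node_get_member(node, str, 'url')
            self.mark_download_url(self.original_url, primary=True)
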
Keep track of whether the plugin is currently being configured.

Adjusted the Element and Source classes to call _configure() in
place of calling configure() directly.

This is a part of #620

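The private wrapper is just bookkeeping around the public method; a
minimal sketch:

    class Plugin():
        def __init__(self):
            self.__configuring = False

        def _configure(self, node):
            # Called by the core instead of configure() directly,
            # so the core knows when configure time is in effect
            self.__configuring = True
            self.configure(node)
            self.__configuring = False

        def _get_configuring(self):
            return self.__configuring
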
A download URL must be interpreted by the core at
`Plugin.configure()` time, even if it is only used later on.

This is a part of #620

Also highlight the fact that the plugin can rely on the fetcher's
fetch() method getting called before the next item in the list is
consumed, which is the magic behavior that the git plugin relies on.

This is a part of #620

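That ordering guarantee is what makes a lazy generator viable here;
a sketch of the idea (the attribute names are assumptions):

    class GitSource(Source):

        def get_source_fetchers(self):
            # The core fetches each yielded item before asking for
            # the next one, so by the time the loop below resumes,
            # fetching the mirror has already populated
            # self.submodules
            yield self.mirror
            for submodule in self.submodules:
                yield submodule
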
Move cache_size.pid.log files into a subdirectory of logs

See merge request BuildStream/buildstream!769

This prevents the cache_size.pid.log files from cluttering the root
log directory.

Retries log as failures

See merge request BuildStream/buildstream!766

This adjusts the message handler for the child processes to no
longer override the message type.

This also removes the ability for unhandled non-BstError exceptions
to trigger a retry.

buildstream/_project.py: Report if project.conf is missing name

See merge request BuildStream/buildstream!680

Explicitly check that project.conf contains a name. This resolves
the issue of the provenance check from _yaml.py incorrectly
reporting the offending file as the default_config_node
projectconfig.yaml when no name key exists in the pre_config_node
dict.

Replacing string 'bzr' with value from host tools

See merge request BuildStream/buildstream!764

_artifactcache/artifactcache.py: Write the cache_size file atomically

See merge request BuildStream/buildstream!762

This was causing issues while the size file is being read and
written simultaneously.

The proper fix would be to read/add/save the file atomically, and
that will require locking; but this fix is a good stop gap for the
existing crashes.

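Atomic replacement of the file's contents looks roughly like this
(a generic sketch, not the patch itself):

    import os
    import tempfile

    def write_cache_size(path, size):
        # Write to a temporary file in the same directory, then
        # atomically rename it over the destination, so that a
        # reader never observes a partially written file
        fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            with os.fdopen(fd, 'w') as f:
                f.write(str(size))
            os.replace(tmpname, path)  # Atomic on POSIX filesystems
        except BaseException:
            os.unlink(tmpname)
            raise
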
Report processing errors from tracking

Closes #533

See merge request BuildStream/buildstream!747

Failures to write files when tracking were not reported.

Fixes #533.

Minor code changes revolving around source mirroring

See merge request BuildStream/buildstream!758

This was displaying the aliased URL, which is pretty useless; use
the translated URL for the timed activity instead.

This was sitting in the section for abstract methods, but it is
most definitely not an abstract method to be implemented by Sources.

Added some comments to make the flow easier to follow, and removed
an annoying 'success' variable in favor of a for / else loop
statement.

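For reference, the for / else idiom replaces the flag like so; all
names here are hypothetical stand-ins, only the control flow
matters:

    # Hypothetical stand-ins for the real fetch machinery
    mirrors = ['https://mirror-a.example.com',
               'https://mirror-b.example.com']

    def try_fetch(mirror):
        return mirror.endswith('b.example.com')

    # Before: a manual 'success' flag tracks whether any mirror worked
    success = False
    for mirror in mirrors:
        if try_fetch(mirror):
            success = True
            break
    if not success:
        raise RuntimeError("fetch failed on all mirrors")

    # After: the else clause runs only when the loop completes
    # without hitting 'break'
    for mirror in mirrors:
        if try_fetch(mirror):
            break
    else:
        raise RuntimeError("fetch failed on all mirrors")
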
tests/frontend/mirror.py: Reenable test_mirror_fetch_upstream_absent[ostree]

See merge request BuildStream/buildstream!755

This test was skipped because of issue #538, but #538 was fixed and
the test was never reenabled.

tests/plugins/filter.py: Don't run redundant tests

See merge request BuildStream/buildstream!753

There is no reason that the filter element codepaths can behave
differently depending on the Source implementation used in the
test, as the Source implementation does not have any filter specific
virtual methods.

Remove the redundant tests and just perform these tests with the
git source.