Currently, the `source-bundle` command is entirely broken as it tries to stage the
sources in a directory that doesn't exist. Fix it by ensuring that we create
the necessary directories before calling any methods that try to use those
directories.
This fix comes with a regression test to ensure that the basic use-case
of `source-bundle` continues to work in the future.
Fixes https://gitlab.com/BuildStream/buildstream/issues/651.
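A minimal sketch of the kind of fix described, with an illustrative helper name and directory layout (not BuildStream's actual code):

    import os


    def prepare_bundle_directory(bundle_dir):
        # Create the staging directory before any method attempts to
        # write staged sources into it.
        source_dir = os.path.join(bundle_dir, 'source')
        os.makedirs(source_dir, exist_ok=True)
        return source_dir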
README.rst: Add status badges for PyPI release and Python versions
See merge request BuildStream/buildstream!719
The first badge will work fine right away while the second badge will
show "not found" until a release is made after merging this branch:
https://gitlab.com/BuildStream/buildstream/merge_requests/718.
source/install_source.rst: pip plugin depends on host pip
See merge request BuildStream/buildstream!791
_artifactcache/casserver.py: Implement BatchReadBlobs
Closes #632
See merge request BuildStream/buildstream!785
Fixes #632.
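For reference, a BatchReadBlobs handler built on the standard REAPI protos can look roughly like this; the import paths depend on how the remote-apis protos were generated, and the in-memory blob store is a stand-in for reading objects from the local CAS:

    from build.bazel.remote.execution.v2 import (
        remote_execution_pb2, remote_execution_pb2_grpc)
    from google.rpc import code_pb2


    class ContentAddressableStorageServicer(
            remote_execution_pb2_grpc.ContentAddressableStorageServicer):

        def __init__(self, objects):
            # objects: dict mapping digest hash -> blob bytes (stand-in
            # for the local CAS objects directory)
            self.objects = objects

        def BatchReadBlobs(self, request, context):
            response = remote_execution_pb2.BatchReadBlobsResponse()
            for digest in request.digests:
                blob_response = response.responses.add()
                blob_response.digest.CopyFrom(digest)
                data = self.objects.get(digest.hash)
                if data is None:
                    blob_response.status.code = code_pb2.NOT_FOUND
                else:
                    blob_response.data = data
                    blob_response.status.code = code_pb2.OK
            return response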
Add validation of configuration variables
See merge request BuildStream/buildstream!678
Ensure that protected variables are not being redefined by the user.
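For illustration, that validation can be a simple membership check over the user-provided variables; the protected variable names and the error type below are assumptions for the example, not BuildStream's exact set:

    # Hypothetical set of protected variable names
    PROTECTED_VARIABLES = ('element-name', 'max-jobs', 'project-name')


    def validate_variables(user_variables):
        for name in user_variables:
            if name in PROTECTED_VARIABLES:
                raise ValueError(
                    "'{}' is a protected variable and cannot be redefined"
                    .format(name))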
And remove them from the defaults as they are dynamically set by
BuildStream.
Setting "max-jobs" won't be allowed anymore in a following commit.
Ensure PWD is set in process environment
See merge request BuildStream/buildstream!782
Naive getcwd implementations (such as in bash 4.4) can break
when bind-mounts to different paths on the same filesystem are present,
since the algorithm needs to know whether a directory is a mount point
in order to decide whether it can trust the inode value from the readdir result
or whether it must stat the directory instead.
Less naive implementations (such as in glibc) fall back to iterating again with stat
when the directory is not found because the inode from readdir was wrong,
though a Linux-specific implementation could use name_to_handle_at.
Letting the command know what directory it is in makes it unnecessary
for it to call the faulty getcwd in the first place.
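A minimal sketch of what ensuring PWD amounts to when launching a command; the helper name and example command are illustrative, not BuildStream's actual sandbox code:

    import os
    import subprocess


    def run_in_directory(command, cwd):
        # Pass PWD explicitly so the spawned shell can report its working
        # directory without having to call a potentially faulty getcwd().
        env = dict(os.environ)
        env['PWD'] = cwd
        return subprocess.call(command, cwd=cwd, env=env)


    # For example, bash can now rely on $PWD rather than computing getcwd():
    # run_in_directory(['bash', '-c', 'pwd'], '/path/to/build/dir')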
_scheduler/queues: Mark build and pull queue as requiring shared access to the CACHE
See merge request BuildStream/buildstream!775
This does a lot of house cleaning, finally bringing cache
cleanup logic to a level of comprehensibility.
Changes in this commit include:
o _artifactcache/artifactcache.py: _cache_size, _cache_quota and
_cache_lower_threshold are now all private variables.
get_approximate_cache_size() is now get_cache_size().
Added get_quota_exceeded() for the purpose of safely checking
whether we have exceeded the quota.
set_cache_size() now asserts that the passed size is not None;
it is no longer acceptable to set a None cache size.
o _artifactcache/cascache.py: No longer set the ArtifactCache
'cache_size' variable violently in the commit() method.
Also, the calculate_cache_size() method now unconditionally
calculates the cache size; that is what it's for.
o _scheduler/jobs/cachesizejob.py & _scheduler/jobs/cleanupjob.py:
Now check the success status. Don't try to set the cache size
in the case that the job was terminated.
o _scheduler/jobs/elementjob.py & _scheduler/queues/queue.py:
No longer pass the cache size around from child tasks;
this now happens only explicitly, not implicitly for all tasks.
o _scheduler/queues/buildqueue.py & _scheduler/scheduler.py:
Use the get_quota_exceeded() accessor.
This is a part of #623
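A rough sketch of the accessor shape described above; the bodies are illustrative rather than the real implementation:

    class ArtifactCache:
        """Sketch of the accessor pattern only, not the real class."""

        def __init__(self, cache_quota):
            self._cache_size = 0                        # private now
            self._cache_quota = cache_quota             # private now
            self._cache_lower_threshold = cache_quota / 2

        def get_cache_size(self):
            # Formerly get_approximate_cache_size()
            return self._cache_size

        def set_cache_size(self, cache_size):
            # Setting a None cache size is no longer acceptable
            assert cache_size is not None
            self._cache_size = cache_size

        def get_quota_exceeded(self):
            # Safe way for callers to ask whether cleanup is needed
            return self.get_cache_size() > self._cache_quota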
The artifact cache is a singleton anyway; the Element itself
has an internal handle on the artifact cache because it is
needed very frequently, but an artifact cache is not element
specific and should not be looked up by surrounding code
on a per-element basis.
Updated _scheduler/queues/queue.py, _scheduler/queues/buildqueue.py
and _scheduler/jobs/elementjob.py to get the artifact cache directly
from the Platform.
This was previously poking directly at the Platform._instance.
Also, use the name 'artifacts' to hold the artifact cache to
be consistent with other parts of the codebase, instead of
calling it 'cache'.
This was previously poking directly at the Platform._instance.
Also, use the name 'artifacts' to hold the artifact cache to
be consistent with other parts of the codebase, instead of
calling it 'cache'.
There is no justification to hold onto this state here.
Instead, just make `Element._assemble()` return the size of the
artifact it cached, and localize handling of that return value in
the BuildQueue implementation where the value is observed.
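A hedged sketch of what localizing that return value looks like; only _assemble() is named by the commit, the surrounding method names are illustrative:

    class BuildQueue:
        """Sketch of the relevant BuildQueue methods, not the real class."""

        def process(self, element):
            # Runs in the child job: return the size of the artifact we cached.
            return element._assemble()

        def done(self, job, element, result, success):
            # Runs on the scheduling side: the returned size is observed
            # here, and nowhere else, instead of being passed around as
            # shared state.
            if success:
                artifact_size = result
                self._check_cache_size(job, element, artifact_size)

        def _check_cache_size(self, job, element, artifact_size):
            # Illustrative placeholder: would ask the scheduler to re-check
            # the cache size against the quota.
            pass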
Previously, the API contract was to expose the estimated_size variable
on the ArtifactCache instance for all to see; however, it is only relevant
to the ArtifactCache abstract class code. Subclasses were expected to
update the estimated_size variable in their calculate_cache_size()
implementation.
To untangle this and hide away the estimated size, this commit
does the following:
o Introduces ArtifactCache.compute_cache_size() API for external
callers
o ArtifactCache.compute_cache_size() calls the abstract method
for the CasCache subclass to implement
o ArtifactCache.compute_cache_size() updates the private
estimated_size variable
o All direct callers of ArtifactCache.calculate_cache_size()
have been updated to use the ArtifactCache.compute_cache_size()
method instead, which takes care of updating anything local
to the ArtifactCache abstract class code (the estimated_size)
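In code, the split described above looks roughly like this; the directory walk is an illustrative way to measure the cache, not the verbatim implementation:

    import os


    class ArtifactCache:

        def __init__(self, casdir):
            self.casdir = casdir
            self._estimated_size = None   # now private to this class

        def compute_cache_size(self):
            # Public entry point: delegate to the subclass and keep the
            # private estimate up to date.
            self._estimated_size = self.calculate_cache_size()
            return self._estimated_size

        def calculate_cache_size(self):
            # Abstract method, implemented by the CasCache subclass
            raise NotImplementedError()


    class CasCache(ArtifactCache):

        def calculate_cache_size(self):
            # Unconditionally walk the CAS directory and sum the file sizes
            total = 0
            for dirpath, _, filenames in os.walk(self.casdir):
                for name in filenames:
                    total += os.path.getsize(os.path.join(dirpath, name))
            return total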
The ArtifactCache._local variable used to exist in order to
use a special hack to allow absolute paths to a remote artifact
cache; this was all for the purpose of testing.
This has all gone away with the introduction of CAS, leaving behind
a stale variable.
Here we have a very private-looking _check_cache_size_real() function
which no-one would ever want to call from outside of the _scheduler;
especially given its `_real()` suffix, we should look for another
outward facing API to use.
However this is not private to the scheduler, and is intended to
be called by the `Queue` implementations.
o Renamed this to check_cache_size()
o Moved it to the public API section of the Scheduler object
o Added the missing API documenting comment
o Also added the missing API documenting comment to the private
`_run_cleanup()` callback which runs in response to completion
of the cache size calculation job.
o Also place the cleanup job logs into a cleanup subdirectory,
for better symmetry with the cache_size jobs which now have
their own subdirectory
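As a deliberately simplified, synchronous illustration of that flow (the real scheduler runs these steps as separate jobs; the artifact cache method names are borrowed from the neighbouring commits in this series):

    class Scheduler:
        """Sketch of the renamed entry point and its callback, not the real class."""

        def __init__(self, artifacts):
            self.artifacts = artifacts

        # Public API: intended to be called by the Queue implementations
        def check_cache_size(self, job, element, artifact_size):
            # Account for the newly added artifact, re-estimate the cache
            # size, then let the cleanup callback decide what to do.
            self.artifacts.add_artifact_size(artifact_size)
            cache_size = self.artifacts.compute_cache_size()
            self._run_cleanup(cache_size)

        # Private: callback run when the cache size calculation completes
        def _run_cleanup(self, cache_size):
            if self.artifacts.get_quota_exceeded():
                # The real scheduler queues a cleanup job here, whose logs
                # now live under a 'cleanup' subdirectory.
                print("Cache size {} exceeds the quota, cleaning up"
                      .format(cache_size))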
The artifact cache provides the following public methods for
external callers, but was hiding them away as if they were private.
o ArtifactCache.add_artifact_size()
o ArtifactCache.set_cache_size()
Mark these as properly public.
This runs after every pull, and does not need the cache exclusively;
only the cleanup job requires the cache exclusively.
Without this, every time a cache_size job is queued, all pull and
build jobs need to complete before the cache_size job can run exclusively,
which is not good.
This is a part of #623
This means there is no cap for shared resource requests.
Together with the previous commit, this causes the cleanup
job and the pull/build jobs to all require unlimited shared
access to the CACHE resource, while only the cleanup job
requires exclusive access to the resource.
This is a part of #623
_scheduler/queues: Mark build and pull queue as requiring shared access to the CACHE
This is what the whole resource.py thing was created for: the
cleanup job must have exclusive access to the cache, while the pull
and build jobs, which result in adding new artifacts, must only require
shared access.
This is a part of #623
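An illustrative sketch of the shared/exclusive split; the ResourceType names follow these commit messages, while the classes here are stand-ins rather than the real queue and job types:

    class ResourceType:
        CACHE = 'cache'
        PROCESS = 'process'


    class BuildQueue:
        # Building adds new artifacts, so shared access to the cache is enough
        resources = [ResourceType.PROCESS, ResourceType.CACHE]


    class PullQueue:
        # Pulling also only adds artifacts, so shared access is enough
        resources = [ResourceType.CACHE]


    class CleanupJob:
        # Only cleanup must hold the cache exclusively
        resources = [ResourceType.CACHE]
        exclusive_resources = [ResourceType.CACHE]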
Upstream freedesktop-sdk autotools config for libtool .la files
See merge request BuildStream/buildstream!683
In freedesktop-sdk we add a script to our project.conf to remove
libtool .la files from autotools projects after install; this seems
like a sensible default, so we're attempting to send it upstream.
Remote execution client
See merge request BuildStream/buildstream!626
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
Executing run() on a sandbox can now replace the virtual directory,
since remote execution returns a potentially different directory rather
than an update to the existing one. Call get_virtual_directory() again
after running to account for this.
https://gitlab.com/BuildStream/buildstream/issues/454
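The calling pattern that change implies is roughly the following; the flags value and keyword names are illustrative rather than the exact Sandbox API:

    def assemble_in_sandbox(sandbox, command, cwd, env):
        # Run the command; with remote execution the sandbox may swap in a
        # freshly fetched output tree rather than mutate the existing one.
        sandbox.run(command, flags=0, cwd=cwd, env=env)

        # So do not hold on to a directory handle taken before run();
        # fetch the virtual directory again afterwards.
        vdir = sandbox.get_virtual_directory()
        return vdir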
https://gitlab.com/BuildStream/buildstream/issues/454
The remote execution client is implemented as a remote sandbox that
sends sources and build commands to a REAPI server and fetches results
once remotely executed. New file.
https://gitlab.com/BuildStream/buildstream/issues/454
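At a very high level, the flow described amounts to something like this; every helper name below is a hypothetical stand-in for the real CAS and Remote Execution API calls:

    def run_remote_build(cas, execution_service, input_root_digest, command):
        # 1. Upload the command and input tree so the server can find them.
        action_digest = cas.upload_action(command, input_root_digest)
        # 2. Ask the execution service to run the action remotely.
        operation = execution_service.execute(action_digest)
        result = operation.wait_for_completion()
        # 3. Fetch the resulting output directory back into the local CAS.
        return cas.pull_tree(result.output_directory_digest)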
https://gitlab.com/BuildStream/buildstream/issues/454
This just adds one option, "remote-execution/url". Affects multiple files.
https://gitlab.com/BuildStream/buildstream/issues/454
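For illustration, reading the new option from an already-loaded project.conf mapping could look like this; plain dict access is used here rather than BuildStream's actual YAML helpers, and the URL is a placeholder:

    def get_remote_execution_url(project_config):
        # 'remote-execution' is optional; return None when it is absent.
        remote_execution = project_config.get('remote-execution', {})
        return remote_execution.get('url')


    # The corresponding project.conf stanza would look roughly like:
    #   remote-execution:
    #     url: http://remote.example.com:50051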
This is for use after remote execution has finished, since remote
execution produces a new output directory rather than modifying
the initial directory.
https://gitlab.com/BuildStream/buildstream/issues/454
https://gitlab.com/BuildStream/buildstream/issues/454
Add a pull_tree() helper.
https://gitlab.com/BuildStream/buildstream/issues/454