| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This commit ensures that CASCache.list_refs() and
ArtifactCache.list_artifacts() can both handle glob expressions.
|
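The glob support described in this commit can be sketched with Python's stdlib fnmatch; the `list_refs` helper and flat ref list below are simplified stand-ins for the real CASCache API, not its actual implementation.

```python
import fnmatch

def list_refs(refs, glob=None):
    # With no glob, return every ref; otherwise keep only the refs
    # matching the expression (fnmatch's "*" also crosses "/").
    if glob is None:
        return sorted(refs)
    return sorted(ref for ref in refs if fnmatch.fnmatch(ref, glob))
```

ArtifactCache.list_artifacts() could then forward its glob argument straight to this kind of filter.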
|
|
|
|
|
|
| |
A CasBasedDirectory object of an artifact's logs can be obtained
with ArtifactCache.get_artifacts_log(). This ultimately calls
CASCache.get_top_level_dir() to obtain a CasBasedDirectory
of an artifact's subdirectory (or subdirectories).
|
|
|
|
|
|
|
|
| |
This commit removes the method ArtifactCache.get_artifact_fullname()
and replaces it with Element.get_artifact_name().
Given a key, we are now able to construct the full name of any of an
element's artifacts.
|
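Name construction of the kind Element.get_artifact_name() performs can be sketched as below; the `project/normal-name/key` layout and the normalization rules are assumptions for illustration, not necessarily BuildStream's exact scheme.

```python
def get_artifact_name(project_name, element_name, key):
    # Derive a filesystem-safe "normal name" from the project
    # relative element name, then join it with the cache key.
    normal_name = element_name.replace('.bst', '').replace('/', '-')
    return '{}/{}/{}'.format(project_name, normal_name, key)
```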
|
|
|
|
|
|
| |
This makes a junction use the artifact cache of the parent project
before the ones defined for the junction.
Fixes #401
|
|
|
|
|
|
|
| |
The code for initializing remotes added the project-specific remote
caches to the global list instead of making a copy.
Fixes #618
|
|
|
|
|
|
|
| |
Instead only rely on the headroom being enough to protect against
out-of-space conditions. Making the headroom configurable is left
as a separate step.
The changes to achieve this are:
* Rename ArtifactCache.has_quota_exceeded() to ArtifactCache.full().
* ArtifactCache.full() now also reports True if the available
space on the artifact cache volume is smaller than the headroom.
This ensures jobs get triggered to cleanup the cache when
reaching the end of the disk.
* When loading the artifact quota, it is now only an error if
the quota exceeds the overall disk space, not if it does not
fit in the available space.
It is still a warning if the quota does not fit in the
available space on the artifact cache volume.
* Updated scheduler.py and buildqueue.py for the API rename
* tests: Updated the artifactcache/expiry.py test for its
expectations in this regard.
Added a new test to test an error when quota was specified to
exceed total disk space, and adjusted the existing tests to
expect a warning when the quota does not fit in the available
space.
This fixes issues #733 and #869.
|
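The ArtifactCache.full() behavior described above can be sketched with shutil.disk_usage(); the headroom value and the function shape are illustrative assumptions, not the real ArtifactCache code.

```python
import shutil

# Assumed 2 GiB headroom; in the real code this is a builtin
# constant that may later become configurable.
_HEADROOM = 2 * 1024 ** 3

def cache_full(cache_size, quota, volume_path):
    # "Full" means either the configured quota is exceeded, or the
    # free space on the cache volume has shrunk below the headroom;
    # the latter triggers cleanup jobs near the end of the disk.
    if cache_size >= quota:
        return True
    return shutil.disk_usage(volume_path).free < _HEADROOM
```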
|
|
|
|
|
| |
This seems to have been copy/pasted from cascache: it documents
the function as possibly returning None if defer_prune was
specified, but this function does not expose defer_prune.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Added some useful status messages when:
* Calculating a new artifact cache usage size
* Starting a cleanup
* Finishing a cleanup
Also enhanced messaging about what was cleaned up so far when
aborting a cleanup.
|
|
|
|
|
| |
A simple object which creates a snapshot of current
usage statistics for easy reporting in the frontend.
|
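Such a snapshot object can be as small as a namedtuple; the field names below are illustrative, not the real frontend API.

```python
from collections import namedtuple

ArtifactCacheUsage = namedtuple(
    "ArtifactCacheUsage", ["quota_size", "used_size", "used_percent"])

def snapshot_usage(quota_size, used_size):
    # Capture the numbers once, so the frontend can render them
    # without racing against ongoing cache activity.
    percent = 100 * used_size / quota_size if quota_size else 0
    return ArtifactCacheUsage(quota_size, used_size, percent)
```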
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This benefits from a better UtilError being raised, and turns
the artifact cache's local function into a one-liner.
The loop which finds the first existing directory in the
given path has been removed, being meaningless due to the
call to os.makedirs() in ArtifactCache.__init__().
The local function was renamed to _get_cache_volume_size() and
no longer takes any arguments, which is more suitable for the
function as it serves as a testing override surface for
unittest.mock().
The following test cases which use the function to override
the ArtifactCache behavior have been updated to use the new
overridable function name:
tests/artifactcache/cache_size.py
tests/artifactcache/expiry.py
|
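A zero-argument method makes a convenient override surface for unittest.mock(); the class below is a hypothetical stand-in showing how tests can fake any volume size, not the real ArtifactCache.

```python
from unittest import mock

class ArtifactCache:
    def _get_cache_volume_size(self):
        # The real implementation would stat the cache volume;
        # taking no arguments lets tests replace it wholesale.
        raise NotImplementedError()

    def quota_fits(self, quota):
        return quota <= self._get_cache_volume_size()

# In a test, pretend the cache volume is 100 bytes large:
with mock.patch.object(ArtifactCache, "_get_cache_volume_size",
                       return_value=100):
    assert ArtifactCache().quota_fits(50)
    assert not ArtifactCache().quota_fits(500)
```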
|
|
|
|
|
| |
This is not an error related to loading data, like a parse error
in the quota specification is, but a problem raised by the artifact
cache; this allows us to assert more specific machine-readable
errors in test cases (instead of checking the string in stderr, which
this patch also fixes).
This also fixes a typo in the message of said error.
* tests/artifactcache/cache_size.py
Updated test case to expect the artifact error, which consequently
changes the test case to properly assert a machine readable error
instead of asserting text in the stderr (which is the real, secret
motivation behind this patch).
* tests/artifactcache/expiry.py: Reworked test_invalid_cache_quota()
Now expect the artifact error for the tests which check configurations
which create caches too large to fit on the disk.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
It was saying "There is not enough space to build the given element.",
which makes it seem the error is associated with a specific element,
but this does not make sense in a cleanup task.
Instead say "There is not enough space to complete the build.", which
should be more clear that even after cleaning up there is not enough
space.
|
|
|
|
|
|
|
|
|
|
|
|
| |
List of methods moved
* Initialization check: made it a class method that runs in a subprocess,
for use when checking from the main buildstream process.
* fetch_blobs
* send_blobs
* verify_digest_on_remote
* push_method
Part of #802
|
|
|
|
|
|
|
|
|
| |
Other components will start to rely on the cas modules, and not the
artifact cache modules, so the code should be organized to reflect this.
All relevant imports have been changed.
Part of #802
|
| |
|
|
|
|
|
|
|
|
|
| |
Reduces a bit of code and leaves fewer code paths for pushing; we
now use separate URIs for push and pull.
Also renamed fetch() to pull() for more consistency throughout the
codebase: 'pull' refers to downloading artifacts, while 'fetch'
refers to downloading sources (in general).
|
|
|
|
| |
Instead of mounting in CWD, mount in the ${artifactdir}/mounts directory.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
So now we use:
o bare user repository
o set file uid/gid = 0 at commit time
o user mode checkouts.
This ensures we have hardlinks to check out, and that we have
permission to create those files. With recent ostree this
means that the only losses are:
o No xattrs
o Stripped suid/sgid permission bits
o Ownership assigned to active user
In older ostree versions, this munges the permission bits further.
|
|
|
|
|
|
|
|
| |
This is a temporary workaround for https://gitlab.com/BuildStream/buildstream/issues/19
I tried to force copy on checkout instead, as it would be less
intense, but for some reason this does not ensure there are no
hardlinks, so we resort to a full archive-z2 repository instead.
|
| |
|
|
|
|
|
|
|
|
|
| |
Catch both EEXIST and ENOTEMPTY from os.rename().
The low-level rename() call (see `man 2 rename`) is documented to
report either EEXIST _or_ ENOTEMPTY (at the OS/libc's discretion)
when the destination directory exists and is not empty.
|
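The resulting handling can be sketched as below; `move_into_place` and the rmtree fallback are illustrative, not the exact BuildStream code.

```python
import errno
import os
import shutil

def move_into_place(src, dest):
    try:
        os.rename(src, dest)
    except OSError as e:
        # rename(2) reports EEXIST _or_ ENOTEMPTY, at the OS/libc's
        # discretion, when dest is a non-empty directory.
        if e.errno not in (errno.EEXIST, errno.ENOTEMPTY):
            raise
        # The destination was already populated; drop our copy.
        shutil.rmtree(src)
```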
|
|
|
|
|
|
| |
o The metaelements and metasources now carry the name, the loader
resolves source names now.
o Element/Source factories don't require a name anymore as they
are already in the meta objects
o Pipeline no longer composes names
o Element.name is now the original project relative filename,
this allows plugins to identify that name in their dependencies,
allowing one to express configuration which identifies elements
by the same name that the user used in the dependencies.
o Removed plugin._get_display_name() in favor of plugin.name
o Added Element.normal_name, for the cases where we need to have
a normalized name for creating directories and log files
o Updated frontend and test cases and all callers to use the
new naming
|
|
|
|
|
|
|
|
|
|
| |
Now the artifact cache APIs take an Element as a parameter and
work out the ostree key/branch from there automatically, making
the Element code a bit cleaner.
Also the ArtifactError derives from our special _BstError ensuring
that any artifact related errors are reported nicely through the
regular pathways.
|
| |
|
| |
|
|
|
|
|
|
| |
Don't pass a None suffix to the tempfile.TemporaryDirectory()
constructor; this causes exceptions in Python 3.4. Even though we
require Python >= 3.5, we might as well patch this up.
|
| |
|
|
|
|
|
| |
Note pep8 must be ignored in some specific cases, like the import statement
which comes after a gi version check. This is done with the '# nopep8' annotation.
|
|
Uses OSTree.
|