Commit log
|
Replaces Pipeline method `track_cross_junction_filter()`.
This changes the error domain for invalid cross junction tracking, so
the following two test cases are updated:
* testing/_sourcetests/track.py
* tests/frontend/track.py
|
This internal test relied on internal construction of the pipeline and
on the shape of internal API interactions. This is the case for many
tests in the tests/internals directory, which makes internal API churn
harder to deal with.
In this case, the test has long since been superseded by the newer test
in tests/plugins/loading.py, which not only tests the ability to load a
custom plugin and source successfully, but also tests the various
possible failure modes.
In short, this internal test is now thankfully obsolete.
|
This test was broken by the following commits:
98c807002cf3beb2110695083450a42fe8feefd0
9a124386a0ba8995e7cfd92e2c7d8fb23854f7e4
As of these commits, a Cli() instance requires that its directory not
be created in advance. This was not caught in CI because we skip the
tests in CI where the runners lack proper subsecond mtime precision.
|
This also adjusts the rather strange tests in tests/internals/cascache.py,
which use unittest's MagicMock interface to inspect what happened on
specific Python methods instead of doing proper end-to-end testing.
|
It was quite hard to follow what these tests were testing, and they
did not cover the error cases thoroughly either. Instead, use some
test parametrization to implement more succinct tests which cover the
API surface more thoroughly.
Due to parametrization and discrete testing of various use cases, this
was going to be very expensive, so this patch introduces some pytest
module-scope fixtures.
This allows multiple discrete tests to be run against a prebuilt
ArtifactShare with specific artifacts already built and available in
the share, keeping this discrete testing well optimized.
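A minimal sketch of the module-scope fixture pattern described above, assuming a context-manager helper along the lines of the `create_artifact_share()` found in the test utilities (the project build step is elided):

```python
import pytest

from tests.testutils import create_artifact_share  # assumed helper


@pytest.fixture(scope="module")
def prebuilt_share(tmp_path_factory):
    # Expensive setup runs once per module: create the share and
    # populate it with the artifacts the discrete tests will inspect.
    share_dir = str(tmp_path_factory.mktemp("artifactshare"))
    with create_artifact_share(share_dir) as share:
        # ... build the test project here and push artifacts to `share` ...
        yield share


@pytest.mark.parametrize("target", ["target.bst", "other.bst"])
def test_artifact_available(prebuilt_share, target):
    # Each parametrized case reuses the prebuilt share rather than
    # rebuilding the same artifacts over and over.
    ...
```

Because the fixture is module-scoped, pytest tears the share down once, after all parametrized cases have run.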
|
This allows our tests to use the ArtifactShare() object in
custom fixtures.
|
In errors pertaining to failing to launch a shell with a buildtree.
Other related updates:
- _frontend/cli.py: Propagate machine readable error codes in
  `bst shell`. This command prefixes a reported error, so it rewraps
  the error into an AppError; this needs to propagate the originating
  machine readable error.
- tests/integration/shell.py, tests/integration/shellbuildtrees.py:
  Updated to use the new machine readable errors
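As a rough illustration of the rewrapping described above (these class shapes are simplified stand-ins, not BuildStream's exact internals), the point is to keep the machine readable `reason` while re-raising under the frontend's error type:

```python
class StreamError(Exception):
    # Stand-in for a core error carrying machine readable details.
    def __init__(self, message, detail=None, reason=None):
        super().__init__(message)
        self.detail = detail
        self.reason = reason


class AppError(Exception):
    def __init__(self, message, detail=None, reason=None):
        super().__init__(message)
        self.detail = detail
        self.reason = reason


def launch_shell():
    raise StreamError("no buildtree available", reason="missing-buildtree")


def run():
    try:
        launch_shell()
    except StreamError as e:
        # Prefix the message for the frontend, but keep the originating
        # machine readable reason so tests need not parse the wording.
        raise AppError("Error launching shell: {}".format(e),
                       detail=e.detail, reason=e.reason) from e
```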
|
Assert that errors are raised when stack dependencies are declared as
build-only or runtime-only dependencies.
|
Stack elements cannot be build-only dependencies, as this would defeat
the purpose of using stack elements in order to directly build-depend
on them.
Stack element dependencies must all be built in order to build depend
on them, and as such we gain no build parallelism by allowing
runtime-only dependencies on stack elements. Declaring a runtime-only
dependency on a stack element as a whole might still be useful, but
still requires the entire stack to be built at the time we need that
stack.
Instead, it is more useful to ensure that a stack element is a logical
group of all dependencies, including runtime dependencies, such that we
can guarantee cache key alignment with all stack dependencies.
This allows for stronger reliability in commands such as
`bst artifact checkout`, which can now reliably download and check out
a fully built stack as a result, without any uncertainty about possible
runtime-only dependencies which might exist in the project where that
artifact was created.
This consequently closes #1075.
This also fixes the following tests such that they no longer require
build-depends or runtime-depends to work in stack elements:
* tests/frontend/default_target.py: It was not necessary to check the
  results of show; these stacks were set to runtime-depends so that
  they would have the same buildable state as their dependencies when
  shown.
* tests/format/dependencies.py, tests/frontend/pull.py,
  tests/frontend/show.py, tests/integration/compose.py:
  These tests were using specific build/runtime dependencies in
  stacks, but for no particular reason.
|
It's not used outside testutils.
|
This new test verifies that environment variables are preserved in
generated artifacts, and that artifact data is observed rather than
irrelevant local state when integrating an artifact checked out by
its artifact name.
|
When instantiating an ArtifactElement, use an ArtifactProject to ensure
that the Element does not accidentally have access to any incidentally
existing project loaded from the current working directory.
Also pass along the Artifact to the Element's initializer directly, and
conditionally instantiate the element based on its artifact instead of
based on loading YAML configuration.
Fixes #1410
Summary of changes:
* _artifactelement.py:
  - Now load the Artifact and pass it along to the Element constructor
  - Now use an ArtifactProject for the element's project
  - Remove overrides of Element methods; instead of behaving
    differently, we now just fill in all the blanks for an Element to
    behave more naturally when loaded from an artifact.
  - Avoid redundantly loading the same artifact twice; if the artifact
    was cached then we will load only one artifact.
* element.py:
  - Conditionally instantiate from the passed Artifact instead of
    considering any YAML loading.
  - Error out early in _prepare_sandbox() in case we are trying to
    instantiate a sandbox for an uncached artifact, in which case we
    don't have any SandboxConfig at hand to do so.
* _stream.py:
  - Clear the ArtifactProject cache after loading artifacts
  - Ensure we load a list of unique artifacts without any duplicates
* tests/frontend/buildcheckout.py: Expect a different error when trying
  to check out an uncached artifact
* tests/frontend/push.py, tests/frontend/artifact_show.py: No longer
  expect duplicates to show up with wildcard patterns which would
  capture multiple versions of the same artifact (this changes because
  of #1410 being fixed)
|
This commit enriches the metadata we store on artifacts in the new
detached low/high diversity metadata files:
* The SandboxConfig is now stored in the artifact, allowing one to
  perform activities such as launching sandboxes on artifacts
  downloaded via artifact name (without backing project data).
* The environment variables are now stored in the artifact, similarly
  allowing one to shell into downloaded artifacts which are unrelated
  to a loaded project.
* The element variables are now stored in the artifact, allowing more
  flexibility in what the core can do with a downloaded ArtifactElement
* The element's strict key is now stored as well.
All of these can, of course, additionally enhance traceability in the
UI with commands such as `bst artifact show`.
Summary of changes:
* _artifact.py:
  - Store new data in the new proto digests.
  - Added new accessors to extract these new aspects from loaded
    artifacts.
  - Bump the proto version number for compatibility
* _artifactcache.py: Adjusted to push and pull the new blobs and
  digests.
* element.py:
  - Call Artifact.cache() with new parameters
  - Expect the strict key from Artifact.get_meta_keys()
  - Always specify the strict key when constructing an Artifact
    instance which will later be used to cache the artifact
    (i.e. the self.__artifact Artifact).
* _versions.py: Bump the global artifact version number, as this
  breaks the artifact format.
* tests/cachekey: Updated cache key test for new keys.
|
This commit changes SandboxConfig such that it now has a simple
constructor and a new SandboxConfig.new_from_node() classmethod to
load it from a YAML configuration node. The new version of
SandboxConfig now uses type annotations.
SandboxConfig also now sports a to_dict() method to help with
serialization in artifacts. This replaces
SandboxConfig.get_unique_key(), which did exactly the same thing,
except that to_dict() uses the same names as expected in the YAML
configuration.
The element.py code has been updated to use the classmethod, and to
use the to_dict() method when constructing cache keys.
This refactor is meant to allow instantiating a SandboxConfig without
any MappingNode, such that we can later load a SandboxConfig from an
Artifact instead of from a parsed Element.
This commit also updates the cache keys in the cache key test, as the
cache key format is slightly changed by the to_dict() method.
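A condensed sketch of the shape this refactor gives SandboxConfig; the field and key names here are illustrative assumptions rather than the exact schema:

```python
from typing import Any, Dict, Optional


class SandboxConfig:
    # Plain constructor: no YAML node required, so a SandboxConfig can
    # also be built from data recovered out of an artifact.
    def __init__(self, build_os: str, build_arch: str,
                 build_uid: Optional[int] = None,
                 build_gid: Optional[int] = None):
        self.build_os = build_os
        self.build_arch = build_arch
        self.build_uid = build_uid
        self.build_gid = build_gid

    # Classmethod used on the YAML loading path.
    @classmethod
    def new_from_node(cls, node) -> "SandboxConfig":
        node.validate_keys(["build-os", "build-arch", "build-uid", "build-gid"])
        return cls(
            build_os=node.get_str("build-os"),
            build_arch=node.get_str("build-arch"),
            build_uid=node.get_int("build-uid", None),
            build_gid=node.get_int("build-gid", None),
        )

    # Serialize with the same names as the YAML configuration, so one
    # dictionary serves both cache keys and artifact storage.
    def to_dict(self) -> Dict[str, Any]:
        sandbox: Dict[str, Any] = {
            "build-os": self.build_os,
            "build-arch": self.build_arch,
        }
        if self.build_uid is not None:
            sandbox["build-uid"] = self.build_uid
        if self.build_gid is not None:
            sandbox["build-gid"] = self.build_gid
        return sandbox
```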
|
This changes how the scheduler works and adapts all the code that
needs adapting in order to be able to run in threads instead of in
subprocesses, which helps with Windows support, and will allow some
simplifications in the main pipeline.
This addresses the following issues:
* Fix #810: All CAS calls are now made in the master process, and thus
  share the same connection to the CAS server
* Fix #93: We don't start as many child processes anymore, so the risk
  of starving the machine is much lower
* Fix #911: We now use `forkserver` for starting processes. We also
  don't use subprocesses for jobs, so we should be starting fewer
  subprocesses
The following high-level changes were made:
* cascache.py: Run the CasCacheUsageMonitor in a thread instead of a
  subprocess.
* casdprocessmanager.py: Ensure start and stop of the process are
  thread safe.
* job.py: Run the child in a thread instead of a process, and adapt
  how we stop a thread, since we can't use signals anymore.
* _multiprocessing.py: Not needed anymore, we are not using `fork()`.
* scheduler.py: Run the scheduler with a threadpool to run the child
  jobs in. Also adapt how our signal handling is done, since we are
  not receiving signals from our children anymore, and can't kill them
  the same way.
* sandbox: Stop using blocking signals to wait on the process, and use
  timeouts all the time.
* messenger.py: Use a thread-local context for the handler, to allow
  for multiple parameters in the same process.
* _remote.py: Ensure the start of the connection is thread safe
* _signal.py: Allow blocking entry into the signal's context managers
  by setting an event. This is to ensure no thread runs long-running
  code while we have asked the scheduler to pause. This also ensures
  all the signal handlers are thread safe.
* source.py: Change the check around saving the source's ref. We are
  now running in the same process, and thus the ref will already have
  been changed.
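A toy sketch of the overall shape, not the actual scheduler: jobs run on a thread pool in the master process, and an event lets the scheduler ask workers to pause before entering long-running sections, in the spirit of the `_signal.py` change:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# When cleared, workers block before starting long-running work,
# which is the spirit of the _signal.py change described above.
resume = threading.Event()
resume.set()


def run_job(job):
    resume.wait()  # respect a requested pause
    return job()


def schedule(jobs, max_workers=4):
    # Jobs are threads in the master process: they can share one
    # connection to the CAS server instead of one per child process.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_job, job) for job in jobs]
        return [f.result() for f in futures]


if __name__ == "__main__":
    print(schedule([lambda: "built"] * 3))
```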
|
min-version
This test was broken as it was failing for the wrong reason, even
though in both cases it was a missing YAML key. Fix the test to fail
due to the required cert specified in the cache config being missing.
|
This test was broken as it was failing for the wrong reason, even
though in both cases it was a missing YAML key. Fix the test to fail
due to the required cert specified in the cache config being missing.
|
This test was broken as it was failing for the wrong reason, even
though in both cases it was a missing YAML key. Fix the test to fail
due to the required cert specified in the cache config being missing.
|
Skip an artifact expiry test in case we don't have subsecond mtime
precision.
|
This tests a few glob patterns through `bst artifact show` and also
asserts that globs which match both elements and artifacts will produce
an error.
|
We should not have globbing behavior on the command line which differs
from that of split rules.
This should also make artifact globbing slightly more performant, as
the regular expression under the hood need not be recompiled for each
file being checked.
This commit also updates tests/frontend/artifact_list_contents.py to
use a double star `**` (globstar syntax) in order to match path
separators as well as all other characters in the list contents
command.
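For illustration, the compile-once idea with globstar semantics could look like the sketch below; the real translation lives in the BuildStream utils and may differ in details:

```python
import re


def glob_to_regex(pattern: str):
    # Translate once, reuse for every filename: '*' and '?' stop at
    # '/', while the globstar '**' also matches path separators.
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "*":
            if pattern[i:i + 2] == "**":
                out.append(".*")
                i += 2
                continue
            out.append("[^/]*")
        elif c == "?":
            out.append("[^/]")
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile("".join(out) + r"\Z")


matcher = glob_to_regex("usr/share/**")
assert matcher.match("usr/share/doc/README")
assert not glob_to_regex("usr/*").match("usr/share/doc")
```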
|
The patch does the following things:
* Ensure that we only ever try to match artifacts to user provided
  glob patterns if we are performing a command which tries to load
  artifacts.
* Stop being selective about glob patterns; if the user provides a
  pattern which does not end in ".bst", we still try to match it
  against elements.
* Provide a warning when the provided globs did not match anything;
  previously this code only provided this warning if artifacts were
  not matched to globs, but not elements.
* tests/frontend/artifact_delete.py, tests/frontend/push.py,
  tests/frontend/buildcheckout.py:
  Fixed tests to not try to determine success by examining the wording
  of a user facing message; use the machine readable errors instead.
Fixes #959
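For example, a test asserting on machine readable errors rather than message wording might look like this sketch; the import paths and the specific domain/reason are assumptions:

```python
import os

from buildstream.exceptions import ErrorDomain
from buildstream.testing import cli  # noqa: F401  (pytest fixture)

# Hypothetical project layout for illustration.
DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "project")


def test_checkout_uncached(cli, tmp_path):
    result = cli.run(project=DATA_DIR, args=[
        "artifact", "checkout", "--directory", str(tmp_path), "target.bst",
    ])

    # Assert on the machine readable domain and reason instead of
    # grepping the wording of the human readable message.
    result.assert_main_error(ErrorDomain.STREAM, "uncached-checkout-attempt")
```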
|
* Test that we can override an element in a subproject with a local
  element, where the local element has a dependency on another element
  in the subproject through the same junction.
* Test that we can override the dependency in the subproject, proving
  that reverse dependencies in that subproject are built against the
  overridden element.
* Test that we can override a subproject element using a local link to
  another element in the same subproject.
* Test that we can declare an override of a subproject element using a
  link in that subproject, and that it will be effective even if that
  link is not traversed by the actual dependency chain.
* Check that when the same element is overridden multiple times in a
  subproject, it is overridden by the highest-level project, which
  should have the highest priority among the overrides.
|
Starting with Python 3.9, the `_replace()` method no longer works on
`platform.uname_result` objects, which are returned by
`platform.uname()`. This causes some of our tests to fail on Python
3.9. See https://bugs.python.org/issue42163 for the upstream issue.
Fix it by slightly changing the way we override the values of the
`platform.uname()` function, such that it works on Python 3.9 as well
as 3.8 (and below).
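One way to express such an override with pytest's `monkeypatch`, sketched under the assumption that tests only need to fake the `machine` field:

```python
import platform
from collections import namedtuple

# A plain namedtuple sidesteps platform.uname_result._replace(), which
# stopped working on Python 3.9 (https://bugs.python.org/issue42163).
_Uname = namedtuple("_Uname", "system node release version machine processor")


def override_uname(monkeypatch, machine="aarch64"):
    real = platform.uname()
    fake = _Uname(real.system, real.node, real.release,
                  real.version, machine, real.processor)
    # Swap out the function itself rather than mutating its result.
    monkeypatch.setattr(platform, "uname", lambda: fake)
```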
|
When printing log lines to the master log, we ensure that log lines
are printed with the element name and cache key which relate to the
task from which the messages are being issued.
When printing log lines to task-specific log files, we prefer to
print the element names and cache keys which pertain to the element
from which the log line was actually issued.
This new test asserts this behavior.
|
This behavior regressed a while back when the messenger object was
introduced in 0026e379 from merge request !1500.
Main behavior change:
- Messages in the master log always appear with the task element's
  element name and cache key, even if the element or plugin issuing
  the log line is not the primary task element.
- Messages logged in the task specific log retain the context of the
  element names and cache keys which are issuing the log lines.
Changes include:
* _message.py: Added the task element name & key members
* _messenger.py: Log the element key as well if it is provided
* _widget.py: Prefer the task name & key when logging; we fall back
  to the element name & key in case messages are being logged outside
  of any ongoing task (main process/context)
* job.py: Unconditionally stamp messages with the task name & key.
  Also removed some unused parameters here, clearing up an XXX comment
* plugin.py: Add a new `_message_kwargs` instance property; it is the
  responsibility of the core base class to maintain the base keyword
  arguments which are to be used as kwargs for Message() instances
  created on behalf of the issuing plugin.
  Use this property to construct messages in Plugin.__message() and to
  pass kwargs along to Messenger.timed_activity().
* element.py: Update the `_message_kwargs` when the cache key is
  updated
* tests/frontend/logging.py: Fix test to expect the cache key in the
  log line
* tests/frontend/artifact_log.py: Fix test to expect the cache key in
  the log line
Fixes #1393
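A schematic of the `_message_kwargs` idea; these classes are heavily simplified stand-ins for the real _message.py and plugin.py:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Message:
    # Mirrors the _message.py change: a message carries the identity of
    # the issuing element and, separately, of the task element.
    text: str
    element_name: Optional[str] = None
    element_key: Optional[str] = None
    task_element_name: Optional[str] = None
    task_element_key: Optional[str] = None


class Plugin:
    def __init__(self, name: str):
        # Base kwargs maintained by the core on behalf of the plugin;
        # element.py would update the key as soon as it is known.
        self._message_kwargs = {"element_name": name, "element_key": None}

    def _set_cache_key(self, key: str):
        self._message_kwargs["element_key"] = key

    def _message(self, text: str, **kwargs) -> Message:
        # Per-call kwargs may extend or override the base kwargs.
        return Message(text, **{**self._message_kwargs, **kwargs})
```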
|
`--use-buildtree` is now a boolean option for `bst shell`. If no
buildtree is available, an error is raised.
|
Use single scheduler run for pulling dependencies and buildtree.
Deduplicate cache checks and corresponding warning/error messages
with and without pulling enabled.
|
Skip a test which relies on mtimes differing within a short timespan;
this will fail, when the operations happen fast enough (which they
should), on systems which do not support subsecond precision mtimes.
|
Skip the mtime-related test if the underlying filesystem does not
support subsecond precision mtime.
|
Skip some of the artifact expiry tests in case we don't have
subsecond mtime precision.
|
Conditionally skip tests which require subsecond mtime precision.
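The skip condition can be probed along these lines (a sketch; the actual helper in the test utilities may differ):

```python
import os

import pytest


def have_subsecond_mtime(directory):
    # Write a file with a known nanosecond mtime and verify that the
    # filesystem preserves the sub-second part when read back.
    probe = os.path.join(directory, "mtime-probe")
    with open(probe, "w", encoding="utf-8") as f:
        f.write("probe")
    ns = 1_500_000_000  # 1.5s: any truncation drops the .5
    os.utime(probe, ns=(ns, ns))
    return os.stat(probe).st_mtime_ns == ns


def test_artifact_expires(tmp_path):
    if not have_subsecond_mtime(str(tmp_path)):
        pytest.skip("Filesystem does not support subsecond mtime precision")
    ...
```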
|
projects
|
subproject to override
|
As per #819, BuildStream should pull missing artifacts by default. The
previous behavior was to only pull missing buildtrees. A top-level
`--no-pull` option can easily be supported in the future.
This change makes it possible to use a single scheduler session (with
concurrent pull and push jobs). This commit also simplifies the code as
it removes the `sched_error_action` emulation, using the regular
frontend code path instead.
|
Test passing a dictionary instead of a string in the filename list.
|
Add some clarifications about an existing test.
|
This tests that built filter artifacts don't depend on build
dependencies for integration.
|
This test checks that:
* We get SKIP messages for tracking local sources
* We get SKIP messages for tracking workspaced elements
* We get no messages at all for elements which have no sources
|
This was actually dead code, since node.validate_keys() was called on
the configure dictionary without the legacy command steps. If any
element was using the legacy commands, it would have been met with a
load-time error anyway.
This commit also updates the cache key test, since removing these
legacy commands affects BuildElement internally in such a way as to
affect the cache keys.
|
Instead of relying on Element.search(), use
Element.configure_dependencies() to configure the layout.
Summary of changes:
* scriptelement.py:
  Change the ScriptElement.layout_add() API to take an Element instead
  of an element name; this is now not optional (one cannot specify a
  `None` element). This is an API breaking change.
* plugins/elements/script.py:
  Implement Element.configure_dependencies() in order to call
  ScriptElement.layout_add(). This is a breaking YAML format change.
* tests/integration: Script integration tests updated to use the new
  YAML format
* tests/cachekey: Updated for `script` element changes
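Roughly, the script plugin side of this change can be pictured as in the sketch below; the exact hook signature and the `location` configuration key are assumptions based on the description above:

```python
class ScriptElement:
    # The reworked API: takes the Element itself, and is not optional.
    def layout_add(self, element, dependency_path, location):
        ...


class ScriptPlugin(ScriptElement):
    def configure_dependencies(self, dependencies):
        # Each dependency may carry plugin-specific YAML configuration;
        # here, a "location" key says where to stage that dependency.
        for dep in dependencies:
            location = "/"
            if dep.config is not None:
                dep.config.validate_keys(["location"])
                location = dep.config.get_str("location")
            self.layout_add(dep.element, dep.path, location)
```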
|
Element.stage_dependency_artifacts()
This patch adds a new test plugin which implements
Element.configure_dependencies() in order to offer better flexibility for
testing overlaps.
Newly added tests:
* Test overlap warnings and errors when staging elsewhere than
in the sandbox root.
* Test unstaged files failure modes when staging elsewhere than
in the sandbox root.
* Test various overlap behaviors of OverlapAction, when different
calls to Element.stage_dependency_artifacts() cause overlaps to
occur after staging files into separate directories.
|
Test that when the same dependency is added separately as a build and
as a runtime dependency, it ends up as a single dependency which is
both a build & runtime dependency in the loaded build graph.