| Commit message | Author | Age | Files | Lines |
| |
This is probably going to need a more substantial refactor - the job
pickler probably shouldn't be reaching into the individual sandbox
classes like this.
Ideally the state would be stored somewhere more accessible than class
attributes.
|
| |
These aren't used by anything else, so make them private.
|
| |
Use the 'identifier' argument of PluginBase's make_plugin_source(), to
ensure that the plugins have a consistent namespace when pickled and
then unpickled in another process.
This means that we can pickle plugins that are more than an entrypoint,
e.g. ones that have multiple classes in the same file.
This enables the `tests/frontend/mirror.py::test_mirror_fetch_multi`
test to pass, which uses the `fetch_source` plugin. That plugin has more
than one class in its file.
|
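The module-identity problem described above can be illustrated with the standard library alone. The sketch below is not BuildStream or PluginBase code (the plugin source, file name, and `load_plugin` helper are hypothetical), but it shows why a stable, identifier-derived module name is what makes dynamically loaded plugin classes picklable:

```python
import importlib.util
import os
import pickle
import sys
import tempfile
import textwrap

# Illustrative stdlib sketch (not BuildStream code): pickle records a class
# by its module name, so a dynamically loaded plugin must be registered in
# sys.modules under the *same*, stable name in every process that unpickles
# it -- this is what make_plugin_source(identifier=...) guarantees.
PLUGIN_SRC = textwrap.dedent(
    """
    class Fetcher:
        def __init__(self, url):
            self.url = url
    """
)

def load_plugin(path, identifier):
    # Derive a stable module name from the identifier, mimicking the
    # consistent namespace that PluginBase's 'identifier' argument provides.
    name = "bst_plugins_{}_fetcher".format(identifier)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # pickle looks the class up by this name
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "fetcher.py")
    with open(path, "w") as f:
        f.write(PLUGIN_SRC)
    plugin = load_plugin(path, "project0")
    # Round-trips because sys.modules still maps the recorded module name.
    restored = pickle.loads(pickle.dumps(plugin.Fetcher("https://example.com")))
```

If two processes register the plugin under different module names, unpickling fails with a `ModuleNotFoundError`; a consistent identifier avoids that.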
| |
Clear up some errors when running tests with
`BST_FORCE_START_METHOD=spawn`.
|
| |
Note that unittest.mock.MagicMock is unfortunately not picklable, so if
one tries to cross over to a ChildJob then we're in trouble.
Sacrifice some convenience and a bit of safety by implementing our own
_DummyTask instead of using MagicMock.
Don't try to pickle the 'simple_task' context manager, if it's been
overridden when creating a test context.
|
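A minimal sketch of the trade-off described above. The `_DummyTask` here is hypothetical (its attributes and method are illustrative, not BuildStream's actual class), but it shows the shape of a hand-rolled stand-in that survives pickling where a MagicMock does not:

```python
import pickle
from unittest.mock import MagicMock

# A MagicMock generally cannot cross the process boundary via pickle.
try:
    pickle.dumps(MagicMock())
    magicmock_pickles = True
except Exception:
    magicmock_pickles = False

# Hypothetical stand-in in the spirit of the commit's _DummyTask: less
# convenient and less safe than MagicMock, but it pickles cleanly.
class _DummyTask:
    def __init__(self, name):
        self.name = name
        self.elapsed_offset = 0

    def set_render_cb(self, callback):
        pass  # no-op: tests do not need real progress rendering

# Round-trips fine, so it can be carried into a child job.
task = pickle.loads(pickle.dumps(_DummyTask("loading")))
```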
| |
Move pickle_child_job and do_pickled_child_job into jobpickler.py, to
keep details like saving and restoring global state out of job.py.
|
| |
At some point, action_name rather than complete_name started to be
rendered to the user in the dynamic list of queue status reports.
As we now have more of a UI separation between 'Artifact' and 'Source'
tasks, it also makes sense to reflect these actions in this output.
|
| |
Note that for multiple-pass setups, i.e. where we have junctions, we
also have to pickle things that belong to the 'first_pass_config'.
|
| |
Remove the need for plugins to find and return the factory they came
from. Also take the opportunity to combine source and element pickling
into a single 'plugin' pickling path.
This will make it easier for us to later support pickling plugins from
the 'first_pass_config' of projects.
|
| |
It turns out we don't even need it once.
|
| |
We don't need this in subprocesses, and it doesn't pickle, so don't try
to. Make sure we get an error if we do try to use it in subprocesses.
|
| |
Now that we don't use mark.in_subprocess in our tests anymore, we don't
need to suppress creating _CASCacheUsageMonitor.
This was introduced in 9c2bbe3c3871db3a33f81e48987f6d473f97b136
|
| |
It seems we don't need this anymore, thanks to cleaning up gRPC
background threads.
|
| |
tests: manually close channels when interacting with the cache
cascache.py: disable the usage monitor if start method is spawn
|
| |
This is now required by some code paths. Also make a generic routine for
pickling / unpickling, as we may be doing more of this.
|
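A generic pickle/unpickle pair like the one this commit describes might look like the following. This is a hedged sketch (the function names are illustrative, not BuildStream's actual API): serialise to an in-memory buffer, rewind, and restore:

```python
import io
import pickle

# Hypothetical helpers mirroring a generic pickling/unpickling routine:
# write the object to an in-memory buffer, then restore it from there.
def pickle_object(obj):
    buffer = io.BytesIO()
    pickle.dump(obj, buffer)
    buffer.seek(0)  # rewind so the reader starts at the beginning
    return buffer

def unpickle_object(buffer):
    return pickle.load(buffer)

state = unpickle_object(pickle_object({"element": "base.bst", "deps": [1, 2]}))
```

Keeping both directions in one place makes it easy to later swap in a custom `Pickler` subclass without touching every call site.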
| |
Allow situations where the start method is already set; this enables us
to use this in testing situations.
Also, print a diagnostic if it's already set to something we didn't
want.
Since this block has become more complex, split it out into a new
function.
Since we now use this string a lot, extract it to a variable, to make
sure we spell it correctly everywhere.
|
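The tolerant start-method setup described above can be sketched as follows. This is a minimal illustration (the constant and function names are hypothetical), using only the standard `multiprocessing` API:

```python
import multiprocessing

# Extracting the method name into one constant guards against typos.
_SPAWN_START_METHOD = "spawn"  # hypothetical name, for illustration

def _set_start_method(method=_SPAWN_START_METHOD):
    try:
        multiprocessing.set_start_method(method)
    except RuntimeError:
        # The start method was already set -- fine in tests, but print a
        # diagnostic if it is not what we wanted.
        existing = multiprocessing.get_start_method()
        if existing != method:
            print("Warning: start method is '{}', expected '{}'".format(existing, method))

_set_start_method()
```

`set_start_method()` raises `RuntimeError` if the context is already configured, which is exactly the situation the try/except tolerates.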
| |
WorkspaceSource.init_workspace raises an exception so it is no longer
necessary to retain the original source objects of the loaded element.
|
| |
* tracking not needed in reset
* support workspace opening for already open workspaces;
  remove existing files to preserve behaviour
* Add ignore_workspaces kwarg to element loading via Stream().load.
  Setting this to true will ignore special handling of sources for open
  workspaces and load the sources specified rather than a workspace
  source. This avoids having to reload elements when re-opening
  workspaces.
|
| |
'strip_node_info' would be useful for multiple plugins. We should
therefore allow users to use it.
|
| |
Also rename it to 'node_subst_sequence_vars' to mimic 'node_subst_vars'.
|
| |
This is now unused. An alternative is 'node_subst_vars'.
|
| |
This is in order to consolidate how we substitute variables.
_project: use 'node_subst_vars' instead of '_subst_list'
since the former expects a node.
|
| |
Previously 'substitute_variable' would take a str, which prevented us
from doing nice error reporting. Using a 'ScalarNode' allows us to
report errors nicely.
- rename it to 'node_subst_vars'.
- add a try-except around it in order to give users nicer error
  reporting.
|
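The error-reporting benefit described above can be sketched with a plain function. The sketch below is illustrative only: the provenance string, signature, and error format are hypothetical stand-ins for what a node-aware `node_subst_vars` can do, since a node carries its source location while a bare str does not. BuildStream variables use the `%{name}` syntax:

```python
import re

# Illustrative sketch (names and provenance format are hypothetical):
# substitution done against a value that knows its provenance can raise
# errors that point the user at the offending file and line.
def node_subst_vars(text, variables, provenance):
    def replace(match):
        name = match.group(1)
        if name not in variables:
            raise ValueError(
                "{}: unresolved variable '%{{{}}}' in '{}'".format(provenance, name, text)
            )
        return variables[name]

    return re.sub(r"%\{([a-zA-Z0-9_-]+)\}", replace, text)

result = node_subst_vars("%{prefix}/bin", {"prefix": "/usr"}, "element.bst [line 4]")
```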
| |
This is now handled in Context.prepare_fork().
|
| |
This is no longer required as gRPC connections are closed before fork.
|
| |
gRPC channels might be opened after the scheduler has already been
started. Make sure channels are closed right before spawning jobs.
|
| |
This aligns the method name with has_open_grpc_channels().
|
| |
Copy the approach of 'Darwin' and provide a SandboxDummy.
This enables us to run 'bst workspace list' on Windows.
|
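The fallback pattern borrowed from the Darwin platform can be sketched like this. The class and factory below are hypothetical simplifications, not BuildStream's actual implementation: the dummy only fails when a command is actually run, so read-only operations such as `bst workspace list` still succeed:

```python
import sys

# Hypothetical sketch: platforms without a real sandboxing backend get a
# dummy sandbox that only errors when a command is actually executed.
class SandboxDummy:
    def __init__(self, reason):
        self._reason = reason

    def run(self, command):
        raise RuntimeError(
            "This platform does not support running commands: " + self._reason
        )

def create_sandbox(platform=sys.platform):
    if platform.startswith("linux"):
        return object()  # stand-in for a real sandbox implementation
    return SandboxDummy("no sandboxing backend on " + platform)

sandbox = create_sandbox("win32")
```

Because `run()` is never called for listing workspaces, the dummy is never asked to do anything it cannot do.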
| |
This should be safe now - this particular point turned out to be
involved in loading dependencies of junction elements, rather than
anything in their projects.
This meant that, yes, we were missing progress; however, junction
elements are not allowed to have dependencies in the first place, so
we simply short-circuit their load and avoid the problem altogether.
We also added more explicit progress opt-outs, since it's far too easy
to end up with spurious Nones.
|
| |
To ensure that we only disable element loading task progress reporting
for very specific code paths, we need to teach the test suite to be a
bit smarter.
For this reason we now mock a _Task object and return it in our mock
context's relevant method invocations.
Other code paths that deliberately invoke the loader without task
reporting now mark their loads with NO_PROGRESS.
|
| |
By avoiding this, loading metaelements of junctions becomes cheap even
for junctions with erroneous dependencies, and we can ignore their
task reporting.
Task reporting for junction metaelement loading is confusing, since
the junction element itself will never be part of the pipeline, so
we'd rather not have this show up as an actual loaded element.
Elements loaded from the junction are loaded separately, therefore
this does not affect their progress display.
|
| |
If a remote has some missing blobs for a source, we should not fail
abruptly but instead continue to the next remote, and, in the worst
case, fetch the source again.
|
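The retry policy described above amounts to a loop with a last-resort fallback. This is a hedged sketch (the exception name, callables, and return values are illustrative, not BuildStream's API):

```python
# Hypothetical sketch: a remote that is missing blobs is skipped rather
# than fatal, and re-fetching the source is the last resort.
class BlobNotFound(Exception):
    """Raised when a remote is missing blobs for the requested source."""

def pull_source(remotes, fetch):
    for pull_from_remote in remotes:
        try:
            return pull_from_remote()
        except BlobNotFound:
            continue  # try the next remote instead of failing abruptly
    return fetch()  # worst case: fetch the source again

def broken_remote():
    raise BlobNotFound()

# The first remote is missing blobs; the second one succeeds.
result = pull_source([broken_remote, lambda: "from-cache"], lambda: "fetched")
```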
| |
In some cases, users might not have permission to use fuse, or fuse
might crash.
This previously led to a hung process and, at best, an error message in
the UI, which could be overwritten.
This ensures we explicitly monitor the fuse process while waiting, and
adds better reporting of fuse errors.
|
| |
Previously, if an artifact proto had no files in it at all, pushing it
would fail, making BuildStream crash.
When no files are part of an artifact proto, we can short-circuit the
call and avoid pushing unnecessarily.
- Add a test to ensure this doesn't come back.
|
| |
This adds a listener on the scheduler's event loop to ensure that
the buildbox-casd process stays alive during the run. If it dies,
terminate all running processes, since we know they can't succeed
anyway, and exit accordingly.
|
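One way such a liveness listener can work is by polling the child process on the event loop. The sketch below is hypothetical (function names and the fake process handle are illustrative, and real code would more likely use a child watcher than polling), but it shows the shape of the mechanism:

```python
import asyncio

# Hypothetical sketch: periodically poll the buildbox-casd process on the
# scheduler's event loop and invoke a failure callback if it has died.
def watch_casd(loop, process, on_failure, interval=0.5):
    def _check():
        returncode = process.poll()
        if returncode is not None:  # casd exited while the run was active
            on_failure(returncode)  # e.g. terminate all jobs and exit
        else:
            loop.call_later(interval, _check)  # still alive, keep watching

    loop.call_later(interval, _check)

class _DeadProcess:  # stand-in for a subprocess.Popen handle
    returncode = 255

    def poll(self):
        return self.returncode

# Demonstrate the callback firing for a process that has already exited.
loop = asyncio.new_event_loop()
failures = []
watch_casd(loop, _DeadProcess(), failures.append, interval=0.01)
loop.call_later(0.05, loop.stop)
loop.run_forever()
loop.close()
```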
| |
Remove XFAIL mark from test_workspace_visible and remove the explicit
SourceCache.commit() in the workspace source plugin. Allow BuildStream
to handle the commit logic.
Add handling for non-cached workspace sources in
`source.Source._generate_keys()`.
|