path: root/src/buildstream
* WIP: _fuse/mount: make _run_fuse protected, so subprocess can load [aevri/enable_spawn_ci_7]
  Angelos Evripiotis, 2019-10-29 (1 file, -2/+2)

* WIP: _fuse/mount: don't pickle self.__logfile
  Angelos Evripiotis, 2019-10-29 (1 file, -0/+6)
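The logfile commit above is an instance of a common pattern: drop an unpicklable attribute (an open file handle) in `__getstate__` and reopen it lazily after unpickling. The sketch below is illustrative only; the class and attribute names mimic, but are not, BuildStream's actual `Mount` implementation.

```python
import pickle


class Mount:
    def __init__(self, logfile_path):
        self._logfile_path = logfile_path
        self.__logfile = None  # opened lazily; open files cannot be pickled

    def _get_logfile(self):
        # Reopen on first use, including after unpickling in a subprocess.
        if self.__logfile is None:
            self.__logfile = open(self._logfile_path, "a")
        return self.__logfile

    def __getstate__(self):
        state = self.__dict__.copy()
        # The name-mangled attribute is '_Mount__logfile'; drop it so the
        # object pickles even while the log file is open.
        state["_Mount__logfile"] = None
        return state


mount = Mount("/tmp/mount.log")
clone = pickle.loads(pickle.dumps(mount))
assert clone._logfile_path == "/tmp/mount.log"
```

Only the path crosses the process boundary; each process ends up with its own file handle.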
* WIP: sandbox bwrap state
  Angelos Evripiotis, 2019-10-29 (2 files, -2/+42)

  This is probably going to need a more substantial refactor - the job
  pickler probably shouldn't be reaching into the individual sandbox
  classes like this. Ideally the state would be stored somewhere more
  accessible than class attributes.

* _fuse/mount: private mount() and unmount()
  Angelos Evripiotis, 2019-10-29 (1 file, -7/+7)

  These aren't used by anything else, so make them private.

* _plugincontext: stable ids for better pickling
  Angelos Evripiotis, 2019-10-29 (1 file, -8/+23)

  Use the 'identifier' argument of PluginBase's make_plugin_source(), to
  ensure that the plugins have a consistent namespace when pickled and
  then unpickled in another process. This means that we can pickle
  plugins that are more than an entrypoint, e.g. ones that have multiple
  classes in the same file.

  This enables the `tests/frontend/mirror.py::test_mirror_fetch_multi`
  test to pass, which uses the `fetch_source` plugin. That plugin has
  more than one class in its file.
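The reason a stable identifier matters: pickle stores a class by its `__module__` and `__qualname__`, so the unpickling process must be able to resolve the same module name. A per-process randomized namespace (which pluginbase generates when no 'identifier' is given) breaks that lookup. This sketch demonstrates the underlying mechanism with a synthetic single-level module, not BuildStream's real plugin loading:

```python
import pickle
import sys
import types

# Simulate loading a plugin class into a module with a *stable* name.
# If this name differed between the pickling and unpickling processes,
# pickle.loads would fail to find the class.
mod = types.ModuleType("bst_plugins_fetch_source")
exec("class FetchSource:\n    kind = 'fetch_source'", mod.__dict__)
sys.modules["bst_plugins_fetch_source"] = mod

plugin = mod.FetchSource()
data = pickle.dumps(plugin)  # records module + qualified class name

# As long as the receiver registers the same module name, this works.
clone = pickle.loads(data)
assert clone.kind == "fetch_source"
assert type(clone).__module__ == "bst_plugins_fetch_source"
```

pluginbase's `identifier` argument pins exactly this module-name component, so plugins pickle consistently across parent and child processes.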
* testing/runcli: node._reset_global_state on run
  Angelos Evripiotis, 2019-10-29 (1 file, -1/+10)

  Clear up some errors when running tests with
  `BST_FORCE_START_METHOD=spawn`.

* testing, messenger: make dummy_context picklable
  Angelos Evripiotis, 2019-10-29 (1 file, -0/+7)

  Note that unittest.mock.MagicMock is unfortunately not picklable, so
  if one tries to cross over to a ChildJob then we're in trouble.
  Sacrifice some convenience and a bit of safety by implementing our own
  _DummyTask instead of using MagicMock.

  Don't try to pickle the 'simple_task' context manager if it's been
  overridden when creating a test context.
* _scheduler/jobs: mv pickle details into jobpickler
  Angelos Evripiotis, 2019-10-29 (2 files, -60/+59)

  Move pickle_child_job and do_pickled_child_job into jobpickler.py, to
  keep details like saving and restoring global state out of job.py.

* job pickling: also pickle global state in node.pyx
  Angelos Evripiotis, 2019-10-29 (3 files, -19/+72)

* _frontend/status.py: Complete names when rendering dynamic queue status
  Tom Pollard, 2019-10-25 (8 files, -12/+15)

  At some point action_name instead of complete_name started to be
  rendered to the user for the dynamic list of queue status reports. As
  we now have more of a UI separation between 'Artifact' & 'Source'
  tasks, it also makes sense to reflect these actions in this output.

* job pickling: pickle first_pass_config factories
  Angelos Evripiotis, 2019-10-25 (1 file, -2/+4)

  Note that for multiple-pass setups, i.e. where we have junctions, we
  also have to pickle things that belong to the 'first_pass_config'.

* job pickling: plugins don't return their factories
  Angelos Evripiotis, 2019-10-25 (4 files, -35/+35)

  Remove the need for plugins to find and return the factory they came
  from. Also take the opportunity to combine source and element pickling
  into a single 'plugin' pickling path.

  This will make it easier for us to later support pickling plugins from
  the 'first_pass_config' of projects.

* element: remove double MetaSource import
  Angelos Evripiotis, 2019-10-25 (1 file, -2/+0)

  It turns out we don't even need it once.

* cascache: don't pickle _cache_usage_monitor
  Angelos Evripiotis, 2019-10-22 (1 file, -0/+10)

  We don't need this in subprocesses, and it doesn't pickle, so don't
  try to. Make sure we get an error if we do try to use it in
  subprocesses.
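The cascache commit combines two moves: exclude the monitor from the pickled state, and make any accidental use in a child fail loudly rather than misbehave quietly. A hypothetical sketch of that shape, with illustrative names rather than the real CASCache:

```python
import pickle


class _CacheUsageMonitor:
    """Stands in for the real monitor, which holds unpicklable state."""


class CASCache:
    def __init__(self):
        self._cache_usage_monitor = _CacheUsageMonitor()

    def get_cache_usage(self):
        # Fail loudly if a subprocess ever tries to use the monitor.
        assert self._cache_usage_monitor is not None, (
            "Cache usage is not available in subprocesses"
        )
        return self._cache_usage_monitor

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_cache_usage_monitor"] = None  # not needed, not picklable
        return state


cache = CASCache()
child_copy = pickle.loads(pickle.dumps(cache))
try:
    child_copy.get_cache_usage()
    raise RuntimeError("expected an AssertionError")
except AssertionError:
    pass  # using the monitor in a "subprocess" fails with a clear message
```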
* cascache: don't need create_cas_usage_monitor now
  Angelos Evripiotis, 2019-10-22 (1 file, -10/+1)

  Now that we don't use mark.in_subprocess in our tests anymore, we
  don't need to suppress creating _CASCacheUsageMonitor. This was
  introduced in 9c2bbe3c3871db3a33f81e48987f6d473f97b136.

* tests: remove mark.in_subprocess
  Angelos Evripiotis, 2019-10-22 (1 file, -100/+0)

  It seems we don't need this anymore, thanks to cleaning up gRPC
  background threads.

* cascache.py: instantiate usage monitor early
  Darius Makovsky, 2019-10-22 (1 file, -8/+16)

  tests: manually close channels when interacting with the cache.
  cascache.py: disable the usage monitor if the start method is spawn.

* jobpickler: also pickle DigestProto
  Angelos Evripiotis, 2019-10-21 (1 file, -6/+21)

  This is now required by some code paths. Also make a generic routine
  for pickling / unpickling, as we may be doing more of this.
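A generic routine for making protobuf-style messages picklable usually registers a reducer that serializes the message to bytes and reconstructs it with the class's own parser. This sketch uses a stand-in `FakeDigest` (real code would go through `SerializeToString()` / `FromString()` on the actual Digest proto); the helper names are illustrative.

```python
import copyreg
import pickle


class FakeDigest:
    """Stand-in for a protobuf message, exposing the same byte API."""

    def __init__(self, payload=b""):
        self.payload = payload

    def SerializeToString(self):
        return self.payload

    @classmethod
    def FromString(cls, data):
        return cls(data)


def _unpickle_proto(cls, data):
    # Reconstruct from serialized bytes on the unpickling side.
    return cls.FromString(data)


def _reduce_proto(msg):
    # Reduce to (callable, args): pickle stores the bytes, not the object.
    return (_unpickle_proto, (type(msg), msg.SerializeToString()))


# Register once; every FakeDigest then pickles via its serialized form.
copyreg.pickle(FakeDigest, _reduce_proto)

digest = FakeDigest(b"abc123")
clone = pickle.loads(pickle.dumps(digest))
assert clone.payload == b"abc123"
```

The same `copyreg.pickle` call can be repeated for each message type that needs to cross the job boundary, which is what makes the routine generic.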
* cli: BST_FORCE_START_METHOD only sets if necessary
  Angelos Evripiotis, 2019-10-18 (1 file, -10/+39)

  Allow situations where the start method is already set; this enables
  us to use this in testing situations. Also, print a diagnostic if it's
  already set to something we didn't want.

  Since this block has become more complex, split it out into a new
  function. Since we're now using this string a lot, extract it to a
  variable, to make sure we're spelling it correctly everywhere.
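The "only set if necessary" logic can be sketched with `multiprocessing`'s real API: `get_start_method(allow_none=True)` returns `None` until a method has been fixed, so we can set it only in that case and report a mismatch otherwise. The function name and return values below are illustrative, not BuildStream's actual CLI code.

```python
import multiprocessing


def force_start_method(wanted):
    current = multiprocessing.get_start_method(allow_none=True)
    if current is None:
        # Nothing has fixed the start method yet; safe to set it.
        multiprocessing.set_start_method(wanted)
        return "set"
    if current != wanted:
        # e.g. a test harness already picked one: diagnose, don't crash.
        return "mismatch: already {!r}, wanted {!r}".format(current, wanted)
    return "already set"


first = force_start_method("spawn")
second = force_start_method("spawn")
assert first in ("set", "already set")
assert second == "already set"
```

Calling `set_start_method` twice raises `RuntimeError`, which is why the check must happen before the call rather than in a try/except around it.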
* element.py: remove unused variable
  Darius Makovsky, 2019-10-18 (1 file, -3/+0)

* Remove special loading for workspaces [traveltissues/notes]
  Darius Makovsky, 2019-10-17 (3 files, -48/+21)

  WorkspaceSource.init_workspace raises an exception, so it is no longer
  necessary to retain the original source objects of the loaded element.

* Use workspace_close and workspace_open to reset workspaces
  Darius Makovsky, 2019-10-16 (4 files, -46/+42)

  * tracking not needed in reset
  * support workspace opening for already open workspaces: remove
    existing files to preserve behaviour

  Add an ignore_workspaces kwarg to element loading via Stream().load.
  Setting this to true will ignore special handling of sources for open
  workspaces and load the sources specified rather than a workspace
  source. This avoids having to reload elements when re-opening
  workspaces.

* _basecache.py: early return if remotes are setup
  Darius Makovsky, 2019-10-16 (1 file, -1/+3)

* workspace.py: raise AssertionError on init_workspace
  Darius Makovsky, 2019-10-16 (1 file, -7/+4)

* node.pyx: Make 'strip_node_info' public
  Benjamin Schubert, 2019-10-16 (6 files, -27/+27)

  'strip_node_info' would be useful for multiple plugins. We should
  therefore allow users to use it.

* element.py: Rework 'node_subst_list' to take the sequence directly
  Benjamin Schubert, 2019-10-16 (2 files, -8/+7)

  Also rename it to 'node_subst_sequence_vars' to mimic
  'node_subst_vars'.

* element.py: remove 'node_subst_member' and replace with 'node_subst_vars'
  Benjamin Schubert, 2019-10-16 (3 files, -37/+7)

* element.py: Remove '_subst_string'
  Benjamin Schubert, 2019-10-16 (1 file, -15/+0)

  This is now unused. An alternative is 'node_subst_vars'.

* _options/option.py: Pass the node instead of the str to 'transform'
  Benjamin Schubert, 2019-10-16 (4 files, -6/+9)

  This is in order to consolidate how we substitute variables.

  _project: use 'node_subst_vars' instead of '_subst_list', as the
  former expects a node.

* element.py: change 'substitute_variables' to take a 'ScalarNode' and rename
  Benjamin Schubert, 2019-10-16 (2 files, -7/+30)

  Previously 'substitute_variables' would take a str, which would
  prevent us from doing nice error reporting. Using a 'ScalarNode'
  allows us to get our errors nicely.

  - rename it to 'node_subst_vars'
  - add a nicer try-except around it in order to get nicer error
    reporting to users
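The motivation for passing a node rather than a bare string is that a node knows where its value came from, so a substitution error can point at the offending file and line. A toy sketch of that idea (the `ScalarNode` here is a minimal stand-in with only the fields the sketch needs, and the `%{...}` syntax matches BuildStream's variable style):

```python
import re


class SubstitutionError(Exception):
    pass


class ScalarNode:
    """Toy stand-in: a value plus the provenance it was loaded from."""

    def __init__(self, value, provenance):
        self.value = value
        self.provenance = provenance  # e.g. "element.bst [line 4 col 10]"


def node_subst_vars(node, variables):
    def lookup(match):
        name = match.group(1)
        if name not in variables:
            # The node's provenance makes the error actionable.
            raise SubstitutionError(
                "{}: unresolved variable '{}'".format(node.provenance, name)
            )
        return variables[name]

    return re.sub(r"%\{([^}]+)\}", lookup, node.value)


node = ScalarNode("install %{bindir}/app", "element.bst [line 4 col 10]")
assert node_subst_vars(node, {"bindir": "/usr/bin"}) == "install /usr/bin/app"

try:
    node_subst_vars(ScalarNode("%{missing}", "element.bst [line 7 col 2]"), {})
    raise RuntimeError("expected SubstitutionError")
except SubstitutionError as err:
    assert "line 7" in str(err)
```

With a plain `str` input, the "line 7 col 2" part of that message would simply not be available.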
* testing/.../site: windows-friendly HAVE_OLD_GIT [aevri/oldgit]
  Angelos Evripiotis, 2019-10-15 (1 file, -1/+3)

* workspace.py: Do not close gRPC channels
  Jürg Billeter, 2019-10-15 (2 files, -5/+0)

  This is now handled in Context.prepare_fork().

* _remote.py: Do not use subprocess to check remote
  Jürg Billeter, 2019-10-15 (1 file, -37/+6)

  This is no longer required as gRPC connections are closed before fork.

* _context.py: Replace is_fork_allowed() with prepare_fork()
  Jürg Billeter, 2019-10-15 (2 files, -13/+10)

* scheduler.py: Call is_fork_allowed() right before spawning jobs
  Jürg Billeter, 2019-10-15 (1 file, -2/+7)

  gRPC channels might be opened after the scheduler has already been
  started. Make sure channels are closed right before spawning jobs.
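The fork-safety rule in this series: because channels can be (re)opened after the scheduler starts, closing must happen immediately before each job is spawned, not once up front. A minimal sketch of that shape, with toy `Remote`/`Context` classes standing in for the gRPC-backed caches (no actual gRPC involved):

```python
class Remote:
    """Toy stand-in for a gRPC-backed remote cache connection."""

    def __init__(self):
        self.channel_open = True

    def close_grpc_channels(self):
        self.channel_open = False


class Context:
    def __init__(self, remotes):
        self._remotes = remotes

    def prepare_fork(self):
        # gRPC channels must not survive a fork: close them all, then
        # report whether it is now safe to fork.
        for remote in self._remotes:
            remote.close_grpc_channels()
        return all(not r.channel_open for r in self._remotes)


context = Context([Remote(), Remote()])
# ... scheduler runs; a remote may (re)open a channel at any point ...
assert context.prepare_fork()  # called right before spawning each job
```

Replacing a boolean `is_fork_allowed()` check with an active `prepare_fork()` turns "detect the unsafe state" into "establish the safe state", which is why the rename in _context.py accompanies this change.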
* _basecache.py: Add close_grpc_channels() method
  Jürg Billeter, 2019-10-15 (1 file, -3/+10)

* cascache.py: Rename close_channel() to close_grpc_channels()
  Jürg Billeter, 2019-10-15 (3 files, -5/+5)

  This aligns the method name with has_open_grpc_channels().

* cascache.py: Reset _casd_cas in close_channel()
  Jürg Billeter, 2019-10-15 (1 file, -0/+1)

* _remote.py: Reset _initialized in close()
  Jürg Billeter, 2019-10-15 (1 file, -0/+2)

* _sourcecache.py: Reset source_service in SourceRemote.close()
  Jürg Billeter, 2019-10-15 (1 file, -0/+4)

* _artifactcache.py: Reset artifact_service in ArtifactRemote.close()
  Jürg Billeter, 2019-10-15 (1 file, -0/+4)

* win32: _platform/win32: add support for win32
  Angelos Evripiotis, 2019-10-14 (2 files, -0/+63)

  Copy the approach of 'Darwin' and provide a SandboxDummy. This enables
  us to run 'bst workspace list' on Windows.

* Remove XXX comment about missing progress
  Tristan Maat, 2019-10-10 (1 file, -2/+9)

  This should be safe now - this particular point turned out to be
  involved in loading dependencies of junction elements, rather than
  anything in their projects.

  This meant that, yes, we were missing progress, however junction
  elements are not allowed to have dependencies in the first place, so
  we simply short-circuit their load and avoid the problem altogether.

  We also added more explicit progress opt-outs, since it's far too easy
  to end up with spurious Nones.

* testutils/context.py: Mock tasks instead of accepting Nones
  Tristan Maat, 2019-10-10 (2 files, -5/+10)

  To ensure that we only disable element loading task progress reporting
  for very specific code paths, we need to teach the test suite to be a
  bit smarter. For this reason we now mock a _Task object and return it
  in our mock context's relevant method invocations.

  Other code paths that deliberately invoke the loader without task
  reporting now mark their loads with NO_PROGRESS.

* loader.py: Avoid loading deps of junction metaelements
  Tristan Maat, 2019-10-10 (1 file, -1/+19)

  By avoiding this, loading metaelements of junctions becomes cheap even
  for junctions with erroneous dependencies, and we can ignore their
  task reporting.

  Task reporting for junction metaelement loading is confusing, since
  the junction element itself will never be part of the pipeline, so
  we'd rather not have this show up as an actual loaded element.
  Elements loaded from the junction are loaded separately, therefore
  this does not affect their progress display.

* _sourcecache: Fallback to fetch source when remote has missing blobs
  Benjamin Schubert, 2019-10-10 (1 file, -0/+5)

  If a remote has some missing blobs for a source, we should not fail
  abruptly but instead continue to the next remote and, in the worst
  case, fetch the source again.
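The fallback strategy in the _sourcecache commit is a simple loop: treat a missing blob as a soft failure, move on to the next remote, and only re-fetch the source upstream if every remote comes up short. The exception name and remote/source API below are illustrative, not BuildStream's real classes:

```python
class BlobNotFound(Exception):
    """A remote is missing some blobs for the requested source."""


def fetch(source, remotes):
    for remote in remotes:
        try:
            return remote.pull(source)
        except BlobNotFound:
            continue  # this remote is incomplete; try the next one
    # Worst case: no remote had everything, fetch the source again.
    return source.fetch_original()


class FlakyRemote:
    def pull(self, source):
        raise BlobNotFound(source)


class GoodRemote:
    def pull(self, source):
        return "pulled:{}".format(id(source))


class Source:
    def fetch_original(self):
        return "fetched-from-upstream"


src = Source()
assert fetch(src, [FlakyRemote(), GoodRemote()]).startswith("pulled:")
assert fetch(src, [FlakyRemote()]) == "fetched-from-upstream"
```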
* _fuse/mount.py: Monitor the fuse process while waiting for the mount [bschubert/fuse-permissions]
  Benjamin Schubert, 2019-10-09 (1 file, -14/+35)

  In some cases, users might not have permission to use fuse, or fuse
  might crash. This was previously leading to a hung process and, at
  best, an error message in the UI which could be overwritten.

  This ensures we are explicitly monitoring the fuse process while
  waiting, and adds better reporting of the fuse errors.
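The hang happens when the waiter only watches for the mount to appear; if the helper process has already died, that wait never ends. Polling the child alongside the mount turns the hang into an explicit error. A hypothetical sketch (the crashing child stands in for fuse, and all names are illustrative):

```python
import subprocess
import sys
import time


class FuseMountError(Exception):
    pass


def wait_for_mount(process, mount_ready, timeout=5.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if mount_ready():
            return
        code = process.poll()
        if code is not None:
            # The helper died before the mount appeared: report it
            # explicitly instead of hanging forever.
            raise FuseMountError(
                "fuse process exited with code {}".format(code)
            )
        time.sleep(0.05)
    raise FuseMountError("timed out waiting for mount")


# A child that exits immediately with an error, as fuse would when the
# user lacks permission; the mount never becomes ready.
child = subprocess.Popen([sys.executable, "-c", "raise SystemExit(1)"])
try:
    wait_for_mount(child, mount_ready=lambda: False)
    raise RuntimeError("expected FuseMountError")
except FuseMountError as err:
    assert "exited with code 1" in str(err)
```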
* _artifactcache.py: Don't push artifact blobs when no files are present
  Benjamin Schubert, 2019-10-08 (2 files, -2/+4)

  Previously, if an artifact proto had no files at all in it, we would
  fail at pushing it, making BuildStream crash. When no files are part
  of an artifact proto, we can short-circuit the call and avoid pushing
  them unnecessarily.

  - Add a test to ensure this doesn't come back.

* _scheduler.py: Listen for buildbox-casd failure and terminate
  Benjamin Schubert, 2019-10-08 (3 files, -2/+44)

  This adds a listener on the scheduler's event loop to ensure that the
  buildbox-casd process stays alive during the run. If it fails,
  terminate all running processes, since we know they can't succeed
  anyway, and exit accordingly.
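Watching a helper daemon from an asyncio event loop can be sketched as a coroutine that polls the process and invokes a failure callback when it dies, so the scheduler can stop everything at once instead of letting jobs fail one by one. This is a simplified stand-in, not BuildStream's scheduler; the real code also terminates running jobs.

```python
import asyncio
import subprocess
import sys


async def watch_casd(process, on_failure, interval=0.05):
    # Poll the daemon from the event loop; return once it has died.
    while True:
        if process.poll() is not None:
            on_failure(process.returncode)
            return
        await asyncio.sleep(interval)


def run_scheduler(process):
    failures = []

    async def main():
        # In the real scheduler this watcher runs alongside job tasks;
        # here it is the only task, so the loop ends when casd dies.
        await watch_casd(process, failures.append)

    asyncio.run(main())
    return failures


# Stand-in daemon that dies mid-run with exit code 3.
casd = subprocess.Popen([sys.executable, "-c", "raise SystemExit(3)"])
assert run_scheduler(casd) == [3]
```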
* Defer committing workspace files to cache [traveltissues/1159]
  Darius Makovsky, 2019-10-08 (2 files, -9/+10)

  Remove the XFAIL mark from test_workspace_visible and remove the
  explicit SourceCache.commit() in the workspace source plugin. Allow
  buildstream to handle the commit logic.

  Add handling for non-cached workspace sources in
  `source.Source._generate_keys()`.