path: root/src
* _platform/platform.py: Add alias for IBM AIX 7 powerpc (Chandan Singh, 2019-12-23, 1 file, -3/+11)

    * `uname -m` is unusable on IBM AIX 7 as it reports the serial number
      of the machine. As a workaround, special-case it and use the
      reported processor identifier.
    * tests/format/optionos.py: Don't use AIX for unsupported
      architecture. `AIX` is special-cased in BuildStream, so use a
      different system for testing unsupported architectures.
* _platform/platform.py: Add alias for sun4v (Chandan Singh, 2019-12-23, 1 file, -0/+1)

    Some Solaris 11 servers report `uname -m` as `sun4v`. It uses the
    same SPARC v9 processor architecture, so add an alias for it as we
    already support `sparc-v9`. `uname -a` output for reference:

        $ uname -a
        SunOS sundev1 5.11 11.3 sun4v sparc sun4v Solaris
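    The aliasing described in these two commits amounts to a small lookup
    table in front of `uname -m`. A minimal standalone sketch of the
    pattern (the table and function names here are illustrative, not
    BuildStream's actual implementation; AIX additionally needs the
    processor identifier instead of `uname -m`, which this sketch omits):

    ```python
    import platform

    # Map raw `uname -m` strings to canonical architecture names.
    # "sun4v" is a Solaris machine class whose ISA is SPARC v9.
    _ALIASES = {
        "sun4v": "sparc-v9",
        "amd64": "x86-64",
    }

    def canonical_arch(uname_machine: str) -> str:
        """Return the canonical name for a raw `uname -m` string."""
        machine = uname_machine.lower()
        return _ALIASES.get(machine, machine)

    print(canonical_arch("sun4v"))          # → sparc-v9
    print(canonical_arch(platform.machine()))
    ```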
* job.py: Do not call Process.close() (Jürg Billeter, 2019-12-19, 1 file, -1/+0)

    As we handle subprocess termination by pid with an asyncio child
    watcher, the multiprocessing.Process object does not get notified
    when the process terminates. And as the child watcher reaps the
    process, the pid is no longer valid and the Process object is unable
    to check whether the process is dead. This results in
    Process.close() raising a ValueError.

    Fixes: 9c23ce5c ("job.py: Replace message queue with pipe")
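    The underlying constraint is documented behaviour of the standard
    library: `multiprocessing.Process.close()` raises `ValueError` when
    the process cannot be confirmed dead. A minimal sketch of that
    constraint without any BuildStream code (the trigger here is simply a
    still-running child rather than an external reaper):

    ```python
    import multiprocessing
    import time

    def _reproduce():
        p = multiprocessing.Process(target=time.sleep, args=(10,))
        p.start()
        try:
            p.close()   # invalid: the process cannot be confirmed dead yet
            raised = False
        except ValueError:
            raised = True
        finally:
            p.terminate()
            p.join()
            p.close()   # valid now: the process is known to have exited
        return raised

    if __name__ == "__main__":
        print(_reproduce())  # → True
    ```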
* _platform/platform.py: Add powerpc64 and powerpc64le (Javier Jardón, 2019-12-17, 1 file, -0/+2)
* tests: source_determinism.py: Skip flaky test with buildbox-run (Jürg Billeter, 2019-12-17, 1 file, -0/+4)

    The tests are flaky due to non-deterministic timestamps in the output
    of `ls -l`. See https://gitlab.com/BuildStream/buildstream/issues/1218
* testing/_utils/site.py: Add BUILDBOX_RUN variable (Jürg Billeter, 2019-12-17, 1 file, -0/+9)
* testing/runcli.py: Add BST_CAS_STAGING_ROOT environment variable (Jürg Billeter, 2019-12-17, 1 file, -0/+7)

    This is required for testing with userchroot to create staging
    directories in a system-specific prefix.
* utils.py: Use `onerror` in `_force_rmtree` (Tristan Maat, 2019-12-17, 1 file, -11/+16)

    If we don't, and we encounter a file we don't own but have permission
    to delete, we'll fail with EPERM: we won't be able to change its
    permissions, even though we would be able to delete it. Instead, we
    now try to change permissions and remove a file *after* we realize we
    couldn't at first.
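    The general pattern behind this change is `shutil.rmtree`'s `onerror`
    hook: let removal fail first, then repair permissions only for the
    entries that actually failed. A hedged standalone sketch (function
    names are illustrative, not BuildStream's actual code):

    ```python
    import os
    import shutil
    import stat
    import tempfile

    def force_rmtree(path):
        """Remove a tree even when permission bits get in the way."""
        def fix_and_retry(func, failed_path, exc_info):
            # Grant ourselves full access on the parent directory and the
            # entry, then retry the os.unlink/os.rmdir call that failed.
            os.chmod(os.path.dirname(failed_path), stat.S_IRWXU)
            os.chmod(failed_path, stat.S_IRWXU)
            func(failed_path)

        shutil.rmtree(path, onerror=fix_and_retry)

    # Usage: a non-writable directory would normally break rmtree.
    root = tempfile.mkdtemp()
    sub = os.path.join(root, "sub")
    os.mkdir(sub)
    open(os.path.join(sub, "f"), "w").close()
    os.chmod(sub, stat.S_IRUSR | stat.S_IXUSR)  # r-x: listable, not writable
    force_rmtree(root)
    print(os.path.exists(root))  # → False
    ```

    Note that `onerror` is deprecated in favour of `onexc` from Python
    3.12, but still works; the commit predates that change.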
* sources/git.py: Document that checkout-submodules is recursive [tmewett/recursive-git] (Tom Mewett, 2019-12-13, 1 file, -2/+6)
* _gitsourcebase.py: Rename and refactor _ignore_submodule (Tom Mewett, 2019-12-13, 1 file, -11/+12)

    "Ignore submodule" sounds like it could be an action, so this changes
    the name to more clearly be a predicate.
* _gitsourcebase.py: Manage submodules recursively (Tom Mewett, 2019-12-13, 1 file, -56/+65)

    Previously, GitSourceBase would only consider immediate submodules of
    the superproject. It now fetches and stages recursively.

    To achieve this, this commit somewhat refactors the relationship
    between GitMirror and GitSourceBase. Enumerating GitMirrors for the
    submodules is now done in GitMirror itself. GitSourceBase recursively
    iterates these mirror classes with _recurse_submodules and applies
    the source configuration with _configure_submodules.
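    The shape of such a `_recurse_submodules` walk can be sketched with a
    hypothetical stand-in class (this is not BuildStream's GitMirror, just
    an illustration of depth-first recursion over nested submodules):

    ```python
    class Mirror:
        """Hypothetical stand-in for a mirror that knows its submodules."""

        def __init__(self, url, submodules=()):
            self.url = url
            self.submodules = list(submodules)

    def recurse_submodules(mirror):
        """Yield all submodule mirrors, at any nesting depth, depth-first."""
        for sub in mirror.submodules:
            yield sub
            yield from recurse_submodules(sub)

    nested = Mirror("top.git", [
        Mirror("a.git", [Mirror("a/inner.git")]),
        Mirror("b.git"),
    ])
    print([m.url for m in recurse_submodules(nested)])
    # → ['a.git', 'a/inner.git', 'b.git']
    ```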
* _gitsourcebase.py: Add and update some comments on _GitMirror (Tom Mewett, 2019-12-13, 1 file, -3/+14)
* element.py: Only cache sources if some had to be fetched [bschubert/cleanup-element-fetch] (Benjamin Schubert, 2019-12-13, 1 file, -1/+1)
* element.py: Remove temporary variable (Benjamin Schubert, 2019-12-13, 1 file, -2/+2)

    This variable is used only once, while the original is used multiple
    times. It only increases the cognitive load.
* job.py: Replace message queue with pipe [juerg/job-pipe] (Jürg Billeter, 2019-12-12, 1 file, -44/+40)

    A lightweight unidirectional pipe is sufficient to pass messages from
    the child job process to its parent. This also avoids the need to
    access the private `_reader` instance variable of
    `multiprocessing.Queue`.
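    The standard-library primitive in question is
    `multiprocessing.Pipe(duplex=False)`, which hands back a plain
    read/write connection pair rather than a full queue. A minimal sketch
    of the parent/child wiring (names are illustrative):

    ```python
    import multiprocessing

    def child_job(pipe):
        # The child only ever writes, so a one-way pipe is enough.
        pipe.send(("message", "job completed"))
        pipe.close()

    if __name__ == "__main__":
        # duplex=False returns (read_end, write_end); unlike Queue, the
        # read end is a public Connection the parent can wait on directly.
        read_end, write_end = multiprocessing.Pipe(duplex=False)
        proc = multiprocessing.Process(target=child_job, args=(write_end,))
        proc.start()
        write_end.close()  # the parent keeps only the read end open
        print(read_end.recv())  # → ('message', 'job completed')
        proc.join()
    ```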
* tests: Drop buildbox xfails (Jürg Billeter, 2019-12-10, 2 files, -7/+0)
* Drop buildbox sandbox (Jürg Billeter, 2019-12-10, 2 files, -275/+0)

    Replaced by buildbox-run.
* _platform: Support experimental buildbox-run sandbox on all platforms (Jürg Billeter, 2019-12-10, 2 files, -3/+25)

    The buildbox-run sandbox is used only if BST_FORCE_SANDBOX is set to
    buildbox-run.
* Add buildbox-run sandbox (Benjamin Schubert, 2019-12-10, 1 file, -0/+148)
* _sandboxreapi.py: Pass sandbox flags to _execute_action() (Jürg Billeter, 2019-12-10, 2 files, -3/+3)
* _project.py: Allow junctions to use parent remote (Thomas Coldrick, 2019-12-10, 1 file, -3/+3)

    At present it doesn't seem to be possible to use ignore-remote-caches
    and also cache cross-junction artifacts in one's own cache. By
    passing the parent caches to the junction we ensure that things get
    cached in the parent cache.

    For a motivating purpose, consider that one may have a (patched)
    junction which specifies a cache incompatible with master. This will
    throw warnings at every invocation of bst, or you won't cache
    cross-junction artifacts.
* _remote: ignore unused args (Darius Makovsky, 2019-12-09, 1 file, -1/+1)
* _profile: ignore unused args (Darius Makovsky, 2019-12-09, 1 file, -1/+1)
* resources: remove [un]register_exclusive_interest() (Darius Makovsky, 2019-12-09, 1 file, -50/+0)
* _project: remove create_artifact_element() (Darius Makovsky, 2019-12-09, 1 file, -13/+0)
* _pipeline: remove subtract_elements() (Darius Makovsky, 2019-12-09, 1 file, -15/+0)
* _pipeline: remove targets_include() (Darius Makovsky, 2019-12-09, 1 file, -17/+0)
* _context: remove set_artifact_directories_optional() (Darius Makovsky, 2019-12-09, 1 file, -10/+0)
* casserver: remove _digest_from_*_resource_name() (Darius Makovsky, 2019-12-09, 1 file, -45/+0)
* casserver: remove ArtifactStatus() (Darius Makovsky, 2019-12-09, 1 file, -4/+0)
* cascache: remove update_tree_mtime() (Darius Makovsky, 2019-12-09, 1 file, -4/+0)
* _artifactcache: remove _reachable_digests() (Darius Makovsky, 2019-12-09, 1 file, -18/+0)
* _artifactcache: remove _reachable_directories() (Darius Makovsky, 2019-12-09, 1 file, -18/+0)
* _artifactcache: remove get_artifact_logs() (Darius Makovsky, 2019-12-09, 1 file, -16/+0)
* _basecache: remove has_open_grpc_channels() (Darius Makovsky, 2019-12-09, 1 file, -12/+0)
* _workspaces: remove get_key() (Darius Makovsky, 2019-12-09, 1 file, -38/+0)
* _workspaces: remove invalidate_key() (Darius Makovsky, 2019-12-09, 1 file, -8/+0)
* element: remove _is_required()/__required (Darius Makovsky, 2019-12-09, 1 file, -8/+0)
* element: Remove unused method (__reset_cache_data) (Darius Makovsky, 2019-12-09, 1 file, -18/+0)
* scheduler.py: Only run thread-safe code in callbacks from watchers [bschubert/stricter-asyncio-handling] (Benjamin Schubert, 2019-12-07, 2 files, -2/+12)

    Per
    https://docs.python.org/3/library/asyncio-policy.html#asyncio.AbstractChildWatcher.add_child_handler,
    the callback from a child handler must be thread-safe. Not all our
    callbacks were. This changes all our callbacks to schedule a call for
    the next loop iteration instead of executing it directly.
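    The standard remedy for a callback firing on a foreign thread is
    `loop.call_soon_threadsafe()`, which hands the real work back to the
    event-loop thread. A minimal standalone sketch of the pattern (a plain
    thread stands in for the child watcher; names are illustrative):

    ```python
    import asyncio
    import threading

    async def wait_for_child():
        loop = asyncio.get_running_loop()
        exited = asyncio.Event()

        def on_exit():
            # Runs on the event-loop thread, so touching loop state is safe.
            exited.set()

        def watcher_thread():
            # Simulates a child-watcher callback firing on another thread:
            # the only safe loop operation here is call_soon_threadsafe().
            loop.call_soon_threadsafe(on_exit)

        threading.Thread(target=watcher_thread).start()
        await exited.wait()
        return "handled on loop thread"

    print(asyncio.run(wait_for_child()))
    ```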
* job.py: Only start new jobs in a `with watcher:` block (Benjamin Schubert, 2019-12-07, 1 file, -26/+5)

    The documentation
    (https://docs.python.org/3/library/asyncio-policy.html#asyncio.AbstractChildWatcher)
    is apparently missing this part, but the code mentions that new
    processes should only ever be started inside a with block:
    https://github.com/python/cpython/blob/99eb70a9eb9493602ff6ad8bb92df4318cf05a3e/Lib/asyncio/unix_events.py#L808
* job.py: Remove '_watcher' attribute, it is not needed (Benjamin Schubert, 2019-12-07, 1 file, -3/+2)

    We don't need to keep a reference to the watcher, so let's remove it.
* scheduler.py: Optimize scheduling by not calling it unnecessarily (Benjamin Schubert, 2019-12-07, 1 file, -9/+18)

    This delays the call to the re-scheduling of jobs until the current
    event loop iteration has finished. This reduces the number of times
    we call this method per loop, which should reduce the pressure on the
    loop and allow faster event handling.

    Since the call is now delayed, also ensure we only call it once per
    loop iteration.
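    Deferring and coalescing work until the next loop iteration is a
    common asyncio idiom: keep a pending-callback handle and only queue a
    new `call_soon` when none is outstanding. A hedged sketch of the
    technique (class and method names are illustrative, not the actual
    scheduler code):

    ```python
    import asyncio

    class Scheduler:
        def __init__(self, loop):
            self._loop = loop
            self._sched_handle = None
            self.runs = 0

        def schedule_jobs(self):
            # Coalesce: many requests within one loop iteration result in
            # a single deferred call on the next iteration.
            if self._sched_handle is None:
                self._sched_handle = self._loop.call_soon(self._do_schedule)

        def _do_schedule(self):
            self._sched_handle = None
            self.runs += 1

    async def coalesced_runs():
        sched = Scheduler(asyncio.get_running_loop())
        for _ in range(5):
            sched.schedule_jobs()   # 5 requests...
        await asyncio.sleep(0)      # ...let the queued callback run
        return sched.runs           # ...but only 1 actual run

    print(asyncio.run(coalesced_runs()))  # → 1
    ```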
* sandbox.py: Remove unused _set_virtual_directory() method (Jürg Billeter, 2019-12-05, 1 file, -6/+0)
* _sandboxreapi.py: Use CasBasedDirectory._reset() (Jürg Billeter, 2019-12-05, 1 file, -3/+2)

    Calling _reset() instead of completely replacing the object fixes
    element plugins that use a virtual directory object across
    Sandbox.run() calls, such as the compose plugin with integration
    commands.
* _casbaseddirectory.py: Add _reset() method (Jürg Billeter, 2019-12-05, 1 file, -1/+6)

    This reinitializes a CASBasedDirectory object from a directory digest.
* testing/runcli.py: Remove unused configure parameter from run() methods (Jürg Billeter, 2019-12-05, 1 file, -34/+5)
* tests: source_determinism.py: Do not use too restrictive test umasks [juerg/casd-separate-user] (Jürg Billeter, 2019-12-03, 1 file, -2/+7)

    To protect the local cache of buildbox-casd from corruption without
    the use of FUSE, buildbox-casd has to run as a different user. Use
    less restrictive umasks in the source determinism tests to allow
    buildbox-casd to function when it is running as a separate user.
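    Why the umask matters here can be seen in a small standalone sketch:
    the process umask masks permission bits out of newly created files, so
    a restrictive umask like 0o077 strips the group/other read access a
    separate buildbox-casd user would need (helper name is illustrative):

    ```python
    import os
    import stat
    import tempfile

    def created_mode(path, umask):
        """Create a file under the given umask and return its mode bits."""
        old = os.umask(umask)
        try:
            fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
            os.close(fd)
        finally:
            os.umask(old)  # always restore the previous umask
        return stat.S_IMODE(os.stat(path).st_mode)

    with tempfile.TemporaryDirectory() as d:
        # 0o022 leaves files readable by other users; 0o077 does not.
        print(oct(created_mode(os.path.join(d, "a"), 0o022)))  # → 0o644
        print(oct(created_mode(os.path.join(d, "b"), 0o077)))  # → 0o600
    ```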
* testing/_utils/site.py: Add CASD_SEPARATE_USER variable (Jürg Billeter, 2019-12-03, 1 file, -0/+5)

    This is set to True if buildbox-casd is installed with the set-uid
    bit, and thus indicates whether buildbox-casd is running as a
    separate user.
* utils.py: safe_link(): Copy if hardlink creation is not permitted (Jürg Billeter, 2019-12-03, 1 file, -1/+1)

    Since Linux 3.6, Linux by default doesn't allow creating hardlinks to
    read-only files of other users (see
    /proc/sys/fs/protected_hardlinks). This fixes staging when
    buildbox-casd is running as a separate user and the traditional
    bubblewrap sandboxing backend is used.

    This combination is not recommended; however, it's triggered in CI by
    docker images that run buildbox-casd as a separate user and a few
    test cases that override BST_FORCE_SANDBOX.
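    The fallback pattern is small: attempt `os.link()` and copy the file
    when the kernel refuses. A hedged sketch of the idea (this is not the
    actual BuildStream `safe_link`; the EXDEV branch for cross-device
    links is an assumption about the usual companion case):

    ```python
    import errno
    import os
    import shutil
    import tempfile

    def safe_link(src, dest):
        """Hardlink src to dest, copying instead when linking is refused."""
        try:
            os.link(src, dest)
        except OSError as e:
            # EPERM: e.g. fs.protected_hardlinks denies linking to another
            # user's read-only file. EXDEV: src and dest are on different
            # filesystems, where hardlinks are impossible.
            if e.errno in (errno.EPERM, errno.EXDEV):
                shutil.copyfile(src, dest)
            else:
                raise

    # Usage: within one filesystem this produces a hardlink (or a copy).
    d = tempfile.mkdtemp()
    src = os.path.join(d, "src")
    dst = os.path.join(d, "dst")
    with open(src, "w") as f:
        f.write("content")
    safe_link(src, dst)
    print(open(dst).read())  # → content
    ```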