Commit log (most recent first): commit message · author · date · files changed · lines removed/added
* _artifactcache/pushreceive.py: Verify that we have the commit objects [pushreceive]
  Jürg Billeter, 2017-07-27 (1 file changed, -0/+6)

* _artifactcache/pushreceive.py: Add handshake after sending objects
  Jürg Billeter, 2017-07-27 (1 file changed, -0/+12)
  Ensure all objects have been sent before moving them into the repository and
  do not terminate pusher while receiver is still processing.
* Add network-retries option
  Jürg Billeter, 2017-07-27 (6 files changed, -6/+41)
  Retry network tasks up to two times by default. Fixes #30
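The retry behaviour described in this commit can be sketched roughly as follows. This is an illustration only, not the real BuildStream internals; `run_with_retries`, `task`, and `max_retries` are assumed names:

```python
import time

# Hedged sketch of the described behaviour: a network task is retried up
# to `max_retries` additional times (two by default) before the failure
# is allowed to propagate. OSError stands in for network errors here.
def run_with_retries(task, max_retries=2, delay=0.0):
    for attempt in range(max_retries + 1):
        try:
            return task()
        except OSError:
            if attempt == max_retries:
                raise
            time.sleep(delay)
```

With `max_retries=2`, a task that fails twice and then succeeds completes on the third attempt without the caller seeing an error.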
* _scheduler/job.py: Fix respawn
  Jürg Billeter, 2017-07-27 (1 file changed, -2/+2)
  Move parent_start_listening() from __init__ to spawn() to support
  respawning a job after shutdown.

* _scheduler/job.py: Ignore SIGTSTP error for already exited process
  Jürg Billeter, 2017-07-27 (1 file changed, -10/+14)

* _artifactcache/pushreceive.py: Raise PushException on connection failure
  Jürg Billeter, 2017-07-26 (1 file changed, -0/+4)
  Unexpected connection termination should not be considered a bug. Fixes #51
* main.py: Don't handle build failures interactively when --on-error is specified
  Tristan Van Berkom, 2017-07-26 (1 file changed, -1/+12)
  If --on-error is specified on the command line to decide the failure action,
  then don't handle that decision interactively.

* _sandboxbwrap.py: Ensure existence of the cwd
  Tristan Van Berkom, 2017-07-26 (1 file changed, -0/+5)
  Previously we assumed the user would specify a cwd which exists; now we
  simply create it if it's not there.
* element.py: Fix self.__public being written to during builds
  Jonathan Maw, 2017-07-26 (1 file changed, -1/+1)

* dpkg_build.py: Prevent cache-key changing mid-build
  Jonathan Maw, 2017-07-26 (1 file changed, -1/+16)

* _artifactcache/pushreceive.py: Drop branch check in receiver
  Jürg Billeter, 2017-07-26 (1 file changed, -18/+1)
  The pusher already checks this and the check in the receiver does not
  provide any additional guarantees as it is prone to race conditions. This
  prevents a push error in case two clients push an artifact with the same
  key around the same time. Fixes #52
* _sandboxbwrap.py: Restore terminal after exit of interactive child
  Jürg Billeter, 2017-07-25 (1 file changed, -0/+12)
  Make the main BuildStream process the foreground process again when the
  interactive child exits. Otherwise the next read() on stdin will trigger
  SIGTTIN and stop the process. This is required because the sandboxed
  process does not have permission to do this on its own (running in
  separate PID namespace). dash still prints an error because it fails to
  restore the foreground process, however, this is harmless. bash doesn't
  print an error in this case, but the behavior is otherwise identical.
  Fixes #41
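The terminal hand-back described above can be sketched like this. This is an assumption-laden illustration, not the actual BuildStream code; the key detail is that SIGTTOU must be ignored around `tcsetpgrp()`, since calling it from a background process group would otherwise stop the caller:

```python
import os
import signal

# Sketch: after an interactive child exits, make our own process group
# the foreground group of the controlling terminal again.
def restore_terminal(fd=0):
    if not os.isatty(fd):
        return False  # nothing to restore on a non-tty
    handler = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
    try:
        os.tcsetpgrp(fd, os.getpgrp())
    finally:
        signal.signal(signal.SIGTTOU, handler)
    return True
```

Without the SIGTTOU guard, the restore call itself can suspend the process it is trying to rescue.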
* element.py: Introduce artifact versions
  Jürg Billeter, 2017-07-25 (1 file changed, -0/+29)
  Fixes #49

* buildstream/_loader.py: Don't repeat elements in resolve_project_variant
  Pedro Alvarez Piedehierro, 2017-07-25 (1 file changed, -2/+10)
  Add an extra argument to the function to know which elements were already
  resolved.

* element.py: Check consistency before recursion in _get_cache_key()
  Jürg Billeter, 2017-07-25 (1 file changed, -8/+8)
* Check for write access to remote artifact cache early on in the pipeline [sam/artifactcache-preflight-check]
  Sam Thursfield, 2017-07-21 (3 files changed, -38/+121)
  Previously, the first time you configured an artifact cache, you would get
  to the end of your first build and then BuildStream would exit because of
  some stupid mistake like you got the address slightly wrong, or you forgot
  to add the host keys of the remote artifact cache to `~/.ssh/known_hosts`.

  To avoid surprises, if there's an artifacts push-url configured we now try
  to connect to it as a preflight check so that issues are raised early.
  On success, you will see something like this:

      [--:--:--][90904fe4][ main:gnu-toolchain/stage2.bst ] START   Checking connectivity to remote artifact cache
      [00:00:00][90904fe4][ main:gnu-toolchain/stage2.bst ] SUCCESS Connectivity OK

  On failure, it looks like this:

      [--:--:--][90904fe4][ main:gnu-toolchain/stage2.bst ] START   Checking connectivity to remote artifact cache
      [00:00:03][90904fe4][ main:gnu-toolchain/stage2.bst ] FAILURE BuildStream will be unable to push artifacts to the shared cache: ssh: connect to host ostree.baserock.org port 2220: Connection timed out

  As a bonus, for some reason this check causes SSH to ask about unknown host
  keys rather than just failing, so you may now see messages like this if the
  host keys are unknown rather than an error:

      The authenticity of host '[ostree.baserock.org]:22200 ([185.43.218.170]:22200)' can't be established.
      ECDSA key fingerprint is SHA256:mB+MNfYREOdRfp2FG6dceOlguE/Skd4QwnS0tvCPcnI.
      ECDSA key fingerprint is MD5:8f:fa:ab:90:19:31:f9:f7:f1:d4:e5:f0:a2:be:56:71.
      Are you sure you want to continue connecting (yes/no)?
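The "fail early, report clearly" shape of this preflight check can be sketched as below. The real check runs over ssh against the configured push-url; here a plain TCP connect stands in for the transport, and every name (`preflight_push_url`, the message strings) is an assumption for illustration:

```python
import socket
from urllib.parse import urlparse

# Sketch: before any build starts, parse the configured push-url and try
# to connect, so a misconfigured address fails up front rather than at
# the end of the first build.
def preflight_push_url(push_url, timeout=3.0):
    parts = urlparse(push_url)
    host = parts.hostname
    port = parts.port or 22  # ssh default when the URL omits a port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, "Connectivity OK"
    except OSError as exc:
        return False, "BuildStream will be unable to push artifacts: {}".format(exc)
```

Returning a (status, message) pair rather than raising lets the frontend render the START/SUCCESS/FAILURE lines shown in the commit message.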
* _artifactcache/pushreceive.py: Do not list all refs
  Jürg Billeter, 2017-07-20 (1 file changed, -5/+20)
  The full ref list can easily exceed the maximum message size. Limit list to
  refs being pushed. Fixes #47

* userconfig.yaml: Mention push-url restriction [summary]
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+1)
  If push-url is specified, it must point to the same repository as pull-url,
  as the summary file is used for pull and push operations.

* Artifacts documentation: Add section for summary file updates
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+17)
* _pipeline.py: Continue if remote artifact repository is unavailable
  Jürg Billeter, 2017-07-20 (1 file changed, -2/+6)

* _artifactcache: Add set_offline()
  Jürg Billeter, 2017-07-20 (1 file changed, -2/+17)

* element.py: Remove _built() and _set_build()
  Jürg Billeter, 2017-07-20 (1 file changed, -19/+0)
  They are no longer needed.

* buildqueue.py: Do not call _set_built()
  Jürg Billeter, 2017-07-20 (1 file changed, -1/+0)
  _set_built() is being removed.

* pushqueue.py: Use _skip_push() instead of _built()
  Jürg Billeter, 2017-07-20 (1 file changed, -1/+1)

* element.py: Add _skip_push()
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+17)

* _artifactcache: Add remote_contains_key()
  Jürg Billeter, 2017-07-20 (1 file changed, -5/+19)

* _artifactcache: Do not attempt to pull unavailable artifacts
  Jürg Billeter, 2017-07-20 (1 file changed, -19/+18)

* pullqueue.py: Update skip rules
  Jürg Billeter, 2017-07-20 (1 file changed, -1/+13)
* element.py: Consider pull failure fatal
  Jürg Billeter, 2017-07-20 (1 file changed, -22/+5)
  Build planning uses the list of artifacts in the remote artifact cache.
  Pull failures cannot be ignored.

* _pipeline.py: Use _remotely_cached()
  Jürg Billeter, 2017-07-20 (1 file changed, -1/+1)

* _pipeline.py: Fetch remote refs
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+3)

* tests/artifactcache: Generate and fetch summary file
  Jürg Billeter, 2017-07-20 (1 file changed, -2/+13)

* tests/artifactcache: Do not configure pull/push for all tests
  Jürg Billeter, 2017-07-20 (1 file changed, -3/+4)

* element.py: Add _remotely_cached()
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+37)

* _artifactcache: Add remote_contains()
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+25)

* _artifactcache: Use subprocess to fetch summary
  Jürg Billeter, 2017-07-20 (1 file changed, -1/+18)
  OSTree fetch operations in subprocesses hang if OSTree fetch operations
  have been used in the main process.

* _artifactcache: Add fetch_remote_refs()
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+16)

* _ostree.py: Add list_remote_refs()
  Jürg Billeter, 2017-07-20 (1 file changed, -0/+19)

* element.py: Do not fail with artifacts with pre-workspace metadata
  Jürg Billeter, 2017-07-20 (1 file changed, -2/+8)
* .gitlab-ci.yml: Install bzr to run tests
  Pedro Alvarez Piedehierro, 2017-07-19 (1 file changed, -0/+1)

* docs: Fix Docker instructions
  Pedro Alvarez Piedehierro, 2017-07-19 (1 file changed, -2/+7)
  Suggest that the user mount /etc/passwd inside the container to fix the
  `bzr` command. `bzr` fails if the uid of the user running it doesn't
  exist, giving, for example, the following traceback when running
  `bzr init`:

      File "/usr/lib64/python2.7/site-packages/bzrlib/lockdir.py", line 238, in _attempt_lock
        tmpname = self._create_pending_dir()
      File "/usr/lib64/python2.7/site-packages/bzrlib/lockdir.py", line 335, in _create_pending_dir
        info = LockHeldInfo.for_this_process(self.extra_holder_info)
      File "/usr/lib64/python2.7/site-packages/bzrlib/lockdir.py", line 779, in for_this_process
        user=get_username_for_lock_info(),
      File "/usr/lib64/python2.7/site-packages/bzrlib/lockdir.py", line 863, in get_username_for_lock_info
        return osutils.getuser_unicode()
      File "/usr/lib64/python2.7/site-packages/bzrlib/osutils.py", line 356, in _posix_getuser_unicode
        name = getpass.getuser()
      File "/usr/lib64/python2.7/getpass.py", line 158, in getuser
        return pwd.getpwuid(os.getuid())[0]
      KeyError: 'getpwuid(): uid not found: 1000'

  There were two possible ways to fix this:

  - Run everything as root (or as another existing user) in the container.
    This removes the need to set a uid, making the instructions simpler, but
    files stored on the host would then probably have a different owner, and
    the user would run into file permission problems.

  - Ensure the uid of the current user also exists inside the container.
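The root cause above can be reproduced without Docker: the traceback bottoms out in `pwd.getpwuid(os.getuid())`, which raises `KeyError` when the current uid has no passwd entry. A minimal sketch (the helper name is illustrative, not part of bzr):

```python
import pwd

# Demonstrates the failure mode: looking up a uid with no /etc/passwd
# entry raises KeyError — exactly why mounting the host's /etc/passwd
# into the container makes `bzr` work again.
def username_for_uid(uid):
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return None  # uid unknown in this passwd database
```

Mounting the host's /etc/passwd makes the host entry for your uid visible inside the container, so the lookup succeeds.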
* Dockerfile: Include bzr into the image
  Pedro Alvarez Piedehierro, 2017-07-19 (1 file changed, -1/+1)

* dpkg_deploy.yaml: Removed a line of whitespace which had floating tabs
  Tristan Van Berkom, 2017-07-19 (1 file changed, -1/+0)

* Add documentation links to dpkg_build and dpkg_deploy elements
  Jonathan Maw, 2017-07-19 (1 file changed, -0/+2)
* dpkg-deploy: Add an element that creates debian packages
  Jonathan Maw, 2017-07-19 (2 files changed, -0/+294)
  It takes public data from the input element and uses that to generate a
  debian package.

* dpkg-build.py: Add a build element for debian sources
  Jonathan Maw, 2017-07-19 (2 files changed, -0/+280)
  It produces an artifact, and writes public data providing enough
  information to build a debian package out of the artifact.
* _frontend/main.py: Naming workspace functions with workspace prefix
  Tristan Van Berkom, 2017-07-18 (1 file changed, -8/+8)
  The `list` function was already named `list_` to avoid clashing with the
  builtin python type; the `open` command was causing log viewing to fail
  because we try to call the builtin python `open` function but it's
  shadowed. So just name all of the workspace functions with a `workspace_`
  prefix (but keep the same names on the command line).
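The naming scheme above can be sketched as follows. This is an illustration of the idea, not the real click-based wiring: the Python function carries a `workspace_` prefix so it never shadows a builtin, while the registered command keeps the short name the user types:

```python
# Tiny stand-in command registry (illustrative; BuildStream uses click).
COMMANDS = {}

def command(name):
    def register(func):
        COMMANDS[name] = func
        return func
    return register

@command("open")
def workspace_open(element):
    # The builtin `open` is still reachable from here, because nothing
    # in this module is itself named `open`.
    return "opened {}".format(element)

@command("list")
def workspace_list():
    return []
```

The command-line name (`open`, `list`) is decoupled from the function identifier, which is the whole point of the rename.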
* source.py: Fix missing files when calculating sha256sum
  Tristan Maat, 2017-07-17 (1 file changed, -4/+9)

* _pipeline.py: Reset workspace after deleting
  Tristan Maat, 2017-07-17 (1 file changed, -4/+4)

* element.py: Encode workspaced dependencies in metadata
  Tristan Maat, 2017-07-17 (1 file changed, -5/+28)