| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Technically this breaks cache keys for the local source, but as
this comes in a branch which fixes local source cache keys to be
stable (they were random before this branch), we won't bother
considering this enhancement a separate API break; the cache key
breakage was inescapable anyway.
|
| |
When fetching a downloadable source, we made a defensive check
to avoid a redundant download at fetch() time by checking whether
the source is already cached; however, fetch() will never be called
if the source is already cached.
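The dispatch logic being relied on can be sketched as follows (a minimal stand-in; `Consistency`, `StubSource` and `maybe_fetch` here are simplified illustrations, not the real BuildStream API):

```python
from enum import Enum

class Consistency(Enum):
    INCONSISTENT = 0
    RESOLVED = 1
    CACHED = 2

class StubSource:
    def __init__(self, consistency):
        self.consistency = consistency
        self.fetched = False

    def get_consistency(self):
        return self.consistency

    def fetch(self):
        # No need to re-check the cache here: the caller already did.
        self.fetched = True

def maybe_fetch(source):
    # The core only invokes fetch() when the source is not already
    # cached, so a defensive "already cached?" check inside fetch()
    # is dead code.
    if source.get_consistency() != Consistency.CACHED:
        source.fetch()
```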
|
| |
The local plugin is always Consistency.CACHED, which means that the
fetch(), set_ref() and get_ref() methods will never be called.
Instead of omitting them, just "pragma: nocover" the `pass`
statements, making our coverage report more realistic.
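What that looks like in a plugin can be sketched like this (a simplified stand-in class, not the real plugin, which derives from buildstream.Source; the comment coverage.py recognizes is `# pragma: no cover`):

```python
class LocalSource:
    """Always-cached source: local files need no fetching or refs."""

    def get_consistency(self):
        return "cached"

    def fetch(self):
        pass  # pragma: no cover

    def set_ref(self, ref, node):
        pass  # pragma: no cover

    def get_ref(self):
        pass  # pragma: no cover
```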
|
| |
The patch plugin was checking whether the target directory exists, but
this is automatically guaranteed by the Source abstract class and
documented as guaranteed as well.
Since this error cannot be caught by the plugin (it will be caught
in advance by the Source class), remove the check from patch.py.
|
| |
|
| |
It's not required to raise SourceError() manually when calling
utils.get_host_tool().
|
| |
This avoids multiple file system traversals when the key is requested
multiple times.
|
| |
This isn't reachable as there's a default value in script.yaml; it
also contains an error: ElementError hasn't been declared in this scope.
|
| |
The residual `from e` was probably left over after a refactoring from
try/except to exit-code checking. While this could prevent the expected
`SourceError` from being raised, I couldn't find a proper way to trigger it,
since `git show` has no documented return codes other than `0` and `128`.
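The bug pattern can be sketched like this (SourceError and the helper are illustrative stand-ins): with exit-code checking there is no exception object in scope, so a leftover `raise SourceError(...) from e` would itself crash with a NameError on `e`. The corrected form raises without the chain clause:

```python
import subprocess
import sys

class SourceError(Exception):
    pass

def check_output_or_raise(command, fail_message):
    # Exit-code checking: no try/except, so there is no exception
    # object `e` to chain with `raise ... from e` - the leftover chain
    # clause would itself fail with a NameError.
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        raise SourceError(fail_message)
    return result.stdout
```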
|
| |
|
| |
|
| |
This is the common practice with cmake.
Actually, some modules will fail to build if this is not followed. For
example, for llvm you will get this error when configuring:
"
CMake would overwrite the makefiles distributed with LLVM.
Please create a directory and run cmake from there, passing the path
to this source directory as the last argument.
"
|
| |
This will allow defining:
- global configuration parameters that will be applied to all the elements
using that build system
- local configuration parameters that will override the global ones
The *-extra variables are left for compatibility.
|
| |
|
| |
should be normalized.
|
| |
compose plugin including paths still reachable through following of
symbolic links.
Paths reachable through symbolic links are kept in the manifest. This
can lead to an ENOENT error when copying a file if the target
directory of a symbolic link is not yet created. The file is anyway
copied, since the real path of the file is also in the manifest.
|
| |
and add an Accept header to avoid 406 errors on some HTTP servers (e.g. alioth.debian.org).
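Setting such a header can be sketched with the standard library (the URL is a placeholder):

```python
import urllib.request

def make_request(url):
    # Some HTTP servers answer 406 Not Acceptable when no Accept
    # header is sent; explicitly accepting anything avoids that.
    return urllib.request.Request(url, headers={"Accept": "*/*"})
```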
|
| |
Since these changes were effected in 3b17762a4cab23c238762ca32baa348788347473,
these stringifications are now implied and no longer needed.
|
| |
o Some things changed in master since this patch, notably the
keyword-only arguments have changed
o Enhanced the user feedback to mention removed, added and modified
files resulting from running integration
o Don't silence messages while integrating the sandbox
|
| |
Fixes issue #147
|
| |
|
| |
When extracting files from a base directory, we are normalizing
the TarInfo file names so we need to also normalize the link names
in the case of links and symlinks.
Fixes issue #155
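The fix can be sketched with the tarfile module directly (member names are illustrative; the real code lives in the tar source plugin): when member names are normalized, the `linkname` of hard links and symlinks needs the same normalization, or the recorded target will not match any extracted member:

```python
import os
import tarfile

def normalize_member(member):
    # Names like "./foo/bar" normalize to "foo/bar"; link targets
    # recorded in linkname must get the same treatment so they still
    # point at the normalized member names.
    member.name = os.path.normpath(member.name)
    if member.islnk() or member.issym():
        member.linkname = os.path.normpath(member.linkname)
    return member
```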
|
| |
|
| |
|
| |
BuildStream made the choice to unconditionally run the autogen step for
autotools elements, even when building from tarballs.
However, due to how the conditionals for what to do in that step were
set up, some tarballs would fall through the cracks and effectively skip
that step.
For example the Python tarballs and VCS checkouts both:
* do not have an autogen/autogen.sh script
* do not have a bootstrap/bootstrap.sh script
* do have a configure script
The conditionals we had before this change would not do anything for a
project like that, and the first thing to actually be run would be the
configure script.
With this change, `autoreconf` gets run whether the configure script
exists or not, ensuring we always effectively do the autogen step.
More details on the mailing-list:
https://mail.gnome.org/archives/buildstream-list/2017-November/msg00015.html
|
| |
This changes workspaces created with the git source element so that the
origin remote points to the source repository of the build element as
opposed to the internal repository in the bst cache.
This introduces the new init_workspace method in the Source
API. This method, which defaults to calling stage(), sets up
the workspace after the creation of the workspace directory.
This is a part of issue #53
|
| |
I noticed this issue when running `bst track` on a system that contained
GLIBC. The following error occurred:
[--:--:--] START [gnu-toolchain/stage2-glibc.bst]: Tracking release/2.25/master from git://git.baserock.org/delta/glibc
Running host command /home/fedora/src/baserock/definitions/.cache/buildstream/sources/git/git___git_baserock_org_delta_glibc: /usr/bin/git fetch origin
[--:--:--] STATUS [gnu-toolchain/stage2-glibc.bst]: Running host command
/usr/bin/git fetch origin
error: cannot lock ref 'refs/heads/hjl/memcpy/dpdk/master': 'refs/heads/hjl/memcpy' exists; cannot create 'refs/heads/hjl/memcpy/dpdk/master'
From git://git.baserock.org/delta/glibc
! [new branch] hjl/memcpy/dpdk/master -> hjl/memcpy/dpdk/master (unable to update local ref)
error: cannot lock ref 'refs/heads/hjl/x86/master': 'refs/heads/hjl/x86' exists; cannot create 'refs/heads/hjl/x86/master'
! [new branch] hjl/x86/master -> hjl/x86/master (unable to update local ref)
error: cannot lock ref 'refs/heads/hjl/x86/math': 'refs/heads/hjl/x86' exists; cannot create 'refs/heads/hjl/x86/math'
! [new branch] hjl/x86/math -> hjl/x86/math (unable to update local ref)
error: cannot lock ref 'refs/heads/hjl/x86/optimize': 'refs/heads/hjl/x86' exists; cannot create 'refs/heads/hjl/x86/optimize'
! [new branch] hjl/x86/optimize -> hjl/x86/optimize (unable to update local ref)
error: some local refs could not be updated; try running
'git remote prune origin' to remove any old, conflicting branches
git source at gnu-toolchain/stage2-glibc.bst [line 4 column 2]: Failed to fetch from remote git repository: git://git.baserock.org/delta/glibc
The issue here is that my local clone had old remote-tracking refs which
conflicted with newer upstream refs. For example, there used to be a ref
named `hjl/memcpy` which I had mirrored locally. This has been deleted
and now a ref exists named `hjl/memcpy/dpdk/master`. The new ref cannot
be pulled because Git considers it to conflict with the old one.
The solution is to use `git fetch --prune` when updating so that Git
removes any outdated remote-tracking refs before trying to create any
new ones.
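The resulting invocation can be sketched as (remote name and mirror path illustrative, not the plugin's actual helper names):

```python
import subprocess

def fetch_command(remote="origin"):
    # --prune removes stale remote-tracking refs (e.g. a deleted
    # `hjl/memcpy`) before creating new ones that would conflict
    # with them (e.g. `hjl/memcpy/dpdk/master`).
    return ["git", "fetch", "--prune", remote]

def fetch(mirror_dir, remote="origin"):
    subprocess.run(fetch_command(remote), cwd=mirror_dir, check=True)
```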
|
| |
This required adding two new APIs on the Source to make up for it:
o get_project_directory()
Added here because elements should not be accessing external
resources; Sources need this for local files, GPG keys and such
o translate_url()
Used by sources to mish-mash the project aliases and create
real URLs.
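The alias translation can be sketched like this (hypothetical alias table; the real API reads aliases from project configuration):

```python
def translate_url(url, aliases):
    # "github:foo/bar.git" with {"github": "https://github.com/"}
    # becomes "https://github.com/foo/bar.git"; URLs with no known
    # alias pass through unchanged.
    alias, sep, rest = url.partition(":")
    if sep and alias in aliases:
        return aliases[alias] + rest
    return url
```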
|
| |
Also, restructured a bit to match the tar source; there was never
any need for the `dirs_only` parameter for listing the archive
paths, as the source is only interested in directories anyway.
|
| |
|
| |
This makes buildstream behave the same way with tarballs which
were encoded with a leading `.` and those encoded without one.
This fixes issue #145
|
| |
This is equivalent to the tar source, but for Zip archives.
|
| |
The new DownloadableFileSource will be used as a base for all sources
which just download a file to use as source.
The existing TarSource just keeps the code responsible for managing a Tar
archive.
This will help implementing other types of single-file downloaded
sources, for example Zip archives.
|
| |
To extract the full tarball, one should set base-dir to an
empty string.
By ignoring the leading '.' in any archive, we make the 'base-dir'
API more predictable and reliable. The default behavior of '*' is
to pick up the first directory in the tarball (usually source code
tarballs are encoded with one leading directory); in the off chance
that a source tarball has a leading '.' in it, that would cause
the 'base-dir' default '*' glob to extract the whole thing.
It seems undesirable to behave differently depending on whether
a tarball was encoded with, or without, a leading '.'
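The effect on the default '*' glob can be sketched with fnmatch (member names illustrative; not the plugin's actual helper): once the leading './' is stripped, both encodings yield the same base directory:

```python
import fnmatch
import os

def find_base_dir(member_names, pattern="*"):
    # Strip any leading "./" so "./foo-1.0/..." and "foo-1.0/..."
    # behave identically, then let the glob pick the first directory.
    normalized = [os.path.normpath(name) for name in member_names]
    dirs = {name.split("/")[0] for name in normalized if "/" in name}
    for d in sorted(dirs):
        if fnmatch.fnmatch(d, pattern):
            return d
    return ""
```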
|
| |
|
| |
|
| |
|
| |
When building Python modules, a bytecode `.pyc` file is generated from
the source `.py` file.
The former contains 4 bytes representing the timestamp of the latter at
the time it was generated.
Unfortunately, after building OSTree sets all the file timestamps to 0,
which introduces a discrepancy between the timestamp of the `.py` file
and the 4 bytes stored inside the `.pyc` file.
As a result, when running a Python module from a checkout, Python thinks
the bytecode files are stale, which causes a dramatic performance
penalty when starting an application.
Fixes #94
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
Fixes #63
|
| |
NOTE: This changes cache keys for existing sources which
have submodules configured.
Presence of submodules in the source configuration changes the
configuration significantly; this should be considered in cache keys.
|
| |
Here we were querying a non-existent directory with bzr; we don't
need this when the directory doesn't exist.
|
| |
Instead, use BST_STRICT_REBUILD and follow a new pattern we're
using for any class attributes used by a plugin to communicate
static data back to the core.
|
| |
Using the new enhanced Element API for staging, allow the
user to specify domains to exclude as well as domains to
include.
Fixes issue #78
|
| |
Fixes #59
|
| |
Packaging is a big topic in the Python community these days. Things are
evolving, but a consensus seems to have formed around the path forward.
With PEP 518, Pip is becoming the primary tool to install Python
modules. In turn, Pip will use the right underlying tool for the job.
(distutils, setuptools, flit, ...)
Given all this, it makes sense to have a pip element in BuildStream.
This element installs a single Python module, telling Pip not to go and
download its dependencies, to make builds reproducible and not rely
on the network during builds.
By default it will use the `pip` command which generally points to Pip
for Python 2. Users can override the "pip" variable, for example to use
the `pip3` command, which generally points to Pip for Python 3.
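The install step the element performs can be sketched as (command construction only, illustrative; the actual element defines its commands in YAML):

```python
def pip_install_command(pip="pip", package="."):
    # --no-deps keeps the build reproducible and offline: only the
    # one module is installed, never its downloaded dependencies.
    return [pip, "install", "--no-deps", package]
```

Overriding the "pip" variable to `pip3` simply swaps the first entry of this command.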
|