| Commit message | Author | Age | Files | Lines |
This commit adds initial cross-compilation support to BuildStream.
It has been tested against a converted version of the Baserock
compiler bootstrap, and used to cross-build sysroots for armv8l64
and ppc64l from an x86_64 host.
For example, to build a sysroot for ARM v8 64-bit you can do this:
bst build --target-arch=armv8b64 gnu-toolchain/stage2.bst
This would cause the adapted Baserock definitions to produce a stage1 simple
cross compiler that runs on the native architecture and produces armv8b64
binaries, and then cross build a stage2 sysroot that executes on armv8b64.
Currently the --host-arch option does nothing of use. It will one day
enable host-incompatible builds using a QEMU-powered cross sandbox.
The `--arch=` option is now shorthand for `--host-arch= --target-arch=`.
Elements have two new variables available, %{bst-host-arch} and
%{bst-target-arch}. The 'arches' conditional now follows %{bst-target-arch},
while the new 'host-arches' conditional follows %{bst-host-arch}. All
of --arch, --host-arch and --target-arch default to the output of `uname -m`.
There's no magic here that would make all BuildStream elements suddenly
able to cross compile. It is up to an individual element to support this by
honouring %{bst-target-arch} in whatever way makes sense.
Seems they keep getting executable!
Seems I have been impatient today...
Forgot to squash this into the previous commit before pushing.
Using the recurse=False argument to Element.dependencies() is equivalent
to using the private Element._direct_deps() API.
Using OSError instead of the general base Exception class proposed in MR 39.
- Add a `--directory` option to source-bundle
- Remove the `name` argument
- Rename the tempdir
Raise PipelineError when elements specified with --except do not
exist in the selected pipeline, instead of allowing the action to
continue when the user misspells an element in the --except
arguments.
Also make the default message for PipelineError an empty string,
so that intentionally unclassified errors print as empty strings.
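The validation described above amounts to a set-membership check before the pipeline runs. A minimal sketch, assuming hypothetical names (`check_except_elements` and its arguments are illustrative, not the actual BuildStream internals):

```python
class PipelineError(Exception):
    def __init__(self, message=''):
        # Default message is an empty string so intentionally
        # unclassified errors print as nothing rather than "None"
        super().__init__(message)

def check_except_elements(pipeline_names, except_names):
    # Fail early if any --except element is not in the pipeline,
    # e.g. because the user misspelled it
    known = set(pipeline_names)
    unknown = [name for name in except_names if name not in known]
    if unknown:
        raise PipelineError('unknown --except elements: ' + ', '.join(unknown))

check_except_elements(['base.bst', 'app.bst'], ['base.bst'])  # fine
try:
    check_except_elements(['base.bst', 'app.bst'], ['ap.bst'])
except PipelineError as err:
    assert 'ap.bst' in str(err)
```

Raising before any queue is scheduled keeps a misspelled element from silently widening the set of work performed.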
It might not matter in this instance, but we stay away from
these as a matter of convention.
Unless the track option is given for either of them.
This also adds the track option to fetch(), which was previously
only available for build().
Instead of just brutally linking out the artifact, which was already
wrong because we risk artifact cache corruption (we need to copy files
out into the checkout directory, never link).
When instantiating in inconsistent mode, the third load stage
will force all elements to appear inconsistent, ensuring that
cache keys cannot be calculated until a later time (e.g. in the
TrackQueue) when the source references are first resolved.
When specified, a TrackQueue is placed before the FetchQueue in
the build process, allowing one to track and build in one go.
Mostly code reorganization:
o _scheduler.py now in _scheduler/ submodule, split into job.py,
queue.py and scheduler.py
o _pipeline.py no longer defines the queues, instead they are
in the scheduler submodule as trackqueue.py, fetchqueue.py
and buildqueue.py
o Queue initializers no longer require parameters; instead, each
  queue implementation defines the data which was previously passed
  to the initializers
o The scheduler now has a concept of "tokens" for managing the
  availability of QueueType.FETCH and QueueType.BUILD tasks.
  Multiple queues of the same QueueType can coexist (e.g. a TrackQueue
  and a FetchQueue) in the same pipeline, but still be managed by
  the same user-defined task limit (e.g. --fetchers)
o The base Queue class now keeps track of how many elements were
  successfully processed, how many failed and how many were safely skipped.
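The "tokens" idea above can be sketched as a counting pool shared by every queue of one QueueType, sized by the user's task limit. This is a hypothetical illustration, not the actual BuildStream scheduler code; `TokenPool` and its method names are invented here:

```python
# Hypothetical sketch: all queues of one type (e.g. a TrackQueue and
# a FetchQueue) draw from a single shared pool, so their combined
# running tasks never exceed the user-defined limit (e.g. --fetchers).

class TokenPool:
    def __init__(self, limit):
        self.limit = limit   # e.g. the value given for --fetchers
        self.in_use = 0

    def acquire(self):
        # Grant a token only while running tasks stay below the limit
        if self.in_use < self.limit:
            self.in_use += 1
            return True
        return False

    def release(self):
        # A finished task hands its token back to the pool
        assert self.in_use > 0
        self.in_use -= 1

fetch_tokens = TokenPool(limit=2)
assert fetch_tokens.acquire()        # a TrackQueue task starts
assert fetch_tokens.acquire()        # a FetchQueue task starts
assert not fetch_tokens.acquire()    # limit reached across both queues
fetch_tokens.release()               # one task finishes
assert fetch_tokens.acquire()        # a waiting task may now start
```

The point of the shared pool is that the limit applies per task *type*, not per queue, even when several queues of the same type coexist in one pipeline.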
And take a deps argument in the Pipeline.fetch() API instead of
the old boolean 'needed', to match the API for track()
This was originally written to run at the end because we thought it
might be possible to have multiple tasks which affect the same file,
but this is no longer true since we removed the include feature and
the scheduler always runs things at element granularity (one element
per file, one element processed at a time).
Since it's more desirable to rewrite a tracked source immediately
after tracking it, that's what we do now.
For status display purposes, this should eventually move to the scheduler
when recursive pipelines are implemented.
Also changed to instantiate queues with their complete names as well
as their action names.
Instead of managing timers around Scheduler.run()
After deriving from _BstError, PipelineError wanted an unconditional
message; solve this by just providing a generic one in PipelineError's
constructor.
To checkout the pipeline target artifact content to a directory.
This one was missed in the transition.
Allow the frontend to pass in which elements should be tracked.
o The metaelements and metasources now carry the name; the loader
  resolves source names now.
o Element/Source factories don't require a name anymore, as it is
  already in the meta objects
o Pipeline no longer composes names
o Element.name is now the original project-relative filename;
  this allows plugins to identify that name in their dependencies,
  allowing one to express configuration which identifies elements
  by the same name that the user used in the dependencies.
o Removed plugin._get_display_name() in favor of plugin.name
o Added Element.normal_name, for the cases where we need a
  normalized name for creating directories and log files
o Updated the frontend, test cases and all callers to use the
  new naming
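The Element.normal_name idea can be illustrated with a small helper. This is a sketch under assumptions; the actual normalization in BuildStream's Element class may differ, and `normal_name` here is a free function invented for illustration:

```python
import os

def normal_name(element_name):
    # Hypothetical sketch: derive a filesystem-safe name from the
    # project-relative element name, suitable for naming directories
    # and log files.
    base, _ext = os.path.splitext(element_name)  # drop the ".bst" suffix
    return base.replace(os.sep, '-')             # flatten path separators

# The project-relative filename keeps its meaning for users,
# while the normalized form is safe as a single path component:
assert normal_name('gnu-toolchain/stage2.bst') == 'gnu-toolchain-stage2'
```

Keeping the original project-relative filename as Element.name, and deriving a separate normalized form only where the filesystem demands it, lets users and plugins refer to elements exactly as they appear in dependency declarations.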
This can be useful to plugins, even if usually they will
be using the stage_sources() convenience function.
The pipeline no longer owns the scheduler but is given a scheduler
to operate on; this needs to be pushed a little further for recursive
pipelines to work with a single application scheduler.
Consequently, this patch also addresses a problem in pipeline.fetch(),
which was failing to filter down to only the elements which needed
fetching (fetch was resulting in a no-op when some elements actually
required a fetch).
This seems a bit more intuitive, since dependencies and everything
else are referred to by their project-relative filenames.
Outputs the time it takes for the full run.
Only interrogate plugins once, after that manually update
states depending on successful build steps.
In the fetch and refresh API, fetch/refresh all by default, and use
a 'needed' flag to specify that you only want to fetch/refresh what
is needed to complete a build.
Skip cached elements at assemble time, skip consistency cached
elements in fetch.
Now we look up the sources which changed in the master process,
apply only the references sent over the process boundary to update
the in-tree sources, and then re-dump the bst files which needed
updating.
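The pattern described above — workers return only the new source references, and the master applies them to its own in-tree sources — can be sketched as follows. This is a simplified, hypothetical illustration (the function names, dict layout and the placeholder ref are invented; the real implementation goes through BuildStream's scheduler and job machinery):

```python
def track_in_child(source):
    # Stands in for work done in a child process: resolve the latest
    # ref and return only plain, picklable data, never mutated objects.
    # 'deadbeef' is a placeholder for whatever ref tracking resolves.
    return {'name': source['name'], 'new_ref': 'deadbeef'}

def apply_refs_in_master(sources, results):
    # The master process owns the in-tree sources; it applies the refs
    # sent back over the process boundary and reports which bst files
    # would need re-dumping.
    by_name = {s['name']: s for s in sources}
    changed_files = []
    for result in results:
        source = by_name[result['name']]
        if source['ref'] != result['new_ref']:
            source['ref'] = result['new_ref']
            changed_files.append(source['filename'])
    return changed_files

sources = [{'name': 'tarball', 'ref': 'cafe', 'filename': 'app.bst'}]
results = [track_in_child(s) for s in sources]
assert apply_refs_in_master(sources, results) == ['app.bst']
```

Sending back only the references sidesteps the problem that objects mutated in a child process are invisible to the parent: the parent stays the single writer of the source tree and of the files on disk.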
o Removed Resolver; this is just a function now
o Some changes to work with the new scheduler and fetch
result plugins by their id using _plugin_lookup()
Now use the Scheduler to parallelize source operations.