| Commit message (Collapse) | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
|
| |
This was never particularly useful; there are no circumstances
under which a workspace needs to be deleted, and a cache key
invalidated, in the course of a session.
A workspace is deleted only atomically as part of `bst workspace close`,
which no longer even loads a pipeline, so the pipeline state need not
be adjusted in this case.
|
| |
|
|
|
|
| |
So that we can report a proper processed status from tracking queues
|
|
|
|
|
|
|
|
|
|
|
| |
The Source object previously stored __origin_node,
__origin_toplevel and __origin_filename; this dates from a time
when we did not hold on to the plugin's Provenance object
explicitly.
Since this information comes from the same place, just use
Plugin._get_provenance() to derive these values instead of
redundantly carrying them along separately.
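A minimal sketch of the idea; the provenance member names used here are
assumptions for illustration, not the actual attribute names:

    # Rather than storing __origin_node / __origin_toplevel / __origin_filename
    # on the Source, derive them on demand from the held provenance.
    def origin_filename(source):
        return source._get_provenance().filename   # assumed member name

    def origin_node(source):
        return source._get_provenance().node       # assumed member name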
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds a new Source.load_ref() API which is technically optional to
implement; projects which make use of a project.refs file must only
use source plugins which implement the new load_ref() method.
o source.py: Added the load_ref() API to load a ref from a specified node.
This also adds _load_ref() and _save_ref() wrappers which handle
the logistics of when to load and save a ref, and to which location.
This also fixes _set_ref() to apply the ref to the node unconditionally;
this must be done independently of whether the ref actually changed.
o Modifications to the loading process so that a Source now has
access to the element name and source index.
o _pipeline.py: Delegate abstract loading of source refs to Source._load_ref()
- Print a summarized warning about redundant source references
- Assert that one cannot track cross-junction elements without project.refs.
o _scheduler/trackqueue.py: Delegate saving refs to Source._save_ref()
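As a rough illustration (not taken from any real plugin), a source which
stores a simple string ref might support project.refs like this:

    class MySource(Source):

        def configure(self, node):
            # The ref may be absent here when a project.refs file is in use
            self.ref = self.node_get_member(node, str, 'ref', None)

        def load_ref(self, node):
            # Load the ref from a node other than the source declaration,
            # e.g. the relevant node of the project.refs file
            self.ref = self.node_get_member(node, str, 'ref', None)

        def get_ref(self):
            return self.ref

        def set_ref(self, ref, node):
            # Apply the ref to our state and to the given node
            self.ref = ref
            node['ref'] = ref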
|
|
|
|
|
|
|
| |
This is technically an API break, but it will be transparent for the
vast majority of the current handful of source implementations
which exist at this time. This is a lesser evil than bloating the
API with new methods.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Source interrogation usually involves calling out to host tools
to quickly check if a given ref exists. This has, however, regressed
over time when running `bst build --track`.
This patch adds a new context manager for silencing messages, and
uses it to silence messages while calling `Source.get_consistency()`.
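The general shape of such a context manager, as a sketch only (the names
and the flag it toggles are illustrative, not the actual internal API):

    from contextlib import contextmanager

    @contextmanager
    def silence_messages(context):
        # Suppress plugin messages for the duration of the with block,
        # e.g. around Source.get_consistency() calls during tracking.
        old = getattr(context, 'silent_messages', False)
        context.silent_messages = True
        try:
            yield
        finally:
            context.silent_messages = old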
Fixes #280
|
|
|
|
|
| |
Equivalent to the 'elements' field, but slightly different because
sources don't have accompanying yaml.
|
| |
|
|
|
|
| |
Fixes #250
|
|
|
|
|
|
|
| |
In case a source with an open workspace is tracked and its ref gets
updated, BuildStream should inform the user that the new ref will not be
picked up so long as the workspace is open. To start using the updated
ref, the existing workspace will have to be closed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This functionality is only supported for sources which have an open
workspace. When such sources are present, the workspace directory will
be mounted directly inside the sandbox, as opposed to the default
behavior, which is to copy files into the sandbox.
This will save time when building large projects, as only those files
which have been modified between two consecutive builds will need to be
recompiled (assuming the underlying build system supports such behavior).
A few things to note regarding this behavior:
- If there are any `configure-commands` present, they will run only once
for each open workspace. If an element has multiple workspaces and any
one of them is opened or closed, they will be executed again on the next
run. However, modifying the contents of a workspace will not trigger the
`configure-commands` to be executed on the next run.
- Workspaced builds still leverage the cache. So, if no changes are made
to the workspace, i.e. no files are modified, this will not force a
rebuild.
Fixes #192.
|
|
|
|
|
|
|
| |
Source.track() may return None when tracking is not available. Handle
this identically to the case where track() returns the current ref.
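In other words, the handling presumably reduces to something like this
sketch (not the exact queue code; `node` is the source's yaml node):

    new_ref = source.track()

    if new_ref is None or new_ref == source.get_ref():
        # Tracking unavailable, or nothing changed: no ref update needed
        changed = False
    else:
        source.set_ref(new_ref, node)
        changed = True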
Fixes #201
|
|
|
|
|
| |
This adds the _update_state() method to the Source class, similar to the
corresponding method in the Element class.
|
|
|
|
|
| |
_force_inconsistent is too low level. Keep that detail contained in the
Source class.
|
| |
|
|
|
|
|
|
|
|
| |
Recently I added the `reason` member, which can be used to set
machine readable error reason strings for the purpose of testing.
I forgot to add the necessary `*` argument, which forces `reason` to be
a keyword-only argument.
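For context, a bare `*` in a signature is what makes the parameters after
it keyword-only; a minimal sketch of the corrected shape (illustrative,
not the exact constructor):

    class BstError(Exception):

        def __init__(self, message, *, reason=None):
            super().__init__(message)
            self.reason = reason

    BstError("failed to fetch", reason="fetch-failed")   # ok
    BstError("failed to fetch", "fetch-failed")          # TypeError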
|
|
|
|
|
|
|
| |
directory
This changes the UX to report a better human readable error, which
is otherwise a BUG message with stack trace.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Outline of changes to exceptions:
o The BstError base class now has `domain` and `reason` members;
the `reason` member is optional.
o All derived error classes now specify their `domain`
o For LoadError, LoadErrorReason defines the error's `reason`
o The scheduler `job` class now takes care of sending the error
reason and domain back to the main process, where the last
exception which caused a child task to fail can be discretely stored.
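A condensed sketch of the shape this gives the hierarchy (the domain
values and constructor details here are illustrative):

    from enum import Enum

    class ErrorDomain(Enum):
        PLUGIN = 1
        LOAD = 2
        SANDBOX = 3

    class BstError(Exception):
        def __init__(self, message, *, domain=None, reason=None):
            super().__init__(message)
            self.domain = domain      # which subsystem raised the error
            self.reason = reason      # optional machine readable reason

    class LoadError(BstError):
        def __init__(self, reason, message):
            # LoadErrorReason values provide the machine readable reason
            super().__init__(message, domain=ErrorDomain.LOAD, reason=reason)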
|
| |
|
|
|
|
|
|
| |
In order to avoid multiple traversals of the file system when the workspace key
is requested multiple times, it is now cached in the source element. The cache
is invalidated if the workspace is deleted or moved.
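A minimal sketch of that caching pattern; the traversal helper here is
hypothetical and stands in for the existing hashing code:

    class Source(Plugin):

        __workspace_key = None   # cached key, None means "not yet computed"

        def _get_workspace_key(self):
            # Only traverse the file system the first time it is requested
            if self.__workspace_key is None:
                self.__workspace_key = self._hash_workspace_tree()   # hypothetical helper
            return self.__workspace_key

        def _invalidate_workspace_key(self):
            # Called when the workspace is deleted or moved
            self.__workspace_key = None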
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This changes workspaces created with the git source so that the
origin remote points to the source repository of the build element, as
opposed to the internal repository in the bst cache.
This introduces the init_workspace method in the source API. This
method, which defaults to calling stage(), is for setting up the
workspace after the creation of the workspace directory.
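Roughly, the new hook sits on the Source base class and plugins override
it when a plain stage() is not the right way to initialize a workspace
(a sketch; the git-side helper and attribute are assumptions):

    class Source(Plugin):

        def init_workspace(self, directory):
            # Default behavior: a workspace starts out as a plain
            # staging of the source into the new workspace directory
            self.stage(directory)

    class GitLikeSource(Source):

        def init_workspace(self, directory):
            # Produce a real clone whose 'origin' points at the upstream
            # repository rather than at the internal bst cache
            self._clone_into(directory, remote=self.original_url)   # hypothetical helper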
This is a part of issue #53
|
|
|
|
|
|
|
|
|
|
| |
Consequently:
o Changed Plugin.get_context() to a private Plugin._get_context() accessor.
o Updated anything which imports Context to do so from private _context module
o Updated docs to exclude the now private Context
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This required adding two new APIs to make up for it on the Source:
o get_project_directory()
Added here because elements should not be accessing external
resources; Sources need it for local files, GPG keys and such
o translate_url()
Used by sources to mish-mash the project aliases and create
real URLs.
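A small sketch of how sources might use the two new APIs (the 'path' and
'url' fields are just examples):

    import os

    class LocalLikeSource(Source):

        def configure(self, node):
            path = self.node_get_member(node, str, 'path')
            # Resolve project relative paths against the project directory,
            # since sources (unlike elements) may read external resources
            self.fullpath = os.path.join(self.get_project_directory(), path)

    class RemoteLikeSource(Source):

        def configure(self, node):
            url = self.node_get_member(node, str, 'url')
            # Expand any project defined alias (e.g. 'upstream:') into a real URL
            self.url = self.translate_url(url)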
|
|
|
|
|
|
| |
Base class for exceptions is now a part of the already private _exceptions module
Also moved PipelineError from _pipeline -> _exceptions module
|
|
|
|
| |
Hide all of buildstream's internal exceptions from the API surface.
|
|
|
|
|
|
|
|
|
|
| |
create SandboxError
These errors are a part of the public facing API, and the exceptions
module contains a lot of internal details to be hidden from the public API.
This move required creating SandboxError because sandbox related
code had previously been hijacking ElementError and raising that.
|
|
|
|
|
| |
This is the wrong place for the check; it needs to be done once
for the toplevel staging directory, not for each source.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Avoid trying to calculate cache keys and then running into an error
by just considering workspaces with missing content to be inconsistent.
This fixes issue #80
|
|
|
|
|
|
|
|
| |
NOTE: This commit changes cache keys for everything!
The core parses a 'directory' parameter which affects how all
sources are staged. This obviously changes the build, but until now
it was ignored in cache key calculation.
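Conceptually, the staging directory now becomes part of the key material
alongside whatever the plugin itself reports, roughly like this sketch:

    def source_cache_key(source, directory):
        return {
            'key': source.get_unique_key(),   # plugin supplied key material
            'directory': directory,           # where the source is staged
        }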
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
It is usually created by integration commands when staging dependencies,
but we cannot rely on this, as some elements have no dependencies and
some sources handle the non-existence of 'directory' better than others.
To avoid mandating that every source must be able to cope with a
non-existent 'directory', the Source will ensure it exists.
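In practice this amounts to creating the target before staging, roughly
(a sketch of the staging side; the function and parameter names are
illustrative):

    import os

    def stage_source(source, stage_root, directory=None):
        # Ensure the configured 'directory' exists before asking the
        # plugin to stage into it, so plugins need not handle its absence
        basedir = os.path.join(stage_root, directory) if directory else stage_root
        os.makedirs(basedir, exist_ok=True)
        source.stage(basedir)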
|
|
|
|
|
| |
This is for initializing a pipeline in a forcefully inconsistent state,
which is required for pipelines that execute TrackQueues.
|
| |
|
|
|
|
|
|
| |
This is for Source implementations to use a temp directory; it should
be used instead of Python's tempdir APIs because it will clean up the
tempdir automatically in the case of user provoked termination.
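The general shape of such a helper, as a sketch which leaves out the
termination handling that the real implementation hooks into:

    import shutil
    import tempfile
    from contextlib import contextmanager

    @contextmanager
    def source_tempdir(parent=None):
        # Like tempfile.TemporaryDirectory(), but the real helper also
        # arranges cleanup on user provoked termination (e.g. SIGTERM),
        # which plain tempfile usage would not guarantee.
        path = tempfile.mkdtemp(dir=parent)
        try:
            yield path
        finally:
            shutil.rmtree(path, ignore_errors=True)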
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
o The metaelements and metasources now carry the name; the loader
resolves source names now.
o Element/Source factories don't require a name anymore as they
are already in the meta objects
o Pipeline no longer composes names
o Element.name is now the original project relative filename;
this allows plugins to identify that name in their dependencies,
allowing one to express configuration which identifies elements
by the same name that the user used in the dependencies.
o Removed plugin._get_display_name() in favor of plugin.name
o Added Element.normal_name, for the cases where we need to have
a normalized name for creating directories and log files
o Updated the frontend, test cases and all callers to use the
new naming
|
|
|
|
| |
This was missed in the previous changes regarding caching
|
| |
|
|
|
|
| |
Now adds a Consistency class for consistency options
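For reference, a consistency enumeration of roughly this shape (the exact
values and comments here are illustrative):

    class Consistency():
        INCONSISTENT = 0   # no ref is set, the source cannot be fetched
        RESOLVED = 1       # a ref is set, but the source is not cached locally
        CACHED = 2         # the source is fully cached and ready to stage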
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now we have a different API around refreshing tracking references:
o get_ref(): The Source should return the currently set reference
o set_ref(): The Source should update the current ref, and update
the passed yaml node
o track(): Return the latest reference for a symbolic tracking ref;
this may involve network usage in order to track the new
ref.
This way we don't try to serialize entire yaml documents over the
subprocess pipe; some of the formatting, I think, may have been lost in
serialization.
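Put together, a source plugin implementing the new tracking API might look
roughly like this (illustrative only, not a real plugin):

    class TarballLikeSource(Source):

        def configure(self, node):
            self.ref = self.node_get_member(node, str, 'ref', None)

        def get_ref(self):
            # Report the currently set reference
            return self.ref

        def set_ref(self, ref, node):
            # Update our state and reflect the new ref in the yaml node,
            # so only the small node (not a whole document) needs to be
            # sent back over the subprocess pipe
            self.ref = ref
            node['ref'] = ref

        def track(self):
            # Consult the upstream (network access is allowed here) and
            # return the latest ref for the symbolic tracking target
            return self._query_latest_ref()   # hypothetical helper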
|