Commit message | Author | Age | Files | Lines
You can bind to an ephemeral port by passing 0 as the port number.
To work out which port you actually got, you need to call getsockname().
To make it easier to spawn multiple copies of the daemons in testing
environments, you can pass a -file option, which makes the daemon
write out which port it actually bound to.
If this path is a fifo, the spawner process can read from it to
synchronise: services that require the port are only spawned after it
is ready.
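The pattern described above can be sketched as follows (the function
name and the -file handling are illustrative, not morph's actual code):

```python
import socket

def bind_ephemeral(port_file=None):
    """Bind to an OS-assigned port and optionally record it in a file."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('127.0.0.1', 0))    # port 0: the kernel picks a free port
    sock.listen(1)
    port = sock.getsockname()[1]   # ask which port we actually got
    if port_file is not None:
        # If port_file is a fifo, this write unblocks a reader in the
        # spawner process, signalling that the port is ready.
        with open(port_file, 'w') as f:
            f.write('%d\n' % port)
    return sock, port
```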
This is needed for distbuild to deserialise based on the split rules on
the node that did the graph calculation, rather than the node that does
the building.
I've rarely needed to use it, and on those rare occasions, it would have
been easy enough to calculate it.
Let's get rid of this step, and save everyone some time in future.
It's easy enough to deploy the image.
Currently, if morph is installed in the system, `morph --version`
prints the sha1 of the version installed:

    $ morph --version
    e8adedb8f3f27d9212caf277b8e8f7c6792a20c2

If you run morph from git, the output will be something similar to
the following:

    $ morph --version
    baserock-14.26-124-g7b73af4

This patch changes the behaviour of the latter to match the former.
The msg parameter to status is a format string. If we pass a string
directly to it, then we have to be careful to escape any formatting
characters.
However, we can just do the interpolation directly in the status call
instead, which is less code.
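A sketch of the difference (the `status` stand-in here returns its
result instead of logging, and its signature is an assumption based on
the description above):

```python
def status(msg, **kwargs):
    # Stand-in for the real status call: msg is a format string.
    return msg % kwargs

path = 'build-100%/chunk'

# Passing data directly as the format string is unsafe: the literal '%'
# would be treated as a conversion specifier and raise ValueError.
#   status('Unpacking ' + path)
# Doing the interpolation in the status call avoids the problem:
result = status('Unpacking %(path)s', path=path)
```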
Reviewed-by: Lars Wirzenius (+2 to misc fixups)
Reviewed-by: Sam Thursfield (+1 to per-source building)
Reviewed-by: Paul Sherwood (+1 to per-source building)
This was used by artifacts to tell what they should put in their path,
based on what prefixes were used earlier.
This implementation didn't handle recursive deps, and it was trivial to
open-code it at the call-site, so it is no longer useful.
This means we can avoid having to rewrite everything immediately after
the fields moved.
This is logically part of the previous patch, but has been split out to
ease reviewing.
Building per-artifact results in undesirable behaviour,
as multiple artifacts are produced for every chunk build.
It therefore makes more sense to build per-source.
This implies that actually, the model of one source per
morphology is wrong and we should move the dependencies
into the source.
Unlike chunks however, where every chunk artifact has the
same dependencies, stratum artifacts can have different
dependencies.
So before we can move the dependencies into the Source,
we need to have as many Sources as Stratum Artifacts.
I need to be able to deduplicate a list of morphologies. Putting it in a
set is the easiest way, but it needs to be hashable.
Dicts aren't hashable by default, since they are mutable: you can
change one while it's in a set, so its hash value could change.
I don't need to deduplicate morphologies by their contents, just by
reference though, so using `id` as the hash function is sufficient.
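A minimal sketch of hashing by reference (the class name is
illustrative; the real Morphology class has more to it):

```python
class Morphology(dict):
    """A dict-like object that hashes by identity, not by contents."""
    def __hash__(self):
        # Two Morphology objects with equal contents are still distinct
        # set members; mutation cannot change the hash value.
        return id(self)
```

With this, putting morphologies in a set deduplicates repeated
references to the same object while keeping distinct objects, even if
their contents happen to be equal.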
This means we can remove some complication from the MorphologyFactory
class.
This was used before the Artifact splitting code landed to determine
which artifacts should be produced.
There are other methods called get_sources in other modules, and
fetch_sources better explains what this one does in context.
metadata_version is for when the format of the /baserock metadata files
changes.
This means it would make sense for it to either live with the code for
generating the metadata, or the cache key code so it lives with the rest
of the compatibility values.
Since the code for generating the metadata isn't in a nice module
anywhere, I've put it in the cachekeycomputer module.
It was an odd thing to have, when Artifact objects are part of your
input.
Its purpose appears to have been to allow the build step to produce an
artifact called $morphology_name-rootfs, but since the split rules
decide that the artifact is called that anyway, the new_artifact step is
redundant.
This helps when debugging rule-matching issues, since SplitRules
objects can now be inspected with print statements.
It rather peculiarly defines artifacts that have different cache keys,
but the same source.
This flies in the face of how real artifacts get cache keys, and our
ability to move the cache key to being per-source.
We don't care if add_dependency causes there to be multiple dependents,
just that our artifacts are properly included.
This pre-dates deployment, and if we need the kernel, we can always copy
it out of the rootfs.
It also uses new_artifact, which is a method I want to remove.
The upstream cliapp project is not interested in this functionality
right now.
This involved rewriting the util.log_dict_diff() function. It has been
renamed to log_environment_changes() to better reflect its purpose.
It no longer logs both the old and new values in the event of an
environment variable changing. It now just logs the new value. This makes
the code simpler and seems like it should not be a big problem.
Some projects recommend passing credentials through the environment.
OpenStack does this, for example, see:
<http://docs.openstack.org/user-guide/content/cli_openrc.html>
It's unlikely that users would be happy about applications saving
these passwords in log files all over their system.
I do not recommend ever storing valuable passwords in the environment.
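The simplified behaviour might look like this (the name and signature
are assumptions; the real function lives in morphlib's util module and
logs rather than returns):

```python
def log_environment_changes(old_env, new_env):
    """Return 'NAME=value' lines for variables that are new or changed.

    Only the new value is reported; the old value is deliberately not
    included, which keeps the code simple.
    """
    return ['%s=%s' % (name, new_env[name])
            for name in sorted(new_env)
            if old_env.get(name) != new_env[name]]
```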
This works around http://bugs.python.org/issue12841.
The code added in this patch is from the tarfile.py shipped with
Python 2.7.3.
The workaround for tarfile's makefile is only applied when the Python
version is older than 2.7.4.
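The version check amounts to a tuple comparison (the function name is
illustrative):

```python
import sys

def needs_tarfile_workaround(version_info=sys.version_info):
    """True on Pythons affected by issue 12841, i.e. older than 2.7.4."""
    return tuple(version_info[:3]) < (2, 7, 4)
```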
It broke when we added /baserock/deployment.meta.
We didn't notice this because our test suite was looking at the
artifact produced by morph build, as listed on the terminal.
The openstack.write extension was calling a nonexistent method
'check_location'. This method was moved to openstack.check
in commit ba7d1d1ed3bad002ce36e5d4adf4e3794625091a.
Reviewed-by: Sam Thursfield
Reviewed-by: Lars Wirzenius
If the credentials are wrong, then morph will fail before
attempting the OpenStack deployment.
To achieve that, openstack.check will attempt to run
`glance image-list`.
Found by Richard Maw.
The KVM and VirtualBox deployments use sparse files for raw disk
images. This means they can store a large disk (say, tens or hundreds
of gigabytes) without using more disk space than is required for the
actual content (e.g., a gigabyte or so for the files in the root
filesystem). The kernel and filesystem make the unwritten parts of the
disk image look as if they are filled with zero bytes. This is good.
However, during deployment those sparse files get transferred as if
there really are a lot of zeroes. Those zeroes take a lot of time to
transfer. rsync, for example, does not handle large holes efficiently.
This change introduces a couple of helper tools (morphlib/xfer-hole
and morphlib/recv-hole), which transfer the holes more efficiently.
The xfer-hole program reads a file and outputs records like these:

    DATA
    123
    binary data (exactly 123 bytes and no newline at the end)
    HOLE
    3245

xfer-hole can do this efficiently, without having to read through all
the zeroes in the holes, using the SEEK_DATA and SEEK_HOLE arguments
to lseek.
Using this, the holes take only a few bytes each, making it
possible to transfer a disk image faster. In my benchmarks,
transferring a 100 GB disk image took about 100 seconds for KVM,
and 220 seconds for VirtualBox (which needs to do more work at the
receiver to convert the raw disk to a VDI). Both benchmarks were from
a VM on my laptop to the laptop itself.
The interesting bit here is that the receiver (recv-hole) is simple
enough that it can be implemented in a bit of shell script, and the
text of the shell script can be run on the remote end by giving it to
ssh as a command line argument. This means there is no need to install
any special tools on the receiver, which makes using this improvement
much simpler.
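The core of xfer-hole's scan can be sketched with os.lseek (Linux,
Python 3.3+; the helper name is illustrative, and the real tool
streams the records rather than collecting them):

```python
import os

def iter_extents(fd):
    """Yield (kind, offset, length) records, where kind is 'DATA' or
    'HOLE', without reading through the zeroes in the holes."""
    size = os.fstat(fd).st_size
    offset = 0
    while offset < size:
        try:
            data = os.lseek(fd, offset, os.SEEK_DATA)
        except OSError:
            # No data after offset: the rest of the file is one hole.
            yield ('HOLE', offset, size - offset)
            return
        if data > offset:
            yield ('HOLE', offset, data - offset)
        hole = os.lseek(fd, data, os.SEEK_HOLE)  # EOF counts as a hole
        yield ('DATA', data, hole - data)
        offset = hole
```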
Rather than using `git ls-remote` every time to see if there are changes
at the remote end, use a local cache.
Git already solves this problem with its refs/remotes/$foo branch
namespace, so we can just use that instead.
In addition, the detection of which upstream branch to use has been
improved; it now uses git's notion of which the upstream branch of your
current branch is, which saves effort in the implementation, and allows
the name of the local branch to differ from that of the remote branch.
This now won't notice if the branch you currently have checked out
had commits pushed from another source, but for some use-cases this is
preferable, as the result is equivalent to what you would have built
before the other push.
It may make sense to further extend this logic to check that the local
branch is not ahead of the remote branch, instead of requiring them to
be equal.
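The upstream-branch detection can lean on git's own plumbing; a
sketch using subprocess (the wrapper is illustrative, but
`git rev-parse --abbrev-ref @{upstream}` is a standard invocation):

```python
import subprocess

def upstream_branch(repo_dir):
    """Return the upstream ref of the current branch, e.g.
    'origin/master', or None if no upstream is configured."""
    try:
        out = subprocess.check_output(
            ['git', 'rev-parse', '--abbrev-ref', '@{upstream}'],
            cwd=repo_dir, stderr=subprocess.DEVNULL)
    except subprocess.CalledProcessError:
        return None
    return out.decode().strip()
```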
Reviewed-By: Richard Maw <richard.maw@codethink.co.uk>
Reviewed-By: Lars Wirzenius <lars.wirzenius@codethink.co.uk>
There's no need to log every time we look something up in a dict. This just
makes log files huge.
The CacheKeyComputer.compute_key() function still logs the first time it
calculates a cache key for an Artifact instance. This message now includes
the hex string that is used to identify the artifact.
Previously, logging was disabled for Python deploy extensions.
Enabling it gets us a lot of useful information for free in the log
file: the environment is written when the subprocess starts, and if it
crashes the full backtrace is written there too. Subcommand execution
with cliapp.runcmd() is also logged.
This is achieved by passing it the write end of a pipe, so that the
extension has somewhere to write debug messages without clobbering either
stdout or stderr.
Previously deployment extensions could only display status messages on
stdout, which is good for telling the user what is happening but is not
useful when trying to do post-mortem debugging, when more information is
usually required.
This uses the asyncore asynchronous event framework, which is rather
specific to sockets, but can be made to work with file descriptors, and
has the advantage that it's part of the Python standard library.
It relies on polling file descriptors, but there's a trick with pipes to
use them as a notification that a process has exited:
1. Ensure the write-end of the pipe is only open in the process you
want to know when it exits
2. Make sure the pipe's FDs are set in non-blocking read/write mode
3. Call select or poll on the read-end of the file descriptor
4. If select/poll says you can read from that FD, but you get an EOF,
then the process has closed it, and if you know it doesn't do that
in normal operation, then the process has terminated.
It took a few weird hacks to get the async_chat module to unregister
its event handlers on EOF, but the result is an event loop that is
asleep until the kernel tells it that it has to do something.
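A simplified version of the pipe trick, using plain select instead of
asyncore (illustrative only, not the actual event-loop code):

```python
import os
import select

def wait_for_exit(read_fd):
    """Block until the only process holding the pipe's write end exits.

    Any data written before exit (e.g. debug messages) is drained; an
    empty read after select reports readability means EOF, i.e. the
    writer has closed its end, normally by terminating.
    """
    while True:
        readable, _, _ = select.select([read_fd], [], [])
        if read_fd in readable and os.read(read_fd, 4096) == b'':
            return
```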
The arguments to `morph deploy` can get quite long, so any way to
make them shorter and clearer is useful. It also lets us avoid the
strange --no-upgrade flag in future.