This means logging gets its own file descriptor instead of taking over
stderr, which is probably better.
It's too hard to build these into Morph's default rules for now.
Otherwise, one can become confused about why old URLs are being used.
We can discover the Gem install path at build-time with `gem environment
home`.
Instead of exiting immediately, collect the errors up and report them
at the end. The tool should do as much as possible on each execution.
For example, this helps in the case where a user wants to get a rough
idea of how much is involved to integrate a given package.
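The collect-then-report pattern described above can be sketched as follows; the function and argument names are illustrative, not the import tool's real API:

```python
def process_packages(packages, import_one):
    """Run import_one() for every package, collecting failures instead
    of stopping at the first one, so that all errors can be reported
    together at the end of the run."""
    imported, errors = [], []
    for name in packages:
        try:
            import_one(name)
            imported.append(name)
        except Exception as e:
            errors.append((name, e))
    return imported, errors
```

At the end of the run the tool can then print every collected error, giving the user a full picture of what integrating a package involves in a single execution.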
Caveat: requires a cliapp version newer than the one currently in Baserock.
This allows tools other than Morph to store information in morphologies.
While this probably shouldn't be encouraged, it's proved very useful
during development of the import tool.
At least for now.
This is a generic tool which allows using metadata from foreign
packaging systems to create morphologies.
So far it supports RubyGems, but it should be extendable to other
packaging systems.
It is not complete and lacks many things.
Reviewed-By: Richard Ipsum <richard.ipsum@codethink.co.uk>
Reviewed-By: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
Reviewed-by: Daniel Silverstone
Reviewed-by: Lars Wirzenius
The openstack.write extension was calling a nonexistent method
'check_location'. This method was moved to openstack.check
in commit ba7d1d1ed3bad002ce36e5d4adf4e3794625091a.
Without this, they won't be installed in the binary.
Reviewed-by: Sam Thursfield
Reviewed-by: Lars Wirzenius
If the credentials are wrong, then morph will fail before
attempting the OpenStack deployment.
To achieve this, openstack.check attempts to run
`glance image-list`.
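A minimal sketch of the idea: run a harmless read-only command with the deployment credentials and treat a non-zero exit status as a failed check. The helper name here is hypothetical; the real extension runs `glance image-list`.

```python
import subprocess

def command_succeeds(argv, env=None):
    """Return True if argv exits with status 0, discarding its output.
    For example, command_succeeds(['glance', 'image-list'], env=creds)
    fails early when the OpenStack credentials are wrong."""
    return subprocess.call(argv, env=env,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0
```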
Reviewed by various people.
Found by Richard Maw.
The KVM and VirtualBox deployments use sparse files for raw disk
images. This means they can store a large disk (say, tens or hundreds
of gigabytes) without using more disk space than is required for the
actual content (e.g., a gigabyte or so for the files in the root
filesystem). The kernel and filesystem make the unwritten parts of the
disk image look as if they are filled with zero bytes. This is good.
However, during deployment those sparse files get transferred as if
there really are a lot of zeroes. Those zeroes take a lot of time to
transfer. rsync, for example, does not handle large holes efficiently.
This change introduces a couple of helper tools (morphlib/xfer-hole
and morphlib/recv-hole), which transfer the holes more efficiently.
The xfer-hole program reads a file and outputs records like these:

    DATA
    123
    binary data (exactly 123 bytes and no newline at the end)
    HOLE
    3245

xfer-hole can do this efficiently, without having to read through all
the zeroes in the holes, using the SEEK_DATA and SEEK_HOLE arguments
to lseek.
Using this, the holes take only a few bytes each, making it
possible to transfer a disk image much faster. In my benchmarks,
transferring a 100 GB disk image took about 100 seconds for KVM,
and 220 seconds for VirtualBox (which needs to do more work at the
receiver to convert the raw disk to a VDI). Both benchmarks were from
a VM on my laptop to the laptop itself.
The interesting bit here is that the receiver (recv-hole) is simple
enough that it can be implemented in a bit of shell script, and the
text of the shell script can be run on the remote end by giving it to
ssh as a command line argument. This means there is no need to install
any special tools on the receiver, which makes using this improvement
much simpler.
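The SEEK_DATA/SEEK_HOLE technique can be sketched in Python; the real sender is morphlib/xfer-hole, so this is only an illustration of the lseek calls, and it assumes Linux 3.1+ (filesystems without native hole support fall back to reporting one big data extent):

```python
import os

def list_extents(path):
    """Yield (kind, offset, length) extents of a file, where kind is
    'data' or 'hole', using SEEK_DATA/SEEK_HOLE so the zeroes inside
    holes are never actually read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        offset = 0
        while offset < size:
            try:
                data = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError:           # ENXIO: only a hole remains
                yield ('hole', offset, size - offset)
                return
            if data > offset:
                yield ('hole', offset, data - offset)
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            yield ('data', data, hole - data)
            offset = hole
    finally:
        os.close(fd)
```

A sender built on this emits each data extent's bytes and, for each hole, only its length, which is why the holes cost a few bytes each on the wire.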
Rather than using `git ls-remote` every time to see if there are changes
at the remote end, use a local cache.
Git already solves this problem with its refs/remotes/$foo branch
namespace, so we can just use that instead.
In addition, the detection of which upstream branch to use has been
improved; it now uses git's own notion of the current branch's
upstream, which saves effort in the implementation and allows the
name of the local branch to differ from that of the remote branch.
This won't notice if the branch you currently have checked out had
commits pushed from another source, but for some use-cases this is
preferable, as the result is equivalent to what you would have got by
building before the other push.
It may make sense to further extend this logic to check that the local
branch is not ahead of the remote branch, instead of requiring them to
be equal.
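The upstream-detection part can be illustrated with git's own plumbing; this is a sketch of the idea under stated assumptions, not Morph's implementation:

```python
import subprocess

def upstream_ref(repo_dir):
    """Return the full name of the current branch's upstream tracking
    ref (e.g. 'refs/remotes/origin/master'), or None if no upstream
    is configured."""
    try:
        out = subprocess.check_output(
            ['git', 'rev-parse', '--symbolic-full-name', '@{upstream}'],
            cwd=repo_dir, stderr=subprocess.DEVNULL)
    except subprocess.CalledProcessError:
        return None
    return out.decode().strip()

def matches_upstream(repo_dir):
    """True if HEAD and its upstream point at the same commit, i.e. the
    local branch is neither ahead of nor behind the cached remote ref."""
    upstream = upstream_ref(repo_dir)
    if upstream is None:
        return False
    rev = lambda ref: subprocess.check_output(
        ['git', 'rev-parse', ref], cwd=repo_dir)
    return rev('HEAD') == rev(upstream)
```

Because `@{upstream}` resolves through `refs/remotes/`, the check consults the local cache that `git fetch` maintains, with no `git ls-remote` round-trip.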
Reviewed-By: Richard Maw <richard.maw@codethink.co.uk>
Reviewed-By: Lars Wirzenius <lars.wirzenius@codethink.co.uk>
There's no need to log every time we look something up in a dict. This just
makes log files huge.
The CacheKeyComputer.compute_key() function still logs the first time it
calculates a cache key for an Artifact instance. This message now includes
the hex string that is used to identify the artifact.
Reviewed-By: Richard Maw <richard.maw@codethink.co.uk>
Reviewed-By: Sam Thursfield <sam.thursfield@codethink.co.uk>
Previously, logging was disabled for Python deploy extensions.
Enabling it gets us a lot of useful information in the log file for
free: the environment is written when the subprocess starts, and if it
crashes the full backtrace is written there too. Subcommand execution
with cliapp.runcmd() is also logged.
This is achieved by passing it the write end of a pipe, so that the
extension has somewhere to write debug messages without clobbering either
stdout or stderr.
Previously deployment extensions could only display status messages on
stdout, which is good for telling the user what is happening but is not
useful when trying to do post-mortem debugging, when more information is
usually required.
This uses the asyncore asynchronous event framework, which is rather
specific to sockets, but can be made to work with file descriptors, and
has the advantage that it's part of the python standard library.
It relies on polling file descriptors, but there's a trick with pipes to
use them as a notification that a process has exited:
1. Ensure the write-end of the pipe is only open in the process you
want to know when it exits
2. Make sure the pipe's FDs are set in non-blocking read/write mode
3. Call select or poll on the read-end of the file descriptor
4. If select/poll says you can read from that FD, but you get an EOF,
then the process has closed it, and if you know it doesn't do that
in normal operation, then the process has terminated.
It took a few weird hacks to get the async_chat module to unregister
its event handlers on EOF, but the result is an event loop that is
asleep until the kernel tells it that it has to do something.
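The four steps above can be sketched like this, using select() directly rather than asyncore for brevity; the function name and structure are illustrative only:

```python
import os, select

def run_and_watch(child_main):
    """Fork a child that holds the only open copy of the pipe's write
    end, so EOF on the read end tells the parent the child has exited."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: write debug output, exit
        os.close(read_fd)
        child_main(write_fd)
        os._exit(0)
    os.close(write_fd)                 # step 1: parent drops its copy
    os.set_blocking(read_fd, False)    # step 2: non-blocking mode
    output = b''
    while True:
        select.select([read_fd], [], [])   # step 3: sleep until readable
        chunk = os.read(read_fd, 4096)
        if chunk == b'':               # step 4: EOF -> child terminated
            break
        output += chunk
    os.close(read_fd)
    os.waitpid(pid, 0)
    return output
```

Closing the parent's copy of the write end is the crucial step: if the parent kept it open, the read end would never see EOF and the loop would block forever.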
The arguments to `morph deploy` can get quite long, so any way we can
make them shorter and clearer is useful. This also lets us avoid the
strange --no-upgrade flag in future.
Reviewed-by: Lars Wirzenius (git-daemon)
Reviewed-by: Sam Thursfield (yarn fixup)
Previously we would use file: URIs to point to the git repositories.
This was fast and simple, but had the drawback that it bypassed all the
git cache logic, so changes to the git cache weren't adequately covered
by the test suite.
Now we spool up a simulated git server per scenario, and shut it down at
the end.