| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Change-Id: Id2e76f31c12178a42488489e320af0ed99b4c7eb
|
| |
Change-Id: Iae387e39c4a62ef608496d31c748493fa88ce3e1
|
| |
This adds NOVA_ENABLE_{CONTROLLER,COMPUTE}. Both are enabled by default,
and if CONTROLLER is enabled but COMPUTE isn't, then the conductor
service is enabled.
Change-Id: I523a7270d4afdcd1e2a30eaac42ea499581fe971
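A cluster-morphology sketch of how these flags might be used for a compute-only node; the system morph path, deploy label, and deployment type are illustrative assumptions, and only the NOVA_ENABLE_* keys come from this change:

```yaml
# Hypothetical compute-only node: run the compute service but not the
# controller services. All names except NOVA_ENABLE_* are made up.
name: example-nova-cluster
kind: cluster
systems:
- morph: systems/openstack-system-x86_64.morph
  deploy:
    compute-node:
      type: kvm
      NOVA_ENABLE_CONTROLLER: no
      NOVA_ENABLE_COMPUTE: yes
```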
|
| |
This adds NEUTRON_ENABLE_{MANAGER,CONTROLLER,AGENT} to determine which
parts should be run on a node, so a network node has MANAGER enabled,
but doesn't need CONTROLLER or AGENT, since those will be run on the
controller and compute nodes respectively.
This works by the configuration extension selectively enabling systemd
units, with config-setup always being run, and db-setup run on the
controller node.
Rather than putting the enable logic in three distinct setup services,
the main services' dependencies have been augmented so that they run
after the appropriate setup services if those are enabled, and do not
run if their configuration hasn't been created.
Change-Id: I7625074c94acfb49fc68660440609b0fe9c0052d
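A sketch of a network-node deployment under this scheme; every name apart from the NEUTRON_ENABLE_* keys is an illustrative assumption:

```yaml
# Hypothetical network node: MANAGER on; CONTROLLER and AGENT off,
# since those run on the controller and compute nodes respectively.
# All names except NEUTRON_ENABLE_* are made up.
name: example-neutron-cluster
kind: cluster
systems:
- morph: systems/openstack-system-x86_64.morph
  deploy:
    network-node:
      type: kvm
      NEUTRON_ENABLE_MANAGER: yes
      NEUTRON_ENABLE_CONTROLLER: no
      NEUTRON_ENABLE_AGENT: no
```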
|
| |
This will be fully functional after service configuration is better
partitioned between the nodes.
Change-Id: I7822c42b9087bc52111e8b7181b67f55d8393643
|
| |
Change-Id: I2eee55408b174dc820ce713e6821f200a1532a48
|
| |
This commit configures Ironic to integrate with Keystone, Neutron and
Glance. Nova integration will be added in a following commit.
Change-Id: Id557e8e048b6051d764b4915192cfd55bfe68d32
|
| |
Change-Id: I03aa39e33a2a8326c3d8a779dde9bc3bf0801266
|
| |
Change-Id: I8784857c1531cac0e1048da1bc83bdfda25258c2
|
| |
We now deploy Swift systems rather than devel systems.
We also now need to specify the controller host address,
since Swift storage nodes will use the controller
node to get their NTP time updates.
Change-Id: I2416aa9fc92161cb2df00ad1676c48810851f7f3
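A sketch of what a storage-node deployment might look like; the variable name CONTROLLER_HOST_ADDRESS, the address, and all other names here are assumptions for illustration:

```yaml
# Hypothetical Swift storage node pointing at the controller node,
# which it uses for NTP time updates. All names are illustrative.
name: example-swift-cluster
kind: cluster
systems:
- morph: systems/swift-system-x86_64.morph
  deploy:
    storage-node:
      type: kvm
      CONTROLLER_HOST_ADDRESS: 192.168.222.1
```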
|
| |
Add cluster definition and install system definition, to enable
deployment of a big-endian system to a Moonshot M.2 (SSD) device.
Change-Id: Icb2d48eff152a3df9556739fadbf4055478e79f4
|
| |
Add a cluster definition to enable deployment of a big-endian system to an
NFS/TFTP netboot server, from which a Moonshot node's U-Boot can "pxe" boot.
Change-Id: I6654879d61b58aebdb83bf490d77d8d403d13155
|
| |
Change-Id: Iad40b665edff7a3605b6600dafbcf67831e4290a
|
| |
It needs an initramfs to support UUID, without which you can't reliably
determine which device should be used as the rootfs.
Change-Id: If5f62428a299c1e06f55e15d0a0d8e3329362ab8
|
| |
As the installer system installs the rawdisk into a device, this
variable is not used.
Change-Id: Id6ba83ecbeb460813a074438930767638f68a141
|
| |
Change-Id: I041f7d0090b1fbbcfe1634b1635660fda56c9509
|
| | |
A new version of a Baserock Gerrit system definition now lives in
infrastructure.git.
Change-Id: I6aeed4c5381edf5e7736f1816f9d58832c0ac781
|
| |
As far as I know, these are out of date, unmaintained, and nobody is
using them. It was definitely a useful learning process to integrate
GitLab into Baserock, but I think this is now just taking up space in
definitions.git needlessly.
Change-Id: Ifdd9c0a3dd889382bc5e6825c2df4f3afbd89f3c
|
| |
They are long enough anyway.
|
| |
This is for an old implementation of Mason. The more recent
implementations don't need special configuration to be done on the
Trove.
|
| |
Java is sourced from the binary Java release from Oracle. This
chunk was originally written by Francisco Marchena.
Ant is a Java build system and is needed by ZooKeeper.
ZooKeeper itself is documented at http://zookeeper.apache.org/
This patch also brings in a ZooKeeper test program in a separate stratum
that can be safely discarded if not required. This test program
was written by me, <mike.smith@codethink.co.uk>, and is not designed to
be used in any practical way, but to showcase the functionality
of ZooKeeper within Baserock.
The ZooKeeper demonstration server and client are currently hosted
on baserock/test.
The Java binary chunk only works for x86_64. As such, these
systems are limited to that architecture.
|
| |
The installer-x86_64 system is a system that can be used to install other
systems on a storage device. This system is intended to be booted from USB,
PXE boot, etc. to install a Baserock system on your local disk.
The installer system requires the installer.configure extension to generate
a configuration file located in /etc/install.conf. With this extension
you can specify the following variables in a cluster morphology:
- INSTALLER_TARGET_STORAGE_DEVICE: target storage device to install the
Baserock system to.
- INSTALLER_ROOTFS_TO_INSTALL: the location of the root filesystem that
is going to be installed.
- INSTALLER_POST_INSTALL_COMMAND: commands that will be run after the
installation finishes. It defaults to `reboot -f`.
The installer-utils stratum is required, as it contains the installer-scripts
chunk. This chunk provides the installer script that is going to be
installed in /usr/lib/installer/installer.py.
The clusters/installer-build-system-x86_64.morph file defines the deployment
of an installer system as a rawdisk image. This installer system will
install a build-system-x86_64 located in /rootfs onto the /dev/sda device.
This cluster also defines a subsystem, which is the build-system that
is going to end up in /rootfs on the installer system.
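Pulling the variables above together, a minimal sketch of such a cluster morphology; the image name, disk size, morph paths, and the subsystem deployment type are illustrative assumptions, while the INSTALLER_* keys and their values come from the description:

```yaml
# Sketch of an installer deployment as a rawdisk image, with a
# build-system subsystem placed in /rootfs. Names other than the
# INSTALLER_* variables are assumptions.
name: example-installer-cluster
kind: cluster
systems:
- morph: systems/installer-system-x86_64.morph
  deploy:
    installer:
      type: rawdisk
      location: installer.img
      DISK_SIZE: 6G
      INSTALLER_TARGET_STORAGE_DEVICE: /dev/sda
      INSTALLER_ROOTFS_TO_INSTALL: /rootfs
      INSTALLER_POST_INSTALL_COMMAND: reboot -f
  subsystems:
  - morph: systems/build-system-x86_64.morph
    deploy:
      to-install:
        type: sysroot
        location: /rootfs
```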
|
| |
This will compile almost all the strata available in Baserock related
to building Linux systems (kernel, systemd, connectivity, multimedia, Wayland,
Weston...).
This will also make the artifact cache available to anyone who wants
to build starting from this reference system.
|
| |
Now that the build systems have the openstack-clients stratum,
this system is no longer needed.
|
| |
This will avoid problems when self-upgrading a system and will also
save time when flashing a Jetson TK1 board.
|
| |
Sometimes localhost does not resolve, which makes rsync to
root@localhost fail during upgrades with:
ERROR: ssh-rsync.check failed with code 1: ERROR: Unable to SSH to
root@localhost: Command failed: ssh root@localhost -- true
ssh: connect to host localhost port 22: Connection timed out
So let's set the default to 127.0.0.1, which should always work.
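An upgrade-cluster sketch with that default spelled out explicitly; the morph path and deployment label are illustrative assumptions:

```yaml
# Hypothetical self-upgrade deployment over ssh-rsync; with 127.0.0.1
# instead of localhost, the SSH connection does not depend on name
# resolution. Names other than the type and address are made up.
name: example-upgrade
kind: cluster
systems:
- morph: systems/devel-system-x86_64.morph
  deploy:
    self-upgrade:
      type: ssh-rsync
      location: root@127.0.0.1
```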
|
| |
This could be improved in future by combining the cluster morphology
with the existing one, and mason/mason-generator.sh being improved to
allow choice between OpenStack and KVM.
|
| | |
Reviewed-By: Richard Maw <richard.maw@codethink.co.uk>
Reviewed-By: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
|
| | |
The build-system is equivalent in functionality to the current
devel-system that we release, but this change allows us to add more
components to the devel-system without increasing the number of bytes we
have to arrange and transfer when making a release.
|
| | |
It's better to have one type of system that can do either distributed or
local builds than to have separate ones that must both be kept up to
date with changes.
The need for a separate 'distbuild' stratum already went away in:
commit 1a7fbedf56a4c7a6afb683851dde5d34bbb48b86
Author: Richard Maw <richard.maw@codethink.co.uk>
Date: Thu Oct 2 14:16:00 2014 +0000
Split morph out of tools
morph now contains distbuild and morph-cache-server, so the distbuild
stratum can go away, and anything that needs it can now use morph.
|
| | |
Reviewed-By: Pedro Alvarez <pedro.alvarez@codethink.co.uk>
Reviewed-By: Richard Maw <richard.maw@codethink.co.uk>
Reviewed-By: Sam Thursfield <sam.thursfield@codethink.co.uk>
|
| |
Make the name of the Jetson-specific Linux morph file consistent
with the others, and add the jetson-upgrade cluster.
|