author    Lars Wirzenius <liw@liw.fi>  2014-10-12 18:38:29 +0300
committer Richard Ipsum <richardipsum@fastmail.co.uk>  2014-10-13 15:22:30 +0100
commit    38f81bea4d43994287b43947b91b317f7847e8a1 (patch)
tree      f827df90737675484d3bb1e28af9fd7e099889a5
parent    042b19d767925f491c2d0bafa15a12429e7e52a0 (diff)
download  lorry-controller-38f81bea4d43994287b43947b91b317f7847e8a1.tar.gz
Restructure and clarify ARCH text
-rw-r--r--  ARCH  170
1 file changed, 103 insertions, 67 deletions
diff --git a/ARCH b/ARCH
index 271b2bc..15ce872 100644
--- a/ARCH
+++ b/ARCH
@@ -5,56 +5,53 @@ Introduction
============
This is an architecture document for Lorry Controller. It is aimed at
-those who develop the software.
+those who develop the software, or develop against its HTTP API. See
+the file `README` for general information about Lorry Controller.
-Lorry is a tool in Baserock for mirroring code from whatever format
-upstream provides it into git repositories, converting them to git as
-needed. Lorry Controller is service, running on a Trove, which runs
-Lorry against all configured upstreams, including other Troves.
-
-Lorry Controller reads a configuration from a git repository. That
-configuration includes specifications of which upstreams to
-mirror/convert. This includes what upstream Troves to mirror. Lorry
-Controller instructs Lorry to push to a Trove's git repositories.
-
-Lorry specifications, and upstream Trove specifications, may include
-scheduling information, which the Lorry Controller uses to decide when
-to execute which specification.
Requirements
============
Some concepts/terminology:
-* CONFGIT is the git repository the Lorry Controller instance uses for
- its configuration.
-* Lorry specification: which upstream version control repository or
- tarball to mirror.
-* Trove specification: which upstream Trove to mirror. This gets
+* CONFGIT is the git repository Lorry Controller uses for its
+ configuration.
+
+* Lorry specification: the configuration that tells Lorry to mirror
+ an upstream version control repository or tarball. Note that a
+ `.lorry` file may contain several specifications (an example
+ follows this list).
+
+* Trove specification: which remote Trove to mirror. This gets
broken into generated Lorry specifications, one per git repository
- on the upstream Trove. There can be many Trove specifications to
+ on the other Trove. There can be many Trove specifications to
mirror many Troves.
-* job: An instance of executing a Lorry specification. Each job has an
- identifier and associated data (such as the output provided by the
- running job, and whether it succeeded).
+
* run queue: all the Lorry specifications (from CONFGIT or generated
- from the Troe specifications) a Lorry Controller knows about; this
+ from the Trove specifications) a Lorry Controller knows about; this
is the set of things that get scheduled. The queue has a linear
order (first job in the queue is the next job to execute).
+
+* job: An instance of executing a Lorry specification. Each job has an
+ identifier and associated data (such as the output provided by the
+ running job, and whether it succeeded).
+
* admin: a person who can control or reconfigure a Lorry Controller
- instance.
+ instance. All users of the HTTP API are admins, for example.
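
For example, a `.lorry` file with a single specification might look
roughly like this (the repository name and URL are invented for
illustration; see the Lorry documentation for the exact format):

    {
        "upstream/hello": {
            "type": "git",
            "url": "git://example.com/hello.git"
        }
    }
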
-Original set of requirement, which have been broken down and detailed
+Original set of requirements, which have been broken down and detailed
up below:
* Lorry Controller should be capable of being reconfigured at runtime
to allow new tasks to be added and old tasks to be removed.
(RC/ADD, RC/RM, RC/START)
+
* Lorry Controller should not allow all tasks to become stuck if one
task is taking a long time. (RR/MULTI)
+
* Lorry Controller should not allow stuck tasks to remain stuck
forever. (Configurable timeout? monitoring of disk usage or CPU to
see if work is being done?) (RR/TIMEOUT)
+
* Lorry Controller should be able to be controlled at runtime to allow:
- Querying of the current task set (RQ/SPECS, RQ/SPEC)
- Querying of currently running tasks (RQ/RUNNING)
@@ -84,13 +81,14 @@ used elsewhere to refer to the exact requirement):
* (RC/START) A Lorry Controller reads CONFGIT when it starts,
updating its run queue if anything has changed.
* (RT) Lorry Controller can be controlled at runtime.
- * (RT/KILL) An admin can get their Lorry Controller to stop a running job.
- * (RT/TOP) An admin can get their Lorry Controller to move a Lorry spec to
- the beginning of the run queue.
+ * (RT/KILL) An admin can get their Lorry Controller to stop a
+ running job.
+ * (RT/TOP) An admin can get their Lorry Controller to move a Lorry
+ spec to the beginning of the run queue.
* (RT/BOT) An admin can get their Lorry Controller to move a Lorry
spec to the end of the run queue.
- * (RT/QSTOP) An admin can stop their Lorry Controller from scheduling any new
- jobs.
+ * (RT/QSTOP) An admin can stop their Lorry Controller from
+ scheduling any new jobs.
* (RT/QSTART) An admin can get their Lorry Controller to start
scheduling jobs again.
* (RQ) Lorry Controller can be queried at runtime.
@@ -143,15 +141,16 @@ Constraints
Python is not good at multiple threads (partly due to the global
interpreter lock), and mixing threads and executing subprocesses is
-quite tricky to get right in general. Thus, this design avoids using
-threads.
+quite tricky to get right in general. Thus, this design splits the
+software into a threaded web application (using the bottle.py
+framework) and one or more single-threaded worker processes to execute
+Lorry.
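
As a rough illustration of this split, a single WEBAPP route in
bottle.py might look like the sketch below (a sketch only: the
returned fields are invented, and the real WEBAPP is more involved):

    import bottle

    app = bottle.Bottle()

    @app.get('/1.0/status')
    def status():
        # Runs in whichever thread the WSGI server picks; any shared
        # state must live in STATEDB, not in process memory.
        return {'queue-running': True, 'running-jobs': []}

    # Under WSGI the web server imports `app`; for local testing
    # (the port is arbitrary):
    # bottle.run(app, host='127.0.0.1', port=12765)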
Entities
--------
-* An admin is a human being that communicates with the Lorry
- Controller using an HTTP API. They might do it using a command line
- client.
+* An admin is a human being or some software using the HTTP API to
+ communicate with the Lorry Controller.
* Lorry Controller runs Lorry appropriately, and consists of several
components described below.
* The local Trove is where Lorry Controller tells its Lorry to push
@@ -163,28 +162,32 @@ Components of Lorry Controller
------------------------------
* CONFGIT is a git repository for Lorry Controller configuration,
- which the Lorry Controller can access and pull from. Pushing is not
- required and should be prevented by Gitano. CONFGIT is hosted on the
- local Trove.
+ which the Lorry Controller (see WEBAPP below) can access and pull
+ from. Pushing is not required and should be prevented by Gitano.
+ CONFGIT is hosted on the local Trove.
+
* STATEDB is persistent storage for the Lorry Controller's state: what
Lorry specs it knows about (provided by the admin, or generated from
a Trove spec by Lorry Controller itself), their ordering, jobs that
- have been run or are being run, information about the jobs, etc.
- The idea is that the Lorry Controller process can terminate (cleanly
- or by crashing), and be restarted, and continue approximately where
- it was. Also, a persistent storage is useful if there are multiple
- processes involved due to how bottle.py and WSGI work. STATEDB is
- implemented using sqlite3.
+ have been run or are being run, information about the jobs, etc. The
+ idea is that the Lorry Controller process can terminate (cleanly or
+ by crashing), and be restarted, and continue approximately from
+ where it was. Also, persistent storage is useful if there are
+ multiple processes involved due to how bottle.py and WSGI work.
+ STATEDB is implemented using sqlite3 (a sketch follows this list).
+
* WEBAPP is the controlling part of Lorry Controller, which maintains
the run queue, and provides an HTTP API for monitoring and
- controller Lorry Controller. WEBAPP is implemented as a bottle.py
- application.
+ controlling Lorry Controller. WEBAPP is implemented as a bottle.py
+ application. bottle.py runs the WEBAPP code in multiple threads to
+ improve concurrency.
+
* MINION runs jobs (external processes) on behalf of WEBAPP. It
communicates with WEBAPP over HTTP, and requests a job to run,
- starts it, and while it waits, sends partial output to the WEBAPP,
- and asks the WEBAPP whether the job should be aborted or not. MINION
- may eventually run on a different host than WEBAPP, for added
- scalability.
+ starts it, and while it waits, sends partial output to the WEBAPP
+ every few seconds, and asks the WEBAPP whether the job should be
+ aborted or not. MINION may eventually run on a different host than
+ WEBAPP, for added scalability.
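
To make the STATEDB idea concrete, the sketch below shows the kind
of sqlite3 use involved (the table and columns are invented for
illustration; they are not the real schema):

    import sqlite3

    # The real database location is a deployment detail; a relative
    # path is used here so the sketch runs anywhere.
    db = sqlite3.connect('statedb.sqlite')
    db.execute('CREATE TABLE IF NOT EXISTS jobs '
               '(job_id INTEGER PRIMARY KEY, path TEXT, exit TEXT)')
    db.commit()

    # After a crash or a restart, the new process re-reads the
    # database and continues approximately where the old one stopped.
    for row in db.execute('SELECT job_id, path, exit FROM jobs'):
        print(row)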
Components external to Lorry Controller
---------------------------------------
@@ -192,9 +195,10 @@ Components external to Lorry Controller
* A web server. This runs the Lorry Controller WEBAPP, using WSGI so
that multiple instances (processes) can run at once, and thus serve
many clients.
-* bottle.py is a Python microframework for web applications. We
- already have it in Baserock, where we use it for morph-cache-server,
- and it seems to be acceptable.
+
+* bottle.py is a Python microframework for web applications. It sits
+ between the web server itself and the WEBAPP code.
+
* systemd is the operating system component that starts services and
processes.
@@ -203,17 +207,21 @@ How the components work together
* Each WEBAPP instance is started by the web server, when a request
comes in. The web server is started by a systemd unit.
+
* Each MINION instance is started by a systemd unit. Each MINION
handles one job at a time, and doesn't block other MINIONs from
running other jobs. The admins decide how many MINIONs run at once,
depending on hardware resources and other considerations. (RR/MULTI)
+
* An admin communicates with the WEBAPP only, by making HTTP requests.
Each request is either a query (GET) or a command (POST). Queries
report state as stored in STATEDB. Commands cause the WEBAPP
instance to do something and alter STATEDB accordingly.
+
* When an admin makes changes to CONFGIT, and pushes them to the local
Trove, the Trove's git post-update hook makes an HTTP request to
WEBAPP to update STATEDB from CONFGIT. (RC/ADD, RC/RM)
+
* Each MINION likewise communicates only with the WEBAPP using HTTP
requests. MINION requests a job to run (which triggers WEBAPP's job
scheduling), and then reports results to the WEBAPP (which causes
@@ -221,8 +229,10 @@ How the components work together
continue running the job or not (RT/KILL). There is no separate
scheduling process: all scheduling happens when there is a MINION
available.
+
* At system start up, a systemd unit makes an HTTP request to WEBAPP
to make it refresh STATEDB from CONFGIT. (RC/START)
+
* A timer unit for systemd makes an HTTP request to get WEBAPP to
refresh the static HTML status page. (MON/STATIC)
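
For example, the start-up unit in the list above needs to make only
one HTTP request, along these lines (the WEBAPP address and port are
assumptions; the endpoint itself is described in the next section):

    import urllib.request

    # Ask WEBAPP to refresh STATEDB from CONFGIT at boot (RC/START).
    # The empty body makes urllib send a POST request.
    urllib.request.urlopen(
        'http://localhost:12765/1.0/read-configuration', data=b'')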
@@ -236,51 +246,72 @@ The WEBAPP
The WEBAPP provides an HTTP API as described below.
-Requests for admins:
+Run queue management:
-* `GET /1.0/status` causes WEBAPP to return a JSON object that
- describes the state of Lorry Controller. This information is meant
- to be programmatically useable and may or may not be the same as in
- the HTML page.
* `POST /1.0/stop-queue` causes WEBAPP to stop scheduling new jobs to
run. Any currently running jobs are not affected. (RT/QSTOP)
+
* `POST /1.0/start-queue` causes WEBAPP to start scheduling jobs
again. (RT/QSTART)
* `GET /1.0/list-queue` causes WEBAPP to return a JSON list of ids of
all Lorry specifications in the run queue, in the order they are in
the run queue. (RQ/SPECS)
-* `GET /1.0/lorry/<lorryspecid>` causes WEBAPP to return a JSON map
- (dict) with all the information about the specified Lorry
- specification. (RQ/SPEC)
+
* `POST /1.0/move-to-top` with `path=lorryspecid` as the body, where
`lorryspecid` is the id (path) of a Lorry specification in the run
queue, causes WEBAPP to move the specified spec to the head of the
run queue, and store this in STATEDB. It doesn't affect currently
running jobs. (RT/TOP)
+
* `POST /1.0/move-to-bottom` with `path=lorryspecid` in the body is
like `/move-to-top`, but moves the job to the end of the run queue.
(RT/BOT)
+Running job management:
+
* `GET /1.0/list-running-jobs` causes WEBAPP to return a JSON list of
ids of all currently running jobs. (RQ/RUNNING)
+
* `GET /1.0/job/<jobid>` causes WEBAPP to return a JSON map (dict)
with all the information about the specified job.
+
* `POST /1.0/stop-job` with `job_id=jobid` in the body, where `jobid`
  is the id of a running job, causes WEBAPP to record in STATEDB that
  the job is to be killed. (The actual killing happens when MINION
  gets around to it.) This request returns as soon as the STATEDB
  change is done.
+
* `GET /1.0/list-all-jobs` causes WEBAPP to return a JSON list of ids
of all jobs, running or finished, that it knows about. (RQ/ALLJOBS)
+
* `POST /1.0/remove-job` with `job_id=jobid` in the body, removes a
stopped job from the state database.
+
* `POST /1.0/remove-ghost-jobs` looks for any running jobs in STATEDB
that haven't been updated (with `job-update`, see below) in a long
time (see `--ghost-timeout`), and marks them as terminated. This is
used to catch situations when a MINION fails to tell the WEBAPP that
a job has terminated.
+Other status queries:
+
+* `GET /1.0/status` causes WEBAPP to return a JSON object that
+ describes the state of Lorry Controller. This information is meant
+ to be programmatically usable and may or may not be the same as in
+ the HTML page.
+
+* `GET /1.0/status-html` causes WEBAPP to return an HTML page that
+ describes the state of Lorry Controller. This also updates an
+ on-disk copy of the HTML page, which the web server is configured to
+ serve using a normal HTTP request. This is the primary interface for
+ human admins to look at the state of Lorry Controller. (MON/STATIC)
+
+* `GET /1.0/lorry/<lorryspecid>` causes WEBAPP to return a JSON map
+ (dict) with all the information about the specified Lorry
+ specification. (RQ/SPEC)
+
+
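As a usage sketch of the admin requests above (assuming, for
illustration only, that WEBAPP answers on localhost port 12765 and
that `upstream/hello` names a Lorry specification in the run queue):

    import json
    import urllib.parse
    import urllib.request

    BASE = 'http://localhost:12765/1.0'

    # List the run queue (RQ/SPECS).
    with urllib.request.urlopen(BASE + '/list-queue') as response:
        print(json.load(response))

    # Move one spec to the head of the run queue (RT/TOP).
    body = urllib.parse.urlencode({'path': 'upstream/hello'}).encode()
    urllib.request.urlopen(BASE + '/move-to-top', data=body)
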
Requests for MINION:
* `GET /1.0/give-me-job` is used by MINION to get a new job to run.
@@ -290,6 +321,7 @@ Requests for MINION:
to do, and MINION will then sleep for a while before it tries again.
WEBAPP updates STATEDB to record that the job is allocated to a
MINION.
+
* `POST /1.0/job-update` is used by MINION to push updates about the
job it is running to WEBAPP. The body sets fields `exit` (exit code
of program, or `no` if not set), `stdout` (some output from the
@@ -308,17 +340,21 @@ Other requests:
* `POST /1.0/read-configuration` causes WEBAPP to update its copy of
CONFGIT and update STATEDB based on the new configuration, if it has
changed. Returns OK/ERROR status. (RC/ADD, RC/RM, RC/START)
+
+ This is called by systemd units at system startup and periodically
+ (perhaps once a minute) otherwise. It can also be triggered by an
+ admin (there is a button on the `/1.0/status-html` web page).
+
* `POST /1.0/ls-troves` causes WEBAPP to refresh its list of
repositories in each remote Trove, if the current list is too old
(see the `ls-interval` setting for each remote trove in
`lorry-controller.conf`). This gets called from a systemd timer unit
at a suitable interval.
+
* `POST /1.0/force-ls-troves` causes the repository refresh to happen
- for all remote Troves, regardless of whether it is due or not.
-* `GET /1.0/status-html` causes WEBAPP to return an HTML page that
- describes the state of Lorry Controller. This also updates an
- on-disk copy of the HTML page, which the web server is configured to
- serve using a normal HTTP request. (MON/STATIC)
+ for all remote Troves, regardless of whether it is due or not. This
+ can be called manually by an admin.
+
The MINION
----------
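
A rough sketch of the MINION request loop implied by the API above
(the WEBAPP address and the `job_id` field name are assumptions; the
real protocol fields are not specified in this document):

    import json
    import time
    import urllib.request

    BASE = 'http://localhost:12765/1.0'

    while True:
        # Ask WEBAPP for work (give-me-job).
        with urllib.request.urlopen(BASE + '/give-me-job') as response:
            job = json.load(response)
        if not job or job.get('job_id') is None:
            time.sleep(10)  # nothing to do; try again later
            continue
        # Run the job here, POSTing partial output to /job-update
        # every few seconds and honouring any kill request in the
        # reply (RT/KILL).
        print('would run job', job['job_id'])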