% Architecture of daemonised Lorry Controller
% Codethink Ltd

Introduction
============

This is an architecture document for Lorry Controller. It is aimed at
those who develop the software.

Lorry is a tool in Baserock for mirroring code into git repositories
from whatever format the upstream provides, converting it to git as
needed. Lorry Controller is a service, running on a Trove, which runs
Lorry against all configured upstreams, including other Troves.

Lorry Controller reads its configuration from a git repository. The
configuration specifies which upstreams to mirror or convert,
including which upstream Troves to mirror. Lorry Controller instructs
Lorry to push the results to the local Trove's git repositories.

Lorry specifications, and upstream Trove specifications, may include
scheduling information, which the Lorry Controller uses to decide when
to execute which specification.

Requirements
============

Some concepts/terminology:

* CONFGIT is the git repository the Lorry Controller instance uses for
  its configuration.
* Lorry specification: which upstream version control repository or
  tarball to mirror.
* Trove specification: which upstream Trove to mirror. This gets
  broken into generated Lorry specifications, one per git repository
  on the upstream Trove. There can be many Trove specifications to
  mirror many Troves.
* job: An instance of executing a Lorry specification. Each job has an
  identifier and associated data (such as the output provided by the
  running job, and whether it succeeded).
* run queue: all the Lorry specifications (from CONFGIT or generated
  from the Trove specifications) a Lorry Controller knows about; this
  is the set of things that get scheduled. The queue has a linear
  order (the first job in the queue is the next one to execute).
* admin: a person who can control or reconfigure a Lorry Controller
  instance.

The original set of requirements, which has been broken down and
detailed further below:

*   Lorry Controller should be capable of being reconfigured at runtime
    to allow new tasks to be added and old tasks to be removed.
    (RC/ADD, RC/RM, RC/START)
*   Lorry Controller should not allow all tasks to become stuck if one
    task is taking a long time. (RR/MULTI)
*   Lorry Controller should not allow stuck tasks to remain stuck
    forever. (Configurable timeout? monitoring of disk usage or CPU to
    see if work is being done?) (RR/TIMEOUT)
*   Lorry Controller should be able to be controlled at runtime to allow:
    - Querying of the current task set (RQ/SPECS, RQ/SPEC)
    - Querying of currently running tasks (RQ/RUNNING)
    - Promotion or demotion of a task in the queue (RT/TOP, RT/BOT)
    - Supporting of the health monitoring to allow appropriate alerts
      to be sent out (MON/STATIC, MON/DU)

The detailed requirements (each prefixed by a unique identifier, which
is used elsewhere to refer to the exact requirement):

* (FW) Lorry Controller can access upstream Troves from behind firewalls.
    * (FW/H) Lorry Controller can access the upstream Trove using HTTP or
      HTTPS only, without using ssh, in order to get a list of
      repositories to mirror. (Lorry itself also needs to be able to
      access the upstream Trove using HTTP or HTTPS only, bypassing
      ssh, but that's a Lorry problem and outside the scope of Lorry
      Controller, so it'll need to be dealt with separately.)
    * (FW/C) Lorry Controller does not verify SSL/TLS certificates
      when accessing the upstream Trove.
* (RC) Lorry Controller can be reconfigured at runtime.
    * (RC/ADD) A new Lorry specification can be added to CONFGIT, and
      a running Lorry Controller will add them to its run queue as
      soon as it is notified of the change.
    * (RC/RM) A Lorry specification can be removed from CONFGIT, and a
      running Lorry Controller will remove it from its run queue as
      soon as it is notified of the change.
    * (RC/START) A Lorry Controller reads CONFGIT when it starts,
      updating its run queue if anything has changed.
* (RT) Lorry Controller can be controlled at runtime.
    * (RT/KILL) An admin can get their Lorry Controller to stop a running job.
    * (RT/TOP) An admin can get their Lorry Controller to move a Lorry spec to
      the beginning of the run queue.
    * (RT/BOT) An admin can get their Lorry Controller to move a Lorry
      spec to the end of the run queue.
    * (RT/QSTOP) An admin can stop their Lorry Controller from scheduling any new
      jobs.
    * (RT/QSTART) An admin can get their Lorry Controller to start
      scheduling jobs again.
* (RQ) Lorry Controller can be queried at runtime.
    * (RQ/RUNNING) An admin can list all currently running jobs.
    * (RQ/ALLJOBS) An admin can list all finished jobs that the Lorry
      Controller still remembers.
    * (RQ/SPECS) An admin can list all existing Lorry specifications
      in the run queue.
    * (RQ/SPEC) An admin can query existing Lorry specifications in
      the run queue for any information the Lorry Controller holds for
      them, such as the last time they successfully finished running.
* (RR) Lorry Controller is reasonably robust.
    * (RR/CONF) Lorry Controller ignores any broken Lorry or Trove
      specifications in CONFGIT, and runs without them.
    * (RR/TIMEOUT) Lorry Controller stops a job that runs for too
      long.
    * (RR/MULTI) Lorry Controller can run multiple jobs at the same
      time, and lets the admin configure the maximum number of such
      jobs.
    * (RR/DU) Lorry Controller (and the way it runs Lorry) is
      designed to be frugal about disk space usage.
    * (RR/CERT) Lorry Controller tells Lorry to not worry about
      unverifiable SSL/TLS certificates and to continue even if the
      certificate can't be verified or the verification fails.
* (RS) Lorry Controller is reasonably scalable.
    * (RS/SPECS) Lorry Controller works for the number of Lorry
      specifications we have on git.baserock.org (a number that will
      increase, and is currently about 500).
    * (RS/GITS) Lorry Controller works for mirroring git.baserock.org
      (about 500 git repositories).
    * (RS/HW) Lorry Controller may assume that CPU, disk, and
      bandwidth are sufficient, though they should not be needlessly
      wasted.
* (MON) Lorry Controller can be monitored from the outside.
    * (MON/STATIC) Lorry Controller updates a static HTML file at
      least once a minute; the file shows its current status in
      sufficient detail that an admin knows if things get stuck or
      break.
    * (MON/DU) Lorry Controller measures, at least, the disk usage of
      each job and Lorry specification.
* (SEC) Lorry Controller is reasonably secure.
    * (SEC/API) Access to the Lorry Controller run-time query and
      control interfaces is managed with iptables (for now).
    * (SEC/CONF) Access to CONFGIT is managed by the git server that
      hosts it. (Gitano on Trove.)

Architecture design
===================

Constraints
-----------

Python is not good at multiple threads (partly due to the global
interpreter lock), and mixing threads and executing subprocesses is
quite tricky to get right in general. Thus, this design avoids using
threads.

Entities
--------

* An admin is a human being that communicates with the Lorry
  Controller using an HTTP API. They might do it using a command line
  client.
* Lorry Controller runs Lorry appropriately, and consists of several
  components described below.
* The local Trove is where Lorry Controller tells its Lorry to push
  the results.
* Upstream Trove is a Trove that Lorry Controller mirrors to the local
  Trove. There can be multiple upstream Troves.

Components of Lorry Controller
------------------------------

* CONFGIT is a git repository for Lorry Controller configuration,
  which the Lorry Controller can access and pull from. Pushing is not
  required and should be prevented by Gitano. CONFGIT is hosted on the
  local Trove.
* STATEDB is persistent storage for the Lorry Controller's state: what
  Lorry specs it knows about (provided by the admin, or generated from
  a Trove spec by Lorry Controller itself), their ordering, jobs that
  have been run or are being run, information about the jobs, etc.
  The idea is that the Lorry Controller process can terminate (cleanly
  or by crashing), and be restarted, and continue approximately where
  it was. Also, persistent storage is useful because there may be
  multiple processes involved, due to how bottle.py and WSGI work.
  STATEDB is implemented using sqlite3.
* WEBAPP is the controlling part of Lorry Controller, which maintains
  the run queue, and provides an HTTP API for monitoring and
  controlling Lorry Controller. WEBAPP is implemented as a bottle.py
  application (a sketch follows this list).
* MINION runs jobs (external processes) on behalf of WEBAPP. It
  communicates with WEBAPP over HTTP, and requests a job to run,
  starts it, and while it waits, sends partial output to the WEBAPP,
  and asks the WEBAPP whether the job should be aborted or not. MINION
  may eventually run on a different host than WEBAPP, for added
  scalability.
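
To make this concrete, a single WEBAPP endpoint might look roughly
like the bottle.py sketch below. This is not the actual
implementation: the STATEDB location, the port, and the exact JSON
fields returned are assumptions, and the query refers to the
`running_queue` table described in the STATEDB section later in this
document.

    # Sketch of one WEBAPP endpoint as a bottle.py application.
    # The STATEDB location, port and JSON fields are assumptions.

    import json
    import sqlite3

    import bottle

    app = bottle.Bottle()
    STATEDB_PATH = '/var/lib/lorry-controller/statedb'  # assumed location


    @app.get('/1.0/status')
    def status():
        # Queries only report state stored in STATEDB; they do not change it.
        db = sqlite3.connect(STATEDB_PATH)
        try:
            row = db.execute('SELECT running FROM running_queue').fetchone()
        finally:
            db.close()
        bottle.response.content_type = 'application/json'
        return json.dumps({'running-queue': bool(row and row[0])})


    # Under WSGI the web server serves `app` directly; for local testing
    # bottle can run its own server (the port is an assumption):
    if __name__ == '__main__':
        bottle.run(app, host='127.0.0.1', port=12765)

Each WSGI process gets its own copy of the application, which is why
all shared state goes through STATEDB rather than process memory.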

Components external to Lorry Controller
---------------------------------------

* A web server. This runs the Lorry Controller WEBAPP, using WSGI so
  that multiple instances (processes) can run at once, and thus serve
  many clients.
* bottle.py is a Python microframework for web applications. We
  already have it in Baserock, where we use it for morph-cache-server,
  and it seems to be acceptable.
* systemd is the operating system component that starts services and
  processes.

How the components work together
--------------------------------

* Each WEBAPP instance is started by the web server, when a request
  comes in. The web server is started by a systemd unit.
* Each MINION instance is started by a systemd unit. Each MINION
  handles one job at a time, and doesn't block other MINIONs from
  running other jobs. The admins decide how many MINIONs run at once,
  depending on hardware resources and other considerations. (RR/MULTI)
* An admin communicates with the WEBAPP only, by making HTTP requests.
  Each request is either a query (GET) or a command (POST). Queries
  report state as stored in STATEDB. Commands cause the WEBAPP
  instance to do something and alter STATEDB accordingly.
* When an admin makes changes to CONFGIT, and pushes them to the local
  Trove, the Trove's git post-update hook makes an HTTP request to
  WEBAPP to update STATEDB from CONFGIT. (RC/ADD, RC/RM)
* Each MINION likewise communicates only with the WEBAPP using HTTP
  requests. MINION requests a job to run (which triggers WEBAPP's job
  scheduling), and then reports results to the WEBAPP (which causes
  WEBAPP to store them in STATEDB), which tells MINION whether to
  continue running the job or not (RT/KILL). There is no separate
  scheduling process: all scheduling happens when there is a MINION
  available.
* At system start up, a systemd unit makes an HTTP request to WEBAPP
  to make it refresh STATEDB from CONFGIT. (RC/START)
* A timer unit for systemd makes an HTTP request to get WEBAPP to
  refresh the static HTML status page. (MON/STATIC)

In summary: systemd starts WEBAPP and MINIONs, and whenever a
MINION can do work, it asks WEBAPP for something to do, and reports
back results. Meanwhile, admin can query and control via HTTP requests
to WEBAPP, and WEBAPP instances communicate via STATEDB.
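
Several of these interactions are nothing more than a single HTTP POST
to WEBAPP. As a sketch (the WEBAPP address is an assumption, and the
endpoint itself is described in the next section), the post-update
hook and the boot-time systemd unit could both trigger the
configuration refresh along these lines:

    # Sketch: ask WEBAPP to re-read CONFGIT and update STATEDB.
    # The WEBAPP address is an assumption.

    import urllib.request

    WEBAPP_URL = 'http://127.0.0.1:12765'  # assumed address of WEBAPP

    request = urllib.request.Request(
        WEBAPP_URL + '/1.0/read-configuration', data=b'')  # empty POST body
    with urllib.request.urlopen(request) as response:
        print(response.status, response.read().decode('utf-8'))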

The WEBAPP
----------

The WEBAPP provides an HTTP API as described below.

Requests for admins (a client sketch follows this list):

* `GET /1.0/status` causes WEBAPP to return a JSON object that
  describes the state of Lorry Controller. This information is meant
  to be programmatically useable and may or may not be the same as in
  the HTML page.
* `POST /1.0/stop-queue` causes WEBAPP to stop scheduling new jobs to
  run. Any currently running jobs are not affected. (RT/QSTOP)
* `POST /1.0/start-queue` causes WEBAPP to start scheduling jobs
  again. (RT/QSTART)

* `GET /1.0/list-queue` causes WEBAPP to return a JSON list of ids of
  all Lorry specifications in the run queue, in the order they are in
  the run queue. (RQ/SPECS)
* `GET /1.0/lorry/<lorryspecid>` causes WEBAPP to return a JSON map
  (dict) with all the information about the specified Lorry
  specification. (RQ/SPEC)
* `POST /1.0/move-to-top` with `path=lorryspecid` as the body, where
  `lorryspecid` is the id (path) of a Lorry specification in the run
  queue, causes WEBAPP to move the specified spec to the head of the
  run queue, and store this in STATEDB. It doesn't affect currently
  running jobs. (RT/TOP)
* `POST /1.0/move-to-bottom` with `path=lorryspecid` in the body is
  like `/move-to-top`, but moves the job to the end of the run queue.
  (RT/BOT)

* `GET /1.0/list-running-jobs` causes WEBAPP to return a JSON list of
  ids of all currently running jobs. (RQ/RUNNING)
* `GET /1.0/job/<jobid>` causes WEBAPP to return a JSON map (dict)
  with all the information about the specified job.
* `POST /1.0/stop-job` with `job_id=jobid` in the body, where `jobid`
  is the id of a running job, causes WEBAPP to record in STATEDB that
  the job is to be killed; the actual killing is done when MINION
  gets around to it. This request returns as soon as the STATEDB
  change is done.
* `GET /1.0/list-all-jobs` causes WEBAPP to return a JSON list of ids
  of all jobs, running or finished, that it knows about. (RQ/ALLJOBS)
* `POST /1.0/remove-job` with `job_id=jobid` in the body, removes a
  stopped job from the state database.
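
For illustration, the queries and commands above could be driven by a
small Python client like the sketch below. The WEBAPP address and the
helper names are assumptions; the endpoints and the `path` field are
the ones described above.

    # Sketch of an admin client for the requests listed above.
    # The WEBAPP address and the helper names are assumptions.

    import json
    import urllib.parse
    import urllib.request

    WEBAPP_URL = 'http://127.0.0.1:12765'  # assumed address of WEBAPP


    def query(path):
        # Queries are GET requests that return JSON.
        with urllib.request.urlopen(WEBAPP_URL + path) as response:
            return json.loads(response.read().decode('utf-8'))


    def command(path, **fields):
        # Commands are POST requests with form-encoded fields in the body.
        body = urllib.parse.urlencode(fields).encode('ascii')
        with urllib.request.urlopen(WEBAPP_URL + path, data=body) as response:
            return response.read().decode('utf-8')


    # List the run queue, then move the last spec to the head of the queue.
    queue = query('/1.0/list-queue')
    if queue:
        print(command('/1.0/move-to-top', path=queue[-1]))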

Requests for MINION:

* `GET /1.0/give-me-job` is used by MINION to get a new job to run.
  WEBAPP will either return a JSON object describing the job to run,
  or return a status code indicating that there is nothing to do.
  WEBAPP will respond immediately, even if there is nothing for MINION
  to do, and MINION will then sleep for a while before it tries again.
  WEBAPP updates STATEDB to record that the job is allocated to a
  MINION.
* `POST /1.0/job-update` is used by MINION to push updates about the
  job it is running to WEBAPP. The body sets fields `exit` (exit code
  of program, or `no` if not set), `stdout` (some output from the
  job's standard output) and `stderr` (ditto, but standard error
  output). There MUST be at least one `job-update` call that
  indicates the job has terminated. WEBAPP responds with a status
  indicating whether the job should continue to run or be terminated
  (RR/TIMEOUT). WEBAPP records the job as terminated only after MINION
  tells it the job has been terminated. MINION makes the `job-update`
  request frequently, even if the job has produced no output, so that
  WEBAPP can update a timestamp in STATEDB to indicate the job is
  still alive.

Other requests:

* `POST /1.0/read-configuration` causes WEBAPP to update its copy of
  CONFGIT and update STATEDB based on the new configuration, if it has
  changed. Returns OK/ERROR status. (RC/ADD, RC/RM, RC/START)
* `POST /1.0/ls-troves` causes WEBAPP to refresh its list of
  repositories in each remote Trove, if the current list is too old
  (see the `ls-interval` setting for each remote trove in
  `lorry-controller.conf`). This gets called from a systemd timer unit
  at a suitable interval.
* `POST /1.0/force-ls-troves` causes the repository refresh to happen
  for all remote Troves, regardless of whether it is due or not.
* `GET /1.0/status-html` causes WEBAPP to return an HTML page that
  describes the state of Lorry Controller. This also updates an
  on-disk copy of the HTML page, which the web server is configured to
  serve using a normal HTTP request. (MON/STATIC)

The MINION
----------

* Do `GET /1.0/give-me-job` to WEBAPP.
* If no job was given, sleep a while and try again.
* If a job was given, fork and exec it.
* In a loop: wait for output from the job (or for its termination)
  for a suitably short period of time, using `select` or a similar
  mechanism, and send whatever output (if any) was received to
  WEBAPP. If WEBAPP says the job should be killed, kill it, then send
  an update to that effect to WEBAPP.
* Go back to the top to request a new job. (This loop is sketched
  below.)
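
The sketch below is illustrative only: the WEBAPP address, the field
names in the give-me-job response (`job_id`, `argv`), the sleep
interval, and the `kill` reply to `job-update` are assumptions, and
error handling is omitted.

    # Sketch of the MINION loop. The WEBAPP address, the field names in
    # the give-me-job response, the sleep interval and the 'kill' reply
    # are assumptions, not the real protocol.

    import json
    import select
    import subprocess
    import time
    import urllib.error
    import urllib.parse
    import urllib.request

    WEBAPP_URL = 'http://127.0.0.1:12765'  # assumed address of WEBAPP


    def job_update(job_id, exit_code, stdout, stderr):
        # Report progress to WEBAPP; the reply says whether to carry on.
        fields = {
            'job_id': job_id,
            'exit': 'no' if exit_code is None else str(exit_code),
            'stdout': stdout,
            'stderr': stderr,
        }
        body = urllib.parse.urlencode(fields).encode('utf-8')
        with urllib.request.urlopen(
                WEBAPP_URL + '/1.0/job-update', data=body) as response:
            return response.read().decode('utf-8')


    while True:
        # Ask WEBAPP for something to do.
        try:
            with urllib.request.urlopen(
                    WEBAPP_URL + '/1.0/give-me-job') as response:
                job = json.loads(response.read().decode('utf-8'))
        except urllib.error.HTTPError:
            job = None  # WEBAPP signalled that there is nothing to do
        if job is None:
            time.sleep(60)
            continue

        # Fork and exec the job (assumed to be described as an argv list).
        process = subprocess.Popen(
            job['argv'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        while True:
            # Wait briefly for output on either pipe, then report it.
            readable, _, _ = select.select(
                [process.stdout, process.stderr], [], [], 1.0)
            out = b''
            err = b''
            if process.stdout in readable:
                out = process.stdout.read1(4096)
            if process.stderr in readable:
                err = process.stderr.read1(4096)
            exit_code = process.poll()
            reply = job_update(job['job_id'], exit_code,
                               out.decode('utf-8', 'replace'),
                               err.decode('utf-8', 'replace'))
            if reply.strip() == 'kill' and exit_code is None:
                process.kill()
                exit_code = process.wait()
                job_update(job['job_id'], exit_code, '', '')
            if exit_code is not None:
                # A real MINION would drain any remaining output first.
                break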

STATEDB
-------

The STATEDB has several tables. This section explains them.

The `running_queue` table has a single column (`running`) and a single
row, and is used to store a single boolean value that specifies
whether WEBAPP is giving out jobs to run from the run-queue. This
value is controlled by `/1.0/start-queue` and `/1.0/stop-queue`
requests.
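
As a sketch, WEBAPP could maintain and consult this flag along the
following lines (the STATEDB location is an assumption; the single row
is created on first use):

    # Sketch of using the running_queue table. The STATEDB location is
    # an assumption; the table layout is as described above.

    import sqlite3

    STATEDB_PATH = '/var/lib/lorry-controller/statedb'  # assumed location

    db = sqlite3.connect(STATEDB_PATH)
    db.execute('CREATE TABLE IF NOT EXISTS running_queue (running INTEGER)')
    if db.execute('SELECT COUNT(*) FROM running_queue').fetchone()[0] == 0:
        db.execute('INSERT INTO running_queue (running) VALUES (1)')

    # /1.0/stop-queue: stop handing out new jobs from the run-queue.
    db.execute('UPDATE running_queue SET running = 0')

    # give-me-job checks the flag before scheduling anything.
    running = bool(
        db.execute('SELECT running FROM running_queue').fetchone()[0])
    db.commit()
    db.close()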

The `lorries` table implements the run-queue: all the Lorry specs that
WEBAPP knows about. It has the following columns (a sketch of the
table's creation follows the list):

* `path` is the path of the git repository on the local Trove, i.e.,
  the git repository to which Lorry will push. This is a unique
  identifier. It is used, for example, to determine if a Lorry spec
  is obsolete after a CONFGIT update.
* `text` has the text of the Lorry spec. This may be read from a file
  or generated by Lorry Controller itself. This text will be given to
  Lorry when a job is run.
* `generated` is set to 0 or 1, depending on whether the Lorry came
  from an actual `.lorry` file or was generated by Lorry Controller.
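
As a sketch, the table could be created as follows. The column types
are assumptions, and how the queue ordering itself is stored is not
covered here:

    # Sketch of creating the lorries table with the columns described
    # above. Column types are assumptions; the queue ordering columns
    # are not shown.

    import sqlite3

    STATEDB_PATH = '/var/lib/lorry-controller/statedb'  # assumed location

    db = sqlite3.connect(STATEDB_PATH)
    db.execute('''
        CREATE TABLE IF NOT EXISTS lorries (
            path TEXT PRIMARY KEY,      -- repository path on the local Trove
            text TEXT NOT NULL,         -- the Lorry spec given to Lorry
            generated INTEGER NOT NULL  -- 0 or 1, as described above
        )
    ''')

    # Adding or updating a spec might look like this (the values are
    # purely illustrative):
    db.execute(
        'INSERT OR REPLACE INTO lorries (path, text, generated) '
        'VALUES (?, ?, ?)',
        ('example/repo', '... Lorry spec text ...', 0))
    db.commit()
    db.close()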