| Commit message | Author | Age | Files | Lines |
Job IDs are assigned sequentially, and jobs cannot finish before they
start. Therefore we can iterate over jobs in order of ID and stop
when we find a job that started more recently than max-age-seconds
ago.
Related to #18.
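A minimal sketch of the early-exit scan this enables. The field names (`id`, `job_started`) and the calling convention are illustrative assumptions, not the project's actual API:

```python
import time

def jobs_within_max_age(jobs, max_age_seconds, now=None):
    """Yield jobs that started more than max_age_seconds ago.

    Assumes job IDs are assigned sequentially and a job cannot start
    before a lower-numbered job, so the first job newer than the
    cutoff ends the scan: every later job is newer still.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_seconds
    for job in sorted(jobs, key=lambda j: j["id"]):
        if job["job_started"] > cutoff:
            break  # all remaining jobs started even more recently
        yield job
```

The early `break` is what makes this cheaper than filtering every job: on a long history with a short max age, only the old prefix is ever examined.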
In order to filter jobs more efficiently, we need to look at the start
time (job_started in the API) as well as the finish time.
Related to #18.
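A hypothetical predicate combining both timestamps (the dict shape and `None`-for-unfinished convention are assumptions for illustration):

```python
def job_is_old(job, cutoff):
    """True if the job both started and finished before the cutoff.

    A job with no finish time is still running, so it can never be
    considered old regardless of when it started.
    """
    if job.get("job_finished") is None:
        return False
    return job["job_started"] < cutoff and job["job_finished"] < cutoff
```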
In order to filter jobs earlier, we need to have a single loop over
jobs in process_args().
Related to #18.
In order to filter jobs earlier, we need to have a single loop over
jobs in process_args() instead of in multiple functions that it calls.
Prepare for that by inlining the methods that it calls directly.
Related to #18.
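Schematically, the refactoring this prepares for replaces several helper methods that each looped over all jobs with one loop that applies every step per job. All names below are illustrative, not the project's real ones:

```python
def process_args(jobs, cutoff):
    """Single pass over jobs: filter early, then act, in one loop.

    Before the refactoring this (schematically) called separate
    helpers that each iterated over the full job list; inlining them
    means a too-recent job is skipped once, as early as possible.
    """
    processed = []
    for job in jobs:
        if job["job_started"] >= cutoff:
            continue  # too recent: filtered before any further work
        processed.append(job["id"])  # stand-in for the real per-job action
    return processed
```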
Various modules are imported but never used.
Caught by pyflakes.
pyflakes found various assignments to local variables that are never
used again. In some cases we still need to evaluate the assigned
expression for its side effects; in most places the assignment can
be deleted entirely.
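The two cases can be illustrated like this (contrived examples, not the project's code):

```python
def pop_and_count(queue):
    # Before: item = queue.pop(0)  -- 'item' was never used again.
    # The call still has a needed side effect (shortening the queue),
    # so keep the expression and drop only the binding:
    queue.pop(0)
    return len(queue)

def pure_case(x):
    # Before: unused = x * 2  -- no side effect at all, so the whole
    # assignment could simply be deleted.
    return x + 1
```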
While we're here, seeing as Adam mentioned it.
Change-Id: I5ddb86c70d76a84cf12fbd4eb91f3802e490d745
From a disk space point of view, 1 year's worth of logs,
while potentially excessive, isn't too much of an ask,
since disk space is cheap enough.
However, our queries on the database run slower when it is large,
so we need a shorter log retention policy,
and it's best when the defaults do the right thing.
1 day's worth of logs was found to be 87MB,
which means 3 days is roughly 260MB, which is acceptable.
Change-Id: If3dd58fa01f785bc7d7029a45b6a0fc35c2c2b1d
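The sizing arithmetic behind the chosen default, as a quick check (the 87MB/day figure is the one measured above):

```python
MB_PER_DAY = 87        # measured size of one day's logs
RETENTION_DAYS = 3     # the new default retention period
total_mb = MB_PER_DAY * RETENTION_DAYS  # 261 MB, about a quarter GB
```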