Test Repository users manual
++++++++++++++++++++++++++++

Overview
~~~~~~~~

Test repository is a small application for tracking test results. Any test run
that can be represented as a subunit stream can be inserted into a repository.

The typical workflow is to have a repository into which test runs are
inserted, and then to query the repository to find out about issues that need
addressing. For instance, using the sample subunit streams included with Test
repository::

  # Note that there is a .testr.conf already:
  $ ls .testr.conf
  # Create a store to manage test results in.
  $ testr init
  # add a test result (shows failures)
  $ testr load < doc/example-failing-subunit-stream
  # see the tracked failing tests again
  $ testr failing
  # fix things
  $ testr load < doc/example-passing-subunit-stream
  # Now there are no tracked failing tests
  $ testr failing

Most commands in testr have comprehensive online help, and the commands::

  $ testr help
  $ testr commands

will be useful for exploring the system.

Running tests
~~~~~~~~~~~~~

testr can be taught how to run your tests by setting up a .testr.conf
file in your current working directory. A file like::

  [DEFAULT]
  test_command=foo $IDOPTION
  test_id_option=--bar $IDFILE

will cause 'testr run' to run 'foo' and process its output as 'testr load'
would. Likewise 'testr run --failing' will run 'foo --bar failing.list' and
process it as 'testr load' would. failing.list will be a newline-separated
list of the test ids that your test runner outputs. Arguments passed to
'testr run' are used to filter the test ids that will be run - testr will
query the runner for test ids and then apply each argument as a regex filter.
Tests that match any of the given filters will be run. Arguments passed to
run after a ``--`` are passed through to your test runner command line. For
instance, using the above config example,
``testr run quux -- bar --no-plugins`` would query for test ids, filter for
those that match 'quux', write the matching ids to a temporary file, and then
run ``foo`` with ``--bar tempfile.list`` (the expanded $IDOPTION) together
with the pass-through arguments ``bar --no-plugins``. Shell variables are
expanded in these commands on platforms that have a shell.
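
As a concrete illustration (a sketch only - the runner, paths and options
below are assumptions about python-subunit's ``subunit.run`` module, not part
of Test repository itself), a Python project might use a configuration along
these lines::

  [DEFAULT]
  # Assumed runner: 'python -m subunit.run discover' emits a subunit stream
  # on stdout for the discovered tests.
  test_command=python -m subunit.run discover ./yourproject/tests $IDOPTION
  # Assumed option: subunit.run (via testtools) accepts --load-list FILE to
  # restrict the run to the test ids listed in FILE.
  test_id_option=--load-list $IDFILE

The test_list_option described under 'Listing tests' below can be added to
the same file to enable listing and parallel runs.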

To get a full list of these options run ``testr help run``.

Having set up a .testr.conf, a common workflow then becomes::

  # Fix currently broken tests - repeat until there are no failures.
  $ testr run --failing
  # Do a full run to find anything that regressed during the reduction process.
  $ testr run
  # And either commit or loop around this again depending on whether errors
  # were found.

The --failing option turns on ``--partial`` automatically (so that if the
partial test run were to be interrupted, the failing tests that aren't run are
not lost).
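
When running a hand-picked subset yourself, you can pass ``--partial``
explicitly so that known failures outside the subset are not forgotten. A
small sketch (the filter name is made up)::

  # Run only tests matching 'api', without dropping other tracked failures:
  $ testr run --partial api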

Another common use case is repeating a failure that occurred on a remote
machine (e.g. during a Jenkins test run). There are two common ways to
approach this.

Firstly, if you have a subunit stream from the run you can just load it::

  $ testr load < failing-stream
  # Run the failed tests
  $ testr run --failing

The streams generated by test runs are stored in .testrepository/, named by
their serial number - e.g. .testrepository/0 is the first stream.
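
For example, if the remote repository is reachable over ssh, a stream can be
copied across and loaded locally (the host, path and stream number here are
made up)::

  # Copy stream number 12 from the remote repository:
  $ scp buildhost:/path/to/project/.testrepository/12 failing-stream
  # Load it locally and re-run the failures:
  $ testr load < failing-stream
  $ testr run --failing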

If you do not have a stream (because the test runner didn't output subunit or
you don't have access to the .testrepository) you may be able to use a list
file. If you can get a file that contains one test id per line, you can run
the named tests like this::

  $ testr run --load-list FILENAME

This can also be useful when dealing with sporadically failing tests, or tests
that only fail in combination with some other test - you can bisect the tests
that were run to get smaller and smaller (or larger and larger) test subsets
until the error is pinpointed.
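
One rough way to bisect by hand is to split the list file and re-run each
half; the file names and sizes here are made up::

  # Suppose combination.list holds 100 test ids; try the first half:
  $ head -n 50 combination.list > subset.list
  $ testr run --load-list subset.list
  # If the failure reproduces, keep splitting subset.list further; if not,
  # try the other half with 'tail -n 50' instead.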

Listing tests
~~~~~~~~~~~~~

It is useful to be able to query the test program to see what tests will be
run - this permits partitioning the tests and running multiple instances with
separate partitions at once. Set 'test_list_option' in .testr.conf like so::

  test_list_option=--list-tests

You also need to use the $LISTOPT option to tell testr where to expand things::

  test_command=foo $LISTOPT $IDOPTION

All the normal rules for invoking test program commands apply: extra
parameters will be passed through, and if a test list is being supplied,
test_id_option can be used via $IDOPTION.
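
Putting these settings together with the earlier ones, a listing-capable
configuration built from the same placeholder runner looks like this::

  [DEFAULT]
  test_command=foo $LISTOPT $IDOPTION
  test_id_option=--bar $IDFILE
  test_list_option=--list-tests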

The output of the test command when this option is supplied should be a series
of test ids, in any order, ``\n`` separated on stdout.
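
For example, with the configuration above, ``foo --list-tests`` would be
expected to print something like the following (the test ids are of course
hypothetical)::

  yourproject.tests.test_api.TestAPI.test_create
  yourproject.tests.test_api.TestAPI.test_delete
  yourproject.tests.test_model.TestModel.test_save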

To check whether this is working, the ``testr list-tests`` command can be
useful.

You can also use this to see what tests will be run by a given testr run
command. For instance, the tests that ``testr run myfilter`` will run are
shown by ``testr list-tests myfilter``. As with 'run', arguments to
'list-tests' are used as regex filters on the test ids reported by the test
runner, and arguments after a '--' are passed to the test runner.
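
A short sketch of that workflow, using a made-up filter::

  # Preview which tests the filter selects:
  $ testr list-tests myfilter
  # Then run exactly that selection:
  $ testr run myfilter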

Parallel testing
~~~~~~~~~~~~~~~~

If both test listing and filtering (via either IDLIST or IDFILE) are configured
then testr is able to run your tests in parallel::

  $ testr run --parallel

This will first list the tests, partition the tests into one partition per CPU
on the machine, and then invoke multiple test runners at the same time, with
each test runner getting one partition. Currently the partitioning algorithm
is a simple round-robin.

On Linux, testrepository will inspect /proc/cpuinfo to determine how many CPUs
are present in the machine, and run one worker per CPU. On other operating
systems, or if you need to control the number of workers that are used, the
--concurrency option will let you do so::

  $ testr run --parallel --concurrency=2

When running tests in parallel, testrepository tags each test with a tag for
the worker that executed the test. The tags are of the form ``worker-%d`` and
are usually used to reproduce test isolation failures, where knowing exactly
what test ran on a given backend is important.

To find the tests that ran on a single worker::

  $ testr last
  # grab the id from that.
  $ subunit-filter -s --xfail --with-tag=worker-3 < .testrepository/$lastid | subunit-ls > worker-3.list

This will be better integrated in the future.

To find out which worker a failing test ran on, just look at the 'tags' line
in its test error::

  ======================================================================
  label: testrepository.tests.ui.TestDemo.test_methodname
  tags: foo worker-0
  ----------------------------------------------------------------------
  error text

Hiding tests
~~~~~~~~~~~~

Some test runners (for instance, zope.testrunner) report pseudo tests having to
do with bringing up the test environment rather than being actual tests that
can be executed. These are only relevant to a test run when they fail - the
rest of the time they tend to be confusing. For instance, the same 'test' may
show up on multiple parallel test runs, which will inflate the 'executed tests'
count depending on the number of worker threads that were used. Scheduling such
'tests' to run is also a bit pointless, as they are only ever executed
implicitly when preparing (or finishing with) a test environment to run other
tests in.

testr can ignore such tests if they are tagged, using the filter_tags
configuration option. Tests tagged with any tag in that (space separated) list
will only be included in counts and reports if the test failed (or errored).
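
For example, in .testr.conf (the tag names here are hypothetical - use
whatever tags your runner attaches to its setup/teardown pseudo tests)::

  [DEFAULT]
  test_command=foo $IDOPTION
  test_id_option=--bar $IDFILE
  filter_tags=environment-setup environment-teardown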

Repositories
~~~~~~~~~~~~

A testr repository is a very simple disk structure. It contains the following
files (for a format 1 repository - the only current format):

* format: This file identifies the precise layout of the repository, in case
  future changes are needed.

* next-stream: This file contains the serial number to be used when adding
  another stream to the repository.

* failing: This file is a stream containing just the known failing tests. It
  is updated whenever a new stream is added to the repository, so that it only
  references known failing tests.

* #N: Each stream inserted into the repository is given a serial number and
  stored under that number - e.g. the first stream is named 0.

* repo.conf: This file contains user configuration settings for the repository.
  ``testr repo-config`` will dump a repo configuration and
  ``testr help repo-config`` has online help for all the repository settings.
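
Putting that together, listing a repository that has had two streams loaded
into it would look roughly like this (the exact contents depend on how many
runs have been loaded and whether any settings have been stored)::

  $ ls .testrepository
  0  1  failing  format  next-stream  repo.conf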