<!--
Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.

SPDX-License-Identifier: curl
-->

# The curl Test Suite

# Running

## Requirements to run

  - perl (and a unix-style shell)
  - python (and a unix-style shell, for SMB and TELNET tests)
  - python-impacket (for SMB tests)
  - diff (when a test fails, a diff is shown)
  - stunnel (for HTTPS and FTPS tests)
  - OpenSSH or SunSSH (for SCP, SFTP and SOCKS4/5 tests)
  - nghttpx (for HTTP/2 and HTTP/3 tests)
  - nroff (for --manual tests)
  - An available `en_US.UTF-8` locale

### Installation of python-impacket

  The Python-based test servers support both recent Python 2 and 3.
  You can figure out your default Python interpreter with `python -V`.

  Please install python-impacket in the correct Python environment.
  You can use pip or your OS's package manager to install 'impacket'.

  On Debian/Ubuntu the package names are:

  -  Python 2: 'python-impacket'
  -  Python 3: 'python3-impacket'

  On FreeBSD the package names are:

  -  Python 2: 'py27-impacket'
  -  Python 3: 'py37-impacket'

  On any system where pip is available:

  -  Python 2: 'pip2 install impacket'
  -  Python 3: 'pip3 install impacket'

  You may also need to manually install the Python package 'six'
  as that may be a missing requirement for impacket on Python 3.

### Port numbers used by test servers

  All test servers run on "random" port numbers. All tests should be written
  to use suitable variables instead of fixed port numbers so that test cases
  continue to work independently of which port numbers the test servers
  actually use.

  See [`FILEFORMAT`](FILEFORMAT.md) for the port number variables.
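
  As an illustration, a test's command section refers to the test HTTP server
  through variables rather than a literal port (a minimal sketch; the full
  list of variables is in [`FILEFORMAT`](FILEFORMAT.md)):

    <command>
    http://%HOSTIP:%HTTPPORT/1
    </command>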

### Test servers

  The test suite runs stand-alone servers on random ports to which it makes
  requests. For SSL tests, it runs stunnel to handle encryption to the regular
  servers. For SSH, it runs a standard OpenSSH server. The SOCKS4/5 tests use
  SSH to provide the SOCKS functionality and therefore require an SSH client
  and server.

  The listen port numbers for the test servers are picked randomly to allow
  users to run multiple test cases concurrently and to not collide with other
  existing services that might listen to ports on the machine.

  The HTTP server supports listening on a Unix domain socket; the default
  location is 'http.sock'.
  
  For HTTP/2 and HTTP/3 testing, an installed `nghttpx` is used. HTTP/3
  tests check whether nghttpx supports the protocol. To override which
  nghttpx is used, set the environment variable `NGHTTPX`. The default can
  also be changed by passing `--with-test-nghttpx=<path>` as an argument to
  `configure`.
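
  For example (the path shown is only a placeholder):

    # use a specific nghttpx binary for this test run
    NGHTTPX=/path/to/nghttpx ./runtests.pl

    # or select it at configure time
    ./configure --with-test-nghttpx=/path/to/nghttpx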

### Run

  `./configure && make && make test`. This builds the test suite support code
  and invokes the 'runtests.pl' perl script to run all the tests. Edit the top
  variables of that script in case you have some specific needs, or run the
  script manually (after the support code has been built).

  The script aborts on the first test that fails. Use `-a` to prevent the
  script from aborting on the first error. Run the script with `-v` for
  more verbose output. Use `-d` to run the test servers with debug output
  enabled as well. Specifying `-k` keeps all the log files generated by the
  test intact.

  Use `-s` for shorter output, or pass test numbers to run specific tests only
  (like `./runtests.pl 3 4` to test 3 and 4 only). It also supports test case
  ranges with 'to', as in `./runtests.pl 3 to 9` which runs the seven tests
  from 3 to 9. Any test numbers starting with ! are disabled, as are any test
  numbers found in the files `data/DISABLED` or `data/DISABLED.local` (one per
  line). The latter is meant for local temporary disables and will be ignored
  by git.

  Test cases mentioned in `DISABLED` can still be run if `-f` is provided.
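
  A few example invocations, combining the options described above (the test
  numbers are arbitrary):

    ./runtests.pl              # run all tests, stop at the first failure
    ./runtests.pl -a -s        # run all tests, keep going, shorter output
    ./runtests.pl 3 4          # run tests 3 and 4 only
    ./runtests.pl 3 to 9       # run tests 3 through 9
    ./runtests.pl -k '!100'    # run everything except test 100, keep logs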

  When `-s` is not present, each successful test displays the test number and
  description on one line, followed by a line with a set of flags, the test
  result, the current test sequence, the total number of tests to be run and
  an estimated time to complete the test run. The flags consist of these
  letters describing what is checked in this test:

    s stdout
    d data
    u upload
    p protocol
    o output
    e exit code
    m memory
    v valgrind

### Shell startup scripts

  Tests that use the SSH test server (the SCP/SFTP/SOCKS tests) might be badly
  influenced by the output of system-wide or user-specific shell startup
  scripts (.bashrc, .profile, /etc/csh.cshrc, .login, /etc/bashrc, etc.) that
  output text messages or escape sequences on user login. When such startup
  messages or escape sequences are output, they might corrupt the expected
  stream of data that flows to the sftp-server or from the ssh client, which
  can result in bad test behavior or even prevent the test server from
  running.

  If the test suite ssh or sftp server fails to start up and logs the message
  'Received message too long', then you are certainly suffering from the
  unwanted output of a shell startup script. Locate, clean up or adjust the
  offending shell script.

### Memory test

  The test script will check that all allocated memory is freed properly IF
  curl has been built with the `CURLDEBUG` define set. The script will
  automatically detect if that is the case, and it will use the
  `memanalyze.pl` script to analyze the memory debugging output.

  Also, if you run tests on a machine where valgrind is found, the script runs
  the tests with valgrind (unless you use `-n`) to further verify correctness.

  The `runtests.pl` `-t` option enables torture testing mode. It runs each
  test many times and makes each different memory allocation fail on each
  successive run. This tests the out of memory error handling code to ensure
  that memory leaks do not occur even in those situations. It can help to
  compile curl with `CPPFLAGS=-DMEMDEBUG_LOG_SYNC` when using this option, to
  ensure that the memory log file is properly written even if curl crashes.
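
  A sketch of such a run, assuming an autotools build where `--enable-debug`
  also turns on the `CURLDEBUG` memory tracking (the test number is
  arbitrary):

    ./configure --enable-debug CPPFLAGS=-DMEMDEBUG_LOG_SYNC
    make
    ./runtests.pl -t 45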

### Debug

  If a test case fails, you can conveniently get the script to invoke the
  debugger (gdb) for you, with the server running and the same command line
  parameters that failed. Invoke `runtests.pl <test number> -g` and then type
  'run' in the debugger to execute the command under the debugger.
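
  For example, to debug a failing test 46 (the number is just an example):

    ./runtests.pl 46 -g
    (gdb) run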

### Logs

  All logs are generated in the log/ subdirectory (which is emptied first by
  the runtests.pl script). They remain there after a test run.
  
### Log Verbosity

  A curl build with `--enable-debug` offers more verbose output in the logs.
  This applies not only to test cases, but also when running it standalone
  with `curl -v`. While a curl debug build is
  ***not suitable for production***, it is often helpful in tracking down
  problems.

  Sometimes, one needs detailed logging of operations, but does not want
  to drown in output. The newly introduced *connection filters* allow one to
  dynamically increase log verbosity for a particular *filter type*. Example:
  
    CURL_DEBUG=ssl curl -v https://curl.se

  will make the `ssl` connection filter log more details. One may do that for
  every filter type and also use a combination of names, separated by `,` or 
  space.
  
    CURL_DEBUG=ssl,http/2 curl -v https://curl.se

  The order of filter type names is not relevant. Names used here are
  case insensitive. Note that these names are implementation internals and
  subject to change.

  Some likely stable names are `tcp`, `ssl` and `http/2`. For a current list,
  one may search the sources for `struct Curl_cftype` definitions and find
  the names there. Also, some filters are only available with certain build
  options, of course.
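
  One convenient way to do that search in a git checkout:

    git grep -n 'struct Curl_cftype' lib/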
   
### Test input files

  All test cases are put in the `data/` subdirectory. Each test is stored in
  a file named according to its test number.

  See [`FILEFORMAT`](FILEFORMAT.md) for a description of the test case file
  format.

### Code coverage

  gcc provides a tool that can determine the code coverage figures for the
  test suite. To use it, configure curl with `CFLAGS='-fprofile-arcs
  -ftest-coverage -g -O0'`. Make sure you run both the normal and the torture
  tests to get more complete coverage, i.e. do:

    make test
    make test-torture

  The graphical tool `ggcov` can be used to browse the source and create
  coverage reports on \*nix hosts:

    ggcov -r lib src

  The text mode tool `gcov` may also be used, but it doesn't handle object
  files in more than one directory correctly.
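
  A possible workaround is to run `gcov` one directory at a time, pointing it
  at the object files (a sketch, assuming an in-tree autotools build where
  libtool placed the object files under `.libs`):

    cd lib && gcov -o .libs *.c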

### Remote testing

  The runtests.pl script provides some hooks to allow curl to be tested on a
  machine where perl cannot be run. The test framework in this case runs on a
  workstation where perl is available, while curl itself is run on a remote
  system using ssh or some other remote execution method. See the comments at
  the beginning of runtests.pl for details.

## Test case numbering

  Test cases used to be numbered by category ranges, but the ranges filled
  up. Subsets of tests can now be selected by passing keywords to the
  runtests.pl script via the make `TFLAGS` variable.

  New tests are added by finding a free number in `tests/data/Makefile.inc`.
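
  For example, to run only the tests marked with a given keyword (`HTTPS`
  here is just an illustration):

    make test TFLAGS="HTTPS"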

## Write tests

  Here's a quick description of how to write test cases. We basically have
  three kinds of tests: the ones that test the curl tool, the ones that build
  small applications and test libcurl directly, and the unit tests that test
  individual (possibly internal) functions.

### test data

  Each test has a master file that controls all the test data: what to read,
  what the protocol exchange should look like, what exit code to expect, what
  command line arguments to use and so on.

  These files are `tests/data/test[num]` where `[num]` is the unique test
  number described above, and their XML-like file format is described in the
  separate [`FILEFORMAT`](FILEFORMAT.md) document.
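
  To give a feel for the structure, a stripped-down sketch of such a file
  could look like the following. It is purely illustrative; the sections and
  variables are defined in [`FILEFORMAT`](FILEFORMAT.md).

    <testcase>
    <info>
    <keywords>
    HTTP
    </keywords>
    </info>

    # what the test HTTP server responds
    <reply>
    <data>
    HTTP/1.1 200 OK
    Content-Length: 6
    Content-Type: text/plain

    hello
    </data>
    </reply>

    # which server(s) to start and how to invoke curl
    <client>
    <server>
    http
    </server>
    <name>
    simple HTTP GET
    </name>
    <command>
    http://%HOSTIP:%HTTPPORT/1
    </command>
    </client>

    # what to verify once the command has completed
    <verify>
    <strip>
    ^User-Agent:.*
    </strip>
    <protocol>
    GET /1 HTTP/1.1
    Host: %HOSTIP:%HTTPPORT
    Accept: */*

    </protocol>
    </verify>
    </testcase>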

### curl tests

  A curl test is a test case that runs the curl tool and verifies that it
  gets the correct data, sends the correct data, uses the correct protocol
  primitives and so on.

### libcurl tests

  The libcurl tests are identical to the curl ones, except that they use a
  specific and dedicated custom-built program to run instead of "curl". This
  tool is built from source code placed in `tests/libtest` and if you want to
  make a new libcurl test, that is where you add your code.

### unit tests

  Unit tests are placed in `tests/unit`. There's a tests/unit/README
  describing the specific set of checks and macros that may be used when
  writing tests that verify behaviors of specific individual functions.

  The unit tests depend on curl being built with debug enabled.