Diffstat (limited to 'tests')
-rw-r--r--  tests/CI.md            | 46
-rw-r--r--  tests/FILEFORMAT.md    | 55
-rw-r--r--  tests/README.md        | 16
-rwxr-xr-x  tests/disable-scan.pl  |  2
-rw-r--r--  tests/unit/README.md   | 37
5 files changed, 79 insertions, 77 deletions
diff --git a/tests/CI.md b/tests/CI.md
index febd1f823..92d57a058 100644
--- a/tests/CI.md
+++ b/tests/CI.md
@@ -39,15 +39,15 @@ Consider the following table while looking at pull request failures:
| LGTM analysis: Python | stable | new findings |
| LGTM analysis: C/C++ | stable | new findings |
| buildbot/curl_winssl_ ... | stable | all errors and failures |
- | continuous-integration/appveyor/pr | stable | all errors and failures |
+ | AppVeyor | flaky | all errors and failures |
| curl.curl (linux ...) | stable | all errors and failures |
| curl.curl (windows ...) | flaky | repetitive errors/failures |
- | deepcode-ci-bot | stable | new findings |
- | musedev | stable | new findings |
+ | CodeQL | stable | new findings |
Sometimes the tests fail due to a dependency service temporarily being offline
-or otherwise unavailable, eg. package downloads. In this case you can just
-try to update your pull requests to rerun the tests later as described below.
+or otherwise unavailable, for example package downloads. In this case you can
+just try to update your pull requests to rerun the tests later as described
+below.
## CI servers
@@ -75,20 +75,20 @@ The following tests are run in Microsoft Azure CI environment:
These are all configured in `.azure-pipelines.yml`.
-As of November 2021 @bagder and @mback2k are the only people with administrator
-access to the Azure CI environment. Additional admins/group members can be added
-on request.
+As of November 2021 `@bagder` and `@mback2k` are the only people with
+administrator access to the Azure CI environment. Additional admins/group
+members can be added on request.
-### Appveyor
+### AppVeyor
-Appveyor runs a variety of different Windows builds, with different compilation
+AppVeyor runs a variety of different Windows builds, with different compilation
options.
-As of November 2021 @bagder, @mback2k, @jay, @vszakats, @dfandrich and
-@danielgustafsson have administrator access to the Appveyor CI environment.
-Additional admins/group members can be added on request.
+As of November 2021 `@bagder`, `@mback2k`, `@jay`, `@vszakats`, `@dfandrich`
+and `@danielgustafsson` have administrator access to the AppVeyor CI
+environment. Additional admins/group members can be added on request.
-The tests are configured in appveyor.yml.
+The tests are configured in `appveyor.yml`.
### Zuul
@@ -105,21 +105,21 @@ do not report results to the Github checks runner - you need to manually check
for failures. See [#7522](https://github.com/curl/curl/issues/7522) for more
information.
-As of November 2021 Daniel Stenberg is the only person with administrator access
-to the Zuul CI environment.
+As of November 2021 Daniel Stenberg is the only person with administrator
+access to the Zuul CI environment.
These are configured in `zuul.d` and have test runners in `scripts/zuul`.
-### CircleCI
+### Circle CI
-CircleCI runs a basic Linux test suite on Ubuntu for both x86 and ARM
+Circle CI runs a basic Linux test suite on Ubuntu for both x86 and ARM
processors. This is configured in `.circleci/config.yml`.
-You can [view the full list of CI jobs on CircleCI's
+You can [view the full list of CI jobs on Circle CI's
website](https://app.circleci.com/pipelines/github/curl/curl).
-@bagder has access to edit the "Project Settings" on that page.
-Additional admins/group members can be added on request.
+`@bagder` has access to edit the "Project Settings" on that page. Additional
+admins/group members can be added on request.
### Cirrus CI
@@ -129,5 +129,5 @@ Cirrus CI runs a basic test suite on FreeBSD and Windows. This is configured in
You can [view the full list of CI jobs on Cirrus CI's
website](https://cirrus-ci.com/github/curl/curl).
-@bagder has access to edit the "Project Settings" on that page.
-Additional admins/group members can be added on request.
+`@bagder` has access to edit the "Project Settings" on that page. Additional
+admins/group members can be added on request.
diff --git a/tests/FILEFORMAT.md b/tests/FILEFORMAT.md
index a1d1c12ea..03e6fa171 100644
--- a/tests/FILEFORMAT.md
+++ b/tests/FILEFORMAT.md
@@ -17,8 +17,8 @@ character entities and the preservation of CR/LF characters at the end of
lines are the biggest differences).
Each test case source exists as a file matching the format
-`tests/data/testNUM`, where NUM is the unique test number, and must begin with
-a 'testcase' tag, which encompasses the remainder of the file.
+`tests/data/testNUM`, where `NUM` is the unique test number, and must begin
+with a `testcase` tag, which encompasses the remainder of the file.
# Preprocessing
@@ -79,7 +79,7 @@ For example, to insert the word hello a 100 times:
Lines in the test file can be made to appear conditionally on a specific
feature (see the "features" section below) being set or not set. If the
specific feature is present, the following lines will be output, otherwise it
-outputs nothing, until a following else or endif clause. Like this:
+outputs nothing, until a following else or `endif` clause. Like this:
%if brotli
Accept-Encoding
@@ -120,7 +120,7 @@ Available substitute variables include:
- `%FTPSPORT` - Port number of the FTPS server
- `%FTPTIME2` - Timeout in seconds that should be just sufficient to receive a
response from the test FTP server
-- `%FTPTIME3` - Even longer than %FTPTIME2
+- `%FTPTIME3` - Even longer than `%FTPTIME2`
- `%GOPHER6PORT` - IPv6 port number of the Gopher server
- `%GOPHERPORT` - Port number of the Gopher server
- `%GOPHERSPORT` - Port number of the Gophers server
@@ -165,8 +165,9 @@ Available substitute variables include:
# `<testcase>`
-Each test is always specified entirely within the testcase tag. Each test case
-is split up in four main sections: `info`, `reply`, `client` and `verify`.
+Each test is always specified entirely within the `testcase` tag. Each test
+case is split up in four main sections: `info`, `reply`, `client` and
+`verify`.
- **info** provides information about the test case
@@ -225,28 +226,28 @@ and used as "raw" data.
should be cut off from the data before sending or comparing it.
For FTP file listings, the `<data>` section will be used *only* if you make
-sure that there has been a CWD done first to a directory named `test-[num]`
-where [num] is the test case number. Otherwise the ftp server can't know from
+sure that there has been a CWD done first to a directory named `test-[NUM]`
+where `NUM` is the test case number. Otherwise the ftp server can't know from
which test file to load the list content.
### `<dataNUM>`
-Send back this contents instead of the <data> one. The num is set by:
+Send back this content instead of the `<data>` one. The `NUM` is set by:
- The test number in the request line is >10000 and this is the remainder
of [test case number]%10000.
- - The request was HTTP and included digest details, which adds 1000 to NUM
- - If a HTTP request is NTLM type-1, it adds 1001 to num
- - If a HTTP request is NTLM type-3, it adds 1002 to num
- - If a HTTP request is Basic and num is already >=1000, it adds 1 to num
- - If a HTTP request is Negotiate, num gets incremented by one for each
+ - The request was HTTP and included digest details, which adds 1000 to `NUM`
+ - If an HTTP request is NTLM type-1, it adds 1001 to `NUM`
+ - If an HTTP request is NTLM type-3, it adds 1002 to `NUM`
+ - If an HTTP request is Basic and `NUM` is already >=1000, it adds 1 to `NUM`
+ - If an HTTP request is Negotiate, `NUM` gets incremented by one for each
request with Negotiate authorization header on the same test case.
-Dynamically changing num in this way allows the test harness to be used to
+Dynamically changing `NUM` in this way allows the test harness to be used to
test authentication negotiation where several different requests must be sent
to complete a transfer. The response to each request is found in its own data
section. Validating the entire negotiation sequence can be done by specifying
-a datacheck section.
+a `datacheck` section.
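Written out as arithmetic, the rules above amount to something like the
following C sketch. This is only a paraphrase for illustration, not the actual
test server source, and every name in it is made up:

~~~c
/* Illustrative paraphrase of the <dataNUM> selection rules above. */
static long pick_data_num(long requested_testno, int has_digest,
                          int is_ntlm_type1, int is_ntlm_type3,
                          int is_basic, int negotiate_requests)
{
  long num = 0;

  if(requested_testno > 10000)
    num = requested_testno % 10000;  /* remainder of the test number */

  if(has_digest)
    num += 1000;                     /* request carried digest details */
  if(is_ntlm_type1)
    num += 1001;                     /* NTLM type-1 request */
  if(is_ntlm_type3)
    num += 1002;                     /* NTLM type-3 request */
  if(is_basic && num >= 1000)
    num += 1;                        /* Basic on an already adjusted NUM */

  /* Negotiate bumps NUM once per Negotiate request in the same test */
  num += negotiate_requests;

  return num;
}
~~~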
### `<connect>`
The connect section is used instead of the 'data' for all CONNECT
@@ -265,14 +266,14 @@ Use the `mode="text"` attribute if the output is in text mode on platforms
that have a text/binary difference.
### `<datacheckNUM [nonewline="yes"] [mode="text"]>`
-The contents of numbered datacheck sections are appended to the non-numbered
+The contents of numbered `datacheck` sections are appended to the non-numbered
one.
### `<size>`
number to return on a ftp SIZE command (set to -1 to make this command fail)
### `<mdtm>`
-what to send back if the client sends a (FTP) MDTM command, set to -1 to
+what to send back if the client sends a (FTP) `MDTM` command, set to -1 to
have it return that the file doesn't exist
### `<postcmd>`
@@ -460,8 +461,8 @@ to have failed.
### `<tool>`
Name of tool to invoke instead of "curl". This tool must be built and exist
-either in the libtest/ directory (if the tool name starts with 'lib') or in
-the unit/ directory (if the tool name starts with 'unit').
+either in the `libtest/` directory (if the tool name starts with `lib`) or in
+the `unit/` directory (if the tool name starts with `unit`).
### `<name>`
Brief test case description, shown when the test runs.
@@ -525,13 +526,13 @@ needed.
This creates the named file with this content before the test case is run,
which is useful if the test case needs a file to act on.
-If 'nonewline="yes"` is used, the created file will have the final newline
+If `nonewline="yes"` is used, the created file will have the final newline
stripped off.
### `<stdin [nonewline="yes"]>`
Pass this given data on stdin to the tool.
-If 'nonewline' is set, we will cut off the trailing newline of this given data
+If `nonewline` is set, we will cut off the trailing newline of this given data
before comparing with the one actually received by the client
## `<verify>`
@@ -551,7 +552,7 @@ advanced. Example: `s/^EPRT .*/EPRT stripped/`.
### `<protocol [nonewline="yes"]>`
-the protocol dump curl should transmit, if 'nonewline' is set, we will cut off
+the protocol dump curl should transmit, if `nonewline` is set, we will cut off
the trailing newline of this given data before comparing with the one actually
sent by the client The `<strip>` and `<strippart>` rules are applied before
comparisons are made.
@@ -559,7 +560,7 @@ comparisons are made.
### `<proxy [nonewline="yes"]>`
The protocol dump curl should transmit to a HTTP proxy (when the http-proxy
-server is used), if 'nonewline' is set, we will cut off the trailing newline
+server is used), if `nonewline` is set, we will cut off the trailing newline
of this given data before comparing with the one actually sent by the client
The `<strip>` and `<strippart>` rules are applied before comparisons are made.
@@ -569,7 +570,7 @@ This verifies that this data was passed to stderr.
Use the mode="text" attribute if the output is in text mode on platforms that
have a text/binary difference.
-If 'nonewline' is set, we will cut off the trailing newline of this given data
+If `nonewline` is set, we will cut off the trailing newline of this given data
before comparing with the one actually received by the client
### `<stdout [mode="text"] [nonewline="yes"]>`
@@ -578,7 +579,7 @@ This verifies that this data was passed to stdout.
Use the mode="text" attribute if the output is in text mode on platforms that
have a text/binary difference.
-If 'nonewline' is set, we will cut off the trailing newline of this given data
+If `nonewline` is set, we will cut off the trailing newline of this given data
before comparing with the one actually received by the client
### `<file name="log/filename" [mode="text"]>`
@@ -601,7 +602,7 @@ compared with what is stored in the test file. This is pretty
advanced. Example: "s/^EPRT .*/EPRT stripped/"
### `<stripfile1>`
-1 to 4 can be appended to 'stripfile' to strip the corresponding <fileN>
+1 to 4 can be appended to `stripfile` to strip the corresponding `<fileN>`
content
### `<stripfile2>`
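Taken together, the tags described here give a test case file roughly this
overall shape. The skeleton below is only illustrative, uses tags shown in
this file, and replaces real content with bracketed placeholders; the full
FILEFORMAT.md also covers further `<client>` parts such as the command line
to run and the servers the test needs:

~~~
<testcase>
<info>
[keywords and other information about the test case]
</info>

<reply>
<data>
[the data the test server sends back to the client]
</data>
</reply>

<client>
<name>
[brief test case description, shown when the test runs]
</name>
[the tool to invoke and its command line are declared here as well]
</client>

<verify>
<protocol>
[the protocol dump curl is expected to transmit]
</protocol>
<stdout>
[what the tool is expected to write to stdout]
</stdout>
</verify>
</testcase>
~~~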
diff --git a/tests/README.md b/tests/README.md
index daa168014..7fff0a534 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -53,7 +53,7 @@ SPDX-License-Identifier: curl
continue to work independent on what port numbers the test servers actually
use.
- See [FILEFORMAT](FILEFORMAT.md) for the port number variables.
+ See [`FILEFORMAT`](FILEFORMAT.md) for the port number variables.
### Test servers
@@ -129,13 +129,13 @@ SPDX-License-Identifier: curl
The test script will check that all allocated memory is freed properly IF
curl has been built with the `CURLDEBUG` define set. The script will
automatically detect if that is the case, and it will use the
- 'memanalyze.pl' script to analyze the memory debugging output.
+ `memanalyze.pl` script to analyze the memory debugging output.
Also, if you run tests on a machine where valgrind is found, the script will
use valgrind to run the test with (unless you use `-n`) to further verify
correctness.
- runtests.pl's `-t` option will enable torture testing mode, which runs each
+ The `runtests.pl` `-t` option enables torture testing mode. It runs each
test many times and makes each different memory allocation fail on each
successive run. This tests the out of memory error handling code to ensure
that memory leaks do not occur even in those situations. It can help to
@@ -159,7 +159,7 @@ SPDX-License-Identifier: curl
All test cases are put in the `data/` subdirectory. Each test is stored in
the file named according to the test number.
- See [FILEFORMAT.md](FILEFORMAT.md) for a description of the test case file
+ See [`FILEFORMAT`](FILEFORMAT.md) for a description of the test case file
format.
### Code coverage
@@ -172,13 +172,13 @@ SPDX-License-Identifier: curl
make test
make test-torture
- The graphical tool ggcov can be used to browse the source and create
+ The graphical tool `ggcov` can be used to browse the source and create
coverage reports on \*nix hosts:
ggcov -r lib src
- The text mode tool gcov may also be used, but it doesn't handle object files
- in more than one directory correctly.
+ The text mode tool `gcov` may also be used, but it doesn't handle object
+ files in more than one directory correctly.
### Remote testing
@@ -211,7 +211,7 @@ SPDX-License-Identifier: curl
These files are `tests/data/test[num]` where `[num]` is just a unique
identifier described above, and the XML-like file format of them is
- described in the separate [FILEFORMAT.md](FILEFORMAT.md) document.
+ described in the separate [`FILEFORMAT`](FILEFORMAT.md) document.
### curl tests
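For quick reference, the `runtests.pl` options mentioned above are typically
combined with explicit test numbers. The invocations below are illustrative
sketches; `-n` skips the valgrind step and `-t` enables torture mode as
described earlier, and restricting the run by listing test numbers is assumed:

~~~
cd tests
./runtests.pl 1 304     # run only test cases 1 and 304
./runtests.pl -n 46     # run test 46 without the valgrind step
./runtests.pl -t 46     # torture-test test 46
~~~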
diff --git a/tests/disable-scan.pl b/tests/disable-scan.pl
index b6ca37fa0..5d09bff27 100755
--- a/tests/disable-scan.pl
+++ b/tests/disable-scan.pl
@@ -95,7 +95,7 @@ sub scan_docs {
my $line = 0;
while(<F>) {
$line++;
- if(/^## (CURL_DISABLE_[A-Z_]+)/g) {
+ if(/^## `(CURL_DISABLE_[A-Z_]+)/g) {
my ($sym)=($1);
$docs{$sym} = $line;
}
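With the tightened pattern, `scan_docs()` only recognizes a documented symbol
when the heading wraps its name in backticks, in line with the backtick style
adopted in the documentation changes above. For example, using
`CURL_DISABLE_FTP` purely as an illustration:

~~~
## `CURL_DISABLE_FTP`      recognized by the new pattern
## CURL_DISABLE_FTP        no longer recognized (name not in backticks)
~~~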
diff --git a/tests/unit/README.md b/tests/unit/README.md
index 0d32e010f..d17b249a0 100644
--- a/tests/unit/README.md
+++ b/tests/unit/README.md
@@ -33,7 +33,7 @@ start up gdb and run the same case using that.
## Write Unit Tests
We put tests that focus on an area or a specific function into a single C
-source file. The source file should be named 'unitNNNN.c' where NNNN is a
+source file. The source file should be named `unitNNNN.c` where `NNNN` is a
previously unused number.
Add your test to `tests/unit/Makefile.inc` (if it is a unit test). Add your
@@ -45,28 +45,29 @@ and the `tests/FILEFORMAT.md` documentation.
For the actual C file, here's a simple example:
~~~c
-#include "curlcheck.h"
-#include "a libcurl header.h" /* from the lib dir */
+ #include "curlcheck.h"
-static CURLcode unit_setup( void )
-{
- /* whatever you want done first */
- return CURLE_OK;
-}
+ #include "a libcurl header.h" /* from the lib dir */
-static void unit_stop( void )
-{
- /* done before shutting down and exiting */
-}
+ static CURLcode unit_setup( void )
+ {
+ /* whatever you want done first */
+ return CURLE_OK;
+ }
-UNITTEST_START
+ static void unit_stop( void )
+ {
+ /* done before shutting down and exiting */
+ }
- /* here you start doing things and checking that the results are good */
+ UNITTEST_START
- fail_unless( size == 0 , "initial size should be zero" );
- fail_if( head == NULL , "head should not be initiated to NULL" );
+ /* here you start doing things and checking that the results are good */
- /* you end the test code like this: */
+ fail_unless( size == 0 , "initial size should be zero" );
+ fail_if( head == NULL , "head should not be initiated to NULL" );
-UNITTEST_STOP
+ /* you end the test code like this: */
+
+ UNITTEST_STOP
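To hook a unit test like this into the harness, a matching `tests/data/testNNNN`
file names it through the `<tool>` tag described in FILEFORMAT.md. A minimal
sketch, with `NNNN` and the description as placeholders:

~~~
<testcase>
<client>
<tool>
unitNNNN
</tool>
<name>
brief description of what the unit test covers
</name>
</client>
</testcase>
~~~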