author     Daniel Stenberg <daniel@haxx.se>  2021-11-01 13:43:11 +0100
committer  Daniel Stenberg <daniel@haxx.se>  2021-11-01 13:50:27 +0100
commit     0d979544fe2b6f9ebc687302bbd993274ad31616 (patch)
tree       e70d292fedf9a512b07cb7dc80c14ea81cf123ab /docs
parent     f907faec790f6bb5dc46102f1efa7e0faeef99ee (diff)
download   curl-bagder/less-very.tar.gz

docs: reduce use of "very" (bagder/less-very)
"Very" should be avoided in most texts. If intensifiers are needed, try find better words instead.
Diffstat (limited to 'docs')
-rw-r--r--  docs/CODE_REVIEW.md | 6
-rw-r--r--  docs/CODE_STYLE.md | 10
-rw-r--r--  docs/CONTRIBUTE.md | 22
-rw-r--r--  docs/FAQ | 30
-rw-r--r--  docs/HELP-US.md | 2
-rw-r--r--  docs/HTTP-COOKIES.md | 2
-rw-r--r--  docs/INTERNALS.md | 7
-rw-r--r--  docs/KNOWN_BUGS | 10
-rw-r--r--  docs/MAIL-ETIQUETTE | 22
-rw-r--r--  docs/MANUAL.md | 26
-rw-r--r--  docs/SECURITY-PROCESS.md | 10
-rw-r--r--  docs/TODO | 6
-rw-r--r--  docs/TheArtOfHttpScripting.md | 32
-rw-r--r--  docs/URL-SYNTAX.md | 4
-rw-r--r--  docs/cmdline-opts/cookie.d | 6
-rw-r--r--  docs/cmdline-opts/ftp-method.d | 2
-rw-r--r--  docs/cmdline-opts/netrc.d | 6
-rw-r--r--  docs/cmdline-opts/trace-ascii.d | 6
-rw-r--r--  docs/cmdline-opts/user.d | 4
-rw-r--r--  docs/libcurl/curl_easy_pause.3 | 6
-rw-r--r--  docs/libcurl/curl_easy_send.3 | 4
-rw-r--r--  docs/libcurl/curl_mime_data.3 | 2
-rw-r--r--  docs/libcurl/curl_multi_info_read.3 | 8
-rw-r--r--  docs/libcurl/curl_multi_perform.3 | 6
-rw-r--r--  docs/libcurl/curl_multi_timeout.3 | 5
-rw-r--r--  docs/libcurl/libcurl-easy.3 | 8
-rw-r--r--  docs/libcurl/libcurl-security.3 | 18
-rw-r--r--  docs/libcurl/libcurl-thread.3 | 8
-rw-r--r--  docs/libcurl/libcurl-tutorial.3 | 64
-rw-r--r--  docs/libcurl/opts/CURLINFO_APPCONNECT_TIME.3 | 8
-rw-r--r--  docs/libcurl/opts/CURLINFO_APPCONNECT_TIME_T.3 | 14
-rw-r--r--  docs/libcurl/opts/CURLINFO_EFFECTIVE_METHOD.3 | 6
-rw-r--r--  docs/libcurl/opts/CURLINFO_EFFECTIVE_URL.3 | 6
-rw-r--r--  docs/libcurl/opts/CURLOPT_CURLU.3 | 6
-rw-r--r--  docs/libcurl/opts/CURLOPT_DNS_CACHE_TIMEOUT.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLOPT_ERRORBUFFER.3 | 6
-rw-r--r--  docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.3 | 4
-rw-r--r--  docs/libcurl/opts/CURLOPT_HEADERFUNCTION.3 | 10
-rw-r--r--  docs/libcurl/opts/CURLOPT_HTTP200ALIASES.3 | 10
-rw-r--r--  docs/libcurl/opts/CURLOPT_MAXLIFETIME_CONN.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLOPT_PRE_PROXY.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLOPT_PROGRESSFUNCTION.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLOPT_PROXY.3 | 2
-rw-r--r--  docs/libcurl/opts/CURLOPT_PROXYTYPE.3 | 8
-rw-r--r--  docs/libcurl/opts/CURLOPT_URL.3 | 6
-rw-r--r--  docs/libcurl/opts/CURLOPT_XFERINFOFUNCTION.3 | 2
46 files changed, 217 insertions, 221 deletions
diff --git a/docs/CODE_REVIEW.md b/docs/CODE_REVIEW.md
index e6a28a600..b18f4a607 100644
--- a/docs/CODE_REVIEW.md
+++ b/docs/CODE_REVIEW.md
@@ -36,9 +36,9 @@ Changing the API and the ABI may be fine in a change but it needs to be done
deliberately and carefully. If not, a reviewer must help the author to realize
the mistake.
-curl and libcurl are similarly very strict on not modifying existing
-behavior. API and ABI stability is not enough, the behavior should also remain
-intact as far as possible.
+curl and libcurl are similarly strict on not modifying existing behavior. API
+and ABI stability is not enough, the behavior should also remain intact as far
+as possible.
## Code style
diff --git a/docs/CODE_STYLE.md b/docs/CODE_STYLE.md
index 8ba1e038d..6e2b1e2bd 100644
--- a/docs/CODE_STYLE.md
+++ b/docs/CODE_STYLE.md
@@ -2,7 +2,7 @@
Source code that has a common style is easier to read than code that uses
different styles in different places. It helps making the code feel like one
-single code base. Easy-to-read is a very important property of code and helps
+single code base. Easy-to-read is an important property of code and helps
making it easier to review when new things are added and it helps debugging
code when developers are trying to figure out why things go wrong. A unified
style is more important than individual contributors having their own personal
@@ -56,10 +56,10 @@ introduced in the C standard until C99. We use only __/* comments */__.
## Long lines
Source code in curl may never be wider than 79 columns and there are two
-reasons for maintaining this even in the modern era of very large and high
+reasons for maintaining this even in the modern era of large and high
resolution screens:
-1. Narrower columns are easier to read than very wide ones. There's a reason
+1. Narrower columns are easier to read than wide ones. There's a reason
newspapers have used columns for decades or centuries.
2. Narrower columns allow developers to easier show multiple pieces of code
@@ -154,8 +154,8 @@ if(!ptr)
## New block on a new line
-We never write multiple statements on the same source line, even for very
-short if() conditions.
+We never write multiple statements on the same source line, even for short
+if() conditions.
```c
if(a)
diff --git a/docs/CONTRIBUTE.md b/docs/CONTRIBUTE.md
index 4d278b219..8469b6043 100644
--- a/docs/CONTRIBUTE.md
+++ b/docs/CONTRIBUTE.md
@@ -96,9 +96,9 @@ and regression in the future.
### Patch Against Recent Sources
Please try to get the latest available sources to make your patches against.
-It makes the lives of the developers so much easier. The very best is if you
-get the most up-to-date sources from the git repository, but the latest
-release archive is quite OK as well!
+It makes the lives of the developers so much easier. The best is if you get
+the most up-to-date sources from the git repository, but the latest release
+archive is quite OK as well!
### Documentation
@@ -120,8 +120,8 @@ in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it won't end up as a heavy burden on a single person!
-If you don't have test cases or perhaps you have done something that is very
-hard to write tests for, do explain exactly how you have otherwise tested and
+If you don't have test cases or perhaps you have done something that is hard
+to write tests for, do explain exactly how you have otherwise tested and
verified your changes.
## Sharing Your Changes
@@ -139,9 +139,9 @@ risks stalling and eventually just getting deleted without action. As a
submitter of a change, you are the owner of that change until it has been merged.
Respond on the list or on github about the change and answer questions and/or
-fix nits/flaws. This is very important. We will take lack of replies as a
-sign that you're not very anxious to get your patch accepted and we tend to
-simply drop such changes.
+fix nits/flaws. This is important. We will take lack of replies as a sign that
+you're not anxious to get your patch accepted and we tend to simply drop such
+changes.
### About pull requests
@@ -242,9 +242,9 @@ you commit
### Write Access to git Repository
-If you are a very frequent contributor, you may be given push access to the
-git repository and then you'll be able to push your changes straight into the
-git repo instead of sending changes as pull requests or by mail as patches.
+If you are a frequent contributor, you may be given push access to the git
+repository and then you'll be able to push your changes straight into the git
+repo instead of sending changes as pull requests or by mail as patches.
Just ask if this is what you'd want. You will be required to have posted
several high quality patches first, before you can be granted push access.
diff --git a/docs/FAQ b/docs/FAQ
index 3e029ee87..1f323e7bd 100644
--- a/docs/FAQ
+++ b/docs/FAQ
@@ -215,15 +215,15 @@ FAQ
another tool that uses libcurl.
We do not add things to curl that other small and available tools already do
- very well at the side. curl's output can be piped into another program or
+ well at the side. curl's output can be piped into another program or
redirected to another file for the next program to interpret.
We focus on protocol related issues and improvements. If you want to do more
- magic with the supported protocols than curl currently does, chances are good
- we will agree. If you want to add more protocols, we may very well agree.
+ magic with the supported protocols than curl currently does, chances are
+ good we will agree. If you want to add more protocols, we may agree.
If you want someone else to do all the work while you wait for us to
- implement it for you, that is not a very friendly attitude. We spend a
+ implement it for you, that is not a friendly attitude. We spend a
considerable time already on maintaining and developing curl. In order to
get more out of us, you should consider trading in some of your time and
effort in return. Simply go to the GitHub repo which resides at
@@ -407,14 +407,14 @@ FAQ
The reason why static libraries is much harder to deal with is that for them
we don't get any help but the script itself must know or check what more
libraries that are needed (with shared libraries, that dependency "chain" is
- handled automatically). This is a very error-prone process and one that also
+ handled automatically). This is an error-prone process and one that also
tends to vary over time depending on the release versions of the involved
components and may also differ between operating systems.
- For that reason, configure does very little attempts to actually figure this
- out and you are instead encouraged to set LIBS and LDFLAGS accordingly when
- you invoke configure, and point out the needed libraries and set the
- necessary flags yourself.
+ For that reason, configure makes few attempts to actually figure this out and
+ you are instead encouraged to set LIBS and LDFLAGS accordingly when you
+ invoke configure, and point out the needed libraries and set the necessary
+ flags yourself.
2.2 Does curl work with other SSL libraries?
@@ -878,7 +878,7 @@ FAQ
Also note that regular HTTP (using Basic authentication) and FTP passwords
are sent as cleartext across the network. All it takes for anyone to fetch
- them is to listen on the network. Eavesdropping is very easy. Use more secure
+ them is to listen on the network. Eavesdropping is easy. Use more secure
authentication methods (like Digest, Negotiate or even NTLM) or consider the
SSL-based alternatives HTTPS and FTPS.
@@ -988,7 +988,7 @@ FAQ
4.16 My HTTP POST or PUT requests are slow!
libcurl makes all POST and PUT requests (except for POST requests with a
- very tiny request body) use the "Expect: 100-continue" header. This header
+ tiny request body) use the "Expect: 100-continue" header. This header
allows the server to deny the operation early so that libcurl can bail out
before having to send any data. This is useful in authentication
cases and others.
@@ -1392,10 +1392,10 @@ FAQ
6. License Issues
- curl and libcurl are released under a MIT/X derivative license. The license is
- very liberal and should not impose a problem for your project. This section
- is just a brief summary for the cases we get the most questions. (Parts of
- this section was much enhanced by Bjorn Reese.)
+ curl and libcurl are released under a MIT/X derivative license. The license
+ is liberal and should not impose a problem for your project. This section is
+ just a brief summary for the cases we get the most questions. (Parts of this
+ section was much enhanced by Bjorn Reese.)
We are not lawyers and this is not legal advice. You should probably consult
one if you want true and accurate legal insights without our prejudice. Note
diff --git a/docs/HELP-US.md b/docs/HELP-US.md
index 257845190..714fef30d 100644
--- a/docs/HELP-US.md
+++ b/docs/HELP-US.md
@@ -19,7 +19,7 @@ down and report the bug. Or make your first pull request with a fix for that.
Some projects mark small issues as "beginner friendly", "bite-sized" or
similar. We don't do that in curl since such issues never linger around long
-enough. Simple issues get handled very fast.
+enough. Simple issues get handled fast.
If you're looking for a smaller or simpler task in the project to help out
with as an entry-point into the project, perhaps because you are a newcomer or
diff --git a/docs/HTTP-COOKIES.md b/docs/HTTP-COOKIES.md
index 2ef1a60a0..76e3dcb69 100644
--- a/docs/HTTP-COOKIES.md
+++ b/docs/HTTP-COOKIES.md
@@ -14,7 +14,7 @@
Cookies are set to the client with the Set-Cookie: header and are sent to
servers with the Cookie: header.
- For a very long time, the only spec explaining how to use cookies was the
+ For a long time, the only spec explaining how to use cookies was the
original [Netscape spec from 1994](https://curl.se/rfc/cookie_spec.html).
In 2011, [RFC6265](https://www.ietf.org/rfc/rfc6265.txt) was finally
diff --git a/docs/INTERNALS.md b/docs/INTERNALS.md
index 176ca5257..3af760648 100644
--- a/docs/INTERNALS.md
+++ b/docs/INTERNALS.md
@@ -467,7 +467,7 @@ Return Codes and Informationals
I've made things simple. Almost every function in libcurl returns a CURLcode,
that must be `CURLE_OK` if everything is OK or otherwise a suitable error
- code as the `curl/curl.h` include file defines. The very spot that detects an
+ code as the `curl/curl.h` include file defines. The place that detects an
error must use the `Curl_failf()` function to set the human-readable error
description.
@@ -797,7 +797,7 @@ Track Down Memory Leaks
tests/memanalyze.pl dump
This now outputs a report on what resources that were allocated but never
- freed etc. This report is very fine for posting to the list!
+ freed etc. This report is fine for posting to the list!
If this doesn't produce any output, no leak was detected in libcurl. Then
the leak is mostly likely to be in your code.
@@ -812,8 +812,7 @@ Track Down Memory Leaks
1. The application can use whatever event system it likes as it gets info
from libcurl about what file descriptors libcurl waits for what action
- on. (The previous API returns `fd_sets` which is very
- `select()`-centric).
+ on. (The previous API returns `fd_sets` which is `select()`-centric).
2. When the application discovers action on a single socket, it calls
libcurl and informs that there was action on this particular socket and
diff --git a/docs/KNOWN_BUGS b/docs/KNOWN_BUGS
index 173fe4c9e..46a02aedc 100644
--- a/docs/KNOWN_BUGS
+++ b/docs/KNOWN_BUGS
@@ -288,7 +288,7 @@ problems may have been fixed or changed somewhat since this was written!
get "good" random) so applications trying to avoid the init for
performance reasons would do wrong anyway
- D) never very carefully documented so all this mostly just happened to work
+ D) not documented carefully so all this mostly just happened to work
for some users
However, in spite of the problems with the feature, there were some users who
@@ -529,10 +529,10 @@ problems may have been fixed or changed somewhat since this was written!
5.12 flaky Windows CI builds
We run many CI builds for each commit and PR on github, and especially a
- number of the Windows builds are very flaky. This means that we rarely get
- all CI builds go green and complete without errors. This is very unfortunate
- as it makes us sometimes miss actual build problems and it is surprising to
- newcomers to the project who (rightfully) don't expect this.
+ number of the Windows builds are flaky. This means that we rarely get all CI
+ builds go green and complete without errors. This is unfortunate as it makes
+ us sometimes miss actual build problems and it is surprising to newcomers to
+ the project who (rightfully) don't expect this.
See https://github.com/curl/curl/issues/6972
diff --git a/docs/MAIL-ETIQUETTE b/docs/MAIL-ETIQUETTE
index 80d06b640..63bf63e58 100644
--- a/docs/MAIL-ETIQUETTE
+++ b/docs/MAIL-ETIQUETTE
@@ -39,9 +39,9 @@ MAIL ETIQUETTE
Each mailing list is targeted to a specific set of users and subjects,
please use the one or the ones that suit you the most.
- Each mailing list has hundreds up to thousands of readers, meaning that
- each mail sent will be received and read by a very large number of people.
- People from various cultures, regions, religions and continents.
+ Each mailing list has hundreds up to thousands of readers, meaning that each
+ mail sent will be received and read by a large number of people. People
+ from various cultures, regions, religions and continents.
1.2 Netiquette
@@ -133,16 +133,16 @@ MAIL ETIQUETTE
send the email, your post will just be silently discarded.
If you posted for the first time to the mailing list, you first need to wait
- for an administrator to allow your email to go through (moderated). This normally
- happens very quickly but in case we're asleep, you may have to wait a few
- hours.
+ for an administrator to allow your email to go through (moderated). This
+ normally happens quickly but in case we're asleep, you may have to wait a
+ few hours.
Once your email goes through it is sent out to several hundred or even
- thousands of recipients. Your email may cover an area that not that many people
- know about or are interested in. Or possibly the person who knows about it
- is on vacation or under a very heavy work load right now. You may have to wait
- for a response and you should not expect to get a response at all, but
- hopefully you get an answer within a couple of days.
+ thousands of recipients. Your email may cover an area that not that many
+ people know about or are interested in. Or possibly the person who knows
+ about it is on vacation or under a heavy work load right now. You may have
+ to wait for a response and you should not expect to get a response at all,
+ but hopefully you get an answer within a couple of days.
You do yourself and all of us a service when you include as many details as
possible already in your first email. Mention your operating system and
diff --git a/docs/MANUAL.md b/docs/MANUAL.md
index a637c66c0..ae0cf6973 100644
--- a/docs/MANUAL.md
+++ b/docs/MANUAL.md
@@ -275,8 +275,8 @@ Store the HTTP headers in a separate file (headers.txt in the example):
curl --dump-header headers.txt curl.se
-Note that headers stored in a separate file can be very useful at a later time
-if you want curl to use cookies sent by the server. More about that in the
+Note that headers stored in a separate file can be useful at a later time if
+you want curl to use cookies sent by the server. More about that in the
cookies section.
## POST (HTTP)
@@ -509,8 +509,8 @@ second for 1 minute, run:
curl -Y 3000 -y 60 www.far-away-site.com
-This can very well be used in combination with the overall time limit, so
-that the above operation must be completed in whole within 30 minutes:
+This can be used in combination with the overall time limit, so that the above
+operation must be completed in whole within 30 minutes:
curl -m 1800 -Y 3000 -y 60 www.far-away-site.com
@@ -583,9 +583,9 @@ tables etc:
## Extra Headers
-When using curl in your own very special programs, you may end up needing
-to pass on your own custom headers when getting a web page. You can do
-this by using the `-H` flag.
+When using curl in your own programs, you may end up needing to pass on your
+own custom headers when getting a web page. You can do this by using the `-H`
+flag.
Example, send the header `X-you-and-me: yes` to the server when getting a
page:
@@ -608,8 +608,8 @@ directory at your ftp site, do:
curl ftp://user:passwd@my.site.com/README
-But if you want the README file from the root directory of that very same
-site, you need to specify the absolute file name:
+But if you want the README file from the root directory of that same site, you
+need to specify the absolute file name:
curl ftp://user:passwd@my.site.com//README
@@ -839,7 +839,7 @@ Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
`--netrc-optional` options). This is not restricted to just FTP, so curl can
use it for all protocols where authentication is used.
-A very simple `.netrc` file could look something like:
+A simple `.netrc` file could look something like:
machine curl.se login iamdaniel password mysecret
@@ -869,9 +869,9 @@ curl ask for one and you already entered the real password to kinit/kauth.
## TELNET
-The curl telnet support is basic and very easy to use. Curl passes all data
-passed to it on stdin to the remote server. Connect to a remote telnet server
-using a command line similar to:
+The curl telnet support is basic and easy to use. Curl passes all data passed
+to it on stdin to the remote server. Connect to a remote telnet server using a
+command line similar to:
curl telnet://remote.server.com
diff --git a/docs/SECURITY-PROCESS.md b/docs/SECURITY-PROCESS.md
index 383d0c070..3a909b468 100644
--- a/docs/SECURITY-PROCESS.md
+++ b/docs/SECURITY-PROCESS.md
@@ -99,11 +99,11 @@ This is a private mailing list for discussions on and about curl security
issues.
Who is on this list? There are a couple of criteria you must meet, and then we
-might ask you to join the list or you can ask to join it. It really isn't very
-formal. We basically only require that you have a long-term presence in the
-curl project and you have shown an understanding for the project and its way
-of working. You must've been around for a good while and you should have no
-plans in vanishing in the near future.
+might ask you to join the list or you can ask to join it. It really isn't a
+formal process. We basically only require that you have a long-term presence
+in the curl project and you have shown an understanding for the project and
+its way of working. You must've been around for a good while and you should
+have no plans in vanishing in the near future.
We do not make the list of participants public mostly because it tends to vary
somewhat over time and a list somewhere will only risk getting outdated.
diff --git a/docs/TODO b/docs/TODO
index eb7f75b47..4ae909a76 100644
--- a/docs/TODO
+++ b/docs/TODO
@@ -832,7 +832,7 @@
CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
certificates when comparing the pinned keys. Therefore it is not compatible
with "HTTP Public Key Pinning" as there also intermediate and root
- certificates can be pinned. This is very useful as it prevents webadmins from
+ certificates can be pinned. This is useful as it prevents webadmins from
"locking themselves out of their servers".
Adding this feature would make curls pinning 100% compatible to HPKP and
@@ -981,8 +981,8 @@
18.6 Option to make -Z merge lined based outputs on stdout
When a user requests multiple lined based files using -Z and sends them to
- stdout, curl will not "merge" and send complete lines fine but may very well
- send partial lines from several sources.
+ stdout, curl will not "merge" and send complete lines fine but may send
+ partial lines from several sources.
https://github.com/curl/curl/issues/5175
diff --git a/docs/TheArtOfHttpScripting.md b/docs/TheArtOfHttpScripting.md
index a6eb8b354..054c6267b 100644
--- a/docs/TheArtOfHttpScripting.md
+++ b/docs/TheArtOfHttpScripting.md
@@ -21,7 +21,7 @@
## The HTTP Protocol
- HTTP is the protocol used to fetch data from web servers. It is a very simple
+ HTTP is the protocol used to fetch data from web servers. It is a simple
protocol that is built upon TCP/IP. The protocol also allows information to
get sent to the server from the client using a few different methods, as will
be shown here.
@@ -252,13 +252,13 @@
your browser. That's generally a good thing when you want to be able to
bookmark that page with your given data, but it is an obvious disadvantage if
you entered secret information in one of the fields or if there are a large
- amount of fields creating a very long and unreadable URL.
+ amount of fields creating a long and unreadable URL.
The HTTP protocol then offers the POST method. This way the client sends the
data separated from the URL and thus you won't see any of it in the URL
address field.
- The form would look very similar to the previous one:
+ The form would look similar to the previous one:
```html
<form method="POST" action="junk.cgi">
@@ -313,10 +313,10 @@
## Hidden Fields
- A very common way for HTML based applications to pass state information
- between pages is to add hidden fields to the forms. Hidden fields are already
- filled in, they aren't displayed to the user and they get passed along just
- as all the other fields.
+ A common way for HTML based applications to pass state information between
+ pages is to add hidden fields to the forms. Hidden fields are already filled
+ in, they aren't displayed to the user and they get passed along just as all
+ the other fields.
A similar example form with one visible field, one hidden field and one
submit button could look like:
@@ -337,8 +337,8 @@
## Figure Out What A POST Looks Like
When you're about fill in a form and send to a server by using curl instead
- of a browser, you're of course very interested in sending a POST exactly the
- way your browser does.
+ of a browser, you're of course interested in sending a POST exactly the way
+ your browser does.
An easy way to get to see this, is to save the HTML page with the form on
your local disk, modify the 'method' to a GET, and press the submit button
@@ -408,9 +408,9 @@
able to watch your passwords if you pass them as plain command line
options. There are ways to circumvent this.
- It is worth noting that while this is how HTTP Authentication works, very
- many websites will not use this concept when they provide logins etc. See the
- Web Login chapter further below for more details on that.
+ It is worth noting that while this is how HTTP Authentication works, many
+ websites will not use this concept when they provide logins etc. See the Web
+ Login chapter further below for more details on that.
# More HTTP Headers
@@ -430,7 +430,7 @@
## User Agent
- Very similar to the referer field, all HTTP requests may set the User-Agent
+ Similar to the referer field, all HTTP requests may set the User-Agent
field. It names what user agent (client) that is being used. Many
applications use this information to decide how to display pages. Silly web
programmers try to make different pages for users of different browsers to
@@ -690,9 +690,9 @@
## Check what the browsers do
- A very good helper to make sure you do this right, is the web browsers'
- developers tools that let you view all headers you send and receive (even
- when using HTTPS).
+ A good helper to make sure you do this right, is the web browsers' developers
+ tools that let you view all headers you send and receive (even when using
+ HTTPS).
A more raw approach is to capture the HTTP traffic on the network with tools
such as Wireshark or tcpdump and check what headers that were sent and
diff --git a/docs/URL-SYNTAX.md b/docs/URL-SYNTAX.md
index 4921f81fc..950222f81 100644
--- a/docs/URL-SYNTAX.md
+++ b/docs/URL-SYNTAX.md
@@ -20,7 +20,7 @@ changes over time.
URL parsers as implemented in browsers, libraries and tools usually opt to
support one of the mentioned specifications. Bugs, differences in
interpretations and the moving nature of the WHATWG spec does however make it
-very unlikely that multiple parsers treat URLs the exact same way!
+unlikely that multiple parsers treat URLs the exact same way!
## Security
@@ -43,7 +43,7 @@ security concerns:
1. If you have an application that runs as or in a server application, getting
an unfiltered URL can trick your application to access a local resource
instead of a remote resource. Protecting yourself against localhost accesses
- is very hard when accepting user provided URLs.
+ is hard when accepting user provided URLs.
2. Such custom URLs can access other ports than you planned as port numbers
are part of the regular URL format. The combination of a local host and a
diff --git a/docs/cmdline-opts/cookie.d b/docs/cmdline-opts/cookie.d
index 7723ed179..bf2597ee2 100644
--- a/docs/cmdline-opts/cookie.d
+++ b/docs/cmdline-opts/cookie.d
@@ -32,6 +32,6 @@ use the Netscape format.
This option can be used multiple times.
-Users very often want to both read cookies from a file and write updated
-cookies back to a file, so using both --cookie and --cookie-jar in the same
-command line is common.
+Users often want to both read cookies from a file and write updated cookies
+back to a file, so using both --cookie and --cookie-jar in the same command
+line is common.
diff --git a/docs/cmdline-opts/ftp-method.d b/docs/cmdline-opts/ftp-method.d
index 4af0baf42..82ab4819a 100644
--- a/docs/cmdline-opts/ftp-method.d
+++ b/docs/cmdline-opts/ftp-method.d
@@ -13,7 +13,7 @@ server. The method argument should be one of the following alternatives:
.RS
.IP multicwd
curl does a single CWD operation for each path part in the given URL. For deep
-hierarchies this means very many commands. This is how RFC 1738 says it should
+hierarchies this means many commands. This is how RFC 1738 says it should
be done. This is the default but the slowest behavior.
.IP nocwd
curl does no CWD at all. curl will do SIZE, RETR, STOR etc and give a full
diff --git a/docs/cmdline-opts/netrc.d b/docs/cmdline-opts/netrc.d
index d6ca28f53..42d4db746 100644
--- a/docs/cmdline-opts/netrc.d
+++ b/docs/cmdline-opts/netrc.d
@@ -13,8 +13,8 @@ complain if that file doesn't have the right permissions (it should be
neither world- nor group-readable). The environment variable "HOME" is used
to find the home directory.
-A quick and very simple example of how to setup a *.netrc* to allow curl
-to FTP to the machine host.domain.com with user name \&'myself' and password
-\&'secret' should look similar to:
+A quick and simple example of how to setup a *.netrc* to allow curl to FTP to
+the machine host.domain.com with user name \&'myself' and password \&'secret'
+should look similar to:
.B "machine host.domain.com login myself password secret"
diff --git a/docs/cmdline-opts/trace-ascii.d b/docs/cmdline-opts/trace-ascii.d
index 4da81cbd9..914392a15 100644
--- a/docs/cmdline-opts/trace-ascii.d
+++ b/docs/cmdline-opts/trace-ascii.d
@@ -10,9 +10,9 @@ Enables a full trace dump of all incoming and outgoing data, including
descriptive information, to the given output file. Use "-" as filename to have
the output sent to stdout.
-This is very similar to --trace, but leaves out the hex part and only shows
-the ASCII part of the dump. It makes smaller output that might be easier to
-read for untrained humans.
+This is similar to --trace, but leaves out the hex part and only shows the
+ASCII part of the dump. It makes smaller output that might be easier to read
+for untrained humans.
This option is global and does not need to be specified for each use of
--next.
diff --git a/docs/cmdline-opts/user.d b/docs/cmdline-opts/user.d
index 2848f4410..b5f43f82d 100644
--- a/docs/cmdline-opts/user.d
+++ b/docs/cmdline-opts/user.d
@@ -18,8 +18,8 @@ still.
On systems where it works, curl will hide the given option argument from
process listings. This is not enough to protect credentials from possibly
getting seen by other users on the same system as they will still be visible
-for a brief moment before cleared. Such sensitive data should be retrieved
-from a file instead or similar and never used in clear text in a command line.
+for a moment before cleared. Such sensitive data should be retrieved from a
+file instead or similar and never used in clear text in a command line.
When using Kerberos V5 with a Windows based server you should include the
Windows domain name in the user name, in order for the server to successfully
diff --git a/docs/libcurl/curl_easy_pause.3 b/docs/libcurl/curl_easy_pause.3
index 0558b9c64..265db6ec4 100644
--- a/docs/libcurl/curl_easy_pause.3
+++ b/docs/libcurl/curl_easy_pause.3
@@ -99,9 +99,9 @@ If the downloaded data is compressed and is asked to get uncompressed
automatically on download, libcurl will continue to uncompress the entire
downloaded chunk and it will cache the data uncompressed. This has the side-
effect that if you download something that is compressed a lot, it can result
-in a very large data amount needing to be allocated to save the data during
-the pause. This said, you should probably consider not using paused receiving
-if you allow libcurl to uncompress data automatically.
+in a large data amount needing to be allocated to save the data during the
+pause. This said, you should probably consider not using paused receiving if
+you allow libcurl to uncompress data automatically.
.SH AVAILABILITY
Added in 7.18.0.
.SH RETURN VALUE
diff --git a/docs/libcurl/curl_easy_send.3 b/docs/libcurl/curl_easy_send.3
index 6430c54e0..292a3e3d3 100644
--- a/docs/libcurl/curl_easy_send.3
+++ b/docs/libcurl/curl_easy_send.3
@@ -73,8 +73,8 @@ sent was for internal SSL processing, and no other data could be sent.
Added in 7.18.2.
.SH RETURN VALUE
On success, returns \fBCURLE_OK\fP and stores the number of bytes actually
-sent into \fB*n\fP. Note that this may very well be less than the amount you
-wanted to send.
+sent into \fB*n\fP. Note that this may be less than the amount you wanted to
+send.
On failure, returns the appropriate error code.
diff --git a/docs/libcurl/curl_mime_data.3 b/docs/libcurl/curl_mime_data.3
index e482555ac..abb84789c 100644
--- a/docs/libcurl/curl_mime_data.3
+++ b/docs/libcurl/curl_mime_data.3
@@ -42,7 +42,7 @@ Setting a part's contents twice is valid: only the value set by the last call
is retained. It is possible to unassign part's contents by setting
\fIdata\fP to NULL.
-Setting very large data is memory consuming: one might consider using
+Setting large data is memory consuming: one might consider using
\fIcurl_mime_data_cb(3)\fP in such a case.
.SH EXAMPLE
.nf
diff --git a/docs/libcurl/curl_multi_info_read.3 b/docs/libcurl/curl_multi_info_read.3
index 6a3d9f748..cd4c725e9 100644
--- a/docs/libcurl/curl_multi_info_read.3
+++ b/docs/libcurl/curl_multi_info_read.3
@@ -48,10 +48,10 @@ is emptied.
calling \fIcurl_multi_cleanup(3)\fP, \fIcurl_multi_remove_handle(3)\fP or
\fIcurl_easy_cleanup(3)\fP.
-The 'CURLMsg' struct is very simple and only contains very basic information.
-If more involved information is wanted, the particular "easy handle" is
-present in that struct and can be used in subsequent regular
-\fIcurl_easy_getinfo(3)\fP calls (or similar):
+The 'CURLMsg' struct is simple and only contains basic information. If more
+involved information is wanted, the particular "easy handle" is present in
+that struct and can be used in subsequent regular \fIcurl_easy_getinfo(3)\fP
+calls (or similar):
.nf
struct CURLMsg {
diff --git a/docs/libcurl/curl_multi_perform.3 b/docs/libcurl/curl_multi_perform.3
index 90ccf2b60..aeb346eb1 100644
--- a/docs/libcurl/curl_multi_perform.3
+++ b/docs/libcurl/curl_multi_perform.3
@@ -46,9 +46,9 @@ is less than the amount of easy handles you've added to the multi handle), you
know that there is one or more transfers less "running". You can then call
\fIcurl_multi_info_read(3)\fP to get information about each individual
completed transfer, and that returned info includes CURLcode and more. If an
-added handle fails very quickly, it may never be counted as a running_handle.
-You could use \fIcurl_multi_info_read(3)\fP to track actual status of the
-added handles in that case.
+added handle fails quickly, it may never be counted as a running_handle. You
+could use \fIcurl_multi_info_read(3)\fP to track actual status of the added
+handles in that case.
When \fIrunning_handles\fP is set to zero (0) on the return of this function,
there is no longer any transfers in progress.
diff --git a/docs/libcurl/curl_multi_timeout.3 b/docs/libcurl/curl_multi_timeout.3
index 4f169332c..125738783 100644
--- a/docs/libcurl/curl_multi_timeout.3
+++ b/docs/libcurl/curl_multi_timeout.3
@@ -38,9 +38,8 @@ to CURL_SOCKET_TIMEOUT, or call \fIcurl_multi_perform(3)\fP if you're using
the simpler and older multi interface approach.
The timeout value returned in the long \fBtimeout\fP points to, is in number
-of milliseconds at this very moment. If 0, it means you should proceed
-immediately without waiting for anything. If it returns -1, there's no timeout
-at all set.
+of milliseconds at this moment. If 0, it means you should proceed immediately
+without waiting for anything. If it returns -1, there's no timeout at all set.
An application that uses the multi_socket API SHOULD NOT use this function, but
SHOULD instead use \fIcurl_multi_setopt(3)\fP and its
diff --git a/docs/libcurl/libcurl-easy.3 b/docs/libcurl/libcurl-easy.3
index 0fe58159a..efa78fd08 100644
--- a/docs/libcurl/libcurl-easy.3
+++ b/docs/libcurl/libcurl-easy.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -35,9 +35,9 @@ is available etc. \fIcurl_easy_setopt(3)\fP is used for all this.
\fICURLOPT_URL(3)\fP is only option you really must set, as otherwise there
can be no transfer. Another commonly used option is \fICURLOPT_VERBOSE(3)\fP
-that will help you see what libcurl is doing under the hood, very useful when
-debugging for example. The \fIcurl_easy_setopt(3)\fP man page has a full index
-of the over 200 available options.
+that will help you see what libcurl is doing under the hood, which is useful
+when debugging for example. The \fIcurl_easy_setopt(3)\fP man page has a full
+index of the over 200 available options.
If you at any point would like to blank all previously set options for a
single easy handle, you can call \fIcurl_easy_reset(3)\fP and you can also
diff --git a/docs/libcurl/libcurl-security.3 b/docs/libcurl/libcurl-security.3
index b83112f2a..5e9db9152 100644
--- a/docs/libcurl/libcurl-security.3
+++ b/docs/libcurl/libcurl-security.3
@@ -42,9 +42,9 @@ many of these and similar types of weaknesses of which application writers
should be aware.
.SH "Command Lines"
If you use a command line tool (such as curl) that uses libcurl, and you give
-options to the tool on the command line those options can very likely get read
-by other users of your system when they use 'ps' or other tools to list
-currently running processes.
+options to the tool on the command line those options can get read by other
+users of your system when they use 'ps' or other tools to list currently
+running processes.
To avoid these problems, never feed sensitive things to programs using command
line options. Write them to a protected file and use the \-K option to avoid
@@ -64,11 +64,11 @@ To avoid these problems, don't use .netrc files and never store passwords in
plain text anywhere.
.SH "Clear Text Passwords"
Many of the protocols libcurl supports send name and password unencrypted as
-clear text (HTTP Basic authentication, FTP, TELNET etc). It is very easy for
-anyone on your network or a network nearby yours to just fire up a network
-analyzer tool and eavesdrop on your passwords. Don't let the fact that HTTP
-Basic uses base64 encoded passwords fool you. They may not look readable at a
-first glance, but they very easily "deciphered" by anyone within seconds.
+clear text (HTTP Basic authentication, FTP, TELNET etc). It is easy for anyone
+on your network or a network nearby yours to just fire up a network analyzer
+tool and eavesdrop on your passwords. Don't let the fact that HTTP Basic uses
+base64 encoded passwords fool you. They may not look readable at a first
+glance, but they are easily "deciphered" by anyone within seconds.
To avoid this problem, use an authentication mechanism or other protocol that
doesn't let snoopers see your password: Digest, CRAM-MD5, Kerberos, SPNEGO or
@@ -370,7 +370,7 @@ information with faked data.
.SH "Setuid applications using libcurl"
libcurl-using applications that set the 'setuid' bit to run with elevated or
modified rights also implicitly give that extra power to libcurl and this
-should only be done after very careful considerations.
+should only be done after careful considerations.
Giving setuid powers to the application means that libcurl can save files using
those new rights (if for example the `SSLKEYLOGFILE` environment variable is
diff --git a/docs/libcurl/libcurl-thread.3 b/docs/libcurl/libcurl-thread.3
index ff9c2304f..387e407f9 100644
--- a/docs/libcurl/libcurl-thread.3
+++ b/docs/libcurl/libcurl-thread.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2015 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2015 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -84,9 +84,9 @@ threaded situation as there will be race where libcurl risks restoring the
former signal handler while another thread should still ignore it.
.IP "Name resolving"
\fBgethostby* functions and other system calls.\fP These functions, provided
-by your operating system, must be thread safe. It is very important that
-libcurl can find and use thread safe versions of these and other system calls,
-as otherwise it can't function fully thread safe. Some operating systems are
+by your operating system, must be thread safe. It is important that libcurl
+can find and use thread safe versions of these and other system calls, as
+otherwise it can't function fully thread safe. Some operating systems are
known to have faulty thread implementations. We have previously received
problem reports on *BSD (at least in the past, they may be working fine these
days). Some operating systems that are known to have solid and working thread
diff --git a/docs/libcurl/libcurl-tutorial.3 b/docs/libcurl/libcurl-tutorial.3
index 3aafcb853..6c053ec17 100644
--- a/docs/libcurl/libcurl-tutorial.3
+++ b/docs/libcurl/libcurl-tutorial.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -86,9 +86,9 @@ The people behind libcurl have put a considerable effort to make libcurl work
on a large amount of different operating systems and environments.
You program libcurl the same way on all platforms that libcurl runs on. There
-are only very few minor considerations that differ. If you just make sure to
-write your code portable enough, you may very well create yourself a very
-portable program. libcurl shouldn't stop you from that.
+are only a few minor details that differ. If you just make sure to write your
+code portable enough, you can create a portable program. libcurl shouldn't
+stop you from that.
.SH "Global Preparation"
The program must initialize some of the libcurl functionality globally. That
@@ -120,7 +120,7 @@ libcurl has a default protection mechanism that detects if
\fIcurl_global_init(3)\fP hasn't been called by the time
\fIcurl_easy_perform(3)\fP is called and if that is the case, libcurl runs the
function itself with a guessed bit pattern. Please note that depending solely
-on this is not considered nice nor very good.
+on this is not considered nice nor good.
When the program no longer uses libcurl, it should call
\fIcurl_global_cleanup(3)\fP, which is the opposite of the init call. It will
@@ -284,14 +284,13 @@ If \fICURLOPT_VERBOSE(3)\fP is not enough, you increase the level of debug
data your application receive by using the \fICURLOPT_DEBUGFUNCTION(3)\fP.
Getting some in-depth knowledge about the protocols involved is never wrong,
-and if you're trying to do funny things, you might very well understand
-libcurl and how to use it better if you study the appropriate RFC documents
-at least briefly.
+and if you're trying to do funny things, you might understand libcurl and how
+to use it better if you study the appropriate RFC documents at least briefly.
.SH "Upload Data to a Remote Site"
libcurl tries to keep a protocol independent approach to most transfers, thus
-uploading to a remote FTP site is very similar to uploading data to an HTTP
-server with a PUT request.
+uploading to a remote FTP site is similar to uploading data to an HTTP server
+with a PUT request.
Of course, first you either create an easy handle or you re-use one existing
one. Then you set the URL to operate on just like before. This is the remote
@@ -373,7 +372,7 @@ curl use this file, use the \fICURLOPT_NETRC(3)\fP option:
curl_easy_setopt(easyhandle, CURLOPT_NETRC, 1L);
-And a very basic example of how such a .netrc file may look like:
+And a basic example of how such a .netrc file may look like:
.nf
machine myhost.mydomain.com
@@ -795,10 +794,9 @@ For HTTP proxies: the fact that the proxy is an HTTP proxy puts certain
restrictions on what can actually happen. A requested URL that might not be a
HTTP URL will be still be passed to the HTTP proxy to deliver back to
libcurl. This happens transparently, and an application may not need to
-know. I say "may", because at times it is very important to understand that
-all operations over an HTTP proxy use the HTTP protocol. For example, you
-can't invoke your own custom FTP commands or even proper FTP directory
-listings.
+know. I say "may", because at times it is important to understand that all
+operations over an HTTP proxy use the HTTP protocol. For example, you can't
+invoke your own custom FTP commands or even proper FTP directory listings.
.IP "Proxy Options"
@@ -942,9 +940,9 @@ may also be added in the future.
Each easy handle will attempt to keep the last few connections alive for a
while in case they are to be used again. You can set the size of this "cache"
-with the \fICURLOPT_MAXCONNECTS(3)\fP option. Default is 5. There is very
-seldom any point in changing this value, and if you think of changing this it
-is often just a matter of thinking again.
+with the \fICURLOPT_MAXCONNECTS(3)\fP option. Default is 5. There is rarely
+any point in changing this value, and if you think of changing this it is
+often just a matter of thinking again.
To force your upcoming request to not use an already existing connection (it
will even close one first if there happens to be one alive to the same host
@@ -986,7 +984,7 @@ libcurl is your friend here too.
.IP CUSTOMREQUEST
If just changing the actual HTTP request keyword is what you want, like when
GET, HEAD or POST is not good enough for you, \fICURLOPT_CUSTOMREQUEST(3)\fP
-is there for you. It is very simple to use:
+is there for you. It is simple to use:
curl_easy_setopt(easyhandle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
@@ -1045,9 +1043,9 @@ data size is unknown.
.IP "HTTP Version"
All HTTP requests includes the version number to tell the server which version
-we support. libcurl speaks HTTP 1.1 by default. Some very old servers don't
-like getting 1.1-requests and when dealing with stubborn old things like that,
-you can tell libcurl to use 1.0 instead by doing something like this:
+we support. libcurl speaks HTTP 1.1 by default. Some old servers don't like
+getting 1.1-requests and when dealing with stubborn old things like that, you
+can tell libcurl to use 1.0 instead by doing something like this:
curl_easy_setopt(easyhandle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
@@ -1057,14 +1055,14 @@ Not all protocols are HTTP-like, and thus the above may not help you when
you want to make, for example, your FTP transfers to behave differently.
Sending custom commands to an FTP server means that you need to send the
-commands exactly as the FTP server expects them (RFC959 is a good guide
-here), and you can only use commands that work on the control-connection
-alone. All kinds of commands that require data interchange and thus need
-a data-connection must be left to libcurl's own judgement. Also be aware
-that libcurl will do its very best to change directory to the target
-directory before doing any transfer, so if you change directory (with CWD
-or similar) you might confuse libcurl and then it might not attempt to
-transfer the file in the correct remote directory.
+commands exactly as the FTP server expects them (RFC959 is a good guide here),
+and you can only use commands that work on the control-connection alone. All
+kinds of commands that require data interchange and thus need a
+data-connection must be left to libcurl's own judgement. Also be aware that
+libcurl will do its best to change directory to the target directory before
+doing any transfer, so if you change directory (with CWD or similar) you might
+confuse libcurl and then it might not attempt to transfer the file in the
+correct remote directory.
A little example that deletes a given file before an operation:
@@ -1336,9 +1334,9 @@ with the particular file descriptors libcurl uses for the moment.
When you then call select(), it'll return when one of the file handles signal
action and you then call \fIcurl_multi_perform(3)\fP to allow libcurl to do
what it wants to do. Take note that libcurl does also feature some time-out
-code so we advise you to never use very long timeouts on select() before you
-call \fIcurl_multi_perform(3)\fP again. \fIcurl_multi_timeout(3)\fP is
-provided to help you get a suitable timeout period.
+code so we advise you to never use long timeouts on select() before you call
+\fIcurl_multi_perform(3)\fP again. \fIcurl_multi_timeout(3)\fP is provided to
+help you get a suitable timeout period.
Another precaution you should use: always call \fIcurl_multi_fdset(3)\fP
immediately before the select() call since the current set of file descriptors
diff --git a/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME.3 b/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME.3
index d3a93bb22..94e73e78c 100644
--- a/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME.3
+++ b/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -30,9 +30,9 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_APPCONNECT_TIME, double *timep
.SH DESCRIPTION
Pass a pointer to a double to receive the time, in seconds, it took from the
start until the SSL/SSH connect/handshake to the remote host was completed.
-This time is most often very near to the \fICURLINFO_PRETRANSFER_TIME(3)\fP
-time, except for cases such as HTTP pipelining where the pretransfer time can
-be delayed due to waits in line for the pipeline and more.
+This time is most often close to the \fICURLINFO_PRETRANSFER_TIME(3)\fP time,
+except for cases such as HTTP pipelining where the pretransfer time can be
+delayed due to waits in line for the pipeline and more.
When a redirect is followed, the time from each request is added together.
diff --git a/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME_T.3 b/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME_T.3
index 37f18c938..753356227 100644
--- a/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME_T.3
+++ b/docs/libcurl/opts/CURLINFO_APPCONNECT_TIME_T.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 2018 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 2018 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,12 +28,12 @@ CURLINFO_APPCONNECT_TIME_T \- get the time until the SSL/SSH handshake is comple
CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_APPCONNECT_TIME_T, curl_off_t *timep);
.SH DESCRIPTION
-Pass a pointer to a curl_off_t to receive the time, in microseconds,
-it took from the
-start until the SSL/SSH connect/handshake to the remote host was completed.
-This time is most often very near to the \fICURLINFO_PRETRANSFER_TIME_T(3)\fP
-time, except for cases such as HTTP pipelining where the pretransfer time can
-be delayed due to waits in line for the pipeline and more.
+Pass a pointer to a curl_off_t to receive the time, in microseconds, it took
+from the start until the SSL/SSH connect/handshake to the remote host was
+completed. This time is most often close to the
+\fICURLINFO_PRETRANSFER_TIME_T(3)\fP time, except for cases such as HTTP
+pipelining where the pretransfer time can be delayed due to waits in line for
+the pipeline and more.
When a redirect is followed, the time from each request is added together.
diff --git a/docs/libcurl/opts/CURLINFO_EFFECTIVE_METHOD.3 b/docs/libcurl/opts/CURLINFO_EFFECTIVE_METHOD.3
index 66c1bec2d..b9b111ad1 100644
--- a/docs/libcurl/opts/CURLINFO_EFFECTIVE_METHOD.3
+++ b/docs/libcurl/opts/CURLINFO_EFFECTIVE_METHOD.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -34,8 +34,8 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_EFFECTIVE_METHOD,
Pass in a pointer to a char pointer and get the last used effective HTTP
method.
-In cases when you've asked libcurl to follow redirects, the method may very
-well not be the same method the first request would use.
+In cases when you've asked libcurl to follow redirects, the method may not be
+the same method the first request would use.
The \fBmethodp\fP pointer will be NULL or pointing to private memory you MUST
NOT free - it gets freed when you call \fIcurl_easy_cleanup(3)\fP on the
diff --git a/docs/libcurl/opts/CURLINFO_EFFECTIVE_URL.3 b/docs/libcurl/opts/CURLINFO_EFFECTIVE_URL.3
index 47b1c9802..0a89836e6 100644
--- a/docs/libcurl/opts/CURLINFO_EFFECTIVE_URL.3
+++ b/docs/libcurl/opts/CURLINFO_EFFECTIVE_URL.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -30,8 +30,8 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_EFFECTIVE_URL, char **urlp);
.SH DESCRIPTION
Pass in a pointer to a char pointer and get the last used effective URL.
-In cases when you've asked libcurl to follow redirects, it may very well not
-be the same value you set with \fICURLOPT_URL(3)\fP.
+In cases when you've asked libcurl to follow redirects, it may not be the same
+value you set with \fICURLOPT_URL(3)\fP.
The \fBurlp\fP pointer will be NULL or pointing to private memory you MUST NOT
free - it gets freed when you call \fIcurl_easy_cleanup(3)\fP on the
diff --git a/docs/libcurl/opts/CURLOPT_CURLU.3 b/docs/libcurl/opts/CURLOPT_CURLU.3
index 5f6d4af76..364ead7c4 100644
--- a/docs/libcurl/opts/CURLOPT_CURLU.3
+++ b/docs/libcurl/opts/CURLOPT_CURLU.3
@@ -36,9 +36,9 @@ CURLU *. Setting \fICURLOPT_CURLU(3)\fP will explicitly override
transfer is started.
libcurl will use this handle and its contents read-only and will not change
-its contents. An application can very well update the contents of the URL
-handle after a transfer is done and if the same handle is then used in a
-subsequent request the updated contents will then be used.
+its contents. An application can update the contents of the URL handle after a
+transfer is done and if the same handle is then used in a subsequent request
+the updated contents will then be used.
.SH DEFAULT
The default value of this parameter is NULL.
.SH PROTOCOLS
diff --git a/docs/libcurl/opts/CURLOPT_DNS_CACHE_TIMEOUT.3 b/docs/libcurl/opts/CURLOPT_DNS_CACHE_TIMEOUT.3
index a85e9f7ca..7b2e37242 100644
--- a/docs/libcurl/opts/CURLOPT_DNS_CACHE_TIMEOUT.3
+++ b/docs/libcurl/opts/CURLOPT_DNS_CACHE_TIMEOUT.3
@@ -52,7 +52,7 @@ CURL *curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/foo.bin");
- /* only reuse addresses for a very short time */
+ /* only reuse addresses for a short time */
curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 2L);
ret = curl_easy_perform(curl);
diff --git a/docs/libcurl/opts/CURLOPT_ERRORBUFFER.3 b/docs/libcurl/opts/CURLOPT_ERRORBUFFER.3
index 55fe53d67..bb999c6c0 100644
--- a/docs/libcurl/opts/CURLOPT_ERRORBUFFER.3
+++ b/docs/libcurl/opts/CURLOPT_ERRORBUFFER.3
@@ -34,9 +34,9 @@ return code from \fIcurl_easy_perform(3)\fP and related functions. The buffer
\fBmust be at least CURL_ERROR_SIZE bytes big\fP.
You must keep the associated buffer available until libcurl no longer needs
-it. Failing to do so will cause very odd behavior or even crashes. libcurl
-will need it until you call \fIcurl_easy_cleanup(3)\fP or you set the same
-option again to use a different pointer.
+it. Failing to do so will cause odd behavior or even crashes. libcurl will
+need it until you call \fIcurl_easy_cleanup(3)\fP or you set the same option
+again to use a different pointer.
Do not rely on the contents of the buffer unless an error code was returned.
Since 7.60.0 libcurl will initialize the contents of the error buffer to an
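A minimal sketch of the lifetime rule above, assuming the usual includes: the
buffer outlives every use of the handle and is only read when an error code
was actually returned.

  char errbuf[CURL_ERROR_SIZE];
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    errbuf[0] = '\0';
    curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n",
              errbuf[0] ? errbuf : curl_easy_strerror(res));
    /* the buffer must remain valid until this cleanup call */
    curl_easy_cleanup(curl);
  }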
diff --git a/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.3 b/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.3
index 0d0ab00c8..505c106dc 100644
--- a/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.3
+++ b/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -51,7 +51,7 @@ dictates which request method it will use in the subsequent request: For 301,
will make libcurl send the same method again.
For users who think the existing location following is too naive, too simple
-or just lacks features, it is very easy to instead implement your own redirect
+or just lacks features, it is easy to instead implement your own redirect
follow logic with the use of \fIcurl_easy_getinfo(3)\fP's
\fICURLINFO_REDIRECT_URL(3)\fP option instead of using
\fICURLOPT_FOLLOWLOCATION(3)\fP.
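As a hedged sketch of such do-it-yourself redirect handling, using
CURLINFO_REDIRECT_URL instead of automatic following; the cap of ten hops is
an arbitrary choice and error handling is trimmed:

  CURL *curl = curl_easy_init();
  if(curl) {
    int hops = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/start");
    while(hops++ < 10 && curl_easy_perform(curl) == CURLE_OK) {
      char *location = NULL;
      curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &location);
      if(!location)
        break; /* no redirect offered - we are done */
      /* inspect 'location' here and decide whether to follow it */
      curl_easy_setopt(curl, CURLOPT_URL, location); /* setopt copies it */
    }
    curl_easy_cleanup(curl);
  }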
diff --git a/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.3 b/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.3
index 229bf356e..33ac8cfd7 100644
--- a/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.3
+++ b/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -38,10 +38,10 @@ shown above.
This function gets called by libcurl as soon as it has received header
data. The header callback will be called once for each header and only
-complete header lines are passed on to the callback. Parsing headers is very
-easy using this. \fIbuffer\fP points to the delivered data, and the size of
-that data is \fInitems\fP; \fIsize\fP is always 1. Do not assume that the
-header line is null-terminated!
+complete header lines are passed on to the callback. Parsing headers is easy
+to do using this callback. \fIbuffer\fP points to the delivered data, and the
+size of that data is \fInitems\fP; \fIsize\fP is always 1. Do not assume that
+the header line is null-terminated!
The pointer named \fIuserdata\fP is the one you set with the
\fICURLOPT_HEADERDATA(3)\fP option.
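To make that concrete, a small sketch of a header callback that just prints
each header line; since the line is not null-terminated it uses the given
length rather than string functions (assuming the usual includes):

  static size_t header_cb(char *buffer, size_t size, size_t nitems,
                          void *userdata)
  {
    (void)userdata;
    /* one complete header line per invocation, not null-terminated */
    fwrite(buffer, size, nitems, stdout);
    return nitems * size;
  }

  /* in the setup code */
  curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
  curl_easy_setopt(curl, CURLOPT_HEADERDATA, NULL);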
diff --git a/docs/libcurl/opts/CURLOPT_HTTP200ALIASES.3 b/docs/libcurl/opts/CURLOPT_HTTP200ALIASES.3
index 1009c66fd..66ac89ad9 100644
--- a/docs/libcurl/opts/CURLOPT_HTTP200ALIASES.3
+++ b/docs/libcurl/opts/CURLOPT_HTTP200ALIASES.3
@@ -32,11 +32,11 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_HTTP200ALIASES,
.SH DESCRIPTION
Pass a pointer to a linked list of \fIaliases\fP to be treated as valid HTTP
200 responses. Some servers respond with a custom header response line. For
-example, SHOUTcast servers respond with "ICY 200 OK". Also some very old
-Icecast 1.3.x servers will respond like that for certain user agent headers or
-in absence of such. By including this string in your list of aliases,
-the response will be treated as a valid HTTP header line such as
-"HTTP/1.0 200 OK".
+example, SHOUTcast servers respond with "ICY 200 OK". Also some old Icecast
+1.3.x servers will respond like that for certain user agent headers or in
+absence of such. By including this string in your list of aliases, the
+response will be treated as a valid HTTP header line such as
+"HTTP/1.0 200 OK".
The linked list should be a fully valid list of struct curl_slist structs, and
be properly filled in. Use \fIcurl_slist_append(3)\fP to create the list and
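A short sketch of adding the "ICY 200 OK" alias mentioned above; the stream
URL is a placeholder and error handling is trimmed:

  struct curl_slist *aliases = NULL;
  CURL *curl = curl_easy_init();
  if(curl) {
    aliases = curl_slist_append(aliases, "ICY 200 OK");
    curl_easy_setopt(curl, CURLOPT_HTTP200ALIASES, aliases);
    curl_easy_setopt(curl, CURLOPT_URL, "http://radio.example.com/stream");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  /* free the list only after the handle no longer needs it */
  curl_slist_free_all(aliases);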
diff --git a/docs/libcurl/opts/CURLOPT_MAXLIFETIME_CONN.3 b/docs/libcurl/opts/CURLOPT_MAXLIFETIME_CONN.3
index 8f8ec33c8..18329d9cd 100644
--- a/docs/libcurl/opts/CURLOPT_MAXLIFETIME_CONN.3
+++ b/docs/libcurl/opts/CURLOPT_MAXLIFETIME_CONN.3
@@ -35,7 +35,7 @@ connection to have to be considered for reuse for this request.
libcurl features a connection cache that holds previously used connections.
When a new request is to be done, it will consider any connection that matches
for reuse. The \fICURLOPT_MAXLIFETIME_CONN(3)\fP limit prevents libcurl from
-trying very old connections for reuse. This can be used for client-side load
+reusing connections that are too old. This can be used for client-side load
balancing. If a connection is found in the cache that is older than this set
\fImaxlifetime\fP, it will instead be closed once any in-progress transfers
complete.
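For illustration only, a sketch that refuses to reuse connections older than
30 seconds; the number is arbitrary, picked to force regular reconnects (and
thus fresh DNS answers) for load balancing:

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* connections older than 30 seconds are closed instead of reused */
    curl_easy_setopt(curl, CURLOPT_MAXLIFETIME_CONN, 30L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }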
diff --git a/docs/libcurl/opts/CURLOPT_PRE_PROXY.3 b/docs/libcurl/opts/CURLOPT_PRE_PROXY.3
index 4d979ce55..5adc43a89 100644
--- a/docs/libcurl/opts/CURLOPT_PRE_PROXY.3
+++ b/docs/libcurl/opts/CURLOPT_PRE_PROXY.3
@@ -59,7 +59,7 @@ Default is NULL, meaning no pre proxy is used.
When you set a host name to use, do not assume that there's any particular
single port number used widely for proxies. Specify it!
.SH PROTOCOLS
-All except file://. Note that some protocols don't do very well over proxy.
+All except file://. Note that some protocols do not work well over a proxy.
.SH EXAMPLE
.nf
CURL *curl = curl_easy_init();
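A hedged sketch of chaining a SOCKS pre proxy in front of the regular proxy,
with explicit port numbers as urged above; both host names are placeholders:

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* the pre proxy is contacted first, then the regular proxy */
    curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks4://preproxy.example:1080");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:80");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }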
diff --git a/docs/libcurl/opts/CURLOPT_PROGRESSFUNCTION.3 b/docs/libcurl/opts/CURLOPT_PROGRESSFUNCTION.3
index dc2f8439a..35a37de6b 100644
--- a/docs/libcurl/opts/CURLOPT_PROGRESSFUNCTION.3
+++ b/docs/libcurl/opts/CURLOPT_PROGRESSFUNCTION.3
@@ -41,7 +41,7 @@ We encourage users to use the newer \fICURLOPT_XFERINFOFUNCTION(3)\fP instead,
if you can.
This function gets called by libcurl instead of its internal equivalent with a
-frequent interval. While data is being transferred it will be called very
+frequent interval. While data is being transferred it will be called
frequently, and during slow periods like when nothing is being transferred it
can slow down to about one call per second.
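A minimal sketch of such a progress callback (assuming the usual includes);
CURLOPT_NOPROGRESS must be set to 0 for it to be called at all, and returning
non-zero aborts the transfer:

  static int progress_cb(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    fprintf(stderr, "downloaded %.0f of %.0f bytes\r", dlnow, dltotal);
    return 0; /* returning non-zero aborts the transfer */
  }

  /* in the setup code */
  curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);
  curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);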
diff --git a/docs/libcurl/opts/CURLOPT_PROXY.3 b/docs/libcurl/opts/CURLOPT_PROXY.3
index 8913771c8..6c570fb20 100644
--- a/docs/libcurl/opts/CURLOPT_PROXY.3
+++ b/docs/libcurl/opts/CURLOPT_PROXY.3
@@ -90,7 +90,7 @@ Default is NULL, meaning no proxy is used.
When you set a host name to use, do not assume that there's any particular
single port number used widely for proxies. Specify it!
.SH PROTOCOLS
-All except file://. Note that some protocols don't do very well over proxy.
+All except file://. Note that some protocols do not work well over a proxy.
.SH EXAMPLE
.nf
CURL *curl = curl_easy_init();
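Purely as a sketch: the scheme prefix of the proxy string selects the proxy
type, and socks5h:// additionally lets the proxy do the name resolving. Host
and port are placeholders:

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* socks5h:// makes the proxy resolve host names as well */
    curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://proxy.example:1080");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }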
diff --git a/docs/libcurl/opts/CURLOPT_PROXYTYPE.3 b/docs/libcurl/opts/CURLOPT_PROXYTYPE.3
index abdd96457..2cdcb3307 100644
--- a/docs/libcurl/opts/CURLOPT_PROXYTYPE.3
+++ b/docs/libcurl/opts/CURLOPT_PROXYTYPE.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -36,9 +36,9 @@ HTTP Proxy. Default.
.IP CURLPROXY_HTTPS
HTTPS Proxy. (Added in 7.52.0 for OpenSSL, GnuTLS and NSS)
.IP CURLPROXY_HTTP_1_0
-HTTP 1.0 Proxy. This is very similar to CURLPROXY_HTTP except it uses HTTP/1.0
-for any CONNECT tunnelling. It does not change the HTTP version of the actual
-HTTP requests, controlled by \fICURLOPT_HTTP_VERSION(3)\fP.
+HTTP 1.0 Proxy. This is similar to CURLPROXY_HTTP except it uses HTTP/1.0 for
+any CONNECT tunnelling. It does not change the HTTP version of the actual HTTP
+requests, controlled by \fICURLOPT_HTTP_VERSION(3)\fP.
.IP CURLPROXY_SOCKS4
SOCKS4 Proxy.
.IP CURLPROXY_SOCKS4A
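A sketch of forcing HTTP/1.0 for the CONNECT tunnel as described for
CURLPROXY_HTTP_1_0; the proxy host and port are placeholders:

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example:3128");
    /* only the CONNECT request is sent as HTTP/1.0 */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }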
diff --git a/docs/libcurl/opts/CURLOPT_URL.3 b/docs/libcurl/opts/CURLOPT_URL.3
index e33d866bf..aaac06119 100644
--- a/docs/libcurl/opts/CURLOPT_URL.3
+++ b/docs/libcurl/opts/CURLOPT_URL.3
@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -88,8 +88,8 @@ security concerns:
If you have an application that runs as or in a server application, getting an
unfiltered URL can easily trick your application to access a local resource
-instead of a remote. Protecting yourself against localhost accesses is very
-hard when accepting user provided URLs.
+instead of a remote. Protecting yourself against localhost accesses is hard
+when accepting user provided URLs.
Such custom URLs can also access other ports than you planned as port numbers
are part of the regular URL format. The combination of a local host and a
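One hedged way to reduce that risk is to parse the user supplied URL with the
URL API and inspect the host before handing it to a transfer. The sketch below
(assuming <string.h> as well) only rejects two obvious names and is not a
complete filter; real code would also resolve the name and check the resulting
addresses:

  static int url_looks_safe(const char *input)
  {
    int ok = 0;
    CURLU *u = curl_url();
    if(u && !curl_url_set(u, CURLUPART_URL, input, 0)) {
      char *host = NULL;
      if(!curl_url_get(u, CURLUPART_HOST, &host, 0)) {
        /* naive check, for illustration only */
        ok = strcmp(host, "localhost") && strcmp(host, "127.0.0.1");
        curl_free(host);
      }
    }
    curl_url_cleanup(u);
    return ok;
  }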
diff --git a/docs/libcurl/opts/CURLOPT_XFERINFOFUNCTION.3 b/docs/libcurl/opts/CURLOPT_XFERINFOFUNCTION.3
index fba1930dc..29367a6b7 100644
--- a/docs/libcurl/opts/CURLOPT_XFERINFOFUNCTION.3
+++ b/docs/libcurl/opts/CURLOPT_XFERINFOFUNCTION.3
@@ -38,7 +38,7 @@ Pass a pointer to your callback function, which should match the prototype
shown above.
This function gets called by libcurl instead of its internal equivalent with a
-frequent interval. While data is being transferred it will be called very
+frequent interval. While data is being transferred it will be called
frequently, and during slow periods like when nothing is being transferred it
can slow down to about one call per second.
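A minimal sketch of the xferinfo callback, which receives curl_off_t counters;
assuming the usual includes, and again with CURLOPT_NOPROGRESS disabled so the
callback is actually invoked:

  static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                         curl_off_t ultotal, curl_off_t ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    fprintf(stderr, "down: %" CURL_FORMAT_CURL_OFF_T
            " of %" CURL_FORMAT_CURL_OFF_T " bytes\r", dlnow, dltotal);
    return 0; /* returning non-zero aborts the transfer */
  }

  /* in the setup code */
  curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
  curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);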