Diffstat (limited to 'lib')
-rw-r--r--  lib/Makefile.am            5
-rw-r--r--  lib/README.ares           69
-rw-r--r--  lib/README.curl_off_t     68
-rw-r--r--  lib/README.curlx          61
-rw-r--r--  lib/README.encoding       60
-rw-r--r--  lib/README.hostip         35
-rw-r--r--  lib/README.httpauth       74
-rw-r--r--  lib/README.memoryleak     55
-rw-r--r--  lib/README.multi_socket   53
9 files changed, 1 insertion, 479 deletions
diff --git a/lib/Makefile.am b/lib/Makefile.am
index 1bef388df..a2c3dc56c 100644
--- a/lib/Makefile.am
+++ b/lib/Makefile.am
@@ -21,9 +21,6 @@
###########################################################################
AUTOMAKE_OPTIONS = foreign nostdinc
-DOCS = README.encoding README.memoryleak README.ares README.curlx \
- README.hostip README.multi_socket README.httpauth README.curl_off_t
-
CMAKE_DIST = CMakeLists.txt curl_config.h.cmake
EXTRA_DIST = Makefile.b32 Makefile.m32 Makefile.vc6 config-win32.h \
@@ -31,7 +28,7 @@ EXTRA_DIST = Makefile.b32 Makefile.m32 Makefile.vc6 config-win32.h \
makefile.dj config-dos.h libcurl.plist libcurl.rc config-amigaos.h \
makefile.amiga Makefile.netware nwlib.c nwos.c config-win32ce.h \
config-os400.h setup-os400.h config-symbian.h Makefile.Watcom \
- config-tpf.h $(DOCS) mk-ca-bundle.pl mk-ca-bundle.vbs $(CMAKE_DIST) \
+ config-tpf.h mk-ca-bundle.pl mk-ca-bundle.vbs $(CMAKE_DIST) \
firefox-db2pem.sh config-vxworks.h Makefile.vxworks checksrc.pl \
objnames-test08.sh objnames-test10.sh objnames.inc checksrc.whitelist
diff --git a/lib/README.ares b/lib/README.ares
deleted file mode 100644
index 8c77937eb..000000000
--- a/lib/README.ares
+++ /dev/null
@@ -1,69 +0,0 @@
- _ _ ____ _
- ___| | | | _ \| |
- / __| | | | |_) | |
- | (__| |_| | _ <| |___
- \___|\___/|_| \_\_____|
-
- How To Build libcurl to Use c-ares For Asynch Name Resolves
- ===========================================================
-
-c-ares:
- http://c-ares.haxx.se/
-
-NOTE
- The latest libcurl version requires c-ares 1.6.0 or later.
-
- Once upon a time libcurl built fine with the "original" ares. That is no
- longer true. You need to use c-ares.
-
-Build c-ares
-============
-
-1. unpack the c-ares archive
-2. cd c-ares-dir
-3. ./configure
-4. make
-5. make install
-
-Build libcurl to use c-ares in the curl source tree
-===================================================
-
-1. name or symlink the c-ares source directory 'ares' in the curl source
- directory
-2. ./configure --enable-ares
-
- Optionally, you can point to the c-ares install tree root with the
- --enable-ares option.
-
-3. make
-
-Build libcurl to use an installed c-ares
-========================================
-
-1. ./configure --enable-ares=/path/to/ares/install
-2. make
-
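A quick way to verify such a build is to ask libcurl at runtime whether
asynchronous name resolving was compiled in. A minimal sketch (the
CURL_VERSION_ASYNCHDNS feature bit is set for both c-ares and threaded-resolver
builds, so this check is a hint rather than proof of a c-ares build):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

    if(info->features & CURL_VERSION_ASYNCHDNS)
      printf("asynch DNS enabled, ares version: %s\n",
             info->ares ? info->ares : "(none reported)");
    else
      printf("asynch DNS not enabled\n");
    return 0;
  }
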
-c-ares on win32
-===============
-(description contributed by Dominick Meglio)
-
-First I compiled c-ares. I changed the default C runtime library to be the
-single-threaded rather than the multi-threaded (this seems to be required to
-prevent linking errors later on). Then I simply built the areslib project (the
-other projects adig/ahost seem to fail under MSVC).
-
-Next was libcurl. I opened lib/config-win32.h and I added a:
- #define USE_ARES 1
-
-Next thing I did was I added the path for the ares includes to the include
-path, and the libares.lib to the libraries.
-
-Lastly, I also changed libcurl to be single-threaded rather than
-multi-threaded, again this was to prevent some duplicate symbol errors. I'm
-not sure why I needed to change everything to single-threaded, but when I
-didn't I got redefinition errors for several CRT functions (malloc, stricmp,
-etc.)
-
-I would have modified the MSVC++ project files, but I only have VC.NET and it
-uses a different format than VC6.0 so I didn't want to go and change
-everything and remove VC6.0 support from libcurl.
diff --git a/lib/README.curl_off_t b/lib/README.curl_off_t
deleted file mode 100644
index 923b2774c..000000000
--- a/lib/README.curl_off_t
+++ /dev/null
@@ -1,68 +0,0 @@
-
- curl_off_t explained
- ====================
-
-curl_off_t is a data type provided by the external libcurl include headers. It
-is the type meant to be used for the curl_easy_setopt() options that end with
-LARGE. The type is 64 bits wide on most modern platforms.
-
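For instance, resuming a transfer beyond the 2GB mark uses one of these LARGE
options with a curl_off_t argument. A minimal sketch (the URL is only a
placeholder):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t resume_point = 3000000000LL;  /* ~3GB, needs a 64bit curl_off_t */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.iso");
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, resume_point);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
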
-Transition from < 7.19.0 to >= 7.19.0
--------------------------------------
-
-Applications that used libcurl before 7.19.0 that are rebuilt with a libcurl
-that is 7.19.0 or later may or may not have to worry about anything of
-this. We have made a significant effort to make the transition really seamless
-and transparent.
-
-You have to take notice if you are in one of the following situations:
-
-o Your app is using or will after the transition use a libcurl that is built
- with LFS (large file support) disabled even though your system otherwise
- supports it.
-
-o Your app is using or will after the transition use a libcurl that doesn't
- support LFS at all, but your system and compiler support 64bit data types.
-
-In both these cases, the curl_off_t type will now (after the transition) be
-64bit where it previously was 32bit. This will cause a binary incompatibility
-that you MAY need to deal with.
-
-Benefits
---------
-
-This new way has several benefits:
-
-o Platforms without LFS support can still use libcurl to do >32 bit file
- transfers and range operations etc. as long as they support >32 bit data
- types.
-
-o Applications will no longer easily build with the curl_off_t size
- mismatched, which has been a very frequent (and annoying) problem with
- libcurl <= 7.18.2
-
-Historically
-------------
-
-Previously, before 7.19.0, the curl_off_t type would be rather strongly
-connected to the size of the system off_t type, whereas currently curl_off_t is
-independent of that.
-
-The strong connection to off_t made it troublesome for application authors
-since when they made mistakes, they could get curl_off_t types of different
-sizes in the app vs libcurl, and that caused strange effects that were hard to
-track and detect by users of libcurl.
-
-SONAME
-------
-
-We opted to not bump the soname for the library unconditionally, simply
-because soname bumping causes a lot of grief and moaning all over the
-community, so we try to keep that to a minimum. Also, our selected design path
-should be 100% backwards compatible for the vast majority of all libcurl
-users.
-
-Enforce SONAME bump
--------------------
-
-If configure doesn't detect your case where a bump is necessary, re-run it
-with the --enable-soname-bump command line option!
diff --git a/lib/README.curlx b/lib/README.curlx
deleted file mode 100644
index 5375b0d1d..000000000
--- a/lib/README.curlx
+++ /dev/null
@@ -1,61 +0,0 @@
- _ _ ____ _
- ___| | | | _ \| |
- / __| | | | |_) | |
- | (__| |_| | _ <| |___
- \___|\___/|_| \_\_____|
-
- Source Code Functions Apps Might Use
- ====================================
-
-The libcurl source code offers a few functions by source only. They are not
-part of the official libcurl API, but the source files might be useful for
-others so apps can optionally compile/build with these sources to gain
-additional functions.
-
-We provide them through a single header file for easy access for apps:
-"curlx.h"
-
- curlx_strtoofft()
-
- A macro that converts a string containing a number to a curl_off_t number.
- This might use the curlx_strtoll() function which is provided as source
- code in strtoofft.c. Note that the function is only provided if no
- strtoll() (or equivalent) function exists on your platform. If curl_off_t
- is only a 32 bit number on your platform, this macro uses strtol().
-
- curlx_tvnow()
-
- returns a struct timeval for the current time.
-
- curlx_tvdiff()
-
- returns the difference between two timeval structs, in number of
- milliseconds.
-
- curlx_tvdiff_secs()
-
- returns the difference between two timeval structs in seconds, with full
- usec resolution (as a double)
-
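A hypothetical use from an application that compiles strtoofft.c and timeval.c
from the curl source tree; the exact signatures are assumptions based on the
descriptions above (strtoll()-style arguments, millisecond diffs as a long):

  #include <stdio.h>
  #include "curlx.h"   /* from the libcurl source tree */

  int main(void)
  {
    struct timeval start = curlx_tvnow();
    /* assumed to parse like strtoll() but return a curl_off_t */
    curl_off_t size = curlx_strtoofft("9223372036854775807", NULL, 10);
    long elapsed_ms = curlx_tvdiff(curlx_tvnow(), start);

    printf("parsed %lld in %ld ms\n", (long long)size, elapsed_ms);
    return 0;
  }
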
-FUTURE
-======
-
- Several functions will be removed from the public curl_ name space in a
- future libcurl release. They will then only become available as curlx_
- functions instead. To make the transition easier, we already provide
- these functions with the curlx_ prefix to allow sources to get built properly
- with the new function names. The functions this concerns are:
-
- curlx_getenv
- curlx_strequal
- curlx_strnequal
- curlx_mvsnprintf
- curlx_msnprintf
- curlx_maprintf
- curlx_mvaprintf
- curlx_msprintf
- curlx_mprintf
- curlx_mfprintf
- curlx_mvsprintf
- curlx_mvprintf
- curlx_mvfprintf
diff --git a/lib/README.encoding b/lib/README.encoding
deleted file mode 100644
index 1012bb9ec..000000000
--- a/lib/README.encoding
+++ /dev/null
@@ -1,60 +0,0 @@
-
- Content Encoding Support for libcurl
-
-* About content encodings:
-
-HTTP/1.1 [RFC 2616] specifies that a client may request that a server encode
-its response. This is usually used to compress a response using one of a set
-of commonly available compression techniques. These schemes are `deflate' (the
-zlib algorithm), `gzip' and `compress' [sec 3.5, RFC 2616]. A client requests
-that the server perform an encoding by including an Accept-Encoding header in
-the request document. The value of the header should be one of the recognized
-tokens `deflate', ... (there's a way to register new schemes/tokens, see sec
-3.5 of the spec). A server MAY honor the client's encoding request. When a
-response is encoded, the server includes a Content-Encoding header in the
-response. The value of the Content-Encoding header indicates which scheme was
-used to encode the data.
-
-A client may tell a server that it can understand several different encoding
-schemes. In this case the server may choose any one of those and use it to
-encode the response (indicating which one using the Content-Encoding header).
-It's also possible for a client to attach priorities to different schemes so
-that the server knows which it prefers. See sec 14.3 of RFC 2616 for more
-information on the Accept-Encoding header.
-
-* Current support for content encoding:
-
-The 'deflate' and 'gzip' content encodings are supported by
-libcurl. Both regular and chunked transfers should work fine. The library
-zlib is required for this feature. 'deflate' support was added by James
-Gallagher, and support for the 'gzip' encoding was added by Dan Fandrich.
-
-* The libcurl interface:
-
-To cause libcurl to request a content encoding use:
-
- curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, <string>)
-
-where <string> is the intended value of the Accept-Encoding header.
-
-Currently, libcurl only understands how to process responses that use the
-"deflate" or "gzip" Content-Encoding, so the only values for
-CURLOPT_ACCEPT_ENCODING that will work (besides "identity," which does
-nothing) are "deflate" and "gzip". If a response is encoded using the
-"compress" or other methods, libcurl will return an error indicating that the
-response could not be decoded. If <string> is NULL no Accept-Encoding header
-is generated. If <string> is a zero-length string, then an Accept-Encoding
-header containing all supported encodings will be generated.
-
-CURLOPT_ACCEPT_ENCODING must be set to a non-NULL value for content to
-be automatically decoded. If it is not set and the server still sends encoded
-content (despite not having been asked), the data is returned in its raw form
-and the Content-Encoding type is not checked.
-
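A minimal sketch of the interface described above (the URL is only a
placeholder); passing a zero-length string asks for every encoding libcurl
supports and lets libcurl decode the response automatically:

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* "" requests all supported encodings; "gzip" or "deflate" would
       request one specific scheme */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
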
-* The curl interface:
-
-Use the --compressed option with curl to cause it to ask servers to compress
-responses using any format supported by curl.
-
-James Gallagher <jgallagher@gso.uri.edu>
-Dan Fandrich <dan@coneharvesters.com>
diff --git a/lib/README.hostip b/lib/README.hostip
deleted file mode 100644
index d5688fff1..000000000
--- a/lib/README.hostip
+++ /dev/null
@@ -1,35 +0,0 @@
- hostip.c explained
- ==================
-
- The main COMPILE-TIME DEFINES to keep in mind when reading the host*.c
- source file are these:
-
- CURLRES_IPV6 - this host has getaddrinfo() and family, and thus we use
- that. The host may not be able to resolve IPv6, but we don't really have to
- take that into account. Hosts that aren't IPv6-enabled have CURLRES_IPV4
- defined.
-
- CURLRES_ARES - is defined if libcurl is built to use c-ares for asynchronous
- name resolves. This can be Windows or *nix.
-
- CURLRES_THREADED - is defined if libcurl is built to use threading for
- asynchronous name resolves. The name resolve will be done in a new thread,
- and the supported asynch API will be the same as for ares-builds. This is
- the default under (native) Windows.
-
- If either of the two previous is defined, CURLRES_ASYNCH is defined too. If
- libcurl is not built to use an asynchronous resolver, CURLRES_SYNCH is
- defined.
-
- The host*.c source files are split up like this:
-
- hostip.c - method-independent resolver functions and utility functions
- hostasyn.c - functions for asynchronous name resolves
- hostsyn.c - functions for synchronous name resolves
- asyn-ares.c - functions for asynchronous name resolves using c-ares
- asyn-thread.c - functions for asynchronous name resolves using threads
- hostip4.c - IPv4 specific functions
- hostip6.c - IPv6 specific functions
-
- hostip.h is the single unified header file for all this. It defines the
- CURLRES_* defines based on the config*.h and curl_setup.h defines.
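The relationships between these defines can be sketched roughly like this (an
illustration of the rules above, not the actual hostip.h contents; the USE_*
and ENABLE_IPV6 names stand in for whatever the build actually defines):

  /* illustrative only */
  #ifdef USE_ARES
  #  define CURLRES_ASYNCH
  #  define CURLRES_ARES
  #elif defined(USE_THREADED_RESOLVER)
  #  define CURLRES_ASYNCH
  #  define CURLRES_THREADED
  #else
  #  define CURLRES_SYNCH
  #endif

  #ifdef ENABLE_IPV6
  #  define CURLRES_IPV6
  #else
  #  define CURLRES_IPV4
  #endif
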
diff --git a/lib/README.httpauth b/lib/README.httpauth
deleted file mode 100644
index 960504510..000000000
--- a/lib/README.httpauth
+++ /dev/null
@@ -1,74 +0,0 @@
-
-1. PUT/POST without a known auth to use (possibly no auth required):
-
- (When explicitly set to use a multi-pass auth when doing a POST/PUT,
- libcurl should immediately go the Content-Length: 0 bytes route to avoid
- the first send all data phase, step 2. If told to use a single-pass auth,
- go to step 3.)
-
- Issue the proper PUT/POST request immediately, with the correct
- Content-Length and Expect: headers.
-
- If a 100 response is received or the wait for one times out, start sending
- the request-body.
-
- If a 401 (or 407 when talking through a proxy) is received, then:
-
- If we have "more than just a little" data left to send, close the
- connection. Exactly what "more than just a little" means will have to be
- determined. Possibly the current transfer speed should be taken into
- account as well.
-
- NOTE: if the size of the POST data is less than MAX_INITIAL_POST_SIZE (when
- CURLOPT_POSTFIELDS is used), libcurl will send everything in one single
- write() (all request-headers and request-body) and thus it will
- unconditionally send the full post data here.
-
-2. PUT/POST with multi-pass auth but not yet completely negotiated:
-
- Send a PUT/POST request, we know that it will be rejected and thus we claim
- Content-Length zero to avoid having to send the request-body. (This seems
- to be what IE does.)
-
-3. PUT/POST as the last step in the auth negotiation, that is when we have
- what we believe is a completed negotiation:
-
- Send a full and proper PUT/POST request (again) with the proper
- Content-Length and a following request-body.
-
- NOTE: this may very well be the second (or even third) time the whole or at
- least parts of the request body is sent to the server. Since the data may
- be provided to libcurl with a callback, we need a way to tell the app that
- the upload is to be restarted so that the callback will provide data from
- the start again. This requires an API method/mechanism that libcurl
- doesn't have today. See below.
-
-Data Rewind
-
- It will be troublesome for some apps to deal with a rewind like this in all
- circumstances. I'm thinking for example when using 'curl' to upload data
- from stdin. If libcurl ends up having to rewind the reading for a request
- to succeed, then a missing callback, or one that returns failure, will of
- course cause the request to fail completely.
-
- The new callback is set with CURLOPT_IOCTLFUNCTION (in an attempt to add a
- more generic function that might be used for other IO-related controls in
- the future):
-
- curlioerr curl_ioctl(CURL *handle, curliocmd cmd, void *clientp);
-
- And in the case where the read is to be rewound, it would be called with a
- cmd named CURLIOCMD_RESTARTREAD. The callback would then return CURLIOE_OK,
- if things are fine, or CURLIOE_FAILRESTART if not.
-
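A minimal sketch of such a callback, assuming the application feeds the
request-body from a plain FILE * (passed in via CURLOPT_IOCTLDATA); the
callback and option names follow the prototype shown above:

  #include <stdio.h>
  #include <curl/curl.h>

  /* clientp is the FILE * that the read callback also uses */
  static curlioerr ioctl_cb(CURL *handle, int cmd, void *clientp)
  {
    FILE *upload = (FILE *)clientp;
    (void)handle;
    if(cmd == CURLIOCMD_RESTARTREAD)
      return (fseek(upload, 0L, SEEK_SET) == 0) ?
        CURLIOE_OK : CURLIOE_FAILRESTART;
    return CURLIOE_UNKNOWNCMD;
  }

  /* in the setup code:
     curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, ioctl_cb);
     curl_easy_setopt(curl, CURLOPT_IOCTLDATA, upload_file);  */
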
-Backwards Compatibility
-
- The approach used until now, which issues a HEAD on the given URL to trigger
- the auth negotiation, could still be supported and encouraged, but it would
- be up to the app to first fetch a URL with GET/HEAD to negotiate on, since
- then a following PUT/POST wouldn't need to negotiate authentication and
- thus avoid double-sending data.
-
- Optionally, we keep the current approach if some option is set
- (CURLOPT_HEADBEFOREAUTH or similar), since it seems to work fairly well for
- POST on most servers.
diff --git a/lib/README.memoryleak b/lib/README.memoryleak
deleted file mode 100644
index 166177794..000000000
--- a/lib/README.memoryleak
+++ /dev/null
@@ -1,55 +0,0 @@
- _ _ ____ _
- ___| | | | _ \| |
- / __| | | | |_) | |
- | (__| |_| | _ <| |___
- \___|\___/|_| \_\_____|
-
- How To Track Down Suspected Memory Leaks in libcurl
- ===================================================
-
-Single-threaded
-
- Please note that this memory leak system is not adjusted to work in more
- than one thread. If you want/need to use it in a multi-threaded app, please
- adjust accordingly.
-
-
-Build
-
- Rebuild libcurl with -DCURLDEBUG (usually, rerunning configure with
- --enable-debug takes care of this). 'make clean' first, then 'make' so that all
- files actually are rebuilt properly. It will also make sense to build
- libcurl with the debug option (usually -g to the compiler) so that debugging
- it will be easier if you actually do find a leak in the library.
-
- This will create a library that has memory debugging enabled.
-
-Modify Your Application
-
- Add a line in your application code:
-
- curl_memdebug("dump");
-
- This will make the malloc debug system output a full trace of all resource
- using functions to the given file name. Make sure you rebuild your program
- and that you link with the same libcurl you built for this purpose as
- described above.
-
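Since curl_memdebug() is not declared in the public headers, a sketch of how an
application might wire it up, guarded so the same source still builds against a
normal libcurl (the prototype below is an assumption about the debug build):

  #include <curl/curl.h>

  #ifdef CURLDEBUG
  /* only present in a -DCURLDEBUG build of libcurl */
  extern void curl_memdebug(const char *logname);
  #endif

  int main(void)
  {
  #ifdef CURLDEBUG
    curl_memdebug("dump");   /* call this before anything allocates */
  #endif
    /* ... normal libcurl usage, ending with the proper cleanup calls ... */
    return 0;
  }
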
-Run Your Application
-
- Run your program as usual. Watch the specified memory trace file grow.
-
- Make your program exit and use the proper libcurl cleanup functions etc., so
- that all non-leaks are returned/freed properly.
-
-Analyze the Flow
-
- Use the tests/memanalyze.pl perl script to analyze the dump file:
-
- tests/memanalyze.pl dump
-
- This now outputs a report on what resources were allocated but never
- freed etc. This report is very useful when posting to the mailing list!
-
- If this doesn't produce any output, no leak was detected in libcurl. Then
- the leak is most likely to be in your code.
diff --git a/lib/README.multi_socket b/lib/README.multi_socket
deleted file mode 100644
index d91e1d9f2..000000000
--- a/lib/README.multi_socket
+++ /dev/null
@@ -1,53 +0,0 @@
-Implementation of the curl_multi_socket API
-
- The main ideas of the new API are simply:
-
- 1 - The application can use whatever event system it likes as it gets info
- from libcurl about which file descriptors libcurl is waiting on, and for
- what action. (The previous API returns fd_sets, which is very select()-centric).
-
- 2 - When the application discovers action on a single socket, it calls
- libcurl and informs it that there was action on this particular socket and
- libcurl can then act on that socket/transfer only and not care about
- any other transfers. (The previous API always had to scan through all
- the existing transfers.)
-
- The idea is that curl_multi_socket_action() calls a given callback with
- information about which socket to wait on and for what action, and the callback
- only gets called if the status of that socket has changed.
-
- We also added a timer callback that makes libcurl call the application when
- the timeout value changes, and you set that with curl_multi_setopt() and the
- CURLMOPT_TIMERFUNCTION option. To make this work, there is internally a
- struct added to each easy handle in which we store an "expire time" (if
- any). The structs are then "splay sorted" so that we can add and remove
- times from the linked list and yet somewhat swiftly figure out both how long
- there is until the next nearest timer expires and which timer (handle)
- we should take care of now. Of course, the upside of all this is that we get
- a curl_multi_timeout() that should also work with old-style applications
- that use curl_multi_perform().
-
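A bare-bones sketch of wiring up the two callbacks (the event-loop integration
itself is left out and the callback names are only placeholders):

  #include <curl/curl.h>

  static int socket_cb(CURL *easy, curl_socket_t s, int what,
                       void *userp, void *socketp)
  {
    /* add, modify or remove 's' in the app's event loop depending on
       'what' (CURL_POLL_IN/OUT/INOUT/REMOVE) */
    (void)easy; (void)s; (void)what; (void)userp; (void)socketp;
    return 0;
  }

  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    /* (re)arm a single timer; when it expires, the app calls
       curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running) */
    (void)multi; (void)timeout_ms; (void)userp;
    return 0;
  }

  /* registration:
     curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
     curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);  */
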
- We created an internal "socket to easy handles" hash table that given
- a socket (file descriptor) returns the easy handle that waits for action on
- that socket. This hash is made using the already existing hash code
- (previously only used for the DNS cache).
-
- To make libcurl able to report plain sockets in the socket callback, we had
- to re-organize the internals of the curl_multi_fdset() etc so that the
- conversion from sockets to fd_sets for that function is only done in the
- last step before the data is returned. I also had to extend c-ares to get a
- function that can return plain sockets, as that library too returned only
- fd_sets and that is no longer good enough. The changes done to c-ares are
- available in c-ares 1.3.1 and later.
-
- We have done test runs with up to 9000 connections (with a single active
- one). The curl_multi_socket_action() invocation then takes less than 10
- microseconds on average (using the read-only-1-byte-at-a-time hack). We are
- now below the 60 microseconds "per socket action" goal (the extra 50 is the
- time libevent needs).
-
-Documentation
-
- http://curl.haxx.se/libcurl/c/curl_multi_socket_action.html
- http://curl.haxx.se/libcurl/c/curl_multi_timeout.html
- http://curl.haxx.se/libcurl/c/curl_multi_setopt.html