path: root/lib/multi.c
Commit message | Author | Age | Files | Lines
* Revert "multi_done: wait for name resolve to finish if still ongoing"Daniel Stenberg2017-10-081-6/+0
| | | | | | | | | This reverts commit f3e03f6c0ac52a1bf396e03f7d7e9b5b3b7165fe. Caused memory leaks in the fuzzer, needs to be done differently. Disable test 1553 for now too, as it causes memory leaks without this commit!
* remove_handle: call multi_done() first, then clear dns cache pointerDaniel Stenberg2017-10-071-6/+7
| | | | Closes #1960
* multi_done: wait for name resolve to finish if still ongoingDaniel Stenberg2017-10-071-0/+6
| | | | ... as we must clean up memory.
* multi_cleanup: call DONE on handles that never got that | Daniel Stenberg | 2017-10-06 | 1 | -18/+21
  ... fixes a memory leak with at least IMAP when remove_handle is never called and the transfer is abruptly just abandoned early.
  Test 1552 added to verify.
  Detected by OSS-fuzz
  Assisted-by: Max Dymond
  Closes #1954
* code style: use spaces around pluses | Daniel Stenberg | 2017-09-11 | 1 | -1/+1
* code style: use spaces around equals signs | Daniel Stenberg | 2017-09-11 | 1 | -30/+30
* multi: fix request timer management | Brad Spencer | 2017-08-01 | 1 | -14/+13
  There are some bugs in how timers are managed for a single easy handle that cause the wrong "next timeout" value to be reported to the application when a new minimum needs to be recomputed and that new minimum should be an existing timer that isn't currently set for the easy handle.
  When the application drives a set of easy handles via the curl_multi_socket_action() API (for example), it gets told to wait the wrong amount of time before the next call, which causes requests to linger for a long time (or, it is my guess, possibly forever).
  Bug: https://curl.haxx.se/mail/lib-2017-07/0033.html
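  The pattern this report describes is the public multi_socket API: the application remembers the timeout that libcurl reports through CURLMOPT_TIMERFUNCTION and acts on it, so a wrongly computed value directly delays the next call. A minimal sketch, with next_timeout_ms, timer_cb and drive() as illustrative names (not from the commit):

    #include <curl/curl.h>

    /* application-owned storage for "when to call libcurl again" */
    static long next_timeout_ms = -1;

    /* CURLMOPT_TIMERFUNCTION callback: libcurl reports its next timeout here */
    static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      (void)multi;
      (void)userp;
      next_timeout_ms = timeout_ms;   /* -1 means "no timer pending" */
      return 0;
    }

    void drive(CURLM *multi)
    {
      int still_running = 0;
      curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
      /* the event loop sleeps next_timeout_ms, then tells libcurl the timeout
         fired; a too-large value here makes transfers linger */
      curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);
    }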
* timeval: struct curltime is a struct timeval replacement | Daniel Stenberg | 2017-07-28 | 1 | -15/+15
  ... to make all libcurl internals able to use the same data types for the struct members. The timeval struct differs subtly on several platforms, which makes it cumbersome to use everywhere.
  Ref: #1652
  Closes #1693
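  For orientation, the replacement type looks roughly like the sketch below; the exact member types in timeval.h may differ, so treat this as an approximation rather than the actual definition:

    #include <time.h>

    /* approximate shape of libcurl's internal struct curltime: one uniform
       pair of members instead of the platform-dependent struct timeval */
    struct curltime {
      time_t tv_sec;
      unsigned int tv_usec;
    };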
* multi: mention integer overflow risk if using > 500 million sockets | Daniel Stenberg | 2017-07-27 | 1 | -0/+4
  Reported-by: ovidiu-benea@users.noreply.github.com
  Closes #1675
  Closes #1683
* http-proxy: do the HTTP CONNECT process entirely non-blocking | Daniel Stenberg | 2017-06-14 | 1 | -5/+13
  Mentioned as a problem since 2007 (8f87c15bdac63) and of course it existed even before that.
  Closes #1547
* expire: remove Curl_expire_latest() | Daniel Stenberg | 2017-06-08 | 1 | -50/+6
  With the introduction of expire IDs and the fact that existing timers can now be removed and thus never expire, the concept of adding a "latest" timer no longer works, as it risks never expiring at all. So, to be certain the timers actually are in line and will expire, the plain Curl_expire() needs to be used. The _latest() function was added as a sort of shortcut in the past that is simply not necessary anymore.
  Follow-up to 31b39c40cf90
  Reported-by: Paul Harris
  Closes #1555
* redirect: store the "would redirect to" URL when max redirs is reached | Daniel Stenberg | 2017-05-23 | 1 | -9/+2
  Test 1261 added to verify.
  Reported-by: Lloyd Fournier
  Fixes #1489
  Closes #1497
* multi: remove leftover debug infof() calls from e9fd794a6 | Daniel Stenberg | 2017-05-12 | 1 | -3/+0
* multi: use a fixed array of timers instead of malloc | Daniel Stenberg | 2017-05-10 | 1 | -39/+23
  ... since the total amount is low this is faster, easier and reduces memory overhead.
  Also, Curl_expire_done() can now mark an expire timeout as done so that it never times out.
  Closes #1472
* multi: assign IDs to all timers and make each timer singleton | Daniel Stenberg | 2017-05-10 | 1 | -16/+57
  A) reduces the timeout lists drastically
  B) prevents a lot of superfluous loops for timers that expire "in vain" when they have actually already been extended to fire later on
* multi: clarify condition in curl_multi_wait | Alan Jenkins | 2017-04-22 | 1 | -1/+1
  `if(nfds || extra_nfds) {` is followed by `malloc(nfds * ...)`. If `extra_nfds` could be non-zero when `nfds` was zero, then we would have `malloc(0)`, which is allowed to return `NULL`. But malloc returning NULL can be confusing: in this code, the next line would treat the NULL as an allocation failure.
  It turns out that if `nfds` is zero then `extra_nfds` must also be zero, because the final value of `nfds` includes `extra_nfds`. So the test for `extra_nfds` is redundant; it can only confuse the reader.
  Closes #1439
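  For context, this is the public entry point being discussed; a short usage sketch (the function name and the single application socket are illustrative) of how application descriptors enter as extra_fds and get counted into nfds:

    #include <curl/curl.h>

    CURLMcode wait_on_multi(CURLM *multi, curl_socket_t app_sock)
    {
      struct curl_waitfd extra[1];
      int numfds = 0;

      extra[0].fd = app_sock;
      extra[0].events = CURL_WAIT_POLLIN;
      extra[0].revents = 0;

      /* internally the poll array is sized from libcurl's own sockets plus
         extra_nfds (1 here), which is why nfds already covers extra_nfds */
      return curl_multi_wait(multi, extra, 1, 1000, &numfds);
    }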
* llist: no longer uses malloc | Daniel Stenberg | 2017-04-22 | 1 | -22/+23
  The 'list element' struct now has to be within the data that is being added to the list. Removes 16.6% of the (tiny) mallocs from a simple HTTP transfer (96 => 80).
  Also removed the return codes since the llist functions can't fail now.
  Test 1300 updated accordingly.
  Closes #1435
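  The underlying idea is an intrusive list. A generic sketch (struct item and struct list_node are invented for this note, not curl's Curl_llist types) of why linking an element needs no allocation once the node is embedded in the stored object:

    /* generic intrusive doubly-linked list node, embedded in the payload */
    struct list_node {
      struct list_node *prev;
      struct list_node *next;
    };

    struct item {
      int payload;
      struct list_node node;   /* lives inside the item: no malloc to link it */
    };

    /* link 'it' right after 'pos' without allocating anything */
    static void link_after(struct list_node *pos, struct item *it)
    {
      it->node.prev = pos;
      it->node.next = pos->next;
      if(pos->next)
        pos->next->prev = &it->node;
      pos->next = &it->node;
    }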
* Curl_expire_latest: ignore already expired timers | Daniel Stenberg | 2017-04-11 | 1 | -3/+6
  If the existing timer is still in there but has expired, the new timer should be added.
  Reported-by: Rainer Canavan
  Bug: https://curl.haxx.se/mail/lib-2017-04/0030.html
  Closes #1407
* llist: replace Curl_llist_alloc with Curl_llist_init | Daniel Stenberg | 2017-04-04 | 1 | -60/+42
  No longer allocate the curl_llist head struct for lists separately.
  Removes 17 (15%) tiny allocations in a normal "curl localhost" invoke.
  Closes #1381
* multi: make curl_multi_wait avoid malloc in the typical case | Daniel Stenberg | 2017-04-03 | 1 | -4/+14
  When only a few additional file descriptors are used, avoid the malloc.
  Closes #1377
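  The technique is the usual small-on-stack-buffer optimization; a hedged, generic sketch (SMALL_COUNT and the function are illustrative, not curl's actual code):

    #include <stdlib.h>
    #include <string.h>

    #define SMALL_COUNT 16

    int process(const int *items, size_t count)
    {
      int small[SMALL_COUNT];   /* covers the typical case without malloc */
      int *buf = small;

      if(count > SMALL_COUNT) {
        buf = malloc(count * sizeof(*buf));
        if(!buf)
          return -1;
      }
      memcpy(buf, items, count * sizeof(*buf));
      /* ... work on buf ... */
      if(buf != small)
        free(buf);
      return 0;
    }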
* pause: handle mixed types of data when paused | Daniel Stenberg | 2017-03-28 | 1 | -3/+6
  When receiving chunked encoded data with trailers, and the write callback returns PAUSE, there might be both body and header data to store and resend on unpause. Previously libcurl returned an error for that case.
  Added test case 1540 to verify.
  Reported-by: Stephen Toub
  Fixes #1354
  Closes #1357
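  The public mechanism involved looks like this hedged sketch: a write callback that asks libcurl to hold on to the data, plus a later unpause; the fix is about libcurl correctly buffering both body and trailer header data while paused.

    #include <curl/curl.h>

    /* set with CURLOPT_WRITEFUNCTION; userp comes from CURLOPT_WRITEDATA */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
    {
      int *ready = (int *)userp;       /* application-owned flag (illustrative) */
      (void)ptr;
      if(!*ready)
        return CURL_WRITEFUNC_PAUSE;   /* ask libcurl to buffer and pause */
      return size * nmemb;             /* consumed everything */
    }

    /* when the application is ready again:
       curl_easy_pause(easy, CURLPAUSE_CONT); */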
* multi: fix MinGW-w64 compiler warnings | Marcel Raad | 2017-03-27 | 1 | -2/+2
  error: conversion to 'long int' from 'time_t {aka long long int}' may alter its value [-Werror=conversion]
* multi: fix streamclose() crash in debug mode | Daniel Stenberg | 2017-03-21 | 1 | -4/+4
  The code would refer to the wrong data pointer. Only debug builds do this - for verbosity.
  Reported-by: zelinchen@users.noreply.github.com
  Fixes #1329
* Improve code readability | Sylvestre Ledru | 2017-03-13 | 1 | -9/+5
  ... by removing the else branch after a return, break or continue.
  Closes #1310
* speed caps: update the timeouts if the speed is too low/high | Michael Kaufmann | 2017-02-18 | 1 | -36/+48
  Follow-up to 4b86113
  Fixes https://github.com/curl/curl/issues/793
  Fixes https://github.com/curl/curl/issues/942
* proxy: fix hostname resolution and IDN conversion | Michael Kaufmann | 2017-02-18 | 1 | -3/+6
  Properly resolve, convert and log the proxy host names.
  Support the "--connect-to" feature for SOCKS proxies and for passive FTP data transfers.
  Follow-up to cb4e2be
  Reported-by: Jay Satiro
  Fixes https://github.com/curl/curl/issues/1248
* HTTPS-proxy: fixed mbedtls and polishing | Okhin Vasilij | 2016-11-24 | 1 | -0/+2
* proxy: Support HTTPS proxy and SOCKS+HTTP(s) | Alex Rousskov | 2016-11-24 | 1 | -1/+15
  * HTTPS proxies: An HTTPS proxy receives all transactions over an SSL/TLS connection. Once a secure connection with the proxy is established, the user agent uses the proxy as usual, including sending CONNECT requests to instruct the proxy to establish a [usually secure] TCP tunnel with an origin server. HTTPS proxies protect nearly all aspects of user-proxy communications, as opposed to HTTP proxies that receive all requests (including CONNECT requests) in vulnerable clear text.
  With HTTPS proxies, it is possible to have two concurrent _nested_ SSL/TLS sessions: the "outer" one between the user agent and the proxy and the "inner" one between the user agent and the origin server (through the proxy). This change adds support for such nested sessions as well.
  A secure connection with a proxy requires its own set of the usual SSL options (their actual descriptions differ and need polishing, see TODO):
    --proxy-cacert FILE          CA certificate to verify peer against
    --proxy-capath DIR           CA directory to verify peer against
    --proxy-cert CERT[:PASSWD]   Client certificate file and password
    --proxy-cert-type TYPE       Certificate file type (DER/PEM/ENG)
    --proxy-ciphers LIST         SSL ciphers to use
    --proxy-crlfile FILE         Get a CRL list in PEM format from the file
    --proxy-insecure             Allow connections to proxies with bad certs
    --proxy-key KEY              Private key file name
    --proxy-key-type TYPE        Private key file type (DER/PEM/ENG)
    --proxy-pass PASS            Pass phrase for the private key
    --proxy-ssl-allow-beast      Allow security flaw to improve interop
    --proxy-sslv2                Use SSLv2
    --proxy-sslv3                Use SSLv3
    --proxy-tlsv1                Use TLSv1
    --proxy-tlsuser USER         TLS username
    --proxy-tlspassword STRING   TLS password
    --proxy-tlsauthtype STRING   TLS authentication type (default SRP)
  All --proxy-foo options are independent from their --foo counterparts, except --proxy-crlfile which defaults to --crlfile and --proxy-capath which defaults to --capath.
  Curl now also supports the %{proxy_ssl_verify_result} --write-out variable, similar to the existing %{ssl_verify_result} variable.
  Supported backends: OpenSSL, GnuTLS, and NSS.
  * A SOCKS proxy + HTTP/HTTPS proxy combination: If both --socks* and --proxy options are given, Curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy.
  TODO: Update documentation for the new APIs and --proxy-* options. Look for "Added in 7.XXX" marks.
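  On the libcurl side, the feature is driven by the matching CURLOPT_PROXY_* options; a hedged sketch (host and file names are placeholders, option names are from the 7.52.0 era, so check your curl version):

    #include <curl/curl.h>

    int fetch_via_https_proxy(void)
    {
      CURL *easy = curl_easy_init();
      if(!easy)
        return 1;
      curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(easy, CURLOPT_PROXY, "https://proxy.example.com:3128");
      curl_easy_setopt(easy, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");  /* like --proxy-cacert */
      curl_easy_setopt(easy, CURLOPT_PROXY_SSL_VERIFYPEER, 1L);
      /* curl_easy_perform(easy) would now tunnel through the TLS proxy */
      curl_easy_cleanup(easy);
      return 0;
    }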
* lib: fix compiler warnings after de4de4e3c7c | Marcel Raad | 2016-11-18 | 1 | -10/+10
  Visual C++ now complains about implicitly casting time_t (64-bit) to long (32-bit). Fix this by changing some variables from long to time_t, or explicitly casting to long where the public interface would be affected.
  Closes #1131
* multi: force connections to get closed in close_all_connections | Daniel Stenberg | 2016-10-22 | 1 | -0/+2
  Several independent reports of infinite loops hanging in the close_all_connections() function when closing a multi handle can be fixed by first marking the connection to get closed before calling Curl_disconnect. This fixes the symptom rather than the underlying problem, though.
  Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
  Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
  Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
* curl_multi_remove_handle: fix a double-free | Anders Bakken | 2016-10-22 | 1 | -0/+1
  In short the easy handle needs to be disconnected from its connection at this point since the connection still is serving other easy handles.
  In our app we can reliably reproduce a crash in our http2 stress test that is fixed by this change. I can't easily reproduce the same test in a small example.
  This is the gdb/asan output:
  ==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
  READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
      #0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
  0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
  freed by thread T13 (RESOURCE_HTTP) here:
      #0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
      #1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
      #2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
      #3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
      #4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
      #5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
      #6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
      #7 0x9c445e0 in ...
      #8 0x9c4cf1d in ...
      #9 0xa2be6b5 in ...
      #10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
      #11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
  previously allocated by thread T13 (RESOURCE_HTTP) here:
      #0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
      #1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
      #2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
      #3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
      #4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
      #5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
      #6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
      #7 0x9c445e0 in ...
      #8 0x9c4cf1d in ...
      #9 0xa2be6b5 in ...
      #10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
      #11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
  SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
  Shadow bytes around the buggy address:
    0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
    0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  =>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
    0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  Shadow byte legend (one shadow byte represents 8 application bytes):
    Addressable: 00
    Partially addressable: 01 02 03 04 05 06 07
    Heap left redzone: fa
    Heap right redzone: fb
    Freed heap region: fd
    Stack left redzone: f1
    Stack mid redzone: f2
    Stack right redzone: f3
    Stack partial redzone: f4
    Stack after return: f5
    Stack use after scope: f8
    Global redzone: f9
    Global init order: f6
    Poisoned by user: f7
    Container overflow: fc
    Array cookie: ac
    Intra object redzone: bb
    ASan internal: fe
    Left alloca redzone: ca
    Right alloca redzone: cb
  ==11785==ABORTING
  Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
  [Switching to Thread 0xf27bfb40 (LWP 12324)]
  0xf7fd8be9 in __kernel_vsyscall ()
  (gdb) bt
  #0  0xf7fd8be9 in __kernel_vsyscall ()
  #1  0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
  #2  0xf4c803e7 in __GI_abort () at abort.c:89
  #3  0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
  #4  0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
  #5  0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
  #6  0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
  #7  0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
  #8  0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
  #9  0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
  #10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
  #11 0x09c6de3f in ...
  #12 0x09c378c5 in ...
  #13 0x09c48133 in ...
  #14 0x09c4d092 in ...
  #15 0x0a2be6b6 in ...
  #16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
  #17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
  #18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
  Fixes #1083
* curl_multi_add_handle: set timeouts in closure handles | Daniel Stenberg | 2016-10-19 | 1 | -0/+8
  The closure handle only ever has default timeouts set. To improve the state somewhat we clone the timeouts from each added handle so that the closure handle always has the same timeouts as the most recently added easy handle.
  Fixes #739
* speed caps: not based on average speeds anymore | Olivier Brunel | 2016-09-04 | 1 | -28/+32
  Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE & CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the limits with the cumulative average speed of the entire transfer. While this might work at times with good/constant connections, in other cases it can result in the limits simply being "ignored" for more than "short bursts" (as the man page puts it).
  Consider a download that goes on much slower than the limit for some time (because bandwidth is used elsewhere, the server is slow, whatever the reason); once things get better, curl would simply ignore the limit until the average speed (since the beginning of the transfer) reached the limit. This could render the limit useless for keeping the transfer off the entire bandwidth, at least for quite some time.
  So instead, we now use a "moving starting point" as reference, and every time at least as much as the limit has been transferred, we can reset this starting point to the current position. This gives a good limiting effect that applies to the "current speed" with instant reactivity (in case of a sudden speed burst).
  Closes #971
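  To make the described algorithm concrete, here is an illustrative-only sketch (struct ratelimit and its fields are invented for this note, not curl's progress code) of checking the rate against a sliding reference point instead of the whole-transfer average:

    #include <time.h>

    struct ratelimit {
      long long limit_bps;   /* configured cap in bytes per second */
      long long ref_bytes;   /* byte counter at the current reference point */
      time_t    ref_time;    /* when that reference point was taken */
    };

    /* returns non-zero when the transfer should back off for a while */
    static int over_limit(struct ratelimit *rl, long long total_bytes, time_t now)
    {
      long long since = total_bytes - rl->ref_bytes;
      double elapsed = difftime(now, rl->ref_time);
      int too_fast;

      if(elapsed < 1.0)
        elapsed = 1.0;                 /* crude guard against division by zero */
      too_fast = (double)since > (double)rl->limit_bps * elapsed;

      if(!too_fast && since >= rl->limit_bps) {
        /* at least one "limit's worth" moved at an acceptable pace: slide the
           reference forward so old history no longer dilutes the average */
        rl->ref_bytes = total_bytes;
        rl->ref_time = now;
      }
      return too_fast;
    }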
* http2: make sure stream errors don't needlessly close the connection | Daniel Stenberg | 2016-08-28 | 1 | -23/+24
  With HTTP/2, each transfer is made in an individual logical stream over the connection, so most errors that previously caused the connection to get forced-closed now instead just kill the stream and not the connection.
  Fixes #941
* multi: make Curl_expire() work with 0 ms timeouts | Daniel Stenberg | 2016-08-04 | 1 | -74/+84
  Previously, passing a timeout of zero to Curl_expire() was a magic code for clearing all timeouts for the handle. That is now instead done with the new Curl_expire_clear() function, so a 0 timeout is fine to set and will trigger a timeout ASAP.
  This will help remove short delays, particularly noticeable when doing HTTP/2.
* transfer: return without select when the read loop reached maxcount | Daniel Stenberg | 2016-08-04 | 1 | -1/+4
  Regression added in 790d6de48515. The maxcount limit was added to avoid one particular transfer starving out others. But when aborting due to reaching the maxcount, the connection must be marked to be read from again without first doing a select, as for some protocols (like SFTP/SCP) the data may already have been read off the socket.
  Reported-by: Dan Donahue
  Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
* curl_multi_cleanup: clear connection pointer for easy handles | Daniel Stenberg | 2016-08-03 | 1 | -0/+2
  CVE-2016-5421
  Bug: https://curl.haxx.se/docs/adv_20160803C.html
  Reported-by: Marcelo Echeverria and Fernando Muñoz
* typedefs: use the full structs in internal code... | Daniel Stenberg | 2016-06-22 | 1 | -16/+19
  ... and save the typedef'ed names for headers and external APIs.
* internals: rename the SessionHandle struct to Curl_easy | Daniel Stenberg | 2016-06-22 | 1 | -75/+57
* ftp wildcard: segfault due to init only in multi_perform | Daniel Stenberg | 2016-05-15 | 1 | -15/+0
  The FTP wildcard init is now properly done in Curl_pretransfer() and the corresponding cleanup in Curl_close(). The previous placement of the init/cleanup code left the internal pointer NULL when this feature was used with the multi_socket() API, since the setup was only made within the curl_multi_perform() function.
  Reported-by: Jonathan Cardoso Machado
  Fixes #800
* lib: include curl_printf.h as one of the last headers | Daniel Stenberg | 2016-04-29 | 1 | -1/+1
  curl_printf.h defines printf to curl_mprintf, etc. This can cause problems with external headers which may use __attribute__((format(printf, ...))) markers etc. To avoid having it clash with system includes, we include curl_printf.h after any system headers. That leaves these three headers as the last ones, always kept in this order:
    curl_printf.h
    curl_memory.h
    memdebug.h
  None of them include system headers; they all do funny #defines.
  Reported-by: David Benjamin
  Fixes #743
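  In practice that means the include list of a libcurl source file ends like this short illustration of the ordering (the system and project headers above the trio are examples only):

    #include <stdio.h>       /* system headers come first */

    #include "urldata.h"     /* then project headers (example) */

    /* these three always come last, in this order; they only do #defines */
    #include "curl_printf.h"
    #include "curl_memory.h"
    #include "memdebug.h"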
* multi: accidentally used resolved host name instead of proxy | Daniel Stenberg | 2016-04-25 | 1 | -2/+4
  Regression introduced in 09b5a998
  Bug: https://curl.haxx.se/mail/lib-2016-04/0084.html
  Reported-by: BoBo
* news: CURLOPT_CONNECT_TO and --connect-to | Michael Kaufmann | 2016-04-17 | 1 | -1/+7
  Makes curl connect to the given host+port instead of the host+port found in the URL.
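  A brief usage sketch of the libcurl side of this option; the host names are placeholders, and the list entries use the "HOST:PORT:CONNECT-TO-HOST:CONNECT-TO-PORT" format:

    #include <curl/curl.h>

    int connect_elsewhere(CURL *easy)
    {
      struct curl_slist *connect_to = NULL;
      connect_to = curl_slist_append(connect_to,
                                     "example.com:443:backend.example.com:8443");
      curl_easy_setopt(easy, CURLOPT_CONNECT_TO, connect_to);
      /* run the transfer, then release the list with curl_slist_free_all() */
      return 0;
    }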
* http2: Add handling of stream level errors | Tatsuhiro Tsujikawa | 2016-04-11 | 1 | -1/+2
  Previously, when a stream was closed with an error other than NGHTTP2_NO_ERROR by RST_STREAM, the underlying TCP connection was dropped. This is undesirable since there may be other streams multiplexed on it that are perfectly fine.
  This change introduces a new error code, CURLE_HTTP2_STREAM, which indicates a stream error that only affects the relevant stream; the connection should be kept open. The existing CURLE_HTTP2 means a connection error in general.
  Ref: https://github.com/curl/curl/issues/659
  Ref: https://github.com/curl/curl/pull/663
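  From an application's point of view, the split between stream-level and connection-level errors can be used roughly as in this hedged sketch (the retry-once policy is the example's choice, not libcurl's):

    #include <curl/curl.h>

    CURLcode run_once(CURL *easy)
    {
      CURLcode rc = curl_easy_perform(easy);
      if(rc == CURLE_HTTP2_STREAM) {
        /* only this stream failed (e.g. via RST_STREAM); the connection may
           still be fine, so a single retry of this request is reasonable */
        rc = curl_easy_perform(easy);
      }
      else if(rc == CURLE_HTTP2) {
        /* connection-level HTTP/2 problem: treat the whole connection as bad */
      }
      return rc;
    }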
* multi: remove trailing space in debug output | Daniel Stenberg | 2016-04-05 | 1 | -1/+1
* code: style updates | Daniel Stenberg | 2016-04-03 | 1 | -2/+2
* multi: turn Curl_done into file local multi_done | Daniel Stenberg | 2016-03-30 | 1 | -29/+188
  ... as it now is used by multi.c only.
* multi: multi_reconnect_request is the former Curl_reconnect_request | Daniel Stenberg | 2016-03-30 | 1 | -2/+58
  now a file local function in multi.c
* multi: move Curl_do and Curl_do_done to multi.c and make static | Daniel Stenberg | 2016-03-30 | 1 | -3/+82
  ... called multi_do and multi_do_done as they're file local now.
* multi: fix "Operation timed out after" timer | Daniel Stenberg | 2016-03-23 | 1 | -1/+1
  Use the local, reasonably updated, 'now' value when creating the message string to output for the timeout condition.
  Fixes #619