path: root/lib/progress.c
Commit log, newest first. Each entry: commit message (author, date, files changed, lines -removed/+added)
* misc: remove support for curl_off_t < 8 bytes (Daniel Stenberg, 2023-02-24, 1 file, -11/+2)
  Closes #10597
* connections: introduce http/3 happy eyeballs (Stefan Eissing, 2023-02-02, 1 file, -12/+24)
  New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
  - filter is installed when `--http3` in the tool is used (or the equivalent
    CURLOPT_ done in the library)
  - starts a QUIC/HTTP/3 connect right away. Should that not succeed after
    100ms (subject to change), a parallel attempt is started for HTTP/2 and
    HTTP/1.1 via TCP
  - both attempts are subject to IPv6/IPv4 eyeballing, same as happens for
    other connections
  - tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
  - use a `soft` timeout at half the value. When the soft timeout expires, the
    HTTPS-CONNECT filter checks if the QUIC filter has received any data from
    the server. If not, it will start the HTTP/2 attempt.

  HTTP/3 (ngtcp2) improvements.
  - setting call_data in all cfilter calls similar to http/2 and vtls filters
    for use in callback where no stream data is available
  - returning CURLE_PARTIAL_FILE for prematurely terminated transfers
  - enabling pytest test_05 for h3
  - shifting functionality to "connect" UDP sockets from the ngtcp2
    implementation into the udp socket cfilter, because unconnected UDP
    sockets are weird. For example they error when adding to a pollset.

  HTTP/3 (quiche) improvements.
  - fixed upload bug in quiche implementation, now passes 251 and pytest
  - error codes on stream RESET
  - improved debug logs
  - handling of DRAIN during connect
  - limiting pending event queue

  HTTP/2 cfilter improvements.
  - use LOG_CF macros for dynamic logging in debug build
  - fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
  - enable pytest test_05 for h2
  - fix upload pytests and improve parallel transfer performance

  GOAWAY handling for ngtcp2/quiche
  - during connect, when the remote server refuses to accept new connections
    and closes immediately (so the local conn goes into DRAIN phase), the
    connection is torn down and another attempt is made after a short grace
    period. This is the behaviour observed with nghttpx when we tell it to
    shut down gracefully. Tested in pytest test_03_02.

  TLS improvements
  - ALPN selection for SSL/SSL-PROXY filters in one vtls set of functions,
    replaces copy of logic in all tls backends
  - standardized the infof logging of offered ALPNs
  - ALPN negotiated: have a common function for all backends that sets the
    alpn property and connection related things based on the negotiated
    protocol (or lack thereof)
  - new tests/tests-httpd/scorecard.py for testing h3/h2 protocol
    implementation. Invoke: python3 tests/tests-httpd/scorecard.py --help
    for usage.

  Improvements on gathering connect statistics and socket access.
  - new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters report
    connection statistics. This is triggered when the connection has
    completely connected.
  - new void Curl_pgrsTimeWas(..) method to report a timer update with a
    timestamp of when it happened. This allows for updating timers "later",
    e.g. a connect statistic after full connectivity has been reached.
  - in case of HTTP eyeballing, the previous changes will update statistics
    only from the filter chain that "won" the eyeballing
  - new cfilter query CF_QUERY_SOCKET for retrieving the socket used by a
    filter chain. Added methods Curl_conn_cf_get_socket() and
    Curl_conn_get_socket() for convenient use of this query.
  - change VTLS backends to query their sub-filters for the socket when
    checks during the handshake are made

  HTTP/3 documentation on how https eyeballing works.

  Scorecard with Caddy.
  - configure can be run with `--with-test-caddy=path` to specify which caddy
    to use for testing
  - tests/tests-httpd/scorecard.py now measures download speeds with caddy

  pytest improvements
  - adding Makefile to clean gen dir
  - adding nghttpx rundir creation on start
  - checking httpd version 2.4.55 for test_05 cases where it is needed,
    skipping with a message if too old
  - catch exception when checking for caddy existence on the system

  Closes #10349
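  As an illustration of the soft-timeout decision described in the commit above,
  here is a minimal sketch. All names (struct h3_eyeballer, check_soft_timeout,
  its fields) are hypothetical and simplified; this is not the cfilter's actual
  API, only the shape of the "fall back to TCP at half the timeout if QUIC is
  silent" rule.

```c
/* Hypothetical, simplified sketch; not the cfilter's real API. */
struct h3_eyeballer {
  struct curltime started;   /* when the QUIC/HTTP/3 attempt began */
  timediff_t timeout_ms;     /* HAPPY_EYEBALLS_TIMEOUT for this ip-version */
  bool tcp_started;          /* parallel HTTP/2 + HTTP/1.1 attempt running? */
};

static void check_soft_timeout(struct h3_eyeballer *eb, struct curltime now,
                               bool quic_received_data)
{
  timediff_t spent = Curl_timediff(now, eb->started);

  /* the "soft" timeout fires at half the happy-eyeballs timeout: fall back
     to TCP only if QUIC has not received anything from the server yet */
  if(!eb->tcp_started && spent >= eb->timeout_ms / 2 && !quic_received_data)
    eb->tcp_started = TRUE; /* start the HTTP/2 and HTTP/1.1 attempt over TCP */
}
```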
* copyright: update all copyright lines and remove year ranges (Daniel Stenberg, 2023-01-03, 1 file, -1/+1)
  - they are mostly pointless in all major jurisdictions
  - many big corporations and projects already don't use them
  - saves us from pointless churn
  - git keeps history for us
  - the year range is kept in COPYING

  checksrc is updated to allow non-year using copyright statements

  Closes #10205

* copyright: make repository REUSE compliant (max.mehl, 2022-06-13, 1 file, -1/+3)
  Add licensing and copyright information for all files in this repository.
  This either happens in the file itself as a comment header or in the file
  `.reuse/dep5`. This commit also adds a Github workflow to check pull
  requests and adapts copyright.pl to the changes.
  Closes #8869
* progress: make trspeed avoid floats (Daniel Stenberg, 2021-09-01, 1 file, -1/+6)
  and compiler warnings for data conversions.
  Reported-by: Michał Antoniak
  Fixes #7645
  Closes #7653
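  For illustration, a minimal sketch of what a float-free speed calculation can
  look like. The function name bytes_per_second and the overflow threshold are
  assumptions for this sketch, not the actual trspeed() code in progress.c.

```c
/* Illustrative only: integer-only bytes/second from an elapsed time given in
   microseconds; not curl's actual implementation. */
static curl_off_t bytes_per_second(curl_off_t amount, curl_off_t elapsed_us)
{
  if(elapsed_us <= 0)
    elapsed_us = 1;                       /* guard against division by zero */

  /* for huge byte counts, scale down first so the multiplication below
     cannot overflow a 64-bit curl_off_t */
  if(amount > CURL_OFF_T_C(9000000000000)) {
    curl_off_t secs = elapsed_us / 1000000;
    return amount / (secs ? secs : 1);
  }
  return amount * 1000000 / elapsed_us;   /* exact, no floats involved */
}
```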
* progress: fix a compile warning on some systems (Michael Kaufmann, 2021-08-10, 1 file, -1/+1)
  lib/progress.c:380:40: warning: conversion to 'long double' from
  'curl_off_t {aka long long int}' may alter its value [-Wconversion]
  Closes #7549

* progress: reset limit_size variables at transfer start (Daniel Stenberg, 2021-05-11, 1 file, -0/+2)
  Otherwise the old value would linger from a previous use and would mess up
  the network speed cap logic.
  Reported-by: Ymir1711 on github
  Fixes #7042
  Closes #7043
* progress/trspeed: use a local convenient pointer to beautify code (Daniel Stenberg, 2021-05-09, 1 file, -33/+26)
  The function becomes easier to read and understand with less repetition.

* trspeed: use long double for transfer speed calculation (Daniel Stenberg, 2021-05-09, 1 file, -19/+6)

* progress: move transfer speed calc into function (Daniel Stenberg, 2021-05-09, 1 file, -25/+27)
  This silences two scan-build-11 warnings:
  "The result of the '/' expression is undefined"
  Bug: https://curl.se/mail/lib-2021-05/0022.html
  Closes #7035

* progress: when possible, calculate transfer speeds with microseconds (Daniel Stenberg, 2021-05-07, 1 file, -2/+8)
  ... this improves precision, especially for transfers in the few or even
  sub millisecond range.
  Reported-by: J. Bromley
  Fixes #7017
  Closes #7020
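  For a sense of why microsecond resolution matters for very short transfers,
  an illustrative comparison (the numbers and code are made-up examples, not
  curl internals):

```c
/* Speed of a 64 KB transfer that took 800 microseconds, computed with
   millisecond vs microsecond resolution. */
#include <stdio.h>

int main(void)
{
  long long bytes = 65536;
  long long elapsed_us = 800;
  long long elapsed_ms = elapsed_us / 1000;            /* truncates to 0 */

  long long speed_us = bytes * 1000000 / elapsed_us;   /* 81,920,000 B/s */
  long long speed_ms = bytes * 1000 / (elapsed_ms ? elapsed_ms : 1);
                                     /* 65,536,000 B/s: 20% low, or worse */
  printf("us-based: %lld B/s  ms-based: %lld B/s\n", speed_us, speed_ms);
  return 0;
}
```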
* send_speed: simplify the checks for if a speed limit is set (Daniel Stenberg, 2021-03-27, 1 file, -2/+2)
  ... as we know the value cannot be set to negative: enforced by setopt()

* config: remove CURL_SIZEOF_CURL_OFF_T, use only SIZEOF_CURL_OFF_T (Daniel Stenberg, 2021-03-11, 1 file, -1/+1)
  Make the code consistently use a single name for the size of the
  "curl_off_t" type.
  Closes #6702

* lib: more conn->data cleanups (Daniel Stenberg, 2021-01-19, 1 file, -12/+8)
  Closes #6479

* Curl_pgrsStartNow: init speed limit time stamps at start (Daniel Stenberg, 2020-11-09, 1 file, -4/+2)
  By setting the speed limit time stamps unconditionally at transfer start,
  we can start off a transfer without speed limits and yet allow them to get
  set during transfer and have an effect.
  Reported-by: Kael1117 on github
  Fixes #6162
  Closes #6184

* curl.se: new home (Daniel Stenberg, 2020-11-04, 1 file, -1/+1)
  Closes #6172

* Curl_pgrsTime - return new time to avoid timeout integer overflow (Daniel Stenberg, 2020-08-28, 1 file, -2/+7)
  Setting a timeout to INT_MAX could cause an immediate error to get returned
  as timeout because of an overflow when different values of 'now' were used.
  This is primarily fixed by having Curl_pgrsTime() return the "now" when
  TIMER_STARTSINGLE is set so that the parent function will continue using
  that time.
  Reported-by: Ionuț-Francisc Oancea
  Fixes #5583
  Closes #5847
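  A simplified sketch of the pattern this fix introduces: the timer recorder
  hands back the timestamp it sampled so the caller keeps working with that
  exact value instead of sampling the clock a second, slightly later time. The
  names here (struct progress, pgrs_time_start, set_expire) are hypothetical,
  not curl's internals.

```c
/* Hypothetical names, heavily simplified; not curl's actual code. */
static struct curltime pgrs_time_start(struct progress *p)
{
  struct curltime now = Curl_now();  /* sample the clock exactly once */
  p->t_startsingle = now;
  return now;                        /* hand the timestamp back to the caller */
}

static void init_single_request(struct progress *p, timediff_t timeout_ms)
{
  struct curltime now = pgrs_time_start(p);

  /* the later expiry check compares against this same 'now'; with two
     slightly different "now" samples, a timeout of INT_MAX milliseconds
     could overflow the comparison and look like an immediate timeout */
  set_expire(p, now, timeout_ms);    /* hypothetical helper */
}
```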
* timeouts: change millisecond timeouts to timediff_t from time_t (Daniel Stenberg, 2020-05-30, 1 file, -3/+3)
  For millisecond timers we like timediff_t better. Also, time_t can be
  unsigned so returning a negative value doesn't work then.
  Closes #5479

* XFERINFOFUNCTION: support CURL_PROGRESSFUNC_CONTINUE (John Schroeder, 2019-11-26, 1 file, -7/+11)
  (also for PROGRESSFUNCTION) By returning this value from the callback, the
  internal progress function is still called afterward.
  Closes #4599
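  A small example of the feature added here, using only public libcurl API:
  the xferinfo callback does its own logging and returns
  CURL_PROGRESSFUNC_CONTINUE so libcurl's built-in progress meter still runs
  afterward (the URL is a placeholder).

```c
#include <curl/curl.h>
#include <stdio.h>

static int xferinfo(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                    curl_off_t ultotal, curl_off_t ulnow)
{
  (void)clientp;
  fprintf(stderr, "down %" CURL_FORMAT_CURL_OFF_T "/%" CURL_FORMAT_CURL_OFF_T
          " up %" CURL_FORMAT_CURL_OFF_T "/%" CURL_FORMAT_CURL_OFF_T "\n",
          dlnow, dltotal, ulnow, ultotal);
  /* returning this instead of 0 tells libcurl to also run its internal
     progress meter after this callback */
  return CURL_PROGRESSFUNC_CONTINUE;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); /* enable progress calls */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```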
* timediff: make it 64 bit (if possible) even with 32 bit time_t (Daniel Stenberg, 2019-08-01, 1 file, -5/+6)
  ... to make it hold microseconds too.
  Fixes #4165
  Closes #4168

* progress: reset download/uploaded counter (Daniel Stenberg, 2019-07-29, 1 file, -0/+2)
  ... to make CURLOPT_MAX_RECV_SPEED_LARGE and CURLOPT_MAX_SEND_SPEED_LARGE
  work correctly on subsequent transfers that reuse the same handle.
  Fixed-by: Ironbars13 on github
  Fixes #4084
  Closes #4161
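  The scenario this fix covers, shown with public API only: one easy handle, a
  receive speed cap, and a second transfer reusing that handle. With the
  counters reset at transfer start, the cap also holds for the second transfer
  (the URL is a placeholder).

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/big.bin");
    /* limit download speed to 100 KB/s */
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)102400);

    curl_easy_perform(curl);  /* first transfer: capped */
    curl_easy_perform(curl);  /* reused handle: cap must still hold */

    curl_easy_cleanup(curl);
  }
  return 0;
}
```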
* progress: make the progress meter appear again (Daniel Stenberg, 2019-07-19, 1 file, -118/+108)
  Fix regression caused by 21080e1
  Reported-by: Chih-Hsuan Yen
  Fixes #4122
  Closes #4124

* configure: --disable-progress-meter (Daniel Stenberg, 2019-06-18, 1 file, -55/+76)
  Builds libcurl without support for the built-in progress meter.
  Closes #4023

* Revert "progress: CURL_DISABLE_PROGRESS_METER" (Daniel Stenberg, 2019-05-23, 1 file, -61/+49)
  This reverts commit 3b06e68b7734cb10a555f9d7e804dd5d808236a4.
  Clearly this change wasn't good enough as it broke CURLOPT_LOW_SPEED_LIMIT
  + CURLOPT_LOW_SPEED_TIME
  Reported-by: Dave Reisner
  Fixes #3927
  Closes #3928

* progress: CURL_DISABLE_PROGRESS_METER (Daniel Stenberg, 2019-05-17, 1 file, -49/+61)
* build: fix "clarify calculation precedence" warningsMarcel Raad2019-05-121-2/+2
| | | | | | | Codacy/CppCheck warns about this. Consistently use parentheses as we already do in some places to silence the warning. Closes https://github.com/curl/curl/pull/3866
* snprintf: renamed and we now only use msnprintf()Daniel Stenberg2018-11-231-18/+18
| | | | | | | | | | | The function does not return the same value as snprintf() normally does, so readers may be mislead into thinking the code works differently than it actually does. A different function name makes this easier to detect. Reported-by: Tomas Hoger Assisted-by: Daniel Gustafsson Fixes #3296 Closes #3297
* cppcheck: fix warningsMarian Klymov2018-06-111-19/+20
| | | | | | | | | | | | | - Get rid of variable that was generating false positive warning (unitialized) - Fix issues in tests - Reduce scope of several variables all over etc Closes #2631
* rate-limit: use three second window to better handle high speedsDaniel Stenberg2018-03-161-31/+43
| | | | | | | | | | | | | | | Due to very frequent updates of the rate limit "window", it could attempt to rate limit within the same milliseconds and that then made the calculations wrong, leading to it not behaving correctly on very fast transfers. This new logic updates the rate limit "window" to be no shorter than the last three seconds and only updating the timestamps for this when switching between the states TOOFAST/PERFORM. Reported-by: 刘佩东 Fixes #2386 Closes #2388
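  A rough sketch of the minimum-window idea described above, heavily
  simplified: the check never judges the rate over a window shorter than
  about three seconds, so two updates landing in the same millisecond cannot
  skew the calculation. The function, fields, and the exact way curl tracks
  the window timestamps differ in the real code.

```c
#define MIN_RATE_LIMIT_PERIOD_MS 3000  /* illustrative constant */

static bool too_fast(curl_off_t bytes_since_mark,
                     timediff_t ms_since_mark,
                     curl_off_t limit_bytes_per_sec)
{
  curl_off_t allowed;

  if(ms_since_mark < MIN_RATE_LIMIT_PERIOD_MS)
    ms_since_mark = MIN_RATE_LIMIT_PERIOD_MS; /* never judge a tiny window */

  /* bytes the configured limit permits within that window */
  allowed = limit_bytes_per_sec * ms_since_mark / 1000;
  return bytes_since_mark > allowed;
}
```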
* limit-rate: fix compiler warning (Michael Kaufmann, 2018-03-12, 1 file, -1/+1)
  follow-up to 72a0f62

* limit-rate: kick in even before "limit" data has been received (Daniel Stenberg, 2018-03-11, 1 file, -17/+23)
  ... and make sure to avoid integer overflows with really large values.
  Reported-by: 刘佩东
  Fixes #2371
  Closes #2373

* TODO fixed: Detect when called from within callbacks (Björn Stenberg, 2018-02-15, 1 file, -0/+5)
  Closes #2302

* progress: calculate transfer speed on milliseconds if possible (Daniel Stenberg, 2018-01-08, 1 file, -7/+13)
  to increase accuracy for quick transfers
  Fixes #2200
  Closes #2206

* time: rename Curl_tvnow to Curl_now (Daniel Stenberg, 2017-10-25, 1 file, -5/+5)
  ... since the 'tv' stood for timeval and this function does not return a
  timeval struct anymore. Also, cleaned up the Curl_timediff*() functions to
  avoid typecasts and clean up the descriptive comments.
  Closes #2011

* timediff: return timediff_t from the time diff functions (Daniel Stenberg, 2017-10-25, 1 file, -9/+9)
  ... to cater for systems with unsigned time_t variables.
  - Renamed the functions to Curl_timediff and Curl_timediff_us.
  - Added overflow protection for both of them in either direction for both
    32 bit and 64 bit time_ts
  - Reprefixed the curlx_time functions to use Curl_*
  Reported-by: Peter Piekarski
  Fixes #2004
  Closes #2005
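  A simplified sketch of the overflow-protection idea: clamp the difference of
  two timestamps into timediff_t's range instead of letting it wrap. This is
  illustrative only, not the exact Curl_timediff() implementation, although
  struct curltime, timediff_t and the TIMEDIFF_T_MAX/MIN limits are real curl
  internals.

```c
static timediff_t clamped_diff_ms(struct curltime newer, struct curltime older)
{
  timediff_t secs = (timediff_t)(newer.tv_sec - older.tv_sec);

  if(secs > TIMEDIFF_T_MAX/1000)
    return TIMEDIFF_T_MAX;            /* clamp up instead of overflowing */
  if(secs < TIMEDIFF_T_MIN/1000)
    return TIMEDIFF_T_MIN;            /* clamp down instead of wrapping */

  return secs * 1000 + ((int)newer.tv_usec - (int)older.tv_usec)/1000;
}
```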
* code style: use spaces around equals signs (Daniel Stenberg, 2017-09-11, 1 file, -20/+20)

* progress: Track total times following redirects (Ryan Winograd, 2017-08-15, 1 file, -9/+7)
  Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
  `t_pretransfer`, and `t_starttransfer` to track the total times for these
  activities when a redirect is followed. Previously, only the times for the
  most recent request would be tracked.

  Related changes:
  - Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes` now
    that the function only resets transfer sizes and no longer modifies any
    of the progress timers.
  - Add a bool to the `Progress` struct that is used to prevent
    double-counting `t_starttransfer` times.

  Added test case 1399.
  Fixes #522 and Known Bug 1.8
  Closes #1602
  Reported-by: joshhe on github
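  An example of reading the timers this change makes cumulative across a
  redirect chain, using only the public getinfo API (the URL is a
  placeholder):

```c
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    double namelookup, connect, starttransfer, total;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/redirecting-page");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    if(curl_easy_perform(curl) == CURLE_OK) {
      curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME, &namelookup);
      curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &connect);
      curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &starttransfer);
      curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total);
      /* with this change, the first three cover every request in the
         redirect chain, not just the last one */
      printf("dns %.3f connect %.3f firstbyte %.3f total %.3f (seconds)\n",
             namelookup, connect, starttransfer, total);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}
```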
* timeval: struct curltime is a struct timeval replacement (Daniel Stenberg, 2017-07-28, 1 file, -6/+6)
  ... to make all libcurl internals able to use the same data types for the
  struct members. The timeval struct differs subtly on several platforms so
  it makes it cumbersome to use everywhere.
  Ref: #1652
  Closes #1693

* progress: prevent resetting t_starttransfer (Ryan Winograd, 2017-06-30, 1 file, -1/+15)
  Prevent `Curl_pgrsTime` from modifying `t_starttransfer` when invoked with
  `TIMER_STARTTRANSFER` more than once during a single request.

  When a redirect occurs, this is considered a new request and
  `t_starttransfer` can be updated to reflect the `t_starttransfer` time of
  the redirect request.
  Closes #1616
  Bug: https://github.com/curl/curl/pull/1602#issuecomment-310267370

* progress: progress.timespent needs to be us (Daniel Stenberg, 2017-06-24, 1 file, -2/+2)
  follow-up to 64ed44a815e4e to fix test 500 failures

* progress: fix "time spent", broke in adef394ac (Daniel Stenberg, 2017-06-24, 1 file, -4/+4)

* progress: let "current speed" be UL + DL speeds combined (Daniel Stenberg, 2017-06-14, 1 file, -7/+5)
  Bug #1556
  Reported-by: Paul Harris
  Closes #1559

* timers: store internal time stamps as time_t instead of doubles (Daniel Stenberg, 2017-06-14, 1 file, -21/+21)
  This gives us accurate precision and it allows us to avoid storing "no
  time" for systems with too low timer resolution as we then bump the time
  up to 1 microsecond. Should fix test 573 on windows.

  Remove the now unused curlx_tvdiff_secs() function.

  Maintains the external getinfo() API with using doubles.
  Fixes #1531
* spelling fixes (klemens, 2017-03-26, 1 file, -9/+9)
  Closes #1356

* Improve code readability (Sylvestre Ledru, 2017-03-13, 1 file, -3/+3)
  ... by removing the else branch after a return, break or continue.
  Closes #1310

* time_t fix: follow-up to de4de4e3c7c (Daniel Stenberg, 2016-11-13, 1 file, -2/+2)
  Blah, I accidentally wrote size_t instead of time_t for two variables.
  Reported-by: Dave Reisner

* timeval: prefer time_t to hold seconds instead of long (Daniel Stenberg, 2016-11-12, 1 file, -16/+18)
  ... as long is still 32bit on modern 64bit windows machines, while time_t
  is generally 64bit.

* Curl_pgrsUpdate: use dedicated function for time passed (Daniel Stenberg, 2016-11-11, 1 file, -4/+2)

* speed caps: not based on average speeds anymore (Olivier Brunel, 2016-09-04, 1 file, -0/+75)
  Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
  CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits with
  the cumulative average speed of the entire transfer; while this might work
  at times with good/constant connections, in other cases it can result in
  the limits simply being "ignored" for more than "short bursts" (as told in
  the man page).

  Consider a download that goes on much slower than the limit for some time
  (because bandwidth is used elsewhere, server is slow, whatever the reason),
  then once things get better, curl would simply ignore the limit up until
  the average speed (since the beginning of the transfer) reached the limit.
  This could prove the limit useless to effectively avoid using the entire
  bandwidth (at least for quite some time).

  So instead, we now use a "moving starting point" as reference, and every
  time at least as much as the limit has been transferred, we can reset this
  starting point to the current position. This gets a good limiting effect
  that applies to the "current speed" with instant reactivity (in case of
  sudden speed burst).
  Closes #971
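  A rough sketch of the "moving starting point" described above. The struct
  and function names are hypothetical and the real logic in progress.c differs
  in its details; this only illustrates judging the speed since a reference
  point and sliding that point forward once a full "limit" worth of data has
  passed it.

```c
struct speedcap {
  curl_off_t limit;        /* bytes/second, from CURLOPT_MAX_*_SPEED_LARGE */
  curl_off_t start_size;   /* bytes already transferred at the reference point */
  struct curltime start;   /* time of the reference point */
};

static bool over_limit(struct speedcap *c, curl_off_t transferred,
                       struct curltime now)
{
  curl_off_t since = transferred - c->start_size;
  timediff_t ms = Curl_timediff(now, c->start);
  bool over = (ms > 0) && (since * 1000 / ms > c->limit);

  if(since >= c->limit) {
    /* at least one "limit" worth of data since the reference point: slide it
       forward so earlier slow (or fast) periods stop influencing the check */
    c->start_size = transferred;
    c->start = now;
  }
  return over;
}
```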
* internals: rename the SessionHandle struct to Curl_easy (Daniel Stenberg, 2016-06-22, 1 file, -9/+9)