The existing logic for evaluating multi header "$sent_http_*" variables,
such as $sent_http_cache_control, introduced in 1.23.0, doesn't take
into account that one or more elements can be cleared, yet still present
in the linked list, pointed to by the next field. Such elements don't
contribute to the resulting variable length, yet an attempt to append a
separator for them ends up in an out-of-bounds write.

This is not possible with standard modules, though at least one third-party
module is known to override multi header values this way, so it makes
sense to harden the logic.

The fix restores a generic boundary check.
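
For illustration, here is a minimal sketch of such hardened evaluation,
using a simplified stand-in for ngx_table_elt_t (names and structure are
illustrative, not the actual nginx patch). Cleared elements must be skipped
in both passes, and the copy loop is additionally bounded by the length
computed in the first pass:

    #include <stdlib.h>
    #include <string.h>

    typedef struct header_s  header_t;

    struct header_s {
        unsigned    hash;      /* 0 means the element was cleared */
        size_t      len;
        const char *value;
        header_t   *next;
    };

    /* Join non-cleared list elements with ", ". */
    static char *
    join_multi_header(header_t *h)
    {
        size_t     len = 0;
        char      *buf, *p, *end;
        header_t  *el;

        for (el = h; el; el = el->next) {
            if (el->hash == 0) {
                continue;                  /* cleared: adds no length */
            }

            len += len ? el->len + 2 : el->len;
        }

        buf = malloc(len + 1);
        if (buf == NULL) {
            return NULL;
        }

        p = buf;
        end = buf + len;

        for (el = h; el; el = el->next) {
            if (el->hash == 0) {
                continue;                  /* must be skipped here too */
            }

            if (p != buf && p + 2 <= end) {    /* boundary check */
                *p++ = ','; *p++ = ' ';
            }

            if (p + el->len > end) {
                break;                     /* never write past the end */
            }

            memcpy(p, el->value, el->len);
            p += el->len;
        }

        *p = '\0';
        return buf;
    }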
|
zlib-ng uses a custom alloc_aligned() wrapper for all allocations,
therefore all allocations are larger than expected by (64 + sizeof(void*)).
Further, they are seen as allocations of 1 element. Relevant calculations
were adjusted to reflect this, and state allocation is now protected
with a flag to avoid misinterpreting other allocations as the zlib
deflate_state allocation.

Additionally, zlib-ng no longer forces window bits to 13 on compression
level 1, so the comment was adjusted to reflect this.
|
Similarly to 7192:d5a535774861, this avoids spurious zero statuses
in access.log, and is in line with other header-related errors.
|
Similarly to ticket #274 (7354:1812f1d79d84), early request finalization
without calling ngx_http_run_posted_requests() resulted in a connection
hang (a socket leak) if the 400 (Bad Request) error was generated in
ngx_http_v2_state_process_header() due to invalid request headers and
"return 444" was used in error_page 400.
|
This is expected to help with clients using pipelining with some constant
depth, such as apt[1][2].

When downloading many resources, apt uses pipelining with some constant
depth, that is, a fixed number of requests in flight. This essentially
means that after receiving a response it sends an additional request to
the server, so such requests can arrive at the server at any time. Further,
additional requests are sent one by one, and look exactly like non-pipelined
requests (each is neither pipelined itself, nor followed by pipelined
requests).

The only safe approach to close such connections (for example, when
keepalive_requests is reached) is with lingering. To do so, nginx now
monitors whether pipelining was used on the connection, and if it was,
closes the connection with lingering.

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=973861#10
[2] https://mailman.nginx.org/pipermail/nginx-devel/2023-January/ZA2SP5SJU55LHEBCJMFDB2AZVELRLTHI.html
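
For reference, a bare-bones sketch of a lingering close on a plain POSIX
socket (not the actual nginx implementation, which is event-driven and
governed by lingering_time/lingering_timeout): stop sending, then drain
whatever the client still has in flight, so a pipelined request does not
trigger a TCP RST that could destroy the last response:

    #include <sys/time.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void
    lingering_close(int fd)
    {
        char            buf[4096];
        ssize_t         n;
        struct timeval  tv = { 5, 0 };   /* read timeout, seconds */

        shutdown(fd, SHUT_WR);           /* send FIN; no more writes */

        (void) setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        do {
            n = recv(fd, buf, sizeof(buf), 0);   /* discard input */
        } while (n > 0);                         /* until EOF/timeout */

        close(fd);
    }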
|
Since 4611:2b6cb7528409, responses from the gzip static, flv, and mp4
modules can be used with subrequests, though empty files were not properly
handled. Empty gzipped, flv, and mp4 files thus resulted in "zero size buf
in output" alerts. While the corresponding valid files are not expected
to be empty, such files shouldn't result in alerts.

Fix is to set b->sync on such empty subrequest responses, similarly to what
ngx_http_send_special() does.

Additionally, the static module, the ngx_http_send_response() function, and
the file cache are modified to do the same instead of not sending the
response body at all in such cases, since not sending the response body at
all is believed to be at least questionable, and might break various
filters which do not expect such behaviour.
|
The "listen" directive in the http module can be used multiple times
in different server blocks. Originally, it was supposed to be specified
once with various socket options, and without any parameters in virtual
server blocks. For example:

    server { listen 80 backlog=1024; server_name foo; ... }
    server { listen 80; server_name bar; ... }
    server { listen 80; server_name bazz; ... }

The address part of the syntax ("address[:port]" / "port" / "unix:path")
uniquely identifies the listening socket, and therefore is enough for
name-based virtual servers (to let nginx know that the virtual server
accepts requests on the listening socket in question).

To ensure that listening options do not conflict between virtual servers,
they were allowed only once. For example, the following configuration
will be rejected ("duplicate listen options for 0.0.0.0:80 in ..."):

    server { listen 80 backlog=1024; server_name foo; ... }
    server { listen 80 backlog=512; server_name bar; ... }

At some point it was, however, noticed that it is sometimes convenient
to repeat some options for clarity. In nginx 0.8.51 the "ssl" parameter
was allowed to be specified multiple times, e.g.:

    server { listen 443 ssl backlog=1024; server_name foo; ... }
    server { listen 443 ssl; server_name bar; ... }
    server { listen 443 ssl; server_name bazz; ... }

This approach makes configuration more readable, since SSL sockets are
immediately visible in the configuration. If this is not needed, just the
address can still be used.

Later, additional protocol-specific options similar to "ssl" were
introduced, notably "http2" and "proxy_protocol". With these options,
one can write:

    server { listen 443 ssl backlog=1024; server_name foo; ... }
    server { listen 443 http2; server_name bar; ... }
    server { listen 443 proxy_protocol; server_name bazz; ... }

The resulting socket will use ssl, http2, and proxy_protocol, but this
is not really obvious from the configuration.

To emphasize that such misleading configurations are discouraged, nginx
now warns if the "listen" directive is used with protocol options
different from the ones previously used, when this is potentially
confusing.

In particular, the following configurations are allowed:

    server { listen 8401 ssl backlog=1024; server_name foo; }
    server { listen 8401 ssl; server_name bar; }
    server { listen 8401 ssl; server_name bazz; }

    server { listen 8402 ssl http2 backlog=1024; server_name foo; }
    server { listen 8402 ssl; server_name bar; }
    server { listen 8402 ssl; server_name bazz; }

    server { listen 8403 ssl; server_name bar; }
    server { listen 8403 ssl; server_name bazz; }
    server { listen 8403 ssl http2; server_name foo; }

    server { listen 8404 ssl http2 backlog=1024; server_name foo; }
    server { listen 8404 http2; server_name bar; }
    server { listen 8404 http2; server_name bazz; }

    server { listen 8405 ssl http2 backlog=1024; server_name foo; }
    server { listen 8405 ssl http2; server_name bar; }
    server { listen 8405 ssl http2; server_name bazz; }

    server { listen 8406 ssl; server_name foo; }
    server { listen 8406; server_name bar; }
    server { listen 8406; server_name bazz; }

And the following configurations will generate warnings:

    server { listen 8501 ssl http2 backlog=1024; server_name foo; }
    server { listen 8501 http2; server_name bar; }
    server { listen 8501 ssl; server_name bazz; }

    server { listen 8502 backlog=1024; server_name foo; }
    server { listen 8502 ssl; server_name bar; }

    server { listen 8503 ssl; server_name foo; }
    server { listen 8503 http2; server_name bar; }

    server { listen 8504 ssl; server_name foo; }
    server { listen 8504 http2; server_name bar; }
    server { listen 8504 proxy_protocol; server_name bazz; }

    server { listen 8505 ssl http2 proxy_protocol; server_name foo; }
    server { listen 8505 ssl http2; server_name bar; }
    server { listen 8505 ssl; server_name bazz; }

    server { listen 8506 ssl http2; server_name foo; }
    server { listen 8506 ssl; server_name bar; }
    server { listen 8506; server_name bazz; }

    server { listen 8507 ssl; server_name bar; }
    server { listen 8507; server_name bazz; }
    server { listen 8507 ssl http2; server_name foo; }

    server { listen 8508 ssl; server_name bar; }
    server { listen 8508; server_name bazz; }
    server { listen 8508 ssl backlog=1024; server_name foo; }

    server { listen 8509; server_name bazz; }
    server { listen 8509 ssl; server_name bar; }
    server { listen 8509 ssl backlog=1024; server_name foo; }

The basic idea is that at most two sets of protocol options are allowed:
the main one (with socket options, if any), and a shorter one, with options
being a subset of the main options, repeated for clarity. As long as the
shorter set of protocol options is used, all listen directives except the
main one should use it.
|
Previously, location prefix length in ngx_http_location_tree_node_t was
stored as "u_char", and therefore location prefixes longer than 255 bytes
were handled incorrectly.

Fix is to use "u_short" instead. With "u_short", prefixes up to 65535 bytes
can be safely used, and this limit isn't reachable due to NGX_CONF_BUFFER,
which is 4096 bytes.
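
A tiny standalone illustration of the underlying truncation (not nginx
code): storing a length above 255 in a u_char silently wraps, while
u_short keeps it intact:

    #include <stdio.h>
    #include <sys/types.h>

    int
    main(void)
    {
        size_t   prefix_len = 300;              /* longer than 255 bytes */
        u_char   len8 = (u_char) prefix_len;    /* wraps to 44 */
        u_short  len16 = (u_short) prefix_len;  /* keeps 300 */

        printf("u_char: %u, u_short: %u\n", len8, len16);
        return 0;
    }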
|
In contrast to on-the-fly gzipping with the gzip filter, the static gzipped
representation as returned by gzip_static is persistent, and therefore
the same binary representation is available for future requests, making
it possible to use range requests.

Further, if a gzipped representation is re-generated with different
compression settings, it is expected to result in a different ETag and a
different size reported in the Content-Range header, making it possible
to safely use range requests anyway.

As such, ranges are now allowed for files returned by gzip_static.
|
Port differences must be respected when checking addresses for duplicates,
otherwise configurations like this are broken:

    listen 127.0.0.1:6000-6005;

It was broken by 4cc2bfeff46c (nginx 1.23.3).
|
Due to the glibc bug[1], getaddrinfo("localhost") with AI_ADDRCONFIG
on a typical host with glibc and without IPv6 returns two 127.0.0.1
addresses, and therefore "listen localhost:80;" used to result in
"duplicate ... address and port pair" after 4f9b72a229c1.

Fix is to explicitly filter out duplicate addresses returned during
resolution of a name.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=14969
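
One way to do such filtering over a getaddrinfo() result list is sketched
below (simplified; nginx actually compares its own parsed address
structures, and the helper name here is hypothetical). Each entry is
checked against the entries already accepted and skipped on an exact
sockaddr match:

    #include <string.h>
    #include <netdb.h>

    /* Returns nonzero if *res repeats an address seen earlier in the list. */
    static int
    is_duplicate(const struct addrinfo *head, const struct addrinfo *res)
    {
        const struct addrinfo  *p;

        for (p = head; p != res; p = p->ai_next) {
            if (p->ai_addrlen == res->ai_addrlen
                && memcmp(p->ai_addr, res->ai_addr, res->ai_addrlen) == 0)
            {
                return 1;
            }
        }

        return 0;
    }

The caller walks the list returned by getaddrinfo() and ignores entries
for which is_duplicate() returns nonzero.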
|
As the SSI parser always uses the context from the main request for storing
variables and blocks, that context should always exist for subrequests using
SSI, even though the main request does not necessarily have SSI enabled.
However, ngx_http_get_module_ctx(r->main, ...) returns NULL in such cases,
resulting in a worker process crash (SIGSEGV) when its fields are accessed.

This patch links the first initialized context to the main request, and
upgrades it only when the main context is initialized.
|
Most atoms should not appear more than once in a container. Previously,
this was not enforced by the module, which could result in a worker process
crash, memory corruption and disclosure.
|
Now it properly detects invalid shared zone configuration with omitted size.
Previously it used to read outside of the buffer boundary.

Found with AddressSanitizer.
|
The variables have prefix $proxy_protocol_tlv_ and are accessible by name
and by type. Examples are: $proxy_protocol_tlv_0x01, $proxy_protocol_tlv_alpn.
|
Some servers might emit a Content-Range header on 200 responses, and this
does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range
header has no meaning for status codes other than 206 and 416. Previously
this resulted in duplicate Content-Range headers in nginx responses handled
by the range filter. Fix is to clear pre-existing headers.
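
The idea reduces to dropping any upstream-provided header before the range
filter adds its own; a hedged sketch (field names follow r->headers_out,
but this is illustrative rather than the actual patch):

    /* in the range filter, before emitting a 206/416 Content-Range */
    if (r->headers_out.content_range) {
        r->headers_out.content_range->hash = 0;    /* mark as deleted */
        r->headers_out.content_range = NULL;
    }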
|
To ensure optimal use of memory, SSL contexts for proxying are now
inherited from previous levels as long as relevant proxy_ssl_* directives
are not redefined.

Further, when no proxy_ssl_* directives are redefined in a server block,
plcf->upstream.ssl is now preserved in the "http" section configuration,
so that it is inherited by all servers.

Similar changes were made in the uwsgi, grpc, and stream proxy modules.
|
Both the "count" and "duration" variables are 32-bit, so their product might
potentially overflow. The product is used to reduce the 64-bit start_time
variable, and with a very large start_time this can result in incorrect
seeking.

Found by Coverity (CID 1499904).
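
The classic fix pattern, shown standalone with illustrative values (not
the module's actual expressions): widen one operand before multiplying,
so the product is computed in 64 bits:

    #include <stdint.h>

    static uint64_t
    adjust(uint64_t start_time, uint32_t count, uint32_t duration)
    {
        /* wrong: count * duration is computed in 32 bits and wraps,
           e.g. 200000 * 90000 = 18e9, far above UINT32_MAX:

           start_time -= count * duration;                          */

        /* right: widen one operand first */
        start_time -= (uint64_t) count * duration;

        return start_time;
    }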
|
Now, if the directive is given an empty string, such a configuration cancels
loading of certificates, in particular, if they would be otherwise inherited
from the previous level. This restores the previous behaviour, before
variables support in certificates was introduced (3ab8e1e2f0f7).
|
Previously, if caching was disabled due to Expires in the past, nginx
failed to cache the response even if it was cacheable as per the
subsequently parsed Cache-Control header (ticket #964).

Similarly, if caching was disabled due to Expires in the past,
"Cache-Control: no-cache" or "Cache-Control: max-age=0", caching was not
used if the response was cacheable as per the subsequently parsed
X-Accel-Expires header.

Fix is to avoid disabling caching immediately after parsing Expires in
the past or Cache-Control, but rather set flags which are later checked by
ngx_http_upstream_process_headers() (and cleared by "Cache-Control: max-age"
and X-Accel-Expires).

Additionally, X-Accel-Expires now does not prevent parsing of cache control
extensions, notably stale-while-revalidate and stale-if-error. This
ensures that the order of the X-Accel-Expires and Cache-Control headers is
not important.

Prodded by Vadim Fedorenko and Yugo Horie.
|
If a module adds multiple WWW-Authenticate headers (ticket #485) to the
response, linked in r->headers_out.www_authenticate, all headers are now
cleared if another module later allows access.

This change is a nop for standard modules, since the only access module
which can add multiple WWW-Authenticate headers is the auth request module,
and it is checked after other standard access modules. It might, however,
affect some third-party access modules.

Note that if a third-party module adds a single WWW-Authenticate header
and has not yet been modified to set the header's next pointer to NULL, an
attempt to clear such a header with this change will result in a
segmentation fault.
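
Roughly, clearing the whole list looks like the sketch below (simplified
from the actual filter logic). The loop relies on every element's next
pointer being either a valid element or NULL, which is exactly why an
uninitialized next pointer in a third-party module is fatal:

    ngx_table_elt_t  *h;

    for (h = r->headers_out.www_authenticate; h; h = h->next) {
        h->hash = 0;    /* a zero hash marks the header as deleted */
    }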
|
When using auth_request with an upstream server which returns 401
(Unauthorized), multiple WWW-Authenticate headers from the upstream server
response are now properly copied to the response.
|
When using proxy_intercept_errors and an error page for error 401
(Unauthorized), multiple WWW-Authenticate headers from the upstream server
response are now properly copied to the response.
|
Previously, only the last header value was used when caching.
|
Most of the known duplicate upstream response headers are now ignored
with a warning.

If the syntax permits multiple headers, these are now properly linked into
lists, notably Vary and WWW-Authenticate. This makes it possible
to further handle such lists where it makes sense.
|
With this change, duplicate Content-Length and Transfer-Encoding headers
are now rejected. Further, responses with invalid Content-Length or
Transfer-Encoding headers are now rejected, as well as responses with both
Content-Length and Transfer-Encoding.
|
The h->next pointer is now properly set to NULL in all cases where known
output headers are added.

Note that there are third-party modules which might not do this, and it
might be risky to rely on this for arbitrary headers.
|
The u->headers_in.accept_ranges field is not used anywhere and hence removed.
|
Since the introduction of offset handling in ngx_http_upstream_copy_header_line()
in revision 573:58475592100c, the ngx_http_upstream_copy_content_encoding()
function is no longer needed, as its behaviour is exactly equivalent to
ngx_http_upstream_copy_header_line() with an appropriate offset. As such,
the ngx_http_upstream_copy_content_encoding() function was removed.

Further, the u->headers_in.content_encoding field is not used anywhere,
so it was removed as well.

Additionally, Content-Encoding handling no longer depends on NGX_HTTP_GZIP,
as it can be used even without any gzip handling compiled in (for example,
in the charset filter).
|
Now that all known input headers are linked lists, they are handled
identically. In particular, this makes it possible to access properly
combined values of headers not specifically handled previously, such
as "Via" or "Connection".
|
The ngx_http_process_multi_header_lines() function is removed, as it is
exactly equivalent to ngx_http_process_header_line(). Similarly,
ngx_http_variable_header() is used instead of ngx_http_variable_headers().
|
Multi headers now use linked lists instead of arrays. Notably,
the following fields were changed: r->headers_in.cookies (renamed
to r->headers_in.cookie), r->headers_in.x_forwarded_for,
r->headers_out.cache_control, r->headers_out.link,
u->headers_in.cache_control, and u->headers_in.cookies (renamed to
u->headers_in.set_cookie). The renames bring the field names in line
with the corresponding header names.

The ngx_http_parse_multi_header_lines() and ngx_http_parse_set_cookie_lines()
functions were changed accordingly.

With this change, multi headers are now essentially equivalent to normal
headers, and following changes will further make them equivalent.
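
In practice this changes iteration from indexing an array of header
pointers to walking a list via the next field; a minimal before/after
sketch using x_forwarded_for (illustrative):

    ngx_table_elt_t  *h;

    /* before: an ngx_array_t of ngx_table_elt_t pointers
     *
     *   ngx_table_elt_t **ph = r->headers_in.x_forwarded_for.elts;
     *
     *   for (i = 0; i < r->headers_in.x_forwarded_for.nelts; i++) {
     *       ... ph[i]->value ...
     *   }
     */

    /* after: a linked list, one element per header line */
    for (h = r->headers_in.x_forwarded_for; h; h = h->next) {
        /* h->value holds one X-Forwarded-For value */
    }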
|
Previously, $http_*, $sent_http_*, $sent_trailer_*, $upstream_http_*,
and $upstream_trailer_* variables returned only the first header (with
a few specially handled exceptions: $http_cookie, $http_x_forwarded_for,
$sent_http_cache_control, $sent_http_link).

With this change, all headers are returned, combined together. For
example, the $http_foo variable will be "a, b" if there are "Foo: a" and
"Foo: b" headers in the request.

Note that $upstream_http_set_cookie will also return all "Set-Cookie"
headers (ticket #1843), though this might not be what one wants, since
the "Set-Cookie" header does not follow the list syntax (see RFC 7230,
section 3.2.2).
|
The uwsgi specification states that "The uwsgi block vars represent a
dictionary/hash". This implies that no duplicate headers are expected.

Further, provided headers are expected to follow the CGI specification,
which also requires combining headers (RFC 3875, section "4.1.18.
Protocol-Specific Meta-Variables"): "If multiple header fields with
the same field-name are received then the server MUST rewrite them
as a single value having the same semantics".
|
The SCGI specification explicitly forbids headers with duplicate names
(section "3. Request Format"): "Duplicate names are not allowed in
the headers".

Further, provided headers are expected to follow the CGI specification,
which also requires combining headers (RFC 3875, section "4.1.18.
Protocol-Specific Meta-Variables"): "If multiple header fields with
the same field-name are received then the server MUST rewrite them
as a single value having the same semantics".
|
A FastCGI responder is expected to receive CGI/1.1 environment variables
in the parameters (see section "6.2 Responder" of the FastCGI specification).
Obviously enough, there cannot be multiple environment variables with
the same name.

Further, the CGI specification (RFC 3875, section "4.1.18. Protocol-Specific
Meta-Variables") explicitly requires combining headers: "If multiple
header fields with the same field-name are received then the server MUST
rewrite them as a single value having the same semantics".
|
Previously, the r->headers_in.connection pointer was never set despite
being present in ngx_http_headers_in, resulting in an incorrect value
being returned by $r->header_in("Connection") in embedded perl.
|
With large http2_max_concurrent_streams or http2_max_concurrent_pushes,
more than 255 ngx_http_v2_node_t structures might be allocated, eventually
leading to h2c->closed_nodes overflow when closing corresponding streams.
This will in turn result in additional allocations in
ngx_http_v2_get_node_by_id().

While mostly harmless, it can result in excessive memory usage by an
HTTP/2 connection, notably in configurations with many keepalive_requests
allowed.

Fix is to use ngx_uint_t for h2c->closed_nodes instead of unsigned:8.
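
The overflow itself is ordinary bitfield wrap-around, as this standalone
snippet demonstrates (illustrative, not nginx code):

    #include <stdio.h>

    struct counters {
        unsigned  closed_nodes:8;   /* can hold 0..255 only */
    };

    int
    main(void)
    {
        struct counters  c = { 255 };

        c.closed_nodes++;           /* wraps back to 0 */
        printf("%u\n", c.closed_nodes);
        return 0;
    }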
|
Response headers can be buffered in the SSL buffer, but the stream's fake
connection buffered flag did not reflect this, so any attempts to flush
the buffer without sending additional data were stopped by the write filter.

It does not seem to be possible to reflect this in fc->buffered though, as
we never know whether the main connection's c->buffered corresponds to the
particular stream or not. As such, fc->buffered might prevent request
finalization due to sending data on some other stream.

Fix is to implement handling of flush buffers when the c->need_flush_buf
flag is set, similarly to the existing last buffer handling. The same
flag is now used for UDP sockets in the stream module instead of explicitly
checking c->type.
|
During configuration reload two cache managers might exist for a short
time. If both tried to delete the same cache node, the "ignore long locked
inactive cache entry" alert appeared in logs. Additionally,
ngx_http_file_cache_forced_expire() might also be called by worker
processes, with similar results.

Fix is to ignore cache nodes being deleted, similarly to how it is
done in ngx_http_file_cache_expire() since 3755:76e3a93821b1. This
was somehow missed in 7002:ab199f0eb8e8, when ignoring long locked
cache entries was introduced in ngx_http_file_cache_forced_expire().
|
When a worker process is shutting down, keepalive is not used: this is
checked before the ngx_http_set_keepalive() call in
ngx_http_finalize_connection(). Yet the "Connection: keep-alive" header was
still sent, even if we know that the worker process is shutting down,
potentially resulting in additional requests being sent to the connection
which is going to be closed anyway.

While clients are expected to be able to handle asynchronous close events
(see ticket #1022), it is certainly possible to send the "Connection: close"
header instead, informing the client that the connection is going to be
closed and potentially saving some unneeded work.

With this change, we additionally check for worker process shutdown just
before sending response headers, and disable keepalive accordingly.
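
Conceptually the check is a one-liner in the header filter; a hedged
sketch using nginx's existing shutdown flags (the exact placement and
condition are illustrative):

    /* just before response headers are emitted */
    if (r->keepalive && (ngx_terminate || ngx_exiting)) {
        r->keepalive = 0;    /* emit "Connection: close" instead */
    }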
|
Starting with FreeBSD 11, there is no need to use AIO operations to preload
data into cache for sendfile(SF_NODISKIO) to work. Instead, sendfile()
handles non-blocking loading of data from disk by itself. It still can,
however, return EBUSY if a page is already being loaded (for example, by a
different process). If this happens, we now post an event for the next
event loop iteration, so sendfile() is retried "after a short period", as
the manpage recommends.

The limit on the number of EBUSY errors tolerated without any progress is
preserved, but now it does not result in an alert, since on an idle system
an event loop iteration might be very short and EBUSY can happen many times
in a row. Instead, SF_NODISKIO is simply disabled for one call once the
limit is reached.

With this change, sendfile(SF_NODISKIO) is now used automatically as long
as sendfile() is enabled, and no longer requires "aio on;".
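
A hedged sketch of the retry logic around FreeBSD's sendfile() (the EBUSY
counter and its limit are illustrative names; the real logic lives in the
FreeBSD sendfile chain code):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <errno.h>

    #define EBUSY_LIMIT  2    /* illustrative: futile EBUSYs tolerated */

    static ssize_t
    try_sendfile(int file_fd, int sock_fd, off_t offset, size_t size,
                 unsigned *ebusy)
    {
        off_t  sent = 0;
        int    flags, rc;

        /* once the limit is reached, drop SF_NODISKIO for this one
           call so the data is loaded synchronously */
        flags = (*ebusy < EBUSY_LIMIT) ? SF_NODISKIO : 0;

        rc = sendfile(file_fd, sock_fd, offset, size, NULL, &sent, flags);

        if (rc == -1 && errno == EBUSY && sent == 0) {
            (*ebusy)++;        /* no progress: post an event and retry
                                  on the next event loop iteration */
            return -1;
        }

        *ebusy = 0;            /* progress was made */
        return (ssize_t) sent;
    }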