Compare commits

...

46 Commits

Author SHA1 Message Date
Maxim Dounin 00a034ce37 release-1.10.3 tag 2017-01-31 18:01:10 +03:00
Maxim Dounin 05dbd1fe03 nginx-1.10.3-RELEASE 2017-01-31 18:01:10 +03:00
Maxim Dounin 9fe5995705 Updated OpenSSL used for win32 builds. 2017-01-27 19:06:35 +03:00
Maxim Dounin 03e74aacb3 Updated zlib and PCRE used for win32 builds. 2017-01-24 16:41:29 +03:00
Maxim Dounin 33385cf1e4 Upstream: fixed cache corruption and socket leaks with aio_write.
The ngx_event_pipe() function wasn't called on write events with
wev->delayed set.  As a result, threaded writing results weren't
properly collected in ngx_event_pipe_write_to_downstream() when a
write event was triggered for a completed write.

Further, this wasn't detected, as p->aio was reset by a thread completion
handler, and results were later collected in ngx_event_pipe_read_upstream()
instead of scheduling a new write of additional data.  If this happened
on the last read from an upstream, the last part of the response was never
written to the cache file.

Similar problems might also happen in case of timeouts when writing to
the client, as this also results in ngx_event_pipe() not being called on
write events.  In this scenario socket leaks were observed.

Fix is to check if p->writing is set in ngx_event_pipe_read_upstream(), and
therefore collect results of previous write operations in case of read events
as well, similar to how we do so in ngx_event_pipe_write_to_downstream().
This is enough to fix the wev->delayed case.  Additionally, we now call
ngx_event_pipe() from ngx_http_upstream_process_request() if there are
uncollected write operations (p->writing and !p->aio).  This also fixes
the wev->timedout case.
2017-01-20 21:14:19 +03:00
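In condensed form, the fix (full hunks in ngx_event_pipe_read_upstream()
and ngx_http_upstream_process_request() below) amounts to:

    /* ngx_event_pipe_read_upstream(): collect results of previous
       thread writes on read events as well */

    if (p->writing) {
        rc = ngx_event_pipe_write_chain_to_temp_file(p);

        if (rc != NGX_OK) {
            return rc;
        }
    }

    /* ngx_http_upstream_process_request(): make sure to call
       ngx_event_pipe() if there is an incomplete aio write */

    if (p->writing && !p->aio) {
        if (ngx_event_pipe(p, 1) == NGX_ABORT) {
            ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
            return;
        }
    }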
Maxim Dounin 0e9f489735 Fixed trailer construction with limit on FreeBSD and macOS.
The ngx_chain_coalesce_file() function may produce more bytes to send than
requested in the limit passed, as it aligns the last file position
to send to a memory page boundary.  As a result, (limit - send) may become
negative.  This resulted in a big positive number when converted to size_t
while calling ngx_output_chain_to_iovec().

Another part of the problem is in ngx_chain_coalesce_file(): it changes cl
to the next chain link even if the current buffer is only partially sent
due to limit.

Therefore, if a file buffer was not expected to be fully sent due to the
limit, and was followed by a memory buffer, nginx called sendfile() with a
part of the file buffer, and the memory buffer in the trailer.  If there was
enough room in the socket buffer, this resulted in a part of the file buffer
being skipped, and the corresponding part of the memory buffer sent instead.

The bug was introduced in 8e903522c17a (1.7.8).  Configurations affected
are ones using limits, that is, limit_rate and/or sendfile_max_chunk, and
memory buffers after file ones (this may happen when using subrequests
or when proxying with disk buffering).

Fix is to explicitly check if (send < limit) before constructing trailer
with ngx_output_chain_to_iovec().  Additionally, ngx_chain_coalesce_file()
was modified to preserve unfinished file buffers in cl.
2017-01-20 21:12:48 +03:00
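Condensed from the ngx_freebsd_sendfile_chain() hunk below (the Darwin
variant gets the same check), the fix is:

    send += file_size;

    if (send < limit) {
        /* create the trailer iovec and coalesce the neighbouring bufs */
        cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send,
                                       c->log);
        if (cl == NGX_CHAIN_ERROR) {
            return NGX_CHAIN_ERROR;
        }

        send += trailer.size;

    } else {
        trailer.count = 0;
    }

ngx_chain_coalesce_file() now additionally stops at a partially sent
file buffer ("total += size; break;") so that cl keeps pointing at it.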
Valentin Bartenev 1cce6c190a HTTP/2: prevented creating temp files for requests without body.
The problem was introduced by 52bd8cc17f34.
2016-12-10 13:23:38 +03:00
Valentin Bartenev 0766c0922c HTTP/2: fixed posted streams handling.
A bug was introduced by 82efcedb310b that could lead to timing out of
responses or a segmentation fault when accept_mutex was enabled.

The output queue in HTTP/2 can contain frames from different streams.
When the queue is sent, all related write handlers need to be called.
In order to do so, the streams were added to the h2c->posted queue
after handling sent frames.  Then this queue was processed in
ngx_http_v2_write_handler().

If accept_mutex is enabled, the event's "ready" flag is set, but its
handler is not called immediately.  Instead, the event is added to
the ngx_posted_events queue.  At the same time, this queue can contain
events from upstream connections.  Such events can result in the output
queue being sent before ngx_http_v2_write_handler() is triggered.  And
by the time ngx_http_v2_write_handler() is called, the output queue
can already be empty, with some streams added to h2c->posted.

But after 82efcedb310b, these streams weren't processed if all frames
had already been sent and the output queue was empty.  This might lead
to a situation where a number of streams got stuck in the h2c->posted
queue for a long time.  Eventually these streams might get closed by
the send timeout.

In the worst case this might also lead to a segmentation fault, if an
already freed stream was left in the h2c->posted queue.  This could
happen if one of the streams was terminated but wasn't closed, due to
a HEADERS frame or a partially sent DATA frame left in the output
queue.  In that case, the ngx_http_v2_filter_cleanup() handler removed
the stream from the h2c->waiting or h2c->posted queue at the termination
stage, before the frame had been sent, and the stream was added to the
h2c->posted queue again after the frame was sent.

In order to fix all these problems and simplify the code, write
events of fake stream connections are now added to ngx_posted_events
instead of using a custom h2c->posted queue.
2016-11-28 20:58:14 +03:00
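The resulting handling is visible in the ngx_http_v2_handle_stream()
hunk below; instead of queueing the stream to h2c->posted, the fake
connection's write event is simply posted:

    wev = fc->write;

    wev->active = 0;
    wev->ready = 1;

    if (!fc->error && wev->delayed) {
        return;
    }

    ngx_post_event(wev, &ngx_posted_events);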
Valentin Bartenev 45c8e9a64d HTTP/2: fixed saving preread buffer to temp file (ticket #1143).
Previously, a request body bigger than "client_body_buffer_size" wasn't written
into a temporary file if it had been pre-read entirely.  The preread buffer
is freed after processing, so subsequent use of it might result in sending
a corrupted body or cause a segfault.
2016-11-28 19:19:21 +03:00
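The fix (see the ngx_http_v2_read_request_body() hunk below) forces the
preread data to be written out before the buffer is reused:

    if (stream->preread) {
        /* enforce writing preread buffer to file */
        r->request_body_in_file_only = 1;
    }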
Valentin Bartenev 825c0fbd2c HTTP/2: graceful shutdown of active connections (closes #1106).
Previously, while shutting down gracefully, the HTTP/2 connections were
closed in transition to the idle state after all active streams had been
processed.  That might never happen if the client continued opening new
streams.

Now, nginx sends GOAWAY to all HTTP/2 connections and ignores further
attempts to open new streams.  A worker process will quit as soon as
processing of already opened streams is finished.
2016-10-20 16:15:03 +03:00
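In condensed form (full hunks in ngx_http_v2_read_handler() and
ngx_http_v2_state_headers() below):

    if (c->close) {
        c->close = 0;
        h2c->goaway = 1;

        if (ngx_http_v2_send_goaway(h2c, NGX_HTTP_V2_NO_ERROR) == NGX_ERROR) {
            ngx_http_v2_finalize_connection(h2c, 0);
            return;
        }

        if (ngx_http_v2_send_output_queue(h2c) == NGX_ERROR) {
            ngx_http_v2_finalize_connection(h2c, 0);
            return;
        }

        h2c->blocked = 0;
        return;
    }

    /* later, while h2c->goaway is set, new streams are refused */

    if (h2c->goaway) {
        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
                       "skipping http2 HEADERS frame");
        return ngx_http_v2_state_skip(h2c, pos, end);
    }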
Maxim Dounin 15a95ba306 Core: sockaddr lengths now respected by ngx_cmp_sockaddr().
Linux can return AF_UNIX sockaddrs with partially filled sun_path,
resulting in spurious comparison failures and failed binary upgrades.
Added proper checking of the lengths provided.

Reported by Jan Seda,
http://mailman.nginx.org/pipermail/nginx-devel/2016-September/008832.html.
2016-10-10 16:15:41 +03:00
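Condensed from the ngx_cmp_sockaddr() hunk below, the comparison now
derives the path length from the shorter of the two provided lengths:

    if (slen1 < slen2) {
        len = slen1 - offsetof(struct sockaddr_un, sun_path);
    } else {
        len = slen2 - offsetof(struct sockaddr_un, sun_path);
    }

    if (len > sizeof(saun1->sun_path)) {
        len = sizeof(saun1->sun_path);
    }

    if (ngx_memcmp(&saun1->sun_path, &saun2->sun_path, len) != 0) {
        return NGX_DECLINED;
    }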
Roman Arutyunyan cf35a0cfb1 Addition filter: set last_in_chain flag when clearing last_buf.
When the last_buf flag is cleared for add_after_body to append more data from a
subrequest, other filters may still have buffered data, which should be flushed
at this point.  For example, the sub_filter may have a partial match buffered,
which will only be flushed after the subrequest is done, ending up with
interleaved data in the output.

Setting last_in_chain instead of last_buf flushes the data and fixes the order
of output buffers.
2016-10-03 21:03:27 +03:00
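The change itself is a one-flag addition in
ngx_http_addition_body_filter() (full hunk below):

    for (cl = in; cl; cl = cl->next) {
        if (cl->buf->last_buf) {
            cl->buf->last_buf = 0;
            cl->buf->last_in_chain = 1;  /* flush other filters' buffers */
            cl->buf->sync = 1;
            last = 1;
        }
    }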
Maxim Dounin 2f5d9e205b Version bump. 2017-01-31 16:15:31 +03:00
Maxim Dounin 947a477636 release-1.10.2 tag 2016-10-18 18:03:13 +03:00
Maxim Dounin 08d0ab04c2 nginx-1.10.2-RELEASE 2016-10-18 18:03:12 +03:00
Maxim Dounin 4bf20e7512 SSL: default DH parameters compatible with OpenSSL 1.1.0.
This is a direct commit to stable as there is no corresponding code
in mainline, default DH parameters were removed in 1aa9650a8154.
2016-10-18 17:25:38 +03:00
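In OpenSSL 1.1.0 the DH structure is opaque, so the built-in parameters
are now installed with DH_set0_pqg() (condensed from the
ngx_ssl_dhparam() hunk below):

    BIGNUM  *p, *g;

    p = BN_bin2bn(dh1024_p, sizeof(dh1024_p), NULL);
    g = BN_bin2bn(dh1024_g, sizeof(dh1024_g), NULL);

    if (p == NULL || g == NULL || !DH_set0_pqg(dh, p, NULL, g)) {
        ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "BN_bin2bn() failed");
        DH_free(dh);
        BN_free(p);
        BN_free(g);
        return NGX_ERROR;
    }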
Maxim Dounin 09acff05dd Updated OpenSSL used for win32 builds. 2016-10-11 16:52:48 +03:00
Maxim Dounin 8015fc1c8a Event pipe: do not set file's thread_handler if not needed.
This fixes a problem with aio threads and sendfile with aio_write switched
off, as observed with range requests after fc72784b1f52 (1.9.13).  Potential
problems with sendfile in threads were previously described in 9fd738b85fad,
and this seems to be one of them.

The problem occurred as the file's thread_handler was set to NULL by the
event pipe code after a sendfile thread task was scheduled.  As a result,
no sendfile completion code was executed, and the same buffer was
additionally sent using non-threaded sendfile.  The fix is to avoid
modifying the file's thread_handler if aio_write is switched off.

Note that with "aio_write on" it is still possible that sendfile will use
the thread_handler set by the event pipe.  This is believed to be safe
though, as the handlers used are compatible.
2016-09-01 20:05:23 +03:00
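Condensed from the ngx_event_pipe_write_chain_to_temp_file() hunk below,
the file's thread fields are now set only when the pipe actually has a
thread handler:

    if (p->thread_handler) {
        p->temp_file->thread_write = 1;
        p->temp_file->file.thread_task = p->thread_task;
        p->temp_file->file.thread_handler = p->thread_handler;
        p->temp_file->file.thread_ctx = p->thread_ctx;
    }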
Sergey Kandaurov d8419d7728 SSL: adapted session ticket handling to OpenSSL 1.1.0.
Return 1 in the SSL_CTX_set_tlsext_ticket_key_cb() callback function
to indicate that a new session ticket is created, as per documentation.
Until 1.1.0, OpenSSL didn't make a distinction between non-negative
return values.

See https://git.openssl.org/?p=openssl.git;a=commitdiff;h=5c753de for details.
2016-08-22 18:53:21 +03:00
Sergey Kandaurov bce014aa05 SSL: guarded SSL_R_NO_CIPHERS_PASSED not present in OpenSSL 1.1.0.
It was removed in OpenSSL 1.1.0 Beta 3 (pre-release 6).  It had not been
used since OpenSSL 1.0.1n and 1.0.2b.
2016-08-08 13:44:49 +03:00
Valentin Bartenev 264b89563e HTTP/2: flushing of the SSL buffer in transition to the idle state.
It fixes a potential connection leak if some unsent data was left in the SSL
buffer.  In particular, that could happen when a client canceled the stream
after the HEADERS frame had already been created.  In this case no other
frames might be produced, and the HEADERS frame alone didn't flush the buffer.
2016-07-19 20:34:17 +03:00
Valentin Bartenev 52340fac51 HTTP/2: refactored ngx_http_v2_send_output_queue().
Now it returns NGX_AGAIN if there's still data to be sent.
2016-07-19 20:34:02 +03:00
Valentin Bartenev 5a66aa321b HTTP/2: fixed send timer handling.
Checking the return value of c->send_chain() isn't sufficient since data
can be left in the SSL buffer.  Now the wev->ready flag is used
instead.

In particular, this fixed a connection leak in cases when all streams were
closed, but there was still some data to be sent in the SSL buffer and the
client had forgotten about the connection.
2016-07-19 20:31:09 +03:00
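The timer is now driven by wev->ready at the end of
ngx_http_v2_send_output_queue() (full hunk below):

    if (!wev->ready) {
        ngx_add_timer(wev, clcf->send_timeout);
        return NGX_AGAIN;
    }

    if (wev->timer_set) {
        ngx_del_timer(wev);
    }

    return NGX_OK;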
Valentin Bartenev ae4580e28e HTTP/2: avoid sending output queue if there's nothing to send.
In particular, this fixes alerts on OS X and NetBSD systems when HTTP/2
is configured over plain TCP sockets.

On these systems calling writev() with no data leads to EINVAL errors
being logged as "writev() failed (22: Invalid argument) while processing
HTTP/2 connection".
2016-07-19 20:30:21 +03:00
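The check added to ngx_http_v2_write_handler() (full hunk below) returns
early when the output queue is empty and nothing is buffered:

    if (h2c->last_out == NULL && !c->buffered) {

        if (wev->timer_set) {
            ngx_del_timer(wev);
        }

        ngx_http_v2_handle_connection(h2c);
        return;
    }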
Valentin Bartenev 7d1859c37b HTTP/2: always handle streams in error state.
Previously, a stream could be closed by timeout if it was canceled
while its send window was exhausted.
2016-07-19 20:22:44 +03:00
Valentin Bartenev ddaedb6c20 HTTP/2: prevented output of the HEADERS frame for canceled streams.
It's useless to generate HEADERS if the stream has been canceled already.
2016-07-19 20:22:44 +03:00
Valentin Bartenev a010c05355 HTTP/2: always send GOAWAY while worker is shutting down.
Previously, if the worker process exited, GOAWAY was sent to connections in
idle state, but connections with active streams were closed without GOAWAY.
2016-07-19 20:22:44 +03:00
Maxim Dounin b2648441a3 Updated PCRE used for win32 builds. 2016-07-05 18:30:56 +03:00
Roman Arutyunyan e539d7b926 Sub filter: eliminate unnecessary buffering.
Previously, when a buffer was processed by the sub filter, its final bytes
could be buffered by the filter even if they didn't match any pattern.
This happened because the Boyer-Moore algorithm, employed by the sub filter
since b9447fc457b4 (1.9.4), matches the last characters of patterns prior to
checking other characters.  If the last character is out of scope, the initial
bytes of a potential match are buffered until the last character is available.

Now, after receiving a flush or recycled buffer, the filter performs
additional checks to reduce the number of buffered bytes.  The potential match
is checked against the initial parts of all patterns.  Non-matching bytes are
not buffered.  This improves processing of a chunked response from upstream
by sending entire chunks without buffering unless a partial match is found
at the end of a chunk.
2016-07-02 15:59:53 +03:00
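The additional pass (condensed from the ngx_http_sub_parse() hunk below)
re-checks every remaining offset against the initial parts of all
patterns once a flush or recycled buffer is seen:

    if (flush) {
        for ( ;; ) {
            start = offset - (ngx_int_t) tables->min_match_len + 1;

            if (start >= end) {
                break;
            }

            for (i = 0; i < ctx->matches->nelts; i++) {
                m = &match[i].match;

                if (ngx_http_sub_match(ctx, start, m) == NGX_AGAIN) {
                    goto again;
                }
            }

            offset++;
        }
    }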
Roman Arutyunyan 8886b48a05 Sub filter: introduced the ngx_http_sub_match() function.
No functional changes.
2016-07-02 15:59:52 +03:00
Valentin Bartenev 5d7c5f6b8d HTTP/2: fixed the "http request count is zero" alert.
When the stream is terminated, the HEADERS frame can still be waiting in the
output queue.  This frame can't be removed and must be sent to the client
anyway, since HTTP/2 uses stateful compression for headers.  So in order to
postpone closing and freeing memory of such a stream, a special close stream
handler is set on the write event.  After the HEADERS frame is sent, the
write event is called and the stream is finally closed.

Some events, like receiving a RST_STREAM, can trigger the read handler of
such a stream in the closing state and cause unexpected processing that can
result in another attempt to finalize the request.  To prevent this, the
read handler is now set to ngx_http_empty_handler.

Thanks to Amazon.
2016-06-16 20:55:11 +03:00
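Condensed from the ngx_http_v2_close_stream() hunk below:

    if (stream->queued) {
        fc->write->handler = ngx_http_v2_close_stream_handler;
        fc->read->handler = ngx_http_empty_handler;
        return;
    }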
Valentin Bartenev c2897b0028 HTTP/2: avoid adding Content-Length for requests without body.
There is no reason to add the "Content-Length: 0" header to a proxied request
without a body if the header isn't present in the original request.

Thanks to Amazon.
2016-06-16 20:55:11 +03:00
Valentin Bartenev 8b306c84a6 HTTP/2: prevented double termination of a stream.
According to RFC 7540, an endpoint should not send more than one RST_STREAM
frame for any stream.

Also, all DATA frames are now skipped during termination.
2016-06-16 20:55:11 +03:00
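The guard added in ngx_http_v2_terminate_stream() (full hunk below):

    if (stream->rst_sent) {
        return NGX_OK;
    }

    if (ngx_http_v2_send_rst_stream(h2c, stream->node->id, status)
        == NGX_ERROR)
    {
        return NGX_ERROR;
    }

    stream->rst_sent = 1;
    stream->skip_data = 1;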
Valentin Bartenev a01ccaa262 HTTP/2: fixed a segfault while processing unbuffered upload.
The ngx_http_v2_finalize_connection() function closes the current stream, but
that is an invalid operation while processing an unbuffered upload.  This
results in access to already freed memory, since the upstream module sets a
cleanup handler that also finalizes the request.
2016-06-16 20:55:11 +03:00
Valentin Bartenev 8d58d6a209 HTTP/2: unbreak build on MSVC. 2016-05-24 21:54:32 +03:00
Valentin Bartenev 1aa0c26ed7 HTTP/2: implemented preread buffer for request body (closes #959).
Previously, the stream's window was kept at zero in order to prevent a client
from sending the request body before it was requested (see 887cca40ba6a for
details).  Until such an initial window was acknowledged, all requests with
data were rejected (see 0aa07850922f for details).

That approach revealed a number of problems:

 1. Some clients (notably MS IE/Edge, Safari, iOS applications) show an error
    or even crash if a stream is rejected;

 2. This requires at least one RTT for every request with a body before the
    client receives the window update and is able to send data.

To overcome these problems the new directive "http2_body_preread_size" is
introduced.  It sets the initial window and configures a special per-stream
preread buffer that is used to save all incoming data before the body is
requested and processed.

If the directive's value is lower than the default initial window (65535),
then, as previously, all streams with data will be rejected until the new
window is acknowledged.  Otherwise, no special processing is used and all
requests with data are welcome right from the connection start.

The default value is chosen to be 64k, which is bigger than the default
initial window.  Setting it to zero fully restores the previous
behavior.
2016-05-24 17:37:52 +03:00
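As the ngx_http_v2_module hunks below show, the directive is allowed at
the http and server levels and is capped at NGX_HTTP_V2_MAX_WINDOW.  For
example, "http2_body_preread_size 128k;" enlarges the buffer beyond the
64k default, while "http2_body_preread_size 0;" restores the previous
reject-until-acknowledged behavior.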
Valentin Bartenev bceba50d83 HTTP/2: the "421 Misdirected Request" response (closes #848).
Since 4fbef397c753, nginx has rejected with a 400 error any attempt to
request a different host over the same connection, if the relevant
virtual server requires verification of a client certificate.

While requesting a host other than the one negotiated isn't legal
in HTTP/1.x, the HTTP/2 specification explicitly permits such requests
for connection reuse and has introduced the special response code 421.

According to RFC 7540, Section 9.1.2, this code can be sent by a server
that is not configured to produce responses for the combination of
scheme and authority that are included in the request URI, and the
client may retry the request over a different connection.

Now this code is used for requests that aren't authorized in the current
connection.  After receiving the 421 response a client will be able
to open a new connection, provide the required certificate, and retry
the request.

Unfortunately, not all clients are currently able to handle it well.
Notably, Chrome just shows an error, while at least the latest version
of Firefox retries the request over a new connection.
2016-05-20 18:41:17 +03:00
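The behavioral change itself is a one-line switch in
ngx_http_set_virtual_server() (full hunk below), plus the new status
line and error page:

    ngx_http_finalize_request(r, NGX_HTTP_MISDIRECTED_REQUEST);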
Maxim Dounin bf9fb349cc Version bump. 2016-10-18 17:19:51 +03:00
Maxim Dounin 013bbc7229 release-1.10.1 tag 2016-05-31 16:47:01 +03:00
Maxim Dounin 3c32d7d704 nginx-1.10.1-RELEASE 2016-05-31 16:47:01 +03:00
Maxim Dounin f3918385a9 Core: skip special buffers on writing (ticket #981).
A special last buffer with cl->buf->pos set to NULL can be present in
a chain when writing a request body if chunked encoding was used.  This
resulted in a NULL pointer dereference if it happened to be the only
buffer left after a do...while loop iteration in ngx_write_chain_to_file().

The problem originally appeared in nginx 1.3.9 with chunked encoding
support.  Additionally, rev. 3832b608dc8d (nginx 1.9.13) changed the
minimum number of buffers to trigger this from IOV_MAX (typically 1024)
to NGX_IOVS_PREALLOCATE (typically 64).

Fix is to skip such buffers in ngx_chain_to_iovec(), much like it is
done in other places.
2016-05-31 05:13:30 +03:00
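Condensed from the ngx_chain_to_iovec() hunk below:

    for ( /* void */ ; cl; cl = cl->next) {

        if (ngx_buf_special(cl->buf)) {
            /* e.g. the special last buffer of a chunked request body */
            continue;
        }

        size = cl->buf->last - cl->buf->pos;
        /* ... build iovec entries as before ... */
    }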
Maxim Dounin 62c778b7ae Updated OpenSSL used for win32 builds. 2016-05-24 17:44:01 +03:00
Maxim Dounin 415727f942 Version bump. 2016-05-31 02:21:38 +03:00
Maxim Dounin 1c1ca5cf26 release-1.10.0 tag 2016-04-26 16:31:18 +03:00
Maxim Dounin d88e9a5471 nginx-1.10.0-RELEASE 2016-04-26 16:31:18 +03:00
Maxim Dounin 5084075df3 Stable branch. 2016-04-26 16:30:30 +03:00
22 changed files with 743 additions and 178 deletions

View File

@ -5,6 +5,250 @@
<change_log title="nginx">
<changes ver="1.10.3" date="31.01.2017">
<change type="bugfix">
<para lang="ru">
в директиве add_after_body при использовании совместно с директивой sub_filter.
</para>
<para lang="en">
in the "add_after_body" directive when used with the "sub_filter" directive.
</para>
</change>
<change type="bugfix">
<para lang="ru">
unix domain listen-сокеты могли не наследоваться
при обновлении исполняемого файла на Linux.
</para>
<para lang="en">
unix domain listen sockets might not be inherited
during binary upgrade on Linux.
</para>
</change>
<change type="bugfix">
<para lang="ru">
плавное завершение старых рабочих процессов могло занимать бесконечное время
при использовании HTTP/2.
</para>
<para lang="en">
graceful shutdown of old worker processes might require infinite time
when using HTTP/2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании HTTP/2 и директив limit_req или auth_request
тело запроса могло быть повреждено;
ошибка появилась в 1.10.2.
</para>
<para lang="en">
when using HTTP/2 and the "limit_req" or "auth_request" directives
client request body might be corrupted;
the bug had appeared in 1.10.2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании HTTP/2 в рабочем процессе мог произойти segmentation fault;
ошибка появилась в 1.10.2.
</para>
<para lang="en">
a segmentation fault might occur in a worker process when using HTTP/2;
the bug had appeared in 1.10.2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании директивы sendfile на FreeBSD и macOS
мог возвращаться некорректный ответ;
ошибка появилась в 1.7.8.
</para>
<para lang="en">
an incorrect response might be returned
when using the "sendfile" directive on FreeBSD and macOS;
the bug had appeared in 1.7.8.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании директивы aio_write
ответ мог сохраняться в кэш не полностью.
</para>
<para lang="en">
a truncated response might be stored in cache
when using the "aio_write" directive.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании директивы aio_write
могла происходить утечка сокетов.
</para>
<para lang="en">
a socket leak might occur
when using the "aio_write" directive.
</para>
</change>
</changes>
<changes ver="1.10.2" date="18.10.2016">
<change type="change">
<para lang="ru">
при попытке запросить виртуальный сервер,
отличающийся от согласованного в процессе SSL handshake,
теперь возвращается ответ "421 Misdirected Request";
это улучшает совместимость с некоторыми HTTP/2-клиентами
в случае использования клиентских сертификатов.
</para>
<para lang="en">
the "421 Misdirected Request" response now used
when rejecting requests to a virtual server
different from one negotiated during an SSL handshake;
this improves interoperability with some HTTP/2 clients
when using client certificates.
</para>
</change>
<change type="change">
<para lang="ru">
HTTP/2-клиенты теперь могут сразу присылать тело запроса;
директива http2_body_preread_size позволяет указать размер буфера, который
будет использоваться до того, как nginx начнёт читать тело.
</para>
<para lang="en">
HTTP/2 clients can now start sending request body immediately;
the "http2_body_preread_size" directive controls size of the buffer used
before nginx will start reading client request body.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании HTTP/2 и директивы proxy_request_buffering
в рабочем процессе мог произойти segmentation fault.
</para>
<para lang="en">
a segmentation fault might occur in a worker process
when using HTTP/2 and the "proxy_request_buffering" directive.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании HTTP/2
к запросам, передаваемым на бэкенд,
всегда добавлялась строка заголовка "Content-Length",
даже если у запроса не было тела.
</para>
<para lang="en">
the "Content-Length" request header line
was always added to requests passed to backends,
including requests without body,
when using HTTP/2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании HTTP/2
в логах могли появляться сообщения "http request count is zero".
</para>
<para lang="en">
"http request count is zero" alerts might appear in logs
when using HTTP/2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании директивы sub_filter
могло буферизироваться больше данных, чем это необходимо;
проблема появилась в 1.9.4.
</para>
<para lang="en">
unnecessary buffering might occur
when using the "sub_filter" directive;
the issue had appeared in 1.9.4.
</para>
</change>
<change type="bugfix">
<para lang="ru">
утечки сокетов при использовании HTTP/2.
</para>
<para lang="en">
socket leak when using HTTP/2.
</para>
</change>
<change type="bugfix">
<para lang="ru">
при использовании директив "aio threads" и sendfile
мог возвращаться некорректный ответ; ошибка появилась в 1.9.13.
</para>
<para lang="en">
an incorrect response might be returned
when using the "aio threads" and "sendfile" directives;
the bug had appeared in 1.9.13.
</para>
</change>
<change type="workaround">
<para lang="ru">
совместимость с OpenSSL 1.1.0.
</para>
<para lang="en">
OpenSSL 1.1.0 compatibility.
</para>
</change>
</changes>
<changes ver="1.10.1" date="31.05.2016">
<change type="security">
<para lang="ru">
при записи тела специально созданного запроса во временный файл
в рабочем процессе мог происходить segmentation fault
(CVE-2016-4450);
ошибка появилась в 1.3.9.
</para>
<para lang="en">
a segmentation fault might occur in a worker process
while writing a specially crafted request body to a temporary file
(CVE-2016-4450);
the bug had appeared in 1.3.9.
</para>
</change>
</changes>
<changes ver="1.10.0" date="26.04.2016">
<change>
<para lang="ru">
Стабильная ветка 1.10.x.
</para>
<para lang="en">
1.10.x stable branch.
</para>
</change>
</changes>
<changes ver="1.9.15" date="19.04.2016">
<change type="bugfix">

View File

@ -5,9 +5,9 @@ NGINX = nginx-$(VER)
TEMP = tmp
OBJS = objs.msvc8
OPENSSL = openssl-1.0.2g
ZLIB = zlib-1.2.8
PCRE = pcre-8.38
OPENSSL = openssl-1.0.2k
ZLIB = zlib-1.2.11
PCRE = pcre-8.40
release: export

View File

@ -9,8 +9,8 @@
#define _NGINX_H_INCLUDED_
#define nginx_version 1009015
#define NGINX_VERSION "1.9.15"
#define nginx_version 1010003
#define NGINX_VERSION "1.10.3"
#define NGINX_VER "nginx/" NGINX_VERSION
#ifdef NGX_BUILD

View File

@ -244,6 +244,9 @@ ngx_chain_coalesce_file(ngx_chain_t **in, off_t limit)
if (aligned <= cl->buf->file_last) {
size = aligned - cl->buf->file_pos;
}
total += size;
break;
}
total += size;

View File

@ -1213,6 +1213,7 @@ ngx_cmp_sockaddr(struct sockaddr *sa1, socklen_t slen1,
struct sockaddr_in6 *sin61, *sin62;
#endif
#if (NGX_HAVE_UNIX_DOMAIN)
size_t len;
struct sockaddr_un *saun1, *saun2;
#endif
@ -1242,15 +1243,21 @@ ngx_cmp_sockaddr(struct sockaddr *sa1, socklen_t slen1,
#if (NGX_HAVE_UNIX_DOMAIN)
case AF_UNIX:
/* TODO length */
saun1 = (struct sockaddr_un *) sa1;
saun2 = (struct sockaddr_un *) sa2;
if (ngx_memcmp(&saun1->sun_path, &saun2->sun_path,
sizeof(saun1->sun_path))
!= 0)
{
if (slen1 < slen2) {
len = slen1 - offsetof(struct sockaddr_un, sun_path);
} else {
len = slen2 - offsetof(struct sockaddr_un, sun_path);
}
if (len > sizeof(saun1->sun_path)) {
len = sizeof(saun1->sun_path);
}
if (ngx_memcmp(&saun1->sun_path, &saun2->sun_path, len) != 0) {
return NGX_DECLINED;
}

View File

@ -951,6 +951,8 @@ ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file)
return NGX_ERROR;
}
#if OPENSSL_VERSION_NUMBER < 0x10100005L
dh->p = BN_bin2bn(dh1024_p, sizeof(dh1024_p), NULL);
dh->g = BN_bin2bn(dh1024_g, sizeof(dh1024_g), NULL);
@ -960,6 +962,23 @@ ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file)
return NGX_ERROR;
}
#else
{
BIGNUM *p, *g;
p = BN_bin2bn(dh1024_p, sizeof(dh1024_p), NULL);
g = BN_bin2bn(dh1024_g, sizeof(dh1024_g), NULL);
if (p == NULL || g == NULL || !DH_set0_pqg(dh, p, NULL, g)) {
ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "BN_bin2bn() failed");
DH_free(dh);
BN_free(p);
BN_free(g);
return NGX_ERROR;
}
}
#endif
SSL_CTX_set_tmp_dh(ssl->ctx, dh);
DH_free(dh);
@ -1938,7 +1957,9 @@ ngx_ssl_connection_error(ngx_connection_t *c, int sslerr, ngx_err_t err,
|| n == SSL_R_ERROR_IN_RECEIVED_CIPHER_LIST /* 151 */
|| n == SSL_R_EXCESSIVE_MESSAGE_SIZE /* 152 */
|| n == SSL_R_LENGTH_MISMATCH /* 159 */
#ifdef SSL_R_NO_CIPHERS_PASSED
|| n == SSL_R_NO_CIPHERS_PASSED /* 182 */
#endif
|| n == SSL_R_NO_CIPHERS_SPECIFIED /* 183 */
|| n == SSL_R_NO_COMPRESSION_SPECIFIED /* 187 */
|| n == SSL_R_NO_SHARED_CIPHER /* 193 */
@ -2898,7 +2919,7 @@ ngx_ssl_session_ticket_key_callback(ngx_ssl_conn_t *ssl_conn,
ngx_ssl_session_ticket_md(), NULL);
ngx_memcpy(name, key[0].name, 16);
return 0;
return 1;
} else {
/* decrypt session ticket */

View File

@ -113,11 +113,24 @@ ngx_event_pipe_read_upstream(ngx_event_pipe_t *p)
}
#if (NGX_THREADS)
if (p->aio) {
ngx_log_debug0(NGX_LOG_DEBUG_EVENT, p->log, 0,
"pipe read upstream: aio");
return NGX_AGAIN;
}
if (p->writing) {
ngx_log_debug0(NGX_LOG_DEBUG_EVENT, p->log, 0,
"pipe read upstream: writing");
rc = ngx_event_pipe_write_chain_to_temp_file(p);
if (rc != NGX_OK) {
return rc;
}
}
#endif
ngx_log_debug1(NGX_LOG_DEBUG_EVENT, p->log, 0,
@ -815,10 +828,12 @@ ngx_event_pipe_write_chain_to_temp_file(ngx_event_pipe_t *p)
}
#if (NGX_THREADS)
p->temp_file->thread_write = p->thread_handler ? 1 : 0;
p->temp_file->file.thread_task = p->thread_task;
p->temp_file->file.thread_handler = p->thread_handler;
p->temp_file->file.thread_ctx = p->thread_ctx;
if (p->thread_handler) {
p->temp_file->thread_write = 1;
p->temp_file->file.thread_task = p->thread_task;
p->temp_file->file.thread_handler = p->thread_handler;
p->temp_file->file.thread_ctx = p->thread_ctx;
}
#endif
n = ngx_write_chain_to_temp_file(p->temp_file, out);

View File

@ -171,6 +171,7 @@ ngx_http_addition_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
for (cl = in; cl; cl = cl->next) {
if (cl->buf->last_buf) {
cl->buf->last_buf = 0;
cl->buf->last_in_chain = 1;
cl->buf->sync = 1;
last = 1;
}

View File

@ -83,7 +83,9 @@ static ngx_uint_t ngx_http_sub_cmp_index;
static ngx_int_t ngx_http_sub_output(ngx_http_request_t *r,
ngx_http_sub_ctx_t *ctx);
static ngx_int_t ngx_http_sub_parse(ngx_http_request_t *r,
ngx_http_sub_ctx_t *ctx);
ngx_http_sub_ctx_t *ctx, ngx_uint_t flush);
static ngx_int_t ngx_http_sub_match(ngx_http_sub_ctx_t *ctx, ngx_int_t start,
ngx_str_t *m);
static char * ngx_http_sub_filter(ngx_conf_t *cf, ngx_command_t *cmd,
void *conf);
@ -285,6 +287,7 @@ ngx_http_sub_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
ngx_int_t rc;
ngx_buf_t *b;
ngx_str_t *sub;
ngx_uint_t flush, last;
ngx_chain_t *cl;
ngx_http_sub_ctx_t *ctx;
ngx_http_sub_match_t *match;
@ -326,6 +329,9 @@ ngx_http_sub_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"http sub filter \"%V\"", &r->uri);
flush = 0;
last = 0;
while (ctx->in || ctx->buf) {
if (ctx->buf == NULL) {
@ -334,11 +340,19 @@ ngx_http_sub_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
ctx->pos = ctx->buf->pos;
}
if (ctx->buf->flush || ctx->buf->recycled) {
flush = 1;
}
if (ctx->in == NULL) {
last = flush;
}
b = NULL;
while (ctx->pos < ctx->buf->last) {
rc = ngx_http_sub_parse(r, ctx);
rc = ngx_http_sub_parse(r, ctx, last);
ngx_log_debug4(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"parse: %i, looked: \"%V\" %p-%p",
@ -590,9 +604,10 @@ ngx_http_sub_output(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
static ngx_int_t
ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx,
ngx_uint_t flush)
{
u_char *p, *last, *pat, *pat_end, c;
u_char *p, c;
ngx_str_t *m;
ngx_int_t offset, start, next, end, len, rc;
ngx_uint_t shift, i, j;
@ -602,6 +617,7 @@ ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
slcf = ngx_http_get_module_loc_conf(r, ngx_http_sub_filter_module);
tables = ctx->tables;
match = ctx->matches->elts;
offset = ctx->offset;
end = ctx->buf->last - ctx->pos;
@ -628,7 +644,6 @@ ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
/* a potential match */
start = offset - (ngx_int_t) tables->min_match_len + 1;
match = ctx->matches->elts;
i = ngx_max(tables->index[c], ctx->index);
j = tables->index[c + 1];
@ -641,41 +656,15 @@ ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
m = &match[i].match;
pat = m->data;
pat_end = m->data + m->len;
rc = ngx_http_sub_match(ctx, start, m);
if (start >= 0) {
p = ctx->pos + start;
} else {
last = ctx->looked.data + ctx->looked.len;
p = last + start;
while (p < last && pat < pat_end) {
if (ngx_tolower(*p) != *pat) {
goto next;
}
p++;
pat++;
}
p = ctx->pos;
}
while (p < ctx->buf->last && pat < pat_end) {
if (ngx_tolower(*p) != *pat) {
goto next;
}
p++;
pat++;
if (rc == NGX_DECLINED) {
goto next;
}
ctx->index = i;
if (pat != pat_end) {
/* partial match */
if (rc == NGX_AGAIN) {
goto again;
}
@ -695,6 +684,26 @@ ngx_http_sub_parse(ngx_http_request_t *r, ngx_http_sub_ctx_t *ctx)
ctx->index = 0;
}
if (flush) {
for ( ;; ) {
start = offset - (ngx_int_t) tables->min_match_len + 1;
if (start >= end) {
break;
}
for (i = 0; i < ctx->matches->nelts; i++) {
m = &match[i].match;
if (ngx_http_sub_match(ctx, start, m) == NGX_AGAIN) {
goto again;
}
}
offset++;
}
}
again:
ctx->offset = offset;
@ -731,6 +740,51 @@ done:
}
static ngx_int_t
ngx_http_sub_match(ngx_http_sub_ctx_t *ctx, ngx_int_t start, ngx_str_t *m)
{
u_char *p, *last, *pat, *pat_end;
pat = m->data;
pat_end = m->data + m->len;
if (start >= 0) {
p = ctx->pos + start;
} else {
last = ctx->looked.data + ctx->looked.len;
p = last + start;
while (p < last && pat < pat_end) {
if (ngx_tolower(*p) != *pat) {
return NGX_DECLINED;
}
p++;
pat++;
}
p = ctx->pos;
}
while (p < ctx->buf->last && pat < pat_end) {
if (ngx_tolower(*p) != *pat) {
return NGX_DECLINED;
}
p++;
pat++;
}
if (pat != pat_end) {
/* partial match */
return NGX_AGAIN;
}
return NGX_OK;
}
static char *
ngx_http_sub_filter(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{

View File

@ -95,17 +95,17 @@ static ngx_str_t ngx_http_status_lines[] = {
ngx_string("414 Request-URI Too Large"),
ngx_string("415 Unsupported Media Type"),
ngx_string("416 Requested Range Not Satisfiable"),
ngx_null_string, /* "417 Expectation Failed" */
ngx_null_string, /* "418 unused" */
ngx_null_string, /* "419 unused" */
ngx_null_string, /* "420 unused" */
ngx_string("421 Misdirected Request"),
/* ngx_null_string, */ /* "417 Expectation Failed" */
/* ngx_null_string, */ /* "418 unused" */
/* ngx_null_string, */ /* "419 unused" */
/* ngx_null_string, */ /* "420 unused" */
/* ngx_null_string, */ /* "421 unused" */
/* ngx_null_string, */ /* "422 Unprocessable Entity" */
/* ngx_null_string, */ /* "423 Locked" */
/* ngx_null_string, */ /* "424 Failed Dependency" */
#define NGX_HTTP_LAST_4XX 417
#define NGX_HTTP_LAST_4XX 422
#define NGX_HTTP_OFF_5XX (NGX_HTTP_LAST_4XX - 400 + NGX_HTTP_OFF_4XX)
ngx_string("500 Internal Server Error"),
@ -113,10 +113,10 @@ static ngx_str_t ngx_http_status_lines[] = {
ngx_string("502 Bad Gateway"),
ngx_string("503 Service Temporarily Unavailable"),
ngx_string("504 Gateway Time-out"),
ngx_null_string, /* "505 HTTP Version Not Supported" */
ngx_null_string, /* "506 Variant Also Negotiates" */
ngx_string("507 Insufficient Storage"),
/* ngx_null_string, */ /* "508 unused" */
/* ngx_null_string, */ /* "509 unused" */
/* ngx_null_string, */ /* "510 Not Extended" */

View File

@ -2065,7 +2065,7 @@ ngx_http_set_virtual_server(ngx_http_request_t *r, ngx_str_t *host)
ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
"client attempted to request the server name "
"different from that one was negotiated");
ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST);
ngx_http_finalize_request(r, NGX_HTTP_MISDIRECTED_REQUEST);
return NGX_ERROR;
}
}

View File

@ -95,6 +95,7 @@
#define NGX_HTTP_REQUEST_URI_TOO_LARGE 414
#define NGX_HTTP_UNSUPPORTED_MEDIA_TYPE 415
#define NGX_HTTP_RANGE_NOT_SATISFIABLE 416
#define NGX_HTTP_MISDIRECTED_REQUEST 421
/* Our own HTTP codes */

View File

@ -210,6 +210,14 @@ static char ngx_http_error_416_page[] =
;
static char ngx_http_error_421_page[] =
"<html>" CRLF
"<head><title>421 Misdirected Request</title></head>" CRLF
"<body bgcolor=\"white\">" CRLF
"<center><h1>421 Misdirected Request</h1></center>" CRLF
;
static char ngx_http_error_494_page[] =
"<html>" CRLF
"<head><title>400 Request Header Or Cookie Too Large</title></head>"
@ -334,8 +342,13 @@ static ngx_str_t ngx_http_error_pages[] = {
ngx_string(ngx_http_error_414_page),
ngx_string(ngx_http_error_415_page),
ngx_string(ngx_http_error_416_page),
ngx_null_string, /* 417 */
ngx_null_string, /* 418 */
ngx_null_string, /* 419 */
ngx_null_string, /* 420 */
ngx_string(ngx_http_error_421_page),
#define NGX_HTTP_LAST_4XX 417
#define NGX_HTTP_LAST_4XX 422
#define NGX_HTTP_OFF_5XX (NGX_HTTP_LAST_4XX - 400 + NGX_HTTP_OFF_4XX)
ngx_string(ngx_http_error_494_page), /* 494, request header too large */

View File

@ -3744,9 +3744,24 @@ ngx_http_upstream_process_request(ngx_http_request_t *r,
p = u->pipe;
#if (NGX_THREADS)
if (p->writing && !p->aio) {
/*
* make sure to call ngx_event_pipe()
* if there is an incomplete aio write
*/
if (ngx_event_pipe(p, 1) == NGX_ABORT) {
ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
}
}
if (p->writing) {
return;
}
#endif
if (u->peer.connection) {

View File

@ -48,11 +48,6 @@
#define NGX_HTTP_V2_DEFAULT_FRAME_SIZE (1 << 14)
#define NGX_HTTP_V2_MAX_WINDOW ((1U << 31) - 1)
#define NGX_HTTP_V2_DEFAULT_WINDOW 65535
#define NGX_HTTP_V2_INITIAL_WINDOW 0
#define NGX_HTTP_V2_ROOT (void *) -1
@ -141,6 +136,8 @@ static ngx_int_t ngx_http_v2_send_window_update(ngx_http_v2_connection_t *h2c,
ngx_uint_t sid, size_t window);
static ngx_int_t ngx_http_v2_send_rst_stream(ngx_http_v2_connection_t *h2c,
ngx_uint_t sid, ngx_uint_t status);
static ngx_int_t ngx_http_v2_send_goaway(ngx_http_v2_connection_t *h2c,
ngx_uint_t status);
static ngx_http_v2_out_frame_t *ngx_http_v2_get_frame(
ngx_http_v2_connection_t *h2c, size_t length, ngx_uint_t type,
@ -289,7 +286,6 @@ ngx_http_v2_init(ngx_event_t *rev)
: ngx_http_v2_state_preface;
ngx_queue_init(&h2c->waiting);
ngx_queue_init(&h2c->posted);
ngx_queue_init(&h2c->dependencies);
ngx_queue_init(&h2c->closed);
@ -298,6 +294,8 @@ ngx_http_v2_init(ngx_event_t *rev)
rev->handler = ngx_http_v2_read_handler;
c->write->handler = ngx_http_v2_write_handler;
c->idle = 1;
ngx_http_v2_read_handler(rev);
}
@ -325,6 +323,25 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
h2c->blocked = 1;
if (c->close) {
c->close = 0;
h2c->goaway = 1;
if (ngx_http_v2_send_goaway(h2c, NGX_HTTP_V2_NO_ERROR) == NGX_ERROR) {
ngx_http_v2_finalize_connection(h2c, 0);
return;
}
if (ngx_http_v2_send_output_queue(h2c) == NGX_ERROR) {
ngx_http_v2_finalize_connection(h2c, 0);
return;
}
h2c->blocked = 0;
return;
}
h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
ngx_http_v2_module);
@ -397,9 +414,7 @@ static void
ngx_http_v2_write_handler(ngx_event_t *wev)
{
ngx_int_t rc;
ngx_queue_t *q;
ngx_connection_t *c;
ngx_http_v2_stream_t *stream;
ngx_http_v2_connection_t *h2c;
c = wev->data;
@ -415,6 +430,16 @@ ngx_http_v2_write_handler(ngx_event_t *wev)
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 write handler");
if (h2c->last_out == NULL && !c->buffered) {
if (wev->timer_set) {
ngx_del_timer(wev);
}
ngx_http_v2_handle_connection(h2c);
return;
}
h2c->blocked = 1;
rc = ngx_http_v2_send_output_queue(h2c);
@ -424,26 +449,6 @@ ngx_http_v2_write_handler(ngx_event_t *wev)
return;
}
while (!ngx_queue_empty(&h2c->posted)) {
q = ngx_queue_head(&h2c->posted);
ngx_queue_remove(q);
stream = ngx_queue_data(q, ngx_http_v2_stream_t, queue);
stream->handled = 0;
ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
"run http2 stream %ui", stream->node->id);
wev = stream->request->connection->write;
wev->active = 0;
wev->ready = 1;
wev->handler(wev);
}
h2c->blocked = 0;
if (rc == NGX_AGAIN) {
@ -473,7 +478,7 @@ ngx_http_v2_send_output_queue(ngx_http_v2_connection_t *h2c)
wev = c->write;
if (!wev->ready) {
return NGX_OK;
return NGX_AGAIN;
}
cl = NULL;
@ -544,15 +549,6 @@ ngx_http_v2_send_output_queue(ngx_http_v2_connection_t *h2c)
c->tcp_nodelay = NGX_TCP_NODELAY_SET;
}
if (cl) {
ngx_add_timer(wev, clcf->send_timeout);
} else {
if (wev->timer_set) {
ngx_del_timer(wev);
}
}
for ( /* void */ ; out; out = fn) {
fn = out->next;
@ -577,6 +573,15 @@ ngx_http_v2_send_output_queue(ngx_http_v2_connection_t *h2c)
h2c->last_out = frame;
if (!wev->ready) {
ngx_add_timer(wev, clcf->send_timeout);
return NGX_AGAIN;
}
if (wev->timer_set) {
ngx_del_timer(wev);
}
return NGX_OK;
error:
@ -594,7 +599,8 @@ error:
static void
ngx_http_v2_handle_connection(ngx_http_v2_connection_t *h2c)
{
ngx_connection_t *c;
ngx_int_t rc;
ngx_connection_t *c;
ngx_http_v2_srv_conf_t *h2scf;
if (h2c->last_out || h2c->processing) {
@ -609,6 +615,26 @@ ngx_http_v2_handle_connection(ngx_http_v2_connection_t *h2c)
}
if (c->buffered) {
h2c->blocked = 1;
rc = ngx_http_v2_send_output_queue(h2c);
h2c->blocked = 0;
if (rc == NGX_ERROR) {
ngx_http_close_connection(c);
return;
}
if (rc == NGX_AGAIN) {
return;
}
/* rc == NGX_OK */
}
if (h2c->goaway) {
ngx_http_close_connection(c);
return;
}
@ -619,11 +645,6 @@ ngx_http_v2_handle_connection(ngx_http_v2_connection_t *h2c)
return;
}
if (ngx_terminate || ngx_exiting) {
ngx_http_close_connection(c);
return;
}
ngx_destroy_pool(h2c->pool);
h2c->pool = NULL;
@ -637,7 +658,6 @@ ngx_http_v2_handle_connection(ngx_http_v2_connection_t *h2c)
#endif
c->destroyed = 1;
c->idle = 1;
ngx_reusable_connection(c, 1);
c->write->handler = ngx_http_empty_handler;
@ -879,8 +899,6 @@ ngx_http_v2_state_data(ngx_http_v2_connection_t *h2c, u_char *pos, u_char *end)
return ngx_http_v2_state_skip_padded(h2c, pos, end);
}
stream->in_closed = h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG;
h2c->state.stream = stream;
return ngx_http_v2_state_read_data(h2c, pos, end);
@ -891,10 +909,12 @@ static u_char *
ngx_http_v2_state_read_data(ngx_http_v2_connection_t *h2c, u_char *pos,
u_char *end)
{
size_t size;
ngx_int_t rc;
ngx_uint_t last;
ngx_http_v2_stream_t *stream;
size_t size;
ngx_buf_t *buf;
ngx_int_t rc;
ngx_http_request_t *r;
ngx_http_v2_stream_t *stream;
ngx_http_v2_srv_conf_t *h2scf;
stream = h2c->state.stream;
@ -913,17 +933,42 @@ ngx_http_v2_state_read_data(ngx_http_v2_connection_t *h2c, u_char *pos,
if (size >= h2c->state.length) {
size = h2c->state.length;
last = stream->in_closed;
} else {
last = 0;
stream->in_closed = h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG;
}
rc = ngx_http_v2_process_request_body(stream->request, pos, size, last);
r = stream->request;
if (rc != NGX_OK) {
stream->skip_data = 1;
ngx_http_finalize_request(stream->request, rc);
if (r->request_body) {
rc = ngx_http_v2_process_request_body(r, pos, size, stream->in_closed);
if (rc != NGX_OK) {
stream->skip_data = 1;
ngx_http_finalize_request(r, rc);
}
} else if (size) {
buf = stream->preread;
if (buf == NULL) {
h2scf = ngx_http_get_module_srv_conf(r, ngx_http_v2_module);
buf = ngx_create_temp_buf(r->pool, h2scf->preread_size);
if (buf == NULL) {
return ngx_http_v2_connection_error(h2c,
NGX_HTTP_V2_INTERNAL_ERROR);
}
stream->preread = buf;
}
if (size > (size_t) (buf->end - buf->last)) {
ngx_log_error(NGX_LOG_ALERT, h2c->connection->log, 0,
"http2 preread buffer overflow");
return ngx_http_v2_connection_error(h2c,
NGX_HTTP_V2_INTERNAL_ERROR);
}
buf->last = ngx_cpymem(buf->last, pos, size);
}
pos += size;
@ -981,6 +1026,12 @@ ngx_http_v2_state_headers(ngx_http_v2_connection_t *h2c, u_char *pos,
return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR);
}
if (h2c->goaway) {
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
"skipping http2 HEADERS frame");
return ngx_http_v2_state_skip(h2c, pos, end);
}
if ((size_t) (end - pos) < size) {
return ngx_http_v2_state_save(h2c, pos, end,
ngx_http_v2_state_headers);
@ -1058,7 +1109,9 @@ ngx_http_v2_state_headers(ngx_http_v2_connection_t *h2c, u_char *pos,
goto rst_stream;
}
if (!h2c->settings_ack && !(h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG))
if (!h2c->settings_ack
&& !(h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG)
&& h2scf->preread_size < NGX_HTTP_V2_DEFAULT_WINDOW)
{
ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
"client sent stream with data "
@ -2164,7 +2217,7 @@ ngx_http_v2_state_window_update(ngx_http_v2_connection_t *h2c, u_char *pos,
stream = ngx_queue_data(q, ngx_http_v2_stream_t, queue);
stream->handled = 0;
stream->waiting = 0;
wev = stream->request->connection->write;
@ -2434,8 +2487,7 @@ ngx_http_v2_send_settings(ngx_http_v2_connection_t *h2c, ngx_uint_t ack)
buf->last = ngx_http_v2_write_uint16(buf->last,
NGX_HTTP_V2_INIT_WINDOW_SIZE_SETTING);
buf->last = ngx_http_v2_write_uint32(buf->last,
NGX_HTTP_V2_INITIAL_WINDOW);
buf->last = ngx_http_v2_write_uint32(buf->last, h2scf->preread_size);
buf->last = ngx_http_v2_write_uint16(buf->last,
NGX_HTTP_V2_MAX_FRAME_SIZE_SETTING);
@ -2643,6 +2695,7 @@ ngx_http_v2_create_stream(ngx_http_v2_connection_t *h2c)
ngx_http_log_ctx_t *ctx;
ngx_http_request_t *r;
ngx_http_v2_stream_t *stream;
ngx_http_v2_srv_conf_t *h2scf;
ngx_http_core_srv_conf_t *cscf;
fc = h2c->free_fake_connections;
@ -2756,8 +2809,10 @@ ngx_http_v2_create_stream(ngx_http_v2_connection_t *h2c)
stream->request = r;
stream->connection = h2c;
h2scf = ngx_http_get_module_srv_conf(r, ngx_http_v2_module);
stream->send_window = h2c->init_window;
stream->recv_window = NGX_HTTP_V2_INITIAL_WINDOW;
stream->recv_window = h2scf->preread_size;
h2c->processing++;
@ -3400,7 +3455,9 @@ ngx_http_v2_run_request(ngx_http_request_t *r)
return;
}
r->headers_in.chunked = (r->headers_in.content_length_n == -1);
if (r->headers_in.content_length_n == -1 && !r->stream->in_closed) {
r->headers_in.chunked = 1;
}
ngx_http_process_request(r);
}
@ -3411,7 +3468,11 @@ ngx_http_v2_read_request_body(ngx_http_request_t *r,
ngx_http_client_body_handler_pt post_handler)
{
off_t len;
size_t size;
ngx_buf_t *buf;
ngx_int_t rc;
ngx_http_v2_stream_t *stream;
ngx_http_v2_srv_conf_t *h2scf;
ngx_http_request_body_t *rb;
ngx_http_core_loc_conf_t *clcf;
ngx_http_v2_connection_t *h2c;
@ -3444,28 +3505,43 @@ ngx_http_v2_read_request_body(ngx_http_request_t *r,
r->request_body = rb;
h2scf = ngx_http_get_module_srv_conf(r, ngx_http_v2_module);
clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
len = r->headers_in.content_length_n;
if (r->request_body_no_buffering && !stream->in_closed) {
r->request_body_in_file_only = 0;
if (len < 0 || len > (off_t) clcf->client_body_buffer_size) {
len = clcf->client_body_buffer_size;
}
/*
* We need a room to store data up to the stream's initial window size,
* at least until this window will be exhausted.
*/
if (len < (off_t) h2scf->preread_size) {
len = h2scf->preread_size;
}
if (len > NGX_HTTP_V2_MAX_WINDOW) {
len = NGX_HTTP_V2_MAX_WINDOW;
}
}
if (len >= 0 && len <= (off_t) clcf->client_body_buffer_size
&& !r->request_body_in_file_only)
rb->buf = ngx_create_temp_buf(r->pool, (size_t) len);
} else if (len >= 0 && len <= (off_t) clcf->client_body_buffer_size
&& !r->request_body_in_file_only)
{
rb->buf = ngx_create_temp_buf(r->pool, (size_t) len);
} else {
if (stream->preread) {
/* enforce writing preread buffer to file */
r->request_body_in_file_only = 1;
}
rb->buf = ngx_calloc_buf(r->pool);
if (rb->buf != NULL) {
@ -3478,22 +3554,44 @@ ngx_http_v2_read_request_body(ngx_http_request_t *r,
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
buf = stream->preread;
if (stream->in_closed) {
r->request_body_no_buffering = 0;
if (buf) {
rc = ngx_http_v2_process_request_body(r, buf->pos,
buf->last - buf->pos, 1);
ngx_pfree(r->pool, buf->start);
return rc;
}
return ngx_http_v2_process_request_body(r, NULL, 0, 1);
}
if (len) {
if (r->request_body_no_buffering) {
stream->recv_window = (size_t) len;
if (buf) {
rc = ngx_http_v2_process_request_body(r, buf->pos,
buf->last - buf->pos, 0);
} else {
stream->no_flow_control = 1;
stream->recv_window = NGX_HTTP_V2_MAX_WINDOW;
ngx_pfree(r->pool, buf->start);
if (rc != NGX_OK) {
stream->skip_data = 1;
return rc;
}
}
if (ngx_http_v2_send_window_update(stream->connection, stream->node->id,
stream->recv_window)
if (r->request_body_no_buffering) {
size = (size_t) len - h2scf->preread_size;
} else {
stream->no_flow_control = 1;
size = NGX_HTTP_V2_MAX_WINDOW - stream->recv_window;
}
if (size) {
if (ngx_http_v2_send_window_update(stream->connection,
stream->node->id, size)
== NGX_ERROR)
{
stream->skip_data = 1;
@ -3508,9 +3606,13 @@ ngx_http_v2_read_request_body(ngx_http_request_t *r,
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
}
stream->recv_window += size;
}
ngx_add_timer(r->connection->read, clcf->client_body_timeout);
if (!buf) {
ngx_add_timer(r->connection->read, clcf->client_body_timeout);
}
r->read_event_handler = ngx_http_v2_read_client_request_body_handler;
r->write_event_handler = ngx_http_request_empty_handler;
@ -3529,13 +3631,8 @@ ngx_http_v2_process_request_body(ngx_http_request_t *r, u_char *pos,
ngx_http_request_body_t *rb;
ngx_http_core_loc_conf_t *clcf;
rb = r->request_body;
if (rb == NULL) {
return NGX_OK;
}
fc = r->connection;
rb = r->request_body;
buf = rb->buf;
if (size) {
@ -3579,7 +3676,7 @@ ngx_http_v2_process_request_body(ngx_http_request_t *r, u_char *pos,
rb->buf = NULL;
}
if (r->headers_in.content_length_n == -1) {
if (r->headers_in.chunked) {
r->headers_in.content_length_n = rb->received;
}
@ -3789,7 +3886,14 @@ ngx_http_v2_read_unbuffered_request_body(ngx_http_request_t *r)
window -= h2c->state.length;
}
if (window == stream->recv_window) {
if (window <= stream->recv_window) {
if (window < stream->recv_window) {
ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
"http2 negative window update");
stream->skip_data = 1;
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
return NGX_AGAIN;
}
@ -3824,6 +3928,10 @@ ngx_http_v2_terminate_stream(ngx_http_v2_connection_t *h2c,
ngx_event_t *rev;
ngx_connection_t *fc;
if (stream->rst_sent) {
return NGX_OK;
}
if (ngx_http_v2_send_rst_stream(h2c, stream->node->id, status)
== NGX_ERROR)
{
@ -3831,6 +3939,7 @@ ngx_http_v2_terminate_stream(ngx_http_v2_connection_t *h2c,
}
stream->rst_sent = 1;
stream->skip_data = 1;
fc = stream->request->connection;
fc->error = 1;
@ -3862,6 +3971,7 @@ ngx_http_v2_close_stream(ngx_http_v2_stream_t *stream, ngx_int_t rc)
if (stream->queued) {
fc->write->handler = ngx_http_v2_close_stream_handler;
fc->read->handler = ngx_http_empty_handler;
return;
}
@ -4062,7 +4172,6 @@ ngx_http_v2_idle_handler(ngx_event_t *rev)
#endif
c->destroyed = 0;
c->idle = 0;
ngx_reusable_connection(c, 0);
h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx,
@ -4097,16 +4206,14 @@ ngx_http_v2_finalize_connection(ngx_http_v2_connection_t *h2c,
h2c->blocked = 1;
if (!c->error && ngx_http_v2_send_goaway(h2c, status) != NGX_ERROR) {
(void) ngx_http_v2_send_output_queue(h2c);
if (!c->error && !h2c->goaway) {
if (ngx_http_v2_send_goaway(h2c, status) != NGX_ERROR) {
(void) ngx_http_v2_send_output_queue(h2c);
}
}
c->error = 1;
if (h2c->state.stream) {
ngx_http_v2_close_stream(h2c->state.stream, NGX_HTTP_BAD_REQUEST);
}
if (!h2c->processing) {
ngx_http_close_connection(c);
return;
@ -4131,7 +4238,7 @@ ngx_http_v2_finalize_connection(ngx_http_v2_connection_t *h2c,
continue;
}
stream->handled = 0;
stream->waiting = 0;
r = stream->request;
fc = r->connection;

View File

@ -46,6 +46,9 @@
#define NGX_HTTP_V2_PADDED_FLAG 0x08
#define NGX_HTTP_V2_PRIORITY_FLAG 0x20
#define NGX_HTTP_V2_MAX_WINDOW ((1U << 31) - 1)
#define NGX_HTTP_V2_DEFAULT_WINDOW 65535
typedef struct ngx_http_v2_connection_s ngx_http_v2_connection_t;
typedef struct ngx_http_v2_node_s ngx_http_v2_node_t;
@ -134,7 +137,6 @@ struct ngx_http_v2_connection_s {
ngx_http_v2_out_frame_t *last_out;
ngx_queue_t posted;
ngx_queue_t dependencies;
ngx_queue_t closed;
@ -143,6 +145,7 @@ struct ngx_http_v2_connection_s {
unsigned closed_nodes:8;
unsigned settings_ack:1;
unsigned blocked:1;
unsigned goaway:1;
};
@ -174,6 +177,8 @@ struct ngx_http_v2_stream_s {
ssize_t send_window;
size_t recv_window;
ngx_buf_t *preread;
ngx_http_v2_out_frame_t *free_frames;
ngx_chain_t *free_frame_headers;
ngx_chain_t *free_bufs;
@ -186,7 +191,7 @@ struct ngx_http_v2_stream_s {
ngx_pool_t *pool;
unsigned handled:1;
unsigned waiting:1;
unsigned blocked:1;
unsigned exhausted:1;
unsigned in_closed:1;

View File

@ -169,6 +169,12 @@ ngx_http_v2_header_filter(ngx_http_request_t *r)
return NGX_OK;
}
fc = r->connection;
if (fc->error) {
return NGX_ERROR;
}
if (r->method == NGX_HTTP_HEAD) {
r->header_only = 1;
}
@ -259,8 +265,6 @@ ngx_http_v2_header_filter(ngx_http_request_t *r)
len += 1 + ngx_http_v2_literal_size("Wed, 31 Dec 1986 18:00:00 GMT");
}
fc = r->connection;
if (r->headers_out.location && r->headers_out.location->value.len) {
if (r->headers_out.location->value.data[0] == '/') {
@ -1118,11 +1122,11 @@ ngx_http_v2_waiting_queue(ngx_http_v2_connection_t *h2c,
ngx_queue_t *q;
ngx_http_v2_stream_t *s;
if (stream->handled) {
if (stream->waiting) {
return;
}
stream->handled = 1;
stream->waiting = 1;
for (q = ngx_queue_last(&h2c->waiting);
q != ngx_queue_sentinel(&h2c->waiting);
@ -1313,18 +1317,29 @@ static ngx_inline void
ngx_http_v2_handle_stream(ngx_http_v2_connection_t *h2c,
ngx_http_v2_stream_t *stream)
{
ngx_event_t *wev;
ngx_event_t *wev;
ngx_connection_t *fc;
if (stream->handled || stream->blocked || stream->exhausted) {
if (stream->waiting || stream->blocked) {
return;
}
wev = stream->request->connection->write;
fc = stream->request->connection;
if (!wev->delayed) {
stream->handled = 1;
ngx_queue_insert_tail(&h2c->posted, &stream->queue);
if (!fc->error && stream->exhausted) {
return;
}
wev = fc->write;
wev->active = 0;
wev->ready = 1;
if (!fc->error && wev->delayed) {
return;
}
ngx_post_event(wev, &ngx_posted_events);
}
@ -1334,11 +1349,13 @@ ngx_http_v2_filter_cleanup(void *data)
ngx_http_v2_stream_t *stream = data;
size_t window;
ngx_event_t *wev;
ngx_queue_t *q;
ngx_http_v2_out_frame_t *frame, **fn;
ngx_http_v2_connection_t *h2c;
if (stream->handled) {
stream->handled = 0;
if (stream->waiting) {
stream->waiting = 0;
ngx_queue_remove(&stream->queue);
}
@ -1372,9 +1389,26 @@ ngx_http_v2_filter_cleanup(void *data)
fn = &frame->next;
}
if (h2c->send_window == 0 && window && !ngx_queue_empty(&h2c->waiting)) {
ngx_queue_add(&h2c->posted, &h2c->waiting);
ngx_queue_init(&h2c->waiting);
if (h2c->send_window == 0 && window) {
while (!ngx_queue_empty(&h2c->waiting)) {
q = ngx_queue_head(&h2c->waiting);
ngx_queue_remove(q);
stream = ngx_queue_data(q, ngx_http_v2_stream_t, queue);
stream->waiting = 0;
wev = stream->request->connection->write;
wev->active = 0;
wev->ready = 1;
if (!wev->delayed) {
ngx_post_event(wev, &ngx_posted_events);
}
}
}
h2c->send_window += window;

View File

@ -30,6 +30,7 @@ static char *ngx_http_v2_merge_loc_conf(ngx_conf_t *cf, void *parent,
static char *ngx_http_v2_recv_buffer_size(ngx_conf_t *cf, void *post,
void *data);
static char *ngx_http_v2_pool_size(ngx_conf_t *cf, void *post, void *data);
static char *ngx_http_v2_preread_size(ngx_conf_t *cf, void *post, void *data);
static char *ngx_http_v2_streams_index_mask(ngx_conf_t *cf, void *post,
void *data);
static char *ngx_http_v2_chunk_size(ngx_conf_t *cf, void *post, void *data);
@ -41,6 +42,8 @@ static ngx_conf_post_t ngx_http_v2_recv_buffer_size_post =
{ ngx_http_v2_recv_buffer_size };
static ngx_conf_post_t ngx_http_v2_pool_size_post =
{ ngx_http_v2_pool_size };
static ngx_conf_post_t ngx_http_v2_preread_size_post =
{ ngx_http_v2_preread_size };
static ngx_conf_post_t ngx_http_v2_streams_index_mask_post =
{ ngx_http_v2_streams_index_mask };
static ngx_conf_post_t ngx_http_v2_chunk_size_post =
@ -84,6 +87,13 @@ static ngx_command_t ngx_http_v2_commands[] = {
offsetof(ngx_http_v2_srv_conf_t, max_header_size),
NULL },
{ ngx_string("http2_body_preread_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1,
ngx_conf_set_size_slot,
NGX_HTTP_SRV_CONF_OFFSET,
offsetof(ngx_http_v2_srv_conf_t, preread_size),
&ngx_http_v2_preread_size_post },
{ ngx_string("http2_streams_index_size"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1,
ngx_conf_set_num_slot,
@ -316,6 +326,8 @@ ngx_http_v2_create_srv_conf(ngx_conf_t *cf)
h2scf->max_field_size = NGX_CONF_UNSET_SIZE;
h2scf->max_header_size = NGX_CONF_UNSET_SIZE;
h2scf->preread_size = NGX_CONF_UNSET_SIZE;
h2scf->streams_index_mask = NGX_CONF_UNSET_UINT;
h2scf->recv_timeout = NGX_CONF_UNSET_MSEC;
@ -341,6 +353,8 @@ ngx_http_v2_merge_srv_conf(ngx_conf_t *cf, void *parent, void *child)
ngx_conf_merge_size_value(conf->max_header_size, prev->max_header_size,
16384);
ngx_conf_merge_size_value(conf->preread_size, prev->preread_size, 65536);
ngx_conf_merge_uint_value(conf->streams_index_mask,
prev->streams_index_mask, 32 - 1);
@ -419,6 +433,23 @@ ngx_http_v2_pool_size(ngx_conf_t *cf, void *post, void *data)
}
static char *
ngx_http_v2_preread_size(ngx_conf_t *cf, void *post, void *data)
{
size_t *sp = data;
if (*sp > NGX_HTTP_V2_MAX_WINDOW) {
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
"the maximum body preread buffer size is %uz",
NGX_HTTP_V2_MAX_WINDOW);
return NGX_CONF_ERROR;
}
return NGX_CONF_OK;
}
static char *
ngx_http_v2_streams_index_mask(ngx_conf_t *cf, void *post, void *data)
{

View File

@ -25,6 +25,7 @@ typedef struct {
ngx_uint_t concurrent_streams;
size_t max_field_size;
size_t max_header_size;
size_t preread_size;
ngx_uint_t streams_index_mask;
ngx_msec_t recv_timeout;
ngx_msec_t idle_timeout;

View File

@ -98,7 +98,7 @@ ngx_darwin_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit)
send += file_size;
if (header.count == 0) {
if (header.count == 0 && send < limit) {
/*
* create the trailer iovec and coalesce the neighbouring bufs

View File

@ -356,6 +356,11 @@ ngx_chain_to_iovec(ngx_iovec_t *vec, ngx_chain_t *cl)
n = 0;
for ( /* void */ ; cl; cl = cl->next) {
if (ngx_buf_special(cl->buf)) {
continue;
}
size = cl->buf->last - cl->buf->pos;
if (prev == cl->buf->pos) {

View File

@ -114,16 +114,24 @@ ngx_freebsd_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit)
send += file_size;
/* create the trailer iovec and coalesce the neighbouring bufs */
if (send < limit) {
cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, c->log);
/*
* create the trailer iovec and coalesce the neighbouring bufs
*/
if (cl == NGX_CHAIN_ERROR) {
return NGX_CHAIN_ERROR;
cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send,
c->log);
if (cl == NGX_CHAIN_ERROR) {
return NGX_CHAIN_ERROR;
}
send += trailer.size;
} else {
trailer.count = 0;
}
send += trailer.size;
if (ngx_freebsd_use_tcp_nopush
&& c->tcp_nopush == NGX_TCP_NOPUSH_UNSET)
{