| Commit message | Author | Age | Files | Lines |
Migrate check_version() to check_sanity() to make sure both proxy and backend buffers are clean between sections.
Re-accepting backends was really common.
New backend connections returned 'conntimeout' whether they timed out
establishing the TCP connection or died waiting for the "version\r\n"
response. Now a 'readvalidate' error is given if the connection was
already properly established.
A long sleep in the unix-socket startup code made backends hit the
connection timeout before they were configured.
Make all the proxy tests use the unix socket instead of listening on a
hardcoded port; the proxy code is completely equivalent from the
client's standpoint.
This fix should make the whole test suite run a bit faster too.
Apparently I don't typically run this one much. I think it should be
deprecated in favor of the newer style used in proxyunits.t/etc, but
that needs to be a concerted effort.
The connect timeout won't fire when blocking a backend from connecting
in these tests; the backend will connect, send a version command to
validate, then time out on read.
With the read timeout set to 0.1 seconds it would sometimes fail before
the restart finished, clogging the log lines and causing test failures.
Now we wait for the watcher and remove a sleep, using a longer read
timeout.
When mcp.pool() is called in its two-argument form, i.e. mcp.pool({b1,
b2}, { foo = bar }), backend objects would not be properly cached
internally, causing objects to leak.
Further, it was setting the objects into the cache table indexed by the
object itself, so they would not be cleaned up by garbage collection.
The bug was introduced as part of 6442017c (allow workers to run IO
optionally).
A few code paths were returning SERVER_ERROR (a retryable error)
when they should have returned CLIENT_ERROR (bad protocol syntax).
Cleans up the logic around response handling in general, and allows
returning server-sent error messages upstream for handling.
SERVER_ERROR means we can keep the connection to the backend. The rest
of the errors are protocol errors; while some are perfectly safe to
whitelist, clients should not be causing those sorts of errors and we
should cycle the backend regardless.
If a client sends multiple requests in the same packet, the proxy would
reverse the requests before sending them to the backend. They would
still return to the client in the correct order because top level
responses are sent in the order they were created.
In practice I guess this is rarely noticed. If a client sends a series
of commands where the first one generates a syntax error, the other
commands would still succeed. It would also trip people up when testing
pipelined commands, as read-your-own-write would fail because the write
gets ordered after the read.
I did run into this before, but I thought it was just the ascii multiget
code reversing keys, which would be harmless as the whole command has to
complete regardless of key order.
Adds:
mcp.active_req_limit(count)
mcp.buffer_memory_limit(kilobytes)
Each limit is divided by the number of worker threads, creating a
per-worker-thread cap on the number of concurrent proxy requests and on
the bytes used specifically for value data. This does not represent
total memory usage but will be close.
Buffer memory for inbound set requests is not accounted for until after
the object has been read from the socket; this is to be improved in a
future update. It should be fine unless clients send just the SET
request and then hang without sending further data.
Limits should be live-adjustable via configuration reloads.
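As a minimal sketch, both limits could be set from the Lua
configuration stage (the numeric values here are illustrative, not
recommendations):

mcp.active_req_limit(1000)      -- cap concurrent proxy requests, split across workers
mcp.buffer_memory_limit(65536)  -- cap value-buffer memory in kilobytes, split across workers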
Also changes the way the global context and thread contexts are fetched
from Lua: via the VM extra space instead of upvalues, which is a little
faster and more universal.
It was always erroneous to run many of the config functions from routes
and vice versa, but there was no consistent strictness, so users could
get into trouble.
Use a specific error when timeouts happen during the connection stage vs
the read/write stage. It even had a test!
`watch deletions`: logs all keys which are deleted using either the `delete` or `md` command.
The log line contains the command used, the key, and the clsid and size of the deleted item.
Items which result in a delete miss or are marked as stale won't show up in the logs.
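As a sketch, the stream is enabled from a client session like any other
watcher:

watch deletions
OK

(one log line then follows for each successful `delete` or `md`)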
Sending the 's' flag to metaset now returns the size of the item stored.
Useful if you want to know how large an appended/prepended item now is.
If the 'N' flag is supplied while in append/prepend mode, it allows
autovivifying (with the exptime supplied from N) for append/prepend
style keys that don't need headers created first.
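A hypothetical session sketch combining these flags (the key, value,
exptime, and returned size are illustrative; 'MA' selects append mode):

ms mykey 4 MA N60 s
data
HD s44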
Somehow missed from the earlier change for marking dead backends.
- Fixes a refcount leak on sets
- Moves the response elapsed timer back closer to when the response was
processed, so as not to clobber the wrong IO object's data
- Restores error messages from set/ms
- Adds the start of unit tests
Requests will look like they run a tiny bit faster than they actually
do, but I need to take the elapsed time there for a later change.
Test FASTGOOD and some set scenarios.
A side effect of having pre-warmed all of the tests I did with
multiget, and not having done a second round on the unit tests, is that
we somehow never tried an ascii multiget against a damn miss.
Easy to test, easy to fix.
`mcp.pool(p, { dist = etc, iothread = true })`
By default the IO thread is not used; instead a backend connection is
created for each worker thread. This can be overridden by setting
`iothread = true` when creating a pool.
`mcp.pool(p, { dist = etc, beprefix = "etc" })`
If a `beprefix` is added to the pool arguments, it will create unique
backend connections for this pool. This allows you to create multiple
sockets per backend by making multiple pools with unique prefixes.
There are legitimate use cases for sharing backend connections across
different pools, which is why that is the default behavior.
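As a sketch (the labels, address, and port are illustrative), pools
with unique prefixes get their own sockets, while the other forms share
the per-worker or IO-thread connections:

local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11211 })
local hot = mcp.pool({b1})                        -- default: shared per-worker connections
local logp = mcp.pool({b1}, { beprefix = "log" }) -- separate backend connections
local iop = mcp.pool({b1}, { iothread = true })   -- routed through the IO thread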
The event handling code was unoptimized and temporary; it was slated for
a rewrite to address performance and non-critical bugs. However, the old
code may be causing critical bugs, so it is being rewritten now.
Fixes:
- backend disconnects are detected immediately instead of the next time
the backend is used
- backend reconnects happen _after_ the retry timeout, not before
- a persistent read handler and a temporary write handler are used, to
avoid constantly issuing epoll_ctl syscalls for a potential performance
boost
Updated some tests in proxyconfig.t as the new code picks up the
disconnects immediately.
Also resolved an unrelated timing issue in the benchmark.
If the backend handler reads an incomplete response from the network, it
changes state to wait for more data. The want_read state was considering
the data complete if "data read" was bigger than "value length", but the
comparison should have been against "value + result line".
This means that if the response buffer landed in a bullseye where it had
read more than the size of the value but less than the total size of the
response (typically a span of 200 bytes or less), it would consider the
request complete and look for the END\r\n marker.
This bug has been... here forever.
Errors like "trailing data" or "missingend" are only useful if you're in
a debugger and can break and inspect. This adds uriencoded detail to the
log message when applicable.
Response object error conditions were not being checked before looking
at the response buffer. If a response was partially filled and then the
backend timed out, a partial response could be sent instead of the
proper backend error.
i.e.:
local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
    connecttimeout = 1, retrytimeout = 0.5, readtimeout = 0.1,
    failurelimit = 11 })
... to allow overriding connect/retry/etc tunables on a per-backend
basis. If not passed in, the global settings are used.
Logs any backgrounded requests that resulted in an error.
Note that this may be a temporary interface, and could be deprecated in
the future.
Uses mocked backend servers so we can test:
- end-to-end client-to-backend proxying
- Lua API functions
- configuration reload
- various error conditions
We want to start using cache commands in contexts without a client
connection, but the client object has always been passed to all
functions.
In most cases we only need the worker thread (LIBEVENT_THREAD *t), so
this change adjusts the arguments passed in.
mcp.await(request, pools, 0, mcp.AWAIT_BACKGROUND) will, instead of
waiting on any request to return, simply return an empty table as soon
as the background requests are dispatched.
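A minimal route-handler sketch (the function shape and pool names are
illustrative):

function route(r)
    -- fire-and-forget copies to backup pools; the await returns immediately
    mcp.await(r, { backup1, backup2 }, 0, mcp.AWAIT_BACKGROUND)
    -- serve the client from the main pool as usual
    return main(r)
end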
"mg" required at least one flag. now "mg key" returns a bare HD on hit
if you don't care about the value.
HD modes would reflect O and k flags in the response, but EN didn't.
This is now fixed for full coverage.
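A sketch of the resulting exchanges (key names illustrative):

mg foo
HD
mg missing
EN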
For HD/NF/etc responses, but not VA.
Was returning "HD \r\n" and "HD Oetc\r\n", which is not to protocol spec.
At least FreeBSD has perl in /usr/local/bin/perl and no symlink by
default.
Allows using tagged listeners (ex: `-l tag[test]:127.0.0.1:11212`) to
select a top level route for a function.
Expects that there won't be dozens of listeners; for a handful this will
be faster than a hash table lookup.
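A sketch of pairing the listener tag with a route (this assumes
mcp.attach() accepts the tag as an optional third argument; the handler
names are illustrative):

mcp.attach(mcp.CMD_ANY_STORAGE, test_route, "test") -- only connections from tag[test]
mcp.attach(mcp.CMD_ANY_STORAGE, default_route)      -- all other listeners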
Returns (exists, previous_token).
An optional second argument will replace the flag/token with the
supplied flag/token, or remove it entirely if "" is passed.
The function accepts a flag and returns (bool, token|nil).
The bool indicates whether the flag exists; if the flag has a token, it
is returned as the second value instead of nil.
Adds resp:code(), which returns a code you can compare against
mcp.MCMC_CODE_*.
Adds resp:line(), which returns the raw response line after the command.
These can be used from Lua while the flag/token API is missing for the
response object.
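A sketch of both accessors in a route handler (the specific MCMC_CODE_*
constant, pool names, and the fallback pattern are illustrative
assumptions):

local res = main(r)
if res:code() == mcp.MCMC_CODE_MISS then
    -- retry a miss against another pool
    res = backup(r)
end
-- res:line() exposes the raw response line for ad-hoc parsing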
Adds an mcp.request function for quickly checking whether a flag exists
in a request string.
Also updates the internal code for checking the length of a token to use
the endcap properly, and uses that for r:token(n) requests as well,
which fixes a subtle bug where the token length was too long.
- errors if a string value is missing the "\r\n" terminator
- properly uses a value from a response object
- allows passing in a request object for the value as well
- also adds r:vlen() for recovering the value length of a response
I think this still needs r:flags() or similar?
Lua-level API for logging the full context of a request/response.
Provides log_req() for simple logging and log_reqsample() for
conditional logging.
Previously mcp.await() only worked if it was called before any other
dispatches.
Also fixes a bug when the supplied pool table was key=value instead of
an array-type table.
This is ketama-based, with options for minor compatibility changes to
match major libraries.
It does _not_ support weights. The weights bits in the original ketama
broke the algorithm, as changing the number of points would shift
unrelated servers when the list changed.
This also changes backends to take a "name" specifically, instead of an
"ip address". Note that if a hostname is supplied instead of an IP,
there may be inline DNS lookups on reconnects.
Allows tests to run faster, and lets users make it sleep for more or
less time. Also cuts the sleep time down when actively compacting after
coming from high idle.
|