In my experiments I encountered situations where rabbit would not
recover from a high memory alert even though all messages had been
drained from it. By inspecting the running processes I determined that
queue and channel processes sometimes hung on to garbage. Erlang's gc
is per-process and triggered by process reduction counts, which means
an idle process will never perform a gc. This explains the behaviour:
the publisher channel goes idle when channel flow control is activated,
and the queue process goes idle once all messages have been drained
from it.

Hibernating idle processes forces a gc, as well as generally reducing
memory consumption. Currently only channel and queue processes are
hibernated, since these are the only two that seemed to be causing
problems in my tests. We may want to extend hibernation to other
processes in the future.
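A minimal sketch of the idea as a plain OTP gen_server (module name and
timeout figure are illustrative, not the actual rabbit code): after a
period of inactivity the idle timeout fires and the callback returns
'hibernate', which makes gen_server fully garbage-collect and compact
the process before it goes to sleep.

    -module(idle_hibernate).
    -behaviour(gen_server).
    -export([start_link/0]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             terminate/2, code_change/3]).

    -define(HIBERNATE_AFTER, 1000). %% ms of inactivity; an assumed figure

    start_link() -> gen_server:start_link(?MODULE, [], []).

    init([]) -> {ok, no_state, ?HIBERNATE_AFTER}.

    %% Every piece of activity re-arms the idle timeout.
    handle_call(_Request, _From, State) ->
        {reply, ok, State, ?HIBERNATE_AFTER}.

    handle_cast(_Msg, State) -> {noreply, State, ?HIBERNATE_AFTER}.

    %% The idle timeout fired: returning 'hibernate' forces a full gc
    %% and shrinks the process before it sleeps.
    handle_info(timeout, State) -> {noreply, State, hibernate};
    handle_info(_Info, State)   -> {noreply, State, ?HIBERNATE_AFTER}.

    terminate(_Reason, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.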
The default 80% is just too low for many systems; I have less than
that on tanto most of the time. It remains to be seen whether the new
figure works OK for most users.
The former triggered errors in the latter.
The buffering_proxy:mainloop was unconditionally requesting new
messages from the proxy. It should only do that when it has just
finished handling the messages given to it by the proxy in response to
a previous request, and not after handling a direct message.
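A sketch of the corrected control flow, with illustrative message
shapes and a stub handler (not the real buffering_proxy module): the
request for more messages goes out only once a proxied batch has been
fully handled, never in response to a direct message.

    -module(buffering_sketch).
    -export([mainloop/2]).

    mainloop(ProxyPid, State) ->
        receive
            {ProxyPid, messages, Messages} ->
                State1 = lists:foldl(fun handle_message/2, State, Messages),
                %% Only now, having drained the batch sent in response
                %% to our previous request, do we ask for more.
                ProxyPid ! {self(), more},
                mainloop(ProxyPid, State1);
            DirectMessage ->
                %% A direct message must not trigger a new request;
                %% doing so unconditionally was the bug.
                mainloop(ProxyPid, handle_message(DirectMessage, State))
        end.

    %% Stub for application-specific processing.
    handle_message(_Message, State) -> State.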
while building on Debian systems. Unfortunately
.spec doesn't have 'not' logic.
When we fire off lots of gen_server:calls in parallel, we may create
enough work for the VM to cause the calls to time out, since the
amount of work that can actually be done in parallel is finite. The
fix is to adjust the timeout based on the total workload.
Alternatively we could have no timeout at all, but that is bad Erlang
style since a small error somewhere could result in stuck processes.

I moved the parallelisation - and hence timeout modulation - from the
channel into the amqqueue module, changing the API in the process:
commit, rollback and notify_down now all operate on lists of QPids
(and I've renamed the functions to make that clear). The alternative
would have been to add Timeout params to these three functions, but I
reckon the API is cleaner this way, particularly considering that
rollback doesn't actually do a call - it does a cast and hence doesn't
require a timeout - so in the alternative API we'd either have to
expose that fact indirectly by not having a Timeout param, or have a
bogus Timeout param, neither of which is particularly appealing.

I considered making the functions take sets instead of lists, since
that's what the channel code produces, plus sets have a more efficient
length operation. However, API-wise I reckon lists are nicer, plus it
means I can give a more precise type to dialyzer - sets would be
opaque and non-polymorphic.
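A hedged sketch of the timeout-scaling idea (module, function and base
figure are illustrative, not the actual amqqueue API): each parallel
gen_server:call gets a deadline proportional to the total number of
calls in flight.

    -module(scaled_calls).
    -export([call_all/2]).

    -define(BASE_TIMEOUT, 5000). %% ms per call; an assumed figure

    %% Fire Request at every queue process in parallel, scaling each
    %% call's timeout with the size of the workload, and collect the
    %% replies in order.
    call_all(QPids, Request) ->
        Timeout = ?BASE_TIMEOUT * length(QPids),
        Parent = self(),
        Tagged = [{make_ref(), QPid} || QPid <- QPids],
        [spawn_link(fun () ->
                            Parent ! {Ref, gen_server:call(QPid, Request,
                                                           Timeout)}
                    end) || {Ref, QPid} <- Tagged],
        [receive {Ref, Reply} -> Reply end || {Ref, _QPid} <- Tagged].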
We propagate changes in the high memory alarm status as channel.flow
messages to the client. The channel.flow_ok replies are simply
accepted.
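A sketch of the mapping, with the method record defined locally for
illustration (in the broker it comes from the generated rabbit_framing
headers, and the writer's send_command/2 is assumed to frame and send
the method): a raised alarm sends channel.flow with active=false, and
a cleared alarm re-enables publishing.

    %% Illustrative stand-in for the generated AMQP method record.
    -record('channel.flow', {active}).

    %% Called with the new alarm state; WriterPid is the connection's
    %% writer process. The client's channel.flow_ok reply is accepted
    %% elsewhere and otherwise ignored.
    handle_memory_alarm(Alarmed, WriterPid) ->
        ok = rabbit_writer:send_command(
               WriterPid, #'channel.flow'{active = not Alarmed}).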
This now supports the registration of alertee processes with callback
MFAs. We monitor the alertee process to keep the alertee list current,
and notify alertees of initial high memory conditions, and any
changes.
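A minimal sketch of such a handler as a gen_event callback module
(module name and message shapes are illustrative): alertees register
an {M, F, A}; we monitor them so the list stays current, tell new
registrants about an already-raised alarm, and broadcast every change
by applying the MFA with the new alarm state appended.

    -module(mem_alarm_sketch).
    -behaviour(gen_event).
    -export([init/1, handle_call/2, handle_event/2, handle_info/2,
             terminate/2, code_change/3]).

    -record(state, {alarmed = false, alertees = dict:new()}).

    init([]) -> {ok, #state{}}.

    %% Register an alertee; monitor it so it is removed when it dies,
    %% and notify it immediately if the alarm is already raised.
    handle_call({register, Pid, MFA = {M, F, A}}, State) ->
        erlang:monitor(process, Pid),
        case State#state.alarmed of
            true  -> apply(M, F, A ++ [true]);
            false -> ok
        end,
        Alertees = dict:store(Pid, MFA, State#state.alertees),
        {ok, ok, State#state{alertees = Alertees}}.

    %% memsup raises and clears this alarm around the high watermark.
    handle_event({set_alarm, {system_memory_high_watermark, []}}, State) ->
        {ok, notify(true, State#state{alarmed = true})};
    handle_event({clear_alarm, system_memory_high_watermark}, State) ->
        {ok, notify(false, State#state{alarmed = false})};
    handle_event(_Event, State) ->
        {ok, State}.

    %% Drop alertees that have terminated.
    handle_info({'DOWN', _MRef, process, Pid, _Reason}, State) ->
        {ok, State#state{alertees = dict:erase(Pid, State#state.alertees)}};
    handle_info(_Info, State) ->
        {ok, State}.

    terminate(_Arg, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.

    notify(Alarmed, State) ->
        dict:fold(fun (_Pid, {M, F, A}, Acc) ->
                          apply(M, F, A ++ [Alarmed]),
                          Acc
                  end, ok, State#state.alertees),
        State.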
so we can later experiment with different effects
configure memsup and hook in our own alarm handler
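A sketch under those assumptions (0.95 is an example figure, and
mem_alarm_sketch is the illustrative handler above): raise memsup's
system-memory watermark and swap the default alarm_handler for our
own, which is the documented way to replace OTP's simple default
handler.

    %% Somewhere during broker startup, with os_mon running:
    start_memory_monitor() ->
        ok = memsup:set_sysmem_high_watermark(0.95),
        ok = gen_event:swap_handler(alarm_handler,
                                    {alarm_handler, swap},
                                    {mem_alarm_sketch, []}).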
the status of the node. There is no need to print the node's apps.
now use that command.
Used the status command in the init.d scripts to check
whether the server is running before stopping it.
Fixed various indentation problems in the init.d scripts.
Synchronized the Debian and RPM init.d scripts to behave
in a similar way.
it to chkconfig.
Debian packaging has the same license as the
broker itself.
directory of the rabbitmq-multi.bat script
in the rotate_logs command. Updated the usage() function.
Reverted a change in rabbitmq-server.spec that
didn't belong to this bug.
the command on a specific node returns an error.
Display the error message in that case only.