sort of like {node_up, Node, NodeType}. It's not perfect, but it's the best we're going to get.
committed as part of f1317bb80df9 (bug 25358) by mistake. Remove.
gone down before net_ticktime expires (it will hang until then instead).
Mnesia for majorityness.
come back but also be waiting (in the no-majority case, and for RAM nodes). Better to detect that they exist and have come back than to stay stuck because they don't happen to be running Mnesia.
    =INFO REPORT==== 27-Feb-2013::14:17:46 ===
        application: mnesia
        exited: stopped
        type: temporary

since they are not very interesting and this bug makes them appear to a highly verbose extent.
come back.
backported from bug23749 branch
- the call to update_ch_record in the is_ch_blocked(C1) == false
  branch was superfluous, since the preceding update_consumer_count
  calls update_ch_record
- all the checking of whether the channel is blocked, and the
  associated branching, was just an optimisation. And not a
  particularly important one, since a) the "a new consumer comes along
  while its channel is blocked" case is hardly on the critical path,
  and b) exactly the same check is performed as part of
  run_message_queue (in deliver_msg_to_consumer/3), as sketched
  below. So get rid of it.
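A minimal sketch of that redundancy, using hypothetical module, function and data names rather than the real rabbit_amqqueue_process code: the per-message delivery path already checks whether the consumer's channel is blocked, so checking it again when a consumer is registered only duplicates work on a non-critical path.

    %% Hypothetical sketch only; names and data shapes are invented for
    %% illustration and do not match the real RabbitMQ queue process.
    -module(consumer_sketch).
    -export([run_message_queue/2]).

    is_ch_blocked(#{blocked := Blocked}) -> Blocked.

    %% The blocked check that actually matters: it runs for every
    %% message handed to a consumer while draining the queue.
    deliver_msg_to_consumer(#{channel := Ch} = _Consumer, Msg) ->
        case is_ch_blocked(Ch) of
            true  -> {undelivered, Msg};
            false -> {delivered, Msg}
        end.

    %% e.g. run_message_queue(#{channel => #{blocked => false}}, [m1, m2]).
    run_message_queue(Consumer, Msgs) ->
        [deliver_msg_to_consumer(Consumer, Msg) || Msg <- Msgs].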
ported from bug 23749
but have unsynced slaves at that point.
It makes no difference whether we call handle_exception before or
after control_throttle, so let's use an order that more clearly calls
out the similarity to the controlled exit case.
by re-introducing a call to control_throttle.

This was present in 3.0.x, and is needed there, but is actually not
needed here, due to other changes made in the area. But the reason is
quite subtle...

For control_throttle to do something here, the channel_cleanup would
have to be called for a channel for which we have run out of
credit. Now, credit is only consumed when sending content-bearing
methods to the channel with rabbit_channel:do_flow. There is a
subsequent invocation of control_throttle in the code which will set
the connection_state to 'blocking'. But this is immediately followed
by a call to post_process_frame. The frame we are looking at must be
the last frame of a content-bearing method, since otherwise the method
would not be complete and we wouldn't have passed it to the
channel. Hence that frame can only be a content_header or, more
likely, a content_body. For both of these, post_process_frame invokes
maybe_block, which will turn a 'blocking' into 'blocked'. And from
that point onwards we will no longer read anything from the socket or
process anything already in buf. So we certainly can't be processing a
channel.close_ok.

In other words, post_process_frame can only be invoked with a
channel.close_ok frame when the connection_state is 'running', or
blocking/blocked for a reason other than having run out of credit for
a channel, i.e. an alarm. Therefore forgetting about the channel as
part of the channel_cleanup call does not have an effect on
credit_flow:blocked(). And hence an invocation of control_throttle
will not alter the connection_state and is therefore unnecessary.
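A minimal sketch of the state transitions described above, with hypothetical function and state names (the real rabbit_reader state record and code differ): once the last unit of credit is consumed the connection moves to 'blocking', and post-processing the very next content frame moves it to 'blocked', after which nothing further is read from the socket.

    %% Hypothetical sketch; not the actual rabbit_reader implementation.
    -module(flow_sketch).
    -export([demo/0]).

    %% With no credit left, a running connection becomes 'blocking'.
    control_throttle(State = #{credit := 0, conn := running}) ->
        State#{conn := blocking};
    control_throttle(State) ->
        State.

    %% The frame that completes a content-bearing method is a
    %% content_header or content_body; post-processing it turns
    %% 'blocking' into 'blocked'.
    post_process_frame(Frame, State = #{conn := blocking})
      when Frame =:= content_header; Frame =:= content_body ->
        State#{conn := blocked};
    post_process_frame(_Frame, State) ->
        State.

    demo() ->
        S0 = #{credit => 0, conn => running},
        S1 = control_throttle(S0),
        S2 = post_process_frame(content_body, S1),
        %% Once 'blocked', no further frames (so no channel.close_ok)
        %% are processed in this state.
        maps:get(conn, S2).    %% => blocked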
- handle the no-op case (controlled exit of a channel we've forgotten
  about already) explicitly
- better clause order and formatting.
And on recover. And when the timer goes off. That's all we need.

new call sites:
- in deliver_or_enqueue/3, when enqueuing a message (that we couldn't
  deliver to a consumer straight away) with an expiry to the head of
  the queue. NB: previously we were always (re)setting a timer when
  enqueuing a message with an expiry, which is wasteful when the new
  message isn't at the head (i.e. the queue was non-empty) or when it
  needs expiring immediately. (See the sketch below.)
- requeue_and_run/2, since a message may get requeued to the
  head. This call site arises due to removal of the
  run_message_queue/1 call site (see below).

unchanged call sites:
- init_ttl/2 - this is the recovery case
- fetch/2, after fetching - this is the basic "queue head changes"
  case
- handle_info/drop_expired - this is the message expiry timer

removed call sites:
- run_message_queue/1 - this internally calls fetch/2 (see above) but
  also invokes drop_expired_msgs at the beginning. This now happens
  at the call sites where it is necessary, which is actually only
  requeue_and_run; none of the others change the queue content prior
  to calling run_message_queue/1
- possibly_unblock/3 - unblocking of consumers
- handle_call/basic_consumer - adding a consumer
- handle_call/basic_get, prior to the call to fetch/2.
- handle_call/stat
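A rough sketch of the head-of-queue check referenced in the first bullet above, with a hypothetical module name and message shape (not the real rabbit_amqqueue_process code): the expiry timer is only (re)armed when the message now at the head of the queue carries an expiry, and callers invoke it exactly when the head may have changed.

    %% Hypothetical sketch; message representation and names are
    %% invented for illustration.
    -module(ttl_sketch).
    -export([maybe_set_expiry_timer/1]).

    %% Arm a timer only when the head of the queue has an expiry.
    maybe_set_expiry_timer(Q) ->
        case queue:peek(Q) of
            empty ->
                no_timer;
            {value, #{expiry := ExpiresAt}} ->
                Delay = max(0, ExpiresAt - erlang:monotonic_time(millisecond)),
                %% fires a drop_expired message at (roughly) the expiry time
                erlang:send_after(Delay, self(), drop_expired);
            {value, _MsgWithoutExpiry} ->
                no_timer
        end.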
Since that would only be necessary if the BQ:invoke modified the
consumers, which it can't, or added messages to the queue, which it
shouldn't.