path: root/COMMITTERS.md
author    Nick Vatamaniuc <vatamane@apache.org>    2020-06-04 17:57:41 -0400
committer Nick Vatamaniuc <vatamane@apache.org>    2020-06-08 01:14:52 -0400
commit    a71d63741ca8abaa04097b3373c1a2ec21595709 (patch)
tree      52d0d28293d69ce139a19486848f0577dfcfb358 /COMMITTERS.md
parent    19ae50815ca1016719f94a2757e08757b37fb949 (diff)
download  couchdb-optimize-couch-views-acceptors.tar.gz
Split couch_views acceptors and workers (branch: optimize-couch-views-acceptors)
Optimize couch_views by using a separate set of acceptors and workers. Previously, all `max_workers` workers were spawned on startup and waited to accept jobs in parallel. In a setup with a large number of pods, and 100 workers per pod, that could lead to a lot of conflicts being generated when all those workers raced to accept the same job at the same time.

The improvement is to spawn only a limited number of acceptors (5, by default), then spawn more after some of them become workers. Also, when some workers finish or die with an error, check whether more acceptors can be spawned.

As an example, here is what might happen with `max_acceptors = 5` and `max_workers = 100` (`A` and `W` are the current counts of acceptors and workers, respectively); a minimal sketch of the spawning rule follows this list:

1. Starting out: `A = 5, W = 0`
2. After 2 acceptors start running: `A = 3, W = 2`. Then immediately 2 more acceptors are spawned: `A = 5, W = 2`
3. After 95 workers are started: `A = 5, W = 95`
4. Now if 3 acceptors accept, it would look like: `A = 2, W = 98`. But no more acceptors would be started.
5. If the last 2 acceptors also accept jobs: `A = 0, W = 100`. At this point no more indexing jobs can be accepted and started until at least one of the workers finishes and exits.
6. If 1 worker exits: `A = 0, W = 99`. An acceptor will be immediately spawned: `A = 1, W = 99`
7. If all 99 workers exit, it will go back to: `A = 5, W = 0`
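
To make the arithmetic above concrete, here is a minimal Erlang sketch (not the actual `couch_views` code; the module and function names are hypothetical) of a rule that reproduces the transitions in the example: acceptors are capped at `max_acceptors`, and acceptors plus workers together are capped at `max_workers`.

```erlang
%% Hypothetical sketch, not the actual couch_views implementation.
%% Computes how many new acceptors to spawn given the current counts:
%% acceptors are capped at MaxAcceptors, and Acceptors + Workers together
%% are capped at MaxWorkers, so acceptance stalls once the worker pool fills.
-module(acceptor_sketch).
-export([acceptors_to_spawn/4]).

acceptors_to_spawn(Acceptors, Workers, MaxAcceptors, MaxWorkers) ->
    FreeSlots = max(0, MaxWorkers - Workers - Acceptors),
    min(MaxAcceptors - Acceptors, FreeSlots).
```

Plugging in states from the example: `acceptors_to_spawn(3, 2, 5, 100)` returns `2` (step 2), `acceptors_to_spawn(2, 98, 5, 100)` returns `0` (step 4), and `acceptors_to_spawn(0, 99, 5, 100)` returns `1` (step 6).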
Diffstat (limited to 'COMMITTERS.md')
0 files changed, 0 insertions, 0 deletions