| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
This fixes a94e693f32672e4613bce0d80d0b9660f85275ea because a race
condition existed where the 'DOWN' message could be received
before the compactor pid was spawned. Adding a synchronous call to
get the compactor pid guarantees that the couch_db_updater process
has finished handling finish_compaction.
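A minimal sketch of the barrier idea, assuming a hypothetical `get_compactor_pid` request handled by the couch_db_updater gen_server (the real request name may differ):
```erlang
%% Sketch only: the request name is an assumption, not the actual
%% couch_db_updater API. Because a gen_server handles its mailbox in order,
%% this synchronous call cannot return until any earlier finish_compaction
%% message has been processed, so it acts as a barrier for the caller.
sync_compactor_pid(DbUpdaterPid) ->
    gen_server:call(DbUpdaterPid, get_compactor_pid).
```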
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Smoosh monitors the compactor pid to determine when a compaction job
finishes, and uses this for its idea of concurrency. However, this isn't
accurate when a compaction job has to re-spawn to catch up on intervening
changes, since the same logical compaction job continues with another pid
and smoosh is not aware of it. In such cases, a smoosh channel with
concurrency one can start arbitrarily many additional database compaction jobs.
To solve this problem, we added a check in `start_compact` to see if a
compaction PID already exists for a db. But we need another check, because
that one only applies to shards coming off the queue. So the following can still occur:
1. Enqueue a bunch of stuff into a channel with concurrency 1
2. Begin the highest priority job, Shard1, in the channel
3. Compaction finishes, and discovers the compaction file is behind the main file
4. The smoosh-monitored PID for Shard1 exits, and a new one starts to finish the job
5. Smoosh receives the 'DOWN' message and begins the next highest priority job,
Shard2
6. Channel concurrency is now 2, not 1
This change adds another check to the 'DOWN' message handler so that it checks
for that specific shard. If a compaction PID exists, it means a new process
was spawned; we monitor that one instead and add the shard back to the queue.
The length of the queue does not change, and therefore we won't spawn new
compaction jobs.
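A rough sketch of the 'DOWN' handling described above; `LookupCompactorPid` stands in for whatever smoosh uses to ask whether the shard already has a replacement compactor, so the names here are illustrative, not the real smoosh API:
```erlang
%% Illustrative sketch, not smoosh's actual code. When a monitored compactor
%% exits, check whether the shard already has a replacement compactor pid.
%% If it does, monitor the new pid and keep the job counted as active;
%% otherwise the concurrency slot is genuinely free for the next job.
handle_down(Shard, LookupCompactorPid) when is_function(LookupCompactorPid, 1) ->
    case LookupCompactorPid(Shard) of
        undefined ->
            start_next_job;
        NewPid when is_pid(NewPid) ->
            erlang:monitor(process, NewPid),
            {keep_active, NewPid}
    end.
```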
|
|\
| |
| | |
Retry filter_docs sequentially if the batch exceeds the couchjs stack
|
| | |
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
| |
A document with lots of conflicts can blow up couchjs if the user
calls _changes with a javascript filter and with `style=all_docs`, as
this option causes us to fetch all the conflicts.
All leaf revisions of the document are then passed in a single call to
ddoc_prompt, which can fail if there are a lot of them.
In that event, we simply try them sequentially and assemble the
response from each call.
Should be backported to 3.x
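A small sketch of the retry strategy (not the actual couch_query_servers code); `PromptFun` stands in for the call into couchjs and is assumed to return a list of results:
```erlang
%% Sketch only: try to filter the whole batch of leaf revisions in a single
%% prompt; if that call fails (e.g. the couchjs stack is exceeded), fall
%% back to prompting one document at a time and stitch the results together.
filter_docs_sketch(PromptFun, Docs) ->
    try
        PromptFun(Docs)
    catch
        _:_ ->
            lists:append([PromptFun([Doc]) || Doc <- Docs])
    end.
```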
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we subtly relied on one set of headers being sorted, then sorted the
other set of headers, and ran `lists:ukeymerge/3`. That function, however,
needs both arguments to be sorted for it to work as expected. If one
argument wasn't sorted we could easily get duplicate headers, which is what was
observed in testing.
A better fix than just sorting both sets of keys is to use an actual
header-processing library to combine them, so we can account for case
insensitivity as well.
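A sketch of the combined approach using mochiweb's header handling (assuming mochiweb is on the code path; the function below is illustrative, not the exact code from this change). Headers in `Overrides` win over those in `Base`, and keys are matched case-insensitively:
```erlang
%% Fold the override headers into a mochiweb_headers structure built from
%% the base headers; enter/3 replaces an existing key regardless of case,
%% so neither input list needs to be pre-sorted.
merge_headers(Base, Overrides) ->
    H0 = mochiweb_headers:make(Base),
    H1 = lists:foldl(
           fun({K, V}, Acc) -> mochiweb_headers:enter(K, V, Acc) end,
           H0,
           Overrides),
    mochiweb_headers:to_list(H1).
```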
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
| |
Co-authored-by: mauroporras <mauroporrasc@gmail.com>
|
|
|
|
|
|
| |
e.g.:
[jwt]
required_claims = {iss, "https://example.com/issuer"}
|
| |
|
| |
|
|
|
| |
We need to call StartFun as it might add headers, etc.
|
| |
|
| |
|
|
|
|
|
|
| |
Previously an error was thrown, which prevented emitting _scheduler/docs
responses. Instead of throwing an error, return `null` if the URL cannot be
parsed.
|
|\
| |
| | |
Add option to delay responses until the end
|
|/
|
|
|
|
|
|
|
|
|
| |
When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.
The config value can be changed at runtime without impacting any
in-flight responses.
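A minimal sketch of how such a flag can be honoured per response so that runtime changes only affect requests that start afterwards; the "chttpd"/"buffer_response" section and key names are assumptions based on the description above:
```erlang
%% Read the flag once at the start of each response. In-flight responses
%% keep the value they started with, so flipping the setting at runtime
%% does not disturb them.
buffer_response_enabled() ->
    config:get_boolean("chttpd", "buffer_response", false).
```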
|
| |
|
|
|
|
|
|
| |
This will only report "fips" in the welcome message if FIPS mode
was enabled at boot (i.e., in vm.args).
Co-authored-by: Robert Newson <rnewson@apache.org>
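A sketch of the kind of check involved, using the standard OTP crypto API (the welcome-message wiring itself is omitted):
```erlang
%% crypto:info_fips/0 reports 'enabled' only when the emulator was started
%% with FIPS mode turned on, which is why the flag cannot be toggled after
%% boot.
fips_enabled() ->
    crypto:info_fips() =:= enabled.
```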
|
|\
| |
| | |
3.x backport: Allow search index cleanup to continue even if there is an invalid ddoc
|
|/
|
|
|
|
|
|
| |
In some situations where a search design document created by a customer
is not valid, the _search_cleanup endpoint stops cleaning up, leaving
some search indexes orphaned. This change allows search index cleanup to
continue even if an invalid search design document is present.
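A sketch of the general approach (not the actual dreyfus code): wrap the per-ddoc work in a try/catch so one invalid design document is logged and skipped rather than aborting the whole cleanup pass. `CleanupFun` is a stand-in for the real per-ddoc cleanup:
```erlang
%% Illustrative only. Each design document is handled independently; a
%% failure is reported and cleanup moves on to the next one.
cleanup_all(DDocs, CleanupFun) ->
    lists:foreach(
      fun(DDoc) ->
          try
              CleanupFun(DDoc)
          catch
              Tag:Reason ->
                  couch_log:warning(
                    "skipping invalid ddoc during search cleanup: ~p:~p",
                    [Tag, Reason])
          end
      end,
      DDocs).
```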
|
|
|
|
|
|
|
|
|
| |
When partition_query_limit is set for couch_mrview, it limits how many
docs can be scanned when executing partitioned queries. However, it also
limited mango's internal doc scans, so not enough documents were scanned
to fulfill a query. This fixes:
https://github.com/apache/couchdb/issues/2795
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
According to https://docs.couchdb.org/en/master/ddocs/search.html there
are parameters for searches that are not allowed for partitioned queries.
Those restrictions were not enforced, thus making the software and docs
inconsistent.
This commit adds them to validation so that the behavior matches the one
described in the docs.
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To use multiple `drilldown` parameters, users had to define
`drilldown` multiple times in order to supply them.
This caused interoperability issues, as most languages require
query parameters and request bodies to be defined as associative
arrays, maps or dictionaries where the keys are unique.
This change enables defining `drilldown` as a list of lists so
that other languages can define multiple drilldown keys and values.
Co-authored-by: Robert Newson <rnewson@apache.org>
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
|
|\
| |
| | |
update dev/run formatting to adhere to python format checks
|
|/ |
|
|
|
| |
Co-authored-by: Robert Newson <rnewson@apache.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If we don't handle it, it throws an error when trying to encode the full URL
string, for example:
```
badarg,[
{mochiweb_util,quote_plus,2,[{file,"src/mochiweb_util.erl"},{line,192}]},
{couch_replicator_httpc,query_args_to_string,2,[{file,"src/couch_replicator_httpc.erl"},{line,421}]},
{couch_replicator_httpc,full_url,2,[{file,"src/couch_replicator_httpc.erl"},{line,413}]},
{couch_replicator_api_wrap,open_doc_revs,6,[{file,"src/couch_replicator_api_wrap.erl"},{line,255}]}
]
```
This is also similar to what we did for open_revs encoding: https://github.com/apache/couchdb/commit/a2d0c4290dde2015e5fb6184696fec3f89c81a4b
|
|\
| |
| | |
Don't crash couch_index_server if the db isn't known yet
|
|/
|
|
|
|
|
| |
If a ddoc is added immediately after database creation (_users and
_replicator when couchdb is used in a multi-tenant fashion), we can
crash couch_index_server in handle_db_event, as mem3_shards:local
throws an error.
|
|\
| |
| | |
Validate shard specific query params on db create request
|
|/ |
|
|\
| |
| | |
Unlink index pid and swallow EXIT message if present
|
|/
|
|
|
|
|
|
|
| |
This should prevent unexpected exit messages arriving which crash
couch_index_server.
Patch suggested by davisp.
Closes #3061.
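A sketch of the standard unlink-and-flush pattern this refers to (generic Erlang, not the exact couch_index code):
```erlang
%% After unlink/1 an 'EXIT' message from the pid may already be sitting in
%% the caller's mailbox (assuming the caller traps exits), so receive it
%% with a zero timeout; then it can never surface later and crash the
%% process.
unlink_and_flush(Pid) ->
    unlink(Pid),
    receive
        {'EXIT', Pid, _Reason} -> ok
    after 0 ->
        ok
    end.
```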
|
|
|
|
|
|
|
| |
* fix: send CSP header to make Fauxton work fully
Co-authored-by: Robert Newson <rnewson@apache.org>
* Remove accidental chttpd_auth.erl.orig commit
|
| |
|