path: root/drivers/md/bcache/request.h
* bcache: Kill dead cgroup code
  Kent Overstreet, 2014-03-18 (1 file changed, -18/+0)

  This hasn't been used or even enabled in ages.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Fix moving_gc deadlocking with a foreground write
  Nicholas Swenson, 2014-03-18 (1 file changed, -0/+1)

  The deadlock happened because a foreground write slept waiting for a
  bucket to be allocated. Normally the gc would mark buckets available
  for invalidation, but moving_gc was stuck waiting for outstanding
  writes to complete. Those writes used bcache_wq, the same workqueue
  the foreground writes used.

  This fix gives moving_gc its own workqueue, so it can still finish
  moving even if foreground writes are stuck waiting for allocation.
  It also makes the workqueue a parameter to the data_insert path, so
  moving_gc can use its own workqueue for writes.

  Signed-off-by: Nicholas Swenson <nks@daterainc.com>
  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
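  A minimal sketch of the dedicated-workqueue idea, not the actual
  bcache patch: moving_gc writes go on their own queue so they can make
  progress even when the shared queue is backed up behind stalled
  foreground writes. The names moving_gc_wq and data_insert_op here are
  illustrative assumptions.

      #include <linux/workqueue.h>

      static struct workqueue_struct *moving_gc_wq;

      struct data_insert_op {
              struct work_struct      work;
              struct workqueue_struct *wq;    /* caller-chosen queue */
      };

      static void data_insert_start(struct data_insert_op *op)
      {
              /* Each caller decides which queue its insert runs on. */
              queue_work(op->wq, &op->work);
      }

      static int __init moving_gc_init(void)
      {
              /* A separate WQ_MEM_RECLAIM queue keeps gc making
               * progress independently of the shared bcache_wq. */
              moving_gc_wq = alloc_workqueue("moving_gc",
                                             WQ_MEM_RECLAIM, 0);
              return moving_gc_wq ? 0 : -ENOMEM;
      }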
* bcache: Zero less memory
  Kent Overstreet, 2014-01-08 (1 file changed, -8/+13)

  Another minor performance optimization.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Move sector allocator to alloc.c
  Kent Overstreet, 2013-11-10 (1 file changed, -4/+1)

  Just reorganizing things a bit.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Break up struct search
  Kent Overstreet, 2013-11-10 (1 file changed, -29/+8)

  With all the recent refactoring around struct btree_op, struct search
  has gotten rather large. But we can now easily break it up in a
  different way: we break out struct btree_insert_op, which is for
  inserting data into the cache, and that's now what the copying gc
  code uses. struct search is now specific to request.c.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
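  A sketch of the shape of that split, with illustrative members only
  (the real fields live in request.h; the op name follows the log
  entry above):

      struct btree_insert_op {                /* insert-only state */
              struct keylist  insert_keys;    /* data keys to insert */
              /* ... flags controlling the insert ... */
      };

      struct search {                         /* now private to request.c */
              struct closure          cl;
              struct btree_insert_op  iop;    /* embedded insert state */
              /* ... request-handling state ... */
      };

  The copying gc can then drive an insert with just the small embedded
  op, without carrying a full struct search around.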
* bcache: Don't use op->insert_collision
  Kent Overstreet, 2013-11-10 (1 file changed, -0/+1)

  When we convert bch_btree_insert() to bch_btree_map_leaf_nodes(), we
  won't be passing struct btree_op to bch_btree_insert() anymore - so
  we need a different way of returning whether there was a collision
  (really, a replace collision).

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
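  A self-contained sketch (invented names, a toy one-slot "btree", and
  an assumed error code) of the alternative: report the replace
  collision through the return value instead of a struct btree_op
  field.

      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      static char slot[16] = "old";   /* stand-in for the key on disk */

      /* The replace succeeds only if the current value still matches
       * what the caller expects; otherwise the caller learns of the
       * collision directly from the return value. */
      static int replace_key(const char *expect, const char *new_val)
      {
              if (strcmp(slot, expect))
                      return -ESRCH;  /* replace collision */
              snprintf(slot, sizeof(slot), "%s", new_val);
              return 0;
      }

      int main(void)
      {
              printf("%d\n", replace_key("old", "new"));      /* 0 */
              printf("%d\n", replace_key("old", "newer"));    /* collision */
              return 0;
      }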
* bcache: Kill op->replace
  Kent Overstreet, 2013-11-10 (1 file changed, -0/+2)

  This is prep work for converting bch_btree_insert to
  bch_btree_map_leaf_nodes() - we have to convert all its arguments to
  actual arguments. Bunch of churn, but should be straightforward.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
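  The conversion pattern, sketched with assumed prototypes (both
  signatures here are illustrative, not the exact ones from the tree):

      /* Before: the replace key rides along implicitly on the op. */
      int bch_btree_insert(struct btree_op *op, struct cache_set *c,
                           struct keylist *keys);
      /* ... op->replace told the insert this was a replace ... */

      /* After: everything the insert needs travels as an actual
       * argument, so no struct btree_op is required at all. */
      int bch_btree_insert(struct cache_set *c, struct keylist *keys,
                           atomic_t *journal_ref,
                           struct bkey *replace_key);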
* bcache: Kill op->cl
  Kent Overstreet, 2013-11-10 (1 file changed, -0/+1)

  This isn't used for waiting asynchronously anymore - so this is a
  fairly trivial refactoring.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Prune struct btree_op
  Kent Overstreet, 2013-11-10 (1 file changed, -0/+14)

  Eventual goal is for struct btree_op to contain only what is
  necessary for traversing the btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Convert bch_btree_read_async() to bch_btree_map_keys()
  Kent Overstreet, 2013-11-10 (1 file changed, -2/+0)

  This is a fairly straightforward conversion, mostly reshuffling -
  op->lookup_done goes away, replaced by MAP_DONE/MAP_CONTINUE. And the
  code for handling cache hits and misses wasn't really btree code, so
  it gets moved to request.c.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
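  A self-contained userspace analogue of the pattern: the iterator maps
  a callback over keys, and the callback's return value, MAP_DONE or
  MAP_CONTINUE, replaces the old op->lookup_done flag. Only the two
  constant names come from the log above; everything else is invented
  for illustration.

      #include <stdio.h>

      #define MAP_DONE     0
      #define MAP_CONTINUE 1

      typedef int (*map_keys_fn)(int key);

      static void map_keys(const int *keys, int n, map_keys_fn fn)
      {
              for (int i = 0; i < n; i++)
                      if (fn(keys[i]) == MAP_DONE)
                              return; /* callback says lookup finished */
      }

      static int lookup_fn(int key)
      {
              printf("visiting key %d\n", key);
              return key >= 30 ? MAP_DONE : MAP_CONTINUE;
      }

      int main(void)
      {
              int keys[] = { 10, 20, 30, 40 };
              map_keys(keys, 4, lookup_fn);   /* stops after key 30 */
              return 0;
      }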
* bcache: Move keylist out of btree_op
  Kent Overstreet, 2013-11-10 (1 file changed, -1/+3)

  Slowly working on pruning struct btree_op - the aim is for it to only
  contain things that are actually necessary for traversing the btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Refactor journalling flow control
  Kent Overstreet, 2013-11-10 (1 file changed, -2/+1)

  This makes things that don't need to be asynchronous less so -
  bch_journal() only has to block when the journal or journal entry is
  full, which is emphatically not a fast path. So make it a normal
  function that just returns when it finishes, to make the code and
  control flow easier to follow.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
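  A userspace sketch of that synchronous shape, with pthreads standing
  in for the kernel's wait primitives and all names invented: callers
  block only on the slow path (entry full), otherwise they append and
  simply return.

      #include <pthread.h>

      #define JOURNAL_SLOTS 128

      static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  space = PTHREAD_COND_INITIALIZER;
      static unsigned used;           /* slots taken in current entry */

      void journal_add(unsigned nkeys)
      {
              pthread_mutex_lock(&lock);
              while (used + nkeys > JOURNAL_SLOTS)    /* slow path */
                      pthread_cond_wait(&space, &lock);
              used += nkeys;          /* fast path: append and return */
              pthread_mutex_unlock(&lock);
      }

      void journal_write_done(unsigned nkeys)
      {
              pthread_mutex_lock(&lock);
              used -= nkeys;
              pthread_cond_broadcast(&space); /* wake blocked callers */
              pthread_mutex_unlock(&lock);
      }

      int main(void)
      {
              journal_add(100);       /* room available: no waiting */
              journal_write_done(100);
              return 0;
      }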
* bcache: Fix/revamp tracepoints
  Kent Overstreet, 2013-06-26 (1 file changed, -1/+1)

  The tracepoints were reworked to be more sensible, and a null pointer
  deref in one of them was fixed.

  Converted some of the pr_debug()s to tracepoints - this is partly a
  performance optimization. It used to be that without DEBUG or
  CONFIG_DYNAMIC_DEBUG, pr_debug() was an empty macro; but at some
  point it was changed to an empty inline function. Some of the
  pr_debug() statements had rather expensive function calls as part of
  their arguments, so this code was getting run unnecessarily even on
  non-debug kernels - in some fast paths, too.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>
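  A self-contained illustration of the cost being described: with an
  empty inline function, the (possibly expensive) arguments are still
  evaluated, whereas an empty macro discards them entirely. This is a
  generic demo, not the kernel's pr_debug() definition.

      #include <stdio.h>

      static inline void debug_inline(const char *fmt, int v)
      {
              (void)fmt; (void)v;     /* does nothing, but args exist */
      }

      #define debug_macro(fmt, v) do { } while (0)   /* args discarded */

      static int expensive(void)
      {
              puts("expensive() ran");        /* visible side effect */
              return 42;
      }

      int main(void)
      {
              debug_inline("val %d\n", expensive());  /* still runs */
              debug_macro("val %d\n", expensive());   /* never evaluated */
              return 0;
      }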
* bcache: A block layer cache
  Kent Overstreet, 2013-03-23 (1 file changed, -0/+62)

  Does writethrough and writeback caching, handles unclean shutdown,
  and has a bunch of other nifty features motivated by real world
  usage. See the wiki at http://bcache.evilpiepirate.org for more.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>