author | Vicent Marti <tanoku@gmail.com> | 2013-12-21 09:00:45 -0500 |
---|---|---|
committer | Junio C Hamano <gitster@pobox.com> | 2013-12-30 12:19:23 -0800 |
commit | ae4f07fbccaab6dc93be52c0f34e137dd9fcbcf4 (patch) | |
tree | 7318a55faf8d38e4317a82dda8f1c5f465a9ae53 /pack-bitmap.h | |
parent | bbcefa1f3f8355921137dd7a097b3ee3db66f023 (diff) | |
pack-bitmap: implement optional name_hash cache
When we use pack bitmaps rather than walking the object
graph, we end up with the list of objects to include in the
packfile, but we do not know the path at which any tree or
blob objects would be found.

In a recently packed repository, this is fine. A fetch would
use the paths only as a heuristic in the delta compression
phase, and a fully packed repository should not need to do
much delta compression.

As time passes, though, we may acquire more objects on top
of our large bitmapped pack. If clients fetch frequently,
then they never even look at the bitmapped history, and all
works as usual. However, a client who has not fetched since
the last bitmap repack will have "have" tips in the
bitmapped history, but "want" newer objects.

The bitmaps themselves degrade gracefully in this
circumstance. We manually walk the more recent bits of
history, and then use bitmaps when we hit them.

But we would also like to perform delta compression between
the newer objects and the bitmapped objects (both to delta
against what we know the user already has, but also between
"new" and "old" objects that the user is fetching). The lack
of pathnames makes our delta heuristics much less effective.

This patch adds an optional cache of the 32-bit name_hash
values to the end of the bitmap file. If present, a reader
can use it to match bitmapped and non-bitmapped names during
delta compression.
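For reference, the value being cached is the 32-bit hash that pack-objects derives from an object's pathname when grouping delta candidates (the pack_name_hash() helper). A minimal, self-contained sketch of that hashing scheme, written standalone purely for illustration:

```c
#include <ctype.h>
#include <stdint.h>

/*
 * Illustrative standalone version of the pathname hash used by
 * pack-objects' delta heuristics: it folds the trailing
 * non-whitespace characters of a name into a sortable 32-bit
 * number, so paths with similar endings (e.g. "*.c") land near
 * each other and tend to be tried as delta bases against one
 * another.
 */
static uint32_t name_hash(const char *name)
{
	uint32_t c, hash = 0;

	if (!name)
		return 0;
	while ((c = (unsigned char)*name++) != 0) {
		if (isspace(c))
			continue;
		hash = (hash >> 2) + (c << 24);
	}
	return hash;
}
```

Storing these hashes alongside the bitmap lets a reader recover that grouping for bitmapped objects even though it never walked the paths that produced them.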
Here are perf results for p5310:
Test                        origin/master         HEAD^                       HEAD
-----------------------------------------------------------------------------------------------------
5310.2: repack to disk      36.81(37.82+1.43)     47.70(48.74+1.41) +29.6%    47.75(48.70+1.51) +29.7%
5310.3: simulated clone     30.78(29.70+2.14)     1.08(0.97+0.10) -96.5%      1.07(0.94+0.12) -96.5%
5310.4: simulated fetch     3.16(6.10+0.08)       3.54(10.65+0.06) +12.0%     1.70(3.07+0.06) -46.2%
5310.6: partial bitmap      36.76(43.19+1.81)     6.71(11.25+0.76) -81.7%     4.08(6.26+0.46) -88.9%
You can see that the time spent on an incremental fetch goes
down, as our delta heuristics are able to do their work.
And we save time on the partial bitmap clone for the same
reason.
Signed-off-by: Vicent Marti <tanoku@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Diffstat (limited to 'pack-bitmap.h')
 pack-bitmap.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/pack-bitmap.h b/pack-bitmap.h
index 09acf02f7b..8b7f4e9f0d 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -24,7 +24,8 @@ static const char BITMAP_IDX_SIGNATURE[] = {'B', 'I', 'T', 'M'};
 #define NEEDS_BITMAP (1u<<22)
 
 enum pack_bitmap_opts {
-	BITMAP_OPT_FULL_DAG = 1
+	BITMAP_OPT_FULL_DAG = 1,
+	BITMAP_OPT_HASH_CACHE = 4,
 };
 
 enum pack_bitmap_flags {
@@ -57,6 +58,7 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 void bitmap_writer_build(struct packing_data *to_pack);
 void bitmap_writer_finish(struct pack_idx_entry **index,
 			  uint32_t index_nr,
-			  const char *filename);
+			  const char *filename,
+			  uint16_t options);
 
 #endif
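For context beyond the pack-bitmap.h hunk shown above, a writer that wants the cache sets the new flag in the added options argument. A sketch of such a caller (the helper and its parameters are hypothetical, not part of this diff):

```c
#include <stdint.h>
#include "pack-bitmap.h"

/*
 * Hypothetical wrapper showing how a caller would opt in to the
 * name_hash cache through the new uint16_t options parameter.
 */
static void finish_bitmap(struct pack_idx_entry **index, uint32_t index_nr,
			  const char *filename, int want_hash_cache)
{
	uint16_t options = 0;

	if (want_hash_cache)
		options |= BITMAP_OPT_HASH_CACHE;	/* new in this commit */

	bitmap_writer_finish(index, index_nr, filename, options);
}
```

Since the cache is optional, a reader would check for the flag before using the appended hashes, so existing .bitmap files without the cache continue to work unchanged.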