path: root/cache-tree.c
Commit history, most recent first. Each entry shows the commit subject, author, date, and diffstat (files changed, lines removed/added).

* sha1-file: pass git_hash_algo to hash_object_file() (Matheus Tavares, 2020-01-31; 1 file changed, -3/+6)

Allow hash_object_file() to work on arbitrary repos by introducing a git_hash_algo parameter. Change callers which have a struct repository pointer in their scope to pass on the git_hash_algo from the said repo. For all other callers, pass on the_hash_algo, which was already being used internally at hash_object_file().

This functionality will be used in the following patch to make check_object_signature() able to work on arbitrary repos (which, in turn, will be used to fix an inconsistency at object.c:parse_object()).

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

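A sketch of the caller-side pattern this describes; the argument order and names here are assumptions for illustration, not quoted from sha1-file.c:

	/* Illustrative only: a caller with a repository in scope passes that
	 * repository's hash algorithm... */
	hash_object_file(r->hash_algo, buf, len, "blob", &oid);

	/* ...while callers without one keep using the global default. */
	hash_object_file(the_hash_algo, buf, len, "blob", &oid);
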
* cache-tree: use given repo's hash_algo at verify_one() (Matheus Tavares, 2020-01-31; 1 file changed, -1/+1)

verify_one() takes a struct repository argument but uses the_hash_algo internally. Replace it with the provided repo's git_hash_algo, for consistency.

For now, this is mainly a cosmetic change, as all callers of this function currently only pass the_repository to it.

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'en/merge-recursive-cleanup' (Junio C Hamano, 2019-10-15; 1 file changed, -23/+62)

The merge-recursive machinery is one of the most complex parts of the system that accumulated cruft over time. This large series cleans up the implementation quite a bit.

* en/merge-recursive-cleanup: (26 commits)
  merge-recursive: fix the fix to the diff3 common ancestor label
  merge-recursive: fix the diff3 common ancestor label for virtual commits
  merge-recursive: alphabetize include list
  merge-recursive: add sanity checks for relevant merge_options
  merge-recursive: rename MERGE_RECURSIVE_* to MERGE_VARIANT_*
  merge-recursive: split internal fields into a separate struct
  merge-recursive: avoid losing output and leaking memory holding that output
  merge-recursive: comment and reorder the merge_options fields
  merge-recursive: consolidate unnecessary fields in merge_options
  merge-recursive: move some definitions around to clean up the header
  merge-recursive: rename merge_options argument to opt in header
  merge-recursive: rename 'mrtree' to 'result_tree', for clarity
  merge-recursive: use common name for ancestors/common/base_list
  merge-recursive: fix some overly long lines
  cache-tree: share code between functions writing an index as a tree
  merge-recursive: don't force external callers to do our logging
  merge-recursive: remove useless parameter in merge_trees()
  merge-recursive: exit early if index != head
  Ensure index matches head before invoking merge machinery, round N
  merge-recursive: remove another implicit dependency on the_repository
  ...

| * cache-tree: share code between functions writing an index as a tree (Elijah Newren, 2019-08-19; 1 file changed, -23/+62)

write_tree_from_memory() appeared to be a merge-recursive special that basically duplicated write_index_as_tree(). The two have a different signature, but the bigger difference was just that write_index_as_tree() would always unconditionally read the index off of disk instead of working on the current in-memory index.

So:
  * split out common code into write_index_as_tree_internal()
  * rename write_tree_from_memory() to write_inmemory_index_as_tree(), make it call write_index_as_tree_internal(), and move it to cache-tree.c

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Merge branch 'jt/cache-tree-avoid-lazy-fetch-during-merge' (Junio C Hamano, 2019-10-07; 1 file changed, -1/+1)

The cache-tree code has been taught to be less aggressive in attempting to see if a tree object it computed already exists in the repository.

* jt/cache-tree-avoid-lazy-fetch-during-merge:
  cache-tree: do not lazy-fetch tentative tree

| * | cache-tree: do not lazy-fetch tentative tree (Jonathan Tan, 2019-09-09; 1 file changed, -1/+1)

The cache-tree data structure is used to speed up the comparison between the HEAD and the index, and when the index is updated by a cherry-pick (for example), a tree object that would represent the paths in the index in a directory is constructed in-core, to see if such a tree object exists already in the object store.

When the lazy-fetch mechanism was introduced, we converted this "does the tree exist?" check into an "if it does not, and if we lazily cloned, see if the remote has it" call by mistake. Since the whole point of this check is to repair the cache-tree by recording an already existing tree object opportunistically, we shouldn't even try to fetch one from the remote.

Pass the OBJECT_INFO_SKIP_FETCH_OBJECT flag to make sure we only check for existence in the local object store without triggering the lazy fetch mechanism.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
[jc: rewritten the proposed log message]
Signed-off-by: Junio C Hamano <gitster@pobox.com>

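One plausible shape of the check described above, sketched with made-up local variable names (illustrative only, not the actual hunk):

	/* Check whether the tentative tree already exists locally, without
	 * letting a partial clone lazily fetch it from the remote. */
	struct object_id tentative;

	/* ... tentative is filled in with the hash of the in-core tree ... */

	if (has_object_file_with_flags(&tentative, OBJECT_INFO_SKIP_FETCH_OBJECT))
		oidcpy(&it->oid, &tentative); /* opportunistically repair the cache-tree */
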
* | | Merge branch 'cc/multi-promisor' (Junio C Hamano, 2019-09-18; 1 file changed, -1/+2)

Teach the lazy clone machinery that there can be more than one promisor remote and consult them in order when downloading missing objects on demand.

* cc/multi-promisor:
  Move core_partial_clone_filter_default to promisor-remote.c
  Move repository_format_partial_clone to promisor-remote.c
  Remove fetch-object.{c,h} in favor of promisor-remote.{c,h}
  remote: add promisor and partial clone config to the doc
  partial-clone: add multiple remotes in the doc
  t0410: test fetching from many promisor remotes
  builtin/fetch: remove unique promisor remote limitation
  promisor-remote: parse remote.*.partialclonefilter
  Use promisor_remote_get_direct() and has_promisor_remote()
  promisor-remote: use repository_format_partial_clone
  promisor-remote: add promisor_remote_reinit()
  promisor-remote: implement promisor_remote_get_direct()
  Add initial support for many promisor remotes
  fetch-object: make functions return an error code
  t0410: remove pipes after git commands

| * | Use promisor_remote_get_direct() and has_promisor_remote() (Christian Couder, 2019-06-25; 1 file changed, -1/+2)

Instead of using the repository_format_partial_clone global and fetch_objects() directly, let's use has_promisor_remote() and promisor_remote_get_direct(). This way all the configured promisor remotes will be taken into account, not only the one specified by extensions.partialClone.

Also when cloning or fetching using a partial clone filter, remote.origin.promisor will be set to "true" instead of setting extensions.partialClone to "origin". This makes it possible to use many promisor remotes just by fetching from them.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | Merge branch 'jk/tree-walk-overflow' (Junio C Hamano, 2019-08-22; 1 file changed, -1/+1)

Codepaths to walk tree objects have been audited for integer overflows and hardened.

* jk/tree-walk-overflow:
  tree-walk: harden make_traverse_path() length computations
  tree-walk: add a strbuf wrapper for make_traverse_path()
  tree-walk: accept a raw length for traverse_path_len()
  tree-walk: use size_t consistently
  tree-walk: drop oid from traverse_info
  setup_traverse_info(): stop copying oid

| * | tree-walk: drop oid from traverse_info (Jeff King, 2019-07-31; 1 file changed, -1/+1)

As the previous commit shows, the presence of an oid in each level of the traverse_info is confusing and ultimately not necessary. Let's drop it to make it clear that it will not always be set (as well as convince us that it's unused, and let the compiler catch any merges with other branches that do add new uses).

Since the oid is part of name_entry, we'll actually stop embedding a name_entry entirely, and instead just separately hold the pathname, its length, and the mode. This makes the resulting code slightly more verbose as we have to pass those elements around individually. But it also makes it more clear what each code path is going to use (and in most of the paths, we really only care about the pathname itself).

A few of these conversions are noisier than they need to be, as they also take the opportunity to rename "len" to "namelen" for clarity (especially where we also have "pathlen" or "ce_len" alongside).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | cache-tree/blame: avoid reusing the DEBUG constant (Jeff Hostetler, 2019-06-20; 1 file changed, -7/+7)

In MS Visual C, the `DEBUG` constant is set automatically whenever compiling with debug information. This is clearly not what was intended in `cache-tree.c` nor in `builtin/blame.c`, so let's use a less ambiguous name there.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/loose-object-cache-oid' (Junio C Hamano, 2019-02-06; 1 file changed, -2/+2)

Code clean-up.

* jk/loose-object-cache-oid:
  prefer "hash mismatch" to "sha1 mismatch"
  sha1-file: avoid "sha1 file" for generic use in messages
  sha1-file: prefer "loose object file" to "sha1 file" in messages
  sha1-file: drop has_sha1_file()
  convert has_sha1_file() callers to has_object_file()
  sha1-file: convert pass-through functions to object_id
  sha1-file: modernize loose header/stream functions
  sha1-file: modernize loose object file functions
  http: use struct object_id instead of bare sha1
  update comment references to sha1_object_info()
  sha1-file: fix outdated sha1 comment references

| * convert has_sha1_file() callers to has_object_file() (Jeff King, 2019-01-08; 1 file changed, -2/+2)

The only remaining callers of has_sha1_file() actually have an object_id already. They can use the "object" variant, rather than dereferencing the hash themselves.

The code changes here were completely generated by the included coccinelle patch.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Merge branch 'bc/tree-walk-oid' (Junio C Hamano, 2019-01-29; 1 file changed, -2/+2)

The code to walk tree objects has been taught that we may be working with object names that are not computed with SHA-1.

* bc/tree-walk-oid:
  cache: make oidcpy always copy GIT_MAX_RAWSZ bytes
  tree-walk: store object_id in a separate member
  match-trees: use hashcpy to splice trees
  match-trees: compute buffer offset correctly when splicing
  tree-walk: copy object ID before use

| * | tree-walk: store object_id in a separate member (brian m. carlson, 2019-01-15; 1 file changed, -2/+2)

When parsing a tree, we read the object ID directly out of the tree buffer. This is normally fine, but such an object ID cannot be used with oidcpy, which copies GIT_MAX_RAWSZ bytes, because if we are using SHA-1, there may not be that many bytes to copy.

Instead, store the object ID in a separate struct member. Since we can no longer efficiently compute the path length, store that information as well in struct name_entry. Ensure we only copy the object ID into the new buffer if the path length is nonzero, as some callers will pass us an empty path with no object ID following it, and we will not want to read past the end of the buffer.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

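For reference, the post-change shape of struct name_entry roughly matches the following sketch; the field order and exact types are an assumption drawn from the description above, not copied from tree-walk.h:

	/* Assumed shape after this change: the object ID lives in its own
	 * member instead of pointing into the tree buffer, and the path
	 * length is cached because it can no longer be derived cheaply. */
	struct name_entry {
		struct object_id oid;
		const char *path;
		int pathlen;
		unsigned int mode;
	};
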
* | Merge branch 'nd/indentation-fix' (Junio C Hamano, 2019-01-14; 1 file changed, -1/+1)

Code cleanup.

* nd/indentation-fix:
  Indent code with TABs

| * Indent code with TABs (Nguyễn Thái Ngọc Duy, 2018-12-09; 1 file changed, -1/+1)

We indent with TABs and sometimes, for fine alignment, TABs followed by spaces, but never all spaces (unless the indentation is less than 8 columns). Indenting with spaces slips through in some places. Fix them.

Imported code and compat/ are left alone on purpose. The former should remain as close to upstream as possible. The latter pretty much has separate maintainers; it's up to them to decide.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | cache-tree.c: remove the_repository references (Nguyễn Thái Ngọc Duy, 2018-11-12; 1 file changed, -11/+15)

This case is more interesting than other boring "remove the_repo" commits because while we need access to the object database, we cannot simply use r->index because unpack-trees.c can operate on a temporary index, not $GIT_DIR/index. Ideally we should be able to pass an object database to lookup_tree() but that ship has sailed.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jt/cache-tree-allow-missing-object-in-partial-clone' (Junio C Hamano, 2018-10-19; 1 file changed, -1/+5)

In a partial clone that will lazily be hydrated from the originating repository, we generally want to avoid "does this object exist (locally)?" on objects that we deliberately omitted when we created the clone. The cache-tree codepath (which is used to write a tree object out of the index) however insisted that the object exists, even for paths that are outside of the partial checkout area. The code has been updated to avoid such a check.

* jt/cache-tree-allow-missing-object-in-partial-clone:
  cache-tree: skip some blob checks in partial clone

| * cache-tree: skip some blob checks in partial clone (Jonathan Tan, 2018-10-10; 1 file changed, -1/+5)

In a partial clone, whenever a sparse checkout occurs, the existence of all blobs in the index is verified, whether they are included or excluded by the .git/info/sparse-checkout specification. This significantly degrades performance because a lazy fetch occurs whenever the existence of a missing blob is checked.

This is because cache_tree_update() checks the existence of all objects in the index, whether or not CE_SKIP_WORKTREE is set on them. Teach cache_tree_update() to skip checking CE_SKIP_WORKTREE objects when the repository is a partial clone. This improves performance for sparse checkout and also other operations that use cache_tree_update().

Instead of completely removing the check, an argument could be made that the check should instead be replaced by a check that the blob is promised, but for performance reasons, I decided not to do this. If the user needs to verify the repository, it can be done using fsck (which will notify if a tree points to a missing and non-promised blob, whether the blob is included or excluded by the sparse-checkout specification).

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

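A rough sketch of the kind of condition described above, inside the cache-tree update walk; the variable names and exact predicate are assumptions for illustration, not the actual patch:

	/* Treat a missing blob as acceptable when the entry is marked
	 * skip-worktree and the repository is a partial clone, so that no
	 * lazy fetch is triggered just to verify its existence. */
	int missing_ok_for_entry = missing_ok ||
		(repository_format_partial_clone && ce_skip_worktree(ce));

	if (!missing_ok_for_entry && !has_object_file(&ce->oid))
		return error("invalid object %s", oid_to_hex(&ce->oid));
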
* | more oideq/hasheq conversions (Jeff King, 2018-10-04; 1 file changed, -1/+1)

We added faster equality-comparison functions for hashes in 14438c4497 (introduce hasheq() and oideq(), 2018-08-28). A few topics were in-flight at the time, and can now be converted.

This covers all spots found by "make coccicheck" in master (the coccicheck results were tweaked by hand for style).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/cocci' (Junio C Hamano, 2018-09-17; 1 file changed, -1/+1)

An spatch transformation to replace boolean uses of !hashcmp() with the newly introduced oideq() is added, and applied, to regain performance lost due to support of multiple hash algorithms.

* jk/cocci:
  show_dirstat: simplify same-content check
  read-cache: use oideq() in ce_compare functions
  convert hashmap comparison functions to oideq()
  convert "hashcmp() != 0" to "!hasheq()"
  convert "oidcmp() != 0" to "!oideq()"
  convert "hashcmp() == 0" to hasheq()
  convert "oidcmp() == 0" to oideq()
  introduce hasheq() and oideq()
  coccinelle: use <...> for function exclusion

| * convert "oidcmp() == 0" to oideq() (Jeff King, 2018-08-29; 1 file changed, -1/+1)

Using the more restrictive oideq() should, in the long run, give the compiler more opportunities to optimize these callsites. For now, this conversion should be a complete noop with respect to the generated code.

The result is also perhaps a little more readable, as it avoids the "zero is equal" idiom. Since it's so prevalent in C, I think seasoned programmers tend not to even notice it anymore, but it can sometimes make for awkward double negations (e.g., we can drop a few !!oidcmp() instances here).

This patch was generated almost entirely by the included coccinelle patch. This mechanical conversion should be completely safe, because we check explicitly for cases where oidcmp() is compared to 0, which is what oideq() is doing under the hood. Note that we don't have to catch "!oidcmp()" separately; coccinelle's standard isomorphisms make sure the two are treated equivalently.

I say "almost" because I did hand-edit the coccinelle output to fix up a few style violations (it mostly keeps the original formatting, but sometimes unwraps long lines).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

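The kind of rewrite this performs, shown on a made-up call site (illustrative only; cache-tree.c's actual hunk is not reproduced here):

	/* Before: equality expressed through the "zero is equal" idiom. */
	if (!oidcmp(&ce->oid, &it->oid))
		return 0;

	/* After: the intent is stated directly. */
	if (oideq(&ce->oid, &it->oid))
		return 0;
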
* | Merge branch 'nd/unpack-trees-with-cache-tree' (Junio C Hamano, 2018-09-17; 1 file changed, -0/+80)

The unpack_trees() API used in checking out a branch and merging walks one or more trees along with the index. When the cache-tree in the index tells us that we are walking a tree whose flattened contents is known (i.e. matches a span in the index), as linearly scanning a span in the index is much more efficient than having to open tree objects recursively and listing their entries, the walk can be optimized, which is done in this topic.

* nd/unpack-trees-with-cache-tree:
  Document update for nd/unpack-trees-with-cache-tree
  cache-tree: verify valid cache-tree in the test suite
  unpack-trees: add missing cache invalidation
  unpack-trees: reuse (still valid) cache-tree from src_index
  unpack-trees: reduce malloc in cache-tree walk
  unpack-trees: optimize walking same trees with cache-tree
  unpack-trees: add performance tracing
  trace.h: support nested performance tracing

| * cache-tree: verify valid cache-tree in the test suite (Nguyễn Thái Ngọc Duy, 2018-08-18; 1 file changed, -0/+78)

This makes sure that cache-tree is consistent with the index. The main purpose is to catch potential problems by saving the index in unpack_trees(), but the line in write_index() would also help spot missing invalidation in other code.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * unpack-trees: add performance tracing (Nguyễn Thái Ngọc Duy, 2018-08-18; 1 file changed, -0/+2)

We're going to optimize unpack_trees() a bit in the following patches. Let's add some tracing to measure how long it takes before and after. This is the baseline ("git checkout -" on webkit.git, 275k files on worktree):

    performance: 0.056651714 s: read cache .git/index
    performance: 0.183101080 s: preload index
    performance: 0.008584433 s: refresh index
    performance: 0.633767589 s: traverse_trees
    performance: 0.340265448 s: check_updates
    performance: 0.381884638 s: cache_tree_update
    performance: 1.401562947 s: unpack_trees
    performance: 0.338687914 s: write index, changed mask = 2e
    performance: 0.411927922 s: traverse_trees
    performance: 0.000023335 s: check_updates
    performance: 0.423697246 s: unpack_trees
    performance: 0.423708360 s: diff-index
    performance: 2.559524127 s: git command: git checkout -

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | cache-tree: wrap the_index based wrappers with #ifdef (Nguyễn Thái Ngọc Duy, 2018-08-13; 1 file changed, -12/+0)

This puts update_main_cache_tree() and write_cache_as_tree() in the same group of "index compat" functions that assume the_index implicitly, which should only be used within builtin/ or t/helper. sequencer.c is also updated to not use these functions. As of now, no files outside builtin/ use these functions anymore.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* tree: add repository argument to lookup_treeStefan Beller2018-06-291-1/+2
| | | | | | | | | | | | | | Add a repository argument to allow the callers of lookup_tree to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
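The compile-time guard mentioned above works by token-pasting the repository argument into the function name, so that only the_repository is accepted; a sketch of the assumed pattern (inferred from the description, not quoted from tree.h):

	/* Only lookup_tree_the_repository() exists, so a call such as
	 * lookup_tree(some_other_repo, oid) fails to compile. */
	struct tree *lookup_tree_the_repository(const struct object_id *oid);
	#define lookup_tree(r, oid) lookup_tree_##r(oid)
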
* Merge branch 'sb/object-store-grafts' into sb/object-store-lookup (Junio C Hamano, 2018-06-29; 1 file changed, -0/+1)

* sb/object-store-grafts:
  commit: allow lookup_commit_graft to handle arbitrary repositories
  commit: allow prepare_commit_graft to handle arbitrary repositories
  shallow: migrate shallow information into the object parser
  path.c: migrate global git_path_* to take a repository argument
  cache: convert get_graft_file to handle arbitrary repositories
  commit: convert read_graft_file to handle arbitrary repositories
  commit: convert register_commit_graft to handle arbitrary repositories
  commit: convert commit_graft_pos() to handle arbitrary repositories
  shallow: add repository argument to is_repository_shallow
  shallow: add repository argument to check_shallow_file_for_update
  shallow: add repository argument to register_shallow
  shallow: add repository argument to set_alternate_shallow_file
  commit: add repository argument to lookup_commit_graft
  commit: add repository argument to prepare_commit_graft
  commit: add repository argument to read_graft_file
  commit: add repository argument to register_commit_graft
  commit: add repository argument to commit_graft_pos
  object: move grafts to object parser
  object-store: move object access functions to object-store.h

| * object-store: move object access functions to object-store.h (Stefan Beller, 2018-05-16; 1 file changed, -0/+1)

This should make these functions easier to find and cache.h less overwhelming to read. In particular, this moves:
  - read_object_file
  - oid_object_info
  - write_object_file

As a result, most of the codebase needs to #include object-store.h. In this patch the #include is only added to files that would fail to compile otherwise. It would be better to #include wherever identifiers from the header are used. That can happen later when we have better tooling for it.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | cache-tree: use is_empty_tree_oid (brian m. carlson, 2018-05-02; 1 file changed, -1/+1)

When comparing an object ID against that of the empty tree, use the is_empty_tree_oid function to ensure that we abstract over the hash algorithm properly. In addition, this is more readable than a plain oidcmp.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

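Roughly the shape of such a change; the "before" form is an assumption based on the message, and only the direction of the rewrite is the point:

	/* Before (assumed): compare against the well-known empty-tree object ID. */
	if (!oidcmp(oid, the_hash_algo->empty_tree))
		return 0;

	/* After: the helper names the intent and stays hash-algorithm agnostic. */
	if (is_empty_tree_oid(oid))
		return 0;
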
* | cache: add a function to read an object ID from a buffer (brian m. carlson, 2018-05-02; 1 file changed, -1/+1)

In various places throughout the codebase, we need to read data into a struct object_id from a pack or other unsigned char buffer. Add an inline function that does this based on the current hash algorithm in use, and use it in several places.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* cache-tree: convert remnants to struct object_id (brian m. carlson, 2018-03-14; 1 file changed, -14/+15)

Convert the remaining portions of cache-tree.c to use struct object_id. Convert several instances of the constant 20 to use the_hash_algo instead.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* cache-tree: convert write_*_as_tree to object_id (brian m. carlson, 2018-03-14; 1 file changed, -5/+5)

Convert write_index_as_tree and write_cache_as_tree to use struct object_id.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'po/object-id' (Junio C Hamano, 2018-02-15; 1 file changed, -8/+8)

Conversion from uchar[20] to struct object_id continues.

* po/object-id:
  sha1_file: rename hash_sha1_file_literally
  sha1_file: convert write_loose_object to object_id
  sha1_file: convert force_object_loose to object_id
  sha1_file: convert write_sha1_file to object_id
  notes: convert write_notes_tree to object_id
  notes: convert combine_notes_* to object_id
  commit: convert commit_tree* to object_id
  match-trees: convert splice_tree to object_id
  cache: clear whole hash buffer with oidclr
  sha1_file: convert hash_sha1_file to object_id
  dir: convert struct sha1_stat to use object_id
  sha1_file: convert pretend_sha1_file to object_id

| * sha1_file: convert write_sha1_file to object_id (Patryk Obara, 2018-01-30; 1 file changed, -2/+3)

Convert the definition and declaration of write_sha1_file to struct object_id and adjust usage of this function. This commit also converts the static function write_sha1_file_prepare, as it is closely related.

Rename these functions to write_object_file and write_object_file_prepare respectively.

Replace sha1_to_hex, hashcpy and hashclr with their oid equivalents wherever possible.

Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * sha1_file: convert hash_sha1_file to object_id (Patryk Obara, 2018-01-30; 1 file changed, -6/+5)

Convert the declaration and definition of hash_sha1_file to use struct object_id and adjust all function calls. Rename this function to hash_object_file.

Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Merge branch 'sg/cocci-move-array' (Junio C Hamano, 2018-02-13; 1 file changed, -3/+2)

Code clean-up.

* sg/cocci-move-array:
  Use MOVE_ARRAY

| * | Use MOVE_ARRAY [sg/cocci-move-array] (SZEDER Gábor, 2018-01-22; 1 file changed, -3/+2)

Use the helper macro MOVE_ARRAY to move arrays. This is shorter and safer, as it automatically infers the size of elements.

Patch generated by Coccinelle and contrib/coccinelle/array.cocci in Travis CI's static analysis build job.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | Merge branch 'tg/split-index-fixes' (Junio C Hamano, 2018-02-13; 1 file changed, -1/+1)

The split-index mode had a few corner case bugs fixed.

* tg/split-index-fixes:
  travis: run tests with GIT_TEST_SPLIT_INDEX
  split-index: don't write cache tree with null oid entries
  read-cache: fix reading the shared index for other repos

| * read-cache: fix reading the shared index for other repos (Thomas Gummerer, 2018-01-19; 1 file changed, -1/+1)

read_index_from() takes a path argument for the location of the index file. For reading the shared index in split index mode, however, it just ignores that path argument and reads it from the gitdir of the current repository.

This works as long as an index in the_repository is read. Once that changes, such as when we read the index of a submodule, or of a different working tree than the current one, the gitdir of the_repository will no longer contain the appropriate shared index, and git will fail to read it.

For example t3007-ls-files-recurse-submodules.sh was broken with GIT_TEST_SPLIT_INDEX set in 188dce131f ("ls-files: use repository object", 2017-06-22), and t7814-grep-recurse-submodules.sh was also broken in a similar manner, probably by introducing struct repository there, although I didn't track down the exact commit for that.

be489d02d2 ("revision.c: --indexed-objects add objects from all worktrees", 2017-08-23) breaks with split index mode in a similar manner, not erroring out when it can't read the index, but instead carrying on with pruning, without taking the index of the worktree into account.

Fix this by passing an additional gitdir parameter to read_index_from, to indicate where it should look for and read the shared index from. read_cache_from() defaults to using the gitdir of the_repository. As it is mostly a convenience macro, having to pass get_git_dir() for every call seems overkill, and if necessary users can have more control by using read_index_from().

Helped-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

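The resulting call shape, roughly; the exact prototype here is inferred from the description and may not match read-cache.c verbatim:

	/* Assumed post-patch prototype: callers now say which gitdir owns the
	 * index, so the shared index is resolved next to that index file
	 * rather than next to the_repository's gitdir. */
	int read_index_from(struct index_state *istate, const char *path,
			    const char *gitdir);

	/* The convenience form keeps its old behaviour by defaulting to get_git_dir(). */
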
* | Merge branch 'ma/lockfile-fixes' (Junio C Hamano, 2017-11-06; 1 file changed, -8/+4)

An earlier update made it possible to use an on-stack in-core lockfile structure (as opposed to having to deliberately leak an on-heap one). Many codepaths have been updated to take advantage of this new facility.

* ma/lockfile-fixes:
  read_cache: roll back lock in `update_index_if_able()`
  read-cache: leave lock in right state in `write_locked_index()`
  read-cache: drop explicit `CLOSE_LOCK`-flag
  cache.h: document `write_locked_index()`
  apply: remove `newfd` from `struct apply_state`
  apply: move lockfile into `apply_state`
  cache-tree: simplify locking logic
  checkout-index: simplify locking logic
  tempfile: fix documentation on `delete_tempfile()`
  lockfile: fix documentation on `close_lock_file_gently()`
  treewide: prefer lockfiles on the stack
  sha1_file: do not leak `lock_file`

| * cache-tree: simplify locking logic (Martin Ågren, 2017-10-06; 1 file changed, -8/+4)

After we have taken the lock using `LOCK_DIE_ON_ERROR`, we know that `newfd` is non-negative. So when we check for exactly that property before calling `write_locked_index()`, the outcome is guaranteed.

If we write and commit successfully, we set `newfd = -1`, so that we can later avoid calling `rollback_lock_file` on an already-committed lock. But we might just as well unconditionally call `rollback_lock_file()` -- it will be a no-op if we have already committed.

All in all, we use `newfd` as a bool and the only benefit we get from it is that we can avoid calling a no-op. Remove `newfd` so that we have one variable less to reason about.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

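The simplified pattern, sketched; this is an illustrative flow under the assumptions stated in the comments, not the actual function body in cache-tree.c:

	/* With LOCK_DIE_ON_ERROR the lock either exists or we died, and
	 * rollback_lock_file() is a no-op once the lock has been committed,
	 * so no extra "newfd" bookkeeping is needed. */
	struct lock_file lock_file = LOCK_INIT;

	hold_lock_file_for_update(&lock_file, get_index_file(), LOCK_DIE_ON_ERROR);
	/* ... update the in-core index and its cache-tree ... */
	write_locked_index(&the_index, &lock_file, COMMIT_LOCK);
	rollback_lock_file(&lock_file);	/* harmless if already committed */
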
* | cleanup: fix possible overflow errors in binary search [ds/avoid-overflow-in-midpoint-computation] (Derrick Stolee, 2017-10-10; 1 file changed, -1/+1)

A common mistake when writing binary search is to allow possible integer overflow by using the simple average:

    mid = (min + max) / 2;

Instead, use the overflow-safe version:

    mid = min + (max - min) / 2;

This translation is safe since the operation occurs inside a loop conditioned on "min < max". The included changes were found using the following git grep:

    git grep '/ *2;' '*.c'

Making this cleanup will prevent future review friction when a new binary search is constructed based on existing code.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

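A small self-contained illustration of the overflow-safe midpoint in a binary search; the example values are made up and this is not code from git.git:

	#include <stdio.h>

	/* Overflow-safe binary search over a sorted int array. The midpoint is
	 * computed as lo + (hi - lo) / 2, which cannot overflow while lo < hi,
	 * unlike (lo + hi) / 2, which can wrap when lo and hi are both large. */
	static int find(const int *a, int n, int key)
	{
		int lo = 0, hi = n;

		while (lo < hi) {
			int mid = lo + (hi - lo) / 2;
			if (a[mid] < key)
				lo = mid + 1;
			else
				hi = mid;
		}
		return (lo < n && a[lo] == key) ? lo : -1;
	}

	int main(void)
	{
		int a[] = { 1, 3, 5, 7, 9 };
		printf("%d\n", find(a, 5, 7));	/* prints 3 */
		return 0;
	}
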
* stop leaking lock structs in some simple cases [jk/incore-lockfile-removal] (Jeff King, 2017-09-06; 1 file changed, -10/+4)

Now that it's safe to declare a "struct lock_file" on the stack, we can do so (and avoid an intentional leak). These leaks were found by running t0000 and t0001 under valgrind (though certainly other similar leaks exist and just don't happen to be exercised by those tests).

Initializing the lock_file's inner tempfile with NULL is not strictly necessary in these cases, but it's a good practice to model. It means that if we were to call a function like rollback_lock_file() on a lock that was never taken in the first place, it becomes a quiet noop (rather than undefined behavior).

Likewise, it's always safe to rollback_lock_file() on a file that has already been committed or deleted, since that operation is a noop on an inactive lockfile (and that's why the case in config.c can drop the "if (lock)" check as we move away from using a pointer).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* write_index_as_tree: cleanup tempfile on error (Jeff King, 2017-09-06; 1 file changed, -8/+15)

If we failed to write our new index file, we rollback our lockfile to remove the temporary index. But if we fail before we even get to the write step (because reading the old index failed), we leave the lockfile in place, which makes no sense.

In practice this hasn't been a big deal because failing at write_index_as_tree() typically results in the whole program exiting (and thus the tempfile handler kicking in and cleaning up the files). But this function should consistently take responsibility for the resources it allocates.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

* use MOVE_ARRAY (René Scharfe, 2017-07-17; 1 file changed, -3/+2)

Simplify the code for moving members inside of an array and make it more robust by using the helper macro MOVE_ARRAY. It calculates the size based on the specified number of elements for us and supports NULL pointers when that number is zero. Raw memmove(3) calls with NULL can cause the compiler to (over-eagerly) optimize out later NULL checks.

This patch was generated with contrib/coccinelle/array.cocci and spatch (Coccinelle).

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

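What such a conversion looks like on a generic in-array shift; this is a made-up example of the pattern, not the cache-tree.c hunk itself:

	/* Before: element size spelled out by hand, and memmove(NULL, NULL, 0)
	 * invites the compiler to drop later NULL checks. */
	memmove(subs + pos + 1, subs + pos,
		sizeof(*subs) * (nr - pos - 1));

	/* After: MOVE_ARRAY takes an element count, infers the element size
	 * from the destination pointer's type, and tolerates NULL pointers
	 * when the count is zero. */
	MOVE_ARRAY(subs + pos + 1, subs + pos, nr - pos - 1);
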
* Merge branch 'bc/object-id' (Junio C Hamano, 2017-05-29; 1 file changed, -16/+17)

Conversion from uchar[20] to struct object_id continues.

* bc/object-id: (53 commits)
  object: convert parse_object* to take struct object_id
  tree: convert parse_tree_indirect to struct object_id
  sequencer: convert do_recursive_merge to struct object_id
  diff-lib: convert do_diff_cache to struct object_id
  builtin/ls-tree: convert to struct object_id
  merge: convert checkout_fast_forward to struct object_id
  sequencer: convert fast_forward_to to struct object_id
  builtin/ls-files: convert overlay_tree_on_cache to object_id
  builtin/read-tree: convert to struct object_id
  sha1_name: convert internals of peel_onion to object_id
  upload-pack: convert remaining parse_object callers to object_id
  revision: convert remaining parse_object callers to object_id
  revision: rename add_pending_sha1 to add_pending_oid
  http-push: convert process_ls_object and descendants to object_id
  refs/files-backend: convert many internals to struct object_id
  refs: convert struct ref_update to use struct object_id
  ref-filter: convert some static functions to struct object_id
  Convert struct ref_array_item to struct object_id
  Convert the verify_pack callback to struct object_id
  Convert lookup_tag to struct object_id
  ...

| * Convert lookup_tree to struct object_id (brian m. carlson, 2017-05-08; 1 file changed, -1/+1)

Convert the lookup_tree function to take a pointer to struct object_id. The commit was created with manual changes to tree.c, tree.h, and object.c, plus the following semantic patch:

    @@
    @@
    - lookup_tree(EMPTY_TREE_SHA1_BIN)
    + lookup_tree(&empty_tree_oid)

    @@
    expression E1;
    @@
    - lookup_tree(E1.hash)
    + lookup_tree(&E1)

    @@
    expression E1;
    @@
    - lookup_tree(E1->hash)
    + lookup_tree(E1)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * Convert struct cache_tree to use struct object_id (brian m. carlson, 2017-05-02; 1 file changed, -15/+16)

Convert the sha1 member of struct cache_tree to struct object_id by changing the definition and applying the following semantic patch, plus the standard object_id transforms:

    @@
    struct cache_tree E1;
    @@
    - E1.sha1
    + E1.oid.hash

    @@
    struct cache_tree *E1;
    @@
    - E1->sha1
    + E1->oid.hash

Fix up one reference to active_cache_tree which was not automatically caught by Coccinelle. These changes are prerequisites for converting parse_object.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
