| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
When we insert a conflict in a case-insensitive index, accept the
new entry's path as the correct case instead of leaving the path we
already had.
This makes `git_index_conflict_add()` consistent with `git_index_add()`
in this respect.
|
|
|
|
|
| |
When we're at offset `i`, we're dealing with stage `i + 1`, since
conflict stages start at 1.
|
|\
| |
| | |
Diff: Honor `core.symlinks=false` and fake symlinks
|
| |
| |
| |
| |
| |
| | |
On platforms that lack symbolic-link support (`core.symlinks=false`), we
should not go looking for symbolic links and `p_readlink` their target.
Instead, we should examine the file's contents: a fake symlink is a
regular file whose contents are the link target.
|
|\ \
| | |
| | | |
Handle submodules with paths in `git_submodule_update`
|
| | |
| | |
| | |
| | |
| | |
| | | |
Reload the HEAD and index data for a submodule after reading the
configuration. The configuration may specify a `path`, so we must
update HEAD and index data with that path in mind.
|
|\ \ \
| |/ /
|/| | |
stream: allow registering a user-provided TLS constructor
|
| | |
| | |
| | |
| | |
| | | |
This allows an application to use its own TLS stream, regardless of the
capabilities of libgit2 itself.
|
| | | |
|
|/ /
| |
| |
| | |
whitespace. Collapse spaces around newlines for the summary.
|
|/ |
|
|\
| |
| | |
Use checksums to detect config file changes
|
| |
| |
| |
| |
| |
| | |
This reduces the chances of a crash in the thread tests. This shouldn't
affect general usage too much, since the main use of these functions is
to read into an empty buffer.
|
| |
| |
| |
| |
| |
| | |
Instead of relying on the size and timestamp, which can hide changes
performed within the same second, hash the file's contents when we care
about detecting changes.
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
index: read_index must update hashes
|
| | |
|
|/ |
|
| |
|
|\
| |
| | |
Fix segfault when reading reflog with extra newlines
|
| |
| |
| |
| |
| |
| | |
Use `calloc` instead of `malloc` because a parse error leads to an
immediate free of the committer and its properties, and freeing
undefined pointers can segfault.
(test_refs_reflog_reflog__reading_a_reflog_with_invalid_format_returns_error
segfaulted before the fix.)
#3458
|
| | |
cc @carlosmn
|
| | |
|/
|
| |
Inserting new REUC entries can quickly become pathological given that
each insert unsorts the REUC vector, and both subsequent lookups *and*
insertions will require sorting it again before being successful.
To avoid this, we're switching to `git_vector_insert_sorted`: this keeps
the REUC vector constantly sorted and lets us use the `on_dup` callback
to skip an extra binary search on each insertion.
|
|\
| |
| | |
xdiff: reference util.h in parent directory
|
| |
| |
| |
| |
| |
| | |
Although CMake will correctly configure include directories for us,
some people may use their own build system, and we should reference
`util.h` based on where it actually lives.
|
| |
| |
| |
| |
| |
| | |
Provide a new merge option, GIT_MERGE_TREE_FAIL_ON_CONFLICT, which
will stop on the first conflict and fail the merge operation with
GIT_EMERGECONFLICT.
|
|\ \
| | |
| | | |
Nanoseconds in the index: ignore for diffing
|
| |/
| |
| |
| |
| |
| |
| |
| |
| | |
Although our in-memory index contains the literal time present in the
index file, we do not read nanoseconds from disk, and thus we should
not use them in any comparisons, lest we always think our working
directory is dirty.
Guard this behind a `GIT_USE_NSECS` define for future improvement.
|
|\ \
| |/
|/| |
config: add a ProgramData level
|
| |
| |
| |
| |
| | |
This is where portable git stores its global configuration; reading it
lets us adhere to those settings even though git isn't fully installed
on the system.
|
| | |
|
|/ |
|
|\
| |
| | |
revwalk: make commit list use 64 bits for time
|
| |
| |
| |
| |
| |
| |
| |
| | |
We moved the "main" parsing to use 64 bits for the timestamp, but the
quick parsing for the revwalk did not. This means that for large
timestamps we fail to parse the time and thus fail the walk.
Move this parser to use 64 bits as well.
|
|\ \
| | |
| | | |
Preserve modes from a conflict in `git_index_insert`
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When we do not trust the on-disk mode, we use the mode of an existing
index entry. This allows us to preserve executable bits on platforms
that do not honor them on the filesystem.
If there is no stage 0 index entry, also look at conflicts to attempt
to answer this question: prefer the data from the 'ours' side, then
the 'theirs' side before falling back to the common ancestor.
|
| | |
| | | |
For most real use cases, repositories with alternates use them as main
object storage. Checking the alternate for objects before the main
repository should result in measurable speedups.
Because of this, we're changing the sorting algorithm to prioritize
alternates *in cases where two backends have the same priority*. This
means that the pack backend for the alternate will be checked before the
pack backend for the main repository *but* both of them will be checked
before any loose backends.
|
| |/
|/|
| |
| | |
In the current implementation of ODB backends, each backend is tasked
with refreshing itself after a failed lookup. This is standard Git
behavior: we want to e.g. reload the packfiles on disk in case they have
changed and that's the reason we can't find the object we're looking
for.
This behavior, however, becomes pathological in repositories where
multiple alternates have been loaded. Given that each alternate counts
as a separate backend, a miss in the main repository (which can
potentially be very frequent in cases where object storage comes from
the alternate) will result in refreshing all its packfiles before we
move on to the alternate backend where the object will most likely be
found.
To fix this, the code in `odb.c` has been refactored so as to perform the
refresh of all the backends externally, once we've verified that the
object is nowhere to be found.
If the refresh is successful, we then perform the lookup sequentially
through all the backends, skipping the ones that we know for sure
weren't refreshed (because they have no refresh API).
The on-disk pack backend has been adjusted accordingly: it no longer
performs refreshes internally.
|
| | |
|
| | |
|