| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
We look at whether we're trying to replace a blob with a tree during the
update phase, but we fail to look at whether we've just inserted a blob
where we're now trying to insert a tree.
Update the check to look at both places. The test for this was
previously succeeding due to the bug where we did not look at the sorted
output.
|
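A minimal sketch of the corrected check, using hypothetical names rather than libgit2's tree-builder internals: before accepting a new entry, consult both the pending insertions and the existing tree.

    #include <stdbool.h>
    #include <string.h>

    typedef enum { ENTRY_BLOB, ENTRY_TREE } entry_kind;

    typedef struct {
        const char *name;
        entry_kind kind;
    } entry;

    static const entry *find_entry(const entry *list, size_t n, const char *name)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(list[i].name, name) == 0)
                return &list[i];
        return NULL;
    }

    /* Check the pending insertions first, then the existing tree; the old
     * code only ever looked at the existing tree. */
    static bool insert_would_conflict(const entry *existing, size_t n_existing,
                                      const entry *pending, size_t n_pending,
                                      const entry *incoming)
    {
        const entry *found = find_entry(pending, n_pending, incoming->name);
        if (found == NULL)
            found = find_entry(existing, n_existing, incoming->name);
        return found != NULL && found->kind != incoming->kind;
    }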
|
|
|
|
| |
The loop is written with the assumption that the inputs are sorted;
failing to honor that assumption leads to bad output.
|
|\
| |
| | |
giterr format
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | | |
transports: smart: abort on early end of stream
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When trying to receive packets from the remote, we loop until
either an error other than `GIT_EBUFS` occurs or until we
have successfully parsed the packet. This does not honor the case
where we are looping over an already closed socket which has no
more data, leaving us in an infinite loop if we got a bogus
packet size or if the remote hung up.
Fix the issue by returning `GIT_EEOF` when we cannot read data
from the socket anymore.
|
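A minimal sketch of the fixed receive loop. `GIT_EBUFS` and `GIT_EEOF` mirror libgit2's error codes; the reader and parser here are hypothetical stand-ins.

    #include <stddef.h>
    #include <sys/types.h>

    #define GIT_EBUFS -6    /* not enough data; read more */
    #define GIT_EEOF  -20   /* unexpected end of stream */

    extern ssize_t stream_read(char *buf, size_t len);  /* returns 0 on EOF */
    extern int parse_pkt(const char *buf, size_t len);  /* GIT_EBUFS if short */

    static int recv_pkt(char *buf, size_t cap, size_t *filled)
    {
        int error;

        while ((error = parse_pkt(buf, *filled)) == GIT_EBUFS) {
            ssize_t n = stream_read(buf + *filled, cap - *filled);
            if (n < 0)
                return (int)n;
            if (n == 0)
                return GIT_EEOF;  /* closed socket: stop looping forever */
            *filled += (size_t)n;
        }
        return error;
    }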
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When reading a server's reference announcements via the smart
protocol, we expect the server to send multiple flushes before
the protocol is finished. If we fail to receive new data from the
socket, we will only return an end of stream error if we have not
seen any flush yet.
This logic is flawed in that we may run into an infinite loop
when receiving a server's reference announcement with a bogus
flush packet. E.g. assume the last flush packet is changed to
not be '0000' but instead any other value. In this case, we will
still await one more flush packet and ignore the fact that we
are not receiving any data from the socket, causing an infinite
loop.
Fix the issue by always returning `GIT_EEOF` if the socket
indicates an end of stream.
|
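A sketch of the announcement loop under the same assumptions as above: even when more flush packets are still expected, a closed socket now surfaces `GIT_EEOF` instead of being ignored. The packet type and reader are illustrative.

    #define GIT_EEOF -20  /* unexpected end of stream */

    enum pkt_type { PKT_FLUSH, PKT_REF };
    struct pkt { enum pkt_type type; };

    extern int recv_pkt(struct pkt *out);  /* GIT_EEOF when the socket closes */

    static int read_announcement(int flushes_expected)
    {
        int flushes = 0;

        while (flushes < flushes_expected) {
            struct pkt p;
            int error = recv_pkt(&p);

            /* Previously this was only treated as an error before the
             * first flush had been seen. */
            if (error < 0)
                return error;

            if (p.type == PKT_FLUSH)
                flushes++;
        }
        return 0;
    }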
|\ \ \
| |_|/
|/| | |
git_repository_open_ext: fix handling of $GIT_NAMESPACE
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The existing code would set a namespace of "" (empty string) with
GIT_NAMESPACE unset. In a repository where refs/namespaces/
exists, that can produce incorrect results. Detect that case and avoid
setting the namespace at all.
Since that makes the last assignment to error conditional, and the
previous assignment can potentially get GIT_ENOTFOUND, set error to 0
explicitly to prevent the call from incorrectly failing with
GIT_ENOTFOUND.
|
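A minimal sketch of the corrected environment handling, assuming getenv-style lookup; `set_namespace` is a hypothetical setter.

    #include <stdlib.h>

    extern void set_namespace(const char *ns);

    static int apply_namespace(void)
    {
        const char *ns = getenv("GIT_NAMESPACE");

        /* Only install a namespace when the variable is set and non-empty;
         * a "" namespace would skew lookups under refs/namespaces/. */
        if (ns != NULL && *ns != '\0')
            set_namespace(ns);

        /* Report success explicitly so an earlier, harmless GIT_ENOTFOUND
         * does not leak out of the call. */
        return 0;
    }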
| | | |
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We've recently been trying to upgrade to the current master of libgit2
in Cargo, but we're unfortunately hitting a segfault in one of our
tests. This particular test is just a small smoke test that https
works (i.e. that it's configured in libgit2). It attempts to clone
from a URL which simply drops connections immediately after
they're accepted (i.e. terminates them abnormally). We expect to see a
standard error from libgit2, but unfortunately we're seeing a
segfault.
This segfault is happening inside of the `wait_for` function of
`curl_stream.c` at the line `FD_SET(fd, &errfd)`, because `fd` is
-1. This ends up doing an out-of-bounds array access that faults
the program. I tracked this -1 back to the line where it is
returned by `CURLINFO_LASTSOCKET`, and added a check to return
an error.
|
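A sketch of the added guard, using the real libcurl API: `CURLINFO_LASTSOCKET` reports -1 when no socket is available, and passing -1 to `FD_SET` indexes out of bounds.

    #include <curl/curl.h>

    /* Returns 0 and stores the socket on success, -1 on a dead connection. */
    static int get_active_socket(CURL *handle, long *out)
    {
        long sockfd = -1;

        if (curl_easy_getinfo(handle, CURLINFO_LASTSOCKET, &sockfd) != CURLE_OK)
            return -1;

        if (sockfd == -1)
            return -1;  /* connection already dropped: error out, don't select */

        *out = sockfd;
        return 0;
    }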
|\ \
| | |
| | | |
global: synchronize initialization and shutdown with pthreads
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When threading is not enabled for libgit2, we keep global state
in a simple static variable. When libgit2 is shut down, we clean
up the global state by freeing its dynamically
allocated memory. When libgit2 is built with threading, we
additionally free the thread-local storage and thus completely
remove the global state. In a non-threaded build, though, we
simply leave the global state as-is, which may result in an error
upon reinitializing libgit2.
Fix the issue by zeroing out the variable on a shutdown, thus
returning it to its initial state.
|
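A minimal sketch of the non-threaded shutdown path; the struct and its fields are illustrative, not libgit2's actual global state.

    #include <stdlib.h>
    #include <string.h>

    struct global_state {
        char *error_buf;
        int initialized;
    };

    static struct global_state state;

    static void shutdown_global(void)
    {
        free(state.error_buf);

        /* Zero the static so a later init starts from a clean slate
         * instead of seeing stale pointers and flags. */
        memset(&state, 0, sizeof(state));
    }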
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When trying to initialize and tear down global data structures
from different threads at once with `git_libgit2_init` and
`git_libgit2_shutdown`, we race around initializing data. While
we use `pthread_once` to assert that we only initialize data a
single time, we actually reset the `pthread_once_t` on the last
call to `git_libgit2_shutdown`. As resetting this variable is not
synchronized with other threads trying to access it, this is
actually racy when one thread tries to do a complete shutdown of
libgit2 while another thread tries to initialize it.
Fix the issue by creating a mutex which synchronizes `init_once`
and the library shutdown.
|
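A sketch of the synchronization scheme, assuming POSIX threads; the reference counting here is simplified compared to libgit2's internals.

    #include <pthread.h>

    static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int init_count;

    extern void global_init(void);
    extern void global_shutdown(void);

    int lib_init(void)
    {
        pthread_mutex_lock(&init_mutex);
        if (init_count++ == 0)
            global_init();
        pthread_mutex_unlock(&init_mutex);
        return init_count;
    }

    int lib_shutdown(void)
    {
        int count;

        pthread_mutex_lock(&init_mutex);
        /* Resetting the one-shot state is now serialized with lib_init,
         * so a concurrent init can no longer race the teardown. */
        if ((count = --init_count) == 0)
            global_shutdown();
        pthread_mutex_unlock(&init_mutex);
        return count;
    }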
|\ \ |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The code correctly detects that forced creation of a branch on a
nonbare repo should not be able to overwrite a branch which is
the HEAD reference. But there's no reason to prevent this on
a bare repo, and in fact, git allows this. I.e.,
git branch -f master new_sha
works on a bare repo with HEAD set to master. This change fixes
that problem, and updates tests so that, for this case, both the
bare and nonbare cases are checked for correct behavior.
|
|\ \ \
| | | |
| | | | |
add support for OpenSSL 1.1.0 for BIO filter
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
We need to include the initialisation and construction functions in all
backends, so we include this header when building against SecureTransport
and WinHTTP as well.
|
| | | |
| | | |
| | | |
| | | | |
For older versions we can fall back on the deprecated ASN1_STRING_data.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
We want to program against the interface, so recreate it when we compile
against pre-1.1 versions.
|
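A sketch of the versioned fallback the commits above describe, using the real OpenSSL version macro: program against the 1.1 accessor and map it onto the deprecated call for older releases.

    #include <openssl/asn1.h>

    #if OPENSSL_VERSION_NUMBER < 0x10100000L
    /* Pre-1.1 OpenSSL only has the non-const, deprecated accessor. */
    # define ASN1_STRING_get0_data(s) ASN1_STRING_data((ASN1_STRING *)(s))
    #endif

    static const unsigned char *string_bytes(const ASN1_STRING *str)
    {
        return ASN1_STRING_get0_data(str);
    }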
| |/ /
| | |
| | |
| | |
| | | |
Closes: https://github.com/libgit2/libgit2/issues/3959
Signed-off-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In `pack_entry_find_offset`, we try to find the offset of a
certain object in the pack file. To do so, we first check whether the
packfile has already been opened, and open it if not. Opening the
packfile is guarded with a mutex, so concurrent access to this is
in fact safe.
What is not thread-safe though is our calculation of offsets
inside the packfile. Assume two threads calling
`pack_entry_find_offset` at the same time. We first calculate the
offset and index location and only then determine if the pack has
already been opened. If so, we re-calculate the offset and index
address.
Now the case for two threads: thread 1 first calculates the
addresses and is subsequently suspended. The second thread will
now call `pack_index_open` and initialize the pack file,
calculating its addresses correctly. When the first thread is
resumed, it will see that the pack file has already been
initialized and will happily proceed with the addresses it has
already calculated before the check. As the pack file was not
initialized before, these addresses are bogus.
Fix the issue by only calculating the addresses after having
checked if the pack file is open.
|
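A minimal sketch of the reordered lookup, with illustrative types: take the mutex and open the pack first, and only then derive addresses from the index data.

    #include <pthread.h>
    #include <stddef.h>

    struct pack {
        pthread_mutex_t lock;
        int opened;
        const unsigned char *index_map;  /* valid only once opened */
        size_t header_len;
    };

    extern int pack_index_open(struct pack *p);  /* sets opened/index_map */

    static int find_offset(struct pack *p, const unsigned char **out)
    {
        int error = 0;

        pthread_mutex_lock(&p->lock);
        if (!p->opened)
            error = pack_index_open(p);
        pthread_mutex_unlock(&p->lock);

        if (error < 0)
            return error;

        /* Addresses are computed only now, so they can never be based on
         * an unopened pack's uninitialized mapping. */
        *out = p->index_map + p->header_len;
        return 0;
    }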
|\ \ \
| |_|/
|/| | |
pqueue: resolve possible NULL pointer dereference
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The `git_pqueue` struct allows limiting its total number of
entries. In this case, we simply throw away items that are
inserted into the priority queue by examining whether the new item
to be inserted has a higher priority than the current smallest
one.
This feature somewhat contradicts our pqueue implementation in
that it is allowed to not have a comparison function. In fact, we
also fail to check if the comparison function is actually set in
the case where we add a new item into a fully filled fixed-size
pqueue.
As we cannot determine which item is the smallest item in absence
of a comparison function, we fix the `NULL` pointer dereference
by simply dropping all new items which are about to be inserted
into a full fixed-size pqueue.
|
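A sketch of the guard with illustrative types: a full fixed-size queue without a comparator simply refuses new items, instead of calling through a NULL function pointer.

    #include <stddef.h>

    typedef int (*cmp_fn)(const void *a, const void *b);

    struct fixed_pqueue {
        void **items;
        size_t length, size;  /* size is the fixed capacity */
        cmp_fn cmp;
    };

    extern void evict_smallest_and_insert(struct fixed_pqueue *pq, void *item);

    static int pqueue_insert(struct fixed_pqueue *pq, void *item)
    {
        if (pq->length >= pq->size) {
            /* No comparator means no way to find the smallest entry;
             * drop the newcomer rather than dereference cmp == NULL. */
            if (pq->cmp == NULL)
                return 0;
            evict_smallest_and_insert(pq, item);
            return 0;
        }

        pq->items[pq->length++] = item;
        return 0;
    }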
|/ |
|
|\ |
|
| | |
|
|\ \
| | |
| | | |
Object parsing hardening
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When parsing a commit, we will treat all bytes left after parsing
the headers as the commit message. When no bytes are left, we
leave the commit's message uninitialized. While it is uncommon to
have a commit without a message, this is the right behavior, as Git
unfortunately allows for empty commit messages.
Given that this scenario is so uncommon, most programs acting on
the commit message will never check if the message is actually
set, which may lead to errors. To work around the error and not
lay the burden of checking for empty commit messages on the
developer, initialize the commit message with an empty string
when no commit message is given.
|
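A minimal sketch of the defensive initialization, with hypothetical parsing helpers: when no bytes follow the headers, hand out "" rather than an uninitialized message.

    #include <stdlib.h>
    #include <string.h>

    static char *parse_message(const char *buf, const char *buf_end)
    {
        size_t len;
        char *msg;

        /* Git permits empty commit messages; normalize that case to an
         * empty string so callers need not check for NULL. */
        if (buf >= buf_end)
            return strdup("");

        len = (size_t)(buf_end - buf);
        if ((msg = malloc(len + 1)) == NULL)
            return NULL;
        memcpy(msg, buf, len);
        msg[len] = '\0';
        return msg;
    }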
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When parsing tree entries from raw object data, we do not verify
that the tree entry actually has a filename as well as a valid
object ID. Fix this by asserting that the filename length is
non-zero as well as asserting that there are at least
`GIT_OID_RAWSZ` bytes left when parsing the OID.
|
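A sketch of the added validation while scanning a raw tree entry, assuming `buf` already points at the filename portion of "<mode> <name>\0<raw oid>"; the 20-byte `OID_RAWSZ` stands in for `GIT_OID_RAWSZ`.

    #include <stddef.h>
    #include <string.h>

    #define OID_RAWSZ 20

    static int parse_entry_name_and_oid(const char *buf, const char *end,
                                        const char **name_out,
                                        const unsigned char **oid_out)
    {
        const char *nul = memchr(buf, '\0', (size_t)(end - buf));

        if (nul == NULL || nul == buf)
            return -1;  /* unterminated or empty filename */

        if ((size_t)(end - (nul + 1)) < OID_RAWSZ)
            return -1;  /* not enough bytes left for the object ID */

        *name_out = buf;
        *oid_out = (const unsigned char *)(nul + 1);
        return 0;
    }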
|\ \ \
| | | |
| | | | |
Improve revision walk preparation logic
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
When we read from the list which `limit_list()` gives us, we need to check that
the commit is still interesting, as it might have become uninteresting after it
was added to the list.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
`git-rebase--merge` does not ask for time sorting, but uses the default. We now
produce the same default time-ordered output as git, so make use of that, since
it's not always the same output as our time sorting.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
It changed from implementation-defined to git's default sorting, as there are
systems (e.g. rebase) which depend on this order. Also specify more explicitly
how you can get git's "date-order".
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
After `limit_list()` we already have the list in time-sorted order, which is
what we want in the "default" case. Enqueueing into the "unsorted" list would
just reverse it, and the topological sort will do its own sorting if it needs
to.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
We've now moved to code that's closer to git and produces the output
during the preparation phase, so we no longer process the commits as
part of generating the output.
This makes a chunk of code redundant, as we're simply short-circuiting
it by detecting we've processed the commits already.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Change the condition for returning 0 to be more in line with what we write
elsewhere in the library.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
This returns the integer-cast truth value of comparing the dates. What we
want instead is a (-1, 0, 1) output depending on how they compare.
|
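A sketch of the three-way comparator that replaces the boolean comparison:

    #include <stdint.h>

    /* Returns -1, 0 or 1 instead of the truth value of (a < b). */
    static int time_cmp(int64_t a, int64_t b)
    {
        if (a < b)
            return -1;
        if (a > b)
            return 1;
        return 0;
    }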
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
After porting over the commit hiding and selection we were still left
with mismatching output due to the topological sort.
This ports the topological sorting code to make us match with our
equivalent of `--date-order` and `--topo-order` against the output
from `rev-list`.
|
| | | |
| | | |
| | | |
| | | | |
In this case, we simply behave like a vector.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
These are convenience functions to reverse the contents of a vector and a pqueue
in-place.
The pqueue function is useful in the case where we're treating it as a
LIFO queue.
|
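A minimal sketch of the shared in-place reverse over an array-backed container; `contents`/`length` are illustrative field names.

    #include <stddef.h>

    static void reverse_contents(void **contents, size_t length)
    {
        /* Swap pairs from both ends toward the middle. */
        for (size_t i = 0; i < length / 2; i++) {
            void *tmp = contents[i];
            contents[i] = contents[length - 1 - i];
            contents[length - 1 - i] = tmp;
        }
    }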
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
We had some home-grown logic to figure out which objects to show during
the revision walk, but it was rather inefficient, looking over the same
list multiple times to figure out when we had run out of interesting
commits. We now use the lists in a smarter way.
We also introduce the slop mechanism to determine when to stop
looking. When we run out of interesting objects, we continue preparing
the walk for another 5 rounds in order to make it less likely that we
miss objects in situations with complex graphs.
|
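A sketch of the slop countdown described above, with illustrative types: any remaining interesting commit resets the counter, and only several fully-uninteresting rounds end the walk.

    #define SLOP 5

    struct commit_list {
        int uninteresting;
        struct commit_list *next;
    };

    /* Returns the slop rounds remaining; 0 means stop walking. */
    static int still_interesting(const struct commit_list *list, int slop)
    {
        for (const struct commit_list *c = list; c != NULL; c = c->next)
            if (!c->uninteresting)
                return SLOP;  /* interesting work remains: reset the slop */

        /* Everything pending is uninteresting: count down before stopping. */
        return slop > 1 ? slop - 1 : 0;
    }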
|/ / / |
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
libgit2/ethomson/checkout_dont_calculate_oid_for_dirs
checkout: don't try to calculate oid for directories
|
| |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When trying to determine if we can safely overwrite an existing workdir
item, we may need to calculate the oid for the workdir item to determine
if it's identical to the old side (and eligible for removal).
We previously did this regardless of the type of entry in the workdir;
if it was a directory, we would open(2) it and then try to read(2).
The read(2) of a directory fails on many platforms, so we would treat it
as if it were unmodified and continue to perform the checkout.
On FreeBSD, you _can_ read(2) a directory, so this pattern failed. We
would calculate an oid from the data read and determine that the
directory was modified and would therefore generate a checkout conflict.
This reliance on read(2) is silly (and was most likely accidentally
giving us the behavior we wanted); we should be explicit about the
directory test.
|
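A sketch of the explicit test, assuming stat(2) data for the workdir item (symlink handling elided for brevity): directories are skipped before any open(2)/read(2)-based hashing is attempted.

    #include <sys/stat.h>

    /* Only regular files have blob content worth hashing; relying on
     * read(2) failing for directories broke on FreeBSD. */
    static int should_calculate_oid(const struct stat *st)
    {
        return S_ISREG(st->st_mode);
    }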
| | | |
|