Don't `cl_git_pass` in a child thread. When the assertion fails, clar
will `longjmp` to its error handler, but:
> The effect of a call to longjmp() where initialization of the jmp_buf
> structure was not performed in the calling thread is undefined.
Instead, set up an error context that threads can populate, and the
caller can check.
We want a predictable number of initializations in our multithreaded
init test, but we also want to make sure that we have _actually_ called
`git_libgit2_init` before calling `git_thread_create` (since the latter
now has a sanity check that `git_libgit2_init` has been called).
Since `git_thread_create` is internal-only, keep this sanity check.
Flip the invocation so that we `git_libgit2_init` before our thread
tests and `git_libgit2_shutdown` again afterward.
Introduce `git_thread_exit`, which will allow threads to terminate at an
arbitrary time, returning a `void *`. On Windows, this means that we
need to store the current `git_thread` in TLS, so that we can set its
`return` value when terminating.
We cannot simply use `ExitThread`, since Win32 returns `DWORD`s from
threads; we return `void *`.
Bump version number to v0.25
sortedcache: plug leaked file descriptor
curl_stream: use CURLINFO_ACTIVESOCKET if curl is recent enough
The `CURLINFO_LASTSOCKET` option has been deprecated since curl
version 7.45.0, as it may overflow the returned socket on certain
systems, most importantly on 64-bit Windows. A new option,
`CURLINFO_ACTIVESOCKET`, has been added instead, which returns a
`curl_socket_t` and is thus always large enough to hold a socket.
As we need to remain backwards compatible with curl versions older
than 7.45.0, alias `CURLINFO_ACTIVESOCKET` to `CURLINFO_LASTSOCKET`
on platforms without `CURLINFO_ACTIVESOCKET`.
Introduce a GitHub Issue Template
CHANGELOG: fill in some updates we missed
Plug a leak in the refs compressor
Repository discovery starting from files
When trying to discover a repository, we walk up the directory
structure checking whether there is a ".git" file or directory and, if
so, check its validity. But in the case where we've got a ".git"
file, we do not want to unconditionally assume that the file is
in fact a gitlink and treat it as such, as we would error out
if it is not.
Fix the issue by only treating a file as a gitlink file if its path
ends with "/.git". This allows users of the function to discover
a repository by handing in any path contained inside of a git
repository.
Use the sorted input in the tree updater
We look at whether we're trying to replace a blob with a tree during the
update phase, but we fail to look at whether we've just inserted a blob
where we're now trying to insert a tree.
Update the check to look at both places. The test for this was
previously succeeding due to the bug where we did not look at the sorted
output.
The loop is made with the assumption that the inputs are sorted and not
using it leads to bad outputs.
We do not currently use the sorted version of this input in the
function, which means we produce bad results.
Concurrency fixes for the reference db
On Windows we can find locked files even when reading a reference or the
packed-refs file. Bubble up the error in this case as well to allow
callers on Windows to retry more intelligently.
At times we may try to delete a reference which a different thread has
already taken care of.
It does not help us to check whether the file exists before trying to
unlink it since it might be gone by the time unlink is called.
Instead try to remove it and handle the resulting error if it did not
exist.
Checking the size before we open the file descriptor can lead to the
file being replaced from under us when renames aren't quite atomic, so
we can end up reading too little of the file, leading to us thinking the
file is corrupted.
The logic simply consists of retrying for as long as the library says
the data is locked, but it eventually gets through.
This allows the caller to know that the error was, e.g., due to the
packed-refs file already being locked, so they can try again later.
We can reduce the duplication by cleaning up at the beginning of the
loop, since it's something we want to do every time we continue.
There might be a few threads or processes working with references
concurrently, so fortify the code to ignore errors that come from
concurrent access and do not stop us from continuing the work.
This includes ignoring an unlinking error: either someone else removed
the file or we leave it around. In the former case the job is done, and
in the latter case, the ref is still in a valid state.
We need to save the errno, lest we clobber it in the giterr_set()
call. Also add code for reporting that a path component is missing,
which is a distinct failure mode.
In order not to undo concurrent modifications to references, we must
make sure that we only delete a loose reference if it still has the same
value as when we packed it.
This means we need to lock it and then compare the value with the one we
put in the packed file.
We can get useful information like GIT_ELOCKED out of this instead of
just -1.
We say it's going to work if you use a different repository in each
thread. Let's do precisely that in our code instead of hoping re-using
the refdb is going to work.
This test does fail currently, surfacing existing bugs.
giterr format
transports: smart: abort on early end of stream
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
When trying to receive packets from the remote, we loop until
either an error distinct from `GIT_EBUFS` occurs or until we
successfully parsed the packet. This does not honor the case
where we are looping over an already-closed socket which has no
more data, leaving us in an infinite loop if we got a bogus
packet size or if the remote hung up.
Fix the issue by returning `GIT_EEOF` when we cannot read data
from the socket anymore.