Use a 16kb read buffer for compatibility with macOS SecureTransport.
SecureTransport `SSLRead` has the following behavior:
1. It will return _at most_ one TLS packet's worth of data, and
2. It will try to give you as much data as you asked for
This means that if you call `SSLRead` with a buffer that is smaller
than what _it_ reads (in other words, smaller than the maximum size of
a TLS packet), then it will buffer the excess data for subsequent
calls. However, because it also attempts to give you as much data as
you requested, it will perform a network read even when it already has
buffered data.
Consider our 8kb buffer and a server sending us 12kb of data on an HTTP
Keep-Alive session. Our first `SSLRead` will read the TLS packet off
the network. It will return us the 8kb that we requested and buffer the
remaining 4kb. Our second `SSLRead` call will see the 4kb that's
buffered and decide that it could give us an additional 4kb. So it will
do a network read.
But there's nothing left to read; that was the end of the data. The
HTTP server is waiting for us to provide a new request, so the `read`
system call blocks until the server eventually times out, at which
point `SSLRead` can return to us and we can make progress.
While technically correct, this is wildly inefficient. (Thanks, Tim
Apple!)
Moving to an internal buffer that is the maximum size of a TLS packet
(16kb) ensures that `SSLRead` never buffers and always returns
everything that it read (albeit decrypted).
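
For illustration, a minimal sketch of the idea (function and buffer names are hypothetical, not the actual stream code): with a buffer the size of a full TLS record, every `SSLRead` call can hand back the whole decrypted record at once and never needs a follow-up network read.

```c
#include <string.h>
#include <Security/SecureTransport.h>

#define ST_BUFFER_SIZE (16 * 1024) /* maximum size of a TLS record */

/* Hypothetical read helper: because the buffer can hold a full TLS
 * record, SSLRead never has leftover plaintext to buffer, and thus
 * never blocks on a spurious read for data that will not arrive. */
static int stransport_read(SSLContextRef ctx, void *data, size_t len,
                           size_t *bytes_read)
{
    char buffer[ST_BUFFER_SIZE];
    size_t processed = 0;
    OSStatus ret;

    ret = SSLRead(ctx, buffer, sizeof(buffer), &processed);
    if (ret != noErr && ret != errSSLWouldBlock)
        return -1;

    if (processed > len)
        processed = len; /* a real implementation keeps the rest */

    memcpy(data, buffer, processed);
    *bytes_read = processed;
    return 0;
}
```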
deps: ntlmclient: fix missing htonll symbols on FreeBSD and SunOS
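
The commit itself isn't shown here; purely as an illustration, a common shape for such a shim (assuming no platform-provided `htonll`) is:

```c
#include <stdint.h>
#include <arpa/inet.h> /* htonl */

/* Hypothetical fallback for platforms that lack htonll. */
#ifndef htonll
static inline uint64_t git_htonll(uint64_t value)
{
    static const int one = 1;

    /* Big-endian hosts store the int 1 with a leading zero byte. */
    if (*(const char *)&one == 0)
        return value;

    /* Little endian: byte-swap each half and exchange the halves. */
    return ((uint64_t)htonl((uint32_t)(value & 0xffffffff)) << 32) |
           htonl((uint32_t)(value >> 32));
}
#define htonll git_htonll
#endif
```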
In the NTLM authentication code, we accidentally use strdup(3P) and
strndup(3P) instead of our own wrappers git__strdup and git__strndup,
respectively.
Fix the issue by using our own functions.
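
The change is mechanical; a hedged sketch of the pattern (helper and header names illustrative):

```c
#include "util.h" /* git__strdup, GIT_ERROR_CHECK_ALLOC (libgit2-internal) */

/* Hypothetical helper: the wrapper honors the configured allocator
 * and sets an error message on out-of-memory, which strdup(3P) from
 * the C library does not. */
static int copy_username(char **out, const char *username)
{
    *out = git__strdup(username);
    GIT_ERROR_CHECK_ALLOC(*out);
    return 0;
}
```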
Signed-off-by: Sven Strickroth <email@cs-ware.de>
sha1_lookup: inline its only function into "pack.c"
The file "sha1_lookup.c" contains a single function `sha1_position`
only which is used only in the packfile implementation. As the function
is comparatively small, to enable the compiler to optimize better and to
remove symbol visibility, move it into "pack.c".
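
For context, the function is essentially a binary search over a sorted, fixed-stride table of SHA-1 entries; a simplified sketch (details differ from the real code):

```c
#include <string.h>

/* Binary search in a table whose entries are `stride` bytes apart,
 * each beginning with a 20-byte SHA-1. Returns the matching index,
 * or -1 - insert_position when the key is absent. */
static int sha1_position(const void *table, size_t stride,
                         unsigned lo, unsigned hi,
                         const unsigned char *key)
{
    const unsigned char *base = table;

    while (lo < hi) {
        unsigned mi = (lo + hi) / 2;
        int cmp = memcmp(base + mi * stride, key, 20);

        if (cmp == 0)
            return (int)mi;
        if (cmp > 0)
            hi = mi;
        else
            lo = mi + 1;
    }

    return -(int)lo - 1;
}
```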
Coverity fixes
OpenSSL pre-v1.1 required us to set up a locking function to properly
support multithreading. The locking callback's signature does not allow
returning an error code, so there is nothing we can do if
`git_mutex_lock` fails. To silence static analysis tools, let's just
explicitly ignore its return value by casting it to `void`.
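
A sketch of the callback in question (simplified from the OpenSSL pre-1.1 threading setup; `git_mutex` is libgit2's portable mutex):

```c
#include <openssl/crypto.h>

static git_mutex *openssl_locks; /* one per CRYPTO_num_locks() */

/* The callback must return void, so a failing git_mutex_lock cannot
 * be reported; the explicit cast documents the intentional ignore. */
static void openssl_locking_function(int mode, int n,
                                     const char *file, int line)
{
    GIT_UNUSED(file);
    GIT_UNUSED(line);

    if (mode & CRYPTO_LOCK)
        (void)git_mutex_lock(&openssl_locks[n]);
    else
        git_mutex_unlock(&openssl_locks[n]);
}
```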
When adding a new entry to our cache and an entry with the same OID
already exists, we only replace the existing entry if it is unparsed
and the new entry is parsed. Currently, we do not check the return
value of `git_oidmap_set` when replacing the existing entry. As a
result, if `git_oidmap_set` fails we will _not_ have updated the
existing entry, but will have decremented its refcount and incremented
the new entry's refcount anyway. Later on, this will likely lead to
dereferencing invalid memory.
Fix the issue by checking the return value of `git_oidmap_set`. In case
it fails, we simply keep the existing entry, even though it's unparsed.
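
A hedged sketch of the corrected update path (types and names simplified from the actual cache code):

```c
/* Only swap the refcounts once git_oidmap_set has actually replaced
 * the stored entry; on failure, the old entry stays authoritative. */
static int cache_replace(git_cache *cache, git_object *stored,
                         git_object *fresh, const git_oid *oid)
{
    if (git_oidmap_set(cache->map, oid, fresh) < 0)
        return -1; /* keep `stored`, even though it is unparsed */

    git_cached_obj_decref(stored);
    git_cached_obj_incref(fresh);
    return 0;
}
```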
Git worktrees can be locked in order to spare them from deletion; e.g.
if a worktree is absent because it is located on a removable disk, it
is a good idea to lock it. When locking such worktrees, it is possible
to give a locking reason in order to help the user later on when
inspecting the status of any such locked trees.
The function `git_worktree_is_locked` serves to read out the locking
status. It currently does not properly report errors when reading the
reason file, and callers do not expect negative return values either.
Fix this by converting callers to expect error codes and by checking
the return code of `git_futils_readbuffer`.
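
With that change, callers treat negative values as errors rather than as a lock status; a sketch of the adjusted calling convention:

```c
#include <stdio.h>
#include <git2.h>

/* Sketch: <0 is now an error (e.g. the reason file was unreadable),
 * 0 means unlocked, >0 means locked. */
static int print_lock_status(git_worktree *worktree)
{
    git_buf reason = GIT_BUF_INIT;
    int locked;

    if ((locked = git_worktree_is_locked(&reason, worktree)) < 0)
        return locked;

    if (locked > 0)
        printf("locked: %s\n", reason.ptr);
    else
        printf("not locked\n");

    git_buf_dispose(&reason);
    return 0;
}
```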
When checking whether a path is a valid repository path, we try to read
the "commondir" link file. In the process, we neither confirm that
constructing the file's path succeeded nor do we verify that reading the
file succeeded, which might cause us to verify repositories on an empty
or bogus path later on.
Fix this by checking return values. As the function to verify repos
doesn't currently support returning errors, this commit also refactors
the function to return an error code, passing validity of the repo via
an out parameter instead, and adjusts all existing callers.
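
A sketch of the refactored shape (names approximate; `lookup_commondir` stands in for the path construction and file read):

```c
#include <stdbool.h>

/* Validity travels via the out parameter; the return value is
 * reserved for hard errors such as allocation failures. */
static int is_valid_repository_path(bool *out, git_buf *repository_path,
                                    git_buf *common_path)
{
    int error;

    *out = false;

    /* Previously these failures were ignored, letting us go on to
     * "validate" an empty or bogus path. */
    if ((error = lookup_commondir(out, repository_path, common_path)) < 0)
        return error;

    return 0;
}
```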
While `git_zstream_set_input` cannot fail right now, this might change
in the future if we ever decide to check its parameters more
rigorously. Let's thus check whether its return code signals an error.
Initialization of the hashing context may fail on some systems, most
notably on Win32 via the legacy hashing context. As such, we need to
always check the error code of `git_hash_ctx_init`, which is not done
when creating a new indexer.
Fix the issue by adding checks.
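
A sketch of the added checks (struct fields approximate):

```c
/* git_hash_ctx_init can fail, e.g. when acquiring a legacy Win32
 * hashing context; propagate instead of assuming success. */
static int indexer_init_hashing(git_indexer *idx)
{
    int error;

    if ((error = git_hash_ctx_init(&idx->hash_ctx)) < 0 ||
        (error = git_hash_ctx_init(&idx->trailer_ctx)) < 0)
        return error;

    return 0;
}
```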
When queueing objects we want to push, we call `git_revwalk_hide` to
hide all objects already known to the remote from our revwalk. We do
not check its return value though, where the original intent was to
ignore the case where the pushed OID is not a known committish. As
`git_revwalk_hide` can fail due to other reasons, like out-of-memory
conditions, we should still check its return value.
Fix the issue by checking the function's return value, ignoring
errors hinting that it's not a committish. As `git_revwalk__push_commit`
currently clobbers these error codes, we need to adjust it as well in
order to make them available downstream.
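
A sketch of the fixed call site (the exact set of ignorable error codes is illustrative):

```c
/* Hide an OID the remote already has; tolerate "not a committish",
 * but propagate real failures such as out-of-memory. */
static int hide_known_oid(git_revwalk *walk, const git_oid *id)
{
    int error = git_revwalk_hide(walk, id);

    if (error == GIT_ENOTFOUND || error == GIT_EINVALIDSPEC)
        error = 0;

    return error;
}
```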
When calling `git_note_next`, we end up calling `git_iterator_advance`
but ignore its error code. The intent is that we do not want to return
an error if it returns `GIT_ITEROVER`, as we want to return that value
on the next invocation of `git_note_next`. We should still check for any
other error codes returned by `git_iterator_advance` to catch unexpected
internal errors.
Fix this by checking the function's return value, ignoring
`GIT_ITEROVER`.
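
A sketch of the adjusted loop body (simplified):

```c
/* Advance the note iterator: GIT_ITEROVER is effectively deferred,
 * since the next git_note_next call will hit it again, but any other
 * failure is now surfaced immediately. */
static int advance_note_iterator(git_iterator *iter)
{
    int error = git_iterator_advance(NULL, iter);

    if (error < 0 && error != GIT_ITEROVER)
        return error;

    return 0;
}
```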
As OpenSSL loves using uninitialized bytes as another source of entropy,
we need to mark them as defined so that Valgrind won't complain about
use of these bytes. Traditionally, we've been using the macro
`VALGRIND_MAKE_MEM_DEFINED` provided by Valgrind, but starting with
OpenSSL 1.1 the code doesn't compile anymore due to `struct SSL` having
become opaque. As such, we also can't set it as defined anymore, as we
have no way of knowing its size.
Let's change gears instead by just swapping out OpenSSL's allocator
functions with our own. The twist is that instead of calling `malloc`,
we just call `calloc` so the bytes are initialized automatically.
Besides soothing Valgrind, this approach has the benefit of being
completely agnostic of the memory sanitizer and is neatly contained in
a single place.
Note that we shouldn't do this for non-Valgrind builds: memory
functions cannot be set up for a single SSL context only, so we need to
swap them globally. Furthermore, as `CRYPTO_set_mem_functions` can only
be called once, we would prevent users of libgit2 from setting up their
own allocators.
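
A sketch of the swapped-in allocators (using OpenSSL 1.1's debug-style allocator signatures; the setup call can fail if allocations have already happened):

```c
#include <stdlib.h>
#include <openssl/crypto.h>

static void *git_openssl_malloc(size_t bytes, const char *file, int line)
{
    (void)file; (void)line;
    return calloc(1, bytes); /* zero-initialize to appease Valgrind */
}

static void *git_openssl_realloc(void *mem, size_t size,
                                 const char *file, int line)
{
    (void)file; (void)line;
    return realloc(mem, size);
}

static void git_openssl_free(void *mem, const char *file, int line)
{
    (void)file; (void)line;
    free(mem);
}

static int setup_openssl_allocators(void)
{
    /* Global and one-shot: this is why it is reserved for
     * Valgrind builds only. */
    return CRYPTO_set_mem_functions(git_openssl_malloc,
                                    git_openssl_realloc,
                                    git_openssl_free) ? 0 : -1;
}
```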
OpenSSL doesn't initialize bytes on purpose in order to generate
additional entropy. Valgrind isn't too happy about that though, causing
it to generate warnings about various uses of uninitialized bytes.
We traditionally had some infrastructure to silence these errors in our
OpenSSL stream implementation, where we invoke the Valgrind macro
`VALGRIND_MAKE_MEM_DEFINED` in various callbacks that we provide to
OpenSSL. Naturally, we only include these instructions if a
preprocessor define "VALGRIND" is set, and that in turn is only set if
passing "-DVALGRIND" to CMake. We do that in our usual Azure pipelines,
but we in fact forgot to do this in our nightly build. As a result, we
get a slew of warnings for these nightly builds, but not for our normal
builds.
To fix this, we could just add "-DVALGRIND" to our nightly builds. But
starting with commit d827b11b6 (tests: execute leak checker via CTest
directly, 2019-06-28), we do have a secondary variable that directs
whether we want to use memory sanitizers for our builds. As such, every
user wishing to use Valgrind for our tests needs to pass both options
"VALGRIND" and "USE_LEAK_CHECKER", which is cumbersome and error prone,
as can be seen from our own builds.
Let's consolidate this into a single option instead: add the
preprocessor directive if USE_LEAK_CHECKER equals "valgrind", and
remove the old "-DVALGRIND" option from our own pipelines.
In commit b9c5b15a7 (http: use the new httpclient, 2019-12-22), the HTTP
code got refactored to extract a generic HTTP client that operates
independently of the Git protocol. Part of that refactoring was the
creation of a new `git_http_request` struct that encapsulates the
generation of requests. Our Git-specific HTTP transport was converted
to use it in `generate_request`, but in the process we forgot to set up
custom headers on the `git_http_request`, and as a result we no longer
send these headers.
Fix the issue by correctly setting up the request's custom headers and
add a test to verify that we send them.
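
A hedged sketch of the fix inside `generate_request` (field names approximate):

```c
#include <string.h>

/* The refactoring built the request from scratch but dropped the
 * user-configured headers; wiring them back in restores them. */
static void generate_request(git_http_request *request,
                             http_subtransport *transport)
{
    memset(request, 0, sizeof(*request));
    request->method = GIT_HTTP_METHOD_POST;

    /* This assignment is what had been lost: */
    request->custom_headers = &transport->owner->custom_headers;
}
```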
If fetching from an anonymous remote via its URL, then the URL gets
written into the FETCH_HEAD reference. This is mainly done to give
valuable context to some commands, like for example git-merge(1), which
will put the URL into the generated MERGE_MSG. As a result, what gets
written into FETCH_HEAD may become public in some cases. This is
especially important considering that URLs may contain credentials, e.g.
when cloning 'https://foo:bar@example.com/repo' we persist the complete
URL into FETCH_HEAD and put it without any kind of sanitization into the
MERGE_MSG. This is obviously bad, as your login data has now just leaked
as soon as you do git-push(1).
When writing the URL into FETCH_HEAD, upstream git does strip
credentials first. Let's do the same by trying to parse the remote URL
as a "real" URL, removing any credentials and then re-formatting the
URL. In case this fails, e.g. when it's a file path or not a valid URL,
we just fall back to using the URL as-is without any sanitization. Add
tests to verify our behaviour.
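
A sketch of the sanitization step using libgit2's internal URL helpers (error handling trimmed):

```c
#include "net.h"    /* git_net_url, libgit2-internal */
#include "buffer.h"

/* Strip credentials before the URL is persisted to FETCH_HEAD; fall
 * back to the raw string when it does not parse as a URL (e.g. a
 * local path). */
static int sanitized_remote_url(git_buf *out, const char *url)
{
    git_net_url parsed = GIT_NET_URL_INIT;
    int error;

    if (git_net_url_parse(&parsed, url) == 0) {
        git__free(parsed.username);
        git__free(parsed.password);
        parsed.username = NULL;
        parsed.password = NULL;

        error = git_net_url_fmt(out, &parsed);
        git_net_url_dispose(&parsed);
        return error;
    }

    return git_buf_puts(out, url);
}
```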
cred: change enum to git_credential_t and GIT_CREDENTIAL_*
We avoid abbreviations where possible; rename git_cred to
git_credential.
In addition, we have standardized on a trailing `_t` for enum types,
instead of using "type" in the name. So `git_credtype_t` has become
`git_credential_t` and its members have become `GIT_CREDENTIAL` instead
of `GIT_CREDTYPE`.
Finally, the source and header files have been renamed to `credential`
instead of `cred`.
Keep the previous names and values as deprecated, and include the new
header files from the previous ones.
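
The compatibility layer boils down to typedefs and defines in the deprecated headers, along the lines of:

```c
/* Excerpt-style sketch of the shims: */
typedef git_credential git_cred;

#define GIT_CREDTYPE_USERPASS_PLAINTEXT GIT_CREDENTIAL_USERPASS_PLAINTEXT
#define GIT_CREDTYPE_SSH_KEY            GIT_CREDENTIAL_SSH_KEY
#define GIT_CREDTYPE_DEFAULT            GIT_CREDENTIAL_DEFAULT
```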
Stop returning a void for functions, future-proofing them to allow them
to fail.
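
This message repeats across the following commits, one per module; the pattern in each case is the same and looks roughly like this (a hypothetical API, not one of the actual functions):

```c
typedef struct git_widget git_widget;

/* before: void git_widget_reset(git_widget *w); */

/* after: currently always succeeds, but callers check the result, so
 * the function can start failing later without an API break. */
int git_widget_reset(git_widget *w);
```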
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Stop returning a void for functions, future-proofing them to allow them
to fail.
Disambiguate between general network problems and HTTP problems in error
codes.
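
Illustrative call sites (assuming distinct error classes such as `GIT_ERROR_NET` and `GIT_ERROR_HTTP`):

```c
/* Transport-level failure: the socket died, DNS lookup failed, ... */
git_error_set(GIT_ERROR_NET, "could not connect to server");

/* Protocol-level failure: we reached an HTTP server, but it balked. */
git_error_set(GIT_ERROR_HTTP, "unexpected HTTP status code: %d", 502);
```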
When tracing is disabled, don't let `git_trace__level` return a void,
since that can't be compared against.
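
A sketch of the compiled-out path (macro body approximate):

```c
/* When tracing is compiled out, return a real, comparable level
 * instead of a void expression. */
#ifdef GIT_TRACE
git_trace_level_t git_trace__level(void);
#else
#define git_trace__level() ((git_trace_level_t)GIT_TRACE_NONE)
#endif
```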
When we're authenticating with a connection-based authentication scheme
(NTLM, Negotiate), we need to make sure that we're still connected
between the initial GET where we did the authentication and the POST
that we're about to send. Our keep-alive session may not have stayed
alive, but more likely, some servers (namely Apache and nginx) do not
authenticate the entire keep-alive connection and may have "forgotten"
that we were authenticated.
Send a "probe" packet: an HTTP POST request to the upload-pack or
receive-pack endpoint that consists of an empty git pkt ("0000").
If we're authenticated, we'll get a 200 back. If we're not, we'll get a
401 back, and then we'll resend that probe packet with the first step of
our authentication (asking to start authentication with the given
scheme). We expect _yet another_ 401 back, with the authentication
challenge.
Finally, we will send our authentication response with the actual POST
data. This will allow us to authenticate without draining the POST data
in the initial request that gets us a 401.
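
A hedged sketch of the probe (request/response field names approximate; the client functions are the ones this series introduces):

```c
/* An empty pkt-line, the git protocol flush packet. */
static const char probe_body[] = "0000";

/* Returns the HTTP status of the probe: 200 means we are still
 * authenticated, 401 means the auth handshake must be replayed. */
static int send_probe(git_http_client *client, git_http_request *request,
                      git_http_response *response)
{
    request->method = GIT_HTTP_METHOD_POST;
    request->body = probe_body;
    request->body_len = sizeof(probe_body) - 1;

    if (git_http_client_send_request(client, request) < 0 ||
        git_http_client_read_response(response, client) < 0)
        return -1;

    return response->status;
}
```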
Untangle the notion of the http transport from the actual http
implementation. The http transport now uses the httpclient.
Allow users to opt-in to expect/continue handling when sending a POST
and we're authenticated with a "connection-based" authentication
mechanism like NTLM or Negotiate.
If the response is a 100, return to the caller (to allow them to post
their body). If the response is *not* a 100, buffer the response for
the caller.
HTTP expect/continue is generally safe, but some legacy servers
have not implemented it correctly. Require it to be opt-in.
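
A sketch of the opt-in flow (flag and field names approximate):

```c
/* Only set when authenticated with a connection-based mechanism. */
static int send_with_expect_continue(git_http_client *client,
                                     git_http_request *request,
                                     git_http_response *response)
{
    request->expect_continue = 1;

    if (git_http_client_send_request(client, request) < 0 ||
        git_http_client_read_response(response, client) < 0)
        return -1;

    /* 100: the caller may now send the body. Anything else: the
     * full (non-interim) response was buffered for the caller. */
    return response->status;
}
```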
Fully support HTTP proxies, in particular CONNECT proxies, that allow us
to speak TLS through a proxy.
Detect responses that are sent with Transfer-Encoding: chunked, and
record that information so that we can consume the entire message body.
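
A sketch of the header handling (callback shape and field names approximate):

```c
#include <strings.h> /* strcasecmp */

/* Remember chunked framing so the reader consumes the body up to the
 * terminating "0\r\n\r\n" chunk instead of trusting Content-Length. */
static int on_header(git_http_response *response,
                     const char *name, const char *value)
{
    if (strcasecmp(name, "Transfer-Encoding") == 0 &&
        strcasecmp(value, "chunked") == 0)
        response->chunked = 1;

    return 0;
}
```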
Store the last-seen credential challenges (eg, all the
'WWW-Authenticate' headers in a response message). Given some
credentials, find the best (first) challenge whose mechanism supports
these credentials. (eg, 'Basic' supports username/password credentials,
'Negotiate' supports default credentials).
Set up an authentication context for this mechanism and these
credentials. Continue exchanging challenge/responses until we're
authenticated.
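
A sketch of the challenge-selection step (struct layout approximate; `git_vector_foreach` is libgit2's iteration macro):

```c
/* Pick the first server challenge whose mechanism can consume the
 * credential we hold, e.g. Basic for USERPASS_PLAINTEXT credentials
 * or Negotiate for DEFAULT credentials. */
static git_http_auth_scheme *best_challenge(git_vector *challenges,
                                            git_credential *credential)
{
    git_http_auth_scheme *scheme;
    size_t i;

    git_vector_foreach(challenges, i, scheme) {
        if (scheme->credtypes & credential->credtype)
            return scheme;
    }

    return NULL;
}
```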
Introduce a function to format the path and query string for a URL,
suitable for creating an HTTP request.
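
A small usage sketch (assuming a formatter named `git_net_url_fmt_path`):

```c
/* "https://example.com/repo.git/info/refs?service=git-upload-pack"
 * formats to "/repo.git/info/refs?service=git-upload-pack", which is
 * exactly what belongs in the HTTP request line. */
static int request_target(git_buf *out, git_net_url *url)
{
    return git_net_url_fmt_path(out, url);
}
```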
When sending a new request, ensure that we got the entirety of the
response body. Our caller may have decided that they were done reading.
If we were not at the end of the message, this means that we need to
tear down the connection and cannot do keep-alive.
However, if the caller read all of the message, but we still have a
final end-of-response chunk signifier (ie, "0\r\n\r\n") on the socket,
then we should consider that the response was successfully completed.
If we're asked to send a new request, try to read from the socket, just
to clear out that end-of-chunk message, marking ourselves as
disconnected on any errors.
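
A hedged sketch of that drain step (`socket_read` and `is_end_of_chunk` are hypothetical stand-ins):

```c
/* Before reusing a keep-alive connection, try to clear a pending
 * "0\r\n\r\n" end-of-response chunk off the socket; anything else
 * means the previous response was not fully consumed. */
static void drain_final_chunk(git_http_client *client)
{
    char buf[32];
    int ret = socket_read(client, buf, sizeof(buf)); /* hypothetical */

    if (ret < 0 || !is_end_of_chunk(buf, (size_t)ret)) /* hypothetical */
        client->connected = 0; /* tear down, no keep-alive */
}
```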
Teach httpclient how to support chunking when POSTing request bodies.
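
For reference, HTTP/1.1 chunk framing is simple enough to sketch (the write helper is assumed, not the actual httpclient internals):

```c
#include <stdio.h>

/* Each chunk is "<hex length>\r\n<data>\r\n"; a zero-length chunk
 * ("0\r\n\r\n") terminates the request body. */
static int write_chunk(git_stream *io, const char *data, size_t len)
{
    char header[16];
    int header_len = snprintf(header, sizeof(header), "%zx\r\n", len);

    if (stream_write(io, header, (size_t)header_len) < 0 || /* assumed */
        stream_write(io, data, len) < 0 ||
        stream_write(io, "\r\n", 2) < 0)
        return -1;

    return 0;
}
```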
Introduce a new http client implementation that can GET and POST to
remote URLs.
Consumers can use `git_http_client_init` to create a new client,
`git_http_client_send_request` to send a request to the remote server
and `git_http_client_read_response` to read the response.
The http client implementation will perform the I/O with the remote
server (http or https) but does not understand the git smart transfer
protocol. This allows us to split the concerns of the http subtransport
from the actual http implementation.
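
A hedged usage sketch built from the functions named above (signatures and auxiliary names approximate):

```c
static int fetch_refs(git_net_url *url)
{
    git_http_client *client;
    git_http_request request = {0};
    git_http_response response = {0};
    int error;

    if ((error = git_http_client_init(&client, NULL)) < 0)
        return error;

    request.method = GIT_HTTP_METHOD_GET;
    request.url = url;

    if ((error = git_http_client_send_request(client, &request)) < 0 ||
        (error = git_http_client_read_response(&response, client)) < 0)
        goto done;

    /* `response` now exposes the status, headers and body state;
     * the git smart protocol is layered on top, elsewhere. */

done:
    git_http_client_free(client); /* teardown name assumed */
    return error;
}
```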
Allow users to consume a buffer by the number of bytes, not just to an
ending pointer.
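
A small before/after sketch (assuming the new call is named `git_buf_consume_bytes`):

```c
git_buf buf = GIT_BUF_INIT;
git_buf_puts(&buf, "0008ACK\n");

/* before: consume up to an end pointer */
git_buf_consume(&buf, buf.ptr + 4);

/* after: consume a byte count directly */
git_buf_consume_bytes(&buf, 4);
```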
Provide a mechanism to add a path and query string to an existing url
so that we can easily append `/info/refs?...` type url segments to a url
given to us by a user.
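
A usage sketch (assuming a helper named `git_net_url_joinpath`; query-string handling elided):

```c
/* "https://example.com/repo" + "/info/refs" yields
 * "https://example.com/repo/info/refs". */
static int service_url(git_net_url *out, git_net_url *base)
{
    return git_net_url_joinpath(out, base, "/info/refs");
}
```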
Move the redirect handling into `git_net_url` for consistency.
(Also, mark all the declarations as extern.)