| Commit message | Author | Age | Files | Lines |
In #5118 we removed the double-underscore to make it a normally-named public
function. However, this function is not interesting outside of the library,
and it takes up a name that could be put to better use.
Remove the single-underscore version, as we have not done any releases with it.
config_file: refresh when creating an iterator
There was a bug that caused us to not re-read modified configuration files
when using `git_config_iterator`. This bug also impacted `git_remote_list`,
which thus failed to provide an up-to-date list of remotes. Add a test suite
remote::list with a single test that verifies we do the right thing.
When creating a new iterator for a config file backend, we should always make
sure that we're up to date by calling `config_refresh`. Otherwise, we might
not notice when another process has modified the configuration file and would
thus report outdated values.
Add two tests to config::stress that verify that we get up-to-date values
when reading configuration entries via `git_config_iterator`.
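For illustration, a minimal sketch of what a refreshing iterator constructor can look like; `diskfile_backend` and `config_file_iterator_alloc` are hypothetical stand-ins for the backend internals, not the actual libgit2 names.

```c
/* Sketch only: refresh on-disk state before handing out an iterator.
 * diskfile_backend and config_file_iterator_alloc are hypothetical names. */
static int config_iterator_new(git_config_iterator **out, git_config_backend *backend)
{
	diskfile_backend *b = (diskfile_backend *) backend;
	int error;

	/* Re-parse the file if it changed on disk since we last read it. */
	if ((error = config_refresh(backend)) < 0)
		return error;

	return config_file_iterator_alloc(out, b);
}
```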
When calling `config_refresh` on a read-only configuration file backend, we
will segfault when comparing the file's timestamp due to `path` being
uninitialized. As a read-only snapshot should not be refreshed anyway and is
supposed to stay consistent, we can simply return early when `config_refresh`
is called on a read-only snapshot.
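A minimal sketch of the early return, assuming a `readonly` flag on the backend header; the struct layout is illustrative rather than the exact libgit2 definition.

```c
/* Illustrative sketch: bail out before touching `path` on read-only snapshots. */
static int config_refresh(git_config_backend *cfg)
{
	diskfile_header *h = (diskfile_header *) cfg;   /* hypothetical layout */

	/* Read-only snapshots have no backing file path and are supposed to
	 * stay consistent, so there is nothing to refresh. */
	if (h->readonly)
		return 0;

	/* ... stat the file, compare timestamps, re-parse if it changed ... */
	return 0;
}
```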
azure: drop powershell
On Win32 builds, the PID file created by git-daemon contained an invalid PID
that we were not able to kill afterwards. Somehow, it seems like the contained
PID was wrapped in braces. Consequently, kill(1) failed and thus caused the
build to error.
Fix this by directly grabbing the PID of the spawned git-daemon process.
On Win32 build hosts, we do not have an SSH daemon readily available and
thus cannot perform the SSH tests. Let's skip the tests to not let Azure
Pipelines fail.
Right now, we maintain semantically equivalent build scripts in
both Bash and PowerShell to support both Windows and non-Windows
hosts. Azure Pipelines supports Bash on Windows, too, via Git for
Windows, and as such it's not really required to maintain the
PowerShell scripts at all.
Remove them to reduce our own maintenance burden.
Until now, we always had the CC variable defined in the build.sh
pipeline. But as we're about to migrate the Windows jobs to Bash as
well, those will not have CC defined, and thus we cannot use "$CC" to
determine the compiler version.
Fix this by only executing "$CC" if the variable is set.
We currently specify the CMake generator as part of the CMAKE_OPTIONS
variable. This is fine in the current setup, but during the conversion
to drop PowerShell scripts this will prove problematic for all
generators that have spaces in their names due to quoting issues.
Convert to use an explicit CMAKE_GENERATOR variable that makes it easier
to get quoting right.
We're about to phase out our PowerShell scripts, as Azure
Pipelines does in fact support Bash scripts on all platforms. As
a preparatory step, let's replace our MinGW setup script with a
Bash script.
Azure Pipelines supports Bash tasks on Windows hosts because Git for Windows
is always available there. To support this, the Git for Windows directory is
added to the PATH environment variable to make the Bash shell available for
execution. Unfortunately, this breaks CMake with the MinGW generator, as it
has sanity checks to verify that no bash executable is in the PATH. So we can
either remove Git for Windows from the PATH, but then we're unable to execute
Bash jobs; or we can add it to the PATH, but then we're unable to execute
CMake with the MinGW generator.
Let's re-model how we set the PATH environment variable. Instead of setting up
PATH for the complete build job, we now set a variable "BUILD_PATH" for
the job. This variable is only used when executing CMake, so that CMake
encounters a sanitized PATH environment without GfW's bash shell.
Since we have migrated to Azure Pipelines, we have deprecated and
subsequently removed all infrastructure for AppVeyor and
Travis. Thus it doesn't make a lot of sense to keep the split
between the "ci/" and "azure-pipelines/" directories anymore, as
"azure-pipelines/" is essentially our only CI.
Move all CI scripts into the "azure-pipelines/" directory to have
everything centrally located and to remove clutter from the
top-level directory.
Right now, we have an awful hack in our test CI setup that extracts the
test command from CTest's output and then prepends the leak checker.
This is dependent on non-machine-parseable output from CTest and also
breaks on various occasions, for example when the current path contains
spaces or backslashes. Both conditions are easily triggered on Win32
systems, and in fact they do break our Azure Pipelines builds.
Remove the awful hack in favour of a new CMake build option
"USE_LEAK_CHECKER". If specifying e.g. "-DUSE_LEAK_CHECKER=valgrind",
we will set up all tests to be run under valgrind. This way, we can
again simply execute ctest without having to rely on evil sorcery.
As the different test suites for our CI are mostly defined via CMake, it's
hard to run those tests with a summary file path, as that would require us to
add another parameter to all unit tests. And as we do not want to
unconditionally run unit tests with a summary file, we would have to add
yet another CMake build parameter for test execution, which is ugly.
Instead, implement a way to provide a summary file path via the
environment.
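A minimal sketch of how a test harness might pick up such a path from the environment; the variable name "CLAR_SUMMARY" is an assumed example used for illustration, not a confirmed interface.

```c
#include <stdlib.h>

/* Sketch: resolve the summary file path from the environment, if any.
 * "CLAR_SUMMARY" is an assumed name used for illustration only. */
static const char *summary_file_path(void)
{
	const char *path = getenv("CLAR_SUMMARY");

	/* Write no summary file when the variable is unset or empty. */
	return (path && *path) ? path : NULL;
}
```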
Instead of having to find the fuzzer executables in our Azure test
scripts, provide test targets for each of our fuzzers that will
run them with the correct paths.
fuzzer: use futils instead of fileops
w32: fix unlinking of directory symlinks
On most platforms it's fine to create symlinks to nonexistent files. Not
so on Windows, where the type of a symlink (file or directory) needs to
be set at creation time. So depending on whether the target file exists
or not, we may end up with different symlink types. This creates a
problem when performing checkouts, where we simply iterate over all blobs
that need to be updated without treating symlinks specially. If the
target file of the symlink is going to be checked out after the symlink
itself, then the symlink will be created as a directory symlink and not
as a file symlink.
Fix the issue by iterating over blobs twice: once to perform postponed
deletions and updates to non-symlink blobs, and once to perform updates
to symlink blobs.
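The ordering can be sketched as follows; `checkout_data`, `items` and `checkout_blob` are hypothetical stand-ins for the checkout internals, and only the two-pass ordering is the point.

```c
#include <sys/stat.h>   /* S_ISLNK */

/* Hypothetical sketch of the two-pass ordering described above. */
static int apply_updates(checkout_data *data)
{
	size_t i;
	int error;

	/* Pass 1: deletions and regular blobs, so that symlink targets exist
	 * before any symlink pointing at them is created. */
	for (i = 0; i < data->nitems; i++) {
		if (S_ISLNK(data->items[i].mode))
			continue;
		if ((error = checkout_blob(data, &data->items[i])) < 0)
			return error;
	}

	/* Pass 2: symlinks, now that Windows can derive the correct type
	 * (file vs. directory) from the existing target. */
	for (i = 0; i < data->nitems; i++) {
		if (!S_ISLNK(data->items[i].mode))
			continue;
		if ((error = checkout_blob(data, &data->items[i])) < 0)
			return error;
	}

	return 0;
}
```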
When creating a symlink on Windows, one needs to tell Windows whether
the symlink should be a file or a directory symlink. To determine which
flag to pass, we call `GetFileAttributesW` on the target file to see
whether it is a directory and then pass the flag accordingly. The
problem, though, is that if we create a symlink with a relative target
path, we check that relative path while not necessarily being inside the
directory where the symlink is to be created. Thus, getting its
attributes will either fail or return attributes of the wrong target.
Fix this by resolving the target path relative to the directory in which
the symlink is to be created.
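A minimal sketch of the resolution step, assuming a relative target and backslash-separated paths; this is not the libgit2 implementation, merely the idea of querying "<directory of link>\<target>" instead of the raw relative path.

```c
#include <windows.h>
#include <wchar.h>

/* Sketch: resolve the relative target against the directory that will
 * contain the symlink before asking Windows for the target's attributes. */
static DWORD symlink_target_attributes(const wchar_t *link_path, const wchar_t *relative_target)
{
	wchar_t resolved[MAX_PATH];
	const wchar_t *sep = wcsrchr(link_path, L'\\');
	size_t dirlen = sep ? (size_t)(sep - link_path) + 1 : 0;

	if (dirlen + wcslen(relative_target) + 1 > MAX_PATH)
		return INVALID_FILE_ATTRIBUTES;

	/* Build "<directory of link>\<relative target>". */
	wcsncpy(resolved, link_path, dirlen);
	wcscpy(resolved + dirlen, relative_target);

	/* A set FILE_ATTRIBUTE_DIRECTORY bit tells us to pass
	 * SYMBOLIC_LINK_FLAG_DIRECTORY to CreateSymbolicLinkW later on. */
	return GetFileAttributesW(resolved);
}
```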
Add two more tests to verify that we're not deleting symlink targets,
but the symlinks themselves. Furthermore, convert several `cl_skip`s on
Win32 to conditional skips depending on whether the clar sandbox
supports symlinks or not. Windows is grown up now and may allow
unprivileged symlinks if the machine has been configured accordingly.
Several calls to `p_stat` and `p_close` do not verify whether they actually
succeeded. As these functions _may_ fail, and as we also want to make sure
that we're not doing anything dumb, let's check them, too.
When deleting a symlink on Windows, the way to delete it depends on
whether it is a directory symlink or a file symlink: in the first case,
we need to use `RemoveDirectory`, in the second `DeleteFile`. Right now,
`p_unlink` will only ever try to use `DeleteFile`, though, and thus fails
to remove directory symlinks. This mismatches how unlink(3P) is expected
to behave, as it shall remove any symlink regardless of whether it is a
file or a directory symlink.
In order to correctly unlink a symlink, we thus need to check what kind
of file it is. If we were to first query the file attributes of every
file upon calling `p_unlink`, we would penalize the common case, though.
Instead, we first try to delete the file with `DeleteFile`, and only if
the error returned is `ERROR_ACCESS_DENIED` do we query the file
attributes and determine whether it is a directory symlink, in which case
we use `RemoveDirectory` instead.
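The fallback strategy can be sketched with plain Win32 calls; this illustrates the approach described above rather than the exact `p_unlink` code.

```c
#include <windows.h>

/* Sketch: try the cheap DeleteFileW first, query attributes only on denial. */
static int unlink_w32(const wchar_t *path)
{
	if (DeleteFileW(path))
		return 0;

	/* Directory symlinks make DeleteFileW fail with ERROR_ACCESS_DENIED;
	 * detect that case and remove them with RemoveDirectoryW instead. */
	if (GetLastError() == ERROR_ACCESS_DENIED) {
		DWORD attrs = GetFileAttributesW(path);

		if (attrs != INVALID_FILE_ATTRIBUTES &&
		    (attrs & FILE_ATTRIBUTE_REPARSE_POINT) &&
		    (attrs & FILE_ATTRIBUTE_DIRECTORY) &&
		    RemoveDirectoryW(path))
			return 0;
	}

	return -1;
}
```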
When initializing a repository, we need to check whether its working
directory supports symlinks to correctly set the initial value of the
"core.symlinks" config variable. The code to check the filesystem is
reusable in other parts of our codebase, like for example in our tests
to determine whether certain tests can be expected to succeed or not.
Extract the code into a new function `git_path_supports_symlinks` to
avoid duplicate implementations. Remove a duplicate implementation in
the repo test helper code.
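As a usage illustration, a test can now guard symlink-specific assertions as shown below; the test name is made up and the exact call sites in libgit2 may differ.

```c
#include "clar_libgit2.h"   /* clar test harness used by libgit2's tests */

/* Hypothetical test showing the helper used as a conditional skip. */
void test_example__symlink_behavior(void)
{
	if (!git_path_supports_symlinks(clar_sandbox_path()))
		cl_skip();

	/* ... assertions that require a symlink-capable sandbox ... */
}
```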
Our file utils functions all have a "futils" prefix, e.g.
`git_futils_touch`. One would thus naturally guess that their
declarations and implementations live in "futils.h" and "futils.c",
respectively, but in fact they live in "fileops.h" and "fileops.c".
Rename the files to match expectations.
patch_parse: fix segfault due to line containing static contents
With commit dedf70ad2 (patch_parse: do not depend on parsed buffer's
lifetime, 2019-07-05), all lines of the patch are allocated with
`strdup` to make the lifetime of the parsed patch independent of the
buffer that is currently being parsed. In commit b08932824 (patch_parse:
ensure valid patch output with EOFNL, 2019-07-11), we introduced another
code location where we add lines to the parsed patch. But as that one
was implemented via a separate pull request, it wasn't converted to use
`strdup` as well. As a consequence, we cause a segfault when trying to
deallocate the potentially static buffer that now backs some of the
lines.
Use `git__strdup` to fix the issue.
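A small sketch of the pattern: the line content is duplicated so the parsed patch owns every line and can free them uniformly. The helper is simplified and hypothetical; `git__strdup` is libgit2's allocating strdup wrapper.

```c
#include <string.h>
/* Assumes libgit2 headers for git_diff_line and git__strdup. */

static int set_line_content(git_diff_line *line, const char *content)
{
	char *copy = git__strdup(content);

	if (copy == NULL)
		return -1;                /* out of memory */

	/* The patch now owns the memory, so freeing it later can never touch
	 * a static or caller-owned buffer. */
	line->content = copy;
	line->content_len = strlen(copy);

	return 0;
}
```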
ignore: fix determining whether a shorter pattern negates another
When computing whether we need to store a negative pattern, we iterate
through all previously known patterns and check whether the negative
pattern undoes any of the previous ones. In doing so we call `wildmatch`
and check its return value for negative error codes. If there was a
negative return, we abort and bubble that error up to the caller.
In fact, this check for negative values stems from the time when we
still used `fnmatch` instead of `wildmatch`. For `fnmatch`, negative
values indicate a "real" error, while for `wildmatch` a negative value
may be returned if the matching was prematurely aborted. A premature
abort may, for example, happen when the pattern is shorter than the
haystack and only matches a prefix of it. Returning an error in that
case is the wrong thing to do.
Fix the code to compare for equality with `WM_MATCH` only. Negative
values returned by `wildmatch` are perfectly fine and should thus be
ignored. Add a test that verifies we do not see the error.
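In code, the corrected check boils down to treating anything other than `WM_MATCH` as a non-match. This is a simplified sketch; the real ignore-handling code carries more context around the call.

```c
#include <stdbool.h>
#include "wildmatch.h"   /* vendored wildmatch used by libgit2 */

/* Only an exact WM_MATCH counts; WM_NOMATCH and the negative abort codes
 * returned by wildmatch() are all treated as "does not negate". */
static bool negates_previous_pattern(const char *negative, const char *previous)
{
	return wildmatch(negative, previous, WM_PATHNAME) == WM_MATCH;
}
```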
patch_parse: handle missing newline indicator in old file
When either the old or the new file contents have no newline at the end
of the file, git-diff(1) will print a "\ No newline at end of file"
indicator. While we correctly handle this in the case where the new file
has the indicator, we fail to parse patches where the old file is
missing a newline at EOF.
Fix this bug by handling missing newline indicators in the old file as
well. Add tests to verify that we can parse such files.
The patch ID is supposed to be mostly context-insensitive and
thus only includes added or deleted lines. As such, we shouldn't honor
end-of-file-without-newline markers in diffs.
Ignore such lines to fix how we compute the patch ID for such diffs.
patch_parse: do not depend on parsed buffer's lifetime
When parsing a patch from a buffer, we let the patch lines point into
the original buffer. While this is an efficient use of resources, it also
ties the lifetime of the parsed patch to that of the parsed buffer. As
this behaviour is not documented anywhere in our API, it is very
surprising to its users.
Untie the lifetimes by duplicating the lines into the parsed patch. Add a
test that verifies that the lifetimes are indeed independent of each other.
sha1: fix compilation of WinHTTP backend
We currently have no job that compiles libgit2 with the WinHTTP backend
for SHA1. Due to this, a compile error was introduced and went unnoticed
for several months. Change the x86 MSVC job to use the HTTPS backend for
SHA1; the x86 job was chosen for no particular reason.
In commit bbf034ab9 (hash: move `git_hash_prov` into Win32 backend,
2019-02-22), the structure name of `git_hash_prov` was removed in
favour of its typedef'ed name. But as we have no CI job that compiles
with the WinHTTP hashing backend right now, it went unnoticed that the
implementation using this struct wasn't changed correctly.
Fix the struct type to make it compile again.
When selecting the SHA1 backend, we only add the respective C
implementation of the selected backend to the build. But since commit
bd48bf3fb (hash: introduce source files to break include circles,
2019-06-14), we have separate headers and compilation units for all
hashes. By not listing the headers, they may not be taken into account
when computing whether a file needs to be recompiled, and they also will
not be displayed in IDEs.
Add the header files to fix this problem.
repository: do not initialize HEAD if it's provided by templates
When using templates to initialize a git repository, git-init(1)
will copy over all contents of the template directory. These are
preferred over the default ones created by git-init(1). While we mostly
do the same, there is the exception of "HEAD": we do copy over the
template's HEAD file, but afterwards we immediately re-initialize its
contents with either the default "ref: refs/heads/master" or the init
options' `initial_head` field.
Let's fix the inconsistency with upstream git-init(1) by not overwriting
the template HEAD, but only if the user hasn't set `opts.initial_head`.
If the `initial_head` field has been supplied, we should use it
regardless of whether the template contained a HEAD file or not. Add
tests to verify we correctly use the template directory's HEAD file and
that `initial_head` overrides the template.
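The intended precedence can be sketched as follows; `write_head_ref` and `template_provided_head` are hypothetical helpers used only to make the decision order explicit.

```c
/* Sketch of the precedence: explicit option > template HEAD > built-in default. */
static int setup_head(git_repository *repo, const git_repository_init_options *opts)
{
	if (opts->initial_head && *opts->initial_head)
		return write_head_ref(repo, opts->initial_head);

	if (template_provided_head(repo))
		return 0;   /* keep the HEAD file copied from the template */

	return write_head_ref(repo, "refs/heads/master");
}
```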
Update `git_repository_init_ext` to use our typical style of error
handling. The function had multiple statements that didn't `goto out`
immediately on error, but instead deferred the jump by combining later
calls with `if` statements.
The error handling in `git_repository_create_head` completely swallows
all error codes. While probably not too much of a problem, this also
violates our usual coding style.
Refactor the code to use a local `error` variable with the typical `goto
out` statements.
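For reference, the error-handling style referred to here looks roughly like the sketch below; the function and the `write_ref_file` helper are hypothetical, and the point is the single `error` variable with a common `out:` cleanup label.

```c
/* Assumes libgit2's internal buffer API (git_buf); write_ref_file is hypothetical. */
static int create_head(git_repository *repo, const char *ref_name)
{
	git_buf path = GIT_BUF_INIT;
	int error;

	if ((error = git_buf_joinpath(&path, git_repository_path(repo), "HEAD")) < 0)
		goto out;

	if ((error = write_ref_file(path.ptr, ref_name)) < 0)
		goto out;

out:
	git_buf_dispose(&path);
	return error;
}
```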
All tests in repo::template share a common pattern of first setting up
templates and then setting up the repository that makes use of those
templates via several init options. Refactor this pattern into two
functions `setup_templates` and `setup_repo` that handle most of that
logic, making it easier to spot what a test actually wants to check.
Furthermore, this also refactors how we clean up after the tests.
Previously, it was a combination of manually calling
`cl_fixture_cleanup` and `cl_set_cleanup`, which really is kind of hard
to read. This commit refactors this to instead provide the cleanup
parameters in the setup functions. All cleanups are then performed in
the suite's cleanup function.
The repo::template test suite makes use of quite a few local variables
that could be consolidated. Do so to make the code easier to read.
There's quite a lot of supporting code for our templates and they are an
obvious standalone feature. Thus, let's extract those tests into their
own suite to also make refactoring of them easier.