path: root/run-command.c
Commit message | Author | Age | Files | Lines
* run-command: add hint when a hook is ignored [dm/run-command-ignored-hook-advise] | Damien Marié | 2017-10-10 | 1 | -0/+18
    When a hook is present but the file is not set as executable then git will ignore the hook. For now this is silent, which can be confusing. This commit adds this warning to improve the situation:

        hint: The 'pre-commit' hook was ignored because it's not set as executable.
        hint: You can disable this warning with `git config advice.ignoredHook false`

    To allow the old use-case of enabling/disabling hooks via the executable flag a new setting is introduced: advice.ignoredHook.

    Signed-off-by: Damien Marié <damien@dam.io>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
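A rough sketch of how such a check can be wired into hook lookup; the helper and variable names here (find_hook, advice_ignored_hook, advise) follow the description above, but the exact shape of the patch may differ:

    /* Sketch: warn when a hook exists but is not executable, instead of
     * silently skipping it. */
    const char *find_hook(const char *name)
    {
            static struct strbuf path = STRBUF_INIT;

            strbuf_reset(&path);
            strbuf_git_path(&path, "hooks/%s", name);
            if (access(path.buf, X_OK) < 0) {
                    /* present but not executable: hint instead of staying silent */
                    if (advice_ignored_hook && !access(path.buf, F_OK))
                            advise("The '%s' hook was ignored because "
                                   "it's not set as executable.\n"
                                   "You can disable this warning with "
                                   "`git config advice.ignoredHook false`.",
                                   path.buf);
                    return NULL;
            }
            return path.buf;
    }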
* run-command: use ALLOC_ARRAY [rs/run-command-use-alloc-array] | René Scharfe | 2017-10-03 | 1 | -1/+1
    Use the macro ALLOC_ARRAY to allocate an array. This is shorter and easier, as it automatically infers the size of elements. Patch generated with Coccinelle and contrib/coccinelle/array.cocci.

    Signed-off-by: Rene Scharfe <l.s.r@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
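For context, ALLOC_ARRAY is a thin wrapper around xmalloc that infers the element size from the destination pointer and overflow-checks the multiplication; a generic before/after illustration (the array and count names are made up):

    /* before: element size and overflow check spelled out by hand */
    arr = xmalloc(st_mult(sizeof(*arr), nr));

    /* after: ALLOC_ARRAY(x, nr) expands to roughly the line above,
     * inferring sizeof(*x) automatically */
    ALLOC_ARRAY(arr, nr);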
* Merge branch 'js/run-process-parallel-api-fix' into maint | Junio C Hamano | 2017-08-23 | 1 | -2/+2
    API fix.

    * js/run-process-parallel-api-fix:
      run_processes_parallel: change confusing task_cb convention
| * run_processes_parallel: change confusing task_cb convention [js/run-process-parallel-api-fix] | Johannes Schindelin | 2017-07-21 | 1 | -2/+2
    By declaring the task_cb parameter of type `void **`, the signature of the get_next_task method suggests that the "task-specific cookie" can be defined in that method, and the signatures of the start_failure and task_finished methods declare that parameter of type `void *`, suggesting that those methods are mere users of said cookie.

    That convention makes a lot of sense, because the tasks are pretty much dead when one of the latter two methods is called: there would be little use to reset that cookie at that point because nobody would be able to see the change afterwards.

    However, this is not what the code actually does. For all three methods, it passes the *address* of pp->children[i].data. As reasoned above, this behavior makes no sense. So let's change the implementation to adhere to the convention suggested by the signatures.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Acked-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
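A sketch of the callback signatures the message contrasts; the parameter lists are reconstructed from the description and may not match run-command.h verbatim:

    /* get_next_task may *define* the per-task cookie, hence void **. */
    typedef int (*get_next_task_fn)(struct child_process *cp,
                                    struct strbuf *out,
                                    void *pp_cb,
                                    void **pp_task_cb);

    /* start_failure and task_finished are mere *users* of the cookie,
     * hence plain void *. */
    typedef int (*start_failure_fn)(struct strbuf *out,
                                    void *pp_cb,
                                    void *pp_task_cb);

    typedef int (*task_finished_fn)(int result,
                                    struct strbuf *out,
                                    void *pp_cb,
                                    void *pp_task_cb);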
* run-command: restrict PATH search to executable files | Brandon Williams | 2017-04-25 | 1 | -1/+18
    In some situations run-command will incorrectly try (and fail) to execute a directory instead of an executable file. This was observed by having a directory called "ssh" in $PATH before the real ssh and trying to use the ssh protocol, resulting in the following:

        $ git ls-remote ssh://url
        fatal: cannot exec 'ssh': Permission denied

    It ends up being worse and run-command will even try to execute a non-executable file if it precedes the executable version of a file on the PATH. For example, if PATH=~/bin1:~/bin2:~/bin3 and there exists a directory 'git-hello' in 'bin1', a non-executable file 'git-hello' in bin2 and an executable file 'git-hello' (which prints "Hello World!") in bin3 the following will occur:

        $ git hello
        fatal: cannot exec 'git-hello': Permission denied

    This is due to only checking 'access()' when locating an executable in PATH, which doesn't distinguish between files and directories. Instead use 'is_executable()' which checks that the path is to a regular, executable file. Now run-command won't try to execute the directory or non-executable file 'git-hello':

        $ git hello
        Hello World!

    which matches what execvp(3) would have done when asked to execute git-hello with such a $PATH.

    Reported-by: Brian Hatfield <bhatfield@google.com>
    Signed-off-by: Brandon Williams <bmwill@google.com>
    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: expose is_executable function | Brandon Williams | 2017-04-25 | 1 | -0/+42
    Move the logic for 'is_executable()' from help.c to run_command.c and expose it so that callers from outside help.c can access the function. This is to enable run-command to be able to query if a file is executable in a future patch.

    Signed-off-by: Brandon Williams <bmwill@google.com>
    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
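A minimal sketch of what an is_executable() check amounts to on POSIX systems; the real function needs additional handling on Windows, which is omitted here:

    /* A path qualifies only if it is a regular file with an execute bit. */
    int is_executable(const char *name)
    {
            struct stat st;

            if (stat(name, &st) || !S_ISREG(st.st_mode))
                    return 0;   /* missing, a directory, or not a regular file */
            return st.st_mode & S_IXUSR;
    }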
* run-command: block signals between fork and execve | Eric Wong | 2017-04-20 | 1 | -0/+68
    Signal handlers of the parent firing in the forked child may have unintended side effects. Rather than auditing every signal handler we have and will ever have, block signals while forking and restore default signal handlers in the child before execve.

    Restoring default signal handlers is required because execve does not unblock signals, it only restores default signal handlers. So we must restore them with sigprocmask before execve, leaving a window when signal handlers we control can fire in the child. Continue ignoring ignored signals, but reset the rest to defaults.

    Similarly, disable pthread cancellation to future-proof our code in case we start using cancellation; as cancellation is implemented with signals in glibc.

    Signed-off-by: Eric Wong <e@80x24.org>
    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
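A simplified sketch of the block-around-fork pattern described above; argv and envp stand in for the already-prepared command, and the real patch adds error handling, keeps ignored signals ignored, and also disables pthread cancellation:

    sigset_t all, old;
    int sig;

    sigfillset(&all);
    sigprocmask(SIG_SETMASK, &all, &old);      /* parent: block everything */

    pid_t pid = fork();
    if (pid == 0) {
            /* child: put default handlers back, restore the mask, exec */
            for (sig = 1; sig < NSIG; sig++) {
                    struct sigaction sa;
                    if (!sigaction(sig, NULL, &sa) && sa.sa_handler != SIG_IGN) {
                            sa.sa_handler = SIG_DFL;
                            sigemptyset(&sa.sa_mask);
                            sa.sa_flags = 0;
                            sigaction(sig, &sa, NULL);
                    }
            }
            sigprocmask(SIG_SETMASK, &old, NULL);
            execve(argv[0], argv, envp);
            _exit(127);                        /* exec failed */
    }

    sigprocmask(SIG_SETMASK, &old, NULL);      /* parent: restore its mask */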
* run-command: add note about forking and threading | Brandon Williams | 2017-04-20 | 1 | -0/+9
    All non-Async-Signal-Safe functions (e.g. malloc and die) were removed between 'fork' and 'exec' in start_command in order to avoid potential deadlocking when forking while multiple threads are running. This deadlocking is possible when a thread (other than the one forking) has acquired a lock and didn't get around to releasing it before the fork. This leaves the lock in a locked state in the resulting process with no hope of it ever being released.

    Add a note describing this potential pitfall before the call to 'fork()' so people working in this section of the code know to only use Async-Signal-Safe functions in the child process.

    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: handle dup2 and close errors in child | Brandon Williams | 2017-04-20 | 1 | -16/+42
    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: eliminate calls to error handling functions in child | Brandon Williams | 2017-04-20 | 1 | -32/+89
    All of our standard error handling paths have the potential to call malloc or take stdio locks; so we must avoid them inside the forked child. Instead, the child only writes an 8 byte struct atomically to the parent through the notification pipe to propagate an error. All user-visible error reporting happens from the parent; even avoiding functions like atexit(3) and exit(3).

    Helped-by: Eric Wong <e@80x24.org>
    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
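A sketch of the fixed-size error report idea; the enum values and helper names are illustrative and not necessarily the ones used in the patch:

    enum child_errcode {
            CHILD_ERR_CHDIR,
            CHILD_ERR_DUP2,
            CHILD_ERR_CLOSE,
            CHILD_ERR_ENOENT,
    };

    struct child_err {
            enum child_errcode err;
            int syserr;             /* errno at the time of failure */
    };

    /* Child side: a single write() of a tiny struct is async-signal-safe
     * and, being well under PIPE_BUF, atomic. */
    static void child_die(int notify_fd, enum child_errcode err)
    {
            struct child_err buf;

            buf.err = err;
            buf.syserr = errno;
            write(notify_fd, &buf, sizeof(buf));
            _exit(127);
    }

    /* Parent side: read the struct back from the notification pipe and do
     * all the error()/die() formatting there, where malloc and stdio are safe. */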
* run-command: don't die in child when duping /dev/null | Brandon Williams | 2017-04-20 | 1 | -15/+13
    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: prepare child environment before forking | Brandon Williams | 2017-04-20 | 1 | -10/+56
    In order to avoid allocation between 'fork()' and 'exec()' prepare the environment to be used in the child process prior to forking. Switch to using 'execve()' so that the constructed child environment can be used in the exec'd process.

    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: use the async-signal-safe execv instead of execvp | Brandon Williams | 2017-04-20 | 1 | -1/+29
    Convert the function used to exec from 'execvp()' to 'execv()' as the (p) variant of exec isn't async-signal-safe and has the potential to call malloc during the path resolution it performs. Instead we simply do the path resolution ourselves during the preparation stage prior to forking. There also don't exist any portable (p) variants which also take in an environment to use in the exec'd process. This allows easy migration to using 'execve()' in a future patch.

    Also, as noted in [1], in the event of an ENOEXEC the (p) variants of exec will attempt to execute the command by interpreting it with the 'sh' utility. To maintain this functionality, if 'execv()' fails with ENOEXEC, start_command will attempt to execute the command by interpreting it with 'sh'.

    [1] http://pubs.opengroup.org/onlinepubs/009695399/functions/exec.html

    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
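A sketch of the ENOEXEC fallback in the child, assuming the path was already resolved before the fork; the names are illustrative and the real code also splices the original arguments into the shell invocation:

    /* Try the resolved path directly first. */
    execv(resolved_path, argv);

    if (errno == ENOEXEC) {
            /* Not a binary the kernel can run: retry through the shell,
             * mimicking what execvp(3) would have done. */
            char *shell_argv[] = { "/bin/sh", (char *)resolved_path, NULL };
            execv(shell_argv[0], shell_argv);
    }
    /* Still here: both execs failed; report through the notification pipe
     * and _exit(), since die() is off-limits in the child. */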
* run-command: prepare command before forking | Brandon Williams | 2017-04-20 | 1 | -20/+26
    According to [1] we need to only call async-signal-safe operations between fork and exec. Using malloc to build the argv array isn't async-signal-safe. In order to avoid allocation between 'fork()' and 'exec()' prepare the argv array used in the exec call prior to forking the process.

    [1] http://pubs.opengroup.org/onlinepubs/009695399/functions/fork.html

    Signed-off-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'jk/execv-dashed-external' | Junio C Hamano | 2017-03-24 | 1 | -1/+1
    Fix for NO_PTHREADS build.

    * jk/execv-dashed-external:
      run-command: fix segfault when cleaning forked async process
| * run-command: fix segfault when cleaning forked async process [jk/execv-dashed-external] | Jeff King | 2017-03-18 | 1 | -1/+1
    Callers of the run-command API may mark a child as "clean_on_exit"; it gets added to a list and killed when the main process dies. Since commit 46df6906f (execv_dashed_external: wait for child on signal death, 2017-01-06), we respect an extra "wait_after_clean" flag, which we expect to find in the child_process struct.

    When Git is built with NO_PTHREADS, we start "struct async" processes by forking rather than spawning a thread. The resulting processes get added to the cleanup list but they don't have a child_process struct, and the cleanup function ends up dereferencing NULL.

    We should notice this case and assume that the processes do not need to be waited for (i.e., the same behavior they had before 46df6906f).

    Reported-by: Brandon Williams <bmwill@google.com>
    Signed-off-by: Jeff King <peff@peff.net>
    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'js/mingw-hooks-with-exe-suffix' | Junio C Hamano | 2017-02-02 | 1 | -1/+7
    Names of the various hook scripts must be spelled exactly, but on Windows, an .exe binary must be named with .exe suffix; notice $GIT_DIR/hooks/<hookname>.exe as a valid <hookname> hook.

    * js/mingw-hooks-with-exe-suffix:
      mingw: allow hooks to be .exe files
| * mingw: allow hooks to be .exe files [js/mingw-hooks-with-exe-suffix] | Johannes Schindelin | 2017-01-30 | 1 | -1/+7
    Executable files in Windows need to have the extension '.exe', otherwise they do not work. Extend the hooks to not just look at the hard coded names, but also at the names extended by the custom STRIP_EXTENSION, which is defined as '.exe' in Windows.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* execv_dashed_external: wait for child on signal death | Jeff King | 2017-01-09 | 1 | -0/+19
    When you hit ^C to interrupt a git command going to a pager, this usually leaves the pager running. But when a dashed external is in use, the pager ends up in a funny state and quits (but only after eating one more character from the terminal!). This fixes it.

    Explaining the reason will require a little background. When git runs a pager, it's important for the git process to hang around and wait for the pager to finish, even though it has no more data to feed it. This is because git spawns the pager as a child, and thus the git process is the session leader on the terminal. After it dies, the pager will finish its current read from the terminal (eating the one character), and then get EIO trying to read again.

    When you hit ^C, that sends SIGINT to git and to the pager, and it's a similar situation. The pager ignores it, but the git process needs to hang around until the pager is done. We addressed that long ago in a3da882120 (pager: do wait_for_pager on signal death, 2009-01-22).

    But when you have a dashed external (or an alias pointing to a builtin, which will re-exec git for the builtin), there's an extra process in the mix. For instance, running:

        $ git -c alias.l=log l

    will end up with a process tree like:

        git (parent)
          \
           git-log (child)
             \
              less (pager)

    If you hit ^C, SIGINT goes to all of them. The pager ignores it, and the child git process will end up in wait_for_pager(). But the parent git process will die, and the usual EIO trouble happens.

    So we really want the parent git process to wait_for_pager(), but of course it doesn't know anything about the pager at all, since it was started by the child. However, we can have it wait on the git-log child, which in turn is waiting on the pager. And that's what this patch does.

    There are a few design decisions here worth explaining:

      1. The new feature is attached to run-command's clean_on_exit feature. Partly this is convenience, since that feature already has a signal handler that deals with child cleanup. But it's also a meaningful connection. The main reason that dashed externals use clean_on_exit is to bind the two processes together. If somebody kills the parent with a signal, we propagate that to the child (in this instance with SIGINT, we do propagate but it doesn't matter because the original signal went to the whole process group). Likewise, we do not want the parent to go away until the child has done so. In a traditional Unix world, we'd probably accomplish this binding by just having the parent execve() the child directly. But since that doesn't work on Windows, everything goes through run_command's more spawn-like interface.

      2. We do _not_ automatically waitpid() on any clean_on_exit children. For dashed externals this makes sense; we know that the parent is doing nothing but waiting for the child to exit anyway. But with other children, it's possible that the child, after getting the signal, could be waiting on the parent to do something (like closing a descriptor). If we were to wait on such a child, we'd end up in a deadlock. So this errs on the side of caution, and lets callers enable the feature explicitly.

      3. When we send children the cleanup signal, we send all the signals first, before waiting on any children. This is to avoid the case where one child might be waiting on another one to exit, causing a deadlock. We inform all of them that it's time to die before reaping any. In practice, there is only ever one dashed external run from a given process, so this doesn't matter much now. But it future-proofs us if other callers start using the wait_after_clean mechanism.

    There's no automated test here, because it would end up racy and unportable. But it's easy to reproduce the situation by running the log command given above and hitting ^C.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
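A sketch of the two-pass cleanup described in point 3; the list and field names are illustrative rather than the exact ones in run-command.c:

    static void cleanup_children(int sig)
    {
            struct child_to_clean *p;

            /* pass 1: tell every clean_on_exit child that it is time to die */
            for (p = children_to_clean; p; p = p->next)
                    kill(p->pid, sig);

            /* pass 2: only now reap the children that asked to be waited for */
            for (p = children_to_clean; p; p = p->next) {
                    if (p->process && p->process->wait_after_clean) {
                            pid_t ret;
                            do {
                                    ret = waitpid(p->pid, NULL, 0);
                            } while (ret < 0 && errno == EINTR);
                    }
            }
    }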
* run-command: add clean_on_exit_handler | Lars Schneider | 2016-10-17 | 1 | -4/+18
    Some processes might want to perform cleanup tasks before Git kills them due to the 'clean_on_exit' flag. Let's give them an interface for doing this. The feature is used in a subsequent patch.

    Please note that the cleanup callback is not executed if Git dies of a signal. The reason is that only "async-signal-safe" functions would be allowed to be called in that case. Since we cannot control what functions the callback will use, we will not support the case. See 507d7804 for more details.

    Helped-by: Johannes Sixt <j6t@kdbg.org>
    Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: move check_pipe() from write_or_die to run_command | Lars Schneider | 2016-10-17 | 1 | -2/+15
    Move check_pipe() to run_command and make it public. This is necessary to call the function from pkt-line in a subsequent patch. While at it, make async_exit() static to run_command.c as it is no longer used from outside.

    Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
    Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'ab/hooks' | Junio C Hamano | 2016-08-19 | 1 | -4/+1
    "git rev-parse --git-path hooks/<hook>" learned to take core.hooksPath configuration variable (introduced during 2.9 cycle) into account.

    * ab/hooks:
      rev-parse: respect core.hooksPath in --git-path
| * rev-parse: respect core.hooksPath in --git-path [ab/hooks] | Johannes Schindelin | 2016-08-16 | 1 | -4/+1
    The idea of the --git-path option is not only to avoid having to prefix paths with the output of --git-dir all the time, but also to respect overrides for specific common paths inside the .git directory (e.g. `git rev-parse --git-path objects` will report the value of the environment variable GIT_OBJECT_DIRECTORY, if set).

    When introducing the core.hooksPath setting, we forgot to adjust git_path() accordingly. This patch fixes that.

    While at it, revert the special-casing of core.hooksPath in run-command.c, as it is now no longer needed.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: add pipe_command helper | Jeff King | 2016-06-17 | 1 | -5/+147
    We already have capture_command(), which captures the stdout of a command in a way that avoids deadlocks. But sometimes we need to do more I/O, like capturing stderr as well, or sending data to stdin. It's easy to write code that deadlocks racily in these situations depending on how fast the command reads its input, or in which order it writes its output.

    Let's give callers an easy interface for doing this the right way, similar to what capture_command() did for the simple case.

    The whole thing is backed by a generic poll() loop that can feed an arbitrary number of buffers to descriptors, and fill an arbitrary number of strbufs from other descriptors. This seems like overkill, but the resulting code is actually a bit cleaner than just handling the three descriptors (because the output code for stdout/stderr is effectively duplicated, so being able to loop is a benefit).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
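A caller's-eye sketch of the new helper; the example command is made up and the parameter list is reconstructed from the description, so it may differ slightly from run-command.h:

    struct child_process cmd = CHILD_PROCESS_INIT;
    struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;
    const char *input = "some data\n";

    argv_array_pushl(&cmd.args, "gpg", "--verify", NULL);

    /* Feed stdin from a buffer and capture both stdout and stderr without
     * risking the write-side/read-side deadlock described above. */
    if (pipe_command(&cmd, input, strlen(input),
                     &out, 0,           /* capture stdout, no size hint */
                     &err, 0))          /* capture stderr, no size hint */
            die("command failed");

    /* out.buf and err.buf now hold the complete streams. */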
* Merge branch 'nd/error-errno' | Junio C Hamano | 2016-05-17 | 1 | -8/+7
    The code for warning_errno/die_errno has been refactored and a new error_errno() reporting helper is introduced.

    * nd/error-errno: (41 commits)
      wrapper.c: use warning_errno()
      vcs-svn: use error_errno()
      upload-pack.c: use error_errno()
      unpack-trees.c: use error_errno()
      transport-helper.c: use error_errno()
      sha1_file.c: use {error,die,warning}_errno()
      server-info.c: use error_errno()
      sequencer.c: use error_errno()
      run-command.c: use error_errno()
      rerere.c: use error_errno() and warning_errno()
      reachable.c: use error_errno()
      mailmap.c: use error_errno()
      ident.c: use warning_errno()
      http.c: use error_errno() and warning_errno()
      grep.c: use error_errno()
      gpg-interface.c: use error_errno()
      fast-import.c: use error_errno()
      entry.c: use error_errno()
      editor.c: use error_errno()
      diff-no-index.c: use error_errno()
      ...
| * run-command.c: use error_errno() | Nguyễn Thái Ngọc Duy | 2016-05-09 | 1 | -8/+7
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'ab/hooks' | Junio C Hamano | 2016-05-17 | 1 | -1/+4
    A new configuration variable core.hooksPath allows customizing where the hook directory is.

    * ab/hooks:
      hooks: allow customizing where the hook directory is
      githooks.txt: minor improvements to the grammar & phrasing
      githooks.txt: amend dangerous advice about 'update' hook ACL
      githooks.txt: improve the intro section
| * hooks: allow customizing where the hook directory is | Ævar Arnfjörð Bjarmason | 2016-05-04 | 1 | -1/+4
    Change the hardcoded lookup for .git/hooks/* to optionally lookup in $(git config core.hooksPath)/* instead.

    This is essentially a more intrusive version of the git-init ability to specify hooks on init time via init templates.

    The difference between that facility and this feature is that this can be set up after the fact via e.g. ~/.gitconfig or /etc/gitconfig to apply for all your personal repositories, or all repositories on the system.

    I plan on using this on a centralized Git server where users can create arbitrary repositories under /gitroot, but I'd like to manage all the hooks that should be run centrally via a unified dispatch mechanism.

    Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'jk/push-client-deadlock-fix' | Junio C Hamano | 2016-04-29 | 1 | -0/+10
    "git push" from a corrupt repository that attempts to push a large number of refs deadlocked; the thread to relay rejection notices for these ref updates blocked on writing them to the main thread, after the main thread at the receiving end notices that the push failed and decides not to read these notices and return a failure.

    * jk/push-client-deadlock-fix:
      t5504: drop sigpipe=ok from push tests
      fetch-pack: isolate sigpipe in demuxer thread
      send-pack: isolate sigpipe in demuxer thread
      run-command: teach async threads to ignore SIGPIPE
      send-pack: close demux pipe before finishing async process
| * run-command: teach async threads to ignore SIGPIPE | Jeff King | 2016-04-20 | 1 | -0/+10
    Async processes can be implemented as separate forked processes, or as threads (depending on the NO_PTHREADS setting). In the latter case, if an async thread gets SIGPIPE, it takes down the whole process. This is obviously bad if the main process was not otherwise going to die, but even if we were going to die, it means the main process does not have a chance to report a useful error message.

    There's also the small matter that forked async processes will not take the main process down on a signal, meaning git will behave differently depending on the NO_PTHREADS setting.

    This patch fixes it by adding a new flag to "struct async" to block SIGPIPE just in the async thread. In theory, this should always be on (which makes async threads behave more like async processes), but we would first want to make sure that each async process we spawn is careful about checking return codes from write() and would not spew endlessly into a dead pipe. So let's start with it as optional, and we can enable it for specific sites in future patches.

    The natural name for this option would be "ignore_sigpipe", since that's what it does for the threaded case. But since that name might imply that we are ignoring it in all cases (including the separate-process one), let's call it "isolate_sigpipe". What we are really asking for is isolation. I.e., not to have our main process taken down by signals spawned by the async process. How that is implemented is up to the run-command code.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
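A sketch of how such an isolate_sigpipe flag can be honored on the threaded side; the surrounding thread-start code is elided and the field name follows the commit message rather than a verified header:

    static void *run_thread(void *data)
    {
            struct async *async = data;

            if (async->isolate_sigpipe) {
                    /* Block SIGPIPE in this thread only, so a write() to a
                     * dead pipe fails with EPIPE instead of killing the
                     * whole process. */
                    sigset_t mask;
                    sigemptyset(&mask);
                    sigaddset(&mask, SIGPIPE);
                    if (pthread_sigmask(SIG_BLOCK, &mask, NULL))
                            die("failed to block SIGPIPE in async thread");
            }
            /* ... invoke async->proc() as before ... */
            return data;
    }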
| * Merge branch 'jk/tighten-alloc' into maint | Junio C Hamano | 2016-03-10 | 1 | -35/+25
    * jk/tighten-alloc: (23 commits)
      compat/mingw: brown paper bag fix for 50a6c8e
      ewah: convert to REALLOC_ARRAY, etc
      convert ewah/bitmap code to use xmalloc
      diff_populate_gitlink: use a strbuf
      transport_anonymize_url: use xstrfmt
      git-compat-util: drop mempcpy compat code
      sequencer: simplify memory allocation of get_message
      test-path-utils: fix normalize_path_copy output buffer size
      fetch-pack: simplify add_sought_entry
      fast-import: simplify allocation in start_packfile
      write_untracked_extension: use FLEX_ALLOC helper
      prepare_{git,shell}_cmd: use argv_array
      use st_add and st_mult for allocation size computation
      convert trivial cases to FLEX_ARRAY macros
      use xmallocz to avoid size arithmetic
      convert trivial cases to ALLOC_ARRAY
      convert manual allocations to argv_array
      argv-array: add detach function
      add helpers for allocating flex-array structs
      harden REALLOC_ARRAY and xcalloc against size_t overflow
      ...
* Merge branch 'sb/submodule-parallel-update' | Junio C Hamano | 2016-04-06 | 1 | -6/+6
    A major part of "git submodule update" has been ported to C to take advantage of the recently added framework to run download tasks in parallel.

    * sb/submodule-parallel-update:
      clone: allow an explicit argument for parallel submodule clones
      submodule update: expose parallelism to the user
      submodule helper: remove double 'fatal: ' prefix
      git submodule update: have a dedicated helper for cloning
      run_processes_parallel: rename parameters for the callbacks
      run_processes_parallel: treat output of children as byte array
      submodule update: direct error message to stderr
      fetching submodules: respect `submodule.fetchJobs` config option
      submodule-config: drop check against NULL
      submodule-config: keep update strategy around
| * run_processes_parallel: rename parameters for the callbacks | Stefan Beller | 2016-03-01 | 1 | -2/+2
    The refs code has a similar pattern of passing around 'struct strbuf *err', which is strictly used for error reporting. This is not the case here, as the strbuf is used to accumulate all the output (whether it is error or not) for the user. Rename it to 'out'.

    Suggested-by: Jonathan Nieder <jrnieder@gmail.com>
    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
| * run_processes_parallel: treat output of children as byte array | Stefan Beller | 2016-03-01 | 1 | -4/+4
    We do not want the output to be interrupted by a NUL byte, so we cannot use raw fputs. Introduce strbuf_write to avoid having long arguments in run-command.c.

    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'sb/submodule-parallel-fetch' | Junio C Hamano | 2016-03-04 | 1 | -21/+3
    Simplify the two callback functions that are triggered when the child process terminates to avoid misuse of the child-process structure that has already been cleaned up.

    * sb/submodule-parallel-fetch:
      run-command: do not pass child process data into callbacks
| * run-command: do not pass child process data into callbacks | Stefan Beller | 2016-03-01 | 1 | -21/+3
    The expected way to pass data into the callback is to pass them via the customizable callback pointer. The error reporting in default_{start_failure, task_finished} is not user friendly enough that we want to encourage using the child data for such purposes.

    Furthermore the struct child data is cleaned by the run-command API before we access them in the callbacks, leading to use-after-free situations.

    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'jk/epipe-in-async' | Junio C Hamano | 2016-02-26 | 1 | -0/+10
    Handling of errors while writing into our internal asynchronous process has been made more robust, which reduces flakiness in our tests.

    * jk/epipe-in-async:
      t5504: handle expected output from SIGPIPE death
      test_must_fail: report number of unexpected signal
      fetch-pack: ignore SIGPIPE in sideband demuxer
      write_or_die: handle EPIPE in async threads
| * write_or_die: handle EPIPE in async threads | Jeff King | 2016-02-25 | 1 | -0/+10
    When write_or_die() sees EPIPE, it treats it specially by converting it into a SIGPIPE death. We obviously cannot ignore it, as the write has failed and the caller expects us to die. But likewise, we cannot just call die(), because printing any message at all would be a nuisance during normal operations.

    However, this is a problem if write_or_die() is called from a thread. Our raised signal ends up killing the whole process, when logically we just need to kill the thread (after all, if we are ignoring SIGPIPE, there is good reason to think that the main thread is expecting to handle it).

    Inside an async thread, the die() code already does the right thing, because we use our custom die_async() routine, which calls pthread_exit(). So ideally we would piggy-back on that, and simply call:

        die_quietly_with_code(141);

    or similar. But refactoring the die code to do this is surprisingly non-trivial. The die_routines themselves handle both printing and the decision of the exit code. Every one of them would have to be modified to take new parameters for the code, and to tell us to be quiet.

    Instead, we can just teach write_or_die() to check for the async case and handle it specially. We do have to build an interface to abstract the async exit, but it's simple and self-contained. If we had many call-sites that wanted to do this die_quietly_with_code(), this approach wouldn't scale as well, but we don't. This is the only place where we do this weird exit trick.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
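A sketch of the special case this adds to write_or_die(), assuming the in_async()/async_exit() helpers mentioned elsewhere in this log; the real function's error handling may differ in detail:

    void write_or_die(int fd, const void *buf, size_t count)
    {
            if (write_in_full(fd, buf, count) < 0) {
                    if (errno == EPIPE) {
                            if (in_async())
                                    async_exit(141);   /* 128 + SIGPIPE, ends only this thread */
                            signal(SIGPIPE, SIG_DFL);
                            raise(SIGPIPE);            /* whole-process SIGPIPE death as before */
                    }
                    die_errno("write error");
            }
    }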
* Merge branch 'jk/tighten-alloc' | Junio C Hamano | 2016-02-26 | 1 | -35/+25
    Update various codepaths to avoid manually-counted malloc().

    * jk/tighten-alloc: (22 commits)
      ewah: convert to REALLOC_ARRAY, etc
      convert ewah/bitmap code to use xmalloc
      diff_populate_gitlink: use a strbuf
      transport_anonymize_url: use xstrfmt
      git-compat-util: drop mempcpy compat code
      sequencer: simplify memory allocation of get_message
      test-path-utils: fix normalize_path_copy output buffer size
      fetch-pack: simplify add_sought_entry
      fast-import: simplify allocation in start_packfile
      write_untracked_extension: use FLEX_ALLOC helper
      prepare_{git,shell}_cmd: use argv_array
      use st_add and st_mult for allocation size computation
      convert trivial cases to FLEX_ARRAY macros
      use xmallocz to avoid size arithmetic
      convert trivial cases to ALLOC_ARRAY
      convert manual allocations to argv_array
      argv-array: add detach function
      add helpers for allocating flex-array structs
      harden REALLOC_ARRAY and xcalloc against size_t overflow
      tree-diff: catch integer overflow in combine_diff_path allocation
      ...
| * prepare_{git,shell}_cmd: use argv_array | Jeff King | 2016-02-22 | 1 | -35/+25
    These functions transform an existing argv into one suitable for exec-ing or spawning via git or a shell. We can use an argv_array in each to avoid dealing with manual counting and allocation. This also makes the memory allocation more clear and fixes some leaks.

    In prepare_shell_cmd, we would sometimes allocate a new string with "$@" in it and sometimes not, meaning the caller could not correctly free it. On the non-Windows side, we are in a child process which will exec() or exit() immediately, so the leak isn't a big deal. On Windows, though, we use spawn() from the parent process, and leak a string for each shell command we run. On top of that, the Windows code did not free the allocated argv array at all (but does for the prepare_git_cmd case!). By switching both of these functions to write into an argv_array, we can consistently free the result as appropriate.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
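A sketch of what the argv_array version of prepare_shell_cmd() looks like, reconstructed from the description above; the Windows-specific shell selection and some details are omitted:

    static const char **prepare_shell_cmd(struct argv_array *out, const char **argv)
    {
            if (!argv[0])
                    die("BUG: shell command is empty");

            /* Only involve the shell if the command contains metacharacters. */
            if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) {
                    argv_array_push(out, SHELL_PATH);
                    argv_array_push(out, "-c");
                    if (!argv[1])
                            argv_array_push(out, argv[0]);
                    else
                            /* forward any extra arguments as "$@" */
                            argv_array_pushf(out, "%s \"$@\"", argv[0]);
            }

            argv_array_pushv(out, argv);
            return out->argv;
    }

The caller can then release everything with a single argv_array_clear(), which is what fixes the leaks the message describes.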
* Merge branch 'nd/clear-gitenv-upon-use-of-alias' | Junio C Hamano | 2016-01-20 | 1 | -1/+1
    d95138e6 (setup: set env $GIT_WORK_TREE when work tree is set, like $GIT_DIR, 2015-06-26) attempted to work around a glitch in alias handling by overwriting GIT_WORK_TREE environment variable to affect subprocesses when set_git_work_tree() gets called, which resulted in a rather unpleasant regression to "clone" and "init". Try to address the same issue by always restoring the environment and respawning the real underlying command when handling alias.

    * nd/clear-gitenv-upon-use-of-alias:
      run-command: don't warn on SIGPIPE deaths
      git.c: make sure we do not leak GIT_* to alias scripts
      setup.c: re-fix d95138e (setup: set env $GIT_WORK_TREE when ..
      git.c: make it clear save_env() is for alias handling only
| * run-command: don't warn on SIGPIPE deaths | Jeff King | 2015-12-29 | 1 | -1/+1
    When git executes a sub-command, we print a warning if the command dies due to a signal, but make an exception for "uninteresting" cases like SIGINT and SIGQUIT (since the user presumably just hit ^C). We should make a similar exception for SIGPIPE, because it's an expected and uninteresting return in most cases; it generally means the user quit the pager before git had finished generating all output.

    This used to be very hard to trigger in practice, because:

      1. We only complain if we see a real SIGPIPE death, not the shell-induced 141 exit code. This means that anything we run via the shell does not trigger the warning, which includes most non-trivial aliases.

      2. The common case for SIGPIPE is the user quitting the pager before git has finished generating all output. But if the user triggers a pager with "-p", we redirect the git wrapper's stderr to that pager, too. Since the pager is dead, it means that the message goes nowhere.

      3. You can see it if you run your own pager, like "git foo | head". But that only happens if "foo" is a non-builtin (so it doesn't work with "log", for example).

    However, it may become more common after 86d26f2, which teaches alias to re-exec builtins rather than running them in the same process. This case doesn't trigger (1), as we don't need a shell to run a git command. It doesn't trigger (2), because the pager is not started by the original git, but by the inner re-exec of git. And it doesn't trigger (3), because builtins are treated more like non-builtins in this case.

    Given how flaky this message already is (e.g., you cannot even know whether you will see it, as git optimizes out some shell invocations behind the scenes based on the contents of the command!), and that it is unlikely to ever provide useful information, let's suppress it for all cases of SIGPIPE.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* run-command: add an asynchronous parallel child processor | Stefan Beller | 2015-12-16 | 1 | -0/+335
    This allows us to run external commands in parallel with ordered output on stderr.

    If we run external commands in parallel we cannot pipe the output directly to our stdout/err as it would mix up. So each process's output will flow through a pipe, which we buffer. One subprocess can be directly piped to our stdout/err for a low latency feedback to the user.

    Example: Let's assume we have 5 submodules A,B,C,D,E and each fetch takes a different amount of time as the different submodules vary in size, then the output of fetches in sequential order might look like this:

        time -->
        output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

    When we schedule these submodules into maximal two parallel processes, a schedule and sample output over time may look like this:

        process 1: |---A---| |-D-| |-E-|
        process 2: |-B-| |-------C-------|
        output:    |---A---|B|---C-------|DE

    So A will be perceived as it would run normally in the single child version. As B has finished by the time A is done, we can dump its whole progress buffer on stderr, such that it looks like it finished in no time. Once that is done, C is determined to be the visible child and its progress will be reported in real time.

    So this way of output is really good for human consumption, as it only changes the timing, not the actual output.

    For machine consumption the output needs to be prepared in the tasks, by either having a prefix per line or per block to indicate whose task's output is displayed, because the output order may not follow the original sequential ordering:

        |----A----| |--B--| |-C-|

    will be scheduled to be all parallel:

        process 1: |----A----|
        process 2: |--B--|
        process 3: |-C-|
        output:    |----A----|CB

    This happens because C finished before B did, so it will be queued for output before B.

    To detect when a child has finished executing, we check interleaved with other actions (such as checking the liveliness of children or starting new processes) whether the stderr pipe still exists. Once a child closed its stderr stream, we assume it is terminating very soon, and use `finish_command()` from the single external process execution interface to collect the exit status.

    By maintaining the strong assumption of stderr being open until the very end of a child process, we can avoid other hassle such as an implementation using `waitpid(-1)`, which is not implemented in Windows.

    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
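A caller's-eye sketch of the new API; the callback parameter lists follow the description above and later commits in this log, so they may not match run-command.h exactly, and the fetch command is only an illustration:

    struct fetch_state { int next, total; };

    static int get_next_task(struct child_process *cp, struct strbuf *err,
                             void *cb, void **task_cb)
    {
            struct fetch_state *state = cb;

            if (state->next >= state->total)
                    return 0;                    /* no more tasks to hand out */

            cp->git_cmd = 1;                     /* run "git fetch --quiet ..." */
            argv_array_pushl(&cp->args, "fetch", "--quiet", NULL);
            *task_cb = NULL;                     /* no per-task cookie needed */
            state->next++;
            return 1;                            /* one more child to start */
    }

    /* Run at most two children at a time; NULL callbacks fall back to the
     * default start_failure/task_finished handlers. */
    struct fetch_state state = { 0, 5 };
    run_processes_parallel(2, get_next_task, NULL, NULL, &state);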
* Merge branch 'rs/daemon-plug-child-leak' | Junio C Hamano | 2015-11-03 | 1 | -6/+9
    "git daemon" uses "run_command()" without "finish_command()", so it needs to release resources itself, which it forgot to do.

    * rs/daemon-plug-child-leak:
      daemon: plug memory leak
      run-command: factor out child_process_clear()
| * run-command: factor out child_process_clear() | René Scharfe | 2015-11-02 | 1 | -6/+9
    Avoid duplication by moving the code to release allocated memory for arguments and environment to its own function, child_process_clear(). Export it to provide a counterpart to child_process_init().

    Signed-off-by: Rene Scharfe <l.s.r@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
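The factored-out helper is essentially the argv/env teardown that both finish_command() and start_command()'s error path need; roughly:

    void child_process_clear(struct child_process *child)
    {
            argv_array_clear(&child->args);
            argv_array_clear(&child->env_array);
    }

A caller like git-daemon, which uses run_command() without finish_command(), can then release the memory itself by calling child_process_clear() directly.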
* Merge branch 'ti/glibc-stdio-mutex-from-signal-handler' | Junio C Hamano | 2015-10-07 | 1 | -8/+17
    Allocation related functions and stdio are unsafe things to call inside a signal handler, and indeed killing the pager can cause glibc to deadlock waiting on allocation mutex as our signal handler tries to free() some data structures in wait_for_pager(). Reduce these unsafe calls.

    * ti/glibc-stdio-mutex-from-signal-handler:
      pager: don't use unsafe functions in signal handlers
| * pager: don't use unsafe functions in signal handlers [ti/glibc-stdio-mutex-from-signal-handler] | Takashi Iwai | 2015-09-04 | 1 | -8/+17
    Since the commit a3da8821208d (pager: do wait_for_pager on signal death), we call wait_for_pager() in the pager's signal handler. The recent bug report revealed that this causes a deadlock in glibc at aborting "git log" [*1*]. When this happens, the git process is left unterminated, and it can't be killed by SIGTERM but only by SIGKILL.

    The problem is that the wait_for_pager() function does more than waiting for the pager process's termination; it also does cleanups and prints errors. Unfortunately, the functions that may be used in a signal handler are very limited [*2*]. Particularly, malloc(), free() and the variants can't be used in a signal handler because they take a mutex internally in glibc. This was the cause of the deadlock above. Other than the direct calls of malloc/free, many functions calling malloc/free can't be used. strerror() is one such function.

    Also the usage of fflush() and printf() in a signal handler is bad, although it seems to be working so far. To be on the safe side, we should avoid them, too.

    This patch tries to reduce the calls of such functions in signal handlers. wait_for_signal() takes a flag and avoids the unsafe calls. Also, finish_command_in_signal() is introduced for the same reason. There the free() calls are removed, and it only waits for the children without whining at errors.

    [*1*] https://bugzilla.opensuse.org/show_bug.cgi?id=942297
    [*2*] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_03

    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Reviewed-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'jk/async-pkt-line' | Junio C Hamano | 2015-10-05 | 1 | -1/+15
    The debugging infrastructure for pkt-line based communication has been improved to mark the side-band communication specifically.

    * jk/async-pkt-line:
      pkt-line: show packets in async processes as "sideband"
      run-command: provide in_async query function
| * run-command: provide in_async query function | Jeff King | 2015-09-01 | 1 | -1/+15
    It's not easy for arbitrary code to find out whether it is running in an async process or not. A top-level function which is fed to start_async() can know (you just pass down an argument saying "you are async"). But that function may call other global functions, and we would not want to have to pass the information all the way through the call stack. Nor can we simply set a global variable, as those may be shared between async threads and the main thread (if the platform supports pthreads). We need pthread tricks _or_ a global variable, depending on how start_async is implemented.

    The callers don't have enough information to do this right, so let's provide a simple query function that does. Fortunately we can reuse the existing infrastructure to make the pthread case simple (and even simplify die_async() by using our new function).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
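A sketch of the two implementations the message alludes to: with pthreads, "am I async?" reduces to "am I not the main thread?"; without pthreads, async work runs in a forked child, so a plain flag suffices. Variable names here are illustrative:

    #ifndef NO_PTHREADS
    static pthread_t main_thread;
    static int main_thread_set;

    int in_async(void)
    {
            if (!main_thread_set)
                    return 0;       /* no async thread has been started yet */
            return !pthread_equal(main_thread, pthread_self());
    }
    #else
    static int process_is_async;    /* set in the forked async child */

    int in_async(void)
    {
            return process_is_async;
    }
    #endif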
* Merge branch 'jk/long-error-messages' | Junio C Hamano | 2015-08-25 | 1 | -15/+2
    The codepath to produce error messages had a hard-coded limit to the size of the message, primarily to avoid memory allocation while calling die().

    * jk/long-error-messages:
      vreportf: avoid intermediate buffer
      vreportf: report to arbitrary filehandles