path: root/run-command.h
Commit history for run-command.h, newest first. Each entry shows the commit subject, author, date, and this file's diffstat (+added/-removed lines), followed by the commit message; entries indented under a merge came in via that merged topic branch.
* run-command: add clean_on_exit_handler (Lars Schneider, 2016-10-17, +2/-0)

  Some processes might want to perform cleanup tasks before Git kills them due to the 'clean_on_exit' flag. Let's give them an interface for doing this. The feature is used in a subsequent patch.

  Please note that the cleanup callback is not executed if Git dies of a signal. The reason is that only "async-signal-safe" functions would be allowed to be called in that case. Since we cannot control what functions the callback will use, we will not support the case. See 507d7804 for more details.

  Helped-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

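A minimal usage sketch, assuming the new member is the clean_on_exit_handler callback named in the subject; the helper command, file name, and function names are illustrative:

    #include "cache.h"
    #include "run-command.h"

    /* called by run-command's cleanup code before the child is killed */
    static void remove_scratch_file(struct child_process *child)
    {
        unlink("scratch-file");    /* illustrative cleanup task */
    }

    static void spawn_helper(void)
    {
        struct child_process child = CHILD_PROCESS_INIT;

        argv_array_push(&child.args, "long-running-helper"); /* hypothetical command */
        child.clean_on_exit = 1;    /* kill the child when git exits */
        child.clean_on_exit_handler = remove_scratch_file;
        if (start_command(&child))
            die("unable to start helper");
    }
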
* run-command: move check_pipe() from write_or_die to run_command (Lars Schneider, 2016-10-17, +1/-1)

  Move check_pipe() to run_command and make it public. This is necessary to call the function from pkt-line in a subsequent patch. While at it, make async_exit() static to run_command.c as it is no longer used from outside.

  Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
  Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

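For context, a sketch of roughly what check_pipe() does once it lives in run-command.c; treat the exact body as an approximation reconstructed from the write_or_die()/async entries in this log:

    void check_pipe(int err)
    {
        if (err == EPIPE) {
            if (in_async())
                async_exit(141);   /* quietly end only the async thread */

            signal(SIGPIPE, SIG_DFL);
            raise(SIGPIPE);
            /* should never happen, but just in case... */
            exit(141);
        }
    }
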
* run-command: add pipe_command helper (Jeff King, 2016-06-17, +24/-7)

  We already have capture_command(), which captures the stdout of a command in a way that avoids deadlocks. But sometimes we need to do more I/O, like capturing stderr as well, or sending data to stdin. It's easy to write code that deadlocks racily in these situations depending on how fast the command reads its input, or in which order it writes its output. Let's give callers an easy interface for doing this the right way, similar to what capture_command() did for the simple case.

  The whole thing is backed by a generic poll() loop that can feed an arbitrary number of buffers to descriptors, and fill an arbitrary number of strbufs from other descriptors. This seems like overkill, but the resulting code is actually a bit cleaner than just handling the three descriptors (because the output code for stdout/stderr is effectively duplicated, so being able to loop is a benefit).

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

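A usage sketch, assuming the helper takes the command, an optional input buffer, and optional strbufs (with size hints) for stdout and stderr; the filter command and function name are illustrative:

    #include "cache.h"
    #include "run-command.h"

    static int run_filter(const char *input, size_t len)
    {
        struct child_process cmd = CHILD_PROCESS_INIT;
        struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;
        int ret;

        argv_array_push(&cmd.args, "some-filter"); /* hypothetical command */
        /* feed stdin, capture stdout and stderr, all without deadlocking */
        ret = pipe_command(&cmd, input, len, &out, 0, &err, 0);
        if (ret)
            warning("filter failed: %s", err.buf);
        strbuf_release(&out);
        strbuf_release(&err);
        return ret;
    }
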
* Merge branch 'jk/push-client-deadlock-fix' (Junio C Hamano, 2016-04-29, +1/-0)

  "git push" from a corrupt repository that attempts to push a large number of refs deadlocked; the thread to relay rejection notices for these ref updates blocked on writing them to the main thread, after the main thread at the receiving end notices that the push failed and decides not to read these notices and return a failure.

  * jk/push-client-deadlock-fix:
    t5504: drop sigpipe=ok from push tests
    fetch-pack: isolate sigpipe in demuxer thread
    send-pack: isolate sigpipe in demuxer thread
    run-command: teach async threads to ignore SIGPIPE
    send-pack: close demux pipe before finishing async process

  * run-command: teach async threads to ignore SIGPIPE (Jeff King, 2016-04-20, +1/-0)

    Async processes can be implemented as separate forked processes, or as threads (depending on the NO_PTHREADS setting). In the latter case, if an async thread gets SIGPIPE, it takes down the whole process. This is obviously bad if the main process was not otherwise going to die, but even if we were going to die, it means the main process does not have a chance to report a useful error message.

    There's also the small matter that forked async processes will not take the main process down on a signal, meaning git will behave differently depending on the NO_PTHREADS setting.

    This patch fixes it by adding a new flag to "struct async" to block SIGPIPE just in the async thread. In theory, this should always be on (which makes async threads behave more like async processes), but we would first want to make sure that each async process we spawn is careful about checking return codes from write() and would not spew endlessly into a dead pipe. So let's start with it as optional, and we can enable it for specific sites in future patches.

    The natural name for this option would be "ignore_sigpipe", since that's what it does for the threaded case. But since that name might imply that we are ignoring it in all cases (including the separate-process one), let's call it "isolate_sigpipe". What we are really asking for is isolation. I.e., not to have our main process taken down by signals spawned by the async process. How that is implemented is up to the run-command code.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

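A sketch of how a call site could opt in, assuming the flag is a bitfield named isolate_sigpipe on struct async; the demuxer callback is illustrative:

    #include "cache.h"
    #include "run-command.h"

    static int demux(int in, int out, void *data)
    {
        /* relay sideband data; writes here may hit a closed pipe */
        return 0;
    }

    static void start_demuxer(struct async *a)
    {
        memset(a, 0, sizeof(*a));
        a->proc = demux;
        a->out = -1;            /* we read the demuxer's output from a->out */
        a->isolate_sigpipe = 1; /* SIGPIPE ends only this async, not the main process */
        if (start_async(a))
            die("cannot start sideband demultiplexer");
    }
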
* Merge branch 'sb/submodule-parallel-update' (Junio C Hamano, 2016-04-06, +5/-5)

  A major part of "git submodule update" has been ported to C to take advantage of the recently added framework to run download tasks in parallel.

  * sb/submodule-parallel-update:
    clone: allow an explicit argument for parallel submodule clones
    submodule update: expose parallelism to the user
    submodule helper: remove double 'fatal: ' prefix
    git submodule update: have a dedicated helper for cloning
    run_processes_parallel: rename parameters for the callbacks
    run_processes_parallel: treat output of children as byte array
    submodule update: direct error message to stderr
    fetching submodules: respect `submodule.fetchJobs` config option
    submodule-config: drop check against NULL
    submodule-config: keep update strategy around

  * run_processes_parallel: rename parameters for the callbacks (Stefan Beller, 2016-03-01, +5/-5)

    The refs code has a similar pattern of passing around 'struct strbuf *err', which is strictly used for error reporting. This is not the case here, as the strbuf is used to accumulate all the output (whether it is error or not) for the user. Rename it to 'out'.

    Suggested-by: Jonathan Nieder <jrnieder@gmail.com>
    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'sb/submodule-parallel-fetch' (Junio C Hamano, 2016-03-04, +3/-6)

  Simplify the two callback functions that are triggered when the child process terminates to avoid misuse of the child-process structure that has already been cleaned up.

  * sb/submodule-parallel-fetch:
    run-command: do not pass child process data into callbacks

  * run-command: do not pass child process data into callbacks (Stefan Beller, 2016-03-01, +3/-6)

    The expected way to pass data into the callback is to pass it via the customizable callback pointer. The error reporting in default_{start_failure, task_finished} is not so user friendly that we would want to encourage using the child data for such purposes. Furthermore, the struct child data is cleaned up by the run-command API before we access it in the callbacks, leading to use-after-free situations.

    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/epipe-in-async' (Junio C Hamano, 2016-02-26, +1/-0)

  Handling of errors while writing into our internal asynchronous process has been made more robust, which reduces flakiness in our tests.

  * jk/epipe-in-async:
    t5504: handle expected output from SIGPIPE death
    test_must_fail: report number of unexpected signal
    fetch-pack: ignore SIGPIPE in sideband demuxer
    write_or_die: handle EPIPE in async threads

  * write_or_die: handle EPIPE in async threads (Jeff King, 2016-02-25, +1/-0)

    When write_or_die() sees EPIPE, it treats it specially by converting it into a SIGPIPE death. We obviously cannot ignore it, as the write has failed and the caller expects us to die. But likewise, we cannot just call die(), because printing any message at all would be a nuisance during normal operations.

    However, this is a problem if write_or_die() is called from a thread. Our raised signal ends up killing the whole process, when logically we just need to kill the thread (after all, if we are ignoring SIGPIPE, there is good reason to think that the main thread is expecting to handle it).

    Inside an async thread, the die() code already does the right thing, because we use our custom die_async() routine, which calls pthread_exit(). So ideally we would piggy-back on that, and simply call:

        die_quietly_with_code(141);

    or similar. But refactoring the die code to do this is surprisingly non-trivial. The die_routines themselves handle both printing and the decision of the exit code. Every one of them would have to be modified to take new parameters for the code, and to tell us to be quiet.

    Instead, we can just teach write_or_die() to check for the async case and handle it specially. We do have to build an interface to abstract the async exit, but it's simple and self-contained. If we had many call-sites that wanted to do this die_quietly_with_code(), this approach wouldn't scale as well, but we don't. This is the only place where we do this weird exit trick.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: add an asynchronous parallel child processor (Stefan Beller, 2015-12-16, +80/-0)

  This allows us to run external commands in parallel with ordered output on stderr.

  If we run external commands in parallel we cannot pipe their output directly to our own stdout/err, as it would mix up. So each process's output will flow through a pipe, which we buffer. One subprocess can be directly piped to our stdout/err for low latency feedback to the user.

  Example: let's assume we have 5 submodules A,B,C,D,E and each fetch takes a different amount of time as the different submodules vary in size; then the output of fetches in sequential order might look like this:

    time -->
    output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

  When we schedule these submodules into maximal two parallel processes, a schedule and sample output over time may look like this:

    process 1: |---A---| |-D-| |-E-|
    process 2: |-B-| |-------C-------|
    output:    |---A---|B|---C-------|DE

  So A will be perceived as it would run normally in the single child version. As B has finished by the time A is done, we can dump its whole progress buffer on stderr, such that it looks like it finished in no time. Once that is done, C is determined to be the visible child and its progress will be reported in real time.

  So this way of output is really good for human consumption, as it only changes the timing, not the actual output.

  For machine consumption the output needs to be prepared in the tasks, by either having a prefix per line or per block to indicate whose task's output is displayed, because the output order may not follow the original sequential ordering:

    |----A----| |--B--| |-C-|

  will be scheduled to be all parallel:

    process 1: |----A----|
    process 2: |--B--|
    process 3: |-C-|
    output:    |----A----|CB

  This happens because C finished before B did, so it will be queued for output before B.

  To detect when a child has finished executing, we check interleaved with other actions (such as checking the liveliness of children or starting new processes) whether the stderr pipe still exists. Once a child closed its stderr stream, we assume it is terminating very soon, and use `finish_command()` from the single external process execution interface to collect the exit status.

  By maintaining the strong assumption of stderr being open until the very end of a child process, we can avoid other hassle such as an implementation using `waitpid(-1)`, which is not implemented in Windows.

  Signed-off-by: Stefan Beller <sbeller@google.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

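A condensed sketch of how a caller might drive this machinery, using the callback shape implied by the later run_processes_parallel entries above (a get_next_task callback plus optional failure/finish callbacks and an opaque state pointer); the submodule-fetch framing and all names here are illustrative:

    #include "cache.h"
    #include "run-command.h"

    struct fetch_state {
        const char **paths;     /* illustrative: submodule paths to fetch */
        int next, count;
    };

    /* fill *cp for the next child and return 1, or return 0 when done */
    static int next_fetch(struct child_process *cp, struct strbuf *out,
                          void *cb, void **task_cb)
    {
        struct fetch_state *state = cb;

        if (state->next >= state->count)
            return 0;
        cp->git_cmd = 1;
        cp->dir = state->paths[state->next];
        argv_array_push(&cp->args, "fetch");
        strbuf_addf(out, "Fetching %s\n", state->paths[state->next]);
        state->next++;
        return 1;
    }

    static void fetch_all(struct fetch_state *state, int jobs)
    {
        /* NULL for the failure/finished callbacks picks the default handlers */
        run_processes_parallel(jobs, next_fetch, NULL, NULL, state);
    }
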
* Merge branch 'rs/daemon-plug-child-leak' (Junio C Hamano, 2015-11-03, +1/-0)

  "git daemon" uses "run_command()" without "finish_command()", so it needs to release resources itself, which it forgot to do.

  * rs/daemon-plug-child-leak:
    daemon: plug memory leak
    run-command: factor out child_process_clear()

  * run-command: factor out child_process_clear() (René Scharfe, 2015-11-02, +1/-0)

    Avoid duplication by moving the code to release allocated memory for arguments and environment to its own function, child_process_clear(). Export it to provide a counterpart to child_process_init().

    Signed-off-by: Rene Scharfe <l.s.r@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

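A sketch of the intended pairing for a caller that, like git daemon, never calls finish_command(); the function name is illustrative:

    #include "cache.h"
    #include "run-command.h"

    static void spawn_and_forget(const char *prog)
    {
        struct child_process cld = CHILD_PROCESS_INIT;

        argv_array_push(&cld.args, prog);
        cld.no_stdin = 1;
        if (start_command(&cld))
            return;     /* start_command() already released the arrays on error */
        /*
         * We never call finish_command() for this child, so release the
         * allocated argv/env ourselves with the new counterpart to
         * child_process_init().
         */
        child_process_clear(&cld);
    }
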
* Merge branch 'ti/glibc-stdio-mutex-from-signal-handler' (Junio C Hamano, 2015-10-07, +1/-0)

  Allocation related functions and stdio are unsafe things to call inside a signal handler, and indeed killing the pager can cause glibc to deadlock waiting on allocation mutex as our signal handler tries to free() some data structures in wait_for_pager(). Reduce these unsafe calls.

  * ti/glibc-stdio-mutex-from-signal-handler:
    pager: don't use unsafe functions in signal handlers

  * pager: don't use unsafe functions in signal handlers [ti/glibc-stdio-mutex-from-signal-handler] (Takashi Iwai, 2015-09-04, +1/-0)

    Since commit a3da8821208d (pager: do wait_for_pager on signal death), we call wait_for_pager() in the pager's signal handler. A recent bug report revealed that this causes a deadlock in glibc when aborting "git log" [*1*]. When this happens, the git process is left unterminated, and it can't be killed by SIGTERM but only by SIGKILL.

    The problem is that the wait_for_pager() function does more than wait for the pager process's termination; it also does cleanups and prints errors. Unfortunately, the functions that may be used in a signal handler are very limited [*2*]. Particularly, malloc(), free() and the variants can't be used in a signal handler because they take a mutex internally in glibc. This was the cause of the deadlock above. Other than the direct calls of malloc/free, many functions calling malloc/free can't be used; strerror() is one such function. Also the use of fflush() and printf() in a signal handler is bad, although it seems to have worked so far. To be on the safe side, we should avoid them, too.

    This patch tries to reduce the calls of such functions in signal handlers. wait_for_pager() takes a flag and avoids the unsafe calls. Also, finish_command_in_signal() is introduced for the same reason. There the free() calls are removed, and it only waits for the children without whining about errors.

    [*1*] https://bugzilla.opensuse.org/show_bug.cgi?id=942297
    [*2*] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_03

    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Reviewed-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/async-pkt-line' (Junio C Hamano, 2015-10-05, +1/-0)

  The debugging infrastructure for pkt-line based communication has been improved to mark the side-band communication specifically.

  * jk/async-pkt-line:
    pkt-line: show packets in async processes as "sideband"
    run-command: provide in_async query function

  * run-command: provide in_async query function (Jeff King, 2015-09-01, +1/-0)

    It's not easy for arbitrary code to find out whether it is running in an async process or not. A top-level function which is fed to start_async() can know (you just pass down an argument saying "you are async"). But that function may call other global functions, and we would not want to have to pass the information all the way through the call stack. Nor can we simply set a global variable, as those may be shared between async threads and the main thread (if the platform supports pthreads). We need pthread tricks _or_ a global variable, depending on how start_async is implemented.

    The callers don't have enough information to do this right, so let's provide a simple query function that does. Fortunately we can reuse the existing infrastructure to make the pthread case simple (and even simplify die_async() by using our new function).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

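A sketch of the query in use, from code that may be buried several calls below either a normal command or start_async(); the reporting function is illustrative:

    #include "cache.h"
    #include "run-command.h"

    static void report_progress(const char *msg)
    {
        /* no flag had to be threaded through the call stack to learn this */
        if (in_async())
            fprintf(stderr, "[sideband] %s\n", msg);
        else
            fprintf(stderr, "%s\n", msg);
    }
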
* find_hook: keep our own static buffer (Jeff King, 2015-08-10, +5/-0)

  The find_hook function returns the results of git_path, which is a static buffer shared by other path-related calls. Returning such a buffer is slightly dangerous, because it can be overwritten by seemingly unrelated functions. Let's at least keep our _own_ static buffer, so you can only get in trouble by calling find_hook in quick succession, which is less likely to happen and more obvious to notice.

  While we're at it, let's add some documentation of the function's limitations.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

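The practical consequence for callers, sketched: duplicate the result if it has to survive the next find_hook()/git_path() lookup (the hook name here is just an example):

    #include "cache.h"
    #include "run-command.h"

    static const char *remember_hook_path(void)
    {
        const char *hook = find_hook("pre-commit");

        if (!hook)
            return NULL;
        /*
         * The return value points into find_hook()'s static buffer, so
         * make a copy before the next path lookup overwrites it.
         */
        return xstrdup(hook);
    }
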
* Merge branch 'nd/multiple-work-trees' (Junio C Hamano, 2015-05-11, +1/-1)

  A replacement for contrib/workdir/git-new-workdir that does not rely on symbolic links and make sharing of objects and refs safer by making the borrowee and borrowers aware of each other.

  * nd/multiple-work-trees: (41 commits)
    prune --worktrees: fix expire vs worktree existence condition
    t1501: fix test with split index
    t2026: fix broken &&-chain
    t2026 needs procondition SANITY
    git-checkout.txt: a note about multiple checkout support for submodules
    checkout: add --ignore-other-wortrees
    checkout: pass whole struct to parse_branchname_arg instead of individual flags
    git-common-dir: make "modules/" per-working-directory directory
    checkout: do not fail if target is an empty directory
    t2025: add a test to make sure grafts is working from a linked checkout
    checkout: don't require a work tree when checking out into a new one
    git_path(): keep "info/sparse-checkout" per work-tree
    count-objects: report unused files in $GIT_DIR/worktrees/...
    gc: support prune --worktrees
    gc: factor out gc.pruneexpire parsing code
    gc: style change -- no SP before closing parenthesis
    checkout: clean up half-prepared directories in --to mode
    checkout: reject if the branch is already checked out elsewhere
    prune: strategies for linked checkouts
    checkout: support checking out into a new working directory
    ...

  * path.c: make get_pathname() call sites return const char * (Nguyễn Thái Ngọc Duy, 2014-12-01, +1/-1)

    Before the previous commit, get_pathname() returned an array of PATH_MAX length. Even if git_path() and similar functions do not use the whole array, a git_path() caller can, in theory. After the commit, get_pathname() may return a buffer that has just enough room for the returned string, and a git_path() caller should never write beyond that.

    Make git_path(), mkpath() and git_path_submodule() return a const buffer to make sure callers do not write in it at all. This could have been part of the previous commit, but the "const" conversion is too much distraction from the core changes in path.c.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: introduce capture_command helper (Jeff King, 2015-03-22, +13/-0)

  Something as simple as reading the stdout from a command turns out to be rather hard to do right. Doing:

    cmd.out = -1;
    run_command(&cmd);
    strbuf_read(&buf, cmd.out, 0);

  can result in deadlock if the child process produces a large amount of output. What happens is:

    1. The parent spawns the child with its stdout connected to a pipe, of which the parent is the sole reader.

    2. The parent calls wait(), blocking until the child exits.

    3. The child writes to stdout. If it writes more data than the OS pipe buffer can hold, the write() call will block.

  This is a deadlock; the parent is waiting for the child to exit, and the child is waiting for the parent to call read().

  So we might try instead:

    start_command(&cmd);
    strbuf_read(&buf, cmd.out, 0);
    finish_command(&cmd);

  But that is not quite right either. We are examining cmd.out and running finish_command whether start_command succeeded or not, which is wrong. Moreover, these snippets do not do any error handling. If our read() fails, we must make sure to still call finish_command (to reap the child process). And both snippets failed to close the cmd.out descriptor, which they must do (provided start_command succeeded).

  Let's introduce a run-command helper that can make this a bit simpler for callers to get right.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

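With the helper, the pattern above collapses to something like this sketch (the third argument is a size hint for the output buffer; the rev-parse invocation is just an example):

    #include "cache.h"
    #include "run-command.h"

    static int read_head_name(struct strbuf *out)
    {
        struct child_process cmd = CHILD_PROCESS_INIT;

        cmd.git_cmd = 1;
        argv_array_pushl(&cmd.args, "rev-parse", "--abbrev-ref", "HEAD", NULL);
        /* spawns the child, reads all of its stdout into 'out', reaps it */
        return capture_command(&cmd, out, 0);
    }
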
* Merge branch 'jc/hook-cleanup' (Junio C Hamano, 2014-12-22, +0/-4)

  Remove unused code.

  * jc/hook-cleanup:
    run-command.c: retire unused run_hook_with_custom_index()

  * run-command.c: retire unused run_hook_with_custom_index() [jc/hook-cleanup] (Junio C Hamano, 2014-12-01, +0/-4)

    This was originally meant to be used to rewrite run_commit_hook() that only special cases the GIT_INDEX_FILE environment, but the run_hook_ve() refactoring done earlier made the implementation of run_commit_hook() thin and clean enough. Nobody uses this, so retire it as an unfinished clean-up made unnecessary.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: add env_array, an optional argv_array for env (René Scharfe, 2014-10-19, +2/-1)

  Similar to args, add a struct argv_array member to struct child_process that simplifies specifying the environment for children. It is freed automatically by finish_command() or if start_command() encounters an error.

  Suggested-by: Jeff King <peff@peff.net>
  Signed-off-by: Rene Scharfe <l.s.r@web.de>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

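Usage sketch: environment entries are pushed as "NAME=value" strings onto the new member and released automatically; the hook scenario and function name are illustrative:

    #include "cache.h"
    #include "run-command.h"

    static int run_hook_with_tmp_index(const char *hook_path, const char *tmp_index)
    {
        struct child_process cmd = CHILD_PROCESS_INIT;

        argv_array_push(&cmd.args, hook_path);
        /* freed by finish_command(), or by start_command() on error */
        argv_array_pushf(&cmd.env_array, "GIT_INDEX_FILE=%s", tmp_index);
        return run_command(&cmd);
    }
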
* run-command: introduce child_process_init() (René Scharfe, 2014-08-20, +1/-0)

  Add a helper function for initializing those struct child_process variables for which the macro CHILD_PROCESS_INIT can't be used.

  Suggested-by: Jeff King <peff@peff.net>
  Signed-off-by: Rene Scharfe <l.s.r@web.de>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: introduce CHILD_PROCESS_INIT (René Scharfe, 2014-08-20, +2/-0)

  Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).

  Helped-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Rene Scharfe <l.s.r@web.de>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

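The two initializers from this and the previous entry, side by side in a small sketch:

    #include "cache.h"
    #include "run-command.h"

    static void init_examples(void)
    {
        /* stack variable: the static initializer replaces the memset() idiom */
        struct child_process on_stack = CHILD_PROCESS_INIT;

        /* heap-allocated (or embedded) struct: use the helper instead */
        struct child_process *on_heap = xmalloc(sizeof(*on_heap));
        child_process_init(on_heap);

        child_process_clear(&on_stack); /* no-op here, shown for symmetry */
        child_process_clear(on_heap);
        free(on_heap);
    }
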
* run-command: store an optional argv_array (Jeff King, 2014-05-15, +3/-0)

  All child_process structs need to point to an argv. For flexibility, we do not mandate the use of a dynamic argv_array. However, because the child_process does not own the memory, this can make memory management with a separate argv_array difficult. For example, if a function calls start_command but not finish_command, the argv memory must persist. The code needs to arrange to clean up the argv_array separately after finish_command runs. As a result, some of our code in this situation just leaks the memory.

  To help such cases, this patch adds a built-in argv_array to the child_process, which gets cleaned up automatically (both in finish_command and when start_command fails). Callers may use it if they choose, but can continue to use the raw argv if they wish.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

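A sketch of the new pattern: the array is owned by the struct, so there is nothing to free by hand (the gc invocation is just an example):

    #include "cache.h"
    #include "run-command.h"

    static int auto_gc(void)
    {
        struct child_process cmd = CHILD_PROCESS_INIT;

        cmd.git_cmd = 1;
        /* leave cmd.argv NULL; the built-in args array is used instead */
        argv_array_pushl(&cmd.args, "gc", "--auto", NULL);
        return run_command(&cmd);   /* cmd.args is released for us */
    }
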
* run-command: mark run_hook_with_custom_index as deprecated [bp/commit-p-editor] (Benoit Pierre, 2014-03-18, +1/-0)

  Signed-off-by: Benoit Pierre <benoit.pierre@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* commit: fix patch hunk editing with "commit -p -m" (Benoit Pierre, 2014-03-18, +5/-1)

  Don't change git environment: move the GIT_EDITOR=":" override to the hook command subprocess, like it's already done for GIT_INDEX_FILE.

  Signed-off-by: Benoit Pierre <benoit.pierre@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Add the LAST_ARG_MUST_BE_NULL macro [jk/gcc-function-attributes] (Ramsay Jones, 2013-07-19, +1/-1)

  The sentinel function attribute is not understood by versions of the gcc compiler prior to v4.0. At present, for earlier versions of gcc, the build issues 108 warnings related to the unknown attribute. In order to suppress the warnings, we conditionally define the LAST_ARG_MUST_BE_NULL macro to provide the sentinel attribute for gcc v4.0 and newer.

  Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

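The conditional definition this describes looks roughly like the following sketch (in git's tree it lives in git-compat-util.h):

    #if defined(__GNUC__) && (__GNUC__ >= 4)
    #define LAST_ARG_MUST_BE_NULL __attribute__((sentinel))
    #else
    #define LAST_ARG_MUST_BE_NULL
    #endif
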
* use "sentinel" function attribute for variadic lists (Jeff King, 2013-07-09, +1/-0)

  This attribute can help gcc notice when callers forget to add a NULL sentinel to the end of the function. This is our first use of the sentinel attribute, but we shouldn't need to #ifdef for other compilers, as __attribute__ is already a no-op on non-gcc-compatible compilers.

  Suggested-by: Bert Wesarg <bert.wesarg@googlemail.com>
  More-Spots-Found-By: Matt Kraai <kraai@ftbfs.org>
  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* hooks: Add function to check if a hook exists (Aaron Schrab, 2013-01-14, +1/-0)

  Create find_hook() function to determine if a given hook exists and is executable. If it is, the path to the script will be returned, otherwise NULL is returned.

  This encapsulates the tests that are used to check for the existence of a hook in one place, making it easier to modify those checks if that is found to be necessary. This also makes it simple for places that can use a hook to check if a hook exists before doing, possibly lengthy, setup work which would be pointless if no such hook is present.

  The returned value is left as a static value from get_pathname() rather than a duplicate because it is anticipated that the return value will either be used as a boolean, immediately added to an argv_array list which would result in it being duplicated at that point, or used to actually run the command without much intervening work. Callers which need to hold onto the returned value for a longer time are expected to duplicate the return value themselves.

  Signed-off-by: Aaron Schrab <aaron@schrab.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

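A sketch of the "check before doing any expensive setup" pattern the message describes; the hook name and flags are illustrative:

    #include "cache.h"
    #include "run-command.h"

    static int maybe_run_post_update_hook(void)
    {
        struct child_process hook = CHILD_PROCESS_INIT;
        const char *path = find_hook("post-update");

        if (!path)
            return 0;   /* no hook installed: skip all setup work */
        argv_array_push(&hook.args, path);
        hook.no_stdin = 1;
        hook.stdout_to_stderr = 1;
        return run_command(&hook);
    }
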
* pager: drop "wait for output to run less" hack (Jeff King, 2012-06-05, +0/-1)

  Commit 35ce862 (pager: Work around window resizing bug in 'less', 2007-01-24) causes git's pager sub-process to wait to receive input after forking but before exec-ing the pager. To handle this, run-command had to grow a "pre-exec callback" feature. Unfortunately, this feature does not work at all on Windows (where we do not fork), and interacts poorly with run-command's parent notification system. Its use should be discouraged.

  The bug in less was fixed in version 406, which was released in June 2007. It is probably safe at this point to remove our workaround. That lets us rip out the preexec_cb feature entirely.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* dashed externals: kill children on exit [cb/maint-kill-subprocess-upon-signal] (Clemens Buchacher, 2012-01-08, +1/-0)

  Several git commands are so-called dashed externals, that is commands executed as a child process of the git wrapper command. If the git wrapper is killed by a signal, the child process will continue to run. This is different from internal commands, which always die with the git wrapper command.

  Enable the recently introduced cleanup mechanism for child processes in order to make dashed externals act more in line with internal commands.

  Signed-off-by: Clemens Buchacher <drizzd@aon.at>
  Acked-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: optionally kill children on exit (Jeff King, 2012-01-08, +1/-0)

  When we spawn a helper process, it should generally be done and finish_command called before we exit. However, if we exit abnormally due to an early return or a signal, the helper may continue to run in our absence.

  In the best case, this may simply be wasted CPU cycles or a few stray messages on a terminal. But it could also mean a process that the user thought was aborted continues to run to completion (e.g., a push's pack-objects helper will complete the push, even though you killed the push process).

  This patch provides infrastructure for run-command to keep track of PIDs to be killed, and clean them on signal reception or exit, just as we do with tempfiles. PIDs can be added in two ways:

    1. If NO_PTHREADS is defined, async helper processes are automatically marked. By definition this code must be ready to die when the parent dies, since it may be implemented as a thread of the parent process.

    2. If the run-command caller specifies the "clean_on_exit" option. This is not the default, as there are cases where it is OK for the child to outlive us (e.g., when spawning a pager).

  PIDs are cleared from the kill-list automatically during wait_or_whine, which is called from finish_command and finish_async.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Clemens Buchacher <drizzd@aon.at>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

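A sketch of a call site opting in (the pager case deliberately leaves the flag off); the helper name is illustrative:

    #include "cache.h"
    #include "run-command.h"

    static void start_connect_helper(struct child_process *conn, const char *path)
    {
        child_process_init(conn);
        argv_array_pushl(&conn->args, "git-upload-pack", path, NULL);
        conn->in = -1;
        conn->out = -1;
        /* if we die or are signalled, take the helper down with us */
        conn->clean_on_exit = 1;
        if (start_command(conn))
            die("unable to start upload-pack");
    }
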
* Enable threaded async procedures whenever pthreads is available (Johannes Sixt, 2010-03-10, +2/-2)

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Reimplement async procedures using pthreads (Johannes Sixt, 2010-03-07, +6/-2)

  On Windows, async procedures have always been run in threads, and the implementation used Windows specific APIs. Rewrite the code to use pthreads. A new configuration option is introduced so that the threaded implementation can also be used on POSIX systems. Since this option is intended only as playground on POSIX, but is mandatory on Windows, the option is not documented.

  One detail is that on POSIX it is necessary to set FD_CLOEXEC on the pipe handles. On Windows, this is not needed because pipe handles are not inherited to child processes, and the new calls to set_cloexec() are effectively no-ops.

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'sp/maint-push-sideband' into sp/push-sideband (Junio C Hamano, 2010-02-05, +7/-4)

  * sp/maint-push-sideband:
    receive-pack: Send hook output over side band #2
    receive-pack: Wrap status reports inside side-band-64k
    receive-pack: Refactor how capabilities are shown to the client
    send-pack: demultiplex a sideband stream with status data
    run-command: support custom fd-set in async
    run-command: Allow stderr to be a caller supplied pipe
    Update git fsck --full short description to mention packs

  Conflicts:
    run-command.c

  * run-command: support custom fd-set in async (Erik Faye-Lund, 2010-02-05, +6/-3)

    This patch adds the possibility to supply a set of non-0 file descriptors for async process communication instead of the default-created pipe. Additionally, we now support bi-directional communication with the async procedure, by giving the async function both read and write file descriptors.

    To retain compatibility and a similar "API feel" with start_command, we require start_async callers to set .out = -1 to get a readable file descriptor. If either of .in or .out is 0, we supply no file descriptor to the async process.

    [sp: Note: Erik started this patch, and a huge bulk of it is his work. All bugs were introduced later by Shawn.]

    Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

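A sketch of both modes described here: handing the async procedure an existing readable descriptor via .in, and requesting a pipe with .out = -1 so the caller can read the procedure's output; the copy callback is illustrative:

    #include "cache.h"
    #include "run-command.h"

    static int copy_data(int in, int out, void *data)
    {
        /* read from 'in', write to 'out'; both are set up by start_async() */
        return 0;
    }

    static void demo(int existing_fd)
    {
        struct async a;

        memset(&a, 0, sizeof(a));
        a.proc = copy_data;
        a.in = existing_fd; /* caller-supplied fd becomes the proc's 'in' */
        a.out = -1;         /* ask for a pipe; we read from a.out afterwards */
        if (start_async(&a))
            die("cannot start async procedure");
        /* ... read from a.out, close it ... */
        finish_async(&a);
    }
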
  * run-command: Allow stderr to be a caller supplied pipe (Shawn O. Pearce, 2010-02-05, +1/-1)

    Like .out, .err may now be set to a file descriptor > 0, which is a writable pipe/socket/file that the child's stderr will be redirected into.

    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run-command: add "use shell" option (Jeff King, 2010-01-01, +2/-0)

  Many callsites run "sh -c $CMD" to run $CMD. We can make it a little simpler for them by factoring out the munging of argv. For simple cases with no arguments, this doesn't help much, but:

    1. For cases with arguments, we save the caller from having to build the appropriate shell snippet.

    2. We can later optimize to avoid the shell when there are no metacharacters in the program.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

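Sketch of the option in use; the first argument is the shell snippet and any further arguments are passed to it (the alias scenario is illustrative):

    #include "cache.h"
    #include "run-command.h"

    static int run_alias(const char *alias_cmd, const char *extra_arg)
    {
        struct child_process cmd = CHILD_PROCESS_INIT;

        cmd.use_shell = 1;  /* run through the shell so $VARS, pipes etc. work */
        argv_array_pushl(&cmd.args, alias_cmd, extra_arg, NULL);
        return run_command(&cmd);
    }
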
* Test for WIN32 instead of __MINGW32_ (Frank Li, 2009-09-18, +1/-1)

  The code which is conditional on MinGW32 is actually conditional on Windows. Use the WIN32 symbol, which is defined by the MINGW32 and MSVC environments, but not by Cygwin.

  Define SNPRINTF_SIZE_CORR=1 for MSVC too, as its vsnprintf function does not add NUL at the end of the buffer if the result fits the buffer size exactly.

  Signed-off-by: Frank Li <lznuaa@gmail.com>
  Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
  Acked-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run_command: report failure to execute the program, but optionally don't (Johannes Sixt, 2009-07-06, +2/-0)

  In the case where a program was not found, it was still the task of the caller to report an error to the user. Usually, this is an interesting case but only few callers actually reported a specific error (though many call sites report a generic error message regardless of the cause).

  With this change the error is reported by run_command, but since there is one call site in git.c that does not want that, an option is added to struct child_process, which is used to turn the error off.

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run_command: report system call errors instead of returning error codes (Johannes Sixt, 2009-07-06, +0/-10)

  The motivation for this change is that system call failures are serious errors that should be reported to the user, but only few callers took the burden to decode the error codes that the functions returned into error messages. If at all, then only an unspecific error message was given. A prominent example is this:

    $ git upload-pack . | :
    fatal: unable to run 'git-upload-pack'

  In this example, git-upload-pack, the external command invoked through the git wrapper, dies due to SIGPIPE, but the git wrapper does not bother to report the real cause. In fact, this very error message is copied to the syslog if git-daemon's client aborts the connection early.

  With this change, system call failures are reported immediately after the failure and only a generic failure code is returned to the caller. In the above example the error is now to the point:

    $ git upload-pack . | :
    error: git-upload-pack died of signal

  Note that there is no error report if the invoked program terminated with a non-zero exit code, because it is reasonable to expect that the invoked program has already reported an error. (But many run_command call sites nevertheless write a generic error message.)

  There was one special return code that was used to identify the case where run_command failed because the requested program could not be exec'd. This special case is now treated like a system call failure with errno set to ENOENT. No error is reported in this case, because the call site in git.c expects this as a normal result. Therefore, the callers that carefully decoded the return value still check for this condition.

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* run_command: return exit code as positive value (Johannes Sixt, 2009-07-05, +0/-1)

  As a general guideline, functions in git's code return zero to indicate success and negative values to indicate failure. The run_command family of functions followed this guideline. But there are actually two different kinds of failure:

    - failures of system calls;
    - non-zero exit code of the program that was run.

  Usually, a non-zero exit code of the program is a failure and means a failure to the caller. Except that sometimes it does not. For example, the exit code of merge programs (e.g. external merge drivers) conveys information about how the merge failed, and not all exit calls are actually failures. Furthermore, the return value of run_command is sometimes used as exit code by the caller.

  This change arranges that the exit code of the program is returned as a positive value, which can now be regarded as the "result" of the function. System call failures continue to be reported as negative values.

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* fix portability problem with IS_RUN_COMMAND_ERR (Jeff King, 2009-04-01, +1/-1)

  Some old versions of gcc don't seem to like us negating an enum constant. Let's work around it by negating the other half of the comparison instead.

  Reported by Pierre Poissinger on gcc 2.9.

  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/maint-cleanup-after-exec-failure' (Junio C Hamano, 2009-02-03, +1/-0)

  * jk/maint-cleanup-after-exec-failure:
    git: use run_command() to execute dashed externals
    run_command(): help callers distinguish errors
    run_command(): handle missing command errors more gracefully
    git: s/run_command/run_builtin/

  * run_command(): help callers distinguish errors (Jeff King, 2009-01-28, +1/-0)

    run_command() returns a single integer specifying either an error code or the exit status of the spawned program. The only way to tell the difference is that the error codes are outside of the allowed range of exit status values. Rather than make each caller implement the test against a magic limit, let's provide a macro.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Move run_hook() from builtin-commit.c into run-command.c (libgit) (Stephan Beyer, 2009-01-17, +2/-0)

  A function that runs a hook is used in several Git commands. builtin-commit.c has the one that is most general for cases without piping. The one in builtin-gc.c prints some useful warnings. This patch moves a merged version of these variants into libgit and lets the other builtins use this libified run_hook().

  The run_hook() function used in receive-pack.c feeds the standard input of the pre-receive or post-receive hooks. This function is renamed to run_receive_hook() because the libified run_hook() cannot handle this.

  Mentored-by: Daniel Barkalow <barkalow@iabervon.org>
  Mentored-by: Christian Couder <chriscool@tuxfamily.org>
  Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

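A sketch of the libified call as it existed at the time, assuming the signature was run_hook(const char *index_file, const char *hook_name, ...) with a NULL-terminated argument list; the hook and arguments are illustrative:

    #include "cache.h"
    #include "run-command.h"

    static void notify_post_checkout(const char *old_sha1, const char *new_sha1)
    {
        /*
         * A NULL index_file means "use the default index"; the trailing
         * NULL terminates the variadic argument list.
         */
        run_hook(NULL, "post-checkout", old_sha1, new_sha1, "1", NULL);
    }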