path: root/fetch-pack.c
Commit message | Author | Date | Files | Lines
* Merge branch 'nd/fetch-pack-shallow-fix' into maint | Junio C Hamano | 2013-09-05 | 1 | -1/+3

    The recent "short-cut clone connectivity check" topic broke a shallow
    repository when a fetch operation tries to auto-follow tags.

    * nd/fetch-pack-shallow-fix:
      fetch-pack: do not remove .git/shallow file when --depth is not specified

* fetch-pack: do not remove .git/shallow file when --depth is not specified [nd/fetch-pack-shallow-fix] | Nguyễn Thái Ngọc Duy | 2013-08-25 | 1 | -1/+3

    fetch_pack() can remove .git/shallow file when a shallow repository
    becomes a full one again. This behavior is triggered incorrectly when
    tags are also fetched because fetch_pack() will be called twice.

    At the first fetch_pack() call:

     - shallow_lock is set up

     - alternate_shallow_file points to shallow_lock.filename, which is
       "shallow.lock"

     - commit_lock_file is called, which sets shallow_lock.filename to "".
       alternate_shallow_file also becomes "" because it points to the same
       memory.

    At the second call, setup_alternate_shallow() is not called and
    alternate_shallow_file remains "". It's mistaken as unshallow case and
    .git/shallow is removed. The end result is a broken repository.

    Fix this by always initializing alternate_shallow_file when
    fetch_pack() is called. As an extra measure, check if args->depth > 0
    before commit/rollback shallow file.

    Reported-by: Kacper Kornet <kornet@camk.edu.pl>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

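For illustration, a minimal stand-alone C sketch of the aliasing failure described above. The struct and function names are simplified stand-ins, not git's actual lock_file API; it only shows how a borrowed pointer into a lock's filename buffer silently becomes "" once the lock is committed.

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for git's lock_file; field names are illustrative only. */
    struct lock_file {
    	char filename[64];
    };

    static const char *alternate_shallow_file; /* borrows lock->filename */

    static void setup_alternate_shallow(struct lock_file *lock)
    {
    	strcpy(lock->filename, ".git/shallow.lock");
    	alternate_shallow_file = lock->filename;   /* aliases the same memory */
    }

    static void commit_lock_file(struct lock_file *lock)
    {
    	/* committing clears the filename; the alias now reads as "" */
    	lock->filename[0] = '\0';
    }

    int main(void)
    {
    	struct lock_file shallow_lock;

    	setup_alternate_shallow(&shallow_lock);   /* first fetch_pack() call */
    	commit_lock_file(&shallow_lock);
    	/* a second call that skips setup_alternate_shallow() now sees "",
    	 * which looks like the "unshallow" case */
    	printf("alternate_shallow_file = \"%s\"\n", alternate_shallow_file);
    	return 0;
    }
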
* fetch-pack: avoid quadratic behavior in rev_list_push [jk/fetch-pack-many-refs] | Jeff King | 2013-07-02 | 1 | -7/+6

    When we call find_common to start finding common ancestors with the
    remote side of a fetch, the first thing we do is insert the tip of
    each ref into our rev_list linked list. We keep the list sorted the
    whole time with commit_list_insert_by_date, which means our insertion
    ends up doing O(n^2) timestamp comparisons.

    We could teach rev_list_push to use an unsorted list, and then sort it
    once after we have added each ref. However, in get_rev, we process the
    list by popping commits off the front and adding parents back in
    timestamp-sorted order. So that procedure would still operate on the
    large list.

    Instead, we can replace the linked list with a heap-based priority
    queue, which can do O(log n) insertion, making the whole insertion
    procedure O(n log n).

    As a result of switching to the prio_queue struct, we fix two minor
    bugs:

      1. When we "pop" a commit in get_rev, and when we clear the rev_list
         in find_common, we do not take care to free the "struct
         commit_list", and just leak its memory. With the prio_queue
         implementation, the memory management is handled for us.

      2. In get_rev, we look at the head commit of the list, possibly push
         its parents onto the list, and then "pop" the front of the list
         off, assuming it is the same element that we just peeked at. This
         is typically going to be the case, but would not be in the face
         of clock skew: the parents are inserted by date, and could
         potentially be inserted at the head of the list if they have a
         timestamp newer than their descendent. In this case, we would
         accidentally pop the parent, and never process it at all.

         The new implementation pulls the commit off of the queue as we
         examine it, and so does not suffer from this problem.

    With this patch, a fetch of a single commit into a repository with
    50,000 refs went from:

        real    0m7.984s
        user    0m7.852s
        sys     0m0.120s

    to:

        real    0m2.017s
        user    0m1.884s
        sys     0m0.124s

    Before this patch, a larger case with 370K refs still had not
    completed after tens of minutes; with this patch, it completes in
    about 12 seconds.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

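To make the O(log n) insertion concrete, here is a minimal, self-contained C sketch of a binary max-heap keyed by commit timestamp. It is only an illustration of the idea; git's real implementation is the prio_queue API in prio-queue.[ch], and "struct fake_commit" is an invented stand-in for "struct commit".

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stand-in for "struct commit"; only the timestamp matters here. */
    struct fake_commit { time_t date; };

    struct heap {
    	struct fake_commit **item;
    	size_t nr, alloc;
    };

    /* O(log n) insertion keyed by commit date (newest first), instead of the
     * O(n) sorted insertion done by commit_list_insert_by_date. */
    static void heap_push(struct heap *h, struct fake_commit *c)
    {
    	size_t i;

    	if (h->nr == h->alloc) {
    		h->alloc = h->alloc ? 2 * h->alloc : 64;
    		h->item = realloc(h->item, h->alloc * sizeof(*h->item));
    	}
    	i = h->nr++;
    	h->item[i] = c;
    	while (i && h->item[(i - 1) / 2]->date < h->item[i]->date) {
    		struct fake_commit *tmp = h->item[(i - 1) / 2];
    		h->item[(i - 1) / 2] = h->item[i];
    		h->item[i] = tmp;
    		i = (i - 1) / 2;
    	}
    }

    int main(void)
    {
    	struct heap rev_list = { NULL, 0, 0 };
    	struct fake_commit tips[3] = { { 100 }, { 300 }, { 200 } };
    	size_t i;

    	for (i = 0; i < 3; i++)
    		heap_push(&rev_list, &tips[i]);  /* n pushes cost O(n log n) total */
    	printf("newest tip: %ld\n", (long)rev_list.item[0]->date);
    	free(rev_list.item);
    	return 0;
    }
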
* fetch-pack: avoid quadratic list insertion in mark_complete | Jeff King | 2013-07-02 | 1 | -1/+2

    We insert the commit pointed to by each ref one-by-one into the
    "complete" commit_list using insert_by_date. Because each insertion is
    O(n), we end up with O(n^2) behavior.

    This typically doesn't matter, because the number of refs is
    reasonably small. And even if there are a lot of refs, they often
    point to a smaller set of objects (in which case the optimization in
    commit ea5f220 keeps our "n" small).

    However, in pathological repositories (hundreds of thousands of refs,
    each pointing to a unique commit), this quadratic behavior can make a
    difference. Since we do not care about the list order until we have
    finished building it, we can simply keep it unsorted during the
    insertion phase, then sort it afterwards.

    On a repository like the one described above, this dropped the time to
    do a no-op fetch from 2.0s to 1.7s. On normal repositories, it
    probably does not matter at all, but it does not hurt to protect
    ourselves from pathological cases.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

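A small self-contained C sketch of the "append unsorted, sort once at the end" pattern described above. The types and names are illustrative stand-ins, not git's mark_complete code; the point is that n O(1) appends plus one O(n log n) sort replace n O(n) sorted insertions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct fake_commit { time_t date; };

    /* Newest-first comparison for qsort over an array of commit pointers. */
    static int by_date_desc(const void *a, const void *b)
    {
    	const struct fake_commit *x = *(const struct fake_commit * const *)a;
    	const struct fake_commit *y = *(const struct fake_commit * const *)b;
    	return (x->date < y->date) - (x->date > y->date);
    }

    int main(void)
    {
    	struct fake_commit c[3] = { { 200 }, { 100 }, { 300 } };
    	struct fake_commit *complete[3];
    	size_t i, nr = 0;

    	for (i = 0; i < 3; i++)
    		complete[nr++] = &c[i];          /* O(1) per ref: no sorted insert */
    	qsort(complete, nr, sizeof(*complete), by_date_desc); /* one O(n log n) sort */

    	for (i = 0; i < nr; i++)
    		printf("%ld\n", (long)complete[i]->date);
    	return 0;
    }
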
* clone: open a shortcut for connectivity check | Nguyễn Thái Ngọc Duy | 2013-05-28 | 1 | -1/+10

    In order to make sure the cloned repository is good, we run "rev-list
    --objects --not --all $new_refs" on the repository. This is expensive
    on large repositories. This patch attempts to mitigate the impact in
    this special case.

    In the "good" clone case, we only have one pack. If all of the
    following are met, we can be sure that all objects reachable from the
    new refs exist, which is the intention of running "rev-list ...":

     - all refs point to an object in the pack
     - there are no dangling pointers in any object in the pack
     - no objects in the pack point to objects outside the pack

    The second and third checks can be done with the help of index-pack as
    a slight variation of --strict check (which introduces a new condition
    for the shortcut: pack transfer must be used and the number of objects
    large enough to call index-pack). The first is checked in
    check_everything_connected after we get an "ok" from index-pack.

    "index-pack + new checks" is still faster than the current "index-pack
    + rev-list", which is the whole point of this patch. If any of the
    conditions fail, we fall back to the good old but expensive "rev-list
    ..". In that case it's even more expensive because we have to pay for
    the new checks in index-pack. But that should only happen when the
    other side is either buggy or malicious.

    Cloning linux-2.6 over file://

            before         after
    real    3m25.693s      2m53.050s
    user    5m2.037s       4m42.396s
    sys     0m13.750s      0m16.574s

    A more realistic test with ssh:// over wireless

            before         after
    real    11m26.629s     10m4.213s
    user    5m43.196s      5m19.444s
    sys     0m35.812s      0m37.630s

    This shortcut is not applied to shallow clones, partly because shallow
    clones should have no more objects than a usual fetch and the cost of
    rev-list is acceptable, partly to avoid dealing with corner cases when
    grafting is involved.

    This shortcut does not apply to the unpack-objects code path either
    because the number of objects must be small in order to trigger that
    code path.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

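As a summary of the conditions listed above, here is a hypothetical C helper expressing the decision. The struct and field names are invented for illustration and are not the names used in git's clone or fetch-pack code.

    /* Hypothetical decision helper mirroring the shortcut conditions. */
    struct connectivity_shortcut {
    	int is_shallow;             /* shallow clones always take the rev-list path */
    	int used_pack_transfer;     /* objects arrived as one pack, not unpack-objects */
    	int index_pack_strict_ok;   /* no dangling or outside-the-pack pointers */
    	int all_refs_in_pack;       /* verified in check_everything_connected */
    };

    static int can_skip_rev_list(const struct connectivity_shortcut *s)
    {
    	if (s->is_shallow)
    		return 0;               /* fall back to the full rev-list walk */
    	return s->used_pack_transfer &&
    	       s->index_pack_strict_ok &&
    	       s->all_refs_in_pack;
    }
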
* fetch-pack: prepare updated shallow file before fetching the pack | Nguyễn Thái Ngọc Duy | 2013-05-28 | 1 | -36/+37

    index-pack --strict looks up and follows parent commits. If shallow
    information is not ready by the time index-pack is run, index-pack may
    be led to non-existent objects. Make fetch-pack save the shallow file
    to disk before invoking index-pack.

    git learns a new global option, --shallow-file, to pass on the
    alternate shallow file path. It is left undocumented (and does not
    even support the --shallow-file= syntax) because it's unlikely to be
    used again elsewhere.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/pkt-line-cleanup' | Junio C Hamano | 2013-04-01 | 1 | -9/+9

    Clean up pkt-line API, implementation and its callers to make them
    more robust.

    * jk/pkt-line-cleanup:
      do not use GIT_TRACE_PACKET=3 in tests
      remote-curl: always parse incoming refs
      remote-curl: move ref-parsing code up in file
      remote-curl: pass buffer straight to get_remote_heads
      teach get_remote_heads to read from a memory buffer
      pkt-line: share buffer/descriptor reading implementation
      pkt-line: provide a LARGE_PACKET_MAX static buffer
      pkt-line: move LARGE_PACKET_MAX definition from sideband
      pkt-line: teach packet_read_line to chomp newlines
      pkt-line: provide a generic reading function with options
      pkt-line: drop safe_write function
      pkt-line: move a misplaced comment
      write_or_die: raise SIGPIPE when we get EPIPE
      upload-archive: use argv_array to store client arguments
      upload-archive: do not copy repo name
      send-pack: prefer prefixcmp over memcmp in receive_status
      fetch-pack: fix out-of-bounds buffer offset in get_ack
      upload-pack: remove packet debugging harness
      upload-pack: do not add duplicate objects to shallow list
      upload-pack: use get_sha1_hex to parse "shallow" lines

* pkt-line: provide a LARGE_PACKET_MAX static buffer | Jeff King | 2013-02-20 | 1 | -6/+6

    Most of the callers of packet_read_line just read into a static
    1000-byte buffer (callers which handle arbitrary binary data already
    use LARGE_PACKET_MAX). This works fine in practice, because:

      1. The only variable-sized data in these lines is a ref name, and
         refs tend to be a lot shorter than 1000 characters.

      2. When sending ref lines, git-core always limits itself to 1000
         byte packets.

    However, the only limit given in the protocol specification in
    Documentation/technical/protocol-common.txt is LARGE_PACKET_MAX; the
    1000 byte limit is mentioned only in pack-protocol.txt, and then only
    describing what we write, not as a specific limit for readers.

    This patch lets us bump the 1000-byte limit to LARGE_PACKET_MAX. Even
    though git-core will never write a packet where this makes a
    difference, there are two good reasons to do this:

      1. Other git implementations may have followed protocol-common.txt
         and used a larger maximum size. We don't bump into it in practice
         because it would involve very long ref names.

      2. We may want to increase the 1000-byte limit one day. Since
         packets are transferred before any capabilities, it's difficult
         to do this in a backwards-compatible way. But if we bump the size
         of buffer the readers can handle, eventually older versions of
         git will be obsolete enough that we can justify bumping the
         writers, as well. We don't have plans to do this anytime soon,
         but there is no reason not to start the clock ticking now.

    Just bumping all of the reading bufs to LARGE_PACKET_MAX would waste
    memory. Instead, since most readers just read into a temporary buffer
    anyway, let's provide a single static buffer that all callers can use.
    We can further wrap this detail away by having the packet_read_line
    wrapper just use the buffer transparently and return a pointer to the
    static storage. That covers most of the cases, and the remaining ones
    already read into their own LARGE_PACKET_MAX buffers.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

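A minimal stand-alone C sketch of the "single shared static buffer" pattern described above. The function name is invented; only LARGE_PACKET_MAX (65520, the value used by git's pkt-line code) is taken from the source. The caller gets a pointer into one shared buffer that is valid until the next read, instead of keeping its own 1000-byte array.

    #include <stdio.h>
    #include <string.h>

    #define LARGE_PACKET_MAX 65520   /* value used by git's pkt-line code */

    static char packet_buffer[LARGE_PACKET_MAX];

    /* Illustrative reader: copies the line into the shared static buffer and
     * returns a pointer to it, as packet_read_line does after this series. */
    static char *read_line_into_static_buffer(const char *src)
    {
    	size_t len = strlen(src);
    	if (len >= sizeof(packet_buffer))
    		return NULL;             /* oversized packet: caller must handle */
    	memcpy(packet_buffer, src, len + 1);
    	return packet_buffer;            /* valid until the next read */
    }

    int main(void)
    {
    	char *line = read_line_into_static_buffer(
    		"want 74730d410fcb6603ace96f1dc55ea6196122532d\n");
    	printf("%s", line);
    	return 0;
    }
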
* pkt-line: teach packet_read_line to chomp newlines | Jeff King | 2013-02-20 | 1 | -2/+0

    The packets sent during ref negotiation are all terminated by newline;
    even though the code to chomp these newlines is short, we end up doing
    it in a lot of places.

    This patch teaches packet_read_line to auto-chomp the trailing
    newline; this lets us get rid of a lot of inline chomping code.

    As a result, some call-sites which are not reading line-oriented data
    (e.g., when reading chunks of packfiles alongside sideband) transition
    away from packet_read_line to the generic packet_read interface. This
    patch converts all of the existing callsites.

    Since the function signature of packet_read_line does not change (but
    its behavior does), there is a possibility of new callsites being
    introduced in later commits, silently introducing an incompatibility.
    However, since a later patch in this series will change the signature,
    such a commit would have to be merged directly into this commit, not
    to the tip of the series; we can therefore ignore the issue.

    This is an internal cleanup and should produce no change of behavior
    in the normal case. However, there is one corner case to note. Callers
    of packet_read_line have never been able to tell the difference
    between a flush packet ("0000") and an empty packet ("0004"), as both
    cause packet_read_line to return a length of 0. Readers treat them
    identically, even though Documentation/technical/protocol-common.txt
    says we must not; it also says that implementations should not send an
    empty pkt-line. By stripping out the newline before the result gets to
    the caller, we will now treat the newline-only packet ("0005\n") the
    same as an empty packet, which in turn gets treated like a flush
    packet. In practice this doesn't matter, as neither empty nor
    newline-only packets are part of git's protocols (at least not for the
    line-oriented bits, and readers who are not expecting line-oriented
    packets will be calling packet_read directly, anyway). But even if we
    do decide to care about the distinction later, it is orthogonal to
    this patch. The right place to tighten would be to stop treating empty
    packets as flush packets, and this change does not make doing so any
    harder.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

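For reference, the chomp now done centrally is just dropping one trailing newline from a length-counted payload. A tiny self-contained sketch (not git's actual code):

    #include <stdio.h>
    #include <string.h>

    /* Drop one trailing newline, returning the new length. */
    static int chomp_newline(char *buf, int len)
    {
    	if (len > 0 && buf[len - 1] == '\n')
    		buf[--len] = '\0';
    	return len;
    }

    int main(void)
    {
    	char line[] = "ACK 74730d410fcb6603ace96f1dc55ea6196122532d\n";
    	int len = chomp_newline(line, (int)strlen(line));
    	printf("%d: %s\n", len, line);
    	return 0;
    }
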
* pkt-line: drop safe_write function | Jeff King | 2013-02-20 | 1 | -1/+1

    This is just write_or_die by another name. The one distinction is that
    write_or_die will treat EPIPE specially by suppressing error messages.
    That's fine, as we die by SIGPIPE anyway (and in the off chance that
    it is disabled, write_or_die will simulate it).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* fetch-pack: fix out-of-bounds buffer offset in get_ack | Jeff King | 2013-02-20 | 1 | -0/+2

    When we read acks from the remote, we expect either:

        ACK <sha1>

    or

        ACK <sha1> <multi-ack-flag>

    We parse the "ACK <sha1>" bit from the line, and then start looking
    for the flag strings at "line+45"; if we don't have them, we assume
    it's of the first type. But if we do have the first type, then line+45
    is not necessarily inside our string at all!

    It turns out that this works most of the time due to the way we parse
    the packets. They should come in with a newline, and packet_read puts
    an extra NUL into the buffer, so we end up with:

        ACK <sha1>\n\0

    with the newline at offset 44 and the NUL at offset 45. We then strip
    the newline, putting a NUL at offset 44. So when we look at "line+45",
    we are looking past the end of our string; but it's OK, because we hit
    the terminator from the original string.

    This breaks down, however, if the other side does not terminate their
    packets with a newline. In that case, our packet is one character
    shorter, and we start looking through uninitialized memory for the
    flag. No known implementation sends such a packet, so it has never
    come up in practice.

    This patch tightens the check by looking for a short, flagless ACK
    before trying to parse the flag.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

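A simplified, self-contained C sketch of the tightened parse: check the line length before looking for a multi-ack flag at offset 45. The enum values and control flow are reduced from what fetch-pack's get_ack actually handles, so treat this as an illustration of the bounds check, not the real function.

    #include <stdio.h>
    #include <string.h>

    enum ack_type { NAK, ACK, ACK_CONTINUE };

    static enum ack_type parse_ack(const char *line, int len)
    {
    	if (!strcmp(line, "NAK"))
    		return NAK;
    	if (!strncmp(line, "ACK ", 4)) {
    		if (len < 45)                 /* "ACK <40-hex-sha1>" with no flag */
    			return ACK;           /* never look past the string's end */
    		if (strstr(line + 45, "continue"))
    			return ACK_CONTINUE;
    		return ACK;
    	}
    	return NAK;  /* the real code dies on unexpected input instead */
    }

    int main(void)
    {
    	const char *flagless = "ACK 74730d410fcb6603ace96f1dc55ea6196122532d";
    	printf("%d\n", parse_ack(flagless, (int)strlen(flagless)));
    	return 0;
    }
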
* Merge branch 'jc/fetch-raw-sha1' | Junio C Hamano | 2013-03-21 | 1 | -32/+69

    Allows requests to fetch objects at any tip of refs (including hidden
    ones). It seems that there may be use cases even outside Gerrit
    (e.g. $gmane/215701).

    * jc/fetch-raw-sha1:
      fetch: fetch objects by their exact SHA-1 object names
      upload-pack: optionally allow fetching from the tips of hidden refs
      fetch: use struct ref to represent refs to be fetched
      parse_fetch_refspec(): clarify the codeflow a bit

* fetch: fetch objects by their exact SHA-1 object names | Junio C Hamano | 2013-02-07 | 1 | -1/+21

    Teach "git fetch" to accept an exact SHA-1 object name the user may
    obtain out of band on the LHS of a refspec, and send it on a "want"
    message when the server side advertises the allow-tip-sha1-in-want
    capability.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* fetch: use struct ref to represent refs to be fetched | Junio C Hamano | 2013-02-07 | 1 | -31/+48

    Even though "git fetch" has full infrastructure to parse refspecs to
    be fetched and match them against the list of refs to come up with the
    final list of refs to be fetched, the list of refs that are requested
    to be fetched was internally converted to a plain list of strings at
    the transport layer and then passed to the underlying fetch-pack
    driver.

    Stop this conversion and instead pass around an array of refs.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/gc-auto-after-fetch' | Junio C Hamano | 2013-02-01 | 1 | -0/+3

    Help "fetch only" repositories that do not trigger "gc --auto" often
    enough.

    * jk/gc-auto-after-fetch:
      fetch-pack: avoid repeatedly re-scanning pack directory
      fetch: run gc --auto after fetching

* Merge branch 'jk/maint-gc-auto-after-fetch' into jk/gc-auto-after-fetch | Junio C Hamano | 2013-01-26 | 1 | -0/+3

    * jk/maint-gc-auto-after-fetch:
      fetch-pack: avoid repeatedly re-scanning pack directory
      fetch: run gc --auto after fetching

* Merge branch 'mk/qnx' | Junio C Hamano | 2013-01-03 | 1 | -2/+1

    Port to QNX.

    * mk/qnx:
      Port to QNX
      Make lock local to fetch_pack

* fetch-pack: move core code to libgit.a | Nguyễn Thái Ngọc Duy | 2012-10-29 | 1 | -0/+951

    fetch_pack() is used by transport.c, which is part of libgit.a, while
    it stays in builtin/fetch-pack.c. Move it to fetch-pack.c so that we
    won't get an undefined reference if a program that uses libgit.a
    happens to pull it in.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Jeff King <peff@peff.net>

* Make fetch-pack a builtin with an internal API | Daniel Barkalow | 2007-09-19 | 1 | -789/+0

    Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* War on whitespace | Junio C Hamano | 2007-06-07 | 1 | -1/+1

    This uses "git-apply --whitespace=strip" to fix whitespace errors that
    have crept in to our source files over time. There are a few files
    that need to have trailing whitespaces (most notably, test vectors).
    The results still pass the test, and the build result in the
    Documentation/ area is unchanged.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* connect: display connection progress | Michael S. Tsirkin | 2007-05-16 | 1 | -1/+1

    Make git notify the user about host resolution/connection attempts.
    This is useful both as a progress indicator on slow links, and helps
    reassure the user there are no firewall problems.

    Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Merge branch 'js/fetch-progress' (early part) | Junio C Hamano | 2007-03-04 | 1 | -3/+9

    * 'js/fetch-progress' (early part):
      Fixup no-progress for fetch & clone
      fetch & clone: do not output progress when not on a tty

    Conflicts:
      git-fetch.sh

* Fixup no-progress for fetch & clone | Johannes Schindelin | 2007-02-24 | 1 | -7/+3

    The intent of the commit 'fetch & clone: do not output progress when
    not on a tty' was to make fetching and cloning less chatty when output
    was redirected (such as in a cron job).

    However, there was a serious thinko in that commit. It assumed that
    the client _and_ the server got this update at the same time. But this
    is obviously not the case, and therefore upload-pack died on seeing
    the option "--no-progress".

    This patch fixes that issue by making it a protocol option. So, until
    your server is updated, you still see the progress, but once the
    server has this patch, it will be quiet.

    A minor issue was also fixed: when cloning, the checkout did not heed
    no_progress.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* fetch & clone: do not output progress when not on a tty | Johannes Schindelin | 2007-02-19 | 1 | -3/+13

    This adds the option "--no-progress" to fetch-pack and upload-pack,
    and makes fetch and clone pass this option when stdout is not a tty.

    While documenting that option, also document the --strict and
    --timeout options for upload-pack.

    Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* prefixcmp(): fix-up mechanical conversion. | Junio C Hamano | 2007-02-20 | 1 | -5/+5

    Previous step converted use of strncmp() with literal string
    mechanically even when the result is only used as a boolean:

        if (!strncmp("foo", arg, 3)) ==> if (!(-prefixcmp(arg, "foo")))

    This step manually cleans them up to read:

        if (!prefixcmp(arg, "foo"))

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Mechanical conversion to use prefixcmp() | Junio C Hamano | 2007-02-20 | 1 | -6/+6

    This mechanically converts strncmp() to use prefixcmp(), but only when
    the parameters match specific patterns, so that they can be verified
    easily.

    Leftover from this will be fixed in a separate step, including idiotic
    conversions like:

        if (!strncmp("foo", arg, 3))
          =>
        if (!(-prefixcmp(arg, "foo")))

    This was done by using this script in px.perl:

        #!/usr/bin/perl -i.bak -p
        if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
                s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
        }
        if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
                s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
        }

    and running:

        $ git grep -l strncmp -- '*.c' | xargs perl px.perl

    Signed-off-by: Junio C Hamano <junkio@cox.net>

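For context, prefixcmp() returns 0 when the string starts with the given prefix. A self-contained equivalent matching the semantics described here (not necessarily the exact historical implementation), followed by the post-conversion idiom:

    #include <stdio.h>
    #include <string.h>

    /* Equivalent of prefixcmp(): 0 when str starts with prefix, non-zero otherwise. */
    static int prefixcmp(const char *str, const char *prefix)
    {
    	return strncmp(str, prefix, strlen(prefix));
    }

    int main(void)
    {
    	const char *arg = "--upload-pack=/usr/bin/git-upload-pack";

    	/* the idiom the conversion produces */
    	if (!prefixcmp(arg, "--upload-pack="))
    		printf("value: %s\n", arg + strlen("--upload-pack="));
    	return 0;
    }
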
* Don't force everybody to call setup_ident(). | Junio C Hamano | 2007-01-28 | 1 | -1/+0

    Back when the handful of commands that created commits and tags were
    the only users of committer identity information, it made sense to
    explicitly call setup_ident() to pre-fill the default value from the
    gecos information. But it is much simpler for programs to make the
    call automatic when get_ident() is called these days, since many more
    programs want to use the information when updating the reflog.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Consolidate {receive,fetch}.unpackLimit | Junio C Hamano | 2007-01-24 | 1 | -1/+13

    This allows transfer.unpackLimit to specify what these two
    configuration variables want to set. We would probably want to
    deprecate the two separate variables, as I do not see much point in
    specifying them independently.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* fetch-pack: remove --keep-auto and make it the default. | Junio C Hamano | 2007-01-24 | 1 | -14/+17

    This makes git-fetch over the git native protocol automatically decide
    to keep the downloaded pack if the fetch results in more than 100
    objects, just like receive-pack invoked by git-push does. This logic
    is disabled when --keep is explicitly given from the command line, so
    that a very small clone still keeps the downloaded pack as before.

    The 100 threshold can be adjusted with the fetch.unpacklimit
    configuration. We might want to introduce transfer.unpacklimit to
    consolidate the two unpacklimit variables, which will be a topic for
    the next patch.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Allow fetch-pack to decide keeping the fetched pack without exploding | Junio C Hamano | 2007-01-24 | 1 | -32/+59

    With the --keep-auto option, fetch-pack decides to keep the pack
    without exploding it, just like receive-pack does. We may want to
    later make this the default.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* rename --exec to --upload-pack for fetch-pack and peek-remote | Uwe Kleine-König | 2007-01-24 | 1 | -4/+8

    Just some option name disambiguation. This is the counterpart to
    commit d23842fd, which made a similar change for push and send-pack.

    --exec continues to work.

    Signed-off-by: Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Update documentation of fetch-pack, push and send-pack | Uwe Kleine-König | 2007-01-19 | 1 | -1/+1

    add all supported options to Documentation/git-....txt and the usage
    strings.

    Signed-off-by: Uwe Kleine-König <zeisberg@informatik.uni-freiburg.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* fetch-pack: do not use lockfile structure on stack. | Junio C Hamano | 2007-01-02 | 1 | -1/+2

    They are used in atexit() for clean-up, and you will be accessing
    unallocated memory at that point. See 31f584c2 for the fix for a
    similar problem.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

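A small stand-alone C sketch of the rule behind this fix: anything an atexit() handler touches must not live in a stack frame that has already returned, so the lockfile structure needs static (or heap) storage. The struct name is an invented stand-in, not git's lock_file.

    #include <stdio.h>
    #include <stdlib.h>

    struct lock_file_demo { char path[64]; };

    static struct lock_file_demo lock;     /* safe: static storage duration */

    static void cleanup(void)
    {
    	/* runs after main() returns; a stack-allocated lock would be gone */
    	printf("removing %s\n", lock.path);
    }

    int main(void)
    {
    	snprintf(lock.path, sizeof(lock.path), ".git/shallow.lock");
    	atexit(cleanup);
    	return 0;
    }
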
* Merge branch 'master' into js/shallow | Junio C Hamano | 2006-12-27 | 1 | -1/+25

    This is to adjust to:

        count-objects -v: show number of packs as well.

    which will break a test in this series.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* simplify inclusion of system header files. | Junio C Hamano | 2006-12-20 | 1 | -1/+0

    This is a mechanical clean-up of the way *.c files include system
    header files.

     (1) sources under compat/, platform sha-1 implementations, and
         xdelta code are exempt from the following rules;

     (2) the first #include must be "git-compat-util.h" or one of our own
         header files that includes it first (e.g. config.h, builtin.h,
         pkt-line.h);

     (3) system headers that are included in "git-compat-util.h" need not
         be included in individual C source files.

     (4) "git-compat-util.h" does not have to include subsystem specific
         header files (e.g. expat.h).

    Signed-off-by: Junio C Hamano <junkio@cox.net>

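A short example of what rules (2) and (3) look like at the top of a source file inside git's own tree (the file name foo.c is hypothetical):

    /* foo.c: example include layout following the rules above */
    #include "git-compat-util.h"   /* or a header like builtin.h that includes it first */
    #include "pkt-line.h"

    /* <stdio.h>, <string.h>, etc. are not repeated here: the system headers
     * pulled in by git-compat-util.h need not be re-included (rule 3). */
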
* fetch-pack: do not barf when duplicate re patterns are given | Junio C Hamano | 2006-11-25 | 1 | -0/+25

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* fetch-pack: Do not fetch tags for shallow clones. | Alexandre Julliard | 2006-11-24 | 1 | -1/+2

    A better fix may be to only fetch tags that point to commits that we
    are downloading, but git-clone doesn't have support for following
    tags. This will happen automatically on the next git-fetch though.

    Signed-off-by: Alexandre Julliard <julliard@winehq.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* fetch-pack: Properly remove the shallow file when it becomes empty. | Alexandre Julliard | 2006-11-24 | 1 | -1/+1

    The code was unlinking the lock file instead.

    Signed-off-by: Alexandre Julliard <julliard@winehq.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Why does it mean we do not have to register shallow if we have one? | Junio C Hamano | 2006-11-24 | 1 | -3/+0

* We should make sure that the protocol is still extensible. | Junio C Hamano | 2006-11-24 | 1 | -4/+7

    This just reformats the if .. else if .. else chain to make it clear
    we are handling an extended response from the other end.

* allow deepening of a shallow repository | Johannes Schindelin | 2006-11-24 | 1 | -6/+16

    Now, by saying "git fetch --depth <n> <repo>" you can deepen a shallow
    repository.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* allow cloning a repository "shallowly" | Johannes Schindelin | 2006-11-24 | 1 | -1/+60

    By specifying a depth, you can now clone a repository such that all
    fetched ancestor-chains' length is at most "depth". For example, if
    the upstream repository has only 2 branches ("A" and "B"), which are
    linear, and you specify depth 3, you will get A, A~1, A~2, A~3, B,
    B~1, B~2, and B~3. The ends are automatically made shallow commits.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* support fetching into a shallow repository | Johannes Schindelin | 2006-11-24 | 1 | -0/+4

    A shallow commit is a commit which has parents, which in turn are
    "grafted away", i.e. the commit appears as if it were a root.

    Since these shallow commits should not be edited by the user, but only
    by core git, they are recorded in the file $GIT_DIR/shallow.

    A repository containing shallow commits is called shallow. The
    advantage of a shallow repository is that even if the upstream
    contains lots of history, your local (shallow) repository needs not
    occupy much disk space. The disadvantage is that you might miss a
    merge base when pulling some remote branch.

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* improve fetch-pack's handling of kept packs | Nicolas Pitre | 2006-11-03 | 1 | -7/+103

    Since functions in fetch-clone.c were only used from fetch-pack.c, its
    content has been merged with fetch-pack.c. This allows for better
    coupling of features with much simpler implementations.

    One new thing is that the (absence of) --thin also enforces it on
    index-pack now, such that index-pack will abort if a thin pack was
    _not_ asked for.

    The -k or --keep, when provided twice, now causes the fetched pack to
    be left as a kept pack just like receive-pack currently does.
    Eventually this will be used to close a race against concurrent
    repacking.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Merge branch 'master' into np/index-pack | Junio C Hamano | 2006-11-03 | 1 | -4/+4

    * master: (90 commits)
      gitweb: Better support for non-CSS aware web browsers
      gitweb: Output also empty patches in "commitdiff" view
      gitweb: Use git-for-each-ref to generate list of heads and/or tags
      for-each-ref: "creator" and "creatordate" fields
      Add --global option to git-repo-config.
      pack-refs: Store the full name of the ref even when packing only tags.
      git-clone documentation didn't mention --origin as equivalent of -o
      Minor grammar fixes for git-diff-index.txt
      link_temp_to_file: call adjust_shared_perm() only when we created the directory
      Remove uneccessarily similar printf() from print_ref_list() in builtin-branch
      pack-objects doesn't create random pack names
      branch: work in subdirectories.
      gitweb: Use 's' regexp modifier to secure against filenames with LF
      gitweb: Secure against commit-ish/tree-ish with the same name as path
      gitweb: esc_html() author in blame
      git-svnimport: support for partial imports
      link_temp_to_file: don't leave the path truncated on adjust_shared_perm failure
      Move deny_non_fast_forwards handling completely into receive-pack.
      revision traversal: --unpacked does not limit commit list anymore.
      Continue traversal when rev-list --unpacked finds a packed commit.
      ...

* Merge branch 'lj/refs' | Junio C Hamano | 2006-11-01 | 1 | -4/+4

    * lj/refs: (63 commits)
      Fix show-ref usagestring
      t3200: git-branch testsuite update
      sha1_name.c: avoid compilation warnings.
      Make git-branch a builtin
      ref-log: fix D/F conflict coming from deleted refs.
      git-revert with conflicts to behave as git-merge with conflicts
      core.logallrefupdates thinko-fix
      git-pack-refs --all
      core.logallrefupdates create new log file only for branch heads.
      Remove bashism from t3210-pack-refs.sh
      ref-log: allow ref@{count} syntax.
      pack-refs: call fflush before fsync.
      pack-refs: use lockfile as everybody else does.
      git-fetch: do not look into $GIT_DIR/refs to see if a tag exists.
      lock_ref_sha1_basic does not remove empty directories on BSD
      Do not create tag leading directories since git update-ref does it.
      Check that a tag exists using show-ref instead of looking for the ref file.
      Use git-update-ref to delete a tag instead of rm()ing the ref file.
      Fix refs.c;:repack_without_ref() clean-up path
      Clean up "git-branch.sh" and add remove recursive dir test cases.
      ...

* Tell between packed, unpacked and symbolic refs. | Junio C Hamano | 2006-09-20 | 1 | -2/+2

    This adds an "int *flag" parameter to resolve_ref() and makes the
    for_each_ref() family call the callback function with an extra
    "int flag" parameter. They are used to give two bits of information
    (REF_ISSYMREF and REF_ISPACKED) about the ref.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

* Add callback data to for_each_ref() family. | Junio C Hamano | 2006-09-20 | 1 | -4/+4

    This is a long overdue fix to the API for the for_each_ref() family of
    functions. It allows the callers to specify a callback data pointer,
    so that the caller does not have to use static variables to
    communicate with the callback function.

    The updated for_each_ref() family takes a function of type

        int (*fn)(const char *, const unsigned char *, void *)

    and a void pointer as parameters, and calls the function with the name
    of the ref and its SHA-1, with the caller-supplied void pointer as
    parameters.

    The commit updates two callers, builtin-name-rev.c and
    builtin-pack-refs.c, as an example.

    Signed-off-by: Junio C Hamano <junkio@cox.net>

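A self-contained C sketch of a callback matching the signature quoted above, with the void pointer carrying caller state so no static variables are needed. for_each_ref() itself is git's; this stand-alone snippet only shows the callback shape and invokes it directly.

    #include <stdio.h>

    struct ref_count { int refs; };

    /* Matches: int (*fn)(const char *, const unsigned char *, void *) */
    static int count_ref(const char *refname, const unsigned char *sha1, void *cb_data)
    {
    	struct ref_count *count = cb_data;
    	(void)refname;
    	(void)sha1;
    	count->refs++;
    	return 0;               /* non-zero would stop the iteration early */
    }

    int main(void)
    {
    	struct ref_count count = { 0 };
    	unsigned char fake_sha1[20] = { 0 };

    	/* inside git this would be: for_each_ref(count_ref, &count); */
    	count_ref("refs/heads/master", fake_sha1, &count);
    	printf("%d refs\n", count.refs);
    	return 0;
    }
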
* enhance clone and fetch -k experience | Nicolas Pitre | 2006-10-27 | 1 | -2/+0

    Now that index-pack can be streamed with a pack, it is probably a good
    idea to use it directly instead of creating a temporary file and
    running index-pack afterwards. This way index-pack can abort early
    whenever a corruption is encountered even if the pack has not been
    fully downloaded, it can display a progress percentage as it knows how
    much to expect, and it is a bit faster since the pack indexing is
    partially done as data is received.

    Using fetch -k doesn't need to disable thin pack generation on the
    remote end either.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>

* let the GIT native protocol use offsets to delta base when possible | Nicolas Pitre | 2006-09-27 | 1 | -2/+3

    There is no reason not to always do this when both ends agree.
    Therefore a client that can accept offsets to delta base always sends
    the "ofs-delta" flag. The server will stream a pack with or without
    offset to delta base depending on whether that flag is provided or
    not, with no additional cost.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
