author    Linus Torvalds <torvalds@linux-foundation.org>  2009-08-10 16:52:07 -0700
committer Junio C Hamano <gitster@pobox.com>  2009-08-10 17:26:51 -0700
commit    926172c5e4808726244713ef70398cd38b055f1e (patch)
tree      882344c9a83dd2f43129a2c4ba1d277edd8e5fe1 /test-sha1.sh
parent    66c9c6c0fbba0894ebce3da572f62eb05162e547 (diff)
block-sha1: improve code on large-register-set machines
For x86 performance (especially in 32-bit mode) I added that hack to write the SHA1 internal temporary hash using a volatile pointer, in order to get gcc to not try to cache the array contents. Because gcc will do all the wrong things, and then spill things in insane random ways.

But on architectures like PPC, where you have 32 registers, it's actually perfectly reasonable to put the whole temporary array[] into the register set, and gcc can do so.

So make the 'volatile unsigned int *' cast be dependent on a SMALL_REGISTER_SET preprocessor symbol, and enable it (currently) on just x86 and x86-64.

With that, the routine is fairly reasonable even when compared to the hand-scheduled PPC version. Ben Herrenschmidt reports on a G5:

 * Paulus asm version: about 3.67s
 * Yours with no change: about 5.74s
 * Yours without "volatile": about 3.78s

so with this the C version is within about 3% of the asm one.

And add a lot of commentary on what the heck is going on.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Diffstat (limited to 'test-sha1.sh')
0 files changed, 0 insertions, 0 deletions