author:    H.J. Lu <hjl.tools@gmail.com>  2016-03-28 13:13:36 -0700
committer: H.J. Lu <hjl.tools@gmail.com>  2016-03-28 13:13:51 -0700
commit:    c365e615f7429aee302f8af7bf07ae262278febb (patch)
tree:      871a829257ab6f5ba2584e4d9be93cbf97f56991 /sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S
parent:    e41b395523040fcb58c7d378475720c2836d280c (diff)
Implement x86-64 multiarch mempcpy in memcpy
Implement the x86-64 multiarch mempcpy inside memcpy so that most of the
code is shared between the two functions. This reduces the code size of
libc.so.
[BZ #18858]
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned
and mempcpy-avx512-no-vzeroupper.
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK):
New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S
(MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New.
(MEMPCPY): Likewise.
* sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
* sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
Diffstat (limited to 'sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S')
 sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S b/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S
index b615d063c0..dd4187fa36 100644
--- a/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S
+++ b/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S
@@ -25,11 +25,26 @@
 #include "asm-syntax.h"
 
 #ifndef MEMCPY
-# define MEMCPY		__memcpy_avx_unaligned
+# define MEMCPY		__memcpy_avx_unaligned
 # define MEMCPY_CHK	__memcpy_chk_avx_unaligned
+# define MEMPCPY	__mempcpy_avx_unaligned
+# define MEMPCPY_CHK	__mempcpy_chk_avx_unaligned
 #endif
 
 	.section .text.avx,"ax",@progbits
+#if !defined USE_AS_MEMPCPY && !defined USE_AS_MEMMOVE
+ENTRY (MEMPCPY_CHK)
+	cmpq	%rdx, %rcx
+	jb	HIDDEN_JUMPTARGET (__chk_fail)
+END (MEMPCPY_CHK)
+
+ENTRY (MEMPCPY)
+	movq	%rdi, %rax
+	addq	%rdx, %rax
+	jmp	L(start)
+END (MEMPCPY)
+#endif
+
 #if !defined USE_AS_BCOPY
 ENTRY (MEMCPY_CHK)
 	cmpq	%rdx, %rcx
@@ -42,6 +57,7 @@ ENTRY (MEMCPY)
 #ifdef USE_AS_MEMPCPY
 	add	%rdx, %rax
 #endif
+L(start):
 	cmp	$256, %rdx
 	jae	L(256bytesormore)
 	cmp	$16, %dl