* All files with FSF copyright notices: Update copyright dates
using scripts/update-copyrights.
* locale/programs/charmap-kw.h: Regenerated.
* locale/programs/locfile-kw.h: Likewise.
The current s_sincosf.c is faster than s_sincosf-sse2.S. On Broadwell
with FMA disabled, bench-sincosf shows:
Before After Improvement
max 154.032 114.517 34%
min 6.25 5.609 11%
mean 14.8728 12.8589 15%
* sysdeps/x86_64/fpu/s_sincosf.S: Removed.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-sse2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-sse2.c: New file.
Add <sincosf_poly.h> and include it in s_sincosf.h to allow vectorized
sincosf_poly. Add x86 sincosf_poly.h to vectorize sincosf_poly. On
Broadwell, bench-sincosf shows:
Before After Improvement
max 160.273 114.198 40%
min 6.25 5.625 11%
mean 13.0325 10.6462 22%
Vectorized sincosf_poly shows
Before After Improvement
max 138.653 114.198 21%
min 5.004 5.625 -11%
mean 11.5934 10.6462 9%
Tested on x86-64 and i686 as well as with build-many-glibcs.py.
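To make the split concrete, here is a minimal sketch of what a vectorized
sincosf_poly can look like, using GCC vector extensions; the type name,
coefficient layout and signature below are illustrative assumptions, not the
actual contents of sysdeps/x86/fpu/sincosf_poly.h:

  /* Sketch only: evaluate the sin and cos polynomials pairwise so the
     compiler can emit SSE2/FMA vector code for both lanes at once.  */
  typedef double v2df __attribute__ ((vector_size (16)));

  typedef struct
  {
    v2df c01, c23, c45, c67;   /* sin/cos coefficients, paired per lane.  */
  } sincos_t;

  static inline void
  sincosf_poly (double x, double x2, const sincos_t *p,
                float *sinp, float *cosp)
  {
    v2df x2v = { x2, x2 };
    v2df r = p->c67;
    r = r * x2v + p->c45;      /* Horner step on both lanes.  */
    r = r * x2v + p->c23;
    r = r * x2v + p->c01;
    *sinp = x * r[0];          /* Lane 0 approximates sin(x)/x.  */
    *cosp = r[1];              /* Lane 1 approximates cos(x).  */
  }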
* sysdeps/ieee754/flt-32/s_sincosf.h: Include <sincosf_poly.h>.
(sincos_t, sincosf_poly, sinf_poly): Moved to ...
* sysdeps/ieee754/flt-32/sincosf_poly.h: Here. New file.
* sysdeps/x86/fpu/s_sincosf_data.c: New file.
* sysdeps/x86/fpu/sincosf_poly.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: Just include
<sysdeps/ieee754/flt-32/s_sincosf.c>.
Introduce new pow symbol version that doesn't do SVID compatible error
handling. The standard errno and fp exception based error handling is
inline in the new code and does not have significant overhead.
The wrapper is disabled for sysdeps/ieee754/dbl-64 by using empty
w_pow.c and enabled for targets with their own pow implementation or
ifunc dispatch on __ieee754_pow by including math/w_pow.c.
The compatibility symbol version still uses the wrapper with SVID error
handling around the new code. There is no new symbol version nor
compatibility code on !LIBM_SVID_COMPAT targets (e.g. riscv).
On targets where previously powl was an alias of pow, it now points to
the compatibility symbol with the wrapper, because it still needs the
SVID compatible error handling. This affects NO_LONG_DOUBLE (e.g. arm)
and LONG_DOUBLE_COMPAT (e.g. alpha) targets as well.
The __pow_finite symbol is now an alias of pow. Both __pow_finite and
pow set errno and are thus not const functions.
The ia64 asm is changed so the compat and new symbol versions map to the
same address.
On x86_64 #include <math.h> was added before macro definitions that
may affect that header.
Tested with build-many-glibcs.py.
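As an illustration of the resulting symbol layout (a sketch of the intent,
not the literal w_pow.c sources), the arrangement is roughly:

  /* New code with inline errno/exception handling becomes the default
     pow@@GLIBC_2.29; the old SVID wrapper around it remains reachable as
     pow@GLIBC_2.0 for existing binaries.  Sketch only.  */
  #include <shlib-compat.h>

  extern double __pow (double, double);        /* new implementation */
  extern double __pow_compat (double, double); /* SVID wrapper around it */

  versioned_symbol (libm, __pow, pow, GLIBC_2_29);
  #if SHLIB_COMPAT (libm, GLIBC_2_0, GLIBC_2_29)
  compat_symbol (libm, __pow_compat, pow, GLIBC_2_0);
  #endif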
* math/Versions (GLIBC_2.29): Add pow.
* math/w_pow_compat.c (__pow_compat): Change to versioned compat
symbol.
* math/w_pow.c: New file.
* sysdeps/i386/fpu/w_pow.c: New file.
* sysdeps/ia64/fpu/e_pow.S: Add versioned symbols.
* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Rename to __pow
and add necessary aliases.
* sysdeps/ieee754/dbl-64/w_pow.c: New file.
* sysdeps/m68k/m680x0/fpu/w_pow.c: New file.
* sysdeps/mach/hurd/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Update.
* sysdeps/unix/sysv/linux/arm/libm.abilist: Update.
* sysdeps/unix/sysv/linux/hppa/libm.abilist: Update.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/ia64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Update.
* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/nios2/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sh/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Update.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__ieee754_pow): Rename to
__pow.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__ieee754_pow): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_pow.c (__ieee754_pow): Likewise.
* sysdeps/x86_64/fpu/multiarch/w_pow.c: New file.
Introduce new log symbol version that doesn't do SVID compatible error
handling. The standard errno and fp exception based error handling is
inline in the new code and does not have significant overhead.
The wrapper is disabled for sysdeps/ieee754/dbl-64 by using empty
w_log.c and enabled for targets with their own log implementation by
including math/w_log.c.
The compatibility symbol version still uses the wrapper with SVID error
handling around the new code. There is no new symbol version nor
compatibility code on !LIBM_SVID_COMPAT targets (e.g. riscv).
On targets where previously logl was an alias of log, it now points to
the compatibility symbol with the wrapper, because it still needs the
SVID compatible error handling. This affects NO_LONG_DOUBLE (e.g. arm)
and LONG_DOUBLE_COMPAT (e.g. alpha) targets as well.
The __log_finite symbol is now an alias of log. Both __log_finite and
log set errno and are thus not const functions.
The ia64 asm is changed so the compat and new symbol versions map to the
same address.
On x86_64 #include <math.h> was added before macro definitions that may
affect that header.
Tested with build-many-glibcs.py.
* math/Versions (GLIBC_2.29): Add log.
* math/w_log_compat.c (__log_compat): Change to versioned compat
symbol.
* math/w_log.c: New file.
* sysdeps/i386/fpu/w_log.c: New file.
* sysdeps/ia64/fpu/e_log.S: Update.
* sysdeps/ieee754/dbl-64/e_log.c (__ieee754_log): Rename to __log
and add necessary aliases.
* sysdeps/ieee754/dbl-64/w_log.c: New file.
* sysdeps/m68k/m680x0/fpu/w_log.c: New file.
* sysdeps/mach/hurd/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Update.
* sysdeps/unix/sysv/linux/arm/libm.abilist: Update.
* sysdeps/unix/sysv/linux/hppa/libm.abilist: Update.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/ia64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Update.
* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/nios2/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sh/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Update.
* sysdeps/x86_64/fpu/multiarch/e_log-avx.c (__ieee754_log): Rename to
__log.
* sysdeps/x86_64/fpu/multiarch/e_log-fma.c (__ieee754_log): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log-fma4.c (__ieee754_log): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log.c (__ieee754_log): Likewise.
* sysdeps/x86_64/fpu/multiarch/w_log.c: New file.
Introduce new exp and exp2 symbol versions that don't do SVID compatible
error handling. The standard errno and fp exception based error handling
is inline in the new code and does not have significant overhead.
The double precision wrappers are disabled for sysdeps/ieee754/dbl-64
by using empty w_exp.c and w_exp2.c files, the math/w_exp.c and
math/w_exp2.c files use the wrapper template and can be included by
targets that have their own exp and exp2 implementations or use ifunc
on the glibc internal __ieee754_exp symbol.
The compatibility symbol versions still use the wrapper with SVID error
handling around the new code. There is no new symbol version nor
compatibility code on !LIBM_SVID_COMPAT targets (e.g. riscv).
On targets where previously expl and exp2l were aliases of exp and exp2,
now they point to the compatibility symbols with the wrapper, because
they still need the SVID compatible error handling. This affects
NO_LONG_DOUBLE (e.g. arm) and LONG_DOUBLE_COMPAT (e.g. alpha) targets
as well.
The _finite symbols are now aliases of the standard symbols (they have
no performance advantage anymore). Both the standard symbols and the
_finite symbols set errno and are thus not const functions.
The ia64 asm is changed so the compat and new symbol versions map to the
same address.
On x86_64 #include <math.h> was added before macro definitions that may
affect that header (the new macro name is __exp instead of __ieee754_exp
which breaks some math.h macros).
Tested with build-many-glibcs.py.
* math/Versions (GLIBC_2.29): Add exp and exp2.
* math/w_exp2_compat.c (__exp2_compat): Change to versioned compat
symbol, handle NO_LONG_DOUBLE and LONG_DOUBLE_COMPAT explicitly.
* math/w_exp_compat.c (__exp_compat): Likewise.
* math/w_exp.c: New file.
* math/w_exp2.c: New file.
* sysdeps/i386/fpu/w_exp.c: New file.
* sysdeps/i386/fpu/w_exp2.c: New file.
* sysdeps/ia64/fpu/e_exp.S: Add versioned symbols.
* sysdeps/ia64/fpu/e_exp2.S: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Rename to __exp
and add necessary aliases.
* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Rename to __exp2
and add necessary aliases.
* sysdeps/ieee754/dbl-64/w_exp.c: New file.
* sysdeps/ieee754/dbl-64/w_exp2.c: New file.
* sysdeps/m68k/m680x0/fpu/w_exp.c: New file.
* sysdeps/m68k/m680x0/fpu/w_exp2.c: New file.
* sysdeps/mach/hurd/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Update.
* sysdeps/unix/sysv/linux/arm/libm.abilist: Update.
* sysdeps/unix/sysv/linux/hppa/libm.abilist: Update.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Update.
* sysdeps/unix/sysv/linux/ia64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Update.
* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Update.
* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/nios2/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sh/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Update.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Update.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Update.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__exp1): Remove.
(__ieee754_exp): Rename to __exp.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__exp1): Remove.
(__ieee754_exp): Rename to __exp.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__exp1): Remove.
(__ieee754_exp): Rename to __exp.
* sysdeps/x86_64/fpu/multiarch/e_exp.c (__ieee754_exp): Rename to
__exp.
* sysdeps/x86_64/fpu/multiarch/w_exp.c: New file.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __trunc functions to call the
corresponding trunc names instead, with asm redirection to __trunc
when the calls are not inlined.
Tested for x86_64, and with build-many-glibcs.py.
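The define-before-include pattern used by the files below looks roughly like
this (a sketch; the real files implement the rounding directly rather than
through the builtin, and the alias macro is the one already used elsewhere
in the tree):

  /* s_trunc.c style file: defining NO_MATH_REDIRECT first keeps the
     trunc about to be defined here from being redirected to __trunc.  */
  #define NO_MATH_REDIRECT
  #include <math.h>
  #include <libm-alias-double.h>

  double
  __trunc (double x)
  {
    return __builtin_trunc (x);   /* stand-in for the real code */
  }
  libm_alias_double (__trunc, trunc)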
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (trunc): Redirect
using MATH_REDIRECT.
* sysdeps/aarch64/fpu/s_trunc.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_truncf.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_trunc.c: Likewise.
* sysdeps/ieee754/float128/s_truncf128.c: Likewise.
* sysdeps/ieee754/dbl-64/s_trunc.c: Likewise.
* sysdeps/ieee754/flt-32/s_truncf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_truncl.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_trunc.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_truncf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
* sysdeps/riscv/rvf/s_truncf.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_trunc.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_truncf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_trunc.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_truncf.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_trunc_template.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_truncl.c: Likewise.
(ceil): Redirect to __ceil.
(floor): Redirect to __floor.
(trunc): Redirect to __trunc.
(__truncl): Call trunc instead of __trunc.
* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__trunc):
Remove macro.
[_ARCH_PWR5X] (__truncf): Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r): Use
trunc functions instead of __trunc variants.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
The algorithm is exp(y * log(x)), where log(x) is computed with about
1.3*2^-68 relative error (1.5*2^-68 without fma), returning the result
in two doubles, and the exp part uses the same algorithm (and lookup
tables) as exp, but takes the input as two doubles and a sign (to handle
negative bases with odd integer exponent). The __exp1 internal symbol
is no longer necessary.
There is a separate code path when fma is not available, but the worst case
error is about 0.54 ULP in both cases. The lookup table and constants for
log are 4168 bytes. The .rodata+.text size is decreased by 37908 bytes on
aarch64. The non-nearest rounding error is less than 1 ULP.
Improvements on Cortex-A72 compared to current glibc master:
pow throughput: 2.40x in [0.01 11.1]x[0.01 11.1]
pow latency:    1.84x in [0.01 11.1]x[0.01 11.1]
Tested on
aarch64-linux-gnu (defined __FP_FAST_FMA, TOINT_INTRINSICS) and
arm-linux-gnueabihf (!defined __FP_FAST_FMA, !TOINT_INTRINSICS) and
x86_64-linux-gnu (!defined __FP_FAST_FMA, !TOINT_INTRINSICS) and
powerpc64le-linux-gnu (defined __FP_FAST_FMA, !TOINT_INTRINSICS) targets.
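A rough, self-contained sketch of the structure described above (the helper
names are made up; the real code shares lookup tables with exp and computes
an exact low part for log rather than the zero stand-in used here):

  #include <math.h>
  #include <stdint.h>

  /* Stand-in for the table-driven log that returns hi + lo.  */
  static double
  log_inline (double x, double *lo)
  {
    *lo = 0.0;                     /* real code keeps a correction term */
    return log (x);
  }

  /* Stand-in for the exp stage taking two doubles and a sign.  */
  static double
  exp_inline (double hi, double lo, uint32_t sign)
  {
    double r = exp (hi + lo);
    return sign ? -r : r;          /* negative base, odd integer exponent */
  }

  static double
  pow_core (double x, double y, uint32_t sign)
  {
    double lo, hi = log_inline (x, &lo);
    double ehi = y * hi;
    /* With FMA, fma (y, hi, -ehi) recovers the rounding error of y*hi
       exactly, keeping the product in two doubles for the exp stage.  */
    double elo = y * lo + fma (y, hi, -ehi);
    return exp_inline (ehi, elo, sign);
  }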
* NEWS: Mention pow improvements.
* math/Makefile (type-double-routines): Add e_pow_log_data.
* sysdeps/generic/math_private.h (__exp1): Remove.
* sysdeps/i386/fpu/e_pow_log_data.c: New file.
* sysdeps/ia64/fpu/e_pow_log_data.c: New file.
* sysdeps/ieee754/dbl-64/Makefile (CFLAGS-e_pow.c): Allow fma
contraction.
* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove.
(exp_inline): Remove.
(__ieee754_exp): Only single double input is handled.
* sysdeps/ieee754/dbl-64/e_pow.c: Rewrite.
* sysdeps/ieee754/dbl-64/e_pow_log_data.c: New file.
* sysdeps/ieee754/dbl-64/math_config.h (issignaling_inline): Define.
(__pow_log_data): Define.
* sysdeps/ieee754/dbl-64/upow.h: Remove.
* sysdeps/ieee754/dbl-64/upow.tbl: Remove.
* sysdeps/m68k/m680x0/fpu/e_pow_log_data.c: New file.
* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_pow-fma.c): Allow fma
contraction.
(CFLAGS-e_pow-fma4.c): Likewise.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __ceil functions to call the
corresponding ceil names instead, with asm redirection to __ceil when
the calls are not inlined.
Tested for x86_64, and with build-many-glibcs.py.
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (ceil): Redirect
using MATH_REDIRECT.
* sysdeps/aarch64/fpu/s_ceil.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_ceilf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_ceil.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_ceil.c: Likewise.
* sysdeps/ieee754/float128/s_ceilf128.c: Likewise.
* sysdeps/ieee754/flt-32/s_ceilf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_ceill.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_ceill.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_ceil_template.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceil.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceilf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_ceil.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_ceilf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_ceil.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Likewise.
* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__ceil):
Remove macro.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use ceil
functions instead of __ceil variants.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise.
* sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __rint functions to call the
corresponding rint names instead, with asm redirection to __rint when
the calls are not inlined. The x86_64 math_private.h is removed as no
longer useful after this patch.
This patch is relative to a tree with my floor patch
<https://sourceware.org/ml/libc-alpha/2018-09/msg00148.html> applied,
and much the same considerations arise regarding possibly replacing an
IFUNC call with a direct inline expansion.
Tested for x86_64, and with build-many-glibcs.py.
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (rint): Redirect
using MATH_REDIRECT.
* sysdeps/aarch64/fpu/s_rint.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_rintf.c: Likewise.
* sysdeps/alpha/fpu/s_rint.c: Likewise.
* sysdeps/alpha/fpu/s_rintf.c: Likewise.
* sysdeps/i386/fpu/s_rintl.c: Likewise.
* sysdeps/ieee754/dbl-64/s_rint.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_rint.c: Likewise.
* sysdeps/ieee754/float128/s_rintf128.c: Likewise.
* sysdeps/ieee754/flt-32/s_rintf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_rintl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
* sysdeps/m68k/coldfire/fpu/s_rint.c: Likewise.
* sysdeps/m68k/coldfire/fpu/s_rintf.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_rint.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_rintf.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_rintl.c: Likewise.
* sysdeps/powerpc/fpu/s_rint.c: Likewise.
* sysdeps/powerpc/fpu/s_rintf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_rint.c: Likewise.
* sysdeps/riscv/rvf/s_rintf.c: Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rint.c: Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rintf.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_rint.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_rintf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rint.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rintf.c: Likewise.
* sysdeps/x86_64/fpu/math_private.h: Remove file.
* math/e_scalb.c (invalid_fn): Use rint functions instead of
__rint variants.
* math/e_scalbf.c (invalid_fn): Likewise.
* math/e_scalbl.c (invalid_fn): Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
Likewise.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
Likewise.
* sysdeps/ieee754/k_standard.c (__kernel_standard): Likewise.
* sysdeps/ieee754/k_standardl.c (__kernel_standard_l): Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_llrint.c (__llrint): Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_llrintf.c (__llrintf): Likewise.
Similar to the changes that were made to call sqrt functions directly
in glibc, instead of __ieee754_sqrt variants, so that the compiler
could inline them automatically without needing special inline
definitions in lots of math_private.h headers, this patch makes libm
code call floor functions directly instead of __floor variants,
removing the inlines / macros for x86_64 (SSE4.1) and powerpc
(POWER5).
The redirection used to ensure that __ieee754_sqrt does still get
called when the compiler doesn't inline a built-in function expansion
is refactored so it can be applied to other functions; the refactoring
is arranged so it's not limited to unary functions either (it would be
reasonable to use this mechanism for copysign - removing the inline in
math_private_calls.h but also eliminating unnecessary local PLT entry
use in the cases (powerpc soft-float and e500v1, for IBM long double)
where copysign calls don't get inlined).
The point of this change is that more architectures can get floor
calls inlined where they weren't previously (AArch64, for example),
without needing special inline definitions in their math_private.h,
and existing such definitions in math_private.h headers can be
removed.
Note that it's possible that in some cases an inline may be used where
an IFUNC call was previously used - this is the case on x86_64, for
example. I think the direct calls to floor are still appropriate; if
there's any significant performance cost from inline SSE2 floor
instead of an IFUNC call ending up with SSE4.1 floor, that indicates
that either the function should be doing something else that's faster
than using floor at all, or it should itself have IFUNC variants, or
that the compiler choice of inlining for generic tuning should change
to allow for the possibility that, by not inlining, an SSE4.1 IFUNC
might be called at runtime - but not that glibc should avoid calling
floor internally. (After all, all the same considerations would apply
to any user program calling floor, where it might either be inlined or
left as an out-of-line call allowing for a possible IFUNC.)
Tested for x86_64, and with build-many-glibcs.py.
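For a single function the redirection amounts to something like the
declarations below (the MATH_REDIRECT macros added to include/math.h
generate this per floating-point type); inside libm, calls to floor are then
either expanded inline by the compiler or end up calling __floor directly:

  /* Sketch of the effect of MATH_REDIRECT for floor.  */
  extern double floor (double) asm ("__floor");
  extern float floorf (float) asm ("__floorf");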
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT):
New macro.
[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
&& !NO_MATH_REDIRECT] (MATH_REDIRECT_LDBL): Likewise.
[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
&& !NO_MATH_REDIRECT] (MATH_REDIRECT_F128): Likewise.
[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
&& !NO_MATH_REDIRECT] (MATH_REDIRECT_UNARY_ARGS): Likewise.
[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
&& !NO_MATH_REDIRECT] (sqrt): Redirect using MATH_REDIRECT.
[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
&& !NO_MATH_REDIRECT] (floor): Likewise.
* sysdeps/aarch64/fpu/s_floor.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_floorf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_floor.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_floor.c: Likewise.
* sysdeps/ieee754/float128/s_floorf128.c: Likewise.
* sysdeps/ieee754/flt-32/s_floorf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_floorl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_floorl.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_floor_template.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floor.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floorf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
* sysdeps/riscv/rvf/s_floorf.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_floor.c: Likewise.
* sysdeps/sparc/sparc64/fpu/multiarch/s_floorf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floor.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floorf.c: Likewise.
* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__floor):
Remove macro.
[_ARCH_PWR5X] (__floorf): Likewise.
* sysdeps/x86_64/fpu/math_private.h [__SSE4_1__] (__floor): Remove
inline function.
[__SSE4_1__] (__floorf): Likewise.
* math/w_lgamma_main.c (LGFUNC (__lgamma)): Use floor functions
instead of __floor variants.
* math/w_lgamma_r_compat.c (__lgamma_r): Likewise.
* math/w_lgammaf_main.c (LGFUNC (__lgammaf)): Likewise.
* math/w_lgammaf_r_compat.c (__lgammaf_r): Likewise.
* math/w_lgammal_main.c (LGFUNC (__lgammal)): Likewise.
* math/w_lgammal_r_compat.c (__lgammal_r): Likewise.
* math/w_tgamma_compat.c (__tgamma): Likewise.
* math/w_tgamma_template.c (M_DECL_FUNC (__tgamma)): Likewise.
* math/w_tgammaf_compat.c (__tgammaf): Likewise.
* math/w_tgammal_compat.c (__tgammal): Likewise.
* sysdeps/ieee754/dbl-64/e_lgamma_r.c (sin_pi): Likewise.
* sysdeps/ieee754/dbl-64/k_rem_pio2.c (__kernel_rem_pio2):
Likewise.
* sysdeps/ieee754/dbl-64/lgamma_neg.c (__lgamma_neg): Likewise.
* sysdeps/ieee754/flt-32/e_lgammaf_r.c (sin_pif): Likewise.
* sysdeps/ieee754/flt-32/lgamma_negf.c (__lgamma_negf): Likewise.
* sysdeps/ieee754/ldbl-128/e_lgammal_r.c (__ieee754_lgammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise.
* sysdeps/ieee754/ldbl-128/lgamma_negl.c (__lgamma_negl):
Likewise.
* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_lgammal_r.c (__ieee754_lgammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c (__lgamma_negl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_expm1l.c (__expm1l): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise.
* sysdeps/ieee754/ldbl-96/e_lgammal_r.c (sin_pi): Likewise.
* sysdeps/ieee754/ldbl-96/lgamma_negl.c (__lgamma_negl): Likewise.
* sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise.
* sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise.
The second patch improves performance of sinf and cosf using the same
algorithms and polynomials. The returned values are identical to sincosf
for the same input. ULP definitions for AArch64 and x64 are updated.
sinf/cosf throughput gains on Cortex-A72:
* |x| < 0x1p-12 : 1.2x
* |x| < M_PI_4 : 1.8x
* |x| < 2 * M_PI: 1.7x
* |x| < 120.0 : 2.3x
* |x| < Inf : 3.0x
* NEWS: Mention sinf, cosf, sincosf.
* sysdeps/aarch64/libm-test-ulps: Update ULP for sinf, cosf, sincosf.
* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sinf and cosf.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: Add definitions of
constants rather than including generic sincosf.h.
* sysdeps/x86_64/fpu/s_sincosf_data.c: Remove.
* sysdeps/ieee754/flt-32/s_cosf.c (cosf): Rewrite.
* sysdeps/ieee754/flt-32/s_sincosf.h (reduced_sin): Remove.
(reduced_cos): Remove.
(sinf_poly): New function.
* sysdeps/ieee754/flt-32/s_sinf.c (sinf): Rewrite.
There is no need to include <init-arch.h> in assembly code since all
x86 IFUNC selector functions are written in C. Tested on i686 and
x86-64. There is no code change in libc.so, ld.so and libmvec.so.
* sysdeps/i386/i686/multiarch/bzero-ia32.S: Don't include
<init-arch.h>.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core-avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core-avx2.S: Likewise.
* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Likewise.
Remove the now unused mplog and mpexp files.
* math/Makefile: Remove mpexp.c and mplog.c
* sysdeps/i386/fpu/mpexp.c: Delete file.
* sysdeps/i386/fpu/mplog.c: Likewise.
* sysdeps/ia64/fpu/mpexp.c: Likewise.
* sysdeps/ia64/fpu/mplog.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c: Remove mention of mpexp and mplog.
* sysdeps/ieee754/dbl-64/mpa.h (__pow_mp): Remove unused function.
* sysdeps/ieee754/dbl-64/mpexp.c: Delete file.
* sysdeps/ieee754/dbl-64/mplog.c: Likewise.
* sysdeps/m68k/m680x0/fpu/mpexp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/mplog.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove mpexp* and mplog*.
* sysdeps/x86_64/fpu/multiarch/e_log-avx.c: Remove unused defines.
* sysdeps/x86_64/fpu/multiarch/e_log-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log-fma4.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpexp-avx.c: Delete file.
* sysdeps/x86_64/fpu/multiarch/mpexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpexp-fma4.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mplog-avx.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mplog-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mplog-fma4.c: Likewise.
Remove the __slowexp code, so exp is no longer correctly rounded. The
result is computed to about 70 bits precision so the worst case ulp
error is about 0.500007 in nearest rounding mode.
* manual/probes.texi: Remove slowexp probes.
* math/Makefile: Remove slowexp.
* sysdeps/generic/math_private.h (__slowexp): Remove.
* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Remove __slowexp and
document error bounds.
* sysdeps/i386/fpu/slowexp.c: Remove.
* sysdeps/ia64/fpu/slowexp.c: Remove.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove.
* sysdeps/ieee754/dbl-64/uexp.h (err_0): Remove.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Remove.
* sysdeps/powerpc/power4/fpu/Makefile (CPPFLAGS-slowexp.c): Remove.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowexp-fma.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Remove.
Remove the slow paths from pow. Like several other double precision math
functions, pow is exactly rounded. This is not required from math functions
and causes major overheads as it requires multiple fallbacks using higher
precision arithmetic if a result is close to 0.5ULP. Ridiculous slowdowns
of up to 100000x have been reported when the highest precision path triggers.
All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1).
The worst case error is ~0.506ULP. A simple test over a few hundred million
values shows pow is 10% faster on average. This fixes BZ #13932.
[BZ #13932]
* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
* benchtests/pow-inputs: Update comment for slow path cases.
* manual/probes.texi (slowpow_p10): Delete removed probe.
(slowpow_p10): Likewise.
* math/Makefile: Remove halfulp.c and slowpow.c.
* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/generic/math_private.h (__exp1): Remove error argument.
(__halfulp): Remove.
(__slowpow): Remove.
* sysdeps/i386/fpu/halfulp.c: Delete file.
* sysdeps/i386/fpu/slowpow.c: Likewise.
* sysdeps/ia64/fpu/halfulp.c: Likewise.
* sysdeps/ia64/fpu/slowpow.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
improve comments and add error analysis.
* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
(power1): Remove function.
(log1): Remove error argument, add error analysis.
(my_log2): Remove function.
* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
Since the x86-64 assembly version of sincosf is highly optimized with
vector instructions, there isn't much room for improvement. However
s_sincosf.c written in C with vector math and intrinsics can be
optimized by GCC with FMA.
On Skylake, bench-sincosf reports performance improvement:
Assembly FMA improvement
max 104.042 101.008 3%
min 9.426 8.586 10%
mean 20.6209 18.2238 13%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add s_sincosf-sse2 and s_sincosf-fma.
(CFLAGS-s_sincosf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-sse2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sincosf.c: Likewise.
* sysdeps/x86_64/fpu/s_sincosf.S: Don't add alias if
__sincosf is defined.
* All files with FSF copyright notices: Update copyright dates
using scripts/update-copyrights.
* locale/programs/charmap-kw.h: Regenerated.
* locale/programs/locfile-kw.h: Likewise.
Revert:
2017-12-19 Joseph Myers <joseph@codesourcery.com>
* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2017-12-19 Patrick McGehearty <patrick.mcgehearty@oracle.com>
* sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
<errno.h>. Include "eexp.tbl".
(half): New constant.
(one): Likewise.
(__ieee754_exp): Rewrite.
(__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/eexp.tbl: New file.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
* sysdeps/i386/fpu/slowexp.c: Likewise.
* sysdeps/ia64/fpu/slowexp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
* sysdeps/generic/math_private.h (__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
comment.
* sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
(CPPFLAGS-slowexp.c): Remove variable.
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
(CFLAGS-slowexp-fma.c): Remove variable.
(CFLAGS-slowexp-fma4.c): Likewise.
(CFLAGS-slowexp-avx.c): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
define as macro.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
* math/Makefile (type-double-routines): Remove slowexp.
* manual/probes.texi (slowexp_p6): Remove.
(slowexp_p32): Likewise.
These changes will be active for all platforms that don't provide
their own exp() routines. They will also be active for ieee754
versions of ccos, ccosh, cosh, csin, csinh, sinh, exp10, gamma, and
erf.
Typical performance gains are around 5x when measured on Sparc s7
for common values between exp(1) and exp(40).
Using the glibc perf tests on sparc and x86:
           sparc (nsec)       x86 (nsec)
            old     new        old    new
  max     17629     395       5173    144
  min       399      54         15     13
  mean     5317     200       1349     23
The extreme max times for the old (ieee754) exp are due to the
multiprecision computation in the old algorithm when the true value is
very near 0.5 ulp away from a value representable in double
precision. The new algorithm does not take special measures for those
cases. The current glibc exp perf tests overrepresent those values.
Informal testing suggests approximately one in 200 cases might
invoke the high cost computation. The performance advantage of the new
algorithm for other values is still large but not as large as indicated
by the chart above.
Glibc correctness tests for exp() and expf() were run. Within the
test suite 3 input values were found to cause 1 bit differences (ulp)
when "FE_TONEAREST" rounding mode is set. No differences in exp() were
seen for the tested values for the other rounding modes.
Typical example:
exp(-0x1.760cd2p+0) (-1.46113312244415283203125)
new code: 2.31973271630014299393707e-01 0x1.db14cd799387ap-3
old code: 2.31973271630014271638132e-01 0x1.db14cd7993879p-3
exp = 2.31973271630014285508337 (high precision)
Old delta: off by 0.49 ulp
New delta: off by 0.51 ulp
In addition, because ieee754_exp() is used by other routines, cexp()
showed test results with very small imaginary input values where the
imaginary portion of the result was off by 3 ulp when in upward
rounding mode, but not in the other rounding modes. For x86, tgamma
showed a few values where the ulp increased to 6 (max ulp for tgamma
is 5). Sparc tgamma did not show these failures. I presume the tgamma
differences are due to compiler optimization differences within the
gamma function. The gamma function is known to be difficult to compute
accurately.
* sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
<errno.h>. Include "eexp.tbl".
(half): New constant.
(one): Likewise.
(__ieee754_exp): Rewrite.
(__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/eexp.tbl: New file.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
* sysdeps/i386/fpu/slowexp.c: Likewise.
* sysdeps/ia64/fpu/slowexp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
* sysdeps/generic/math_private.h (__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
comment.
* sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
(CPPFLAGS-slowexp.c): Remove variable.
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
(CFLAGS-slowexp-fma.c): Remove variable.
(CFLAGS-slowexp-fma4.c): Likewise.
(CFLAGS-slowexp-avx.c): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
define as macro.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
* math/Makefile (type-double-routines): Remove slowexp.
* manual/probes.texi (slowexp_p6): Remove.
(slowexp_p32): Likewise.
On Skylake, bench-cosf reports performance improvement:
Before After Improvement
max 135.362 94.552 43%
min 8.532 7.688 11%
mean 17.1446 11.8128 45%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add s_cosf-sse2 and s_cosf-fma.
(CFLAGS-s_cosf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/s_cosf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/s_cosf-sse2.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_cosf.c: Likewise.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
On Skylake, bench-sinf reports performance improvement:
Before After Improvement
max 153.996 100.094 54%
min 8.546 6.852 25%
mean 18.1223 11.802 54%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add s_sinf-sse2 and s_sinf-fma.
(CFLAGS-s_sinf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/s_sinf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/s_sinf-sse2.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sinf.c: Likewise.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes x86_64 libm function implementations use
libm_alias_float to define function aliases, or libm_alias_float_other
where the main name is defined with versioned_symbol.
Tested with the glibc testsuite for x86_64, and tested with
build-many-glibcs.py for all its x86_64 configurations that installed
stripped shared libraries are unchanged by the patch.
* sysdeps/x86_64/fpu/multiarch/e_exp2f.c: Include
<libm-alias-float.h>.
(exp2f): Define using libm_alias_float, or libm_alias_float_other
if [SHARED].
* sysdeps/x86_64/fpu/multiarch/e_expf.c: Include
<libm-alias-float.h>.
(expf): Define using libm_alias_float, or libm_alias_float_other
if [SHARED].
* sysdeps/x86_64/fpu/multiarch/e_log2f.c: Include
<libm-alias-float.h>.
(log2f): Define using libm_alias_float, or libm_alias_float_other
if [SHARED].
* sysdeps/x86_64/fpu/multiarch/e_logf.c: Include
<libm-alias-float.h>.
(logf): Define using libm_alias_float, or libm_alias_float_other
if [SHARED].
* sysdeps/x86_64/fpu/multiarch/e_powf.c: Include
<libm-alias-float.h>.
(powf): Define using libm_alias_float, or libm_alias_float_other
if [SHARED].
* sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Include
<libm-alias-float.h>.
(ceilf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/multiarch/s_floorf.c: Include
<libm-alias-float.h>.
(floorf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/multiarch/s_fmaf.c: Include
<libm-alias-float.h>.
(fmaf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.c: Include
<libm-alias-float.h>.
(nearbyintf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/multiarch/s_rintf.c: Include
<libm-alias-float.h>.
(rintf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/multiarch/s_truncf.c: Include
<libm-alias-float.h>.
(truncf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_copysignf.S: Include <libm-alias-float.h>.
(copysignf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_cosf.S: Include <libm-alias-float.h>.
(cosf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_fabsf.c: Include <libm-alias-float.h>.
(fabsf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_fmaxf.S: Include <libm-alias-float.h>.
(fmaxf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_fminf.S: Include <libm-alias-float.h>.
(fminf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_llrintf.S: Include <libm-alias-float.h>.
(llrintf): Define using libm_alias_float.
[!__ILP32__] (lrintf): Likewise.
* sysdeps/x86_64/fpu/s_sincosf.S: Include <libm-alias-float.h>.
(sincosf): Define using libm_alias_float.
* sysdeps/x86_64/fpu/s_sinf.S: Include <libm-alias-float.h>.
(sinf): Define using libm_alias_float.
* sysdeps/x86_64/x32/fpu/s_lrintf.S: Include <libm-alias-float.h>.
(lrintf): Define using libm_alias_float.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes x86_64 libm function implementations use
libm_alias_double to define function aliases.
Tested with the glibc testsuite for x86_64, and tested with
build-many-glibcs.py for all its x86_64 configurations that installed
stripped shared libraries are unchanged by the patch.
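The per-file change follows one pattern, sketched here with atan (the exact
alias set the macro expands to depends on the long double configuration):

  #include <libm-alias-double.h>

  extern double __atan (double);

  /* Defines atan (and any configuration-specific aliases) from __atan.  */
  libm_alias_double (__atan, atan)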
* sysdeps/x86_64/fpu/multiarch/s_atan.c: Include
<libm-alias-double.h>.
(atan): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_ceil.c: Include
<libm-alias-double.h>.
(ceil): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_floor.c: Include
<libm-alias-double.h>.
(floor): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_fma.c: Include
<libm-alias-double.h>.
(fma): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_nearbyint.c: Include
<libm-alias-double.h>.
(nearbyint): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_rint.c: Include
<libm-alias-double.h>.
(rint): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_sin.c: Include
<libm-alias-double.h>.
(sin): Define using libm_alias_double.
(cos): Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan.c: Include
<libm-alias-double.h>.
(tan): Define using libm_alias_double.
* sysdeps/x86_64/fpu/multiarch/s_trunc.c: Include
<libm-alias-double.h>.
(trunc): Define using libm_alias_double.
* sysdeps/x86_64/fpu/s_copysign.S: Include <libm-alias-double.h>.
(copysign): Define using libm_alias_double.
* sysdeps/x86_64/fpu/s_fabs.c: Include <libm-alias-double.h>.
(fabs): Define using libm_alias_double.
* sysdeps/x86_64/fpu/s_fmax.S: Include <libm-alias-double.h>.
(fmax): Define using libm_alias_double.
* sysdeps/x86_64/fpu/s_fmin.S: Include <libm-alias-double.h>.
(fmin): Define using libm_alias_double.
* sysdeps/x86_64/fpu/s_llrint.S: Include <libm-alias-double.h>.
(llrint): Define using libm_alias_double.
[!__ILP32__] (lrint): Likewise.
* sysdeps/x86_64/x32/fpu/s_lrint.S: Include <libm-alias-double.h>.
(lrint): Define using libm_alias_double.
* include/alloc_buffer.h: Replace "if if " with "if " in
comments.
* sysdeps/mips/memcpy.S: Likewise.
* sysdeps/mips/memset.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S:
Likewise.
For workload-spec2017.wrf, on Skylake, it improves performance by:
Before After Improvement
reciprocal-throughput 35.4713 27.3842 29%
latency 82.4537 66.3175 24%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_powf-fma.
(CFLAGS-e_powf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_powf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_powf.c: Likewise.
For workload-spec2017.wrf, on Skylake, it improves performance by:
Before After Improvement
reciprocal-throughput 16.5937 14.0789 17%
latency 41.7755 35.3586 18%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_log2f-fma.
(CFLAGS-e_log2f-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_log2f-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_log2f.c: Likewise.
For workload-spec2017.wrf, on Skylake, it improves performance by:
Before After Improvement
reciprocal-throughput 16.1534 13.8874 16%
latency 41.9642 34.3072 22%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_logf-fma.
(CFLAGS-e_logf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_logf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_logf.c: Likewise.
For workload-spec2017.wrf, on Skylake, it improves performance by:
Before After Improvement
reciprocal-throughput 13.0291 11.2225 16%
latency 44.5154 37.5766 18%
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_exp2f-fma.
(CFLAGS-e_exp2f-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_exp2f-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_exp2f.c: Likewise.
This patch replaces x86-64 assembly versions of e_expf with generic
e_expf.c. For workload-spec2017.wrf, on Nehalem, it improves
performance by:
Before After Improvement
reciprocal-throughput 36.039 20.7749 73%
latency 58.8096 40.8715 43%
On Skylake, it improves
Before After Improvement
reciprocal-throughput 18.4436 11.1693 65%
latency 47.5162 37.5411 26%
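The dispatch file described in the ChangeLog below roughly takes this shape
(a sketch; the names come from the ChangeLog entries, the resolver macro is
an assumption, and the fallback build of the generic code is omitted):

  /* Sketch of the multiarch e_expf.c: redirect the public name, then bind
     it through an ifunc resolver supplied by ifunc-fma.h.  */
  extern float __redirect_expf (float);

  #define SYMBOL_NAME expf
  #include "ifunc-fma.h"

  libc_ifunc_redirected (__redirect_expf, __expf, IFUNC_SELECTOR ());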
* sysdeps/x86_64/fpu/e_expf.S: Removed.
* sysdeps/x86_64/fpu/multiarch/e_expf-fma.S: Likewise.
* sysdeps/x86_64/fpu/w_expf.c: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Updated for generic
e_expf.c.
* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_expf-fma.c):
New.
* sysdeps/x86_64/fpu/multiarch/e_expf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_expf.c (__redirect_ieee754_expf):
Renamed to ...
(__redirect_expf): This.
(SYMBOL_NAME): Changed to expf.
(__ieee754_expf): Renamed to ...
(__expf): This.
(__GI___expf): New.
(__ieee754_expf): Add strong_alias.
(__expf_finite): Likewise.
(__expf): New.
Include <sysdeps/ieee754/flt-32/e_expf.c>.
This patch converts the dbl-64 implementations of atan and tan into
weak aliases of __atan and __tan, in preparation for making them use
libm_alias_double. Consequent changes are made to the x86_64
multiarch versions wrapping round them (with the dbl-64 functions,
like other such functions, being made not to define their aliases at
all if __atan or __tan are defined as macros by an including file).
Tested for x86_64, and with build-many-glibcs.py.
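Sketched in abridged form, the two halves of the change fit together like
this (assuming the usual weak_alias/strong_alias helpers; file contents are
heavily abridged):

  /* End of sysdeps/ieee754/dbl-64/s_atan.c: emit the public aliases only
     when no including file has redefined __atan.  */
  #ifndef __atan
  weak_alias (__atan, atan)
  # ifdef NO_LONG_DOUBLE
  strong_alias (__atan, __atanl)
  weak_alias (__atanl, atanl)
  # endif
  #endif

  /* sysdeps/x86_64/fpu/multiarch/s_atan-fma.c: build the same source under
     a variant name, so no stray atan alias is produced.  */
  #define __atan __atan_fma
  #include <sysdeps/ieee754/dbl-64/s_atan.c>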
* sysdeps/ieee754/dbl-64/s_atan.c (atan): Rename to __atan and
define as weak alias of __atan. Do not define any aliases if
[__atan].
[NO_LONG_DOUBLE] (__atanl): Define as strong alias of __atan.
[NO_LONG_DOUBLE] (atanl): Define as weak alias of __atanl.
* sysdeps/ieee754/dbl-64/s_tan.c (tan): Rename to __tan and define
as weak alias of __tan. Do not define any aliases if [__tan].
[NO_LONG_DOUBLE] (__tanl): Define as strong alias of __tan.
[NO_LONG_DOUBLE] (tanl): Define as weak alias of __tanl.
* sysdeps/x86_64/fpu/multiarch/s_atan-avx.c (atan): Rename to
__atan.
* sysdeps/x86_64/fpu/multiarch/s_atan-fma.c (atan): Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan-fma4.c (atan): Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan.c (atan): Rename to __atan
and define as weak alias of __atan.
* sysdeps/x86_64/fpu/multiarch/s_tan-avx.c (tan): Rename to
__tan.
* sysdeps/x86_64/fpu/multiarch/s_tan-fma.c (tan): Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan-fma4.c (tan): Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan.c (tan): Rename to __tan and
define as weak alias of __tan.
This patch adds SSE4.1 versions of trunc and truncf, using the roundsd
/ roundss instructions, similar to the versions of ceil, floor, rint
and nearbyint functions we already have. In my testing with the glibc
benchtests these are about 30% faster than the C versions for double,
20% faster for float.
Tested for x86_64.
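Expressed with intrinsics rather than the .S file's roundsd instruction, the
double version is essentially the following (an illustration compiled with
-msse4.1; the actual file is assembly):

  #include <smmintrin.h>

  double
  trunc_sse41 (double x)
  {
    __m128d v = _mm_set_sd (x);
    /* roundsd with the truncate-toward-zero immediate.  */
    v = _mm_round_sd (v, v, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
    return _mm_cvtsd_f64 (v);
  }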
[BZ #20142]
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add s_trunc-c, s_truncf-c, s_trunc-sse4_1 and s_truncf-sse4_1.
* sysdeps/x86_64/fpu/multiarch/s_trunc-c.c: New file.
* sysdeps/x86_64/fpu/multiarch/s_trunc-sse4_1.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_trunc.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_truncf-c.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_truncf-sse4_1.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_truncf.c: Likewise.
AVX512 functions in mathvec are used on machines with AVX512. An AVX2
wrapper is also provided and it can be used when the AVX512 version
isn't profitable. MathVec_Prefer_No_AVX512 is added to cpu-features.
If glibc.tune.hwcaps=MathVec_Prefer_No_AVX512 is set in GLIBC_TUNABLES
environment variable, the AVX2 wrapper will be used.
Tested on x86-64 machines with and without AVX512. Also verified
glibc.tune.hwcaps=MathVec_Prefer_No_AVX512 on AVX512 machine.
[BZ #21967]
* sysdeps/x86/cpu-features.h (bit_arch_MathVec_Prefer_No_AVX512):
New.
(index_arch_MathVec_Prefer_No_AVX512): Likewise.
* sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
Handle MathVec_Prefer_No_AVX512.
* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h
(IFUNC_SELECTOR): Return AVX2 version if MathVec_Prefer_No_AVX512
is set.
__redirect_ieee754_expf has type float, not double.
* sysdeps/x86_64/fpu/multiarch/e_expf.c (__redirect_ieee754_expf):
Change double to float.
Since binutils 2.25 or later is required to build glibc, we can replace
AVX512F .byte sequences with AVX512F instructions.
Tested on x86-64 and x32. There are no code differences in libmvec.so
and libmvec.a.
* sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Replace AVX512F
.byte sequences with AVX512F instructions.
* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
Likewise.
Since the AVX2 version of mathvec functions uses FMA, it can only be
used when FMA is usable.
[BZ #21966]
* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h
(IFUNC_SELECTOR): Don't use the AVX2 version if FMA isn't
usable.
FMA optimized e_expf improves performance by more than 50% on Skylake.
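A plausible shape for the new ifunc-fma.h selector (the feature-bit and
helper macro names are assumptions based on the glibc of this era, not a
quotation of the file):

  #include <init-arch.h>

  extern __typeof (REDIRECT_NAME) OPTIMIZE (fma) attribute_hidden;
  extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;

  static inline void *
  IFUNC_SELECTOR (void)
  {
    const struct cpu_features *cpu_features = __get_cpu_features ();

    /* Use the FMA build only when both FMA and AVX2 are usable.  */
    if (CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable)
        && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable))
      return OPTIMIZE (fma);

    return OPTIMIZE (sse2);
  }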
[BZ #21912]
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_expf-fma.
* sysdeps/x86_64/fpu/multiarch/e_expf-fma.S: New file.
* sysdeps/x86_64/fpu/multiarch/e_expf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/ifunc-fma.h: Likewise.
This patch adds multiarch functions optimized with -mfma -mavx2 to libm.
e_pow-fma.c is compiled with $(config-cflags-nofma) due to PR 19003.
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_exp-fma, e_log-fma, e_pow-fma, s_atan-fma, e_asin-fma,
e_atan2-fma, s_sin-fma, s_tan-fma, mplog-fma, mpa-fma,
slowexp-fma, slowpow-fma, sincos32-fma, doasin-fma, dosincos-fma,
halfulp-fma, mpexp-fma, mpatan2-fma, mpatan-fma, mpsqrt-fma,
and mptan-fma.
(CFLAGS-doasin-fma.c): New.
(CFLAGS-dosincos-fma.c): Likewise.
(CFLAGS-e_asin-fma.c): Likewise.
(CFLAGS-e_atan2-fma.c): Likewise.
(CFLAGS-e_exp-fma.c): Likewise.
(CFLAGS-e_log-fma.c): Likewise.
(CFLAGS-e_pow-fma.c): Likewise.
(CFLAGS-halfulp-fma.c): Likewise.
(CFLAGS-mpa-fma.c): Likewise.
(CFLAGS-mpatan-fma.c): Likewise.
(CFLAGS-mpatan2-fma.c): Likewise.
(CFLAGS-mpexp-fma.c): Likewise.
(CFLAGS-mplog-fma.c): Likewise.
(CFLAGS-mpsqrt-fma.c): Likewise.
(CFLAGS-mptan-fma.c): Likewise.
(CFLAGS-s_atan-fma.c): Likewise.
(CFLAGS-sincos32-fma.c): Likewise.
(CFLAGS-slowexp-fma.c): Likewise.
(CFLAGS-slowpow-fma.c): Likewise.
(CFLAGS-s_sin-fma.c): Likewise.
(CFLAGS-s_tan-fma.c): Likewise.
* sysdeps/x86_64/fpu/multiarch/doasin-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/dosincos-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_asin-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_atan2-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpa-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpatan-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpatan2-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mplog-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mpsqrt-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/mptan-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sin-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/sincos32-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_asin.c: Rewrite.
* sysdeps/x86_64/fpu/multiarch/e_atan2.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_pow.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sin.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines):
Add svml_d_cos2_core-sse2, svml_d_cos4_core-sse,
svml_d_cos8_core-avx2, svml_d_exp2_core-sse2,
svml_d_exp4_core-sse, svml_d_exp8_core-avx2,
svml_d_log2_core-sse2, svml_d_log4_core-sse,
svml_d_log8_core-avx2, svml_d_pow2_core-sse2,
svml_d_pow4_core-sse, svml_d_pow8_core-avx2,
svml_d_sin2_core-sse2, svml_d_sin4_core-sse,
svml_d_sin8_core-avx2, svml_d_sincos2_core-sse2,
svml_d_sincos4_core-sse, svml_d_sincos8_core-avx2,
svml_s_cosf16_core-avx2, svml_s_cosf4_core-sse2,
svml_s_cosf8_core-sse, svml_s_expf16_core-avx2,
svml_s_expf4_core-sse2, svml_s_expf8_core-sse,
svml_s_logf16_core-avx2, svml_s_logf4_core-sse2,
svml_s_logf8_core-sse, svml_s_powf16_core-avx2,
svml_s_powf4_core-sse2, svml_s_powf8_core-sse,
svml_s_sincosf16_core-avx2, svml_s_sincosf4_core-sse2,
svml_s_sincosf8_core-sse, svml_s_sinf16_core-avx2,
svml_s_sinf4_core-sse2 and svml_s_sinf8_core-sse.
* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h: New file.
* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-sse4_1.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2v_cos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4v_cos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8v_cos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2v_exp): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4v_exp): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8v_exp): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2v_log): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4v_log): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8v_log): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2vv_pow): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4vv_pow): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8vv_pow): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2v_sin): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4v_sin): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8v_sin): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN2vvv_sincos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN4vvv_sincos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN8vvv_sincos): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16v_cosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4v_cosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8v_cosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16v_expf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4v_expf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8v_expf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16v_logf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4v_logf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8v_logf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16vv_powf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4vv_powf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8vv_powf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16vvv_sincosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4vvv_sincosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8vvv_sincosf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core-avx2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVeN16v_sinf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core-sse2.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVbN4v_sinf): Removed.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core.S: Renamed to
...
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core-sse.S: This.
Don't include <sysdep.h> nor <init-arch.h>.
(_ZGVdN8v_sinf): Removed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Add s_ceil-sse4_1, s_ceilf-sse4_1, s_floor-sse4_1,
s_floorf-sse4_1, s_nearbyint-sse4_1, s_nearbyintf-sse4_1,
s_rint-sse4_1 and s_rintf-sse4_1.
* sysdeps/x86_64/fpu/multiarch/ifunc-sse4_1.h: New file.
* sysdeps/x86_64/fpu/multiarch/s_ceil.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floor.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floorf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_nearbyint.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rint.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_rintf.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_ceil.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_ceil-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__ceil): Removed.
* sysdeps/x86_64/fpu/multiarch/s_ceilf.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_ceilf-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__ceilf): Removed.
* sysdeps/x86_64/fpu/multiarch/s_floor.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_floor-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__floor): Removed.
* sysdeps/x86_64/fpu/multiarch/s_floorf.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_floorf-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__floorf): Removed.
* sysdeps/x86_64/fpu/multiarch/s_nearbyint.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_nearbyint-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__nearbyint): Removed.
* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_nearbyintf-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__nearbyintf): Removed.
* sysdeps/x86_64/fpu/multiarch/s_rint.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_rint-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__rint): Removed.
* sysdeps/x86_64/fpu/multiarch/s_rintf.S: Renamed to ...
* sysdeps/x86_64/fpu/multiarch/s_rintf-sse4_1.S: This. Don't
include <machine/asm.h> nor <init-arch.h>. Include <sysdep.h>.
(__rintf): Removed.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Vector math functions require -ffast-math, which sets
-ffinite-math-only, so the scalar versions that the vector functions
fall back to in some cases must be the finite ones.
Since the finite version of pow() returns qNaN instead of 1.0 for
several inputs, those inputs are excluded from the tests of the vector
math functions.
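
For reference, ISO C (Annex F) requires pow(x, 0) to be 1.0 even when x
is a NaN, which is the behaviour the finite entry point does not provide
for those inputs; a minimal standalone check (not part of the glibc test
suite):

```c
#include <math.h>
#include <stdio.h>

int main (void)
{
  /* C99/C11 Annex F: pow(x, +/-0) is 1 for any x, even a NaN.  The
     finite variant may return qNaN here, so such inputs are skipped
     when testing the vector wrappers.  */
  printf ("pow(NaN, 0) = %g\n", pow (NAN, 0.0));
  return 0;
}
```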
[BZ #20033]
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S: Call
finite version.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_exp2_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_log2_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_pow2_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_expf4_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_logf4_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_powf4_core.S: Likewise.
* math/libm-test.inc (pow_test_data): Exclude tests for qNaN
raised to the power zero.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If assembler doesn't support AVX512DQ, _dl_runtime_resolve_avx is used
to save the first 8 vector registers, which only saves the lower 256
bits of vector register, for lazy binding. When it is called on AVX512
platform, the upper 256 bits of ZMM registers are clobbered. Parameters
passed in ZMM registers will be wrong when the function is called the
first time. This patch requires binutils 2.24, whose assembler can store
and load ZMM registers, to build x86-64 glibc. Since mathvec library
needs assembler support for AVX512DQ, we disable mathvec if assembler
doesn't support AVX512DQ.
[BZ #20139]
* config.h.in (HAVE_AVX512_ASM_SUPPORT): Renamed to ...
(HAVE_AVX512DQ_ASM_SUPPORT): This.
* sysdeps/x86_64/configure.ac: Require assembler from binutils
2.24 or above.
(HAVE_AVX512_ASM_SUPPORT): Removed.
(HAVE_AVX512DQ_ASM_SUPPORT): New.
* sysdeps/x86_64/configure: Regenerated.
* sysdeps/x86_64/dl-trampoline.S: Make HAVE_AVX512_ASM_SUPPORT
check unconditional.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Likewise.
* sysdeps/x86_64/multiarch/memcpy.S: Likewise.
* sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
Likewise.
* sysdeps/x86_64/multiarch/memmove.S: Likewise.
* sysdeps/x86_64/multiarch/memmove_chk.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
Likewise.
* sysdeps/x86_64/multiarch/memset.S: Likewise.
* sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S: Check
HAVE_AVX512DQ_ASM_SUPPORT instead of HAVE_AVX512_ASM_SUPPORT.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S:
Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The vector sincos ABI did not match the current vector function
declaration "#pragma omp declare simd notinbranch", according to which
vector sincos should take vectors of pointers for its second and third
parameters.  It is fixed by implementing the exported variants as
wrappers around versions that take the second and third parameters as
pointers.
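
A sketch of the declaration in question (modelled on the pragma quoted
above, not copied from glibc's math-vector headers); with this
declaration the compiler expects the vector variant to receive vectors
of pointers for the two output parameters:

```c
/* Illustrative only; glibc attaches the pragma via its bits/math-vector.h
   machinery rather than writing it out literally like this.  */
#pragma omp declare simd notinbranch
extern void sincos (double x, double *sinx, double *cosx);
```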
[BZ #20024]
* sysdeps/x86/fpu/test-math-vector-sincos.h: New.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: Fixed ABI
of this implementation of vector function.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_sincos2_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_sincos4_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: Likewise.
* sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf4_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf8_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S: Likewise.
* sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Use another wrapper
for testing vector sincos with fixed ABI.
* sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx.c: New test.
* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx2.c: Likewise.
* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx512.c: Likewise.
* sysdeps/x86_64/fpu/test-double-libmvec-sincos.c: Likewise.
* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx.c: Likewise.
* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx2.c: Likewise.
* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx512.c: Likewise.
* sysdeps/x86_64/fpu/test-float-libmvec-sincosf.c: Likewise.
* sysdeps/x86_64/fpu/Makefile: Added new tests.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Continuing fixes for ceil and floor functions not to raise the
"inexact" exception, this patch fixes the x86_64 SSE4.1 versions. The
roundss / roundsd instructions take an immediate operand that
determines the rounding mode and whether to raise "inexact"; this just
needs bit 3 set to disable "inexact", which this patch does.
Remark: we don't have an SSE4.1 version of trunc / truncf (using this
instruction with operand 11); I'd expect one to make sense, but of
course it should be benchmarked against the existing C code. I'll
file a bug in Bugzilla for the lack of such a version.
Tested for x86_64.
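
The same effect can be reproduced from C with the SSE4.1 intrinsics; the
sketch below (an illustration, not glibc's assembly) sets
_MM_FROUND_NO_EXC, i.e. bit 3 of the roundsd immediate, so the rounding
no longer raises "inexact":

```c
/* Compile with: gcc -O2 -msse4.1 ceil-noexc.c -lm  */
#include <fenv.h>
#include <smmintrin.h>
#include <stdio.h>

static double ceil_sse41 (double x)
{
  __m128d v = _mm_set_sd (x);
  /* Round toward +inf; bit 3 (_MM_FROUND_NO_EXC) suppresses "inexact".  */
  v = _mm_round_sd (v, v, _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC);
  return _mm_cvtsd_f64 (v);
}

int main (void)
{
  volatile double x = 1.25;   /* ceil changes the value, which would
                                 normally raise "inexact" */
  feclearexcept (FE_ALL_EXCEPT);
  double r = ceil_sse41 (x);
  printf ("ceil(%g) = %g, inexact raised: %d\n",
          x, r, fetestexcept (FE_INEXACT) != 0);
  return 0;
}
```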
[BZ #15479]
* sysdeps/x86_64/fpu/multiarch/s_ceil.S (__ceil_sse41): Set bit 3
of immediate operand to rounding instruction.
* sysdeps/x86_64/fpu/multiarch/s_ceilf.S (__ceilf_sse41):
Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floor.S (__floor_sse41):
Likewise.
* sysdeps/x86_64/fpu/multiarch/s_floorf.S (__floorf_sse41):
Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the PLT may be used, JUMPTARGET should be used instead of calling
the function directly.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S
(_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S
(_ZGVdN4v_cos_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S
(_ZGVeN8v_cos): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S
(_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S
(_ZGVdN4v_exp_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S
(_ZGVeN8v_exp): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S
(_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S
(_ZGVdN4v_log_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S
(_ZGVeN8v_log): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S
(_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S
(_ZGVdN4vv_pow_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S
(_ZGVeN8vv_pow): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S
(_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S
(_ZGVdN4v_sin_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S
(_ZGVeN8v_sin): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S
(_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S
(_ZGVdN4vvv_sincos_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S
(_ZGVeN8vvv_sincos): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S
(_ZGVeN16v_cosf): Use JUMPTARGET to call cosf.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S
(_ZGVbN4v_cosf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S
(_ZGVdN8v_cosf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S
(_ZGVeN16v_expf): Use JUMPTARGET to call expf.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S
(_ZGVbN4v_expf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S
(_ZGVdN8v_expf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
(_ZGVeN16v_logf): Use JUMPTARGET to call logf.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
(_ZGVbN4v_logf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
(_ZGVdN8v_logf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S
(_ZGVeN16vv_powf): Use JUMPTARGET to call powf.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S
(_ZGVbN4vv_powf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S
(_ZGVdN8vv_powf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S
(_ZGVeN16vvv_sincosf): Use JUMPTARGET to call sinf and cosf.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S
(_ZGVbN4vvv_sincosf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S
(_ZGVdN8vvv_sincosf_avx2): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S
(_ZGVeN16v_sinf): Use JUMPTARGET to call sinf.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S
(_ZGVbN4v_sinf_sse4): Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S
(_ZGVdN8v_sinf_avx2): Likewise.
* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2):
Use JUMPTARGET to call callee.
(WRAPPER_IMPL_SSE2_ff): Likewise.
(WRAPPER_IMPL_SSE2_fFF): Likewise.
(WRAPPER_IMPL_AVX): Likewise.
(WRAPPER_IMPL_AVX_ff): Likewise.
(WRAPPER_IMPL_AVX_fFF): Likewise.
(WRAPPER_IMPL_AVX512): Likewise.
(WRAPPER_IMPL_AVX512_ff): Likewise.
* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2):
Likewise.
(WRAPPER_IMPL_SSE2_ff): Likewise.
(WRAPPER_IMPL_SSE2_fFF): Likewise.
(WRAPPER_IMPL_AVX): Likewise.
(WRAPPER_IMPL_AVX_ff): Likewise.
(WRAPPER_IMPL_AVX_fFF): Likewise.
(WRAPPER_IMPL_AVX512): Likewise.
(WRAPPER_IMPL_AVX512_ff): Likewise.
(WRAPPER_IMPL_AVX512_fFF): Likewise.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GCC added support for -mfma4 in version 4.5. Thus the configure tests
for this support are obsolete, and this patch removes them.
Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).
* sysdeps/i386/configure.ac (libc_cv_cc_fma4): Remove configure
test.
* sysdeps/i386/configure: Regenerated.
* sysdeps/x86_64/configure.ac (libc_cv_cc_fma4): Remove configure
test.
* sysdeps/x86_64/configure: Regenerated.
* sysdeps/x86_64/fpu/multiarch/Makefile [$(have-mfma4) = yes]:
Make code unconditional.
* sysdeps/x86_64/fpu/multiarch/e_asin.c [HAVE_FMA4_SUPPORT]:
Likewise.
* sysdeps/x86_64/fpu/multiarch/e_atan2.c [HAVE_FMA4_SUPPORT]:
Likewise.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/e_exp.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/e_log.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/e_pow.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
* sysdeps/x86_64/fpu/multiarch/s_atan.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/s_fma.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/s_fmaf.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/s_sin.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* sysdeps/x86_64/fpu/multiarch/s_tan.c [HAVE_FMA4_SUPPORT]: Make
code unconditional.
[!HAVE_FMA4_SUPPORT]: Remove conditional code.
* config.h.in (HAVE_FMA4_SUPPORT): Remove #undef.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GCC added support for -mavx and -msse2avx in version 4.4. Thus the
configure tests for this support are obsolete, and this patch removes
them.
Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).
* sysdeps/i386/configure.ac (libc_cv_cc_avx): Remove configure
test.
(libc_cv_cc_sse2avx): Likewise.
* sysdeps/i386/configure: Regenerated.
* sysdeps/i386/i686/multiarch/Makefile
[$(subdir)$(config-cflags-avx) = mathyes]: Change conditional to
[$(subdir) = math].
* sysdeps/i386/i686/multiarch/s_fma-fma.c [HAVE_AVX_SUPPORT]: Make
code unconditional.
* sysdeps/i386/i686/multiarch/s_fma.c [HAVE_AVX_SUPPORT]:
Likewise.
* sysdeps/i386/i686/multiarch/s_fmaf-fma.c [HAVE_AVX_SUPPORT]:
Likewise.
* sysdeps/i386/i686/multiarch/s_fmaf.c [HAVE_AVX_SUPPORT]:
Likewise.
* sysdeps/x86_64/configure.ac (libc_cv_cc_avx): Remove configure
test.
(libc_cv_cc_sse2avx): Likewise.
* sysdeps/x86_64/configure: Regenerated.
* sysdeps/x86_64/Makefile [$(config-cflags-avx) = yes]: Make code
unconditional.
* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_profile)
[HAVE_AVX_SUPPORT || HAVE_AVX512_ASM_SUPPORT]: Make code
unconditional.
(_dl_runtime_profile)
[!(HAVE_AVX_SUPPORT || HAVE_AVX512_ASM_SUPPORT)]: Remove
conditional code.
* sysdeps/x86_64/fpu/multiarch/Makefile
[$(config-cflags-sse2avx) = yes]: Make code unconditional.
* sysdeps/x86_64/fpu/multiarch/e_atan2.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/fpu/multiarch/e_log.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_atan.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_fma.c [HAVE_AVX_SUPPORT]:
Likewise.
* sysdeps/x86_64/fpu/multiarch/s_fmaf.c [HAVE_AVX_SUPPORT]:
Likewise.
* sysdeps/x86_64/fpu/multiarch/s_sin.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/fpu/multiarch/s_tan.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
* sysdeps/x86_64/multiarch/strcmp.S [HAVE_AVX_SUPPORT]: Likewise.
* config.h.in (HAVE_AVX_SUPPORT): Remove #undef.
(HAVE_SSE2AVX_SUPPORT): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The x86_64 fma4 version of pow fails to disable contraction of
operations other than those explicitly intended to use fma
instructions, resulting in large ulp errors on processors with
fma4 instructions, as in bug 18104 (165 ulp for the test added for that
bug; error originally reported by "blaaa" on #glibc). This patch adds
$(config-cflags-nofma) for e_pow-fma4.c, corresponding to the use for
e_pow.c in sysdeps/ieee754/dbl-64/Makefile.
Tested for x86_64 on a processor with fma4.
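
As a standalone illustration of why uncontrolled contraction matters for
this kind of code (a toy example, not the pow algorithm itself): a fused
multiply-add skips the intermediate rounding of the product, so
contracting an expression the algorithm expects to round in two steps
changes the result:

```c
/* Compile with: gcc -O2 -ffp-contract=off contract.c -lm  */
#include <math.h>
#include <stdio.h>

int main (void)
{
  double a = 1.0 + 0x1p-27;
  double b = 1.0 - 0x1p-27;
  double c = -1.0;

  double separate = a * b + c;   /* a*b rounded first, then the add */
  double fused = fma (a, b, c);  /* a single rounding at the end */

  /* Prints 0x0p+0 vs -0x1p-54: whether an expression is contracted
     changes the computed value, which is why e_pow-fma4.c must disable
     contraction except where fma is explicitly intended.  */
  printf ("separate = %a\nfused    = %a\n", separate, fused);
  return 0;
}
```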
[BZ #19003]
* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_pow-fma4.c): Add
$(config-cflags-nofma).
|