author     Joseph Myers <joseph@codesourcery.com>  2018-09-27 12:35:23 +0000
committer  Joseph Myers <joseph@codesourcery.com>  2018-09-27 12:35:23 +0000
commit     9755bc4686d8cd6a0e9539040b903e9e9291c319 (patch)
tree       05d7d23577087ebb8c4d9b8122f604c06b1513d3 /sysdeps/ieee754/ldbl-128ibm
parent     f841c97e515a1673485a2b12b3c280073d737890 (diff)
Use round functions not __round functions in glibc libm.
Continuing the move to use, within libm, the public names of libm
functions that can be inlined as built-in functions on many
architectures, this patch changes calls to the __round functions into
calls to the corresponding round names, with asm redirection back to
__round when a call is not inlined.
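
To make the mechanism concrete, here is a small standalone sketch of the
GCC asm-label pattern the patch relies on; my_round and __my_round are
hypothetical stand-ins for a public libm name and its internal
__-prefixed symbol, not code from the patch.

/* Standalone sketch (hypothetical names, not glibc code) of the
   asm-label redirection described above.  __my_round plays the role
   of libm's internal __round symbol.  */
double
__my_round (double x)
{
  return __builtin_round (x);
}

/* Any non-inlined call to the public name my_round is emitted against
   the symbol __my_round, so no PLT entry for the public name appears.
   In the patch the real declaration is:
   double round (double) asm ("__round");  */
double my_round (double) asm ("__my_round");

int
main (void)
{
  return (int) my_round (2.6) == 3 ? 0 : 1;   /* exits 0 on success */
}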
An additional complication arises in
sysdeps/ieee754/ldbl-128ibm/e_expl.c, where a call to roundl whose
result is converted to int is turned by the compiler into a call to
lroundl when long is 32-bit, resulting in localplt test failures. It
is legitimate for the compiler to make such an optimization; an
appropriate asm redirection of lroundl to __lroundl is therefore added
to that file (it is not needed anywhere else).
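
The following hypothetical example (not part of the patch) shows the
pattern that triggers the issue; with 32-bit long, GCC may compile the
cast as a single call to lroundl, which is why e_expl.c also redirects
lroundl to __lroundl, as quoted in the diff below.

/* Hypothetical illustration: a rounded result converted to int.  With
   32-bit long, GCC may emit this as a call to lroundl rather than
   roundl followed by a conversion.  Link with -lm.  */
#include <math.h>
#include <stdio.h>

static int
round_to_int (long double x)
{
  return (int) roundl (x);
}

int
main (void)
{
  printf ("%d\n", round_to_int (2.5L));   /* prints 3 */
  return 0;
}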
Tested for x86_64, and with build-many-glibcs.py.
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (round): Redirect
using MATH_REDIRECT.
* sysdeps/aarch64/fpu/s_round.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_roundf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_round.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_round.c: Likewise.
* sysdeps/ieee754/float128/s_roundf128.c: Likewise.
* sysdeps/ieee754/flt-32/s_roundf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_roundl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_roundl.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_round.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_roundf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_round.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
* sysdeps/riscv/rvf/s_roundf.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_roundl.c: Likewise.
(round): Redirect to __round.
(__roundl): Call round instead of __round.
* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__round):
Remove macro.
[_ARCH_PWR5X] (__roundf): Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use round
functions instead of __round variants.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_expl.c (lroundl): Redirect to
__lroundl.
(__ieee754_expl): Call roundl instead of __roundl.
Diffstat (limited to 'sysdeps/ieee754/ldbl-128ibm')
 sysdeps/ieee754/ldbl-128ibm/e_expl.c     | 11
 sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c |  2
 sysdeps/ieee754/ldbl-128ibm/s_roundl.c   |  7
 3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/sysdeps/ieee754/ldbl-128ibm/e_expl.c b/sysdeps/ieee754/ldbl-128ibm/e_expl.c
index a5024559dc..0c33d88010 100644
--- a/sysdeps/ieee754/ldbl-128ibm/e_expl.c
+++ b/sysdeps/ieee754/ldbl-128ibm/e_expl.c
@@ -134,6 +134,11 @@ static const long double C[] = {
  1.98412698413981650382436541785404286E-04L,
 };
 
+/* Avoid local PLT entry use from (int) roundl (...) being converted
+   to a call to lroundl in the case of 32-bit long and roundl not
+   inlined.  */
+long int lroundl (long double) asm ("__lroundl");
+
 long double
 __ieee754_expl (long double x)
 {
@@ -149,15 +154,15 @@ __ieee754_expl (long double x)
 
       SET_RESTORE_ROUND (FE_TONEAREST);
 
-      n = __roundl (x*M_1_LN2);
+      n = roundl (x*M_1_LN2);
       x = x-n*M_LN2_0;
       xl = n*M_LN2_1;
 
-      tval1 = __roundl (x*TWO8);
+      tval1 = roundl (x*TWO8);
       x -= __expl_table[T_EXPL_ARG1+2*tval1];
       xl -= __expl_table[T_EXPL_ARG1+2*tval1+1];
 
-      tval2 = __roundl (x*TWO15);
+      tval2 = roundl (x*TWO15);
       x -= __expl_table[T_EXPL_ARG2+2*tval2];
       xl -= __expl_table[T_EXPL_ARG2+2*tval2+1];
 
diff --git a/sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c b/sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c
index 36801213d4..6361d35428 100644
--- a/sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c
+++ b/sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c
@@ -95,7 +95,7 @@ gammal_positive (long double x, int *exp2_adj)
 	 starting by computing pow (X_ADJ, X_ADJ) with a power of 2
 	 factored out.  */
       long double exp_adj = -eps;
-      long double x_adj_int = __roundl (x_adj);
+      long double x_adj_int = roundl (x_adj);
       long double x_adj_frac = x_adj - x_adj_int;
       int x_adj_log2;
       long double x_adj_mant = __frexpl (x_adj, &x_adj_log2);
diff --git a/sysdeps/ieee754/ldbl-128ibm/s_roundl.c b/sysdeps/ieee754/ldbl-128ibm/s_roundl.c
index 94a62dcd6c..156be267a0 100644
--- a/sysdeps/ieee754/ldbl-128ibm/s_roundl.c
+++ b/sysdeps/ieee754/ldbl-128ibm/s_roundl.c
@@ -20,12 +20,15 @@
 /* This has been coded in assembler because GCC makes such a mess of
    it when it's coded in C.  */
 
+#define NO_MATH_REDIRECT
 #include <math.h>
 #include <math_private.h>
 #include <math_ldbl_opt.h>
 #include <float.h>
 #include <ieee754.h>
 
+double round (double) asm ("__round");
+
 long double
 __roundl (long double x)
 {
@@ -39,7 +42,7 @@ __roundl (long double x)
 			&& __builtin_isless (__builtin_fabs (xh),
 					     __builtin_inf ()), 1))
     {
-      hi = __round (xh);
+      hi = round (xh);
       if (hi != xh)
 	{
 	  /* The high part is not an integer; the low part only
@@ -62,7 +65,7 @@ __roundl (long double x)
       else
 	{
 	  /* The high part is a nonzero integer.  */
-	  lo = __round (xl);
+	  lo = round (xl);
 	  if (fabs (lo - xl) == 0.5)
 	    {
 	      if (xh > 0 && xl < 0