author     rsandifo <rsandifo@138bc75d-0d04-0410-961f-82ee72b054a4>  2017-10-22 21:39:29 +0000
committer  rsandifo <rsandifo@138bc75d-0d04-0410-961f-82ee72b054a4>  2017-10-22 21:39:29 +0000
commit     1048c15588636609c35c11bd401e073d47e7cd54 (patch)
tree       4eaaa8513cdd739b4ec805e56e182a356e6d60b9 /gcc/cse.c
parent     b8510cb117780365a38216d22de7cb49b3f54fbb (diff)
Make more use of GET_MODE_UNIT_PRECISION
This patch is like the earlier GET_MODE_UNIT_SIZE one,
but for precisions rather than sizes. There is one behavioural
change in expand_debug_expr: we shouldn't use lowpart subregs
for non-scalar truncations, since that would just reinterpret
some of the scalars and drop the rest. (This probably doesn't
trigger in practice.) Using TRUNCATE is fine for scalars,
since simplify_gen_unary knows when a subreg can be used.
2017-10-22  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* cfgexpand.c (expand_debug_expr): Use GET_MODE_UNIT_PRECISION.
	(expand_debug_source_expr): Likewise.
	* combine.c (combine_simplify_rtx): Likewise.
	* cse.c (fold_rtx): Likewise.
	* optabs.c (expand_float): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1): Likewise.
	(simplify_binary_operation_1): Likewise.
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@253991 138bc75d-0d04-0410-961f-82ee72b054a4
Diffstat (limited to 'gcc/cse.c')
 gcc/cse.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/gcc/cse.c b/gcc/cse.c
index 25653ac77bb..65cc9ae110c 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -3607,7 +3607,7 @@ fold_rtx (rtx x, rtx_insn *insn)
       enum rtx_code associate_code;

       if (is_shift
-	  && (INTVAL (const_arg1) >= GET_MODE_PRECISION (mode)
+	  && (INTVAL (const_arg1) >= GET_MODE_UNIT_PRECISION (mode)
	      || INTVAL (const_arg1) < 0))
	{
	  if (SHIFT_COUNT_TRUNCATED)
@@ -3656,7 +3656,7 @@ fold_rtx (rtx x, rtx_insn *insn)
	    break;

	  if (is_shift
-	      && (INTVAL (inner_const) >= GET_MODE_PRECISION (mode)
+	      && (INTVAL (inner_const) >= GET_MODE_UNIT_PRECISION (mode)
		  || INTVAL (inner_const) < 0))
	    {
	      if (SHIFT_COUNT_TRUNCATED)
@@ -3687,7 +3687,7 @@ fold_rtx (rtx x, rtx_insn *insn)
	  if (is_shift
	      && CONST_INT_P (new_const)
-	      && INTVAL (new_const) >= GET_MODE_PRECISION (mode))
+	      && INTVAL (new_const) >= GET_MODE_UNIT_PRECISION (mode))
	    {
	      /* As an exception, we can turn an ASHIFTRT of this form
		 into a shift of the number of bits - 1.  */