author    Alexandre Oliva <aoliva@redhat.com>  2015-11-26 21:03:24 -0200
committer Alexandre Oliva <aoliva@redhat.com>  2015-11-26 21:03:24 -0200
commit    270b1bc8c0fb5ae820adbf2e4cd79f32181c6ce1 (patch)
tree      6da23a9e9861a3fc4a03f0b3b132f47786a1273b
parent    4b8026f85a902e5b5b4207ab1ab7ba16a7512b01 (diff)
download  gcc-aoliva/pr67355.tar.gz
[PR67355] drop dummy zero from reverse VTA ops, fix infinite recursion (branch aoliva/pr67355)
VTA's cselib expression hashing compares expressions with the same hash before adding them to the hash table.  When there is a collision involving a self-referencing expression, we could get infinite recursion, in spite of the cycle breakers already in place.  The problem is currently latent in the trunk, because by chance we don't get a collision.

Such value cycles are often introduced by reverse_op; most often, they're indirect, and then value canonicalization takes care of the cycle, but if the reverse operation simplifies to the original value, we used to issue a (plus V (const_int 0)), because at some point adding a plain value V to a location list as a reverse_op equivalence caused other problems.  This dummy zero, in turn, caused the value canonicalizer to not fully realize the equivalence, leading to more complex graphs and, occasionally, to infinite recursion when comparing such value-plus-zero expressions recursively.

Simply using V solves the infinite recursion from the PR testcase, since the extra equivalence and the preexisting value canonicalization together prevent recursion while the unrecognized equivalence wouldn't, but it exposed another infinite recursion in memrefs_conflict_p: get_addr had a cycle breaker in place, to skip RTL referencing values introduced after the one we're examining, but it wouldn't break the cycle if the value itself appeared in the expression being examined.

After removing the dummy zero above, this kind of cycle in the equivalence graph is no longer introduced by VTA itself, but dummy zeros are also present in generated code, such as in the 32-bit x86's pro_epilogue_adjust_stack_si_add epilogue insn generated as part of the builtin longjmp in _Unwind_RaiseException building libgcc's unwind-dw2.o.  So, break the recursion cycle for them too.

for  gcc/ChangeLog

	PR debug/67355
	* var-tracking.c (reverse_op): Don't add dummy zero to reverse
	ops that simplify back to the original value.
	* alias.c (refs_newer_value_p): Cut off recursion for
	expressions containing the original value.
 gcc/alias.c        | 4 ++--
 gcc/var-tracking.c | 5 -----
 2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/gcc/alias.c b/gcc/alias.c
index 9a642dde03e..d868da347d3 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2072,7 +2072,7 @@ base_alias_check (rtx x, rtx x_base, rtx y, rtx y_base,
 }
 
 /* Return TRUE if EXPR refers to a VALUE whose uid is greater than
-   that of V.  */
+   (or equal to) that of V.  */
 
 static bool
 refs_newer_value_p (const_rtx expr, rtx v)
@@ -2080,7 +2080,7 @@ refs_newer_value_p (const_rtx expr, rtx v)
   int minuid = CSELIB_VAL_PTR (v)->uid;
   subrtx_iterator::array_type array;
   FOR_EACH_SUBRTX (iter, array, expr, NONCONST)
-    if (GET_CODE (*iter) == VALUE && CSELIB_VAL_PTR (*iter)->uid > minuid)
+    if (GET_CODE (*iter) == VALUE && CSELIB_VAL_PTR (*iter)->uid >= minuid)
       return true;
   return false;
 }
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 9185bfd39cf..07eea841f44 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -5774,11 +5774,6 @@ reverse_op (rtx val, const_rtx expr, rtx_insn *insn)
 	  return;
 	}
       ret = simplify_gen_binary (code, GET_MODE (val), val, arg);
-      if (ret == val)
-	/* Ensure ret isn't VALUE itself (which can happen e.g. for
-	   (plus (reg1) (reg2)) when reg2 is known to be 0), as that
-	   breaks a lot of routines during var-tracking.  */
-	ret = gen_rtx_fmt_ee (PLUS, GET_MODE (val), val, const0_rtx);
       break;
     default:
       gcc_unreachable ();