path: root/gcc/config/rs6000/vector.md
author:    Kyrylo Tkachov <ktkachov@gcc.gnu.org>  2018-03-20 17:13:16 +0000
committer: Kyrylo Tkachov <ktkachov@gcc.gnu.org>  2018-03-20 17:13:16 +0000
commit:    770ebe99fe48aca10f3553c4195deba1757d328a (patch)
tree:      16f58159f03d1ab59f47648999b34f8d718cbae5 /gcc/config/rs6000/vector.md
parent:    6f87580f7d0726d9683ca0f4a703a857f06f00d5 (diff)
download:  gcc-770ebe99fe48aca10f3553c4195deba1757d328a.tar.gz
This PR shows that we get the load/store_lanes logic wrong for arm big-endian; it is tricky to get right. AArch64 handles it by adding the appropriate lane-swapping operations during expansion. I'd like to do the same on arm eventually, but we'd need to port and validate the VTBL-generating code and add it to all the right places, and I'm not comfortable enough doing that for GCC 8. I am, however, keen to get the wrong code fixed. As I say in the PR, vectorisation on armeb is already severely restricted (we disable many patterns on BYTES_BIG_ENDIAN) and the load/store_lanes patterns really were not working properly at all, so disabling them is not a radical approach. The way to do that is to return false in ARRAY_MODE_SUPPORTED_P for BYTES_BIG_ENDIAN.

Bootstrapped and tested on arm-none-linux-gnueabihf. Also tested on armeb-none-eabi.

	PR target/82518
	* config/arm/arm.c (arm_array_mode_supported_p): Return false for
	BYTES_BIG_ENDIAN.

	* lib/target-supports.exp (check_effective_target_vect_load_lanes):
	Disable for armeb targets.

	* gcc.target/arm/pr82518.c: New test.

From-SVN: r258687
Diffstat (limited to 'gcc/config/rs6000/vector.md')
0 files changed, 0 insertions, 0 deletions