author      aldyh <aldyh@138bc75d-0d04-0410-961f-82ee72b054a4>   2002-07-25 02:22:47 +0000
committer   aldyh <aldyh@138bc75d-0d04-0410-961f-82ee72b054a4>   2002-07-25 02:22:47 +0000
commit      f19493d50b1f30e80b2558366faf0dc3aa85e0f6
tree        b3f65ba411bdbbb0640ca3fb4529474a863b4f05
parent      fea8b765ff8b59bcd6185251d12e20aa475232d6
2002-07-24 Aldy Hernandez <aldyh@redhat.com>
	* config/rs6000/eabi.h: Define TARGET_SPE_ABI, TARGET_SPE,
	TARGET_ISEL, and TARGET_FPRS.
	* doc/invoke.texi (RS/6000 and PowerPC Options): Document
	-mabi=spe, -mabi=no-spe, and -misel=.
	* config/rs6000/rs6000-protos.h: Add output_isel.  Move
	vrsave_operation prototype here.
	* config/rs6000/rs6000.md (sminsi3): Allow pattern for TARGET_ISEL.
	(smaxsi3): Same.
	(uminsi3): Same.
	(umaxsi3): Same.
	(abssi2_nopower): Disallow when TARGET_ISEL.
	(*ne0): Same.
	(negsf2): Change to expand and rename old pattern to *negsf2.
	(abssf2): Change to expand and rename old pattern to *abssf2.
	New expanders: fix_truncsfsi2, floatunssisf2, floatsisf2,
	fixunssfsi2.
	Change patterns that check for TARGET_HARD_FLOAT or
	TARGET_SOFT_FLOAT to also check TARGET_FPRS.
	* config/rs6000/rs6000.c: New globals: rs6000_spe_abi,
	rs6000_isel, rs6000_fprs, rs6000_isel_string.
	(rs6000_override_options): Add 8540 case to
	processor_target_table.  Set rs6000_isel for the 8540.  Call
	rs6000_parse_isel_option.
	(enable_mask_for_builtins): New.
	(rs6000_parse_isel_option): New.
	(rs6000_parse_abi_options): Add spe and no-spe.
	(easy_fp_constant): Treat !TARGET_FPRS as soft-float.
	(rs6000_legitimize_address): Check for TARGET_FPRS when checking
	for TARGET_HARD_FLOAT.  Add case for SPE_VECTOR_MODE.
	(rs6000_legitimize_reload_address): Handle SPE vector modes.
	(rs6000_legitimate_address): Disallow PRE_INC/PRE_DEC for SPE
	vector modes.  Check for TARGET_FPRS when checking for
	TARGET_HARD_FLOAT.
	(rs6000_emit_move): Check for TARGET_FPRS.  Add cases for SPE
	vector modes.
	(function_arg_boundary): Return 64 for SPE vector modes.
	(function_arg_advance): Check for TARGET_FPRS and handle SPE
	vectors.
	(function_arg): Same.
	(setup_incoming_varargs): Check for TARGET_FPRS.
	(rs6000_va_arg): Same.
	(struct builtin_description): Un-constify mask field.  Move up in
	file.
	(bdesc_2arg): Un-constify and add SPE builtins.
	(bdesc_1arg): Same.
	(bdesc_spe_predicates): New.
	(bdesc_spe_evsel): New.
	(rs6000_expand_unop_builtin): Add SPE 5-bit literal builtins.
	(rs6000_expand_binop_builtin): Same.
	(bdesc_2arg_spe): New.
	(spe_expand_builtin): New.
	(spe_expand_predicate_builtin): New.
	(spe_expand_evsel_builtin): New.
	(rs6000_expand_builtin): Call spe_expand_builtin for SPE.
	(rs6000_init_builtins): Initialize SPE builtins.  Call
	rs6000_common_init_builtins.
	(altivec_init_builtins): Move all non-altivec builtin code to...
	(rs6000_common_init_builtins): ...here.  New function.
	(branch_positive_comparison_operator): Allow NE code for SPE.
	(ccr_bit): Return correct ccr bit for SPE fp.
	(print_operand): Emit crnor in 'D' case for SPE.  New case 't'.
	Add SPE code for 'y' case.
	(rs6000_generate_compare): Generate rtl for SPE fp.
	(output_cbranch): Handle SPE hard floats.
	(rs6000_emit_cmove): Handle isel.
	(rs6000_emit_int_cmove): New.
	(output_isel): New.
	(rs6000_stack_info): Adjust stack frame so GPRs are saved in
	64-bits for SPE.
	(debug_stack_info): Add SPE info.
	(gen_frame_mem_offset): New.
	(rs6000_emit_prologue): Save GPRs in 64-bits for SPE abi.  Change
	mode of frame pointer, when saving it, to Pmode.
	(rs6000_emit_epilogue): Restore GPRs in 64-bits for SPE abi.
	Misc cleanups and use gen_frame_mem_offset when appropriate.
	* config/rs6000/rs6000.h (processor_type): Add PROCESSOR_PPC8540.
	(TARGET_SPE_ABI): New.
	(TARGET_SPE): New.
	(TARGET_ISEL): New.
	(TARGET_FPRS): New.
	(FIXED_SCRATCH): New.
	(RTX_COSTS): Add PROCESSOR_PPC8540.
	(ASM_CPU_SPEC): Add case for 8540.
	(TARGET_OPTIONS): Add isel= case.
	(rs6000_spe_abi): New.
	(rs6000_isel): New.
	(rs6000_fprs): New.
	(rs6000_isel_string): New.
	(UNITS_PER_SPE_WORD): New.
	(LOCAL_ALIGNMENT): Adjust for SPE.
	(HARD_REGNO_MODE_OK): Same.
	(DATA_ALIGNMENT): Same.
	(MEMBER_TYPE_FORCES_BLK): New.
	(FIRST_PSEUDO_REGISTER): Set to 113.
	(FIXED_REGISTERS): Add SPE registers.
	(reg_class): Same.
	(REG_CLASS_NAMES): Same.
	(REG_CLASS_CONTENTS): Same.
	(REGNO_REG_CLASS): Same.
	(REGISTER_NAMES): Same.
	(DEBUG_REGISTER_NAMES): Same.
	(ADDITIONAL_REGISTER_NAMES): Same.
	(CALL_USED_REGISTERS): Same.
	(CALL_REALLY_USED_REGISTERS): Same.
	(SPE_ACC_REGNO): New.
	(SPEFSCR_REGNO): New.
	(SPE_SIMD_REGNO_P): New.
	(HARD_REGNO_NREGS): Adjust for SPE.
	(VECTOR_MODE_SUPPORTED_P): Same.
	(REGNO_REG_CLASS): Same.
	(FUNCTION_VALUE): Same.
	(LIBCALL_VALUE): Same.
	(LEGITIMATE_OFFSET_ADDRESS_P): Same.
	(SPE_VECTOR_MODE): New.
	(CONDITIONAL_REGISTER_USAGE): Disable FPRs when target does FP on
	the GPRs.  Set FIXED_SCRATCH fixed in SPE case.
	(rs6000_stack): Add spe_gp_size, spe_padding_size,
	spe_gp_save_offset.
	(USE_FP_FOR_ARG_P): Check for TARGET_FPRS.
	(LEGITIMATE_LO_SUM_ADDRESS_P): Same.
	(SPE_CONST_OFFSET_OK): New.
	(rs6000_builtins): Add SPE builtins.
	* testsuite/gcc.dg/ppc-spe.c: New.
	* config/rs6000/eabispe.h: New.
	* config/rs6000/spe.h: New.
	* config/rs6000/spe.md: New.
	* config/rs6000/rs6000-c.c (rs6000_cpu_cpp_builtins): Define
	__SIMD__ for TARGET_SPE.
	* config.gcc: Add powerpc-*-eabispe* case.  Add spe.h to user
	headers for powerpc.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@55731 138bc75d-0d04-0410-961f-82ee72b054a4
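As a rough, user-level illustration of what this patch enables (a hedged sketch, not part of the commit: the real vector typedefs come from the new spe.h header, and the generic vector type used below is an assumption), code built with the new -mabi=spe and -misel=yes options could test the __SPE__ predefine added in rs6000-c.c and call one of the new builtins registered in bdesc_2arg:

/* Illustrative sketch only.  Assumes a 64-bit, two-element integer
   vector type; the real typedefs are provided by the new <spe.h>
   header added by this commit.  */
#ifdef __SPE__
typedef int v2si __attribute__ ((vector_size (8)));

v2si
add_pairs (v2si a, v2si b)
{
  /* __builtin_spe_evaddw is one of the binary SPE builtins this
     patch registers in bdesc_2arg.  */
  return __builtin_spe_evaddw (a, b);
}
#endif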
-rw-r--r--   gcc/config/rs6000/eabi.h              10
-rw-r--r--   gcc/config/rs6000/eabispe.h           51
-rw-r--r--   gcc/config/rs6000/rs6000-c.c           2
-rw-r--r--   gcc/config/rs6000/rs6000-protos.h      2
-rw-r--r--   gcc/config/rs6000/rs6000.c          1842
-rw-r--r--   gcc/config/rs6000/rs6000.h           361
-rw-r--r--   gcc/config/rs6000/rs6000.md          384
-rw-r--r--   gcc/config/rs6000/spe.h             1099
-rw-r--r--   gcc/config/rs6000/spe.md            2550
9 files changed, 5953 insertions, 348 deletions
diff --git a/gcc/config/rs6000/eabi.h b/gcc/config/rs6000/eabi.h
index fd41177ea33..373dd2b6fc7 100644
--- a/gcc/config/rs6000/eabi.h
+++ b/gcc/config/rs6000/eabi.h
@@ -42,3 +42,13 @@ Boston, MA 02111-1307, USA. */
builtin_assert ("machine=powerpc"); \
} \
while (0)
+
+#undef TARGET_SPE_ABI
+#undef TARGET_SPE
+#undef TARGET_ISEL
+#undef TARGET_FPRS
+
+#define TARGET_SPE_ABI rs6000_spe_abi
+#define TARGET_SPE (rs6000_cpu == PROCESSOR_PPC8540)
+#define TARGET_ISEL rs6000_isel
+#define TARGET_FPRS rs6000_fprs
diff --git a/gcc/config/rs6000/eabispe.h b/gcc/config/rs6000/eabispe.h
new file mode 100644
index 00000000000..b0047cd0c8a
--- /dev/null
+++ b/gcc/config/rs6000/eabispe.h
@@ -0,0 +1,51 @@
+/* Core target definitions for GNU compiler
+ for PowerPC embedded targeted systems with SPE support.
+ Copyright (C) 2002 Free Software Foundation, Inc.
+ Contributed by Aldy Hernandez (aldyh@redhat.com).
+
+This file is part of GNU CC.
+
+GNU CC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2, or (at your option)
+any later version.
+
+GNU CC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GNU CC; see the file COPYING. If not, write to
+the Free Software Foundation, 59 Temple Place - Suite 330,
+Boston, MA 02111-1307, USA. */
+
+#undef TARGET_DEFAULT
+#define TARGET_DEFAULT (MASK_POWERPC | MASK_NEW_MNEMONICS | MASK_EABI)
+
+#undef TARGET_VERSION
+#define TARGET_VERSION fprintf (stderr, " (PowerPC Embedded SPE)");
+
+#undef SUBSUBTARGET_OVERRIDE_OPTIONS
+#define SUBSUBTARGET_OVERRIDE_OPTIONS \
+ rs6000_cpu = PROCESSOR_PPC8540; \
+ rs6000_spe_abi = 1; \
+ rs6000_fprs = 0; \
+ /* See note below. */ \
+ /*rs6000_long_double_type_size = 128;*/ \
+ rs6000_isel = 1
+
+/*
+ The e500 ABI says that either long doubles are 128 bits, or if
+ implemented in any other size, the compiler/linker should error out.
+ We have no emulation libraries for 128 bit long doubles, and I hate
+ the dozens of failures on the regression suite. So I'm breaking ABI
+ specifications, until I properly fix the emulation.
+
+ Enable these later.
+#undef CPP_LONGDOUBLE_DEFAULT_SPEC
+#define CPP_LONGDOUBLE_DEFAULT_SPEC "-D__LONG_DOUBLE_128__=1"
+*/
+
+#undef ASM_DEFAULT_SPEC
+#define ASM_DEFAULT_SPEC "-mppc -mspe -me500"
diff --git a/gcc/config/rs6000/rs6000-c.c b/gcc/config/rs6000/rs6000-c.c
index ab31cc58f7c..14132ca68ad 100644
--- a/gcc/config/rs6000/rs6000-c.c
+++ b/gcc/config/rs6000/rs6000-c.c
@@ -93,6 +93,8 @@ rs6000_cpu_cpp_builtins (pfile)
builtin_define ("_ARCH_COM");
if (TARGET_ALTIVEC)
builtin_define ("__ALTIVEC__");
+ if (TARGET_SPE)
+ builtin_define ("__SPE__");
if (TARGET_SOFT_FLOAT)
builtin_define ("_SOFT_FLOAT");
if (BYTES_BIG_ENDIAN)
diff --git a/gcc/config/rs6000/rs6000-protos.h b/gcc/config/rs6000/rs6000-protos.h
index 652cfb9f40a..42563d30378 100644
--- a/gcc/config/rs6000/rs6000-protos.h
+++ b/gcc/config/rs6000/rs6000-protos.h
@@ -186,6 +186,8 @@ extern void rs6000_emit_load_toc_table PARAMS ((int));
extern void rs6000_aix_emit_builtin_unwind_init PARAMS ((void));
extern void rs6000_emit_epilogue PARAMS ((int));
extern void debug_stack_info PARAMS ((rs6000_stack_t *));
+extern const char *output_isel PARAMS ((rtx *));
+extern int vrsave_operation PARAMS ((rtx, enum machine_mode));
extern void machopic_output_stub PARAMS ((FILE *, const char *, const char *));
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index dbb1dee253d..aaaccc87283 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -80,6 +80,18 @@ int rs6000_altivec_vrsave;
/* String from -mvrsave= option. */
const char *rs6000_altivec_vrsave_string;
+/* Nonzero if we want SPE ABI extensions. */
+int rs6000_spe_abi;
+
+/* Whether isel instructions should be generated. */
+int rs6000_isel;
+
+/* Nonzero if we have FPRs. */
+int rs6000_fprs = 1;
+
+/* String from -misel=. */
+const char *rs6000_isel_string;
+
/* Set to non-zero once AIX common-mode calls have been defined. */
static int common_mode_defined;
@@ -131,6 +143,17 @@ static int rs6000_sr_alias_set;
int rs6000_default_long_calls;
const char *rs6000_longcall_switch;
+struct builtin_description
+{
+ /* mask is not const because we're going to alter it below. This
+ nonsense will go away when we rewrite the -march infrastructure
+ to give us more target flag bits. */
+ unsigned int mask;
+ const enum insn_code icode;
+ const char *const name;
+ const enum rs6000_builtins code;
+};
+
static void rs6000_add_gc_roots PARAMS ((void));
static int num_insns_constant_wide PARAMS ((HOST_WIDE_INT));
static rtx expand_block_move_mem PARAMS ((enum machine_mode, rtx, rtx));
@@ -142,6 +165,7 @@ static void rs6000_emit_stack_tie PARAMS ((void));
static void rs6000_frame_related PARAMS ((rtx, rtx, HOST_WIDE_INT, rtx, rtx));
static void emit_frame_save PARAMS ((rtx, rtx, enum machine_mode,
unsigned int, int, int));
+static rtx gen_frame_mem_offset PARAMS ((enum machine_mode, rtx, int));
static void rs6000_emit_allocate_stack PARAMS ((HOST_WIDE_INT, int));
static unsigned rs6000_hash_constant PARAMS ((rtx));
static unsigned toc_hash_function PARAMS ((const void *));
@@ -193,6 +217,17 @@ static rtx rs6000_expand_binop_builtin PARAMS ((enum insn_code, tree, rtx));
static rtx rs6000_expand_ternop_builtin PARAMS ((enum insn_code, tree, rtx));
static rtx rs6000_expand_builtin PARAMS ((tree, rtx, rtx, enum machine_mode, int));
static void altivec_init_builtins PARAMS ((void));
+static void rs6000_common_init_builtins PARAMS ((void));
+
+static void enable_mask_for_builtins PARAMS ((struct builtin_description *,
+ int, enum rs6000_builtins,
+ enum rs6000_builtins));
+static void spe_init_builtins PARAMS ((void));
+static rtx spe_expand_builtin PARAMS ((tree, rtx, bool *));
+static rtx spe_expand_predicate_builtin PARAMS ((enum insn_code, tree, rtx));
+static rtx spe_expand_evsel_builtin PARAMS ((enum insn_code, tree, rtx));
+static int rs6000_emit_int_cmove PARAMS ((rtx, rtx, rtx, rtx));
+
static rtx altivec_expand_builtin PARAMS ((tree, rtx, bool *));
static rtx altivec_expand_ld_builtin PARAMS ((tree, rtx, bool *));
static rtx altivec_expand_st_builtin PARAMS ((tree, rtx, bool *));
@@ -202,10 +237,10 @@ static rtx altivec_expand_predicate_builtin PARAMS ((enum insn_code, const char
static rtx altivec_expand_stv_builtin PARAMS ((enum insn_code, tree));
static void rs6000_parse_abi_options PARAMS ((void));
static void rs6000_parse_vrsave_option PARAMS ((void));
+static void rs6000_parse_isel_option PARAMS ((void));
static int first_altivec_reg_to_save PARAMS ((void));
static unsigned int compute_vrsave_mask PARAMS ((void));
static void is_altivec_return_reg PARAMS ((rtx, void *));
-int vrsave_operation PARAMS ((rtx, enum machine_mode));
static rtx generate_set_vrsave PARAMS ((rtx, rs6000_stack_t *, int));
static void altivec_frame_fixup PARAMS ((rtx, rtx, HOST_WIDE_INT));
static int easy_vector_constant PARAMS ((rtx));
@@ -436,6 +471,9 @@ rs6000_override_options (default_cpu)
{"7450", PROCESSOR_PPC7450,
MASK_POWERPC | MASK_PPC_GFXOPT | MASK_NEW_MNEMONICS,
POWER_MASKS | MASK_PPC_GPOPT | MASK_POWERPC64},
+ {"8540", PROCESSOR_PPC8540,
+ MASK_POWERPC | MASK_PPC_GFXOPT | MASK_NEW_MNEMONICS,
+ POWER_MASKS | MASK_PPC_GPOPT | MASK_POWERPC64},
{"801", PROCESSOR_MPCCORE,
MASK_POWERPC | MASK_SOFT_FLOAT | MASK_NEW_MNEMONICS,
POWER_MASKS | POWERPC_OPT_MASKS | MASK_POWERPC64},
@@ -484,6 +522,9 @@ rs6000_override_options (default_cpu)
}
}
+ if (rs6000_cpu == PROCESSOR_PPC8540)
+ rs6000_isel = 1;
+
/* If we are optimizing big endian systems for space, use the store
multiple instructions. */
if (BYTES_BIG_ENDIAN && optimize_size)
@@ -578,6 +619,9 @@ rs6000_override_options (default_cpu)
/* Handle -mvrsave= option. */
rs6000_parse_vrsave_option ();
+ /* Handle -misel= option. */
+ rs6000_parse_isel_option ();
+
#ifdef SUBTARGET_OVERRIDE_OPTIONS
SUBTARGET_OVERRIDE_OPTIONS;
#endif
@@ -640,6 +684,21 @@ rs6000_override_options (default_cpu)
init_machine_status = rs6000_init_machine_status;
}
+/* Handle -misel= option. */
+static void
+rs6000_parse_isel_option ()
+{
+ if (rs6000_isel_string == 0)
+ return;
+ else if (! strcmp (rs6000_isel_string, "yes"))
+ rs6000_isel = 1;
+ else if (! strcmp (rs6000_isel_string, "no"))
+ rs6000_isel = 0;
+ else
+ error ("unknown -misel= option specified: '%s'",
+ rs6000_isel_string);
+}
+
/* Handle -mvrsave= options. */
static void
rs6000_parse_vrsave_option ()
@@ -665,6 +724,10 @@ rs6000_parse_abi_options ()
rs6000_altivec_abi = 1;
else if (! strcmp (rs6000_abi_string, "no-altivec"))
rs6000_altivec_abi = 0;
+ else if (! strcmp (rs6000_abi_string, "spe"))
+ rs6000_spe_abi = 1;
+ else if (! strcmp (rs6000_abi_string, "no-spe"))
+ rs6000_spe_abi = 0;
else
error ("unknown ABI specified: '%s'", rs6000_abi_string);
}
@@ -1208,7 +1271,8 @@ easy_fp_constant (op, mode)
return 0;
/* Consider all constants with -msoft-float to be easy. */
- if (TARGET_SOFT_FLOAT && mode != DImode)
+ if ((TARGET_SOFT_FLOAT || !TARGET_FPRS)
+ && mode != DImode)
return 1;
/* If we are using V.4 style PIC, consider all constants to be hard. */
@@ -2007,7 +2071,9 @@ rs6000_legitimize_address (x, oldx, mode)
&& GET_CODE (XEXP (x, 0)) == REG
&& GET_CODE (XEXP (x, 1)) != CONST_INT
&& GET_MODE_NUNITS (mode) == 1
- && (TARGET_HARD_FLOAT || TARGET_POWERPC64 || mode != DFmode)
+ && ((TARGET_HARD_FLOAT && TARGET_FPRS)
+ || TARGET_POWERPC64
+ || mode != DFmode)
&& (TARGET_POWERPC64 || mode != DImode)
&& mode != TImode)
{
@@ -2026,13 +2092,34 @@ rs6000_legitimize_address (x, oldx, mode)
reg = force_reg (Pmode, x);
return reg;
}
+ else if (SPE_VECTOR_MODE (mode))
+ {
+ /* We accept [reg + reg] and [reg + OFFSET]. */
+
+ if (GET_CODE (x) == PLUS)
+ {
+ rtx op1 = XEXP (x, 0);
+ rtx op2 = XEXP (x, 1);
+
+ op1 = force_reg (Pmode, op1);
+
+ if (GET_CODE (op2) != REG
+ && (GET_CODE (op2) != CONST_INT
+ || !SPE_CONST_OFFSET_OK (INTVAL (op2))))
+ op2 = force_reg (Pmode, op2);
+
+ return gen_rtx_PLUS (Pmode, op1, op2);
+ }
+
+ return force_reg (Pmode, x);
+ }
else if (TARGET_ELF && TARGET_32BIT && TARGET_NO_TOC && ! flag_pic
&& GET_CODE (x) != CONST_INT
&& GET_CODE (x) != CONST_DOUBLE
&& CONSTANT_P (x)
&& GET_MODE_NUNITS (mode) == 1
&& (GET_MODE_BITSIZE (mode) <= 32
- || (TARGET_HARD_FLOAT && mode == DFmode)))
+ || ((TARGET_HARD_FLOAT && TARGET_FPRS) && mode == DFmode)))
{
rtx reg = gen_reg_rtx (Pmode);
emit_insn (gen_elf_high (reg, (x)));
@@ -2043,7 +2130,7 @@ rs6000_legitimize_address (x, oldx, mode)
&& GET_CODE (x) != CONST_INT
&& GET_CODE (x) != CONST_DOUBLE
&& CONSTANT_P (x)
- && (TARGET_HARD_FLOAT || mode != DFmode)
+ && ((TARGET_HARD_FLOAT && TARGET_FPRS) || mode != DFmode)
&& mode != DImode
&& mode != TImode)
{
@@ -2130,6 +2217,7 @@ rs6000_legitimize_reload_address (x, mode, opnum, type, ind_levels, win)
&& REGNO (XEXP (x, 0)) < FIRST_PSEUDO_REGISTER
&& REG_MODE_OK_FOR_BASE_P (XEXP (x, 0), mode)
&& GET_CODE (XEXP (x, 1)) == CONST_INT
+ && !SPE_VECTOR_MODE (mode)
&& !ALTIVEC_VECTOR_MODE (mode))
{
HOST_WIDE_INT val = INTVAL (XEXP (x, 1));
@@ -2218,6 +2306,7 @@ rs6000_legitimate_address (mode, x, reg_ok_strict)
return 1;
if ((GET_CODE (x) == PRE_INC || GET_CODE (x) == PRE_DEC)
&& !ALTIVEC_VECTOR_MODE (mode)
+ && !SPE_VECTOR_MODE (mode)
&& TARGET_UPDATE
&& LEGITIMATE_INDIRECT_ADDRESS_P (XEXP (x, 0), reg_ok_strict))
return 1;
@@ -2235,7 +2324,9 @@ rs6000_legitimate_address (mode, x, reg_ok_strict)
if (LEGITIMATE_OFFSET_ADDRESS_P (mode, x, reg_ok_strict))
return 1;
if (mode != TImode
- && (TARGET_HARD_FLOAT || TARGET_POWERPC64 || mode != DFmode)
+ && ((TARGET_HARD_FLOAT && TARGET_FPRS)
+ || TARGET_POWERPC64
+ || mode != DFmode)
&& (TARGET_POWERPC64 || mode != DImode)
&& LEGITIMATE_INDEXED_ADDRESS_P (x, reg_ok_strict))
return 1;
@@ -2430,7 +2521,8 @@ rs6000_emit_move (dest, source, mode)
if (! no_new_pseudos && GET_CODE (operands[0]) != REG)
operands[1] = force_reg (mode, operands[1]);
- if (mode == SFmode && ! TARGET_POWERPC && TARGET_HARD_FLOAT
+ if (mode == SFmode && ! TARGET_POWERPC
+ && TARGET_HARD_FLOAT && TARGET_FPRS
&& GET_CODE (operands[0]) == MEM)
{
int regnum;
@@ -2489,6 +2581,9 @@ rs6000_emit_move (dest, source, mode)
case V8HImode:
case V4SFmode:
case V4SImode:
+ case V4HImode:
+ case V2SFmode:
+ case V2SImode:
if (CONSTANT_P (operands[1])
&& !easy_vector_constant (operands[1]))
operands[1] = force_const_mem (mode, operands[1]);
@@ -2774,6 +2869,8 @@ function_arg_boundary (mode, type)
{
if (DEFAULT_ABI == ABI_V4 && (mode == DImode || mode == DFmode))
return 64;
+ else if (SPE_VECTOR_MODE (mode))
+ return 64;
else if (TARGET_ALTIVEC_ABI && ALTIVEC_VECTOR_MODE (mode))
return 128;
else
@@ -2800,9 +2897,14 @@ function_arg_advance (cum, mode, type, named)
else
cum->words += RS6000_ARG_SIZE (mode, type);
}
+ else if (TARGET_SPE_ABI && TARGET_SPE && SPE_VECTOR_MODE (mode))
+ {
+ cum->words += RS6000_ARG_SIZE (mode, type);
+ cum->sysv_gregno++;
+ }
else if (DEFAULT_ABI == ABI_V4)
{
- if (TARGET_HARD_FLOAT
+ if (TARGET_HARD_FLOAT && TARGET_FPRS
&& (mode == SFmode || mode == DFmode))
{
if (cum->fregno <= FP_ARG_V4_MAX_REG)
@@ -2862,7 +2964,8 @@ function_arg_advance (cum, mode, type, named)
cum->words += align + RS6000_ARG_SIZE (mode, type);
- if (GET_MODE_CLASS (mode) == MODE_FLOAT && TARGET_HARD_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT
+ && TARGET_HARD_FLOAT && TARGET_FPRS)
cum->fregno++;
if (TARGET_DEBUG_ARG)
@@ -2915,14 +3018,17 @@ function_arg (cum, mode, type, named)
if (mode == VOIDmode)
{
if (abi == ABI_V4
- && TARGET_HARD_FLOAT
&& cum->nargs_prototype < 0
&& type && (cum->prototype || TARGET_NO_PROTOTYPE))
{
- return GEN_INT (cum->call_cookie
- | ((cum->fregno == FP_ARG_MIN_REG)
- ? CALL_V4_SET_FP_ARGS
- : CALL_V4_CLEAR_FP_ARGS));
+ /* For the SPE, we need to crxor CR6 always. */
+ if (TARGET_SPE_ABI)
+ return GEN_INT (cum->call_cookie | CALL_V4_SET_FP_ARGS);
+ else if (TARGET_HARD_FLOAT && TARGET_FPRS)
+ return GEN_INT (cum->call_cookie
+ | ((cum->fregno == FP_ARG_MIN_REG)
+ ? CALL_V4_SET_FP_ARGS
+ : CALL_V4_CLEAR_FP_ARGS));
}
return GEN_INT (cum->call_cookie);
@@ -2935,9 +3041,16 @@ function_arg (cum, mode, type, named)
else
return NULL;
}
+ else if (TARGET_SPE_ABI && TARGET_SPE && SPE_VECTOR_MODE (mode))
+ {
+ if (cum->sysv_gregno - 1 <= GP_ARG_MAX_REG)
+ return gen_rtx_REG (mode, cum->sysv_gregno);
+ else
+ return NULL;
+ }
else if (abi == ABI_V4)
{
- if (TARGET_HARD_FLOAT
+ if (TARGET_HARD_FLOAT && TARGET_FPRS
&& (mode == SFmode || mode == DFmode))
{
if (cum->fregno <= FP_ARG_V4_MAX_REG)
@@ -3159,7 +3272,8 @@ setup_incoming_varargs (cum, mode, type, pretend_size, no_rtl)
/* Save FP registers if needed. */
if (DEFAULT_ABI == ABI_V4
- && TARGET_HARD_FLOAT && ! no_rtl
+ && TARGET_HARD_FLOAT && TARGET_FPRS
+ && ! no_rtl
&& next_cum.fregno <= FP_ARG_V4_MAX_REG)
{
int fregno = next_cum.fregno;
@@ -3340,7 +3454,7 @@ rs6000_va_arg (valist, type)
size = UNITS_PER_WORD;
rsize = 1;
}
- else if (FLOAT_TYPE_P (type) && ! TARGET_SOFT_FLOAT)
+ else if (FLOAT_TYPE_P (type) && TARGET_HARD_FLOAT && TARGET_FPRS)
{
/* FP args go in FP registers, if present. */
indirect_p = 0;
@@ -3476,14 +3590,6 @@ do { \
NULL, NULL_TREE); \
} while (0)
-struct builtin_description
-{
- const unsigned int mask;
- const enum insn_code icode;
- const char *const name;
- const enum rs6000_builtins code;
-};
-
/* Simple ternary operations: VECd = foo (VECa, VECb, VECc). */
static const struct builtin_description bdesc_3arg[] =
@@ -3525,7 +3631,7 @@ static const struct builtin_description bdesc_dst[] =
/* Simple binary operations: VECc = foo (VECa, VECb). */
-static const struct builtin_description bdesc_2arg[] =
+static struct builtin_description bdesc_2arg[] =
{
{ MASK_ALTIVEC, CODE_FOR_addv16qi3, "__builtin_altivec_vaddubm", ALTIVEC_BUILTIN_VADDUBM },
{ MASK_ALTIVEC, CODE_FOR_addv8hi3, "__builtin_altivec_vadduhm", ALTIVEC_BUILTIN_VADDUHM },
@@ -3640,6 +3746,158 @@ static const struct builtin_description bdesc_2arg[] =
{ MASK_ALTIVEC, CODE_FOR_altivec_vsum2sws, "__builtin_altivec_vsum2sws", ALTIVEC_BUILTIN_VSUM2SWS },
{ MASK_ALTIVEC, CODE_FOR_altivec_vsumsws, "__builtin_altivec_vsumsws", ALTIVEC_BUILTIN_VSUMSWS },
{ MASK_ALTIVEC, CODE_FOR_xorv4si3, "__builtin_altivec_vxor", ALTIVEC_BUILTIN_VXOR },
+
+ /* Place holder, leave as first spe builtin. */
+ { 0, CODE_FOR_spe_evaddw, "__builtin_spe_evaddw", SPE_BUILTIN_EVADDW },
+ { 0, CODE_FOR_spe_evand, "__builtin_spe_evand", SPE_BUILTIN_EVAND },
+ { 0, CODE_FOR_spe_evandc, "__builtin_spe_evandc", SPE_BUILTIN_EVANDC },
+ { 0, CODE_FOR_spe_evdivws, "__builtin_spe_evdivws", SPE_BUILTIN_EVDIVWS },
+ { 0, CODE_FOR_spe_evdivwu, "__builtin_spe_evdivwu", SPE_BUILTIN_EVDIVWU },
+ { 0, CODE_FOR_spe_eveqv, "__builtin_spe_eveqv", SPE_BUILTIN_EVEQV },
+ { 0, CODE_FOR_spe_evfsadd, "__builtin_spe_evfsadd", SPE_BUILTIN_EVFSADD },
+ { 0, CODE_FOR_spe_evfsdiv, "__builtin_spe_evfsdiv", SPE_BUILTIN_EVFSDIV },
+ { 0, CODE_FOR_spe_evfsmul, "__builtin_spe_evfsmul", SPE_BUILTIN_EVFSMUL },
+ { 0, CODE_FOR_spe_evfssub, "__builtin_spe_evfssub", SPE_BUILTIN_EVFSSUB },
+ { 0, CODE_FOR_spe_evmergehi, "__builtin_spe_evmergehi", SPE_BUILTIN_EVMERGEHI },
+ { 0, CODE_FOR_spe_evmergehilo, "__builtin_spe_evmergehilo", SPE_BUILTIN_EVMERGEHILO },
+ { 0, CODE_FOR_spe_evmergelo, "__builtin_spe_evmergelo", SPE_BUILTIN_EVMERGELO },
+ { 0, CODE_FOR_spe_evmergelohi, "__builtin_spe_evmergelohi", SPE_BUILTIN_EVMERGELOHI },
+ { 0, CODE_FOR_spe_evmhegsmfaa, "__builtin_spe_evmhegsmfaa", SPE_BUILTIN_EVMHEGSMFAA },
+ { 0, CODE_FOR_spe_evmhegsmfan, "__builtin_spe_evmhegsmfan", SPE_BUILTIN_EVMHEGSMFAN },
+ { 0, CODE_FOR_spe_evmhegsmiaa, "__builtin_spe_evmhegsmiaa", SPE_BUILTIN_EVMHEGSMIAA },
+ { 0, CODE_FOR_spe_evmhegsmian, "__builtin_spe_evmhegsmian", SPE_BUILTIN_EVMHEGSMIAN },
+ { 0, CODE_FOR_spe_evmhegumiaa, "__builtin_spe_evmhegumiaa", SPE_BUILTIN_EVMHEGUMIAA },
+ { 0, CODE_FOR_spe_evmhegumian, "__builtin_spe_evmhegumian", SPE_BUILTIN_EVMHEGUMIAN },
+ { 0, CODE_FOR_spe_evmhesmf, "__builtin_spe_evmhesmf", SPE_BUILTIN_EVMHESMF },
+ { 0, CODE_FOR_spe_evmhesmfa, "__builtin_spe_evmhesmfa", SPE_BUILTIN_EVMHESMFA },
+ { 0, CODE_FOR_spe_evmhesmfaaw, "__builtin_spe_evmhesmfaaw", SPE_BUILTIN_EVMHESMFAAW },
+ { 0, CODE_FOR_spe_evmhesmfanw, "__builtin_spe_evmhesmfanw", SPE_BUILTIN_EVMHESMFANW },
+ { 0, CODE_FOR_spe_evmhesmi, "__builtin_spe_evmhesmi", SPE_BUILTIN_EVMHESMI },
+ { 0, CODE_FOR_spe_evmhesmia, "__builtin_spe_evmhesmia", SPE_BUILTIN_EVMHESMIA },
+ { 0, CODE_FOR_spe_evmhesmiaaw, "__builtin_spe_evmhesmiaaw", SPE_BUILTIN_EVMHESMIAAW },
+ { 0, CODE_FOR_spe_evmhesmianw, "__builtin_spe_evmhesmianw", SPE_BUILTIN_EVMHESMIANW },
+ { 0, CODE_FOR_spe_evmhessf, "__builtin_spe_evmhessf", SPE_BUILTIN_EVMHESSF },
+ { 0, CODE_FOR_spe_evmhessfa, "__builtin_spe_evmhessfa", SPE_BUILTIN_EVMHESSFA },
+ { 0, CODE_FOR_spe_evmhessfaaw, "__builtin_spe_evmhessfaaw", SPE_BUILTIN_EVMHESSFAAW },
+ { 0, CODE_FOR_spe_evmhessfanw, "__builtin_spe_evmhessfanw", SPE_BUILTIN_EVMHESSFANW },
+ { 0, CODE_FOR_spe_evmhessiaaw, "__builtin_spe_evmhessiaaw", SPE_BUILTIN_EVMHESSIAAW },
+ { 0, CODE_FOR_spe_evmhessianw, "__builtin_spe_evmhessianw", SPE_BUILTIN_EVMHESSIANW },
+ { 0, CODE_FOR_spe_evmheumi, "__builtin_spe_evmheumi", SPE_BUILTIN_EVMHEUMI },
+ { 0, CODE_FOR_spe_evmheumia, "__builtin_spe_evmheumia", SPE_BUILTIN_EVMHEUMIA },
+ { 0, CODE_FOR_spe_evmheumiaaw, "__builtin_spe_evmheumiaaw", SPE_BUILTIN_EVMHEUMIAAW },
+ { 0, CODE_FOR_spe_evmheumianw, "__builtin_spe_evmheumianw", SPE_BUILTIN_EVMHEUMIANW },
+ { 0, CODE_FOR_spe_evmheusiaaw, "__builtin_spe_evmheusiaaw", SPE_BUILTIN_EVMHEUSIAAW },
+ { 0, CODE_FOR_spe_evmheusianw, "__builtin_spe_evmheusianw", SPE_BUILTIN_EVMHEUSIANW },
+ { 0, CODE_FOR_spe_evmhogsmfaa, "__builtin_spe_evmhogsmfaa", SPE_BUILTIN_EVMHOGSMFAA },
+ { 0, CODE_FOR_spe_evmhogsmfan, "__builtin_spe_evmhogsmfan", SPE_BUILTIN_EVMHOGSMFAN },
+ { 0, CODE_FOR_spe_evmhogsmiaa, "__builtin_spe_evmhogsmiaa", SPE_BUILTIN_EVMHOGSMIAA },
+ { 0, CODE_FOR_spe_evmhogsmian, "__builtin_spe_evmhogsmian", SPE_BUILTIN_EVMHOGSMIAN },
+ { 0, CODE_FOR_spe_evmhogumiaa, "__builtin_spe_evmhogumiaa", SPE_BUILTIN_EVMHOGUMIAA },
+ { 0, CODE_FOR_spe_evmhogumian, "__builtin_spe_evmhogumian", SPE_BUILTIN_EVMHOGUMIAN },
+ { 0, CODE_FOR_spe_evmhosmf, "__builtin_spe_evmhosmf", SPE_BUILTIN_EVMHOSMF },
+ { 0, CODE_FOR_spe_evmhosmfa, "__builtin_spe_evmhosmfa", SPE_BUILTIN_EVMHOSMFA },
+ { 0, CODE_FOR_spe_evmhosmfaaw, "__builtin_spe_evmhosmfaaw", SPE_BUILTIN_EVMHOSMFAAW },
+ { 0, CODE_FOR_spe_evmhosmfanw, "__builtin_spe_evmhosmfanw", SPE_BUILTIN_EVMHOSMFANW },
+ { 0, CODE_FOR_spe_evmhosmi, "__builtin_spe_evmhosmi", SPE_BUILTIN_EVMHOSMI },
+ { 0, CODE_FOR_spe_evmhosmia, "__builtin_spe_evmhosmia", SPE_BUILTIN_EVMHOSMIA },
+ { 0, CODE_FOR_spe_evmhosmiaaw, "__builtin_spe_evmhosmiaaw", SPE_BUILTIN_EVMHOSMIAAW },
+ { 0, CODE_FOR_spe_evmhosmianw, "__builtin_spe_evmhosmianw", SPE_BUILTIN_EVMHOSMIANW },
+ { 0, CODE_FOR_spe_evmhossf, "__builtin_spe_evmhossf", SPE_BUILTIN_EVMHOSSF },
+ { 0, CODE_FOR_spe_evmhossfa, "__builtin_spe_evmhossfa", SPE_BUILTIN_EVMHOSSFA },
+ { 0, CODE_FOR_spe_evmhossfaaw, "__builtin_spe_evmhossfaaw", SPE_BUILTIN_EVMHOSSFAAW },
+ { 0, CODE_FOR_spe_evmhossfanw, "__builtin_spe_evmhossfanw", SPE_BUILTIN_EVMHOSSFANW },
+ { 0, CODE_FOR_spe_evmhossiaaw, "__builtin_spe_evmhossiaaw", SPE_BUILTIN_EVMHOSSIAAW },
+ { 0, CODE_FOR_spe_evmhossianw, "__builtin_spe_evmhossianw", SPE_BUILTIN_EVMHOSSIANW },
+ { 0, CODE_FOR_spe_evmhoumi, "__builtin_spe_evmhoumi", SPE_BUILTIN_EVMHOUMI },
+ { 0, CODE_FOR_spe_evmhoumia, "__builtin_spe_evmhoumia", SPE_BUILTIN_EVMHOUMIA },
+ { 0, CODE_FOR_spe_evmhoumiaaw, "__builtin_spe_evmhoumiaaw", SPE_BUILTIN_EVMHOUMIAAW },
+ { 0, CODE_FOR_spe_evmhoumianw, "__builtin_spe_evmhoumianw", SPE_BUILTIN_EVMHOUMIANW },
+ { 0, CODE_FOR_spe_evmhousiaaw, "__builtin_spe_evmhousiaaw", SPE_BUILTIN_EVMHOUSIAAW },
+ { 0, CODE_FOR_spe_evmhousianw, "__builtin_spe_evmhousianw", SPE_BUILTIN_EVMHOUSIANW },
+ { 0, CODE_FOR_spe_evmwhsmf, "__builtin_spe_evmwhsmf", SPE_BUILTIN_EVMWHSMF },
+ { 0, CODE_FOR_spe_evmwhsmfa, "__builtin_spe_evmwhsmfa", SPE_BUILTIN_EVMWHSMFA },
+ { 0, CODE_FOR_spe_evmwhsmi, "__builtin_spe_evmwhsmi", SPE_BUILTIN_EVMWHSMI },
+ { 0, CODE_FOR_spe_evmwhsmia, "__builtin_spe_evmwhsmia", SPE_BUILTIN_EVMWHSMIA },
+ { 0, CODE_FOR_spe_evmwhssf, "__builtin_spe_evmwhssf", SPE_BUILTIN_EVMWHSSF },
+ { 0, CODE_FOR_spe_evmwhssfa, "__builtin_spe_evmwhssfa", SPE_BUILTIN_EVMWHSSFA },
+ { 0, CODE_FOR_spe_evmwhumi, "__builtin_spe_evmwhumi", SPE_BUILTIN_EVMWHUMI },
+ { 0, CODE_FOR_spe_evmwhumia, "__builtin_spe_evmwhumia", SPE_BUILTIN_EVMWHUMIA },
+ { 0, CODE_FOR_spe_evmwlsmf, "__builtin_spe_evmwlsmf", SPE_BUILTIN_EVMWLSMF },
+ { 0, CODE_FOR_spe_evmwlsmfa, "__builtin_spe_evmwlsmfa", SPE_BUILTIN_EVMWLSMFA },
+ { 0, CODE_FOR_spe_evmwlsmfaaw, "__builtin_spe_evmwlsmfaaw", SPE_BUILTIN_EVMWLSMFAAW },
+ { 0, CODE_FOR_spe_evmwlsmfanw, "__builtin_spe_evmwlsmfanw", SPE_BUILTIN_EVMWLSMFANW },
+ { 0, CODE_FOR_spe_evmwlsmiaaw, "__builtin_spe_evmwlsmiaaw", SPE_BUILTIN_EVMWLSMIAAW },
+ { 0, CODE_FOR_spe_evmwlsmianw, "__builtin_spe_evmwlsmianw", SPE_BUILTIN_EVMWLSMIANW },
+ { 0, CODE_FOR_spe_evmwlssf, "__builtin_spe_evmwlssf", SPE_BUILTIN_EVMWLSSF },
+ { 0, CODE_FOR_spe_evmwlssfa, "__builtin_spe_evmwlssfa", SPE_BUILTIN_EVMWLSSFA },
+ { 0, CODE_FOR_spe_evmwlssfaaw, "__builtin_spe_evmwlssfaaw", SPE_BUILTIN_EVMWLSSFAAW },
+ { 0, CODE_FOR_spe_evmwlssfanw, "__builtin_spe_evmwlssfanw", SPE_BUILTIN_EVMWLSSFANW },
+ { 0, CODE_FOR_spe_evmwlssiaaw, "__builtin_spe_evmwlssiaaw", SPE_BUILTIN_EVMWLSSIAAW },
+ { 0, CODE_FOR_spe_evmwlssianw, "__builtin_spe_evmwlssianw", SPE_BUILTIN_EVMWLSSIANW },
+ { 0, CODE_FOR_spe_evmwlumi, "__builtin_spe_evmwlumi", SPE_BUILTIN_EVMWLUMI },
+ { 0, CODE_FOR_spe_evmwlumia, "__builtin_spe_evmwlumia", SPE_BUILTIN_EVMWLUMIA },
+ { 0, CODE_FOR_spe_evmwlumiaaw, "__builtin_spe_evmwlumiaaw", SPE_BUILTIN_EVMWLUMIAAW },
+ { 0, CODE_FOR_spe_evmwlumianw, "__builtin_spe_evmwlumianw", SPE_BUILTIN_EVMWLUMIANW },
+ { 0, CODE_FOR_spe_evmwlusiaaw, "__builtin_spe_evmwlusiaaw", SPE_BUILTIN_EVMWLUSIAAW },
+ { 0, CODE_FOR_spe_evmwlusianw, "__builtin_spe_evmwlusianw", SPE_BUILTIN_EVMWLUSIANW },
+ { 0, CODE_FOR_spe_evmwsmf, "__builtin_spe_evmwsmf", SPE_BUILTIN_EVMWSMF },
+ { 0, CODE_FOR_spe_evmwsmfa, "__builtin_spe_evmwsmfa", SPE_BUILTIN_EVMWSMFA },
+ { 0, CODE_FOR_spe_evmwsmfaa, "__builtin_spe_evmwsmfaa", SPE_BUILTIN_EVMWSMFAA },
+ { 0, CODE_FOR_spe_evmwsmfan, "__builtin_spe_evmwsmfan", SPE_BUILTIN_EVMWSMFAN },
+ { 0, CODE_FOR_spe_evmwsmi, "__builtin_spe_evmwsmi", SPE_BUILTIN_EVMWSMI },
+ { 0, CODE_FOR_spe_evmwsmia, "__builtin_spe_evmwsmia", SPE_BUILTIN_EVMWSMIA },
+ { 0, CODE_FOR_spe_evmwsmiaa, "__builtin_spe_evmwsmiaa", SPE_BUILTIN_EVMWSMIAA },
+ { 0, CODE_FOR_spe_evmwsmian, "__builtin_spe_evmwsmian", SPE_BUILTIN_EVMWSMIAN },
+ { 0, CODE_FOR_spe_evmwssf, "__builtin_spe_evmwssf", SPE_BUILTIN_EVMWSSF },
+ { 0, CODE_FOR_spe_evmwssfa, "__builtin_spe_evmwssfa", SPE_BUILTIN_EVMWSSFA },
+ { 0, CODE_FOR_spe_evmwssfaa, "__builtin_spe_evmwssfaa", SPE_BUILTIN_EVMWSSFAA },
+ { 0, CODE_FOR_spe_evmwssfan, "__builtin_spe_evmwssfan", SPE_BUILTIN_EVMWSSFAN },
+ { 0, CODE_FOR_spe_evmwumi, "__builtin_spe_evmwumi", SPE_BUILTIN_EVMWUMI },
+ { 0, CODE_FOR_spe_evmwumia, "__builtin_spe_evmwumia", SPE_BUILTIN_EVMWUMIA },
+ { 0, CODE_FOR_spe_evmwumiaa, "__builtin_spe_evmwumiaa", SPE_BUILTIN_EVMWUMIAA },
+ { 0, CODE_FOR_spe_evmwumian, "__builtin_spe_evmwumian", SPE_BUILTIN_EVMWUMIAN },
+ { 0, CODE_FOR_spe_evnand, "__builtin_spe_evnand", SPE_BUILTIN_EVNAND },
+ { 0, CODE_FOR_spe_evnor, "__builtin_spe_evnor", SPE_BUILTIN_EVNOR },
+ { 0, CODE_FOR_spe_evor, "__builtin_spe_evor", SPE_BUILTIN_EVOR },
+ { 0, CODE_FOR_spe_evorc, "__builtin_spe_evorc", SPE_BUILTIN_EVORC },
+ { 0, CODE_FOR_spe_evrlw, "__builtin_spe_evrlw", SPE_BUILTIN_EVRLW },
+ { 0, CODE_FOR_spe_evslw, "__builtin_spe_evslw", SPE_BUILTIN_EVSLW },
+ { 0, CODE_FOR_spe_evsrws, "__builtin_spe_evsrws", SPE_BUILTIN_EVSRWS },
+ { 0, CODE_FOR_spe_evsrwu, "__builtin_spe_evsrwu", SPE_BUILTIN_EVSRWU },
+ { 0, CODE_FOR_spe_evsubfw, "__builtin_spe_evsubfw", SPE_BUILTIN_EVSUBFW },
+
+ /* SPE binary operations expecting a 5-bit unsigned literal. */
+ { 0, CODE_FOR_spe_evaddiw, "__builtin_spe_evaddiw", SPE_BUILTIN_EVADDIW },
+
+ { 0, CODE_FOR_spe_evrlwi, "__builtin_spe_evrlwi", SPE_BUILTIN_EVRLWI },
+ { 0, CODE_FOR_spe_evslwi, "__builtin_spe_evslwi", SPE_BUILTIN_EVSLWI },
+ { 0, CODE_FOR_spe_evsrwis, "__builtin_spe_evsrwis", SPE_BUILTIN_EVSRWIS },
+ { 0, CODE_FOR_spe_evsrwiu, "__builtin_spe_evsrwiu", SPE_BUILTIN_EVSRWIU },
+ { 0, CODE_FOR_spe_evsubifw, "__builtin_spe_evsubifw", SPE_BUILTIN_EVSUBIFW },
+ { 0, CODE_FOR_spe_evmwhssfaa, "__builtin_spe_evmwhssfaa", SPE_BUILTIN_EVMWHSSFAA },
+ { 0, CODE_FOR_spe_evmwhssmaa, "__builtin_spe_evmwhssmaa", SPE_BUILTIN_EVMWHSSMAA },
+ { 0, CODE_FOR_spe_evmwhsmfaa, "__builtin_spe_evmwhsmfaa", SPE_BUILTIN_EVMWHSMFAA },
+ { 0, CODE_FOR_spe_evmwhsmiaa, "__builtin_spe_evmwhsmiaa", SPE_BUILTIN_EVMWHSMIAA },
+ { 0, CODE_FOR_spe_evmwhusiaa, "__builtin_spe_evmwhusiaa", SPE_BUILTIN_EVMWHUSIAA },
+ { 0, CODE_FOR_spe_evmwhumiaa, "__builtin_spe_evmwhumiaa", SPE_BUILTIN_EVMWHUMIAA },
+ { 0, CODE_FOR_spe_evmwhssfan, "__builtin_spe_evmwhssfan", SPE_BUILTIN_EVMWHSSFAN },
+ { 0, CODE_FOR_spe_evmwhssian, "__builtin_spe_evmwhssian", SPE_BUILTIN_EVMWHSSIAN },
+ { 0, CODE_FOR_spe_evmwhsmfan, "__builtin_spe_evmwhsmfan", SPE_BUILTIN_EVMWHSMFAN },
+ { 0, CODE_FOR_spe_evmwhsmian, "__builtin_spe_evmwhsmian", SPE_BUILTIN_EVMWHSMIAN },
+ { 0, CODE_FOR_spe_evmwhusian, "__builtin_spe_evmwhusian", SPE_BUILTIN_EVMWHUSIAN },
+ { 0, CODE_FOR_spe_evmwhumian, "__builtin_spe_evmwhumian", SPE_BUILTIN_EVMWHUMIAN },
+ { 0, CODE_FOR_spe_evmwhgssfaa, "__builtin_spe_evmwhgssfaa", SPE_BUILTIN_EVMWHGSSFAA },
+ { 0, CODE_FOR_spe_evmwhgsmfaa, "__builtin_spe_evmwhgsmfaa", SPE_BUILTIN_EVMWHGSMFAA },
+ { 0, CODE_FOR_spe_evmwhgsmiaa, "__builtin_spe_evmwhgsmiaa", SPE_BUILTIN_EVMWHGSMIAA },
+ { 0, CODE_FOR_spe_evmwhgumiaa, "__builtin_spe_evmwhgumiaa", SPE_BUILTIN_EVMWHGUMIAA },
+ { 0, CODE_FOR_spe_evmwhgssfan, "__builtin_spe_evmwhgssfan", SPE_BUILTIN_EVMWHGSSFAN },
+ { 0, CODE_FOR_spe_evmwhgsmfan, "__builtin_spe_evmwhgsmfan", SPE_BUILTIN_EVMWHGSMFAN },
+ { 0, CODE_FOR_spe_evmwhgsmian, "__builtin_spe_evmwhgsmian", SPE_BUILTIN_EVMWHGSMIAN },
+ { 0, CODE_FOR_spe_evmwhgumian, "__builtin_spe_evmwhgumian", SPE_BUILTIN_EVMWHGUMIAN },
+ { 0, CODE_FOR_spe_brinc, "__builtin_spe_brinc", SPE_BUILTIN_BRINC },
+
+ /* Place-holder. Leave as last binary SPE builtin. */
+ { 0, CODE_FOR_spe_evxor, "__builtin_spe_evxor", SPE_BUILTIN_EVXOR },
};
/* AltiVec predicates. */
@@ -3670,6 +3928,42 @@ static const struct builtin_description_predicates bdesc_altivec_preds[] =
{ MASK_ALTIVEC, CODE_FOR_altivec_predicate_v16qi, "*vcmpgtub.", "__builtin_altivec_vcmpgtub_p", ALTIVEC_BUILTIN_VCMPGTUB_P }
};
+/* SPE predicates. */
+static struct builtin_description bdesc_spe_predicates[] =
+{
+ /* Place-holder. Leave as first. */
+ { 0, CODE_FOR_spe_evcmpeq, "__builtin_spe_evcmpeq", SPE_BUILTIN_EVCMPEQ },
+ { 0, CODE_FOR_spe_evcmpgts, "__builtin_spe_evcmpgts", SPE_BUILTIN_EVCMPGTS },
+ { 0, CODE_FOR_spe_evcmpgtu, "__builtin_spe_evcmpgtu", SPE_BUILTIN_EVCMPGTU },
+ { 0, CODE_FOR_spe_evcmplts, "__builtin_spe_evcmplts", SPE_BUILTIN_EVCMPLTS },
+ { 0, CODE_FOR_spe_evcmpltu, "__builtin_spe_evcmpltu", SPE_BUILTIN_EVCMPLTU },
+ { 0, CODE_FOR_spe_evfscmpeq, "__builtin_spe_evfscmpeq", SPE_BUILTIN_EVFSCMPEQ },
+ { 0, CODE_FOR_spe_evfscmpgt, "__builtin_spe_evfscmpgt", SPE_BUILTIN_EVFSCMPGT },
+ { 0, CODE_FOR_spe_evfscmplt, "__builtin_spe_evfscmplt", SPE_BUILTIN_EVFSCMPLT },
+ { 0, CODE_FOR_spe_evfststeq, "__builtin_spe_evfststeq", SPE_BUILTIN_EVFSTSTEQ },
+ { 0, CODE_FOR_spe_evfststgt, "__builtin_spe_evfststgt", SPE_BUILTIN_EVFSTSTGT },
+ /* Place-holder. Leave as last. */
+ { 0, CODE_FOR_spe_evfststlt, "__builtin_spe_evfststlt", SPE_BUILTIN_EVFSTSTLT },
+};
+
+/* SPE evsel predicates. */
+static struct builtin_description bdesc_spe_evsel[] =
+{
+ /* Place-holder. Leave as first. */
+ { 0, CODE_FOR_spe_evcmpgts, "__builtin_spe_evsel_gts", SPE_BUILTIN_EVSEL_CMPGTS },
+ { 0, CODE_FOR_spe_evcmpgtu, "__builtin_spe_evsel_gtu", SPE_BUILTIN_EVSEL_CMPGTU },
+ { 0, CODE_FOR_spe_evcmplts, "__builtin_spe_evsel_lts", SPE_BUILTIN_EVSEL_CMPLTS },
+ { 0, CODE_FOR_spe_evcmpltu, "__builtin_spe_evsel_ltu", SPE_BUILTIN_EVSEL_CMPLTU },
+ { 0, CODE_FOR_spe_evcmpeq, "__builtin_spe_evsel_eq", SPE_BUILTIN_EVSEL_CMPEQ },
+ { 0, CODE_FOR_spe_evfscmpgt, "__builtin_spe_evsel_fsgt", SPE_BUILTIN_EVSEL_FSCMPGT },
+ { 0, CODE_FOR_spe_evfscmplt, "__builtin_spe_evsel_fslt", SPE_BUILTIN_EVSEL_FSCMPLT },
+ { 0, CODE_FOR_spe_evfscmpeq, "__builtin_spe_evsel_fseq", SPE_BUILTIN_EVSEL_FSCMPEQ },
+ { 0, CODE_FOR_spe_evfststgt, "__builtin_spe_evsel_fststgt", SPE_BUILTIN_EVSEL_FSTSTGT },
+ { 0, CODE_FOR_spe_evfststlt, "__builtin_spe_evsel_fststlt", SPE_BUILTIN_EVSEL_FSTSTLT },
+ /* Place-holder. Leave as last. */
+ { 0, CODE_FOR_spe_evfststeq, "__builtin_spe_evsel_fststeq", SPE_BUILTIN_EVSEL_FSTSTEQ },
+};
+
/* ABS* opreations. */
static const struct builtin_description bdesc_abs[] =
@@ -3686,7 +3980,7 @@ static const struct builtin_description bdesc_abs[] =
/* Simple unary operations: VECb = foo (unsigned literal) or VECb =
foo (VECa). */
-static const struct builtin_description bdesc_1arg[] =
+static struct builtin_description bdesc_1arg[] =
{
{ MASK_ALTIVEC, CODE_FOR_altivec_vexptefp, "__builtin_altivec_vexptefp", ALTIVEC_BUILTIN_VEXPTEFP },
{ MASK_ALTIVEC, CODE_FOR_altivec_vlogefp, "__builtin_altivec_vlogefp", ALTIVEC_BUILTIN_VLOGEFP },
@@ -3705,6 +3999,42 @@ static const struct builtin_description bdesc_1arg[] =
{ MASK_ALTIVEC, CODE_FOR_altivec_vupklsb, "__builtin_altivec_vupklsb", ALTIVEC_BUILTIN_VUPKLSB },
{ MASK_ALTIVEC, CODE_FOR_altivec_vupklpx, "__builtin_altivec_vupklpx", ALTIVEC_BUILTIN_VUPKLPX },
{ MASK_ALTIVEC, CODE_FOR_altivec_vupklsh, "__builtin_altivec_vupklsh", ALTIVEC_BUILTIN_VUPKLSH },
+
+ /* The SPE unary builtins must start with SPE_BUILTIN_EVABS and
+ end with SPE_BUILTIN_EVSUBFUSIAAW. */
+ { 0, CODE_FOR_spe_evabs, "__builtin_spe_evabs", SPE_BUILTIN_EVABS },
+ { 0, CODE_FOR_spe_evaddsmiaaw, "__builtin_spe_evaddsmiaaw", SPE_BUILTIN_EVADDSMIAAW },
+ { 0, CODE_FOR_spe_evaddssiaaw, "__builtin_spe_evaddssiaaw", SPE_BUILTIN_EVADDSSIAAW },
+ { 0, CODE_FOR_spe_evaddumiaaw, "__builtin_spe_evaddumiaaw", SPE_BUILTIN_EVADDUMIAAW },
+ { 0, CODE_FOR_spe_evaddusiaaw, "__builtin_spe_evaddusiaaw", SPE_BUILTIN_EVADDUSIAAW },
+ { 0, CODE_FOR_spe_evcntlsw, "__builtin_spe_evcntlsw", SPE_BUILTIN_EVCNTLSW },
+ { 0, CODE_FOR_spe_evcntlzw, "__builtin_spe_evcntlzw", SPE_BUILTIN_EVCNTLZW },
+ { 0, CODE_FOR_spe_evextsb, "__builtin_spe_evextsb", SPE_BUILTIN_EVEXTSB },
+ { 0, CODE_FOR_spe_evextsh, "__builtin_spe_evextsh", SPE_BUILTIN_EVEXTSH },
+ { 0, CODE_FOR_spe_evfsabs, "__builtin_spe_evfsabs", SPE_BUILTIN_EVFSABS },
+ { 0, CODE_FOR_spe_evfscfsf, "__builtin_spe_evfscfsf", SPE_BUILTIN_EVFSCFSF },
+ { 0, CODE_FOR_spe_evfscfsi, "__builtin_spe_evfscfsi", SPE_BUILTIN_EVFSCFSI },
+ { 0, CODE_FOR_spe_evfscfuf, "__builtin_spe_evfscfuf", SPE_BUILTIN_EVFSCFUF },
+ { 0, CODE_FOR_spe_evfscfui, "__builtin_spe_evfscfui", SPE_BUILTIN_EVFSCFUI },
+ { 0, CODE_FOR_spe_evfsctsf, "__builtin_spe_evfsctsf", SPE_BUILTIN_EVFSCTSF },
+ { 0, CODE_FOR_spe_evfsctsi, "__builtin_spe_evfsctsi", SPE_BUILTIN_EVFSCTSI },
+ { 0, CODE_FOR_spe_evfsctsiz, "__builtin_spe_evfsctsiz", SPE_BUILTIN_EVFSCTSIZ },
+ { 0, CODE_FOR_spe_evfsctuf, "__builtin_spe_evfsctuf", SPE_BUILTIN_EVFSCTUF },
+ { 0, CODE_FOR_spe_evfsctui, "__builtin_spe_evfsctui", SPE_BUILTIN_EVFSCTUI },
+ { 0, CODE_FOR_spe_evfsctuiz, "__builtin_spe_evfsctuiz", SPE_BUILTIN_EVFSCTUIZ },
+ { 0, CODE_FOR_spe_evfsnabs, "__builtin_spe_evfsnabs", SPE_BUILTIN_EVFSNABS },
+ { 0, CODE_FOR_spe_evfsneg, "__builtin_spe_evfsneg", SPE_BUILTIN_EVFSNEG },
+ { 0, CODE_FOR_spe_evmra, "__builtin_spe_evmra", SPE_BUILTIN_EVMRA },
+ { 0, CODE_FOR_spe_evneg, "__builtin_spe_evneg", SPE_BUILTIN_EVNEG },
+ { 0, CODE_FOR_spe_evrndw, "__builtin_spe_evrndw", SPE_BUILTIN_EVRNDW },
+ { 0, CODE_FOR_spe_evsubfsmiaaw, "__builtin_spe_evsubfsmiaaw", SPE_BUILTIN_EVSUBFSMIAAW },
+ { 0, CODE_FOR_spe_evsubfssiaaw, "__builtin_spe_evsubfssiaaw", SPE_BUILTIN_EVSUBFSSIAAW },
+ { 0, CODE_FOR_spe_evsubfumiaaw, "__builtin_spe_evsubfumiaaw", SPE_BUILTIN_EVSUBFUMIAAW },
+ { 0, CODE_FOR_spe_evsplatfi, "__builtin_spe_evsplatfi", SPE_BUILTIN_EVSPLATFI },
+ { 0, CODE_FOR_spe_evsplati, "__builtin_spe_evsplati", SPE_BUILTIN_EVSPLATI },
+
+ /* Place-holder. Leave as last unary SPE builtin. */
+ { 0, CODE_FOR_spe_evsubfusiaaw, "__builtin_spe_evsubfusiaaw", SPE_BUILTIN_EVSUBFUSIAAW },
};
static rtx
@@ -3729,6 +4059,8 @@ rs6000_expand_unop_builtin (icode, arglist, target)
case CODE_FOR_altivec_vspltisb:
case CODE_FOR_altivec_vspltish:
case CODE_FOR_altivec_vspltisw:
+ case CODE_FOR_spe_evsplatfi:
+ case CODE_FOR_spe_evsplati:
if (GET_CODE (op0) != CONST_INT
|| INTVAL (op0) > 0x1f
|| INTVAL (op0) < -0x1f)
@@ -3821,6 +4153,22 @@ rs6000_expand_binop_builtin (icode, arglist, target)
case CODE_FOR_altivec_vspltb:
case CODE_FOR_altivec_vsplth:
case CODE_FOR_altivec_vspltw:
+ case CODE_FOR_spe_evaddiw:
+ case CODE_FOR_spe_evldd:
+ case CODE_FOR_spe_evldh:
+ case CODE_FOR_spe_evldw:
+ case CODE_FOR_spe_evlhhesplat:
+ case CODE_FOR_spe_evlhhossplat:
+ case CODE_FOR_spe_evlhhousplat:
+ case CODE_FOR_spe_evlwhe:
+ case CODE_FOR_spe_evlwhos:
+ case CODE_FOR_spe_evlwhou:
+ case CODE_FOR_spe_evlwhsplat:
+ case CODE_FOR_spe_evlwwsplat:
+ case CODE_FOR_spe_evrlwi:
+ case CODE_FOR_spe_evslwi:
+ case CODE_FOR_spe_evsrwis:
+ case CODE_FOR_spe_evsrwiu:
if (TREE_CODE (arg1) != INTEGER_CST
|| TREE_INT_CST_LOW (arg1) & ~0x1f)
{
@@ -4152,7 +4500,7 @@ altivec_expand_dst_builtin (exp, target, expandedp)
enum machine_mode mode0, mode1, mode2;
rtx pat, op0, op1, op2;
struct builtin_description *d;
- int i;
+ size_t i;
*expandedp = false;
@@ -4352,6 +4700,328 @@ altivec_expand_builtin (exp, target, expandedp)
return NULL_RTX;
}
+/* Binops that need to be initialized manually, but can be expanded
+ automagically by rs6000_expand_binop_builtin. */
+static struct builtin_description bdesc_2arg_spe[] =
+{
+ { 0, CODE_FOR_spe_evlddx, "__builtin_spe_evlddx", SPE_BUILTIN_EVLDDX },
+ { 0, CODE_FOR_spe_evldwx, "__builtin_spe_evldwx", SPE_BUILTIN_EVLDWX },
+ { 0, CODE_FOR_spe_evldhx, "__builtin_spe_evldhx", SPE_BUILTIN_EVLDHX },
+ { 0, CODE_FOR_spe_evlwhex, "__builtin_spe_evlwhex", SPE_BUILTIN_EVLWHEX },
+ { 0, CODE_FOR_spe_evlwhoux, "__builtin_spe_evlwhoux", SPE_BUILTIN_EVLWHOUX },
+ { 0, CODE_FOR_spe_evlwhosx, "__builtin_spe_evlwhosx", SPE_BUILTIN_EVLWHOSX },
+ { 0, CODE_FOR_spe_evlwwsplatx, "__builtin_spe_evlwwsplatx", SPE_BUILTIN_EVLWWSPLATX },
+ { 0, CODE_FOR_spe_evlwhsplatx, "__builtin_spe_evlwhsplatx", SPE_BUILTIN_EVLWHSPLATX },
+ { 0, CODE_FOR_spe_evlhhesplatx, "__builtin_spe_evlhhesplatx", SPE_BUILTIN_EVLHHESPLATX },
+ { 0, CODE_FOR_spe_evlhhousplatx, "__builtin_spe_evlhhousplatx", SPE_BUILTIN_EVLHHOUSPLATX },
+ { 0, CODE_FOR_spe_evlhhossplatx, "__builtin_spe_evlhhossplatx", SPE_BUILTIN_EVLHHOSSPLATX },
+ { 0, CODE_FOR_spe_evldd, "__builtin_spe_evldd", SPE_BUILTIN_EVLDD },
+ { 0, CODE_FOR_spe_evldw, "__builtin_spe_evldw", SPE_BUILTIN_EVLDW },
+ { 0, CODE_FOR_spe_evldh, "__builtin_spe_evldh", SPE_BUILTIN_EVLDH },
+ { 0, CODE_FOR_spe_evlwhe, "__builtin_spe_evlwhe", SPE_BUILTIN_EVLWHE },
+ { 0, CODE_FOR_spe_evlwhou, "__builtin_spe_evlwhou", SPE_BUILTIN_EVLWHOU },
+ { 0, CODE_FOR_spe_evlwhos, "__builtin_spe_evlwhos", SPE_BUILTIN_EVLWHOS },
+ { 0, CODE_FOR_spe_evlwwsplat, "__builtin_spe_evlwwsplat", SPE_BUILTIN_EVLWWSPLAT },
+ { 0, CODE_FOR_spe_evlwhsplat, "__builtin_spe_evlwhsplat", SPE_BUILTIN_EVLWHSPLAT },
+ { 0, CODE_FOR_spe_evlhhesplat, "__builtin_spe_evlhhesplat", SPE_BUILTIN_EVLHHESPLAT },
+ { 0, CODE_FOR_spe_evlhhousplat, "__builtin_spe_evlhhousplat", SPE_BUILTIN_EVLHHOUSPLAT },
+ { 0, CODE_FOR_spe_evlhhossplat, "__builtin_spe_evlhhossplat", SPE_BUILTIN_EVLHHOSSPLAT }
+};
+
+/* Expand the builtin in EXP and store the result in TARGET. Store
+ true in *EXPANDEDP if we found a builtin to expand.
+
+ This expands the SPE builtins that are not simple unary and binary
+ operations. */
+static rtx
+spe_expand_builtin (exp, target, expandedp)
+ tree exp;
+ rtx target;
+ bool *expandedp;
+{
+ tree fndecl = TREE_OPERAND (TREE_OPERAND (exp, 0), 0);
+ tree arglist = TREE_OPERAND (exp, 1);
+ tree arg1, arg0;
+ unsigned int fcode = DECL_FUNCTION_CODE (fndecl);
+ enum insn_code icode;
+ enum machine_mode tmode, mode0;
+ rtx pat, op0;
+ struct builtin_description *d;
+ size_t i;
+
+ *expandedp = true;
+
+ /* Syntax check for a 5-bit unsigned immediate. */
+ switch (fcode)
+ {
+ case SPE_BUILTIN_EVSTDD:
+ case SPE_BUILTIN_EVSTDH:
+ case SPE_BUILTIN_EVSTDW:
+ case SPE_BUILTIN_EVSTWHE:
+ case SPE_BUILTIN_EVSTWHO:
+ case SPE_BUILTIN_EVSTWWE:
+ case SPE_BUILTIN_EVSTWWO:
+ arg1 = TREE_VALUE (TREE_CHAIN (TREE_CHAIN (arglist)));
+ if (TREE_CODE (arg1) != INTEGER_CST
+ || TREE_INT_CST_LOW (arg1) & ~0x1f)
+ {
+ error ("argument 2 must be a 5-bit unsigned literal");
+ return const0_rtx;
+ }
+ break;
+ default:
+ break;
+ }
+
+ d = (struct builtin_description *) bdesc_2arg_spe;
+ for (i = 0; i < ARRAY_SIZE (bdesc_2arg_spe); ++i, ++d)
+ if (d->code == fcode)
+ return rs6000_expand_binop_builtin (d->icode, arglist, target);
+
+ d = (struct builtin_description *) bdesc_spe_predicates;
+ for (i = 0; i < ARRAY_SIZE (bdesc_spe_predicates); ++i, ++d)
+ if (d->code == fcode)
+ return spe_expand_predicate_builtin (d->icode, arglist, target);
+
+ d = (struct builtin_description *) bdesc_spe_evsel;
+ for (i = 0; i < ARRAY_SIZE (bdesc_spe_evsel); ++i, ++d)
+ if (d->code == fcode)
+ return spe_expand_evsel_builtin (d->icode, arglist, target);
+
+ switch (fcode)
+ {
+ case SPE_BUILTIN_EVSTDDX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstddx, arglist);
+ case SPE_BUILTIN_EVSTDHX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstdhx, arglist);
+ case SPE_BUILTIN_EVSTDWX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstdwx, arglist);
+ case SPE_BUILTIN_EVSTWHEX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwhex, arglist);
+ case SPE_BUILTIN_EVSTWHOX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwhox, arglist);
+ case SPE_BUILTIN_EVSTWWEX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwwex, arglist);
+ case SPE_BUILTIN_EVSTWWOX:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwwox, arglist);
+ case SPE_BUILTIN_EVSTDD:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstdd, arglist);
+ case SPE_BUILTIN_EVSTDH:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstdh, arglist);
+ case SPE_BUILTIN_EVSTDW:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstdw, arglist);
+ case SPE_BUILTIN_EVSTWHE:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwhe, arglist);
+ case SPE_BUILTIN_EVSTWHO:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwho, arglist);
+ case SPE_BUILTIN_EVSTWWE:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwwe, arglist);
+ case SPE_BUILTIN_EVSTWWO:
+ return altivec_expand_stv_builtin (CODE_FOR_spe_evstwwo, arglist);
+ case SPE_BUILTIN_MFSPEFSCR:
+ icode = CODE_FOR_spe_mfspefscr;
+ tmode = insn_data[icode].operand[0].mode;
+
+ if (target == 0
+ || GET_MODE (target) != tmode
+ || ! (*insn_data[icode].operand[0].predicate) (target, tmode))
+ target = gen_reg_rtx (tmode);
+
+ pat = GEN_FCN (icode) (target);
+ if (! pat)
+ return 0;
+ emit_insn (pat);
+ return target;
+ case SPE_BUILTIN_MTSPEFSCR:
+ icode = CODE_FOR_spe_mtspefscr;
+ arg0 = TREE_VALUE (arglist);
+ op0 = expand_expr (arg0, NULL_RTX, VOIDmode, 0);
+ mode0 = insn_data[icode].operand[0].mode;
+
+ if (arg0 == error_mark_node)
+ return const0_rtx;
+
+ if (! (*insn_data[icode].operand[0].predicate) (op0, mode0))
+ op0 = copy_to_mode_reg (mode0, op0);
+
+ pat = GEN_FCN (icode) (op0);
+ if (pat)
+ emit_insn (pat);
+ return NULL_RTX;
+ default:
+ break;
+ }
+
+ *expandedp = false;
+ return NULL_RTX;
+}
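The 5-bit syntax check at the top of spe_expand_builtin is what a user sees when the last argument of an SPE store builtin is not a small constant. A hedged sketch of the user-visible behaviour (the vector typedef below is an assumption; the real one lives in the new spe.h):

/* Illustrative only.  __builtin_spe_evstdd is registered later in
   spe_init_builtins with type (V2SI, V2SI *, char).  */
typedef int v2si __attribute__ ((vector_size (8)));

void
store_pair (v2si v, v2si *p)
{
  __builtin_spe_evstdd (v, p, 31);   /* constant in 0..31: accepted */
  /* A non-constant or out-of-range last argument is rejected with
     "argument 2 must be a 5-bit unsigned literal".  */
}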
+
+static rtx
+spe_expand_predicate_builtin (icode, arglist, target)
+ enum insn_code icode;
+ tree arglist;
+ rtx target;
+{
+ rtx pat, scratch, tmp;
+ tree form = TREE_VALUE (arglist);
+ tree arg0 = TREE_VALUE (TREE_CHAIN (arglist));
+ tree arg1 = TREE_VALUE (TREE_CHAIN (TREE_CHAIN (arglist)));
+ rtx op0 = expand_expr (arg0, NULL_RTX, VOIDmode, 0);
+ rtx op1 = expand_expr (arg1, NULL_RTX, VOIDmode, 0);
+ enum machine_mode mode0 = insn_data[icode].operand[1].mode;
+ enum machine_mode mode1 = insn_data[icode].operand[2].mode;
+ int form_int;
+ enum rtx_code code;
+
+ if (TREE_CODE (form) != INTEGER_CST)
+ {
+ error ("argument 1 of __builtin_spe_predicate must be a constant");
+ return const0_rtx;
+ }
+ else
+ form_int = TREE_INT_CST_LOW (form);
+
+ if (mode0 != mode1)
+ abort ();
+
+ if (arg0 == error_mark_node || arg1 == error_mark_node)
+ return const0_rtx;
+
+ if (target == 0
+ || GET_MODE (target) != SImode
+ || ! (*insn_data[icode].operand[0].predicate) (target, SImode))
+ target = gen_reg_rtx (SImode);
+
+ if (! (*insn_data[icode].operand[1].predicate) (op0, mode0))
+ op0 = copy_to_mode_reg (mode0, op0);
+ if (! (*insn_data[icode].operand[2].predicate) (op1, mode1))
+ op1 = copy_to_mode_reg (mode1, op1);
+
+ scratch = gen_reg_rtx (CCmode);
+
+ pat = GEN_FCN (icode) (scratch, op0, op1);
+ if (! pat)
+ return const0_rtx;
+ emit_insn (pat);
+
+ /* There are 4 variants for each predicate: _any_, _all_, _upper_,
+ _lower_. We use one compare, but look in different bits of the
+ CR for each variant.
+
+ There are 2 elements in each SPE simd type (upper/lower). The CR
+ bits are set as follows:
+
+ BIT0 | BIT 1 | BIT 2 | BIT 3
+ U | L | (U | L) | (U & L)
+
+ So, for an "all" relationship, BIT 3 would be set.
+ For an "any" relationship, BIT 2 would be set. Etc.
+
+ Following traditional nomenclature, these bits map to:
+
+ BIT0 | BIT 1 | BIT 2 | BIT 3
+ LT | GT | EQ | OV
+
+ Later, we will generate rtl to look in the LT/EQ/EQ/OV bits.
+ */
+
+ switch (form_int)
+ {
+ /* All variant. OV bit. */
+ case 0:
+ /* We need to get to the OV bit, which is the ORDERED bit. We
+ could generate (ordered:SI (reg:CC xx) (const_int 0)), but
+ that's ugly and will trigger a validate_condition_mode abort.
+ So let's just use another pattern. */
+ emit_insn (gen_move_from_CR_ov_bit (target, scratch));
+ return target;
+ /* Any variant. EQ bit. */
+ case 1:
+ code = EQ;
+ break;
+ /* Upper variant. LT bit. */
+ case 2:
+ code = LT;
+ break;
+ /* Lower variant. GT bit. */
+ case 3:
+ code = GT;
+ break;
+ default:
+ error ("argument 1 of __builtin_spe_predicate is out of range");
+ return const0_rtx;
+ }
+
+ tmp = gen_rtx_fmt_ee (code, SImode, scratch, const0_rtx);
+ emit_move_insn (target, tmp);
+
+ return target;
+}
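The comment inside spe_expand_predicate_builtin above describes how the two SPE lanes map onto the CR bits read by the four predicate variants. A plain-C analogue of that mapping (illustrative only, not part of the patch):

/* Illustrative only: u and l are the 0/1 comparison results for the
   upper and lower lane; the four CR bits are derived as the comment
   above describes.  */
struct spe_cr_bits { int bit0, bit1, bit2, bit3; };

static struct spe_cr_bits
spe_predicate_bits (int u, int l)
{
  struct spe_cr_bits cr;
  cr.bit0 = u;        /* upper lane result ("LT" position)  */
  cr.bit1 = l;        /* lower lane result ("GT" position)  */
  cr.bit2 = u | l;    /* "any" variant     ("EQ" position)  */
  cr.bit3 = u & l;    /* "all" variant     ("OV" position)  */
  return cr;
}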
+
+/* The evsel builtins look like this:
+
+ e = __builtin_spe_evsel_OP (a, b, c, d);
+
+ and work like this:
+
+ e[upper] = a[upper] *OP* b[upper] ? c[upper] : d[upper];
+ e[lower] = a[lower] *OP* b[lower] ? c[lower] : d[lower];
+*/
+
+static rtx
+spe_expand_evsel_builtin (icode, arglist, target)
+ enum insn_code icode;
+ tree arglist;
+ rtx target;
+{
+ rtx pat, scratch;
+ tree arg0 = TREE_VALUE (arglist);
+ tree arg1 = TREE_VALUE (TREE_CHAIN (arglist));
+ tree arg2 = TREE_VALUE (TREE_CHAIN (TREE_CHAIN (arglist)));
+ tree arg3 = TREE_VALUE (TREE_CHAIN (TREE_CHAIN (TREE_CHAIN (arglist))));
+ rtx op0 = expand_expr (arg0, NULL_RTX, VOIDmode, 0);
+ rtx op1 = expand_expr (arg1, NULL_RTX, VOIDmode, 0);
+ rtx op2 = expand_expr (arg2, NULL_RTX, VOIDmode, 0);
+ rtx op3 = expand_expr (arg3, NULL_RTX, VOIDmode, 0);
+ enum machine_mode mode0 = insn_data[icode].operand[1].mode;
+ enum machine_mode mode1 = insn_data[icode].operand[2].mode;
+
+ if (mode0 != mode1)
+ abort ();
+
+ if (arg0 == error_mark_node || arg1 == error_mark_node
+ || arg2 == error_mark_node || arg3 == error_mark_node)
+ return const0_rtx;
+
+ if (target == 0
+ || GET_MODE (target) != mode0
+ || ! (*insn_data[icode].operand[0].predicate) (target, mode0))
+ target = gen_reg_rtx (mode0);
+
+ if (! (*insn_data[icode].operand[1].predicate) (op0, mode0))
+ op0 = copy_to_mode_reg (mode0, op0);
+ if (! (*insn_data[icode].operand[1].predicate) (op1, mode1))
+ op1 = copy_to_mode_reg (mode0, op1);
+ if (! (*insn_data[icode].operand[1].predicate) (op2, mode1))
+ op2 = copy_to_mode_reg (mode0, op2);
+ if (! (*insn_data[icode].operand[1].predicate) (op3, mode1))
+ op3 = copy_to_mode_reg (mode0, op3);
+
+ /* Generate the compare. */
+ scratch = gen_reg_rtx (CCmode);
+ pat = GEN_FCN (icode) (scratch, op0, op1);
+ if (! pat)
+ return const0_rtx;
+ emit_insn (pat);
+
+ if (mode0 == V2SImode)
+ emit_insn (gen_spe_evsel (target, op2, op3, scratch));
+ else
+ emit_insn (gen_spe_evsel_fs (target, op2, op3, scratch));
+
+ return target;
+}
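The evsel comment above already gives the per-lane semantics. A plain-C model of the same behaviour for the signed greater-than ("gts") flavour, purely illustrative and using two-element arrays in place of the SPE vectors:

/* Illustrative only: e[i] = (a[i] OP b[i]) ? c[i] : d[i] for each of
   the two lanes, with OP instantiated as signed greater-than.  */
static void
evsel_gts_model (const int a[2], const int b[2],
                 const int c[2], const int d[2], int e[2])
{
  int i;
  for (i = 0; i < 2; i++)
    e[i] = (a[i] > b[i]) ? c[i] : d[i];
}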
+
/* Expand an expression EXP that calls a built-in function,
with result going to TARGET if that's convenient
(and in mode MODE if that's convenient).
@@ -4381,6 +5051,13 @@ rs6000_expand_builtin (exp, target, subtarget, mode, ignore)
if (success)
return ret;
}
+ if (TARGET_SPE)
+ {
+ ret = spe_expand_builtin (exp, target, &success);
+
+ if (success)
+ return ret;
+ }
/* Handle simple unary operations. */
d = (struct builtin_description *) bdesc_1arg;
@@ -4407,8 +5084,250 @@ rs6000_expand_builtin (exp, target, subtarget, mode, ignore)
static void
rs6000_init_builtins ()
{
+ if (TARGET_SPE)
+ spe_init_builtins ();
if (TARGET_ALTIVEC)
altivec_init_builtins ();
+ rs6000_common_init_builtins ();
+}
+
+/* Search through a set of builtins and enable the mask bits.
+ DESC is an array of builtins.
+   SIZE is the total number of builtins.
+ START is the builtin enum at which to start.
+ END is the builtin enum at which to end. */
+static void
+enable_mask_for_builtins (desc, size, start, end)
+ struct builtin_description *desc;
+ int size;
+ enum rs6000_builtins start, end;
+{
+ int i;
+
+ for (i = 0; i < size; ++i)
+ if (desc[i].code == start)
+ break;
+
+ if (i == size)
+ return;
+
+ for (; i < size; ++i)
+ {
+ /* Flip all the bits on. */
+ desc[i].mask = target_flags;
+ if (desc[i].code == end)
+ break;
+ }
+}
+
+static void
+spe_init_builtins (void)
+{
+ tree endlink = void_list_node;
+ tree puint_type_node = build_pointer_type (unsigned_type_node);
+ tree pushort_type_node = build_pointer_type (short_unsigned_type_node);
+ tree pv2si_type_node = build_pointer_type (V2SI_type_node);
+ struct builtin_description *d;
+ size_t i;
+
+ tree v2si_ftype_4_v2si
+ = build_function_type
+ (V2SI_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ endlink)))));
+
+ tree v2sf_ftype_4_v2sf
+ = build_function_type
+ (V2SF_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ endlink)))));
+
+ tree int_ftype_int_v2si_v2si
+ = build_function_type
+ (integer_type_node,
+ tree_cons (NULL_TREE, integer_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ endlink))));
+
+ tree int_ftype_int_v2sf_v2sf
+ = build_function_type
+ (integer_type_node,
+ tree_cons (NULL_TREE, integer_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ tree_cons (NULL_TREE, V2SF_type_node,
+ endlink))));
+
+ tree void_ftype_v2si_puint_int
+ = build_function_type (void_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, puint_type_node,
+ tree_cons (NULL_TREE,
+ integer_type_node,
+ endlink))));
+
+ tree void_ftype_v2si_puint_char
+ = build_function_type (void_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, puint_type_node,
+ tree_cons (NULL_TREE,
+ char_type_node,
+ endlink))));
+
+ tree void_ftype_v2si_pv2si_int
+ = build_function_type (void_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, pv2si_type_node,
+ tree_cons (NULL_TREE,
+ integer_type_node,
+ endlink))));
+
+ tree void_ftype_v2si_pv2si_char
+ = build_function_type (void_type_node,
+ tree_cons (NULL_TREE, V2SI_type_node,
+ tree_cons (NULL_TREE, pv2si_type_node,
+ tree_cons (NULL_TREE,
+ char_type_node,
+ endlink))));
+
+ tree void_ftype_int
+ = build_function_type (void_type_node,
+ tree_cons (NULL_TREE, integer_type_node, endlink));
+
+ tree int_ftype_void
+ = build_function_type (integer_type_node,
+ tree_cons (NULL_TREE, void_type_node, endlink));
+
+ tree v2si_ftype_pv2si_int
+ = build_function_type (V2SI_type_node,
+ tree_cons (NULL_TREE, pv2si_type_node,
+ tree_cons (NULL_TREE, integer_type_node,
+ endlink)));
+
+ tree v2si_ftype_puint_int
+ = build_function_type (V2SI_type_node,
+ tree_cons (NULL_TREE, puint_type_node,
+ tree_cons (NULL_TREE, integer_type_node,
+ endlink)));
+
+ tree v2si_ftype_pushort_int
+ = build_function_type (V2SI_type_node,
+ tree_cons (NULL_TREE, pushort_type_node,
+ tree_cons (NULL_TREE, integer_type_node,
+ endlink)));
+
+ /* The initialization of the simple binary and unary builtins is
+ done in rs6000_common_init_builtins, but we have to enable the
+ mask bits here manually because we have run out of `target_flags'
+ bits. We really need to redesign this mask business. */
+
+ enable_mask_for_builtins ((struct builtin_description *) bdesc_2arg,
+ ARRAY_SIZE (bdesc_2arg),
+ SPE_BUILTIN_EVADDW,
+ SPE_BUILTIN_EVXOR);
+ enable_mask_for_builtins ((struct builtin_description *) bdesc_1arg,
+ ARRAY_SIZE (bdesc_1arg),
+ SPE_BUILTIN_EVABS,
+ SPE_BUILTIN_EVSUBFUSIAAW);
+ enable_mask_for_builtins ((struct builtin_description *) bdesc_spe_predicates,
+ ARRAY_SIZE (bdesc_spe_predicates),
+ SPE_BUILTIN_EVCMPEQ,
+ SPE_BUILTIN_EVFSTSTLT);
+ enable_mask_for_builtins ((struct builtin_description *) bdesc_spe_evsel,
+ ARRAY_SIZE (bdesc_spe_evsel),
+ SPE_BUILTIN_EVSEL_CMPGTS,
+ SPE_BUILTIN_EVSEL_FSTSTEQ);
+
+ /* Initialize irregular SPE builtins. */
+
+ def_builtin (target_flags, "__builtin_spe_mtspefscr", void_ftype_int, SPE_BUILTIN_MTSPEFSCR);
+ def_builtin (target_flags, "__builtin_spe_mfspefscr", int_ftype_void, SPE_BUILTIN_MFSPEFSCR);
+ def_builtin (target_flags, "__builtin_spe_evstddx", void_ftype_v2si_pv2si_int, SPE_BUILTIN_EVSTDDX);
+ def_builtin (target_flags, "__builtin_spe_evstdhx", void_ftype_v2si_pv2si_int, SPE_BUILTIN_EVSTDHX);
+ def_builtin (target_flags, "__builtin_spe_evstdwx", void_ftype_v2si_pv2si_int, SPE_BUILTIN_EVSTDWX);
+ def_builtin (target_flags, "__builtin_spe_evstwhex", void_ftype_v2si_puint_int, SPE_BUILTIN_EVSTWHEX);
+ def_builtin (target_flags, "__builtin_spe_evstwhox", void_ftype_v2si_puint_int, SPE_BUILTIN_EVSTWHOX);
+ def_builtin (target_flags, "__builtin_spe_evstwwex", void_ftype_v2si_puint_int, SPE_BUILTIN_EVSTWWEX);
+ def_builtin (target_flags, "__builtin_spe_evstwwox", void_ftype_v2si_puint_int, SPE_BUILTIN_EVSTWWOX);
+ def_builtin (target_flags, "__builtin_spe_evstdd", void_ftype_v2si_pv2si_char, SPE_BUILTIN_EVSTDD);
+ def_builtin (target_flags, "__builtin_spe_evstdh", void_ftype_v2si_pv2si_char, SPE_BUILTIN_EVSTDH);
+ def_builtin (target_flags, "__builtin_spe_evstdw", void_ftype_v2si_pv2si_char, SPE_BUILTIN_EVSTDW);
+ def_builtin (target_flags, "__builtin_spe_evstwhe", void_ftype_v2si_puint_char, SPE_BUILTIN_EVSTWHE);
+ def_builtin (target_flags, "__builtin_spe_evstwho", void_ftype_v2si_puint_char, SPE_BUILTIN_EVSTWHO);
+ def_builtin (target_flags, "__builtin_spe_evstwwe", void_ftype_v2si_puint_char, SPE_BUILTIN_EVSTWWE);
+ def_builtin (target_flags, "__builtin_spe_evstwwo", void_ftype_v2si_puint_char, SPE_BUILTIN_EVSTWWO);
+
+ /* Loads. */
+ def_builtin (target_flags, "__builtin_spe_evlddx", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDDX);
+ def_builtin (target_flags, "__builtin_spe_evldwx", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDWX);
+ def_builtin (target_flags, "__builtin_spe_evldhx", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDHX);
+ def_builtin (target_flags, "__builtin_spe_evlwhex", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHEX);
+ def_builtin (target_flags, "__builtin_spe_evlwhoux", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHOUX);
+ def_builtin (target_flags, "__builtin_spe_evlwhosx", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHOSX);
+ def_builtin (target_flags, "__builtin_spe_evlwwsplatx", v2si_ftype_puint_int, SPE_BUILTIN_EVLWWSPLATX);
+ def_builtin (target_flags, "__builtin_spe_evlwhsplatx", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHSPLATX);
+ def_builtin (target_flags, "__builtin_spe_evlhhesplatx", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHESPLATX);
+ def_builtin (target_flags, "__builtin_spe_evlhhousplatx", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHOUSPLATX);
+ def_builtin (target_flags, "__builtin_spe_evlhhossplatx", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHOSSPLATX);
+ def_builtin (target_flags, "__builtin_spe_evldd", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDD);
+ def_builtin (target_flags, "__builtin_spe_evldw", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDW);
+ def_builtin (target_flags, "__builtin_spe_evldh", v2si_ftype_pv2si_int, SPE_BUILTIN_EVLDH);
+ def_builtin (target_flags, "__builtin_spe_evlhhesplat", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHESPLAT);
+ def_builtin (target_flags, "__builtin_spe_evlhhossplat", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHOSSPLAT);
+ def_builtin (target_flags, "__builtin_spe_evlhhousplat", v2si_ftype_pushort_int, SPE_BUILTIN_EVLHHOUSPLAT);
+ def_builtin (target_flags, "__builtin_spe_evlwhe", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHE);
+ def_builtin (target_flags, "__builtin_spe_evlwhos", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHOS);
+ def_builtin (target_flags, "__builtin_spe_evlwhou", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHOU);
+ def_builtin (target_flags, "__builtin_spe_evlwhsplat", v2si_ftype_puint_int, SPE_BUILTIN_EVLWHSPLAT);
+ def_builtin (target_flags, "__builtin_spe_evlwwsplat", v2si_ftype_puint_int, SPE_BUILTIN_EVLWWSPLAT);
+
+ /* Predicates. */
+ d = (struct builtin_description *) bdesc_spe_predicates;
+ for (i = 0; i < ARRAY_SIZE (bdesc_spe_predicates); ++i, d++)
+ {
+ tree type;
+
+ switch (insn_data[d->icode].operand[1].mode)
+ {
+ case V2SImode:
+ type = int_ftype_int_v2si_v2si;
+ break;
+ case V2SFmode:
+ type = int_ftype_int_v2sf_v2sf;
+ break;
+ default:
+ abort ();
+ }
+
+ def_builtin (d->mask, d->name, type, d->code);
+ }
+
+ /* Evsel predicates. */
+ d = (struct builtin_description *) bdesc_spe_evsel;
+ for (i = 0; i < ARRAY_SIZE (bdesc_spe_evsel); ++i, d++)
+ {
+ tree type;
+
+ switch (insn_data[d->icode].operand[1].mode)
+ {
+ case V2SImode:
+ type = v2si_ftype_4_v2si;
+ break;
+ case V2SFmode:
+ type = v2sf_ftype_4_v2sf;
+ break;
+ default:
+ abort ();
+ }
+
+ def_builtin (d->mask, d->name, type, d->code);
+ }
}
static void
@@ -4417,70 +5336,57 @@ altivec_init_builtins (void)
struct builtin_description *d;
struct builtin_description_predicates *dp;
size_t i;
-
+ tree pfloat_type_node = build_pointer_type (float_type_node);
tree pint_type_node = build_pointer_type (integer_type_node);
- tree pvoid_type_node = build_pointer_type (void_type_node);
tree pshort_type_node = build_pointer_type (short_integer_type_node);
tree pchar_type_node = build_pointer_type (char_type_node);
- tree pfloat_type_node = build_pointer_type (float_type_node);
- tree v4sf_ftype_v4sf_v4sf_v16qi
- = build_function_type_list (V4SF_type_node,
- V4SF_type_node, V4SF_type_node,
- V16QI_type_node, NULL_TREE);
- tree v4si_ftype_v4si_v4si_v16qi
- = build_function_type_list (V4SI_type_node,
- V4SI_type_node, V4SI_type_node,
- V16QI_type_node, NULL_TREE);
- tree v8hi_ftype_v8hi_v8hi_v16qi
- = build_function_type_list (V8HI_type_node,
- V8HI_type_node, V8HI_type_node,
- V16QI_type_node, NULL_TREE);
- tree v16qi_ftype_v16qi_v16qi_v16qi
- = build_function_type_list (V16QI_type_node,
- V16QI_type_node, V16QI_type_node,
- V16QI_type_node, NULL_TREE);
- tree v4si_ftype_char
- = build_function_type_list (V4SI_type_node, char_type_node, NULL_TREE);
- tree v8hi_ftype_char
- = build_function_type_list (V8HI_type_node, char_type_node, NULL_TREE);
- tree v16qi_ftype_char
- = build_function_type_list (V16QI_type_node, char_type_node, NULL_TREE);
- tree v4sf_ftype_v4sf
- = build_function_type_list (V4SF_type_node, V4SF_type_node, NULL_TREE);
- tree v4si_ftype_pint
- = build_function_type_list (V4SI_type_node, pint_type_node, NULL_TREE);
- tree v8hi_ftype_pshort
- = build_function_type_list (V8HI_type_node, pshort_type_node, NULL_TREE);
- tree v16qi_ftype_pchar
- = build_function_type_list (V16QI_type_node, pchar_type_node, NULL_TREE);
+ tree pvoid_type_node = build_pointer_type (void_type_node);
+
+ tree int_ftype_int_v4si_v4si
+ = build_function_type_list (integer_type_node,
+ integer_type_node, V4SI_type_node,
+ V4SI_type_node, NULL_TREE);
tree v4sf_ftype_pfloat
= build_function_type_list (V4SF_type_node, pfloat_type_node, NULL_TREE);
- tree v8hi_ftype_v16qi
- = build_function_type_list (V8HI_type_node, V16QI_type_node, NULL_TREE);
- tree void_ftype_pvoid_int_char
+ tree void_ftype_pfloat_v4sf
= build_function_type_list (void_type_node,
- pvoid_type_node, integer_type_node,
- char_type_node, NULL_TREE);
- tree void_ftype_pint_v4si
+ pfloat_type_node, V4SF_type_node, NULL_TREE);
+ tree v4si_ftype_pint
+ = build_function_type_list (V4SI_type_node, pint_type_node, NULL_TREE);
+ tree void_ftype_pint_v4si
= build_function_type_list (void_type_node,
pint_type_node, V4SI_type_node, NULL_TREE);
+ tree v8hi_ftype_pshort
+ = build_function_type_list (V8HI_type_node, pshort_type_node, NULL_TREE);
tree void_ftype_pshort_v8hi
= build_function_type_list (void_type_node,
pshort_type_node, V8HI_type_node, NULL_TREE);
+ tree v16qi_ftype_pchar
+ = build_function_type_list (V16QI_type_node, pchar_type_node, NULL_TREE);
tree void_ftype_pchar_v16qi
= build_function_type_list (void_type_node,
pchar_type_node, V16QI_type_node, NULL_TREE);
- tree void_ftype_pfloat_v4sf
- = build_function_type_list (void_type_node,
- pfloat_type_node, V4SF_type_node, NULL_TREE);
tree void_ftype_v4si
= build_function_type_list (void_type_node, V4SI_type_node, NULL_TREE);
+ tree v8hi_ftype_void
+ = build_function_type (V8HI_type_node, void_list_node);
+ tree void_ftype_void
+ = build_function_type (void_type_node, void_list_node);
+ tree void_ftype_qi
+ = build_function_type_list (void_type_node, char_type_node, NULL_TREE);
+ tree v16qi_ftype_int_pvoid
+ = build_function_type_list (V16QI_type_node,
+ integer_type_node, pvoid_type_node, NULL_TREE);
+ tree v8hi_ftype_int_pvoid
+ = build_function_type_list (V8HI_type_node,
+ integer_type_node, pvoid_type_node, NULL_TREE);
+ tree v4si_ftype_int_pvoid
+ = build_function_type_list (V4SI_type_node,
+ integer_type_node, pvoid_type_node, NULL_TREE);
tree void_ftype_v4si_int_pvoid
= build_function_type_list (void_type_node,
V4SI_type_node, integer_type_node,
pvoid_type_node, NULL_TREE);
-
tree void_ftype_v16qi_int_pvoid
= build_function_type_list (void_type_node,
V16QI_type_node, integer_type_node,
@@ -4489,12 +5395,198 @@ altivec_init_builtins (void)
= build_function_type_list (void_type_node,
V8HI_type_node, integer_type_node,
pvoid_type_node, NULL_TREE);
- tree void_ftype_qi
- = build_function_type_list (void_type_node, char_type_node, NULL_TREE);
- tree void_ftype_void
- = build_function_type (void_type_node, void_list_node);
- tree v8hi_ftype_void
- = build_function_type (V8HI_type_node, void_list_node);
+ tree int_ftype_int_v8hi_v8hi
+ = build_function_type_list (integer_type_node,
+ integer_type_node, V8HI_type_node,
+ V8HI_type_node, NULL_TREE);
+ tree int_ftype_int_v16qi_v16qi
+ = build_function_type_list (integer_type_node,
+ integer_type_node, V16QI_type_node,
+ V16QI_type_node, NULL_TREE);
+ tree int_ftype_int_v4sf_v4sf
+ = build_function_type_list (integer_type_node,
+ integer_type_node, V4SF_type_node,
+ V4SF_type_node, NULL_TREE);
+ tree v4si_ftype_v4si
+ = build_function_type_list (V4SI_type_node, V4SI_type_node, NULL_TREE);
+ tree v8hi_ftype_v8hi
+ = build_function_type_list (V8HI_type_node, V8HI_type_node, NULL_TREE);
+ tree v16qi_ftype_v16qi
+ = build_function_type_list (V16QI_type_node, V16QI_type_node, NULL_TREE);
+ tree v4sf_ftype_v4sf
+ = build_function_type_list (V4SF_type_node, V4SF_type_node, NULL_TREE);
+ tree void_ftype_pvoid_int_char
+ = build_function_type_list (void_type_node,
+ pvoid_type_node, integer_type_node,
+ char_type_node, NULL_TREE);
+
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_4sf", v4sf_ftype_pfloat, ALTIVEC_BUILTIN_LD_INTERNAL_4sf);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_4sf", void_ftype_pfloat_v4sf, ALTIVEC_BUILTIN_ST_INTERNAL_4sf);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_4si", v4si_ftype_pint, ALTIVEC_BUILTIN_LD_INTERNAL_4si);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_4si", void_ftype_pint_v4si, ALTIVEC_BUILTIN_ST_INTERNAL_4si);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_8hi", v8hi_ftype_pshort, ALTIVEC_BUILTIN_LD_INTERNAL_8hi);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_8hi", void_ftype_pshort_v8hi, ALTIVEC_BUILTIN_ST_INTERNAL_8hi);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_16qi", v16qi_ftype_pchar, ALTIVEC_BUILTIN_LD_INTERNAL_16qi);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_16qi", void_ftype_pchar_v16qi, ALTIVEC_BUILTIN_ST_INTERNAL_16qi);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_mtvscr", void_ftype_v4si, ALTIVEC_BUILTIN_MTVSCR);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_mfvscr", v8hi_ftype_void, ALTIVEC_BUILTIN_MFVSCR);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_dssall", void_ftype_void, ALTIVEC_BUILTIN_DSSALL);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_dss", void_ftype_qi, ALTIVEC_BUILTIN_DSS);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvsl", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVSL);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvsr", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVSR);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvebx", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEBX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvehx", v8hi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEHX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvewx", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEWX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvxl", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVXL);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvx", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvx", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvewx", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVEWX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvxl", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVXL);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvebx", void_ftype_v16qi_int_pvoid, ALTIVEC_BUILTIN_STVEBX);
+ def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvehx", void_ftype_v8hi_int_pvoid, ALTIVEC_BUILTIN_STVEHX);
+
+ /* Add the DST variants. */
+ d = (struct builtin_description *) bdesc_dst;
+ for (i = 0; i < ARRAY_SIZE (bdesc_dst); i++, d++)
+ def_builtin (d->mask, d->name, void_ftype_pvoid_int_char, d->code);
+
+ /* Initialize the predicates. */
+ dp = (struct builtin_description_predicates *) bdesc_altivec_preds;
+ for (i = 0; i < ARRAY_SIZE (bdesc_altivec_preds); i++, dp++)
+ {
+ enum machine_mode mode1;
+ tree type;
+
+ mode1 = insn_data[dp->icode].operand[1].mode;
+
+ switch (mode1)
+ {
+ case V4SImode:
+ type = int_ftype_int_v4si_v4si;
+ break;
+ case V8HImode:
+ type = int_ftype_int_v8hi_v8hi;
+ break;
+ case V16QImode:
+ type = int_ftype_int_v16qi_v16qi;
+ break;
+ case V4SFmode:
+ type = int_ftype_int_v4sf_v4sf;
+ break;
+ default:
+ abort ();
+ }
+
+ def_builtin (dp->mask, dp->name, type, dp->code);
+ }
+
+ /* Initialize the abs* operators. */
+ d = (struct builtin_description *) bdesc_abs;
+ for (i = 0; i < ARRAY_SIZE (bdesc_abs); i++, d++)
+ {
+ enum machine_mode mode0;
+ tree type;
+
+ mode0 = insn_data[d->icode].operand[0].mode;
+
+ switch (mode0)
+ {
+ case V4SImode:
+ type = v4si_ftype_v4si;
+ break;
+ case V8HImode:
+ type = v8hi_ftype_v8hi;
+ break;
+ case V16QImode:
+ type = v16qi_ftype_v16qi;
+ break;
+ case V4SFmode:
+ type = v4sf_ftype_v4sf;
+ break;
+ default:
+ abort ();
+ }
+
+ def_builtin (d->mask, d->name, type, d->code);
+ }
+}
+
+static void
+rs6000_common_init_builtins (void)
+{
+ struct builtin_description *d;
+ size_t i;
+
+ tree v4sf_ftype_v4sf_v4sf_v16qi
+ = build_function_type_list (V4SF_type_node,
+ V4SF_type_node, V4SF_type_node,
+ V16QI_type_node, NULL_TREE);
+ tree v4si_ftype_v4si_v4si_v16qi
+ = build_function_type_list (V4SI_type_node,
+ V4SI_type_node, V4SI_type_node,
+ V16QI_type_node, NULL_TREE);
+ tree v8hi_ftype_v8hi_v8hi_v16qi
+ = build_function_type_list (V8HI_type_node,
+ V8HI_type_node, V8HI_type_node,
+ V16QI_type_node, NULL_TREE);
+ tree v16qi_ftype_v16qi_v16qi_v16qi
+ = build_function_type_list (V16QI_type_node,
+ V16QI_type_node, V16QI_type_node,
+ V16QI_type_node, NULL_TREE);
+ tree v4si_ftype_char
+ = build_function_type_list (V4SI_type_node, char_type_node, NULL_TREE);
+ tree v8hi_ftype_char
+ = build_function_type_list (V8HI_type_node, char_type_node, NULL_TREE);
+ tree v16qi_ftype_char
+ = build_function_type_list (V16QI_type_node, char_type_node, NULL_TREE);
+ tree v8hi_ftype_v16qi
+ = build_function_type_list (V8HI_type_node, V16QI_type_node, NULL_TREE);
+ tree v4sf_ftype_v4sf
+ = build_function_type_list (V4SF_type_node, V4SF_type_node, NULL_TREE);
+
+ tree v2si_ftype_v2si_v2si
+ = build_function_type_list (V2SI_type_node,
+ V2SI_type_node, V2SI_type_node, NULL_TREE);
+
+ tree v2sf_ftype_v2sf_v2sf
+ = build_function_type_list (V2SF_type_node,
+ V2SF_type_node, V2SF_type_node, NULL_TREE);
+
+ tree v2si_ftype_int_int
+ = build_function_type_list (V2SI_type_node,
+ integer_type_node, integer_type_node,
+ NULL_TREE);
+
+ tree v2si_ftype_v2si
+ = build_function_type_list (V2SI_type_node, V2SI_type_node, NULL_TREE);
+
+ tree v2sf_ftype_v2sf
+ = build_function_type_list (V2SF_type_node,
+ V2SF_type_node, NULL_TREE);
+
+ tree v2sf_ftype_v2si
+ = build_function_type_list (V2SF_type_node,
+ V2SI_type_node, NULL_TREE);
+
+ tree v2si_ftype_v2sf
+ = build_function_type_list (V2SI_type_node,
+ V2SF_type_node, NULL_TREE);
+
+ tree v2si_ftype_v2si_char
+ = build_function_type_list (V2SI_type_node,
+ V2SI_type_node, char_type_node, NULL_TREE);
+
+ tree v2si_ftype_int_char
+ = build_function_type_list (V2SI_type_node,
+ integer_type_node, char_type_node, NULL_TREE);
+
+ tree v2si_ftype_char
+ = build_function_type_list (V2SI_type_node, char_type_node, NULL_TREE);
+
+ tree int_ftype_int_int
+ = build_function_type_list (integer_type_node,
+ integer_type_node, integer_type_node,
+ NULL_TREE);
tree v4si_ftype_v4si_v4si
= build_function_type_list (V4SI_type_node,
@@ -4566,12 +5658,6 @@ altivec_init_builtins (void)
tree v4si_ftype_v4sf_v4sf
= build_function_type_list (V4SI_type_node,
V4SF_type_node, V4SF_type_node, NULL_TREE);
- tree v4si_ftype_v4si
- = build_function_type_list (V4SI_type_node, V4SI_type_node, NULL_TREE);
- tree v8hi_ftype_v8hi
- = build_function_type_list (V8HI_type_node, V8HI_type_node, NULL_TREE);
- tree v16qi_ftype_v16qi
- = build_function_type_list (V16QI_type_node, V16QI_type_node, NULL_TREE);
tree v8hi_ftype_v16qi_v16qi
= build_function_type_list (V8HI_type_node,
V16QI_type_node, V16QI_type_node, NULL_TREE);
@@ -4604,60 +5690,10 @@ altivec_init_builtins (void)
tree int_ftype_v16qi_v16qi
= build_function_type_list (integer_type_node,
V16QI_type_node, V16QI_type_node, NULL_TREE);
- tree int_ftype_int_v4si_v4si
- = build_function_type_list (integer_type_node,
- integer_type_node, V4SI_type_node,
- V4SI_type_node, NULL_TREE);
- tree int_ftype_int_v4sf_v4sf
- = build_function_type_list (integer_type_node,
- integer_type_node, V4SF_type_node,
- V4SF_type_node, NULL_TREE);
- tree int_ftype_int_v8hi_v8hi
- = build_function_type_list (integer_type_node,
- integer_type_node, V8HI_type_node,
- V8HI_type_node, NULL_TREE);
- tree int_ftype_int_v16qi_v16qi
- = build_function_type_list (integer_type_node,
- integer_type_node, V16QI_type_node,
- V16QI_type_node, NULL_TREE);
- tree v16qi_ftype_int_pvoid
- = build_function_type_list (V16QI_type_node,
- integer_type_node, pvoid_type_node, NULL_TREE);
- tree v4si_ftype_int_pvoid
- = build_function_type_list (V4SI_type_node,
- integer_type_node, pvoid_type_node, NULL_TREE);
- tree v8hi_ftype_int_pvoid
- = build_function_type_list (V8HI_type_node,
- integer_type_node, pvoid_type_node, NULL_TREE);
tree int_ftype_v8hi_v8hi
= build_function_type_list (integer_type_node,
V8HI_type_node, V8HI_type_node, NULL_TREE);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_4sf", v4sf_ftype_pfloat, ALTIVEC_BUILTIN_LD_INTERNAL_4sf);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_4sf", void_ftype_pfloat_v4sf, ALTIVEC_BUILTIN_ST_INTERNAL_4sf);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_4si", v4si_ftype_pint, ALTIVEC_BUILTIN_LD_INTERNAL_4si);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_4si", void_ftype_pint_v4si, ALTIVEC_BUILTIN_ST_INTERNAL_4si);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_8hi", v8hi_ftype_pshort, ALTIVEC_BUILTIN_LD_INTERNAL_8hi);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_8hi", void_ftype_pshort_v8hi, ALTIVEC_BUILTIN_ST_INTERNAL_8hi);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_ld_internal_16qi", v16qi_ftype_pchar, ALTIVEC_BUILTIN_LD_INTERNAL_16qi);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_st_internal_16qi", void_ftype_pchar_v16qi, ALTIVEC_BUILTIN_ST_INTERNAL_16qi);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_mtvscr", void_ftype_v4si, ALTIVEC_BUILTIN_MTVSCR);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_mfvscr", v8hi_ftype_void, ALTIVEC_BUILTIN_MFVSCR);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_dssall", void_ftype_void, ALTIVEC_BUILTIN_DSSALL);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_dss", void_ftype_qi, ALTIVEC_BUILTIN_DSS);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvsl", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVSL);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvsr", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVSR);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvebx", v16qi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEBX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvehx", v8hi_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEHX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvewx", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVEWX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvxl", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVXL);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_lvx", v4si_ftype_int_pvoid, ALTIVEC_BUILTIN_LVX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvx", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvebx", void_ftype_v16qi_int_pvoid, ALTIVEC_BUILTIN_STVEBX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvehx", void_ftype_v8hi_int_pvoid, ALTIVEC_BUILTIN_STVEHX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvewx", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVEWX);
- def_builtin (MASK_ALTIVEC, "__builtin_altivec_stvxl", void_ftype_v4si_int_pvoid, ALTIVEC_BUILTIN_STVXL);
-
/* Add the simple ternary operators. */
d = (struct builtin_description *) bdesc_3arg;
for (i = 0; i < ARRAY_SIZE (bdesc_3arg); i++, d++)
@@ -4751,41 +5787,6 @@ altivec_init_builtins (void)
def_builtin (d->mask, d->name, type, d->code);
}
- /* Add the DST variants. */
- d = (struct builtin_description *) bdesc_dst;
- for (i = 0; i < ARRAY_SIZE (bdesc_dst); i++, d++)
- def_builtin (d->mask, d->name, void_ftype_pvoid_int_char, d->code);
-
- /* Initialize the predicates. */
- dp = (struct builtin_description_predicates *) bdesc_altivec_preds;
- for (i = 0; i < ARRAY_SIZE (bdesc_altivec_preds); i++, dp++)
- {
- enum machine_mode mode1;
- tree type;
-
- mode1 = insn_data[dp->icode].operand[1].mode;
-
- switch (mode1)
- {
- case V4SImode:
- type = int_ftype_int_v4si_v4si;
- break;
- case V8HImode:
- type = int_ftype_int_v8hi_v8hi;
- break;
- case V16QImode:
- type = int_ftype_int_v16qi_v16qi;
- break;
- case V4SFmode:
- type = int_ftype_int_v4sf_v4sf;
- break;
- default:
- abort ();
- }
-
- def_builtin (dp->mask, dp->name, type, dp->code);
- }
-
/* Add the simple binary operators. */
d = (struct builtin_description *) bdesc_2arg;
for (i = 0; i < ARRAY_SIZE (bdesc_2arg); i++, d++)
@@ -4817,6 +5818,15 @@ altivec_init_builtins (void)
case V8HImode:
type = v8hi_ftype_v8hi_v8hi;
break;
+ case V2SImode:
+ type = v2si_ftype_v2si_v2si;
+ break;
+ case V2SFmode:
+ type = v2sf_ftype_v2sf_v2sf;
+ break;
+ case SImode:
+ type = int_ftype_int_int;
+ break;
default:
abort ();
}
@@ -4876,6 +5886,15 @@ altivec_init_builtins (void)
else if (mode0 == V4SImode && mode1 == V4SFmode && mode2 == QImode)
type = v4si_ftype_v4sf_char;
+ else if (mode0 == V2SImode && mode1 == SImode && mode2 == SImode)
+ type = v2si_ftype_int_int;
+
+ else if (mode0 == V2SImode && mode1 == V2SImode && mode2 == QImode)
+ type = v2si_ftype_v2si_char;
+
+ else if (mode0 == V2SImode && mode1 == SImode && mode2 == QImode)
+ type = v2si_ftype_int_char;
+
/* int, x, x. */
else if (mode0 == SImode)
{
@@ -4904,36 +5923,6 @@ altivec_init_builtins (void)
def_builtin (d->mask, d->name, type, d->code);
}
- /* Initialize the abs* operators. */
- d = (struct builtin_description *) bdesc_abs;
- for (i = 0; i < ARRAY_SIZE (bdesc_abs); i++, d++)
- {
- enum machine_mode mode0;
- tree type;
-
- mode0 = insn_data[d->icode].operand[0].mode;
-
- switch (mode0)
- {
- case V4SImode:
- type = v4si_ftype_v4si;
- break;
- case V8HImode:
- type = v8hi_ftype_v8hi;
- break;
- case V16QImode:
- type = v16qi_ftype_v16qi;
- break;
- case V4SFmode:
- type = v4sf_ftype_v4sf;
- break;
- default:
- abort ();
- }
-
- def_builtin (d->mask, d->name, type, d->code);
- }
-
/* Add the simple unary operators. */
d = (struct builtin_description *) bdesc_1arg;
for (i = 0; i < ARRAY_SIZE (bdesc_1arg); i++, d++)
@@ -4959,6 +5948,16 @@ altivec_init_builtins (void)
type = v8hi_ftype_v16qi;
else if (mode0 == V4SImode && mode1 == V8HImode)
type = v4si_ftype_v8hi;
+ else if (mode0 == V2SImode && mode1 == V2SImode)
+ type = v2si_ftype_v2si;
+ else if (mode0 == V2SFmode && mode1 == V2SFmode)
+ type = v2sf_ftype_v2sf;
+ else if (mode0 == V2SFmode && mode1 == V2SImode)
+ type = v2sf_ftype_v2si;
+ else if (mode0 == V2SImode && mode1 == V2SFmode)
+ type = v2si_ftype_v2sf;
+ else if (mode0 == V2SImode && mode1 == QImode)
+ type = v2si_ftype_char;
else
abort ();
@@ -4966,7 +5965,6 @@ altivec_init_builtins (void)
}
}
-
/* Generate a memory reference for expand_block_move, copying volatile,
and other bits from an original memory reference. */
@@ -5689,6 +6687,7 @@ branch_positive_comparison_operator (op, mode)
code = GET_CODE (op);
return (code == EQ || code == LT || code == GT
+ || (TARGET_SPE && TARGET_HARD_FLOAT && !TARGET_FPRS && code == NE)
|| code == LTU || code == GTU
|| code == UNORDERED);
}
@@ -6140,8 +7139,12 @@ ccr_bit (op, scc_p)
switch (code)
{
case NE:
+ if (TARGET_SPE && TARGET_HARD_FLOAT && cc_mode == CCFPmode)
+ return base_bit + 1;
return scc_p ? base_bit + 3 : base_bit + 2;
case EQ:
+ if (TARGET_SPE && TARGET_HARD_FLOAT && cc_mode == CCFPmode)
+ return base_bit + 1;
return base_bit + 2;
case GT: case GTU: case UNLE:
return base_bit + 1;
@@ -6352,6 +7355,15 @@ print_operand (file, x, code)
fprintf (file, "crnor %d,%d,%d\n\t", base_bit + 3,
base_bit + 2, base_bit + 2);
}
+ else if (TARGET_SPE && TARGET_HARD_FLOAT
+ && GET_CODE (x) == EQ
+ && GET_MODE (XEXP (x, 0)) == CCFPmode)
+ {
+ int base_bit = 4 * (REGNO (XEXP (x, 0)) - CR0_REGNO);
+
+ fprintf (file, "crnor %d,%d,%d\n\t", base_bit + 1,
+ base_bit + 1, base_bit + 1);
+ }
return;
case 'E':
@@ -6632,6 +7644,18 @@ print_operand (file, x, code)
fprintf (file, "%d", i);
return;
+ case 't':
+ /* Like 'J' but get to the OVERFLOW/UNORDERED bit. */
+ if (GET_CODE (x) != REG || GET_MODE (x) != CCmode)
+ abort ();
+
+ /* Bit 3 is OV bit. */
+ i = 4 * (REGNO (x) - CR0_REGNO) + 3;
+
+ /* If we want bit 31, write a shift count of zero, not 32. */
+ fprintf (file, "%d", i == 31 ? 0 : i + 1);
+ return;
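A worked instance of the arithmetic above may help (editorial; not part of the patch):

/* Example: operand code 't' applied to cr0 (REGNO - CR0_REGNO == 0):
     i = 4*0 + 3 = 3   ->  prints "4"
   applied to cr7:
     i = 4*7 + 3 = 31  ->  prints "0"   (a rotate count of 32 is written as 0)
   The printed value is presumably used as a rotate-left count that moves
   CR bit i into the low-order bit position of the mfcr result.  */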
+
case 'T':
/* Print the symbolic name of a branch target register. */
if (GET_CODE (x) != REG || (REGNO (x) != LINK_REGISTER_REGNUM
@@ -6824,7 +7848,7 @@ print_operand (file, x, code)
}
return;
- /* Print AltiVec memory operand. */
+ /* Print AltiVec or SPE memory operand. */
case 'y':
{
rtx tmp;
@@ -6834,6 +7858,30 @@ print_operand (file, x, code)
tmp = XEXP (x, 0);
+ if (TARGET_SPE)
+ {
+ /* Handle [reg]. */
+ if (GET_CODE (tmp) == REG)
+ {
+ fprintf (file, "0(%s)", reg_names[REGNO (tmp)]);
+ break;
+ }
+ /* Handle [reg+UIMM]. */
+ else if (GET_CODE (tmp) == PLUS &&
+ GET_CODE (XEXP (tmp, 1)) == CONST_INT)
+ {
+ int x;
+
+ if (GET_CODE (XEXP (tmp, 0)) != REG)
+ abort ();
+
+ x = INTVAL (XEXP (tmp, 1));
+ fprintf (file, "%d(%s)", x, reg_names[REGNO (XEXP (tmp, 0))]);
+ break;
+ }
+
+ /* Fall through. Must be [reg+reg]. */
+ }
if (GET_CODE (tmp) == REG)
fprintf (file, "0,%s", reg_names[REGNO (tmp)]);
else if (GET_CODE (tmp) == PLUS && GET_CODE (XEXP (tmp, 1)) == REG)
@@ -7062,15 +8110,134 @@ rs6000_generate_compare (code)
/* First, the compare. */
compare_result = gen_reg_rtx (comp_mode);
- emit_insn (gen_rtx_SET (VOIDmode, compare_result,
- gen_rtx_COMPARE (comp_mode,
- rs6000_compare_op0,
- rs6000_compare_op1)));
+
+ /* SPE FP compare instructions on the GPRs. Yuck! */
+ if ((TARGET_SPE && TARGET_HARD_FLOAT) && rs6000_compare_fp_p)
+ {
+ rtx cmp, or1, or2, or_result, compare_result2;
+
+ switch (code)
+ {
+ case EQ:
+ case UNEQ:
+ case NE:
+ case LTGT:
+ cmp = flag_unsafe_math_optimizations
+ ? gen_tstsfeq_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1)
+ : gen_cmpsfeq_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1);
+ break;
+ case GT:
+ case GTU:
+ case UNGT:
+ case UNGE:
+ case GE:
+ case GEU:
+ cmp = flag_unsafe_math_optimizations
+ ? gen_tstsfgt_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1)
+ : gen_cmpsfgt_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1);
+ break;
+ case LT:
+ case LTU:
+ case UNLT:
+ case UNLE:
+ case LE:
+ case LEU:
+ cmp = flag_unsafe_math_optimizations
+ ? gen_tstsflt_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1)
+ : gen_cmpsflt_gpr (compare_result, rs6000_compare_op0,
+ rs6000_compare_op1);
+ break;
+ default:
+ abort ();
+ }
+
+ /* Synthesize LE and GE from LT/GT || EQ. */
+ if (code == LE || code == GE || code == LEU || code == GEU)
+ {
+ /* Synthesize GE/LE from GT/LT || EQ. */
+
+ emit_insn (cmp);
+
+ switch (code)
+ {
+ case LE: code = LT; break;
+ case GE: code = GT; break;
+ case LEU: code = LT; break;
+ case GEU: code = GT; break;
+ default: abort ();
+ }
+
+ or1 = gen_reg_rtx (SImode);
+ or2 = gen_reg_rtx (SImode);
+ or_result = gen_reg_rtx (CCEQmode);
+ compare_result2 = gen_reg_rtx (CCFPmode);
+
+ /* Do the EQ. */
+ cmp = flag_unsafe_math_optimizations
+ ? gen_tstsfeq_gpr (compare_result2, rs6000_compare_op0,
+ rs6000_compare_op1)
+ : gen_cmpsfeq_gpr (compare_result2, rs6000_compare_op0,
+ rs6000_compare_op1);
+ emit_insn (cmp);
+
+ /* The MC8540 FP compare instructions set the CR bits
+ differently than other PPC compare instructions. For
+ that matter, there is no generic test instruction, but a
+ testgt, testlt, and testeq. For a true condition, bit 2
+ is set (x1xx) in the CR. Following the traditional CR
+ values:
+
+ LT GT EQ OV
+ bit3 bit2 bit1 bit0
+
+ ... bit 2 would be a GT CR alias, so later on we
+ look in the GT bits for the branch instructions.
+ However, we must be careful to emit correct RTL in
+ the meantime, so optimizations don't get confused. */
+
+ or1 = gen_rtx (NE, SImode, compare_result, const0_rtx);
+ or2 = gen_rtx (NE, SImode, compare_result2, const0_rtx);
+
+ /* OR them together. */
+ cmp = gen_rtx_SET (VOIDmode, or_result,
+ gen_rtx_COMPARE (CCEQmode,
+ gen_rtx_IOR (SImode, or1, or2),
+ const_true_rtx));
+ compare_result = or_result;
+ code = EQ;
+ }
+ else
+ {
+ /* We only care about 1 bit (x1xx), so map everything to NE to
+ maintain rtl sanity. We'll get to the right bit (x1xx) at
+ code output time. */
+ if (code == NE || code == LTGT)
+ /* Do the inverse here because we have no cmpne
+ instruction. We use the cmpeq instruction and expect
+ to get a 0 instead. */
+ code = EQ;
+ else
+ code = NE;
+ }
+
+ emit_insn (cmp);
+ }
+ else
+ emit_insn (gen_rtx_SET (VOIDmode, compare_result,
+ gen_rtx_COMPARE (comp_mode,
+ rs6000_compare_op0,
+ rs6000_compare_op1)));
/* Some kinds of FP comparisons need an OR operation;
except for flag_unsafe_math_optimizations we don't bother. */
if (rs6000_compare_fp_p
&& ! flag_unsafe_math_optimizations
+ && ! (TARGET_HARD_FLOAT && TARGET_SPE)
&& (code == LE || code == GE
|| code == UNEQ || code == LTGT
|| code == UNGT || code == UNLT))
@@ -7191,6 +8358,19 @@ output_cbranch (op, label, reversed, insn)
if (really_reversed)
code = rs6000_reverse_condition (mode, code);
+ if ((TARGET_SPE && TARGET_HARD_FLOAT) && mode == CCFPmode)
+ {
+ /* The efscmp/tst* instructions twiddle bit 2, which maps nicely
+ to the GT bit. */
+ if (code == EQ)
+ /* Opposite of GT. */
+ code = UNLE;
+ else if (code == NE)
+ code = GT;
+ else
+ abort ();
+ }
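A worked example of this remapping (editorial, not part of the patch), based on how rs6000_generate_compare above sets things up:

/* For SPE hard float, the compare emitted earlier sets the GT bit of the
   CR field when the tested relation holds, and the RTL code reaching this
   point is either NE (the relation holds) or EQ (used for the != case,
   which is tested with the eq compare and then inverted).  So a
   source-level "a > b" branch arrives here as NE and is emitted as a
   branch on the GT bit, while "a != b" arrives as EQ and is emitted as
   UNLE, i.e. a branch taken when the GT bit is clear.  */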
+
switch (code)
{
/* Not all of these are actually distinct opcodes, but
@@ -7277,7 +8457,10 @@ rs6000_emit_cmove (dest, op, true_cond, false_cond)
rtx temp;
/* These modes should always match. */
- if (GET_MODE (op1) != compare_mode)
+ if (GET_MODE (op1) != compare_mode
+ /* In the isel case however, we can use a compare immediate, so
+ op1 may be a small constant. */
+ && (!TARGET_ISEL || !short_cint_operand (op1, VOIDmode)))
return 0;
if (GET_MODE (true_cond) != result_mode)
return 0;
@@ -7286,10 +8469,12 @@ rs6000_emit_cmove (dest, op, true_cond, false_cond)
/* First, work out if the hardware can do this at all, or
if it's too slow... */
- /* If the comparison is an integer one, since we only have fsel
- it'll be cheaper to use a branch. */
if (! rs6000_compare_fp_p)
- return 0;
+ {
+ if (TARGET_ISEL)
+ return rs6000_emit_int_cmove (dest, op, true_cond, false_cond);
+ return 0;
+ }
/* Eliminate half of the comparisons by switching operands, this
makes the remaining code simpler. */
@@ -7421,6 +8606,53 @@ rs6000_emit_cmove (dest, op, true_cond, false_cond)
return 1;
}
+/* Same as above, but for ints (isel). */
+
+static int
+rs6000_emit_int_cmove (dest, op, true_cond, false_cond)
+ rtx dest;
+ rtx op;
+ rtx true_cond;
+ rtx false_cond;
+{
+ rtx condition_rtx, cr;
+
+ /* All isel implementations thus far are 32-bits. */
+ if (GET_MODE (rs6000_compare_op0) != SImode)
+ return 0;
+
+ /* We still have to do the compare, because isel doesn't do a
+ compare, it just looks at the CRx bits set by a previous compare
+ instruction. */
+ condition_rtx = rs6000_generate_compare (GET_CODE (op));
+ cr = XEXP (condition_rtx, 0);
+
+ if (GET_MODE (cr) == CCmode)
+ emit_insn (gen_isel_signed (dest, condition_rtx,
+ true_cond, false_cond, cr));
+ else
+ emit_insn (gen_isel_unsigned (dest, condition_rtx,
+ true_cond, false_cond, cr));
+
+ return 1;
+}
+
+const char *
+output_isel (operands)
+ rtx *operands;
+{
+ enum rtx_code code;
+
+ code = GET_CODE (operands[1]);
+ if (code == GE || code == GEU || code == LE || code == LEU || code == NE)
+ {
+ PUT_CODE (operands[1], reverse_condition (code));
+ return "isel %0,%3,%2,%j1";
+ }
+ else
+ return "isel %0,%2,%3,%j1";
+}
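The operand swap above deserves a short illustration (editorial; not part of the patch):

/* isel only tests whether a single CR bit is set, so conditions that
   correspond to a bit being clear (GE, GEU, LE, LEU, NE) are handled by
   reversing the condition and swapping the value operands.  E.g. for GE
   the code is rewritten to LT and "isel %0,%3,%2,%j1" selects operand 3
   (the original "false" value) when the LT bit is set and operand 2
   otherwise, which is the same result as selecting operand 2 when GE
   holds.  */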
+
void
rs6000_emit_minmax (dest, code, op0, op1)
rtx dest;
@@ -7639,6 +8871,10 @@ is_altivec_return_reg (reg, xyes)
+---------------------------------------+
| Save area for VRSAVE register (Z) | 8+P+A+V+L+X+W+Y
+---------------------------------------+
+ | SPE: area for 64-bit GP registers |
+ +---------------------------------------+
+ | SPE alignment padding |
+ +---------------------------------------+
| saved CR (C) | 8+P+A+V+L+X+W+Y+Z
+---------------------------------------+
| Save area for GP registers (G) | 8+P+A+V+L+X+W+Y+Z+C
@@ -7694,6 +8930,20 @@ rs6000_stack_info ()
else
info_ptr->gp_size = reg_size * (32 - info_ptr->first_gp_reg_save);
+ /* For the SPE, we have an additional upper 32-bits on each GPR.
+ Ideally we should save the entire 64-bits only when the upper
+ half is used in SIMD instructions. Since we only record
+ registers live (not the size they are used in), this proves
+ difficult because we'd have to traverse the instruction chain at
+ the right time, taking reload into account. This is a real pain,
+ so we opt to save the GPRs in 64-bits always. Anyone overly
+ concerned with frame size can fix this. ;-).
+
+ So... since we save all GPRs (except the SP) in 64-bits, the
+ traditional GP save area will be empty. */
+ if (TARGET_SPE_ABI)
+ info_ptr->gp_size = 0;
+
info_ptr->first_fp_reg_save = first_fp_reg_to_save ();
info_ptr->fp_size = 8 * (64 - info_ptr->first_fp_reg_save);
@@ -7742,7 +8992,9 @@ rs6000_stack_info ()
unsigned int i;
for (i = 0; EH_RETURN_DATA_REGNO (i) != INVALID_REGNUM; ++i)
continue;
- ehrd_size = i * UNITS_PER_WORD;
+
+ /* SPE saves EH registers in 64-bits. */
+ ehrd_size = i * (TARGET_SPE_ABI ? UNITS_PER_SPE_WORD : UNITS_PER_WORD);
}
else
ehrd_size = 0;
@@ -7755,6 +9007,11 @@ rs6000_stack_info ()
info_ptr->parm_size = RS6000_ALIGN (current_function_outgoing_args_size,
8);
+ if (TARGET_SPE_ABI)
+ info_ptr->spe_gp_size = 8 * (32 - info_ptr->first_gp_reg_save);
+ else
+ info_ptr->spe_gp_size = 0;
+
if (TARGET_ALTIVEC_ABI && TARGET_ALTIVEC_VRSAVE)
{
info_ptr->vrsave_mask = compute_vrsave_mask ();
@@ -7810,7 +9067,26 @@ rs6000_stack_info ()
info_ptr->gp_save_offset = info_ptr->fp_save_offset - info_ptr->gp_size;
info_ptr->cr_save_offset = info_ptr->gp_save_offset - info_ptr->cr_size;
- if (TARGET_ALTIVEC_ABI)
+ if (TARGET_SPE_ABI)
+ {
+ /* Align stack so SPE GPR save area is aligned on a
+ double-word boundary. */
+ if (info_ptr->spe_gp_size != 0)
+ info_ptr->spe_padding_size
+ = 8 - (-info_ptr->cr_save_offset % 8);
+ else
+ info_ptr->spe_padding_size = 0;
+
+ info_ptr->spe_gp_save_offset
+ = info_ptr->cr_save_offset
+ - info_ptr->spe_padding_size
+ - info_ptr->spe_gp_size;
+
+ /* Adjust for SPE case. */
+ info_ptr->toc_save_offset
+ = info_ptr->spe_gp_save_offset - info_ptr->toc_size;
+ }
+ else if (TARGET_ALTIVEC_ABI)
{
info_ptr->vrsave_save_offset
= info_ptr->cr_save_offset - info_ptr->vrsave_size;
@@ -7843,6 +9119,8 @@ rs6000_stack_info ()
+ info_ptr->altivec_size
+ info_ptr->altivec_padding_size
+ info_ptr->vrsave_size
+ + info_ptr->spe_gp_size
+ + info_ptr->spe_padding_size
+ ehrd_size
+ info_ptr->cr_size
+ info_ptr->lr_size
@@ -7897,6 +9175,9 @@ rs6000_stack_info ()
if (! TARGET_ALTIVEC_ABI || info_ptr->vrsave_mask == 0)
info_ptr->vrsave_save_offset = 0;
+ if (! TARGET_SPE_ABI || info_ptr->spe_gp_size == 0)
+ info_ptr->spe_gp_save_offset = 0;
+
if (! info_ptr->lr_save_p)
info_ptr->lr_save_offset = 0;
@@ -7938,6 +9219,9 @@ debug_stack_info (info)
if (TARGET_ALTIVEC_ABI)
fprintf (stderr, "\tALTIVEC ABI extensions enabled.\n");
+ if (TARGET_SPE_ABI)
+ fprintf (stderr, "\tSPE ABI extensions enabled.\n");
+
if (info->first_gp_reg_save != 32)
fprintf (stderr, "\tfirst_gp_reg_save = %5d\n", info->first_gp_reg_save);
@@ -7976,6 +9260,10 @@ debug_stack_info (info)
fprintf (stderr, "\taltivec_save_offset = %5d\n",
info->altivec_save_offset);
+ if (info->spe_gp_save_offset)
+ fprintf (stderr, "\tspe_gp_save_offset = %5d\n",
+ info->spe_gp_save_offset);
+
if (info->vrsave_save_offset)
fprintf (stderr, "\tvrsave_save_offset = %5d\n",
info->vrsave_save_offset);
@@ -8010,6 +9298,9 @@ debug_stack_info (info)
if (info->gp_size)
fprintf (stderr, "\tgp_size = %5d\n", info->gp_size);
+ if (info->spe_gp_size)
+ fprintf (stderr, "\tspe_gp_size = %5d\n", info->spe_gp_size);
+
if (info->fp_size)
fprintf (stderr, "\tfp_size = %5d\n", info->fp_size);
@@ -8023,6 +9314,10 @@ debug_stack_info (info)
fprintf (stderr, "\taltivec_padding_size= %5d\n",
info->altivec_padding_size);
+ if (info->spe_padding_size)
+ fprintf (stderr, "\tspe_padding_size = %5d\n",
+ info->spe_padding_size);
+
if (info->lr_size)
fprintf (stderr, "\tlr_size = %5d\n", info->lr_size);
@@ -8666,8 +9961,9 @@ emit_frame_save (frame_reg, frame_ptr, mode, regno, offset, total_size)
/* Some cases that need register indexed addressing. */
if ((TARGET_ALTIVEC_ABI && ALTIVEC_VECTOR_MODE (mode))
- /* Add more cases here. */
- )
+ || (TARGET_SPE_ABI
+ && SPE_VECTOR_MODE (mode)
+ && !SPE_CONST_OFFSET_OK (offset)))
{
/* Whomever calls us must make sure r11 is available in the
flow path of instructions in the prologue. */
@@ -8694,6 +9990,30 @@ emit_frame_save (frame_reg, frame_ptr, mode, regno, offset, total_size)
rs6000_frame_related (insn, frame_ptr, total_size, replacea, replaceb);
}
+/* Emit an offset memory reference suitable for a frame store, while
+ converting to a valid addressing mode. */
+
+static rtx
+gen_frame_mem_offset (mode, reg, offset)
+ enum machine_mode mode;
+ rtx reg;
+ int offset;
+{
+ rtx int_rtx, offset_rtx;
+
+ int_rtx = GEN_INT (offset);
+
+ if (TARGET_SPE_ABI && SPE_VECTOR_MODE (mode))
+ {
+ offset_rtx = gen_rtx_REG (Pmode, FIXED_SCRATCH);
+ emit_move_insn (offset_rtx, int_rtx);
+ }
+ else
+ offset_rtx = int_rtx;
+
+ return gen_rtx_MEM (mode, gen_rtx_PLUS (Pmode, reg, offset_rtx));
+}
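Roughly, the two shapes this helper produces look as follows; this is an editorial sketch, assuming a 32-bit SPE ABI target where FIXED_SCRATCH is r14.

/* SPE vector save: the offset always goes through the scratch register,
   presumably because SPE vector loads/stores accept only a very small
   constant offset:

     gen_frame_mem_offset (V2SImode, frame_reg, 40)
       emits    (set (reg:SI 14) (const_int 40))
       returns  (mem:V2SI (plus:SI (reg frame_reg) (reg:SI 14)))

   Any other mode keeps the constant in the address:

     gen_frame_mem_offset (Pmode, frame_reg, 40)
       returns  (mem:SI (plus:SI (reg frame_reg) (const_int 40)))  */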
+
/* Emit function prologue as insns. */
void
@@ -8711,7 +10031,14 @@ rs6000_emit_prologue ()
int using_store_multiple;
HOST_WIDE_INT sp_offset = 0;
+ if (TARGET_SPE_ABI)
+ {
+ reg_mode = V2SImode;
+ reg_size = 8;
+ }
+
using_store_multiple = (TARGET_MULTIPLE && ! TARGET_POWERPC64
+ && !TARGET_SPE_ABI
&& info->first_gp_reg_save < 31);
saving_FPRs_inline = (info->first_fp_reg_save == 64
|| FP_SAVE_INLINE (info->first_fp_reg_save));
@@ -8894,10 +10221,49 @@ rs6000_emit_prologue ()
|| (i+info->first_gp_reg_save == RS6000_PIC_OFFSET_TABLE_REGNUM
&& ((DEFAULT_ABI == ABI_V4 && flag_pic == 1)
|| (DEFAULT_ABI == ABI_DARWIN && flag_pic))))
- emit_frame_save (frame_reg_rtx, frame_ptr_rtx, reg_mode,
- info->first_gp_reg_save + i,
- info->gp_save_offset + sp_offset + reg_size * i,
- info->total_size);
+ {
+ rtx addr, reg, mem;
+ reg = gen_rtx_REG (reg_mode, info->first_gp_reg_save + i);
+
+ if (TARGET_SPE_ABI)
+ {
+ int offset = info->spe_gp_save_offset + sp_offset + 8 * i;
+ rtx b;
+
+ if (!SPE_CONST_OFFSET_OK (offset))
+ {
+ b = gen_rtx_REG (Pmode, FIXED_SCRATCH);
+ emit_move_insn (b, GEN_INT (offset));
+ }
+ else
+ b = GEN_INT (offset);
+
+ addr = gen_rtx_PLUS (Pmode, frame_reg_rtx, b);
+ mem = gen_rtx_MEM (V2SImode, addr);
+ set_mem_alias_set (mem, rs6000_sr_alias_set);
+ insn = emit_move_insn (mem, reg);
+
+ if (GET_CODE (b) == CONST_INT)
+ rs6000_frame_related (insn, frame_ptr_rtx, info->total_size,
+ NULL_RTX, NULL_RTX);
+ else
+ rs6000_frame_related (insn, frame_ptr_rtx, info->total_size,
+ b, GEN_INT (offset));
+ }
+ else
+ {
+ addr = gen_rtx_PLUS (Pmode, frame_reg_rtx,
+ GEN_INT (info->gp_save_offset
+ + sp_offset
+ + reg_size * i));
+ mem = gen_rtx_MEM (reg_mode, addr);
+ set_mem_alias_set (mem, rs6000_sr_alias_set);
+
+ insn = emit_move_insn (mem, reg);
+ rs6000_frame_related (insn, frame_ptr_rtx, info->total_size,
+ NULL_RTX, NULL_RTX);
+ }
+ }
}
/* ??? There's no need to emit actual instructions here, but it's the
@@ -8970,7 +10336,7 @@ rs6000_emit_prologue ()
/* Set frame pointer, if needed. */
if (frame_pointer_needed)
{
- insn = emit_move_insn (gen_rtx_REG (reg_mode, FRAME_POINTER_REGNUM),
+ insn = emit_move_insn (gen_rtx_REG (Pmode, FRAME_POINTER_REGNUM),
sp_reg_rtx);
RTX_FRAME_RELATED_P (insn) = 1;
}
@@ -9097,8 +10463,15 @@ rs6000_emit_epilogue (sibcall)
int reg_size = TARGET_POWERPC64 ? 8 : 4;
int i;
+ if (TARGET_SPE_ABI)
+ {
+ reg_mode = V2SImode;
+ reg_size = 8;
+ }
+
info = rs6000_stack_info ();
using_load_multiple = (TARGET_MULTIPLE && ! TARGET_POWERPC64
+ && !TARGET_SPE_ABI
&& info->first_gp_reg_save < 31);
restoring_FPRs_inline = (sibcall
|| current_function_calls_eh_return
@@ -9183,9 +10556,8 @@ rs6000_emit_epilogue (sibcall)
/* Get the old lr if we saved it. */
if (info->lr_save_p)
{
- rtx addr = gen_rtx_PLUS (Pmode, frame_reg_rtx,
- GEN_INT (info->lr_save_offset + sp_offset));
- rtx mem = gen_rtx_MEM (Pmode, addr);
+ rtx mem = gen_frame_mem_offset (Pmode, frame_reg_rtx,
+ info->lr_save_offset + sp_offset);
set_mem_alias_set (mem, rs6000_sr_alias_set);
@@ -9216,16 +10588,15 @@ rs6000_emit_epilogue (sibcall)
for (i = 0; ; ++i)
{
- rtx addr, mem;
+ rtx mem;
regno = EH_RETURN_DATA_REGNO (i);
if (regno == INVALID_REGNUM)
break;
- addr = plus_constant (frame_reg_rtx,
- info->ehrd_offset + sp_offset
- + reg_size * (int) i);
- mem = gen_rtx_MEM (reg_mode, addr);
+ mem = gen_frame_mem_offset (reg_mode, frame_reg_rtx,
+ info->ehrd_offset + sp_offset
+ + reg_size * (int) i);
set_mem_alias_set (mem, rs6000_sr_alias_set);
emit_move_insn (gen_rtx_REG (reg_mode, regno), mem);
@@ -9269,11 +10640,28 @@ rs6000_emit_epilogue (sibcall)
+ reg_size * i));
rtx mem = gen_rtx_MEM (reg_mode, addr);
+ /* Restore 64-bit quantities for SPE. */
+ if (TARGET_SPE_ABI)
+ {
+ int offset = info->spe_gp_save_offset + sp_offset + 8 * i;
+ rtx b;
+
+ if (!SPE_CONST_OFFSET_OK (offset))
+ {
+ b = gen_rtx_REG (Pmode, FIXED_SCRATCH);
+ emit_move_insn (b, GEN_INT (offset));
+ }
+ else
+ b = GEN_INT (offset);
+
+ addr = gen_rtx_PLUS (Pmode, frame_reg_rtx, b);
+ mem = gen_rtx_MEM (V2SImode, addr);
+ }
+
set_mem_alias_set (mem, rs6000_sr_alias_set);
emit_move_insn (gen_rtx_REG (reg_mode,
- info->first_gp_reg_save + i),
- mem);
+ info->first_gp_reg_save + i), mem);
}
/* Restore fpr's if we need to do it without calling a function. */
diff --git a/gcc/config/rs6000/rs6000.h b/gcc/config/rs6000/rs6000.h
index b63bb78649c..402490b68b5 100644
--- a/gcc/config/rs6000/rs6000.h
+++ b/gcc/config/rs6000/rs6000.h
@@ -87,6 +87,7 @@ Boston, MA 02111-1307, USA. */
%{mcpu=821: -mppc} \
%{mcpu=823: -mppc} \
%{mcpu=860: -mppc} \
+%{mcpu=8540: -me500} \
%{maltivec: -maltivec}"
#define CPP_DEFAULT_SPEC ""
@@ -358,6 +359,7 @@ enum processor_type
PROCESSOR_PPC750,
PROCESSOR_PPC7400,
PROCESSOR_PPC7450,
+ PROCESSOR_PPC8540,
PROCESSOR_POWER4
};
@@ -393,6 +395,8 @@ extern enum processor_type rs6000_cpu;
{"abi=", &rs6000_abi_string, N_("Specify ABI to use") }, \
{"long-double-", &rs6000_long_double_size_string, \
N_("Specify size of long double (64 or 128 bits)") }, \
+ {"isel=", &rs6000_isel_string, \
+ N_("Specify yes/no if isel instructions should be generated") }, \
{"vrsave=", &rs6000_altivec_vrsave_string, \
N_("Specify yes/no if VRSAVE instructions should be generated for AltiVec") }, \
{"longcall", &rs6000_longcall_switch, \
@@ -426,6 +430,10 @@ extern int rs6000_debug_arg; /* debug argument handling */
extern const char *rs6000_long_double_size_string;
extern int rs6000_long_double_type_size;
extern int rs6000_altivec_abi;
+extern int rs6000_spe_abi;
+extern int rs6000_isel;
+extern int rs6000_fprs;
+extern const char *rs6000_isel_string;
extern const char *rs6000_altivec_vrsave_string;
extern int rs6000_altivec_vrsave;
extern const char *rs6000_longcall_switch;
@@ -435,6 +443,11 @@ extern int rs6000_default_long_calls;
#define TARGET_ALTIVEC_ABI rs6000_altivec_abi
#define TARGET_ALTIVEC_VRSAVE rs6000_altivec_vrsave
+#define TARGET_SPE_ABI 0
+#define TARGET_SPE 0
+#define TARGET_ISEL 0
+#define TARGET_FPRS 1
+
/* Sometimes certain combinations of command options do not make sense
on a particular target machine. You can define a macro
`OVERRIDE_OPTIONS' to take account of this. This macro, if
@@ -508,6 +521,7 @@ extern int rs6000_default_long_calls;
#define MIN_UNITS_PER_WORD 4
#define UNITS_PER_FP_WORD 8
#define UNITS_PER_ALTIVEC_WORD 16
+#define UNITS_PER_SPE_WORD 8
/* Type used for ptrdiff_t, as a string used in a declaration. */
#define PTRDIFF_TYPE "int"
@@ -589,7 +603,8 @@ extern int rs6000_default_long_calls;
local store. TYPE is the data type, and ALIGN is the alignment
that the object would ordinarily have. */
#define LOCAL_ALIGNMENT(TYPE, ALIGN) \
- ((TARGET_ALTIVEC && TREE_CODE (TYPE) == VECTOR_TYPE) ? 128 : ALIGN)
+ ((TARGET_ALTIVEC && TREE_CODE (TYPE) == VECTOR_TYPE) ? 128 : \
+ (TARGET_SPE && TREE_CODE (TYPE) == VECTOR_TYPE) ? 64 : ALIGN)
/* Alignment of field after `int : 0' in a structure. */
#define EMPTY_FIELD_BOUNDARY 32
@@ -597,6 +612,18 @@ extern int rs6000_default_long_calls;
/* Every structure's size must be a multiple of this. */
#define STRUCTURE_SIZE_BOUNDARY 8
+/* Return 1 if a structure or array containing FIELD should be
+ accessed using `BLKMODE'.
+
+ For the SPE, simd types are V2SI, and gcc can be tempted to put the
+ entire thing in a DI and use subregs to access the internals.
+ store_bit_field() will force (subreg:DI (reg:V2SI x))'s to the
+ back-end. Because a single GPR can hold a V2SI, but not a DI, the
+ best thing to do is set structs to BLKmode and avoid Severe Tire
+ Damage. */
+#define MEMBER_TYPE_FORCES_BLK(FIELD, MODE) \
+ (TARGET_SPE && TREE_CODE (TREE_TYPE (FIELD)) == VECTOR_TYPE)
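The situation the comment describes can be illustrated with a hypothetical struct (editorial; the typedef is an assumption, not something this patch defines):

/* typedef int v2si __attribute__ ((vector_size (8)));
   struct wrap { v2si v; };

   Without this macro an 8-byte struct like `wrap' could be given DImode,
   and member accesses would then appear as (subreg:DI (reg:V2SI ...)).
   Forcing BLKmode keeps the member in V2SImode, which one 64-bit SPE GPR
   can hold, whereas DImode would need a GPR pair.  */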
+
/* A bitfield declared as `int' forces `int' alignment for the struct. */
#define PCC_BITFIELD_TYPE_MATTERS 1
@@ -611,7 +638,7 @@ extern int rs6000_default_long_calls;
/* Make arrays of chars word-aligned for the same reasons.
Align vectors to 128 bits. */
#define DATA_ALIGNMENT(TYPE, ALIGN) \
- (TREE_CODE (TYPE) == VECTOR_TYPE ? 128 \
+ (TREE_CODE (TYPE) == VECTOR_TYPE ? (TARGET_SPE_ABI ? 64 : 128) \
: TREE_CODE (TYPE) == ARRAY_TYPE \
&& TYPE_MODE (TREE_TYPE (TYPE)) == QImode \
&& (ALIGN) < BITS_PER_WORD ? BITS_PER_WORD : (ALIGN))
@@ -650,7 +677,7 @@ extern int rs6000_default_long_calls;
a register, in order to work around problems in allocating stack storage
in inline functions. */
-#define FIRST_PSEUDO_REGISTER 111
+#define FIRST_PSEUDO_REGISTER 113
/* This must be included for pre gcc 3.0 glibc compatibility. */
#define PRE_GCC3_DWARF_FRAME_REGISTERS 77
@@ -675,6 +702,7 @@ extern int rs6000_default_long_calls;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
1, 1 \
+ , 1, 1 \
}
/* 1 for registers not available across function calls.
@@ -694,6 +722,7 @@ extern int rs6000_default_long_calls;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
1, 1 \
+ , 1, 1 \
}
/* Like `CALL_USED_REGISTERS' except this macro doesn't require that
@@ -712,6 +741,7 @@ extern int rs6000_default_long_calls;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0 \
+ , 0, 0 \
}
#define MQ_REGNO 64
@@ -727,6 +757,8 @@ extern int rs6000_default_long_calls;
#define TOTAL_ALTIVEC_REGS (LAST_ALTIVEC_REGNO - FIRST_ALTIVEC_REGNO + 1)
#define VRSAVE_REGNO 109
#define VSCR_REGNO 110
+#define SPE_ACC_REGNO 111
+#define SPEFSCR_REGNO 112
/* List the order in which to allocate registers. Each register must be
listed once, even those in FIXED_REGISTERS.
@@ -750,6 +782,7 @@ extern int rs6000_default_long_calls;
ctr (not saved; when we have the choice ctr is better)
lr (saved)
cr5, r1, r2, ap, xer, vrsave, vscr (fixed)
+ spe_acc, spefscr (fixed)
AltiVec registers:
v0 - v1 (not saved or used for anything)
@@ -781,6 +814,7 @@ extern int rs6000_default_long_calls;
96, 95, 94, 93, 92, 91, \
108, 107, 106, 105, 104, 103, 102, 101, 100, 99, 98, \
97, 109, 110 \
+ , 111, 112 \
}
/* True if register is floating-point. */
@@ -795,6 +829,9 @@ extern int rs6000_default_long_calls;
/* True if register is an integer register. */
#define INT_REGNO_P(N) ((N) <= 31 || (N) == ARG_POINTER_REGNUM)
+/* SPE SIMD registers are just the GPRs. */
+#define SPE_SIMD_REGNO_P(N) ((N) <= 31)
+
/* True if register is the XER register. */
#define XER_REGNO_P(N) ((N) == XER_REGNO)
@@ -806,12 +843,18 @@ extern int rs6000_default_long_calls;
This is ordinarily the length in words of a value of mode MODE
but can be less for certain modes in special long registers.
+ For the SPE, GPRs are 64 bits but only 32 bits are visible in
+ scalar instructions. The upper 32 bits are only available to the
+ SIMD instructions.
+
POWER and PowerPC GPRs hold 32 bits worth;
PowerPC64 GPRs and FPRs hold 64 bits worth. */
#define HARD_REGNO_NREGS(REGNO, MODE) \
(FP_REGNO_P (REGNO) \
? ((GET_MODE_SIZE (MODE) + UNITS_PER_FP_WORD - 1) / UNITS_PER_FP_WORD) \
+ : (SPE_SIMD_REGNO_P (REGNO) && TARGET_SPE && SPE_VECTOR_MODE (MODE)) \
+ ? ((GET_MODE_SIZE (MODE) + UNITS_PER_SPE_WORD - 1) / UNITS_PER_SPE_WORD) \
: ALTIVEC_REGNO_P (REGNO) \
? ((GET_MODE_SIZE (MODE) + UNITS_PER_ALTIVEC_WORD - 1) / UNITS_PER_ALTIVEC_WORD) \
: ((GET_MODE_SIZE (MODE) + UNITS_PER_WORD - 1) / UNITS_PER_WORD))
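Two worked instances of the formula on a 32-bit SPE target (editorial, not part of the patch):

/* V2SImode in a GPR: (8 + 8 - 1) / 8 = 1 register, since the SPE vector
   occupies the full 64-bit GPR (UNITS_PER_SPE_WORD == 8).
   DImode in a GPR:   (8 + 4 - 1) / 4 = 2 registers, since only the low
   32 bits of each GPR are visible to scalar instructions
   (UNITS_PER_WORD == 4).  */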
@@ -822,12 +865,18 @@ extern int rs6000_default_long_calls;
|| (MODE) == V4SFmode \
|| (MODE) == V4SImode)
+#define SPE_VECTOR_MODE(MODE) \
+ ((MODE) == V4HImode \
+ || (MODE) == V2SFmode \
+ || (MODE) == V2SImode)
+
/* Define this macro to be nonzero if the port is prepared to handle
insns involving vector mode MODE. At the very least, it must have
move patterns for this mode. */
-#define VECTOR_MODE_SUPPORTED_P(MODE) \
- (TARGET_ALTIVEC && ALTIVEC_VECTOR_MODE (MODE))
+#define VECTOR_MODE_SUPPORTED_P(MODE) \
+ ((TARGET_SPE && SPE_VECTOR_MODE (MODE)) \
+ || (TARGET_ALTIVEC && ALTIVEC_VECTOR_MODE (MODE)))
/* Value is 1 if hard register REGNO can hold a value of machine-mode MODE.
For POWER and PowerPC, the GPRs can hold any mode, but the float
@@ -841,6 +890,7 @@ extern int rs6000_default_long_calls;
|| (GET_MODE_CLASS (MODE) == MODE_INT \
&& GET_MODE_SIZE (MODE) == UNITS_PER_FP_WORD)) \
: ALTIVEC_REGNO_P (REGNO) ? ALTIVEC_VECTOR_MODE (MODE) \
+ : SPE_SIMD_REGNO_P (REGNO) && TARGET_SPE && SPE_VECTOR_MODE (MODE) ? 1 \
: CR_REGNO_P (REGNO) ? GET_MODE_CLASS (MODE) == MODE_CC \
: XER_REGNO_P (REGNO) ? (MODE) == PSImode \
: ! INT_REGNO_P (REGNO) ? (GET_MODE_CLASS (MODE) == MODE_INT \
@@ -905,6 +955,20 @@ extern int rs6000_default_long_calls;
#define BRANCH_COST 3
+
+/* A fixed register used at prologue and epilogue generation to fix
+ addressing modes. The SPE needs heavy addressing fixes at the last
+ minute, and it's best to save a register for it.
+
+ AltiVec also needs fixes, but we've gotten around it by using r11, which
+ is actually wrong because when use_backchain_to_restore_sp is true,
+ we end up clobbering r11.
+
+ The AltiVec case needs to be fixed. Dunno if we should break ABI
+ compatibility and reserve a register for it as well. */
+
+#define FIXED_SCRATCH (TARGET_SPE ? 14 : 11)
+
/* Define this macro to change register usage conditional on target flags.
Set MQ register fixed (already call_used) if not POWER architecture
(RIOS1, RIOS2, RSC, and PPC601) so that it will not be allocated.
@@ -919,7 +983,7 @@ extern int rs6000_default_long_calls;
if (TARGET_64BIT) \
fixed_regs[13] = call_used_regs[13] \
= call_really_used_regs[13] = 1; \
- if (TARGET_SOFT_FLOAT) \
+ if (TARGET_SOFT_FLOAT || !TARGET_FPRS) \
for (i = 32; i < 64; i++) \
fixed_regs[i] = call_used_regs[i] \
= call_really_used_regs[i] = 1; \
@@ -937,6 +1001,13 @@ extern int rs6000_default_long_calls;
= call_really_used_regs[RS6000_PIC_OFFSET_TABLE_REGNUM] = 1; \
if (TARGET_ALTIVEC) \
global_regs[VSCR_REGNO] = 1; \
+ if (TARGET_SPE) \
+ { \
+ global_regs[SPEFSCR_REGNO] = 1; \
+ fixed_regs[FIXED_SCRATCH] \
+ = call_used_regs[FIXED_SCRATCH] \
+ = call_really_used_regs[FIXED_SCRATCH] = 1; \
+ } \
if (! TARGET_ALTIVEC) \
{ \
for (i = FIRST_ALTIVEC_REGNO; i <= LAST_ALTIVEC_REGNO; ++i) \
@@ -1022,6 +1093,8 @@ enum reg_class
ALTIVEC_REGS,
VRSAVE_REGS,
VSCR_REGS,
+ SPE_ACC_REGS,
+ SPEFSCR_REGS,
NON_SPECIAL_REGS,
MQ_REGS,
LINK_REGS,
@@ -1050,6 +1123,8 @@ enum reg_class
"ALTIVEC_REGS", \
"VRSAVE_REGS", \
"VSCR_REGS", \
+ "SPE_ACC_REGS", \
+ "SPEFSCR_REGS", \
"NON_SPECIAL_REGS", \
"MQ_REGS", \
"LINK_REGS", \
@@ -1077,6 +1152,8 @@ enum reg_class
{ 0x00000000, 0x00000000, 0xffffe000, 0x00001fff }, /* ALTIVEC_REGS */ \
{ 0x00000000, 0x00000000, 0x00000000, 0x00002000 }, /* VRSAVE_REGS */ \
{ 0x00000000, 0x00000000, 0x00000000, 0x00004000 }, /* VSCR_REGS */ \
+ { 0x00000000, 0x00000000, 0x00000000, 0x00008000 }, /* SPE_ACC_REGS */ \
+ { 0x00000000, 0x00000000, 0x00000000, 0x00010000 }, /* SPEFSCR_REGS */ \
{ 0xffffffff, 0xffffffff, 0x00000008, 0x00000000 }, /* NON_SPECIAL_REGS */ \
{ 0x00000000, 0x00000000, 0x00000001, 0x00000000 }, /* MQ_REGS */ \
{ 0x00000000, 0x00000000, 0x00000002, 0x00000000 }, /* LINK_REGS */ \
@@ -1110,6 +1187,8 @@ enum reg_class
: (REGNO) == XER_REGNO ? XER_REGS \
: (REGNO) == VRSAVE_REGNO ? VRSAVE_REGS \
: (REGNO) == VSCR_REGNO ? VRSAVE_REGS \
+ : (REGNO) == SPE_ACC_REGNO ? SPE_ACC_REGS \
+ : (REGNO) == SPEFSCR_REGNO ? SPEFSCR_REGS \
: NO_REGS)
/* The class value for index registers, and the one for base regs. */
@@ -1289,6 +1368,7 @@ typedef struct rs6000_stack {
int lr_save_offset; /* offset to save LR from initial SP */
int cr_save_offset; /* offset to save CR from initial SP */
int vrsave_save_offset; /* offset to save VRSAVE from initial SP */
+ int spe_gp_save_offset; /* offset to save spe 64-bit gprs */
int toc_save_offset; /* offset to save the TOC pointer */
int varargs_save_offset; /* offset to save the varargs registers */
int ehrd_offset; /* offset to EH return data */
@@ -1306,6 +1386,8 @@ typedef struct rs6000_stack {
int vrsave_size; /* size to hold VRSAVE if not in save_size */
int altivec_padding_size; /* size of altivec alignment padding if
not in save_size */
+ int spe_gp_size; /* size of 64-bit GPR save size for SPE */
+ int spe_padding_size; /* size of spe alignment padding if not in save_size */
int toc_size; /* size to hold TOC if not in save_size */
int total_size; /* total bytes allocated for stack */
} rs6000_stack_t;
@@ -1424,6 +1506,8 @@ typedef struct rs6000_stack {
If the precise function being called is known, FUNC is its FUNCTION_DECL;
otherwise, FUNC is 0.
+ On the SPE, both FPs and vectors are returned in r3.
+
On RS/6000 an integer value is in r3 and a floating-point value is in
fp1, unless -msoft-float. */
@@ -1434,7 +1518,11 @@ typedef struct rs6000_stack {
? word_mode : TYPE_MODE (VALTYPE), \
TREE_CODE (VALTYPE) == VECTOR_TYPE \
&& TARGET_ALTIVEC ? ALTIVEC_ARG_RETURN \
- : TREE_CODE (VALTYPE) == REAL_TYPE && TARGET_HARD_FLOAT \
+ : TREE_CODE (VALTYPE) == REAL_TYPE \
+ && TARGET_SPE_ABI && !TARGET_FPRS \
+ ? GP_ARG_RETURN \
+ : TREE_CODE (VALTYPE) == REAL_TYPE \
+ && TARGET_HARD_FLOAT && TARGET_FPRS \
? FP_ARG_RETURN : GP_ARG_RETURN)
/* Define how to find the value returned by a library function
@@ -1443,7 +1531,7 @@ typedef struct rs6000_stack {
#define LIBCALL_VALUE(MODE) \
gen_rtx_REG (MODE, ALTIVEC_VECTOR_MODE (MODE) ? ALTIVEC_ARG_RETURN \
: GET_MODE_CLASS (MODE) == MODE_FLOAT \
- && TARGET_HARD_FLOAT \
+ && TARGET_HARD_FLOAT && TARGET_FPRS \
? FP_ARG_RETURN : GP_ARG_RETURN)
/* The AIX ABI for the RS/6000 specifies that all structures are
@@ -1601,7 +1689,7 @@ typedef struct rs6000_args
#define USE_FP_FOR_ARG_P(CUM,MODE,TYPE) \
(GET_MODE_CLASS (MODE) == MODE_FLOAT \
&& (CUM).fregno <= FP_ARG_MAX_REG \
- && TARGET_HARD_FLOAT)
+ && TARGET_HARD_FLOAT && TARGET_FPRS)
/* Non-zero if we can use an AltiVec register to pass this arg. */
#define USE_ALTIVEC_FOR_ARG_P(CUM,MODE,TYPE) \
@@ -1937,6 +2025,9 @@ typedef struct rs6000_args
#define TOC_RELATIVE_EXPR_P(X) (toc_relative_expr_p (X))
+/* SPE offset addressing is limited to 5 bits. */
+#define SPE_CONST_OFFSET_OK(x) (((x) & ~0x1f) == 0)
+
#define LEGITIMATE_CONSTANT_POOL_ADDRESS_P(X) \
(TARGET_TOC \
&& GET_CODE (X) == PLUS \
@@ -1961,6 +2052,9 @@ typedef struct rs6000_args
&& LEGITIMATE_ADDRESS_INTEGER_P (XEXP (X, 1), 0) \
&& (! ALTIVEC_VECTOR_MODE (MODE) \
|| (GET_CODE (XEXP (X,1)) == CONST_INT && INTVAL (XEXP (X,1)) == 0)) \
+ && (! SPE_VECTOR_MODE (MODE) \
+ || (GET_CODE (XEXP (X, 1)) == CONST_INT \
+ && SPE_CONST_OFFSET_OK (INTVAL (XEXP (X, 1))))) \
&& (((MODE) != DFmode && (MODE) != DImode) \
|| (TARGET_32BIT \
? LEGITIMATE_ADDRESS_INTEGER_P (XEXP (X, 1), 4) \
@@ -1988,7 +2082,7 @@ typedef struct rs6000_args
&& ! flag_pic && ! TARGET_TOC \
&& GET_MODE_NUNITS (MODE) == 1 \
&& (GET_MODE_BITSIZE (MODE) <= 32 \
- || (TARGET_HARD_FLOAT && (MODE) == DFmode)) \
+ || (TARGET_HARD_FLOAT && TARGET_FPRS && (MODE) == DFmode)) \
&& GET_CODE (X) == LO_SUM \
&& GET_CODE (XEXP (X, 0)) == REG \
&& INT_REG_OK_FOR_BASE_P (XEXP (X, 0), (STRICT)) \
@@ -2315,6 +2409,7 @@ do { \
? COSTS_N_INSNS (21) \
: COSTS_N_INSNS (37)); \
case PROCESSOR_PPC750: \
+ case PROCESSOR_PPC8540: \
case PROCESSOR_PPC7400: \
return COSTS_N_INSNS (19); \
case PROCESSOR_PPC7450: \
@@ -2600,6 +2695,8 @@ extern char rs6000_reg_names[][8]; /* register names (0 vs. %r0). */
&rs6000_reg_names[108][0], /* v31 */ \
&rs6000_reg_names[109][0], /* vrsave */ \
&rs6000_reg_names[110][0], /* vscr */ \
+ &rs6000_reg_names[111][0], /* spe_acc */ \
+ &rs6000_reg_names[112][0], /* spefscr */ \
}
/* print-rtl can't handle the above REGISTER_NAMES, so define the
@@ -2624,6 +2721,7 @@ extern char rs6000_reg_names[][8]; /* register names (0 vs. %r0). */
"v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23", \
"v24", "v25", "v26", "v27", "v28", "v29", "v30", "v31", \
"vrsave", "vscr" \
+ , "spe_acc", "spefscr" \
}
/* Table of additional register names to use in user input. */
@@ -2654,6 +2752,7 @@ extern char rs6000_reg_names[][8]; /* register names (0 vs. %r0). */
{"v24", 101},{"v25", 102},{"v26", 103},{"v27", 104}, \
{"v28", 105},{"v29", 106},{"v30", 107},{"v31", 108}, \
{"vrsave", 109}, {"vscr", 110}, \
+ {"spe_acc", 111}, {"spefscr", 112}, \
/* no additional names for: mq, lr, ctr, ap */ \
{"cr0", 68}, {"cr1", 69}, {"cr2", 70}, {"cr3", 71}, \
{"cr4", 72}, {"cr5", 73}, {"cr6", 74}, {"cr7", 75}, \
@@ -3003,4 +3102,246 @@ enum rs6000_builtins
ALTIVEC_BUILTIN_ABS_V4SF,
ALTIVEC_BUILTIN_ABS_V8HI,
ALTIVEC_BUILTIN_ABS_V16QI
+ /* SPE builtins. */
+ , SPE_BUILTIN_EVADDW,
+ SPE_BUILTIN_EVAND,
+ SPE_BUILTIN_EVANDC,
+ SPE_BUILTIN_EVDIVWS,
+ SPE_BUILTIN_EVDIVWU,
+ SPE_BUILTIN_EVEQV,
+ SPE_BUILTIN_EVFSADD,
+ SPE_BUILTIN_EVFSDIV,
+ SPE_BUILTIN_EVFSMUL,
+ SPE_BUILTIN_EVFSSUB,
+ SPE_BUILTIN_EVLDDX,
+ SPE_BUILTIN_EVLDHX,
+ SPE_BUILTIN_EVLDWX,
+ SPE_BUILTIN_EVLHHESPLATX,
+ SPE_BUILTIN_EVLHHOSSPLATX,
+ SPE_BUILTIN_EVLHHOUSPLATX,
+ SPE_BUILTIN_EVLWHEX,
+ SPE_BUILTIN_EVLWHOSX,
+ SPE_BUILTIN_EVLWHOUX,
+ SPE_BUILTIN_EVLWHSPLATX,
+ SPE_BUILTIN_EVLWWSPLATX,
+ SPE_BUILTIN_EVMERGEHI,
+ SPE_BUILTIN_EVMERGEHILO,
+ SPE_BUILTIN_EVMERGELO,
+ SPE_BUILTIN_EVMERGELOHI,
+ SPE_BUILTIN_EVMHEGSMFAA,
+ SPE_BUILTIN_EVMHEGSMFAN,
+ SPE_BUILTIN_EVMHEGSMIAA,
+ SPE_BUILTIN_EVMHEGSMIAN,
+ SPE_BUILTIN_EVMHEGUMIAA,
+ SPE_BUILTIN_EVMHEGUMIAN,
+ SPE_BUILTIN_EVMHESMF,
+ SPE_BUILTIN_EVMHESMFA,
+ SPE_BUILTIN_EVMHESMFAAW,
+ SPE_BUILTIN_EVMHESMFANW,
+ SPE_BUILTIN_EVMHESMI,
+ SPE_BUILTIN_EVMHESMIA,
+ SPE_BUILTIN_EVMHESMIAAW,
+ SPE_BUILTIN_EVMHESMIANW,
+ SPE_BUILTIN_EVMHESSF,
+ SPE_BUILTIN_EVMHESSFA,
+ SPE_BUILTIN_EVMHESSFAAW,
+ SPE_BUILTIN_EVMHESSFANW,
+ SPE_BUILTIN_EVMHESSIAAW,
+ SPE_BUILTIN_EVMHESSIANW,
+ SPE_BUILTIN_EVMHEUMI,
+ SPE_BUILTIN_EVMHEUMIA,
+ SPE_BUILTIN_EVMHEUMIAAW,
+ SPE_BUILTIN_EVMHEUMIANW,
+ SPE_BUILTIN_EVMHEUSIAAW,
+ SPE_BUILTIN_EVMHEUSIANW,
+ SPE_BUILTIN_EVMHOGSMFAA,
+ SPE_BUILTIN_EVMHOGSMFAN,
+ SPE_BUILTIN_EVMHOGSMIAA,
+ SPE_BUILTIN_EVMHOGSMIAN,
+ SPE_BUILTIN_EVMHOGUMIAA,
+ SPE_BUILTIN_EVMHOGUMIAN,
+ SPE_BUILTIN_EVMHOSMF,
+ SPE_BUILTIN_EVMHOSMFA,
+ SPE_BUILTIN_EVMHOSMFAAW,
+ SPE_BUILTIN_EVMHOSMFANW,
+ SPE_BUILTIN_EVMHOSMI,
+ SPE_BUILTIN_EVMHOSMIA,
+ SPE_BUILTIN_EVMHOSMIAAW,
+ SPE_BUILTIN_EVMHOSMIANW,
+ SPE_BUILTIN_EVMHOSSF,
+ SPE_BUILTIN_EVMHOSSFA,
+ SPE_BUILTIN_EVMHOSSFAAW,
+ SPE_BUILTIN_EVMHOSSFANW,
+ SPE_BUILTIN_EVMHOSSIAAW,
+ SPE_BUILTIN_EVMHOSSIANW,
+ SPE_BUILTIN_EVMHOUMI,
+ SPE_BUILTIN_EVMHOUMIA,
+ SPE_BUILTIN_EVMHOUMIAAW,
+ SPE_BUILTIN_EVMHOUMIANW,
+ SPE_BUILTIN_EVMHOUSIAAW,
+ SPE_BUILTIN_EVMHOUSIANW,
+ SPE_BUILTIN_EVMWHSMF,
+ SPE_BUILTIN_EVMWHSMFA,
+ SPE_BUILTIN_EVMWHSMI,
+ SPE_BUILTIN_EVMWHSMIA,
+ SPE_BUILTIN_EVMWHSSF,
+ SPE_BUILTIN_EVMWHSSFA,
+ SPE_BUILTIN_EVMWHUMI,
+ SPE_BUILTIN_EVMWHUMIA,
+ SPE_BUILTIN_EVMWLSMF,
+ SPE_BUILTIN_EVMWLSMFA,
+ SPE_BUILTIN_EVMWLSMFAAW,
+ SPE_BUILTIN_EVMWLSMFANW,
+ SPE_BUILTIN_EVMWLSMIAAW,
+ SPE_BUILTIN_EVMWLSMIANW,
+ SPE_BUILTIN_EVMWLSSF,
+ SPE_BUILTIN_EVMWLSSFA,
+ SPE_BUILTIN_EVMWLSSFAAW,
+ SPE_BUILTIN_EVMWLSSFANW,
+ SPE_BUILTIN_EVMWLSSIAAW,
+ SPE_BUILTIN_EVMWLSSIANW,
+ SPE_BUILTIN_EVMWLUMI,
+ SPE_BUILTIN_EVMWLUMIA,
+ SPE_BUILTIN_EVMWLUMIAAW,
+ SPE_BUILTIN_EVMWLUMIANW,
+ SPE_BUILTIN_EVMWLUSIAAW,
+ SPE_BUILTIN_EVMWLUSIANW,
+ SPE_BUILTIN_EVMWSMF,
+ SPE_BUILTIN_EVMWSMFA,
+ SPE_BUILTIN_EVMWSMFAA,
+ SPE_BUILTIN_EVMWSMFAN,
+ SPE_BUILTIN_EVMWSMI,
+ SPE_BUILTIN_EVMWSMIA,
+ SPE_BUILTIN_EVMWSMIAA,
+ SPE_BUILTIN_EVMWSMIAN,
+ SPE_BUILTIN_EVMWHSSFAA,
+ SPE_BUILTIN_EVMWSSF,
+ SPE_BUILTIN_EVMWSSFA,
+ SPE_BUILTIN_EVMWSSFAA,
+ SPE_BUILTIN_EVMWSSFAN,
+ SPE_BUILTIN_EVMWUMI,
+ SPE_BUILTIN_EVMWUMIA,
+ SPE_BUILTIN_EVMWUMIAA,
+ SPE_BUILTIN_EVMWUMIAN,
+ SPE_BUILTIN_EVNAND,
+ SPE_BUILTIN_EVNOR,
+ SPE_BUILTIN_EVOR,
+ SPE_BUILTIN_EVORC,
+ SPE_BUILTIN_EVRLW,
+ SPE_BUILTIN_EVSLW,
+ SPE_BUILTIN_EVSRWS,
+ SPE_BUILTIN_EVSRWU,
+ SPE_BUILTIN_EVSTDDX,
+ SPE_BUILTIN_EVSTDHX,
+ SPE_BUILTIN_EVSTDWX,
+ SPE_BUILTIN_EVSTWHEX,
+ SPE_BUILTIN_EVSTWHOX,
+ SPE_BUILTIN_EVSTWWEX,
+ SPE_BUILTIN_EVSTWWOX,
+ SPE_BUILTIN_EVSUBFW,
+ SPE_BUILTIN_EVXOR,
+ SPE_BUILTIN_EVABS,
+ SPE_BUILTIN_EVADDSMIAAW,
+ SPE_BUILTIN_EVADDSSIAAW,
+ SPE_BUILTIN_EVADDUMIAAW,
+ SPE_BUILTIN_EVADDUSIAAW,
+ SPE_BUILTIN_EVCNTLSW,
+ SPE_BUILTIN_EVCNTLZW,
+ SPE_BUILTIN_EVEXTSB,
+ SPE_BUILTIN_EVEXTSH,
+ SPE_BUILTIN_EVFSABS,
+ SPE_BUILTIN_EVFSCFSF,
+ SPE_BUILTIN_EVFSCFSI,
+ SPE_BUILTIN_EVFSCFUF,
+ SPE_BUILTIN_EVFSCFUI,
+ SPE_BUILTIN_EVFSCTSF,
+ SPE_BUILTIN_EVFSCTSI,
+ SPE_BUILTIN_EVFSCTSIZ,
+ SPE_BUILTIN_EVFSCTUF,
+ SPE_BUILTIN_EVFSCTUI,
+ SPE_BUILTIN_EVFSCTUIZ,
+ SPE_BUILTIN_EVFSNABS,
+ SPE_BUILTIN_EVFSNEG,
+ SPE_BUILTIN_EVMRA,
+ SPE_BUILTIN_EVNEG,
+ SPE_BUILTIN_EVRNDW,
+ SPE_BUILTIN_EVSUBFSMIAAW,
+ SPE_BUILTIN_EVSUBFSSIAAW,
+ SPE_BUILTIN_EVSUBFUMIAAW,
+ SPE_BUILTIN_EVSUBFUSIAAW,
+ SPE_BUILTIN_EVADDIW,
+ SPE_BUILTIN_EVLDD,
+ SPE_BUILTIN_EVLDH,
+ SPE_BUILTIN_EVLDW,
+ SPE_BUILTIN_EVLHHESPLAT,
+ SPE_BUILTIN_EVLHHOSSPLAT,
+ SPE_BUILTIN_EVLHHOUSPLAT,
+ SPE_BUILTIN_EVLWHE,
+ SPE_BUILTIN_EVLWHOS,
+ SPE_BUILTIN_EVLWHOU,
+ SPE_BUILTIN_EVLWHSPLAT,
+ SPE_BUILTIN_EVLWWSPLAT,
+ SPE_BUILTIN_EVRLWI,
+ SPE_BUILTIN_EVSLWI,
+ SPE_BUILTIN_EVSRWIS,
+ SPE_BUILTIN_EVSRWIU,
+ SPE_BUILTIN_EVSTDD,
+ SPE_BUILTIN_EVSTDH,
+ SPE_BUILTIN_EVSTDW,
+ SPE_BUILTIN_EVSTWHE,
+ SPE_BUILTIN_EVSTWHO,
+ SPE_BUILTIN_EVSTWWE,
+ SPE_BUILTIN_EVSTWWO,
+ SPE_BUILTIN_EVSUBIFW,
+
+ /* Compares. */
+ SPE_BUILTIN_EVCMPEQ,
+ SPE_BUILTIN_EVCMPGTS,
+ SPE_BUILTIN_EVCMPGTU,
+ SPE_BUILTIN_EVCMPLTS,
+ SPE_BUILTIN_EVCMPLTU,
+ SPE_BUILTIN_EVFSCMPEQ,
+ SPE_BUILTIN_EVFSCMPGT,
+ SPE_BUILTIN_EVFSCMPLT,
+ SPE_BUILTIN_EVFSTSTEQ,
+ SPE_BUILTIN_EVFSTSTGT,
+ SPE_BUILTIN_EVFSTSTLT,
+
+ /* EVSEL compares. */
+ SPE_BUILTIN_EVSEL_CMPEQ,
+ SPE_BUILTIN_EVSEL_CMPGTS,
+ SPE_BUILTIN_EVSEL_CMPGTU,
+ SPE_BUILTIN_EVSEL_CMPLTS,
+ SPE_BUILTIN_EVSEL_CMPLTU,
+ SPE_BUILTIN_EVSEL_FSCMPEQ,
+ SPE_BUILTIN_EVSEL_FSCMPGT,
+ SPE_BUILTIN_EVSEL_FSCMPLT,
+ SPE_BUILTIN_EVSEL_FSTSTEQ,
+ SPE_BUILTIN_EVSEL_FSTSTGT,
+ SPE_BUILTIN_EVSEL_FSTSTLT,
+
+ SPE_BUILTIN_EVSPLATFI,
+ SPE_BUILTIN_EVSPLATI,
+ SPE_BUILTIN_EVMWHSSMAA,
+ SPE_BUILTIN_EVMWHSMFAA,
+ SPE_BUILTIN_EVMWHSMIAA,
+ SPE_BUILTIN_EVMWHUSIAA,
+ SPE_BUILTIN_EVMWHUMIAA,
+ SPE_BUILTIN_EVMWHSSFAN,
+ SPE_BUILTIN_EVMWHSSIAN,
+ SPE_BUILTIN_EVMWHSMFAN,
+ SPE_BUILTIN_EVMWHSMIAN,
+ SPE_BUILTIN_EVMWHUSIAN,
+ SPE_BUILTIN_EVMWHUMIAN,
+ SPE_BUILTIN_EVMWHGSSFAA,
+ SPE_BUILTIN_EVMWHGSMFAA,
+ SPE_BUILTIN_EVMWHGSMIAA,
+ SPE_BUILTIN_EVMWHGUMIAA,
+ SPE_BUILTIN_EVMWHGSSFAN,
+ SPE_BUILTIN_EVMWHGSMFAN,
+ SPE_BUILTIN_EVMWHGSMIAN,
+ SPE_BUILTIN_EVMWHGUMIAN,
+ SPE_BUILTIN_MTSPEFSCR,
+ SPE_BUILTIN_MFSPEFSCR,
+ SPE_BUILTIN_BRINC
};
diff --git a/gcc/config/rs6000/rs6000.md b/gcc/config/rs6000/rs6000.md
index 5e2b5870098..52c86079cb6 100644
--- a/gcc/config/rs6000/rs6000.md
+++ b/gcc/config/rs6000/rs6000.md
@@ -2039,9 +2039,18 @@
(minus:SI (match_dup 2) (match_dup 1))))
(set (match_operand:SI 0 "gpc_reg_operand" "")
(minus:SI (match_dup 2) (match_dup 3)))]
- "TARGET_POWER"
+ "TARGET_POWER || TARGET_ISEL"
"
-{ operands[3] = gen_reg_rtx (SImode); }")
+{
+ if (TARGET_ISEL)
+ {
+ operands[2] = force_reg (SImode, operands[2]);
+ rs6000_emit_minmax (operands[0], SMIN, operands[1], operands[2]);
+ DONE;
+ }
+
+ operands[3] = gen_reg_rtx (SImode);
+}")
(define_split
[(set (match_operand:SI 0 "gpc_reg_operand" "")
@@ -2064,9 +2073,17 @@
(minus:SI (match_dup 2) (match_dup 1))))
(set (match_operand:SI 0 "gpc_reg_operand" "")
(plus:SI (match_dup 3) (match_dup 1)))]
- "TARGET_POWER"
+ "TARGET_POWER || TARGET_ISEL"
"
-{ operands[3] = gen_reg_rtx (SImode); }")
+{
+ if (TARGET_ISEL)
+ {
+ operands[2] = force_reg (SImode, operands[2]);
+ rs6000_emit_minmax (operands[0], SMAX, operands[1], operands[2]);
+ DONE;
+ }
+ operands[3] = gen_reg_rtx (SImode);
+}")
(define_split
[(set (match_operand:SI 0 "gpc_reg_operand" "")
@@ -2091,9 +2108,14 @@
(minus:SI (match_dup 4) (match_dup 3))))
(set (match_operand:SI 0 "gpc_reg_operand" "")
(minus:SI (match_dup 2) (match_dup 3)))]
- "TARGET_POWER"
+ "TARGET_POWER || TARGET_ISEL"
"
{
+ if (TARGET_ISEL)
+ {
+ rs6000_emit_minmax (operands[0], UMIN, operands[1], operands[2]);
+ DONE;
+ }
operands[3] = gen_reg_rtx (SImode);
operands[4] = gen_reg_rtx (SImode);
operands[5] = GEN_INT (-2147483647 - 1);
@@ -2109,9 +2131,14 @@
(minus:SI (match_dup 4) (match_dup 3))))
(set (match_operand:SI 0 "gpc_reg_operand" "")
(plus:SI (match_dup 3) (match_dup 1)))]
- "TARGET_POWER"
+ "TARGET_POWER || TARGET_ISEL"
"
{
+ if (TARGET_ISEL)
+ {
+ rs6000_emit_minmax (operands[0], UMAX, operands[1], operands[2]);
+ DONE;
+ }
operands[3] = gen_reg_rtx (SImode);
operands[4] = gen_reg_rtx (SImode);
operands[5] = GEN_INT (-2147483647 - 1);
@@ -2210,7 +2237,12 @@
""
"
{
- if (! TARGET_POWER)
+ if (TARGET_ISEL)
+ {
+ emit_insn (gen_abssi2_isel (operands[0], operands[1]));
+ DONE;
+ }
+ else if (! TARGET_POWER)
{
emit_insn (gen_abssi2_nopower (operands[0], operands[1]));
DONE;
@@ -2223,11 +2255,30 @@
"TARGET_POWER"
"abs %0,%1")
+(define_insn_and_split "abssi2_isel"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (abs:SI (match_operand:SI 1 "gpc_reg_operand" "b")))
+ (clobber (match_scratch:SI 2 "=b"))
+ (clobber (match_scratch:CC 3 "=y"))]
+ "TARGET_ISEL"
+ "#"
+ "&& reload_completed"
+ [(set (match_dup 2) (neg:SI (match_dup 1)))
+ (set (match_dup 3)
+ (compare:CC (match_dup 1)
+ (const_int 0)))
+ (set (match_dup 0)
+ (if_then_else:SI (ge (match_dup 3)
+ (const_int 0))
+ (match_dup 1)
+ (match_dup 2)))]
+ "")
+
(define_insn_and_split "abssi2_nopower"
[(set (match_operand:SI 0 "gpc_reg_operand" "=&r,r")
(abs:SI (match_operand:SI 1 "gpc_reg_operand" "r,0")))
(clobber (match_scratch:SI 2 "=&r,&r"))]
- "! TARGET_POWER"
+ "! TARGET_POWER && ! TARGET_ISEL"
"#"
"&& reload_completed"
[(set (match_dup 2) (ashiftrt:SI (match_dup 1) (const_int 31)))
@@ -5035,7 +5086,7 @@
(define_insn "extendsfdf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(float_extend:DF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"*
{
if (REGNO (operands[0]) == REGNO (operands[1]))
@@ -5048,35 +5099,47 @@
(define_insn "truncdfsf2"
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(float_truncate:SF (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"frsp %0,%1"
[(set_attr "type" "fp")])
(define_insn "aux_truncdfsf2"
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(unspec:SF [(match_operand:SF 1 "gpc_reg_operand" "f")] 0))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"frsp %0,%1"
[(set_attr "type" "fp")])
-(define_insn "negsf2"
+(define_expand "negsf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "")
+ (neg:SF (match_operand:SF 1 "gpc_reg_operand" "")))]
+ "TARGET_HARD_FLOAT"
+ "")
+
+(define_insn "*negsf2"
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(neg:SF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fneg %0,%1"
[(set_attr "type" "fp")])
-(define_insn "abssf2"
+(define_expand "abssf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "")
+ (abs:SF (match_operand:SF 1 "gpc_reg_operand" "")))]
+ "TARGET_HARD_FLOAT"
+ "")
+
+(define_insn "*abssf2"
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(abs:SF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fabs %0,%1"
[(set_attr "type" "fp")])
(define_insn ""
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(neg:SF (abs:SF (match_operand:SF 1 "gpc_reg_operand" "f"))))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fnabs %0,%1"
[(set_attr "type" "fp")])
@@ -5091,7 +5154,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(plus:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"fadds %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5099,7 +5162,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(plus:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"{fa|fadd} %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5114,7 +5177,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(minus:SF (match_operand:SF 1 "gpc_reg_operand" "f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsubs %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5122,7 +5185,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(minus:SF (match_operand:SF 1 "gpc_reg_operand" "f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"{fs|fsub} %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5137,7 +5200,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"fmuls %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5145,7 +5208,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"{fm|fmul} %0,%1,%2"
[(set_attr "type" "dmul")])
@@ -5160,7 +5223,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(div:SF (match_operand:SF 1 "gpc_reg_operand" "f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"fdivs %0,%1,%2"
[(set_attr "type" "sdiv")])
@@ -5168,7 +5231,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(div:SF (match_operand:SF 1 "gpc_reg_operand" "f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS"
"{fd|fdiv} %0,%1,%2"
[(set_attr "type" "ddiv")])
@@ -5177,7 +5240,7 @@
(plus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"fmadds %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5186,7 +5249,7 @@
(plus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fma|fmadd} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5195,7 +5258,7 @@
(minus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"fmsubs %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5204,7 +5267,7 @@
(minus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fms|fmsub} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5213,7 +5276,7 @@
(neg:SF (plus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f"))))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"fnmadds %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5222,7 +5285,7 @@
(neg:SF (plus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f"))))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fnma|fnmadd} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5231,7 +5294,7 @@
(neg:SF (minus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f"))))]
- "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"fnmsubs %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5240,27 +5303,27 @@
(neg:SF (minus:SF (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%f")
(match_operand:SF 2 "gpc_reg_operand" "f"))
(match_operand:SF 3 "gpc_reg_operand" "f"))))]
- "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "! TARGET_POWERPC && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fnms|fnmsub} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
(define_expand "sqrtsf2"
[(set (match_operand:SF 0 "gpc_reg_operand" "")
(sqrt:SF (match_operand:SF 1 "gpc_reg_operand" "")))]
- "(TARGET_PPC_GPOPT || TARGET_POWER2) && TARGET_HARD_FLOAT"
+ "(TARGET_PPC_GPOPT || TARGET_POWER2) && TARGET_HARD_FLOAT && TARGET_FPRS"
"")
(define_insn ""
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(sqrt:SF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "TARGET_PPC_GPOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GPOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsqrts %0,%1"
[(set_attr "type" "ssqrt")])
(define_insn ""
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(sqrt:SF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "TARGET_POWER2 && TARGET_HARD_FLOAT"
+ "TARGET_POWER2 && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsqrt %0,%1"
[(set_attr "type" "dsqrt")])
@@ -5274,7 +5337,7 @@
(match_operand:SF 2 "gpc_reg_operand" ""))
(match_dup 1)
(match_dup 2)))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"{ rs6000_emit_minmax (operands[0], SMAX, operands[1], operands[2]); DONE;}")
(define_expand "minsf3"
@@ -5283,7 +5346,7 @@
(match_operand:SF 2 "gpc_reg_operand" ""))
(match_dup 2)
(match_dup 1)))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"{ rs6000_emit_minmax (operands[0], SMIN, operands[1], operands[2]); DONE;}")
(define_split
@@ -5291,7 +5354,7 @@
(match_operator:SF 3 "min_max_operator"
[(match_operand:SF 1 "gpc_reg_operand" "")
(match_operand:SF 2 "gpc_reg_operand" "")]))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
[(const_int 0)]
"
{ rs6000_emit_minmax (operands[0], GET_CODE (operands[3]),
@@ -5299,12 +5362,60 @@
DONE;
}")
+(define_expand "movsicc"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "")
+ (if_then_else:SI (match_operand 1 "comparison_operator" "")
+ (match_operand:SI 2 "gpc_reg_operand" "")
+ (match_operand:SI 3 "gpc_reg_operand" "")))]
+ "TARGET_ISEL"
+ "
+{
+ if (rs6000_emit_cmove (operands[0], operands[1], operands[2], operands[3]))
+ DONE;
+ else
+ FAIL;
+}")
+
+;; We use the BASE_REGS for the isel input operands because, if rA is
+;; 0, the value 0 is placed in rD when the condition is true. Similarly
+;; for rB, because we may switch the operands and rB may end up being rA.
+;;
+;; We need 2 patterns: an unsigned and a signed pattern. We could
+;; leave out the mode in operand 4 and use one pattern, but reload can
+;; change the mode underneath our feet and then gets confused trying
+;; to reload the value.
+(define_insn "isel_signed"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (if_then_else:SI
+ (match_operator 1 "comparison_operator"
+ [(match_operand:CC 4 "cc_reg_operand" "y")
+ (const_int 0)])
+ (match_operand:SI 2 "gpc_reg_operand" "b")
+ (match_operand:SI 3 "gpc_reg_operand" "b")))]
+ "TARGET_ISEL"
+ "*
+{ return output_isel (operands); }"
+ [(set_attr "length" "4")])
+
+(define_insn "isel_unsigned"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (if_then_else:SI
+ (match_operator 1 "comparison_operator"
+ [(match_operand:CCUNS 4 "cc_reg_operand" "y")
+ (const_int 0)])
+ (match_operand:SI 2 "gpc_reg_operand" "b")
+ (match_operand:SI 3 "gpc_reg_operand" "b")))]
+ "TARGET_ISEL"
+ "*
+{ return output_isel (operands); }"
+ [(set_attr "length" "4")])
+
(define_expand "movsfcc"
[(set (match_operand:SF 0 "gpc_reg_operand" "")
(if_then_else:SF (match_operand 1 "comparison_operator" "")
(match_operand:SF 2 "gpc_reg_operand" "")
(match_operand:SF 3 "gpc_reg_operand" "")))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
if (rs6000_emit_cmove (operands[0], operands[1], operands[2], operands[3]))
@@ -5319,7 +5430,7 @@
(match_operand:SF 4 "zero_fp_constant" "F"))
(match_operand:SF 2 "gpc_reg_operand" "f")
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsel %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5329,28 +5440,28 @@
(match_operand:DF 4 "zero_fp_constant" "F"))
(match_operand:SF 2 "gpc_reg_operand" "f")
(match_operand:SF 3 "gpc_reg_operand" "f")))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsel %0,%1,%2,%3"
[(set_attr "type" "fp")])
(define_insn "negdf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(neg:DF (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fneg %0,%1"
[(set_attr "type" "fp")])
(define_insn "absdf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(abs:DF (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fabs %0,%1"
[(set_attr "type" "fp")])
(define_insn ""
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(neg:DF (abs:DF (match_operand:DF 1 "gpc_reg_operand" "f"))))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fnabs %0,%1"
[(set_attr "type" "fp")])
@@ -5358,7 +5469,7 @@
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(plus:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"{fa|fadd} %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5366,7 +5477,7 @@
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(minus:DF (match_operand:DF 1 "gpc_reg_operand" "f")
(match_operand:DF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"{fs|fsub} %0,%1,%2"
[(set_attr "type" "fp")])
@@ -5374,7 +5485,7 @@
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(mult:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"{fm|fmul} %0,%1,%2"
[(set_attr "type" "dmul")])
@@ -5382,7 +5493,7 @@
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(div:DF (match_operand:DF 1 "gpc_reg_operand" "f")
(match_operand:DF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"{fd|fdiv} %0,%1,%2"
[(set_attr "type" "ddiv")])
@@ -5391,7 +5502,7 @@
(plus:DF (mult:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f"))
(match_operand:DF 3 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fma|fmadd} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5400,7 +5511,7 @@
(minus:DF (mult:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f"))
(match_operand:DF 3 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fms|fmsub} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5409,7 +5520,7 @@
(neg:DF (plus:DF (mult:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f"))
(match_operand:DF 3 "gpc_reg_operand" "f"))))]
- "TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fnma|fnmadd} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
@@ -5418,14 +5529,14 @@
(neg:DF (minus:DF (mult:DF (match_operand:DF 1 "gpc_reg_operand" "%f")
(match_operand:DF 2 "gpc_reg_operand" "f"))
(match_operand:DF 3 "gpc_reg_operand" "f"))))]
- "TARGET_HARD_FLOAT && TARGET_FUSED_MADD"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_FUSED_MADD"
"{fnms|fnmsub} %0,%1,%2,%3"
[(set_attr "type" "dmul")])
(define_insn "sqrtdf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(sqrt:DF (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "(TARGET_PPC_GPOPT || TARGET_POWER2) && TARGET_HARD_FLOAT"
+ "(TARGET_PPC_GPOPT || TARGET_POWER2) && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsqrt %0,%1"
[(set_attr "type" "dsqrt")])
@@ -5438,7 +5549,7 @@
(match_operand:DF 2 "gpc_reg_operand" ""))
(match_dup 1)
(match_dup 2)))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"{ rs6000_emit_minmax (operands[0], SMAX, operands[1], operands[2]); DONE;}")
(define_expand "mindf3"
@@ -5447,7 +5558,7 @@
(match_operand:DF 2 "gpc_reg_operand" ""))
(match_dup 2)
(match_dup 1)))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"{ rs6000_emit_minmax (operands[0], SMIN, operands[1], operands[2]); DONE;}")
(define_split
@@ -5455,7 +5566,7 @@
(match_operator:DF 3 "min_max_operator"
[(match_operand:DF 1 "gpc_reg_operand" "")
(match_operand:DF 2 "gpc_reg_operand" "")]))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
[(const_int 0)]
"
{ rs6000_emit_minmax (operands[0], GET_CODE (operands[3]),
@@ -5468,7 +5579,7 @@
(if_then_else:DF (match_operand 1 "comparison_operator" "")
(match_operand:DF 2 "gpc_reg_operand" "")
(match_operand:DF 3 "gpc_reg_operand" "")))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
if (rs6000_emit_cmove (operands[0], operands[1], operands[2], operands[3]))
@@ -5483,7 +5594,7 @@
(match_operand:DF 4 "zero_fp_constant" "F"))
(match_operand:DF 2 "gpc_reg_operand" "f")
(match_operand:DF 3 "gpc_reg_operand" "f")))]
- "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT"
+ "TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT && TARGET_FPRS"
"fsel %0,%1,%2,%3"
[(set_attr "type" "fp")])
@@ -5499,6 +5610,18 @@
;; Conversions to and from floating-point.
+(define_expand "fixunssfsi2"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "")
+ (unsigned_fix:SI (fix:SF (match_operand:SF 1 "gpc_reg_operand" ""))))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "")
+
+(define_expand "fix_truncsfsi2"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "")
+ (fix:SI (match_operand:SF 1 "gpc_reg_operand" "")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "")
+
; For each of these conversions, there is a define_expand, a define_insn
; with a '#' template, and a define_split (with C code). The idea is
; to allow constant folding with the template of the define_insn,
@@ -5512,7 +5635,7 @@
(clobber (match_dup 4))
(clobber (match_dup 5))
(clobber (match_dup 6))])]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
if (TARGET_POWERPC64)
@@ -5539,7 +5662,7 @@
(clobber (match_operand:DF 4 "memory_operand" "=o"))
(clobber (match_operand:DF 5 "gpc_reg_operand" "=f"))
(clobber (match_operand:SI 6 "gpc_reg_operand" "=r"))]
- "! TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"#"
[(set_attr "length" "24")])
@@ -5551,7 +5674,7 @@
(clobber (match_operand:DF 4 "offsettable_mem_operand" ""))
(clobber (match_operand:DF 5 "gpc_reg_operand" ""))
(clobber (match_operand:SI 6 "gpc_reg_operand" ""))]
- "! TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
[(set (match_operand:DF 0 "gpc_reg_operand" "")
(float:DF (match_operand:SI 1 "gpc_reg_operand" "")))
(use (match_operand:SI 2 "gpc_reg_operand" ""))
@@ -5581,6 +5704,12 @@
DONE;
}")
+(define_expand "floatunssisf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "")
+ (unsigned_float:SF (match_operand:SI 1 "gpc_reg_operand" "")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "")
+
(define_expand "floatunssidf2"
[(parallel [(set (match_operand:DF 0 "gpc_reg_operand" "")
(unsigned_float:DF (match_operand:SI 1 "gpc_reg_operand" "")))
@@ -5588,7 +5717,7 @@
(use (match_dup 3))
(clobber (match_dup 4))
(clobber (match_dup 5))])]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
if (TARGET_POWERPC64)
@@ -5614,7 +5743,7 @@
(use (match_operand:DF 3 "gpc_reg_operand" "f"))
(clobber (match_operand:DF 4 "memory_operand" "=o"))
(clobber (match_operand:DF 5 "gpc_reg_operand" "=f"))]
- "! TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"#"
[(set_attr "length" "20")])
@@ -5625,7 +5754,7 @@
(use (match_operand:DF 3 "gpc_reg_operand" ""))
(clobber (match_operand:DF 4 "offsettable_mem_operand" ""))
(clobber (match_operand:DF 5 "gpc_reg_operand" ""))]
- "! TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "! TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
[(set (match_operand:DF 0 "gpc_reg_operand" "")
(unsigned_float:DF (match_operand:SI 1 "gpc_reg_operand" "")))
(use (match_operand:SI 2 "gpc_reg_operand" ""))
@@ -5657,7 +5786,7 @@
(fix:SI (match_operand:DF 1 "gpc_reg_operand" "")))
(clobber (match_dup 2))
(clobber (match_dup 3))])]
- "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT"
+ "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
operands[2] = gen_reg_rtx (DImode);
@@ -5669,7 +5798,7 @@
(fix:SI (match_operand:DF 1 "gpc_reg_operand" "f")))
(clobber (match_operand:DI 2 "gpc_reg_operand" "=f"))
(clobber (match_operand:DI 3 "memory_operand" "=o"))]
- "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT"
+ "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT && TARGET_FPRS"
"#"
[(set_attr "length" "16")])
@@ -5678,7 +5807,7 @@
(fix:SI (match_operand:DF 1 "gpc_reg_operand" "")))
(clobber (match_operand:DI 2 "gpc_reg_operand" ""))
(clobber (match_operand:DI 3 "offsettable_mem_operand" ""))]
- "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT"
+ "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT && TARGET_FPRS"
[(set (match_operand:SI 0 "gpc_reg_operand" "")
(fix:SI (match_operand:DF 1 "gpc_reg_operand" "")))
(clobber (match_operand:DI 2 "gpc_reg_operand" ""))
@@ -5705,14 +5834,20 @@
(define_insn "fctiwz"
[(set (match_operand:DI 0 "gpc_reg_operand" "=*f")
(unspec:DI [(fix:SI (match_operand:DF 1 "gpc_reg_operand" "f"))] 10))]
- "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT"
+ "(TARGET_POWER2 || TARGET_POWERPC) && TARGET_HARD_FLOAT && TARGET_FPRS"
"{fcirz|fctiwz} %0,%1"
[(set_attr "type" "fp")])
+(define_expand "floatsisf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "")
+ (float:SF (match_operand:SI 1 "gpc_reg_operand" "")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "")
+
(define_insn "floatdidf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(float:DF (match_operand:DI 1 "gpc_reg_operand" "*f")))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"fcfid %0,%1"
[(set_attr "type" "fp")])
@@ -5722,7 +5857,7 @@
(clobber (match_operand:DI 2 "memory_operand" "=o"))
(clobber (match_operand:DI 3 "gpc_reg_operand" "=r"))
(clobber (match_operand:DI 4 "gpc_reg_operand" "=f"))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"#"
""
[(set (match_dup 3) (sign_extend:DI (match_dup 1)))
@@ -5737,7 +5872,7 @@
(clobber (match_operand:DI 2 "memory_operand" "=o"))
(clobber (match_operand:DI 3 "gpc_reg_operand" "=r"))
(clobber (match_operand:DI 4 "gpc_reg_operand" "=f"))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"#"
""
[(set (match_dup 3) (zero_extend:DI (match_dup 1)))
@@ -5749,7 +5884,7 @@
(define_insn "fix_truncdfdi2"
[(set (match_operand:DI 0 "gpc_reg_operand" "=*f")
(fix:DI (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT"
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS"
"fctidz %0,%1"
[(set_attr "type" "fp")])
@@ -5758,7 +5893,8 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(float:SF (match_operand:DI 1 "gpc_reg_operand" "*f")))
(clobber (match_scratch:DF 2 "=f"))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT && flag_unsafe_math_optimizations"
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS
+ && flag_unsafe_math_optimizations"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -7981,7 +8117,7 @@
[(set (match_operand:DF 0 "gpc_reg_operand" "=f,!r")
(mem:DF (lo_sum:SI (match_operand:SI 1 "gpc_reg_operand" "b,b")
(match_operand 2 "" ""))))]
- "TARGET_MACHO && TARGET_HARD_FLOAT && ! TARGET_64BIT"
+ "TARGET_MACHO && TARGET_HARD_FLOAT && TARGET_FPRS && ! TARGET_64BIT"
"*
{
switch (which_alternative)
@@ -8012,7 +8148,7 @@
[(set (mem:DF (lo_sum:SI (match_operand:SI 1 "gpc_reg_operand" "b")
(match_operand 2 "" "")))
(match_operand:DF 0 "gpc_reg_operand" "f"))]
- "TARGET_MACHO && TARGET_HARD_FLOAT && ! TARGET_64BIT"
+ "TARGET_MACHO && TARGET_HARD_FLOAT && TARGET_FPRS && ! TARGET_64BIT"
"stfd %0,lo16(%2)(%1)"
[(set_attr "type" "store")
(set_attr "length" "4")])
@@ -8021,7 +8157,7 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f,!r")
(mem:SF (lo_sum:SI (match_operand:SI 1 "gpc_reg_operand" "b,b")
(match_operand 2 "" ""))))]
- "TARGET_MACHO && TARGET_HARD_FLOAT && ! TARGET_64BIT"
+ "TARGET_MACHO && TARGET_HARD_FLOAT && TARGET_FPRS && ! TARGET_64BIT"
"@
lfs %0,lo16(%2)(%1)
{l|lwz} %0,lo16(%2)(%1)"
@@ -8032,7 +8168,7 @@
[(set (mem:SF (lo_sum:SI (match_operand:SI 1 "gpc_reg_operand" "b,b")
(match_operand 2 "" "")))
(match_operand:SF 0 "gpc_reg_operand" "f,!r"))]
- "TARGET_MACHO && TARGET_HARD_FLOAT && ! TARGET_64BIT"
+ "TARGET_MACHO && TARGET_HARD_FLOAT && TARGET_FPRS && ! TARGET_64BIT"
"@
stfs %0,lo16(%2)(%1)
{st|stw} %0,lo16(%2)(%1)"
@@ -8214,7 +8350,8 @@
[(set (match_operand:SF 0 "nonimmediate_operand" "=!r,!r,m,f,f,m,!r,!r")
(match_operand:SF 1 "input_operand" "r,m,r,f,m,f,G,Fn"))]
"(gpc_reg_operand (operands[0], SFmode)
- || gpc_reg_operand (operands[1], SFmode)) && TARGET_HARD_FLOAT"
+ || gpc_reg_operand (operands[1], SFmode))
+ && (TARGET_HARD_FLOAT && TARGET_FPRS)"
"@
mr %0,%1
{l%U1%X1|lwz%U1%X1} %0,%1
@@ -8231,7 +8368,8 @@
[(set (match_operand:SF 0 "nonimmediate_operand" "=r,r,m,r,r,r,r,r")
(match_operand:SF 1 "input_operand" "r,m,r,I,L,R,G,Fn"))]
"(gpc_reg_operand (operands[0], SFmode)
- || gpc_reg_operand (operands[1], SFmode)) && TARGET_SOFT_FLOAT"
+ || gpc_reg_operand (operands[1], SFmode))
+ && (TARGET_SOFT_FLOAT || !TARGET_FPRS)"
"@
mr %0,%1
{l%U1%X1|lwz%U1%X1} %0,%1
@@ -8341,7 +8479,7 @@
(define_insn "*movdf_hardfloat32"
[(set (match_operand:DF 0 "nonimmediate_operand" "=!r,??r,m,!r,!r,!r,f,f,m")
(match_operand:DF 1 "input_operand" "r,m,r,G,H,F,f,m,f"))]
- "! TARGET_POWERPC64 && TARGET_HARD_FLOAT
+ "! TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS
&& (gpc_reg_operand (operands[0], DFmode)
|| gpc_reg_operand (operands[1], DFmode))"
"*
@@ -8434,7 +8572,7 @@
(define_insn "*movdf_softfloat32"
[(set (match_operand:DF 0 "nonimmediate_operand" "=r,r,m,r,r,r")
(match_operand:DF 1 "input_operand" "r,m,r,G,H,F"))]
- "! TARGET_POWERPC64 && TARGET_SOFT_FLOAT
+ "! TARGET_POWERPC64 && (TARGET_SOFT_FLOAT || !TARGET_FPRS)
&& (gpc_reg_operand (operands[0], DFmode)
|| gpc_reg_operand (operands[1], DFmode))"
"*
@@ -8475,7 +8613,7 @@
(define_insn "*movdf_hardfloat64"
[(set (match_operand:DF 0 "nonimmediate_operand" "=!r,??r,m,!r,!r,!r,f,f,m")
(match_operand:DF 1 "input_operand" "r,m,r,G,H,F,f,m,f"))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT
+ "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS
&& (gpc_reg_operand (operands[0], DFmode)
|| gpc_reg_operand (operands[1], DFmode))"
"@
@@ -8494,7 +8632,7 @@
(define_insn "*movdf_softfloat64"
[(set (match_operand:DF 0 "nonimmediate_operand" "=r,r,m,r,r,r")
(match_operand:DF 1 "input_operand" "r,m,r,G,H,F"))]
- "TARGET_POWERPC64 && TARGET_SOFT_FLOAT
+ "TARGET_POWERPC64 && (TARGET_SOFT_FLOAT || !TARGET_FPRS)
&& (gpc_reg_operand (operands[0], DFmode)
|| gpc_reg_operand (operands[1], DFmode))"
"@
@@ -8510,13 +8648,15 @@
(define_expand "movtf"
[(set (match_operand:TF 0 "general_operand" "")
(match_operand:TF 1 "any_operand" ""))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"{ rs6000_emit_move (operands[0], operands[1], TFmode); DONE; }")
(define_insn "*movtf_internal"
[(set (match_operand:TF 0 "nonimmediate_operand" "=f,f,m,!r,!r,!r")
(match_operand:TF 1 "input_operand" "f,m,f,G,H,F"))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128
&& (gpc_reg_operand (operands[0], TFmode)
|| gpc_reg_operand (operands[1], TFmode))"
"*
@@ -8549,7 +8689,8 @@
(define_split
[(set (match_operand:TF 0 "gpc_reg_operand" "")
(match_operand:TF 1 "const_double_operand" ""))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
[(set (match_dup 3) (match_dup 1))
(set (match_dup 0)
(float_extend:TF (match_dup 3)))]
@@ -8562,7 +8703,8 @@
(define_insn_and_split "extenddftf2"
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(float_extend:TF (match_operand:DF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"#"
""
[(set (match_dup 2) (match_dup 3))]
@@ -8575,7 +8717,8 @@
(define_insn_and_split "extendsftf2"
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(float_extend:TF (match_operand:SF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"#"
""
[(set (match_dup 2) (match_dup 3))]
@@ -8588,7 +8731,8 @@
(define_insn "trunctfdf2"
[(set (match_operand:DF 0 "gpc_reg_operand" "=f")
(float_truncate:DF (match_operand:TF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"fadd %0,%1,%L1"
[(set_attr "type" "fp")
(set_attr "length" "8")])
@@ -8597,7 +8741,8 @@
[(set (match_operand:SF 0 "gpc_reg_operand" "=f")
(float_truncate:SF (match_operand:TF 1 "gpc_reg_operand" "f")))
(clobber (match_scratch:DF 2 "=f"))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT
+ && TARGET_FPRS && TARGET_LONG_DOUBLE_128"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -8611,7 +8756,7 @@
(float:TF (match_operand:DI 1 "gpc_reg_operand" "*f")))
(clobber (match_scratch:DF 2 "=f"))]
"DEFAULT_ABI == ABI_AIX && TARGET_POWERPC64
- && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_LONG_DOUBLE_128"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -8624,7 +8769,8 @@
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(float:TF (match_operand:SI 1 "gpc_reg_operand" "r")))
(clobber (match_scratch:DF 2 "=f"))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -8637,7 +8783,7 @@
[(set (match_operand:DI 0 "gpc_reg_operand" "=*f")
(fix:DI (match_operand:TF 1 "gpc_reg_operand" "f")))]
"DEFAULT_ABI == ABI_AIX && TARGET_POWERPC64
- && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ && TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_LONG_DOUBLE_128"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -8649,7 +8795,8 @@
(define_insn_and_split "fix_trunctfsi2"
[(set (match_operand:SI 0 "gpc_reg_operand" "=r")
(fix:SI (match_operand:TF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"#"
"&& reload_completed"
[(set (match_dup 2)
@@ -8661,7 +8808,8 @@
(define_insn "negtf2"
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(neg:TF (match_operand:TF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"*
{
if (REGNO (operands[0]) == REGNO (operands[1]) + 1)
@@ -8675,7 +8823,8 @@
(define_insn "abstf2"
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(abs:TF (match_operand:TF 1 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"*
{
if (REGNO (operands[0]) == REGNO (operands[1]) + 1)
@@ -8689,7 +8838,8 @@
(define_insn ""
[(set (match_operand:TF 0 "gpc_reg_operand" "=f")
(neg:TF (abs:TF (match_operand:TF 1 "gpc_reg_operand" "f"))))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"*
{
if (REGNO (operands[0]) == REGNO (operands[1]) + 1)
@@ -9726,7 +9876,7 @@
(match_operand:SI 2 "reg_or_short_operand" "r,I"))))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_HARD_FLOAT && TARGET_UPDATE"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_UPDATE"
"@
lfsux %3,%0,%2
lfsu %3,%2(%0)"
@@ -9738,7 +9888,7 @@
(match_operand:SF 3 "gpc_reg_operand" "f,f"))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_HARD_FLOAT && TARGET_UPDATE"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_UPDATE"
"@
stfsux %3,%0,%2
stfsu %3,%2(%0)"
@@ -9750,7 +9900,7 @@
(match_operand:SI 2 "reg_or_short_operand" "r,I"))))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_SOFT_FLOAT && TARGET_UPDATE"
+ "(TARGET_SOFT_FLOAT || !TARGET_FPRS) && TARGET_UPDATE"
"@
{lux|lwzux} %3,%0,%2
{lu|lwzu} %3,%2(%0)"
@@ -9762,7 +9912,7 @@
(match_operand:SF 3 "gpc_reg_operand" "r,r"))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_SOFT_FLOAT && TARGET_UPDATE"
+ "(TARGET_SOFT_FLOAT || !TARGET_FPRS) && TARGET_UPDATE"
"@
{stux|stwux} %3,%0,%2
{stu|stwu} %3,%2(%0)"
@@ -9774,7 +9924,7 @@
(match_operand:SI 2 "reg_or_short_operand" "r,I"))))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_HARD_FLOAT && TARGET_UPDATE"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_UPDATE"
"@
lfdux %3,%0,%2
lfdu %3,%2(%0)"
@@ -9786,7 +9936,7 @@
(match_operand:DF 3 "gpc_reg_operand" "f,f"))
(set (match_operand:SI 0 "gpc_reg_operand" "=b,b")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_HARD_FLOAT && TARGET_UPDATE"
+ "TARGET_HARD_FLOAT && TARGET_FPRS && TARGET_UPDATE"
"@
stfdux %3,%0,%2
stfdu %3,%2(%0)"
@@ -9800,7 +9950,7 @@
(set (match_operand:DF 2 "gpc_reg_operand" "=f")
(match_operand:DF 3 "memory_operand" ""))]
"TARGET_POWER2
- && TARGET_HARD_FLOAT
+ && TARGET_HARD_FLOAT && TARGET_FPRS
&& registers_ok_for_quad_peep (operands[0], operands[2])
&& ! MEM_VOLATILE_P (operands[1]) && ! MEM_VOLATILE_P (operands[3])
&& addrs_ok_for_quad_peep (XEXP (operands[1], 0), XEXP (operands[3], 0))"
@@ -9812,7 +9962,7 @@
(set (match_operand:DF 2 "memory_operand" "")
(match_operand:DF 3 "gpc_reg_operand" "f"))]
"TARGET_POWER2
- && TARGET_HARD_FLOAT
+ && TARGET_HARD_FLOAT && TARGET_FPRS
&& registers_ok_for_quad_peep (operands[1], operands[3])
&& ! MEM_VOLATILE_P (operands[0]) && ! MEM_VOLATILE_P (operands[2])
&& addrs_ok_for_quad_peep (XEXP (operands[0], 0), XEXP (operands[2], 0))"
@@ -10623,7 +10773,7 @@
(define_expand "cmpdf"
[(set (cc0) (compare (match_operand:DF 0 "gpc_reg_operand" "")
(match_operand:DF 1 "gpc_reg_operand" "")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"
{
rs6000_compare_op0 = operands[0];
@@ -10635,7 +10785,8 @@
(define_expand "cmptf"
[(set (cc0) (compare (match_operand:TF 0 "gpc_reg_operand" "")
(match_operand:TF 1 "gpc_reg_operand" "")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT
+ && TARGET_FPRS && TARGET_LONG_DOUBLE_128"
"
{
rs6000_compare_op0 = operands[0];
@@ -10942,7 +11093,7 @@
[(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
(compare:CCFP (match_operand:SF 1 "gpc_reg_operand" "f")
(match_operand:SF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fcmpu %0,%1,%2"
[(set_attr "type" "fpcompare")])
@@ -10950,7 +11101,7 @@
[(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
(compare:CCFP (match_operand:DF 1 "gpc_reg_operand" "f")
(match_operand:DF 2 "gpc_reg_operand" "f")))]
- "TARGET_HARD_FLOAT"
+ "TARGET_HARD_FLOAT && TARGET_FPRS"
"fcmpu %0,%1,%2"
[(set_attr "type" "fpcompare")])
@@ -10959,7 +11110,8 @@
[(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
(compare:CCFP (match_operand:TF 1 "gpc_reg_operand" "f")
(match_operand:TF 2 "gpc_reg_operand" "f")))]
- "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_LONG_DOUBLE_128"
+ "DEFAULT_ABI == ABI_AIX && TARGET_HARD_FLOAT && TARGET_FPRS
+ && TARGET_LONG_DOUBLE_128"
"fcmpu %0,%1,%2\;bne %0,$+4\;fcmpu %0,%L1,%L2"
[(set_attr "type" "fpcompare")
(set_attr "length" "12")])
@@ -10981,6 +11133,14 @@
[(set_attr "type" "cr_logical")
(set_attr "length" "12")])
+;; Same as above, but get the OV/ORDERED bit.
+(define_insn "move_from_CR_ov_bit"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (unspec:SI [(match_operand 1 "cc_reg_operand" "y")] 724))]
+ "TARGET_ISEL"
+ "%D1mfcr %0\;{rlinm|rlwinm} %0,%0,%t1,1"
+ [(set_attr "length" "12")])
+
(define_insn ""
[(set (match_operand:DI 0 "gpc_reg_operand" "=r")
(match_operator:DI 1 "scc_comparison_operator"
@@ -11395,7 +11555,7 @@
(lshiftrt:SI (neg:SI (abs:SI (match_operand:SI 1 "gpc_reg_operand" "r")))
(const_int 31)))
(clobber (match_scratch:SI 2 "=&r"))]
- "! TARGET_POWER && ! TARGET_POWERPC64"
+ "! TARGET_POWER && ! TARGET_POWERPC64 && !TARGET_ISEL"
"{ai|addic} %2,%1,-1\;{sfe|subfe} %0,%2,%1"
[(set_attr "length" "8")])
@@ -16131,3 +16291,5 @@
"vspltisb %2,0\;vsubsws %3,%2,%1\;vmaxsw %0,%1,%3"
[(set_attr "type" "altivec")
(set_attr "length" "12")])
+
+(include "spe.md")
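The isel changes above turn integer min/max and conditional moves into branchless selects on the 8540. As a rough, illustrative sketch (not part of the patch itself), a plain integer select like the following C function is the kind of source the new movsicc expander and the sminsi3/smaxsi3 changes are meant to catch once isel generation is enabled; the function name here is made up for illustration.

/* Illustrative sketch only.  With isel enabled, the smaxsi3 expander
   above calls rs6000_emit_minmax, which can emit a compare plus isel
   instead of a conditional branch.  */
int
select_max (int a, int b)
{
  return a > b ? a : b;
}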
diff --git a/gcc/config/rs6000/spe.h b/gcc/config/rs6000/spe.h
new file mode 100644
index 00000000000..bac1c386300
--- /dev/null
+++ b/gcc/config/rs6000/spe.h
@@ -0,0 +1,1099 @@
+/* PowerPC E500 user include file.
+ Copyright (C) 2002 Free Software Foundation, Inc.
+ Contributed by Aldy Hernandez (aldyh@redhat.com).
+
+This file is part of GNU CC.
+
+GNU CC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2, or (at your option)
+any later version.
+
+GNU CC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GNU CC; see the file COPYING. If not, write to
+the Free Software Foundation, 59 Temple Place - Suite 330,
+Boston, MA 02111-1307, USA. */
+
+/* As a special exception, if you include this header file into source
+ files compiled by GCC, this header file does not by itself cause
+ the resulting executable to be covered by the GNU General Public
+ License. This exception does not however invalidate any other
+ reasons why the executable file might be covered by the GNU General
+ Public License. */
+
+#ifndef _SPE_H
+#define _SPE_H
+
+#define __vector __attribute__((vector_size(8)))
+
+typedef int int32_t;
+typedef unsigned uint32_t;
+typedef short int16_t;
+typedef unsigned short uint16_t;
+typedef long long int64_t;
+typedef unsigned long long uint64_t;
+
+typedef short __vector __ev64_s16__;
+typedef unsigned short __vector __ev64_u16__;
+typedef int __vector __ev64_s32__;
+typedef unsigned __vector __ev64_u32__;
+typedef long long __ev64_s64__;
+typedef unsigned long long __ev64_u64__;
+typedef float __vector __ev64_fs__;
+
+typedef int __vector __ev64_opaque__;
+
+#define __v2si __ev64_opaque__
+#define __v2sf __ev64_fs__
+
+#define __ev_addw(a,b) __builtin_spe_evaddw((__v2si) (a), (__v2si) (b))
+#define __ev_addiw(a,b) __builtin_spe_evaddiw ((__v2si) (a), (b))
+#define __ev_subfw(a,b) __builtin_spe_evsubfw ((__v2si) (a), (__v2si) (b))
+#define __ev_subifw(a,b) __builtin_spe_evsubifw ((__v2si) (a), (b))
+#define __ev_abs(a) __builtin_spe_evabs ((__v2si) (a))
+#define __ev_neg(a) __builtin_spe_evneg ((__v2si) (a))
+#define __ev_extsb(a) __builtin_spe_evextsb ((__v2si) (a))
+#define __ev_extsh(a) __builtin_spe_evextsh ((__v2si) (a))
+#define __ev_and(a,b) __builtin_spe_evand ((__v2si) (a), (__v2si) (b))
+#define __ev_or(a,b) __builtin_spe_evor ((__v2si) (a), (__v2si) (b))
+#define __ev_xor(a,b) __builtin_spe_evxor ((__v2si) (a), (__v2si) (b))
+#define __ev_nand(a,b) __builtin_spe_evnand ((__v2si) (a), (__v2si) (b))
+#define __ev_nor(a,b) __builtin_spe_evnor ((__v2si) (a), (__v2si) (b))
+#define __ev_eqv(a,b) __builtin_spe_eveqv ((__v2si) (a), (__v2si) (b))
+#define __ev_andc(a,b) __builtin_spe_evandc ((__v2si) (a), (__v2si) (b))
+#define __ev_orc(a,b) __builtin_spe_evorc ((__v2si) (a), (__v2si) (b))
+#define __ev_rlw(a,b) __builtin_spe_evrlw ((__v2si) (a), (__v2si) (b))
+#define __ev_rlwi(a,b) __builtin_spe_evrlwi ((__v2si) (a), (b))
+#define __ev_slw(a,b) __builtin_spe_evslw ((__v2si) (a), (__v2si) (b))
+#define __ev_slwi(a,b) __builtin_spe_evslwi ((__v2si) (a), (b))
+#define __ev_srws(a,b) __builtin_spe_evsrws ((__v2si) (a), (__v2si) (b))
+#define __ev_srwu(a,b) __builtin_spe_evsrwu ((__v2si) (a), (__v2si) (b))
+#define __ev_srwis(a,b) __builtin_spe_evsrwis ((__v2si) (a), (b))
+#define __ev_srwiu(a,b) __builtin_spe_evsrwiu ((__v2si) (a), (b))
+#define __ev_cntlzw(a) __builtin_spe_evcntlzw ((__v2si) (a))
+#define __ev_cntlsw(a) __builtin_spe_evcntlsw ((__v2si) (a))
+#define __ev_rndw(a) __builtin_spe_evrndw ((__v2si) (a))
+#define __ev_mergehi(a,b) __builtin_spe_evmergehi ((__v2si) (a), (__v2si) (b))
+#define __ev_mergelo(a,b) __builtin_spe_evmergelo ((__v2si) (a), (__v2si) (b))
+#define __ev_mergelohi(a,b) __builtin_spe_evmergelohi ((__v2si) (a), (__v2si) (b))
+#define __ev_mergehilo(a,b) __builtin_spe_evmergehilo ((__v2si) (a), (__v2si) (b))
+#define __ev_splati(a) __builtin_spe_evsplati ((a))
+#define __ev_splatfi(a) __builtin_spe_evsplatfi ((a))
+#define __ev_divws(a,b) __builtin_spe_evdivws ((__v2si) (a), (__v2si) (b))
+#define __ev_divwu(a,b) __builtin_spe_evdivwu ((__v2si) (a), (__v2si) (b))
+#define __ev_mra(a) __builtin_spe_evmra ((__v2si) (a))
+
+#define __brinc __builtin_spe_brinc
+
+/* Loads. */
+
+#define __ev_lddx(a,b) __builtin_spe_evlddx ((void *)(a), (b))
+#define __ev_ldwx(a,b) __builtin_spe_evldwx ((void *)(a), (b))
+#define __ev_ldhx(a,b) __builtin_spe_evldhx ((void *)(a), (b))
+#define __ev_lwhex(a,b) __builtin_spe_evlwhex ((a), (b))
+#define __ev_lwhoux(a,b) __builtin_spe_evlwhoux ((a), (b))
+#define __ev_lwhosx(a,b) __builtin_spe_evlwhosx ((a), (b))
+#define __ev_lwwsplatx(a,b) __builtin_spe_evlwwsplatx ((a), (b))
+#define __ev_lwhsplatx(a,b) __builtin_spe_evlwhsplatx ((a), (b))
+#define __ev_lhhesplatx(a,b) __builtin_spe_evlhhesplatx ((a), (b))
+#define __ev_lhhousplatx(a,b) __builtin_spe_evlhhousplatx ((a), (b))
+#define __ev_lhhossplatx(a,b) __builtin_spe_evlhhossplatx ((a), (b))
+#define __ev_ldd(a,b) __builtin_spe_evldd ((void *)(a), (b))
+#define __ev_ldw(a,b) __builtin_spe_evldw ((void *)(a), (b))
+#define __ev_ldh(a,b) __builtin_spe_evldh ((void *)(a), (b))
+#define __ev_lwhe(a,b) __builtin_spe_evlwhe ((a), (b))
+#define __ev_lwhou(a,b) __builtin_spe_evlwhou ((a), (b))
+#define __ev_lwhos(a,b) __builtin_spe_evlwhos ((a), (b))
+#define __ev_lwwsplat(a,b) __builtin_spe_evlwwsplat ((a), (b))
+#define __ev_lwhsplat(a,b) __builtin_spe_evlwhsplat ((a), (b))
+#define __ev_lhhesplat(a,b) __builtin_spe_evlhhesplat ((a), (b))
+#define __ev_lhhousplat(a,b) __builtin_spe_evlhhousplat ((a), (b))
+#define __ev_lhhossplat(a,b) __builtin_spe_evlhhossplat ((a), (b))
+
+/* Stores. */
+
+#define __ev_stddx(a,b,c) __builtin_spe_evstddx ((__v2si)(a), (void *)(b), (c))
+#define __ev_stdwx(a,b,c) __builtin_spe_evstdwx ((__v2si)(a), (void *)(b), (c))
+#define __ev_stdhx(a,b,c) __builtin_spe_evstdhx ((__v2si)(a), (void *)(b), (c))
+#define __ev_stwwex(a,b,c) __builtin_spe_evstwwex ((__v2si)(a), (b), (c))
+#define __ev_stwwox(a,b,c) __builtin_spe_evstwwox ((__v2si)(a), (b), (c))
+#define __ev_stwhex(a,b,c) __builtin_spe_evstwhex ((__v2si)(a), (b), (c))
+#define __ev_stwhox(a,b,c) __builtin_spe_evstwhox ((__v2si)(a), (b), (c))
+#define __ev_stdd(a,b,c) __builtin_spe_evstdd ((__v2si)(a), (b), (c))
+#define __ev_stdw(a,b,c) __builtin_spe_evstdw ((__v2si)(a), (b), (c))
+#define __ev_stdh(a,b,c) __builtin_spe_evstdh ((__v2si)(a), (b), (c))
+#define __ev_stwwe(a,b,c) __builtin_spe_evstwwe ((__v2si)(a), (b), (c))
+#define __ev_stwwo(a,b,c) __builtin_spe_evstwwo ((__v2si)(a), (b), (c))
+#define __ev_stwhe(a,b,c) __builtin_spe_evstwhe ((__v2si)(a), (b), (c))
+#define __ev_stwho(a,b,c) __builtin_spe_evstwho ((__v2si)(a), (b), (c))
+
+/* Fixed point complex. */
+
+#define __ev_mhossf(a, b) __builtin_spe_evmhossf ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmf(a, b) __builtin_spe_evmhosmf ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmi(a, b) __builtin_spe_evmhosmi ((__v2si) (a), (__v2si) (b))
+#define __ev_mhoumi(a, b) __builtin_spe_evmhoumi ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessf(a, b) __builtin_spe_evmhessf ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmf(a, b) __builtin_spe_evmhesmf ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmi(a, b) __builtin_spe_evmhesmi ((__v2si) (a), (__v2si) (b))
+#define __ev_mheumi(a, b) __builtin_spe_evmheumi ((__v2si) (a), (__v2si) (b))
+#define __ev_mhossfa(a, b) __builtin_spe_evmhossfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmfa(a, b) __builtin_spe_evmhosmfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmia(a, b) __builtin_spe_evmhosmia ((__v2si) (a), (__v2si) (b))
+#define __ev_mhoumia(a, b) __builtin_spe_evmhoumia ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessfa(a, b) __builtin_spe_evmhessfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmfa(a, b) __builtin_spe_evmhesmfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmia(a, b) __builtin_spe_evmhesmia ((__v2si) (a), (__v2si) (b))
+#define __ev_mheumia(a, b) __builtin_spe_evmheumia ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mhoumf __ev_mhoumi
+#define __ev_mheumf __ev_mheumi
+#define __ev_mhoumfa __ev_mhoumia
+#define __ev_mheumfa __ev_mheumia
+
+#define __ev_mhossfaaw(a, b) __builtin_spe_evmhossfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhossiaaw(a, b) __builtin_spe_evmhossiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmfaaw(a, b) __builtin_spe_evmhosmfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmiaaw(a, b) __builtin_spe_evmhosmiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhousiaaw(a, b) __builtin_spe_evmhousiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhoumiaaw(a, b) __builtin_spe_evmhoumiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessfaaw(a, b) __builtin_spe_evmhessfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessiaaw(a, b) __builtin_spe_evmhessiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmfaaw(a, b) __builtin_spe_evmhesmfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmiaaw(a, b) __builtin_spe_evmhesmiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mheusiaaw(a, b) __builtin_spe_evmheusiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mheumiaaw(a, b) __builtin_spe_evmheumiaaw ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mhousfaaw __ev_mhousiaaw
+#define __ev_mhoumfaaw __ev_mhoumiaaw
+#define __ev_mheusfaaw __ev_mheusiaaw
+#define __ev_mheumfaaw __ev_mheumiaaw
+
+#define __ev_mhossfanw(a, b) __builtin_spe_evmhossfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhossianw(a, b) __builtin_spe_evmhossianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmfanw(a, b) __builtin_spe_evmhosmfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhosmianw(a, b) __builtin_spe_evmhosmianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhousianw(a, b) __builtin_spe_evmhousianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhoumianw(a, b) __builtin_spe_evmhoumianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessfanw(a, b) __builtin_spe_evmhessfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhessianw(a, b) __builtin_spe_evmhessianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmfanw(a, b) __builtin_spe_evmhesmfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mhesmianw(a, b) __builtin_spe_evmhesmianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mheusianw(a, b) __builtin_spe_evmheusianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mheumianw(a, b) __builtin_spe_evmheumianw ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mhousfanw __ev_mhousianw
+#define __ev_mhoumfanw __ev_mhoumianw
+#define __ev_mheusfanw __ev_mheusianw
+#define __ev_mheumfanw __ev_mheumianw
+
+#define __ev_mhogsmfaa(a, b) __builtin_spe_evmhogsmfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhogsmiaa(a, b) __builtin_spe_evmhogsmiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhogumiaa(a, b) __builtin_spe_evmhogumiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegsmfaa(a, b) __builtin_spe_evmhegsmfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegsmiaa(a, b) __builtin_spe_evmhegsmiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegumiaa(a, b) __builtin_spe_evmhegumiaa ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mhogumfaa __ev_mhogumiaa
+#define __ev_mhegumfaa __ev_mhegumiaa
+
+#define __ev_mhogsmfan(a, b) __builtin_spe_evmhogsmfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mhogsmian(a, b) __builtin_spe_evmhogsmian ((__v2si) (a), (__v2si) (b))
+#define __ev_mhogumian(a, b) __builtin_spe_evmhogumian ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegsmfan(a, b) __builtin_spe_evmhegsmfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegsmian(a, b) __builtin_spe_evmhegsmian ((__v2si) (a), (__v2si) (b))
+#define __ev_mhegumian(a, b) __builtin_spe_evmhegumian ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mhogumfan __ev_mhogumian
+#define __ev_mhegumfan __ev_mhegumian
+
+#define __ev_mwhssf(a, b) __builtin_spe_evmwhssf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmf(a, b) __builtin_spe_evmwhsmf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmi(a, b) __builtin_spe_evmwhsmi ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhumi(a, b) __builtin_spe_evmwhumi ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhssfa(a, b) __builtin_spe_evmwhssfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmfa(a, b) __builtin_spe_evmwhsmfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmia(a, b) __builtin_spe_evmwhsmia ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhumia(a, b) __builtin_spe_evmwhumia ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwhumf __ev_mwhumi
+#define __ev_mwhumfa __ev_mwhumia
+
+#define __ev_mwlssf(a, b) __builtin_spe_evmwlssf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmf(a, b) __builtin_spe_evmwlsmf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlumi(a, b) __builtin_spe_evmwlumi ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlssfa(a, b) __builtin_spe_evmwlssfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmfa(a, b) __builtin_spe_evmwlsmfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlumia(a, b) __builtin_spe_evmwlumia ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlumiaaw(a, b) __builtin_spe_evmwlumiaaw ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwlufi __ev_mwlumi
+#define __ev_mwlufia __ev_mwlumia
+
+#define __ev_mwlssfaaw(a, b) __builtin_spe_evmwlssfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlssiaaw(a, b) __builtin_spe_evmwlssiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmfaaw(a, b) __builtin_spe_evmwlsmfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmiaaw(a, b) __builtin_spe_evmwlsmiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlusiaaw(a, b) __builtin_spe_evmwlusiaaw ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwlumfaaw __ev_mwlumiaaw
+#define __ev_mwlusfaaw __ev_mwlusiaaw
+
+#define __ev_mwlssfanw(a, b) __builtin_spe_evmwlssfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlssianw(a, b) __builtin_spe_evmwlssianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmfanw(a, b) __builtin_spe_evmwlsmfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlsmianw(a, b) __builtin_spe_evmwlsmianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlusianw(a, b) __builtin_spe_evmwlusianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwlumianw(a, b) __builtin_spe_evmwlumianw ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwlumfanw __ev_mwlumianw
+#define __ev_mwlusfanw __ev_mwlusianw
+
+#define __ev_mwssf(a, b) __builtin_spe_evmwssf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmf(a, b) __builtin_spe_evmwsmf ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmi(a, b) __builtin_spe_evmwsmi ((__v2si) (a), (__v2si) (b))
+#define __ev_mwumi(a, b) __builtin_spe_evmwumi ((__v2si) (a), (__v2si) (b))
+#define __ev_mwssfa(a, b) __builtin_spe_evmwssfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmfa(a, b) __builtin_spe_evmwsmfa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmia(a, b) __builtin_spe_evmwsmia ((__v2si) (a), (__v2si) (b))
+#define __ev_mwumia(a, b) __builtin_spe_evmwumia ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwumf __ev_mwumi
+#define __ev_mwumfa __ev_mwumia
+
+#define __ev_mwssfaa(a, b) __builtin_spe_evmwssfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmfaa(a, b) __builtin_spe_evmwsmfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmiaa(a, b) __builtin_spe_evmwsmiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwumiaa(a, b) __builtin_spe_evmwumiaa ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwumfaa __ev_mwumiaa
+
+#define __ev_mwssfan(a, b) __builtin_spe_evmwssfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmfan(a, b) __builtin_spe_evmwsmfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mwsmian(a, b) __builtin_spe_evmwsmian ((__v2si) (a), (__v2si) (b))
+#define __ev_mwumian(a, b) __builtin_spe_evmwumian ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwumfan __ev_mwumian
+
+#define __ev_addssiaaw(a) __builtin_spe_evaddssiaaw ((__v2si) (a))
+#define __ev_addsmiaaw(a) __builtin_spe_evaddsmiaaw ((__v2si) (a))
+#define __ev_addusiaaw(a) __builtin_spe_evaddusiaaw ((__v2si) (a))
+#define __ev_addumiaaw(a) __builtin_spe_evaddumiaaw ((__v2si) (a))
+
+#define __ev_addusfaaw __ev_addusiaaw
+#define __ev_addumfaaw __ev_addumiaaw
+#define __ev_addsmfaaw __ev_addsmiaaw
+#define __ev_addssfaaw __ev_addssiaaw
+
+#define __ev_subfssiaaw(a) __builtin_spe_evsubfssiaaw ((__v2si) (a))
+#define __ev_subfsmiaaw(a) __builtin_spe_evsubfsmiaaw ((__v2si) (a))
+#define __ev_subfusiaaw(a) __builtin_spe_evsubfusiaaw ((__v2si) (a))
+#define __ev_subfumiaaw(a) __builtin_spe_evsubfumiaaw ((__v2si) (a))
+
+#define __ev_subfusfaaw __ev_subfusiaaw
+#define __ev_subfumfaaw __ev_subfumiaaw
+#define __ev_subfsmfaaw __ev_subfsmiaaw
+#define __ev_subfssfaaw __ev_subfssiaaw
+
+/* Floating Point SIMD Instructions */
+
+/* These all return V2SF, but we need to cast the results to V2SI because
+   the SPE interface expects all functions to operate on __ev64_opaque__.  */
+
+#define __ev_fsabs(a) ((__v2si) __builtin_spe_evfsabs ((__v2sf) a))
+#define __ev_fsnabs(a) ((__v2si) __builtin_spe_evfsnabs ((__v2sf) a))
+#define __ev_fsneg(a) ((__v2si) __builtin_spe_evfsneg ((__v2sf) a))
+#define __ev_fsadd(a, b) ((__v2si) __builtin_spe_evfsadd ((__v2sf) a, (__v2sf) b))
+#define __ev_fssub(a, b) ((__v2si) __builtin_spe_evfssub ((__v2sf) a, (__v2sf) b))
+#define __ev_fsmul(a, b) ((__v2si) __builtin_spe_evfsmul ((__v2sf) a, (__v2sf) b))
+#define __ev_fsdiv(a, b) ((__v2si) __builtin_spe_evfsdiv ((__v2sf) a, (__v2sf) b))
+#define __ev_fscfui(a) ((__v2si) __builtin_spe_evfscfui ((__v2si) a))
+#define __ev_fscfsi(a) ((__v2si) __builtin_spe_evfscfsi ((__v2si) a))
+#define __ev_fscfuf(a) ((__v2si) __builtin_spe_evfscfuf ((__v2sf) a))
+#define __ev_fscfsf(a) ((__v2si) __builtin_spe_evfscfsf ((__v2sf) a))
+#define __ev_fsctui(a) ((__v2si) __builtin_spe_evfsctui ((__v2sf) a))
+#define __ev_fsctsi(a) ((__v2si) __builtin_spe_evfsctsi ((__v2sf) a))
+#define __ev_fsctuf(a) ((__v2si) __builtin_spe_evfsctuf ((__v2sf) a))
+#define __ev_fsctsf(a) ((__v2si) __builtin_spe_evfsctsf ((__v2sf) a))
+#define __ev_fsctuiz(a) ((__v2si) __builtin_spe_evfsctuiz ((__v2sf) a))
+#define __ev_fsctsiz(a) ((__v2si) __builtin_spe_evfsctsiz ((__v2sf) a))
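+
+/* Illustrative sketch (not part of the original interface): because of the
+   casts above, SPE floating point code deals exclusively in __ev64_opaque__
+   values, e.g.
+
+     __ev64_opaque__ x = __ev_create_fs (1.0f, 2.0f);
+     __ev64_opaque__ y = __ev_create_fs (3.0f, 4.0f);
+     __ev64_opaque__ z = __ev_fsadd (x, y);
+
+   which yields the element-wise sum {4.0f, 6.0f}.  __ev_create_fs is
+   defined further down in this header.  */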
+
+/* NOT SUPPORTED IN THE FIRST e500; emulated below, some via two instructions.  */
+
+#define __ev_mwhusfaaw __ev_mwhusiaaw
+#define __ev_mwhumfaaw __ev_mwhumiaaw
+#define __ev_mwhusfanw __ev_mwhusianw
+#define __ev_mwhumfanw __ev_mwhumianw
+#define __ev_mwhgumfaa __ev_mwhgumiaa
+#define __ev_mwhgumfan __ev_mwhgumian
+
+#define __ev_mwhgssfaa(a, b) __internal_ev_mwhgssfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgsmfaa(a, b) __internal_ev_mwhgsmfaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgsmiaa(a, b) __internal_ev_mwhgsmiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgumiaa(a, b) __internal_ev_mwhgumiaa ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgssfan(a, b) __internal_ev_mwhgssfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgsmfan(a, b) __internal_ev_mwhgsmfan ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgsmian(a, b) __internal_ev_mwhgsmian ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhgumian(a, b) __internal_ev_mwhgumian ((__v2si) (a), (__v2si) (b))
+
+#define __ev_mwhssiaaw(a, b) __internal_ev_mwhssiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhssfaaw(a, b) __internal_ev_mwhssfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmfaaw(a, b) __internal_ev_mwhsmfaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmiaaw(a, b) __internal_ev_mwhsmiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhusiaaw(a, b) __internal_ev_mwhusiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhumiaaw(a, b) __internal_ev_mwhumiaaw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhssfanw(a, b) __internal_ev_mwhssfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhssianw(a, b) __internal_ev_mwhssianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmfanw(a, b) __internal_ev_mwhsmfanw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhsmianw(a, b) __internal_ev_mwhsmianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhusianw(a, b) __internal_ev_mwhusianw ((__v2si) (a), (__v2si) (b))
+#define __ev_mwhumianw(a, b) __internal_ev_mwhumianw ((__v2si) (a), (__v2si) (b))
+
+static inline __ev64_opaque__
+__internal_ev_mwhssfaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhssf (a, b);
+ return __ev_addssiaaw(t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhssiaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a,b);
+ return __ev_addssiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhsmfaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmf (a,b);
+ return __ev_addsmiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhsmiaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a,b);
+ return __ev_addsmiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhusiaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a,b);
+ return __ev_addusiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhumiaaw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a,b);
+ return __ev_addumiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhssfanw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhssf (a,b);
+ return __ev_subfssiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhssianw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a,b);
+ return __ev_subfssiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhsmfanw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmf (a,b);
+ return __ev_subfsmiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhsmianw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a,b);
+ return __ev_subfsmiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhusianw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a,b);
+ return __ev_subfusiaaw (t);
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhumianw (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a,b);
+ return __ev_subfumiaaw (t);
+}
+
+/* The accumulating evmwhg* forms are synthesized below: do the evmwh*
+   multiply first, then fold the result into the 64-bit accumulator by
+   multiply-accumulating it with a vector of ones.  */
+
+static inline __ev64_opaque__
+__internal_ev_mwhgssfaa (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhssf (a, b);
+ return __ev_mwsmiaa (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgsmfaa (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmf (a, b);
+ return __ev_mwsmiaa (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgsmiaa (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a, b);
+ return __ev_mwsmiaa (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgumiaa (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a, b);
+ return __ev_mwumiaa (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgssfan (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhssf (a, b);
+ return __ev_mwsmian (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgsmfan (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmf (a, b);
+ return __ev_mwsmian (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgsmian (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhsmi (a, b);
+ return __ev_mwsmian (t, ((__ev64_opaque__){1, 1}));
+}
+
+static inline __ev64_opaque__
+__internal_ev_mwhgumian (__ev64_opaque__ a, __ev64_opaque__ b)
+{
+ __ev64_opaque__ t;
+
+ t = __ev_mwhumi (a, b);
+ return __ev_mwumian (t, ((__ev64_opaque__){1, 1}));
+}
+
+/* END OF NOT SUPPORTED */
+
+/* __ev_create* functions. */
+
+#define __ev_create_ufix32_u32 __ev_create_u32
+#define __ev_create_sfix32_s32 __ev_create_s32
+#define __ev_create_sfix32_fs __ev_create_fs
+#define __ev_create_ufix32_fs __ev_create_fs
+
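+/* Each __ev_create_* function below builds a 64-bit SPE value by writing
+   the individual elements through a union; element 0 is the upper half,
+   matching the __ev_get_upper_* accessors further down.  */
+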
+static inline __ev64_opaque__
+__ev_create_s16 (int16_t a, int16_t b, int16_t c, int16_t d)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int16_t i[4];
+ } u;
+
+ u.i[0] = a;
+ u.i[1] = b;
+ u.i[2] = c;
+ u.i[3] = d;
+
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_u16 (uint16_t a, uint16_t b, uint16_t c, uint16_t d)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint16_t i[4];
+ } u;
+
+ u.i[0] = a;
+ u.i[1] = b;
+ u.i[2] = c;
+ u.i[3] = d;
+
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_s32 (int32_t a, int32_t b)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int32_t i[2];
+ } u;
+
+ u.i[0] = a;
+ u.i[1] = b;
+
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_u32 (uint32_t a, uint32_t b)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint32_t i[2];
+ } u;
+
+ u.i[0] = a;
+ u.i[1] = b;
+
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_fs (float a, float b)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ float f[2];
+ } u;
+
+ u.f[0] = a;
+ u.f[1] = b;
+
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_s64 (int64_t a)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int64_t i;
+ } u;
+
+ u.i = a;
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_create_u64 (uint64_t a)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint64_t i;
+ } u;
+
+ u.i = a;
+ return u.v;
+}
+
+#define __ev_convert_u64(a) ((uint64_t) (a))
+#define __ev_convert_s64(a) ((int64_t) (a))
+
+/* __ev_get_* functions. */
+
+#define __ev_get_upper_u32(a) __ev_get_u32_internal ((__ev64_opaque__) a, 0)
+#define __ev_get_lower_u32(a) __ev_get_u32_internal ((__ev64_opaque__) a, 1)
+#define __ev_get_upper_s32(a) __ev_get_s32_internal ((__ev64_opaque__) a, 0)
+#define __ev_get_lower_s32(a) __ev_get_s32_internal ((__ev64_opaque__) a, 1)
+#define __ev_get_upper_fs(a) __ev_get_fs_internal ((__ev64_opaque__) a, 0)
+#define __ev_get_lower_fs(a) __ev_get_fs_internal ((__ev64_opaque__) a, 1)
+#define __ev_get_upper_ufix32_u32(a) __ev_get_upper_u32(a)
+#define __ev_get_lower_ufix32_u32(a) __ev_get_lower_u32(a)
+#define __ev_get_upper_sfix32_s32(a) __ev_get_upper_s32(a)
+#define __ev_get_lower_sfix32_s32(a) __ev_get_lower_s32(a)
+#define __ev_get_upper_sfix32_fs(a) __ev_get_upper_fs(a)
+#define __ev_get_lower_sfix32_fs(a) __ev_get_lower_fs(a)
+#define __ev_get_upper_ufix32_fs(a) __ev_get_upper_fs(a)
+#define __ev_get_lower_ufix32_fs(a) __ev_get_lower_fs(a)
+
+#define __ev_get_u32(a, b) __ev_get_u32_internal ((__ev64_opaque__) a, b)
+#define __ev_get_s32(a, b) __ev_get_s32_internal ((__ev64_opaque__) a, b)
+#define __ev_get_fs(a, b) __ev_get_fs_internal ((__ev64_opaque__) a, b)
+#define __ev_get_u16(a, b) __ev_get_u16_internal ((__ev64_opaque__) a, b)
+#define __ev_get_s16(a, b) __ev_get_s16_internal ((__ev64_opaque__) a, b)
+
+#define __ev_get_ufix32_u32(a, b) __ev_get_u32 (a, b)
+#define __ev_get_sfix32_s32(a, b) __ev_get_s32 (a, b)
+#define __ev_get_ufix32_fs(a, b) __ev_get_fs (a, b)
+#define __ev_get_sfix32_fs(a, b) __ev_get_fs (a, b)
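+
+/* Illustrative sketch: pack two floats and read them back.
+
+     __ev64_opaque__ v = __ev_create_fs (3.0f, 4.0f);
+     float upper = __ev_get_upper_fs (v);
+     float lower = __ev_get_lower_fs (v);
+
+   upper is 3.0f and lower is 4.0f.  */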
+
+static inline uint32_t
+__ev_get_u32_internal (__ev64_opaque__ a, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint32_t i[2];
+ } u;
+
+ u.v = a;
+ return u.i[pos];
+}
+
+static inline int32_t
+__ev_get_s32_internal (__ev64_opaque__ a, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int32_t i[2];
+ } u;
+
+ u.v = a;
+ return u.i[pos];
+}
+
+static inline float
+__ev_get_fs_internal (__ev64_opaque__ a, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ float f[2];
+ } u;
+
+ u.v = a;
+ return u.f[pos];
+}
+
+static inline uint16_t
+__ev_get_u16_internal (__ev64_opaque__ a, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint16_t i[4];
+ } u;
+
+ u.v = a;
+ return u.i[pos];
+}
+
+static inline int16_t
+__ev_get_s16_internal (__ev64_opaque__ a, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int16_t i[4];
+ } u;
+
+ u.v = a;
+ return u.i[pos];
+}
+
+/* __ev_set_* functions. */
+
+#define __ev_set_u32(a, b, c) __ev_set_u32_internal ((__ev64_opaque__) a, b, c)
+#define __ev_set_s32(a, b, c) __ev_set_s32_internal ((__ev64_opaque__) a, b, c)
+#define __ev_set_fs(a, b, c) __ev_set_fs_internal ((__ev64_opaque__) a, b, c)
+#define __ev_set_u16(a, b, c) __ev_set_u16_internal ((__ev64_opaque__) a, b, c)
+#define __ev_set_s16(a, b, c) __ev_set_s16_internal ((__ev64_opaque__) a, b, c)
+
+#define __ev_set_ufix32_u32 __ev_set_u32
+#define __ev_set_sfix32_s32 __ev_set_s32
+#define __ev_set_ufix32_fs __ev_set_fs
+#define __ev_set_sfix32_fs __ev_set_fs
+
+#define __ev_set_upper_u32(a, b) __ev_set_u32 (a, b, 0)
+#define __ev_set_lower_u32(a, b) __ev_set_u32 (a, b, 1)
+#define __ev_set_upper_s32(a, b) __ev_set_s32 (a, b, 0)
+#define __ev_set_lower_s32(a, b) __ev_set_s32 (a, b, 1)
+#define __ev_set_upper_fs(a, b) __ev_set_fs (a, b, 0)
+#define __ev_set_lower_fs(a, b) __ev_set_fs (a, b, 1)
+#define __ev_set_upper_ufix32_u32 __ev_set_upper_u32
+#define __ev_set_lower_ufix32_u32 __ev_set_lower_u32
+#define __ev_set_upper_sfix32_s32 __ev_set_upper_s32
+#define __ev_set_lower_sfix32_s32 __ev_set_lower_s32
+#define __ev_set_upper_sfix32_fs __ev_set_upper_fs
+#define __ev_set_lower_sfix32_fs __ev_set_lower_fs
+#define __ev_set_upper_ufix32_fs __ev_set_upper_fs
+#define __ev_set_lower_ufix32_fs __ev_set_lower_fs
+
+#define __ev_set_acc_vec64(a) __builtin_spe_evmra ((__ev64_opaque__)(a))
+
+static inline __ev64_opaque__
+__ev_set_acc_u64 (uint64_t a)
+{
+ __ev64_opaque__ ev;
+
+ ev = __ev_create_u64 (a);
+ __ev_mra (ev);
+ return ev;
+}
+
+static inline __ev64_opaque__
+__ev_set_acc_s64 (int64_t a)
+{
+ __ev64_opaque__ ev;
+
+ ev = __ev_create_s64 (a);
+ __ev_mra (ev);
+ return ev;
+}
+
+static inline __ev64_opaque__
+__ev_set_u32_internal (__ev64_opaque__ a, uint32_t b, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint32_t i[2];
+ } u;
+
+ u.v = a;
+ u.i[pos] = b;
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_set_s32_internal (__ev64_opaque__ a, int32_t b, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int32_t i[2];
+ } u;
+
+ u.v = a;
+ u.i[pos] = b;
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_set_fs_internal (__ev64_opaque__ a, float b, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ float f[2];
+ } u;
+
+ u.v = a;
+ u.f[pos] = b;
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_set_u16_internal (__ev64_opaque__ a, uint16_t b, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ uint16_t i[4];
+ } u;
+
+ u.v = a;
+ u.i[pos] = b;
+ return u.v;
+}
+
+static inline __ev64_opaque__
+__ev_set_s16_internal (__ev64_opaque__ a, int16_t b, uint32_t pos)
+{
+ union
+ {
+ __ev64_opaque__ v;
+ int16_t i[4];
+ } u;
+
+ u.v = a;
+ u.i[pos] = b;
+ return u.v;
+}
+
+/* Predicates. */
+
+#define __pred_all 0
+#define __pred_any 1
+#define __pred_upper 2
+#define __pred_lower 3
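+
+/* Usage sketch (illustrative): the __ev_any_* and __ev_all_* macros below
+   evaluate a vector comparison under one of the predicates above and return
+   nonzero when it holds, while the __ev_select_* forms do a per-element
+   select based on the comparison.  For instance,
+
+     __ev_select_gts (a, b, a, b)
+
+   yields the element-wise signed maximum of a and b.  */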
+
+#define __ev_any_gts(a, b) \
+ __builtin_spe_evcmpgts (__pred_any, (__v2si)(a), (__v2si)(b))
+#define __ev_all_gts(a, b) \
+ __builtin_spe_evcmpgts (__pred_all, (__v2si)(a), (__v2si)(b))
+#define __ev_upper_gts(a, b) \
+ __builtin_spe_evcmpgts (__pred_upper, (__v2si)(a), (__v2si)(b))
+#define __ev_lower_gts(a, b) \
+ __builtin_spe_evcmpgts (__pred_lower, (__v2si)(a), (__v2si)(b))
+#define __ev_select_gts(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_gts ((__v2si)(a), (__v2si)(b), \
+ (__v2si)(c), (__v2si)(d)))
+
+#define __ev_any_gtu(a, b) \
+ __builtin_spe_evcmpgtu (__pred_any, (__v2si)(a), (__v2si)(b))
+#define __ev_all_gtu(a, b) \
+ __builtin_spe_evcmpgtu (__pred_all, (__v2si)(a), (__v2si)(b))
+#define __ev_upper_gtu(a, b) \
+ __builtin_spe_evcmpgtu (__pred_upper, (__v2si)(a), (__v2si)(b))
+#define __ev_lower_gtu(a, b) \
+ __builtin_spe_evcmpgtu (__pred_lower, (__v2si)(a), (__v2si)(b))
+#define __ev_select_gtu(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_gtu ((__v2si)(a), (__v2si)(b), \
+ (__v2si)(c), (__v2si)(d)))
+
+#define __ev_any_lts(a, b) \
+ __builtin_spe_evcmplts (__pred_any, (__v2si)(a), (__v2si)(b))
+#define __ev_all_lts(a, b) \
+ __builtin_spe_evcmplts (__pred_all, (__v2si)(a), (__v2si)(b))
+#define __ev_upper_lts(a, b) \
+ __builtin_spe_evcmplts (__pred_upper, (__v2si)(a), (__v2si)(b))
+#define __ev_lower_lts(a, b) \
+ __builtin_spe_evcmplts (__pred_lower, (__v2si)(a), (__v2si)(b))
+#define __ev_select_lts(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_lts ((__v2si)(a), (__v2si)(b), \
+ (__v2si)(c), (__v2si)(d)))
+
+#define __ev_any_ltu(a, b) \
+ __builtin_spe_evcmpltu (__pred_any, (__v2si)(a), (__v2si)(b))
+#define __ev_all_ltu(a, b) \
+ __builtin_spe_evcmpltu (__pred_all, (__v2si)(a), (__v2si)(b))
+#define __ev_upper_ltu(a, b) \
+ __builtin_spe_evcmpltu (__pred_upper, (__v2si)(a), (__v2si)(b))
+#define __ev_lower_ltu(a, b) \
+ __builtin_spe_evcmpltu (__pred_lower, (__v2si)(a), (__v2si)(b))
+#define __ev_select_ltu(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_ltu ((__v2si)(a), (__v2si)(b), \
+ (__v2si)(c), (__v2si)(d)))
+#define __ev_any_eq(a, b) \
+ __builtin_spe_evcmpeq (__pred_any, (__v2si)(a), (__v2si)(b))
+#define __ev_all_eq(a, b) \
+ __builtin_spe_evcmpeq (__pred_all, (__v2si)(a), (__v2si)(b))
+#define __ev_upper_eq(a, b) \
+ __builtin_spe_evcmpeq (__pred_upper, (__v2si)(a), (__v2si)(b))
+#define __ev_lower_eq(a, b) \
+ __builtin_spe_evcmpeq (__pred_lower, (__v2si)(a), (__v2si)(b))
+#define __ev_select_eq(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_eq ((__v2si)(a), (__v2si)(b), \
+ (__v2si)(c), (__v2si)(d)))
+
+#define __ev_any_fs_gt(a, b) \
+ __builtin_spe_evfscmpgt (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_gt(a, b) \
+ __builtin_spe_evfscmpgt (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_gt(a, b) \
+ __builtin_spe_evfscmpgt (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_gt(a, b) \
+ __builtin_spe_evfscmpgt (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_gt(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fsgt ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+#define __ev_any_fs_lt(a, b) \
+ __builtin_spe_evfscmplt (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_lt(a, b) \
+ __builtin_spe_evfscmplt (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_lt(a, b) \
+ __builtin_spe_evfscmplt (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_lt(a, b) \
+ __builtin_spe_evfscmplt (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_lt(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fslt ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+#define __ev_any_fs_eq(a, b) \
+ __builtin_spe_evfscmpeq (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_eq(a, b) \
+ __builtin_spe_evfscmpeq (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_eq(a, b) \
+ __builtin_spe_evfscmpeq (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_eq(a, b) \
+ __builtin_spe_evfscmpeq (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_eq(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fseq ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+#define __ev_any_fs_tst_gt(a, b) \
+ __builtin_spe_evfststgt (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_tst_gt(a, b) \
+ __builtin_spe_evfststgt (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_tst_gt(a, b) \
+ __builtin_spe_evfststgt (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_tst_gt(a, b) \
+ __builtin_spe_evfststgt (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_tst_gt(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fststgt ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+#define __ev_any_fs_tst_lt(a, b) \
+ __builtin_spe_evfststlt (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_tst_lt(a, b) \
+ __builtin_spe_evfststlt (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_tst_lt(a, b) \
+ __builtin_spe_evfststlt (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_tst_lt(a, b) \
+ __builtin_spe_evfststlt (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_tst_lt(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fststlt ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+#define __ev_any_fs_tst_eq(a, b) \
+ __builtin_spe_evfststeq (__pred_any, (__v2sf)(a), (__v2sf)(b))
+#define __ev_all_fs_tst_eq(a, b) \
+ __builtin_spe_evfststeq (__pred_all, (__v2sf)(a), (__v2sf)(b))
+#define __ev_upper_fs_tst_eq(a, b) \
+ __builtin_spe_evfststeq (__pred_upper, (__v2sf)(a), (__v2sf)(b))
+#define __ev_lower_fs_tst_eq(a, b) \
+ __builtin_spe_evfststeq (__pred_lower, (__v2sf)(a), (__v2sf)(b))
+#define __ev_select_fs_tst_eq(a, b, c, d) \
+ ((__v2si) __builtin_spe_evsel_fststeq ((__v2sf)(a), (__v2sf)(b), \
+ (__v2sf)(c), (__v2sf)(d)))
+
+/* SPEFSCR accessor functions.  */
+
+#define __SPEFSCR_SOVH 0x80000000
+#define __SPEFSCR_OVH 0x40000000
+#define __SPEFSCR_FGH 0x20000000
+#define __SPEFSCR_FXH 0x10000000
+#define __SPEFSCR_FINVH 0x08000000
+#define __SPEFSCR_FDBZH 0x04000000
+#define __SPEFSCR_FUNFH 0x02000000
+#define __SPEFSCR_FOVFH 0x01000000
+/* 2 unused bits */
+#define __SPEFSCR_FINXS 0x00200000
+#define __SPEFSCR_FINVS 0x00100000
+#define __SPEFSCR_FDBZS 0x00080000
+#define __SPEFSCR_FUNFS 0x00040000
+#define __SPEFSCR_FOVFS 0x00020000
+#define __SPEFSCR_MODE 0x00010000
+#define __SPEFSCR_SOV 0x00008000
+#define __SPEFSCR_OV 0x00004000
+#define __SPEFSCR_FG 0x00002000
+#define __SPEFSCR_FX 0x00001000
+#define __SPEFSCR_FINV 0x00000800
+#define __SPEFSCR_FDBZ 0x00000400
+#define __SPEFSCR_FUNF 0x00000200
+#define __SPEFSCR_FOVF 0x00000100
+/* 1 unused bit */
+#define __SPEFSCR_FINXE 0x00000040
+#define __SPEFSCR_FINVE 0x00000020
+#define __SPEFSCR_FDBZE 0x00000010
+#define __SPEFSCR_FUNFE 0x00000008
+#define __SPEFSCR_FOVFE 0x00000004
+#define __SPEFSCR_FRMC 0x00000003
+
+#define __ev_get_spefscr_sovh() (__builtin_spe_mfspefscr () & __SPEFSCR_SOVH)
+#define __ev_get_spefscr_ovh() (__builtin_spe_mfspefscr () & __SPEFSCR_OVH)
+#define __ev_get_spefscr_fgh() (__builtin_spe_mfspefscr () & __SPEFSCR_FGH)
+#define __ev_get_spefscr_fxh() (__builtin_spe_mfspefscr () & __SPEFSCR_FXH)
+#define __ev_get_spefscr_finvh() (__builtin_spe_mfspefscr () & __SPEFSCR_FINVH)
+#define __ev_get_spefscr_fdbzh() (__builtin_spe_mfspefscr () & __SPEFSCR_FDBZH)
+#define __ev_get_spefscr_funfh() (__builtin_spe_mfspefscr () & __SPEFSCR_FUNFH)
+#define __ev_get_spefscr_fovfh() (__builtin_spe_mfspefscr () & __SPEFSCR_FOVFH)
+#define __ev_get_spefscr_finxs() (__builtin_spe_mfspefscr () & __SPEFSCR_FINXS)
+#define __ev_get_spefscr_finvs() (__builtin_spe_mfspefscr () & __SPEFSCR_FINVS)
+#define __ev_get_spefscr_fdbzs() (__builtin_spe_mfspefscr () & __SPEFSCR_FDBZS)
+#define __ev_get_spefscr_funfs() (__builtin_spe_mfspefscr () & __SPEFSCR_FUNFS)
+#define __ev_get_spefscr_fovfs() (__builtin_spe_mfspefscr () & __SPEFSCR_FOVFS)
+#define __ev_get_spefscr_mode() (__builtin_spe_mfspefscr () & __SPEFSCR_MODE)
+#define __ev_get_spefscr_sov() (__builtin_spe_mfspefscr () & __SPEFSCR_SOV)
+#define __ev_get_spefscr_ov() (__builtin_spe_mfspefscr () & __SPEFSCR_OV)
+#define __ev_get_spefscr_fg() (__builtin_spe_mfspefscr () & __SPEFSCR_FG)
+#define __ev_get_spefscr_fx() (__builtin_spe_mfspefscr () & __SPEFSCR_FX)
+#define __ev_get_spefscr_finv() (__builtin_spe_mfspefscr () & __SPEFSCR_FINV)
+#define __ev_get_spefscr_fdbz() (__builtin_spe_mfspefscr () & __SPEFSCR_FDBZ)
+#define __ev_get_spefscr_funf() (__builtin_spe_mfspefscr () & __SPEFSCR_FUNF)
+#define __ev_get_spefscr_fovf() (__builtin_spe_mfspefscr () & __SPEFSCR_FOVF)
+#define __ev_get_spefscr_finxe() (__builtin_spe_mfspefscr () & __SPEFSCR_FINXE)
+#define __ev_get_spefscr_finve() (__builtin_spe_mfspefscr () & __SPEFSCR_FINVE)
+#define __ev_get_spefscr_fdbze() (__builtin_spe_mfspefscr () & __SPEFSCR_FDBZE)
+#define __ev_get_spefscr_funfe() (__builtin_spe_mfspefscr () & __SPEFSCR_FUNFE)
+#define __ev_get_spefscr_fovfe() (__builtin_spe_mfspefscr () & __SPEFSCR_FOVFE)
+#define __ev_get_spefscr_frmc() (__builtin_spe_mfspefscr () & __SPEFSCR_FRMC)
+
+static inline void
+__ev_clr_spefscr_field (int mask)
+{
+ int i;
+
+ i = __builtin_spe_mfspefscr ();
+ i &= ~mask;
+ __builtin_spe_mtspefscr (i);
+}
+
+#define __ev_clr_spefscr_sovh() __ev_clr_spefscr_field (__SPEFSCR_SOVH)
+#define __ev_clr_spefscr_sov() __ev_clr_spefscr_field (__SPEFSCR_SOV)
+#define __ev_clr_spefscr_finxs() __ev_clr_spefscr_field (__SPEFSCR_FINXS)
+#define __ev_clr_spefscr_finvs() __ev_clr_spefscr_field (__SPEFSCR_FINVS)
+#define __ev_clr_spefscr_fdbzs() __ev_clr_spefscr_field (__SPEFSCR_FDBZS)
+#define __ev_clr_spefscr_funfs() __ev_clr_spefscr_field (__SPEFSCR_FUNFS)
+#define __ev_clr_spefscr_fovfs() __ev_clr_spefscr_field (__SPEFSCR_FOVFS)
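+
+/* Usage sketch (illustrative): test a sticky status bit and clear it, for
+   example after a vector divide:
+
+     if (__ev_get_spefscr_fdbzs ())
+       __ev_clr_spefscr_fdbzs ();
+*/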
+
+/* Set rounding mode:
+ rnd = 0 (nearest)
+ rnd = 1 (zero)
+ rnd = 2 (+inf)
+ rnd = 3 (-inf)
+*/
+static inline void
+__ev_set_spefscr_frmc (int rnd)
+{
+ int i;
+
+ i = __builtin_spe_mfspefscr ();
+ i &= ~__SPEFSCR_FRMC;
+ i |= rnd;
+ __builtin_spe_mtspefscr (i);
+}
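+
+/* For example, __ev_set_spefscr_frmc (1) selects round-toward-zero for
+   subsequent SPE floating point operations (illustrative; the encodings
+   are listed above).  */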
+
+#endif /* _SPE_H */
diff --git a/gcc/config/rs6000/spe.md b/gcc/config/rs6000/spe.md
new file mode 100644
index 00000000000..250209e6dce
--- /dev/null
+++ b/gcc/config/rs6000/spe.md
@@ -0,0 +1,2550 @@
+;; e500 SPE description
+;; Copyright (C) 2002 Free Software Foundation, Inc.
+;; Contributed by Aldy Hernandez (aldy@quesejoda.com)
+
+;; This file is part of GNU CC.
+
+;; GNU CC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 2, or (at your option)
+;; any later version.
+
+;; GNU CC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GNU CC; see the file COPYING. If not, write to
+;; the Free Software Foundation, 59 Temple Place - Suite 330,
+;; Boston, MA 02111-1307, USA.
+
+(define_constants
+ [(SPE_ACC_REGNO 111)
+ (SPEFSCR_REGNO 112)])
+
+(define_insn "*negsf2_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (neg:SF (match_operand:SF 1 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsneg %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "*abssf2_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (abs:SF (match_operand:SF 1 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsabs %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "*addsf3_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (plus:SF (match_operand:SF 1 "gpc_reg_operand" "%r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsadd %0,%1,%2"
+ [(set_attr "type" "fp")])
+
+(define_insn "*subsf3_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (minus:SF (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efssub %0,%1,%2"
+ [(set_attr "type" "fp")])
+
+(define_insn "*mulsf3_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (mult:SF (match_operand:SF 1 "gpc_reg_operand" "%r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsmul %0,%1,%2"
+ [(set_attr "type" "fp")])
+
+(define_insn "*divsf3_gpr"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (div:SF (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsdiv %0,%1,%2"
+ [(set_attr "type" "fp")])
+
+(define_insn "spe_efsctuiz"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (unspec:SI [(match_operand:SF 1 "gpc_reg_operand" "r")] 700))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsctuiz %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "spe_fixunssfsi2"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (unsigned_fix:SI (fix:SF (match_operand:SF 1 "gpc_reg_operand" "r"))))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsctui %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "spe_fix_truncsfsi2"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (fix:SI (match_operand:SF 1 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efsctsi %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "spe_floatunssisf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (unsigned_float:SF (match_operand:SI 1 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efscfui %0,%1"
+ [(set_attr "type" "fp")])
+
+(define_insn "spe_floatsisf2"
+ [(set (match_operand:SF 0 "gpc_reg_operand" "=r")
+ (float:SF (match_operand:SI 1 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS"
+ "efscfsi %0,%1"
+ [(set_attr "type" "fp")])
+
+
+;; SPE SIMD instructions
+
+(define_insn "spe_evabs"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (abs:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evabs %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evandc"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (and:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (not:V2SI (match_operand:V2SI 2 "gpc_reg_operand" "r"))))]
+ "TARGET_SPE"
+ "evandc %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evand"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (and:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evand %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+;; Vector compare instructions
+
+(define_insn "spe_evcmpeq"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 500))]
+ "TARGET_SPE"
+ "evcmpeq %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evcmpgts"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 501))]
+ "TARGET_SPE"
+ "evcmpgts %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evcmpgtu"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 502))]
+ "TARGET_SPE"
+ "evcmpgtu %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evcmplts"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 503))]
+ "TARGET_SPE"
+ "evcmplts %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evcmpltu"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 504))]
+ "TARGET_SPE"
+ "evcmpltu %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+;; Floating point vector compare instructions
+
+(define_insn "spe_evfscmpeq"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 538))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfscmpeq %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscmpgt"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 539))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfscmpgt %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscmplt"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 540))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfscmplt %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfststeq"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 541))]
+ "TARGET_SPE"
+ "evfststeq %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfststgt"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 542))]
+ "TARGET_SPE"
+ "evfststgt %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfststlt"
+ [(set (match_operand:CC 0 "cc_reg_operand" "=y")
+ (unspec:CC [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")] 543))]
+ "TARGET_SPE"
+ "evfststlt %0,%1,%2"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+;; End of vector compare instructions
+
+(define_insn "spe_evcntlsw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 505))]
+ "TARGET_SPE"
+ "evcntlsw %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evcntlzw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 506))]
+ "TARGET_SPE"
+ "evcntlzw %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_eveqv"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (not:V2SI (xor:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))))]
+ "TARGET_SPE"
+ "eveqv %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evextsb"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 507))]
+ "TARGET_SPE"
+ "evextsb %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evextsh"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 508))]
+ "TARGET_SPE"
+ "evextsh %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhesplat"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 509)]
+ "TARGET_SPE"
+ "evlhhesplat %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhesplatx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 510)]
+ "TARGET_SPE"
+ "evlhhesplatx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhossplat"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 511)]
+ "TARGET_SPE"
+ "evlhhossplat %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhossplatx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 512)]
+ "TARGET_SPE"
+ "evlhhossplatx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhousplat"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 513)]
+ "TARGET_SPE"
+ "evlhhousplat %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlhhousplatx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 514)]
+ "TARGET_SPE"
+ "evlhhousplatx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhsplat"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 515)]
+ "TARGET_SPE"
+ "evlwhsplat %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhsplatx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 516)]
+ "TARGET_SPE"
+ "evlwhsplatx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwwsplat"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 517)]
+ "TARGET_SPE"
+ "evlwwsplat %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwwsplatx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 518)]
+ "TARGET_SPE"
+ "evlwwsplatx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmergehi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (vec_merge:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (vec_select:V2SI
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (parallel [(const_int 1)
+ (const_int 0)]))
+ (const_int 2)))]
+ "TARGET_SPE"
+ "evmergehi %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmergehilo"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (vec_merge:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (const_int 2)))]
+ "TARGET_SPE"
+ "evmergehilo %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmergelo"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (vec_merge:V2SI (vec_select:V2SI
+ (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (parallel [(const_int 1)
+ (const_int 0)]))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (const_int 2)))]
+ "TARGET_SPE"
+ "evmergelo %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmergelohi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (vec_merge:V2SI (vec_select:V2SI
+ (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (parallel [(const_int 1)
+ (const_int 0)]))
+ (vec_select:V2SI
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (parallel [(const_int 1)
+ (const_int 0)]))
+ (const_int 2)))]
+ "TARGET_SPE"
+ "evmergelohi %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evnand"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (not:V2SI (and:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))))]
+ "TARGET_SPE"
+ "evnand %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evneg"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (neg:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evneg %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evnor"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (not:V2SI (ior:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))))]
+ "TARGET_SPE"
+ "evnor %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evorc"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (ior:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (not:V2SI (match_operand:V2SI 2 "gpc_reg_operand" "r"))))]
+ "TARGET_SPE"
+ "evorc %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evor"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (ior:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evor %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evrlwi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")] 519))]
+ "TARGET_SPE"
+ "evrlwi %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evrlw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 520))]
+ "TARGET_SPE"
+ "evrlw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evrndw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 521))]
+ "TARGET_SPE"
+ "evrndw %0,%1"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsel"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (match_operand:CC 3 "cc_reg_operand" "y")] 522))]
+ "TARGET_SPE"
+ "evsel %0,%1,%2,%3"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsel_fs"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")
+ (match_operand:CC 3 "cc_reg_operand" "y")] 725))]
+ "TARGET_SPE"
+ "evsel %0,%1,%2,%3"
+ [(set_attr "type" "veccmp")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evslwi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")]
+ 523))]
+ "TARGET_SPE"
+ "evslwi %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evslw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 524))]
+ "TARGET_SPE"
+ "evslw %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsrwis"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")]
+ 525))]
+ "TARGET_SPE"
+ "evsrwis %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsrwiu"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")]
+ 526))]
+ "TARGET_SPE"
+ "evsrwiu %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsrws"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 527))]
+ "TARGET_SPE"
+ "evsrws %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsrwu"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 528))]
+ "TARGET_SPE"
+ "evsrwu %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evxor"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (xor:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evxor %0,%1,%2"
+ [(set_attr "type" "vecsimple")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsabs"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (abs:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evfsabs %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsadd"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (plus:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfsadd %0,%1,%2"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscfsf"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 529))]
+ "TARGET_SPE"
+ "evfscfsf %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscfsi"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (float:V2SF (match_operand:V2SI 1 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evfscfsi %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscfuf"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 530))]
+ "TARGET_SPE"
+ "evfscfuf %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfscfui"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SI 1 "gpc_reg_operand" "r")] 701))]
+ "TARGET_SPE"
+ "evfscfui %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctsf"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 531))]
+ "TARGET_SPE"
+ "evfsctsf %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctsi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 532))]
+ "TARGET_SPE"
+ "evfsctsi %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctsiz"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 533))]
+ "TARGET_SPE"
+ "evfsctsiz %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctuf"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 534))]
+ "TARGET_SPE"
+ "evfsctuf %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctui"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 535))]
+ "TARGET_SPE"
+ "evfsctui %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsctuiz"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 536))]
+ "TARGET_SPE"
+ "evfsctuiz %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsdiv"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (div:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfsdiv %0,%1,%2"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsmul"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (mult:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfsmul %0,%1,%2"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsnabs"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (unspec:V2SF [(match_operand:V2SF 1 "gpc_reg_operand" "r")] 537))]
+ "TARGET_SPE"
+ "evfsnabs %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfsneg"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (neg:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evfsneg %0,%1"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evfssub"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (minus:V2SF (match_operand:V2SF 1 "gpc_reg_operand" "r")
+ (match_operand:V2SF 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evfssub %0,%1,%2"
+ [(set_attr "type" "vecfloat")
+ (set_attr "length" "4")])
+
+;; SPE SIMD load instructions.
+
+;; Only the hardware engineer who designed the SPE understands the
+;; plethora of load and store instructions ;-).  We have no way of
+;; differentiating between them in RTL, so we use an unspec of
+;; (const_int 0) to keep otherwise identical patterns distinct.
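+;;
+;; For example, evldd (544) and evldw (548) below would otherwise share
+;; the identical (set (reg:V2SI ...) (mem:V2SI (plus ...))) form; the
+;; distinct constant in each (unspec [(const_int 0)] N) keeps the
+;; patterns from matching one another.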
+
+(define_insn "spe_evldd"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 544)]
+ "TARGET_SPE && INTVAL (operands[2]) >= 0 && INTVAL (operands[2]) <= 31"
+ "evldd %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlddx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 545)]
+ "TARGET_SPE"
+ "evlddx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evldh"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 546)]
+ "TARGET_SPE && INTVAL (operands[2]) >= 0 && INTVAL (operands[2]) <= 31"
+ "evldh %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evldhx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 547)]
+ "TARGET_SPE"
+ "evldhx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evldw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 548)]
+ "TARGET_SPE"
+ "evldw %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evldwx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 549)]
+ "TARGET_SPE"
+ "evldwx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhe"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 550)]
+ "TARGET_SPE"
+ "evlwhe %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhex"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 551)]
+ "TARGET_SPE"
+ "evlwhex %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhos"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 552)]
+ "TARGET_SPE"
+ "evlwhos %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhosx"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 553)]
+ "TARGET_SPE"
+ "evlwhosx %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhou"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:QI 2 "immediate_operand" "i"))))
+ (unspec [(const_int 0)] 554)]
+ "TARGET_SPE"
+ "evlwhou %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evlwhoux"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (mem:V2SI (plus:SI (match_operand:SI 1 "gpc_reg_operand" "b")
+ (match_operand:SI 2 "gpc_reg_operand" "r"))))
+ (unspec [(const_int 0)] 555)]
+ "TARGET_SPE"
+ "evlwhoux %0,%1,%2"
+ [(set_attr "type" "vecload")
+ (set_attr "length" "4")])
+
+(define_insn "spe_brinc"
+ [(set (match_operand:SI 0 "gpc_reg_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "gpc_reg_operand" "r")
+ (match_operand:SI 2 "gpc_reg_operand" "r")] 556))]
+ "TARGET_SPE"
+ "brinc %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegsmfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 557))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegsmfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegsmfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 558))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegsmfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegsmiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 559))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegsmiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegsmian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 560))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegsmian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegumiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 561))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegumiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhegumian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 562))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhegumian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 563))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 564))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 565))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 566))]
+ "TARGET_SPE"
+ "evmhesmf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 567))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 568))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 569))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhesmia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhesmi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 570))]
+ "TARGET_SPE"
+ "evmhesmi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 571))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhessfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 572))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhessfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 573))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhessfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 574))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evmhessf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 575))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhessiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhessianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 576))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhessianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheumiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 577))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmheumiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheumianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 578))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmheumianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheumia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 579))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmheumia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheumi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 580))]
+ "TARGET_SPE"
+ "evmheumi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheusiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 581))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmheusiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmheusianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 582))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmheusianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogsmfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 583))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogsmfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogsmfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 584))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogsmfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogsmiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 585))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogsmiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogsmian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 586))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogsmian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogumiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 587))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogumiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhogumian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 588))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhogumian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 589))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 590))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 591))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 592))]
+ "TARGET_SPE"
+ "evmhosmf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 593))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 594))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 595))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhosmia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhosmi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 596))]
+ "TARGET_SPE"
+ "evmhosmi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 597))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhossfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 598))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhossfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 599))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhossfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 600))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evmhossf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 601))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhossiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhossianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 602))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhossianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhoumiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 603))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhoumiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhoumianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 604))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhoumianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhoumia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 605))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhoumia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhoumi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 606))]
+ "TARGET_SPE"
+ "evmhoumi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhousiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 607))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhousiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmhousianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 608))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmhousianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmmlssfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 609))]
+ "TARGET_SPE"
+ "evmmlssfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmmlssf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 610))]
+ "TARGET_SPE"
+ "evmmlssf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 611))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 612))]
+ "TARGET_SPE"
+ "evmwhsmf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 613))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 614))]
+ "TARGET_SPE"
+ "evmwhsmi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhssfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 615))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhssfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhusian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 626))]
+ "TARGET_SPE"
+ "evmwhusian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhssf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 628))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evmwhssf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhumia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 629))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhumia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhumi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 630))]
+ "TARGET_SPE"
+ "evmwhumi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 631))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlsmfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 632))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlsmfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 633))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlsmfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 634))]
+ "TARGET_SPE"
+ "evmwlsmf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 635))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlsmiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlsmianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 636))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlsmianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 637))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evmwlssf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 638))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlssfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssfaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 639))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlssfaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssfanw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 640))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlssfanw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 641))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlssiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlssianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 642))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlssianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlumiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 643))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlumiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlumianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 644))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlumianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlumia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 645))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlumia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlumi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 646))]
+ "TARGET_SPE"
+ "evmwlumi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlusiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 647))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlusiaaw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwlusianw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 648))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwlusianw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 649))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 650))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 651))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 652))]
+ "TARGET_SPE"
+ "evmwsmf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 653))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 654))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 655))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwsmia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwsmi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 656))]
+ "TARGET_SPE"
+ "evmwsmi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwssfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 657))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwssfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwssfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 658))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwssfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwssfa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 659))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwssfa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwssf"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 660))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evmwssf %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwumiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 661))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwumiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwumian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 662))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwumian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwumia"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 663))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwumia %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwumi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 664))]
+ "TARGET_SPE"
+ "evmwumi %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (plus:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evaddw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddusiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 673))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evaddusiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddumiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 674))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evaddumiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddssiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 675))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evaddssiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddsmiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 676))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evaddsmiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evaddiw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")] 677))]
+ "TARGET_SPE"
+ "evaddiw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubifw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:QI 2 "immediate_operand" "i")] 678))]
+ "TARGET_SPE"
+ "evsubifw %0,%2,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubfw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (minus:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))]
+ "TARGET_SPE"
+ "evsubfw %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubfusiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 679))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evsubfusiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubfumiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 680))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evsubfumiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubfssiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 681))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evsubfssiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsubfsmiaaw"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (reg:V2SI SPE_ACC_REGNO)] 682))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evsubfsmiaaw %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmra"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (match_operand:V2SI 1 "gpc_reg_operand" "r"))
+ (set (reg:V2SI SPE_ACC_REGNO) (match_dup 1))]
+ "TARGET_SPE"
+ "evmra %0,%1"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evdivws"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (div:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evdivws %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evdivwu"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (udiv:V2SI (match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")))
+ (clobber (reg:SI SPEFSCR_REGNO))]
+ "TARGET_SPE"
+ "evdivwu %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsplatfi"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:QI 1 "immediate_operand" "i")] 684))]
+ "TARGET_SPE"
+ "evsplatfi %1,%0"
+ [(set_attr "type" "vecperm")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evsplati"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:QI 1 "immediate_operand" "i")] 685))]
+ "TARGET_SPE"
+ "evsplati %1,%0"
+ [(set_attr "type" "vecperm")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstdd"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 686)]
+ "TARGET_SPE"
+ "evstdd %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstddx"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 687)]
+ "TARGET_SPE"
+ "evstddx %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstdh"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 688)]
+ "TARGET_SPE"
+ "evstdh %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstdhx"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 689)]
+ "TARGET_SPE"
+ "evstdhx %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstdw"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 690)]
+ "TARGET_SPE"
+ "evstdw %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstdwx"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 691)]
+ "TARGET_SPE"
+ "evstdwx %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwhe"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 692)]
+ "TARGET_SPE"
+ "evstwhe %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwhex"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 693)]
+ "TARGET_SPE"
+ "evstwhex %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwho"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 694)]
+ "TARGET_SPE"
+ "evstwho %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwhox"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 695)]
+ "TARGET_SPE"
+ "evstwhox %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwwe"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 696)]
+ "TARGET_SPE"
+ "evstwwe %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwwex"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 697)]
+ "TARGET_SPE"
+ "evstwwex %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwwo"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:QI 1 "immediate_operand" "i")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 698)]
+ "TARGET_SPE"
+ "evstwwo %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evstwwox"
+ [(set (mem:V2SI (plus:SI (match_operand:SI 0 "gpc_reg_operand" "b")
+ (match_operand:SI 1 "gpc_reg_operand" "r")))
+ (match_operand:V2SI 2 "gpc_reg_operand" "r"))
+ (unspec [(const_int 0)] 699)]
+ "TARGET_SPE"
+ "evstwwox %2,%0,%1"
+ [(set_attr "type" "vecstore")
+ (set_attr "length" "4")])
+
+;; SPE vector clears
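+;;
+;; A vector of zeros never needs a load: XORing a register with itself
+;; (evxor %0,%0,%0) clears it, so moves of a zero_constant are matched
+;; directly by the patterns below.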
+
+(define_insn "*movv2si_const0"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (match_operand:V2SI 1 "zero_constant" ""))]
+ "TARGET_SPE"
+ "evxor %0,%0,%0"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "*movv2sf_const0"
+ [(set (match_operand:V2SF 0 "gpc_reg_operand" "=r")
+ (match_operand:V2SF 1 "zero_constant" ""))]
+ "TARGET_SPE"
+ "evxor %0,%0,%0"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "*movv4hi_const0"
+ [(set (match_operand:V4HI 0 "gpc_reg_operand" "=r")
+ (match_operand:V4HI 1 "zero_constant" ""))]
+ "TARGET_SPE"
+ "evxor %0,%0,%0"
+ [(set_attr "type" "vecsimple")])
+
+;; Vector move instructions.
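+;;
+;; The expanders hand everything to rs6000_emit_move, which legitimizes
+;; the operands (e.g. forcing constants and awkward addresses into
+;; registers) before the *mov*_internal patterns below match the
+;; remaining register/memory alternatives with evstdd, evldd, and evor.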
+
+(define_expand "movv2si"
+ [(set (match_operand:V2SI 0 "nonimmediate_operand" "")
+ (match_operand:V2SI 1 "any_operand" ""))]
+ "TARGET_SPE"
+ "{ rs6000_emit_move (operands[0], operands[1], V2SImode); DONE; }")
+
+(define_insn "*movv2si_internal"
+ [(set (match_operand:V2SI 0 "nonimmediate_operand" "=m,r,r")
+ (match_operand:V2SI 1 "input_operand" "r,m,r"))]
+ "TARGET_SPE"
+ "@
+ evstdd%X0 %1,%y0
+ evldd%X1 %0,%y1
+ evor %0,%1,%1"
+ [(set_attr "type" "vecload")])
+
+(define_expand "movv4hi"
+ [(set (match_operand:V4HI 0 "nonimmediate_operand" "")
+ (match_operand:V4HI 1 "any_operand" ""))]
+ "TARGET_SPE"
+ "{ rs6000_emit_move (operands[0], operands[1], V4HImode); DONE; }")
+
+(define_insn "*movv4hi_internal"
+ [(set (match_operand:V4HI 0 "nonimmediate_operand" "=m,r,r")
+ (match_operand:V4HI 1 "input_operand" "r,m,r"))]
+ "TARGET_SPE"
+ "@
+ evstdd%X0 %1,%y0
+ evldd%X1 %0,%y1
+ evor %0,%1,%1"
+ [(set_attr "type" "vecload")])
+
+(define_expand "movv2sf"
+ [(set (match_operand:V2SF 0 "nonimmediate_operand" "")
+ (match_operand:V2SF 1 "any_operand" ""))]
+ "TARGET_SPE"
+ "{ rs6000_emit_move (operands[0], operands[1], V2SFmode); DONE; }")
+
+(define_insn "*movv2sf_internal"
+ [(set (match_operand:V2SF 0 "nonimmediate_operand" "=m,r,r")
+ (match_operand:V2SF 1 "input_operand" "r,m,r"))]
+ "TARGET_SPE"
+ "@
+ evstdd%X0 %1,%y0
+ evldd%X1 %0,%y1
+ evor %0,%1,%1"
+ [(set_attr "type" "vecload")])
+
+(define_insn "spe_evmwhssfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 702))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhssfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhssmaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 703))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhssmaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 704))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 705))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhusiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 706))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhusiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhumiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 707))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhumiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhssfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 708))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhssfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhssian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 709))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhssian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 710))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhsmian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 711))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhsmian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhumian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 713))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhumian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgssfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 714))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgssfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgsmfaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 715))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgsmfaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgsmiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 716))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgsmiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgumiaa"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 717))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgumiaa %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgssfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 718))
+ (clobber (reg:SI SPEFSCR_REGNO))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgssfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgsmfan"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 719))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgsmfan %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgsmian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 720))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgsmian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
+(define_insn "spe_evmwhgumian"
+ [(set (match_operand:V2SI 0 "gpc_reg_operand" "=r")
+ (unspec:V2SI [(match_operand:V2SI 1 "gpc_reg_operand" "r")
+ (match_operand:V2SI 2 "gpc_reg_operand" "r")] 721))
+ (clobber (reg:V2SI SPE_ACC_REGNO))]
+ "TARGET_SPE"
+ "evmwhgumian %0,%1,%2"
+ [(set_attr "type" "veccomplex")
+ (set_attr "length" "4")])
+
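+;; Move to/from the SPE floating-point status and control register
+;; (SPEFSCR).  These are unspec_volatile so the moves are not combined,
+;; moved, or deleted by the optimizers.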
+(define_insn "spe_mtspefscr"
+ [(set (reg:SI SPEFSCR_REGNO)
+ (unspec_volatile:SI [(match_operand:SI 0 "register_operand" "r")]
+ 722))]
+ "TARGET_SPE"
+ "mtspefscr %0"
+ [(set_attr "type" "vecsimple")])
+
+(define_insn "spe_mfspefscr"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec_volatile:SI [(reg:SI SPEFSCR_REGNO)] 723))]
+ "TARGET_SPE"
+ "mfspefscr %0"
+ [(set_attr "type" "vecsimple")])
+
+;; MPC8540 single-precision FP instructions on GPRs.
+;; We have two variants of each: one for IEEE-compliant math and one
+;; for non-IEEE-compliant math (selected when
+;; flag_unsafe_math_optimizations is set).
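+;;
+;; A minimal illustrative sketch, assuming SPE hard float with floats
+;; kept in GPRs (TARGET_HARD_FLOAT && !TARGET_FPRS): a C comparison
+;; such as
+;;
+;;   int sf_gt (float a, float b) { return a > b; }
+;;
+;; is expected to match cmpsfgt_gpr (emitting efscmpgt) by default,
+;; and the tstsfgt_gpr variant (emitting efststgt) when
+;; -funsafe-math-optimizations (e.g. via -ffast-math) sets
+;; flag_unsafe_math_optimizations.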
+
+(define_insn "cmpsfeq_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (eq:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && !flag_unsafe_math_optimizations"
+ "efscmpeq %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+
+(define_insn "tstsfeq_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (eq:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && flag_unsafe_math_optimizations"
+ "efststeq %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+
+(define_insn "cmpsfgt_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (gt:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && !flag_unsafe_math_optimizations"
+ "efscmpgt %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+
+(define_insn "tstsfgt_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (gt:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && flag_unsafe_math_optimizations"
+ "efststgt %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+
+(define_insn "cmpsflt_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (lt:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && !flag_unsafe_math_optimizations"
+ "efscmplt %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+
+(define_insn "tstsflt_gpr"
+ [(set (match_operand:CCFP 0 "cc_reg_operand" "=y")
+ (lt:CCFP (match_operand:SF 1 "gpc_reg_operand" "r")
+ (match_operand:SF 2 "gpc_reg_operand" "r")))]
+ "TARGET_HARD_FLOAT && !TARGET_FPRS && flag_unsafe_math_optimizations"
+ "efststlt %0,%1,%2"
+ [(set_attr "type" "fpcompare")])
+