path: root/opcodes
Commit message | Author | Age | Files | Lines
* gcc-13 i386-dis.c warning | Alan Modra | 2023-04-24 | 1 | -16/+31
    opcodes/i386-dis.c: In function ‘print_insn’:
    opcodes/i386-dis.c:9865:22: error: storing the address of local variable ‘priv’ in ‘*info.private_data’ [-Werror=dangling-pointer=]

    * i386-dis.c (print_insn): Clear info->private_data before returning.
* x86: work around compiler diagnosing dangling pointer | Jan Beulich | 2023-04-24 | 1 | -0/+6
    For quite some time print_insn() has been storing the address of a local
    variable into info->private_data. Since the compiler can't know that the
    field won't be accessed again after print_insn() returns, it may, somewhat
    legitimately, diagnose this. And recent enough gcc does, as of the
    introduction of the fetch_error() return paths (replacing setjmp()-based
    error handling).

    Utilizing that neither prefix_name() nor i386_dis_printf() actually uses
    info->private_data, zap the pointer in fetch_error(), after having
    retrieved it for local use.
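    To make the pattern behind these two fixes concrete, here is a minimal,
    self-contained C sketch of storing a local's address in a longer-lived
    info structure and clearing the pointer before returning. The struct and
    function names are stand-ins chosen to mirror the commit messages; the
    bodies are illustrative assumptions, not the actual i386-dis.c code:

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical stand-in for disassemble_info: it outlives the call
           that fills in private_data.  */
        struct dis_info {
          void *private_data;
        };

        struct dis_private {
          int bytes_fetched;
        };

        static int fetch_error(struct dis_info *info)
        {
          /* Retrieve the private data for local use, then zap the pointer so
             the caller never sees a dangling reference to print_insn's local.  */
          struct dis_private *priv = info->private_data;
          fprintf(stderr, "fetch error after %d bytes\n", priv->bytes_fetched);
          info->private_data = NULL;
          return -1;
        }

        static int print_insn(struct dis_info *info)
        {
          struct dis_private priv = { .bytes_fetched = 2 };

          info->private_data = &priv;   /* address of a local variable */

          /* Decoding would happen here; pretend a fetch failed.  */
          if (priv.bytes_fetched != 0)
            return fetch_error(info);

          info->private_data = NULL;    /* clear before returning normally, too */
          return 0;
        }

        int main(void)
        {
          struct dis_info info = { NULL };
          return print_insn(&info) == -1 ? 0 : 1;
        }

    Clearing the field on every exit path is what keeps the stale address from
    ever being visible after the function returns, which is the point of the
    two changes described above.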
* Fix -Wmaybe-uninitialized warning in opcodes/i386-dis.c | Tom Tromey | 2023-04-21 | 2 | -1/+6
    A recent change in opcodes/i386-dis.c caused a build failure on my x86-64
    Fedora 36 system, which uses:

        $ gcc --version
        gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
        [...]

    The error is:

        ../../binutils-gdb/opcodes/i386-dis.c: In function ‘OP_J’:
        ../../binutils-gdb/opcodes/i386-dis.c:12705:22: error: ‘val’ may be used uninitialized [-Werror=maybe-uninitialized]
        12705 |       disp = val & 0x8000 ? val - 0x10000 : val;
              |              ~~~~^~~~~~~~

    This patch fixes the warning.

    opcodes/ChangeLog
    2023-04-21  Tom Tromey  <tromey@adacore.com>

        * i386-dis.c (OP_J): Check result of get16.
* x86: drop (explicit) BFD64 dependency from disassembler | Jan Beulich | 2023-04-21 | 1 | -13/+4
| | | | | | get64() is unreachable when !BFD64 (due to a check relatively early in print_insn()). Let's avoid the associated #ifdef-ary (or else we should extend it to remove more dead code).
* x86: drop use of setjmp() from disassembler | Jan Beulich | 2023-04-21 | 1 | -5/+0
| | | | | With the longjmp() uses all gone, the setjmp() isn't necessary anymore either.
* x86: change fetch error handling for get<N>() | Jan Beulich | 2023-04-21 | 1 | -133/+114
| | | | | | | | | | | | | | | Make them return boolean and convert FETCH_DATA() uses to fetch_code(). With this no further users of FETCH_DATA() remain, so the macro and its backing function are dropped as well. Leave value types as they were for the helper functions, even if I don't think that beyond get64() use of bfd_{,signed_}vma is really necessary. With type change of "disp" in OP_E_memory(), change the 2nd parameter of print_displacement() to a signed type as well, though (eliminating the need for a local variable of signed type). This also eliminates the need for custom printing of '-' in Intel syntax displacement expressions. While there drop forward declarations which aren't really needed.
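    As an illustration of the boolean-returning fetch style described above,
    here is a sketch only: the buffering, the signature and the caller are
    assumptions for the example, not the actual i386-dis.c helpers.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical byte source; the real disassembler reads through
           callbacks and an internal fetch buffer instead.  */
        struct code_buf {
          const uint8_t *bytes;
          size_t len, pos;
        };

        /* Boolean-returning fetch helper: the value goes through an out
           parameter and the caller must check the return status.  */
        static bool get16(struct code_buf *buf, unsigned int *res)
        {
          if (buf->len - buf->pos < 2)
            return false;                 /* ran off the end of the buffer */
          *res = buf->bytes[buf->pos] | (buf->bytes[buf->pos + 1] << 8);
          buf->pos += 2;
          return true;
        }

        int main(void)
        {
          const uint8_t insn[] = { 0x34, 0x12 };
          struct code_buf buf = { insn, sizeof insn, 0 };
          unsigned int val;

          if (!get16(&buf, &val))         /* caller checks the result */
            return 1;
          printf("fetched 0x%04x\n", val);
          return 0;
        }

    The same calling convention is what lets the -Wmaybe-uninitialized fix
    above simply check the result of get16 before using the value.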
* x86: change fetch error handling when processing operands | Jan Beulich | 2023-04-21 | 1 | -233/+276
| | | | | Make the handler functions all return boolean and convert FETCH_DATA() uses to fetch_code().
* x86: change fetch error handling in get_valid_dis386() | Jan Beulich | 2023-04-21 | 1 | -30/+26
| | | | | Introduce a special error indicator node, for the sole (real) caller to recognize and act upon.
* x86: change fetch error handling in ckprefix() | Jan Beulich | 2023-04-21 | 1 | -12/+20
| | | | | | Use a tristate (enum) return value type to be able to express all three cases which are of interest to the (sole) caller. This also allows doing away with the abuse of "rex_used".
* x86: change fetch error handling in top-level function | Jan Beulich | 2023-04-21 | 1 | -13/+59
| | | | | | | | | ... and its direct helper get_sib(). Using setjmp()/longjmp() for fetch error handling is problematic, as per https://sourceware.org/pipermail/binutils/2023-March/126687.html. Start using more conventional error handling instead. Also introduce a fetch_modrm() helper, for subsequent re-use.
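    A rough sketch of the early-return style that replaces setjmp()/longjmp():
    only the names fetch_modrm() and get_sib() are taken from the commit
    message above; the decoder structure, the function bodies and the SIB
    condition are simplified assumptions for illustration.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct decoder {
          const uint8_t *bytes;
          size_t len, pos;
          uint8_t modrm, sib;
        };

        static bool fetch_byte(struct decoder *d, uint8_t *out)
        {
          if (d->pos >= d->len)
            return false;               /* instead of longjmp() on fetch failure */
          *out = d->bytes[d->pos++];
          return true;
        }

        static bool fetch_modrm(struct decoder *d)
        {
          return fetch_byte(d, &d->modrm);
        }

        static bool get_sib(struct decoder *d)
        {
          /* A SIB byte follows only when mod != 3 and rm == 4.  */
          if ((d->modrm & 0xc0) == 0xc0 || (d->modrm & 0x07) != 0x04)
            return true;
          return fetch_byte(d, &d->sib);
        }

        static int print_insn(struct decoder *d)
        {
          /* Each failure is propagated up by a plain return, no setjmp().  */
          if (!fetch_modrm(d) || !get_sib(d))
            {
              puts("(bad)");
              return -1;
            }
          printf("modrm=%#x\n", (unsigned) d->modrm);
          return (int) d->pos;
        }

        int main(void)
        {
          const uint8_t code[] = { 0x04 };   /* ModRM that wants a SIB byte */
          struct decoder d = { code, sizeof code, 0, 0, 0 };
          return print_insn(&d) < 0 ? 0 : 1;
        }

    Error state flows back through ordinary return values, which is the
    "more conventional error handling" the series converges on.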
* x86: move fetch error handling into a helper function | Jan Beulich | 2023-04-21 | 1 | -28/+35
| | | | | | | | | ... such that it can be used from other than the setjmp() error handling path. Since I'd like the function's parameter to be pointer-to-const, two other functions need respective constification then, too (along with needing to be forward-declared).
* RISC-V: Cache the latest mapping symbol and its boundary. | Kito Cheng | 2023-04-18 | 1 | -0/+43
    This issue was reported at
    https://github.com/riscv-collab/riscv-gnu-toolchain/issues/1188

    Current flow:
    1) Scan for any mapping symbol less than this instruction.
    2) If not found, do a backward search.

    The flow doesn't look like a big issue, so let's run an example here:

        $x:
        0x0  a          <--- Found at step 1
        0x4  b          <--- Not found in step 1, but found at step 2
        0x8  c          <--- Not found in step 1, but found at step 2
        $d
        0x12 .word 1234 <--- Found at step 1

    Instructions that don't share an address with a mapping symbol still do
    the backward search again and again.

    So the new flow is:
    1) Use the last mapping symbol's state if the address is still within the
       range of the current mapping symbol.
    2) Scan for any mapping symbol less than this instruction.
    3) If not found, do a backward search.
    4) If a proper mapping symbol is found in either step 2 or 3, find its
       boundary and cache that.

    Use the same example to run the new flow again:

        $x:
        0x0  a          <--- Found at step 2, the boundary is 0x12
        0x4  b          <--- Cache hit at step 1, within the boundary.
        0x8  c          <--- Cache hit at step 1, within the boundary.
        $d
        0x12 .word 1234 <--- Found at step 2, the boundary is the end of the section.

    The disassembly time of the test cases has been reduced from ~20 minutes
    to ~4 seconds.

    opcodes/ChangeLog:

    PR 30282
    * riscv-dis.c (last_map_symbol_boundary): New.
    (last_map_state): New.
    (last_map_section): New.
    (riscv_search_mapping_symbol): Cache the result of the latest mapping
    symbol.
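    The caching scheme can be sketched as follows. The data structures are
    deliberately simplified stand-ins (a flat array instead of BFD symbols);
    the four-step flow follows the commit message, everything else is assumed
    for illustration and is not the riscv-dis.c implementation.

        #include <stdbool.h>
        #include <stdio.h>

        typedef unsigned long addr_t;

        struct map_sym { addr_t addr; char state; };   /* 'x' = code, 'd' = data */

        static const struct map_sym syms[] = { { 0x0, 'x' }, { 0x12, 'd' } };
        static const size_t nsyms = sizeof syms / sizeof syms[0];
        static const addr_t section_end = 0x40;

        /* Cache of the most recent lookup: state plus its valid address range.  */
        static bool cache_valid;
        static addr_t cache_start, cache_end;
        static char cache_state;

        static char map_state_at(addr_t pc)
        {
          if (cache_valid && pc >= cache_start && pc < cache_end)
            return cache_state;               /* step 1: reuse the last result */

          /* Steps 2-3 (simplified): find the covering mapping symbol.  */
          size_t i, found = 0;
          for (i = 0; i < nsyms; i++)
            if (syms[i].addr <= pc)
              found = i;

          /* Step 4: remember the symbol's boundary for the next call.  */
          cache_start = syms[found].addr;
          cache_end = found + 1 < nsyms ? syms[found + 1].addr : section_end;
          cache_state = syms[found].state;
          cache_valid = true;
          return cache_state;
        }

        int main(void)
        {
          addr_t pcs[] = { 0x0, 0x4, 0x8, 0x12 };
          for (size_t i = 0; i < 4; i++)
            printf("%#lx -> %c\n", pcs[i], map_state_at(pcs[i]));
          return 0;
        }

    The win comes from step 1: consecutive addresses inside the same mapping
    symbol range never repeat the search.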
* arc: remove faulty instructions | Claudiu Zissulescu | 2023-04-12 | 2 | -720/+6
    Remove unimplemented ARC instructions from the ARC instruction table.
* Fix illegal memory access when disassembling corrupt NFP binaries. | Nick Clifton | 2023-04-11 | 2 | -1/+9
| | | | | PR 30310 * nfp-dis.c (init_nfp6000_priv): Check that the output section exists.
* Support Intel AMX-COMPLEX | Haochen Jiang | 2023-04-07 | 7 | -4711/+4808
    gas/ChangeLog:

    * NEWS: Support Intel AMX-COMPLEX.
    * config/tc-i386.c: Add amx_complex.
    * doc/c-i386.texi: Document .amx_complex.
    * testsuite/gas/i386/i386.exp: Run AMX-COMPLEX tests.
    * testsuite/gas/i386/amx-complex-inval.l: New test.
    * testsuite/gas/i386/amx-complex-inval.s: Ditto.
    * testsuite/gas/i386/x86-64-amx-complex-bad.d: Ditto.
    * testsuite/gas/i386/x86-64-amx-complex-bad.s: Ditto.
    * testsuite/gas/i386/x86-64-amx-complex-intel.d: Ditto.
    * testsuite/gas/i386/x86-64-amx-complex.d: Ditto.
    * testsuite/gas/i386/x86-64-amx-complex.s: Ditto.

    opcodes/ChangeLog:

    * i386-dis.c (MOD_VEX_0F386C_X86_64_W_0): New.
    (PREFIX_VEX_0F386C_X86_64_W_0_M_1_L_0): Ditto.
    (X86_64_VEX_0F386C): Ditto.
    (VEX_LEN_0F386C_X86_64_W_0_M_1): Ditto.
    (VEX_W_0F386C_X86_64): Ditto.
    (mod_table): Add MOD_VEX_0F386C_X86_64_W_0.
    (prefix_table): Add PREFIX_VEX_0F386C_X86_64_W_0_M_1_L_0.
    (x86_64_table): Add X86_64_VEX_0F386C.
    (vex_len_table): Add VEX_LEN_0F386C_X86_64_W_0_M_1.
    (vex_w_table): Add VEX_W_0F386C_X86_64.
    * i386-gen.c (cpu_flag_init): Add CPU_AMX_COMPLEX_FLAGS and
    CPU_ANY_AMX_COMPLEX_FLAGS.
    * i386-init.h: Regenerated.
    * i386-mnem.h: Ditto.
    * i386-opc.h (CpuAMX_COMPLEX): New.
    (i386_cpu_flags): Add cpuamx_complex.
    * i386-opc.tbl: Add AMX-COMPLEX instructions.
    * i386-tbl.h: Regenerated.
* asan: csky floatformat_to_double uninitialised value | Alan Modra | 2023-04-03 | 1 | -10/+6
| | | | | | * csky-dis.c (csky_print_operand <OPRND_TYPE_FCONSTANT>): Don't access ibytes after read_memory_func error. Change type of ibytes to avoid casts.
* opcodes/arm: adjust whitespace in cpsie instruction | Andrew Burgess | 2023-04-03 | 1 | -2/+2
    While I was working on the disassembler styling for ARM I noticed that the
    whitespace in the cpsie instruction was inconsistent with most of the other
    ARM disassembly output; the disassembly for cpsie looks like this:

        cpsie   if,#10

    Notice there's no space before the '#10' immediate; most other ARM
    instructions have a space before each operand. This commit updates the
    disassembler to add the missing space, and updates the tests I found that
    tested this instruction.
* RISC-V: Allocate "various" operand type | Tsukasa OI | 2023-03-31 | 2 | -8/+24
    This commit intends to move operands that require very special handling,
    or operand types that are minor (e.g. only useful on a few instructions),
    under "W". I also intend this "W" to be "temporary" operand storage until
    we can find a good two-character (or shorter) operand type.

    In this commit, the prefetch offset operand "f" for the 'Zicbop' extension
    is moved to "Wif" because of its special handling (and allocating the
    single character "f" for this operand type seemed too much).

    The current expected allocation guideline is as follows:
    1. 'W'
    2. The most closely related single-letter extension in lowercase
       (strongly recommended but not mandatory)
    3. Identify operand type

    The author currently plans to allocate the following three-character
    operand types (for operands including instructions from unratified
    extensions):
    1. "Wif" ('Zicbop': fetch offset)
    2. "Wfv" (unratified 'Zfa': value operand from FLI.[HSDQ] instructions)
    3. "Wfm" / "WfM" ('Zfh', 'F', 'D', 'Q': rounding modes "m" with special
       handling solely for widening conversion instructions)

    gas/ChangeLog:

    * config/tc-riscv.c (validate_riscv_insn, riscv_ip): Move from "f"
    to "Wif".

    opcodes/ChangeLog:

    * riscv-dis.c (print_insn_args): Move from "f" to "Wif".
    * riscv-opc.c (riscv_opcodes): Reflect new operand type.
* x86: parse VEX and alike specifiers for .insn | Jan Beulich | 2023-03-31 | 1 | -0/+2
| | | | | | | | | | All encoding spaces can be used this way; there's a certain risk that the bits presently reserved could be used for other purposes down the road, but people using .insn are expected to know what they're doing anyway. Plus this way there's at least _some_ way to have those bits set. For now this will only allow operand-less insns to be encoded this way.
* x86: introduce .insn directive | Jan Beulich | 2023-03-31 | 3 | -0/+5
| | | | For starters this deals with only very basic constructs.
* aarch64: Add the RPRFM instruction | Richard Sandiford | 2023-03-30 | 6 | -885/+925
| | | | | | | | | | This patch adds the RPRFM (range prefetch) instruction. It was introduced as part of SME2, but it belongs to the prefetch hint space and so doesn't require any specific ISA flags. The aarch64_rprfmop_array initialiser (deliberately) only fills in the leading non-null elements.
* aarch64: Add the SVE FCLAMP instruction | Richard Sandiford | 2023-03-30 | 2 | -759/+771
* aarch64: Add new SVE shift instructions | Richard Sandiford | 2023-03-30 | 2 | -873/+909
| | | | | This patch adds the new SVE SQRSHRN, SQRSHRUN and UQRSHRN instructions.
* aarch64: Add new SVE saturating conversion instructions | Richard Sandiford | 2023-03-30 | 2 | -752/+788
| | | | | This patch adds the SVE SQCVTN, SQCVTUN and UQCVTN instructions, which are available when FEAT_SME2 is implemented.
* aarch64: Add new SVE dot-product instructions | Richard Sandiford | 2023-03-30 | 6 | -841/+923
| | | | | | | This patch adds the SVE FDOT, SDOT and UDOT instructions, which are available when FEAT_SME2 is implemented. The patch also reorders the existing SVE_Zm3_22_INDEX to keep the operands numerically sorted.
* aarch64: Add the SVE BFMLSL instructions | Richard Sandiford | 2023-03-30 | 2 | -742/+793
| | | | | This patch adds the SVE BFMLSLB and BFMLSLT instructions, which are available when FEAT_SME2 is implemented.
* aarch64: Add the SME2 UZP and ZIP instructions | Richard Sandiford | 2023-03-30 | 2 | -338/+438
| | | | | This patch adds UZP and ZIP, which combine UZP{1,2} and ZIP{1,2} into single instructions.
* aarch64: Add the SME2 UNPK instructions | Richard Sandiford | 2023-03-30 | 2 | -709/+757
| | | | | | This patch adds SUNPK and UUNPK, which unpack one register's worth of elements to two registers' worth, or two registers' worth to four registers' worth.
* aarch64: Add the SME2 shift instructions | Richard Sandiford | 2023-03-30 | 9 | -505/+679
    There are two instruction formats here:

    - SQRSHR, SQRSHRU and UQRSHR, which operate on lists of two or four
      registers.
    - SQRSHRN, SQRSHRUN and UQRSHRN, which operate on lists of four registers.

    These are the first SME2 instructions to have immediate operands. The
    patch makes sure that, when parsing SME2 instructions with immediate
    operands, the new predicate-as-counter registers are parsed as registers
    rather than as #-less immediates.
* aarch64: Add the SME2 saturating conversion instructions | Richard Sandiford | 2023-03-30 | 6 | -468/+592
    There are two instruction formats here:

    - SQCVT, SQCVTU and UQCVT, which operate on lists of two or four
      registers.
    - SQCVTN, SQCVTUN and UQCVTN, which operate on lists of four registers.
* aarch64: Add the SME2 FP<->FP conversion instructions | Richard Sandiford | 2023-03-30 | 2 | -948/+1000
| | | | | This patch adds the BFCVT{,N} and FCVT{,N} instructions, which narrow a pair of .S registers to a single .H register.
* aarch64: Add the SME2 FP<->int conversion instructions | Richard Sandiford | 2023-03-30 | 2 | -749/+945
| | | | | | This patch adds the SME2 versions of the FP<->integer conversion instructions FCVT* and *CVTF. It also adds FP rounding instructions FRINT*, which share the same format.
* aarch64: Add the SME2 CLAMP instructions | Richard Sandiford | 2023-03-30 | 2 | -820/+892
| | | | | FCLAMP, SCLAMP and UCLAMP share the same format, although FCLAMP doesn't have a .B form.
* aarch64: Add the SME2 MOPA and MOPS instructions | Richard Sandiford | 2023-03-30 | 2 | -717/+789
| | | | [BSU]MOP[AS] share the same format.
* aarch64: Add the SME2 vertical dot-product instructions | Richard Sandiford | 2023-03-30 | 2 | -669/+789
    There are three instruction formats here:

    - BFVDOT + FVDOT
    - SVDOT + UVDOT
    - SUVDOT + USVDOT

    There are also 64-bit forms of SVDOT and UVDOT.
* aarch64: Add the SME2 dot-product instructions | Richard Sandiford | 2023-03-30 | 2 | -753/+1353
| | | | | | | BFDOT, FDOT and USDOT share the same instruction format. SDOT and UDOT share a different format. SUDOT does not have the multi vector x multi vector forms, since they would be redundant with USDOT.
* aarch64: Add the SME2 MLALL and MLSLL instructions | Richard Sandiford | 2023-03-30 | 6 | -851/+1599
| | | | | | | SMLALL, SMLSLL, UMLALL and UMLSLL have the same format. USMLALL and SUMLALL allow the same operand types as those instructions, except that SUMLALL does not have the multi-vector x multi-vector forms (which would be redundant with USMLALL).
* aarch64: Add the SME2 MLAL and MLSL instructions | Richard Sandiford | 2023-03-30 | 8 | -665/+1485
| | | | | | | The {BF,F,S,U}MLAL and {BF,F,S,U}MLSL instructions share the same encoding. They are the first instance of a ZA (as opposed to ZA tile) operand having a range of offsets. As with ZA tiles, the expected range size is encoded in the operand-specific data field.
* aarch64: Add the SME2 FMLA and FMLS instructions | Richard Sandiford | 2023-03-30 | 6 | -529/+753
* aarch64: Add the SME2 maximum/minimum instructions | Richard Sandiford | 2023-03-30 | 4 | -439/+979
| | | | | | This patch adds the SME2 multi-register forms of F{MAX,MIN}{,NM} and {S,U}{MAX,MIN}. SQDMULH, SRSHL and URSHL have the same form as SMAX etc., so the patch adds them too.
* aarch64: Add the SME2 ADD and SUB instructions | Richard Sandiford | 2023-03-30 | 8 | -487/+750
    Add support for the SME2 ADD, SUB, FADD and FSUB instructions. SUB and
    FSUB have the same form as ADD and FADD, except that ADD also has a
    2-operand accumulating form.

    The 64-bit ADD/SUB instructions require FEAT_SME_I16I64 and the 64-bit
    FADD/FSUB instructions require FEAT_SME_F64F64.

    These are the first instructions to have tied register list operands, as
    opposed to tied single registers.

    The parse_operands change prevents unsuffixed Z registers (width==-1) from
    being treated as though they had an Advanced SIMD-style suffix (.4s etc.).
    It means that:

        Error: expected element type rather than vector type at operand 2 --
        `add za\.s\[w8,0\],{z0-z1}'

    becomes:

        Error: missing type suffix at operand 2 -- `add za\.s\[w8,0\],{z0-z1}'
* aarch64: Add the SME2 ZT0 instructions | Richard Sandiford | 2023-03-30 | 8 | -397/+686
| | | | | | | | SME2 adds lookup table instructions for quantisation. They use a new lookup table register called ZT0. LUTI2 takes an unsuffixed SVE vector index of the form Zn[<imm>], which is the first time that this syntax has been used.
* aarch64: Add the SME2 predicate-related instructions | Richard Sandiford | 2023-03-30 | 10 | -821/+1270
    Implementation-wise, the main things to note here are:

    - the WHILE* instructions have forms that return a pair of predicate
      registers. This is the first time that we've had lists of predicate
      registers, and they wrap around after register 15 rather than after
      register 31.
    - the predicate-as-counter WHILE* instructions have a fourth operand that
      specifies the vector length. We can treat this as an enumeration, except
      that immediate values aren't allowed.
    - PEXT takes an unsuffixed predicate index of the form PN<n>[<imm>]. This
      is the first instance of a vector/predicate index having no suffix.
* aarch64: Add the SME2 multivector LD1 and ST1 instructions | Richard Sandiford | 2023-03-30 | 10 | -448/+2088
| | | | | | | | | | | | | | | SME2 adds LD1 and ST1 variants for lists of 2 and 4 registers. The registers can be consecutive or strided. In the strided case, 2-register lists have a stride of 8, starting at register x0xxx. 4-register lists have a stride of 4, starting at register x00xx. The instructions are predicated on a predicate-as-counter register in the range pn8-pn15. Although we already had register fields with upper bounds of 7 and 15, this is the first plain register operand to have a nonzero lower bound. The patch uses the operand-specific data field to record the minimum value, rather than having separate inserters and extractors for each lower bound. This in turn required adding an extra bit to the field.
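    The idea of one generic inserter/extractor pair parameterised by an
    operand-specific minimum register number might look roughly like this; the
    structures and field layout below are invented for illustration and do not
    match the real aarch64 opcode tables.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical operand description: 'lsb'/'width' locate the register
           field in the encoding, 'min_reg' is the operand-specific data that
           records the lowest register the field may encode.  */
        struct operand_field {
          unsigned lsb, width;
          unsigned min_reg;     /* e.g. 8 for a pn8-pn15 register operand */
        };

        static unsigned extract_reg(const struct operand_field *f, unsigned insn)
        {
          unsigned raw = (insn >> f->lsb) & ((1u << f->width) - 1);
          return raw + f->min_reg;        /* one extractor serves every bound */
        }

        static bool insert_reg(const struct operand_field *f, unsigned reg,
                               unsigned *insn)
        {
          if (reg < f->min_reg || reg >= f->min_reg + (1u << f->width))
            return false;                 /* register outside the allowed range */
          *insn |= (reg - f->min_reg) << f->lsb;
          return true;
        }

        int main(void)
        {
          /* A 3-bit field at bit 10 holding a register in the range 8-15.  */
          struct operand_field pn = { 10, 3, 8 };
          unsigned insn = 0;

          if (!insert_reg(&pn, 13, &insn))
            return 1;
          printf("encoded %#x, decoded p%u\n", insn, extract_reg(&pn, insn));
          return 0;
        }

    Keeping the lower bound in per-operand data, rather than in per-bound
    inserter/extractor functions, is the design choice the commit describes.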
* aarch64: Add the SME2 MOVA instructions | Richard Sandiford | 2023-03-30 | 10 | -312/+674
| | | | | | | | | | | | | | SME2 defines new MOVA instructions for moving multiple registers to and from ZA. As with SME, the instructions are also available through MOV aliases. One notable feature of these instructions (and many other SME2 instructions) is that some register lists must start at a multiple of the list's size. The patch uses the general error "start register out of range" when this constraint isn't met, rather than an error specifically about multiples. This ensures that the error is consistent between these simple consecutive lists and later strided lists, for which the requirements aren't a simple multiple.
* aarch64: Add support for predicate-as-counter registers | Richard Sandiford | 2023-03-30 | 6 | -1597/+1647
| | | | | | | | | | | | SME2 adds a new format for the existing SVE predicate registers: predicates as counters rather than predicates as masks. In assembly code, operands that interpret predicates as counters are written pn<N> rather than p<N>. This patch adds support for these registers and extends some existing instructions to support them. Since the new forms are just a programmer convenience, there's no need to make them more restrictive than the earlier predicate-as-mask forms.
* aarch64: Add support for vector offset ranges | Richard Sandiford | 2023-03-30 | 1 | -9/+48
    Some SME2 instructions operate on a range of consecutive ZA vectors. This
    is indicated by syntax such as:

        za[<Wv>, <imml>:<immh>]

    Like with the earlier vgx2 and vgx4 support, we get better error messages
    if the parser allows all ZA indices to have a range. We can then reject
    invalid cases during constraint checking.
* aarch64: Add support for vgx2 and vgx4 | Richard Sandiford | 2023-03-30 | 1 | -8/+41
    Many SME2 instructions operate on groups of 2 or 4 ZA vectors. This is
    indicated by adding a "vgx2" or "vgx4" group size to the ZA index. The
    group size is optional in assembly but preferred for disassembly.

    There is not a binary distinction between mnemonics that have group sizes
    and mnemonics that don't, nor between mnemonics that take vgx2 and
    mnemonics that take vgx4. We therefore get better error messages if we
    allow any ZA index to have a group size during parsing, and wait until
    constraint checking to reject invalid sizes.

    A quirk of the way errors are reported means that if an instruction is
    wrong both in its qualifiers and in its use of a group size, we'll print
    suggested alternative instructions that also have an incorrect group size.
    But that's a general property that also applies to things like
    out-of-range immediates. It's also not obviously the wrong thing to do.
    We need to be relatively confident that we're looking at the right opcode
    before reporting detailed operand-specific errors, so doing qualifier
    checking first seems reasonable.
* aarch64: Add _off4 suffix to AARCH64_OPND_SME_ZA_array | Richard Sandiford | 2023-03-30 | 3 | -7/+7
| | | | | | | SME2 adds various new fields that are similar to AARCH64_OPND_SME_ZA_array, but are distinguished by the size of their offset fields. This patch adds _off4 to the name of the field that we already have.
* aarch64: Add a _10 suffix to FLD_imm3 | Richard Sandiford | 2023-03-30 | 6 | -8/+8
| | | | | SME2 adds various new 3-bit immediate fields, so this patch adds an lsb position suffix to the name of the field that we already have.