path: root/compiler/nativeGen/X86/CodeGen.hs
Commit message (Author, Date; files changed, lines -deleted/+added)
* Per-thread allocation counters and limits (Simon Marlow, 2014-11-12; 1 file, -17/+54)

    This reverts commit f0fcc41d755876a1b02d1c7c79f57515059f6417.

    New changes: now works on 32-bit platforms too. I added some basic support
    for 64-bit subtraction and comparison operations to the x86 NCG.
* Implement optimized NCG `MO_Ctz W64` op for i386 (#9340) (Herbert Valerio Riedel, 2014-10-18; 1 file, -9/+32)

    Summary: This is an optimization to the CTZ primops introduced for #9340.
    Previously we called out to `hs_ctz64`, but we can actually generate better
    hand-tuned code while avoiding the FFI ccall.

    With this patch, the code

        {-# LANGUAGE MagicHash #-}
        module TestClz0 where
        import GHC.Prim
        ctz64 :: Word64# -> Word#
        ctz64 x = ctz64# x

    results in the following assembler generated by the NCG on i386:

        TestClz.ctz64_info:
            movl (%ebp),%eax
            movl 4(%ebp),%ecx
            movl %ecx,%edx
            orl %eax,%edx
            movl $64,%edx
            je _nAO
            bsf %ecx,%ecx
            addl $32,%ecx
            bsf %eax,%eax
            cmovne %eax,%ecx
            movl %ecx,%edx
        _nAO:
            movl %edx,%esi
            addl $8,%ebp
            jmp *(%ebp)

    For comparison, here's what LLVM 3.4 currently generates:

        000000fc <TestClzz_ctzz64_info>:
          fc:  0f bc 45 04       bsf    0x4(%ebp),%eax
         100:  b9 20 00 00 00    mov    $0x20,%ecx
         105:  0f 45 c8          cmovne %eax,%ecx
         108:  83 c1 20          add    $0x20,%ecx
         10b:  8b 45 00          mov    0x0(%ebp),%eax
         10e:  8b 55 08          mov    0x8(%ebp),%edx
         111:  0f bc f0          bsf    %eax,%esi
         114:  85 c0             test   %eax,%eax
         116:  0f 44 f1          cmove  %ecx,%esi
         119:  83 c5 08          add    $0x8,%ebp
         11c:  ff e2             jmp    *%edx

    Reviewed By: austin
    Auditors: simonmar
    Differential Revision: https://phabricator.haskell.org/D163
* Add MO_AddIntC, MO_SubIntC MachOps and implement in X86 backend (Reid Barton, 2014-08-23; 1 file, -0/+20)

    Summary: These MachOps are used by addIntC# and subIntC#, which in turn are
    used in integer-gmp when adding or subtracting small Integers. The
    following benchmark shows a ~6% speedup after this commit on x86_64
    (building GHC with BuildFlavour=perf).

        {-# LANGUAGE MagicHash #-}
        import GHC.Exts
        import Criterion.Main

        count :: Int -> Integer
        count (I# n#) = go n# 0
          where go :: Int# -> Integer -> Integer
                go 0# acc = acc
                go n# acc = go (n# -# 1#) $! acc + 1

        main = defaultMain [bgroup "count" [bench "100" $ whnf count 100]]

    Differential Revision: https://phabricator.haskell.org/D140
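    For reference, a minimal sketch of how addIntC# is reached from ordinary
    Haskell (not part of this commit; the wrapper is hypothetical, the primop
    name and types are those in GHC.Exts, where the second result component is
    non-zero when the signed addition overflowed):

        {-# LANGUAGE MagicHash, UnboxedTuples #-}
        module AddIntCSketch where

        import GHC.Exts

        -- Add two Ints, also reporting whether the machine addition overflowed.
        addWithOverflow :: Int -> Int -> (Int, Bool)
        addWithOverflow (I# x) (I# y) =
          case addIntC# x y of
            (# r, c #) -> (I# r, isTrue# (c /=# 0#))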
* Implement new CLZ and CTZ primops (re #9340) (Herbert Valerio Riedel, 2014-08-14; 1 file, -0/+65)

    This implements the new primops

        clz#, clz32#, clz64#,
        ctz#, ctz32#, ctz64#

    which provide efficient implementations of the popular count-leading-zeros
    and count-trailing-zeros operations respectively (see testcase for a pure
    Haskell reference implementation).

    On x86, the NCG as well as LLVM generates code based on the BSF/BSR
    instructions (which need extra logic to make the 0-case well-defined).

    Test Plan: validate and successful tests on i686 and amd64

    Reviewers: rwbarton, simonmar, ezyang, austin
    Subscribers: simonmar, relrod, ezyang, carter
    Differential Revision: https://phabricator.haskell.org/D144
    GHC Trac Issues: #9340
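    A minimal boxing sketch for the word-sized variants (not part of this
    commit; the wrapper functions are hypothetical, and the primop names are
    assumed to be those exported via GHC.Exts in GHC 7.10 and later):

        {-# LANGUAGE MagicHash #-}
        module ClzCtzSketch where

        import GHC.Exts (Word (W#), clz#, ctz#)

        -- Count leading zeros of a machine word.
        clz :: Word -> Word
        clz (W# w) = W# (clz# w)

        -- Count trailing zeros of a machine word.
        ctz :: Word -> Word
        ctz (W# w) = W# (ctz# w)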
* x86: zero extend the result of 16-bit popcnt instructions (#9435) (Reid Barton, 2014-08-12; 1 file, -4/+8)

    Summary: The 'popcnt r16, r/m16' instruction only writes the low 16 bits of
    the destination register, so we have to zero-extend the result to a full
    word as popCnt16# is supposed to return a Word#.

    For popCnt8# we could instead zero-extend the input to 32 bits and then do
    a 32-bit popcnt, and not have to zero-extend the result. LLVM produces the
    16-bit popcnt sequence with two zero extensions, though, and who am I to
    argue?

    Test Plan:
    - ran "make TEST=cgrun071 EXTRA_HC_OPTS=-msse42"
    - then ran again adding "WAY=optasm", and verified that the popcnt
      sequences we generate match the ones produced by LLVM for its
      @llvm.ctpop.* intrinsics

    Reviewers: austin, hvr, tibbe
    Reviewed By: austin, hvr, tibbe
    Subscribers: phaskell, hvr, simonmar, relrod, ezyang, carter
    Differential Revision: https://phabricator.haskell.org/D147
    GHC Trac Issues: #9435
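    A minimal sketch of the primop this affects (not part of the commit; the
    wrapper is hypothetical, popCnt16# is exported via GHC.Exts):

        {-# LANGUAGE MagicHash #-}
        module PopCnt16Sketch where

        import GHC.Exts (Word (W#), popCnt16#)

        -- Population count of the low 16 bits; the result must be a full
        -- Word#, which is why the generated code has to zero-extend.
        popCount16 :: Word -> Word
        popCount16 (W# w) = W# (popCnt16# w)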
* x86: Always generate add instruction in MO_Add2 (#9013) (Reid Barton, 2014-08-11; 1 file, -2/+3)

    Test Plan:
    - ran validate
    - ran T9013 test with all ways
    - ran CarryOverflow test with all ways, for good measure

    Reviewers: austin, simonmar
    Reviewed By: simonmar
    Differential Revision: https://phabricator.haskell.org/D137
* Eliminate some code duplication in x86 backend (genCCall32/64) (Reid Barton, 2014-08-10; 1 file, -101/+13)

    Summary: No functional changes except in panic messages. These functions
    were identical except for
    - x87 operations in genCCall32
    - the fallback to genCCall32'/64'
    - "32" vs "64" in panic messages (one case was wrong!)
    - minor syntactic or otherwise non-functional differences.

    Test Plan: Ran "validate --no-dph --slow" before and after the change. The
    only differences were two tests that failed before the change but not
    after; further investigation revealed that those tests are in fact erratic.

    Reviewers: simonmar, austin
    Reviewed By: austin
    Subscribers: phaskell, simonmar, relrod, ezyang, carter
    Differential Revision: https://phabricator.haskell.org/D139
* Add missing memory fence to atomicWriteIntArray# (Johan Tibell, 2014-07-23; 1 file, -1/+2)
* X86 codegen: make LOCK a real instruction prefix (Johan Tibell, 2014-07-23; 1 file, -8/+4)

    Before, LOCK was a separate instruction, and this led to the register
    allocator separating it from the instruction it was supposed to be a prefix
    of, leading to illegal assembly such as

        lock
        mov

    Fix contributed by PÁLI Gábor János.
* Rename PackageId to PackageKey, distinguishing it from Cabal's PackageId. (Edward Z. Yang, 2014-07-21; 1 file, -3/+3)

    Summary: Previously, both Cabal and GHC defined the type PackageId, and we
    expected them to be roughly equivalent (but represented differently). This
    refactoring separates these two notions.

    A package ID is a user-visible identifier; it's the thing you write in a
    Cabal file, e.g. containers-0.9. The components of this ID are semantically
    meaningful, and decompose into a package name and a package version.

    A package key is an opaque identifier used by GHC to generate linking
    symbols. Presently, it just consists of a package name and a package
    version, but pursuant to #9265 we are planning to extend it to record other
    information. Within a single executable, it uniquely identifies a package.
    It is *not* an InstalledPackageId, as the choice of a package key affects
    the ABI of a package (whereas an InstalledPackageId is computed after
    compilation.)

    Cabal computes a package key for the package and passes it to GHC using
    -package-name (now *extremely* misnamed). As an added bonus, we don't have
    to worry about shadowing anymore. As a follow on, we should introduce
    -current-package-key having the same role as -package-name, and deprecate
    the old flag. This commit is just renaming.

    The haddock submodule needed to be updated.

    Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>

    Test Plan: validate
    Reviewers: simonpj, simonmar, hvr, austin
    Subscribers: simonmar, relrod, carter
    Differential Revision: https://phabricator.haskell.org/D79

    Conflicts:
        compiler/main/HscTypes.lhs
        compiler/main/Packages.lhs
        utils/haddock
* Re-add more primops for atomic ops on byte arrays (Johan Tibell, 2014-06-30; 1 file, -0/+110)

    This is the second attempt to add this functionality. The first attempt was
    reverted in 950fcae46a82569e7cd1fba1637a23b419e00ecd, due to register
    allocator failure on x86. Given how the register allocator currently works,
    we don't have enough registers on x86 to support cmpxchg using complicated
    addressing modes. Instead we fall back to a simpler addressing mode on x86.

    Adds the following primops:
    * atomicReadIntArray#
    * atomicWriteIntArray#
    * fetchSubIntArray#
    * fetchOrIntArray#
    * fetchXorIntArray#
    * fetchAndIntArray#

    Makes these pre-existing out-of-line primops inline:
    * fetchAddIntArray#
    * casIntArray#
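    A minimal usage sketch for two of the new primops (not part of this commit;
    the IO wrappers are hypothetical, primop signatures assumed to match
    GHC.Prim as of GHC 7.10):

        {-# LANGUAGE MagicHash, UnboxedTuples #-}
        module AtomicArraySketch where

        import GHC.Exts
        import GHC.IO (IO (IO))

        -- Atomically read the Int at the given index of a mutable byte array.
        atomicReadInt :: MutableByteArray# RealWorld -> Int -> IO Int
        atomicReadInt mba (I# ix) = IO $ \s ->
          case atomicReadIntArray# mba ix s of
            (# s', n #) -> (# s', I# n #)

        -- Atomically write an Int at the given index (now with a full fence,
        -- per the "Add missing memory fence" commit above).
        atomicWriteInt :: MutableByteArray# RealWorld -> Int -> Int -> IO ()
        atomicWriteInt mba (I# ix) (I# n) = IO $ \s ->
          (# atomicWriteIntArray# mba ix n s, () #)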
* Revert "Add more primops for atomic ops on byte arrays"Johan Tibell2014-06-261-92/+0
| | | | | | | | This commit caused the register allocator to fail on i386. This reverts commit d8abf85f8ca176854e9d5d0b12371c4bc402aac3 and 04dd7cb3423f1940242fdfe2ea2e3b8abd68a177 (the second being a fix to the first).
* Add more primops for atomic ops on byte arrays (Johan Tibell, 2014-06-24; 1 file, -0/+92)

    Summary: Add more primops for atomic ops on byte arrays

    Adds the following primops:
    * atomicReadIntArray#
    * atomicWriteIntArray#
    * fetchSubIntArray#
    * fetchOrIntArray#
    * fetchXorIntArray#
    * fetchAndIntArray#

    Makes these pre-existing out-of-line primops inline:
    * fetchAddIntArray#
    * casIntArray#
* Make better use of the x86 addressing mode (Johan Tibell, 2014-06-10; 1 file, -0/+9)

    We now emit

        movq %rdi,16(%r14,%rsi,8)

    instead of

        leaq 16(%r14),%rax
        movq %rdi,(%rax,%rsi,8)

    This helps e.g. byte array indexing.
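    A hedged illustration of the kind of code that benefits (not part of this
    commit; the wrapper is hypothetical): an indexed write into a byte array,
    which can now use a single scaled-index store instead of a separate address
    computation.

        {-# LANGUAGE MagicHash, UnboxedTuples #-}
        module AddrModeSketch where

        import GHC.Exts
        import GHC.IO (IO (IO))

        -- Write a Word at a word-sized index into a mutable byte array.
        writeWordAt :: MutableByteArray# RealWorld -> Int -> Word -> IO ()
        writeWordAt mba (I# ix) (W# w) = IO $ \s ->
          (# writeWordArray# mba ix w s, () #)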
* Add LANGUAGE pragmas to compiler/ source files (Herbert Valerio Riedel, 2014-05-15; 1 file, -1/+2)

    In some cases, the layout of the LANGUAGE/OPTIONS_GHC lines has been
    reorganized, while following the convention, to

    - place `{-# LANGUAGE #-}` pragmas at the top of the source file, before
      any `{-# OPTIONS_GHC #-}`-lines.

    - Moreover, if the list of language extensions fits into a single
      `{-# LANGUAGE ... #-}`-line (shorter than 80 characters), keep it on one
      line. Otherwise split into `{-# LANGUAGE ... #-}`-lines for each
      individual language extension.

    In both cases, try to keep the enumeration alphabetically ordered. (The
    latter layout is preferable as it's more diff-friendly.)

    While at it, this also replaces obsolete `{-# OPTIONS ... #-}` pragma
    occurrences by `{-# OPTIONS_GHC ... #-}` pragmas.
* Add flags to control memcpy and memset inlining (Johan Tibell, 2014-03-26; 1 file, -26/+30)

    This adds -fmax-inline-memcpy-insns and -fmax-inline-memset-insns. These
    flags control when we inline calls to memcpy/memset with statically known
    arguments. The flag naming style is taken from GCC and the same limit is
    used by both GCC and LLVM.
* Add support for prefetch with locality levels. (Austin Seipp, 2013-10-01; 1 file, -2/+21)

    This patch adds support for several new primitive operations which support
    using processor-specific instructions to help guide data and cache locality
    decisions. We have levels ranging from [0..3].

    For LLVM, we generate llvm.prefetch intrinsics at the proper locality level
    (similar to GCC.) For x86 we generate prefetch{NTA, t2, t1, t0}
    instructions. On SPARC and PowerPC, the locality levels are ignored.

    This closes #8256.

    Authored-by: Carter Tazio Schonwald <carter.schonwald@gmail.com>
    Signed-off-by: Austin Seipp <austin@well-typed.com>
* SIMD primops are now generated using schemas that are polymorphic in width and element type (Geoffrey Mainland, 2013-09-22; 1 file, -0/+2)

    SIMD primops are now polymorphic in vector size and element type, but only
    internally to the compiler. More specifically, utils/genprimopcode has been
    extended so that it "knows" about SIMD vectors. This allows us to, for
    example, write a single definition for the "add two vectors" primop in
    primops.txt.pp and have it instantiated at many vector types.

    This generates a primop in GHC.Prim for each vector type at which "add two
    vectors" is instantiated, but only one data constructor for the PrimOp data
    type, so the code generator is much, much simpler.
* Add support for byte endian swapping for Word 16/32/64. (Austin Seipp, 2013-07-17; 1 file, -0/+24)

    * Exposes bSwap{,16,32,64}# primops
    * Add a new machop: MO_BSwap
    * Use a Stg implementation (hs_bswap{16,32,64}) as the fallback
      implementation in the NCG.
    * Generate bswap in the X86 NCG for 32 and 64 bits, and for 16 bits
      bswap+shr instead of using xchg.
    * Generate llvm.bswap intrinsics in the llvm codegen.

    Authored-by: Vincent Hanquez <tab@snarc.org>
    Signed-off-by: Austin Seipp <aseipp@pobox.com>
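    The user-visible wrappers for these operations ended up in Data.Word
    (base >= 4.7); a minimal usage sketch, not part of this commit:

        module BSwapSketch where

        import Data.Word (Word32, byteSwap32)

        -- On x86 this should compile down to a single bswap instruction.
        swapped :: Word32
        swapped = byteSwap32 0x11223344  -- == 0x44332211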
* Fix many ASSERT uses under Clang. (Austin Seipp, 2013-06-18; 1 file, -1/+1)

    Clang doesn't like whitespace between a macro and its arguments.

    Signed-off-by: Austin Seipp <aseipp@pobox.com>
* Revert "Add support for byte endian swapping for Word 16/32/64."Simon Peyton Jones2013-06-111-14/+0
| | | | This reverts commit 1c5b0511a89488f5280523569d45ee61c0d09ffa.
* Add support for byte endian swapping for Word 16/32/64. (Ian Lynagh, 2013-06-09; 1 file, -0/+14)

    * Exposes bSwap{,16,32,64}# primops
    * Add a new machop: MO_BSwap
    * Use a Stg implementation (hs_bswap{16,32,64}) as the fallback
      implementation in the NCG.
    * Generate bswap in the X86 NCG for 32 and 64 bits, and for 16 bits
      bswap+shr instead of using xchg.
    * Generate llvm.bswap intrinsics in the llvm codegen.

    Patch from Vincent Hanquez.
* Refactor cmmMakeDynamicReference (Ian Lynagh, 2013-05-13; 1 file, -5/+4)

    It now has its own class, and the addImport function is defined in that
    class, rather than needing to be passed as an argument.
* x86: promote arguments to C functions according to the ABI (#7383) (Simon Marlow, 2013-02-23; 1 file, -6/+14)

    I don't think the x86-64 version is quite right, but this ought to be
    enough to pass cgrun071.

    This code is terrible and needs a complete refactor. There's a lot of
    duplication, and we ought to be specifying the ABI in a much more abstract
    way (like LLVM).
* Add prefetch primops. (Geoffrey Mainland, 2013-02-01; 1 file, -0/+3)
* Add support for passing SSE vectors in registers. (Geoffrey Mainland, 2013-02-01; 1 file, -41/+47)

    This patch adds support for 6 XMM registers on x86-64 which overlap with
    the F and D registers and may hold 128-bit wide SIMD vectors. Because there
    is not a good way to attach type information to STG registers, we
    aggressively bitcast in the LLVM back-end.
* Add the Int32X4# primitive type and associated primops. (Paul Monday, 2013-02-01; 1 file, -0/+18)
* Add the Float32X4# primitive type and associated primops. (Geoffrey Mainland, 2013-02-01; 1 file, -1/+35)

    This patch lays the groundwork needed for primop support for SIMD vectors.
    In addition to the groundwork, we add support for the FloatX4# primitive
    type and associated primops.

    * Add the FloatX4# primitive type and associated primops.
    * Add CodeGen support for Float vectors.
    * Compile vector operations to LLVM vector operations in the LLVM code
      generator.
    * Make the x86 native backend fail gracefully when encountering vector
      primops.
    * Only generate primop wrappers for vector primops when using LLVM.
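    A hedged usage sketch (not part of this commit; the function is
    hypothetical, and the primop names are those of later released GHCs; the
    vector primops are only compilable with the LLVM backend, -fllvm):

        {-# LANGUAGE MagicHash #-}
        module SimdSketch where

        import GHC.Exts

        -- Broadcast two scalars into 4-wide vectors and add them lane-wise.
        addBroadcast :: Float# -> Float# -> FloatX4#
        addBroadcast x y =
          plusFloatX4# (broadcastFloatX4# x) (broadcastFloatX4# y)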
* typos (Gabor Greif, 2013-01-30; 1 file, -1/+1)
* Add preprocessor defines when SSE is enabled (Johan Tibell, 2013-01-10; 1 file, -10/+2)

    This will add the following preprocessor defines when Haskell source files
    are compiled:

    * __SSE__    - If any version of SSE is enabled
    * __SSE2__   - If SSE2 or greater is enabled
    * __SSE4_2__ - If SSE4.2 is enabled

    Note that SSE2 is enabled by default on x86-64.
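    A minimal sketch of how Haskell source can use these defines (not part of
    this commit; module and binding names are hypothetical):

        {-# LANGUAGE CPP #-}
        module SseSketch where

        -- Pick a code path at compile time depending on whether SSE2 is
        -- available to the compiled module.
        #if defined(__SSE2__)
        hasSse2 :: Bool
        hasSse2 = True
        #else
        hasSse2 :: Bool
        hasSse2 = False
        #endif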
* Implement word2Float# and word2Double# (Johan Tibell, 2012-12-13; 1 file, -0/+13)
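    A minimal boxing sketch for the two new primops (not part of this commit;
    the wrappers are hypothetical, names exported via GHC.Exts):

        {-# LANGUAGE MagicHash #-}
        module Word2FloatSketch where

        import GHC.Exts (Word (W#), Float (F#), Double (D#),
                         word2Float#, word2Double#)

        -- Convert a machine word to Float/Double using the new primops.
        wordToFloat :: Word -> Float
        wordToFloat (W# w) = F# (word2Float# w)

        wordToDouble :: Word -> Double
        wordToDouble (W# w) = D# (word2Double# w)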
* Remove OldCmm, convert backends to consume new Cmm (Simon Marlow, 2012-11-12; 1 file, -107/+113)

    This removes the OldCmm data type and the CmmCvt pass that converts new Cmm
    to OldCmm. The backends (NCGs, LLVM and C) have all been converted to
    consume new Cmm.

    The main difference between the two data types is that conditional branches
    in new Cmm have both true/false successors, whereas in OldCmm the false
    case was a fallthrough. To generate slightly better code we occasionally
    need to invert a conditional to ensure that the branch-not-taken becomes a
    fallthrough; this was previously done in CmmCvt, and it is now done in
    CmmContFlowOpt.

    We could go further and use the Hoopl Block representation for native code,
    which would mean that we could use Hoopl's postorderDfs and analyses for
    native code, but for now I've left it as is, using the old ListGraph
    representation for native code.
* Fix typos (Ian Lynagh, 2012-11-01; 1 file, -2/+2)
* Attach global register liveness info to Cmm procedures. (Geoffrey Mainland, 2012-10-30; 1 file, -2/+2)

    All Cmm procedures now include the set of global registers that are live on
    procedure entry, i.e., the global registers used to pass arguments to the
    procedure. Only global registers that are used to pass arguments are
    included in this list.
* Cmm jumps always have live register information. (Geoffrey Mainland, 2012-10-30; 1 file, -3/+2)

    Jumps now always have live register information attached, so drop the
    Maybes.
* Some alpha renaming (Ian Lynagh, 2012-10-16; 1 file, -4/+4)

    Mostly d -> g (matching DynFlag -> GeneralFlag). Also renamed if* to when*,
    matching the Haskell if/when names.
* fix panic message typo (Simon Marlow, 2012-09-25; 1 file, -1/+1)
* whitespace and panic message fixup (Simon Marlow, 2012-09-24; 1 file, -8/+8)
* Generate better code for "if (3 <= x) then ..." (Simon Marlow, 2012-09-24; 1 file, -1/+13)
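    A hedged illustration of the source shape named in the subject (not part of
    this commit; the function is hypothetical): a conditional whose comparison
    has the literal on the left.

        module CmpSketch where

        -- "if 3 <= x" is the pattern this commit improves code for.
        atLeastThree :: Int -> Int
        atLeastThree x = if 3 <= x then x else 3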
* Move wORD_SIZE into platformConstants (Ian Lynagh, 2012-09-16; 1 file, -12/+11)
* Move more constants into platformConstants (Ian Lynagh, 2012-09-14; 1 file, -5/+5)
* Pass DynFlags down to wordWidth (Ian Lynagh, 2012-09-12; 1 file, -10/+10)
* Pass DynFlags down to bWord (Ian Lynagh, 2012-09-12; 1 file, -66/+72)

    I've switched to passing DynFlags rather than Platform, as (a) it's simpler
    to not have to extract targetPlatform in so many places, and (b) it may be
    useful to have DynFlags around in future.
* Move more code into codeGen/CodeGen/Platform.hs (Ian Lynagh, 2012-08-28; 1 file, -68/+89)

    HaskellMachRegs.h is no longer included in anything under compiler/

    Also, includes/CodeGen.Platform.hs now includes "stg/MachRegs.h" rather
    than <stg/MachRegs.h>, which means that we always get the file from the
    tree, rather than from the bootstrapping compiler.
* More CPP removal in nativeGen/X86/Regs.hs (Ian Lynagh, 2012-08-22; 1 file, -2/+2)
* Remove some CPP in nativeGen/X86/Regs.hs (Ian Lynagh, 2012-08-22; 1 file, -6/+6)
* Make -fPIC a dynamic flag (Ian Lynagh, 2012-07-16; 1 file, -11/+13)

    Hopefully I've kept the logic the same. We now generate warnings if the
    user passes -fno-PIC but we have to ignore it (e.g. because they're on OS X
    amd64).
* Allow the register allocator access to argument regs (R1.., F1.., etc.) (Simon Marlow, 2012-07-06; 1 file, -11/+15)

    This was made possible by the recent change to codeGen to attach the live
    GlobalRegs to every CmmJump, and we'll be relying on it quite heavily in
    the new code generator too. What this means essentially is that when we see

        x = R1

    the register allocator will automatically assign x to R1 and generate no
    code at all (also known as "coalescing"). It wasn't possible before because
    the register allocator had to assume that R1 was always live, because it
    didn't have access to accurate liveness information.
* Remove lots of commented out 'in' keywords (Ian Lynagh, 2012-06-13; 1 file, -30/+0)
* Remove PlatformOutputable (Ian Lynagh, 2012-06-13; 1 file, -8/+4)

    We can now get the Platform from the DynFlags inside an SDoc, so we no
    longer need to pass the Platform in.