path: root/compiler/nativeGen/X86/CodeGen.hs
Commit message | Author | Age | Files | Lines
...
* Change how macros like ASSERT are defined | Ian Lynagh | 2012-06-05 | 1 | -0/+1
  By using Haskell's debugIsOn rather than CPP's "#ifdef DEBUG", we don't need to kludge things to keep the warning checker happy etc.
* Add an X86/amd64 implementation for quotRemWord2 | Ian Lynagh | 2012-04-21 | 1 | -20/+50
* Add a quotRemWord2 primop | Ian Lynagh | 2012-04-21 | 1 | -6/+7
  It allows you to do (high, low) `quotRem` d provided high < d. Currently only has an inefficient fallback implementation.
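  A minimal usage sketch (not part of the commit), assuming the primop is exposed from GHC.Exts as quotRemWord2# and takes the dividend's high word, its low word and the divisor, in that order:

      {-# LANGUAGE MagicHash, UnboxedTuples #-}
      import GHC.Exts (Word (W#), quotRemWord2#)

      -- Only defined when the high word is smaller than the divisor, as noted above.
      quotRemWord2 :: Word -> Word -> Word -> (Word, Word)
      quotRemWord2 (W# hi) (W# lo) (W# d) =
          case quotRemWord2# hi lo d of
              (# q, r #) -> (W# q, W# r)     -- (quotient, remainder)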
* Implement the Adjustor for Win64 | Ian Lynagh | 2012-03-21 | 1 | -2/+6
* Fixes for the calling convention on Win64 | Ian Lynagh | 2012-03-21 | 1 | -7/+42
  In particular, fixes for FP arguments
* Rename allArgRegs to allIntArgRegs | Ian Lynagh | 2012-03-21 | 1 | -2/+2
* Fix for Win64 codegen | Ian Lynagh | 2012-03-20 | 1 | -7/+22
  We need to leave stack space for arguments that we are passing in registers.
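  A hedged sketch (not GHC's actual code) of the Win64 rule behind that fix: the caller must reserve stack slots even for the up-to-four arguments that travel in registers, the so-called shadow space, plus slots for any further stack arguments. The helper name is invented here.

      -- Bytes of argument stack space a Win64 caller reserves, assuming 8-byte slots.
      win64ArgStackBytes :: Int -> Int
      win64ArgStackBytes nArgs = 8 * max 4 nArgs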
* Fix the unregisterised build; fixes #5901 | Ian Lynagh | 2012-02-27 | 1 | -4/+4
* Add x86 implementations of the quotRem, Mul2 and Add2 primops | Ian Lynagh | 2012-02-24 | 1 | -2/+59
* Implement 2-word-multiply for x86_64 | Ian Lynagh | 2012-02-24 | 1 | -0/+15
* Add a 2-word-multiply operator | Ian Lynagh | 2012-02-24 | 1 | -0/+1
  Currently no NCGs support it
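  As a hedged specification of what the new operator computes (an illustration assuming a 64-bit word, not the NCG code): the result is the pair of high and low words of the full double-width product.

      import Data.Bits (shiftR)

      -- x * y == toInteger hi * 2^64 + toInteger lo for the returned (hi, lo).
      timesWord2Spec :: Word -> Word -> (Word, Word)
      timesWord2Spec x y = (fromInteger (p `shiftR` 64), fromInteger p)
        where p = toInteger x * toInteger y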
* Add x86_64 support for the add-with-carry op | Ian Lynagh | 2012-02-23 | 1 | -0/+13
* Add a Word add-with-carry primop | Ian Lynagh | 2012-02-23 | 1 | -19/+17
  No special-casing in any NCGs yet
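  A hedged specification of the operation (illustration only; the primop's exact name and result order are not given above): add two words and report whether the addition wrapped.

      -- Returns (carry, sum); Word addition wraps, so overflow shows up as the
      -- sum being smaller than either operand.
      addWithCarrySpec :: Word -> Word -> (Word, Word)
      addWithCarrySpec x y = (if s < x then 1 else 0, s)
        where s = x + y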
* Call expandCallishMachOp in the x86_64 codegen too | Ian Lynagh | 2012-02-23 | 1 | -0/+4
  Currently it does nothing, as x86_64 supports all the callishMachOps that expandCallishMachOp expands, but it might be needed in the future.
* Add a primop for unsigned quotRem; part of #5598 | Ian Lynagh | 2012-02-17 | 1 | -15/+23
  Only amd64 has an efficient implementation currently.
* Small refactor | Ian Lynagh | 2012-02-17 | 1 | -87/+92
  Moved the default case of genCCall64 out into a separate function
* Define a quotRem CallishMachOp; fixes #5598 | Ian Lynagh | 2012-02-14 | 1 | -4/+27
  This means we no longer do a division twice when we are using quotRem (on platforms on which the op is supported; currently only amd64).
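  To illustrate the benefit (hedged; the boxed wrapper and the assumption that the primop is exposed as quotRemWord# from GHC.Exts are mine): both components of the pair now come from a single division instruction on amd64, rather than one division for quot and another for rem.

      {-# LANGUAGE MagicHash, UnboxedTuples #-}
      import GHC.Exts (Word (W#), quotRemWord#)

      quotRemW :: Word -> Word -> (Word, Word)
      quotRemW (W# n) (W# d) =
          case quotRemWord# n d of
              (# q, r #) -> (W# q, W# r)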
* Track STG live register information for use in LLVM | David Terei | 2012-01-09 | 1 | -1/+1
  CmmJump statements now carry a list of the STG registers that are live at that jump site. This is used by the LLVM backend so it can avoid unnecessarily passing around dead registers, improving performance. This gives us the framework to finally fix trac #4308.
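  A hedged, self-contained sketch of the shape of that change (the types are simplified stand-ins, not GHC's actual Cmm definitions):

      type CmmExprSketch = String                    -- stands in for the real CmmExpr
      data GlobalRegSketch = BaseReg | Sp | Hp | R1  -- a few of the STG global registers

      -- A jump now records which STG registers are live at the jump site, so the
      -- LLVM backend can avoid passing dead ones.
      data CmmStmtSketch
          = CmmJumpSketch CmmExprSketch [GlobalRegSketch]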
* Remove unused arg field of CmmReturn | David Terei | 2012-01-05 | 1 | -1/+1
* Remove unused argument field on CmmJump | David Terei | 2012-01-05 | 1 | -1/+1
* small refactoring | Simon Marlow | 2012-01-05 | 1 | -2/+3
* We must emit DELTA pseudo-instructions when moving %esp (#5747) | Simon Marlow | 2012-01-05 | 1 | -1/+3
* Make getDynFlags* functions use HasDynFlags/getDynFlags too | Ian Lynagh | 2011-12-19 | 1 | -13/+13
* Get rid of the "safety" field of CmmCall (OldCmm) | Simon Marlow | 2011-11-29 | 1 | -2/+2
  This field was doing nothing. I think it originally appeared in a very old incarnation of the new code generator.
* Explicitly handle unsupported Cmm prim ops. | David Terei | 2011-11-22 | 1 | -2/+4
* Better documentation for stack alignment design | David Terei | 2011-11-17 | 1 | -25/+22
* Fix #4211: No need to fixup stack using mangler on OSX | David Terei | 2011-11-17 | 1 | -3/+3
  We now manage the stack correctly on both x86_64 and i386, keeping the stack aligned at (16n bytes - word size) on function entry and at (16n bytes) on function calls. This gives us compatibility with LLVM and GCC.
* Ignore stdcall c-call in native codegen on x86_64 | David M Peixotto | 2011-11-01 | 1 | -2/+3
  The stdcall calling convention is not supported on x86_64. When an ffi import requests stdcall, a warning is issued as desired by #3336. However, the native codegen was still generating code that expected the callee to clean up the stack arguments when calling a C function that requests stdcall. This patch changes the codegen to actually use the ccall calling convention as intended. Signed-off-by: David Terei <davidterei@gmail.com>
* Change stack alignment to 16+8 bytes in STG code | David M Peixotto | 2011-11-01 | 1 | -7/+9
  This patch changes the STG code so that %rsp is aligned to a 16-byte boundary + 8. This is the alignment required by the x86_64 ABI on entry to a function. Previously we kept %rsp aligned to a 16-byte boundary, but this was causing problems for the LLVM backend (see #4211). Since the stack is now kept 16+8 byte aligned in STG land on x86_64, we no longer need to mangle the stack manipulations with the LLVM stack mangler. This patch only modifies the alignment for x86_64 backends. Signed-off-by: David Terei <davidterei@gmail.com>
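  A small arithmetic illustration of the invariant described above (an editorial sketch, not GHC code), assuming 8-byte words: the x86_64 ABI wants %rsp at a 16-byte boundary immediately before a CALL, and the CALL pushes an 8-byte return address, so on entry to any function %rsp sits at 16n + 8. Keeping %rsp at 16n + 8 throughout STG code therefore matches what foreign code and LLVM expect.

      -- The stack grows downwards, so a CALL subtracts 8 from %rsp.
      rspAtCalleeEntry :: Integer -> Integer
      rspAtCalleeEntry rspBeforeCall = rspBeforeCall - 8

      -- e.g. rspAtCalleeEntry 160 == 152, and 152 `mod` 16 == 8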
* Eliminate all uses of IF_ARCH_i386, and remove the definition | Ian Lynagh | 2011-10-23 | 1 | -5/+10
* More CPP removal | Ian Lynagh | 2011-10-23 | 1 | -33/+35
* Tweak a comment | Ian Lynagh | 2011-10-14 | 1 | -2/+1
* More CPP removal: pprDynamicLinkerAsmLabel in CLabel | Ian Lynagh | 2011-10-02 | 1 | -4/+8
  And some knock-on changes
* Start de-CPPing X86.Regs | Ian Lynagh | 2011-08-30 | 1 | -2/+4
* A little more CPP removal | Ian Lynagh | 2011-08-30 | 1 | -41/+60
* Make popCnt# primop work with dynamic compilation | Johan Tibell | 2011-08-25 | 1 | -4/+10
* Renaming only | Simon Peyton Jones | 2011-08-25 | 1 | -5/+5
  CmmTop -> CmmDecl
  CmmPgm -> CmmGroup
* Add popCnt# primop | Johan Tibell | 2011-08-16 | 1 | -1/+29
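  A hedged usage sketch of the new primop (assuming it is exposed from GHC.Exts as popCnt# with type Word# -> Word#): it returns the number of set bits in a word.

      {-# LANGUAGE MagicHash #-}
      import GHC.Exts (Word (W#), popCnt#)

      popCountWord :: Word -> Word
      popCountWord (W# w) = W# (popCnt# w)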
* Fix an x86 code generation bug (#5393) | Simon Marlow | 2011-08-09 | 1 | -2/+2
  In fact, there were two bugs in X86.CodeGen.getNonClobberedOperand: two code fragments were the wrong way around, and we were using the wrong size on an instruction (32 bits instead of the word size). This bit of the code generator must never have worked!
* one more instance of the 64-bit constant bug I noticed | Simon Marlow | 2011-07-21 | 1 | -1/+1
* Fix a bug in the code generation for 64-bit literals on 32-bit x86 | Simon Marlow | 2011-07-20 | 1 | -1/+1
  We were accidentally zeroing out the high 32 bits. This bug must have been here for ages; it was only just exposed by the new Typeable code, which uses two 64-bit values to store a 128-bit hash and so exercises the code generation for 64-bit stuff.
* Split the X86 genCCall function up | Ian Lynagh | 2011-07-19 | 1 | -332/+346
  Just a small refactoring. Makes it a little less intimidating.
* Small refactoring | Ian Lynagh | 2011-07-15 | 1 | -4/+3
* More work towards cross-compilation | Ian Lynagh | 2011-07-15 | 1 | -3/+4
  There's now a variant of the Outputable class that knows what platform we're targeting:

      class PlatformOutputable a where
          pprPlatform     :: Platform -> a -> SDoc
          pprPlatformPrec :: Platform -> Rational -> a -> SDoc

  and various instances have had to be converted to use that class, and we pass Platform around accordingly.
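  For illustration, a self-contained toy version of that pattern (the class is copied from the commit message; Platform and SDoc are stubbed out here and MyType is invented):

      data Platform = Platform32 | Platform64
      type SDoc     = String            -- stand-in for GHC's real SDoc

      class PlatformOutputable a where
          pprPlatform     :: Platform -> a -> SDoc
          pprPlatformPrec :: Platform -> Rational -> a -> SDoc

      data MyType = MyType String

      instance PlatformOutputable MyType where
          pprPlatform _platform (MyType s) = s
          pprPlatformPrec platform _prec t = pprPlatform platform t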
* Refactoring: use a structured CmmStatics type rather than [CmmStatic] | Max Bolingbroke | 2011-07-05 | 1 | -10/+7
  I observed that the [CmmStatic] within CmmData uses the list in a very stylised way. The first item in the list is almost invariably a CmmDataLabel. Many parts of the compiler pattern match on this list and fail if this is not true. This patch makes the invariant explicit by introducing a structured type CmmStatics that holds the label and the list of remaining [CmmStatic]. There is one wrinkle: the x86 backend sometimes wants to output an alignment directive just before the label. However, this can be easily fixed up by parameterising the native codegen over the type of CmmStatics (through the GenCmmTop parameterisation) and using a pair (Alignment, CmmStatics) there instead. As a result, I think we will be able to remove CmmAlign and CmmDataLabel from the CmmStatic data type, thus nuking a lot of code and failing pattern matches. This change will come as part of my next patch.
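  A hedged sketch of the resulting shapes (simplified stand-in types, not GHC's actual definitions), showing the label carried explicitly and the x86-specific pairing with an alignment:

      type CLabelSketch = String
      data CmmStaticSketch
          = CmmStaticLitSketch Integer      -- a literal word of static data
          | CmmStringSketch String          -- a static string

      -- The label is now explicit instead of being the first list element.
      data CmmStaticsSketch = Statics CLabelSketch [CmmStaticSketch]

      -- The x86 backend pairs this with the alignment it emits just before the label.
      type Alignment = Int
      type X86StaticsSketch = (Alignment, CmmStaticsSketch)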
* Keep the C stack pointer 16-byte aligned on all x86 platforms, not just Mac OS X (#5250) | Simon Marlow | 2011-06-27 | 1 | -10/+5
  The OS X ABI requires the C stack pointer to be 16-byte aligned at a function call. As far as I know this is not a requirement on other x86 ABIs, but it seems that gcc is now generating SSE2 code that assumes stack alignment (-mincoming-stack-boundary defaults to 4), so we have to respect 16-byte alignment.
* Remove most of the CPP from compiler/nativeGen/X86/CodeGen.hs | Ian Lynagh | 2011-06-17 | 1 | -558/+522
* Whitespace only in compiler/nativeGen/X86/CodeGen.hs | Ian Lynagh | 2011-06-17 | 1 | -607/+607
* Wibble on panic message | David Terei | 2011-06-16 | 1 | -1/+1
* FIX BUILD on PPC. Define default genCCall with correct number of args. | Erik de Castro Lopo | 2011-06-16 | 1 | -1/+1