I changed the behaviour slightly: for example, i386/FreeBSD will no longer
fall through to the Linux case and use "i386-pc-linux-gnu", but will get the
final empty case instead. I assume that is the right thing to do.
|
This means that we now generate the same code whatever platform we are
on, which should help avoid changes on one platform breaking the build
on another.
It's also another step towards full cross-compilation.
|
To explicitly choose whether you want an unregisterised build you now
need to use the "--enable-unregisterised"/"--disable-unregisterised"
configure flags.
|
Proc-point splitting is only required by backends that do not support
having proc-points within a code block, i.e. everything except the
native code generator: the LLVM and C backends.
Not doing proc-point splitting saves some compilation time and might
produce slightly better code in some cases.
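As a rough illustration (toy types and names, not GHC's actual Cmm pass),
splitting cuts a single block of statements into separate procedures at the
designated proc-points:

    data Stmt = Instr String | ProcPoint String  -- labels mark proc-points

    -- Each proc-point begins a new procedure; statements before the first
    -- proc-point form the entry procedure.
    splitAtProcPoints :: [Stmt] -> [[Stmt]]
    splitAtProcPoints = filter (not . null) . foldr step [[]]
      where
        step s@(ProcPoint _) (cur : rest) = [] : (s : cur) : rest
        step s (cur : rest)               = (s : cur) : rest
        step s []                         = [[s]]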
|
earlier did, so we avoid them.
|
In particular, this makes life simpler when we want to use a general
GHC SDoc in the middle of some LLVM.
|
It allows you to do
(high, low) `quotRem` d
provided high < d.
Currently it only has an inefficient fallback implementation.
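A minimal sketch of the semantics in plain Haskell (the function name and
boxed types are illustrative; the real op is a primop on unboxed words),
assuming 64-bit words:

    import Data.Bits (shiftL)
    import Data.Word (Word64)

    -- Divide the two-word value high*2^64 + low by d; the precondition
    -- high < d guarantees the quotient fits in a single word.
    quotRemWord2 :: Word64 -> Word64 -> Word64 -> (Word64, Word64)
    quotRemWord2 high low d = (fromIntegral q, fromIntegral r)
      where
        n      = (fromIntegral high `shiftL` 64) + fromIntegral low :: Integer
        (q, r) = n `quotRem` fromIntegral d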
|
Currently no NCGs support it
|
No special-casing in any NCGs yet
|
Only amd64 has an efficient implementation currently.
|
This means we no longer perform the division twice when using quotRem
(on platforms where the op is supported; currently only amd64).
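For illustration (ordinary Haskell, not the generated code): with a fused
quot/rem operation, both components below come from a single hardware
division:

    -- Splits a number into its leading digits and last digit with one
    -- division; previously this compiled to two divide instructions, as
    -- if written (n `quot` 10, n `rem` 10).
    splitLastDigit :: Int -> (Int, Int)
    splitLastDigit n = n `quotRem` 10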
|
Signed-off-by: David Terei <davidterei@gmail.com>
|
is used for optimisation. (enabled by default)
|
TBAA allows us to specify a type hierarchy in metadata with
the property that nodes on different branches don't alias.
This should somewhat improve the optimizations LLVM does that
rely on alias information.
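A toy model of the aliasing rule (this sketches the idea only, not LLVM's
actual metadata encoding): tags form a tree, and two accesses may alias
only when one tag is an ancestor of the other:

    -- Each TBAA tag has a name and an optional parent in the hierarchy.
    data TbaaTag = TbaaTag { tagName :: String, tagParent :: Maybe TbaaTag }

    ancestors :: TbaaTag -> [String]
    ancestors t = tagName t : maybe [] ancestors (tagParent t)

    -- Tags on different branches never alias, so LLVM may reorder or
    -- eliminate the corresponding memory operations.
    mayAlias :: TbaaTag -> TbaaTag -> Bool
    mayAlias a b = tagName a `elem` ancestors b || tagName b `elem` ancestors a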
|
CmmJump statements now carry a list of the STG registers
that are live at that jump site. This is used by the LLVM
backend so it can avoid unnecessarily passing around dead
registers, improving performance. This gives us the
framework to finally fix trac #4308.
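A hedged sketch of the shape of the change (simplified stand-ins, not
GHC's real Cmm definitions):

    data GlobalReg = Sp | Hp | R1 | R2 | F1   -- a few of the STG registers
    data CmmExpr   = CmmLit Int               -- stand-in for the jump target

    data CmmStmt
      = CmmJump CmmExpr [GlobalReg]  -- new: the registers live at the jump
      | CmmNop                       -- other statements elided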
|
LLVM has a problem when the user imports some FFI functions,
such as memcpy and memset, with types that conflict with
the types that GHC uses internally.
So now we pre-initialise the environment with the most
general types for these functions.
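For example, a user-level import of this kind (the signature follows the
C standard) could clash with the backend's own declaration of memcpy:

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Data.Word (Word8)
    import Foreign.C.Types (CSize (..))
    import Foreign.Ptr (Ptr)

    -- A narrower type than the fully general one GHC assumes internally.
    foreign import ccall unsafe "string.h memcpy"
        c_memcpy :: Ptr Word8 -> Ptr Word8 -> CSize -> IO (Ptr Word8)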
|
Compile time still isn't as good as I'd like, but there are no easy
changes available. The LLVM backend could do with a big rewrite to
improve performance, as there are some ugly designs in it.
At least the test case isn't 10 minutes any more, just a few seconds now.
|
This field was doing nothing. I think it originally appeared in a
very old incarnation of the new code generator.
|
In GHC, this provides an easy way to call a C function via a C wrapper.
This is important when the function is really defined by CPP.
Requires the new CApiFFI extension.
Not documented yet, as it's still an experimental feature at this stage.
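A small usage sketch (sin is just a convenient standard function here;
the point is that the call goes through a generated C wrapper, so it
would also work if the name were really a CPP macro):

    {-# LANGUAGE CApiFFI #-}
    import Foreign.C.Types (CDouble (..))

    -- "capi" makes GHC emit and call a C wrapper instead of binding the
    -- symbol directly.
    foreign import capi "math.h sin" c_sin :: CDouble -> CDouble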
|
Better to list the unsupported cases explicitly in code
than to have a catch-all that panics. The latter approach hides
problems when new constructors are added, such as the recent
additions to the supported Cmm prim ops that weren't ported
to the C backend because no one noticed.
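Illustration with made-up constructor names: with no wildcard, adding a
constructor makes GHC's incomplete-pattern warning point at every backend
that hasn't been updated, instead of a catch-all panicking at run time:

    data PrimCall = PopCntOp | BSwapOp | NewShinyOp

    emitPrim :: PrimCall -> String
    emitPrim PopCntOp   = "__builtin_popcount(...)"
    emitPrim BSwapOp    = "__builtin_bswap32(...)"
    emitPrim NewShinyOp = error "emitPrim: NewShinyOp not supported here"
    -- rather than:  emitPrim _ = error "unsupported prim op"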
|
We now manage the stack correctly on both x86_64 and i386, keeping
the stack aligned at (16n bytes - word size) on function entry
and at (16n bytes) on function calls. This gives us compatibility
with LLVM and GCC.
|
This patch changes the STG code so that %rsp is aligned
to a 16-byte boundary + 8. This is the alignment required by
the x86_64 ABI on entry to a function. Previously we kept
%rsp aligned to a 16-byte boundary, but this was causing
problems for the LLVM backend (see #4211).
Since the stack is now kept 16+8 byte aligned in STG land
on x86_64, we no longer need to run the LLVM stack mangler
over the stack manipulations on x86_64 targets.
This patch only modifies the alignment for x86_64 backends.
Signed-off-by: David Terei <davidterei@gmail.com>
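A tiny worked check of the invariant (the addresses are example values
only): the ABI wants %rsp = 0 (mod 16) just before a call; the call
pushes an 8-byte return address, so on entry %rsp = 8 (mod 16), i.e. a
16-byte boundary + 8:

    rspBeforeCall, rspOnEntry :: Integer
    rspBeforeCall = 0x7fffffffe000    -- 16-byte aligned (example value)
    rspOnEntry    = rspBeforeCall - 8 -- after the return address is pushed

    invariantHolds :: Bool
    invariantHolds = rspBeforeCall `mod` 16 == 0 && rspOnEntry `mod` 16 == 8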
|
And some knock-on changes
|
CmmTop -> CmmDecl
CmmPgm -> CmmGroup
|