| Commit message |
| |
e.g.
cd compiler
make FAST=YES stage1/build/HscTypes.o
builds just the specified .o file, without rebuilding dependencies,
and omitting some of the makefile phases. FAST=YES works anywhere, to
omit dependencies and phases. 'make fast' is shorthand for 'make
all FAST=YES'.
| |
The type in a ViewPat wasn't being zonked. Easily fixed.
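For illustration, a minimal view pattern (hypothetical example, not from the patch): the ViewPat node for such a pattern carries a type, which is what must be zonked.
  {-# LANGUAGE ViewPatterns #-}

  firstWord :: String -> String
  firstWord (words -> (w:_)) = w
  firstWord _                = ""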
| |
The main purpose of this patch is to fix Trac #3306, by fleshing out the
syntax for GADT-style record declarations so that you can have a context in
the type. The new form is
  data T a where
    MkT :: forall a. Eq a => { x,y :: !a } -> T a
See discussion on the Trac ticket.
The old form is still allowed, but gives a deprecation warning.
When we remove the old form we'll also get rid of the one reduce/reduce
error in the grammar. Hurrah!
While I was at it, I failed as usual to resist the temptation to do lots of
refactoring. The parsing of data/type declarations is now much simpler and
more uniform. Less code, less chance of errors, and more functionality.
Took longer than I planned, though.
ConDecl has record syntax, but it was not being used consistently, so I
pushed that through the compiler.
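A self-contained sketch using the new form (the type and field names here are illustrative only); note that the Eq context becomes available when the constructor is matched:
  {-# LANGUAGE GADTs #-}

  data T a where
    MkT :: forall a. Eq a => { tx, ty :: !a } -> T a

  sameFields :: T a -> Bool
  sameFields (MkT a b) = a == b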
| |
- converting a THSyn FFI declaration to HsDecl was broken; fixed
- pretty-printing of FFI declarations was variously bogus; fixed
- there was an unused "library" field in CImport; removed
| |
I boobed when I decoupled record selectors from their data types.
The most straightforward and robust fix means attaching the TyCon
of a record selector to its IfaceIdInfo, so
you'll need to rebuild all .hi files.
That said, the fix is easy.
| |
The implementations are still in the rts.
| |
In practice you currently also need UnliftedFFITypes; however,
the restriction to just unlifted types may be lifted in future.
| |
We don't handle "foreign import prim" in TH stuff.
| |
The safe/unsafe annotation doesn't currently mean anything for prim.
Just in case we decide it means something later it's better to stick
to using one or the other consistently. We decided that using safe
is the better one to require (and it's also the default).
| |
It adds a third case to StgOp, which already holds StgPrimOp and StgFCallOp.
The code generation for the new StgPrimCallOp case is almost exactly the
same as for out-of-line primops. They now share the tailCallPrim function.
In the Core -> STG translation we map foreign calls using the "prim"
calling convention to the StgPrimCallOp case. This is because in Core we
represent prim calls using the ForeignCall stuff. At the STG level, however,
the prim calls are really much more like primops than foreign calls.
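A rough sketch of the resulting shape (placeholder types stand in for GHC's own PrimOp, PrimCall and ForeignCall; constructor fields are simplified):
  data PrimOp      = PrimOp
  data PrimCall    = PrimCall
  data ForeignCall = ForeignCall

  data StgOp
    = StgPrimOp     PrimOp       -- existing: ordinary primops
    | StgFCallOp    ForeignCall  -- existing: ordinary foreign calls
    | StgPrimCallOp PrimCall     -- new: calls using the "prim" calling convention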
| |
Unlike normal foreign imports, which desugar into a separate worker and
wrapper, we use just a single wrapper declaration. The call is currently
represented in Core as a foreign call. This means the args are all treated
as fully strict. This is OK at the moment because we restrict the types for
foreign import prim to unboxed types; however, in future we may want to make
prim imports use the normal cmm calling convention for Haskell functions, in
which case we would not be able to assume all args are strict. At that point
it may make more sense to represent cmm/prim calls as distinct from foreign
calls, more like the way the existing PrimOp calls are handled.
| |
The main restriction is that all args and results must be unboxed types.
In particular we allow unboxed tuple results (which is a primary
motivation for the whole feature). The normal rules apply about
"void rep" result types like State#. We only allow "prim" calling
convention for import, not export. The other forms of import, "dynamic",
"wrapper" and data label are banned as a conseqence of checking that the
imported name is a valid C string. We currently require prim imports to
be marked unsafe, though this is essentially arbitrary as the safety
information is unused.
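For illustration, a declaration of the accepted shape (the Cmm symbol and the Haskell name are made up, the extension set is indicative, and the safety annotation is omitted here):
  {-# LANGUAGE ForeignFunctionInterface, GHCForeignImportPrim, MagicHash,
               UnboxedTuples, UnliftedFFITypes #-}

  import GHC.Exts (Int#)

  -- All argument and result types are unboxed; an unboxed tuple result is allowed.
  foreign import prim "stg_doubleTwicezh"
    doubleTwice# :: Int# -> (# Int#, Int# #)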
| |
We only allow simple function label imports, not the normal complicated
business with "wrapper", "dynamic" or data label "&var" imports.
| |
First in a series of patches to add the feature.
This patch just adds PrimCallConv to the CCallConv type.
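An illustrative sketch of the change (the real CCallConv lives in GHC's ForeignCall module and has more conventions than shown here):
  data CCallConv
    = CCallConv      -- the C calling convention
    | StdCallConv    -- Windows stdcall
    | PrimCallConv   -- new: calls compiled via the "prim" convention
    deriving (Eq, Show)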
| |
Including help for directory-specific targets, such as 'make 1' in ghc
| |
Fixes failure when Haddocking Data.Monoid in libraries/base
| |
test is tcfail205
| |
We needed some more $s to delay evaluation until the values are
available, and the calls needed to be later in the ghc.mk so that
compiler_stage2_WAYS etc are defined.
| |
This is a correction to the patch
  "When linking a shared library with --make, always do the link step"
which used the wrong flag in making the decision: it used -dynamic
whereas the correct flag is -shared.
| |
It's already checked for foreign import, but was missing for export.
| |
With the exception of GHC's main Parser.y(.pp), which has 2
reduce/reduce conflicts
| |
Roman found situations where he had
  case (f n) of _ -> e
where he knew that f (which was strict in n) would terminate if n did.
Notice that the result of (f n) is discarded. So it makes sense to
transform to
  case n of _ -> e
Rather than attempt some general analysis to support this, I've added
enough support that you can do this using a rewrite rule:
RULE "f/seq" forall n. seq (f n) e = seq n e
You write that rule. When GHC sees a case expression that discards
its result, it mentally transforms it to a call to 'seq' and looks for
a RULE. (This is done in Simplify.rebuildCase.) As usual, the
correctness of the rule is up to you.
This patch implements the extra stuff. I have not documented it explicitly
in the user manual yet... let's see how useful it is first.
The patch looks bigger than it is, because
a) Comments; see esp MkId Note [seqId magic]
b) Some refactoring. Notably, I moved the special desugaring for
seq from MkCore back into DsUtils where it properly belongs.
(It's really a desugaring thing, not a CoreSyn invariant.)
c) Annoyingly, in a RULE left-hand side we need to be careful that
the magical desugaring done in MkId Note [seqId magic] item (c)
is *not* done on the LHS of a rule. Or rather, we arrange to
un-do it, in DsBinds.decomposeRuleLhs.
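Written out as a source-level pragma, with a stand-in definition for f (NOINLINE just keeps the rule from being defeated by inlining), such a rule looks like:
  f :: Int -> Int
  f n = n + 1
  {-# NOINLINE f #-}

  -- Discarding the result of (f n) is as good as discarding n itself.
  {-# RULES
  "f/seq" forall n e. seq (f n) e = seq n e
    #-}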
| |
We should accept these:
  data a :*: b = ....
or
  data (:*:) a b = ...
only if -XTypeOperators is in force. And similarly for class decls.
This patch fixes the problem. It uses the slightly-nasty OccName.isSymOcc,
but the only way to avoid that is to cache the result in OccNames, which seems
overkill to us.
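For reference, a form that now requires the extension (the constructor definition is made up):
  {-# LANGUAGE TypeOperators #-}

  data a :*: b = a :*: b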
| |
I've also added some missing $s to some makefiles. These aren't
technically necessary, but it's nice to be consistent.
| |
The new flag -XMonoLocalBinds tells GHC not to generalise nested
bindings in let or where clauses, unless there is a type signature,
in which case we use it.
I'm thinking about whether this might actually be a good direction for
Haskell to go in, although it seems pretty radical. Anyway, the flag
is easy to implement (look at how few lines change), and having it
will allow us to experiment with and without it.
Just for the record, below are the changes required in the boot
libraries -- i.e. the places where changes are needed. Not quite as minimal
as I'd hoped, but the changes fall into a few standard patterns, and most
represent (in my opinion) stylistic improvements. I will not push these patches,
however.
== running darcs what -s --repodir libraries/base
M ./Control/Arrow.hs -2 +4
M ./Data/Data.hs -7 +22
M ./System/IO/Error.hs +1
M ./Text/ParserCombinators/ReadP.hs +1
== running darcs what -s --repodir libraries/bytestring
M ./Data/ByteString/Char8.hs -1 +2
M ./Data/ByteString/Unsafe.hs +1
== running darcs what -s --repodir libraries/Cabal
M ./Distribution/PackageDescription.hs -2 +6
M ./Distribution/PackageDescription/Check.hs +3
M ./Distribution/PackageDescription/Configuration.hs -1 +3
M ./Distribution/ParseUtils.hs -2 +4
M ./Distribution/Simple/Command.hs -1 +4
M ./Distribution/Simple/Setup.hs -12 +24
M ./Distribution/Simple/UserHooks.hs -1 +5
== running darcs what -s --repodir libraries/containers
M ./Data/IntMap.hs -2 +2
== running darcs what -s --repodir libraries/dph
M ./dph-base/Data/Array/Parallel/Arr/BBArr.hs -1 +3
M ./dph-base/Data/Array/Parallel/Arr/BUArr.hs -2 +4
M ./dph-prim-par/Data/Array/Parallel/Unlifted/Distributed/Arrays.hs -6 +10
M ./dph-prim-par/Data/Array/Parallel/Unlifted/Distributed/Combinators.hs -3 +6
M ./dph-prim-seq/Data/Array/Parallel/Unlifted/Sequential/Flat/Permute.hs -2 +4
== running darcs what -s --repodir libraries/syb
M ./Data/Generics/Twins.hs -5 +18
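To illustrate the flag as described at the top of this message, a hypothetical fragment (not from these patches): under -XMonoLocalBinds a nested binding is not generalised, so the type signature is what lets it be used at two different types.
  {-# LANGUAGE MonoLocalBinds #-}

  pair :: (Bool, Char)
  pair = (ident True, ident 'x')
    where
      ident :: a -> a   -- without this signature the binding stays monomorphic
      ident y = y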
| |
This patch fixes an insidious and long-standing bug in the way that
parallelism is handled in GHC. See Note [lazyId magic] in MkId.
Here's the diagnosis, copied from the Trac ticket. par is defined
in GHC.Conc thus:
{-# INLINE par #-}
par :: a -> b -> b
par x y = case (par# x) of { _ -> lazy y }
-- The reason for the strange "lazy" call is that it fools the
-- compiler into thinking that pseq and par are non-strict in
-- their second argument (even if it inlines pseq/par at the call
-- site). If it thinks par is strict in "y", then it often
-- evaluates "y" before "x", which is totally wrong.
The function lazy is the identity function, but it is inlined only
after strictness analysis, and (via some magic) pretends to be
lazy. Hence par pretends to be lazy too.
The trouble is that both par and lazy are inlined into your definition
of parallelise, so that the unfolding for parallelise (exposed in
Parallelise.hi) does not use lazy at all. Then when compiling Main,
parallelise is in turn inlined (before strictness analysis), and so
the strictness analyser sees too much.
This was all sloppy thinking on my part. Inlining lazy after
strictness analysis works fine for the current module, but not for
importing modules.
The fix implemented by this patch is to inline 'lazy' in CorePrep,
not in WorkWrap. That way interface files never see the inlined version.
The downside is that a little less optimisation may happen on programs
that use 'lazy'. And you'll only see this in the results of -ddump-prep,
not in -ddump-simpl. So KEEP AN EYE OUT (Simon and Satnam especially).
Still, it should work properly now. Certainly fixes #3259.
| |
Adopt Max's suggestion for name shadowing, by suppressing shadowing
warnings for variables starting with "_". A tiny bit of refactoring
along the way.
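A small made-up example of the new behaviour (compile with -fwarn-name-shadowing, which is part of -Wall):
  example :: Int -> Int -> Int
  example x _y =
    let x  = 1   -- still warns: this 'x' shadows the argument 'x'
        _y = 2   -- no longer warns: the shadowing name starts with '_'
    in x + _y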
| |
In Template Haskell -ddump-splices, the "after" expression is populated
with RdrNames, many of which are Orig things. We used to print these
fully-qualified, but that's a bit heavy.
This patch refactors the code a bit so that the same print-unqualified
mechanism we use for Names also works for RdrNames. Lots of comments
too, because it took me a while to figure out how it all worked again.
| |
When you say -ddump-splices, the "before" expression is now
*renamed* but not *typechecked*.
Reasons: (a) less typechecking crap
         (b) data constructors after type checking have been
             changed to their *wrappers*, and that makes them
             always print fully qualified
| |
The trial-and-error for type defaults was not playing nicely with
-Werror. The fix is simple.