| Commit message | Author | Age | Files | Lines |
In #21717 we saw a reportedly unsound strictness signature due to an unsound
definition of plusSubDmd on Calls. This patch describes and fixes the
unsoundness, as outlined in `Note [Call SubDemand vs. evaluation Demand]`.
This fix means we also get rid of the special handling of `-fpedantic-bottoms`
in eta-reduction. Thanks to less strict and actually sound strictness results,
we will no longer eta-reduce the problematic cases in the first place, even
without `-fpedantic-bottoms`.
So fixing the unsoundness also makes our eta-reduction code simpler, with fewer
hacks to explain. But there is another, more unfortunate side-effect:
We *unfix* #21085, but fortunately we have a new fix ready:
See `Note [mkCall and plusSubDmd]`.
There's another change:
I decided to make `Note [SubDemand denotes at least one evaluation]` a lot
simpler by using `plusSubDmd` (instead of `lubPlusSubDmd`) even if both argument
demands are lazy. That leads to less precise results, but in turn rids us of
the need for four different `OpMode`s and the complication of
`Note [Manual specialisation of lub*Dmd/plus*Dmd]`. The result is simpler code
that is in line with the paper draft on Demand Analysis.
I left the abandoned idea in `Note [Unrealised opportunity in plusDmd]` for
posterity. The fallout in terms of regressions is negligible, as the testsuite
and NoFib show.
```
Program Allocs Instrs
--------------------------------------------------------------------------------
hidden +0.2% -0.2%
linear -0.0% -0.7%
--------------------------------------------------------------------------------
Min -0.0% -0.7%
Max +0.2% +0.0%
Geometric Mean +0.0% -0.0%
```
Fixes #21717.
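As a standalone illustration of why eta-reduction interacts with bottoms at all
(a made-up example, not code from this patch): `seq`/`evaluate` can tell `f`
apart from its eta-expansion `\x -> f x` when `f` itself is bottom, which is
exactly the class of programs `-fpedantic-bottoms` is concerned with.
```
import Control.Exception (SomeException, evaluate, try)

f :: Int -> Int
f = error "boom"            -- f itself is bottom ...

g :: Int -> Int
g = \x -> f x               -- ... but its eta-expansion is a lambda, i.e. a value

main :: IO ()
main = do
  r1 <- try (evaluate g) :: IO (Either SomeException (Int -> Int))
  r2 <- try (evaluate f) :: IO (Either SomeException (Int -> Int))
  putStrLn (either (const "g: threw") (const "g: ok") r1)   -- prints "g: ok"
  putStrLn (either (const "f: threw") (const "f: ok") r2)   -- prints "f: threw"
```
Eta-reducing `g` back to `f` would change the first result, which is why the
analysis needs sound results before eta-reduction fires.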
* Replace 'text . show' and 'ppr' with 'int'.
* Remove Outputable.hs-boot, no longer needed
* Use pprWithCommas
* Factor out instructions in AArch64 codegen
Part of proposal 475 (https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0475-tuple-syntax.rst)
Moves all tuples to GHC.Tuple.Prim
Updates ghc-prim version (and bumps bounds in dependents)
Updates haddock submodule
Updates deepseq submodule
Updates text submodule
The function GHC.Stg.InferTags.Rewrite.isTagged can be given
the Id of a join point, which might be representation polymorphic.
This would cause the call to isUnliftedType to crash. It's better
to use typeLevity_maybe instead.
Fixes #22212
Due to an oversight, the initial specification and implementation of
-Woperator-whitespace focused on varsym exclusively and completely
ignored consym.
This meant that expressions such as "x+ y" would produce a warning,
while "x:+ y" would not.
The specification was corrected in ghc-proposals pull request #404,
and this patch updates the implementation accordingly.
Regression test included.
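A small usage sketch of the behaviour described above (the module and function
names are made up); with the corrected specification both kinds of operator are
treated alike:
```
{-# OPTIONS_GHC -Woperator-whitespace #-}
module Example where

import Data.Complex (Complex ((:+)))

varsymCase :: Int -> Int -> Int
varsymCase x y = x+ y            -- warned both before and after this patch

consymCase :: Double -> Double -> Complex Double
consymCase x y = x:+ y           -- warned only after this patch
```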
Emit a __builtin_unreachable() call after a foreign call marked as
CmmNeverReturns. This is crucial for generating correctly typed code for
wasm; on other archs it is also beneficial for C compiler optimizations.
Rather than a list of constructors and a `NewOrData` flag, we define `data DataDefnCons a = NewTypeCon a | DataTypeCons [a]`, which enforces that a newtype has exactly one constructor.
Closes #22070.
Bump haddock submodule.
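A self-contained sketch based on the definition quoted above (the module and
function names are made up; the real type lives in the GHC AST and carries more
detail):
```
{-# LANGUAGE DeriveTraversable #-}
module DataDefnConsSketch where

data DataDefnCons a
  = NewTypeCon a          -- a newtype has exactly one constructor, by construction
  | DataTypeCons [a]      -- a data declaration has zero or more
  deriving (Show, Functor, Foldable, Traversable)

-- Consumers can no longer forget the newtype invariant: there is no way to
-- represent a newtype with zero or two constructors.
dataDefnConsList :: DataDefnCons a -> [a]
dataDefnConsList (NewTypeCon c)    = [c]
dataDefnConsList (DataTypeCons cs) = cs
```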
Before this patch, the varsym lexing rules were defined as follows:

    <0> {
      @varsym / { precededByClosingToken `alexAndPred` followedByOpeningToken } { varsym_tight_infix }
      @varsym / { followedByOpeningToken }  { varsym_prefix }
      @varsym / { precededByClosingToken }  { varsym_suffix }
      @varsym                               { varsym_loose_infix }
    }

Unfortunately, this meant that the predicates 'precededByClosingToken' and
'followedByOpeningToken' were recomputed several times before we could figure
out the whitespace context.
With this patch, we check for the whitespace context directly in the lexer
action:

    <0> {
      @varsym { with_op_ws varsym }
    }

The check for opening/closing tokens now happens in 'with_op_ws', which is
part of the lexer action rather than the lexer predicate.
In the lexer, predicates have the following type:

    { ... } :: user       -- predicate state
            -> AlexInput  -- input stream before the token
            -> Int        -- length of the token
            -> AlexInput  -- input stream after the token
            -> Bool       -- True <=> accept the token

This is documented in the Alex manual.
There is access to the input stream both before and after the token.
But when the time comes to construct the token, GHC passes only the
initial string buffer to the lexer action. This patch fixes it:

    - type Action = PsSpan -> StringBuffer -> Int -> P (PsLocated Token)
    + type Action = PsSpan -> StringBuffer -> Int -> StringBuffer -> P (PsLocated Token)

Now lexer actions have access to the string buffer both before and after
the token, just like the predicates. It's just a matter of passing an
additional function parameter throughout the lexer.
Previously, derived instances of `Functor` (as well as the related classes
`Foldable`, `Traversable`, and `Generic1`) would determine which constraints to
infer by checking for fields that contain the last type variable. The problem
was that this last type variable was taken from `tyConTyVars`. For GADTs, the
type variables in each data constructor are _not_ the same type variables as
in `tyConTyVars`, leading to #22167.
This fixes the issue by instead checking for the last type variable using
`dataConUnivTyVars`. (This is very similar in spirit to the fix for #21185,
which also replaced an errant use of `tyConTyVars` with type variables from
each data constructor.)
Fixes #22167.
When compiling Cmm, the ml_hs_file field is used to indicate the Cmm
filename when later generating DWARF information. We should pass the
original filename here; otherwise, for preprocessed Cmm files, the
filename will be a temporary filename, which is confusing.
• Delete some dead code, largely under `GHC.Utils`.
• Clean up a few definitions in `GHC.Utils.(Misc, Monad)`.
• Clean up `GHC.Types.SrcLoc`.
• Derive stock `Functor, Foldable, Traversable` for more types.
• Derive more instances for newtypes.
Bump haddock submodule.
See the examples in #22057, which show we have to traverse deeply into a
pattern to determine whether it contains a splice or not. The original
implementation pointed this out but deemed this very shallow traversal
"too expensive".
Fixes #22057
I also fixed an oversight in !7821 which meant we lost a warning that
was present in 9.2.2.
Fixes #22067
fixes #22176
For an expression like:

    case x of y
      Con z -> z

If we also retain the tag sig for z we can generate code to immediately return
it rather than calling out to stg_ap_0_fast.
This fixes various typos and spelling mistakes
in the compiler.
Fixes #21891
If a module `M` exports two fields `f` (using DuplicateRecordFields), we can
still accept

    import M (f)
    import M hiding (f)

and treat `f` as referencing both of them. This was accepted in GHC 9.0, but gave
rise to an ambiguity error in GHC 9.2. See #21625.
This patch also documents this behaviour in the user's guide, and updates the
test for #16745 which is now treated differently.
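For concreteness, a hypothetical module `M` of the kind described (the
constructor and type names are made up):
```
{-# LANGUAGE DuplicateRecordFields #-}
module M (R1 (..), R2 (..)) where

data R1 = MkR1 { f :: Int }
data R2 = MkR2 { f :: Bool }
```
`M` exports two distinct fields named `f`, and with this patch both
`import M (f)` and `import M hiding (f)` are accepted again and taken to refer
to both of them.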
This MR adds diagnostic codes, assigning unique numeric codes to errors and
warnings, e.g.

    error: [GHC-53633]
        Pattern match is redundant

This is achieved as follows:
- a type family GhcDiagnosticCode that gives the diagnostic code
for each diagnostic constructor,
- a type family ConRecursInto that specifies whether to recur into
an argument of the constructor to obtain a more fine-grained code
(e.g. different error codes for different 'deriving' errors),
- generics machinery to generate the value-level function assigning
each diagnostic its error code; see Note [Diagnostic codes using generics]
in GHC.Types.Error.Codes.
The upshot is that, to add a new diagnostic code, contributors only need
to modify the two type families mentioned above. All logic relating to
diagnostic codes is thus confined to the GHC.Types.Error.Codes module,
with no code duplication.
This MR also refactors error message datatypes a bit, ensuring we can
derive Generic for them, and cleans up the logic around constraint
solver reports by splitting up 'TcSolverReportInfo' into separate
datatypes (see #20772).
Fixes #21684
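A heavily simplified, hypothetical sketch of the two type families (the
constructor names and the second code are invented; 53633 is the code quoted
above; the real definitions live in GHC.Types.Error.Codes and are consumed by
the generics machinery to produce the value-level mapping):
```
{-# LANGUAGE DataKinds, TypeFamilies #-}
module CodesSketch where

import Data.Kind (Type)
import GHC.TypeLits (Nat, Symbol)

-- Map each diagnostic constructor (by name) to its numeric code.
type family GhcDiagnosticCode (con :: Symbol) :: Nat where
  GhcDiagnosticCode "TcRnRedundantPatternMatch" = 53633
  GhcDiagnosticCode "TcRnSomeOtherDiagnostic"   = 12345

-- For wrapper constructors, say which payload type to recur into for a
-- more fine-grained code.
type family ConRecursInto (con :: Symbol) :: Maybe Type where
  ConRecursInto "TcRnDerivingError" = 'Just DerivingErrorSketch
  ConRecursInto other               = 'Nothing

data DerivingErrorSketch   -- stand-in for a more detailed diagnostic type
```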
This patch implements GHC proposal 313, "Delimited continuation
primops", by adding native support for delimited continuations to the
GHC RTS.
All things considered, the patch is relatively small. It almost
exclusively consists of changes to the RTS; the compiler itself is
essentially unaffected. The primops come with fairly extensive Haddock
documentation, and an overview of the implementation strategy is given
in the Notes in rts/Continuation.c.
This first stab at the implementation prioritizes simplicity over
performance. Most notably, every continuation is always stored as a
single, contiguous chunk of stack. If one of these chunks is
particularly large, it can result in poor performance, as the current
implementation does not attempt to cleverly squeeze a subset of the
stack frames into the existing stack: it must fit all at once. If this
proves to be a performance issue in practice, a cleverer strategy would
be a worthwhile target for future improvements.
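For readers unfamiliar with the prompt/control idea, here is a small refresher
using the existing ContT machinery from the transformers package; it is only an
analogy for what the new primops expose natively at the RTS level, not a use of
the primops themselves:
```
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

example :: IO Int
example = evalContT $ resetT $ do     -- resetT delimits the captured continuation
  x <- shiftT $ \k -> do              -- shiftT captures it, up to the delimiter
         r <- lift (k 10)             -- resume the captured continuation with 10
         pure (r * 2)                 -- then post-process its result
  pure (x + 1)                        -- the "rest of the computation" that k runs

main :: IO ()
main = example >>= print              -- prints 22, i.e. (10 + 1) * 2
```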
Normally, the unregisterised builds avoid generating 64-bit
CallishMachOp in StgToCmm, so CmmToC doesn't support these. However,
there do exist cases where we'd like to invoke cmmToC for other cmm
inputs which may contain such CallishMachOps, and it's rather low effort
to add support for these, since they only require calling into existing
ghc-prim cbits.
By reexporting the entirety of Applicative from GHC.Prelude, we can save
ourselves some `hiding` and importing of `Applicative` in consumers of GHC.Prelude.
This also has the benefit of isolating this type of change to
GHC.Prelude, so that people in the future don't have to think about it.
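The re-export pattern being described is roughly the following (an abridged,
invented body; the real GHC.Prelude exports far more):
```
module GHC.Prelude
  ( module X
  ) where

-- Re-export Prelude plus the whole Applicative class under one alias, so
-- consumers get everything from a single import of GHC.Prelude.
import Prelude as X
import Control.Applicative as X (Applicative (..))
```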
Changes:
In order to be warning free and compatible, we hide Applicative(..)
from Prelude in a few places and instead import it directly from
Control.Applicative.
Please see the migration guide at
https://github.com/haskell/core-libraries-committee/blob/main/guides/export-lifta2-prelude.md
for more details.
This means that Applicative is now exported in its entirety from
Prelude.
Motivation:
This change is motivated by a few things:
* liftA2 is an often used function, even more so than (<*>) for some
people.
* When implementing Applicative, the compiler will prompt you for either
an implementation of (<*>) or of liftA2, but trying to use the latter
ends in an error unless you add further imports. This could be confusing
for newbies.
* For teaching, it is oftentimes easier to introduce liftA2 first,
as it is a natural generalisation of fmap.
* This change seems to have been unanimously and enthusiastically
accepted by the CLC members, possibly indicating a lot of love for it.
* This change causes very limited breakage, see the linked issue below
for an investigation on this.
See https://github.com/haskell/core-libraries-committee/issues/50
for the surrounding discussion and more details.
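A tiny example of what this unlocks (assuming a base where the change has
landed; on older base you would still need `import Control.Applicative
(liftA2)`):
```
pairUp :: Maybe Int -> Maybe Int -> Maybe (Int, Int)
pairUp = liftA2 (,)   -- liftA2 now comes straight from the Prelude

main :: IO ()
main = print (pairUp (Just 1) (Just 2))   -- Just (1,2)
```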
Use 'text' instead of 'ppr'.
Using 'ppr' on the string "hello" rendered it as "h,e,l,l,o".
Replace calls to renderWithContext with showSDocOneLine; it's more
efficient and more self-explanatory.
Remove polyPatSig (unused)
Rather, conservatively return Top.
See Note [Untyped demand on case-alternative binders].
I also factored `addCaseBndrDmd` into two separate functions `scrutSubDmd` and
`fieldBndrDmds`.
Fixes #22039.
It turns out Solo is a very recent addition to base, so for older GHC
versions we just define it inline here, in the one place we use it in the
compiler.
- Remove mkHeteroCoercionType, sdocImpredicativeTypes, isStateType (unused),
isCoVar_maybe (duplicated by getCoVar_maybe)
- Replace a few occurrences of voidPrimId with (# #).
void# is a deprecated synonym for the unboxed tuple.
- Use showSDoc in :show linker.
This makes it consistent with the other :show commands
closes #21931
This buglet was exposed by #22114, a consequence of my earlier
refactoring of arity for join points.
This bug was a subtle error in anyInRnEnvR, introduced by

    commit d4d3fe6e02c0eb2117dbbc9df72ae394edf50f06
    Author: Andreas Klebinger <klebinger.andreas@gmx.at>
    Date:   Sat Jul 9 01:19:52 2022 +0200

        Rule matching: Don't compute the FVs if we don't look at them.

The net result was #22028, where a rewrite rule would wrongly
match on a lambda.
The fix to that function is easy.
The following `TcRnDiagnostic` messages have been introduced:
TcRnIllegalHsigDefaultMethods
TcRnBadGenericMethod
TcRnWarningMinimalDefIncomplete
TcRnDefaultMethodForPragmaLacksBinding
TcRnIgnoreSpecialisePragmaOnDefMethod
TcRnBadMethodErr
TcRnNoExplicitAssocTypeOrDefaultDeclaration
As the remarkably-simple #22112 showed, we were making a black hole
in the unfolding of a self-recursive binding. Boo!
It's a bit tricky. Documented in GHC.Iface.Tidy,
Note [tidyTopUnfolding: avoiding black holes]
The use of Solo here allows us to force the selection into the SCE to obtain
the Subst, but without forcing the substitution to be applied. The resulting thunk
is placed into a lazy field which is rarely forced, so forcing it regresses
performance.
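A standalone sketch of the trick (all names are made up; in GHC the lookup is
into the SpecConstr environment and the payload is a Subst): a one-element
tuple lets us force the lookup itself while keeping the looked-up value an
unevaluated thunk.
```
module SoloSketch where

import qualified Data.Map.Strict as Map

data Solo a = Solo a   -- inline stand-in for Data.Tuple.Solo on older base

-- Forcing the returned Solo performs the lookup (so the Maybe and the Map
-- need not be retained), but the value inside stays a thunk.
lookupForced :: Ord k => k -> Map.Map k v -> Solo v
lookupForced k env =
  case Map.lookup k env of
    Just v  -> Solo v
    Nothing -> error "lookupForced: missing key"
```
A caller forces the Solo to perform just the selection, then stores the payload
in a lazy field that is rarely forced.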
This fixes a pretty serious space leak as the forced thunk would retain
`Alt b` values which would then contain reference to a lot of old
bindings and other simplifier gunk.
The OtherCon unfolding was not forced on subsequent simplifier runs so
more and more old stuff would be retained until the end of
simplification.
Fixing this has a drastic effect on maximum residency for the mmark
package, which goes from
```
45,005,401,056 bytes allocated in the heap
17,227,721,856 bytes copied during GC
818,281,720 bytes maximum residency (33 sample(s))
9,659,144 bytes maximum slop
2245 MiB total memory in use (0 MB lost due to fragmentation)
```
to
```
45,039,453,304 bytes allocated in the heap
13,128,181,400 bytes copied during GC
331,546,608 bytes maximum residency (40 sample(s))
7,471,120 bytes maximum slop
916 MiB total memory in use (0 MB lost due to fragmentation)
```
See #21993 for some more discussion.
It's better to overwrite the bindings fields of the ModGuts before
starting an iteration as then all the old bindings can be collected as
soon as the simplifier has processed them. Otherwise the old bindings stay
alive until the end of the simplifier pass, as the mg_binds field is only
modified right at the end.
This reverts commit 851d8dd89a7955864b66a3da8b25f1dd88a503f8.
This commit was originally reverted due to an increase in space usage.
The diagnosis was that the SCE had increased in size and was being
retained by another leak. See #22102.
As #21763 showed, we were over-specialising in some cases, when
the function involved was doing a simple 'eval', but not taking
the value apart, or branching on it.
This MR fixes the problem. See Note [Do not specialise evals].
Nofib barely budges, except that spectral/cichelli allocates about
3% less.
Compiler bytes-allocated improves a bit:

    geo. mean  -0.1%
    minimum    -0.5%
    maximum    +0.0%

The -0.5% is on T11303b, for what it's worth.
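A hypothetical illustration of the distinction the Note draws (not the examples
from #21763): `justEvals` merely forces its argument and never takes it apart,
so specialising it on a particular constructor buys nothing, whereas `branches`
scrutinises the shape and can genuinely benefit.
```
module EvalSketch where

justEvals :: Maybe Int -> Int
justEvals x = x `seq` 0          -- a simple 'eval': no branching on the shape

branches :: Maybe Int -> Int
branches Nothing  = 0            -- takes the value apart: specialising on
branches (Just n) = n + 1        -- the constructor can pay off here
```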
Previously, the SDocContext used for code generation contained
information whether the labels should use Asm or C style.
However, at every individual call site, this is known statically.
This removes the parameter to 'PprCode' and replaces every 'pdoc'
used to print a label in code style with 'pprCLabel' or 'pprAsmLabel'.
The OutputableP instance is now used only for dumps.
The output of T15155 changes; it now uses the Asm style
(which is faithful to what actually happens).
This patch massages the keys used in the `TmOracle` `CoreMap` to ensure
that dictionaries of coherent classes give the same key.
That is, whenever we have an expression we want to insert or lookup in
the `TmOracle` `CoreMap`, we first replace any dictionary
`$dict_abcd :: ct` with a value of the form `error @ct`.
This allows us to common up view-pattern functions with required
constraints whose arguments differed only in the uniques of the
dictionaries they were provided with, thus fixing #21662.
This is a rather ad-hoc change to the keys used in the
`TmOracle` `CoreMap`. In the long run, we would probably want to use
a different representation for the keys instead of simply using
`CoreExpr` as-is. This more ambitious plan is outlined in #19272.
Fixes #21662
Updates unix submodule
This fixes a build error on x86_64-linux-alpine3_12-validate.
See the function 'loadExternalPlugins' defined in this file.
Here we at long last remove the `make`-based build system, it having
been replaced with the Shake-based Hadrian build system. Users are
encouraged to refer to the documentation in `hadrian/doc` and this [1]
blog post for details on using Hadrian.
Closes #17527.
[1] https://www.haskell.org/ghc/blog/20220805-make-to-hadrian.html
This MR fixes #21694 and #21755, and makes sure that #21948 is also
handled by the fix to #21694.
* For #21694 the underlying problem was that we were calling arityType
on an expression that had free join points. This is a Bad Bad Idea.
See Note [No free join points in arityType].
* To make "no free join points in arityType" work out I had to avoid
trying to use eta-expansion for runRW#. This entailed a few changes
in the Simplifier's treatment of runRW#. See
GHC.Core.Opt.Simplify.Iteration Note [No eta-expansion in runRW#]
* I also made andArityType work correctly with -fpedantic-bottoms;
see Note [Combining case branches: andWithTail].
* Rewrote Note [Combining case branches: optimistic one-shot-ness]
* arityType previously treated join points differently to other
let-bindings. This patch makes them uniform; arityType analyses
the RHS of all bindings to get its ArityType, and extends am_sigs.
I realised that, now we have am_sigs giving the ArityType for
let-bound Ids, we don't need the (pre-dating) special code in
arityType for join points. But instead we need to extend the env for
Rec bindings, which we weren't doing before. More uniform now. See
Note [arityType for let-bindings].
This meant we could get rid of ae_joins, and in fact get rid of
EtaExpandArity altogether. Simpler.
* And finally, it was the strange treatment of join-point Ids in
arityType (involving a fake ABot type) that led to a serious bug:
#21755. Fixed by this refactoring, which treats them uniformly;
but without breaking #18328.
In fact, the arity for recursive join bindings is pretty tricky;
see the long Note [Arity for recursive join bindings]
in GHC.Core.Opt.Simplify.Utils. That led to more refactoring,
including deciding that an Id could have an Arity that is bigger
than its JoinArity; see Note [Invariants on join points], item
2(b) in GHC.Core
* Make sure that the "demand threshold" for join points in DmdAnal
is no bigger than the join-arity. In GHC.Core.Opt.DmdAnal see
Note [Demand signatures are computed for a threshold arity based on idArity]
* I moved GHC.Core.Utils.exprIsDeadEnd into GHC.Core.Opt.Arity,
where it more properly belongs.
* Remove an old, redundant hack in FloatOut. The old Note was
Note [Bottoming floats: eta expansion] in GHC.Core.Opt.SetLevels.
Compile time improves very slightly on average:

    Metrics: compile_time/bytes allocated
    ---------------------------------------------------------------------------------------
    T18223(normal)   ghc/alloc   725,808,720   747,839,216   +3.0%   BAD
    T6048(optasm)    ghc/alloc   105,006,104   101,599,472   -3.2%   GOOD
         geo. mean   -0.2%
         minimum     -3.2%
         maximum     +3.0%

For some reason Windows was better:

    T10421(normal)   ghc/alloc   125,888,360   124,129,168   -1.4%   GOOD
    T18140(normal)   ghc/alloc    85,974,520    83,884,224   -2.4%   GOOD
    T18698b(normal)  ghc/alloc   236,764,568   234,077,288   -1.1%   GOOD
    T18923(normal)   ghc/alloc    75,660,528    73,994,512   -2.2%   GOOD
    T6048(optasm)    ghc/alloc   112,232,512   108,182,520   -3.6%   GOOD
         geo. mean   -0.6%
I had a quick look at T18223 but it is knee deep in coercions and
the size of everything looks similar before and after. I decided
to accept that 3% increase in exchange for goodness elsewhere.
Metric Decrease:
T10421
T18140
T18698b
T18923
T6048
Metric Increase:
T18223