Diffstat (limited to 'compiler/GHC/Core/Opt')
-rw-r--r--  compiler/GHC/Core/Opt/Arity.hs               10
-rw-r--r--  compiler/GHC/Core/Opt/CSE.hs                  4
-rw-r--r--  compiler/GHC/Core/Opt/CallArity.hs            4
-rw-r--r--  compiler/GHC/Core/Opt/ConstantFold.hs         6
-rw-r--r--  compiler/GHC/Core/Opt/DmdAnal.hs             16
-rw-r--r--  compiler/GHC/Core/Opt/Exitify.hs              6
-rw-r--r--  compiler/GHC/Core/Opt/FloatOut.hs             2
-rw-r--r--  compiler/GHC/Core/Opt/OccurAnal.hs           12
-rw-r--r--  compiler/GHC/Core/Opt/Pipeline.hs             4
-rw-r--r--  compiler/GHC/Core/Opt/SetLevels.hs            4
-rw-r--r--  compiler/GHC/Core/Opt/Simplify/Env.hs         4
-rw-r--r--  compiler/GHC/Core/Opt/Simplify/Utils.hs       8
-rw-r--r--  compiler/GHC/Core/Opt/SpecConstr.hs           6
-rw-r--r--  compiler/GHC/Core/Opt/WorkWrap.hs             4
-rw-r--r--  compiler/GHC/Core/Opt/WorkWrap/Utils.hs      20
-rw-r--r--  compiler/GHC/Core/Opt/simplifier.tib          2
16 files changed, 56 insertions, 56 deletions
diff --git a/compiler/GHC/Core/Opt/Arity.hs b/compiler/GHC/Core/Opt/Arity.hs
index b03fe84b14..c9142443c1 100644
--- a/compiler/GHC/Core/Opt/Arity.hs
+++ b/compiler/GHC/Core/Opt/Arity.hs
@@ -571,7 +571,7 @@ Extrude the g2
f' = \p. \s. ((error "...") |> g1) s
f = f' |> (String -> g2)
-Discard args for bottomming function
+Discard args for bottoming function
f' = \p. \s. ((error "...") |> g1 |> g3
g3 :: (S -> (S,T)) ~ (S,T)
@@ -823,7 +823,7 @@ arityTypeOneShots (AT lams _) = map snd lams
safeArityType :: ArityType -> SafeArityType
-- ^ Assuming this ArityType is all we know, find the arity of
--- the function, and trim the argument info (and Divergenge)
+-- the function, and trim the argument info (and Divergence)
-- to match that arity. See Note [SafeArityType]
safeArityType at@(AT lams _)
= case go 0 IsCheap lams of
@@ -2034,7 +2034,7 @@ This what eta_expand does. We do it in two steps:
where etas :: EtaInfo
etaInfoAbs builds the lambdas
- etaInfoApp builds the applictions
+ etaInfoApp builds the applications
Note that the /same/ EtaInfo drives both etaInfoAbs and etaInfoApp
@@ -2391,7 +2391,7 @@ case where `e` is trivial):
when `e = \x. if x then bot else id`, because the latter will diverge when
the former would not.
- On the other hand, with `-fno-pendantic-bottoms` , we will have eta-expanded
+ On the other hand, with `-fno-pedantic-bottoms` , we will have eta-expanded
the definition of `e` and then eta-reduction is sound
(see Note [Dealing with bottom]).
Consequence: We have to check that `-fpedantic-bottoms` is off; otherwise
@@ -2487,7 +2487,7 @@ HOWEVER, if we transform
that might mean that f isn't saturated any more, and does not inline.
This led to some other regressions.
-TL;DR currrently we do /not/ eta reduce if the result is a PAP.
+TL;DR currently we do /not/ eta reduce if the result is a PAP.
Note [Eta reduction with casted arguments]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/compiler/GHC/Core/Opt/CSE.hs b/compiler/GHC/Core/Opt/CSE.hs
index 23baf90742..ff1bd3782e 100644
--- a/compiler/GHC/Core/Opt/CSE.hs
+++ b/compiler/GHC/Core/Opt/CSE.hs
@@ -144,7 +144,7 @@ even though we /also/ carry a substitution x -> y. Can we just drop
the binding instead? Well, not at top level! See Note [Top level and
postInlineUnconditionally] in GHC.Core.Opt.Simplify.Utils; and in any
case CSE applies only to the /bindings/ of the program, and we leave
-it to the simplifier to propate effects to the RULES. Finally, it
+it to the simplifier to propagate effects to the RULES. Finally, it
doesn't seem worth the effort to discard the nested bindings because
the simplifier will do it next.
@@ -356,7 +356,7 @@ the program; it's a kind of synthetic key for recursive bindings.
Note [Separate envs for let rhs and body]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Substituting occurances of the binder in the rhs with the
+Substituting occurrences of the binder in the rhs with the
renamed binder is wrong for non-recursive bindings. Why?
Consider this core.
diff --git a/compiler/GHC/Core/Opt/CallArity.hs b/compiler/GHC/Core/Opt/CallArity.hs
index 306b3bd446..265c4fb57e 100644
--- a/compiler/GHC/Core/Opt/CallArity.hs
+++ b/compiler/GHC/Core/Opt/CallArity.hs
@@ -150,7 +150,7 @@ The interesting cases of the analysis:
any useful co-call information.
Return (fv e)²
* Case alternatives alt₁,alt₂,...:
- Only one can be execuded, so
+ Only one can be executed, so
Return (alt₁ ∪ alt₂ ∪...)
* App e₁ e₂ (and analogously Case scrut alts), with non-trivial e₂:
We get the results from both sides, with the argument evaluated at most once.
@@ -277,7 +277,7 @@ together with what other functions.
Note [Analysis type signature]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The work-hourse of the analysis is the function `callArityAnal`, with the
+The workhorse of the analysis is the function `callArityAnal`, with the
following type:
type CallArityRes = (UnVarGraph, VarEnv Arity)
diff --git a/compiler/GHC/Core/Opt/ConstantFold.hs b/compiler/GHC/Core/Opt/ConstantFold.hs
index 14d5c88262..8a22124779 100644
--- a/compiler/GHC/Core/Opt/ConstantFold.hs
+++ b/compiler/GHC/Core/Opt/ConstantFold.hs
@@ -1288,7 +1288,7 @@ word64Result' :: Integer -> CoreExpr
word64Result' result = Lit (mkLitWord64Wrap result)
--- | 'ambiant (primop x) = x', but not nececesarily 'primop (ambient x) = x'.
+-- | 'ambient (primop x) = x', but not necessarily 'primop (ambient x) = x'.
semiInversePrimOp :: PrimOp -> RuleM CoreExpr
semiInversePrimOp primop = do
[Var primop_id `App` e] <- getArgs
@@ -2909,7 +2909,7 @@ mulFoldingRules' platform arg1 arg2 num_ops = case (arg1,arg2) of
andFoldingRules' :: Platform -> CoreExpr -> CoreExpr -> NumOps -> Maybe CoreExpr
andFoldingRules' platform arg1 arg2 num_ops = case (arg1, arg2) of
- -- R2) * `or` `and` simplications
+ -- R2) * `or` `and` simplifications
-- l1 and (l2 and x) ==> (l1 and l2) and x
(L l1, is_lit_and num_ops -> Just (l2, x))
-> Just (mkL (l1 .&. l2) `and` x)
@@ -2932,7 +2932,7 @@ andFoldingRules' platform arg1 arg2 num_ops = case (arg1, arg2) of
orFoldingRules' :: Platform -> CoreExpr -> CoreExpr -> NumOps -> Maybe CoreExpr
orFoldingRules' platform arg1 arg2 num_ops = case (arg1, arg2) of
- -- R2) * `or` `and` simplications
+ -- R2) * `or` `and` simplifications
-- l1 or (l2 or x) ==> (l1 or l2) or x
(L l1, is_lit_or num_ops -> Just (l2, x))
-> Just (mkL (l1 .|. l2) `or` x)
diff --git a/compiler/GHC/Core/Opt/DmdAnal.hs b/compiler/GHC/Core/Opt/DmdAnal.hs
index bf1870c3ea..011f02af5f 100644
--- a/compiler/GHC/Core/Opt/DmdAnal.hs
+++ b/compiler/GHC/Core/Opt/DmdAnal.hs
@@ -362,7 +362,7 @@ dmdAnalStar env (n :* sd) e
-- and Note [Analysing with absent demand]
= (toPlusDmdArg $ multDmdType n' dmd_ty, e')
--- Main Demand Analsysis machinery
+-- Main Demand Analysis machinery
dmdAnal, dmdAnal' :: AnalEnv
-> SubDemand -- The main one takes a *SubDemand*
-> CoreExpr -> WithDmdType CoreExpr
@@ -1440,7 +1440,7 @@ encoded in the demand signature, because that is the information that
demand analysis propagates throughout the program. Failing to
implement the strategy laid out in the signature can result in
reboxing in unexpected places. Hence, we must completely anticipate
-unboxing decisions during demand analysis and reflect these decicions
+unboxing decisions during demand analysis and reflect these decisions
in demand annotations. That is the job of 'finaliseArgBoxities',
which is defined here and called from demand analysis.
@@ -1460,8 +1460,8 @@ Note [Finalising boxity for let-bound Ids]
Consider
let x = e in body
where the demand on 'x' is 1!P(blah). We want to unbox x according to
-Note [Thunk splitting] in GHC.Core.Opt.WorkWrap. We must do this becuase
-worker/wrapper ignores stricness and looks only at boxity flags; so if
+Note [Thunk splitting] in GHC.Core.Opt.WorkWrap. We must do this because
+worker/wrapper ignores strictness and looks only at boxity flags; so if
x's demand is L!P(blah) we might still split it (wrongly). We want to
switch to Boxed on any lazy demand.
@@ -1933,7 +1933,7 @@ There are two reasons we sometimes trim a demand to match a type.
1. GADTs
2. Recursive products and widening
-More on both below. But the botttom line is: we really don't want to
+More on both below. But the bottom line is: we really don't want to
have a binder whose demand is more deeply-nested than its type
"allows". So in findBndrDmd we call trimToType and findTypeShape to
trim the demand on the binder to a form that matches the type
@@ -1956,7 +1956,7 @@ For (2) consider
f _ (MkT n t) = f n t
Here f is lazy in T, but its *usage* is infinite: P(L,P(L,P(L, ...))).
-Notice that this happens because T is a product type, and is recrusive.
+Notice that this happens because T is a product type, and is recursive.
If we are not careful, we'll fail to iterate to a fixpoint in dmdFix,
and bale out entirely, which is inefficient and over-conservative.
@@ -2238,7 +2238,7 @@ The Opt_DictsStrict flag makes GHC use call-by-value for dictionaries. Why?
* Generally CBV is more efficient.
* Dictionaries are always non-bottom; and never take much work to
- compute. E.g. a dfun from an instance decl always returns a dicionary
+ compute. E.g. a dfun from an instance decl always returns a dictionary
record immediately. See DFunUnfolding in CoreSyn.
See also Note [Recursive superclasses] in TcInstDcls.
@@ -2254,7 +2254,7 @@ The Opt_DictsStrict flag makes GHC use call-by-value for dictionaries. Why?
See #17758 for more background and perf numbers.
-The implementation is extremly simple: just make the strictness
+The implementation is extremely simple: just make the strictness
analyser strictify the demand on a dictionary binder in
'findBndrDmd'.
diff --git a/compiler/GHC/Core/Opt/Exitify.hs b/compiler/GHC/Core/Opt/Exitify.hs
index ac4df91f97..89156418bc 100644
--- a/compiler/GHC/Core/Opt/Exitify.hs
+++ b/compiler/GHC/Core/Opt/Exitify.hs
@@ -306,7 +306,7 @@ Neither do we want this to happen
in …
where the floated expression `x+x` is a bit more complicated, but still not
-intersting.
+interesting.
Expressions are interesting when they move an occurrence of a variable outside
the recursive `go` that can benefit from being obviously called once, for example:
@@ -315,7 +315,7 @@ the recursive `go` that can benefit from being obviously called once, for exampl
see that it is called at most once, and hence improve the function’s
strictness signature
-So we only hoist an exit expression out if it mentiones at least one free,
+So we only hoist an exit expression out if it mentions at least one free,
non-imported variable.
Note [Jumps can be interesting]
@@ -430,7 +430,7 @@ would).
To prevent this, we need to recognize exit join points, and then disable
inlining.
-Exit join points, recognizeable using `isExitJoinId` are join points with an
+Exit join points, recognizable using `isExitJoinId` are join points with an
occurrence in a recursive group, and can be recognized (after the occurrence
analyzer ran!) using `isExitJoinId`.
This function detects joinpoints with `occ_in_lam (idOccinfo id) == True`,
diff --git a/compiler/GHC/Core/Opt/FloatOut.hs b/compiler/GHC/Core/Opt/FloatOut.hs
index b6ee3691c8..8c2961d21f 100644
--- a/compiler/GHC/Core/Opt/FloatOut.hs
+++ b/compiler/GHC/Core/Opt/FloatOut.hs
@@ -343,7 +343,7 @@ Note [floatBind for top level]
We may have a *nested* binding whose destination level is (FloatMe tOP_LEVEL), thus
letrec { foo <0,0> = .... (let bar<0,0> = .. in ..) .... }
The binding for bar will be in the "tops" part of the floating binds,
-and thus not partioned by floatBody.
+and thus not partitioned by floatBody.
We could perhaps get rid of the 'tops' component of the floating binds,
but this case works just as well.
diff --git a/compiler/GHC/Core/Opt/OccurAnal.hs b/compiler/GHC/Core/Opt/OccurAnal.hs
index a4218df867..4a67b8cdea 100644
--- a/compiler/GHC/Core/Opt/OccurAnal.hs
+++ b/compiler/GHC/Core/Opt/OccurAnal.hs
@@ -788,7 +788,7 @@ occAnalNonRecBind !env lvl imp_rule_edges bndr rhs body_usage
-- h = ...
-- g = ...
-- RULE map g = h
- -- Then we want to ensure that h is in scope everwhere
+ -- Then we want to ensure that h is in scope everywhere
-- that g is (since the RULE might turn g into h), so
-- we make g mention h.
@@ -958,7 +958,7 @@ And now the Simplifer will try to use PreInlineUnconditionally on lvl1
(which occurs just once), but because it is last we won't actually
substitute in lvl2. Sigh.
-To avoid this possiblity, we include edges from lvl2 to /both/ its
+To avoid this possibility, we include edges from lvl2 to /both/ its
stable unfolding /and/ its RHS. Hence the defn of inl_fvs in
makeNode. Maybe we could be more clever, but it's very much a corner
case.
@@ -1222,7 +1222,7 @@ more likely. Here's a real example from #1969:
$s$dm2 = \x. op dBool }
The RULES stuff means that we can't choose $dm as a loop breaker
(Note [Choosing loop breakers]), so we must choose at least (say)
-opInt *and* opBool, and so on. The number of loop breakders is
+opInt *and* opBool, and so on. The number of loop breakers is
linear in the number of instance declarations.
Note [Loop breakers and INLINE/INLINABLE pragmas]
@@ -2404,10 +2404,10 @@ A': Non-obviously saturated applications: eg build (f (\x y -> expensive))
B: Let-bindings: eg let f = \c. let ... in \n -> blah
in (build f, build f)
- Propagate one-shot info from the demanand-info on 'f' to the
+ Propagate one-shot info from the demand-info on 'f' to the
lambdas in its RHS (which may not be syntactically at the top)
- This information must have come from a previous run of the demanand
+ This information must have come from a previous run of the demand
analyser.
Previously, the demand analyser would *also* set the one-shot information, but
@@ -2550,7 +2550,7 @@ addOneInScope env@(OccEnv { occ_bs_env = swap_env, occ_bs_rng = rng_vars }) bndr
addInScope :: OccEnv -> [Var] -> OccEnv
-- See Note [The binder-swap substitution]
--- It's only neccessary to call this on in-scope Ids,
+-- It's only necessary to call this on in-scope Ids,
-- but harmless to include TyVars too
addInScope env@(OccEnv { occ_bs_env = swap_env, occ_bs_rng = rng_vars }) bndrs
| any (`elemVarSet` rng_vars) bndrs = env { occ_bs_env = emptyVarEnv, occ_bs_rng = emptyVarSet }
diff --git a/compiler/GHC/Core/Opt/Pipeline.hs b/compiler/GHC/Core/Opt/Pipeline.hs
index bbf0dc2164..28871d9fb7 100644
--- a/compiler/GHC/Core/Opt/Pipeline.hs
+++ b/compiler/GHC/Core/Opt/Pipeline.hs
@@ -244,7 +244,7 @@ getCoreToDo dflags rule_base extra_vars
-- GHC.Iface.Tidy.StaticPtrTable.
static_ptrs_float_outwards,
- -- Run the simplier phases 2,1,0 to allow rewrite rules to fire
+ -- Run the simplifier phases 2,1,0 to allow rewrite rules to fire
runWhen do_simpl3
(CoreDoPasses $ [ simpl_phase (Phase phase) "main" max_iter
| phase <- [phases, phases-1 .. 1] ] ++
@@ -417,7 +417,7 @@ for two reasons, both shown up in test perf/compiler/T16473,
with -O2 -flate-specialise
1. I found that running late-Specialise after SpecConstr, with no
- simplification in between meant that the carefullly constructed
+ simplification in between meant that the carefully constructed
SpecConstr rule never got to fire. (It was something like
lvl = f a -- Arity 1
....g lvl....
diff --git a/compiler/GHC/Core/Opt/SetLevels.hs b/compiler/GHC/Core/Opt/SetLevels.hs
index 9645a10340..1d811b12cf 100644
--- a/compiler/GHC/Core/Opt/SetLevels.hs
+++ b/compiler/GHC/Core/Opt/SetLevels.hs
@@ -772,7 +772,7 @@ But do not do so if (saves_alloc):
- the expression is not a HNF, and
- the expression is not bottoming
-Exammples:
+Examples:
* Bottoming
f x = case x of
@@ -945,7 +945,7 @@ But, as ever, we need to be careful:
Example:
... let { v = \y. error (show x ++ show y) } in ...
We want to abstract over x and float the whole thing to top:
- lvl = \xy. errror (show x ++ show y)
+ lvl = \xy. error (show x ++ show y)
...let {v = lvl x} in ...
Then of course we don't want to separately float the body (error ...)
diff --git a/compiler/GHC/Core/Opt/Simplify/Env.hs b/compiler/GHC/Core/Opt/Simplify/Env.hs
index cd3548781a..6409a6d7eb 100644
--- a/compiler/GHC/Core/Opt/Simplify/Env.hs
+++ b/compiler/GHC/Core/Opt/Simplify/Env.hs
@@ -468,7 +468,7 @@ seIdSubst:
binding site.
* The in-scope "set" usually maps x->x; we use it simply for its domain.
- But sometimes we have two in-scope Ids that are synomyms, and should
+ But sometimes we have two in-scope Ids that are synonyms, and should
map to the same target: x->x, y->x. Notably:
case y of x { ... }
That's why the "set" is actually a VarEnv Var
@@ -1160,7 +1160,7 @@ simplJoinBndr mult res_ty env id
adjustJoinPointType :: Mult
-> Type -- New result type
-> Id -- Old join-point Id
- -> Id -- Adjusted jont-point Id
+ -> Id -- Adjusted join-point Id
-- (adjustJoinPointType mult new_res_ty join_id) does two things:
--
-- 1. Set the return type of the join_id to new_res_ty
diff --git a/compiler/GHC/Core/Opt/Simplify/Utils.hs b/compiler/GHC/Core/Opt/Simplify/Utils.hs
index 433c67b35a..5e5fb8bc52 100644
--- a/compiler/GHC/Core/Opt/Simplify/Utils.hs
+++ b/compiler/GHC/Core/Opt/Simplify/Utils.hs
@@ -320,7 +320,7 @@ data ArgInfo
-- that the function diverges after being given
-- that number of args
- ai_discs :: [Int] -- Discounts for remaining value arguments (beyong ai_args)
+ ai_discs :: [Int] -- Discounts for remaining value arguments (beyond ai_args)
-- non-zero => be keener to inline
-- Always infinite
}
@@ -2001,7 +2001,7 @@ new binding is abstracted. Note that
mentioned in the abstracted body. This means:
- they are automatically in dependency order, because main_tvs is
- there is no issue about non-determinism
- - we don't gratuitiously change order, which may help (in a tiny
+ - we don't gratuitously change order, which may help (in a tiny
way) with CSE and/or the compiler-debugging experience
-}
@@ -2229,7 +2229,7 @@ Note [Merge Nested Cases]
}
which merges two cases in one case when -- the default alternative of
-the outer case scrutises the same variable as the outer case. This
+the outer case scrutinises the same variable as the outer case. This
transformation is called Case Merging. It avoids that the same
variable is scrutinised multiple times.
@@ -2500,7 +2500,7 @@ Since the case is exhaustive (all cases are) we can convert it to
DEFAULT -> e1
1# -> e2
-This may generate sligthtly better code (although it should not, since
+This may generate slightly better code (although it should not, since
all cases are exhaustive) and/or optimise better. I'm not certain that
it's necessary, but currently we do make this change. We do it here,
NOT in the TagToEnum rules (see "Beware" in Note [caseRules for tagToEnum]
diff --git a/compiler/GHC/Core/Opt/SpecConstr.hs b/compiler/GHC/Core/Opt/SpecConstr.hs
index 2eb5862039..538b457ffc 100644
--- a/compiler/GHC/Core/Opt/SpecConstr.hs
+++ b/compiler/GHC/Core/Opt/SpecConstr.hs
@@ -2011,7 +2011,7 @@ type argument giving us:
But if you look closely this wouldn't typecheck!
If we substitute `f True` with `$sf void#` we expect the type argument to be applied first
but we apply void# first.
-The easist fix seems to be just to add the void argument to the front of the arguments.
+The easiest fix seems to be just to add the void argument to the front of the arguments.
Now we get:
$sf :: Void# -> forall t. bla
$sf void @t = $se
@@ -2052,7 +2052,7 @@ calcSpecInfo :: Id -- The original function
-> ( [Var] -- Demand-decorated binders
, DmdSig -- Strictness of specialised thing
, Arity, Maybe JoinArity ) -- Arities of specialised thing
--- Calcuate bits of IdInfo for the specialised function
+-- Calculate bits of IdInfo for the specialised function
-- See Note [Transfer strictness]
-- See Note [Strictness information in worker binders]
calcSpecInfo fn (CP { cp_qvars = qvars, cp_args = pats }) extra_bndrs
@@ -2593,7 +2593,7 @@ argToPat1 env in_scope val_env arg arg_occ _arg_str
, Just arg_occs <- mb_scrut dc
= do { let (ty_args, rest_args) = splitAtList (dataConUnivTyVars dc) args
con_str, matched_str :: [StrictnessMark]
- -- con_str corrresponds 1-1 with the /value/ arguments
+ -- con_str corresponds 1-1 with the /value/ arguments
-- matched_str corresponds 1-1 with /all/ arguments
con_str = dataConRepStrictness dc
matched_str = match_vals con_str rest_args
diff --git a/compiler/GHC/Core/Opt/WorkWrap.hs b/compiler/GHC/Core/Opt/WorkWrap.hs
index 30d4993abc..27d85d0545 100644
--- a/compiler/GHC/Core/Opt/WorkWrap.hs
+++ b/compiler/GHC/Core/Opt/WorkWrap.hs
@@ -486,7 +486,7 @@ Reminder: Note [Don't w/w INLINE things], so we don't need to worry
about INLINE things here.
-What if `foo` has no specialiations, is worker/wrappered (with the
+What if `foo` has no specialisations, is worker/wrappered (with the
wrapper inlining very early), and exported; and then in an importing
module we have {-# SPECIALISE foo : ... #-}?
@@ -645,7 +645,7 @@ as simple as I thought. Consider this:
in p `seq` (v,v)
I think we'll give `f` the strictness signature `<SP(M,A)>`, where the
-`M` sayd that we'll evaluate the first component of the pair at most
+`M` says that we'll evaluate the first component of the pair at most
once. Why? Because the RHS of the thunk `v` is evaluated at most
once.
diff --git a/compiler/GHC/Core/Opt/WorkWrap/Utils.hs b/compiler/GHC/Core/Opt/WorkWrap/Utils.hs
index 1fc05737f1..5b653e751f 100644
--- a/compiler/GHC/Core/Opt/WorkWrap/Utils.hs
+++ b/compiler/GHC/Core/Opt/WorkWrap/Utils.hs
@@ -524,7 +524,7 @@ reference the wrong, inner a. A similar situation occurred in #12562, we even
saw a type variable in the worker shadowing an outer term-variable binding.
We avoid the issue by freshening the argument variables from the original fun
-RHS through 'cloneBndrs', which will also take care of subsitution in binder
+RHS through 'cloneBndrs', which will also take care of substitution in binder
types. Fortunately, it's sufficient to pick the FVs of the arg vars as in-scope
set, so that we don't need to do a FV traversal over the whole body of the
original function.
@@ -717,7 +717,7 @@ mkWWcpr. But we still want to emit warning with -DDEBUG, to hopefully catch
other cases where something went avoidably wrong.
This warning also triggers for the stream fusion library within `text`.
-We can'easily W/W constructed results like `Stream` because we have no simple
+We can't easily W/W constructed results like `Stream` because we have no simple
way to express existential types in the worker's type signature.
Note [WW for calling convention]
@@ -741,7 +741,7 @@ of work.
Performing W/W might not always be a win. In particular it's easy to break
(badly written, but common) rule frameworks by doing additional W/W splits.
-See #20364 for a more detailed explaination.
+See #20364 for a more detailed explanation.
Hence we have the following strategies with different trade-offs:
@@ -751,7 +751,7 @@ A) Never do W/W *just* for unlifting of arguments.
B) Do W/W on just about anything where it might be
beneficial.
- + Exploits pretty much every oppertunity for unlifting.
+ + Exploits pretty much every opportunity for unlifting.
- A bit of compile time/code size cost for all the wrappers.
- Can break rules which would otherwise fire. See #20364.
@@ -764,7 +764,7 @@ C) Unlift *any* (non-boot exported) functions arguments if they are strict.
- Requires either:
~ Eta-expansion at *all* call sites in order to generate
an impedance matcher function. Leading to massive code bloat.
- Essentially we end up creating a imprompto wrapper function
+ Essentially we end up creating a impromptu wrapper function
wherever we wouldn't inline the wrapper with a W/W approach.
~ There is the option of achieving this without eta-expansion if we instead expand
the partial application code to check for demands on the calling convention and
@@ -864,7 +864,7 @@ mkWWstr opts args str_marks
, args1 ++ args2
, wrap_fn1 . wrap_fn2
, wrap_arg:wrap_args ) }
- go _ _ = panic "mkWWstr: Impossible - str/arg length missmatch"
+ go _ _ = panic "mkWWstr: Impossible - str/arg length mismatch"
----------------------
-- mkWWstr_one wrap_var = (useful, work_args, wrap_fn, wrap_arg)
@@ -909,7 +909,7 @@ mkWWstr_one opts arg str_mark =
fam_envs = wo_fam_envs opts
arg_ty = idType arg
arg_dmd = idDemandInfo arg
- arg_str | isTyVar arg = NotMarkedStrict -- Type args don't get stricness marks
+ arg_str | isTyVar arg = NotMarkedStrict -- Type args don't get strictness marks
| otherwise = str_mark
do_nothing = return (badWorker, [(arg,arg_str)], nop_fn, varToCoreExpr arg)
@@ -1291,7 +1291,7 @@ combineIRDCRs = foldl' combineIRDCR NonRecursiveOrUnsure
-- through one of @dc@'s fields (so surely non-recursive).
-- * @NonRecursiveOrUnsure@ when @fuel /= Infinity@
-- and @fuel@ expansions of nested data TyCons were not enough to prove
--- non-recursivenss, nor arrive at an occurrence of @tc@ thus proving
+-- non-recursiveness, nor arrive at an occurrence of @tc@ thus proving
-- recursiveness. (So not sure if non-recursive.)
-- * @NonRecursiveOrUnsure@ when we hit an abstract TyCon (one without
-- visible DataCons), such as those imported from .hs-boot files.
@@ -1595,7 +1595,7 @@ return unboxed instead of in an unboxed singleton tuple:
We want `$wh :: Int# -> [Int]`.
We'd get `$wh :: Int# -> (# [Int] #)`.
-By considering vars as unlifted that satsify 'exprIsHNF', we catch (3).
+By considering vars as unlifted that satisfy 'exprIsHNF', we catch (3).
Why not check for 'exprOkForSpeculation'? Quite perplexingly, evaluated vars
are not ok-for-spec, see Note [exprOkForSpeculation and evaluated variables].
For (1) and (2) we would have to look at the term. WW only looks at the
@@ -1607,7 +1607,7 @@ Note [Linear types and CPR]
Remark on linearity: in both the case of the wrapper and the worker,
we build a linear case to unpack constructed products. All the
multiplicity information is kept in the constructors (both C and (#, #)).
-In particular (#,#) is parametrised by the multiplicity of its fields.
+In particular (#,#) is parameterised by the multiplicity of its fields.
Specifically, in this instance, the multiplicity of the fields of (#,#)
is chosen to be the same as those of C.
diff --git a/compiler/GHC/Core/Opt/simplifier.tib b/compiler/GHC/Core/Opt/simplifier.tib
index e0f9dc91f2..07127c7fe2 100644
--- a/compiler/GHC/Core/Opt/simplifier.tib
+++ b/compiler/GHC/Core/Opt/simplifier.tib
@@ -706,7 +706,7 @@ each iteration of Step 2 only performs one transformation, then the
entire program will to be re-analysed by Step 1, and re-traversed by
Step 2, for each transformation of the sequence. Sometimes this is
unavoidable, but it is often possible to perform a sequence of
-transformtions in a single pass.
+transformations in a single pass.
The key function, which simplifies expressions, has the following type:
@