| Commit message | Author | Age | Files | Lines |
|
This commit changes the syntax and story around overlapping type
family instances. Before, we had "unbranched" instances and
"branched" instances. Now, we have closed type families and
open ones.
The behavior of open families is completely unchanged. In particular,
coincident overlap of open type family instances still works, despite
emails to the contrary.
A closed type family is declared like this:
> type family F a where
> F Int = Bool
> F a = Char
The equations are tried in order, from top to bottom, subject to
certain constraints, as described in the user manual. Declaring an
instance of a closed family is not allowed.
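A minimal sketch of how this plays out in practice (the bindings g and h
are invented for illustration, not part of the patch):
> {-# LANGUAGE TypeFamilies #-}
> type family F a where
>   F Int = Bool
>   F a   = Char
>
> g :: F Int
> g = True      -- the first equation matches, so F Int reduces to Bool
>
> h :: F Char
> h = 'x'       -- Char cannot match Int, so the second equation applies
> -- For a bare type variable a, (F a) does not reduce at all, because a
> -- might later turn out to be Int.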
|
Conflicts:
compiler/rename/RnSource.lhs
compiler/simplCore/OccurAnal.lhs
compiler/vectorise/Vectorise/Exp.hs
NB: Merging instead of rebasing for a change. During rebase Git got confused due to the lack of the submodules in my quite old fork.
|
* We need to keep the vectorised version of a variable alive while the original is alive.
* This implies that the vectorised version needs to get into the iface if the original appears in an unfolding.
|
* Vectorisation avoidance is now the default
* Types and values from unvectorised modules are permitted in scalar code
* Simplified the VECTORISE pragmas (see http://hackage.haskell.org/trac/ghc/wiki/DataParallel/VectPragma for the spec)
* Vectorisation information is now included in the annotated Core AST
|
An ordered, overlapping type family instance is introduced by 'type
instance where', followed by equations. See the new section in the
user manual (7.7.2.2) for details. The canonical example is Boolean
equality at the type level:
type family Equals (a :: k) (b :: k) :: Bool
type instance where
  Equals a a = True
  Equals a b = False
A branched family instance, such as this one, checks its equations in
order and applies only the first that matches. As explained in the
note [Instance checking within groups] in FamInstEnv.lhs, we must be
careful not to simplify, say, (Equals Int b) to False, because b might
later unify with Int.
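To make that concrete, here is a hedged sketch (Proxy and sameAsInt are
invented helpers) of code that is accepted only because (Equals Int b)
is left unreduced until b is known; it assumes the Equals family above
is in scope:
{-# LANGUAGE TypeFamilies, DataKinds, PolyKinds #-}
data Proxy a = Proxy

sameAsInt :: (Equals Int b ~ True) => Proxy b -> ()
sameAsInt _ = ()

ok :: ()
ok = sameAsInt (Proxy :: Proxy Int)
-- Here b is instantiated to Int, and (Equals Int Int) reduces to True by
-- the first equation.  Had (Equals Int b) been simplified to False while b
-- was still a variable, this perfectly valid call would be rejected.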
This commit includes all of the commits on the overlapping-tyfams
branch. SPJ requested that I combine all my commits over the past
several months into one monolithic commit. The following GHC repos are
affected: ghc, testsuite, utils/haddock, libraries/template-haskell,
and libraries/dph.
Here are some details for the interested:
- The definition of CoAxiom has been moved from TyCon.lhs to a
new file CoAxiom.lhs. I made this decision because of the
number of definitions necessary to support BranchList.
- BranchList is a GADT whose type tracks whether it is a
singleton list or not-necessarily-a-singleton-list. The reason
I introduced this type is to increase static checking of places
where GHC code assumes that a FamInst or CoAxiom is indeed a
singleton. This assumption takes place roughly 10 times
throughout the code. I was worried that a future change to GHC
would invalidate the assumption, and GHC might subtly fail to
do the right thing. By explicitly labeling CoAxioms and
FamInsts as being Unbranched (singleton) or
Branched (not-necessarily-singleton), we make this assumption
explicit and checkable. Furthermore, to enforce the accuracy of
this label, the list of branches of a CoAxiom or FamInst is
stored using a BranchList, whose constructors constrain its
type index appropriately.
I think that the decision to use BranchList is probably the most
controversial decision I made from a code design point of view.
Although I provide conversions to/from ordinary lists, it is more
efficient to use the brList... functions provided in CoAxiom than
always to convert. The use of these functions does not wander far
from the core CoAxiom/FamInst logic.
BranchLists are motivated and explained in the note [Branched axioms] in
CoAxiom.lhs.
- The CoAxiom type has changed significantly. You can see the new
type in CoAxiom.lhs. It uses a CoAxBranch type to track
branches of the CoAxiom. Correspondingly various functions
producing and consuming CoAxioms had to change, including the
binary layout of interface files.
- To get branched axioms to work correctly, it is important to have a
notion of type "apartness": two types are apart if they cannot unify,
and no substitution of variables can ever get them to unify, even
after type family simplification. (This is different from the normal
failure to unify because of the type family bit.) This notion is
encoded in tcApartTys, in Unify.lhs. Because apartness is
finer-grained than unification, tcUnifyTys now calls tcApartTys.
- CoreLinting axioms has been updated, both to reflect the new
form of CoAxiom and to enforce the apartness rules of branch
application. The formalization of the new rules is in
docs/core-spec/core-spec.pdf.
- The FamInst type (in types/FamInstEnv.lhs) has changed
significantly, paralleling the changes to CoAxiom. Of course,
this forced minor changes in many files.
- There are several new Notes in FamInstEnv.lhs, including one
discussing confluent overlap and why we're not doing it.
- lookupFamInstEnv, lookupFamInstEnvConflicts, and
lookup_fam_inst_env' (the function that actually does the work)
have all been more-or-less completely rewritten. There is a
Note [lookup_fam_inst_env' implementation] describing the
implementation. One of the changes that affects other files is
to change the type of matches from a pair of (FamInst, [Type])
to a new datatype (which now includes the index of the matching
branch). This seemed a better design.
- The TySynInstD constructor in Template Haskell was updated to
use the new datatype TySynEqn. I also bumped the TH version
number, requiring changes to DPH cabal files. (That's why the
DPH repo has an overlapping-tyfams branch.)
- As SPJ requested, I refactored some of the code in HsDecls:
* splitting up TyDecl into SynDecl and DataDecl, correspondingly
changing HsTyDefn to HsDataDefn (with only one constructor)
* splitting FamInstD into TyFamInstD and DataFamInstD and
splitting FamInstDecl into DataFamInstDecl and TyFamInstDecl
* making the ClsInstD take a ClsInstDecl, for parallelism with
InstDecl's other constructors
* changing constructor TyFamily into FamDecl
* creating a FamilyDecl type that stores the details for a family
declaration; this is useful because FamilyDecls can appear in
classes but other decls cannot
* restricting the associated types and associated type defaults for
a class to be the new, more restrictive types
* splitting cid_fam_insts into cid_tyfam_insts and cid_datafam_insts,
according to the new types
* perhaps one or two more that I'm overlooking
None of these changes has far-reaching implications.
- The user manual, section 7.7.2.2, is updated to describe the new
type family instances.
|
instance pragmas
* Correct usage of new type wrappers from MkId
* 'VECTORISE [SCALAR] type T = S' didn't work correctly across module boundaries
* Clean up 'VECTORISE SCALAR instance'
|
* Frontend support (not yet used in the vectoriser)
|
This is work mostly done by Daniel Winograd-Cort during his
internship at MSR Cambridge, with some further refactoring by me.
This commit adds support to GHCi for most top-level declarations that
can be used in Haskell source files. Class, data, newtype, type, and
instance declarations are all supported, as are type-family-related
declarations. The current set of declarations is shown by :show
bindings. As with variable bindings, entities bound by newer
declarations shadow earlier ones.
Tests are in testsuite/tests/ghci/scripts/ghci039--ghci054.
Documentation to follow.
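As a hedged illustration (the declarations themselves are invented, not
from the patch), a session can now look like this:
Prelude> data T = A | B deriving Show
Prelude> class C a where c :: a -> String
Prelude> instance C T where c = show
Prelude> c A
"A"
Prelude> data T = T Int   -- this newer T shadows the earlier one
Prelude> :show bindings   -- lists the declarations now in scope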
|
- Toplevel bindings that cannot be vectorised are reported as a warning
- '-ddump-vt-trace' has even more information about unvectorised code
- Fixed some documentation
|
versions of imported identifiers
|
they are not declared, but only imported
- Types already gained this functionality in a previous commit
- This commit adds the capability for functions
This is a crucial step towards being able to use the standard Prelude, instead of a special vectorised one.
|
- Pragma to determine how a given type is vectorised
- At this stage only the VECTORISE SCALAR variant is used by the vectoriser.
- '{-# VECTORISE SCALAR type t #-}' implies that 't' cannot contain parallel arrays and may be used in vectorised code. However, its constructors can only be used in scalar code. We use this, e.g., for 'Int'.
- May be used on imported types
See also http://hackage.haskell.org/trac/ghc/wiki/DataParallel/VectPragma
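A minimal module-level sketch of that last point (the module name is
invented, and actually compiling it would need the DPH tool chain with
vectorisation enabled):
module ScalarTypes where
-- 'Int' is only imported here (via the Prelude), yet the pragma may still
-- be attached to it; its constructors may then appear only in scalar code.
{-# VECTORISE SCALAR type Int #-}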
|
We used to have "loop breaker" and "non-rule loop breaker", but
the unqualified version in particular was pretty confusing. So
now we have "strong loop breaker" and "weak loop breaker"; see the
comments in BasicTypes and OccurAnal.
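For background, a hedged sketch (even'/odd' are invented) of the kind of
recursive group where a loop breaker has to be chosen in the first place:
even' :: Int -> Bool
even' 0 = True
even' n = odd' (n - 1)

odd' :: Int -> Bool
odd' 0 = False
odd' n = even' (n - 1)
-- At least one binding in this cycle must be kept as a loop breaker (left
-- un-inlined) so that inlining the other can terminate; the strong/weak
-- terminology above refines that notion.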
|
toplevel variable 'f'.
|
See the paper "Practical aspects of evidence based compilation in System FC"
* Coercion becomes a data type, distinct from Type
* Coercions become value-level things, rather than type-level things
(although the value is zero bits wide, like the State token).
A consequence is that a coercion abstraction increases the arity by 1
(just like a dictionary abstraction)
* There is a new constructor in CoreExpr, namely Coercion, to inject
coercions into terms
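A hedged source-level illustration of the arity point (the function
'cast' is invented):
{-# LANGUAGE TypeFamilies #-}
cast :: (a ~ Int) => a -> Int
cast x = x
-- Roughly, in Core after this patch:
--   cast = \ @a (co :: a ~ Int) (x :: a) -> x |> co
-- The coercion binder 'co' is a value-level argument (of zero width, like
-- the State token), so the Core arity of 'cast' is 2, not 1, just as if it
-- took a dictionary.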
|
- The pragma {-# VECTORISE SCALAR foo #-} marks 'foo' as a
scalar function for vectorisation and generates a
vectorised version by applying 'scalar_map' and friends.
- The set of scalar functions is not yet emitted into
interface files. This will be added in a subsequent
patch via 'VectInfo'.
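A hedged sketch of the pragma in use (the function name is invented):
-- 'plusD' computes only with scalar values, so instead of vectorising its
-- right-hand side, the vectoriser derives the lifted version from
-- 'scalar_map' and friends.
plusD :: Double -> Double -> Double
plusD x y = x + y
{-# VECTORISE SCALAR plusD #-}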
|
- Added a pragma {-# VECTORISE var = exp #-} that prevents
the vectoriser from vectorising the definition of 'var'.
Instead it uses the binding '$v_var = exp' to vectorise
'var'. The vectoriser checks that the Core type of 'exp'
matches the vectorised Core type of 'var'. (It would be
quite complicated to perform that check in the type checker
as the vectorisation of a type needs the state of the VM
monad.)
- Added parts of a related VECTORISE SCALAR pragma
- Documented -ddump-vect
- Added -ddump-vt-trace
- Some clean up
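A hedged sketch of the first pragma (the names are invented, and it
assumes that the vectorised Core type of a plain Double binding is again
Double, so the type check described above succeeds):
epsilon :: Double
epsilon = 1.0e-9

-- Hand-written definition to serve as the vectorised version of 'epsilon';
-- the vectoriser leaves epsilon's own right-hand side alone and only checks
-- that this expression's type matches epsilon's vectorised type.
epsilonV :: Double
epsilonV = 1.0e-9

{-# VECTORISE epsilon = epsilonV #-}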
|
While trying to fix #1666 (-Werror aborts too early) I decided to do
some tidying up in GHC/DriverPipeline/HscMain.
- The GhcMonad overloading is gone from DriverPipeline and HscMain
now. GhcMonad is now defined in a module of its own, and only
used in the top-level GHC layer. DriverPipeline and HscMain
use the plain IO monad and take HscEnv as an argument.
- WarnLogMonad is gone. printExceptionAndWarnings is now called
printException (the old name is deprecated). Session no longer
contains warnings.
- HscMain has its own little monad that collects warnings, and also
plumbs HscEnv around. The idea here is that warnings are collected
while we're in HscMain, but on exit from HscMain (any function) we
check for warnings and either print them (via log_action, so IDEs
can still override the printing), or turn them into an error if
-Werror is on.
- GhcApiCallbacks is gone, along with GHC.loadWithLogger. Thomas
Schilling told me he wasn't using these, and I don't see a good
reason to have them.
- there's a new pure API to the parser (suggestion from Neil Mitchell):
    parser :: String
           -> DynFlags
           -> FilePath
           -> Either ErrorMessages (WarningMessages, Located (HsModule RdrName))
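A hedged usage sketch of that API (checkSyntax is invented, and the
import locations are assumptions; the commit only gives the type above):
import HscMain  (parser)                        -- assumed export location
import DynFlags (DynFlags)
import ErrUtils (ErrorMessages, WarningMessages)

-- Parse a module from an in-memory string; the FilePath is only used for
-- error locations, so nothing is read from disk.
checkSyntax :: DynFlags -> FilePath -> String
            -> Either ErrorMessages WarningMessages
checkSyntax dflags file src =
  case parser src dflags file of
    Left errs           -> Left errs       -- parse failed
    Right (warns, _mod) -> Right warns     -- parsed; keep only the warnings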
|
Fixes a loop in the compiler, when running the dph tests
|
Implements Trac #4299. Documentation to come.
|
module
|
See the long Note [INLINE and default methods].
This patch changes a couple of data types, with a knock-on effect on
the format of interface files. A lot of files get touched, but it is a
relatively minor change. The main tiresome bit is the extra plumbing
to communicate default methods between the type checker and the
desugarer.
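For orientation, a hedged example (the class is invented) of the shape
of declaration the Note's title suggests: a class method with a default
definition and an INLINE pragma, which is the information that now has
to be communicated between the type checker and the desugarer:
class Describe a where
  describe :: a -> String
  describe _ = "<no description>"
  {-# INLINE describe #-}

data Widget = Widget

instance Describe Widget   -- no method given, so the default is used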
|
This patch collects a small raft of related changes
* Arrange that during
(a) rule matching and
(b) uses of exprIsConApp_maybe
we "look through" unfoldings only if they are active
in the phase. Doing this for (a) required a bit of
extra plumbing in the rule matching code, but I think
it's worth it.
One wrinkle is that even if inlining is off (in the 'gentle'
phase of simplification) during rule matching we want to
"look through" things with inlinings.
See SimplUtils.activeUnfInRule.
This fixes a long-standing bug, where things that were
supposed to be (say) NOINLINE could still be poked into
via exprIsConApp_maybe; see the sketch after this list.
* In the above cases, also check for (non-rule) loop breakers;
we never look through these. This fixes a bug that could make
the simplifier diverge (and did for Roman).
Test = simplCore/should_compile/dfun-loop
* Try harder not to choose a DFun as a loop breaker. This is
just a small adjustment in the OccurAnal scoring function
* In the scoring function in OccurAnal, look at the InlineRule
unfolding (if there is one) not the actual RHS, because the
former is what'll be inlined.
* Make the application of any function to dictionary arguments
CONLIKE. Thus (f d1 d2) is CONLIKE.
Encapsulated in CoreUtils.isExpandableApp
Reason: see Note [Expandable overloadings] in CoreUtils
* Make case expressions seem slightly smaller in CoreUnfold.
This reverses an unexpected consequence of charging for
alternatives.
Refactorings
~~~~~~~~~~~~
* Significantly refactor the data type for Unfolding (again).
The result is much nicer.
* Add type synonym BasicTypes.CompilerPhase = Int
and use it
Many of the files touched by this patch are simply knock-on
consequences of these two refactorings.
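As a hedged illustration of the bug mentioned in the first bullet (the
bindings are invented):
wrapped :: Maybe Int
wrapped = Just 42
{-# NOINLINE wrapped #-}

use :: Int
use = case wrapped of
        Just n  -> n     -- before this patch, exprIsConApp_maybe could still
        Nothing -> 0     -- see through 'wrapped' to (Just 42), despite NOINLINE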
|