| Commit message | Author | Age | Files | Lines |
Most of the other users of the fptools build system have migrated to
Cabal, and with the move to darcs we can now flatten the source tree
without losing history, so here goes.
The main change is that the ghc/ subdir is gone, and most of what it
contained is now at the top level. The build system now makes no
pretense at being multi-project; it is just the GHC build system.
No doubt this will break many things, and there will be a period of
instability while we fix the dependencies. A straightforward build
should work, but I haven't yet fixed binary/source distributions.
Changes to the Building Guide will follow, too.
Add support for UTF-8 source files
GHC finally has support for full Unicode in source files. Source
files are now assumed to be UTF-8 encoded, and the full range of
Unicode characters can be used, with character classifications taken
from the Data.Char implementation. This incidentally means that only
the stage2 compiler will recognise Unicode in source files, because I
was too lazy to port the Unicode classifier code into libcompat.
Additionally, the following synonyms for keywords are now recognised:
    forall symbol        (U+2200)   forall
    right arrow          (U+2192)   ->
    left arrow           (U+2190)   <-
    horizontal ellipsis  (U+22EF)   ..
There are probably more things we could add here.
This will break some source files if Latin-1 characters are being used.
In most cases this should result in a UTF-8 decoding error. Later on
if we want to support more encodings (perhaps with a pragma to specify
the encoding), I plan to do it by recoding into UTF-8 before parsing.
Internally, there were some pretty big changes:
- FastStrings are now stored in UTF-8
- Z-encoding has been moved right to the back end. Previously we
used to Z-encode every identifier on the way in for simplicity,
and only decode when we needed to show something to the user.
Instead, we now keep every string in its UTF-8 encoding, and
Z-encode right before printing it out. To avoid Z-encoding the
same string multiple times, the Z-encoding is cached inside the
FastString the first time it is requested (see the sketch after
this message).
This speeds up the compiler - I've measured some definite
improvement in parsing at least, and I expect compilations overall
to be faster too. It also cleans up a lot of cruft from the
OccName interface. Z-encoding is nicely hidden inside the
Outputable instance for Names & OccNames now.
- StringBuffers are UTF-8 too, and are now represented as
ForeignPtrs.
- I've put together some test cases, not by any means exhaustive,
but there are some interesting UTF-8 decoding error cases that
aren't obvious. Also, take a look at unicode001.hs for a demo.
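A minimal sketch of that caching scheme, using illustrative field names
rather than GHC's real FastString internals (which keep the bytes behind
a ForeignPtr and avoid IO at the use sites):

    import Data.IORef
    import qualified Data.ByteString as B    -- stand-in for the UTF-8 buffer

    data FastString = FastString
      { fs_uniq  :: Int                        -- unique key, giving O(1) equality
      , fs_bytes :: B.ByteString               -- the string itself, stored as UTF-8
      , fs_zenc  :: IORef (Maybe B.ByteString) -- Z-encoding, filled in on first use
      }

    -- Placeholder for the real Z-encoder, which rewrites non-alphanumeric chars.
    zEncode :: B.ByteString -> B.ByteString
    zEncode = id

    -- Z-encode on demand and cache the result, so printing the same name
    -- repeatedly only pays for the encoding once.
    zEncodeFS :: FastString -> IO B.ByteString
    zEncodeFS fs = do
      cached <- readIORef (fs_zenc fs)
      case cached of
        Just z  -> return z
        Nothing -> do
          let z = zEncode (fs_bytes fs)
          writeIORef (fs_zenc fs) (Just z)
          return z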
Fix some bugs in compacting GC.
Bug 1: When threading the fields of an AP or PAP, we were grabbing the
info table of the function without unthreading it first.
Bug 2: eval_thunk_selector() might accidentally find itself in
to-space when going through indirections in a compacted generation.
We must check for this case and bale out if necessary.
Bug 3: This one is somewhat nastier. When we have an AP or PAP that
points to a BCO, the layout info for the AP/PAP is in the BCO's
instruction array, which is two objects deep from the AP/PAP itself.
The trouble is, during compacting GC, we can only safely look one
object deep from the current object, because pointers from objects any
deeper might have been already updated to point to their final
destinations.
The solution is to put the arity and bitmap info for a BCO into the
BCO object itself. This means BCOs become variable-length, which is a
slight annoyance, but it also means that looking up the arity/bitmap
is quicker. There is a slight reduction in complexity in the byte
code generator due to not having to stuff the bitmap at the front of
the instruction stream.
Merge the eval-apply-branch on to the HEAD
------------------------------------------
This is a change to GHC's evaluation model in order to ultimately make
GHC more portable and to reduce complexity in some areas.
At some point we'll update the commentary to describe the new state of
the RTS. Pending that, the highlights of this change are:
- No more Su. The Su register is gone, and update frames are one
word smaller.
- Slow-entry points and arg checks are gone. Unknown function calls
are handled by automatically-generated RTS entry points (AutoApply.hc,
generated by the program in utils/genapply).
- The stack layout is stricter: there are no "pending arguments" on
the stack any more, the stack is always strictly a sequence of
stack frames.
This means that there's no need for LOOKS_LIKE_GHC_INFO() or
LOOKS_LIKE_STATIC_CLOSURE() any more, and GHC doesn't need to know
how to find the boundary between the text and data segments (BIG WIN!).
- A couple of nasty hacks in the mangler, caused by the need to
identify closure ptrs vs. info tables, have gone away.
- Info tables are a bit more complicated. See InfoTables.h for the
details.
- As a side effect, GHCi can now deal with polymorphic seq. Some bugs
in GHCi which affected primitives and unboxed tuples are now
fixed.
- Binary sizes are reduced by about 7% on x86. Performance is roughly
similar, some programs get faster while some get slower. I've seen
GHCi perform worse on some examples, but haven't investigated
further yet (GHCi performance *should* be about the same or better
in theory).
- Internally the code generator is rather better organised. I've moved
info-table generation from the NCG into the main codeGen where it is
shared with the C back-end; info tables are now emitted as arrays
of words in both back-ends. The NCG is one step closer to being able
to support profiling.
This has all been fairly thoroughly tested, but no doubt I've messed
up the commit in some way.
--------------------------------------
Make Template Haskell into the HEAD
--------------------------------------
This massive commit transfers to the HEAD all the stuff that
Simon and Tim have been doing on Template Haskell. The
meta-haskell-branch is no more!
WARNING: make sure that you
* Update your links if you are using link trees.
Some modules have been added, some have gone away.
* Do 'make clean' in all library trees.
The interface file format has changed, and you can
get strange panics (sadly) if GHC tries to read old interface files:
e.g. ghc-5.05: panic! (the `impossible' happened, GHC version 5.05):
Binary.get(TyClDecl): ForeignType
* You need to recompile the rts too; Linker.c has changed
However the libraries are almost unaltered; just a tiny change in
Base, and to the exports in Prelude.
NOTE: so far as TH itself is concerned, expression splices work
fine, but declaration splices are not complete.
---------------
The main change
---------------
The main structural change: renaming and typechecking have to be
interleaved, because we can't rename stuff after a declaration splice
until after we've typechecked the stuff before (and the splice
itself).
* Combine the renamer and typechecker monads into one
(TcRnMonad, TcRnTypes).
These two replace TcMonad and RnMonad.
* Give them a single 'driver' (TcRnDriver). This driver
replaces TcModule.lhs and Rename.lhs
* The haskell-src library package has a module
Language/Haskell/THSyntax
which defines the Haskell data type seen by the TH programmer.
* New modules:
hsSyn/Convert.hs converts THSyntax -> HsSyn
deSugar/DsMeta.hs converts HsSyn -> THSyntax
* New module typecheck/TcSplice type-checks Template Haskell splices.
-------------
Linking stuff
-------------
* ByteCodeLink has been split into
ByteCodeLink (which links)
ByteCodeAsm (which assembles)
* New module ghci/ObjLink is the object-code linker.
* compMan/CmLink is removed entirely (was out of place)
Ditto CmTypes (which was tiny)
* Linker.c initialises the linker when it is first used (no need to call
initLinker any more). Template Haskell makes it harder to know when
and whether to initialise the linker.
-------------------------------------
Gathering the LIE in the type checker
-------------------------------------
* Instead of explicitly gathering constraints in the LIE
tcExpr :: RenamedExpr -> TcM (TypecheckedExpr, LIE)
we now dump the constraints into a mutable variable carried
by the monad, so we get
tcExpr :: RenamedExpr -> TcM TypecheckedExpr
Much less clutter in the code, and more efficient too.
(Originally suggested by Mark Shields.)
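A minimal sketch of the two shapes, with illustrative names rather than
the real TcRnMonad types:

    import Data.IORef
    import Control.Monad.Reader

    type Constraint = String                  -- stand-in for a real constraint
    newtype LIE = LIE [Constraint]            -- the bag the old style returned

    -- Old:  tcExpr :: RenamedExpr -> TcM (TypecheckedExpr, LIE)
    -- New:  tcExpr :: RenamedExpr -> TcM TypecheckedExpr
    data TcEnv = TcEnv { tc_lie :: IORef [Constraint] }
    type TcM a = ReaderT TcEnv IO a

    -- Type-checking actions just emit constraints into the mutable variable.
    emitConstraint :: Constraint -> TcM ()
    emitConstraint c = do
      ref <- asks tc_lie
      liftIO (modifyIORef ref (c :))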
-----------------
Remove "SysNames"
-----------------
Because the renamer and the type checker were entirely separate,
we had to carry some rather tiresome implicit binders (or "SysNames")
along inside some of the HsDecl data structures. They were both
tiresome and fragile.
Now that the typechecker and renamer are more intimately coupled,
we can eliminate SysNames (well, mostly... default methods still
carry something similar).
-------------
Clean up HsPat
-------------
One big clean up is this: instead of having two HsPat types (InPat and
OutPat), they are now combined into one. This is more consistent with
the way that HsExpr etc is handled; there are some 'Out' constructors
for the type checker output.
So:
HsPat.InPat --> HsPat.Pat
HsPat.OutPat --> HsPat.Pat
No 'pat' type parameter in HsExpr, HsBinds, etc
Constructor patterns are nicer now: they use
HsPat.HsConDetails (sketched after this message)
for the three cases of constructor patterns:
prefix, infix, and record bindings.
The *same* data type HsConDetails is used in the type
declaration of the data type (HsDecls.TyData)
Lots of associated clean-up operations here and there. Less code.
Everything is wonderful.
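A rough, simplified sketch of the shared constructor-details type (the
real definition is parameterised differently and lives in HsPat/HsDecls):

    -- One type describes how a constructor's arguments are given, whether it
    -- occurs in a pattern or in a data type declaration.
    data HsConDetails arg
      = PrefixCon [arg]             -- C x y
      | InfixCon  arg arg           -- x `C` y
      | RecCon    [(String, arg)]   -- C { f1 = x, f2 = y }  (field name, argument)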
GlaExts => GHC.Exts
Housekeeping:
- The main goal is to remove dependencies on hslibs for a
bootstrapped compiler, leaving only a requirement that the
packages base, haskell98 and readline are built in stage 1 in
order to bootstrap. We're almost there: Posix is still required
for signal handling, but all other dependencies on hslibs are now
gone.
Uses of Addr and ByteArray/MutableByteArray are all gone from
the compiler. PrimPacked defines the Ptr type for GHC 4.08
(which didn't have it), and it defines simple BA and MBA types to
replace uses of ByteArray and MutableByteArray respectively.
- Clean up import lists. HsVersions.h now defines macros for some
modules which have moved between GHC versions; e.g. one now
imports 'GLAEXTS' to get at unboxed types and primops in the
compiler (see the sketch after this message).
Many import lists have been sorted as per the recommendations in
the new style guidelines in the commentary.
I've built the compiler with GHC 4.08.2, 5.00.2, 5.02.3, 5.04 and
itself, and everything still works here. Doubtless I've got something
wrong, though.
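A minimal sketch of the HsVersions.h trick, with an illustrative macro
name: one #define per moved module keeps import lists stable across
bootstrap compilers.

    {-# LANGUAGE CPP #-}
    -- Pick the right module name for the compiler doing the compiling.
    #if __GLASGOW_HASKELL__ >= 504
    #define GLAEXTS_MODULE GHC.Exts
    #else
    #define GLAEXTS_MODULE GlaExts
    #endif

    import GLAEXTS_MODULE

    main :: IO ()
    main = putStrLn "compiled against the right extensions module"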
Make the byte-code generator understand about unboxed
tuple returns. The previous code was just wrong.
This code is better but it is still not *right*, I fear.
Don't merge till we sort this out.
Re-enable bootstrapping: More Ptr trouble, now that it's (almost) abstract...
(F)SLIT fixes, continued...
FastString cleanup, stage 1.
The FastString type is no longer a mixture of hashed strings and
literal strings, it contains hashed strings only with O(1) comparison
(except for UnicodeStr, but that will also go away in due course). To
create a literal instance of FastString, use FSLIT("..").
By far the most common use of the old literal version of FastString
was in the pattern
ptext SLIT("...")
This combination still works, although it doesn't go via FastString
any more. The next stage will be to remove the need to use this
special combination at all, using a RULE.
To convert a FastString into an SDoc, now use 'ftext' instead of
'ptext'.
I've also removed all the FAST_STRING related macros from HsVersions.h
except for SLIT and FSLIT, just use the relevant functions from
FastString instead.
Urk, PrelPrimopWrappers is now GHCziPrimopWrappers (sigh, this should
really be done in a less fragile way).
Friday afternoon pet peeve removal: define (Util.notNull :: [a] -> Bool) and use it
------------------------
Change
GlobalName --> ExternalName
LocalName  --> InternalName
------------------------
For a long time there's been terminological confusion between
GlobalName vs LocalName (property of a Name)
GlobalId vs LocalId (property of an Id)
I've now changed the terminology for Name to be
ExternalName vs InternalName
I've also added quite a bit of documentation in the Commentary.
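A hedged sketch of the renamed distinction (the constructor and type
names here are illustrative, not the actual Name representation):

    type Module = String    -- stand-in for the real Module type

    -- A Name is either visible outside its defining module or purely local.
    data NameSort
      = ExternalName Module   -- previously "GlobalName": known by other modules
      | InternalName          -- previously "LocalName": local to one module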
Fix import wibble
Make foreign export dynamic work in GHCi. Main changes:
* Allow literal labels to propagate through the bytecode generator
and eventually be linked by the runtime linker.
* Minor mods to driver plumbing so that GHCi produces the relevant
*_stub.[ch] files, compiles them with gcc, and loads the resulting .o's
* Dereference the stable pointer in the generated C stub, rather
than passing it to a Haskell-world helper. This seems simpler and
removes the need to have a H-world helper, which in turn means the
stub .o doesn't refer to any H-world entities. This is important
because our linker can't deal with mutual recursion between
BCOs and loaded objects.
Still ToDo:
* Make it thread/GC safe. (Sigbjorn?)
* Get rid of the bits of code in DsForeign which generate the
Haskell helper. I had a go but it wasn't obvious how to do it,
so have deferred.
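For context, a small usage example of the feature being wired up here;
in today's FFI the same idea is spelled as a "wrapper" import, which
turns a Haskell closure into a C function pointer backed by a stable
pointer:

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign (FunPtr, freeHaskellFunPtr)

    -- Modern spelling of 'foreign export dynamic': wrap a Haskell function
    -- as a C function pointer that foreign code can call back into.
    foreign import ccall "wrapper"
      mkAdder :: (Int -> Int -> Int) -> IO (FunPtr (Int -> Int -> Int))

    main :: IO ()
    main = do
      fp <- mkAdder (+)        -- the generated stub dereferences a stable pointer
      -- fp could now be handed to C code expecting a function pointer
      freeHaskellFunPtr fp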
Switch over to the new hierarchical libraries
---------------------------------------------
This commit reorganises our libraries to use the new hierarchical
module namespace extension.
The basic story is this:
- fptools/libraries contains the new hierarchical libraries.
Everything in here is "clean", i.e. most deprecated stuff has
been removed.
- fptools/libraries/base is the new base package
(replacing "std") and contains roughly what was previously
in std, lang, and concurrent, minus deprecated stuff.
Things that are *not allowed* in libraries/base include:
Addr, ForeignObj, ByteArray, MutableByteArray,
_casm_, _ccall_, ``'', PrimIO
For ByteArrays and MutableByteArrays we now use UArray and
STUArray/IOUArray respectively (see the sketch after this message).
Modules previously called PrelFoo are now under
fptools/libraries/GHC. eg. PrelBase is now GHC.Base.
- fptools/libraries/haskell98 provides the Haskell 98 std.
libraries (Char, IO, Numeric etc.) as a package. This
package is enabled by default.
- fptools/libraries/network is a rearranged version of
the existing net package (the old package net is still
available; see below).
- Other packages will migrate to fptools/libraries in
due course.
NB. you need to check out fptools/libraries as well as
fptools/hslibs now. The nightly build scripts will need to be
tweaked.
- fptools/hslibs still contains (almost) the same stuff as before.
Where libraries have moved into the new hierarchy, the hslibs
version contains a "stub" that just re-exports the new version.
The idea is that code will gradually migrate from fptools/hslibs
into fptools/libraries as it gets cleaned up, and in a version or
two we can remove the old packages altogether.
- I've taken the opportunity to make some changes to the build
system, ripping out the old hslibs Makefile stuff from
mk/target.mk; the new package building Makefile code is in
mk/package.mk (auto-included from mk/target.mk).
The main improvement is that packages now register themselves at
make boot time using ghc-pkg, and the monolithic package.conf
in ghc/driver is gone.
I've updated the standard packages but haven't tested win32,
graphics, xlib, object-io, or OpenGL yet. The Makefiles in
these packages may need some further tweaks, and they'll need
pkg.conf.in files added.
- Unfortunately all this rearrangement meant I had to bump the
interface-file version and create a bunch of .hi-boot-6 files :-(
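A small sketch of that replacement story: UArray standing in for
ByteArray, built via an STUArray standing in for MutableByteArray
(sizes and contents are illustrative):

    import Data.Word (Word8)
    import Data.Array.Unboxed (UArray, (!))
    import Data.Array.MArray (newArray, writeArray)
    import Data.Array.ST (runSTUArray)

    bytes :: UArray Int Word8      -- immutable unboxed bytes: the ByteArray role
    bytes = runSTUArray $ do       -- mutable while building: the MutableByteArray role
      arr <- newArray (0, 3) 0
      writeArray arr 0 42
      return arr

    main :: IO ()
    main = print (bytes ! 0)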
Unbreak 2nd stage build by tracking recent RTS naming changes
(ATTENTION: I'm not quite sure what I'm doing here exactly,
but things seem to work... :-})
In GHCi, if we are currently using a compiled version of a module and
the user compiles a new version of the module, allow the new version
to be linked in during a :reload. (as suggested by Koen Claessen).
We can't go all the way and allow a newly compiled module to replace
an existing interpreted version, because the version numbers in the
interface file will be out-of-sync with our internal copy of the
interface. To link in a newly compiled version of an interpreted
module, you still have to do :load.
Add an ASSERT
follow-on from prev. commit; more tidyups
merge from stable, revs:
1.191.4.1 +2 -2 fptools/ghc/compiler/Makefile
1.7.4.2 +38 -13 fptools/ghc/compiler/ghci/ByteCodeFFI.lhs
1.58.4.2 +4 -3 fptools/ghc/compiler/ghci/ByteCodeGen.lhs
1.25.4.1 +40 -10 fptools/ghc/compiler/ghci/ByteCodeLink.lhs
Make the bytecode generation machinery print a helpful message if
it has to give up due to lack of 64-bit support.
Add various bits of supporting infrastructure for 64-bit values
in the bytecode generator. Making it all work is beyond the scope
of a patchlevel release, so these are unused right now.
1.25.4.2 +27 -7 fptools/ghc/compiler/ghci/ByteCodeLink.lhs
Print a civilised and helpful error message if the bytecode linker
should encounter a link failure.
1.58.4.3 +6 -8 fptools/ghc/compiler/ghci/ByteCodeGen.lhs
1.25.4.3 +1 -1 fptools/ghc/compiler/ghci/ByteCodeLink.lhs
Also give civilised messages for interactive FFI link failures.
1.25.4.4 +2 -1 fptools/ghc/compiler/ghci/ByteCodeLink.lhs
Refine the runtime-link-failure msg a bit.
Change the story about POSIX headers in C compilation.
Until now, all C code in the RTS and library cbits has by default been
compiled with settings for POSIXness enabled, that is:
#define _POSIX_SOURCE 1
#define _POSIX_C_SOURCE 199309L
#define _ISOC9X_SOURCE
If you wanted to negate this, you'd have to define NON_POSIX_SOURCE
before including headers.
This scheme has some bad effects:
* It means that ccall-unfoldings exported via interfaces from a
module compiled with -DNON_POSIX_SOURCE may not compile when
imported into a module which is not compiled with -DNON_POSIX_SOURCE.
* It overlaps with the feature tests we do with autoconf.
* It seems to have caused borkage in the Solaris builds for some
considerable period of time.
The New Way is:
* The default changes to not-being-in-Posix mode.
* If you want to force a C file into Posix mode, #include as
the **first** include the new file ghc/includes/PosixSource.h.
Most of the RTS C sources have this include now.
* NON_POSIX_SOURCE is almost totally expunged. Unfortunately
we have to retain some vestiges of it in ghc/compiler so that
modules compiled via C on Solaris using older compilers don't
break.
Add support for passing ptr/byte arrays to C.
Disable use of finalisers attached to UnlinkedBCOs, since finalisers
attached to non-atomic objects may run too early :-(
Attach finaliser for malloc'd blocks to the UnlinkedBCOs, not to
linked really-really-really BCOs. This is because an unlinked BCO
may be copied many times to generate LinkedBCOs before it dies.
Attaching finalisers to linked BCOs could mean multiple free()s on
the same address.
Use the bytecode generator's monad to keep track of the malloc'd blocks
created for each BCO. Eventually use this info to generate a finaliser
which is tied to the real, linked BCO.
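A hedged sketch of the idea with made-up names (the real bytecode
generator monad and its state are richer):

    import Control.Monad.State
    import Foreign.Ptr (Ptr)
    import Data.Word (Word8)

    -- The code-gen monad carries the blocks malloc'd while assembling the
    -- current BCO; the list is attached to the UnlinkedBCO afterwards so a
    -- finaliser can free them once the real, linked BCO dies.
    type MallocPtr = Ptr Word8
    type BcM a = StateT [MallocPtr] IO a

    recordMallocBc :: MallocPtr -> BcM ()
    recordMallocBc p = modify (p :)

    runBcM :: BcM a -> IO (a, [MallocPtr])
    runBcM act = runStateT act []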
Fix enough bugs/incompletenesses so that foreign import (static) works
fairly well on x86.
Still ToDo:
* f-i dynamic
* save/restore GC/thread context around calls
* stdcall support
* pass/return of 64-bit integral quantities on x86
* sparc implementation
Only define i_CCALL iff bci_CCALL is defined in WithHc's ByteCodes.h
Haskell-side support for FFI (foreign import only).
Since doing the FFI necessarily involves gruesome
architecture-specific knowledge about calling conventions, I have
chosen to put this knowledge in Haskell-land, in ByteCodeFFI.
The general idea is: to do a ccall, the interpreter accumulates the
args R to L on the stack, as is the normal case for tail-calls.
However, it then calls a piece of machine code created by ByteCodeFFI
and which is specific to this call site. This glue code copies args
off the Haskell stack, calls the target function, and places the
result back into a dummy placeholder created on the Haskell stack
prior to the call. The interpreter then SLIDEs and RETURNs in the
normal way.
The magic glue code copies args off the Haskell stack and pushes them
directly on the C stack (x86) and/or into regs (sparc et al). Because
the code is made up specifically for this call site, it can do all
that non-interpretively. The address (of the C fn to call) is
presented as just another tagged Addr# on the Haskell stack. This
makes f-i-dynamic trivial since the first arg is the said Addr#.
Presently ByteCodeFFI only knows how to generate x86 code sequences.
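For reference, the kind of call this machinery serves: a static ccall
that the interpreter can now run through the generated glue code (the
particular C function is just an example):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- At a call site the interpreter pushes the args, then jumps to the
    -- machine-code glue, which moves them into the C calling convention,
    -- calls sin(), and stores the result back on the Haskell stack.
    foreign import ccall unsafe "math.h sin"
      c_sin :: CDouble -> CDouble

    main :: IO ()
    main = print (c_sin 0)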
Hmm... are the Cambridge offices running low on oxygen? Desloppified to make stage2 work again.
oops: I changed the names of some of the GC stubs, and didn't realise they
were mentioned here too.
Don't rely so much on exports from ArrayBase.
Implement tagToEnum# for the bytecode system. Blargh. We spot tail-calls
tagToEnum# <type> arg
and emit code to push the arg, then do a bytecode test-sequence to
determine what value it is, push the relevant constructor, and merge
control flow again, at a label which does the normal tail-call
sequence: slide the constructor down to the sequel and enter it.
Blargyle, or as some would say, barferama.
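A usage example of the primop in question (nowadays it is exported from
GHC.Exts; Bool is used here just as a convenient enumeration):

    {-# LANGUAGE MagicHash #-}
    import GHC.Exts (Int#, tagToEnum#)

    -- tagToEnum# turns a constructor tag into the corresponding constructor;
    -- the bytecode scheme above open-codes it as a test-and-push sequence.
    toBool :: Int# -> Bool
    toBool i = tagToEnum# i     -- 0# gives False, 1# gives True

    main :: IO ()
    main = print (toBool 1#)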
GHCi fixes:
- expressions are now compiled in a pseudo-module "$Interactive",
which avoids some problems with storage of demand-loaded declarations.
- compilation manager now detects when it needs to read the interface
for a module, even if it is already compiled. GHCi never demand-loads
interfaces now.
- (from Simon PJ) fix a problem with the recompilation checker, which
meant that modules were sometimes not recompiled when they should
have been.
- ByteCodeGen/Link: move linker related stuff into ByteCodeLink.
Unload temporary bindings from the ClosureEnv properly at cmLoadModule time.
VoidRep call/return support for interpreted code.
oops, filterNameMap was the wrong way around (or I was using it wrong).
It should *keep* the named modules, not throw them away.
Support stack overflow checks in interpreted code. The deal is:
* If a BCO is reckoned to need less than iNTERP_STACK_CHECK_THRESH
words of stack, no stack check insn is added.
* If a BCO needs >= iNTERP_STACK_CHECK_THRESH words, an explicit
check insn is added.
The interpreter ensures at least iNTERP_STACK_CHECK_THRESH words of
stack are available before running each BCO, regardless of whether or
not the BCO contains an explicit check insn too.
By setting iNTERP_STACK_CHECK_THRESH to a suitably large level
(currently 50), most BCOs only require the implicit stack check, which
avoids the overhead of decoding one extra insn per BCO. BCOs which do
have a stack check insn then do 2 stack checks rather than 1
(implicit, then explicit), but this is rare enough that we don't care.
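A minimal sketch of the decision rule described above (the instruction
type and helper name are illustrative):

    iNTERP_STACK_CHECK_THRESH :: Int
    iNTERP_STACK_CHECK_THRESH = 50

    data BCInstr = STKCHECK Int | OtherInstr   -- toy instruction type

    -- Prepend an explicit stack-check instruction only when the BCO needs
    -- more stack than the interpreter's implicit per-BCO headroom.
    maybeAddStackCheck :: Int -> [BCInstr] -> [BCInstr]
    maybeAddStackCheck stackUse body
      | stackUse >= iNTERP_STACK_CHECK_THRESH = STKCHECK stackUse : body
      | otherwise                             = body

    main :: IO ()
    main = print (length (maybeAddStackCheck 60 [OtherInstr]))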
When linking a bytecode module, only add top-level (isGlobalName)
bindings into the returned augmented closure env.
remove CAF List hack; the RTS has support for CAF retention and reversion.
Add support for Word#.
Fill in some more missing cases.
More stuff to do with primop support in the interpreter. Also, track
some changes to the libraries.
Use mkApUpd0# to ensure top-level things are updateable.
Hopefully sort out heap-stack movement for constructors/cases.
Split ByteCodeGen up into more manageable-sized pieces.