path: root/includes/Parallel.h
author	berthold@mathematik.uni-marburg.de <unknown>	2008-09-15 13:28:46 +0000
committer	berthold@mathematik.uni-marburg.de <unknown>	2008-09-15 13:28:46 +0000
commit	cf9650f2a1690c04051c716124bb0350adc74ae7 (patch)
tree	f2e622b8eea04515fc4a19d24a1ecb57a654b33d /includes/Parallel.h
parent	7eeac4d143e9287d7c2e27ba23b84d175df49962 (diff)
download	haskell-cf9650f2a1690c04051c716124bb0350adc74ae7.tar.gz
Work stealing for sparks
Spark stealing support for the PARALLEL_HASKELL and THREADED_RTS versions of the RTS.

Spark pools are now per capability: separately allocated and held in the Capability structure. The implementation uses double-ended queues (deques) with CAS-protected access. The write end of the queue (position bottom) may only be used under mutual exclusion, i.e. by exactly one caller at a time. Multiple readers can steal()/findSpark() from the read end (position top); they are synchronised without a lock, based on a CAS of the top position. One reader wins; the others return NULL to signal failure.

Work stealing is invoked when a capability finds no other work (inside yieldCapability) and tries all capabilities 0..n-1 twice, unless a theft succeeds. Inside schedulePushWork, all considered capabilities (those which were idle and could be grabbed) are woken up. Future versions should wake capabilities immediately when a new spark is put into the local pool, from newSpark().

The patch has been re-recorded due to conflicting bugfixes in sparks.c, also fixing a (strange) conflict in the scheduler.
Diffstat (limited to 'includes/Parallel.h')
0 files changed, 0 insertions, 0 deletions