Commit messages
| |
compiler/ghc.mk contains:
@echo "#define $(HostArch_CPP)_HOST_ARCH 1" >> $@
@echo "#define $(TargetArch_CPP)_HOST_ARCH 1" >> $@
This leads to warnings like:
> warning: 'x86_64_HOST_ARCH' is not defined, evaluates to 0 [-Wundef]
Reviewers: austin, bgamari, erikd, simonmar
Reviewed By: simonmar
Subscribers: rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D3555
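For context, the -Wundef warning fires when an identifier that was never defined is used directly in an #if, where it silently evaluates to 0; the defined() form never warns. A minimal illustration (not necessarily the fix applied in this patch):
```
/* With -Wundef, testing a possibly-undefined macro directly warns: */
#if x86_64_HOST_ARCH              /* warning if the macro is not defined */
int host_is_x86_64 = 1;
#endif

/* Asking the question explicitly is always safe: */
#if defined(x86_64_HOST_ARCH)
int host_is_x86_64_too = 1;
#endif

int keep_translation_unit_nonempty;   /* in case neither branch fires */
```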
| |
Our new CPP linter enforces this.
| |
This both says what we mean and silences a bunch of spurious CPP linting
warnings. This pragma is supported by all CPP implementations which we
support.
Reviewers: austin, erikd, simonmar, hvr
Reviewed By: simonmar
Subscribers: rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D3482
| |
While debugging an unrelated issue I noticed that we leak a
TimerQueueTimer on exit since we don't necessarily call stopTicker
before exitTicker. Fix this.
Test Plan: Validate on Windows
Reviewers: simonmar, austin, erikd
Reviewed By: simonmar
Subscribers: rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D3477
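A minimal sketch of that kind of cleanup, with hypothetical globals standing in for the RTS ticker state (not the actual patch): release the TimerQueueTimer on the exit path whether or not the ticker was stopped first.
```
#include <windows.h>

static HANDLE timer_queue = NULL;   /* hypothetical stand-ins for the */
static HANDLE tick_timer  = NULL;   /* RTS ticker state               */

void exitTickerSketch(void)
{
    if (tick_timer != NULL) {
        /* INVALID_HANDLE_VALUE: wait for any in-flight callback to finish. */
        DeleteTimerQueueTimer(timer_queue, tick_timer, INVALID_HANDLE_VALUE);
        tick_timer = NULL;
    }
    if (timer_queue != NULL) {
        DeleteTimerQueue(timer_queue);
        timer_queue = NULL;
    }
}
```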
| |
This will make it a bit easier to maintain consistent output in the testsuite.
| |
[skip ci]
There were some old file names (.lhs, ...) in comments.
* rts/win32/ThrIOManager.c
- Conc.lhs -> Conc.hs
* rts/PrimOps.cmm
- ByteCodeLink.lhs -> ByteCodeLink.hs
- StgMiscClosures.hc -> StgMiscClosures.cmm
* rts/AutoApply.h
- AutoApply.hc -> AutoApply.cmm
* rts/HeapStackCheck.cmm
- PrimOps.hc -> PrimOps.cmm
* rts/LdvProfile.h
- Updates.hc -> Updates.cmm
* rts/Schedule.c
- StgStartup.hc -> StgStartup.cmm
* rts/Weak.c
- StgMiscClosures.hc -> StgMiscClosures.cmm
Reviewers: bgamari, austin, erikd, simonmar
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D3075
| |
Summary:
This patch is courtesy of @awson.
Currently, whenever GHC catches a segfault on Windows, it simply reports the
somewhat uninformative message
`Segmentation fault/access violation in generated code`. This patch adds to
the message the type of violation (read/write/dep) and location information,
which should help debugging segfaults in the future.
Fixes #13108.
Test Plan: Build on Windows
Reviewers: austin, erikd, bgamari, simonmar, Phyx
Reviewed By: bgamari, Phyx
Subscribers: awson, thomie, #ghc_windows_task_force
Differential Revision: https://phabricator.haskell.org/D2969
GHC Trac Issues: #13108
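A sketch of where that extra information comes from (illustrative, not the patch itself): for EXCEPTION_ACCESS_VIOLATION, the EXCEPTION_RECORD stores the kind of access in ExceptionInformation[0] (0 = read, 1 = write, 8 = DEP/execute) and the faulting address in ExceptionInformation[1].
```
#include <windows.h>
#include <stdio.h>

static void report_access_violation(const EXCEPTION_RECORD *rec)
{
    ULONG_PTR kind = rec->ExceptionInformation[0];
    ULONG_PTR addr = rec->ExceptionInformation[1];
    const char *what = (kind == 0) ? "read"
                     : (kind == 1) ? "write"
                     : (kind == 8) ? "execute (DEP)"
                     : "unknown";
    fprintf(stderr,
            "Segmentation fault/access violation in generated code"
            " (%s of address %p)\n", what, (void *)addr);
}
```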
| |
This code has been broken on 64-bit systems for some time: the length
and timeout arguments of `addIORequest` and `addDelayRequest`,
respectively, were declared as `int`. However, they were passed Haskell
integers from their respective primops. Integer overflow and madness
ensued. This resulted in #7325 and who knows what else.
Also, there were a few left-over `BOOL`s in here which were not passed
to Windows system calls; these were changed to C99 `bool`s.
However, there is still a bit of signedness inconsistency within the
`delay#` call-chain,
* `GHC.Conc.IO.threadDelay` and the `delay#` primop accept `Int`
arguments
* The `delay#` implementation in `PrimOps.cmm` expects the timeout as
a `W_`
* `AsyncIO.c:addDelayRequest` expects an `HsInt` (was `int` prior to
this patch)
* `IOManager.c:AddDelayRequest` expects an `HsInt` (was `int`)
* The Windows `Sleep` function expects a `DWORD` (which is unsigned)
Test Plan: Validate on Windows
Reviewers: erikd, austin, simonmar, Phyx
Reviewed By: Phyx
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2861
GHC Trac Issues: #7325
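A sketch of the width change in prototype form. HsInt (from HsFFI.h) is the C type matching Haskell's Int, so it is 64 bits on 64-bit platforms; the result type below is only a guess for illustration.
```
#include "HsFFI.h"     /* defines HsInt, the C counterpart of Haskell's Int */
#include <stdbool.h>

/* Before (sketch): a 64-bit Haskell Int silently truncates to a 32-bit int. */
/* bool addDelayRequest(int usecs); */

/* After (sketch): the C width matches what the delay# primop passes. */
bool addDelayRequest(HsInt usecs);
```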
| |
Summary:
This commit makes various improvements and addresses some issues with
Compact Regions (aka Compact Normal Forms).
This was the most important thing I wanted to fix. Compaction
previously prevented GC from running until it was complete, which
would be a problem in a multicore setting. Now, we compact using a
hand-written Cmm routine that can be interrupted at any point. When a
GC is triggered during a sharing-enabled compaction, the GC has to
traverse and update the hash table, so this hash table is now stored
in the StgCompactNFData object.
Previously, compaction consisted of a deepseq using the NFData class,
followed by a traversal in C code to copy the data. This is now done
in a single pass with hand-written Cmm (see rts/Compact.cmm). We no
longer use the NFData instances, instead the Cmm routine evaluates
components directly as it compacts.
The new compaction is about 50% faster than the old one with no
sharing, and a little faster on average with sharing (the cost of the
hash table dominates when we're doing sharing).
Static objects that don't (transitively) refer to any CAFs don't need
to be copied into the compact region. In particular this means we
often avoid copying Char values and small Int values, because these
are static closures in the runtime.
Each Compact# object can support a single compactAdd# operation at any
given time, so the Data.Compact library now enforces mutual exclusion
using an MVar stored in the Compact object.
We now get exceptions rather than killing everything with a barf()
when we encounter an object that cannot be compacted (a function, or a
mutable object). We now also detect pinned objects, which can't be
compacted either.
The Data.Compact API has been refactored and cleaned up. A new
compactSize operation returns the size (in bytes) of the compact
object.
Most of the documentation is in the Haddock docs for the compact
library, which I've expanded and improved here.
Various comments in the code have been improved, especially the main
Note [Compact Normal Forms] in rts/sm/CNF.c.
I've added a few tests, and expanded a few of the tests that were
there. We now also run the tests with GHCi, and in a new test way
that enables sanity checking (+RTS -DS).
There's a benchmark in libraries/compact/tests/compact_bench.hs for
measuring compaction speed and comparing sharing vs. no sharing.
The field totalDataW in StgCompactNFData was unnecessary.
Test Plan:
* new unit tests
* validate
* tested manually that we can compact Data.Aeson data
Reviewers: gcampax, bgamari, ezyang, austin, niteria, hvr, erikd
Subscribers: thomie, simonpj
Differential Revision: https://phabricator.haskell.org/D2751
GHC Trac Issues: #12455
| |
Summary:
Fix issues preventing x86 GHC from building on Windows and
fix a segfault in the testsuite.
Test Plan: ./validate
Reviewers: austin, erikd, simonmar, bgamari
Reviewed By: bgamari
Subscribers: #ghc_windows_task_force, thomie
Differential Revision: https://phabricator.haskell.org/D2789
| |
Test Plan: Validate on lots of platforms
Reviewers: erikd, simonmar, austin
Reviewed By: erikd, simonmar
Subscribers: michalt, thomie
Differential Revision: https://phabricator.haskell.org/D2699
| |
Summary:
NOTE: I have been able to do simple testing on emulated NUMA nodes.
Real hardware would be needed for a proper test.
D2199 added NUMA support for Linux; I have just filled in the missing pieces
following the description of the Linux APIs.
Test Plan:
Use `bcdedit.exe /set groupsize 2` to modify the kernel again (Similar to D2533).
This generates some NUMA nodes:
```
Logical Processor to NUMA Node Map:
NUMA Node 0:
**
--
NUMA Node 1:
--
**
Approximate Cross-NUMA Node Access Cost (relative to fastest):
00 01
00: 1.1 1.1
01: 1.0 1.0
```
run ` ../test-numa.exe +RTS --numa -RTS`
and check PerfMon for NUMA allocations.
Reviewers: simonmar, erikd, bgamari, austin
Reviewed By: simonmar
Subscribers: thomie, #ghc_windows_task_force
Differential Revision: https://phabricator.haskell.org/D2534
GHC Trac Issues: #12602
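For reference, a sketch of the Win32 NUMA primitives involved, node-local allocation and node enumeration; this is illustrative only, not the code added by the patch.
```
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0600     /* Vista+ NUMA APIs */
#endif
#include <windows.h>

/* Reserve and commit memory, preferring the given NUMA node. */
static void *alloc_on_node(SIZE_T bytes, DWORD node)
{
    return VirtualAllocExNuma(GetCurrentProcess(), NULL, bytes,
                              MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE, node);
}

/* Nodes are numbered from 0 up to the highest node number. */
static ULONG numa_node_count(void)
{
    ULONG highest = 0;
    if (!GetNumaHighestNodeNumber(&highest)) {
        return 1;               /* treat failure as a single-node machine */
    }
    return highest + 1;
}
```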
| |
Windows support for more than 64 logical processors is implemented
using processor groups.
Essentially this keeps the existing maximum of 64 processors per group
and keeps the affinity mask a 64-bit value, but adds a hierarchy above
that.
This support was added in Windows 7, so we need to detect at runtime
whether the APIs are available, since our minimum supported version is
Windows Vista.
The maximum number of groups supported at this time is 4, so 256 logical
cores. The group indices are 0-based. One thread can have affinity with
multiple groups.
See
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684251.aspx
and particularly helpful is the whitepaper: 'Supporting Systems that
have more than 64 processors' at
https://msdn.microsoft.com/en-us/library/windows/hardware/dn653313.aspx
Processor groups are not guaranteed to be uniformly distributed, nor
guaranteed to be filled before the next group is needed. The OS will
assign processors to groups based on physical proximity and will never
partially assign cores from one physical CPU to more than one group. If
you have two 48-core CPUs you end up with two groups of 48 logical
CPUs. Now add a third CPU with 10 cores, and the group it is assigned to
depends on where the socket is on the board.
Test Plan:
./validate or make test -c . in the rts test folder.
This tests for regressions; to test this particular functionality
itself, run:
<program> +RTS -N -qa -RTS
Test is detailed in description.
Reviewers: bgamari, simonmar, austin, erikd
Reviewed By: simonmar
Subscribers: thomie, #ghc_windows_task_force
Differential Revision: https://phabricator.haskell.org/D2533
GHC Trac Issues: #11054
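A sketch of the processor-group APIs this description refers to (Windows 7+); the commit notes they must be detected at runtime (typically via GetProcAddress), and the names and policy below are illustrative only.
```
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0601     /* processor-group APIs are Windows 7+ */
#endif
#include <windows.h>

/* Count logical CPUs across all processor groups. */
static DWORD total_logical_cpus(void)
{
    DWORD total = 0;
    WORD groups = GetActiveProcessorGroupCount();
    for (WORD g = 0; g < groups; g++) {
        total += GetActiveProcessorCount(g);
    }
    return total;
}

/* Bind the current thread to one logical CPU in a given (0-based) group. */
static BOOL bind_to_group_cpu(WORD group, DWORD cpu_in_group)
{
    GROUP_AFFINITY ga = {0};
    ga.Group = group;
    ga.Mask  = ((KAFFINITY)1) << cpu_in_group;  /* at most 64 CPUs per group */
    return SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL);
}
```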
| |
Summary:
We stumbled upon a case where an external library (OpenCL) does not work
if a specific address (0x200000000) is taken.
It so happens that `osReserveHeapMemory` starts trying to mmap at 0x200000000:
```
void *hint = (void*)((W_)8 * (1 << 30) + attempt * BLOCK_SIZE);
at = osTryReserveHeapMemory(*len, hint);
```
This makes it impossible to use Haskell programs compiled with GHC 8
with C functions that use OpenCL.
See this example https://github.com/chpatrick/oclwtf for a repro.
This patch allows the user to work around this kind of behavior outside
our control by overriding the starting address through an RTS
command-line flag.
Reviewers: bgamari, Phyx, simonmar, erikd, austin
Reviewed By: Phyx, simonmar
Subscribers: rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D2513
| |
Summary:
The aim here is to reduce the number of remote memory accesses on
systems with a NUMA memory architecture, typically multi-socket servers.
Linux provides a NUMA API for doing two things:
* Allocating memory local to a particular node
* Binding a thread to a particular node
When given the +RTS --numa flag, the runtime will
* Determine the number of NUMA nodes (N) by querying the OS
* Assign capabilities to nodes, so cap C is on node C%N
* Bind worker threads on a capability to the correct node
* Keep separate free lists in the block layer for each node
* Allocate the nursery for a capability from node-local memory
* Allocate blocks in the GC from node-local memory
For example, using nofib/parallel/queens on a 24-core 2-socket machine:
```
$ ./Main 15 +RTS -N24 -s -A64m
Total time 173.960s ( 7.467s elapsed)
$ ./Main 15 +RTS -N24 -s -A64m --numa
Total time 150.836s ( 6.423s elapsed)
```
The biggest win here is expected to be allocating from node-local
memory, so that means programs using a large -A value (as here).
According to perf, on this program the number of remote memory accesses
were reduced by more than 50% by using `--numa`.
Test Plan:
* validate
* There's a new flag --debug-numa=<n> that pretends to do NUMA without
actually making the OS calls, which is useful for testing the code
on non-NUMA systems.
* TODO: I need to add some unit tests
Reviewers: erikd, austin, rwbarton, ezyang, bgamari, hvr, niteria
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2199
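A sketch of the two OS-level primitives this scheme rests on, shown with libnuma (illustrative; the RTS's own calls may differ): allocate memory on a particular node and bind the current thread to that node.
```
#include <numa.h>       /* libnuma; link with -lnuma */
#include <stddef.h>

/* Put capability cap_no on node cap_no % N and give it node-local memory. */
static void *nursery_for_capability(int cap_no, size_t bytes)
{
    if (numa_available() == -1) {
        return NULL;                     /* no NUMA support on this system */
    }
    int n_nodes = numa_max_node() + 1;   /* N */
    int node    = cap_no % n_nodes;      /* cap C is on node C%N */

    numa_run_on_node(node);              /* bind this thread to the node */
    return numa_alloc_onnode(bytes, node);
}
```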
| |
This makes the code a little more modular and allows the removal of some
CPP hackery. Providing dummy implementations of the `m32_*`
functions (which simply call `errorBelch`) means that the call sites
for these functions are syntax-checked even when `RTS_LINKER_USE_MMAP`
is `0`.
Also changes some size parameter types from `unsigned int` to `size_t`.
Test Plan: Validate on Linux, OS X and Windows
Reviewers: Phyx, hsyl20, bgamari, simonmar, austin
Reviewed By: simonmar, austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2237
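A sketch of the dummy-implementation pattern described above (illustrative names and signatures; the real m32_* API may differ): when RTS_LINKER_USE_MMAP is 0 the functions still exist, so their call sites are parsed and type-checked, but they report an error if ever reached.
```
#include <stddef.h>

/* errorBelch is the RTS's error-reporting function. */
extern void errorBelch(const char *fmt, ...);

#if RTS_LINKER_USE_MMAP

void *m32_alloc(size_t size, size_t alignment);
void  m32_allocator_flush(void);

#else

static inline void *m32_alloc(size_t size, size_t alignment)
{
    (void)size; (void)alignment;
    errorBelch("m32_alloc: RTS_LINKER_USE_MMAP is 0");
    return NULL;
}

static inline void m32_allocator_flush(void)
{
    errorBelch("m32_allocator_flush: RTS_LINKER_USE_MMAP is 0");
}

#endif
```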
| |
The first argument of 'osFreeMBlocks' ought to have the same type as the
return value from 'osGetMBlocks'. Make it so.
Reviewers: austin, simonmar, bgamari
Reviewed By: bgamari
Subscribers: erikd, rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D2235
| |
The `nat` type was an alias for `unsigned int` with a comment saying
it was at least 32 bits. We keep the typedef in case client code is
using it but mark it as deprecated.
Test Plan: Validated on Linux, OS X and Windows
Reviewers: simonmar, austin, thomie, hvr, bgamari, hsyl20
Differential Revision: https://phabricator.haskell.org/D2166
| |
At some point there may have been a reason for the
`INLINE_ME` macro, but not anymore...
Reviewed By: austin
Differential Revision: https://phabricator.haskell.org/D2041
| |
Use of this helper function was removed in:
commit 3c9fc104337a142fe4f375d30d7a6b81d55a70c1
Author: Brian Brooks <brooks.brian@gmail.com>
Date: Thu Jul 10 02:55:33 2014 -0500
Avoid unnecessary clock_gettime() syscalls in GC stats.
Noticed by uselex.rb:
getThreadCPUTime: [R]: exported from:
./rts/dist/build/posix/GetTime.p_o
Signed-off-by: Sergei Trofimovich <siarheit@google.com>
| |
Summary: Fix the exit code for Windows to match the expected one for the out-of-memory test
Test Plan: ./validate
Reviewers: simonmar, austin, thomie, bgamari
Reviewed By: thomie, bgamari
Differential Revision: https://phabricator.haskell.org/D1753
GHC Trac Issues: #11422
| |
Since the introduction of the two-step allocator, the RTS asks the kernel
for a large upfront mmap'd region of memory (on the order of terabytes).
While we have no
expectation that this entire region will be backed by physical memory,
this scheme nevertheless fails on some systems with resource limits.
Here we use a back-off scheme to reduce our allocation request until we
find a size agreeable to the kernel. Fixes #10877.
This also fixes a latent bug wherein the heap reservation retry logic
would fail to free the previously reserved address space, which would
likely result in a heap allocation failure.
Test Plan:
set address space limit with `ulimit -v 67108864` and try running
a compiled program
Reviewers: simonmar, austin
Reviewed By: simonmar
Subscribers: thomie, RyanGlScott
Differential Revision: https://phabricator.haskell.org/D1405
GHC Trac Issues: #10877
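A sketch of the back-off idea under POSIX (constants and names are illustrative, not the RTS's): reserve with PROT_NONE and halve the request whenever the kernel or a ulimit refuses, so a failed attempt leaves nothing mapped.
```
#include <sys/mman.h>
#include <stddef.h>

static void *reserve_heap_address_space(size_t *len)
{
    while (*len >= ((size_t)1 << 30)) {             /* give up below 1 GiB */
        void *at = mmap(NULL, *len, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (at != MAP_FAILED) {
            return at;                              /* this size was accepted */
        }
        *len /= 2;                                  /* back off and retry */
    }
    return NULL;                                    /* reservation failed */
}
```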
| |
Previously this was introduced in D524 as a compile-time constant.
Sadly, this isn't flexible enough to allow for environments where
ulimits restrict the maximum address space size (see, for instance,
Consequently, we are forced to make this dynamic. In principle this
shouldn't be so terrible, as we can place both the beginning and end
addresses within the same cache line, likely incurring only an
additional instruction or so in HEAP_ALLOCED.
Test Plan: validate
Reviewers: austin, simonmar
Reviewed By: simonmar
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1353
GHC Trac Issues: #10877
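A sketch of what the dynamic check amounts to (names are illustrative): the reservation's begin and end live in one small structure, so HEAP_ALLOCED remains a simple bounds test.
```
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t W_;                  /* RTS word type, sketched here */

static struct {
    W_ begin;                          /* set when the heap is reserved */
    W_ end;                            /* begin + reserved length       */
} mblock_address_space;

static inline bool heap_alloced(const void *p)
{
    return (W_)p >= mblock_address_space.begin
        && (W_)p <  mblock_address_space.end;
}
```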
| |
Summary:
The current OS memory allocator conflates the concepts of allocating
address space and allocating memory, which makes the HEAP_ALLOCED()
implementation excessively complicated (as the only thing it cares
about is address space layout) and slow. Instead, what we want
is to allocate a single insanely large contiguous block of address
space (to make HEAP_ALLOCED() checks fast), and then commit subportions
of that in 1MB blocks as we did before.
This is currently behind a flag, USE_LARGE_ADDRESS_SPACE, that is only enabled for
certain OSes.
Test Plan: validate
Reviewers: simonmar, ezyang, austin
Subscribers: thomie, carter
Differential Revision: https://phabricator.haskell.org/D524
GHC Trac Issues: #9706
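A sketch of the two steps on POSIX, assuming a 64-bit machine (sizes are illustrative; on Windows the analogue is VirtualAlloc with MEM_RESERVE followed by MEM_COMMIT): reserve a huge contiguous range up front, then commit 1MB pieces on demand.
```
#include <sys/mman.h>
#include <stdio.h>

#define RESERVE ((size_t)1 << 40)    /* reserve 1 TB of address space */
#define MBLOCK  ((size_t)1 << 20)    /* commit 1 MB blocks on demand  */

int main(void)
{
    /* Step 1: grab a huge contiguous range, backed by nothing. */
    void *base = mmap(NULL, RESERVE, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("reserve"); return 1; }

    /* Step 2: commit the first mega-block so it can actually be used. */
    if (mprotect(base, MBLOCK, PROT_READ | PROT_WRITE) != 0) {
        perror("commit"); return 1;
    }

    ((char *)base)[0] = 42;          /* touch committed memory */
    return 0;
}
```
With all heap memory inside one known reserved range, HEAP_ALLOCED() reduces to a bounds check, which is the point of the exercise.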
| |
x86 and x64
Summary:
On Windows, the default action for things like division by zero and
segfaults is to pop up a Dr. Watson error reporting dialog if the exception
is unhandled by the user code.
This is a pain when we are SSHed into a Windows machine, or when we
want to debug a problem with gdb (gdb will get a first and second chance to
handle the exception, but if it doesn't the pop-up will show).
veh_excn provides two macros, `BEGIN_CATCH` and `END_CATCH`, which
will catch such exceptions in the entire process and die by
printing a message and calling `stg_exit(1)`.
Previously this code was handled using SEH (Structured Exception Handling);
however, each compiler and platform has its own way of dealing with SEH.
`MSVC` compilers have the keywords `__try`, `__catch` and `__except` to have the
compiler generate the appropriate SEH handler code for you.
`MinGW` compilers have no such keywords and require you to set the SEH
handlers manually; however, because SEH is implemented differently on x86
and x64, the way to use it from GCC differs.
`x86`: SEH is based on the stack; the SEH handlers are available at `FS[0]`.
On startup one would only need to add a new handler there. This has a
number of issues, such as handlers being hard to share, and it can be
exploited.
`x64`: In order to fix the issues with the way SEH worked on x86, on x64 SEH
handlers are statically compiled and added to the .pdata section by the
compiler. Instead of being thread-global they can now be image-global, since
you have to specify the `RVA` of the region of code that the handlers govern.
On x64 you can dynamically allocate SEH handlers, but it seems (based on
experimentation; it is very under-documented) that the dynamic calls cannot
override static SEH handlers in the .pdata section.
Because of this, and because GHC no longer needs to support versions older
than Windows XP, the better alternative for handling errors is VEH
(Vectored Exception Handling), introduced in XP.
The bonus is that because VEH is a runtime construct, the API is the same
for both x86 and x64 (note that the Context object does contain CPU-specific
structures) and the calls are the same across compilers, which means this
file can be simplified quite a bit.
Using VEH also means we don't have to worry about the dynamic code generated by GHCi.
Test Plan:
Prior to this diff the tests for `derefnull` and `divbyzero` seem to have been disabled for Windows.
To reproduce the issue on x64:
1) open ghci
2) import GHC.Base
3) run: 1 `divInt` 0
which should lead to ghci crashing and a Watson error box being displayed.
After applying the patch, run:
make TEST="derefnull divbyzero"
on both x64 and x86 builds of ghc to verify the fix.
Reviewers: simonmar, austin
Reviewed By: austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D691
GHC Trac Issues: #6079
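A sketch of the VEH shape described above (illustrative only; the real handler differs and calls stg_exit): a single runtime registration covers the whole process, including code generated by GHCi, on both x86 and x64.
```
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

static LONG CALLBACK veh_sketch(PEXCEPTION_POINTERS info)
{
    switch (info->ExceptionRecord->ExceptionCode) {
    case EXCEPTION_ACCESS_VIOLATION:
        fprintf(stderr, "Segmentation fault/access violation in generated code\n");
        exit(1);                          /* the RTS calls stg_exit(1) here */
    case EXCEPTION_INT_DIVIDE_BY_ZERO:
        fprintf(stderr, "divide by zero\n");
        exit(1);
    default:
        return EXCEPTION_CONTINUE_SEARCH; /* not ours: let others handle it */
    }
}

static void install_veh_sketch(void)
{
    /* First = 1: run before any other vectored handlers. */
    AddVectoredExceptionHandler(1, veh_sketch);
}
```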
| |
This reverts commit f0fcc41d755876a1b02d1c7c79f57515059f6417.
New changes: now works on 32-bit platforms too. I added some basic
support for 64-bit subtraction and comparison operations to the x86
NCG.
| |
This reverts commit 35672072b4091d6f0031417bc160c568f22d0469.
Conflicts:
compiler/main/DriverPipeline.hs
| |
This helps identify threads in gdb, particularly in processes with a
lot of threads.
| |
Summary:
In preparation for indirecting all references to closures,
we rename _closure to _static_closure to ensure any old code
will get an undefined symbol error. In order to reference
a closure foobar_closure (which is now undefined), you should instead
use STATIC_CLOSURE(foobar). For convenience, a number of these
old identifiers are macro'd.
Across C-- and C (Windows and otherwise), there were differing
conventions on whether or not foobar_closure or &foobar_closure
was the address of the closure. Now, all foobar_closure references
are addresses, and no & is necessary.
CHARLIKE/INTLIKE were not changed, simply alpha-renamed.
Part of remove HEAP_ALLOCED patch set (#8199)
Depends on D265
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
Test Plan: validate
Reviewers: simonmar, austin
Subscribers: simonmar, ezyang, carter, thomie
Differential Revision: https://phabricator.haskell.org/D267
GHC Trac Issues: #8199
| |
This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
| |
Summary:
Cppcheck found a few defects in the win32 IOManager and a typo in the rts
testsuite. This commit fixes them.
Cppcheck 1.54 found three possible null pointer dereferences of the ioMan
pointer: it is dereferenced first and only checked for NULL afterwards.
testheapalloced.c contains a typo in a printf statement, which should print
a percent sign but is treated as a format specifier by the compiler. To
properly print a percent sign one needs to use the "%%" string.
FYI: Cppcheck 1.66 cannot find the possible null pointer dereferences in
the mentioned places, mistakenly reporting a memory leak instead.
I will probably file a regression bug against Cppcheck.
Test Plan:
Build the project and run 'make fulltest'. It finished with 28 unexpected
failures; I don't know whether they are related to my fix.
Unexpected results from:
TEST="T3500b T7891 tc124 T7653 T5321FD T5030 T4801 T6048 T5631 T5837 T5642 T9020 T3064 parsing001 T1969 T5321Fun T783 T3294"
OVERALL SUMMARY for test run started at Tue Sep 9 16:46:27 2014 NOVT
4:23:24 spent to go through
4101 total tests, which gave rise to
16075 test cases, of which
3430 were skipped
315 had missing libraries
12154 expected passes
145 expected failures
3 caused framework failures
0 unexpected passes
28 unexpected failures
Unexpected failures:
../../libraries/base/tests T7653 [bad exit code] (ghci,threaded1,threaded2)
perf/compiler T1969 [stat not good enough] (normal)
perf/compiler T3064 [stat not good enough] (normal)
perf/compiler T3294 [stat not good enough] (normal)
perf/compiler T4801 [stat not good enough] (normal)
perf/compiler T5030 [stat not good enough] (normal)
perf/compiler T5321FD [stat not good enough] (normal)
perf/compiler T5321Fun [stat not good enough] (normal)
perf/compiler T5631 [stat not good enough] (normal)
perf/compiler T5642 [stat not good enough] (normal)
perf/compiler T5837 [stat not good enough] (normal)
perf/compiler T6048 [stat not good enough] (optasm)
perf/compiler T783 [stat not good enough] (normal)
perf/compiler T9020 [stat not good enough] (optasm)
perf/compiler parsing001 [stat not good enough] (normal)
typecheck/should_compile T7891 [exit code non-0] (hpc,optasm,optllvm)
typecheck/should_compile tc124 [exit code non-0] (hpc,optasm,optllvm)
typecheck/should_run T3500b [exit code non-0] (hpc,optasm,threaded2,dyn,optllvm)
Reviewers: austin
Reviewed By: austin
Subscribers: simonmar, ezyang, carter
Differential Revision: https://phabricator.haskell.org/D203
|
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
This will hopefully help ensure some basic consistency going forward by
overriding buffer variables. In particular, it sets the wrap length, sets
the offset to 4, and turns off tabs.
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
| |
Before the patch any call to 'select()' with 'bad_fd' led to:
- all threads being unblocked
- the exception for 'threadWaitRead bad_fd' being hidden
The patch fixes both cases in this way:
after a 'select()' failure we iterate over each blocked descriptor
and poll it individually to see its actual status, which is one of:
- READY (move to the run queue)
- BLOCKED (leave in the blocked queue)
- INVALID (send an IOError exception)
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
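A sketch of the per-descriptor probe (illustrative; the RTS's loop differs): select() with a zero timeout returns immediately, and EBADF identifies an invalid descriptor.
```
#include <sys/select.h>
#include <sys/time.h>
#include <errno.h>

enum fd_status { FD_READY, FD_BLOCKED, FD_INVALID };

static enum fd_status probe_fd(int fd)
{
    fd_set rfds;
    struct timeval zero = { 0, 0 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    int r = select(fd + 1, &rfds, NULL, NULL, &zero);
    if (r < 0) {
        return (errno == EBADF) ? FD_INVALID : FD_BLOCKED;
    }
    return (r > 0) ? FD_READY : FD_BLOCKED;
}
```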