| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
This shims out fopen and sopen so that they use modern APIs and
namespaced paths under the hood.
This lifts the MAX_PATH restriction from Haskell programs and raises the
new limit to ~32k.
There are only some slight caveats, which have been documented.
Some utilities, such as lndir, have not been upgraded. Since these are all
separate cabal packages, I have been forced to copy the source into different
places, which is less than ideal, but it is the only way to keep sdist working.
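To illustrate the namespaced-path idea, here is a minimal Haskell sketch
(hypothetical helper for illustration only; the real shim lives in the C bits
and also normalises the path first):
    -- Windows stops enforcing MAX_PATH once it is handed a
    -- namespaced ("\\?\") absolute path.
    toNamespacedPath :: FilePath -> FilePath
    toNamespacedPath p
      | take 4 p == "\\\\?\\" = p                         -- already namespaced
      | take 2 p == "\\\\"    = "\\\\?\\UNC" ++ drop 1 p  -- \\server\share
      | otherwise             = "\\\\?\\" ++ p            -- e.g. C:\a\very\long\path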
Test Plan: ./validate
Reviewers: hvr, bgamari, erikd, simonmar
Reviewed By: bgamari
Subscribers: rwbarton, thomie, carter
GHC Trac Issues: #10822
Differential Revision: https://phabricator.haskell.org/D4416
|
|
|
|
|
|
|
|
|
|
| |
Reviewers: bgamari, Phyx, austin, hvr, simonmar
Reviewed By: bgamari
Subscribers: syd, rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D4041
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On 64-bit UNIX and Windows, Haskell `Int` has 64 bits
but C `int msecs` has 32 bits, resulting in an overflow.
This commit fixes it by switching fdReady() to take int64_t,
into which a Haskell `Int` will always fit.
(Note we could not switch to `long` because that is
32 bits on 64-bit Windows machines.)
Further, to be able to actually wait longer than ~49 days,
we put loops around the waiting syscalls (they all accept only
32-bit integers).
Note the timer signal would typically interrupt the syscalls
before the ~49 days are over, but you can run Haskell programs
without the timer signal, and we want it to be correct in all
cases.
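The chunking idea, sketched in Haskell for illustration only (the real
loops are in C around the waiting syscalls; `wait32` stands in for a
hypothetical primitive whose timeout argument is only 32 bits wide):
    import Data.Int (Int32, Int64)

    waitChunked :: (Int32 -> IO Bool)   -- hypothetical 32-bit wait; True = ready
                -> Int64                -- full timeout in milliseconds
                -> IO Bool
    waitChunked wait32 = go
      where
        maxChunk = fromIntegral (maxBound :: Int32) :: Int64
        go remaining
          | remaining <= 0 = return False
          | otherwise = do
              -- issue waits no larger than the syscall can represent
              let chunk = min remaining maxChunk
              ready <- wait32 (fromIntegral chunk)
              if ready then return True else go (remaining - chunk)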
Reviewers: bgamari, austin, hvr, NicolasT, Phyx
Reviewed By: bgamari, Phyx
Subscribers: syd, Phyx, rwbarton, thomie
GHC Trac Issues: #14262
Differential Revision: https://phabricator.haskell.org/D4011
|
|
|
|
| |
Our new CPP linter enforces this.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This both says what we mean and silences a bunch of spurious CPP linting
warnings. This pragma is supported by all CPP implementations which we
support.
Reviewers: austin, erikd, simonmar, hvr
Reviewed By: simonmar
Subscribers: rwbarton, thomie
Differential Revision: https://phabricator.haskell.org/D3482
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The runtime linker currently eagerly loads all object files on all
platforms which do not use the system linker for `GHCi`.
The problem with this approach is that it requires all symbols to be
found, even those of functions that are never used or called. This makes
the number of libraries required to link things like `mingwex` quite high.
To work around this, the `rts` relied on a trick: it is itself compiled
with `Mingw-w64`'s `GCC`, so it is already linked against `mingwex`, and
as such it re-exported the symbols from itself.
While this worked, it made it impossible to link against `mingwex` in
user libraries, which means no `C99` code could ever run in `GHCi` on
Windows without having the required symbols re-exported from the rts.
Consequently this rules out a large number of packages on Windows,
such as SDL2 and HMatrix.
After talking with @rwbarton I have taken the approach of loading entire
object files when a symbol is needed, instead of doing the dependency
tracking on a per-symbol basis. This is a lot less fragile and a lot
less complicated to implement.
The changes come down to the following steps:
1) Modify the linker to introduce a new state for ObjectCode:
`Needed` (sketched after this list). A Needed object is one that is
required for the linking to succeed. The initial set consists of all
object files passed as arguments to the link.
2) Change `ObjectCode`s to be indexed but not initialized or resolved.
This means we know where we would load the symbols,
but haven't actually done so.
3) Mark any `ObjectCode` belonging to a `.o` passed as an argument
as required: ObjectState `NEEDED`.
4) During `Resolve` object calls, mark all `ObjectCode`
containing the required symbols as `NEEDED`.
5) During `lookupSymbol` lookups (which are called from `linkExpr`
and `linkDecl` in `GHCi.hs`), if the symbol is in a not-yet-loaded
`ObjectCode`, load the `ObjectCode` on demand and return the
address of the symbol. Otherwise produce an unresolved-symbol error
as expected.
6) On `unloadObj` we then change the state of the object and remove
its symbols from the `reqSymHash` table so it can be reloaded.
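A rough sketch of the resulting per-object state machine, written as a
Haskell data type purely for illustration (the real implementation is C
in the RTS linker, and the names here are approximate):
    data ObjectState
      = Indexed    -- symbols indexed; sections not yet loaded into memory
      | Needed     -- required for the link: passed on the command line, or
                   -- owns a symbol that resolution/lookupSymbol demanded
      | Resolved   -- loaded and relocated
      | Unloaded   -- after unloadObj: symbols removed, may be reloaded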
This change affects all platforms and OSes which use the runtime linker.
It seems there are no real perf tests for `GHCi`, but performance
shouldn't be impacted much. We gain a lot of time by not loading all `obj`
files, and we lose some time in `lookupSymbol` when we're finding
sections that have to be loaded. The actual finding itself is O(1)
(assuming the hash table is perfect).
It also consumes slightly more memory, as instead of storing just the
address of a symbol I also store some other information, such as whether
the symbol is weak or not.
This change will break any packages relying on POSIX functions that were
renamed and re-exported by the rts. Any packages following the proper
naming for functions as found on MSDN will work fine.
Test Plan: ./validate on all platforms which use the Runtime linker.
Reviewers: thomie, rwbarton, simonmar, erikd, bgamari, austin, hvr
Reviewed By: erikd
Subscribers: kgardas, gridaphobe, RyanGlScott, simonmar,
rwbarton, #ghc_windows_task_force
Differential Revision: https://phabricator.haskell.org/D1805
GHC Trac Issues: #11223
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A long time ago, in the last century, IRIX was way ahead of its time
with its SMP capabilities of scaling up to 1024 processors and other
features such as XFS or OpenGL that originated in IRIX and live on to
this day in other operating systems.
However, IRIX's last software update was in 2006 and support ended
around 2013 according to [1], so it's considered an extinct platform by
now. This commit message is therefore effectively an obituary for GHC's
IRIX support.
R.I.P. IRIX
[1]: https://en.wikipedia.org/wiki/IRIX
|
|
|
|
|
|
|
| |
Simplify some preprocessor expressions involving `_MSC_VER` because
`_WIN32` is always defined when `_MSC_VER` is.
Differential Revision: https://phabricator.haskell.org/D981
|
|
|
|
|
|
|
|
|
|
| |
In Haskell files, replace `__MINGW32__` by `mingw32_HOST_OS`.
In .c and .h files, delete `__MINGW32__` when `_WIN32` is also tested
because `_WIN32` is always defined when `__MINGW32__` is. Also replace
`__MINGW32__` by `_WIN32` when used standalone for consistency.
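For the Haskell side, the change amounts to the following kind of edit
(a hypothetical example; `pathSeparator` is just an illustrative definition):
    {-# LANGUAGE CPP #-}
    module Example (pathSeparator) where

    pathSeparator :: Char
    -- before this change the guard read:  #if defined(__MINGW32__)
    #if defined(mingw32_HOST_OS)
    pathSeparator = '\\'
    #else
    pathSeparator = '/'
    #endif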
Differential Revision: https://phabricator.haskell.org/D971
|
|
|
|
|
|
|
|
|
|
| |
Now that Hugs- and NHC-specific code has been removed, this commit "folds"
the now-redundant `#if((n)def)`s containing `__GLASGOW_HASKELL__`. This
renders `base` officially GHC-only.
This commit also removes redundant `{-# LANGUAGE CPP #-}` pragmas.
Signed-off-by: Herbert Valerio Riedel <hvr@gnu.org>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This changes the output of throwGetLastError to include the system error
message, rather than the message of our fictitious errno.
It also adds several definitions to GHC.Windows, mostly from the Win32 package.
The exceptions are:
* getErrorMessage: returns a String, unlike in System.Win32.Types,
where it returns an LPWSTR.
* errCodeToIOError: new
* c_maperrno_func: new
|
| |
|
| |
|
| |
|
|
|
|
|
| |
sys/timeb.h is deprecated on FreeBSD, meaning validation fails quite early
without this patch.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Bytestring uses it, so I've moved it into that package.
|
| |
|
|
|
|
|
| |
Now that we have the CTYPE pragma, we can do this in such a way that
it doesn't break the build on OS X.
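For reference, a generic (hypothetical) illustration of what the CTYPE
pragma buys us: it records the C type and header behind a Haskell type so
that the `capi` calling convention can emit a correct prototype on every
platform, OS X included.
    {-# LANGUAGE CApiFFI #-}
    module Example (c_getpid) where

    import Foreign.C.Types (CInt (..))

    -- CPid carries its C type, so the generated C stub gets the right prototype.
    newtype {-# CTYPE "sys/types.h" "pid_t" #-} CPid = CPid CInt

    foreign import capi unsafe "unistd.h getpid"
        c_getpid :: IO CPid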
|
|
|
|
| |
They broke the build on OS X. See #2979.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Based on part of a patch from Arnaud Degroote in #5480.
On NetBSD just calling the function directly warns:
warning: reference to compatibility gettimeofday(); include <sys/time.h>
to generate correct reference
|
|
|
|
|
|
|
|
| |
Fixes validate on amd64/Linux with:
SRC_CC_OPTS += -Wmissing-parameter-type
SRC_CC_OPTS += -Wold-style-declaration
SRC_CC_OPTS += -Wold-style-definition
|
| |
|
|
|
|
| |
Fixes 21 testsuite errors on NetBSD 5.99.
|
|
|
|
| |
C functions like isDoubleNaN moved here (primFloat.c)
|
|
|
|
| |
now that it isn't used on Windows any more.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Highlights:
* Unicode support for Handle I/O:
** Automatic encoding and decoding using a per-Handle encoding.
** The encoding defaults to the locale encoding (only on Unix
so far, perhaps Windows later).
** Built-in UTF-8, UTF-16 (BE/LE), and UTF-32 (BE/LE) codecs.
** iconv-based codec for other encodings on Unix
* Modularity: the low-level IO interface is exposed as a type class
(GHC.IO.IODevice) so you can build your own low-level IO providers and
make Handles from them.
* Newline translation: instead of being Windows-specific wired-in
magic, the translation from \r\n -> \n and back again is available
on all platforms and is configurable for reading/writing
independently.
Unicode-aware Handles
~~~~~~~~~~~~~~~~~~~~~
This is a significant restructuring of the Handle implementation with
the primary goal of supporting Unicode character encodings.
The only change to the existing behaviour is that by default, text IO
is done in the prevailing locale encoding of the system (except on
Windows [1]).
Handles created by openBinaryFile use the Latin-1 encoding, as do
Handles placed in binary mode using hSetBinaryMode.
We provide a way to change the encoding for an existing Handle:
GHC.IO.Handle.hSetEncoding :: Handle -> TextEncoding -> IO ()
and various encodings (from GHC.IO.Encoding):
latin1,
utf8,
utf16, utf16le, utf16be,
utf32, utf32le, utf32be,
localeEncoding,
and a way to look up other encodings:
GHC.IO.Encoding.mkTextEncoding :: String -> IO TextEncoding
(it's system-dependent whether the requested encoding will be
available).
We may want to export these from somewhere more permanent; that's a
topic for a future library proposal.
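A small usage sketch of the API above (module locations as in this patch;
the "ISO-8859-1" name passed to mkTextEncoding is system-dependent):
    import System.IO
    import GHC.IO.Handle   (hSetEncoding)
    import GHC.IO.Encoding (utf8, mkTextEncoding)

    main :: IO ()
    main = do
      h <- openFile "input.txt" ReadMode   -- starts in the locale encoding
      hSetEncoding h utf8                  -- re-decode the stream as UTF-8
      enc <- mkTextEncoding "ISO-8859-1"   -- availability is system-dependent
      hSetEncoding stdout enc
      hGetContents h >>= putStr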
Thanks to suggestions from Duncan Coutts, it's possible to call
hSetEncoding even on buffered read Handles, and the right thing
happens. So we can read from text streams that include multiple
encodings, such as an HTTP response or email message, without having
to turn buffering off (though there is a penalty for switching
encodings on a buffered Handle, as the IO system has to do some
re-decoding to figure out where it should start reading from again).
If there is a decoding error, it is reported when an attempt is made
to read the offending character from the Handle, as you would expect.
Performance varies. For "hGetContents >>= putStr" I found the new
library was faster on my x86_64 machine, but slower on an x86. On the
whole I'd expect things to be a bit slower due to the extra
decoding/encoding, but probably not noticeably. If performance is
critical for your app, then you should be using bytestring and text
anyway.
[1] Note: locale encoding is not currently implemented on Windows due
to the built-in Win32 APIs for encoding/decoding not being sufficient
for our purposes. Ask me for details. Offers of help gratefully
accepted.
Newline Translation
~~~~~~~~~~~~~~~~~~~
In the old IO library, text-mode Handles on Windows had automatic
translation from \r\n -> \n on input, and the opposite on output. It
was implemented using the underlying CRT functions, which meant that
there were certain odd restrictions, such as read/write text handles
needing to be unbuffered, and seeking not working at all on text
Handles.
In the rewrite, newline translation is now implemented in the upper
layers, as it needs to be since we have to perform Unicode decoding
before newline translation. This means that it is now available on
all platforms, which can be quite handy for writing portable code.
For now, I have left the behaviour as it was, namely \r\n -> \n on
Windows, and no translation on Unix. However, another reasonable
default (similar to what Python does) would be to do \r\n -> \n on
input, and convert to the platform-native representation (either \r\n
or \n) on output. This is called universalNewlineMode (below).
The API is as follows (available from GHC.IO.Handle for now; again,
this is something we will probably want to try to get into System.IO
at some point):
-- | The representation of a newline in the external file or stream.
data Newline = LF    -- ^ "\n"
             | CRLF  -- ^ "\r\n"
  deriving Eq
-- | Specifies the translation, if any, of newline characters between
-- internal Strings and the external file or stream. Haskell Strings
-- are assumed to represent newlines with the '\n' character; the
-- newline mode specifies how to translate '\n' on output, and what to
-- translate into '\n' on input.
data NewlineMode
  = NewlineMode { inputNL :: Newline,
                  -- ^ the representation of newlines on input
                  outputNL :: Newline
                  -- ^ the representation of newlines on output
                }
  deriving Eq
-- | The native newline representation for the current platform
nativeNewline :: Newline
-- | Map "\r\n" into "\n" on input, and "\n" to the native newline
-- representation on output. This mode can be used on any platform, and
-- works with text files using any newline convention. The downside is
-- that @readFile a >>= writeFile b@ might yield a different file.
universalNewlineMode :: NewlineMode
universalNewlineMode = NewlineMode { inputNL  = CRLF,
                                     outputNL = nativeNewline }
-- | Use the native newline representation on both input and output
nativeNewlineMode :: NewlineMode
nativeNewlineMode = NewlineMode { inputNL  = nativeNewline,
                                  outputNL = nativeNewline }
-- | Do no newline translation at all.
noNewlineTranslation :: NewlineMode
noNewlineTranslation = NewlineMode { inputNL = LF, outputNL = LF }
-- | Change the newline translation mode on the Handle.
hSetNewlineMode :: Handle -> NewlineMode -> IO ()
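For example (a usage sketch; `readDosFile` is just an illustrative name):
    import System.IO     (IOMode (ReadMode), hGetContents, openFile)
    import GHC.IO.Handle (hSetNewlineMode, universalNewlineMode)

    -- Read a CRLF ("DOS") text file portably: "\r\n" becomes "\n" on
    -- input regardless of platform.
    readDosFile :: FilePath -> IO String
    readDosFile path = do
      h <- openFile path ReadMode
      hSetNewlineMode h universalNewlineMode
      hGetContents h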
IO Devices
~~~~~~~~~~
The major change here is that the implementation of the Handle
operations is separated from the underlying IO device, using type
classes. File descriptors are just one IO provider; I have also
implemented memory-mapped files (good for random-access read/write)
and a Handle that pipes output to a Chan (useful for testing code that
writes to a Handle). New kinds of Handle can be implemented outside
the base package, for instance someone could write bytestringToHandle.
A Handle is made using mkFileHandle:
-- | makes a new 'Handle'
mkFileHandle :: (IODevice dev, BufferedIO dev, Typeable dev)
             => dev              -- ^ the underlying IO device, which must support
                                 --   'IODevice', 'BufferedIO' and 'Typeable'
             -> FilePath         -- ^ a string describing the 'Handle', e.g. the file
                                 --   path for a file.  Used in error messages.
             -> IOMode           -- ^ The mode in which the 'Handle' is to be used
             -> Maybe TextEncoding  -- ^ text encoding to use, if any
             -> NewlineMode      -- ^ newline translation mode
             -> IO Handle
This also means that someone can write a completely new IO
implementation on Windows based on native Win32 HANDLEs, and
distribute it as a separate package (I really hope somebody does
this!).
This restructuring isn't as radical as previous designs. I haven't
made any attempt to make a separate binary I/O layer, for example
(although hGetBuf/hPutBuf do bypass the text encoding and newline
translation). The main goal here was to get Unicode support in, and
to allow others to experiment with making new kinds of Handle. We
could split up the layers further later.
API changes and Module structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NB. GHC.IOBase and GHC.Handle are now DEPRECATED (they are still
present, but are just re-exporting things from other modules now).
For 6.12 we'll want to bump base to version 5 and add a base4-compat.
For now I'm using #if __GLASGOW_HASKELL__ >= 611 to avoid deprecation
warnings.
I split modules into smaller parts in many places. For example, we
now have GHC.IORef, GHC.MVar and GHC.IOArray containing the
implementations of IORef, MVar and IOArray respectively. This was
necessary for untangling dependencies, but it also makes things easier
to follow.
The new module structure for the IO-related parts of the base
package is:
GHC.IO
    Implementation of the IO monad; unsafe*; throw/catch
GHC.IO.IOMode
    The IOMode type
GHC.IO.Buffer
    Buffers and operations on them
GHC.IO.Device
    The IODevice and RawIO classes.
GHC.IO.BufferedIO
    The BufferedIO class.
GHC.IO.FD
    The FD type, with instances of IODevice, RawIO and BufferedIO.
GHC.IO.Exception
    IO-related Exceptions
GHC.IO.Encoding
    The TextEncoding type; built-in TextEncodings; mkTextEncoding
GHC.IO.Encoding.Types
GHC.IO.Encoding.Iconv
GHC.IO.Encoding.Latin1
GHC.IO.Encoding.UTF8
GHC.IO.Encoding.UTF16
GHC.IO.Encoding.UTF32
    Implementation internals for GHC.IO.Encoding
GHC.IO.Handle
    The main API for GHC's Handle implementation, provides all the Handle
    operations + mkFileHandle + hSetEncoding.
GHC.IO.Handle.Types
GHC.IO.Handle.Internals
GHC.IO.Handle.Text
    Implementation of Handles and operations.
GHC.IO.Handle.FD
    Parts of the Handle API implemented by file-descriptors: openFile,
    stdin, stdout, stderr, fdToHandle etc.
|
| |
|
| |
|
|
|
|
|
| |
We need to do this as it has a `(, ...)` (i.e. varargs) type, which we
aren't allowed to call directly through the FFI.
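For illustration (hypothetical names, and assuming the variadic function in
question resembles open(2)): a fixed-arity C wrapper is written and that
wrapper is imported instead.
    {-# LANGUAGE ForeignFunctionInterface #-}
    module Example (c_open_wrapper) where

    import Foreign.C.String (CString)
    import Foreign.C.Types  (CInt (..))

    -- The matching fixed-arity C wrapper (hypothetical), e.g. in cbits/:
    --     #include <fcntl.h>
    --     int __hs_open_wrapper(const char *path, int flags, int mode)
    --     { return open(path, flags, mode); }
    foreign import ccall unsafe "__hs_open_wrapper"
        c_open_wrapper :: CString -> CInt -> CInt -> IO CInt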
|
|
|
|
|
| |
This pipe is an internal implementation detail; we don't really want
it to be exposed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The API is the same (for now). The new implementation has the
capability to define signal handlers that have access to the siginfo
of the signal (#592), but this functionality is not exposed in this
patch.
#2451 is the ticket for the new API.
The main purpose of bringing this in now is to fix race conditions in
the old signal handling code (#2858). Later we can enable the new
API in the HEAD.
Implementation differences:
- More of the signal-handling is moved into Haskell. We store the
table of signal handlers in an MVar, rather than having a table of
StablePtrs in the RTS.
- In the threaded RTS, the siginfo of the signal is passed down the
pipe to the IO manager thread, which manages the business of
starting up new signal handler threads. In the non-threaded RTS,
the siginfo of caught signals is stored in the RTS, and the
scheduler starts new signal handler threads.
|