- Jul 03, 2023
-
Flakify and document it, making it far less sensitive to the build environment.
-
An unhelpfully small stack size appears to have been the real culprit behind the metric fluctuations in #19293. Debugging metric decreases triggered by !10729 helped to finally identify the problem. Metric Decrease: MultiLayerModules, MultiLayerModulesTH_Make, T13701, T14697
-
This fixes #23492. The problem was that we used the real source span of the field declaration for the generated catch-all case in the selector function, in particular in the generated call to `recSelError`, which meant it was included in the HIE output. Using `generatedSrcSpan` instead means that it is not included.
-
This addresses the work of ticket #20118 by creating the following constructors for `TcRnMessage`: `TcRnInaccessibleCoAxBranch` and `TcRnPatersonCondFailure`.
-
Previously, it was possible for pinned, aligned allocation requests to allocate beyond the end of the pinned accumulator block. Specifically, we failed to account for the padding needed to achieve the requested alignment in the "large object" check. With large alignment requests, this can result in the allocator using the capability's pinned object accumulator block to service a request which is larger than `PINNED_EMPTY_SIZE`. To fix this we reorganize `allocatePinned` to consistently account for the alignment padding in all large object checks. This is a bit subtle as we must handle the case of a small allocation request filling the accumulator block, as well as large requests. Fixes #23400.
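A minimal sketch in C of the idea behind the fix (illustrative only; the helper below is not the actual `allocatePinned` code and the names are made up): the size used for the "large object" decision must include the worst-case padding needed to reach the requested alignment, not just the payload size.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: padding bytes needed to align an allocation that
 * would start at address `hp` to `alignment` (a power of two). */
static size_t alignment_padding(uintptr_t hp, size_t alignment)
{
    return (alignment - (hp & (alignment - 1))) & (alignment - 1);
}

/* The "large object" check is then made against the padded size, so an
 * aligned request can never overflow the pinned accumulator block:
 *
 *     if (n + alignment_padding(hp, align) > max_pinned_block_payload)
 *         ... service the request as a large object instead ...
 */
```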
-
Rather than statically enabling breakpoints only for the interpreter, this adds a new flag. Tracking ticket: #23057. MR: !10466
-
- Jun 30, 2023
-
For the docs:* rule we need to actually build the package rather than just the haddocks for the dependent packages. Therefore we depend on the .conf files of the packages we are trying to build documentation for as well as the .haddock files. Fixes #23472
- Jun 29, 2023
-
As debugTrace is a macro we must take care to ensure that the fact is clear to the compiler lest we see warnings.
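One common way a compiled-out trace macro bites (an assumption about what is meant here; the macro definition below is generic, not the RTS's `debugTrace`) is that a variable computed only for the trace call looks unused to the compiler:

```c
#include <stdio.h>

/* Generic stand-in for a tracing macro that disappears in non-debug builds. */
#if defined(DEBUG)
#  define debugTrace(fmt, ...) fprintf(stderr, fmt, __VA_ARGS__)
#else
#  define debugTrace(fmt, ...) /* nothing */
#endif

static void example(const int *xs, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += xs[i];
    debugTrace("sum=%d\n", sum);  /* only use of `sum` when tracing is compiled in */
    (void)sum;                    /* make the use unconditional to avoid a warning */
}
```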
-
This was guarded on `darwin_HOST_OS` instead of `defined(darwin_HOST_OS)`.
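For context, a short illustration of the general C preprocessor behaviour involved (generic example, not the GHC source):

```c
/* `#if FOO` evaluates FOO as an expression: an undefined macro is
 * treated as 0 here, which silently disables the guarded code and may
 * trigger -Wundef warnings. */
#if darwin_HOST_OS
/* Darwin-only code */
#endif

/* `#if defined(FOO)` tests only whether the macro exists at all, which
 * is what the guard was meant to do. */
#if defined(darwin_HOST_OS)
/* Darwin-only code */
#endif
```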
-
The libffi shipped with Apple's XCode toolchain does not contain a definition of the FFI_GO_CLOSURES macro, despite containing references to said macro. Work around this by defining the macro, following the model of a similar workaround in OpenJDK [1]. [1] https://github.com/openjdk/jdk17u-dev/pull/741/files
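The workaround follows a common pattern; a hedged sketch (the exact macro value and placement are illustrative, see the linked OpenJDK change for the model actually followed):

```c
/* Apple's bundled libffi headers reference FFI_GO_CLOSURES without
 * defining it, so supply a fallback definition before the header is
 * included.  The value 0 here is illustrative. */
#if !defined(FFI_GO_CLOSURES)
#define FFI_GO_CLOSURES 0
#endif

#include <ffi.h>
```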
-
We used to choose flags to pass to the toolchain at runtime based on the platform running GHC; in this commit we drop all of those runtime linker checks. Ultimately, this represents a change in policy: we no longer adapt at runtime to the toolchain being used, but rather make final decisions about the toolchain at /configure time/ (we have deleted Note [Run-time linker info] altogether!). This works towards the goal of having all toolchain configuration logic living in the same place, which facilitates the work towards a runtime-retargetable GHC (see #19877). As of this commit, the runtime linker/compiler logic has moved to autoconf, but soon it, and the rest of the existing toolchain configuration logic, will live in the standalone ghc-toolchain program (see !9263). In particular, what used to be done at runtime is now as follows:
* The flags -Wl,--no-as-needed for needed shared libs are configured into settings
* The flag -fstack-check is configured into settings
* The check for broken tables-next-to-code was outdated
* We use the configured C compiler by default as the assembler program
* We drop `asmOpts` because we already configure the -Qunused-arguments flag into settings (see !10589)
Fixes #23562. Co-author: Rodrigo Mesquita (@alt-romes)
-
Polymorphic specialisation has led to a number of hard-to-diagnose bugs producing incorrect runtime results (see #23469, #23109, #21229, #23445), so this commit introduces a flag `-fpolymorphic-specialisation` which allows users to turn on this experimental optimisation if they are willing to buy into things going very wrong. Ticket #23469
-
The previous configuration script to test whether Ld supported response files was
* incorrect (see #23542), and
* used, in practice, to check whether the *merge objects tool* supported response files.
This commit modifies the macro to run the merge objects tool (rather than Ld) using a response file, checking the result with $NM. Fixes #23542
-
The `'[]` case in `tc_infer_hs_type` is smart enough to handle arity-0 uses of `'[]` (see the newly added `T23543` test case for an example), but the `'[]` case in `tc_hs_type` was not. We fix this by changing the `tc_hs_type` case to invoke `tc_infer_hs_type`, as prescribed in `Note [Future-proofing the type checker]`. There are some benign changes to test cases' expected output due to the new code path using `forall a. [a]` as the kind of `'[]` rather than `[k]`. Fixes #23543.
-
D1..D4 are defined for aarch64 and thus not free.
-
x86-64/Darwin's toolchain inexplicably warns that collectFreshWeakPtrs needs to be a prototype.
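For reference, the distinction the warning is about is standard C (a minimal illustration, not the actual RTS header):

```c
/* Not a prototype: an empty parameter list in a declaration (before C23)
 * leaves the parameters unspecified, which warnings such as
 * -Wstrict-prototypes complain about. */
void collectFreshWeakPtrs();

/* A prototype: the empty parameter list is spelled out as `void`. */
void collectFreshWeakPtrs(void);
```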
-
This is no longer used.
-
-
This is no longer used.
-
rts: Drop write_barrier
-
- cache the last elements of `relTable`, `relaTable` and `symbolTables` in `ocInit_ELF` (see the sketch below)
- cache the shndx table in ObjectCode
- run `checkProddableBlock` only with the debug RTS
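A plausible motivation for caching the list tails (an assumption on my part; the types and names below are generic, not the RTS linker's) is to avoid re-walking a singly linked table on every append, turning repeated insertion from quadratic into linear work:

```c
#include <stddef.h>

/* Generic sketch of tail-caching for a singly linked table. */
typedef struct Node_ {
    struct Node_ *next;
    /* payload ... */
} Node;

typedef struct {
    Node *head;
    Node *last;   /* cached last element: append without walking the list */
} Table;

static void appendNode(Table *t, Node *n)
{
    n->next = NULL;
    if (t->last != NULL)
        t->last->next = n;   /* O(1) append instead of walking from head */
    else
        t->head = n;
    t->last = n;
}
```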
-
Hadrian's `topDirectory` is intended to provide an absolute path to the root of the GHC tree. However, if the tree is reached via a symlink this path may itself contain symlink components, so we now canonicalize it. One question here is whether the `canonicalizePath` call is expensive enough to warrant caching. In a quick microbenchmark I observed that `canonicalizePath "."` takes around 10us per call; this seems sufficiently low not to worry. Alternatively, another approach would have been to move the canonicalization into `m4/fp_find_root.m4` instead. This would have avoided repeated canonicalization, but sadly path canonicalization is a hard problem in POSIX shell. Addresses #22451.
-
There is a distinction to be made between the Haskell preprocessor and the C preprocessor: the former is used to preprocess Haskell files, while the latter is used in C preprocessing, such as for Cmm files. In practice, they are both the same program (usually the C compiler) but invoked with different flags.

Previously we would, at configure time, configure the Haskell preprocessor and save the configuration in the settings file, but, instead of doing the same for CPP, we had hardcoded in GHC that the CPP program was either `cc -E` or `cpp`. This commit fixes that asymmetry by also configuring CPP at configure time, and tries to make the difference between HsCpp and Cpp more explicit (see Note [Preprocessing invocations]).

Note that we don't use the standard CPP and CPPFLAGS variables to configure Cpp, but instead the non-standard --with-cpp and --with-cpp-flags. The reason is that autoconf sets CPP to "$CC -E", whereas we expect the CPP command to be configured as a standalone executable rather than a command. These flags are symmetrical with --with-hs-cpp and --with-hs-cpp-flags.

Cleanup: Hadrian no longer needs to pass the CPP configuration for CPP to be C99 compatible through -optP, since we now configure that into settings. Closes #23422
-
- Jun 28, 2023
-