  1. Oct 11, 2024
  2. Oct 10, 2024
    • Handle exceptions from IO manager backend · 69960230
      Fabian Thorand authored and Ben Gamari committed
      If an IO manager backend throws, it will not actually have registered
      the file descriptor. However, at that point, the IO manager state was
      already updated to assume the file descriptor is being tracked, leading
      to errors and an eventual deadlock down the line, as documented in
      issue #21969.
      
      The fix for this is to undo the IO manager state change in case the
      backend throws (just as we already do when the backend signals that the
      file type is not supported). The exception then bubbles up to user code.
      
      That way we make sure that
      1. the bookkeeping state of the IO manager is consistent with the
         actions taken by the backend, even in the presence of unexpected
         failures, and
      2. the error is not silent but visible to user code, making failures
         easier to debug.
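      The rollback described above can be sketched with `onException`. This is a
      hypothetical simplification, not GHC's actual IO manager code; the names
      `registerFd` and the list-based bookkeeping are invented for illustration.

      ```haskell
      import Control.Exception (ErrorCall (..), catch, onException, throwIO)
      import Data.IORef (IORef, modifyIORef', newIORef, readIORef)

      -- Hypothetical sketch of the fix: optimistically record the fd in the
      -- manager's bookkeeping state, then roll that change back if the
      -- backend registration throws, letting the exception bubble up.
      registerFd :: IORef [Int] -> (Int -> IO ()) -> Int -> IO ()
      registerFd tracked backendRegister fd = do
        modifyIORef' tracked (fd :)  -- state updated before the backend call
        backendRegister fd
          `onException` modifyIORef' tracked (filter (/= fd))  -- undo on failure

      main :: IO ()
      main = do
        tracked <- newIORef ([] :: [Int])
        -- A backend that always throws, simulating an unexpected failure.
        ok <- (registerFd tracked (\_ -> throwIO (ErrorCall "backend failed")) 3
                 >> pure True)
                `catch` \(ErrorCall _) -> pure False
        state <- readIORef tracked
        print (ok, state)  -- the fd is no longer tracked after the exception
      ```

      After the failing call, the bookkeeping state is empty again and the
      caller sees the exception, matching both goals listed above.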
  3. Oct 09, 2024
    • EPA: Remove [AddEpAnn] from (most of) HsExpr · ef481813
      Alan Zimmerman authored and Marge Bot committed
      EPA: introduce EpAnnLam for lambda annotations, and remove `glAA`
      from `Parser.y`; it is the same as `glR`
      
      EPA: Remove unused annotation from XOpApp
      
      EPA: Use EpToken for XNPat and XNegApp
      
      EPA: specific anns for XExplicitTuple / XTuplePat / sumPatParens.
      
      EPA: Use specific annotation for MultiIf
      
      EPA: Move annotations into FunRhs
      
      EPA: Remove [AddEpAnn] from SigPat and ExprWithTySig
      
      EPA: Remove [AddEpAnn] from ArithSeq
      
      EPA: Remove [AddEpAnn] from HsProc
      
      EPA: Remove [AddEpAnn] from HsStatic
      
      EPA: Remove [AddEpAnn] from BindStmt
      
      EPA: Remove [AddEpAnn] from TransStmt
      
      EPA: Remove [AddEpAnn] from HsTypedSplice
      
      EPA: Remove [AddEpAnn] from HsUntypedSpliceExpr
    • Fix typo in the @since annotation of annotateIO · 55609880
      Andrzej Rybczak authored and Marge Bot committed
  4. Oct 08, 2024
  5. Oct 07, 2024
  6. Oct 06, 2024
    • Only allow (a => b) :: Constraint rather than CONSTRAINT rep · 92f8939a
      Krzysztof Gogolewski authored and Marge Bot committed
      Fixes #25243
    • Clarify the meaning of "exactly once" in LinearTypes · 535a2117
      Daniel Díaz authored and Marge Bot committed
      Solves documentation issue #25084.
    • Parallelize getRootSummary computations in dep analysis downsweep · 135fd1ac
      Torsten Schmits authored and Cheng Shao committed
      This reuses the upsweep step's infrastructure to process batches of
      modules in parallel.
      
      I benchmarked this by running `ghc -M` on two sets of 10,000 modules;
      one with a linear dependency chain and the other with a binary tree.
      Comparing different values for the number of modules per thread
      suggested an optimum at `length targets `div` (n_cap * 2)`, with results
      similar to this one (6 cores, 12 threads):
      
      ```
      Benchmark 1: linear 1 jobs
        Time (mean ± σ):      1.775 s ±  0.026 s    [User: 1.377 s, System: 0.399 s]
        Range (min … max):    1.757 s …  1.793 s    2 runs
      
      Benchmark 2: linear 6 jobs
        Time (mean ± σ):     876.2 ms ±  20.9 ms    [User: 1833.2 ms, System: 518.6 ms]
        Range (min … max):   856.2 ms … 898.0 ms    3 runs
      
      Benchmark 3: linear 12 jobs
        Time (mean ± σ):     793.5 ms ±  23.2 ms    [User: 2318.9 ms, System: 718.6 ms]
        Range (min … max):   771.9 ms … 818.0 ms    3 runs
      ```
      
      Results don't differ much when the batch size is reduced to a quarter
      of that, but there's significant thread scheduling overhead for a size
      of 1:
      
      ```
      Benchmark 1: linear 1 jobs
        Time (mean ± σ):      2.611 s ±  0.029 s    [User: 2.851 s, System: 0.783 s]
        Range (min … max):    2.591 s …  2.632 s    2 runs
      
      Benchmark 2: linear 6 jobs
        Time (mean ± σ):      1.189 s ±  0.007 s    [User: 2.707 s, System: 1.103 s]
        Range (min … max):    1.184 s …  1.194 s    2 runs
      
      Benchmark 3: linear 12 jobs
        Time (mean ± σ):      1.097 s ±  0.006 s    [User: 2.938 s, System: 1.300 s]
        Range (min … max):    1.093 s …  1.101 s    2 runs
      ```
      
      Larger batches also slightly worsen performance.
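      The batch-size heuristic above amounts to the following sketch; the
      function name `batchSize` is hypothetical, not the identifier used in
      the actual patch.

      ```haskell
      -- Hypothetical sketch of the heuristic described above: spread the
      -- root targets over twice the number of capabilities, so each
      -- capability has roughly two batches and stays busy even when
      -- batches finish at different times. Guard against a zero batch
      -- size for small target counts.
      batchSize :: Int -> Int -> Int
      batchSize nTargets nCap = max 1 (nTargets `div` (nCap * 2))

      main :: IO ()
      main = print (batchSize 10000 6)  -- 10,000 modules, 6 capabilities
      ```

      With the benchmark's 10,000 modules and 6 capabilities this yields
      batches of 833 modules, large enough that per-batch scheduling
      overhead stays negligible.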
    • driver: fix runWorkerLimit on wasm · ceca9efb
      Cheng Shao authored
      This commit fixes link-time unresolved symbol errors for sem_open etc.
      on wasm by making runWorkerLimit always behave single-threaded. This
      avoids introducing the jobserver logic into the final wasm module and
      thus avoids referencing the POSIX semaphore symbols.
  7. Oct 05, 2024
  8. Oct 04, 2024