WinIO improvement: Multiple workers

Opened Oct 05, 2020 by Ben Gamari (@bgamari), Maintainer

While discussing WinIO, @Phyx noted that it may currently regress a bit under serial workloads. This can be addressed by supporting multiple I/O workers:

<Phyx-> right now, the current I/O manager has no overhead from the haskell runtime. It just blocks and gives back data, blocks, gives back data.
<Phyx-> winio issues the asynchronous requests, but when it handles one and you request a new chunk
<Phyx-> that read is often done before the worker thread has made it back to the blocking wait call
<Phyx-> so the caller waits longer. the problem should go away when you have multiple workers, as they can immediately serve the next chunk without waiting for the previous worker to return
<Phyx-> the workers do extra work at the end, as they have to calculate the next timeout interval
<Phyx-> so the overhead is expected, and the designed workaround is to have multiple workers
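
As a minimal sketch of the multiple-worker idea (illustrative only, not GHC's implementation: CompletionPort, spawnWorkers, and the Chan standing in for a real completion-port wait such as GetQueuedCompletionStatusEx are all hypothetical):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, readChan)
import Control.Monad (forever, replicateM_)

-- Hypothetical stand-in for the OS completion port; the real manager
-- blocks in a Win32 wait call on an I/O completion port.
type CompletionPort a = Chan a

-- Spawn n workers that all block on the same port. With n > 1, a
-- completion that arrives while one worker is still delivering the
-- previous result can be picked up immediately by another worker,
-- rather than waiting for that worker to loop back to its blocking wait.
spawnWorkers :: Int -> CompletionPort a -> (a -> IO ()) -> IO ()
spawnWorkers n port service =
  replicateM_ n $ forkIO $ forever $ do
    completion <- readChan port  -- blocks, like the wait call above
    service completion

With a single worker, the extra latency described above is inherent; with several, the pool hides the time each worker spends computing the next timeout interval before re-entering its wait.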
Edited Oct 05, 2020 by Ben Gamari
Reference: ghc/ghc#18805