Glasgow Haskell Compiler / GHC · Issues · #23185
Closed
Issue created Mar 27, 2023 by Ben Gamari (@bgamari), Maintainer

Soundness of thunk update?

I am beginning to have second thoughts on whether the thunk update protocol we currently use (see Note [Heap memory barriers] in SMP.h) is safe on weakly ordered platforms. The problem is that lazy blackholing may mean that multiple threads race to update a thunk, resulting in a value being exposed via an indirection before being made available to other cores.
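For concreteness, the writer and reader sides of the update protocol can be sketched with C11 atomics roughly as follows. This is an illustrative model, not the actual RTS code: the type names, field layout, and function names are simplified assumptions, and only the ordering discipline matters here.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Stand-ins for RTS types; layout and names are illustrative only. */
struct InfoTable { int dummy; };
static struct InfoTable BLACKHOLE_info;

struct Closure {
    _Atomic(struct InfoTable *) info;
    _Atomic(struct Closure *)   indirectee;
};

/* Writer side: publish the evaluation result first, then flip the
 * info pointer to BLACKHOLE behind a release fence, so that a reader
 * observing the BLACKHOLE info pointer also observes the indirectee. */
static void update_thunk(struct Closure *t, struct Closure *result) {
    atomic_store_explicit(&t->indirectee, result, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&t->info, &BLACKHOLE_info, memory_order_relaxed);
}

/* Reader side: acquire-read the info pointer; if it is BLACKHOLE, the
 * matching release fence is supposed to make the indirectee visible. */
static struct Closure *enter_closure(struct Closure *t) {
    struct InfoTable *info =
        atomic_load_explicit(&t->info, memory_order_acquire);
    if (info == &BLACKHOLE_info)
        return atomic_load_explicit(&t->indirectee, memory_order_relaxed);
    return NULL; /* still a thunk: the caller would evaluate it */
}
```

With a single updating thread this pairing is sound; the concern below is specifically about two threads racing through `update_thunk` on the same closure.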

Specifically, I am worried about interleavings like the following (where t is a thunk being updated and a and b are two evaluation results):

Thread A              Thread B                      Thread C             
---------             ---------                     ---------            
t.indirectee=a                                                           
                      t.indirectee=b                                     
release fence                                                                 
t.info=BLACKHOLE                                                         
                                                    read t.info          
                                                    inspect t.indirectee
                      release fence
                      t.info=BLACKHOLE                                   
                                                                         

Specifically, we see here that Thread A writes its result, a, to the indirectee. Typically, the soundness of the update of the indirectee field is guaranteed by the release ordering of the t.info=BLACKHOLE store, which ensures that a is visible to readers by the time the closure becomes a blackhole.

However, in this case Thread B races to update the indirectee, storing its own result b. Meanwhile, Thread C enters t (which is now a blackhole), inspects t.indirectee, and is therefore exposed to b before Thread B's release fence has made it visible.

This seems like an extraordinarily narrow race, but AFAICT it's a race nevertheless.

Edited Mar 27, 2023 by Ben Gamari