[Git][ghc/ghc][wip/jsem] some rewording of jsem notes

sheaf (@sheaf) gitlab at gitlab.haskell.org
Mon Jan 16 10:37:09 UTC 2023



sheaf pushed to branch wip/jsem at Glasgow Haskell Compiler / GHC


Commits:
7ce502b6 by sheaf at 2023-01-16T11:36:55+01:00
some rewording of jsem notes

- - - - -


1 changed file:

- compiler/GHC/Driver/MakeSem.hs


Changes:

=====================================
compiler/GHC/Driver/MakeSem.hs
=====================================
@@ -498,51 +498,52 @@ runJSemAbstractSem sem action = MC.mask \ unmask -> do
 Note [Architecture of the Job Server]
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In `-jsem` mode the amount of parrelism that GHC can use is controlled by a
-system semaphore. We take resources from it when we need them and give them back
-if we don't have enought to do.
+In `-jsem` mode, the amount of parallelism that GHC can use is controlled by a
+system semaphore. We take resources from the semaphore when we need them, and
+give them back if we don't have enough work to do.
 
 A naive implementation would just take and release the semaphore around performing
-the action but this leads to two issues.
+the action, but this leads to two issues:
 
-* When taking a slot in the semaphore we must call `setNumCapabilities` in order
-  to adjust how many capabilities are available for parralel garbage collection. This
-  causes a synchronisation
-* We want to implement a debounce so that whilst there is pending work in the current
-  process we prefer to keep hold of resources from the semaphore. This reduces
-  overall memory usage as there are less live GHC processes at once.
+* When taking a token in the semaphore, we must call `setNumCapabilities` in order
+  to adjust how many capabilities are available for parallel garbage collection.
+  This causes unnecessary synchronisations.
+* We want to implement a debounce, so that whilst there is pending work in the
+  current process we prefer to keep hold of resources from the semaphore.
+  This reduces overall memory usage, as there are fewer live GHC processes at once.
 
-Therefore the obtention of semaphore resources is separated away from the
+Therefore, obtaining semaphore resources is separated from the
 request for the resource in the driver.
 
-A slot from the semaphore is requested using `acquireJob`, this creates a pending
-job which is a MVar which can be filling in to signal that the requested slot is ready.
+A token from the semaphore is requested using `acquireJob`. This creates a pending
+job, which is an MVar that can be filled in to signal that the requested token is ready.
 
-When the job is finished, the slot is released by calling `releaseJob`, which just
+When the job is finished, the token is released by calling `releaseJob`, which just
 increases the number of `free` jobs. If there are more pending jobs when the free count
-is increased the slot is immediately reused (see `modifyJobResources`).
+is increased, the token is immediately reused (see `modifyJobResources`).
 
-The `jobServerLoop` interacts with the system semaphore, when there are still pending
-jobs then `acquireThread` blocks waiting for a slot in the semaphore and increases
-the owned count when the slot is obtained.
+The `jobServerLoop` interacts with the system semaphore: when there are pending
+jobs, `acquireThread` blocks, waiting for a token from the semaphore. Once a
+token is obtained, it increases the owned count.
 
-When there are free slots, no pending jobs and the debounce has expired
-then `releaseThread` will release slots back to the global semaphore.
+When GHC has free tokens (tokens from the semaphore that it is not using),
+no pending jobs, and the debounce has expired, then `releaseThread` will
+release tokens back to the global semaphore.
 
 `tryStopThread` attempts to kill threads which are waiting to acquire a resource
 when we no longer need it. For example, consider that we attempt to acquire two
-slots of the semaphore but the first job finishes before we acquire the second resources,
-the second slot is no longer needed so we should cancel the wait (as it would not be used to
-do any work and not returned until the debounce). We just need to kill in the acquiring
-state because the releading state can't block.
+tokens, but the first job finishes before we acquire the second token.
+This second token is no longer needed, so we should cancel the wait
+(as it would not be used to do any work, and not be returned until the debounce).
+We only need to kill threads in the acquiring state (`acquireThread`), because
+releasing (`releaseThread`) never blocks.
 
 Note [Eventlog Messages for jsem]
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 It can be tricky to verify that the work is shared adequately across different
-processes. To help debug this whenever the global state changes the values of
-`JobResources` are output to the eventlog. There are some scripts which can be used
-to analyse this output and report statistics about core saturation in this
-github repo (https://github.com/mpickering/ghc-jsem-analyse).
+processes. To help debug this, we output the values of `JobResources` to the
+eventlog whenever the global state changes. There are some scripts which can be used
+to analyse this output and report statistics about core saturation in the
+GitHub repo (https://github.com/mpickering/ghc-jsem-analyse).
 
 -}
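The pending-job mechanism described in the note (an MVar that is filled when a token becomes available, with released tokens handed straight to waiting jobs) can be sketched roughly as below. This is a simplified, hypothetical illustration: the names `acquireJob`, `releaseJob` and `JobResources` mirror the note, but the field names and the overall structure are assumptions and differ from the real implementation in `GHC.Driver.MakeSem`.

```haskell
import Control.Concurrent.MVar

-- A pending job is an MVar, filled in to signal that a token is ready.
type PendingJob = MVar ()

-- Simplified stand-in for the real JobResources state.
data JobResources = JobResources
  { pendingJobs :: [PendingJob]  -- jobs waiting for a token
  , freeTokens  :: Int           -- tokens we own but are not using
  }

-- Request a token: take a free one if available, otherwise register
-- a pending job to be filled in later.
acquireJob :: MVar JobResources -> IO PendingJob
acquireJob jr = do
  pending <- newEmptyMVar
  modifyMVar_ jr $ \res ->
    if freeTokens res > 0
      then do putMVar pending ()
              pure res { freeTokens = freeTokens res - 1 }
      else pure res { pendingJobs = pendingJobs res ++ [pending] }
  pure pending

-- Release a token: hand it straight to a pending job if there is one,
-- otherwise return it to the free count (cf. modifyJobResources).
-- Note that this never blocks.
releaseJob :: MVar JobResources -> IO ()
releaseJob jr =
  modifyMVar_ jr $ \res ->
    case pendingJobs res of
      (p:ps) -> do putMVar p ()
                   pure res { pendingJobs = ps }
      []     -> pure res { freeTokens = freeTokens res + 1 }

main :: IO ()
main = do
  jr <- newMVar (JobResources [] 1)
  j1 <- acquireJob jr   -- takes the single free token
  j2 <- acquireJob jr   -- no free token: becomes a pending job
  takeMVar j1           -- first job's token is ready; run it
  releaseJob jr         -- token is immediately reused by the pending job
  takeMVar j2           -- second job can now run
  putStrLn "both jobs ran"
```

The key property illustrated is the debounce-friendly reuse: a released token goes directly to a pending job without ever touching the system semaphore.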



View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/commit/7ce502b65968ed27457d2575fd058a8e4c84873b
