[Git][ghc/ghc][wip/ttg-con-pat] 13 commits: StgCRun: Enable unwinding only on Linux
cgibbard
gitlab at gitlab.haskell.org
Wed Apr 15 15:45:57 UTC 2020
cgibbard pushed to branch wip/ttg-con-pat at Glasgow Haskell Compiler / GHC
Commits:
5b08e0c0 by Ben Gamari at 2020-04-14T23:28:20-04:00
StgCRun: Enable unwinding only on Linux
It's broken on macOS and SmartOS due to assembler differences
(#15207) so let's be conservative in enabling it. Also, refactor things
to make the intent clearer.
- - - - -
27cc2e7b by Ben Gamari at 2020-04-14T23:28:57-04:00
rts: Don't mark evacuate_large as inline
This function has two call sites and is quite large. GCC consequently
decides not to inline it and warns instead. Given the situation, I can't
blame it. Let's just remove the inline specifier.
- - - - -
9853fc5e by Ben Gamari at 2020-04-14T23:29:48-04:00
base: Enable large file support for OFD locking impl.
Not only is this a good idea in general, but it should also avoid
issue #17950 by ensuring that off_t is 64 bits wide.
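For context, this OFD-based implementation backs the file-locking API in
GHC.IO.Handle.Lock on Linux. A minimal usage sketch (the lock-file name
here is just for illustration):

    import GHC.IO.Handle.Lock (LockMode (ExclusiveLock), hLock)
    import System.IO (IOMode (ReadWriteMode), withFile)

    -- Take an exclusive lock on a (hypothetical) lock file; the lock is
    -- released when the handle is closed at the end of withFile.
    main :: IO ()
    main =
      withFile "example.lock" ReadWriteMode $ \h -> do
        hLock h ExclusiveLock   -- may block until the lock becomes free
        putStrLn "lock held; critical section goes here"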
- - - - -
7b41f21b by Matthew Pickering at 2020-04-14T23:30:24-04:00
Hadrian: Make -i paths absolute
The primary reason for this change is that ghcide does not work with
relative paths. It also matches what cabal and stack do: they always
pass absolute paths.
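As a rough sketch of the idea (not Hadrian's actual code; the helper
name is made up), the relative search paths can be absolutised with
System.Directory.makeAbsolute before being turned into -i flags:

    import System.Directory (makeAbsolute)

    -- Hypothetical helper: turn relative module search paths into
    -- absolute "-i<dir>" arguments for GHC.
    absoluteIncludeArgs :: [FilePath] -> IO [String]
    absoluteIncludeArgs dirs = do
      absDirs <- mapM makeAbsolute dirs
      pure [ "-i" ++ d | d <- absDirs ]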
- - - - -
41230e26 by Daniel Gröber at 2020-04-14T23:31:01-04:00
Zero out pinned block alignment slop when profiling
The heap profiler currently cannot traverse pinned blocks because of
alignment slop. This used to just be a minor annoyance as the whole block
is accounted into a special cost center rather than the respective object's
CCS, cf. #7275. However, for the new root profiler we would like to be able
to visit _every_ closure on the heap. We need to do this so we can get rid
of the current 'flip' bit hack in the heap traversal code.
Since info pointers are always non-zero we can in principle skip all the
slop in the profiler if we can rely on it being zeroed. This assumption
caused problems in the past, though: commit a586b33f8e ("rts: Correct
handling of LARGE ARR_WORDS in LDV profiler"), part of !1118, tried to use
the same trick for BF_LARGE objects but neglected to take into account that
the shrink*Array# functions don't ensure that slop is zeroed when not
compiling with profiling.
Later, commit 0c114c6599 ("Handle large ARR_WORDS in heap census (fix
...)") changed this further, with the upshot that we will only be assuming
slop is zeroed when profiling is on.
This commit also reduces the amount of slop we introduce in the first
place by calculating the needed alignment before doing the allocation for
small objects, where we know the next available address. For large objects
we don't yet know how much alignment we'll have to do, since those details
are hidden behind the allocateMightFail function, so there we continue to
allocate the maximum additional words we'll need to do the alignment.
So that we don't have to duplicate all this logic in the Cmm code, we pull
it into the RTS allocatePinned function instead.
Metric Decrease:
T7257
haddock.Cabal
haddock.base
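As a rough illustration of the alignment arithmetic above (in Haskell
for brevity; the real code lives in the RTS and Cmm, and the name here
is made up): given the next free byte address, a power-of-two alignment
and an alignment offset, the slop needed in front of the allocation is

    import Data.Bits ((.&.))
    import Data.Word (Word64)

    -- Hypothetical sketch: number of padding bytes needed so that
    -- (addr + slop + alignOff) is a multiple of the power-of-two
    -- alignment.  negate wraps modulo 2^64, which align divides.
    slopBytes :: Word64  -- next free byte address
              -> Word64  -- requested alignment (power of two)
              -> Word64  -- offset within the object that must be aligned
              -> Word64
    slopBytes addr align alignOff =
      negate (addr + alignOff) .&. (align - 1)

For example, slopBytes 18 8 0 is 6, so the aligned payload would start
at byte 24.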
- - - - -
15fa9bd6 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: Expand and add more notes regarding slop
- - - - -
caf3f444 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: allocatePinned: Fix confusion about word/byte units
- - - - -
c3c0f662 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: Underline some Notes as is conventional
- - - - -
e149dea9 by Daniel Gröber at 2020-04-14T23:31:38-04:00
rts: Fix nomenclature in OVERWRITING_CLOSURE macros
The additional commentary introduced by commit 8916e64e5437 ("Implement
shrinkSmallMutableArray# and resizeSmallMutableArray#.") unfortunately got
this wrong. We set 'prim' to true in overwritingClosureOfs because we
_don't_ want to call LDV_recordDead().
The reason is the "inherently used" distinction made in the LDV
profiler, so I rename the variable to be more appropriate.
- - - - -
1dd3d18c by Daniel Gröber at 2020-04-14T23:31:38-04:00
Remove call to LDV_RECORD_CREATE for array resizing
- - - - -
19de2fb0 by Daniel Gröber at 2020-04-14T23:31:38-04:00
rts: Assert LDV_recordDead is not called for inherently used closures
The comments make it clear LDV_recordDead should not be called for
inherently used closures, so add an assertion to codify this fact.
- - - - -
0b934e30 by Ryan Scott at 2020-04-14T23:32:14-04:00
Bump template-haskell version to 2.17.0.0
This requires bumping the `exceptions` and `text` submodules to bring
in commits that bump their respective upper version bounds on
`template-haskell`.
Fixes #17645. Fixes #17696.
Note that the new `text` commit includes a fair number of additions
to the Haddocks in that library. As a result, Haddock has to do more
work during the `haddock.Cabal` test case, increasing the number of
allocations it requires. Therefore,
-------------------------
Metric Increase:
haddock.Cabal
-------------------------
- - - - -
c8175f9a by John Ericson at 2020-04-15T11:45:02-04:00
Trees That Grow refactor for `ConPat` and `CoPat`
- `ConPat{In,Out}` -> `ConPat`
- `CoPat` -> `XPat (CoPat ..)`
Note that `GHC.HS.*` still uses `HsWrap`, but only when `p ~ GhcTc`.
After this change, moving the type family instances out of `GHC.HS.*` is
sufficient to break the cycle.
Add XCollectPat class to decide how binders are collected from XXPat based on the pass.
Previously we did this with IsPass, but that doesn't work for Haddock's
DocNameI, and the constraint doesn't express what distinction is actually
being made. Perhaps a class for collecting binders more generally is in
order, but we haven't attempted this yet.
Pure refactor of code around ConPat
- InPat/OutPat synonyms removed
- rename several identifiers
- redundant constraints removed
- move extension field in ConPat to be first
- make ConPat use record syntax more consistently
Fix T6145 (ConPatIn became ConPat)
Add comments from SPJ.
Add comment about haddock's use of CollectPass.
Updates haddock submodule.
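A toy sketch of the shape this refactor moves towards (simplified and
hypothetical; these are not GHC's actual definitions): a single ConPat
constructor carries a pass-indexed extension field, and pass-specific
constructors such as the typechecker-only coercion-wrapped pattern hide
behind the XPat extension constructor.

    {-# LANGUAGE TypeFamilies #-}

    import Data.Void (Void)

    data Rn  -- renamer pass
    data Tc  -- typechecker pass

    type family XConPat p  -- extra fields on ConPat, chosen per pass
    type family XXPat   p  -- extra constructors, chosen per pass

    data Pat p
      = VarPat String
      | ConPat (XConPat p) String [Pat p]  -- one constructor-pattern shape for all passes
      | XPat (XXPat p)                     -- pass-specific extension point

    type instance XConPat Rn = ()          -- nothing extra after renaming
    type instance XConPat Tc = [TypeArg]   -- e.g. instantiated type arguments (made up)
    type instance XXPat   Rn = Void        -- no extra constructors in the renamer
    type instance XXPat   Tc = CoPat Tc    -- coercion-wrapped patterns only after typechecking

    data CoPat p = MkCoPat Coercion (Pat p)

    -- placeholder types for the sketch
    data TypeArg  = TypeArg
    data Coercion = Coercion

A class in the spirit of the XCollectPat mentioned above would then
dispatch on the pass to decide how binders are collected from whatever
XXPat p turns out to be.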
- - - - -
30 changed files:
- compiler/GHC/Hs/Instances.hs
- compiler/GHC/Hs/Pat.hs
- compiler/GHC/Hs/Utils.hs
- compiler/GHC/HsToCore/Arrows.hs
- compiler/GHC/HsToCore/Docs.hs
- compiler/GHC/HsToCore/Expr.hs
- compiler/GHC/HsToCore/ListComp.hs
- compiler/GHC/HsToCore/Match.hs
- compiler/GHC/HsToCore/Match/Constructor.hs
- compiler/GHC/HsToCore/PmCheck.hs
- compiler/GHC/HsToCore/Quote.hs
- compiler/GHC/HsToCore/Utils.hs
- compiler/GHC/Iface/Ext/Ast.hs
- compiler/GHC/Rename/Expr.hs
- compiler/GHC/Rename/HsType.hs
- compiler/GHC/Rename/Pat.hs
- compiler/GHC/Tc/Deriv/Generate.hs
- compiler/GHC/Tc/Gen/Arrow.hs
- compiler/GHC/Tc/Gen/Bind.hs
- compiler/GHC/Tc/Gen/Pat.hs
- compiler/GHC/Tc/TyCl.hs
- compiler/GHC/Tc/TyCl/PatSyn.hs
- compiler/GHC/Tc/TyCl/Utils.hs
- compiler/GHC/Tc/Utils/Zonk.hs
- compiler/GHC/Tc/Validity.hs
- compiler/GHC/ThToHs.hs
- compiler/ghc.cabal.in
- compiler/parser/RdrHsSyn.hs
- ghc.mk
- hadrian/src/Settings/Builders/Ghc.hs
The diff was not included because it is too large.
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/b18aef96db43671ca89087389a4a7cf5c3a2734b...c8175f9a4b88b62aa3933e97463f89b5f964d1c7