[Git][ghc/ghc][wip/marge_bot_batch_merge_job] 14 commits: StgCRun: Enable unwinding only on Linux
Marge Bot
gitlab at gitlab.haskell.org
Wed Apr 15 15:08:35 UTC 2020
Marge Bot pushed to branch wip/marge_bot_batch_merge_job at Glasgow Haskell Compiler / GHC
Commits:
5b08e0c0 by Ben Gamari at 2020-04-14T23:28:20-04:00
StgCRun: Enable unwinding only on Linux
It's broken on macOS and SmartOS due to assembler differences
(#15207), so let's be conservative in enabling it. Also, refactor things
to make the intent clearer.
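
Concretely, the refactoring replaces scattered !defined(darwin_HOST_OS)
checks with a single feature macro; the guard, as it appears in the
StgCRun.c hunk below, has this shape:

    // Enable DWARF Call-Frame Information (used for stack unwinding) on
    // Linux only; the assemblers on Darwin and SmartOS reject these
    // directives (#15207).
    #if defined(linux_HOST_OS)
    #define ENABLE_UNWINDING
    #endif

    #if defined(ENABLE_UNWINDING)
        /* ...emit the unwinding annotations... */
    #endif /* defined(ENABLE_UNWINDING) */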
- - - - -
27cc2e7b by Ben Gamari at 2020-04-14T23:28:57-04:00
rts: Don't mark evacuate_large as inline
This function has two callsites and is quite large. GCC consequently
decides not to inline it and warns instead. Given the situation, I can't
blame it. Let's just remove the inline specifier.
- - - - -
9853fc5e by Ben Gamari at 2020-04-14T23:29:48-04:00
base: Enable large file support for OFD locking impl.
Not only is this a good idea in general, but it should also avoid
issue #17950 by ensuring that off_t is 64 bits.
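
For context: defining _FILE_OFFSET_BITS to 64 before any system header is
included makes glibc use the 64-bit file interfaces, widening off_t to 64
bits even on 32-bit platforms. A minimal standalone check (a hypothetical
test program, not part of the patch):

    /* check_off_t.c */
    #define _FILE_OFFSET_BITS 64   /* must precede all system headers */

    #include <sys/types.h>

    /* With large file support enabled, off_t is 64 bits wide. */
    _Static_assert(sizeof(off_t) == 8, "off_t should be 64 bits");

    int main(void) { return 0; }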
- - - - -
7b41f21b by Matthew Pickering at 2020-04-14T23:30:24-04:00
Hadrian: Make -i paths absolute
The primary reason for this change is that ghcide does not work with
relative paths. It also matches what cabal and stack do; they always
pass absolute paths.
- - - - -
41230e26 by Daniel Gröber at 2020-04-14T23:31:01-04:00
Zero out pinned block alignment slop when profiling
The heap profiler currently cannot traverse pinned blocks because of
alignment slop. This used to just be a minor annoyance as the whole block
is accounted into a special cost center rather than the respective object's
CCS, cf. #7275. However, for the new root profiler we would like to be able
to visit _every_ closure on the heap. We need to do this so we can get rid
of the current 'flip' bit hack in the heap traversal code.
Since info pointers are always non-zero we can in principle skip all the
slop in the profiler if we can rely on it being zeroed (sketched below).
This assumption caused problems in the past, though: commit a586b33f8e
("rts: Correct handling of LARGE ARR_WORDS in LDV profiler"), part of
!1118, tried to use the same trick for BF_LARGE objects but neglected to
take into account that the shrink*Array# functions don't ensure that slop
is zeroed when not compiling with profiling.
Later, commit 0c114c6599 ("Handle large ARR_WORDS in heap census (fix
[...])") addressed this, and that suffices here as we will only be
assuming slop is zeroed when profiling is on.
This commit also reduces the amount of slop we introduce in the first
place by calculating the needed alignment before doing the allocation for
small objects, where we know the next available address. For large objects
we don't know how much alignment we'll have to do yet, since those details
are hidden behind the allocateMightFail function, so there we continue to
allocate the maximum additional words we'll need to do the alignment.
So that we don't have to duplicate all this logic in the Cmm code, we pull
it into the RTS allocatePinned function instead.
Metric Decrease:
T7257
haddock.Cabal
haddock.base
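
The zero-scan this enables is visible in the ProfHeap.c hunk further down;
in isolation the idea is just this sketch, where p walks a heap block's
words and free stands for the block's free pointer:

    /* A closure always starts with a nonzero info pointer, so once slop
     * is guaranteed to be zeroed, any zero word must be slop. */
    while (p < free && *p == 0) {
        p++;                          /* skip one word of slop */
    }
    /* p now points at the next closure's info pointer, or p == free. */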
- - - - -
15fa9bd6 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: Expand and add more notes regarding slop
- - - - -
caf3f444 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: allocatePinned: Fix confusion about word/byte units
- - - - -
c3c0f662 by Daniel Gröber at 2020-04-14T23:31:01-04:00
rts: Underline some Notes as is conventional
- - - - -
e149dea9 by Daniel Gröber at 2020-04-14T23:31:38-04:00
rts: Fix nomenclature in OVERWRITING_CLOSURE macros
The additional commentary introduced by commit 8916e64e5437 ("Implement
shrinkSmallMutableArray# and resizeSmallMutableArray#.") unfortunately got
this wrong. We set 'prim' to true in overwritingClosureOfs because we
_don't_ want to call LDV_recordDead().
The reason is the "inherently used" distinction made in the LDV
profiler, so I rename the variable to be more appropriate.
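
After the rename, call sites document themselves; e.g. the
overwritingClosureOfs case in the ClosureMacros.h hunk below reads:

    /* inherently_used = true: LDV_recordDead() must not be called, since
     * the LDV profiler does not track inherently used closures. */
    overwritingClosure_(p, offset, closure_sizeW(p), /*inherently_used=*/true);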
- - - - -
1dd3d18c by Daniel Gröber at 2020-04-14T23:31:38-04:00
Remove call to LDV_RECORD_CREATE for array resizing
- - - - -
19de2fb0 by Daniel Gröber at 2020-04-14T23:31:38-04:00
rts: Assert LDV_recordDead is not called for inherently used closures
The comments make it clear LDV_recordDead should not be called for
inherently used closures, so add an assertion to codify this fact.
- - - - -
0b934e30 by Ryan Scott at 2020-04-14T23:32:14-04:00
Bump template-haskell version to 2.17.0.0
This requires bumping the `exceptions` and `text` submodules to bring
in commits that bump their respective upper version bounds on
`template-haskell`.
Fixes #17645. Fixes #17696.
Note that the new `text` commit includes a fair number of additions
to the Haddocks in that library. As a result, Haddock has to do more
work during the `haddock.Cabal` test case, increasing the number of
allocations it requires. Therefore,
-------------------------
Metric Increase:
haddock.Cabal
-------------------------
- - - - -
5c07bb7d by Ryan Scott at 2020-04-15T11:08:22-04:00
Fix #18052 by using pprPrefixOcc in more places
This fixes several small oversights in the choice of pretty-printing
function to use. Fixes #18052.
- - - - -
a5fddb24 by Daniel Gröber at 2020-04-15T11:08:24-04:00
rts: ProfHeap: Fix wrong time in last heap profile sample
We've had this longstanding issue in the heap profiler, where the time of
the last sample in the profile is sometimes way off, causing the rendered
graph to be quite useless for long runs.
It seems to me the problem is that we use mut_user_time() for the last
sample as opposed to getRTSStats(), which we use when calling heapProfile()
in GC.c.
The former is equivalent to getProcessCPUTime(), but the latter does
some additional accounting:

    getProcessCPUTime() - end_init_cpu - stats.gc_cpu_ns
                        - stats.nonmoving_gc_cpu_ns

So to fix this, just use getRTSStats() in both places.
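
Condensed, the new endHeapProfiling() logic (shown in full in the
ProfHeap.c hunk below) becomes:

    /* Use the mutator CPU time from the RTS stats, which already excludes
     * init and GC time, instead of mut_user_time(). */
    RTSStats stats;
    getRTSStats(&stats);
    Time mut_time = stats.mutator_cpu_ns;
    StgDouble seconds = TimeToSecondsDbl(mut_time);
    printSample(true, seconds);
    printSample(false, seconds);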
- - - - -
28 changed files:
- compiler/GHC/Core/Ppr.hs
- compiler/GHC/Tc/Module.hs
- compiler/ghc.cabal.in
- ghc.mk
- hadrian/src/Settings/Builders/Ghc.hs
- includes/rts/storage/ClosureMacros.h
- includes/rts/storage/GC.h
- libraries/base/GHC/IO/Handle/Lock/LinuxOFD.hsc
- libraries/exceptions
- libraries/ghci/ghci.cabal.in
- libraries/template-haskell/template-haskell.cabal.in
- libraries/text
- rts/Apply.cmm
- rts/LdvProfile.c
- rts/PrimOps.cmm
- rts/ProfHeap.c
- rts/StgCRun.c
- rts/sm/Evac.c
- rts/sm/Sanity.c
- rts/sm/Storage.c
- + testsuite/tests/ghci/should_fail/T18052b.script
- + testsuite/tests/ghci/should_fail/T18052b.stderr
- testsuite/tests/ghci/should_fail/all.T
- testsuite/tests/partial-sigs/should_compile/ExtraConstraints3.stderr
- + testsuite/tests/printer/T18052a.hs
- + testsuite/tests/printer/T18052a.stderr
- testsuite/tests/printer/all.T
- utils/ghc-cabal/ghc.mk
Changes:
=====================================
compiler/GHC/Core/Ppr.hs
=====================================
@@ -123,11 +123,13 @@ ppr_binding ann (val_bdr, expr)
, pp_bind
]
where
+ pp_val_bdr = pprPrefixOcc val_bdr
+
pp_bind = case bndrIsJoin_maybe val_bdr of
Nothing -> pp_normal_bind
Just ar -> pp_join_bind ar
- pp_normal_bind = hang (ppr val_bdr) 2 (equals <+> pprCoreExpr expr)
+ pp_normal_bind = hang pp_val_bdr 2 (equals <+> pprCoreExpr expr)
-- For a join point of join arity n, we want to print j = \x1 ... xn -> e
-- as "j x1 ... xn = e" to differentiate when a join point returns a
@@ -135,7 +137,7 @@ ppr_binding ann (val_bdr, expr)
-- an n-argument function).
pp_join_bind join_arity
| bndrs `lengthAtLeast` join_arity
- = hang (ppr val_bdr <+> sep (map (pprBndr LambdaBind) lhs_bndrs))
+ = hang (pp_val_bdr <+> sep (map (pprBndr LambdaBind) lhs_bndrs))
2 (equals <+> pprCoreExpr rhs)
| otherwise -- Yikes! A join-binding with too few lambda
-- Lint will complain, but we don't want to crash
@@ -164,8 +166,10 @@ ppr_expr :: OutputableBndr b => (SDoc -> SDoc) -> Expr b -> SDoc
-- an atomic value (e.g. function args)
ppr_expr add_par (Var name)
- | isJoinId name = add_par ((text "jump") <+> ppr name)
- | otherwise = ppr name
+ | isJoinId name = add_par ((text "jump") <+> pp_name)
+ | otherwise = pp_name
+ where
+ pp_name = pprPrefixOcc name
ppr_expr add_par (Type ty) = add_par (text "TYPE:" <+> ppr ty) -- Weird
ppr_expr add_par (Coercion co) = add_par (text "CO:" <+> ppr co)
ppr_expr add_par (Lit lit) = pprLiteral add_par lit
@@ -429,7 +433,7 @@ pprKindedTyVarBndr tyvar
-- pprIdBndr does *not* print the type
-- When printing any Id binder in debug mode, we print its inline pragma and one-shot-ness
pprIdBndr :: Id -> SDoc
-pprIdBndr id = ppr id <+> pprIdBndrInfo (idInfo id)
+pprIdBndr id = pprPrefixOcc id <+> pprIdBndrInfo (idInfo id)
pprIdBndrInfo :: IdInfo -> SDoc
pprIdBndrInfo info
=====================================
compiler/GHC/Tc/Module.hs
=====================================
@@ -2122,7 +2122,7 @@ tcRnStmt hsc_env rdr_stmt
}
where
bad_unboxed id = addErr (sep [text "GHCi can't bind a variable of unlifted type:",
- nest 2 (ppr id <+> dcolon <+> ppr (idType id))])
+ nest 2 (pprPrefixOcc id <+> dcolon <+> ppr (idType id))])
{-
--------------------------------------------------------------------------
@@ -2903,7 +2903,7 @@ ppr_types debug type_env
-- etc are suppressed (unless -dppr-debug),
-- because they appear elsewhere
- ppr_sig id = hang (ppr id <+> dcolon) 2 (ppr (tidyTopType (idType id)))
+ ppr_sig id = hang (pprPrefixOcc id <+> dcolon) 2 (ppr (tidyTopType (idType id)))
ppr_tycons :: Bool -> [FamInst] -> TypeEnv -> SDoc
ppr_tycons debug fam_insts type_env
@@ -2921,7 +2921,7 @@ ppr_tycons debug fam_insts type_env
| otherwise = isExternalName (tyConName tycon) &&
not (tycon `elem` fi_tycons)
ppr_tc tc
- = vcat [ hang (ppr (tyConFlavour tc) <+> ppr tc
+ = vcat [ hang (ppr (tyConFlavour tc) <+> pprPrefixOcc (tyConName tc)
<> braces (ppr (tyConArity tc)) <+> dcolon)
2 (ppr (tidyTopType (tyConKind tc)))
, nest 2 $
@@ -2955,7 +2955,7 @@ ppr_patsyns type_env
= ppr_things "PATTERN SYNONYMS" ppr_ps
(typeEnvPatSyns type_env)
where
- ppr_ps ps = ppr ps <+> dcolon <+> pprPatSynType ps
+ ppr_ps ps = pprPrefixOcc ps <+> dcolon <+> pprPatSynType ps
ppr_insts :: [ClsInst] -> SDoc
ppr_insts ispecs
=====================================
compiler/ghc.cabal.in
=====================================
@@ -69,7 +69,7 @@ Library
containers >= 0.5 && < 0.7,
array >= 0.1 && < 0.6,
filepath >= 1 && < 1.5,
- template-haskell == 2.16.*,
+ template-haskell == 2.17.*,
hpc == 0.6.*,
transformers == 0.5.*,
ghc-boot == @ProjectVersionMunged@,
=====================================
ghc.mk
=====================================
@@ -413,8 +413,8 @@ else # CLEANING
# Packages that are built by stage0. These packages are dependencies of
# programs such as GHC and ghc-pkg, that we do not assume the stage0
# compiler already has installed (or up-to-date enough).
-
-PACKAGES_STAGE0 = binary text transformers mtl parsec Cabal/Cabal hpc ghc-boot-th ghc-boot template-haskell ghc-heap ghci
+# Note that these must be given in topological order.
+PACKAGES_STAGE0 = binary transformers mtl hpc ghc-boot-th ghc-boot template-haskell text parsec Cabal/Cabal ghc-heap ghci
ifeq "$(Windows_Host)" "NO"
PACKAGES_STAGE0 += terminfo
endif
@@ -441,14 +441,14 @@ PACKAGES_STAGE1 += process
PACKAGES_STAGE1 += hpc
PACKAGES_STAGE1 += pretty
PACKAGES_STAGE1 += binary
-PACKAGES_STAGE1 += text
PACKAGES_STAGE1 += transformers
PACKAGES_STAGE1 += mtl
-PACKAGES_STAGE1 += parsec
-PACKAGES_STAGE1 += Cabal/Cabal
PACKAGES_STAGE1 += ghc-boot-th
PACKAGES_STAGE1 += ghc-boot
PACKAGES_STAGE1 += template-haskell
+PACKAGES_STAGE1 += text
+PACKAGES_STAGE1 += parsec
+PACKAGES_STAGE1 += Cabal/Cabal
PACKAGES_STAGE1 += ghc-compact
PACKAGES_STAGE1 += ghc-heap
=====================================
hadrian/src/Settings/Builders/Ghc.hs
=====================================
@@ -11,6 +11,7 @@ import Settings.Builders.Common
import Settings.Warnings
import qualified Context as Context
import Rules.Libffi (libffiName)
+import System.Directory
ghcBuilderArgs :: Args
ghcBuilderArgs = mconcat [ compileAndLinkHs, compileC, findHsDependencies
@@ -215,18 +216,20 @@ packageGhcArgs = do
includeGhcArgs :: Args
includeGhcArgs = do
pkg <- getPackage
- path <- getBuildPath
+ path <- exprIO . makeAbsolute =<< getBuildPath
context <- getContext
srcDirs <- getContextData srcDirs
- autogen <- expr $ autogenPath context
+ abSrcDirs <- exprIO $ mapM makeAbsolute [ (pkgPath pkg -/- dir) | dir <- srcDirs ]
+ autogen <- expr (autogenPath context)
+ cautogen <- exprIO (makeAbsolute autogen)
stage <- getStage
- libPath <- expr $ stageLibPath stage
+ libPath <- expr (stageLibPath stage)
let cabalMacros = autogen -/- "cabal_macros.h"
expr $ need [cabalMacros]
mconcat [ arg "-i"
, arg $ "-i" ++ path
- , arg $ "-i" ++ autogen
- , pure [ "-i" ++ pkgPath pkg -/- dir | dir <- srcDirs ]
+ , arg $ "-i" ++ cautogen
+ , pure [ "-i" ++ d | d <- abSrcDirs ]
, cIncludeArgs
, arg $ "-I" ++ libPath
, arg $ "-optc-I" ++ libPath
=====================================
includes/rts/storage/ClosureMacros.h
=====================================
@@ -474,31 +474,39 @@ INLINE_HEADER StgWord8 *mutArrPtrsCard (StgMutArrPtrs *a, W_ n)
OVERWRITING_CLOSURE(p) on the old closure that is about to be
overwritten.
- Note [zeroing slop]
+ Note [zeroing slop when overwriting closures]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- In some scenarios we write zero words into "slop"; memory that is
- left unoccupied after we overwrite a closure in the heap with a
- smaller closure.
+ When we overwrite a closure in the heap with a smaller one, in some scenarios
+ we need to write zero words into "slop"; the memory that is left
+ unoccupied. See Note [slop on the heap]
Zeroing slop is required for:
- - full-heap sanity checks (DEBUG, and +RTS -DS)
- - LDV profiling (PROFILING, and +RTS -hb)
+ - full-heap sanity checks (DEBUG, and +RTS -DS),
- Zeroing slop must be disabled for:
+ - LDV profiling (PROFILING, and +RTS -hb) and
- - THREADED_RTS with +RTS -N2 and greater, because we cannot
- overwrite slop when another thread might be reading it.
+ However we can get into trouble if we're zeroing slop for ordinarily
+ immutable closures when using multiple threads, since there is nothing
+ preventing another thread from still being in the process of reading the
+ memory we're about to zero.
- Hence, slop is zeroed when either:
+ Thus, with the THREADED RTS and +RTS -N2 or greater we must not zero
+ immutable closures' slop.
- - PROFILING && era <= 0 (LDV is on)
- - !THREADED_RTS && DEBUG
+ Hence, an immutable closure's slop is zeroed when either:
- And additionally:
+ - PROFILING && era > 0 (LDV is on) or
+ - !THREADED && DEBUG
- - LDV profiling and +RTS -N2 are incompatible
- - full-heap sanity checks are disabled for THREADED_RTS
+ Additionally:
+
+ - LDV profiling and +RTS -N2 are incompatible,
+
+ - full-heap sanity checks are disabled for the THREADED RTS, at least when
+ they don't run right after GC when there is no slop.
+ See Note [heap sanity checking with SMP].
-------------------------------------------------------------------------- */
@@ -524,23 +532,24 @@ INLINE_HEADER StgWord8 *mutArrPtrsCard (StgMutArrPtrs *a, W_ n)
#if defined(PROFILING)
void LDV_recordDead (const StgClosure *c, uint32_t size);
+RTS_PRIVATE bool isInherentlyUsed ( StgHalfWord closure_type );
#endif
EXTERN_INLINE void overwritingClosure_ (StgClosure *p,
uint32_t offset /* in words */,
uint32_t size /* closure size, in words */,
- bool prim /* Whether to call LDV_recordDead */
+ bool inherently_used USED_IF_PROFILING
);
-EXTERN_INLINE void overwritingClosure_ (StgClosure *p, uint32_t offset, uint32_t size, bool prim USED_IF_PROFILING)
+EXTERN_INLINE void overwritingClosure_ (StgClosure *p, uint32_t offset, uint32_t size, bool inherently_used USED_IF_PROFILING)
{
#if ZERO_SLOP_FOR_LDV_PROF && !ZERO_SLOP_FOR_SANITY_CHECK
- // see Note [zeroing slop], also #8402
+ // see Note [zeroing slop when overwriting closures], also #8402
if (era <= 0) return;
#endif
// For LDV profiling, we need to record the closure as dead
#if defined(PROFILING)
- if (!prim) { LDV_recordDead(p, size); };
+ if (!inherently_used) { LDV_recordDead(p, size); };
#endif
for (uint32_t i = offset; i < size; i++) {
@@ -551,7 +560,11 @@ EXTERN_INLINE void overwritingClosure_ (StgClosure *p, uint32_t offset, uint32_t
EXTERN_INLINE void overwritingClosure (StgClosure *p);
EXTERN_INLINE void overwritingClosure (StgClosure *p)
{
- overwritingClosure_(p, sizeofW(StgThunkHeader), closure_sizeW(p), false);
+#if defined(PROFILING)
+ ASSERT(!isInherentlyUsed(get_itbl(p)->type));
+#endif
+ overwritingClosure_(p, sizeofW(StgThunkHeader), closure_sizeW(p),
+ /*inherently_used=*/false);
}
// Version of 'overwritingClosure' which overwrites only a suffix of a
@@ -564,21 +577,24 @@ EXTERN_INLINE void overwritingClosure (StgClosure *p)
EXTERN_INLINE void overwritingClosureOfs (StgClosure *p, uint32_t offset);
EXTERN_INLINE void overwritingClosureOfs (StgClosure *p, uint32_t offset)
{
- // Set prim = true because overwritingClosureOfs is only
- // ever called by
- // shrinkMutableByteArray# (ARR_WORDS)
- // shrinkSmallMutableArray# (SMALL_MUT_ARR_PTRS)
- // This causes LDV_recordDead to be invoked. We want this
- // to happen because the implementations of the above
- // primops both call LDV_RECORD_CREATE after calling this,
- // effectively replacing the LDV closure biography.
- // See Note [LDV Profiling when Shrinking Arrays]
- overwritingClosure_(p, offset, closure_sizeW(p), true);
+ // Since overwritingClosureOfs is only ever called by:
+ //
+ // - shrinkMutableByteArray# (ARR_WORDS) and
+ //
+ // - shrinkSmallMutableArray# (SMALL_MUT_ARR_PTRS)
+ //
+ // we can safely set inherently_used = true, which means LDV_recordDead
+ // won't be invoked below. Since these closures are inherently used we don't
+ // need to track their destruction.
+ overwritingClosure_(p, offset, closure_sizeW(p), /*inherently_used=*/true);
}
// Version of 'overwritingClosure' which takes closure size as argument.
EXTERN_INLINE void overwritingClosureSize (StgClosure *p, uint32_t size /* in words */);
EXTERN_INLINE void overwritingClosureSize (StgClosure *p, uint32_t size)
{
- overwritingClosure_(p, sizeofW(StgThunkHeader), size, false);
+#if defined(PROFILING)
+ ASSERT(!isInherentlyUsed(get_itbl(p)->type));
+#endif
+ overwritingClosure_(p, sizeofW(StgThunkHeader), size, /*inherently_used=*/false);
}
=====================================
includes/rts/storage/GC.h
=====================================
@@ -170,10 +170,13 @@ extern generation * oldest_gen;
Allocates memory from the nursery in
the current Capability.
- StgPtr allocatePinned(Capability *cap, W_ n)
+ StgPtr allocatePinned(Capability *cap, W_ n, W_ alignment, W_ align_off)
Allocates a chunk of contiguous store
n words long, which is at a fixed
- address (won't be moved by GC).
+ address (won't be moved by GC). The
+ word at the byte offset 'align_off'
+ will be aligned to 'alignment', which
+ must be a power of two.
Returns a pointer to the first word.
Always succeeds.
@@ -191,7 +194,7 @@ extern generation * oldest_gen;
StgPtr allocate ( Capability *cap, W_ n );
StgPtr allocateMightFail ( Capability *cap, W_ n );
-StgPtr allocatePinned ( Capability *cap, W_ n );
+StgPtr allocatePinned ( Capability *cap, W_ n, W_ alignment, W_ align_off);
/* memory allocator for executable memory */
typedef void* AdjustorWritable;
=====================================
libraries/base/GHC/IO/Handle/Lock/LinuxOFD.hsc
=====================================
@@ -12,6 +12,9 @@ module GHC.IO.Handle.Lock.LinuxOFD where
import GHC.Base () -- Make implicit dependency known to build system
#else
+-- Not only is this a good idea but it also works around #17950.
+#define _FILE_OFFSET_BITS 64
+
#include <unistd.h>
#include <fcntl.h>
=====================================
libraries/exceptions
=====================================
@@ -1 +1 @@
-Subproject commit 0a1f9ff0f407da360fc9405a07d5d06d28e6c077
+Subproject commit fe4166f8d23d8288ef2cbbf9e36118b6b99e0d7d
=====================================
libraries/ghci/ghci.cabal.in
=====================================
@@ -81,7 +81,7 @@ library
ghc-boot == @ProjectVersionMunged@,
ghc-boot-th == @ProjectVersionMunged@,
ghc-heap == @ProjectVersionMunged@,
- template-haskell == 2.16.*,
+ template-haskell == 2.17.*,
transformers == 0.5.*
if !os(windows)
=====================================
libraries/template-haskell/template-haskell.cabal.in
=====================================
@@ -3,7 +3,7 @@
-- template-haskell.cabal.
name: template-haskell
-version: 2.16.0.0
+version: 2.17.0.0
-- NOTE: Don't forget to update ./changelog.md
license: BSD3
license-file: LICENSE
=====================================
libraries/text
=====================================
@@ -1 +1 @@
-Subproject commit 1127b30e1e0affa08f056e35ad17957b12982ba3
+Subproject commit a01843250166b5559936ba5eb81f7873e709587a
=====================================
rts/Apply.cmm
=====================================
@@ -689,7 +689,7 @@ for:
// Because of eager blackholing the closure no longer has correct size so
// threadPaused() can't correctly zero the slop, so we do it here. See #15571
- // and Note [zeroing slop].
+ // and Note [zeroing slop when overwriting closures].
OVERWRITING_CLOSURE_SIZE(ap, BYTES_TO_WDS(SIZEOF_StgThunkHeader) + 2 + Words);
ENTER_R1();
=====================================
rts/LdvProfile.c
=====================================
@@ -18,6 +18,37 @@
#include "RtsUtils.h"
#include "Schedule.h"
+bool isInherentlyUsed( StgHalfWord closure_type )
+{
+ switch(closure_type) {
+ case TSO:
+ case STACK:
+ case MVAR_CLEAN:
+ case MVAR_DIRTY:
+ case TVAR:
+ case MUT_ARR_PTRS_CLEAN:
+ case MUT_ARR_PTRS_DIRTY:
+ case MUT_ARR_PTRS_FROZEN_CLEAN:
+ case MUT_ARR_PTRS_FROZEN_DIRTY:
+ case SMALL_MUT_ARR_PTRS_CLEAN:
+ case SMALL_MUT_ARR_PTRS_DIRTY:
+ case SMALL_MUT_ARR_PTRS_FROZEN_CLEAN:
+ case SMALL_MUT_ARR_PTRS_FROZEN_DIRTY:
+ case ARR_WORDS:
+ case WEAK:
+ case MUT_VAR_CLEAN:
+ case MUT_VAR_DIRTY:
+ case BCO:
+ case PRIM:
+ case MUT_PRIM:
+ case TREC_CHUNK:
+ return true;
+
+ default:
+ return false;
+ }
+}
+
/* --------------------------------------------------------------------------
* This function is called eventually on every object destroyed during
* a garbage collection, whether it is a major garbage collection or
@@ -55,33 +86,13 @@ processHeapClosureForDead( const StgClosure *c )
size = closure_sizeW(c);
- switch (info->type) {
- /*
+ /*
'inherently used' cases: do nothing.
- */
- case TSO:
- case STACK:
- case MVAR_CLEAN:
- case MVAR_DIRTY:
- case TVAR:
- case MUT_ARR_PTRS_CLEAN:
- case MUT_ARR_PTRS_DIRTY:
- case MUT_ARR_PTRS_FROZEN_CLEAN:
- case MUT_ARR_PTRS_FROZEN_DIRTY:
- case SMALL_MUT_ARR_PTRS_CLEAN:
- case SMALL_MUT_ARR_PTRS_DIRTY:
- case SMALL_MUT_ARR_PTRS_FROZEN_CLEAN:
- case SMALL_MUT_ARR_PTRS_FROZEN_DIRTY:
- case ARR_WORDS:
- case WEAK:
- case MUT_VAR_CLEAN:
- case MUT_VAR_DIRTY:
- case BCO:
- case PRIM:
- case MUT_PRIM:
- case TREC_CHUNK:
+ */
+ if(isInherentlyUsed(info->type))
return size;
+ switch (info->type) {
/*
ordinary cases: call LDV_recordDead().
*/
=====================================
rts/PrimOps.cmm
=====================================
@@ -89,22 +89,15 @@ stg_newPinnedByteArrayzh ( W_ n )
/* When we actually allocate memory, we need to allow space for the
header: */
bytes = bytes + SIZEOF_StgArrBytes;
- /* And we want to align to BA_ALIGN bytes, so we need to allow space
- to shift up to BA_ALIGN - 1 bytes: */
- bytes = bytes + BA_ALIGN - 1;
/* Now we convert to a number of words: */
words = ROUNDUP_BYTES_TO_WDS(bytes);
- ("ptr" p) = ccall allocatePinned(MyCapability() "ptr", words);
+ ("ptr" p) = ccall allocatePinned(MyCapability() "ptr", words, BA_ALIGN, SIZEOF_StgArrBytes);
if (p == NULL) {
jump stg_raisezh(base_GHCziIOziException_heapOverflow_closure);
}
TICK_ALLOC_PRIM(SIZEOF_StgArrBytes,WDS(payload_words),0);
- /* Now we need to move p forward so that the payload is aligned
- to BA_ALIGN bytes: */
- p = p + ((-p - SIZEOF_StgArrBytes) & BA_MASK);
-
/* No write barrier needed since this is a new allocation. */
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrBytes_bytes(p) = n;
@@ -121,7 +114,7 @@ stg_newAlignedPinnedByteArrayzh ( W_ n, W_ alignment )
/* we always supply at least word-aligned memory, so there's no
need to allow extra space for alignment if the requirement is less
than a word. This also prevents mischief with alignment == 0. */
- if (alignment <= SIZEOF_W) { alignment = 1; }
+ if (alignment <= SIZEOF_W) { alignment = SIZEOF_W; }
bytes = n;
@@ -131,23 +124,15 @@ stg_newAlignedPinnedByteArrayzh ( W_ n, W_ alignment )
/* When we actually allocate memory, we need to allow space for the
header: */
bytes = bytes + SIZEOF_StgArrBytes;
- /* And we want to align to <alignment> bytes, so we need to allow space
- to shift up to <alignment - 1> bytes: */
- bytes = bytes + alignment - 1;
/* Now we convert to a number of words: */
words = ROUNDUP_BYTES_TO_WDS(bytes);
- ("ptr" p) = ccall allocatePinned(MyCapability() "ptr", words);
+ ("ptr" p) = ccall allocatePinned(MyCapability() "ptr", words, alignment, SIZEOF_StgArrBytes);
if (p == NULL) {
jump stg_raisezh(base_GHCziIOziException_heapOverflow_closure);
}
TICK_ALLOC_PRIM(SIZEOF_StgArrBytes,WDS(payload_words),0);
- /* Now we need to move p forward so that the payload is aligned
- to <alignment> bytes. Note that we are assuming that
- <alignment> is a power of 2, which is technically not guaranteed */
- p = p + ((-p - SIZEOF_StgArrBytes) & (alignment - 1));
-
/* No write barrier needed since this is a new allocation. */
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrBytes_bytes(p) = n;
@@ -173,6 +158,17 @@ stg_isMutableByteArrayPinnedzh ( gcptr mba )
jump stg_isByteArrayPinnedzh(mba);
}
+/* Note [LDV profiling and resizing arrays]
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * As far as the LDV profiler is concerned arrays are "inherently used" which
+ * means we don't track their time of use and eventual destruction. We just
+ * assume they get used.
+ *
+ * Thus it is not necessary to call LDV_RECORD_CREATE when resizing them,
+ * as we used to, since the LDV profiler will essentially ignore arrays
+ * anyway.
+ */
+
// shrink size of MutableByteArray in-place
stg_shrinkMutableByteArrayzh ( gcptr mba, W_ new_size )
// MutableByteArray# s -> Int# -> State# s -> State# s
@@ -182,9 +178,7 @@ stg_shrinkMutableByteArrayzh ( gcptr mba, W_ new_size )
OVERWRITING_CLOSURE_OFS(mba, (BYTES_TO_WDS(SIZEOF_StgArrBytes) +
ROUNDUP_BYTES_TO_WDS(new_size)));
StgArrBytes_bytes(mba) = new_size;
- // See the comments in overwritingClosureOfs for an explanation
- // of the interaction with LDV profiling.
- LDV_RECORD_CREATE(mba);
+ // No need to call LDV_RECORD_CREATE. See Note [LDV profiling and resizing arrays]
return ();
}
@@ -208,7 +202,7 @@ stg_resizzeMutableByteArrayzh ( gcptr mba, W_ new_size )
OVERWRITING_CLOSURE_OFS(mba, (BYTES_TO_WDS(SIZEOF_StgArrBytes) +
new_size_wds));
StgArrBytes_bytes(mba) = new_size;
- LDV_RECORD_CREATE(mba);
+ // No need to call LDV_RECORD_CREATE. See Note [LDV profiling and resizing arrays]
return (mba);
} else {
@@ -237,9 +231,7 @@ stg_shrinkSmallMutableArrayzh ( gcptr mba, W_ new_size )
OVERWRITING_CLOSURE_OFS(mba, (BYTES_TO_WDS(SIZEOF_StgSmallMutArrPtrs) +
new_size));
StgSmallMutArrPtrs_ptrs(mba) = new_size;
- // See the comments in overwritingClosureOfs for an explanation
- // of the interaction with LDV profiling.
- LDV_RECORD_CREATE(mba);
+ // No need to call LDV_RECORD_CREATE. See Note [LDV profiling and resizing arrays]
return ();
}
=====================================
rts/ProfHeap.c
=====================================
@@ -280,6 +280,8 @@ LDV_recordDead( const StgClosure *c, uint32_t size )
uint32_t t;
counter *ctr;
+ ASSERT(!isInherentlyUsed(get_itbl(c)->type));
+
if (era > 0 && closureSatisfiesConstraints(c)) {
size -= sizeofW(StgProfHeader);
ASSERT(LDVW(c) != 0);
@@ -550,8 +552,6 @@ initHeapProfiling(void)
void
endHeapProfiling(void)
{
- StgDouble seconds;
-
if (! RtsFlags.ProfFlags.doHeapProfile) {
return;
}
@@ -594,7 +594,10 @@ endHeapProfiling(void)
stgFree(censuses);
- seconds = mut_user_time();
+ RTSStats stats;
+ getRTSStats(&stats);
+ Time mut_time = stats.mutator_cpu_ns;
+ StgDouble seconds = TimeToSecondsDbl(mut_time);
printSample(true, seconds);
printSample(false, seconds);
fclose(hp_file);
@@ -1275,8 +1278,22 @@ heapCensusChain( Census *census, bdescr *bd )
heapProfObject(census,(StgClosure*)p,size,prim);
p += size;
- /* skip over slop */
- while (p < bd->free && !*p) p++; // skip slop
+
+ /* skip over slop, see Note [slop on the heap] */
+ while (p < bd->free && !*p) p++;
+ /* Note [skipping slop in the heap profiler]
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * We make sure to zero slop that can remain after a major GC so
+ * here we can assume any slop words we see until the block's free
+ * pointer are zero. Since info pointers are always nonzero we can
+ * use this to scan for the next valid heap closure.
+ *
+ * Note that not all types of slop are relevant here, only the ones
+ * that can remain after a major GC. So essentially just large objects
+ * and pinned objects. All other closures will have been packed nice
+ * and tight into fresh blocks.
+ */
}
}
}
=====================================
rts/StgCRun.c
=====================================
@@ -29,6 +29,13 @@
#include "PosixSource.h"
#include "ghcconfig.h"
+// Enable DWARF Call-Frame Information (used for stack unwinding) on Linux.
+// This is not supported on Darwin and SmartOS due to assembler differences
+// (#15207).
+#if defined(linux_HOST_OS)
+#define ENABLE_UNWINDING
+#endif
+
#if defined(sparc_HOST_ARCH) || defined(USE_MINIINTERPRETER)
/* include Stg.h first because we want real machine regs in here: we
* have to get the value of R1 back from Stg land to C land intact.
@@ -405,7 +412,7 @@ StgRunIsImplementedInAssembler(void)
"movq %%xmm15,136(%%rax)\n\t"
#endif
-#if !defined(darwin_HOST_OS)
+#if defined(ENABLE_UNWINDING)
/*
* Let the unwinder know where we saved the registers
* See Note [Unwinding foreign exports on x86-64].
@@ -444,7 +451,7 @@ StgRunIsImplementedInAssembler(void)
#error "RSP_DELTA too big"
#endif
"\n\t"
-#endif /* !defined(darwin_HOST_OS) */
+#endif /* defined(ENABLE_UNWINDING) */
/*
* Set BaseReg
@@ -519,7 +526,7 @@ StgRunIsImplementedInAssembler(void)
"i"(RESERVED_C_STACK_BYTES + STG_RUN_STACK_FRAME_SIZE
/* rip relative to cfa */)
-#if !defined(darwin_HOST_OS)
+#if defined(ENABLE_UNWINDING)
, "i"((RSP_DELTA & 127) | (128 * ((RSP_DELTA >> 7) > 0)))
/* signed LEB128-encoded delta from rsp - byte 1 */
#if (RSP_DELTA >> 7) > 0
@@ -538,7 +545,7 @@ StgRunIsImplementedInAssembler(void)
#endif
#undef RSP_DELTA
-#endif /* !defined(darwin_HOST_OS) */
+#endif /* defined(ENABLE_UNWINDING) */
);
/*
=====================================
rts/sm/Evac.c
=====================================
@@ -298,7 +298,7 @@ copy(StgClosure **p, const StgInfoTable *info,
that has been evacuated, or unset otherwise.
-------------------------------------------------------------------------- */
-STATIC_INLINE void
+static void
evacuate_large(StgPtr p)
{
bdescr *bd;
=====================================
rts/sm/Sanity.c
=====================================
@@ -475,7 +475,7 @@ void checkHeapChain (bdescr *bd)
ASSERT( size >= MIN_PAYLOAD_SIZE + sizeofW(StgHeader) );
p += size;
- /* skip over slop */
+ /* skip over slop, see Note [slop on the heap] */
while (p < bd->free &&
(*p < 0x1000 || !LOOKS_LIKE_INFO_PTR(*p))) { p++; }
}
@@ -796,12 +796,17 @@ static void checkGeneration (generation *gen,
ASSERT(countBlocks(gen->large_objects) == gen->n_large_blocks);
#if defined(THREADED_RTS)
+ // Note [heap sanity checking with SMP]
+ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ //
// heap sanity checking doesn't work with SMP for two reasons:
- // * we can't zero the slop (see Updates.h). However, we can sanity-check
- // the heap after a major gc, because there is no slop.
//
- // * the nonmoving collector may be mutating its large object lists, unless we
- // were in fact called by the nonmoving collector.
+ // * We can't zero the slop. However, we can sanity-check the heap after a
+ // major gc, because there is no slop. See also Updates.h and Note
+ // [zeroing slop when overwriting closures].
+ //
+ // * The nonmoving collector may be mutating its large object lists,
+ // unless we were in fact called by the nonmoving collector.
if (!after_major_gc) return;
#endif
=====================================
rts/sm/Storage.c
=====================================
@@ -907,6 +907,54 @@ accountAllocation(Capability *cap, W_ n)
}
+/* Note [slop on the heap]
+ * ~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * We use the term "slop" to refer to allocated memory on the heap which isn't
+ * occupied by any closure. Usually closures are packed tightly into the heap
+ * blocks, storage for one immediately following another. However there are
+ * situations where slop is left behind:
+ *
+ * - Allocating large objects (BF_LARGE)
+ *
+ * These are given an entire block, but if they don't fill the entire block
+ * the rest is slop. See allocateMightFail in Storage.c.
+ *
+ * - Allocating pinned objects with alignment (BF_PINNED)
+ *
+ * These are packed into blocks like normal closures; however, they
+ * can have alignment constraints and any memory that needed to be skipped for
+ * alignment becomes slop. See allocatePinned in Storage.c.
+ *
+ * - Shrinking (Small)Mutable(Byte)Array#
+ *
+ * The size of these closures can be decreased after allocation, leaving
+ * any now-unused memory behind as slop. See stg_resizzeMutableByteArrayzh,
+ * stg_shrinkSmallMutableArrayzh, and stg_shrinkMutableByteArrayzh in
+ * PrimOps.cmm.
+ *
+ * This type of slop is extra tricky because it can also be pinned and
+ * large.
+ *
+ * - Overwriting closures
+ *
+ * During GC the RTS overwrites closures with forwarding pointers; this can
+ * leave slop behind depending on the size of the closure being
+ * overwritten. See Note [zeroing slop when overwriting closures].
+ *
+ * In various situations we actually zero slop so that we can linearly scan
+ * over blocks of closures. This trick is used by the sanity checking code
+ * and the heap profiler; see Note [skipping slop in the heap profiler].
+ *
+ * When profiling we zero:
+ * - Pinned object alignment slop, see MEMSET_IF_PROFILING_W in allocatePinned.
+ * - Shrunk array slop, see OVERWRITING_MUTABLE_CLOSURE.
+ *
+ * When performing LDV profiling or using a (single threaded) debug RTS we zero
+ * slop even when overwriting immutable closures, see Note [zeroing slop when
+ * overwriting closures].
+ */
+
/* -----------------------------------------------------------------------------
StgPtr allocate (Capability *cap, W_ n)
@@ -1059,6 +1107,26 @@ allocateMightFail (Capability *cap, W_ n)
return p;
}
+/**
+ * Calculate the number of words we need to add to 'p' so it satisfies the
+ * alignment constraint '(p + off) & (align-1) == 0'.
+ */
+#define ALIGN_WITH_OFF_W(p, align, off) \
+ (((-((uintptr_t)p) - off) & (align-1)) / sizeof(W_))
+
+/**
+ * When profiling we zero the space used for alignment. This allows us to
+ * traverse pinned blocks in the heap profiler.
+ *
+ * See Note [skipping slop in the heap profiler]
+ */
+#if defined(PROFILING)
+#define MEMSET_IF_PROFILING_W(p, val, len_w) memset(p, val, (len_w) * sizeof(W_))
+#else
+#define MEMSET_IF_PROFILING_W(p, val, len_w) \
+ do { (void)(p); (void)(val); (void)(len_w); } while(0)
+#endif
+
/* ---------------------------------------------------------------------------
Allocate a fixed/pinned object.
@@ -1084,29 +1152,49 @@ allocateMightFail (Capability *cap, W_ n)
------------------------------------------------------------------------- */
StgPtr
-allocatePinned (Capability *cap, W_ n)
+allocatePinned (Capability *cap, W_ n /*words*/, W_ alignment /*bytes*/, W_ align_off /*bytes*/)
{
StgPtr p;
bdescr *bd;
+ // Alignment and offset have to be a power of two
+ ASSERT(alignment && !(alignment & (alignment - 1)));
+ ASSERT(alignment >= sizeof(W_));
+
+ ASSERT(!(align_off & (align_off - 1)));
+
+ const StgWord alignment_w = alignment / sizeof(W_);
+
// If the request is for a large object, then allocate()
// will give us a pinned object anyway.
if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) {
- p = allocateMightFail(cap, n);
+ // For large objects we don't bother optimizing the number of words
+ // allocated for alignment reasons. Here we just allocate the maximum
+ // number of extra words we could possibly need to satisfy the alignment
+ // constraint.
+ p = allocateMightFail(cap, n + alignment_w - 1);
if (p == NULL) {
return NULL;
} else {
Bdescr(p)->flags |= BF_PINNED;
+ W_ off_w = ALIGN_WITH_OFF_W(p, alignment, align_off);
+ MEMSET_IF_PROFILING_W(p, 0, off_w);
+ p += off_w;
+ MEMSET_IF_PROFILING_W(p + n, 0, alignment_w - off_w - 1);
return p;
}
}
- accountAllocation(cap, n);
bd = cap->pinned_object_block;
+ W_ off_w = 0;
+
+ if(bd)
+ off_w = ALIGN_WITH_OFF_W(bd->free, alignment, align_off);
+
// If we don't have a block of pinned objects yet, or the current
// one isn't large enough to hold the new object, get a new one.
- if (bd == NULL || (bd->free + n) > (bd->start + BLOCK_SIZE_W)) {
+ if (bd == NULL || (bd->free + off_w + n) > (bd->start + BLOCK_SIZE_W)) {
// stash the old block on cap->pinned_object_blocks. On the
// next GC cycle these objects will be moved to
@@ -1158,10 +1246,20 @@ allocatePinned (Capability *cap, W_ n)
// the next GC the BF_EVACUATED flag will be cleared, and the
// block will be promoted as usual (if anything in it is
// live).
+
+ off_w = ALIGN_WITH_OFF_W(bd->free, alignment, align_off);
}
p = bd->free;
+
+ MEMSET_IF_PROFILING_W(p, 0, off_w);
+
+ n += off_w;
+ p += off_w;
bd->free += n;
+
+ accountAllocation(cap, n);
+
return p;
}
=====================================
testsuite/tests/ghci/should_fail/T18052b.script
=====================================
@@ -0,0 +1,2 @@
+:set -XMagicHash
+let (%%%) = 1#
=====================================
testsuite/tests/ghci/should_fail/T18052b.stderr
=====================================
@@ -0,0 +1,3 @@
+
+<interactive>:1:1: error:
+ GHCi can't bind a variable of unlifted type: (%%%) :: GHC.Prim.Int#
=====================================
testsuite/tests/ghci/should_fail/all.T
=====================================
@@ -3,3 +3,4 @@ test('T10549a', [], ghci_script, ['T10549a.script'])
test('T15055', normalise_version('ghc'), ghci_script, ['T15055.script'])
test('T16013', [], ghci_script, ['T16013.script'])
test('T16287', [], ghci_script, ['T16287.script'])
+test('T18052b', [], ghci_script, ['T18052b.script'])
=====================================
testsuite/tests/partial-sigs/should_compile/ExtraConstraints3.stderr
=====================================
@@ -1,28 +1,28 @@
TYPE SIGNATURES
- !! :: forall {a}. [a] -> Int -> a
- $ :: forall {a} {b}. (a -> b) -> a -> b
- $! :: forall {a} {b}. (a -> b) -> a -> b
- && :: Bool -> Bool -> Bool
- * :: forall {a}. Num a => a -> a -> a
- ** :: forall {a}. Floating a => a -> a -> a
- + :: forall {a}. Num a => a -> a -> a
- ++ :: forall {a}. [a] -> [a] -> [a]
- - :: forall {a}. Num a => a -> a -> a
- . :: forall {b} {c} {a}. (b -> c) -> (a -> b) -> a -> c
- / :: forall {a}. Fractional a => a -> a -> a
- /= :: forall {a}. Eq a => a -> a -> Bool
- < :: forall {a}. Ord a => a -> a -> Bool
- <= :: forall {a}. Ord a => a -> a -> Bool
- =<< ::
+ (!!) :: forall {a}. [a] -> Int -> a
+ ($) :: forall {a} {b}. (a -> b) -> a -> b
+ ($!) :: forall {a} {b}. (a -> b) -> a -> b
+ (&&) :: Bool -> Bool -> Bool
+ (*) :: forall {a}. Num a => a -> a -> a
+ (**) :: forall {a}. Floating a => a -> a -> a
+ (+) :: forall {a}. Num a => a -> a -> a
+ (++) :: forall {a}. [a] -> [a] -> [a]
+ (-) :: forall {a}. Num a => a -> a -> a
+ (.) :: forall {b} {c} {a}. (b -> c) -> (a -> b) -> a -> c
+ (/) :: forall {a}. Fractional a => a -> a -> a
+ (/=) :: forall {a}. Eq a => a -> a -> Bool
+ (<) :: forall {a}. Ord a => a -> a -> Bool
+ (<=) :: forall {a}. Ord a => a -> a -> Bool
+ (=<<) ::
forall {m :: * -> *} {a} {b}. Monad m => (a -> m b) -> m a -> m b
- == :: forall {a}. Eq a => a -> a -> Bool
- > :: forall {a}. Ord a => a -> a -> Bool
- >= :: forall {a}. Ord a => a -> a -> Bool
- >> :: forall {m :: * -> *} {a} {b}. Monad m => m a -> m b -> m b
- >>= ::
+ (==) :: forall {a}. Eq a => a -> a -> Bool
+ (>) :: forall {a}. Ord a => a -> a -> Bool
+ (>=) :: forall {a}. Ord a => a -> a -> Bool
+ (>>) :: forall {m :: * -> *} {a} {b}. Monad m => m a -> m b -> m b
+ (>>=) ::
forall {m :: * -> *} {a} {b}. Monad m => m a -> (a -> m b) -> m b
- ^ :: forall {b} {a}. (Integral b, Num a) => a -> b -> a
- ^^ :: forall {a} {b}. (Fractional a, Integral b) => a -> b -> a
+ (^) :: forall {b} {a}. (Integral b, Num a) => a -> b -> a
+ (^^) :: forall {a} {b}. (Fractional a, Integral b) => a -> b -> a
abs :: forall {a}. Num a => a -> a
acos :: forall {a}. Floating a => a -> a
acosh :: forall {a}. Floating a => a -> a
@@ -234,7 +234,7 @@ TYPE SIGNATURES
zipWith3 ::
forall {a} {b} {c} {d}.
(a -> b -> c -> d) -> [a] -> [b] -> [c] -> [d]
- || :: Bool -> Bool -> Bool
+ (||) :: Bool -> Bool -> Bool
Dependent modules: []
-Dependent packages: [base-4.13.0.0, ghc-prim-0.6.1,
- integer-gmp-1.0.2.0]
+Dependent packages: [base-4.14.0.0, ghc-prim-0.6.1,
+ integer-gmp-1.0.3.0]
=====================================
testsuite/tests/printer/T18052a.hs
=====================================
@@ -0,0 +1,8 @@
+{-# LANGUAGE PatternSynonyms #-}
+{-# LANGUAGE TypeOperators #-}
+module T18052a where
+
+(+++) = (++)
+pattern x :||: y = (x,y)
+type (^^^) = Either
+data (&&&)
=====================================
testsuite/tests/printer/T18052a.stderr
=====================================
@@ -0,0 +1,42 @@
+TYPE SIGNATURES
+ (+++) :: forall {a}. [a] -> [a] -> [a]
+TYPE CONSTRUCTORS
+ data type (&&&){0} :: *
+ type synonym (^^^){0} :: * -> * -> *
+PATTERN SYNONYMS
+ (:||:) :: forall {a} {b}. a -> b -> (a, b)
+Dependent modules: []
+Dependent packages: [base-4.14.0.0, ghc-prim-0.6.1,
+ integer-gmp-1.0.3.0]
+
+==================== Tidy Core ====================
+Result size of Tidy Core
+ = {terms: 18, types: 53, coercions: 0, joins: 0/0}
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+T18052a.$b:||: :: forall {a} {b}. a -> b -> (a, b)
+[GblId, Arity=2, Unf=OtherCon []]
+T18052a.$b:||: = GHC.Tuple.(,)
+
+-- RHS size: {terms: 1, types: 0, coercions: 0, joins: 0/0}
+(+++) :: forall {a}. [a] -> [a] -> [a]
+[GblId]
+(+++) = (++)
+
+-- RHS size: {terms: 13, types: 20, coercions: 0, joins: 0/0}
+T18052a.$m:||:
+ :: forall {rep :: GHC.Types.RuntimeRep} {r :: TYPE rep} {a} {b}.
+ (a, b) -> (a -> b -> r) -> (GHC.Prim.Void# -> r) -> r
+[GblId, Arity=3, Unf=OtherCon []]
+T18052a.$m:||:
+ = \ (@(rep :: GHC.Types.RuntimeRep))
+ (@(r :: TYPE rep))
+ (@a)
+ (@b)
+ (scrut :: (a, b))
+ (cont :: a -> b -> r)
+ _ [Occ=Dead] ->
+ case scrut of { (x, y) -> cont x y }
+
+
+
=====================================
testsuite/tests/printer/all.T
=====================================
@@ -57,3 +57,5 @@ test('T14306', ignore_stderr, makefile_test, ['T14306'])
test('T14343', normal, compile_fail, [''])
test('T14343b', normal, compile_fail, [''])
test('T15761', normal, compile_fail, [''])
+test('T18052a', normal, compile,
+ ['-ddump-simpl -ddump-types -dno-typeable-binds -dsuppress-uniques'])
=====================================
utils/ghc-cabal/ghc.mk
=====================================
@@ -23,9 +23,9 @@ CABAL_CONSTRAINT := --constraint="Cabal == $(CABAL_DOTTED_VERSION)"
# macros is triggered by `-hide-all-packages`, so we have to explicitly
# enumerate all packages we need in scope.
ifeq "$(Windows_Host)" "YES"
-CABAL_BUILD_DEPS := ghc-prim base array transformers time containers bytestring deepseq process pretty directory filepath Win32
+CABAL_BUILD_DEPS := ghc-prim base array transformers time containers bytestring deepseq process pretty directory filepath Win32 template-haskell
else
-CABAL_BUILD_DEPS := ghc-prim base array transformers time containers bytestring deepseq process pretty directory filepath unix
+CABAL_BUILD_DEPS := ghc-prim base array transformers time containers bytestring deepseq process pretty directory filepath unix template-haskell
endif
ghc-cabal_DIST_BINARY_NAME = ghc-cabal$(exeext0)
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/b48d78d3fa1c4e0b70e9f98478a101fb4dd57ea3...a5fddb249ce819c5d3aa9a33a63e99f8042a0ced