[Git][ghc/ghc][wip/angerman/out-of-range-reloc] 3 commits: Fix GHCi :print on big-endian platforms

Moritz Angermann gitlab at gitlab.haskell.org
Fri Jul 10 10:39:38 UTC 2020



Moritz Angermann pushed to branch wip/angerman/out-of-range-reloc at Glasgow Haskell Compiler / GHC


Commits:
b7de4b96 by Stefan Schulze Frielinghaus at 2020-07-09T09:49:22-04:00
Fix GHCi :print on big-endian platforms

On big-endian platforms executing

  import GHC.Exts
  data Foo = Foo Float# deriving Show
  foo = Foo 42.0#
  foo
  :print foo

results in an arithmetic overflow exception, which is caused by the
function index, where moveBytes equals
  word_size - (r + item_size_b) * 8
Here we have a mixture of units: both word_size and item_size_b have
unit bytes, whereas r has unit bits.  On 64-bit platforms moveBytes
then equals
  8 - (0 + 4) * 8
which results in a negative, and therefore invalid, second argument for
a shiftL operation.

To make things clearer, note that the expression
  (word .&. (mask `shiftL` moveBytes)) `shiftR` moveBytes
is equivalent to
  (word `shiftR` moveBytes) .&. mask
On big-endian platforms the shift must be a left shift instead of a
right shift. For reasons of symmetry, two shifts are used to zero out
the unwanted bits rather than a mask. Thus the fixed version is
  case endian of
    BigEndian    -> (word `shiftL` moveBits) `shiftR` zeroOutBits `shiftL` zeroOutBits
    LittleEndian -> (word `shiftR` moveBits) `shiftL` zeroOutBits `shiftR` zeroOutBits

Fixes #16548 and #14455
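
For illustration only, here is a minimal standalone C sketch of the fixed
shift arithmetic (this is not the GHC Haskell code from
compiler/GHC/Runtime/Heap/Inspect.hs; the function name and example value
are made up):

  #include <stdint.h>
  #include <stdio.h>

  /* Extract a size_b-byte sub-word that starts r bytes into a word_size-byte
   * word.  All shift amounts are in bits and never negative. */
  static uint64_t extract_subword(uint64_t word, unsigned size_b, unsigned r,
                                  unsigned word_size, int big_endian)
  {
      unsigned moveBits    = r * 8;
      unsigned zeroOutBits = (word_size - size_b) * 8;
      if (big_endian)
          return ((word << moveBits) >> zeroOutBits) << zeroOutBits;
      else
          return ((word >> moveBits) << zeroOutBits) >> zeroOutBits;
  }

  int main(void)
  {
      /* A 4-byte Float# field at byte offset 0 of an 8-byte word; the old
       * code would have computed a shift of 8 - (0 + 4) * 8 = -24 here. */
      uint64_t w = 0x4228000000000000ULL;   /* 42.0f in the top four bytes */
      printf("%#llx\n", (unsigned long long) extract_subword(w, 4, 0, 8, 1));
      return 0;
  }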

- - - - -
3656dff8 by Sylvain Henry at 2020-07-09T09:50:01-04:00
LLVM: fix MO_S_Mul2 support (#18434)

The value indicating whether the carry is useful wasn't taken into account:
the LLVM code generator only handled the high and low results (dstH, dstL),
not the carry result (dstC).
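
As a side note, the condition the patch computes can be stated in plain C;
the following is a hedged standalone sketch (not the LLVM backend code) for
a 32-bit signed multiply, assuming arithmetic right shifts for signed
values, just as the generated LLVM code uses ashr explicitly:

  #include <stdint.h>
  #include <stdio.h>

  /* The high word of a signed widening multiply is only "useful" when it is
   * not merely the sign-extension of the low word. */
  static void s_mul2_32(int32_t x, int32_t y,
                        int32_t *hi, int32_t *lo, int32_t *carry_useful)
  {
      int64_t p = (int64_t) x * (int64_t) y;  /* widen to twice the width */
      *lo = (int32_t) p;                      /* truncate: low half       */
      *hi = (int32_t) (p >> 32);              /* high half                */
      *carry_useful = (*hi != (*lo >> 31));   /* full arithmetic shift    */
  }

  int main(void)
  {
      int32_t hi, lo, c;
      s_mul2_32(100000, 100000, &hi, &lo, &c);  /* overflows 32 bits: c = 1 */
      printf("hi=%d lo=%d carry_useful=%d\n", hi, lo, c);
      s_mul2_32(-7, 6, &hi, &lo, &c);           /* fits in 32 bits:   c = 0 */
      printf("hi=%d lo=%d carry_useful=%d\n", hi, lo, c);
      return 0;
  }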

- - - - -
aedfeb0b by Moritz Angermann at 2020-07-10T18:38:57+08:00
[linker] Fix out-of-range relocations.

mmap may return addresses all over the place. mmap_next will ensure we get
the next free page at or after the requested address.

This is especially important for linking on aarch64, where the PIC memory
model only admits relocations within a +-4GB range, and as such we can't
work with arbitrary object locations in memory.

Of note: we map the rts into the process's address space, so any mapped
object must not end up more than 4GB away from it.
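
To make the +-4GB constraint concrete, here is a small illustrative check
(assumptions mine, not code from this patch): an AArch64 ADRP instruction,
which is what an AARCH64_ADR_PREL_PG_HI21 relocation patches, encodes a
signed 21-bit page offset, so the referenced page must lie within +-4GB of
the instruction's page.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Can an ADRP at `from` reach the page containing `to`?  The immediate is
   * a signed 21-bit count of 4KB pages, i.e. roughly +-4GB. */
  static bool adrp_reachable(uint64_t from, uint64_t to)
  {
      int64_t page_delta = (int64_t) (to >> 12) - (int64_t) (from >> 12);
      return page_delta >= -(1LL << 20) && page_delta < (1LL << 20);
  }

  int main(void)
  {
      uint64_t rts_sym = 0x100000000ULL;                  /* made-up address */
      printf("%d\n", adrp_reachable(rts_sym, rts_sym + (3ULL << 30))); /* 1 */
      printf("%d\n", adrp_reachable(rts_sym, rts_sym + (5ULL << 30))); /* 0 */
      return 0;
  }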

- - - - -


11 changed files:

- compiler/GHC/CmmToLlvm/CodeGen.hs
- compiler/GHC/Runtime/Heap/Inspect.hs
- rts/Linker.c
- rts/LinkerInternals.h
- rts/linker/Elf.c
- rts/linker/LoadArchive.c
- rts/linker/M32Alloc.c
- rts/linker/MachO.c
- rts/linker/SymbolExtras.c
- rts/linker/elf_got.c
- testsuite/tests/ghci.debugger/scripts/all.T


Changes:

=====================================
compiler/GHC/CmmToLlvm/CodeGen.hs
=====================================
@@ -353,7 +353,7 @@ genCall (PrimTarget (MO_U_Mul2 w)) [dstH, dstL] [lhs, rhs] = runStmtsDecls $ do
     statement $ Store retL dstRegL
     statement $ Store retH dstRegH
 
-genCall (PrimTarget (MO_S_Mul2 w)) [dstH, dstL] [lhs, rhs] = runStmtsDecls $ do
+genCall (PrimTarget (MO_S_Mul2 w)) [dstC, dstH, dstL] [lhs, rhs] = runStmtsDecls $ do
     let width = widthToLlvmInt w
         bitWidth = widthInBits w
         width2x = LMInt (bitWidth * 2)
@@ -373,10 +373,18 @@ genCall (PrimTarget (MO_S_Mul2 w)) [dstH, dstL] [lhs, rhs] = runStmtsDecls $ do
     retShifted <- doExprW width2x $ LlvmOp LM_MO_AShr retV widthLlvmLit
     -- And extract them into retH.
     retH <- doExprW width $ Cast LM_Trunc retShifted width
+    -- Check if the carry is useful by doing a full arithmetic right shift on
+    -- retL and comparing the result with retH
+    let widthLlvmLitm1 = LMLitVar $ LMIntLit (fromIntegral bitWidth - 1) width
+    retH' <- doExprW width $ LlvmOp LM_MO_AShr retL widthLlvmLitm1
+    retC1  <- doExprW i1 $ Compare LM_CMP_Ne retH retH' -- Compare op returns a 1-bit value (i1)
+    retC   <- doExprW width $ Cast LM_Zext retC1 width  -- so we zero-extend it
     dstRegL <- getCmmRegW (CmmLocal dstL)
     dstRegH <- getCmmRegW (CmmLocal dstH)
+    dstRegC <- getCmmRegW (CmmLocal dstC)
     statement $ Store retL dstRegL
     statement $ Store retH dstRegH
+    statement $ Store retC dstRegC
 
 -- MO_U_QuotRem2 is another case we handle by widening the registers to double
 -- the width and use normal LLVM instructions (similarly to the MO_U_Mul2). The


=====================================
compiler/GHC/Runtime/Heap/Inspect.hs
=====================================
@@ -870,20 +870,21 @@ extractSubTerms recurse clos = liftM thdOf3 . go 0 0
                 (error "unboxedTupleTerm: no HValue for unboxed tuple") terms
 
     -- Extract a sub-word sized field from a word
-    index item_size_b index_b word_size endian =
-        (word .&. (mask `shiftL` moveBytes)) `shiftR` moveBytes
-      where
-        mask :: Word
-        mask = case item_size_b of
-            1 -> 0xFF
-            2 -> 0xFFFF
-            4 -> 0xFFFFFFFF
-            _ -> panic ("Weird byte-index: " ++ show index_b)
-        (q,r) = index_b `quotRem` word_size
-        word = array!!q
-        moveBytes = case endian of
-         BigEndian    -> word_size - (r + item_size_b) * 8
-         LittleEndian -> r * 8
+    -- A sub word is aligned to the left-most part of a word on big-endian
+    -- platforms, and to the right-most part of a word on little-endian
+    -- platforms.  This allows it to be written to and read back from memory
+    -- independently of endianness.  Bits not belonging to a sub word are
+    -- zeroed out, although this is, strictly speaking, not necessary, since
+    -- a sub word is read back from memory via appropriately cast pointers
+    -- (see e.g. ppr_float of cPprTermBase).
+    index size_b aligned_idx word_size endian = case endian of
+      BigEndian    -> (word `shiftL` moveBits) `shiftR` zeroOutBits `shiftL` zeroOutBits
+      LittleEndian -> (word `shiftR` moveBits) `shiftL` zeroOutBits `shiftR` zeroOutBits
+     where
+      (q, r) = aligned_idx `quotRem` word_size
+      word = array!!q
+      moveBits = r * 8
+      zeroOutBits = (word_size - size_b) * 8
 
 
 -- | Fast, breadth-first Type reconstruction


=====================================
rts/Linker.c
=====================================
@@ -188,7 +188,7 @@ int ocTryLoad( ObjectCode* oc );
  *
  * MAP_32BIT not available on OpenBSD/amd64
  */
-#if defined(MAP_32BIT) && defined(x86_64_HOST_ARCH)
+#if defined(MAP_32BIT) && (defined(x86_64_HOST_ARCH) || (defined(aarch64_TARGET_ARCH) || defined(aarch64_HOST_ARCH)))
 #define MAP_LOW_MEM
 #define TRY_MAP_32BIT MAP_32BIT
 #else
@@ -214,10 +214,22 @@ int ocTryLoad( ObjectCode* oc );
  * systems, we have to pick a base address in the low 2Gb of the address space
  * and try to allocate memory from there.
  *
+ * The same holds for aarch64, where the default model, even with PIC,
+ * is 4GB. The linker is free to emit AARCH64_ADR_PREL_PG_HI21
+ * relocations.
+ *
  * We pick a default address based on the OS, but also make this
  * configurable via an RTS flag (+RTS -xm)
  */
-#if defined(MAP_32BIT) || DEFAULT_LINKER_ALWAYS_PIC
+
+#if (defined(aarch64_TARGET_ARCH) || defined(aarch64_HOST_ARCH))
+// Try to use stg_upd_frame_info as the base. We need to be within +-4GB of that
+// address, otherwise we violate the aarch64 memory model. Any object we load
+// can potentially reference any of the ones we bake into the binary (and list)
+// in RtsSymbols. Thus we'll need to be within +-4GB of those;
+// stg_upd_frame_info is a good candidate as it's referenced often.
+#define MMAP_32BIT_BASE_DEFAULT (void*)&stg_upd_frame_info;
+#elif defined(MAP_32BIT) || DEFAULT_LINKER_ALWAYS_PIC
 // Try to use MAP_32BIT
 #define MMAP_32BIT_BASE_DEFAULT 0
 #else
@@ -1040,11 +1052,47 @@ resolveSymbolAddr (pathchar* buffer, int size,
 }
 
 #if RTS_LINKER_USE_MMAP
+
+/* -----------------------------------------------------------------------------
+   Occasionally we depend on the mmap'd region being close to already mmap'd regions.
+
+   Our static in-memory linker may be restricted by the architecture's relocation
+   range. E.g. aarch64 has a +-4GB range for PIC code, thus we'd preferably
+   get memory for the linker close to existing mappings.  mmap on its own is
+   free to return any memory location, independent of what the preferred
+   location argument indicates.
+
+   For example mmap (via qemu) might give you addresses all over the available
+   memory range if the requested location is already occupied.
+
+   mmap_next will do a linear search from the start page upwards to find a
+   suitable location that is as close as possible to the location (provided
+   via the first argument).
+   -------------------------------------------------------------------------- */
+
+void*
+mmap_next(void *addr, size_t length, int prot, int flags, int fd, off_t offset) {
+  if(addr == NULL) return mmap(addr, length, prot, flags, fd, offset);
+  // we are going to look for up to pageSize * 1024 * 1024 (4GB) from the
+  // address.
+  size_t pageSize = getPageSize();
+  for(int i = (uintptr_t)addr & (pageSize-1) ? 1 : 0; i < 1024*1024; i++) {
+    void *target = (void*)(((uintptr_t)addr & ~(pageSize-1))+(i*pageSize));
+    void *mem = mmap(target, length, prot, flags, fd, offset);
+    if(mem == NULL) return mem;
+    if(mem == target) return mem;
+    munmap(mem, length);
+    IF_DEBUG(linker && (i % 1024 == 0),
+      debugBelch("mmap_next failed to find suitable space in %p - %p\n", addr, target));
+  }
+  return NULL;
+}
+
 //
 // Returns NULL on failure.
 //
 void *
-mmapForLinker (size_t bytes, uint32_t flags, int fd, int offset)
+mmapForLinker (size_t bytes, uint32_t prot, uint32_t flags, int fd, int offset)
 {
    void *map_addr = NULL;
    void *result;
@@ -1065,15 +1113,14 @@ mmap_again:
        map_addr = mmap_32bit_base;
    }
 
-   const int prot = PROT_READ | PROT_WRITE;
    IF_DEBUG(linker,
             debugBelch("mmapForLinker: \tprotection %#0x\n", prot));
    IF_DEBUG(linker,
             debugBelch("mmapForLinker: \tflags      %#0x\n",
                        MAP_PRIVATE | tryMap32Bit | fixed | flags));
 
-   result = mmap(map_addr, size, prot,
-                 MAP_PRIVATE|tryMap32Bit|fixed|flags, fd, offset);
+   result = mmap_next(map_addr, size, prot,
+                      MAP_PRIVATE|tryMap32Bit|fixed|flags, fd, offset);
 
    if (result == MAP_FAILED) {
        sysErrorBelch("mmap %" FMT_Word " bytes at %p",(W_)size,map_addr);
@@ -1126,6 +1173,28 @@ mmap_again:
            goto mmap_again;
        }
    }
+#elif (defined(aarch64_TARGET_ARCH) || defined(aarch64_HOST_ARCH))
+    // for aarch64 we need to make sure we stay within 4GB of the
+    // mmap_32bit_base, and we also do not want to update it.
+//    if (mmap_32bit_base != (void*)&stg_upd_frame_info) {
+    if (result == map_addr) {
+        mmap_32bit_base = (void*)((uintptr_t)map_addr + size);
+    } else {
+        // upper limit 4GB - size of the object file - 1mb wiggle room.
+        if(llabs((uintptr_t)result - (uintptr_t)&stg_upd_frame_info) > (2<<32) - size - (2<<20)) {
+            // not within range :(
+            debugTrace(DEBUG_linker,
+                        "MAP_32BIT didn't work; gave us %lu bytes at 0x%p",
+                        bytes, result);
+            munmap(result, size);
+            // TODO: some abort/mmap_32bit_base recomputation based on
+            //       if mmap_32bit_base is changed, or still at stg_upd_frame_info
+            goto mmap_again;
+        } else {
+            mmap_32bit_base = (void*)((uintptr_t)result + size);
+        }
+    }
+//   }
 #endif
 
    IF_DEBUG(linker,
@@ -1454,9 +1523,9 @@ preloadObjectFile (pathchar *path)
     * See also the misalignment logic for darwin below.
     */
 #if defined(ios_HOST_OS)
-   image = mmap(NULL, fileSize, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+   image = mmapForLinker(fileSize, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
 #else
-   image = mmap(NULL, fileSize, PROT_READ|PROT_WRITE|PROT_EXEC,
+   image = mmapForLinker(fileSize, PROT_READ|PROT_WRITE|PROT_EXEC,
                 MAP_PRIVATE, fd, 0);
 #endif
 


=====================================
rts/LinkerInternals.h
=====================================
@@ -14,6 +14,7 @@
 
 #if RTS_LINKER_USE_MMAP
 #include <sys/mman.h>
+void* mmap_next(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
 #endif
 
 void printLoadedObjects(void);
@@ -293,7 +294,7 @@ void exitLinker( void );
 void freeObjectCode (ObjectCode *oc);
 SymbolAddr* loadSymbol(SymbolName *lbl, RtsSymbolInfo *pinfo);
 
-void *mmapForLinker (size_t bytes, uint32_t flags, int fd, int offset);
+void *mmapForLinker (size_t bytes, uint32_t prot, uint32_t flags, int fd, int offset);
 void mmapForLinkerMarkExecutable (void *start, size_t len);
 
 void addProddableBlock ( ObjectCode* oc, void* start, int size );


=====================================
rts/linker/Elf.c
=====================================
@@ -637,7 +637,7 @@ mapObjectFileSection (int fd, Elf_Word offset, Elf_Word size,
 
     pageOffset = roundDownToPage(offset);
     pageSize = roundUpToPage(offset-pageOffset+size);
-    p = mmapForLinker(pageSize, 0, fd, pageOffset);
+    p = mmapForLinker(pageSize, PROT_READ | PROT_WRITE, 0, fd, pageOffset);
     if (p == NULL) return NULL;
     *mapped_size = pageSize;
     *mapped_offset = pageOffset;
@@ -709,7 +709,7 @@ ocGetNames_ELF ( ObjectCode* oc )
                * address might be out of range for sections that are mmaped.
                */
               alloc = SECTION_MMAP;
-              start = mmapForLinker(size, MAP_ANONYMOUS, -1, 0);
+              start = mmapForLinker(size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
               mapped_start = start;
               mapped_offset = 0;
               mapped_size = roundUpToPage(size);
@@ -751,8 +751,9 @@ ocGetNames_ELF ( ObjectCode* oc )
           unsigned nstubs = numberOfStubsForSection(oc, i);
           unsigned stub_space = STUB_SIZE * nstubs;
 
-          void * mem = mmapForLinker(size+stub_space, MAP_ANON, -1, 0);
-          if( mem == NULL ) {
+          void * mem = mmapForLinker(size+stub_space, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
+
+          if( mem == MAP_FAILED ) {
               barf("failed to mmap allocated memory to load section %d. "
                    "errno = %d", i, errno);
           }
@@ -841,6 +842,26 @@ ocGetNames_ELF ( ObjectCode* oc )
 
       unsigned curSymbol = 0;
 
+      unsigned long common_size = 0;
+      unsigned long common_used = 0;
+      for(ElfSymbolTable *symTab = oc->info->symbolTables;
+           symTab != NULL; symTab = symTab->next) {
+           for (size_t j = 0; j < symTab->n_symbols; j++) {
+               ElfSymbol *symbol = &symTab->symbols[j];
+               if (SHN_COMMON == symTab->symbols[j].elf_sym->st_shndx) {
+                   common_size += symbol->elf_sym->st_size;
+               }
+           }
+      }
+      void * common_mem = NULL;
+      if(common_size > 0) {
+          common_mem = mmapForLinker(common_size,
+                            PROT_READ | PROT_WRITE,
+                            MAP_ANON | MAP_PRIVATE,
+                            -1, 0);
+          ASSERT(common_mem != NULL);
+      }
+
       //TODO: we ignore local symbols anyway right? So we can use the
       //      shdr[i].sh_info to get the index of the first non-local symbol
       // ie we should use j = shdr[i].sh_info
@@ -876,12 +897,15 @@ ocGetNames_ELF ( ObjectCode* oc )
 
                if (shndx == SHN_COMMON) {
                    isLocal = false;
-                   symbol->addr = stgCallocBytes(1, symbol->elf_sym->st_size,
-                                       "ocGetNames_ELF(COMMON)");
-                   /*
-                   debugBelch("COMMON symbol, size %d name %s\n",
-                                   stab[j].st_size, nm);
-                   */
+                   ASSERT(common_used < common_size);
+                   ASSERT(common_mem);
+                   symbol->addr = (void*)((uintptr_t)common_mem + common_used);
+                   common_used += symbol->elf_sym->st_size;
+                   ASSERT(common_used <= common_size);
+
+                   debugBelch("COMMON symbol, size %ld name %s allocated at %p\n",
+                                   symbol->elf_sym->st_size, nm, symbol->addr);
+
                    /* Pointless to do addProddableBlock() for this area,
                       since the linker should never poke around in it. */
                } else if ((ELF_ST_BIND(symbol->elf_sym->st_info) == STB_GLOBAL


=====================================
rts/linker/LoadArchive.c
=====================================
@@ -489,7 +489,7 @@ static HsInt loadArchive_ (pathchar *path)
 
 #if defined(darwin_HOST_OS) || defined(ios_HOST_OS)
             if (RTS_LINKER_USE_MMAP)
-                image = mmapForLinker(memberSize, MAP_ANONYMOUS, -1, 0);
+                image = mmapForLinker(memberSize, PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
             else {
                 /* See loadObj() */
                 misalignment = machoGetMisalignment(f);
@@ -548,7 +548,7 @@ while reading filename from `%" PATH_FMT "'", path);
             }
             DEBUG_LOG("Found GNU-variant file index\n");
 #if RTS_LINKER_USE_MMAP
-            gnuFileIndex = mmapForLinker(memberSize + 1, MAP_ANONYMOUS, -1, 0);
+            gnuFileIndex = mmapForLinker(memberSize + 1, PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
 #else
             gnuFileIndex = stgMallocBytes(memberSize + 1, "loadArchive(image)");
 #endif


=====================================
rts/linker/M32Alloc.c
=====================================
@@ -256,7 +256,7 @@ m32_alloc_page(void)
     m32_free_page_pool_size --;
     return page;
   } else {
-    struct m32_page_t *page = mmapForLinker(getPageSize(),MAP_ANONYMOUS,-1,0);
+    struct m32_page_t *page = mmapForLinker(getPageSize(), PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
     if (page > (struct m32_page_t *) 0xffffffff) {
       barf("m32_alloc_page: failed to get allocation in lower 32-bits");
     }
@@ -280,7 +280,7 @@ m32_allocator_new(bool executable)
   // Preallocate the initial M32_MAX_PAGES to ensure that they don't
   // fragment the memory.
   size_t pgsz = getPageSize();
-  char* bigchunk = mmapForLinker(pgsz * M32_MAX_PAGES,MAP_ANONYMOUS,-1,0);
+  char* bigchunk = mmapForLinker(pgsz * M32_MAX_PAGES, PROT_READ | PROT_WRITE, MAP_ANONYMOUS,-1,0);
   if (bigchunk == NULL)
       barf("m32_allocator_init: Failed to map");
 
@@ -396,7 +396,7 @@ m32_alloc(struct m32_allocator_t *alloc, size_t size, size_t alignment)
    if (m32_is_large_object(size,alignment)) {
       // large object
       size_t alsize = ROUND_UP(sizeof(struct m32_page_t), alignment);
-      struct m32_page_t *page = mmapForLinker(alsize+size,MAP_ANONYMOUS,-1,0);
+      struct m32_page_t *page = mmapForLinker(alsize+size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS,-1,0);
       page->filled_page.size = alsize + size;
       m32_allocator_push_filled_list(&alloc->unprotected_list, (struct m32_page_t *) page);
       return (char*) page + alsize;


=====================================
rts/linker/MachO.c
=====================================
@@ -508,7 +508,7 @@ makeGot(ObjectCode * oc) {
 
     if(got_slots > 0) {
         oc->info->got_size =  got_slots * sizeof(void*);
-        oc->info->got_start = mmap(NULL, oc->info->got_size,
+        oc->info->got_start = mmapForLinker(oc->info->got_size,
                                    PROT_READ | PROT_WRITE,
                                    MAP_ANON | MAP_PRIVATE,
                                    -1, 0);
@@ -1114,7 +1114,7 @@ ocBuildSegments_MachO(ObjectCode *oc)
         return 1;
     }
 
-    mem = mmapForLinker(size_compound, MAP_ANON, -1, 0);
+    mem = mmapForLinker(size_compound, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
     if (NULL == mem) return 0;
 
     IF_DEBUG(linker, debugBelch("ocBuildSegments: allocating %d segments\n", n_activeSegments));


=====================================
rts/linker/SymbolExtras.c
=====================================
@@ -79,7 +79,7 @@ int ocAllocateExtras(ObjectCode* oc, int count, int first, int bssSize)
       size_t n = roundUpToPage(oc->fileSize);
       bssSize = roundUpToAlign(bssSize, 8);
       size_t allocated_size = n + bssSize + extras_size;
-      void *new = mmapForLinker(allocated_size, MAP_ANONYMOUS, -1, 0);
+      void *new = mmapForLinker(allocated_size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
       if (new) {
           memcpy(new, oc->image, oc->fileSize);
           if (oc->imageMapped) {


=====================================
rts/linker/elf_got.c
=====================================
@@ -48,7 +48,7 @@ makeGot(ObjectCode * oc) {
     }
     if(got_slots > 0) {
         oc->info->got_size = got_slots * sizeof(void *);
-         void * mem = mmap(NULL, oc->info->got_size,
+         void * mem = mmapForLinker(oc->info->got_size,
                            PROT_READ | PROT_WRITE,
                            MAP_ANON | MAP_PRIVATE,
                            -1, 0);


=====================================
testsuite/tests/ghci.debugger/scripts/all.T
=====================================
@@ -28,9 +28,7 @@ test('print020', [extra_files(['../HappyTest.hs']),
                   omit_ways(['ghci-ext'])], ghci_script, ['print020.script'])
 
 test('print021', normal, ghci_script, ['print021.script'])
-test('print022',
-     [when(arch('powerpc64'), expect_broken(14455))],
-     ghci_script, ['print022.script'])
+test('print022', normal, ghci_script, ['print022.script'])
 test('print023', extra_files(['../Test.hs']), ghci_script, ['print023.script'])
 test('print024', extra_files(['../Test.hs']), ghci_script, ['print024.script'])
 test('print025', normal, ghci_script, ['print025.script'])



View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/b7b1720236236068e2bed4cfe85918a1e712fc4a...aedfeb0b2b22172a0dfca0fe0c020ac80539d6ae
