[Git][ghc/ghc][ghc-9.10] 7 commits: mk-ghcup-metadata: Fix directory of testsuite tarball
Ben Gamari (@bgamari)
gitlab at gitlab.haskell.org
Wed Mar 27 21:44:53 UTC 2024
Ben Gamari pushed to branch ghc-9.10 at Glasgow Haskell Compiler / GHC
Commits:
6f4b0335 by Ben Gamari at 2024-03-26T21:32:47-04:00
mk-ghcup-metadata: Fix directory of testsuite tarball
As reported in #24546, the `dlTest` artifact should be extracted into
the `testsuite` directory.
(cherry picked from commit 9d936c5799daadf96392211b03e38520925aea17)
- - - - -
878e8529 by Ben Gamari at 2024-03-26T21:32:49-04:00
ghcup-metadata: Don't populate dlOutput unless necessary
ghcup can apparently infer the output name of an artifact from its URL.
Consequently, we should only include the `dlOutput` field when it would
differ from the filename of `dlUri`.
Fixes #24547.
(cherry picked from commit 6d398066b6084a971248da8ce37bc40c53a83525)
- - - - -
725343aa by Rodrigo Mesquita at 2024-03-26T22:34:19-04:00
bindist: Clean xattrs of bin and lib at configure time
For issue #21506, we started cleaning the extended attributes of
binaries and libraries from the bindist *after* they were installed, as
part of `make install`, to work around notarisation issues (#17418).
However, the `ghc-toolchain` binary that is now shipped with the bindist
must be run at `./configure` time. Since we only cleaned the extended
attributes of the binaries and libs after they were installed, in some
situations users would be unable to run `ghc-toolchain` from the
bindist, failing at configure time (#24554).
In this commit we move the xattr cleaning logic to the configure script.
Fixes #24554
(cherry picked from commit bf65a7c388b399b7b3c6b97f25d97164390b75f1)
- - - - -
5dd67b52 by Alan Zimmerman at 2024-03-27T08:37:54-04:00
EPA: Do not extend declaration range for trailing zero-length semi
The lexer inserts virtual semicolons having zero width.
Do not use them to extend the span of items in a list.
(cherry picked from commit 0acfe391583d77a72051d505f05fab0ada056c49)
- - - - -
10829530 by Rodrigo Mesquita at 2024-03-27T08:38:20-04:00
configure: Use LDFLAGS when trying linkers
A user may configure `LDFLAGS` but not `LD`. When choosing a linker, we
prefer `ld.lld`, then `ld.gold`, then `ld.bfd` -- however, we have to
check that the chosen linker actually works. If any of these fails, we
try the next in line.
However, we were not considering `$LDFLAGS` when checking whether these
linkers worked. So we would pick a linker that does not support the
current `$LDFLAGS` and fail further down the line when that linker was
used with those flags.
Fixes #24565, where `LDFLAGS=-Wl,-z,pack-relative-relocs` is not
supported by `ld.gold`, yet `ld.gold` was still being picked.
(cherry picked from commit 32a8103f3b3e22907fdd67b69c919c5251d8cc20)
- - - - -
3914808d by Cheng Shao at 2024-03-27T08:38:35-04:00
libffi-tarballs: bump libffi-tarballs submodule to libffi 3.4.6
This commit bumps the libffi-tarballs submodule to libffi 3.4.6, which
includes numerous upstream libffi fixes, especially
https://github.com/libffi/libffi/issues/760.
(cherry picked from commit 810660b780e1111b36c91326bcd0041e1f62706b)
- - - - -
6ecd5f2f by Cheng Shao at 2024-03-27T08:38:49-04:00
rts: fix clang compilation on aarch64
This patch fixes the function prototypes in ARMOutlineAtomicsSymbols.h,
which caused "error: address argument to atomic operation must be a
pointer to _Atomic type" when compiling with clang on aarch64.
(cherry picked from commit 7db8c9927fae3369fc4ecff68f80c4cb32eea757)
- - - - -
10 changed files:
- .gitlab/rel_eng/mk-ghcup-metadata/mk_ghcup_metadata.py
- compiler/GHC/Parser.y
- distrib/configure.ac.in
- hadrian/bindist/Makefile
- libffi-tarballs
- m4/fp_cc_linker_flag_try.m4
- rts/ARMOutlineAtomicsSymbols.h
- testsuite/tests/printer/Makefile
- + testsuite/tests/printer/PprLetIn.hs
- testsuite/tests/printer/all.T
Changes:
=====================================
.gitlab/rel_eng/mk-ghcup-metadata/mk_ghcup_metadata.py
=====================================
@@ -36,6 +36,7 @@ import os
import yaml
import gitlab
from urllib.request import urlopen
+from urllib.parse import urlparse
import hashlib
import sys
import json
@@ -80,7 +81,7 @@ source_artifact = Artifact('source-tarball'
test_artifact = Artifact('source-tarball'
, 'ghc-{version}-testsuite.tar.xz'
, 'ghc-{version}-testsuite.tar.xz'
- , 'ghc-{version}' )
+ , 'ghc-{version}/testsuite' )
def debian(arch, n):
return linux_platform(arch, "{arch}-linux-deb{n}".format(arch=arch, n=n))
@@ -156,13 +157,18 @@ def mk_one_metadata(release_mode, version, job_map, artifact):
eprint(f"Bindist URL: {url}")
eprint(f"Download URL: {final_url}")
- #Download and hash from the release pipeline, this must not change anyway during upload.
+ # Download and hash from the release pipeline, this must not change anyway during upload.
h = download_and_hash(url)
res = { "dlUri": final_url
, "dlSubdir": artifact.subdir.format(version=version)
- , "dlOutput": artifact.output_name.format(version=version)
, "dlHash" : h }
+
+ # Only add dlOutput if it is inconsistent with the filename inferred from the URL
+ output = artifact.output_name.format(version=version)
+ if Path(urlparse(final_url).path).name != output:
+ res["dlOutput"] = output
+
eprint(res)
return res
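(For illustration only: a minimal standalone sketch of the new check,
using the same `urlparse`/`Path` idea as the hunk above. The URL and
file name below are hypothetical examples, not values from the release
pipeline.)

    from pathlib import Path
    from urllib.parse import urlparse

    def maybe_dl_output(final_url, output):
        # ghcup infers the output name from dlUri, so only emit dlOutput
        # when the desired name differs from the URL's file name.
        inferred = Path(urlparse(final_url).path).name
        return {} if inferred == output else {"dlOutput": output}

    # hypothetical example: the name matches the URL, so dlOutput is omitted
    maybe_dl_output(
        "https://downloads.haskell.org/ghc/9.10.1/ghc-9.10.1-testsuite.tar.xz",
        "ghc-9.10.1-testsuite.tar.xz")   # -> {}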
=====================================
compiler/GHC/Parser.y
=====================================
@@ -1438,10 +1438,10 @@ ty_fam_inst_eqns :: { Located [LTyFamInstEqn GhcPs] }
h' <- addTrailingSemiA h (gl $2)
return (sLL $1 $> ($3 : h' : t)) }
| ty_fam_inst_eqns ';' {% case unLoc $1 of
- [] -> return (sLL $1 $> (unLoc $1))
+ [] -> return (sLZ $1 $> (unLoc $1))
(h:t) -> do
h' <- addTrailingSemiA h (gl $2)
- return (sLL $1 $> (h':t)) }
+ return (sLZ $1 $> (h':t)) }
| ty_fam_inst_eqn { sLL $1 $> [$1] }
| {- empty -} { noLoc [] }
@@ -1719,12 +1719,12 @@ decls_cls :: { Located ([AddEpAnn],OrdList (LHsDecl GhcPs)) } -- Reversed
return (sLL $1 $> (fst $ unLoc $1
, snocOL hs t' `appOL` unitOL $3)) }
| decls_cls ';' {% if isNilOL (snd $ unLoc $1)
- then return (sLL $1 $> ( (fst $ unLoc $1) ++ (mz AnnSemi $2)
+ then return (sLZ $1 $> ( (fst $ unLoc $1) ++ (mz AnnSemi $2)
,snd $ unLoc $1))
else case (snd $ unLoc $1) of
SnocOL hs t -> do
t' <- addTrailingSemiA t (gl $2)
- return (sLL $1 $> (fst $ unLoc $1
+ return (sLZ $1 $> (fst $ unLoc $1
, snocOL hs t')) }
| decl_cls { sL1 $1 ([], unitOL $1) }
| {- empty -} { noLoc ([],nilOL) }
@@ -1765,12 +1765,12 @@ decls_inst :: { Located ([AddEpAnn],OrdList (LHsDecl GhcPs)) } -- Reversed
return (sLL $1 $> (fst $ unLoc $1
, snocOL hs t' `appOL` unLoc $3)) }
| decls_inst ';' {% if isNilOL (snd $ unLoc $1)
- then return (sLL $1 $> ((fst $ unLoc $1) ++ (mz AnnSemi $2)
+ then return (sLZ $1 $> ((fst $ unLoc $1) ++ (mz AnnSemi $2)
,snd $ unLoc $1))
else case (snd $ unLoc $1) of
SnocOL hs t -> do
t' <- addTrailingSemiA t (gl $2)
- return (sLL $1 $> (fst $ unLoc $1
+ return (sLZ $1 $> (fst $ unLoc $1
, snocOL hs t')) }
| decl_inst { sL1 $1 ([],unLoc $1) }
| {- empty -} { noLoc ([],nilOL) }
@@ -1806,12 +1806,12 @@ decls :: { Located ([AddEpAnn], OrdList (LHsDecl GhcPs)) }
return (rest `seq` this `seq` these `seq`
(sLL $1 $> (fst $ unLoc $1, these))) }
| decls ';' {% if isNilOL (snd $ unLoc $1)
- then return (sLL $1 $> (((fst $ unLoc $1) ++ (msemiA $2)
+ then return (sLZ $1 $> (((fst $ unLoc $1) ++ (msemiA $2)
,snd $ unLoc $1)))
else case (snd $ unLoc $1) of
SnocOL hs t -> do
t' <- addTrailingSemiA t (gl $2)
- return (sLL $1 $> (fst $ unLoc $1
+ return (sLZ $1 $> (fst $ unLoc $1
, snocOL hs t')) }
| decl { sL1 $1 ([], unitOL $1) }
| {- empty -} { noLoc ([],nilOL) }
@@ -3334,11 +3334,11 @@ alts1(PATS) :: { forall b. DisambECP b => PV (Located ([AddEpAnn],[LMatch GhcPs
return (sLL $1 $> (fst $ unLoc $1,$3 : h' : t)) }
| alts1(PATS) ';' { $1 >>= \ $1 ->
case snd $ unLoc $1 of
- [] -> return (sLL $1 $> ((fst $ unLoc $1) ++ (mz AnnSemi $2)
+ [] -> return (sLZ $1 $> ((fst $ unLoc $1) ++ (mz AnnSemi $2)
,[]))
(h:t) -> do
h' <- addTrailingSemiA h (gl $2)
- return (sLL $1 $> (fst $ unLoc $1, h' : t)) }
+ return (sLZ $1 $> (fst $ unLoc $1, h' : t)) }
| alt(PATS) { $1 >>= \ $1 -> return $ sL1 $1 ([],[$1]) }
alt(PATS) :: { forall b. DisambECP b => PV (LMatch GhcPs (LocatedA b)) }
@@ -3442,7 +3442,7 @@ stmts :: { forall b. DisambECP b => PV (Located (OrdList AddEpAnn,[LStmt GhcPs (
| stmts ';' { $1 >>= \ $1 ->
case (snd $ unLoc $1) of
- [] -> return (sLL $1 $> ((fst $ unLoc $1) `snocOL` (mj AnnSemi $2),snd $ unLoc $1))
+ [] -> return (sLZ $1 $> ((fst $ unLoc $1) `snocOL` (mj AnnSemi $2),snd $ unLoc $1))
(h:t) -> do
{ h' <- addTrailingSemiA h (gl $2)
; return $ sL1 $1 (fst $ unLoc $1,h':t) }}
@@ -3552,7 +3552,7 @@ dbinds :: { Located [LIPBind GhcPs] } -- reversed
| dbinds ';' {% case unLoc $1 of
(h:t) -> do
h' <- addTrailingSemiA h (gl $2)
- return (sLL $1 $> (h':t)) }
+ return (sLZ $1 $> (h':t)) }
| dbind { let this = $1 in this `seq` (sL1 $1 [this]) }
-- | {- empty -} { [] }
@@ -4195,6 +4195,12 @@ sLLAsl :: (HasLoc a) => [a] -> Located b -> c -> Located c
sLLAsl [] = sL1
sLLAsl (!x:_) = sLL x
+{-# INLINE sLZ #-}
+sLZ :: (HasLoc a, HasLoc b) => a -> b -> c -> Located c
+sLZ !x !y = if isZeroWidthSpan (getHasLoc y)
+ then sL (getHasLoc x)
+ else sL (comb2 x y)
+
{- Note [Adding location info]
~~~~~~~~~~~~~~~~~~~~~~~~~~~
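(For illustration only: the new `sLZ` helper combines spans like `sLL`,
except that a zero-width trailing token -- a virtual semicolon inserted
by the layout algorithm -- must not widen the item's span. A small
Python sketch of that idea, using plain (start, end) pairs rather than
GHC's actual span types:)

    def combine_spans_skipping_zero(item_span, tok_span):
        # a zero-width token is a virtual semicolon inserted by the lexer;
        # it should not extend the item's range
        tok_start, tok_end = tok_span
        if tok_start == tok_end:
            return item_span
        return (item_span[0], tok_end)

    combine_spans_skipping_zero((3, 10), (10, 11))  # real ';'    -> (3, 11)
    combine_spans_skipping_zero((3, 10), (11, 11))  # virtual ';' -> (3, 10)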
=====================================
distrib/configure.ac.in
=====================================
@@ -105,6 +105,29 @@ if test "$HostOS" = "mingw32" -a "$EnableDistroToolchain" = "NO"; then
FP_SETUP_WINDOWS_TOOLCHAIN([$hardtop/mingw/], [\$\$topdir/../mingw/])
fi
+
+if test "$HostOS" = "darwin"; then
+ # On darwin, we need to clean the extended attributes of the
+ # ghc-toolchain binary and its dynamic library before we can execute it in the bindist
+ # (this is a workaround for #24554, for the lack of proper notarisation #17418)
+
+ # The following is the work around suggested by @carter in #17418 during
+ # install time. This should help us with code signing issues by removing
+ # extended attributes from all files.
+ XATTR=${XATTR:-/usr/bin/xattr}
+
+ if [ -e "${XATTR}" ]; then
+
+ # Instead of cleaning the attributes of the ghc-toolchain binary only,
+ # we clean them from all files in the bin/ and lib/ directories, as it additionally future
+ # proofs running executables from the bindist besides ghc-toolchain at configure time, and
+ # we can avoid figuring out the path to the ghc-toolchain dynlib specifically.
+ /usr/bin/xattr -rc bin/
+ /usr/bin/xattr -rc lib/
+
+ fi
+fi
+
dnl ** Which gcc to use?
dnl --------------------------------------------------------------
AC_PROG_CC([gcc clang])
=====================================
hadrian/bindist/Makefile
=====================================
@@ -19,13 +19,6 @@ default:
# TODO : find if a better function exists
eq=$(and $(findstring $(1),$(2)),$(findstring $(2),$(1)))
-# the following is the work around suggested by @carter in #17418 during install
-# time. This should help us with code signing issues by removing extended
-# attributes from all files.
-ifeq "$(Darwin_Host)" "YES"
-XATTR ?= /usr/bin/xattr
-endif
-
# patchpackageconf
#
# Hacky function to patch up the 'haddock-interfaces' and 'haddock-html'
@@ -156,10 +149,6 @@ install_bin_libdir:
$(INSTALL_PROGRAM) "$$i" "$(DESTDIR)$(ActualBinsDir)"; \
fi; \
done
- # Work around #17418 on Darwin
- if [ -e "${XATTR}" ]; then \
- "${XATTR}" -c -r "$(DESTDIR)$(ActualBinsDir)"; \
- fi
.PHONY: install_bin_direct
install_bin_direct:
@@ -194,10 +183,6 @@ install_lib: lib/settings
$(INSTALL_DATA) $$i "$$dest/`dirname $$i`" ;; \
esac; \
done; \
- # Work around #17418 on Darwin
- if [ -e "${XATTR}" ]; then \
- "${XATTR}" -c -r "$(DESTDIR)$(ActualLibsDir)"; \
- fi
.PHONY: install_docs
install_docs:
=====================================
libffi-tarballs
=====================================
@@ -1 +1 @@
-Subproject commit 5624fd5c8bbce8432cd3c0b0ea92d152a1bba047
+Subproject commit 89a9b01c5647c8f0d3899435b99df690f582e9f1
=====================================
m4/fp_cc_linker_flag_try.m4
=====================================
@@ -9,7 +9,7 @@
AC_DEFUN([FP_CC_LINKER_FLAG_TRY], [
AC_MSG_CHECKING([whether C compiler supports -fuse-ld=$1])
echo 'int main(void) {return 0;}' > conftest.c
- if $CC -o conftest.o -fuse-ld=$1 conftest.c > /dev/null 2>&1
+ if $CC -o conftest.o -fuse-ld=$1 $LDFLAGS conftest.c > /dev/null 2>&1
then
$2="-fuse-ld=$1"
AC_MSG_RESULT([yes])
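(For illustration only: a rough sketch of what the amended probe does.
The real check is the m4 macro above, run by the configure script; the
compiler name, linker name and flags below are hypothetical.)

    import os, subprocess, tempfile

    def linker_works(cc, linker, ldflags):
        # link a trivial program with -fuse-ld=<linker> *and* the user's
        # LDFLAGS, mirroring FP_CC_LINKER_FLAG_TRY after this change
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, "conftest.c")
            out = os.path.join(d, "conftest")
            with open(src, "w") as f:
                f.write("int main(void) { return 0; }\n")
            cmd = [cc, "-o", out, "-fuse-ld=" + linker] + ldflags + [src]
            return subprocess.run(cmd, capture_output=True).returncode == 0

    # e.g. gold is now rejected when it cannot handle the configured flags:
    # linker_works("cc", "gold", ["-Wl,-z,pack-relative-relocs"])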
=====================================
rts/ARMOutlineAtomicsSymbols.h
=====================================
@@ -10,583 +10,583 @@
#include <stdint.h>
#include <stdatomic.h>
-uint8_t ghc___aarch64_cas1_relax(uint8_t old, uint8_t new, uint8_t* p);
-uint8_t ghc___aarch64_cas1_relax(uint8_t old, uint8_t new, uint8_t* p) {
+uint8_t ghc___aarch64_cas1_relax(uint8_t old, uint8_t new, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_cas1_relax(uint8_t old, uint8_t new, _Atomic uint8_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_relaxed, memory_order_relaxed); return old;
}
-uint8_t ghc___aarch64_cas1_acq(uint8_t old, uint8_t new, uint8_t* p);
-uint8_t ghc___aarch64_cas1_acq(uint8_t old, uint8_t new, uint8_t* p) {
+uint8_t ghc___aarch64_cas1_acq(uint8_t old, uint8_t new, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_cas1_acq(uint8_t old, uint8_t new, _Atomic uint8_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acquire, memory_order_acquire); return old;
}
-uint8_t ghc___aarch64_cas1_acq_rel(uint8_t old, uint8_t new, uint8_t* p);
-uint8_t ghc___aarch64_cas1_acq_rel(uint8_t old, uint8_t new, uint8_t* p) {
+uint8_t ghc___aarch64_cas1_acq_rel(uint8_t old, uint8_t new, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_cas1_acq_rel(uint8_t old, uint8_t new, _Atomic uint8_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acq_rel, memory_order_acquire); return old;
}
-uint8_t ghc___aarch64_cas1_sync(uint8_t old, uint8_t new, uint8_t* p);
-uint8_t ghc___aarch64_cas1_sync(uint8_t old, uint8_t new, uint8_t* p) {
+uint8_t ghc___aarch64_cas1_sync(uint8_t old, uint8_t new, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_cas1_sync(uint8_t old, uint8_t new, _Atomic uint8_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_seq_cst, memory_order_seq_cst); return old;
}
-uint16_t ghc___aarch64_cas2_relax(uint16_t old, uint16_t new, uint16_t* p);
-uint16_t ghc___aarch64_cas2_relax(uint16_t old, uint16_t new, uint16_t* p) {
+uint16_t ghc___aarch64_cas2_relax(uint16_t old, uint16_t new, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_cas2_relax(uint16_t old, uint16_t new, _Atomic uint16_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_relaxed, memory_order_relaxed); return old;
}
-uint16_t ghc___aarch64_cas2_acq(uint16_t old, uint16_t new, uint16_t* p);
-uint16_t ghc___aarch64_cas2_acq(uint16_t old, uint16_t new, uint16_t* p) {
+uint16_t ghc___aarch64_cas2_acq(uint16_t old, uint16_t new, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_cas2_acq(uint16_t old, uint16_t new, _Atomic uint16_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acquire, memory_order_acquire); return old;
}
-uint16_t ghc___aarch64_cas2_acq_rel(uint16_t old, uint16_t new, uint16_t* p);
-uint16_t ghc___aarch64_cas2_acq_rel(uint16_t old, uint16_t new, uint16_t* p) {
+uint16_t ghc___aarch64_cas2_acq_rel(uint16_t old, uint16_t new, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_cas2_acq_rel(uint16_t old, uint16_t new, _Atomic uint16_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acq_rel, memory_order_acquire); return old;
}
-uint16_t ghc___aarch64_cas2_sync(uint16_t old, uint16_t new, uint16_t* p);
-uint16_t ghc___aarch64_cas2_sync(uint16_t old, uint16_t new, uint16_t* p) {
+uint16_t ghc___aarch64_cas2_sync(uint16_t old, uint16_t new, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_cas2_sync(uint16_t old, uint16_t new, _Atomic uint16_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_seq_cst, memory_order_seq_cst); return old;
}
-uint32_t ghc___aarch64_cas4_relax(uint32_t old, uint32_t new, uint32_t* p);
-uint32_t ghc___aarch64_cas4_relax(uint32_t old, uint32_t new, uint32_t* p) {
+uint32_t ghc___aarch64_cas4_relax(uint32_t old, uint32_t new, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_cas4_relax(uint32_t old, uint32_t new, _Atomic uint32_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_relaxed, memory_order_relaxed); return old;
}
-uint32_t ghc___aarch64_cas4_acq(uint32_t old, uint32_t new, uint32_t* p);
-uint32_t ghc___aarch64_cas4_acq(uint32_t old, uint32_t new, uint32_t* p) {
+uint32_t ghc___aarch64_cas4_acq(uint32_t old, uint32_t new, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_cas4_acq(uint32_t old, uint32_t new, _Atomic uint32_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acquire, memory_order_acquire); return old;
}
-uint32_t ghc___aarch64_cas4_acq_rel(uint32_t old, uint32_t new, uint32_t* p);
-uint32_t ghc___aarch64_cas4_acq_rel(uint32_t old, uint32_t new, uint32_t* p) {
+uint32_t ghc___aarch64_cas4_acq_rel(uint32_t old, uint32_t new, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_cas4_acq_rel(uint32_t old, uint32_t new, _Atomic uint32_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acq_rel, memory_order_acquire); return old;
}
-uint32_t ghc___aarch64_cas4_sync(uint32_t old, uint32_t new, uint32_t* p);
-uint32_t ghc___aarch64_cas4_sync(uint32_t old, uint32_t new, uint32_t* p) {
+uint32_t ghc___aarch64_cas4_sync(uint32_t old, uint32_t new, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_cas4_sync(uint32_t old, uint32_t new, _Atomic uint32_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_seq_cst, memory_order_seq_cst); return old;
}
-uint64_t ghc___aarch64_cas8_relax(uint64_t old, uint64_t new, uint64_t* p);
-uint64_t ghc___aarch64_cas8_relax(uint64_t old, uint64_t new, uint64_t* p) {
+uint64_t ghc___aarch64_cas8_relax(uint64_t old, uint64_t new, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_cas8_relax(uint64_t old, uint64_t new, _Atomic uint64_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_relaxed, memory_order_relaxed); return old;
}
-uint64_t ghc___aarch64_cas8_acq(uint64_t old, uint64_t new, uint64_t* p);
-uint64_t ghc___aarch64_cas8_acq(uint64_t old, uint64_t new, uint64_t* p) {
+uint64_t ghc___aarch64_cas8_acq(uint64_t old, uint64_t new, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_cas8_acq(uint64_t old, uint64_t new, _Atomic uint64_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acquire, memory_order_acquire); return old;
}
-uint64_t ghc___aarch64_cas8_acq_rel(uint64_t old, uint64_t new, uint64_t* p);
-uint64_t ghc___aarch64_cas8_acq_rel(uint64_t old, uint64_t new, uint64_t* p) {
+uint64_t ghc___aarch64_cas8_acq_rel(uint64_t old, uint64_t new, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_cas8_acq_rel(uint64_t old, uint64_t new, _Atomic uint64_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_acq_rel, memory_order_acquire); return old;
}
-uint64_t ghc___aarch64_cas8_sync(uint64_t old, uint64_t new, uint64_t* p);
-uint64_t ghc___aarch64_cas8_sync(uint64_t old, uint64_t new, uint64_t* p) {
+uint64_t ghc___aarch64_cas8_sync(uint64_t old, uint64_t new, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_cas8_sync(uint64_t old, uint64_t new, _Atomic uint64_t* p) {
atomic_compare_exchange_strong_explicit(p, &old, new, memory_order_seq_cst, memory_order_seq_cst); return old;
}
-uint8_t ghc___aarch64_swp1_relax(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_swp1_relax(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_swp1_relax(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_swp1_relax(uint8_t v, _Atomic uint8_t* p) {
return atomic_exchange_explicit(p, v, memory_order_relaxed);
}
-uint8_t ghc___aarch64_swp1_acq(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_swp1_acq(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_swp1_acq(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_swp1_acq(uint8_t v, _Atomic uint8_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acquire);
}
-uint8_t ghc___aarch64_swp1_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_swp1_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_swp1_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_swp1_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_exchange_explicit(p, v, memory_order_release);
}
-uint8_t ghc___aarch64_swp1_acq_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_swp1_acq_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_swp1_acq_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_swp1_acq_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acq_rel);
}
-uint8_t ghc___aarch64_swp1_sync(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_swp1_sync(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_swp1_sync(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_swp1_sync(uint8_t v, _Atomic uint8_t* p) {
return atomic_exchange_explicit(p, v, memory_order_seq_cst);
}
-uint16_t ghc___aarch64_swp2_relax(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_swp2_relax(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_swp2_relax(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_swp2_relax(uint16_t v, _Atomic uint16_t* p) {
return atomic_exchange_explicit(p, v, memory_order_relaxed);
}
-uint16_t ghc___aarch64_swp2_acq(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_swp2_acq(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_swp2_acq(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_swp2_acq(uint16_t v, _Atomic uint16_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acquire);
}
-uint16_t ghc___aarch64_swp2_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_swp2_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_swp2_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_swp2_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_exchange_explicit(p, v, memory_order_release);
}
-uint16_t ghc___aarch64_swp2_acq_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_swp2_acq_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_swp2_acq_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_swp2_acq_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acq_rel);
}
-uint16_t ghc___aarch64_swp2_sync(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_swp2_sync(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_swp2_sync(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_swp2_sync(uint16_t v, _Atomic uint16_t* p) {
return atomic_exchange_explicit(p, v, memory_order_seq_cst);
}
-uint32_t ghc___aarch64_swp4_relax(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_swp4_relax(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_swp4_relax(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_swp4_relax(uint32_t v, _Atomic uint32_t* p) {
return atomic_exchange_explicit(p, v, memory_order_relaxed);
}
-uint32_t ghc___aarch64_swp4_acq(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_swp4_acq(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_swp4_acq(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_swp4_acq(uint32_t v, _Atomic uint32_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acquire);
}
-uint32_t ghc___aarch64_swp4_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_swp4_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_swp4_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_swp4_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_exchange_explicit(p, v, memory_order_release);
}
-uint32_t ghc___aarch64_swp4_acq_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_swp4_acq_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_swp4_acq_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_swp4_acq_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acq_rel);
}
-uint32_t ghc___aarch64_swp4_sync(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_swp4_sync(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_swp4_sync(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_swp4_sync(uint32_t v, _Atomic uint32_t* p) {
return atomic_exchange_explicit(p, v, memory_order_seq_cst);
}
-uint64_t ghc___aarch64_swp8_relax(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_swp8_relax(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_swp8_relax(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_swp8_relax(uint64_t v, _Atomic uint64_t* p) {
return atomic_exchange_explicit(p, v, memory_order_relaxed);
}
-uint64_t ghc___aarch64_swp8_acq(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_swp8_acq(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_swp8_acq(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_swp8_acq(uint64_t v, _Atomic uint64_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acquire);
}
-uint64_t ghc___aarch64_swp8_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_swp8_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_swp8_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_swp8_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_exchange_explicit(p, v, memory_order_release);
}
-uint64_t ghc___aarch64_swp8_acq_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_swp8_acq_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_swp8_acq_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_swp8_acq_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_exchange_explicit(p, v, memory_order_acq_rel);
}
-uint64_t ghc___aarch64_swp8_sync(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_swp8_sync(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_swp8_sync(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_swp8_sync(uint64_t v, _Atomic uint64_t* p) {
return atomic_exchange_explicit(p, v, memory_order_seq_cst);
}
-uint8_t ghc___aarch64_ldadd1_relax(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldadd1_relax(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldadd1_relax(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldadd1_relax(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}
-uint8_t ghc___aarch64_ldadd1_acq(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldadd1_acq(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldadd1_acq(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldadd1_acq(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acquire);
}
-uint8_t ghc___aarch64_ldadd1_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldadd1_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldadd1_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldadd1_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_release);
}
-uint8_t ghc___aarch64_ldadd1_acq_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldadd1_acq_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldadd1_acq_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldadd1_acq_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acq_rel);
}
-uint8_t ghc___aarch64_ldadd1_sync(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldadd1_sync(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldadd1_sync(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldadd1_sync(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_seq_cst);
}
-uint16_t ghc___aarch64_ldadd2_relax(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldadd2_relax(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldadd2_relax(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldadd2_relax(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}
-uint16_t ghc___aarch64_ldadd2_acq(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldadd2_acq(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldadd2_acq(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldadd2_acq(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acquire);
}
-uint16_t ghc___aarch64_ldadd2_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldadd2_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldadd2_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldadd2_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_release);
}
-uint16_t ghc___aarch64_ldadd2_acq_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldadd2_acq_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldadd2_acq_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldadd2_acq_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acq_rel);
}
-uint16_t ghc___aarch64_ldadd2_sync(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldadd2_sync(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldadd2_sync(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldadd2_sync(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_seq_cst);
}
-uint32_t ghc___aarch64_ldadd4_relax(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldadd4_relax(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldadd4_relax(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldadd4_relax(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}
-uint32_t ghc___aarch64_ldadd4_acq(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldadd4_acq(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldadd4_acq(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldadd4_acq(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acquire);
}
-uint32_t ghc___aarch64_ldadd4_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldadd4_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldadd4_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldadd4_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_release);
}
-uint32_t ghc___aarch64_ldadd4_acq_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldadd4_acq_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldadd4_acq_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldadd4_acq_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acq_rel);
}
-uint32_t ghc___aarch64_ldadd4_sync(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldadd4_sync(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldadd4_sync(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldadd4_sync(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_seq_cst);
}
-uint64_t ghc___aarch64_ldadd8_relax(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldadd8_relax(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldadd8_relax(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldadd8_relax(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}
-uint64_t ghc___aarch64_ldadd8_acq(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldadd8_acq(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldadd8_acq(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldadd8_acq(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acquire);
}
-uint64_t ghc___aarch64_ldadd8_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldadd8_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldadd8_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldadd8_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_release);
}
-uint64_t ghc___aarch64_ldadd8_acq_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldadd8_acq_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldadd8_acq_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldadd8_acq_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_acq_rel);
}
-uint64_t ghc___aarch64_ldadd8_sync(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldadd8_sync(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldadd8_sync(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldadd8_sync(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_add_explicit(p, v, memory_order_seq_cst);
}
-uint8_t ghc___aarch64_ldclr1_relax(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldclr1_relax(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldclr1_relax(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldclr1_relax(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_relaxed);
}
-uint8_t ghc___aarch64_ldclr1_acq(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldclr1_acq(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldclr1_acq(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldclr1_acq(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acquire);
}
-uint8_t ghc___aarch64_ldclr1_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldclr1_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldclr1_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldclr1_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_release);
}
-uint8_t ghc___aarch64_ldclr1_acq_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldclr1_acq_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldclr1_acq_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldclr1_acq_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acq_rel);
}
-uint8_t ghc___aarch64_ldclr1_sync(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldclr1_sync(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldclr1_sync(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldclr1_sync(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_seq_cst);
}
-uint16_t ghc___aarch64_ldclr2_relax(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldclr2_relax(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldclr2_relax(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldclr2_relax(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_relaxed);
}
-uint16_t ghc___aarch64_ldclr2_acq(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldclr2_acq(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldclr2_acq(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldclr2_acq(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acquire);
}
-uint16_t ghc___aarch64_ldclr2_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldclr2_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldclr2_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldclr2_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_release);
}
-uint16_t ghc___aarch64_ldclr2_acq_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldclr2_acq_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldclr2_acq_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldclr2_acq_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acq_rel);
}
-uint16_t ghc___aarch64_ldclr2_sync(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldclr2_sync(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldclr2_sync(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldclr2_sync(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_seq_cst);
}
-uint32_t ghc___aarch64_ldclr4_relax(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldclr4_relax(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldclr4_relax(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldclr4_relax(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_relaxed);
}
-uint32_t ghc___aarch64_ldclr4_acq(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldclr4_acq(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldclr4_acq(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldclr4_acq(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acquire);
}
-uint32_t ghc___aarch64_ldclr4_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldclr4_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldclr4_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldclr4_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_release);
}
-uint32_t ghc___aarch64_ldclr4_acq_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldclr4_acq_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldclr4_acq_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldclr4_acq_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acq_rel);
}
-uint32_t ghc___aarch64_ldclr4_sync(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldclr4_sync(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldclr4_sync(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldclr4_sync(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_seq_cst);
}
-uint64_t ghc___aarch64_ldclr8_relax(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldclr8_relax(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldclr8_relax(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldclr8_relax(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_relaxed);
}
-uint64_t ghc___aarch64_ldclr8_acq(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldclr8_acq(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldclr8_acq(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldclr8_acq(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acquire);
}
-uint64_t ghc___aarch64_ldclr8_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldclr8_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldclr8_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldclr8_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_release);
}
-uint64_t ghc___aarch64_ldclr8_acq_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldclr8_acq_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldclr8_acq_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldclr8_acq_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_acq_rel);
}
-uint64_t ghc___aarch64_ldclr8_sync(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldclr8_sync(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldclr8_sync(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldclr8_sync(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_and_explicit(p, v, memory_order_seq_cst);
}
-uint8_t ghc___aarch64_ldeor1_relax(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldeor1_relax(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldeor1_relax(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldeor1_relax(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_relaxed);
}
-uint8_t ghc___aarch64_ldeor1_acq(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldeor1_acq(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldeor1_acq(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldeor1_acq(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acquire);
}
-uint8_t ghc___aarch64_ldeor1_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldeor1_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldeor1_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldeor1_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_release);
}
-uint8_t ghc___aarch64_ldeor1_acq_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldeor1_acq_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldeor1_acq_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldeor1_acq_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acq_rel);
}
-uint8_t ghc___aarch64_ldeor1_sync(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldeor1_sync(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldeor1_sync(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldeor1_sync(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_seq_cst);
}
-uint16_t ghc___aarch64_ldeor2_relax(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldeor2_relax(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldeor2_relax(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldeor2_relax(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_relaxed);
}
-uint16_t ghc___aarch64_ldeor2_acq(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldeor2_acq(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldeor2_acq(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldeor2_acq(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acquire);
}
-uint16_t ghc___aarch64_ldeor2_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldeor2_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldeor2_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldeor2_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_release);
}
-uint16_t ghc___aarch64_ldeor2_acq_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldeor2_acq_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldeor2_acq_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldeor2_acq_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acq_rel);
}
-uint16_t ghc___aarch64_ldeor2_sync(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldeor2_sync(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldeor2_sync(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldeor2_sync(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_seq_cst);
}
-uint32_t ghc___aarch64_ldeor4_relax(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldeor4_relax(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldeor4_relax(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldeor4_relax(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_relaxed);
}
-uint32_t ghc___aarch64_ldeor4_acq(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldeor4_acq(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldeor4_acq(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldeor4_acq(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acquire);
}
-uint32_t ghc___aarch64_ldeor4_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldeor4_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldeor4_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldeor4_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_release);
}
-uint32_t ghc___aarch64_ldeor4_acq_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldeor4_acq_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldeor4_acq_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldeor4_acq_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acq_rel);
}
-uint32_t ghc___aarch64_ldeor4_sync(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldeor4_sync(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldeor4_sync(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldeor4_sync(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_seq_cst);
}
-uint64_t ghc___aarch64_ldeor8_relax(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldeor8_relax(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldeor8_relax(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldeor8_relax(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_relaxed);
}
-uint64_t ghc___aarch64_ldeor8_acq(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldeor8_acq(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldeor8_acq(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldeor8_acq(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acquire);
}
-uint64_t ghc___aarch64_ldeor8_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldeor8_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldeor8_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldeor8_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_release);
}
-uint64_t ghc___aarch64_ldeor8_acq_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldeor8_acq_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldeor8_acq_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldeor8_acq_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_acq_rel);
}
-uint64_t ghc___aarch64_ldeor8_sync(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldeor8_sync(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldeor8_sync(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldeor8_sync(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_xor_explicit(p, v, memory_order_seq_cst);
}
-uint8_t ghc___aarch64_ldset1_relax(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldset1_relax(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldset1_relax(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldset1_relax(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_relaxed);
}
-uint8_t ghc___aarch64_ldset1_acq(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldset1_acq(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldset1_acq(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldset1_acq(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acquire);
}
-uint8_t ghc___aarch64_ldset1_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldset1_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldset1_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldset1_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_release);
}
-uint8_t ghc___aarch64_ldset1_acq_rel(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldset1_acq_rel(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldset1_acq_rel(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldset1_acq_rel(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acq_rel);
}
-uint8_t ghc___aarch64_ldset1_sync(uint8_t v, uint8_t* p);
-uint8_t ghc___aarch64_ldset1_sync(uint8_t v, uint8_t* p) {
+uint8_t ghc___aarch64_ldset1_sync(uint8_t v, _Atomic uint8_t* p);
+uint8_t ghc___aarch64_ldset1_sync(uint8_t v, _Atomic uint8_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_seq_cst);
}
-uint16_t ghc___aarch64_ldset2_relax(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldset2_relax(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldset2_relax(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldset2_relax(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_relaxed);
}
-uint16_t ghc___aarch64_ldset2_acq(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldset2_acq(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldset2_acq(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldset2_acq(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acquire);
}
-uint16_t ghc___aarch64_ldset2_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldset2_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldset2_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldset2_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_release);
}
-uint16_t ghc___aarch64_ldset2_acq_rel(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldset2_acq_rel(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldset2_acq_rel(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldset2_acq_rel(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acq_rel);
}
-uint16_t ghc___aarch64_ldset2_sync(uint16_t v, uint16_t* p);
-uint16_t ghc___aarch64_ldset2_sync(uint16_t v, uint16_t* p) {
+uint16_t ghc___aarch64_ldset2_sync(uint16_t v, _Atomic uint16_t* p);
+uint16_t ghc___aarch64_ldset2_sync(uint16_t v, _Atomic uint16_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_seq_cst);
}
-uint32_t ghc___aarch64_ldset4_relax(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldset4_relax(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldset4_relax(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldset4_relax(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_relaxed);
}
-uint32_t ghc___aarch64_ldset4_acq(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldset4_acq(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldset4_acq(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldset4_acq(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acquire);
}
-uint32_t ghc___aarch64_ldset4_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldset4_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldset4_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldset4_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_release);
}
-uint32_t ghc___aarch64_ldset4_acq_rel(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldset4_acq_rel(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldset4_acq_rel(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldset4_acq_rel(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acq_rel);
}
-uint32_t ghc___aarch64_ldset4_sync(uint32_t v, uint32_t* p);
-uint32_t ghc___aarch64_ldset4_sync(uint32_t v, uint32_t* p) {
+uint32_t ghc___aarch64_ldset4_sync(uint32_t v, _Atomic uint32_t* p);
+uint32_t ghc___aarch64_ldset4_sync(uint32_t v, _Atomic uint32_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_seq_cst);
}
-uint64_t ghc___aarch64_ldset8_relax(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldset8_relax(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldset8_relax(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldset8_relax(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_relaxed);
}
-uint64_t ghc___aarch64_ldset8_acq(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldset8_acq(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldset8_acq(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldset8_acq(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acquire);
}
-uint64_t ghc___aarch64_ldset8_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldset8_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldset8_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldset8_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_release);
}
-uint64_t ghc___aarch64_ldset8_acq_rel(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldset8_acq_rel(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldset8_acq_rel(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldset8_acq_rel(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_acq_rel);
}
-uint64_t ghc___aarch64_ldset8_sync(uint64_t v, uint64_t* p);
-uint64_t ghc___aarch64_ldset8_sync(uint64_t v, uint64_t* p) {
+uint64_t ghc___aarch64_ldset8_sync(uint64_t v, _Atomic uint64_t* p);
+uint64_t ghc___aarch64_ldset8_sync(uint64_t v, _Atomic uint64_t* p) {
return atomic_fetch_or_explicit(p, v, memory_order_seq_cst);
}
=====================================
testsuite/tests/printer/Makefile
=====================================
@@ -821,3 +821,8 @@ AnnotationNoListTuplePuns:
Test24533:
$(CHECK_PPR) $(LIBDIR) Test24533.hs
$(CHECK_EXACT) $(LIBDIR) Test24533.hs
+
+.PHONY: PprLetIn
+PprLetIn:
+ $(CHECK_PPR) $(LIBDIR) PprLetIn.hs
+ $(CHECK_EXACT) $(LIBDIR) PprLetIn.hs
=====================================
testsuite/tests/printer/PprLetIn.hs
=====================================
@@ -0,0 +1,5 @@
+module PprLetIn where
+
+ff = let
+ x = 1
+ in 4
=====================================
testsuite/tests/printer/all.T
=====================================
@@ -197,3 +197,4 @@ test('Test23885', [ignore_stderr, req_ppr_deps], makefile_test, ['Test23885'])
test('ListTuplePuns', extra_files(['ListTuplePuns.hs']), ghci_script, ['ListTuplePuns.script'])
test('AnnotationNoListTuplePuns', [ignore_stderr, req_ppr_deps], makefile_test, ['AnnotationNoListTuplePuns'])
test('Test24533', [ignore_stderr, req_ppr_deps], makefile_test, ['Test24533'])
+test('PprLetIn', [ignore_stderr, req_ppr_deps], makefile_test, ['PprLetIn'])
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/337a5073e23fcb82e6b341ce083708e736c9ea89...6ecd5f2ff97af53c7334f2d8581651203a2c6b7d
You're receiving this email because of your account on gitlab.haskell.org.