From ezyang at mit.edu Mon Aug 1 21:03:35 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 01 Aug 2016 14:03:35 -0700 Subject: FINAL CALL FOR TALKS (Aug 8 deadline): Haskell Implementors Workshop 2016, Sep 24, Nara Message-ID: <1470085297-sup-7909@sabre> Deadline is in a week! Submit your talks! Call for Contributions ACM SIGPLAN Haskell Implementors' Workshop http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016 Nara, Japan, 24 September, 2016 Co-located with ICFP 2016 http://www.icfpconference.org/icfp2016/ Important dates --------------- Proposal Deadline: Monday, 8 August, 2016 Notification: Monday, 22 August, 2016 Workshop: Saturday, 24 September, 2016 The 8th Haskell Implementors' Workshop is to be held alongside ICFP 2016 this year in Nara. It is a forum for people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure, to share their work and discuss future directions and collaborations with others. Talks and/or demos are proposed by submitting an abstract, and selected by a small program committee. There will be no published proceedings; the workshop will be informal and interactive, with a flexible timetable and plenty of room for ad-hoc discussion, demos, and impromptu short talks. Scope and target audience ------------------------- It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium which is also co-located with ICFP 2016. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers. In the Haskell Implementors' Workshop, we hope to study the underlying technology. We want to bring together anyone interested in the nitty-gritty details behind turning plain-text source code into a deployed product. 
Having said that, members of the wider Haskell community are more than welcome to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving. The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets: * Compilation techniques * Language features and extensions * Type system implementation * Concurrency and parallelism: language design and implementation * Performance, optimisation and benchmarking * Virtual machines and run-time systems * Libraries and tools for development or deployment Talks ----- At this stage we would like to invite proposals from potential speakers for talks and demonstrations. We are aiming for 20 minute talks with 10 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 300 words. Submissions should be made via HotCRP. The website is: https://icfp-hiw16.hotcrp.com/ We will also have a lightning talks session which will be organised on the day. These talks will be 5-10 minutes, depending on available time. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators. Organisers --- End forwarded message --- From alex at slab.org Mon Aug 15 17:36:46 2016 From: alex at slab.org (Alex McLean) Date: Mon, 15 Aug 2016 18:36:46 +0100 Subject: ghci on pi zero Message-ID: Hi all, I'm trying to get a working ghc on the pi zero, with its armv6l processor. I'm in need of ghci and/or the hint package, and raspbian only comes with a ghc without ghci and that is unable to compile hint. 
What's the current status of ghc (and in particular ghci) on the original pi and pi zero? Any tips on getting it running? Best wishes, alex From amindfv at gmail.com Mon Aug 15 20:18:09 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Mon, 15 Aug 2016 16:18:09 -0400 Subject: ghci on pi zero In-Reply-To: References: Message-ID: I know ARM support for ghci is pretty scarce, but looks like it can work: https://redd.it/35bw0b Tom > El 15 ago 2016, a las 13:36, Alex McLean escribió: > > Hi all, > > I'm trying to get a working ghc on the pi zero, with its armv6l > processor. I'm in need of ghci and/or the hint package, and raspbian > only comes with a ghc without ghci and that is unable to compile hint. > > What's the current status of ghc (and in particular ghci) on the > original pi and pi zero? Any tips on getting it running? > > Best wishes, > > alex > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users From amindfv at gmail.com Mon Aug 15 20:21:07 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Mon, 15 Aug 2016 16:21:07 -0400 Subject: ghci on pi zero In-Reply-To: References: Message-ID: Also, in case you haven't seen it, there's a compilation of info (not exhaustive, and some old) at wiki.haskell.org/ARM Tom > El 15 ago 2016, a las 16:18, amindfv at gmail.com escribió: > > I know ARM support for ghci is pretty scarce, but looks like it can work: > > https://redd.it/35bw0b > > Tom > > >> El 15 ago 2016, a las 13:36, Alex McLean escribió: >> >> Hi all, >> >> I'm trying to get a working ghc on the pi zero, with its armv6l >> processor. I'm in need of ghci and/or the hint package, and raspbian >> only comes with a ghc without ghci and that is unable to compile hint. >> >> What's the current status of ghc (and in particular ghci) on the >> original pi and pi zero? Any tips on getting it running? 
>>
>> Best wishes,
>>
>> alex
>> _______________________________________________
>> Glasgow-haskell-users mailing list
>> Glasgow-haskell-users at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users

From harendra.kumar at gmail.com Tue Aug 16 11:08:33 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 16 Aug 2016 16:38:33 +0530 Subject: Behavior of GHC_PACKAGE_PATH Message-ID:

Hi, As per the GHC manual (https://downloads.haskell.org/~ghc/master/users-guide/packages.html#the-ghc-package-path-environment-variable), packages which come earlier in the GHC_PACKAGE_PATH supersede the ones which come later. But that does not seem to be the case always.

I am dealing with a case where I have multiple versions of a package in different databases. I am passing the list of package dbs via GHC_PACKAGE_PATH and ghc is picking the one which comes later. I expect it to pick the one which comes earlier in the path. In one case the package being picked from a package db which comes later is a newer version, which makes me wonder if GHC always prefers a newer version. In another case both the versions are the same but GHC is still using the one which comes later in the package path. Are there any undocumented factors at play here? None of the packages is broken (as reported by ghc-pkg check).

See https://github.com/commercialhaskell/stack/issues/1957#issuecomment-239655912 for more details on where it is coming from.

Thanks, Harendra

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From trebla at vex.net Sat Aug 20 01:07:14 2016 From: trebla at vex.net (Albert Y. C.
Lai) Date: Fri, 19 Aug 2016 21:07:14 -0400 Subject: Behavior of GHC_PACKAGE_PATH In-Reply-To: References: Message-ID: <6321c6e0-83b0-63c1-11ab-c368dadd4ed5@vex.net>

On 2016-08-16 07:08 AM, Harendra Kumar wrote:
> As per the GHC manual
> (https://downloads.haskell.org/~ghc/master/users-guide/packages.html#the-ghc-package-path-environment-variable),
> packages which come earlier in the GHC_PACKAGE_PATH supersede the ones
> which come later. But that does not seem to be the case always.
>
> I am dealing with a case where I have multiple versions of a package in
> different databases.

No, they don't mean multiple versions. They mean this:

For example, if two databases both have HUnit-1.1 (important: same name and same version number), then that's when all the talk about overriding is relevant.

If you have different version numbers, the override rule doesn't apply; the rule that applies is the shadow rule: the highest version number wins.

The shadow rule exists because there are two scenarios that you really don't want to behave differently:

1. You have both HUnit-1.1 and HUnit-1.2, and they are in the same database.

2. You have both HUnit-1.1 and HUnit-1.2, but they are in different databases.

From harendra.kumar at gmail.com Sat Aug 20 04:50:17 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sat, 20 Aug 2016 10:20:17 +0530 Subject: Behavior of GHC_PACKAGE_PATH In-Reply-To: <6321c6e0-83b0-63c1-11ab-c368dadd4ed5@vex.net> References: <6321c6e0-83b0-63c1-11ab-c368dadd4ed5@vex.net> Message-ID:

Thanks Albert! That clarifies the behavior. We had a discussion on this in https://github.com/commercialhaskell/stack/issues/1957 as well, where ezyang provided clarifications on the behavior. We need to update the documentation to make it more precise and accurate. I will file a ticket for that.

-harendra

On 20 August 2016 at 06:37, Albert Y. C.
Lai wrote:
> On 2016-08-16 07:08 AM, Harendra Kumar wrote:
>
>> As per the GHC manual
>> (https://downloads.haskell.org/~ghc/master/users-guide/packages.html#the-ghc-package-path-environment-variable),
>> packages which come earlier in the GHC_PACKAGE_PATH supersede the ones
>> which come later. But that does not seem to be the case always.
>>
>> I am dealing with a case where I have multiple versions of a package in
>> different databases.
>
> No, they don't mean multiple versions. They mean this:
>
> For example, if two databases both have HUnit-1.1 (important: same name and
> same version number), then that's when all the talk about overriding is
> relevant.
>
> If you have different version numbers, the override rule doesn't apply;
> the rule that applies is the shadow rule: the highest version number wins.
>
> The shadow rule exists because there are two scenarios that you really
> don't want to behave differently:
>
> 1. You have both HUnit-1.1 and HUnit-1.2, and they are in the same
> database.
>
> 2. You have both HUnit-1.1 and HUnit-1.2, but they are in different
> databases.
> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From harendra.kumar at gmail.com Sat Aug 20 18:27:56 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sat, 20 Aug 2016 23:57:56 +0530 Subject: Using stringize and string concatenation in ghc preprocessing Message-ID:

Hi, To reduce boilerplate code in an FFI implementation file I am trying to use the stringizing and string concatenation features of the C preprocessor.
Since ghc passes '-traditional' to the preprocessor which disables these features I thought I can pass my own flags to the preprocessor like this: {-# OPTIONS_GHC -optP -E -optP -undef #-} But "-optP" seems to only append to the flags that GHC already passes and gcc has no "-no-traditional" option to undo the effect of the "-traditional" that GHC has already passed. I think "-optP" should override the flags passed by ghc rather than appending to them. Is there a reason not to do that? Is there any other better way to achieve this? What is the standard way of doing this if any? -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Aug 20 18:33:08 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 20 Aug 2016 14:33:08 -0400 Subject: Using stringize and string concatenation in ghc preprocessing In-Reply-To: References: Message-ID: On Sat, Aug 20, 2016 at 2:27 PM, Harendra Kumar wrote: > But "-optP" seems to only append to the flags that GHC already passes and > gcc has no "-no-traditional" option to undo the effect of the > "-traditional" that GHC has already passed. I think "-optP" should override > the flags passed by ghc rather than appending to them. Is there a reason > not to do that? > > Is there any other better way to achieve this? What is the standard way of > doing this if any? > Removing -traditional will break much Haskell source. Go look at the history of clang with ghc (clang doesn't do -traditional) to see what happens. (tl;dr: without -traditional, cpp knows too much about what constitutes valid C, and mangles and/or throws errors on valid Haskell that doesn't lex the way C does.) You might want to look at cpphs as an alternative preprocessor. There are some ancient K&R-era hacks that could be used if absolutely necessary, but cpphs should be a much simpler and cleaner solution. 
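[The "ancient K&R-era hacks" mentioned here can be made concrete. The following is a minimal sketch, not code from the thread, and it assumes GHC hands the file to a GNU-style cpp running in -traditional mode (as it does with gcc; clang, as noted, has no such mode): a traditional preprocessor substitutes macro parameters even inside string literals, which recovers a crude form of stringizing.]

```haskell
{-# LANGUAGE CPP #-}

-- MK_MSG is a hypothetical macro for illustration. Under a
-- traditional-mode cpp the parameter 'name' is substituted even
-- inside the string literal, so the call in 'main' yields the
-- string with 'name' replaced by its argument. An ANSI cpp would
-- leave the string untouched, so this trick is gcc-specific.
#define MK_MSG(name) "hello, name"

main :: IO ()
main = putStrLn MK_MSG(world)
```

[If the traditional substitution applies, this prints "hello, world"; with clang's preprocessor it will not, which is exactly the portability caveat that makes cpphs the cleaner route.]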
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Sun Aug 21 20:31:37 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Sun, 21 Aug 2016 22:31:37 +0200 Subject: Using stringize and string concatenation in ghc preprocessing In-Reply-To: References: Message-ID: Hi Harendra, I ran into this very problem recently. Turns out -traditional knows string concatenation too. I seem to remember learning this by browsing the GHC source code, but now I can't find any occurrence of this pattern. But here's an example of how to do string concatenation with CPP in -traditional mode: https://github.com/tweag/sparkle/blob/a4e481aa5180b6ec93c219f827aefe932b66a953/inline-java/src/Foreign/JNI.hs#L274 . HTH, -- Mathieu Boespflug Founder at http://tweag.io. On 20 August 2016 at 20:33, Brandon Allbery wrote: > On Sat, Aug 20, 2016 at 2:27 PM, Harendra Kumar > wrote: > >> But "-optP" seems to only append to the flags that GHC already passes and >> gcc has no "-no-traditional" option to undo the effect of the >> "-traditional" that GHC has already passed. I think "-optP" should override >> the flags passed by ghc rather than appending to them. Is there a reason >> not to do that? >> >> Is there any other better way to achieve this? What is the standard way >> of doing this if any? >> > > Removing -traditional will break much Haskell source. Go look at the > history of clang with ghc (clang doesn't do -traditional) to see what > happens. (tl;dr: without -traditional, cpp knows too much about what > constitutes valid C, and mangles and/or throws errors on valid Haskell that > doesn't lex the way C does.) > > You might want to look at cpphs as an alternative preprocessor. 
There are > some ancient K&R-era hacks that could be used if absolutely necessary, but > cpphs should be a much simpler and cleaner solution. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Aug 21 20:52:21 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 21 Aug 2016 16:52:21 -0400 Subject: Using stringize and string concatenation in ghc preprocessing In-Reply-To: References: Message-ID: On Sun, Aug 21, 2016 at 4:31 PM, Boespflug, Mathieu wrote: > I ran into this very problem recently. Turns out -traditional knows string > concatenation too. I seem to remember learning this by browsing the GHC > source code, but now I can't find any occurrence of this pattern. But > here's an example of how to do string concatenation with CPP in > -traditional mode: https://github.com/tweag/sparkle/blob/ > a4e481aa5180b6ec93c219f827aefe932b66a953/inline-java/src/ > Foreign/JNI.hs#L274 > > . > That's the hacky K&R way I mentioned earlier. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Mon Aug 22 05:51:49 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 22 Aug 2016 11:21:49 +0530 Subject: Using stringize and string concatenation in ghc preprocessing In-Reply-To: References: Message-ID: Thanks Mathieu. 
This works pretty well for gcc ( https://gcc.gnu.org/onlinedocs/cpp/Traditional-macros.html) but sadly it does not work for clang cpp as Brandon too pointed out earlier that clang does not have a traditional mode. -harendra On 22 August 2016 at 02:01, Boespflug, Mathieu wrote: > Hi Harendra, > > I ran into this very problem recently. Turns out -traditional knows string > concatenation too. I seem to remember learning this by browsing the GHC > source code, but now I can't find any occurrence of this pattern. But > here's an example of how to do string concatenation with CPP in > -traditional mode: https://github.com/tweag/sparkle/blob/ > a4e481aa5180b6ec93c219f827aefe932b66a953/inline-java/src/ > Foreign/JNI.hs#L274. > > HTH, > > -- > Mathieu Boespflug > Founder at http://tweag.io. > > On 20 August 2016 at 20:33, Brandon Allbery wrote: > >> On Sat, Aug 20, 2016 at 2:27 PM, Harendra Kumar > > wrote: >> >>> But "-optP" seems to only append to the flags that GHC already passes >>> and gcc has no "-no-traditional" option to undo the effect of the >>> "-traditional" that GHC has already passed. I think "-optP" should override >>> the flags passed by ghc rather than appending to them. Is there a reason >>> not to do that? >>> >>> Is there any other better way to achieve this? What is the standard way >>> of doing this if any? >>> >> >> Removing -traditional will break much Haskell source. Go look at the >> history of clang with ghc (clang doesn't do -traditional) to see what >> happens. (tl;dr: without -traditional, cpp knows too much about what >> constitutes valid C, and mangles and/or throws errors on valid Haskell that >> doesn't lex the way C does.) >> >> You might want to look at cpphs as an alternative preprocessor. There are >> some ancient K&R-era hacks that could be used if absolutely necessary, but >> cpphs should be a much simpler and cleaner solution. 
>> >> -- >> brandon s allbery kf8nh sine nomine >> associates >> allbery.b at gmail.com >> ballbery at sinenomine.net >> unix, openafs, kerberos, infrastructure, xmonad >> http://sinenomine.net >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfdyck at google.com Wed Aug 24 00:05:28 2016 From: mfdyck at google.com (Matthew Farkas-Dyck) Date: Tue, 23 Aug 2016 17:05:28 -0700 Subject: Exposing target language in Haskell with GHC API Message-ID: A colleague and i are writing, as an unofficial side project, a Haskell→Bluespec compiler, using GHC as our Haskell front-end. The source language of the part we are writing is GHC Core. We need to somehow expose some Bluespec terms and types to the Haskell source program. We had a few ideas: 1. Some "NO_MANGLE" pragma which would tell GHC to not mangle the emitted name, e.g. `x = {-# NO_MANGLE #-} x` to expose `x` 2. `foreign import prim`, not quite sure how yet 3. "CORE" pragmas, e.g. `x = {-# CORE "foo" #-} x` to expose `x` 4. "ANN" pragmas, e.g. `{-# ANN x "no_mangle" #-} x = x` to expose `x` 1 and 2 would mean modifying GHC which we'd rather not do. For 3, we're not sure how to find the "CORE"-pragmatic annotations in a `Core` AST. 4 seems it would work but be a little cumbersome, as the annotation is not on the `Core` AST. Anyone know a good way to do this? From dominic at steinitz.org Thu Aug 25 09:10:41 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Thu, 25 Aug 2016 10:10:41 +0100 Subject: GHC Performance / Replacement for R? Message-ID: I am trying to use Haskell as a replacement for R but running into two problems which I describe below. Are there any plans to address the performance issues I have encountered? 1. 
I seem to have to jump through a lot of hoops just to be able to select the data I am interested in.

{-# LANGUAGE ScopedTypeVariables #-}

{-# OPTIONS_GHC -Wall #-}

import Data.Csv hiding ( decodeByName )
import qualified Data.Vector as V

import Data.ByteString ( ByteString )
import qualified Data.ByteString.Char8 as B

import qualified Pipes.Prelude as P
import qualified Pipes.ByteString as Bytes
import Pipes
import qualified Pipes.Csv as Csv
import System.IO

import qualified Control.Foldl as L

main :: IO ()
main = withFile "examples/787338586_T_ONTIME.csv" ReadMode $ \h -> do
  let csvs :: Producer (V.Vector ByteString) IO ()
      csvs = Csv.decode HasHeader (Bytes.fromHandle h) >-> P.concat
      uvectors :: Producer (V.Vector ByteString) IO ()
      uvectors = csvs >-> P.map (V.foldr V.cons V.empty)
  vec_vec <- L.impurely P.foldM L.vector uvectors
  print $ (vec_vec :: V.Vector (V.Vector ByteString)) V.! 17
  print $ V.length vec_vec
  let rockspring = V.filter (\x -> x V.! 8 == B.pack "RKS") vec_vec
  print $ V.length rockspring

Here's the equivalent R:

df <- read.csv("787338586_T_ONTIME.csv")
rockspring <- df[df$ORIGIN == "RKS",]

2. Now I think I could improve the above to make an environment that is more similar to the one my colleagues are used to in R but more problematical is the memory usage.

* 112.5M file
* Just loading the source into ghci takes 142.7M
* > foo <- readFile "examples/787338586_T_ONTIME.csv" > length foo takes me up to 4.75G. But we probably don't want to do this!
* Let's try again.
* > :set -XScopedTypeVariables
* > h <- openFile "examples/787338586_T_ONTIME.csv" ReadMode
* > let csvs :: Producer (V.Vector ByteString) IO () = Csv.decode HasHeader (Bytes.fromHandle h) >-> P.concat
* > let uvectors :: Producer (V.Vector ByteString) IO () = csvs >-> P.map (V.map id) >-> P.map (V.foldr V.cons V.empty)
* > vec_vec :: V.Vector (V.Vector ByteString) <- L.impurely P.foldM L.vector uvectors
* Now I am up at 3.17G. In R I am under 221.3M.
* > V.length rockspring takes a long time to return 155 and now I am at 3.5G!!! In R > rockspring <- df[df$ORIGIN == "RKS",] seems instantaneous and now uses only 379.5M.
* > length(rockspring) 37 > length(df$ORIGIN) 471949 i.e. there are 37 columns and 471,949 rows.

Running this as an executable gives

~/Dropbox/Private/labels $ ./examples/BugReport +RTS -s
["2014-01-01","EV","20366","N904EV","2512","10747","1074702","30747",
"BRO","Brownsville, TX","Texas","11298","1129803","30194",
"DFW","Dallas/Fort Worth, TX","Texas","0720","0718",
"-2.00","8.00","0726","0837","7.00","0855","0844","-11.00","0.00",
"","0.00","482.00","","","","","",""]
471949
155
  14,179,764,240 bytes allocated in the heap
   3,378,342,072 bytes copied during GC
     786,333,512 bytes maximum residency (13 sample(s))
      36,933,976 bytes maximum slop
            1434 MB total memory in use (0 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0     26989 colls,     0 par    1.423s   1.483s     0.0001s    0.0039s
  Gen  1        13 colls,     0 par    1.005s   1.499s     0.1153s    0.6730s

  INIT    time    0.000s  (  0.003s elapsed)
  MUT     time    3.195s  (  3.193s elapsed)
  GC      time    2.428s  (  2.982s elapsed)
  EXIT    time    0.016s  (  0.138s elapsed)
  Total   time    5.642s  (  6.315s elapsed)

  %GC     time      43.0%  (47.2% elapsed)

  Alloc rate    4,437,740,019 bytes per MUT second

  Productivity  57.0% of total user, 50.9% of total elapsed

From simonpj at microsoft.com Thu Aug 25 10:31:49 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 25 Aug 2016 10:31:49 +0000 Subject: GHC Performance / Replacement for R? In-Reply-To: References: Message-ID:

Sounds bad. But it'll need someone with bytestring expertise to debug. Maybe there's a GHC problem underlying; or maybe it's a shortcoming of bytestring.

Simon

| -----Original Message-----
| From: Glasgow-haskell-users [mailto:glasgow-haskell-users-
| bounces at haskell.org] On Behalf Of Dominic Steinitz
| Sent: 25 August 2016 10:11
| To: GHC users
| Subject: GHC Performance / Replacement for R?
| | I am trying to use Haskell as a replacement for R but running into two | problems which I describe below. Are there any plans to address the | performance issues I have encountered? | | 1. I seem to have to jump through a lot of hoops just to be able to | select the data I am interested in. | | {-# LANGUAGE ScopedTypeVariables #-} | | {-# OPTIONS_GHC -Wall #-} | | import Data.Csv hiding ( decodeByName ) | import qualified Data.Vector as V | | import Data.ByteString ( ByteString ) | import qualified Data.ByteString.Char8 as B | | import qualified Pipes.Prelude as P | import qualified Pipes.ByteString as Bytes import Pipes import | qualified Pipes.Csv as Csv import System.IO | | import qualified Control.Foldl as L | | main :: IO () | main = withFile "examples/787338586_T_ONTIME.csv" ReadMode $ \h -> do | let csvs :: Producer (V.Vector ByteString) IO () | csvs = Csv.decode HasHeader (Bytes.fromHandle h) >-> P.concat | uvectors :: Producer (V.Vector ByteString) IO () | uvectors = csvs >-> P.map (V.foldr V.cons V.empty) | vec_vec <- L.impurely P.foldM L.vector uvectors | print $ (vec_vec :: V.Vector (V.Vector ByteString)) V.! 17 | print $ V.length vec_vec | let rockspring = V.filter (\x -> x V.! 8 == B.pack "RKS") vec_vec | print $ V.length rockspring | | Here's the equivalent R: | | df <- read.csv("787338586_T_ONTIME.csv") | rockspring <- df[df$ORIGIN == "RKS",] | | 2. Now I think I could improve the above to make an environment that | is more similar to the one my colleagues are used to in R but more | problematical is the memory usage. | | * 112.5M file | * Just loading the source into ghci takes 142.7M | * > foo <- readFile "examples/787338586_T_ONTIME.csv" > length foo | takes me up to 4.75G. But we probably don't want to do this! | * Let's try again. 
| * > :set -XScopedTypeVariables | * > h <- openFile "examples/787338586_T_ONTIME.csv" ReadMode | * > let csvs :: Producer (V.Vector ByteString) IO () = Csv.decode | HasHeader (Bytes.fromHandle h) >-> P.concat | * > let uvectors :: Producer (V.Vector ByteString) IO () = csvs >-> | P.map (V.map id) >-> P.map (V.foldr V.cons V.empty) | * > vec_vec :: V.Vector (V.Vector ByteString) <- L.impurely P.foldM | L.vector uvectors | * Now I am up at 3.17G. In R I am under 221.3M. | * > V.length rockspring takes a long time to return 155 and now I am | at 3.5G!!! In R > rockspring <- df[df$ORIGIN == "RKS",] seems | instantaneous and now uses only 379.5M. | * > length(rockspring) 37 > length(df$ORIGIN) 471949 i.e. there are | 37 columns and 471,949 rows. | | Running this as an executable gives | | ~/Dropbox/Private/labels $ ./examples/BugReport +RTS -s ["2014-01- | 01","EV","20366","N904EV","2512","10747","1074702","30747", | "BRO","Brownsville, TX","Texas","11298","1129803","30194", | "DFW","Dallas/Fort Worth, TX","Texas","0720","0718", | "-2.00","8.00","0726","0837","7.00","0855","0844","-11.00","0.00", | "","0.00","482.00","","","","","",""] | 471949 | 155 | 14,179,764,240 bytes allocated in the heap | 3,378,342,072 bytes copied during GC | 786,333,512 bytes maximum residency (13 sample(s)) | 36,933,976 bytes maximum slop | 1434 MB total memory in use (0 MB lost due to | fragmentation) | | Tot time (elapsed) Avg pause | Max pause | Gen 0 26989 colls, 0 par 1.423s 1.483s 0.0001s | 0.0039s | Gen 1 13 colls, 0 par 1.005s 1.499s 0.1153s | 0.6730s | | INIT time 0.000s ( 0.003s elapsed) | MUT time 3.195s ( 3.193s elapsed) | GC time 2.428s ( 2.982s elapsed) | EXIT time 0.016s ( 0.138s elapsed) | Total time 5.642s ( 6.315s elapsed) | | %GC time 43.0% (47.2% elapsed) | | Alloc rate 4,437,740,019 bytes per MUT second | | Productivity 57.0% of total user, 50.9% of total elapsed | | _______________________________________________ | Glasgow-haskell-users mailing list | 
Glasgow-haskell-users at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users

From christiaan.baaij at gmail.com Fri Aug 26 07:56:12 2016 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Fri, 26 Aug 2016 09:56:12 +0200 Subject: Exposing target language in Haskell with GHC API In-Reply-To: References: Message-ID: <57BFF61C.8090507@gmail.com>

Hi Matthew, Although it doesn't "really" answer your question, you could take the same approach as I did with CLaSH (http://clash-lang.org/) which translates Haskell to VHDL/(System)Verilog. Although I didn't want to "expose" VHDL/(System)Verilog terms and types per se, I guess you could see it like that in a way. Let's take a look at the BitVector type (http://hackage.haskell.org/package/clash-prelude-0.10.13/docs/CLaSH-Sized-Internal-BitVector.html) for example.

= Exposing data types =

You could, in a way, say that the BitVector type, and its operations, are the "exposed" std_logic_vector/bitvector types and operations of VHDL/(System)Verilog. So, I "exposed" the bitvector/std_logic_vector type as:

> import GHC.TypeLits
>
> newtype BitVector (n :: Nat) = BV {unsafeToInteger :: Integer}

Now, using newtypes can be a bit "dangerous", as GHC has a tendency to coerce between a newtype and its underlying representation. Also, if you use https://downloads.haskell.org/~ghc/8.0.1/docs/html/libraries/ghc-8.0.1/Type.html#v:coreView, you might accidentally look through the newtype. So, perhaps a safer way is to just do:

> data BitVector (n :: Nat) = BV {unsafeToInteger :: Integer}

Also, I use 'Integer' as the underlying representation because I want to "simulate" my circuits in Haskell.
However, if you don't care about this, you might as well use '()'.

= Exposing functions =

You mentioned that GHC does name mangling, but I must say I've never seen GHC do this. What GHC does do is inlining and specialisation, which might optimise away your carefully constructed "primitive". What I do in this case is simply mark my "primitive" functions, your "exposed" BlueSpec functions, as NOINLINE. For example, I define equality on BitVector as:

> instance Eq (BitVector n) where
>   (==) = eq#
>   (/=) = neq#
>
> {-# NOINLINE eq# #-}
> eq# :: BitVector n -> BitVector n -> Bool
> eq# (BV v1) (BV v2) = v1 == v2
>
> {-# NOINLINE neq# #-}
> neq# :: BitVector n -> BitVector n -> Bool
> neq# (BV v1) (BV v2) = v1 /= v2

Again, I want to "simulate" my circuits, so I've given actual definitions for my operations. But if you don't care for this, you can simply leave them as 'undefined'.

= Wrapping up =

So I hope it's clear how to "expose" target terms and types through simple data type wrapping and the use of NOINLINE. You'll have to do the above for all the BlueSpec data types and functions you want to expose and then package it up. Perhaps also, if you have the time, using BlueSpec's FFI and Haskell's FFI, you could hook up the "exposed" operations to BlueSim, and have "co-simulation" between Haskell and BlueSpec. Anyhow, I hope this helps. Let me know if you have any more questions.

Regards, Christiaan

On 08/24/2016 02:05 AM, Matthew Farkas-Dyck via Glasgow-haskell-users wrote:
> A colleague and i are writing, as an unofficial side project, a
> Haskell→Bluespec compiler, using GHC as our Haskell front-end. The
> source language of the part we are writing is GHC Core. We need to
> somehow expose some Bluespec terms and types to the Haskell source
> program. We had a few ideas:
> 1. Some "NO_MANGLE" pragma which would tell GHC to not mangle the
> emitted name, e.g. `x = {-# NO_MANGLE #-} x` to expose `x`
> 2. `foreign import prim`, not quite sure how yet
> 3.
"CORE" pragmas, e.g. `x = {-# CORE "foo" #-} x` to expose `x` > 4. "ANN" pragmas, e.g. `{-# ANN x "no_mangle" #-} x = x` to expose `x` > > 1 and 2 would mean modifying GHC which we'd rather not do. For 3, > we're not sure how to find the "CORE"-pragmatic annotations in a > `Core` AST. 4 seems it would work but be a little cumbersome, as the > annotation is not on the `Core` AST. > > Anyone know a good way to do this? > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > From iavor.diatchki at gmail.com Tue Aug 30 21:05:07 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 30 Aug 2016 14:05:07 -0700 Subject: GHC Performance / Replacement for R? In-Reply-To: References: Message-ID: Hello, when you parse the CSV fully, you end up creating a lot of small bytestring objects, and each of these adds some overhead. The vectors themselves add up some additional overhead. All of this adds up when you have as many fields as you do. An alternative would be to use a different representation for the data, which recomputes things when needed. While this might be a bit slower in some cases, it could have significant saving in terms of memory use. I wrote up a small example to illustrate what I have in mind, which should be attached to this e-mail. Basically, instead of parsing the CSV file fully, I just indexed where the lines are (ref. the "rows" field of "CSV"). This allows me to access each row quickly, and the when I need to get a specific field, I simply parse the bytes of the row. One could play all kinds of games like that, and I imagine R does something similar, although I have never looked at how it works. To test the approach I generated ~200Mb of sample data (generator is also in the attached file), and I was able to filter it using ~240Mb, which is comparable to what you reported about R. 
One could probably package all this up in a library that supports "R-like" operations.

These are the stats I get from -s:

      4,137,632,432 bytes allocated in the heap
            925,200 bytes copied during GC
        200,104,224 bytes maximum residency (2 sample(s))
          6,217,864 bytes maximum slop
                246 MB total memory in use (1 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0      7564 colls,     0 par    0.024s   0.011s     0.0000s    0.0001s
  Gen  1         2 colls,     0 par    0.000s   0.001s     0.0003s    0.0006s

  INIT    time    0.000s  (  0.000s elapsed)
  MUT     time    0.364s  (  0.451s elapsed)
  GC      time    0.024s  (  0.011s elapsed)
  EXIT    time    0.000s  (  0.001s elapsed)
  Total   time    0.388s  (  0.463s elapsed)

  %GC     time       6.2%  (2.5% elapsed)

  Alloc rate    11,367,122,065 bytes per MUT second

  Productivity  93.8% of total user, 78.6% of total elapsed

-Iavor

On Thu, Aug 25, 2016 at 3:31 AM, Simon Peyton Jones via Glasgow-haskell-users wrote:
> Sounds bad. But it'll need someone with bytestring expertise to debug. Maybe there's a GHC problem underlying; or maybe it's a shortcoming of bytestring.
>
> Simon
>
> | -----Original Message-----
> | From: Glasgow-haskell-users [mailto:glasgow-haskell-users-bounces at haskell.org] On Behalf Of Dominic Steinitz
> | Sent: 25 August 2016 10:11
> | To: GHC users
> | Subject: GHC Performance / Replacement for R?
> |
> | I am trying to use Haskell as a replacement for R but running into two problems, which I describe below. Are there any plans to address the performance issues I have encountered?
> |
> | 1. I seem to have to jump through a lot of hoops just to be able to select the data I am interested in.
> |
> | {-# LANGUAGE ScopedTypeVariables #-}
> |
> | {-# OPTIONS_GHC -Wall #-}
> |
> | import Data.Csv hiding ( decodeByName )
> | import qualified Data.Vector as V
> |
> | import Data.ByteString ( ByteString )
> | import qualified Data.ByteString.Char8 as B
> |
> | import qualified Pipes.Prelude as P
> | import qualified Pipes.ByteString as Bytes
> | import Pipes
> | import qualified Pipes.Csv as Csv
> | import System.IO
> |
> | import qualified Control.Foldl as L
> |
> | main :: IO ()
> | main = withFile "examples/787338586_T_ONTIME.csv" ReadMode $ \h -> do
> |   let csvs :: Producer (V.Vector ByteString) IO ()
> |       csvs = Csv.decode HasHeader (Bytes.fromHandle h) >-> P.concat
> |       uvectors :: Producer (V.Vector ByteString) IO ()
> |       uvectors = csvs >-> P.map (V.foldr V.cons V.empty)
> |   vec_vec <- L.impurely P.foldM L.vector uvectors
> |   print $ (vec_vec :: V.Vector (V.Vector ByteString)) V.! 17
> |   print $ V.length vec_vec
> |   let rockspring = V.filter (\x -> x V.! 8 == B.pack "RKS") vec_vec
> |   print $ V.length rockspring
> |
> | Here's the equivalent R:
> |
> | df <- read.csv("787338586_T_ONTIME.csv")
> | rockspring <- df[df$ORIGIN == "RKS",]
> |
> | 2. Now I think I could improve the above to make an environment that is more similar to the one my colleagues are used to in R, but more problematical is the memory usage.
> |
> | * 112.5M file
> | * Just loading the source into ghci takes 142.7M
> | * > foo <- readFile "examples/787338586_T_ONTIME.csv" > length foo
> |   takes me up to 4.75G. But we probably don't want to do this!
> | * Let's try again.
> | * > :set -XScopedTypeVariables
> | * > h <- openFile "examples/787338586_T_ONTIME.csv" ReadMode
> | * > let csvs :: Producer (V.Vector ByteString) IO () = Csv.decode
> |     HasHeader (Bytes.fromHandle h) >-> P.concat
> | * > let uvectors :: Producer (V.Vector ByteString) IO () = csvs >->
> |     P.map (V.map id) >-> P.map (V.foldr V.cons V.empty)
> | * > vec_vec :: V.Vector (V.Vector ByteString) <- L.impurely P.foldM
> |     L.vector uvectors
> | * Now I am up at 3.17G. In R I am under 221.3M.
> | * > V.length rockspring takes a long time to return 155 and now I am
> |   at 3.5G!!! In R > rockspring <- df[df$ORIGIN == "RKS",] seems
> |   instantaneous and now uses only 379.5M.
> | * > length(rockspring) 37 > length(df$ORIGIN) 471949 i.e. there are
> |   37 columns and 471,949 rows.
> |
> | Running this as an executable gives
> |
> | ~/Dropbox/Private/labels $ ./examples/BugReport +RTS -s
> | ["2014-01-01","EV","20366","N904EV","2512","10747","1074702","30747",
> | "BRO","Brownsville, TX","Texas","11298","1129803","30194",
> | "DFW","Dallas/Fort Worth, TX","Texas","0720","0718",
> | "-2.00","8.00","0726","0837","7.00","0855","0844","-11.00","0.00",
> | "","0.00","482.00","","","","","",""]
> | 471949
> | 155
> |     14,179,764,240 bytes allocated in the heap
> |      3,378,342,072 bytes copied during GC
> |        786,333,512 bytes maximum residency (13 sample(s))
> |         36,933,976 bytes maximum slop
> |               1434 MB total memory in use (0 MB lost due to fragmentation)
> |
> |                                     Tot time (elapsed)  Avg pause  Max pause
> |   Gen  0     26989 colls,     0 par    1.423s   1.483s     0.0001s    0.0039s
> |   Gen  1        13 colls,     0 par    1.005s   1.499s     0.1153s    0.6730s
> |
> |   INIT    time    0.000s  (  0.003s elapsed)
> |   MUT     time    3.195s  (  3.193s elapsed)
> |   GC      time    2.428s  (  2.982s elapsed)
> |   EXIT    time    0.016s  (  0.138s elapsed)
> |   Total   time    5.642s  (  6.315s elapsed)
> |
> |   %GC     time      43.0%  (47.2% elapsed)
> |
> |   Alloc rate    4,437,740,019 bytes per MUT second
> |
> |   Productivity  57.0% of total user, 50.9% of total elapsed
> | _______________________________________________
> | Glasgow-haskell-users mailing list
> | Glasgow-haskell-users at haskell.org
> | http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users

-------------- next part --------------
An HTML attachment was scrubbed... URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: test.hs Type: text/x-haskell Size: 1986 bytes Desc: not available URL:

From dominic at steinitz.org Wed Aug 31 13:31:04 2016
From: dominic at steinitz.org (Dominic Steinitz)
Date: Wed, 31 Aug 2016 14:31:04 +0100
Subject: GHC Performance / Replacement for R?
In-Reply-To:
References:
Message-ID: <387e7bce-5f0d-25ce-fe6a-2a377131c412@steinitz.org>

Hi Iavor,

Thank you very much for this. It's nice to know that we have the ability in Haskell to be as frugal (or profligate) with memory as R when working with data frames. I should say this number of fields is quite low in the data science world. Data sets with 500 columns are not uncommon, and I did have one with 10,000 columns! I know other folks have worked on producing something like data frames, e.g. https://github.com/acowley/Frames and http://stla.github.io/stlapblog/posts/HaskellFrames.html, but I wanted to remain in the world of relatively simple types, and I haven't looked at its performance in terms of memory. On the plus side, it did manage to read in the 10,000 column data set, although ghc took about 5 minutes to do the typechecking (I should say within ghci).
Just to mention that R is not the only language that has nice facilities for data exploration; Python has a package called pandas: http://pandas.pydata.org. I feel we still have a way to go to make Haskell provide as easy an environment for data exploration as R or Python, but I shall continue on my crusade.

Many thanks once again,

Dominic.

On 30/08/2016 22:05, Iavor Diatchki wrote:
> Hello,
>
> when you parse the CSV fully, you end up creating a lot of small bytestring objects, and each of these adds some overhead. The vectors themselves add some additional overhead. All of this adds up when you have as many fields as you do. An alternative would be to use a different representation for the data, which recomputes things when needed. While this might be a bit slower in some cases, it could have significant savings in terms of memory use. I wrote up a small example to illustrate what I have in mind, which should be attached to this e-mail.
>
> Basically, instead of parsing the CSV file fully, I just indexed where the lines are (ref. the "rows" field of "CSV"). This allows me to access each row quickly, and then, when I need a specific field, I simply parse the bytes of that row. One could play all kinds of games like that, and I imagine R does something similar, although I have never looked at how it works. To test the approach I generated ~200Mb of sample data (the generator is also in the attached file), and I was able to filter it using ~240Mb, which is comparable to what you reported about R. One could probably package all this up in a library that supports "R-like" operations.
> | > | _______________________________________________
> | Glasgow-haskell-users mailing list
> | Glasgow-haskell-users at haskell.org
> | http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users

-------------- next part --------------
An HTML attachment was scrubbed... URL: