From simon.jakobi at googlemail.com Fri Nov 1 00:31:20 2019 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Fri, 1 Nov 2019 01:31:20 +0100 Subject: [Haskell-cafe] Expired certificate for prime.haskell.org Message-ID: Hi! I just wanted to give someone a link to the Semigroup-Monoid-proposal, but couldn't open the page in Chromium: https://prime.haskell.org/wiki/Libraries/Proposals/SemigroupMonoid There's already a short email thread regarding the issue from April on the haskell-prime mailing list: https://mail.haskell.org/pipermail/haskell-prime/2019-April/004448.html It would be nice if proposals like the SMP remained accessible! Cheers, Simon From ben at well-typed.com Fri Nov 1 00:44:52 2019 From: ben at well-typed.com (Ben Gamari) Date: Thu, 31 Oct 2019 20:44:52 -0400 Subject: [Haskell-cafe] Expired certificate for prime.haskell.org In-Reply-To: References: Message-ID: <9A460337-8A50-42F6-B043-39985A123F33@well-typed.com> Yes, running the gitlab import on prime.haskell.org is one of the last things on my list before that box can be torn down. I'll try to do it tomorrow. Cheers, - Ben On October 31, 2019 8:31:20 PM EDT, Simon Jakobi via Libraries wrote: >Hi! > >I just wanted to give someone a link to the Semigroup-Monoid-proposal, >but couldn't open the page in Chromium: > >https://prime.haskell.org/wiki/Libraries/Proposals/SemigroupMonoid > >There's already a short email thread regarding the issue from April on >the haskell-prime mailing list: > >https://mail.haskell.org/pipermail/haskell-prime/2019-April/004448.html > >It would be nice if proposals like the SMP remained accessible! > >Cheers, >Simon >_______________________________________________ >Libraries mailing list >Libraries at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From taylor at fausak.me Fri Nov 1 12:17:18 2019 From: taylor at fausak.me (Taylor Fausak) Date: Fri, 1 Nov 2019 08:17:18 -0400 Subject: [Haskell-cafe] 2019 State of Haskell Survey Message-ID: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> Hi friends! I am excited to announce the third annual State of Haskell Survey. I’d appreciate it if you could take a few minutes to fill it out. Thanks! https://haskellweekly.news/survey/2019.html From markus.l2ll at gmail.com Fri Nov 1 14:32:25 2019 From: markus.l2ll at gmail.com (=?UTF-8?B?TWFya3VzIEzDpGxs?=) Date: Fri, 1 Nov 2019 16:32:25 +0200 Subject: [Haskell-cafe] Reverse of -ddump-splices Message-ID: Hi list! Is it so that there is no reverse for -ddump-splices? In ghci I can set it with :set -ddump-splices and it turns on, but adding "no" to the flag appears not to work. Found this https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html so probably there is no reverse? -- Markus Läll From simon.jakobi at googlemail.com Fri Nov 1 15:13:36 2019 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Fri, 1 Nov 2019 16:13:36 +0100 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: References: Message-ID: Hi Markus! Does :unset --ddump-splices work? Am Fr., 1. Nov. 2019 um 15:33 Uhr schrieb Markus Läll : > > Hi list! > > Is it so that there is no reverse for -ddump-splices? In ghci I can > set it with :set -ddump-splices and it turns on, but adding "no" to > the flag appears not to work. > > Found this > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > so probably there is no reverse? > > -- > Markus Läll > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. 
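The flag being discussed prints each Template Haskell splice together with the code it expands to. A minimal module to try it on (a sketch; the module and names below are illustrative, not taken from the thread):

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

-- litE/integerL build the TH expression for the literal 42; with
-- -ddump-splices enabled, GHC reports what each $(...) expands to.
import Language.Haskell.TH (integerL, litE)

answer :: Integer
answer = $(litE (integerL 42))

main :: IO ()
main = print answer
```

After :set -ddump-splices, reloading a module like this makes GHCi print the splice expansion; the thread's point is that there was no corresponding way to switch that output off again.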
From markus.l2ll at gmail.com Fri Nov 1 15:22:03 2019 From: markus.l2ll at gmail.com (=?UTF-8?B?TWFya3VzIEzDpGxs?=) Date: Fri, 1 Nov 2019 17:22:03 +0200 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: References: Message-ID: Hi Simon -- great idea, but unfortunately not. :) $ :unset -ddump-splices $ don't know how to reverse -ddump-splices On Fri, Nov 1, 2019 at 5:14 PM Simon Jakobi wrote: > > Hi Markus! > > Does > > :unset --ddump-splices > > work? > > Am Fr., 1. Nov. 2019 um 15:33 Uhr schrieb Markus Läll : > > > > Hi list! > > > > Is it so that there is no reverse for -ddump-splices? In ghci I can > > set it with :set -ddump-splices and it turns on, but adding "no" to > > the flag appears not to work. > > > > Found this > > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > > so probably there is no reverse? > > > > -- > > Markus Läll > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. -- Markus Läll From jeffbrown.the at gmail.com Fri Nov 1 21:12:36 2019 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Fri, 1 Nov 2019 16:12:36 -0500 Subject: [Haskell-cafe] 2019 State of Haskell Survey In-Reply-To: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> References: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> Message-ID: There's a question about whether you contribute to projects. I contribute issues but so far no code, so I marked no. Maybe not a big deal, but the survey results for that question might be easier to interpret if the question was subdivided. On Fri, Nov 1, 2019 at 7:17 AM Taylor Fausak wrote: > Hi friends! I am excited to announce the third annual State of Haskell > Survey. I’d appreciate it if you could take a few minutes to fill it out. > Thanks!
> > https://haskellweekly.news/survey/2019.html > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Jeff Brown | Jeffrey Benjamin Brown Website | Facebook | LinkedIn (spammy, so I often miss messages here) | Github -------------- next part -------------- An HTML attachment was scrubbed... URL: From fa-ml at ariis.it Sat Nov 2 00:21:03 2019 From: fa-ml at ariis.it (Francesco Ariis) Date: Sat, 2 Nov 2019 01:21:03 +0100 Subject: [Haskell-cafe] 2019 State of Haskell Survey In-Reply-To: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> References: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> Message-ID: <20191102002103.GA18762@x60s.casa> Hello Taylor, On Fri, Nov 01, 2019 at 08:17:18AM -0400, Taylor Fausak wrote: > Hi friends! I am excited to announce the third annual State of Haskell > Survey. I’d appreciate it if you could take a few minutes to fill it out. > Thanks! thanks for your efforts! I was not so happy in seeing a (required) email address, but I understand it is needed to prevent silly people from tampering with the survey. Minor remarks: - vim (which is not vi) is missing from the "which editor do you use" section. - both "Where do you interact with the Haskell community?" and "How did you hear about this survey?" miss "Haskell Discourse". 
Thanks again -F From ben at well-typed.com Sat Nov 2 00:45:31 2019 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Nov 2019 20:45:31 -0400 Subject: [Haskell-cafe] Expired certificate for prime.haskell.org In-Reply-To: <9A460337-8A50-42F6-B043-39985A123F33@well-typed.com> References: <9A460337-8A50-42F6-B043-39985A123F33@well-typed.com> Message-ID: <87wocjp0m3.fsf@smart-cactus.org> Ben Gamari writes: > Yes, running the gitlab import on prime.haskell.org is one of the last > things on my list before that box can be torn down. I'll try to do it > tomorrow. > I have gone ahead and imported the issues, wiki, and repository content from prime.haskell.org to GitLab [1]. Do let me know whether anything looks amiss. Cheers, - Ben [1] https://gitlab.haskell.org/haskell/prime -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simon.jakobi at googlemail.com Sat Nov 2 01:37:05 2019 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Sat, 2 Nov 2019 02:37:05 +0100 Subject: [Haskell-cafe] Expired certificate for prime.haskell.org In-Reply-To: <87wocjp0m3.fsf@smart-cactus.org> References: <9A460337-8A50-42F6-B043-39985A123F33@well-typed.com> <87wocjp0m3.fsf@smart-cactus.org> Message-ID: Thanks a lot Ben! After a bit of searching I found the page for the SMP proposal: https://gitlab.haskell.org/haskell/prime/wikis/libraries/proposals/semigroup-monoid I noticed that the code blocks there are a bit mangled. If your import tools could fix that programmatically, that would be very nice! Cheers, Simon Am Sa., 2. Nov. 2019 um 01:45 Uhr schrieb Ben Gamari : > > Ben Gamari writes: > > > Yes, running the gitlab import on prime.haskell.org is one of the last > > things on my list before that box can be torn down. I'll try to do it > > tomorrow. 
> > > I have gone ahead and imported the issues, wiki, and repository content > from prime.haskell.org to GitLab [1]. Do let me know whether anything > looks amiss. > > Cheers, > > - Ben > > > [1] https://gitlab.haskell.org/haskell/prime From taylor at fausak.me Sat Nov 2 14:15:55 2019 From: taylor at fausak.me (Taylor Fausak) Date: Sat, 02 Nov 2019 10:15:55 -0400 Subject: [Haskell-cafe] 2019 State of Haskell Survey In-Reply-To: <20191102002103.GA18762@x60s.casa> References: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> <20191102002103.GA18762@x60s.casa> Message-ID: <6a6643cf-3154-479d-a319-e91f8044062b@www.fastmail.com> You can take the "Vi" answer choice of the editor question to mean "Vi family", including Vim, Neovim, and so on. I added Discourse as an option for both community questions. On Fri, Nov 1, 2019, at 8:21 PM, Francesco Ariis wrote: > Hello Taylor, > > On Fri, Nov 01, 2019 at 08:17:18AM -0400, Taylor Fausak wrote: > > Hi friends! I am excited to announce the third annual State of Haskell > > Survey. I’d appreciate it if you could take a few minutes to fill it out. > > Thanks! > > thanks for your efforts! I was not so happy in seeing a (required) email > address, but I understand it is needed to prevent silly people from > tampering with the survey. > > Minor remarks: > - vim (which is not vi) is missing from the "which editor do you use" > section. > - both "Where do you interact with the Haskell community?" and > "How did you hear about this survey?" miss "Haskell Discourse". > > Thanks again > -F > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. 
From chrisdone at gmail.com Sun Nov 3 12:01:10 2019 From: chrisdone at gmail.com (Christopher Done) Date: Sun, 3 Nov 2019 12:01:10 +0000 Subject: [Haskell-cafe] Call for maintainer: formatting package Message-ID: Hi all, The formatting package needs a maintainer. I made this package 6 years ago based on the HoleyMonoid package's idea. I thought it would become a staple package for me. Since then, I have basically never actually used it in a project. It turns out, I don't like the printf-style of doing printing at all, and prefer to just manually append things. As such, it doesn't make sense for me to maintain this package. Feel free to comment here or on the GitHub issue if you are interested: https://github.com/chrisdone/formatting/issues/58 Cheers, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Sun Nov 3 12:13:47 2019 From: chrisdone at gmail.com (Christopher Done) Date: Sun, 3 Nov 2019 12:13:47 +0000 Subject: [Haskell-cafe] hindent: maintainer needed Message-ID: Hi all, I'm sunsetting my maintenance of hindent. It basically does what I want and has done for the last four years. I've put enough hours into this project that I'm ready to move onto new things. The approach of using GHC to format code is already being explored by at least two other projects: * brittany * ormolu So if you still want hindent to continue being updated, I suggest you take over as maintainer, either as a group or an individual. Feel free to comment here. I'm happy to create an organization and move the project into it. In the absence of anyone taking over, the GitHub project will be archived on Jan 31st. By that point, the issues/PRs will be lost. 
There is a related GitHub issue here: Cheers, Chris From mihai.maruseac at gmail.com Sun Nov 3 16:59:26 2019 From: mihai.maruseac at gmail.com (Mihai Maruseac) Date: Sun, 3 Nov 2019 08:59:26 -0800 Subject: [Haskell-cafe] hindent: maintainer needed In-Reply-To: References: Message-ID: I volunteer to maintain it. On Sun, Nov 3, 2019 at 4:13 AM Christopher Done wrote: > > Hi all, > > I'm sunsetting my maintenance of hindent. It basically does what I > want and has done for the last four years. I've put enough hours into > this project that I'm ready to move onto new things. > > The approach of using GHC to format code is already being explored by > at least two other projects: > > * brittany > * ormolu > > So if you still want hindent to continue being updated, I suggest you > take over as maintainer, either as a group or an individual. Feel free > to comment here. I'm happy to create an organization and move the > project into it. > > In the absence of anyone taking over, the GitHub project will be > archived on Jan 31st. By that point, the issues/PRs will be lost. > > There is a related GitHub issue here: > > > Cheers, > > Chris > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Mihai Maruseac (MM) "If you can't solve a problem, then there's an easier problem you can solve: find it." 
-- George Polya From Graham.Hutton at nottingham.ac.uk Mon Nov 4 06:51:04 2019 From: Graham.Hutton at nottingham.ac.uk (Graham Hutton) Date: Mon, 4 Nov 2019 06:51:04 +0000 Subject: [Haskell-cafe] Journal of Functional Programming - Call for PhD Abstracts Message-ID: ============================================================ CALL FOR PHD ABSTRACTS Journal of Functional Programming Deadline: 30th November 2019 http://tinyurl.com/jfp-phd-abstracts ============================================================ PREAMBLE: Many students complete PhDs in functional programming each year. As a service to the community, twice per year the Journal of Functional Programming publishes the abstracts from PhD dissertations completed during the previous year. The abstracts are made freely available on the JFP website, i.e. not behind any paywall. They do not require any transfer of copyright, merely a license from the author. A dissertation is eligible for inclusion if parts of it have or could have appeared in JFP, that is, if it is in the general area of functional programming. The abstracts are not reviewed. Please submit dissertation abstracts according to the instructions below. We welcome submissions from both the PhD student and PhD advisor/supervisor although we encourage them to coordinate. 
============================================================ SUBMISSION: Please submit the following information to Graham Hutton by 30th November 2019: o Dissertation title: (including any subtitle) o Student: (full name) o Awarding institution: (full name and country) o Date of PhD award: (month and year; depending on the institution, this may be the date of the viva, corrections being approved, graduation ceremony, or otherwise) o Advisor/supervisor: (full names) o Dissertation URL: (please provide a permanently accessible link to the dissertation if you have one, such as to an institutional repository or other public archive; links to personal web pages should be considered a last resort) o Dissertation abstract: (plain text, maximum 350 words; you may use \emph{...} for emphasis, but we prefer no other markup or formatting; if your original abstract exceeds the word limit, please submit an abridged version within the limit) Please do not submit a copy of the dissertation itself, as this is not required. JFP reserves the right to decline to publish abstracts that are not deemed appropriate. ============================================================ PHD ABSTRACT EDITOR: Graham Hutton School of Computer Science University of Nottingham Nottingham NG8 1BB United Kingdom ============================================================ This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law. 
From wolfgang-it at jeltsch.info Mon Nov 4 11:51:32 2019 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Mon, 04 Nov 2019 13:51:32 +0200 Subject: [Haskell-cafe] 2019 State of Haskell Survey In-Reply-To: <6a6643cf-3154-479d-a319-e91f8044062b@www.fastmail.com> References: <5D02021F-FCE7-4A58-8642-5C60AAD954E8@fausak.me> <20191102002103.GA18762@x60s.casa> <6a6643cf-3154-479d-a319-e91f8044062b@www.fastmail.com> Message-ID: <705ed32a8b31669f3e1003eeb698341339e2969a.camel@jeltsch.info> Am Samstag, den 02.11.2019, 10:15 -0400 schrieb Taylor Fausak: > You can take the "Vi" answer choice of the editor question to mean "Vi > family", including Vim, Neovim, and so on. Then this should be made clear, for example by the survey saying “Vi or Vi derivative” instead of just “Vi”. I can only second this: Vi is not Vim. Vim is much more feature-rich to the point that I’m a happy Vim user who would never think of using Vi. With the answer being named “Vi”, probably many Vim users will not put their mark there and you’ll get distorted results. All the best, Wolfgang -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Nov 4 16:42:17 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 4 Nov 2019 16:42:17 +0000 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: References: Message-ID: <4CED676E-9CC2-4884-BFC3-60C506FE8B79@richarde.dev> There should be a way to reverse that. Post a bug! :) > On Nov 1, 2019, at 2:32 PM, Markus Läll wrote: > > Hi list! > > Is it so that there is no reverse for -ddump-splices? In ghci I can > set it with :set -ddump-splices and it turns on, but adding "no" to > the flag appears not to work. > > Found this > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > so probably there is no reverse? 
> > -- > Markus Läll > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From meng.wang at bristol.ac.uk Mon Nov 4 21:02:54 2019 From: meng.wang at bristol.ac.uk (Meng Wang) Date: Mon, 4 Nov 2019 21:02:54 +0000 Subject: [Haskell-cafe] [FP] Postdoc position at University of Bristol in functional programming In-Reply-To: <789CFDDA-38E0-4B44-8BAE-B3E21022AC74@bristol.ac.uk> References: <7BB74EBE-3BDE-42D8-B6E7-8BDF5D903198@bristol.ac.uk> <789CFDDA-38E0-4B44-8BAE-B3E21022AC74@bristol.ac.uk> Message-ID: <8884B696-3CA3-4A7A-8220-601FD7E4906C@bristol.ac.uk> Dear Haskellers, The programming languages group at Bristol has an open post doc position in the area of functional programming. Haskell programmers are particularly welcome. Please pass it on to anyone who might be interested. Thanks! Best regards, Meng Meng Wang, PhD (Oxon) Senior Lecturer (Associate Professor) Department of Computer Science University of Bristol Merchant Venturers Building, Woodland Road, Clifton BS8 1UB +44 (0) 117 954 5145 meng.wang at bristol.ac.uk We are looking for an enthusiastic, self-motivated individual to contribute to an EPSRC-funded project, which aims to design programming languages that guarantee strong properties, and the application of them. The post holder will be based in the programming languages group at the University of Bristol Computer Science Department, which consists of three academics, two PDRAs, and a number of PhD students. You will also be working with a network of partners from Oxford, Edinburgh, Tohoku Japan, Chalmers Sweden, and industrial partner DFINITY Foundations providing expertise on WebAssembly. You should have a PhD in programming languages, or a closely related field. 
This post is available immediately and is offered on a full-time basis for an initial term of three years. Appointment at a higher salary point than grade I is possible based on relevant experience. Informal enquiries should be addressed to Dr. Meng Wang (meng.wang at bristol.ac.uk) For more details about this position, please see: http://www.bristol.ac.uk/jobs/find/details.html?nPostingID=57894&nPostingTargetID=171215&option=28&sort=DESC&respnr=1&ID=Q50FK026203F3VBQBV7V77V83&JobNum=ACAD104298&Resultsperpage=10&lg=UK&mask=uobext Deadline: 25 November 2019 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aa593b4365 at turingmachine.eu Wed Nov 6 23:37:30 2019 From: aa593b4365 at turingmachine.eu (Donald Cordes) Date: Thu, 7 Nov 2019 00:37:30 +0100 Subject: [Haskell-cafe] Performance difference parallel 'toString' Message-ID: <7fb1e7a7-0aab-d80a-d2e0-4bf4d7f10928@turingmachine.eu> Hi everybody, I'm trying to squeeze more performance out of Haskell, and I'm hoping someone can help me understand why the following two pieces of code have different performance. In particular, I find that the apparently 'more parallel' version of the code performs worse. Quick description of the code: basically it solves the following problem [https://www.spoj.com/problems/PALIN/]. The input is a set of huge integer numbers. For each number, print the smallest 'palindrome' that is larger. A palindrome is a number for which, when you reverse its digits, you end up with the same number. The code uses 'parBuffer' to do the calculations in parallel. Below is a high-level description of the steps of each version: Version 1 [palin.parfromstring.hs]: 1. Read the input numbers as strings into a list 2. Perform the following calculation for each number (the steps for this calculation run in sequence, the entire calculation runs in parallel w.r.t.
different numbers): 2.1 Convert from string to array of digits 2.2 Calculate the palindrome in-place The result of step 2 is a list of digit-arrays. That concludes the parallel step. 3. For each palindrome, convert it to a string and print it. Version 2 [palin.parfromstringtostring.hs]: 1. Same as above 2.1 Same as above 2.2 Same as above 2.3 Convert the palindrome to a string 3. For each string, print it. I'd expect that version 2 is faster than version 1 because it's doing the conversion back to string also in parallel, but on a set of 50 numbers it's surprisingly ~750ms slower on my computer. I see that the GC is doing a lot more copying for version 2 (1GB more than version 1), but I don't really understand what I'm looking at. :) How can one best think about getting more performance out of parallelism in Haskell? As in, what are some good rules of thumb by which one can decide 'parallelize this, don't parallelize that'? Attached you can find the event logs for ThreadScope and full code for each version. The code is almost identical, so attached is also the diff between the two. Thanks in advance -------------- next part -------------- A non-text attachment was scrubbed... Name: palin-eventlogs.7z Type: application/x-7z-compressed Size: 1315173 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: palin.parfromstring.hs Type: text/x-haskell Size: 3961 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
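The two pipeline shapes being compared can be sketched as follows (assumptions: this uses the `parallel` package, an arbitrary buffer size of 8, and a deliberately naive `nextPal` standing in for the poster's in-place digit-array algorithm):

```haskell
import Control.Parallel.Strategies (parBuffer, rdeepseq, using)

-- Naive stand-in: the smallest palindrome strictly greater than n.
nextPal :: Integer -> Integer
nextPal n = head [m | m <- [n + 1 ..], let s = show m, s == reverse s]

-- Version 1 shape: the sparks force only the Integer results;
-- the conversion back to a string happens sequentially at print time.
answersV1 :: [Integer] -> [String]
answersV1 xs = map show (map nextPal xs `using` parBuffer 8 rdeepseq)

-- Version 2 shape: each spark also forces the string conversion.
answersV2 :: [Integer] -> [String]
answersV2 xs = map (show . nextPal) xs `using` parBuffer 8 rdeepseq

main :: IO ()
main = mapM_ putStrLn (answersV2 [88, 196, 999])
```

The behavioral difference is only where the String is allocated: in the version-2 shape the allocation for each result string happens inside the parallel stage rather than during the final sequential print loop.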
Name: palin.parfromstringtostring.hs Type: text/x-haskell Size: 3961 bytes Desc: not available URL: -------------- next part -------------- --- palin.parfromstring.hs 2019-11-06 23:40:12.648324958 +0100 +++ palin.parfromstringtostring.hs 2019-11-06 23:39:42.279920483 +0100 @@ -129,5 +129,5 @@ cases <- replicateM n getLine let strategy = parBuffer cap rdeepseq answers = map (palindrome.fromString) cases `using` strategy - in mapM_ (putStrLn.toString) answers - where palindrome c = runST $ nextPalindrome c :: Vec.Vector Digit + in mapM_ putStrLn answers + where palindrome c = toString (runST $ nextPalindrome c :: Vec.Vector Digit) From benjamin.redelings at gmail.com Thu Nov 7 16:56:52 2019 From: benjamin.redelings at gmail.com (Benjamin Redelings) Date: Thu, 7 Nov 2019 11:56:52 -0500 Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad? In-Reply-To: References: Message-ID: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com> Hi Olaf, Thanks for your reply!  I think I was unclear about a few things: 1. Mainly, I am _assuming_ that you can implement a lazy probability monad while ignoring random number generators.  So, the monad should be commutative, and should have the second property of laziness that you mention. (As an aside, I think the normal way to do this is to implement a function that splits the random number generator whenever you perform a lazy random computation, using the function split :: g -> (g,g).  My hack is to pretend that we have a hardware instruction that generates true random numbers, and then put that in the IO Monad, and then use unsafeInterleaveIO.  However, I would have to think more about this.) 2. My question is really about how you can represent side effects in a lazy context.  Thus the monad would be something like EffectMonad = (a, Set Effect), where Effect is some ADT that represents effects.
Each effect represents some action that can be undone, such as registering a newly created random variable in the list of all random variables. This seems to be easy in a strict context, because you can change a function a->b that has effects into a-> EffectMonad b. Then your interpreter just needs to modify the global state to add the effects from the interpreter state1 (x >>= y) = let (result1,effects1) = interpreter state1 x state2 = state1 `union` effects1 in interpreter state2 (y result1) However, with a lazy language I think this does not work, because we do not want to include "effects1" unless the "result1" is actually consumed. In that context, I think that a function (a->b) would end up becoming EffectMonad a -> EffectMonad (EffectMonad b) The argument 'a' changes to EffectMonad 'a' because the function itself (and not the interpreter) must decide whether to include the effects of the input into the effects of the output.  The output changes to EffectMonad (EffectMonad b) so that the result is still of type (EffectMonad b) after the result is unwrapped. Does that make more sense? -BenRI On 10/17/19 4:03 PM, Olaf Klinke wrote: > Hi Benjamin, > > Your example code seems to deal with two distinct types: > The do-notation is about the effects monad (on the random number generator?) and the `sample` function pulls whatever representation you have for an actual probability distribution into this effect monad. In my mental model, the argument to `sample` represents a function Double -> x that interprets a number coming out of the standard random number generator as an element of type x. > I suggest to consider the following two properties of the mathematical probability monad (a.k.a. the Giry monad), which I will use as syntactic re-write rules in your modeling language. > > The first property is Fubini's Theorem.
In Haskell terms it says that for all f, a :: m x and b :: m y the two terms > > do {x <- a; y <- b; f x y} > do {y <- b; x <- a; f x y} > > are semantically equivalent. (For state monads, this fails.) Monads where this holds are said to be commutative. If you have two urns, then drawing from the left and then drawing from the right is the same as first drawing from the right and then drawing from the left. Using Fubini, we can swap the first two lines in your example: > > model = do > cond <- bernoulli 0.5 > x <- normal 0 1 > return (if cond == 1 then x else 0) > > This desugars to > > bernoulli 0.5 >>= (\cond -> normal 0 1 >>= (\x -> return (if cond == 1 then x else return 0))) > bernoulli 0.5 >>= (\cond -> fmap (\x -> if cond == 1 then x else 0) (normal 0 1)) > > The second property is a kind of lazyness, namely > > fmap (const x) and return are semantically equivalent. > > which holds for mathematical distributions (but not for state monads). Now one could argue that in case cond == 0 the innermost function is constant in x, in which case the whole thing does not depend on the argument (normal 0 1). The Lemma we need here is semantic equivalence of the following two lambda terms. > > \cond -> \x -> if cond == 1 then x else 0 > \cond -> if cond == 1 then id else const 0 > > If the above is admissible then the following syntactic transformation is allowed: > > model = do > cond <- bernoulli 0.5 > if cond == 1 then normal 0 1 else return 0 > > which makes it obvious that the normal distribution is only sampled when needed. But I don't know whether you would regard this as the same model. Notice that I disregarded your `sample` function. That is, my monadic language is the monad of probabilites, not the monad of state transformations of a random number generator. Maybe you can delay using the random number generator until the very end? I don't know the complete set of operations your modeling language sports. 
If that delay is possible, then maybe you can use a monad that has the above two properties (e.g. a reader monad) and only feed the random numbers to the final model. As proof of concept, consider the following. > > type Model = Reader Double > model :: Model Int > model = do > x <- reader (\r -> last [1..round (recip r)]) > cond <- reader (\r -> r > 0.5) > return (if cond then x else 0) > > runReader model is fast for very small inputs, which would not be the case when the first line was always evaluated. > > Cheers, > Olaf -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.redelings at gmail.com Thu Nov 7 17:04:36 2019 From: benjamin.redelings at gmail.com (Benjamin Redelings) Date: Thu, 7 Nov 2019 12:04:36 -0500 Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad? In-Reply-To: <19EB8AB6-D6DB-4A6E-9C19-0C453F81B0B9@web.de> References: <00929421-4b00-69ef-a8c1-3c133f132230@gmail.com> <19EB8AB6-D6DB-4A6E-9C19-0C453F81B0B9@web.de> Message-ID: <53d2b040-c18d-3d37-ca99-cb9331e829f1@gmail.com> Hi Sandra, Thank you for the references! On 10/15/19 11:10 AM, Sandra Dylus wrote: > if you’re explicitly interested in sharing computations (rather than only modelling non-strictness) then the following approach by Fisher, Kiselyov and Shan might be of interest. > > http://homes.sice.indiana.edu/ccshan/rational/S0956796811000189a.pdf (Purely functional lazy nondeterministic programming) > > The are modelling the functional logic language Curry, but have also some remarks about modelling a lazy probabilistic language with their approach. > If you’re not interested in the sharing part of laziness, the paper might be a good first starting point nonetheless. They use a deep monadic embedding that you can use to model non-strictness. Other papers that use such an encoding are the following. 
Their function 'share :: m a -> m (m a)' is interesting: I think the double wrapping m (m a) suggests how one could handle lazy effects in a monad. A function (a->b) would get lifted to (m a -> m (m b)):
(i) the input changes from a to (m a) so that the function itself can decide whether to include the effect of the input into the output;
(ii) the output changes from b to m (m b) so that it still has type (m b) after being unwrapped by the monad.

For example, if g takes type (m b) as input, then:

do
  x <- f args -- if f has return type (m b) then x has type b
  y <- g x    -- but x needs to have type (m b) here
  return y

-BenRI

> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.192.7153&rep=rep1&type=pdf#page=8 (Transforming Functional Logic Programs into Monadic Functional Programs)
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.9706&rep=rep1&type=pdf (Verifying Haskell Programs Using Constructive Type Theory)
>
> Best regards
> Sandra
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From corentin.dupont at gmail.com Thu Nov 7 17:21:22 2019
From: corentin.dupont at gmail.com (Corentin Dupont)
Date: Thu, 7 Nov 2019 18:21:22 +0100
Subject: [Haskell-cafe] [ANN] Keycloak-hs
Message-ID: 

Hello,
I'm very proud to announce the release of Keycloak-hs, a library to access Keycloak: http://hackage.haskell.org/package/keycloak-hs.
Keycloak (www.keycloak.org) is a tool that allows you to authenticate users and protect your API resources. I included a tutorial in the README file.
Keycloak has many, many features, so this library will evolve to cover more. Help/PRs are welcome!

Cheers!
Corentin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From jack at jackkelly.name Fri Nov 8 06:51:50 2019
From: jack at jackkelly.name (Jack Kelly)
Date: Fri, 08 Nov 2019 16:51:50 +1000
Subject: [Haskell-cafe] ANN: ban-instance-0.1.0.1
Message-ID: <87y2wqkghl.fsf@jackkelly.name>

I have just pushed an initial release of ban-instance[1] to Hackage. ban-instance uses Template Haskell to generate compile errors for typeclass instances that should not exist:

-- Declare that Foo should never have a ToJSON instance
$(banInstance [t|ToJSON Foo|] "why ToJSON Foo should never be defined")

The custom errors then say why an instance is banned:

• Attempt to use banned instance (ToJSON Foo)
  Reason for banning: why ToJSON Foo should never be defined
  Instance banned at [moduleName] filePath:lineNumber

We have found this useful to prevent definition of ToJSON/FromJSON instances on core data types, as this forces programmers to instead place the serialisation instances on newtypes at the serialisation boundary.

GitHub issues and PRs are welcome[2]. In particular, there are currently a few limitations:

1. There is currently no support for classes with associated types or associated data types;
2. It would be great to generate haddocks for banned instances, marking them as such; and
3. Type quotations [t|...|] do not support free variables (GHC#5616).

Nevertheless, I hope that it is useful.

Best,

-- Jack

[1]: http://hackage.haskell.org/package/ban-instance-0.1.0.1
[2]: https://github.com/qfpl/ban-instance/

From ysangkok at gmail.com Sat Nov 9 18:59:57 2019
From: ysangkok at gmail.com (Janus Troelsen)
Date: Sat, 9 Nov 2019 18:59:57 +0000
Subject: [Haskell-cafe] Type class constraints in rewrite rules
In-Reply-To: 
References: 
Message-ID: 

Hi, I am trying to speed up some code using the Simplicity library by roconnor.
He suggested I apply a rewrite rule to implement the full-adder using Haskell's addition:

> fullAdder @(Kleisli m) w
> into
> Kleisli (\((a,b),c) -> return $ let sum = fromWord w a + fromWord w b + fromWord1 c in (toBit (sum >= 2 ^ wordSize w), toWord w sum))

I turned that description into a rewrite rule by prepending "forall m w.". But that results in "Forall'd variable ‘m’ does not appear on left hand side" and "Not in scope: type variable ‘m’" (which seems contradictory?)

So I thought, OK, typically Type Applications are not necessary, I will just remove it. So the RULE becomes:

"fullAdderOpt" forall w. fullAdder w = Kleisli (\((a,b),c) -> return $ let sum = fromWord w a + fromWord w b + fromWord1 c in (toBit (sum >= 2 ^ wordSize w), toWord w sum))

but compiling that, I get:

> Could not deduce (Monad m) arising from a use of ‘return’.
> Possible fix: add (Monad m) to the context of the RULE "fullAdderOpt"

OK, that must mean I should prepend "(Monad m) =>". I tried inserting that either before or after the forall, but both variants are parse errors. So I suspect a GHC bug, since I was advised to do something that doesn't parse. But really, I am just wondering how to make a rewrite rule that does what roconnor suggested.

The full source code of the final version is here: https://hushfile.it/api/file?fileid=5dc706576e329

Regards, Janus

From gershomb at gmail.com Sun Nov 10 00:43:12 2019
From: gershomb at gmail.com (Gershom B)
Date: Sat, 9 Nov 2019 19:43:12 -0500
Subject: [Haskell-cafe] Help wanted: Postfix admin guru for Haskell.org
Message-ID: 

We've been getting increasing amounts of bounces and dropped mail for haskell.org emails (things sent to hackage trustees, things sent from our wiki and hackage servers, etc). We _mainly_ have kept things working, but it looks like policies have been amped up in terms of requiring various measures.
Additionally, there's something about reverse dns and valid hostnames that seems to not be configured correctly, according to mxtoolbox.

Along with all that, we'd really like to migrate the mail infrastructure to our new packet servers, away from rackspace.

As such, we could use some concentrated help from someone familiar with maintaining and administering postfix servers, if someone is up to volunteer for the task.

Thanks!
--Gershom

From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Sun Nov 10 07:55:58 2019
From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis)
Date: Sun, 10 Nov 2019 07:55:58 +0000
Subject: [Haskell-cafe] Help wanted: Postfix admin guru for Haskell.org
In-Reply-To: 
References: 
Message-ID: <20191110075558.xpfwfceput4dfafd@weber>

On Sat, Nov 09, 2019 at 07:43:12PM -0500, Gershom B wrote:
> [...] we'd really like to migrate the mail infrastructure to our new
> packet servers, away from rackspace.
>
> As such, we could use some concentrated help from someone familiar
> with maintaining and administering postfix servers, if someone is up
> to volunteer for the task.

It sounds like the mail infrastructure is self-hosted. If so, is there any particular reason for that? It seems to me that it would be much simpler to host the accounts at an established mail provider. I can guess one reason would be funding. Are there others?

Tom

From olf at aatal-apotheke.de Sun Nov 10 21:21:41 2019
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Sun, 10 Nov 2019 22:21:41 +0100
Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad?
In-Reply-To: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com>
References: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com>
Message-ID: <9BB8C94C-8E56-4FF2-9DA3-D883B721C917@aatal-apotheke.de>

Benjamin,

I believe that with the right monad, you won't need to think about side-effects, at least not the side-effects that are manual fiddling with registering variables. Haskell should do that for you.
It seems to me what you really need is a strictness analyzer together with the appropriate re-write rules that push the call to monadic actions as deep into the probabilistic model as possible. But initially you said you want to avoid source-to-source translations.

Indeed your mention of splitting the random number generator seems to buy laziness, judging by the documentation of MonadInterleave in the MonadRandom package. But looking at the definition of interleave for RandT you can see that the monadic computation is still executed, only the random number generator state is restored afterwards. In particular, side effects of the inner monad are always executed. (Or was I using it wrong?) Hence from what I've seen my judgement is that state transformer monads are a dead end.

The idea with changing a -> m b to m a -> m (m b) seemed promising as well. You can easily make a Category instance for the type

C a b = m a -> m (m b)

so that composition of arrows in C does not necessarily execute the monadic computation. However, functions of the Arrow instance (e.g. 'first') must look at the input monad action in order to turn C a b into C (a,c) (b,c), at least I could not think of another way.

Sorry for writing such destructive posts. In mathematics it is generally easier to find counterexamples than to write proofs.

Olaf

> On 07.11.2019 at 17:56, Benjamin Redelings wrote:
>
> Hi Olaf,
>
> Thanks for your reply! I think I was unclear about a few things:
>
> 1. Mainly, I am _assuming_ that you can implement a lazy probability monad while ignoring random number generators. So, monad should be commutative, and should have the second property of laziness that you mention.
>
> (As an aside, I think the normal way to do this is to implement a function that splits the random number generator whenever you perform a lazy random computation using function: split :: g -> (g,g).
My hack is to pretend that we have a hardware instruction that generates true random numbers, and then put that in the IO Monad, and then use unsafeInterleaveIO. However, I would have to think more about this.)
>
> 2. My question is really about how you can represent side effects in a lazy context. Thus the monad would be something like
>
> EffectMonad = (a, Set Effect),
>
> where Effect is some ADT that represents effects. Each effect represents some action that can be undone, such as registering a newly created random variable in the list of all random variables.
>
> This seems to be easy in a strict context, because you can change a function a->b that has effects into a -> EffectMonad b. Then your interpreter just needs to modify the global state to add the effects from the
>
> interpreter state1 (x >>= y) = let (result1,effects1) = interpreter state1 x
>                                    state2 = state1 `union` effects1
>                                in interpreter state2 (y result1)
>
> However, with a lazy language I think this does not work, because we do not want to include "effects1" unless the "result1" is actually consumed.
>
> In that context, I think that a function (a->b) would end up becoming
>
> EffectMonad a -> EffectMonad (EffectMonad b)
>
> The argument 'a' changes to EffectMonad 'a' because the function itself (and not the interpreter) must decide whether to include the effects of the input into the effects of the output. The output changes to EffectMonad (EffectMonad b) so that the result is still of type (EffectMonad b) after the result is unwrapped.
>
> Does that make more sense?
> -BenRI

On 10/17/19 4:03 PM, Olaf Klinke wrote:
> Hi Benjamin,
>
> Your example code seems to deal with two distinct types:
> The do-notation is about the effects monad (on the random number generator?) and the `sample` function pulls whatever representation you have for an actual probability distribution into this effect monad.
In my mental model, the argument to `sample` represents a function Double -> x that interprets a number coming out of the standard random number generator as an element of type x.
>> I suggest to consider the following two properties of the mathematical probability monad (a.k.a. the Giry monad), which I will use as syntactic re-write rules in your modeling language.
>>
>> The first property is Fubini's Theorem. In Haskell terms it says that for all f, a :: m x and b :: m y the two terms
>>
>> do {x <- a; y <- b; f x y}
>> do {y <- b; x <- a; f x y}
>>
>> are semantically equivalent. (For state monads, this fails.) Monads where this holds are said to be commutative. If you have two urns, then drawing from the left and then drawing from the right is the same as first drawing from the right and then drawing from the left. Using Fubini, we can swap the first two lines in your example:
>>
>> model = do
>>   cond <- bernoulli 0.5
>>   x <- normal 0 1
>>   return (if cond == 1 then x else 0)
>>
>> This desugars to
>>
>> bernoulli 0.5 >>= (\cond -> normal 0 1 >>= (\x -> return (if cond == 1 then x else 0)))
>> bernoulli 0.5 >>= (\cond -> fmap (\x -> if cond == 1 then x else 0) (normal 0 1))
>>
>> The second property is a kind of laziness, namely
>>
>> fmap (const x) and return are semantically equivalent.
>>
>> which holds for mathematical distributions (but not for state monads). Now one could argue that in case cond == 0 the innermost function is constant in x, in which case the whole thing does not depend on the argument (normal 0 1). The Lemma we need here is semantic equivalence of the following two lambda terms.
>>
>> \cond -> \x -> if cond == 1 then x else 0
>> \cond -> if cond == 1 then id else const 0
>>
>> If the above is admissible then the following syntactic transformation is allowed:
>>
>> model = do
>>   cond <- bernoulli 0.5
>>   if cond == 1 then normal 0 1 else return 0
>>
>> which makes it obvious that the normal distribution is only sampled when needed.
But I don't know whether you would regard this as the same model. Notice that I disregarded your `sample` function. That is, my monadic language is the monad of probabilities, not the monad of state transformations of a random number generator. Maybe you can delay using the random number generator until the very end? I don't know the complete set of operations your modeling language sports. If that delay is possible, then maybe you can use a monad that has the above two properties (e.g. a reader monad) and only feed the random numbers to the final model. As proof of concept, consider the following.
>>
>> type Model = Reader Double
>> model :: Model Int
>> model = do
>>   x <- reader (\r -> last [1..round (recip r)])
>>   cond <- reader (\r -> r > 0.5)
>>   return (if cond then x else 0)
>>
>> runReader model is fast for very small inputs, which would not be the case when the first line was always evaluated.
>>
>> Cheers,
>> Olaf
>>
From archambault.v at gmail.com Mon Nov 11 16:36:50 2019
From: archambault.v at gmail.com (Vincent Archambault-Bouffard)
Date: Mon, 11 Nov 2019 11:36:50 -0500
Subject: [Haskell-cafe] First release of sexpresso
Message-ID: 

Hello,
I'm pleased to announce the first release of the sexpresso [0] library on Hackage. sexpresso is a library to parse, print and work with S-expressions. You can, for example:
- Customize the opening and closing tag, usually '(' and ')'.
- Specify if space is needed between atoms (or between only a subset of atoms)
- The datatype for S-expression (SExpr) has an extra parameter to make parsing, working and printing with multiple "types" of S-expression, like Scheme list (...) and Scheme vector #(....), easier
- Do much more ... read the doc on Hackage or the README on github [1]!

I plan to improve the printer and add recursion schemes soon.

[0] http://hackage.haskell.org/package/sexpresso
[1] https://github.com/archambaultv/sexpresso

Vincent Archambault
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From greg7mdp at gmail.com Mon Nov 11 18:18:56 2019
From: greg7mdp at gmail.com (Gregory Popovitch)
Date: Mon, 11 Nov 2019 13:18:56 -0500
Subject: [Haskell-cafe] First release of sexpresso
In-Reply-To: 
References: 
Message-ID: 

With a name like that, it could also be an interesting coffee shop :-)

_____

From: Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of Vincent Archambault-Bouffard
Sent: Monday, November 11, 2019 11:37 AM
To: haskell-cafe at haskell.org
Subject: [Haskell-cafe] First release of sexpresso

Hello,
I'm pleased to announce the first release of the sexpresso [0] library on Hackage. sexpresso is a library to parse, print and work with S-expressions. You can, for example:
- Customize the opening and closing tag, usually '(' and ')'.
- Specify if space is needed between atoms (or between only a subset of atoms)
- The datatype for S-expression (SExpr) has an extra parameter to make parsing, working and printing with multiple "types" of S-expression, like Scheme list (...) and Scheme vector #(....), easier
- Do much more ... read the doc on Hackage or the README on github [1]!

I plan to improve the printer and add recursion schemes soon.

[0] http://hackage.haskell.org/package/sexpresso
[1] https://github.com/archambaultv/sexpresso

Vincent Archambault
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ozgurakgun at gmail.com Wed Nov 13 10:04:03 2019
From: ozgurakgun at gmail.com (Özgür Akgün)
Date: Wed, 13 Nov 2019 10:04:03 +0000
Subject: [Haskell-cafe] Code coverage: mark certain function calls as covered
Message-ID: 

This may sound like a bizarre request, but I feel I can't be the only one.

I have a function called "bug". This is essentially a special case of "error", but I am disciplined and I only use this function to mark parts of my code I believe are unreachable. So something like:

data Term = X ... | Y ... | Z ...
-- the argument can only be X or Y in this part of the program
f :: Term -> ...
f (X ...) = ...
f (Y ...) = ...
f Z{} = bug "This should never happen (in f)"

You might hate this style of programming, and frankly I am not a fan either. But sometimes we have to write partial functions and I am at least trying to mark the cases explicitly in these cases.

So this is my question: if I did a good job and if the Z case above is indeed unreachable, the code coverage report will always flag it as uncovered, and in the overall report this will make it harder for me to see the parts of my code which aren't covered and should be. I'd like a way of marking (= generating a tick for?) every call to this function bug as covered.

Please let me know if there is a better place to ask this question. I checked and hpc's issue tracker is shared with GHC's and I wasn't sure if they'd appreciate a question there.

-- Özgür Akgün
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From benjamin.redelings at gmail.com Wed Nov 13 16:58:59 2019
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Wed, 13 Nov 2019 11:58:59 -0500
Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad?
In-Reply-To: <9BB8C94C-8E56-4FF2-9DA3-D883B721C917@aatal-apotheke.de>
References: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com> <9BB8C94C-8E56-4FF2-9DA3-D883B721C917@aatal-apotheke.de>
Message-ID: 

Hi Olaf,

Thanks for your interesting response! Keep in mind that I have written my own Haskell interpreter, and that the lazy random monad is already working in my interpreter. So I am seeing the issue with effects as a second issue. See http://www.bali-phy.org/models.php for some more information on this system. Note that I also have 'mfix' working in the lazy random monad, which I think is pretty cool. (See the second example about trait evolution on a random tree). However, I have no type system at all at the present time.
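[For readers who want to experiment with the lazy random monad idea in ordinary GHC: it can be sketched with unsafeInterleaveIO. The names below (`Sample`, `counting`, `model`) are invented for illustration and are not bali-phy's actual implementation; a counter shows that a draw whose result is never demanded is never performed.]

```haskell
import Control.Exception (evaluate)
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- A sampler is an IO action; (>>=) defers each draw with
-- unsafeInterleaveIO, so a draw only runs if its result is demanded.
newtype Sample a = Sample { runSample :: IO a }

instance Functor Sample where
  fmap f (Sample m) = Sample (fmap f m)

instance Applicative Sample where
  pure = Sample . pure
  Sample mf <*> Sample mx = Sample (mf <*> mx)

instance Monad Sample where
  Sample m >>= k = Sample $ do
    x <- unsafeInterleaveIO m   -- deferred until x is forced
    runSample (k x)

-- A stand-in "distribution" that counts how often it is really sampled.
counting :: IORef Int -> a -> Sample a
counting ref v = Sample (modifyIORef' ref (+ 1) >> return v)

-- Mirrors the thread's example: x is drawn but unused when cond holds.
model :: IORef Int -> Sample Int
model ref = do
  x    <- counting ref 42
  cond <- counting ref True
  return (if cond then 0 else x)

main :: IO ()
main = do
  ref <- newIORef 0
  r   <- runSample (model ref)
  _   <- evaluate r    -- forcing the result only samples cond
  n   <- readIORef ref
  print n              -- 1, not 2: the unused draw never ran
```

This only makes sense when the sampler's effects are safe to skip and reorder, which is exactly the commutativity and discardability that Olaf's two properties describe.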
On 11/10/19 4:21 PM, Olaf Klinke wrote:
> I believe that with the right monad, you won't need to think about side-effects, at least not the side-effects that are manual fiddling with registering variables. Haskell should do that for you.

So, I'm trying to implement a random walk on the space of Haskell program traces. (A program trace records the execution graph to the extent that it depends on builtin operations with random results, such as sampling from a normal distribution.) I could provide a PDF of an example execution trace, if you are interested.

This means that logically Haskell is kind of running at two levels:

(1) there is an inner random program that we are making a trace of.
(2) there is another outer program that modifies these traces.

The reason I think I need "registration" side-effects is that the outer program needs a list of random variables in the trace graph that it can randomly tweak. For example, if the inner program is given by:

do
  x <- sample $ normal 0 1 ----- [1]
  observe (normal x 1) 10
  return $ log_all [x %% "x"]

then the idea is that "x" would get registered when it is first accessed. This allows the outer program to know that it can modify the node for "x" in the trace graph. This side-effect is invisible to the inner program, but visible to the outer program.

So, I'm not sure if it is correct that I would not need side-effects. However, if I could avoid side-effects, that would be great.

> It seems to me what you really need is a strictness analyzer together with the appropriate re-write rules that push the call to monadic actions as deep into the probabilistic model as possible. But initially you said you want to avoid source-to-source translations.

I think maybe you are right that source-to-source transforms are the right way. Currently I have avoided that by writing my own interpreter and virtual machine that keeps track of trace graphs.
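[The strict EffectMonad = (a, Set Effect) idea discussed earlier in this thread can be written down as a Writer-style monad over a set of effects. The sketch below uses invented names (it is not bali-phy code) and makes the difficulty concrete: the registration effect of `register "x" 5` is collected even though the bound value is never consumed — exactly the behaviour that is wrong for a lazy language.]

```haskell
import qualified Data.Set as Set

-- One kind of undoable effect: registering a random variable by name.
data Effect = RegisterVar String deriving (Eq, Ord, Show)

-- Writer-style monad pairing a result with the effects that produced it.
newtype Eff a = Eff { runEff :: (a, Set.Set Effect) }

instance Functor Eff where
  fmap f (Eff (x, e)) = Eff (f x, e)

instance Applicative Eff where
  pure x = Eff (x, Set.empty)
  Eff (f, e1) <*> Eff (x, e2) = Eff (f x, Set.union e1 e2)

instance Monad Eff where
  Eff (x, e1) >>= k = let Eff (y, e2) = k x
                      in Eff (y, Set.union e1 e2)

-- Register a variable as a side effect of producing a value.
register :: String -> a -> Eff a
register name v = Eff (v, Set.singleton (RegisterVar name))

-- The bound value _x is never consumed, yet demanding the effect set
-- still yields RegisterVar "x": collection is unconditional on use.
demo :: (Int, Set.Set Effect)
demo = runEff $ do
  _x <- register "x" (5 :: Int)
  return 0
```

Here snd demo is Set.fromList [RegisterVar "x"], so whoever demands the effect set forces every registration in the chain, used or not.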
What do you mean about pushing monadic actions as deep into the probabilistic model as possible?

> Indeed your mention of splitting the random number generator seems to buy laziness, judging by the documentation of MonadInterleave in the MonadRandom package. But looking at the definition of interleave for RandT you can see that the monadic computation is still executed, only the random number generator state is restored afterwards. In particular, side effects of the inner monad are always executed. (Or was I using it wrong?) Hence from what I've seen my judgement is that state transformer monads are a dead end.

Hmm... I will have to look at this. Are you saying that in the MonadRandom package, interleaved computations are not sequenced just because of the random number generator, but they ARE sequenced if they perform any monadic actions? If this is true, then it would seem that the state is not the problem?

In any case I think the problems that come from threading random number generator state are not fundamental. There are machine instructions that generate true randomness, so one could always implement randomness in the IO monad without carrying around a state.

interpreter (f >>= g) = do
    x <- unsafeInterleaveIO $ interpreter f
    interpreter $ g x

I have a very hacky system, but this is basically what I am doing. It is probably terrible, but I have not run into any problems so far. Is there a reason that one should avoid this? So far it seems to work. I am not that worried about the probability monad.

> The idea with changing a -> m b to m a -> m (m b) seemed promising as well. You can easily make a Category instance for the type
>
> C a b = m a -> m (m b)
>
> so that composition of arrows in C does not necessarily execute the monadic computation. However, functions of the Arrow instance (e.g. 'first') must look at the input monad action in order to turn C a b into C (a,c) (b,c), at least I could not think of another way.
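[Olaf's claim can be checked directly in GHC: the type C a b = m a -> m (m b) does form a Category (with an invented newtype wrapper so instances can be written), and the lifting Benjamin asks about below is the arrow part of a functor into it. Composition binds only the outer monad layer, so the inner computation is passed along unexecuted — which is the point.]

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- Olaf's C a b = m a -> m (m b), wrapped so we can write instances.
newtype C m a b = C { runC :: m a -> m (m b) }

instance Monad m => Category (C m) where
  -- id :: C m a a needs m a -> m (m a): that is just return,
  -- which wraps the action without running it.
  id = C return
  -- Composition binds only the outer layer; the inner computation
  -- produced by f is handed to g unexecuted.
  (C g) . (C f) = C (\ma -> f ma >>= g)

-- The lifting Benjamin describes: the arrow part of a functor
-- from plain functions into C m.  Plain functions never touch
-- the effect layer.
liftC :: Monad m => (a -> b) -> C m a b
liftC f = C (return . fmap f)
```

With m = Maybe, runC (liftC (+1) . liftC (*2)) (Just 3) gives Just (Just 7): the result keeps both layers, because composition never collapses the inner one. The category laws follow from the monad laws.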
I'm sorry, I am not very familiar with the Haskell functions for categories, I just read part of Chapter 1 of an Algebraic Topology book. Can you possibly rephrase this in terms of (very simple) math? Specifically, I was thinking that mapping (a->b) to (m a -> m (m b)) looks like a Functor where

a    =>  m a             --- this is "return"
a->b =>  m a -> m (m b)  --- this is "fmap"

Why are you saying that this is a category instead of a functor? I am probably just confused, I am not very familiar with categories yet, and have not had time to go look at the Arrow instance you are talking about.

> Sorry for writing such destructive posts. In mathematics it is generally easier to find counterexamples than to write proofs.

Haha, no problem. It's not clear to me that this is possible. If in general lazy effects cannot be represented in Haskell using do-notation, that would probably be interesting to state formally.

-BenRI

[1] Regarding "sample", I think that if I write "x <- normal 0 1" then I think that I cannot write "observe (normal x 1) 10", but would have to distinguish the distribution (normalDistr x 1) from the action of sampling from the distribution (normal 0 1). In my notation (normal x 1) is the distribution itself, and not an action. But if I could eliminate the "sample" then I think the code would look a lot cleaner.

> Olaf
>
>> On 07.11.2019 at 17:56, Benjamin Redelings wrote:
>>
>> Hi Olaf,
>>
>> Thanks for your reply! I think I was unclear about a few things:
>>
>> 1. Mainly, I am _assuming_ that you can implement a lazy probability monad while ignoring random number generators. So, monad should be commutative, and should have the second property of laziness that you mention.
>>
>> (As an aside, I think the normal way to do this is to implement a function that splits the random number generator whenever you perform a lazy random computation using function: split :: g -> (g,g).
My hack is to pretend that we have a hardware instruction that generates true random numbers, and then put that in the IO Monad, and then use unsafeInterleaveIO. However, I would have to think more about this.)
>>
>> 2. My question is really about how you can represent side effects in a lazy context. Thus the monad would be something like
>> EffectMonad = (a, Set Effect),
>>
>> where Effect is some ADT that represents effects. Each effect represents some action that can be undone, such as registering a newly created random variable in the list of all random variables.
>> This seems to be easy in a strict context, because you can change a function a->b that has effects into a -> EffectMonad b. Then your interpreter just needs to modify the global state to add the effects from the
>> interpreter state1 (x >>= y) = let (result1,effects1) = interpreter state1 x
>>                                    state2 = state1 `union` effects1
>>                                in interpreter state2 (y result1)
>>
>> However, with a lazy language I think this does not work, because we do not want to include "effects1" unless the "result1" is actually consumed.
>>
>> In that context, I think that a function (a->b) would end up becoming
>>
>> EffectMonad a -> EffectMonad (EffectMonad b)
>>
>> The argument 'a' changes to EffectMonad 'a' because the function itself (and not the interpreter) must decide whether to include the effects of the input into the effects of the output. The output changes to EffectMonad (EffectMonad b) so that the result is still of type (EffectMonad b) after the result is unwrapped.
>>
>> Does that make more sense?
>> -BenRI
>>
>>
>>
>> On 10/17/19 4:03 PM, Olaf Klinke wrote:
>>> Hi Benjamin,
>>>
>>> Your example code seems to deal with two distinct types:
>>> The do-notation is about the effects monad (on the random number generator?) and the `sample` function pulls whatever representation you have for an actual probability distribution into this effect monad.
In my mental model, the argument to `sample` represents a function Double -> x that interprets a number coming out of the standard random number generator as an element of type x.
>>> I suggest to consider the following two properties of the mathematical probability monad (a.k.a. the Giry monad), which I will use as syntactic re-write rules in your modeling language.
>>>
>>> The first property is Fubini's Theorem. In Haskell terms it says that for all f, a :: m x and b :: m y the two terms
>>>
>>> do {x <- a; y <- b; f x y}
>>> do {y <- b; x <- a; f x y}
>>>
>>> are semantically equivalent. (For state monads, this fails.) Monads where this holds are said to be commutative. If you have two urns, then drawing from the left and then drawing from the right is the same as first drawing from the right and then drawing from the left. Using Fubini, we can swap the first two lines in your example:
>>>
>>> model = do
>>>   cond <- bernoulli 0.5
>>>   x <- normal 0 1
>>>   return (if cond == 1 then x else 0)
>>>
>>> This desugars to
>>>
>>> bernoulli 0.5 >>= (\cond -> normal 0 1 >>= (\x -> return (if cond == 1 then x else 0)))
>>> bernoulli 0.5 >>= (\cond -> fmap (\x -> if cond == 1 then x else 0) (normal 0 1))
>>>
>>> The second property is a kind of laziness, namely
>>>
>>> fmap (const x) and return are semantically equivalent.
>>>
>>> which holds for mathematical distributions (but not for state monads). Now one could argue that in case cond == 0 the innermost function is constant in x, in which case the whole thing does not depend on the argument (normal 0 1). The Lemma we need here is semantic equivalence of the following two lambda terms.
>>>
>>> \cond -> \x -> if cond == 1 then x else 0
>>> \cond -> if cond == 1 then id else const 0
>>>
>>> If the above is admissible then the following syntactic transformation is allowed:
>>>
>>> model = do
>>>   cond <- bernoulli 0.5
>>>   if cond == 1 then normal 0 1 else return 0
>>>
>>> which makes it obvious that the normal distribution is only sampled when needed. But I don't know whether you would regard this as the same model. Notice that I disregarded your `sample` function. That is, my monadic language is the monad of probabilities, not the monad of state transformations of a random number generator. Maybe you can delay using the random number generator until the very end? I don't know the complete set of operations your modeling language sports. If that delay is possible, then maybe you can use a monad that has the above two properties (e.g. a reader monad) and only feed the random numbers to the final model. As proof of concept, consider the following.
>>>
>>> type Model = Reader Double
>>> model :: Model Int
>>> model = do
>>>   x <- reader (\r -> last [1..round (recip r)])
>>>   cond <- reader (\r -> r > 0.5)
>>>   return (if cond then x else 0)
>>>
>>> runReader model is fast for very small inputs, which would not be the case when the first line was always evaluated.
>>>
>>> Cheers,
>>> Olaf
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From chrisdone at gmail.com Wed Nov 13 22:53:55 2019
From: chrisdone at gmail.com (Christopher Done)
Date: Wed, 13 Nov 2019 23:53:55 +0100
Subject: [Haskell-cafe] Statically checked overloaded strings
Message-ID: 

Hi all,

I just came up with this neat trick for adding static checks to overloaded strings: https://gist.github.com/chrisdone/809296b769ee36d352ae4f8dbe89a364

Hope it's helpful. I wonder whether the GHC devs would be open to a small patch to permit: $$"..."

:-)

Cheers,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From amindfv at gmail.com Wed Nov 13 22:55:13 2019
From: amindfv at gmail.com (amindfv at gmail.com)
Date: Wed, 13 Nov 2019 17:55:13 -0500
Subject: [Haskell-cafe] Code coverage: mark certain function calls as covered
In-Reply-To: 
References: 
Message-ID: <8B50563D-7081-4BA5-A6F6-FE448AD38651@gmail.com>

> On 13 Nov 2019, at 05:04, Özgür Akgün wrote:
>
> This may sound like a bizarre request, but I feel I can't be the only one.
>

I can give a +1 that this would be useful. Here's an annoying case I've run into that doesn't even require partial functions:

fiveDiv :: Int -> Either String Int
fiveDiv 0 = Left "Can't div by 0"
fiveDiv n = Right $ 5 `div` n

and the test cases are:

fiveDiv 8 === Right 0

and

isLeft (fiveDiv 0)

Now the string itself ("Can't div by 0") is marked as an uncovered expression. This trivial case could easily be covered, but: a) I've run into this frequently enough in more annoying cases, and b) it feels more idiomatic to write a test that doesn't care about the contents of the string (and the test doesn't need to change if the string changes)

Tom

From publicityifl at gmail.com Thu Nov 14 08:15:37 2019
From: publicityifl at gmail.com (Jurriaan Hage)
Date: Thu, 14 Nov 2019 00:15:37 -0800
Subject: [Haskell-cafe] Second call for draft papers for TFPIE 2020 (Trends in Functional Programming in Education)
Message-ID: 

Hello,

Please, find below the second call for draft papers for TFPIE 2020.
Please forward these to anyone you think may be interested. Apologies for any duplicates you may receive.
best regards, Jurriaan Hage Chair of TFPIE 2020 ======================================================================== TFPIE 2020 Call for papers http://www.staff.science.uu.nl/~hage0101/tfpie2020/index.html February 12th 2020, Krakow, Poland (co-located with TFP 2020 and Lambda Days) TFPIE 2020 welcomes submissions describing techniques used in the classroom, tools used in and/or developed for the classroom and any creative use of functional programming (FP) to aid education in or outside Computer Science. Topics of interest include, but are not limited to: FP and beginning CS students FP and Computational Thinking FP and Artificial Intelligence FP in Robotics FP and Music Advanced FP for undergraduates FP in graduate education Engaging students in research using FP FP in Programming Languages FP in the high school curriculum FP as a stepping stone to other CS topics FP and Philosophy The pedagogy of teaching FP FP and e-learning: MOOCs, automated assessment etc. Best Lectures - more details below In addition to papers, we are requesting best lecture presentations. What's your best lecture topic in an FP related course? Do you have a fun way to present FP concepts to novices or perhaps an especially interesting presentation of a difficult topic? In either case, please consider sharing it. Best lecture topics will be selected for presentation based on a short abstract describing the lecture and its interest to TFPIE attendees. The length of the presentation should be comparable to that of a paper. On top of the lecture itself, the presentation can also provide commentary on the lecture. Submissions Potential presenters are invited to submit an extended abstract (4-6 pages) or a draft paper (up to 20 pages) in EPTCS style. The authors of accepted presentations will have their preprints and their slides made available on the workshop's website. 
Papers and abstracts can be submitted via easychair at the following link: https://easychair.org/conferences/?conf=tfpie2020 . After the workshop, presenters will be invited to submit (a revised version of) their article for review. The PC will select the best articles that will be published in the Electronic Proceedings in Theoretical Computer Science (EPTCS). Articles rejected for presentation and extended abstracts will not be formally reviewed by the PC. Dates Submission deadline: January 14th 2020, Anywhere on Earth. Notification: January 17th 2020 TFPIE Registration Deadline: January 20th 2020 Workshop: February 12th 2020 Submission for formal review: April 19th 2020, Anywhere on Earth. Notification of full article: June 6th 2020 Camera ready: July 1st 2020 Program Committee Olaf Chitil - University of Kent Youyou Cong - Tokyo Institute of Technology Marko van Eekelen - Open University of the Netherlands and Radboud University Nijmegen Jurriaan Hage (Chair) - Utrecht University Marco T. Morazan - Seton Hall University, USA Sharon Tuttle - Humboldt State University, USA Janis Voigtlaender - University of Duisburg-Essen Viktoria Zsok - Eotvos Lorand University Note: information on TFP is available at http://www.cse.chalmers.se/~rjmh/tfp/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ollie at ocharles.org.uk Thu Nov 14 08:40:47 2019 From: ollie at ocharles.org.uk (Oliver Charles) Date: Thu, 14 Nov 2019 08:40:47 +0000 Subject: [Haskell-cafe] [ANNOUNCE] GHC 8.8.1 is now available In-Reply-To: <87tv7prg86.fsf@smart-cactus.org> References: <87zhjwl1mx.fsf@smart-cactus.org> <87tv91gzk9.fsf@smart-cactus.org> <37C7FFFB-7EBE-4FAA-91AD-DFD570A8C296@well-typed.com> <87eeyvrkqk.fsf@smart-cactus.org> <20191030142951.z7byvbomfeu3avtp@weber> <87tv7prg86.fsf@smart-cactus.org> Message-ID: I just realised another reason why this upload is important - I use http://packdeps.haskellers.com/ RSS feed support to notify me when any of my packages have out of date dependencies. Unfortunately, I haven't actually got any notifications about my packages not building with GHC 8.8, so I'm having to rely on users reporting issues. On Wed, 30 Oct 2019, 11:01 pm Ben Gamari, wrote: > Dan Burton writes: > > > My questions at this point are: > > > > * What are the reasons that hackage-server must be upgraded to Cabal 3? > > I'll leave this to Herbert as I am also not clear on this point. None of > the 8.8.1 boot packages should be relying on Cabal 3 syntax in their > .cabal files so I'm not sure what specifically gives rise to this > dependency. > > > * What are the reasons that make this upgrade difficult? > > Cabal 3 does change quite a bit. In response to the number of requests > to get this un-stuck I put in some work [1] this week and last to start > the upgrade. Indeed the process is non-trivial and requires touching a > lot of code. > > Nearly all of this churn is due to the removal of the Text typeclass in > favor of the Parsec and Pretty typeclasses. These changes are generally > quite mechanical but do take time. > > > * Can publishing to hackage be considered a proper part of the ghc > release > > process in the future? > > > I am quite willing to handle the uploads as part of the release process > if this is the direction we decide to take. 
> > Cheers, > > - Ben > > > [1] https://github.com/bgamari/hackage-server/tree/cabal-3 > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From merijn at inconsistent.nl Thu Nov 14 09:37:47 2019 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Thu, 14 Nov 2019 10:37:47 +0100 Subject: [Haskell-cafe] Statically checked overloaded strings In-Reply-To: References: Message-ID: I proposed a static validation for partial conversions from all sorts of polymorphic literals. At the time it was rejected (I don't really recall why), so I ended up implementing something along these lines as a library: https://hackage.haskell.org/package/validated-literals Which has the benefit that users don't have to deal with writing TH themselves, you can just write a regular pure conversion function. Cheers, Merijn > On 13 Nov 2019, at 23:53, Christopher Done wrote: > > Hi all, > > I just came up with this neat trick for adding static checks to overloaded strings: https://gist.github.com/chrisdone/809296b769ee36d352ae4f8dbe89a364 > > Hope it's helpful. > > I wonder whether the GHC devs would be open to a small patch to permit: $$"..." > > :-) > > Cheers, > > Chris > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From jared at jtobin.io Thu Nov 14 13:06:58 2019 From: jared at jtobin.io (Jared Tobin) Date: Thu, 14 Nov 2019 21:06:58 +0800 Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad? In-Reply-To: References: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com> <9BB8C94C-8E56-4FF2-9DA3-D883B721C917@aatal-apotheke.de> Message-ID: <20191114130658.GA41458@castor> On Wed, Nov 13, 2019 at 11:58:59AM -0500, Benjamin Redelings wrote: > On 11/10/19 4:21 PM, Olaf Klinke wrote: > > I believe that with the right monad, you won't need to think about > > side-effects, at least not the side-effects that are manual fiddling > > with registering variables. Haskell should do that for you. > > So, I'm trying to implement a random walk on the space of Haskell > program traces. > > This means that logically Haskell is kind of running at two levels: > > (1) there is an inner random program that we are making a trace of. > > (2) there is another outer program that modifies these traces. > > [..] > > So, I'm not sure if it is correct that I would not need side-effects.  > However, if I could avoid side-effects, that would be great. FWIW, I believe it's possible to avoid effects, though I'm not sure if it can be done particularly efficiently. Instead of maintaining an external "database of randomness" (à la Wingate et al. 2011 -- i.e. the "registered" variables that you can supply randomness to), you should be able to directly annotate probabilistic nodes in your syntax tree with PRNG state snapshots. When you need randomness for the "outer program" semantics, you just iterate the PRNG states on-site. The representationally elegant way to do this seems to be to use a free monad to denote your probabilistic programs and a cofree comonad to represent their execution traces. 
The cofree comonad allows you to annotate the probabilistic syntax tree nodes with the required state (parameter space value, log-likelihood, PRNG state), and then perturb them lazily as required via a comonadic 'extend'. I wrote up a couple of prototypes of this sort of setup awhile back: * https://jtobin.io/simple-probabilistic-programming * https://jtobin.io/comonadic-mcmc Whether this can yield anything usefully efficient is an open question. It does avoid the annoying "observe" and "sample" syntax, at least. -- jared -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From olf at aatal-apotheke.de Thu Nov 14 21:00:55 2019 From: olf at aatal-apotheke.de (Olaf Klinke) Date: Thu, 14 Nov 2019 22:00:55 +0100 Subject: [Haskell-cafe] Explicit approach to lazy effects For probability monad? In-Reply-To: References: <3ae874a9-df18-c133-7e9b-38ad0cc14113@gmail.com> <9BB8C94C-8E56-4FF2-9DA3-D883B721C917@aatal-apotheke.de> Message-ID: <36636779-6F8D-48F9-8FC7-031848F19168@aatal-apotheke.de> Benjamin, rather than giving examples I would prefer a list of all the keywords and syntactic constructs of the language, together with their semantics. What, for instance, is the type of 'log_all'? What is its semantics? What is the difference between 'sample' and 'random'? The Bali-Phy tutorial seems to be mainly composed of examples, too. I have to admit that I still don't understand what an execution trace is in your case. My guess: The inner program is a Haskell term that, given a different starting state of the random number generator, produces a value (and logs some intermediate values?) The outer program repeatedly feeds new starting states to the RNG and executes the inner program anew, gathering an overall picture of the random behaviour? Is that what you mean by "modifying the node x in the trace graph"? 
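To make my guess concrete, here is a toy version of the two levels, with a plain list of raw draws standing in for the generator state. All names here are my own invention and not from Bali-Phy, and I am ignoring how real draws are produced:

```haskell
-- Toy "inner program": consumes a stream of raw draws in [0,1).
-- Pattern matching binds x lazily, so the draw at position 1 is
-- only forced when the cond branch actually uses it.
type Draw = Double

innerProgram :: [Draw] -> Double
innerProgram (cond : x : _)
  | cond > 0.5 = x   -- the "sample" branch
  | otherwise  = 0   -- this branch never touches x
innerProgram _ = error "innerProgram: not enough randomness"

-- Toy "outer program" move: overwrite one position of the trace
-- and rerun the inner program on the modified trace.
perturb :: Int -> Draw -> [Draw] -> [Draw]
perturb i d ds = take i ds ++ [d] ++ drop (i + 1) ds
```

Rerunning innerProgram after `perturb 0 0.1` flips the branch, and laziness means the draw for x is then never demanded -- the kind of behaviour I argue below that a sequential inner monad like Writer destroys.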
In the remainder I will argue that 'interleave' only works if the inner monad has my second lazyness property. The definition of interleave in MonadRandom is copied below (RandT is an alias for StateT). The occurrence of liftM and runStateT tells you that the inner monad action is executed, no matter what. It is only the final state that is known before the action executes. Let's experiment: import Control.Monad.State.Lazy import Control.Monad.Writer import Control.Applicative -- Just for the sake of example type Gen = Int split :: Gen -> (Gen,Gen) split g = (g,g) type RandT m a = StateT Gen m a interleave :: Monad m => RandT m a -> RandT m a interleave m = StateT $ \g -> case split g of (gl,gr) -> liftM (\p -> (fst p,gr)) $ runStateT m gl foo :: RandT (Writer [Char]) String foo = StateT (\g -> tell "foo executed;" >> return ("foo",succ g)) bar :: RandT (Writer [Char]) String bar = StateT (\g -> tell "bar executed;" >> return ("bar",succ g)) ghci> flip runStateT 0 $ fmap snd $ (,) <$> foo <*> bar WriterT (Identity (("bar",2),"foo executed;bar executed;")) ghci> flip runStateT 0 $ fmap snd $ (,) <$> (interleave foo) <*> bar WriterT (Identity (("bar",1),"foo executed;bar executed;")) You can see that while interleave makes the generator state look like (interleave foo) has not used the generator, the inner monadic action does execute. In particular, a sequential monad like Writer spoils the intentions of interleave. However, other non-sequential monads are more well-behaved. Consider the following. 
import System.IO.Unsafe import Data.Functor.Identity foo' :: RandT Identity String foo' = StateT (\g -> case unsafePerformIO (putStrLn "foo executed") of () -> return ("foo",succ g)) bar' :: RandT Identity String bar' = StateT (\g -> do case unsafePerformIO (putStrLn "bar executed") of () -> return ("bar",succ g)) ghci> flip runStateT 0 $ fmap snd $ (,) <$> foo' <*> bar' Identity ("bar executed bar",foo executed 2) ghci> flip runStateT 0 $ fmap snd $ (,) <$> (interleave foo') <*> bar' Identity ("bar executed bar",1) I believe this is not the doing of interleave, it is a property of Identity: ghci> fmap snd $ (,) <$> undefined <*> (Identity "bar") Identity "bar" This is exactly the lazyness property I was advertising: fmap.const = const.pure In this case fmap (\x -> snd (x,"bar") = fmap (const "bar") I'm CC-ing the maintainer of MonadRandom, hoping to get his opinion on this. Olaf > Am 13.11.2019 um 17:58 schrieb Benjamin Redelings : > > Hi Olaf, > > Thanks for your interesting response! Keep in mind that I have written my own Haskell interpreter, and that the lazy random monad is already working in my interpreter. So I am seeing the issue with effects as second issue. > > See http://www.bali-phy.org/models.php for some more information on this system. Note that I also have 'mfix' working in the lazy random monad, which I think is pretty cool. (See the second example about trait evolution on a random tree). However, I have no type system at all at the present time. > > On 11/10/19 4:21 PM, Olaf Klinke wrote: >> I believe that with the right monad, you won't need to think about side-effects, at least not the side-effects that are manual fiddling with registering variables. Haskell should do that for you. > So, I'm trying to implement a random walk on the space of Haskell program traces. (A program trace records the execution graph to the extent that it depends on builtin operations with random results, such as sampling from a normal distribution.) 
I could provide a PDF of an example execution trace, if you are interested. > > This means that logically Haskell is kind of running at two levels: > > (1) there is an inner random program that we are making a trace of. > > (2) there is another outer program that modifies these traces. > > The reason I think I need "registration" side-effects is that the outer program needs a list of random variables in the trace graph that it can randomly tweak. For example, if the inner program is given by: > > do > x <- sample $ normal 0 1 ----- [1] > observe (normal x 1) 10 > return $ log_all [x %% "x"] > > then the idea is that "x" would get registered when it first accessed. This allows the outer program needs to know that it can modify the node for "x" in the trace graph. This side-effect is invisible to the inner program, but visible to the outer program. > > So, I'm not sure if it is correct that I would not need side-effects. However, if I could avoid side-effects, that would be great. > >> It seems to me what you really need is a strictness analyzer together with the appropriate re-write rules that push the call to monadic actions as deep into the probabilistic model as possible. But initially you said you want to avoid source-to-source translations. > I think maybe you are right that source-to-source transforms are the right way. Currently I have avoided by writing my own interpreter and virtual machine that keeps track of trace graphs. > > What do you mean about pushing monadic actions as deep into the probabilistic model as possible? > >> Indeed your mention of splitting the random number generator seems to buy laziness, judging by the documentation of MonadInterleave in the MonadRandom package. But looking at the definition of interleave for RandT you can see that the monadic computation is still executed, only the random number generator state is restored afterwards. In particular, side effects of the inner monad are always executed. (Or was I using it wrong?) 
Hence from what I've seen my judgement is that state transformer monads are a dead end. > Hmm... I will have to look at this. Are you saying that in the MonadRandom package, interleaved computations are not sequenced just because of the random number generator, but they ARE sequenced if they perform any monadic actions? If this is true, then it would seem that the state is not the problem? > > In any case I think the problems that come from threading random number generator state are not fundamental. There are machine instructions that generate true randomness, so one could always implement randomness in the IO monad without carrying around a state. > > interpreter (f >>= g) = do > x <- unsafeInterleaveIO $ interpreter f > interpreter $ g x > > I have a very hacky system, but this is basically what I am doing. It is probably terrible, but I have not run into any problems so far. Is there a reason that one should avoid this? > > So far it seems to work. I am not that worried about the probability monad. > > > >> The idea with changing a -> m b to m a -> m (m b) seemed promising as well. You can easily make a Category instance for the type >> >> C a b = m a -> m (m b) >> >> so that composition of arrows in C does not necessarily execute the monadic computation. However, functions of the Arrow instance (e.g. 'first') must look at the input monad action in order to turn C a b into C (a,c) (b,c), at least I could not think of another way. >> > I'm sorry, I am not very familiar with the Haskell functions for categories, I just read part of Chapter 1 of an Algebraic Topology book. > > Can you possibly rephrase this in terms of (very simple) math? Specifically, I was thinking that mapping (a->b) to (m a -> m (m b)) looks like a Functor where > > a => m a --- this is "return" > > a->b => m a -> m (m b) --- this is "fmap" > > Why are you saying that this a category instead of a functor? 
I am probably just confused, I am not very familiar with categories yet, and have not had time to go look at the Arrow instance you are talking about. > >> Sorry for writing so destructive posts. In mathematics it is generally easier to find counterexamples than to write proofs. > Haha, no problem. Its not clear to me that this is possible. If in general lazy effects cannot be represented in Haskell using do-notation, that would probably be interesting to state formally. > > > > -BenRI > > [1] Regarding "sample", I think that if I I write "x <- normal 0 1" then I think that I cannot write "observe (normal x 1) 10", but would have to distinguish the distribution (normalDistr x 1) from the action of sampling from the distribution (normal 0 1). In my notation (normal x 1) is the distribution itself, and not an action. > > But if I could eliminate the "sample" then I think the code would look a lot cleaner. > > >> Olaf >> >> >> >>> Am 07.11.2019 um 17:56 schrieb Benjamin Redelings >>> : >>> >>> Hi Olaf, >>> >>> Thanks for your reply! I think I was unclear about a few things: >>> >>> 1. Mainly, I am _assuming_ that you can implement a lazy probability monad while ignoring random number generators. So, monad should be commutative, and should have the second property of laziness that you mention. >>> >>> (As an aside, I think the normal way to do this is to implement a function that splits the random number generator whenever you perform a lazy random computation using function: split :: g -> (g,g). My hack is to present that we have a hardware instruction that generates true random numbers, and then put that in the IO Monad, and then use unsafeInterLeaveIO. However, I would have to think more about this.) >>> 2. My question is really about how you can represent side effects in a lazy context. Thus the monad would be something like >>> EffectMonad = (a, Set Effect), >>> >>> where Effect is some ADT that represents effects. 
Each effect represents some action that can be undone, such as registering a newly created random variable in the list of all random variables. >>> This seems to be easy in a strict context, because you can change a function a->b that has effects into a-> EffectMonad b. Then your interpreter just needs to modify the global state to add the effects from the >>> interpreter state1 (x <<= y) = let (result1,effects1) = interpreter state1 x >>> state2 = state1 `union` effects1 >>> in interpreter state2 (y result1) >>> >>> However, with a lazy language I think this does not work, because we do not want to include "effects1" unless the "result1" is actually consumed. >>> >>> In that context, I think that a function (a->b) would end up becoming >>> >>> EffectMonad a -> EffectMonad (EffectMonad b) >>> >>> The argument 'a' changes to EffectMonad 'a' because the function itself (and not the interpreter) must decide whether to include the effects of the input into the effects of the output. The output changes to EffectMonad (EffectMonad b) so that the result is still of type (EffectMonad b) after the result is unwrapped. >>> >>> Does that make more sense? >>> -BenRI >>> >>> >>> >>> On 10/17/19 4:03 PM, Olaf Klinke wrote: >>> >>>> Hi Benjamin, >>>> >>>> Your example code seems to deal with two distinct types: >>>> The do-notation is about the effects monad (on the random number generator?) and the `sample` function pulls whatever representation you have for an actual probability distribution into this effect monad. In my mental model, the argument to `sample` represents a function Double -> x that interprets a number coming out of the standard random number generator as an element of type x. >>>> I suggest to consider the following two properties of the mathematical probability monad (a.k.a. the Giry monad), which I will use as syntactic re-write rules in your modeling language. >>>> >>>> The first property is Fubini's Theorem. 
In Haskell terms it says that for all f, a :: m x and b :: m y the two terms >>>> >>>> do {x <- a; y <- b; f x y} >>>> do {y <- b; x <- a; f x y} >>>> >>>> are semantically equivalent. (For state monads, this fails.) Monads where this holds are said to be commutative. If you have two urns, then drawing from the left and then drawing from the right is the same as first drawing from the right and then drawing from the left. Using Fubini, we can swap the first two lines in your example: >>>> >>>> model = do >>>> cond <- bernoulli 0.5 >>>> x <- normal 0 1 >>>> return (if cond == 1 then x else 0) >>>> >>>> This desugars to >>>> >>>> bernoulli 0.5 >>= (\cond -> normal 0 1 >>= (\x -> return (if cond == 1 then x else return 0))) >>>> bernoulli 0.5 >>= (\cond -> fmap (\x -> if cond == 1 then x else 0) (normal 0 1)) >>>> >>>> The second property is a kind of lazyness, namely >>>> >>>> fmap (const x) and return are semantically equivalent. >>>> >>>> which holds for mathematical distributions (but not for state monads). Now one could argue that in case cond == 0 the innermost function is constant in x, in which case the whole thing does not depend on the argument (normal 0 1). The Lemma we need here is semantic equivalence of the following two lambda terms. >>>> >>>> \cond -> \x -> if cond == 1 then x else 0 >>>> \cond -> if cond == 1 then id else const 0 >>>> >>>> If the above is admissible then the following syntactic transformation is allowed: >>>> >>>> model = do >>>> cond <- bernoulli 0.5 >>>> if cond == 1 then normal 0 1 else return 0 >>>> >>>> which makes it obvious that the normal distribution is only sampled when needed. But I don't know whether you would regard this as the same model. Notice that I disregarded your `sample` function. That is, my monadic language is the monad of probabilites, not the monad of state transformations of a random number generator. Maybe you can delay using the random number generator until the very end? 
I don't know the complete set of operations your modeling language sports. If that delay is possible, then maybe you can use a monad that has the above two properties (e.g. a reader monad) and only feed the random numbers to the final model. As proof of concept, consider the following.
>>>>
>>>> type Model = Reader Double
>>>> model :: Model Int
>>>> model = do
>>>> x <- reader (\r -> last [1..round (recip r)])
>>>> cond <- reader (\r -> r > 0.5)
>>>> return (if cond then x else 0)
>>>>
>>>> runReader model is fast for very small inputs, which would not be the case when the first line was always evaluated.
>>>>
>>>> Cheers,
>>>> Olaf
>>>>
>>>> >

From jack at jackkelly.name Fri Nov 15 01:39:21 2019
From: jack at jackkelly.name (Jack Kelly)
Date: Fri, 15 Nov 2019 11:39:21 +1000
Subject: [Haskell-cafe] ANN: semialign-extras-0.1.0.0, and a request for help
Message-ID: <877e41ex4m.fsf@jackkelly.name>

I have just pushed an initial release of semialign-extras[1] to Hackage. semialign-extras aims to collect interesting abstractions/operations that:

1. Build on top of (at least) the Semialign typeclass, or related classes from the semialign universe; and
2. Do not belong inside other packages in the semialign universe.

PRs that serve these goals are most welcome, as are GitHub issues[2]. Currently, there are two modules:

* Data.Semialign.Diff has diffing/patching for Semialigns; and
* Data.Semialign.Merge has a collection of merge operations for Semialigns which are also Filterable/Traversable/Witherable

Request for help: The functions in Data.Semialign.Diff work across a lot of types, but are often not the most efficient way to diff. There is a high-performance merge API for Data.Map.Map in Data.Map.Merge.Lazy[3], and it would be great to have some rewrite rules that took advantage of this. I have been able to write rules that GHC accepts, but not make them actually fire[4]. I'd be grateful for any assistance people can provide.
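To give a flavour of the diffing/patching idea, here is a sketch using plain Data.Map from containers. Note that this is only an illustration -- it is not semialign-extras' actual API, which works over any suitable Semialign:

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- A patch maps each touched key to Nothing ("delete") or Just v
-- ("insert/overwrite"). Sketch over Data.Map only.
type Patch k v = Map k (Maybe v)

diff :: (Ord k, Eq v) => Map k v -> Map k v -> Patch k v
diff old new = Map.union changedOrAdded deleted
  where
    -- keys present in new: keep those that are new or changed
    changedOrAdded =
      Map.map Just (Map.differenceWith keepChanged new old)
    -- keys present only in old: mark for deletion
    deleted = Map.map (const Nothing) (Map.difference old new)
    keepChanged n o = if n == o then Nothing else Just n

patch :: Ord k => Patch k v -> Map k v -> Map k v
patch p m = Map.foldrWithKey apply m p
  where
    apply k Nothing  = Map.delete k
    apply k (Just v) = Map.insert k v
```

The intended law is `patch (diff old new) old == new`, and `diff m m` is the empty patch.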
Best,

-- Jack

[1]: http://hackage.haskell.org/package/semialign-extras-0.1.0.0
[2]: https://github.com/qfpl/semialign-extras
[3]: https://hackage.haskell.org/package/containers-0.6.0.1/docs/Data-Map-Merge-Lazy.html
[4]: https://github.com/qfpl/semialign-extras/issues/3

From a.pelenitsyn at gmail.com Fri Nov 15 16:51:28 2019
From: a.pelenitsyn at gmail.com (Artem Pelenitsyn)
Date: Fri, 15 Nov 2019 11:51:28 -0500
Subject: [Haskell-cafe] GHC API: parsing as much as I can
Message-ID: 

Hello Cafe,

I need advice on how to use the GHC API to parse big collections of Haskell source files. Say, I want to collect ASTs of everything that is on Hackage. I downloaded the whole Hackage (latest versions only) and have it locally now. I tried the simple advice found in the Parser module documentation: https://hackage.haskell.org/package/ghc-8.6.5/docs/Parser.html

runParser :: DynFlags -> String -> P a -> ParseResult a
runParser flags str parser = unP parser parseState
  where
    filename = ""
    location = mkRealSrcLoc (mkFastString filename) 1 1
    buffer = stringToStringBuffer str
    parseState = mkPState flags buffer location

It mostly works: 75% of .hs files on Hackage seem to parse fine. I looked into the remaining 25% and noticed that this snippet can't handle files using GHC extensions such as RankNTypes, TemplateHaskell, BangPatterns, etc. when given the default DynFlags. This leads me to the question of how I should initialize DynFlags.

Currently, I use this for getting DynFlags:

initDflags :: IO DynFlags
initDflags = do
  let ldir = Just libdir
  mySettings <- initSysTools ldir
  myLlvmConfig <- initLlvmConfig ldir
  initDynFlags (defaultDynFlags mySettings myLlvmConfig)

I understand that simple parsing of individual files can't take into account extensions activated inside .cabal files. But I'd expect that it should be possible to, at least, consider the extensions mentioned in the LANGUAGE pragmas. Currently, this isn't happening. Any suggestions on how to achieve this are welcome.
I probably won't get to parsing 100% of Hackage, but I'd hope for better than the current 75%. -- Best wishes, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Nov 15 17:16:29 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 15 Nov 2019 17:16:29 +0000 Subject: [Haskell-cafe] GHC API: parsing as much as I can In-Reply-To: References: Message-ID: Hi Artem, You can look at these functions from ghc-exactprint for inspiration. http://hackage.haskell.org/package/ghc-exactprint-0.6.2/docs/Language-Haskell-GHC-ExactPrint-Parsers.html Cheers, Matt On Fri, Nov 15, 2019 at 4:52 PM Artem Pelenitsyn wrote: > > Hello Cafe, > > I need an advice on how to use GHC API to parse big collections of Haskell source files. > Say, I want to collect ASTs of everything that is on Hackage. > I downloaded the whole Hackage (latest versions only) and have it locally now. > I tried simple advice found in the Parse module documentation: > https://hackage.haskell.org/package/ghc-8.6.5/docs/Parser.html > > runParser :: DynFlags -> String -> P a -> ParseResult a > runParser flags str parser = unP parser parseState > where > filename = "" > location = mkRealSrcLoc (mkFastString filename) 1 1 > buffer = stringToStringBuffer str > parseState = mkPState flags buffer location > > It mostly works: 75% of .hs files on Hackage seem to parse fine. I looked into the rest 25% > and noticed that this snippet can't handle files using GHC extensions such as RankNTypes, > TemplateHaskell, BangPatterns, etc.when given the default DynFlags. This leads me to the question of how should I initialize DynFlags? 
> > Currently, I use this for getting DynFlags: > > initDflags :: IO DynFlags > initDflags = do > let ldir = Just libdir > mySettings <- initSysTools ldir > myLlvmConfig <- initLlvmConfig ldir > initDynFlags (defaultDynFlags mySettings myLlvmConfig) > > I understand that simple parsing of individual files can't take into account extensions activated inside .cabal files. But I'd expect that it should be possible to, at least, consider the extensions mentioned in the LANGUAGE pragmas. Currently, this isn't happening. Any suggestions on how to achieve this are welcomed. I probably won't get to parsing 100% of Hackage, but I'd hope for better than the current 75%. > > -- > Best wishes, > Artem > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From a.pelenitsyn at gmail.com Fri Nov 15 17:27:26 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Fri, 15 Nov 2019 12:27:26 -0500 Subject: [Haskell-cafe] GHC API: parsing as much as I can In-Reply-To: References: Message-ID: Looks like exactly what I need, many thanks! -- Best, Artem On Fri, 15 Nov 2019 at 12:16, Matthew Pickering wrote: > Hi Artem, > > You can look at these functions from ghc-exactprint for inspiration. > > > http://hackage.haskell.org/package/ghc-exactprint-0.6.2/docs/Language-Haskell-GHC-ExactPrint-Parsers.html > > Cheers, > > Matt > > On Fri, Nov 15, 2019 at 4:52 PM Artem Pelenitsyn > wrote: > > > > Hello Cafe, > > > > I need an advice on how to use GHC API to parse big collections of > Haskell source files. > > Say, I want to collect ASTs of everything that is on Hackage. > > I downloaded the whole Hackage (latest versions only) and have it > locally now. 
> > I tried simple advice found in the Parse module documentation: > > https://hackage.haskell.org/package/ghc-8.6.5/docs/Parser.html > > > > runParser :: DynFlags -> String -> P a -> ParseResult a > > runParser flags str parser = unP parser parseState > > where > > filename = "" > > location = mkRealSrcLoc (mkFastString filename) 1 1 > > buffer = stringToStringBuffer str > > parseState = mkPState flags buffer location > > > > It mostly works: 75% of .hs files on Hackage seem to parse fine. I > looked into the rest 25% > > and noticed that this snippet can't handle files using GHC extensions > such as RankNTypes, > > TemplateHaskell, BangPatterns, etc.when given the default DynFlags. This > leads me to the question of how should I initialize DynFlags? > > > > Currently, I use this for getting DynFlags: > > > > initDflags :: IO DynFlags > > initDflags = do > > let ldir = Just libdir > > mySettings <- initSysTools ldir > > myLlvmConfig <- initLlvmConfig ldir > > initDynFlags (defaultDynFlags mySettings myLlvmConfig) > > > > I understand that simple parsing of individual files can't take into > account extensions activated inside .cabal files. But I'd expect that it > should be possible to, at least, consider the extensions mentioned in the > LANGUAGE pragmas. Currently, this isn't happening. Any suggestions on how > to achieve this are welcomed. I probably won't get to parsing 100% of > Hackage, but I'd hope for better than the current 75%. > > > > -- > > Best wishes, > > Artem > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frederic-emmanuel.picca at synchrotron-soleil.fr Fri Nov 15 17:44:53 2019 From: frederic-emmanuel.picca at synchrotron-soleil.fr (PICCA Frederic-Emmanuel) Date: Fri, 15 Nov 2019 17:44:53 +0000 Subject: [Haskell-cafe] monoid fold concurrently Message-ID: Hello, I would like to create a function which does mkCube :: [IO a] -> IO a mkCube = ... I want to fold all this concurrently in order to obtain the final a I have a function like this merge :: a -> a -> IO a in order to sort of merge two a into another one. I did not find something like this on Hoogle, so I would like your advice in order to write mkCube Thanks Frederic From ietf-dane at dukhovni.org Fri Nov 15 20:05:46 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Fri, 15 Nov 2019 15:05:46 -0500 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: References: Message-ID: <20191115200546.GK34850@straasha.imrryr.org> On Fri, Nov 15, 2019 at 05:44:53PM +0000, PICCA Frederic-Emmanuel wrote: > Hello, I would like to create a function which does > > mkCube :: [IO a] -> IO a > mkCube = ... > > I want to fold all this concurrently in order to obtain the final a Is the list guaranteed non-empty? Do you want to enforce that at the type level (compile time), or "fail" at runtime when it is? You should probably be a bit more explicit about what you mean by "concurrently". Do you know in advance that the list length is sufficiently short to make it reasonable to immediately fork an async thread for each? Also, do you want the outputs to be folded in list order, or in any order (e.g. roughly in order of IO action completion)? > I have a function like this > > merge :: a -> a -> IO a > > in order to sort of merge two a into another one. Is `merge` really an IO action, or did you mean "a -> a -> a"? Most if not all of the building blocks for this are in the async package, but putting them together correctly generally depends on the details of your use-case. -- Viktor. 
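The shape being asked for can be sketched with the async package mentioned above: run the producer actions concurrently, then fold the results with the IO-based merge. This is only an illustrative sketch — the name `mkCubeWith`, the non-empty-list assumption, and the sequential fold are assumptions, not anything from the thread:

```haskell
-- Sketch only: run all producer actions concurrently, then fold the
-- results with the caller-supplied IO-based merge.  Assumes a non-empty
-- input list; fails at runtime otherwise.
import Control.Concurrent.Async (mapConcurrently)
import Control.Monad (foldM)

mkCubeWith :: (a -> a -> IO a) -> [IO a] -> IO a
mkCubeWith _     []      = error "mkCubeWith: empty list"
mkCubeWith merge actions = do
  x:xs <- mapConcurrently id actions  -- one thread per action
  foldM merge x xs                    -- merge sequentially, in list order
```

Note that only the producers run concurrently here; the merging itself is sequential and order-preserving. A tree-shaped or out-of-order merge needs more coordination, e.g. a task pool.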
From frederic-emmanuel.picca at synchrotron-soleil.fr Fri Nov 15 20:20:44 2019 From: frederic-emmanuel.picca at synchrotron-soleil.fr (PICCA Frederic-Emmanuel) Date: Fri, 15 Nov 2019 20:20:44 +0000 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: <20191115200546.GK34850@straasha.imrryr.org> References: , <20191115200546.GK34850@straasha.imrryr.org> Message-ID: > Is the list guaranteed non-empty? Do you want to enforce that at the type > level (compile time), or "fail" at runtime when it is? > You should probably be a bit more explicit about what you mean by > "concurrently". Do you know in advance that the list length is sufficiently > short to make it reasonable to immediately fork an async thread for each? > Also, do you want the outputs to be folded in list order, or in any order (e.g. > roughly in order of IO action completion)? The real problem is: I have a bunch of hdf5 files which contain a stack of images and other metadata. From each image and its associated metadata, I can create a cube (3D array), which is the binning in the 3D space; for each image of the stack: read image + metadata -> transformation in 3D space -> binning. Then binning -> binning -> binning (this is the monoid); since this is pure computation, I can use unsafe IO to create the merge function. In my case, I want to distribute all this across all my cores. Each core can do, in whichever order, a merge of the binnings until I have only one binning. [a1, a2, a3, a4] core1: a1 + a2 -> a12 core2: a3 + a4 -> a34 then first core available, a12 + a34 -> a1234 Is it clearer? 
Fred From ietf-dane at dukhovni.org Fri Nov 15 21:12:35 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Fri, 15 Nov 2019 16:12:35 -0500 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: References: <20191115200546.GK34850@straasha.imrryr.org> Message-ID: <20191115211235.GL34850@straasha.imrryr.org> On Fri, Nov 15, 2019 at 08:20:44PM +0000, PICCA Frederic-Emmanuel wrote: > > Is the list is guaranteed non-empty? Do you want to enforce that at the type > > level (compile time), or "fail" at runtime when it is? ? > > You should probably be a bit more explicit about what you mean by > > "concurrently". Do you know in advance that the list length is sufficiently > > short to make it reasonable to immediately fork an async thread for each? ? > > Also, do you want the outputs to folded in list order, or in any order (e.g. > > roughly in order of IO action completion)? > > The real probleme, is: > > I have a bunch of hdf5 files which contain a stack of image and other metadata's. Is that at least one? O(1) per core, ... hundreds, tens of thousands? > for each image an associated metadatas, I can create a cube (3D array), whcih is the binning in the 3D space Do you have space constraints on the number of not yet folded together cubes that can be in memory at the same time? > binning -> binning -> binning (this is the monoid), since this is pure > computation, I can use unsafe IO to create the merge function. Is it actually a Monoid (has an identity), or only a Semigroup? > In my case, I want to distribute all this on all my core. Which is the (more) expensive operation, computing a cube or merging two already computed cubes? > each core can do in which ever order a merge of the binning until I have only one binning. > > [a1, a2, a3, a4] > > core1: a1 + a2 -> a12 > core2: a3 + a4 -> a34 > > then > > first core available, a12 + a34 -> a1234 That is still ultimately order preserving (but associative): (a1 + a2) + (a3 + a4). 
Is the semigroup also commutative, would: (a2 + a4) + (a1 + a3) also work? -- Viktor. From frederic-emmanuel.picca at synchrotron-soleil.fr Fri Nov 15 21:29:58 2019 From: frederic-emmanuel.picca at synchrotron-soleil.fr (PICCA Frederic-Emmanuel) Date: Fri, 15 Nov 2019 21:29:58 +0000 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: <20191115211235.GL34850@straasha.imrryr.org> References: <20191115200546.GK34850@straasha.imrryr.org> , <20191115211235.GL34850@straasha.imrryr.org> Message-ID: > > Is the list guaranteed non-empty? Do you want to enforce that at the type > > level (compile time), or "fail" at runtime when it is? runtime error > > You should probably be a bit more explicit about what you mean by > > "concurrently". Do you know in advance that the list length is sufficiently > > short to make it reasonable to immediately fork an async thread for each? the list can have 100000 images of 1 million pixels, and it is not that short per image. > Is that at least one? O(1) per core, ... hundreds, tens of thousands? for now I have 100000 images to merge on 24 cores. > Do you have space constraints on the number of not yet folded > together cubes that can be in memory at the same time? On my computer I have 256 GB of memory, enough to load all the images first, but I need to target smaller computers. The biggest stack that I processed was 60 GB, for a final cube of 2000x2000x2000 of int32, but most of the time it will be around 1000x1000x1000 of int32. > Is it actually a Monoid (has an identity), or only a Semigroup? I can have an identity, which is an empty cube. > Which is the (more) expensive operation, computing a cube or merging > two already computed cubes? Computing the cube is constant in time for a given image size. Merging cubes gets slower and slower with time, since the cube grows from merge to merge. > That is still ultimately order preserving (but associative): (a1 + a2) + (a3 + a4). 
> Is the semigroup also commutative, would: (a2 + a4) + (a1 + a3) also work? yes Fred From ietf-dane at dukhovni.org Fri Nov 15 22:06:16 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Fri, 15 Nov 2019 17:06:16 -0500 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: References: <20191115200546.GK34850@straasha.imrryr.org> <20191115211235.GL34850@straasha.imrryr.org> Message-ID: <20191115220616.GM34850@straasha.imrryr.org> On Fri, Nov 15, 2019 at 09:29:58PM +0000, PICCA Frederic-Emmanuel wrote: > > > Is the list is guaranteed non-empty? Do you want to enforce that at the type > > > level (compile time), or "fail" at runtime when it is? > > runtime error If there's an identity (empty) cube, then an empty list can perhaps just map to that, but you could still throw an error. > > > You should probably be a bit more explicit about what you mean by > > > "concurrently". Do you know in advance that the list length is sufficiently > > > short to make it reasonable to immediately fork an async thread for each? > > the list can have 100000 images of 1 million of pixels, and it is not that short per image. This suggests that processing them should not happen all at once, but rather you'd load more images as idle task slots become available. This suggests use of 'mapReduce' from: https://hackage.haskell.org/package/async-pool-0.9.0.2/docs/Control-Concurrent-Async-Pool.html > > Is it actually a Monoid (has an identity), or only a Semigroup? > > I can have a identiy which is an empty cube. This is good, since mapReduce wants a Monoid. > computing the cube is constant in time for a given image size. > merging cube is slower and slower with time, since the cube grows from merge to merge. It looks like mapReduce also reduces pairs in parallel, and discards the inputs. It looks to be order preserving, so you don't even need commutativity. I've not used this module myself, please post a summary of your experience. -- Viktor. 
From scm at iis.sinica.edu.tw Sat Nov 16 03:58:03 2019 From: scm at iis.sinica.edu.tw (Shin-Cheng Mu) Date: Sat, 16 Nov 2019 11:58:03 +0800 Subject: [Haskell-cafe] FLOPS 2020: DEADLINE EXTENSION (abstract 22 Nov, full paper 29 Nov) Message-ID: <48878DB6-16E2-4468-BE9D-42256D1158C9@iis.sinica.edu.tw> FINAL Call For Papers (*** DEADLINE EXTENSION ***) FLOPS 2020: 15th International Symposium on Functional and Logic Programming In-Cooperation with ACM SIGPLAN =============================== 23-25 April, 2020, Akita, Japan https://www.ipl.riec.tohoku.ac.jp/FLOPS2020/ Writing down detailed computational steps is not the only way of programming. An alternative, being used increasingly in practice, is to start by writing down the desired properties of the result. The computational steps are then (semi-)automatically derived from these higher-level specifications. Examples of this declarative style of programming include functional and logic programming, program transformation and rewriting, and extracting programs from proofs of their correctness. FLOPS aims to bring together practitioners, researchers and implementors of the declarative programming paradigm, to discuss mutually interesting results and common problems: theoretical advances, their implementations in language systems and tools, and applications of these systems in practice. The scope includes all aspects of the design, semantics, theory, applications, implementations, and teaching of declarative programming. FLOPS specifically aims to promote cross-fertilization between theory and practice and among different styles of declarative programming. 
*** Scope *** FLOPS solicits original papers in all areas of declarative programming: * functional, logic, functional-logic programming, rewriting systems, formal methods and model checking, program transformations and program refinements, developing programs with the help of theorem provers or SAT/SMT solvers, verifying properties of programs using declarative programming techniques; * foundations, language design, implementation issues (compilation techniques, memory management, run-time systems, etc.), applications and case studies. FLOPS promotes cross-fertilization among different styles of declarative programming. Therefore, research papers must be written to be understandable by the wide audience of declarative programmers and researchers. In particular, each submission should explain its contributions in both general and technical terms, clearly identifying what has been accomplished, explaining why it is significant for its area, and comparing it with previous work. Submission of system descriptions and declarative pearls is especially encouraged. *** Submission *** Submissions should fall into one of the following categories: * Regular research papers: they should describe new results and will be judged on originality, correctness, and significance. * System descriptions: they should describe a working system and will be judged on originality, usefulness, and design. * Declarative pearls: new and excellent declarative programs or theories with illustrative applications. System descriptions and declarative pearls must be explicitly marked as such in the title. Submissions must be unpublished and not submitted for publication elsewhere. Work that already appeared in unpublished or informally published workshop proceedings may be submitted. See also ACM SIGPLAN Republication Policy, as explained on the web at http://www.sigplan.org/Resources/Policies/Republication. 
Submissions must be written in English and can be up to 15 pages excluding references, though system descriptions and pearls are typically shorter. The formatting has to conform to Springer's guidelines. Regular research papers should be supported by proofs and/or experimental results. In case of lack of space, this supporting information should be made accessible otherwise (e.g., a link to an anonymized Web page or an appendix, which does not count towards the page limit). However, it is the responsibility of the authors to guarantee that their paper can be understood and appreciated without referring to this supporting information; reviewers may simply choose not to look at it when writing their review. FLOPS 2020 will employ a double-blind reviewing process. To facilitate this, submitted papers must adhere to two rules: 1. author names and institutions must be omitted, and 2. references to authors' own related work should be in the third person (e.g., not "We build on our previous work..." but rather "We build on the work of..."). The purpose of this process is to help the reviewers come to a judgement about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized). In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. 
Papers should be submitted electronically at https://easychair.org/conferences/?conf=flops2020 Springer Guidelines https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines *** Proceedings *** The proceedings will be published by Springer International Publishing in the Lecture Notes in Computer Science (LNCS) series (www.springer.com/lncs). *** Important Dates *** [EXTENDED] 22 November 2019 (AoE): Abstract submission [EXTENDED] 29 November 2019 (AoE): Submission deadline [EXTENDED] 31 January 2020: Author notification [EXTENDED] 20 February 2020: Camera ready due 23-25 April 2020: FLOPS Symposium *** Program Committee *** Elvira Albert Universidad Complutense de Madrid María Alpuente Universitat Politècnica de València Edwin Brady University of St Andrews Michael Hanus CAU Kiel Nao Hirokawa JAIST Zhenjiang Hu Peking University John Hughes Chalmers University of Technology Kazuhiro Inaba Google Shin-Ya Katsumata National Institute of Informatics Ekaterina Komendantskaya Heriot-Watt University Leonidas Lampropoulos University of Pennsylvania Akimasa Morihata The University of Tokyo Shin-Cheng Mu Academia Sinica Keisuke Nakano Tohoku University (co-chair) Koji Nakazawa Nagoya University Enrico Pontelli New Mexico State University Didier Remy INRIA Ricardo Rocha University of Porto Konstantinos Sagonas Uppsala University (co-chair) Ilya Sergey Yale-NUS College Kohei Suenaga Kyoto University Tachio Terauchi Waseda University Kazushige Terui Kyoto University Simon Thompson University of Kent Philip Wadler University of Edinburgh *** Organizers *** Keisuke Nakano Tohoku University, Japan (PC Co-Chair, General Chair) Kostis Sagonas Uppsala University, Sweden (PC Co-Chair) Kazuyuki Asada Tohoku University, Japan (Local Co-Chair) Ryoma Sin'ya Akita University, Japan (Local Co-Chair) Katsuhiro Ueno Tohoku University, Japan (Local Co-Chair) *** Contact Address *** flops2020 _AT_ easychair.org From ietf-dane at dukhovni.org Sat Nov 16 07:40:04 
2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Sat, 16 Nov 2019 02:40:04 -0500 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: <20191115220616.GM34850@straasha.imrryr.org> References: <20191115200546.GK34850@straasha.imrryr.org> <20191115211235.GL34850@straasha.imrryr.org> <20191115220616.GM34850@straasha.imrryr.org> Message-ID: <20191116074004.GO34850@straasha.imrryr.org> On Fri, Nov 15, 2019 at 05:06:16PM -0500, Viktor Dukhovni wrote: > I've not used this module myself, please post a summary of your > experience. I was curious, so I decided to try a simple case: {-# LANGUAGE BlockArguments #-} {-# LANGUAGE BangPatterns #-} module Main (main) where import Control.Concurrent.Async.Pool import Control.Concurrent.STM import Control.Monad import Data.List import Data.Monoid import System.Environment defCount, batchSz :: Int defCount = 10000 batchSz = 256 batchList :: Int -> [a] -> [[a]] batchList sz as = case splitAt sz as of ([], _) -> [] (t, []) -> [t] (h, t) -> h : batchList sz t main :: IO () main = do n <- maybe defCount read <$> (fmap fst . uncons) <$> getArgs let bs = batchList batchSz $ map Sum [1..n] s <- foldM mergeReduce mempty bs print $ getSum s where mergeReduce :: Sum Int -> [(Sum Int)] -> IO (Sum Int) mergeReduce !acc ms = (acc <>) <$> reduceBatch (return <$> ms) reduceBatch :: Monoid a => [IO a] -> IO a reduceBatch ms = withTaskGroup 8 $ (>>= wait) . atomically . flip mapReduce ms Without batching, the whole list of actions is brought into memory, all at once (to create the task dependency graph), and then the outputs are folded concurrently, which does not run in constant memory in the size of the list. In the above the list of actions is chunked (256 at a time), these are merged concurrently, but then the results from the chunks are merged sequentially. 
If the cost of storing the entire task list in memory is negligible, a single mapReduce may perform better: {-# LANGUAGE BlockArguments #-} module Main (main) where import Control.Concurrent.Async.Pool import Control.Concurrent.STM import Data.List import Data.Monoid import System.Environment defCount :: Int defCount = 100 main :: IO () main = do n <- maybe defCount read <$> (fmap fst . uncons) <$> getArgs withTaskGroup 8 \tg -> do reduction <- atomically $ mapReduce tg $ map (return . Sum) [1..n] wait reduction >>= print . getSum -- Viktor. From frederic-emmanuel.picca at synchrotron-soleil.fr Sat Nov 16 08:57:36 2019 From: frederic-emmanuel.picca at synchrotron-soleil.fr (PICCA Frederic-Emmanuel) Date: Sat, 16 Nov 2019 08:57:36 +0000 Subject: [Haskell-cafe] monoid fold concurrently In-Reply-To: <20191116074004.GO34850@straasha.imrryr.org> References: <20191115200546.GK34850@straasha.imrryr.org> <20191115211235.GL34850@straasha.imrryr.org> <20191115220616.GM34850@straasha.imrryr.org>, <20191116074004.GO34850@straasha.imrryr.org> Message-ID: thanks a lot, I will try to implement this with mapReduce and I will post my solution :) Cheers Frederic From markus.l2ll at gmail.com Mon Nov 18 15:27:04 2019 From: markus.l2ll at gmail.com (=?UTF-8?B?TWFya3VzIEzDpGxs?=) Date: Mon, 18 Nov 2019 17:27:04 +0200 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: <4CED676E-9CC2-4884-BFC3-60C506FE8B79@richarde.dev> References: <4CED676E-9CC2-4884-BFC3-60C506FE8B79@richarde.dev> Message-ID: Created an issue: https://gitlab.haskell.org/ghc/ghc/issues/17492 I wonder what the difficulty of implementing this is for a complete ghc noob (but not to haskell)? On Mon, Nov 4, 2019 at 6:42 PM Richard Eisenberg wrote: > There should be a way to reverse that. Post a bug! :) > > > On Nov 1, 2019, at 2:32 PM, Markus Läll wrote: > > > > Hi list! > > > > Is it so that there is no reverse for -ddump-splices? 
In ghci I can > > set it with :set -ddump-splices and it turns on, but adding "no" to > > the flag appears not to work. > > > > Found this > > > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > > so probably there is no reverse? > > > > -- > > Markus Läll > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > > -- Markus Läll -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Nov 18 22:46:48 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 18 Nov 2019 22:46:48 +0000 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: References: <4CED676E-9CC2-4884-BFC3-60C506FE8B79@richarde.dev> Message-ID: You'll probably have a harder time just getting GHC building than actually adding the feature. Start in https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/main/DynFlags.hs and follow your nose. :) Richard > On Nov 18, 2019, at 3:27 PM, Markus Läll wrote: > > Created an issue: https://gitlab.haskell.org/ghc/ghc/issues/17492 > > I wonder what the difficulty of implementing this is for a complete ghc noob (but not to haskell)? > > On Mon, Nov 4, 2019 at 6:42 PM Richard Eisenberg > wrote: > There should be a way to reverse that. Post a bug! :) > > > On Nov 1, 2019, at 2:32 PM, Markus Läll > wrote: > > > > Hi list! > > > > Is it so that there is no reverse for -ddump-splices? In ghci I can > > set it with :set -ddump-splices and it turns on, but adding "no" to > > the flag appears not to work. > > > > Found this > > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > > so probably there is no reverse? 
> > > > -- > > Markus Läll > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > > > > -- > Markus Läll -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Tue Nov 19 07:59:14 2019 From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 19 Nov 2019 07:59:14 +0000 Subject: [Haskell-cafe] Reverse of -ddump-splices In-Reply-To: References: <4CED676E-9CC2-4884-BFC3-60C506FE8B79@richarde.dev> Message-ID: <20191119075914.gw3kevclsx6x5gai@weber> And as a new contributor to GHC over the past few weeks I found the onboarding process straightforward, so don't be intimidated! On Mon, Nov 18, 2019 at 10:46:48PM +0000, Richard Eisenberg wrote: > You'll probably have a harder time just getting GHC building than actually > adding the feature. Start in > https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/main/DynFlags.hs > > and follow your nose. :) > > > On Nov 18, 2019, at 3:27 PM, Markus Läll wrote: > > > > Created an issue: https://gitlab.haskell.org/ghc/ghc/issues/17492 > > > > > > I wonder what the difficulty of implementing this is for a complete ghc > > noob (but not to haskell)? > > > > On Mon, Nov 4, 2019 at 6:42 PM Richard Eisenberg > > wrote: There should be a way to reverse that. > > Post a bug! :) > > > > > On Nov 1, 2019, at 2:32 PM, Markus Läll > wrote: > > > > > > Hi list! > > > > > > Is it so that there is no reverse for -ddump-splices? In ghci I can > > > set it with :set -ddump-splices and it turns on, but adding "no" to > > > the flag appears not to work. > > > > > > Found this > > > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/flags.html > > > so probably there is no reverse? 
From ramin.honary at cross-compass.com Tue Nov 19 08:17:47 2019 From: ramin.honary at cross-compass.com (Ramin Honary) Date: Tue, 19 Nov 2019 17:17:47 +0900 Subject: [Haskell-cafe] [Job] PureScript and Haskell, Tokyo or remote Message-ID: *[Job] PureScript and Haskell, Tokyo or remote* Cross Compass in Tokyo is hiring programmers to build a data science IDE on top of JupyterLab, with a Haskell backend and a PureScript/React frontend. The JupyterLab stack is new and we're building new technology. This is a small team and we want to add generalists to work full-time. We are looking for candidates with these areas of interest and experience. - Browser-based GUI applications, especially React and PureScript - Functional programming, especially Haskell - Devops, especially Docker and Nix - Python and the Jupyter ecosystem - Machine learning and data science *About Cross Compass* Cross Compass is a Japanese AI consultancy founded in 2015 in Tokyo. It is a leader in AI for manufacturing and industrial robotics with 80~100 projects per year. There are 66 employees (43 engineers), and the office conversation is about half Japanese, half English, with 15 nationalities represented among our staff. The number of self-described Haskellers at Cross Compass currently stands at 4 in Tokyo and 2 remote. github.com/xc-jp - Ramin Honary https://github.com/RaminHAL9001 - Dennis Gosnell https://github.com/cdepillabout - Jonas Carpay https://github.com/jonascarpay - Viktor Kronvall https://github.com/considerate - Robert Prije https://github.com/rprije - James Brock https://github.com/jamesdbrock *Application* We would like to see examples of how you work, so if you have any work published on the internet, please tell us where to look for it, and explain what is interesting about it. 
Please apply by email to recruit at cross-compass.com and include the following: - Highlights of work history - Highlights of education history - Open source software links - Papers, articles, social media, anything which reflects well on you - Request to work in Tokyo office or remote If you want to talk about this position or anything else, please email me at james.brock at cross-compass.com. Salary will be based on experience and shall be determined by market pay rates for programming positions in Tokyo. The application process will include a one month trial period. Remote work is possible but we prefer candidates who live, or want to live, in Tokyo, for a larger interaction with our teams. We provide visa sponsorship. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nbu at informatik.uni-kiel.de Tue Nov 19 16:43:45 2019 From: nbu at informatik.uni-kiel.de (Niels Bunkenburg) Date: Tue, 19 Nov 2019 17:43:45 +0100 Subject: [Haskell-cafe] Summer internship with a Haskell-related project (DAAD Rise Germany) Message-ID: My department (PL and compiler construction) at the University of Kiel, Germany, offers a summer internship in 2020 sponsored by DAAD Rise Germany [1]. The topic of the project is verification of effectful Haskell programs in Coq. For a brief overview of the internship and the topic, please have a look at the project page [2]. We'd be very happy to host a motivated student who likes working with our haskell-to-coq compiler. Depending on personal interests, the focus of the internship can be the compiler itself or its output and the Coq framework for reasoning about the generated code. Previous knowledge of Coq is not required! If you have any questions beforehand, feel free to reply directly or email us at the addresses listed on the project page. The program funding is based on the number of applications, so please spread the news to your colleagues, students or classmates! 
Best regards, Niels [1] https://www.daad.de/rise/en/rise-germany/find-an-internship/ [2] https://bunkenburg.net/projects/2019-11-01-daad-rise.html From carette at mcmaster.ca Wed Nov 20 18:08:05 2019 From: carette at mcmaster.ca (Jacques Carette) Date: Wed, 20 Nov 2019 13:08:05 -0500 Subject: [Haskell-cafe] Transforming from on State-transformer to another Message-ID: Is there a way to have a function of type (a -> b) -> State a c -> State b c ? The particular case of interest is actually b -> State a c -> (State (a,b) c), which is of course a special case of the above. This matches 'first' on Data.Bifunctor, but State is not a Bifunctor. Nor a Profunctor, AFAIK. Use case: a Stateful computation where a local sub-computation needs more (local) state that will be, in time, de-allocated. I guess the usual solution is probably to use 2 stacked State, but I'm curious if there's another way. Jacques From oleg.grenrus at iki.fi Wed Nov 20 18:15:14 2019 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Wed, 20 Nov 2019 20:15:14 +0200 Subject: [Haskell-cafe] Transforming from on State-transformer to another In-Reply-To: References: Message-ID: <9e39a6b1-e027-39f0-12d7-f58071dc8b33@iki.fi> You need something like Lens to "zoom" into the state. As state monad can not only `get` but also `put` the state, the plain a -> b function is not enough: you need Lens a b. See https://hackage.haskell.org/package/lens-4.18.1/docs/Control-Lens-Zoom.html#v:zoom or https://hackage.haskell.org/package/optics-extra-0.2/docs/Optics-Zoom.html#v:zoom - Oleg On 20.11.2019 20.08, Jacques Carette wrote: > Is there a way to have a function of type (a -> b) -> State a c -> > State b c ? > > The particular case of interest is actually b > -> State a c -> (State (a,b) c), which is of course a special case of > the above. > > This matches 'first' on Data.Bifunctor, but State is not a Bifunctor. > Nor a Profunctor, AFAIK. 
> > Use case: a Stateful computation where a local sub-computation needs > more (local) state that will be, in-time, de-allocated. > > I guess the usual solution is probably to use 2 stacked State, but I'm > curious if there's another way. > > Jacques > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From carette at mcmaster.ca Wed Nov 20 19:00:35 2019 From: carette at mcmaster.ca (Jacques Carette) Date: Wed, 20 Nov 2019 14:00:35 -0500 Subject: [Haskell-cafe] Transforming from on State-transformer to another In-Reply-To: <9e39a6b1-e027-39f0-12d7-f58071dc8b33@iki.fi> References: <9e39a6b1-e027-39f0-12d7-f58071dc8b33@iki.fi> Message-ID: <73a15344-d3a7-4ee3-9f3c-d07b03ee76c8@mcmaster.ca> Ah, that looks perfect, thank you.  Jacques On 2019-11-20 1:15 p.m., Oleg Grenrus wrote: > You need something like Lens to "zoom" into the state. As state monad > can not only `get` but also `put` the state, the plain a -> b function > is not enough: you need Lens a b. > > See > https://hackage.haskell.org/package/lens-4.18.1/docs/Control-Lens-Zoom.html#v:zoom > or > https://hackage.haskell.org/package/optics-extra-0.2/docs/Optics-Zoom.html#v:zoom > > - Oleg > > On 20.11.2019 20.08, Jacques Carette wrote: >> Is there a way to have a function of type (a -> b) -> State a c -> >> State b c ? >> >> In the particular case of interest, I am interested in is actually b >> -> State a c -> (State (a,b) c), which is of course a special case of >> the above. >> >> This matches 'first' on Data.Bifunctor, but State is not a Bifunctor. >> Nor a Profunctor, AFAIK. >> >> Use case: a Stateful computation where a local sub-computation needs >> more (local) state that will be, in-time, de-allocated. 
>> >> I guess the usual solution is probably to use 2 stacked State, but >> I'm curious if there's another way. >> >> Jacques >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From ietf-dane at dukhovni.org Wed Nov 20 20:40:40 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Wed, 20 Nov 2019 15:40:40 -0500 Subject: [Haskell-cafe] Transforming from on State-transformer to another In-Reply-To: References: Message-ID: > On Nov 20, 2019, at 1:08 PM, Jacques Carette wrote: > > In the particular case of interest, I am interested in is actually b -> State a c -> (State (a,b) c), which is of course a special case of the above. > > Use case: a Stateful computation where a local sub-computation needs more (local) state that will be, in-time, de-allocated. Perhaps I misunderstood your use-case, but it seems that with a local sub-computation that needs extra state you can just: ... st <- get (c, (st', _)) <- runStateT local (st, extra) put st' ... where local :: StateT (a, b) m c. -- Viktor. From allbery.b at gmail.com Wed Nov 20 21:39:45 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 20 Nov 2019 16:39:45 -0500 Subject: [Haskell-cafe] Transforming from on State-transformer to another In-Reply-To: References: Message-ID: For what it's worth, I do sometimes define a version of MonadReader's "local" for MonadState; it runs a State computation on a transformed State. I've heard this used often enough to wonder why it's not a standard part of MonadState. 
On Wed, Nov 20, 2019 at 3:41 PM Viktor Dukhovni wrote:

> > On Nov 20, 2019, at 1:08 PM, Jacques Carette wrote:
> >
> > In the particular case of interest, I am interested in is actually
> > b -> State a c -> (State (a,b) c), which is of course a special case
> > of the above.
> >
> > Use case: a Stateful computation where a local sub-computation needs
> > more (local) state that will be, in-time, de-allocated.
>
> Perhaps I misunderstood your use-case, but it seems that with a local
> sub-computation that needs extra state you can just:
>
>     ...
>     st <- get
>     (c, (st', _)) <- runStateT local (st, extra)
>     put st'
>     ...
>
> where local :: StateT (a, b) m c.
>
> --
> Viktor.
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.

-- 
brandon s allbery kf8nh
allbery.b at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johannes.waldmann at htwk-leipzig.de  Thu Nov 21 10:43:49 2019
From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann)
Date: Thu, 21 Nov 2019 11:43:49 +0100
Subject: [Haskell-cafe] filepath
Message-ID: 

Dear Cafe,

is there a library that provides an
"abstract representation of file and directory pathnames"?

This had been discussed before - e.g.,
https://mail.haskell.org/pipermail/libraries/2007-December/008769.html
(it's one message of a longer thread)

That message references an obsolete (?) Java library, they now have
https://docs.oracle.com/en/java/javase/13/docs/api/java.base/java/nio/file/Files.html

It seems that
https://hackage.haskell.org/package/filepattern-0.1.1/docs/System-FilePattern-Directory.html
does cover typical usage. It is based on FilePath = String, and its
implementation has some "++"-ing.
That indicates that an abstraction layer is missing - or, that such a
layer is indeed not required?

- J.W.

From fa-ml at ariis.it  Thu Nov 21 10:54:45 2019
From: fa-ml at ariis.it (Francesco Ariis)
Date: Thu, 21 Nov 2019 11:54:45 +0100
Subject: [Haskell-cafe] filepath
In-Reply-To: 
References: 
Message-ID: <20191121105445.GA26669@x60s.casa>

On Thu, Nov 21, 2019 at 11:43:49AM +0100, Johannes Waldmann wrote:
> is there a library that provides an
> "abstract representation of file and directory pathnames"?

Yesterday sm showed this to me:

    https://hackage.haskell.org/package/path

Will that do?
-F

From mail at joachim-breitner.de  Fri Nov 22 10:49:34 2019
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Fri, 22 Nov 2019 11:49:34 +0100
Subject: [Haskell-cafe] Planet Haskell
Message-ID: <3f4a4585e1122ba03c442bc7f15d06e16e603974.camel@joachim-breitner.de>

Hi,

I am reading planet haskell through a feed reader that pulls
https://planet.haskell.org/atom.xml
and I get new posts. But it seems that
https://planet.haskell.org/ is stuck in January 2019.

Or, upon closer inspection, new posts _are_ on planet.haskell.org, but
they are not on top.

I guess in the times of reddit and slack, few people are following the
planet, but maybe still some people do?

Cheers,
Joachim

-- 
Joachim Breitner
  mail at joachim-breitner.de
  http://www.joachim-breitner.de/

From olf at aatal-apotheke.de  Fri Nov 22 12:59:00 2019
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Fri, 22 Nov 2019 13:59:00 +0100
Subject: [Haskell-cafe] Transforming from on State-transformer to another
In-Reply-To: <68ac8d9f-37f8-7361-04fd-4bff715aaf6f@mcmaster.ca>
References: <68ac8d9f-37f8-7361-04fd-4bff715aaf6f@mcmaster.ca>
Message-ID: <6F99A94F-3355-4DD8-85E4-04DC2AE3CF18@aatal-apotheke.de>

> On 21.11.2019 at 22:56, Jacques Carette wrote:
>
> On 2019-Nov.-21 15:57 , Olaf Klinke wrote:
>>> Is there a way to have a function of type (a -> b) -> State a c -> State
>>> b c ?
>> djinn says no.
The reason is that the a occurs both positively and negatively in
State a c, that is, on the left and the right hand side of the -> in
State.

>>
>>> In the particular case of interest, I am interested in is actually b ->
>>> State a c -> (State (a,b) c), which is of course a special case of the
>>> above.
>> djinn says yes. The suggested term is
>>
>> \f -> \(a,b) -> let (a,c) = f a in ((a,b),c)
>
>
> Thanks. I tried to play with the online djinn to ask about exactly this
> - and failed! How should I be asking? I downloaded the djinn package
> and built it with cabal.
>
> Also, something is weird about that term, since it doesn't match
> "b -> State a c -> (State (a,b) c)" at all. It does match
> "(b -> State a c) -> (State (a,b) c)" which is a completely different
> type (and not interesting...)

Sorry, I tried to simplify djinn's output and failed. The term should be

    \f -> \(a,b) -> let (c,a') = f a in (c,(a',b))
      :: (a -> (c, a3)) -> (a, b) -> (c, (a3, b))

where you can let a3 = a to get

    State a c -> State (a,b) c

That is, it maps a transformer of state a into a transformer of state
(a,b) that leaves the b component unchanged.

Olaf

From eric at seidel.io  Fri Nov 22 13:25:44 2019
From: eric at seidel.io (Eric Seidel)
Date: Fri, 22 Nov 2019 08:25:44 -0500
Subject: [Haskell-cafe] Planet Haskell
In-Reply-To: <3f4a4585e1122ba03c442bc7f15d06e16e603974.camel@joachim-breitner.de>
References: <3f4a4585e1122ba03c442bc7f15d06e16e603974.camel@joachim-breitner.de>
Message-ID: 

Like you, I follow it with a feed reader and I get new posts that way.
I haven't checked the planet.haskell.org website in years.

On Fri, Nov 22, 2019, at 05:49, Joachim Breitner wrote:
> Hi,
>
> I am reading planet haskell through a feed reader that pulls
> https://planet.haskell.org/atom.xml
> and I get new posts. But it seems that that
> https://planet.haskell.org/ is stuck in January 2019.
>
> Or, upon closer inspection, new posts _are_ on planet.haskell.org, but
> they are not on top.
> > I guess in the times of reddit and slack, few people are following the > planet, but maybe still some people do? > > Cheers, > Joachim > > > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From ganesh at earth.li Fri Nov 22 13:34:01 2019 From: ganesh at earth.li (Ganesh Sittampalam) Date: Fri, 22 Nov 2019 13:34:01 +0000 Subject: [Haskell-cafe] Planet Haskell In-Reply-To: <3f4a4585e1122ba03c442bc7f15d06e16e603974.camel@joachim-breitner.de> References: <3f4a4585e1122ba03c442bc7f15d06e16e603974.camel@joachim-breitner.de> Message-ID: <60ca0c38-6686-f0cb-41a2-52dd18f1625e@earth.li> I follow it on the web and I find the stuck posts very annoying (but not annoying enough to have done something about it yet!) On 22/11/2019 10:49, Joachim Breitner wrote: > Hi, > > I am reading planet haskell through a feed reader that pulls > https://planet.haskell.org/atom.xml > and I get new posts. But it seems that that > https://planet.haskell.org/ is stuck in January 2019. > > Or, upon closer inspection, new posts _are_ on planet.haskell.org, but > they are not on top. > > I guess in the times of reddit and slack, few people are following the > planet, but maybe still some people do? > > Cheers, > Joachim > > From leah at vuxu.org Mon Nov 25 10:25:29 2019 From: leah at vuxu.org (Leah Neukirchen) Date: Mon, 25 Nov 2019 11:25:29 +0100 Subject: [Haskell-cafe] Munich Haskell Meeting, 2019-11-27 @ 19:30 Message-ID: <87o8x045ie.fsf@vuxu.org> Dear all, This week, our monthly Munich Haskell Meeting will take place again on Wednesday, November 27 at Cafe Puck(!) at 19h30. 
For details see here:
http://muenchen.haskell.bayern/dates.html

If you plan to join, please add yourself to this dudle so we can
reserve enough seats! It is OK to add yourself to the dudle anonymously
or pseudonymously.
https://dudle.inf.tu-dresden.de/haskell-munich-nov-2019/

Everybody is welcome!

cu,
-- 
Leah Neukirchen
https://leahneukirchen.org/

From P.Achten at cs.ru.nl  Tue Nov 26 09:30:02 2019
From: P.Achten at cs.ru.nl (Peter Achten)
Date: Tue, 26 Nov 2019 10:30:02 +0100
Subject: [Haskell-cafe] [TFP'20] draft paper deadline open (January 10 2020) Trends in Functional Programming 2020, 13-14 February, Krakow, Poland
Message-ID: 

-------------------------------------------------------------------------
                     Third call for papers
        21st Symposium on Trends in Functional Programming
                          tfp2020.org
-------------------------------------------------------------------------

Did you miss the deadline to submit a paper to Trends in Functional
Programming http://cse.chalmers.se/~rjmh/tfp/? No worries -- it's not
too late! Submission is open until January 10th 2020, for a
presentation slot at the event and post-symposium reviewing.

The symposium on Trends in Functional Programming (TFP) is an
international forum for researchers with interests in all aspects of
functional programming, taking a broad view of current and future
trends in the area. It aspires to be a lively environment for
presenting the latest research results, and other contributions.

* TFP is moving to new winter dates, to provide an FP forum in between
  the annual ICFP events.

* TFP offers a supportive reviewing process designed to help less
  experienced authors succeed, with two rounds of review, both before
  and after the symposium itself. Authors have an opportunity to
  address reviewers' concerns before final decisions on publication in
  the proceedings.
* TFP offers two "best paper" awards, the John McCarthy award for best
  paper, and the David Turner award for best student paper.

* This year we are particularly excited to co-locate with Lambda Days
  in beautiful Krakow. Lambda Days is a vibrant developer conference
  with hundreds of attendees and a lively programme of talks on
  functional programming in practice. TFP will be held in the same
  venue, and participants will be able to session-hop between the two
  events.

Important Dates
---------------

Submission deadline for pre-symposium review:   15th November, 2019  -- passed --
Submission deadline for draft papers:           10th January, 2020
Symposium dates:                                13-14th February, 2020

Visit tfp2020.org for more information.

From xuanrui at nagoya-u.jp  Tue Nov 26 14:54:16 2019
From: xuanrui at nagoya-u.jp (Xuanrui Qi)
Date: Tue, 26 Nov 2019 23:54:16 +0900
Subject: [Haskell-cafe] A Specification for GADTs in Haskell?
Message-ID: 

Hello all,

I recently came across a comment by Richard Eisenberg that there's
actually no specification for GADTs and creating one would be a major
research project (in a GHC proposal somewhere, I recall, but I don't
remember exactly where).

I was wondering what exactly Richard's comment means. What constitutes
a specification for GADTs in Haskell? I suppose typing rules and
semantics are necessary; what are the major roadblocks hindering the
creation of a formal specification for GADTs in Haskell?

Thanks!

Xuanrui

-- 
Xuanrui Qi
Graduate School of Mathematics, Nagoya University
https://www.xuanruiqi.com

From vamchale at gmail.com  Tue Nov 26 15:46:15 2019
From: vamchale at gmail.com (Vanessa McHale)
Date: Tue, 26 Nov 2019 09:46:15 -0600
Subject: [Haskell-cafe] A Specification for GADTs in Haskell?
In-Reply-To: 
References: 
Message-ID: 

A spec would be something in the spirit of the Haskell 2010 report,
perhaps? Typing rules written up, plus the syntax, plus a specification
for pattern matching.
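As a concrete illustration of what such a specification would have to pin down (a sketch, not taken from the thread), consider the standard typed-expression GADT, where matching on a constructor refines the result type:

```haskell
{-# LANGUAGE GADTs #-}

-- The classic typed expression language as a GADT.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  AddE  :: Expr Int -> Expr Int -> Expr Int
  IfE   :: Expr Bool -> Expr a -> Expr a -> Expr a

-- In the IntE branch the type checker refines @a@ to Int, in the BoolE
-- branch to Bool; stating exactly when such refinements are available,
-- and how they interact with inference, is the hard part to specify.
eval :: Expr a -> a
eval (IntE n)     = n
eval (BoolE b)    = b
eval (AddE e1 e2) = eval e1 + eval e2
eval (IfE c t e)  = if eval c then eval t else eval e
```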
Cheers, Vanessa McHale > On Nov 26, 2019, at 8:54 AM, Xuanrui Qi wrote: > > Hello all, > > I recently came a cross a comment by Richard Eisenberg that there's > actually no specification for GADTs and creating one would be a major > research project (in a GHC proposal somewhere, I recall, but I don't > remember exactly where). > > I was wondering what exactly does Richard's comment mean. What > constitutes a specification for GADTs in Haskell? I suppose typing > rules and semantics are necessary; what are the major roadblocks > hindering the creation of a formal specification for GADTs in Haskell? > > Thanks! > > Xuanrui > > -- > Xuanrui Qi > Graduate School of Mathematics, Nagoya University > https://www.xuanruiqi.com > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From sjcjoosten+haskelcafe at gmail.com Tue Nov 26 16:30:24 2019 From: sjcjoosten+haskelcafe at gmail.com (Sebastiaan Joosten) Date: Tue, 26 Nov 2019 11:30:24 -0500 Subject: [Haskell-cafe] A Specification for GADTs in Haskell? In-Reply-To: References: Message-ID: Hi Xuanrui and Vanessa, Richard's PhD thesis is a big step towards having a specification for GADTs, so if his comment is more than 3 years old (he published his thesis in 2016), then the comment is outdated. I have not read all of his thesis ( https://www.cis.upenn.edu/~sweirich/papers/eisenberg-thesis.pdf ), but I think it's the closest thing to a semantics of GADTs there is out there right now. My guess is that what Richard means with 'having a semantics' involves answering two things: - which programs are considered 'correct' (i.e. well-typed) - what is an execution-step for a correct program. This is a description such that chaining valid execution-steps will cause terminating programs to end up in a value. 
Here 'value' refers to a correct program with no more possible outgoing execution steps. The description of the execution-step is called 'operational semantics'. A problem with Haskell's GADTs is that it has a non-terminating type system. This makes it unclear how to answer the first question. I believe the Pico language avoids this problem by requiring more type annotations. A naive solution could be to state that any program for which the type checker does not terminate is considered incorrect, but then one would need to prove that all execution-steps preserve this property. Another solution could be to ensure there is enough type information, and carry this type information throughout the execution when describing the operational semantics, which is the approach Richard seems to take. The downside is that you are then not describing Haskell itself exactly, but a more type-annotated version of it. For his own intermediate haskell/GADT-inspired language Pico, Richard gives those semantics in his thesis. He answers the first point by giving syntax (Fig 5.1 and 5.2) and describing typing (up to Section 5.6). For the second point, the execution steps are modeled by the rightward arrow notation (along with some context captured in sigma and gamma) in Section 5.7. If you want to quickly see how similar Pico is to Haskell, I suggest starting with the examples in Section 5.5. It contains some typical GADT examples. I hope this helps, did you ask Richard ( https://richarde.dev/ ) himself yet? I did not, so I may be wrong in all of the above. Best, Sebastiaan On Tue, Nov 26, 2019 at 9:54 AM Xuanrui Qi wrote: > Hello all, > > I recently came a cross a comment by Richard Eisenberg that there's > actually no specification for GADTs and creating one would be a major > research project (in a GHC proposal somewhere, I recall, but I don't > remember exactly where). > > I was wondering what exactly does Richard's comment mean. 
What > constitutes a specification for GADTs in Haskell? I suppose typing > rules and semantics are necessary; what are the major roadblocks > hindering the creation of a formal specification for GADTs in Haskell? > > Thanks! > > Xuanrui > > -- > Xuanrui Qi > Graduate School of Mathematics, Nagoya University > https://www.xuanruiqi.com > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Tue Nov 26 17:07:55 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 26 Nov 2019 17:07:55 +0000 Subject: [Haskell-cafe] A Specification for GADTs in Haskell? In-Reply-To: References: Message-ID: <677ED0EE-B063-4B07-9DCF-A8F23DAC9C60@richarde.dev> Hi Xuanrui, Glad you're interested in pursuing this topic! By specification, I mean that it should be possible to write down a set of (simple... or simple-ish) rules describing A) what programs are accepted by the compiler, and B) what will happen when these programs are run. (A) is called either typing rules or static semantics; (B) is called operational or dynamic semantics (or sometimes just semantics). The problem with GADTs is that we don't have that set of rules, at least not for Haskell's realization of GADTs. There is some work on this area: * GHC's type system and inference algorithm are well documented in https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/jfp-outsidein.pdf This paper lays out an overly-permissive set of rules for when GADTs should be accepted, but some programs are rejected that the rules suggest would be accepted. * These two papers describe type inference with GADTs: https://dl.acm.org/citation.cfm?id=2837665 and https://dl.acm.org/citation.cfm?id=3290322 . 
Neither is applicable to Haskell out-of-the-box. While I'm grateful to anyone who braves my thesis, it does not really address this problem, focusing much more on the internal language than on type inference. I suppose you could extract something from Chapter 6 (on type inference), but there are key bits missing there -- notably, any specification of a solver. I hope this is helpful -- happy to expand on this if you like! Richard > On Nov 26, 2019, at 2:54 PM, Xuanrui Qi wrote: > > Hello all, > > I recently came a cross a comment by Richard Eisenberg that there's > actually no specification for GADTs and creating one would be a major > research project (in a GHC proposal somewhere, I recall, but I don't > remember exactly where). > > I was wondering what exactly does Richard's comment mean. What > constitutes a specification for GADTs in Haskell? I suppose typing > rules and semantics are necessary; what are the major roadblocks > hindering the creation of a formal specification for GADTs in Haskell? > > Thanks! > > Xuanrui > > -- > Xuanrui Qi > Graduate School of Mathematics, Nagoya University > https://www.xuanruiqi.com > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanperezdominguez at gmail.com Wed Nov 27 19:45:46 2019 From: ivanperezdominguez at gmail.com (Ivan Perez) Date: Wed, 27 Nov 2019 14:45:46 -0500 Subject: [Haskell-cafe] [Announcement] Copilot 3.1, hard realtime C generator and runtime verification framework Message-ID: Dear all, We are very pleased to announce the release of Copilot 3.1, a stream-based DSL for writing and monitoring embedded C programs, with an emphasis on correctness and hard realtime requirements. 
Copilot is typically used as a high-level runtime verification framework, and supports temporal logic (LTL, PTLTL and MTL), clocks and voting algorithms. Among others, Copilot has been used at the Safety Critical Avionics Systems Branch of NASA Langley Research Center for monitoring test flights of drones. This new release contains a number of bug fixes and simplifications: * The installation instructions have been updated to work with new versions of cabal and GHC. * Random stream generators were not being used and have been removed. * External functions are no longer supported by the language (to prevent side-effects in streams). * Minor bug related to labels, nested locals, and pretty printing Copilot core, have been fixed. * Code internals have been simplified for a more future proof codebase. Copilot 3.1 is available on hackage [1]. For more information, including documentation, examples and links to the source code, please visit the webpage [2]. Current emphasis is on providing better documentation, facilitating the use with other systems, and generally improving the codebase. Users are encouraged to participate by opening issues and asking questions via our github repo [3]. Kind regards, The Copilot developers: - Frank Dedden (maintainer) - Alwyn Goodloe (maintainer) - Ivan Perez (maintainer) [1] http://hackage.haskell.org/package/copilot [2] https://copilot-language.github.io [3] https://github.com/Copilot-Language/copilot -------------- next part -------------- An HTML attachment was scrubbed... URL: From doaitse at swierstra.net Thu Nov 28 14:19:57 2019 From: doaitse at swierstra.net (Doaitse Swierstra) Date: Thu, 28 Nov 2019 15:19:57 +0100 Subject: [Haskell-cafe] may I politely ask for ... In-Reply-To: References: Message-ID: May I politely indicate a problem with messages like the one below. I am interested in Haskell in general and that is why I read Haskell-cafe; to keep an eye on new developments and what is going on. 
Unfortunately too often I see messages announcing (new versions) of packages, of which I have no idea what the package actually does (for a package named “network" I probably can guess). May I ask everyone to include at least one sentence of description in such announcements e.g. for the package below: Description ============== Brittany is a Haskell source code formatter based on ghc-exactprint. You can paste haskell code over here to test how it gets formatted by brittany. I find it annoying that I have to go to e.g. hackage just to see whether the message/package is interesting to me, Thanks in advance and keep the good work going, Doaitse Swierstra > On 19 Jun 2019, at 18:28, Evan Borden wrote: > > Hello, > > I am happy to announce the release of brittany version 0.12.0.0. This version > includes a number of fantastic improvements from many excellent contributors. > Additionally brittany will soon be added back to stackage. In the meantime it > can be built with the stack.yaml in its source or with cabal v2-build. > > https://hackage.haskell.org/package/brittany-0.12.0.0 > > Contributors > ============ > > * Benjamin Kovach @5outh > * Doug Beardsley @mightybyte > * Evan Borden @eborden > * Lennart Spitzner @lspitzner > * Matt Noonan @moatt-noonan > * Phil Hazelden @ChickenProp > * Rupert Horlick @ruhatch > * Sergey Vinokurov @sergv > > New Collaborators > ================= > > You may be wondering why I am sending this message instead of Lennart. Taylor > Fausak and I are taking on some of the maintenance for brittany. We will > hopefully lift some of the burden from Lennart so he can remain the guiding > light and driving force behind the project. > > Changes > ========== > > This release includes many bug fixes, additional support for a number of > syntactic constructs and a few layouting modifications. 
For full details check
> the change log on github:
>
> https://github.com/lspitzner/brittany/blob/master/ChangeLog.md
>
> Again thank you to brittany's fantastic contributors for helping making
> this release happen.
>
> --
>
> Evan
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ietf-dane at dukhovni.org  Sat Nov 30 19:22:19 2019
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Sat, 30 Nov 2019 14:22:19 -0500
Subject: [Haskell-cafe] Is UndecidableInstances unavoidable in heterogeneous Show derivation?
Message-ID: <20191130192219.GP34850@straasha.imrryr.org>

I was reading:

    Summer School on Generic and Effectful Programming
    Applying Type-Level and Generic Programming in Haskell
    Andres Löh, Well-Typed LLP
    https://www.cs.ox.ac.uk/projects/utgp/school/andres.pdf

and trying out the various code fragments, and was mildly surprised by
the need for -XUndecidableInstances when deriving the 'Show' instance
for the heterogeneous N-ary product 'NP':

    deriving instance All (Compose Show f) xs => Show (NP f xs)

Without "UndecidableInstances" I get:

    • Variable ‘k’ occurs more often
        in the constraint ‘All (Compose Show f) xs’
        than in the instance head ‘Show (NP f xs)’
      (Use UndecidableInstances to permit this)
    • In the stand-alone deriving instance for
        ‘All (Compose Show f) xs => Show (NP f xs)’

This is not related to automatic deriving, as a hand-crafted instance
triggers the same error:

    instance All (Compose Show f) xs => Show (NP f xs) where
        show Nil       = "Nil"
        show (x :* xs) = show x ++ " :* " ++ show xs

Is there some other way of putting together the building blocks that
avoids the need for 'UndecidableInstances', or is that somehow
intrinsic to the type of NP construction?
I guess I should also ask whether there's a way to define something
equivalent to 'Compose' without 'UndecidableSuperClasses', and perhaps
the two are not unrelated.

[ Perhaps I should be more blasé about enabling these extensions, but
  I prefer to leave sanity checks enabled when possible. ]

-- 
Viktor.

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE UndecidableSuperClasses #-}

import GHC.Exts (Constraint)

-- | Map a constraint over a list of types.
type family All (c :: k -> Constraint) (xs :: [k]) :: Constraint where
    All _ '[]       = ()
    All c (x ': xs) = (c x, All c xs)

-- | With @g@ a type constructor and @f@ a constraint, define @Compose f g@ as
-- a constraint on @x@ to mean that @g x@ satisfies @f@.
--
-- Requires:
--
-- 'ConstraintKinds'
-- 'FlexibleInstances'
-- 'MultiParamTypeClasses'
-- 'TypeOperators'
-- 'UndecidableSuperClasses'
--
class (f (g x)) => (Compose f g) x
instance (f (g x)) => (Compose f g) x

-- | Type-level 'const'.
newtype K a b = K { unK :: a }

deriving instance Show a => Show (K a b)

-- | N-ary product over an index list of types @xs@ via an interpretation
-- function @f@, constructed as a list of elements @f x@.
--
data NP (f :: k -> *) (xs :: [k]) where
    Nil  :: NP f '[]
    (:*) :: f x -> NP f xs -> NP f (x ': xs)

infixr 5 :*

-- | If we can 'show' each @f x@, then we can 'show' @NP f xs@.
--
-- Requires: 'UndecidableInstances'
--
deriving instance All (Compose Show f) xs => Show (NP f xs)
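One structural alternative worth noting (a sketch, not from the thread, and not checked against the rest of Andres's material): instead of a single instance constrained by the All type family, give one Show instance per NP constructor. Each instance context is then structurally smaller than its instance head, so the Paterson termination conditions are satisfied and UndecidableInstances should not be needed for these instances:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeOperators #-}

data NP (f :: k -> *) (xs :: [k]) where
    Nil  :: NP f '[]
    (:*) :: f x -> NP f xs -> NP f (x ': xs)

infixr 5 :*

-- One instance per constructor: the context of the cons case mentions
-- only @f x@ and the tail @NP f xs@, both smaller than the head
-- @NP f (x ': xs)@, so no UndecidableInstances is required here.
instance Show (NP f '[]) where
    show Nil = "Nil"

instance (Show (f x), Show (NP f xs)) => Show (NP f (x ': xs)) where
    show (x :* xs) = show x ++ " :* " ++ show xs
```

The trade-off is losing the single All-constrained instance (and with it the Compose helper), so whether this composes with the rest of the development is a separate question.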