From simon at joyful.com Sun Jul 4 01:26:12 2021
From: simon at joyful.com (Simon Michael)
Date: Sat, 3 Jul 2021 15:26:12 -1000
Subject: [Haskell-cafe] ANN: hledger-1.22
Message-ID: <0AC89A75-3B78-4E31-8C0E-EA02E64AACC7@joyful.com>
I'm pleased to announce hledger 1.22 !
https://hledger.org/release-notes.html#hledger-1-22
describes the user-visible changes. Highlights include:
- Optimisations: about 20% faster overall, with register about 2x
faster and 4x more memory-efficient
- Bugfixes, including fixes for several regressions.
Thank you to the following release contributors this time around:
Felix Yan,
crocket,
Eric Mertens,
Damien Cassou,
charukiewicz,
Garret McGraw,
and especially Stephen Morgan.
hledger (https://hledger.org) is a dependable, cross-platform "plain
text accounting" tool, with command-line, terminal and web interfaces.
It is an actively maintained, largely compatible reimplementation of
Ledger CLI with many improvements. You can use it to track money,
time, investments, cryptocurrencies, inventory and more. See also the
Plain Text Accounting site (https://plaintextaccounting.org).
https://hledger.org/download shows all the ways to install hledger on
mac, windows or unix (stack, cabal, brew, nix, CI binaries, your
package manager..). Or, run this bash script to install or upgrade to
the latest release:
$ curl -sO https://raw.githubusercontent.com/simonmichael/hledger/master/hledger-install/hledger-install.sh
$ less hledger-install.sh # security review
$ bash hledger-install.sh
New users, check out https://hledger.org/quickstart
or the tutorials (with pictures!) at hledger.org -> FIRST STEPS,
or the videos at https://hledger.org/videos.html.
To get help, see https://hledger.org#help, and join one of our chat rooms:
- #hledger:matrix.org (http://matrix.hledger.org)
- #hledger:libera.chat (http://irc.hledger.org)
Beginners and experts, contributors, sponsors, and all feedback are most welcome!
Wishing you health and prosperity,
-Simon
From bruno.bernardo at tutanota.com Mon Jul 5 21:01:42 2021
From: bruno.bernardo at tutanota.com (Bruno Bernardo)
Date: Mon, 5 Jul 2021 23:01:42 +0200 (CEST)
Subject: [Haskell-cafe] FMBC 2021 - Call for Participation
Message-ID:
[ Please distribute, apologies for multiple postings. ]
========================================================================
3rd International Workshop on Formal Methods for Blockchains (FMBC) 2021 - Call for Participation
https://fmbc.gitlab.io/2021
July 18 and 19, 2021, Online, 8AM-10AM PDT
Co-located with the 33rd International Conference on Computer-Aided Verification (CAV 2021)
http://i-cav.org/2021/
---------------------------------------------------------
The FMBC workshop is a forum to identify theoretical and practical
approaches to formal methods for blockchain technology. Topics
include, but are not limited to:
* Formal models of Blockchain applications or concepts
* Formal methods for consensus protocols
* Formal methods for Blockchain-specific cryptographic primitives or protocols
* Design and implementation of Smart Contract languages
* Verification of Smart Contracts
The list of lightning talks and conditionally accepted papers is available on the FMBC 2021 website:
https://fmbc.gitlab.io/2021/program.html
There will be one keynote by David Dill, Lead Researcher on Blockchain at Novi/Facebook and professor emeritus at Stanford University, USA.
Registration
Registration to FMBC 2021 is done through the CAV 2021 registration form:
http://i-cav.org/2021/attending/
(*Early bird deadline is July 9.*)
From anthony.d.clayden at gmail.com Wed Jul 7 06:36:07 2021
From: anthony.d.clayden at gmail.com (Anthony Clayden)
Date: Wed, 7 Jul 2021 18:36:07 +1200
Subject: [Haskell-cafe] Prolog-style list syntax?
In-Reply-To:
References:
Message-ID:
Ok so I implemented both ideas:
> elem2 _ [] = False
> elem2 x [y,: ys] = x == y || elem2 x ys
> let [x,y,z,w,: ws] = "hello there" in ws -- yields "o there"
In fact it was easier to implement both ideas than make a special case of
`(:)`. So these work
> let [x,:+ xs] = [1,2,3,:+ Nily] in xs
> let [x,`ConsSet` xs] = [1,2,3,`ConsSet` NilSet] in xs
with decls
> infixr 5 :+
> data Listy a = Nily | a :+ (Listy a) deriving (Eq, Show, Read)
> infixr 5 `ConsSet`
> data Set a = NilSet | ConsSet a (Set a) deriving (Eq, Show, Read)
On Wed, 30 Jun 2021 at 19:00, Anthony Clayden
wrote:
> Ok thank you for the feedback, I get the message not to re-purpose
> currently valid syntax.
>
>
> > [x, y ,: ys ] -- ? not currently valid
>
> We could make list syntax work harder
>
> > [x, y ,:+ ys ]
>
> Means desugar the commas by applying constructor `:+` instead of `:`. That
> could be in general any constructor starting `:`.
>
> Or indeed could be a pattern synonym starting `:`, which is a 'smart
> constructor' to build the list maintaining some invariant.
>
>
>
From olf at aatal-apotheke.de Wed Jul 7 19:59:15 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Wed, 07 Jul 2021 21:59:15 +0200
Subject: [Haskell-cafe] How to debug SQL disconnect error
Message-ID:
Dear Café,
How can I go about examining an SQL error [1] when using persistent-
odbc to access an MS SQL database?
I set up a pristine MS SQL Server 2012 and installed the driver on Debian
using apt-get install msodbcsql17.
Upon running either of persistent's printMigration or runMigration, the
disconnect produces an error:
*** Exception: SqlError {seState = "["25000"]", seNativeError = -1,
seErrorMsg = "freeDbcIfNotAlready/SQLDisconnect: ["0: [Microsoft][ODBC
Driver 17 for SQL Server]Invalid transaction state"]"}
Running the printed SQL statements with isql works fine, so it
is not the particular migration that is at fault here. At present I don't even
know whether persistent, persistent-odbc, or some false assumption
about the connection configuration is the cause.
Thanks in advance
Olaf
[1] https://github.com/gbwey/persistent-odbc/issues/26
From raoknz at gmail.com Thu Jul 8 12:05:42 2021
From: raoknz at gmail.com (Richard O'Keefe)
Date: Fri, 9 Jul 2021 00:05:42 +1200
Subject: [Haskell-cafe] Prolog-style list syntax?
In-Reply-To:
References:
Message-ID:
I note that
(1) Clean uses ":" in lists the way Prolog uses "|".
(2) Before Prolog switched to "|", Edinburgh Prolog used ",.." (two tokens).
(3) There doesn't seem to be any reason why [p1,p2,p3]++pr could not be
a pattern, where p1, p2, p3 match elements and pr the rest. Erlang has
++ , but forbids ++ ,
presumably because it uses "|" like Prolog.
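For what it's worth, GHC can already approximate a `[p1,p2,p3]++pr` pattern today with the ViewPatterns extension; here is a hedged sketch (the function name `firstThree` is my own, not anything proposed in the thread):

```haskell
{-# LANGUAGE ViewPatterns #-}
-- splitAt 3 peels off a prefix, and the view pattern succeeds only
-- when exactly three elements are available; pr binds the rest.
firstThree :: [a] -> Maybe ((a, a, a), [a])
firstThree (splitAt 3 -> ([p1, p2, p3], pr)) = Just ((p1, p2, p3), pr)
firstThree _                                 = Nothing

main :: IO ()
main = print (firstThree "hello there")
```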
From carette at mcmaster.ca Thu Jul 8 14:40:14 2021
From: carette at mcmaster.ca (Carette, Jacques)
Date: Thu, 8 Jul 2021 14:40:14 +0000
Subject: [Haskell-cafe] Where have the discussions moved to?
Message-ID:
Haskell Café used to be quite active. But in the last 2 years, at least, there has been much less traffic. In the same time frame, Haskell's popularity and community have expanded greatly. The logical conclusion is that discussions have moved elsewhere.
I know that, for example, there is at times a large amount of discussion happening via GitHub on the proposals [1] (in fact, so much so that I unsubscribed). I am aware that the community page [2] does mention quite a few places - too many. Is there one of those places where the kinds of discussions that used to happen here have moved to?
Jacques
[1] https://github.com/ghc-proposals/ghc-proposals
[2] https://www.haskell.org/community/
From frank at dedden.net Thu Jul 8 17:01:16 2021
From: frank at dedden.net (Frank Dedden)
Date: Thu, 8 Jul 2021 19:01:16 +0200
Subject: [Haskell-cafe] [ANN] Copilot 3.4 - hard realtime C runtime
verification
Message-ID:
Dear all,
We are pleased to announce the release of Copilot 3.4, a stream-based DSL for
writing and monitoring embedded C programs, with an emphasis on correctness and
hard realtime requirements. Copilot is typically used as a high-level runtime
verification framework, and supports temporal logic (LTL, PTLTL and MTL),
clocks and voting algorithms.
Among others, Copilot has been used at the Safety Critical Avionics Systems
Branch of NASA Langley Research Center for monitoring test flights of drones.
This release introduces a number of bug fixes and deprecates functions that
have been superseded.
The newest release is available on hackage [1]. For more information, including
documentation, examples and links to the source code, please visit the webpage
[2].
Current emphasis is on facilitating use alongside other systems, and on improving
the codebase in terms of stability and test coverage. Users are encouraged to
participate by opening issues and asking questions via our GitHub repo [3].
Kind regards,
The Copilot developers:
- Frank Dedden
- Alwyn Goodloe
- Ivan Perez
[1] https://hackage.haskell.org/package/copilot
[2] https://copilot-language.github.io
[3] https://github.com/Copilot-Language/copilot
From curtis.dalves at gmail.com Thu Jul 8 17:11:21 2021
From: curtis.dalves at gmail.com (Curtis D'Alves)
Date: Thu, 8 Jul 2021 13:11:21 -0400
Subject: [Haskell-cafe] Where have the discussions moved to?
In-Reply-To:
References:
Message-ID:
r/haskell (the Haskell subreddit) is quite popular
Curtis D'Alves
On Thu., Jul. 8, 2021, 10:45 a.m. Carette, Jacques,
wrote:
> Haskell Café used to be quite active. But in the last 2 years, at least,
> there has been much less traffic. In the same time frame, Haskell’s
> popularity and community have expanded greatly. The logical conclusion is
> that discussions have moved elsewhere.
>
>
>
> I know that, for example, there are at times a large amount of discussion
> happening via github on the proposals [1] (in fact, so much so that I
> unsubscribed). I am aware that the community page [2] does mention quite a
> few places – too many. Is there one of those places where the kinds of
> discussions that used to happen here have moved to?
>
>
>
> Jacques
>
>
>
> [1] https://github.com/ghc-proposals/ghc-proposals
>
> [2] https://www.haskell.org/community/
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From jclites at mac.com Thu Jul 8 17:11:48 2021
From: jclites at mac.com (Jeff Clites)
Date: Thu, 8 Jul 2021 10:11:48 -0700
Subject: [Haskell-cafe] Prolog-style list syntax?
In-Reply-To:
References:
Message-ID: <380B604F-ED88-4CB0-A9A9-E5074BB04724@mac.com>
> On Jul 8, 2021, at 5:05 AM, Richard O'Keefe wrote:
>
> (3) There doesn't seem to be any reason why [p1,p2,p3]++pr could not be
> a pattern, where p1, p2, p3 match elements and pr the rest.
The problem with that one is that:
[p1,p2,p3]++pr = list
is interpreted as defining “++”.
If the outermost expression isn’t a constructor or some new syntax (i.e., previously a syntax error) then I think you’ll have that problem.
Jeff
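Jeff's point can be demonstrated directly in standard Haskell (a small illustrative sketch of my own, hiding the Prelude (++) so the file compiles):

```haskell
-- A top-level equation whose left-hand side is "pattern ++ pattern"
-- is parsed as a *definition* of (++), not as a pattern match
-- against an existing list value.
import Prelude hiding ((++))

-- This looks like it might bind p1 and pr against a list, but it
-- actually defines a new (++), with [p1] as its first argument
-- pattern (so it is only defined for singleton first arguments):
[p1] ++ pr = p1 : pr

main :: IO ()
main = print ("a" ++ "bc")  -- uses our locally defined (++)
```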
From lemming at henning-thielemann.de Thu Jul 8 17:20:00 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Thu, 8 Jul 2021 19:20:00 +0200 (CEST)
Subject: [Haskell-cafe] Hyrum's law (Was: [RFC] Support Unicode characters
in instance Show String)
In-Reply-To:
References: <09808E7B-6AF7-46C6-8E8D-35E1FD0EF121@gmail.com>
<0f67eb49-f28d-af20-2754-e29e945b6c23@iki.fi>
Message-ID:
moving to Haskell Cafe
On Thu, 8 Jul 2021, Bardur Arantsson wrote:
> On 08/07/2021 17.53, Oleg Grenrus wrote:
>
> [--snip--]
>
>>
>> 78 out of 2819 tests failed (35.88s)
>>
>
> The use of Show in 'golden' test suites is interesting because Show
> doesn't really guarantee any real form of stability in its output.
>
> I guess Hyrum's Law applies here too.
>
> Anyway... just an idle observation. Obviously, breaking loads of test
> suites is going to be hard to swallow.
I found this explanation:
https://www.hyrumslaw.com/
Nice to have a name for this observation.
I think the problem can be relaxed if there are multiple implementations
of the same interface. But then you often find that the implementations
do not even adhere to the interface, so people start guessing what the
interface actually means.
I hope that in the future we can describe interfaces more formally, such
that users know what they can rely on.
From oleg.grenrus at iki.fi Thu Jul 8 17:28:12 2021
From: oleg.grenrus at iki.fi (Oleg Grenrus)
Date: Thu, 8 Jul 2021 20:28:12 +0300
Subject: [Haskell-cafe] Where have the discussions moved to?
In-Reply-To:
References:
Message-ID:
Also https://discourse.haskell.org/ , which has some overlap with Reddit
in the topics discussed.
Haskell's popularity and community have expanded greatly... to the point that
all the discussions no longer fit in one place. The cost of success :)
- Oleg
On 8.7.2021 20.11, Curtis D'Alves wrote:
> r/haskell (the Haskell subreddit) is quite popular
>
> Curtis D'Alves
>
> On Thu., Jul. 8, 2021, 10:45 a.m. Carette, Jacques,
> > wrote:
>
> Haskell Café used to be quite active. But in the last 2 years, at
> least, there has been much less traffic. In the same time frame,
> Haskell’s popularity and community have expanded greatly. The
> logical conclusion is that discussions have moved elsewhere.
>
>
>
> I know that, for example, there are at times a large amount of
> discussion happening via github on the proposals [1] (in fact, so
> much so that I unsubscribed). I am aware that the community page
> [2] does mention quite a few places – too many. Is there one of
> those places where the kinds of discussions that used to happen
> here have moved to?
>
>
>
> Jacques
>
>
>
> [1] https://github.com/ghc-proposals/ghc-proposals
>
>
> [2] https://www.haskell.org/community/
>
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
>
> Only members subscribed via the mailman list are allowed to post.
>
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From carter.schonwald at gmail.com Thu Jul 8 17:34:11 2021
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Thu, 8 Jul 2021 13:34:11 -0400
Subject: [Haskell-cafe] Hyrum's law (Was: [RFC] Support Unicode
characters in instance Show String)
In-Reply-To:
References: <09808E7B-6AF7-46C6-8E8D-35E1FD0EF121@gmail.com>
<0f67eb49-f28d-af20-2754-e29e945b6c23@iki.fi>
Message-ID:
Yeah. There's no easy best answer.
I can imagine having a different set of Show instances be part of each
Haskell language flavour, but Haskell and GHC aren't set up for doing that
with the base libraries, for a lot of architectural reasons.
On Thu, Jul 8, 2021 at 1:26 PM Henning Thielemann <
lemming at henning-thielemann.de> wrote:
>
> moving to Haskell Cafe
>
> On Thu, 8 Jul 2021, Bardur Arantsson wrote:
>
> > On 08/07/2021 17.53, Oleg Grenrus wrote:
> >
> > [--snip--]
> >
> >>
> >> 78 out of 2819 tests failed (35.88s)
> >>
> >
> > The use of Show in 'golden' test suites is interesting because Show
> > doesn't really guarantee any real form of stability in its output.
> >
> > I guess Hyrum's Law applies here too.
> >
> > Anyway... just an idle observation. Obviously, breaking loads of test
> > suites is going to be hard to swallow.
>
>
> I found this explanation:
> https://www.hyrumslaw.com/
>
> Nice to have a name for this observation.
>
>
> I think the problem can be relaxed if there are multiple implementations
> for the same interface. But then you often find, that the implementations
> do not even adhere to the interface, thus people start guessing what the
> interface actually means.
>
> I hope that in the future we can describe interfaces more formally, such
> that a user knows what he can rely on.
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Thu Jul 8 18:02:43 2021
From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis)
Date: Thu, 8 Jul 2021 19:02:43 +0100
Subject: [Haskell-cafe] Hyrum's law (Was: [RFC] Support Unicode
characters in instance Show String)
In-Reply-To:
References: <09808E7B-6AF7-46C6-8E8D-35E1FD0EF121@gmail.com>
<0f67eb49-f28d-af20-2754-e29e945b6c23@iki.fi>
Message-ID: <20210708180243.GF19939@cloudinit-builder>
On Thu, Jul 08, 2021 at 07:20:00PM +0200, Henning Thielemann wrote:
> moving to Haskell Cafe
> On Thu, 8 Jul 2021, Bardur Arantsson wrote:
> > The use of Show in 'golden' test suites is interesting because Show
> > doesn't really guarantee any real form of stability in its output.
> >
> > I guess Hyrum's Law applies here too.
> >
>
> I found this explanation:
> https://www.hyrumslaw.com/
which is:
Hyrum's Law
An observation on Software Engineering
Put succinctly, the observation is this:
With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.
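The golden-test concern from earlier in the thread can be made concrete with a small example (my own illustration). GHC's current Show instance for String escapes non-ASCII characters numerically, so a golden file ends up recording the escaped form; any change to that rendering breaks the test, even though Show promises neither form:

```haskell
-- 'é' is code point 233, so the current Show instance renders it
-- as the decimal escape \233 rather than as a literal character.
main :: IO ()
main = do
  print "café"           -- prints "caf\233" under the current instance
  print (show "café")    -- the escaped rendering, shown as a String
```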
From msantolu at barnard.edu Thu Jul 8 22:13:56 2021
From: msantolu at barnard.edu (Mark Santolucito)
Date: Thu, 8 Jul 2021 18:13:56 -0400
Subject: [Haskell-cafe] FMCAD Student Forum CFP (Deadline Sat July 10)
Message-ID:
# Student Forum
Continuing the tradition of the previous years, FMCAD 2021 is hosting a
Student Forum that provides a platform for students at any career stage
(undergraduate or graduate) to introduce their research to the wider Formal
Methods community, and solicit feedback. The Student Forum will be held in
a hybrid format, online via video conferencing.
## Submissions
Submissions must be short reports describing research ideas or ongoing work
that the student is currently pursuing. Joint submissions from two students
are allowed, provided the students contributed equally to the work -
however, joint submissions must be presented by a single student. The topic
of the reports must be within the scope of the FMCAD conference. These
reports will NOT be published, thus we welcome reports based on already
submitted/published papers. However, the novel aspects to be addressed in
future work must be clearly described.
Submissions should follow the same formatting guidelines as those for
regular FMCAD conference submissions, except that the length is limited to
2 pages IEEE format, including all figures and references.
## Important Dates
- Student forum submission: July 10, 2021
- Student forum notification: Aug 6, 2021
These deadlines are 11:59 pm AoE (Anywhere on Earth)
More info here: https://fmcad.org/FMCAD21/student_forum/
## Main Activities
### Student Forum Talks.
Each student will give a lightning talk.
In the case that a student is attending FMCAD physically, the talk will be
given in-person at the conference.
In the case that a student is attending FMCAD remotely, the talk will be
given over Zoom.
### Discussion Groups.
Students will have the opportunity to explain and discuss their work in
small groups. More details to come on the logistics of discussion for
remote participants.
Submissions for the event must be short reports describing research ideas
or ongoing work that the student is currently pursuing, and must be within
the scope of FMCAD. Work, part of which has been previously published, will
be considered; the novel aspect to be addressed in future work must be
clearly described in such cases. All submissions will be reviewed by a
subgroup of FMCAD committee members.
## Format
The event will consist of short presentations by the student authors of
each accepted submission, and of a virtual poster session. All participants
of the conference are encouraged to attend the talks, ask questions and
discuss with their fellow students in the virtual poster sessions.
Instructions for the preparation of the talks and poster sessions will be
announced on notification of acceptance.
## Visibility
Accepted submissions will be listed, with title and author name, in the
event description in the conference proceedings. The authors will also have
the option to upload their slide deck/poster/presentation to the FMCAD
website. The report itself will not appear in the FMCAD proceedings; thus,
the presentation at FMCAD should not interfere with potential future
submissions of this research (to FMCAD or elsewhere).
The best contributions (determined by public vote by attendees) will be
given public recognition and a certificate at the event.
***Forum Chair***
Mark Santolucito (msantolu at barnard.edu) chairs the Student Forum. Feel free
to send an email if you have questions about the event.
From anthony.d.clayden at gmail.com Thu Jul 8 23:28:30 2021
From: anthony.d.clayden at gmail.com (Anthony Clayden)
Date: Fri, 9 Jul 2021 11:28:30 +1200
Subject: [Haskell-cafe] Prolog-style list syntax?
In-Reply-To:
References:
Message-ID:
Thanks Richard; see the discussion last month (same Subject line) that
considered and rejected various ideas.
On Fri, 9 Jul 2021 at 00:05, Richard O'Keefe wrote:
> I note that
> (1) Clean uses ":" in lists the way Prolog uses "|".
>
Yeah, that's where I started. [x : xs] is already valid Haskell, meaning
[ (x : xs) ].
> (2) Before Prolog switched to "|", Edinburgh Prolog used ",.." (two
> tokens).
>
Haskell already has [1, 2 .. 5] for arithmetic sequences. `,..` wouldn't
quite be ambiguous, but it is dangerously close.
(3) There doesn't seem to be any reason why [p1,p2,p3]++pr could not be
> a pattern, where p1, p2, p3 match elements and pr the rest.
Oh but `++` is an operator, not a constructor. Syntactically that's not a
pattern.
So [p1, p2]++pr++pq is a valid expression; what would it mean as a pattern?
Somebody in last month's discussion wanted similar, especially for strings
"prefix"++pr.
But it would be valid in a pattern only if the lhs were a pattern, and we
made a special syntactic case for ++. Then what if there's a user-defined
override for ++?
Also see my OP last month, I want the list to be the thing in [ ], not also
some trailing expression.
> Erlang has ++ , but forbids ++
> ,
> presumably because it uses "|" like Prolog.
>
From anthony.d.clayden at gmail.com Thu Jul 8 23:56:50 2021
From: anthony.d.clayden at gmail.com (Anthony Clayden)
Date: Fri, 9 Jul 2021 11:56:50 +1200
Subject: [Haskell-cafe] Where have the discussions moved to?
Message-ID:
Yes I miss the free-and-easy atmosphere of the cafe.
As of 9 or 10 years ago, we thought we'd nearly figured out how to fix the
bogusness with FunDeps.
And there were several ideas for better records.
But no: GHC's FunDeps are just as bad as in 2006; records have got a bit of
lipstick.
My take is that GHC has got so loaded with features that it's almost impossible
to engineer in any 'tweaks'.
So there has to be a bloated bureaucratic proposals process to make sure
nothing gets broken.
No free-wheeling what-if kind of discussion.
Reddit I find almost impossible to follow. The Discourse site seems to be
too cut-and-dried.
If you want to peek at what code people are cutting, StackOverflow gets
some surprising material.
But yeah. Haskell is different these days. In a way I don't like. It
doesn't feel like 'popularity and community has expanded'.
AntC
From safinaskar at mail.ru Fri Jul 9 02:18:22 2021
From: safinaskar at mail.ru (Askar Safin)
Date: Fri, 09 Jul 2021 05:18:22 +0300
Subject: [Haskell-cafe] [ANN] parser-unbiased-choice-monad-embedding - the
 best parsing library; it is based on arrows (was: Pearl! I just proved
 theorem about impossibility of monad transformer for parsing with (1)
 unbiased choice and (2) ambiguity checking before running embedded monadic
 action (also, I THREAT I will create another parsing lib))
Message-ID: <1625797102.508724549@f108.i.mail.ru>
Hi.
I am announcing my parsing library https://hackage.haskell.org/package/parser-unbiased-choice-monad-embedding .
I think it is the best parsing library, and you should always use it instead of other solutions. I will tell
you why. You may check the comparison table: https://paste.debian.net/1203863/ (if you don't
understand the table, don't worry; come back to it after reading this mail).
My library is a solution to the problem described in this e-mail thread, so read it for motivation:
https://mail.haskell.org/pipermail/haskell-cafe/2021-June/134094.html .
Now let me describe my solution in detail.
I will distinguish parser errors (i.e. "no parse" or "ambiguous parse") from semantic errors
(i.e. "division by zero", "undefined identifier", "type mismatch", etc.).
So, now I will show you a parser with unbiased choice, which allows monad embedding.
As a very good introduction to arrows I recommend this:
https://ocharles.org.uk/guest-posts/2014-12-21-arrows.html .
We start from a classic parser with this type:
newtype ParserClassic t a = ParserClassic ([t] -> [(a, [t])])
You can make it an instance of Functor, Applicative, Monad and Alternative.
This type is similar to ReadS from base
( https://hackage.haskell.org/package/base-4.15.0.0/docs/Text-ParserCombinators-ReadP.html#t:ReadS )
and to "type Parser = StateT String []" from example here:
https://hackage.haskell.org/package/transformers-0.5.5.0/docs/Control-Monad-Trans-Class.html .
I will not give more information; feel free to find it on the internet.
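For readers following along, here is a minimal runnable sketch of ParserClassic with the instances mentioned above; the helper `satisfy` and the demo in `main` are my own illustrative additions, not part of the announced library:

```haskell
import Control.Applicative (Alternative (..))

-- The classic "list of successes" parser from the post.
newtype ParserClassic t a = ParserClassic { runP :: [t] -> [(a, [t])] }

instance Functor (ParserClassic t) where
  fmap f (ParserClassic p) = ParserClassic $ \ts -> [ (f a, r) | (a, r) <- p ts ]

instance Applicative (ParserClassic t) where
  pure a = ParserClassic $ \ts -> [(a, ts)]
  ParserClassic pf <*> ParserClassic pa =
    ParserClassic $ \ts -> [ (f a, r') | (f, r) <- pf ts, (a, r') <- pa r ]

instance Monad (ParserClassic t) where
  ParserClassic p >>= k =
    ParserClassic $ \ts -> [ res | (a, r) <- p ts, res <- runP (k a) r ]

instance Alternative (ParserClassic t) where
  empty = ParserClassic (const [])
  -- Unbiased choice: results of both branches are returned.
  ParserClassic p <|> ParserClassic q = ParserClassic $ \ts -> p ts ++ q ts

-- Match one token satisfying a predicate.
satisfy :: (t -> Bool) -> ParserClassic t t
satisfy ok = ParserClassic go
  where
    go (t : ts) | ok t = [(t, ts)]
    go _               = []

main :: IO ()
main = print (runP (some (satisfy (== 'a'))) "aab")
```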
Now let's replace "a" with the Kleisli arrow "b -> m c". We get this:
newtype Parser1 t m b c = Parser1 ([t] -> [(b -> m c, [t])])
Here is the resulting parsing library and an example: https://godbolt.org/z/qsrdKefjT (backup:
https://paste.debian.net/1203861/ ). We can use this parser in Applicative style. And when we
need to lift something into the embedded monad, we resort to Arrow style. I didn't test this code much.
Parser1 cannot be a Monad (I proved this in previous letters).
At this point someone may ask: why do we need arrows? Can we achieve the same effect using
Applicative only? Yes, we can. Here is the result: https://godbolt.org/z/ocY3csWjs (backup:
https://paste.debian.net/1203862/ ); I will call this method "anti-arrow". But we have two
problems here: first, I think this method makes parser code uglier. Second, this method is
limited, and I will show you why later.
Okay, now back to arrows.
Still, we have these problems (I mean in our arrow code):
- Handling left-recursive grammars is tricky (it is possible, but not as simple as for right-recursive
ones)
- Parsing errors (as opposed to semantic error messages) are not great, i.e. if there are no valid
parses, we don't get any additional information; we don't know which token caused the error
- We don't track locations, so semantic errors are not great either (i.e. we want to have location
info to embed into semantic error messages)
- I suspect this parsing library will have very bad speed, possibly O(exp(input size))
So, let's combine our ideas with the package Earley ( https://hackage.haskell.org/package/Earley ).
Earley has the type "Prod r e t a". Let's replace "a" with the Kleisli arrow "b -> m c".
Also, let's wrap this Kleisli arrow in "L" from srcloc ( https://hackage.haskell.org/package/srcloc )
to get location info. We wrap "t" in "L", too.
Thus we get this type:
newtype ArrowProd r e t m b c = ArrowProd (Prod r e (L t) (L (b -> m c)))
Here is the resulting library with an example: https://hackage.haskell.org/package/parser-unbiased-choice-monad-embedding .
(I didn't test it much.) (I recommend reading the docs for Earley first.) (My library is designed to be used
with an external lexer; lexer-applicative is recommended.)
The example also uses lexer-applicative, because lexer-applicative automatically wraps tokens in "L".
So, what have we got? We achieved the original goals, i.e.:
- We have a combinator parser with unbiased choice. Unfortunately, it is not monadic, but it is
Applicative and Arrow
- We can embed a monad, for example, to handle semantic errors
- We can check for parsing errors before executing the embedded monadic action
Additionally, we solved the 4 remaining problems mentioned above, i.e.:
- Handling left-recursive grammars is as simple as handling right-recursive ones (thanks to
Earley's RecursiveDo)
- Parsing errors are OK
- We track locations and can embed them into semantic errors
- We have relatively good speed thanks to Earley
What else? We can test a grammar for ambiguity using
https://hackage.haskell.org/package/Earley-0.13.0.1/docs/Text-Earley.html#v:upTo (I didn't wrap
this function, but that can easily be done).
Personally, I think my parsing library is simply the best. :) And one should always use it
instead of all the others. Why do I think this? Because:
- We all know that CFG is better than PEG, i.e. unbiased choice is better than biased
- I don't want to merely produce an AST; I want to process information while I parse, and this
processing will uncover some semantic errors
- So I want unbiased choice combined with handling of semantic errors
- The only existing solutions that can do this are my library and "happy"
- But "happy" doesn't automatically track locations (as far as I know, it tracks lines, but not columns)
- So the best parsing solution is my library :)
My library has another advantage over happy: by extensive use of Alternative's "many" (and
arrow banana brackets) you can write things which are impossible to write in happy. Consider
this artificial example (a subset of Pascal):
---
var
a: integer; b: integer;
begin
a := b + c;
end.
---
This is its parser, written with my library (completely untested):
---
arrowRule (proc () -> do {
sym TVar -< ();
declared <- many (ident <* sym TColon <* sym TInteger <* sym TSemicolon) -< ();
sym TBegin -< ();
-- Banana brackets
(|many (do {
x <- ident -< ();
lift -< when (x `notElem` declared) $ Left "Undeclared identifier";
sym TAssign -< ();
-- My library doesn't have "sepBy", but it can be easily created
(|sepBy (do {
y <- ident -< ();
lift -< when (y `notElem` declared) $ Left "Undeclared identifier";
returnA -< ();
})|) [TPlus];
sym TSemicolon -< ();
returnA -< ();
|);
sym TEnd -< ();
sym TDot -< ();
returnA -< ();
})
---
It is impossible to write similar code using "happy" with similar ergonomics, because in
happy we would not have access to "declared" inside the production for the sum. The only way we
could access "declared" is by using a state monad (for example, StateT) and putting "declared"
into that state monad. But in my library you don't have to use any StateT!
Now let me say why the aforementioned "anti-arrow" style is not enough. Let's try to rewrite this
Pascal example in anti-arrow style:
---
antiArrowLift $ do { -- ApplicativeDo
sym TVar;
declared <- many (ident <* sym TColon <* sym TInteger <* sym TSemicolon);
sym TBegin;
many $ antiArrowLift $ do {
x <- ident;
sym TAssign;
-- My library doesn't have "sepBy", but it can be easily created
sepBy [TPlus] $ antiArrowLift $ do {
y <- ident;
pure $ when (y `notElem` declared) $ Left "Undeclared identifier"; -- Oops
};
sym TSemicolon;
pure $ when (x `notElem` declared) $ Left "Undeclared identifier"; -- Oops
};
sym TEnd;
sym TDot;
pure ();
}
---
Looks good. But there is a huge problem here: look at the lines marked "Oops". They refer to
"declared", but they cannot, because the outer "do" is an ApplicativeDo. So, yes, mere
Applicative is not enough.
Does my library have disadvantages? Of course it does!
- It is not monadic
- It cannot statically check that a grammar is LR(1) (as far as I understand, happy can do this)
- My library has relatively good asymptotic complexity (the same as Earley), but it is still not the fastest
- My library will freeze on infinitely ambiguous grammars. Attempting to check such a grammar
for ambiguity using Earley's "upTo" will freeze, too. See also: https://github.com/ollef/Earley/issues/54
- My library is based on unbiased choice and CFGs (as opposed to biased choice and PEGs).
I consider this an advantage, but my library will not help if you want to parse a language defined by some PEG
My library is unfinished. The following things are needed:
- A combinator similar to Alternative's "many", but where every item has access to the
already-parsed part of the list. Such a combinator should be designed to work with banana brackets
- Combinators similar to parsec's chainl and chainr (my library already supports left
and right recursion thanks to Earley, but such combinators would still be useful)
- The already mentioned "sepBy"
- I didn't wrap all of Earley's functionality; some of it is left unwrapped
I don't have the motivation to fix these things, because I have decided to switch to Rust as my main language.
Final notes
- It is quite possible that what I actually need is attribute grammars or syntax-directed translation. I didn't explore this
- I suspect that my parser is an arrow transformer (whatever that means)
Side note: I really want a pastebin for reproducible shell scripts (possibly Dockerfiles); do you know of one?
Answer me if you have any questions.
==
Askar Safin
http://safinaskar.com
https://sr.ht/~safinaskar
https://github.com/safinaskar
From ifl21.publicity at gmail.com Fri Jul 9 05:55:37 2021
From: ifl21.publicity at gmail.com (Pieter Koopman)
Date: Fri, 9 Jul 2021 05:55:37 +0000
Subject: [Haskell-cafe] IFL'21 Third call for papers
Message-ID:
================================================================================
IFL 2021
33rd Symposium on Implementation and Application of Functional Languages
venue: online
1 - 3 September 2021
https://ifl21.cs.ru.nl
================================================================================
News
- Paper submission details added
- Registration information added
Scope
The goal of the IFL symposia is to bring together researchers actively engaged in the
implementation and application of functional and function-based programming languages.
IFL 2021 will be a venue for researchers to present and discuss new ideas and concepts,
work in progress, and publication-ripe results related to the implementation and
application of functional languages and function-based programming.
Industrial track and topics of interest
This year's edition of IFL explicitly solicits original work concerning *applications*
of functional programming in industry and academia. These contributions will be
reviewed by experts with an industrial background.
Topics of interest to IFL include, but are not limited to:
* language concepts
* type systems, type checking, type inferencing
* compilation techniques
* staged compilation
* run-time function specialisation
* run-time code generation
* partial evaluation
* (abstract) interpretation
* meta-programming
* generic programming
* automatic program generation
* array processing
* concurrent/parallel programming
* concurrent/parallel program execution
* embedded systems
* web applications
* (embedded) domain-specific languages
* security
* novel memory management techniques
* run-time profiling and performance measurements
* debugging and tracing
* testing and proofing
* virtual/abstract machine architectures
* validation, verification of functional programs
* tools and programming techniques
* applications of functional programming in the industry, including
** functional programming techniques for large applications
** successes of the application of functional programming
** challenges encountered in applying functional programming
** any topic related to the application of functional programming that is
interesting for the IFL community
Post-symposium peer-review
Following IFL tradition, IFL 2021 will use a post-symposium review process to produce
the formal proceedings.
Before the symposium authors submit draft papers. These draft papers will be screened
by the program chairs to make sure that they are within the scope of IFL. The draft
papers will be made available to all participants at the symposium. Each draft paper
is presented by one of the authors at the symposium.
After the symposium every presenter is invited to submit a full paper, incorporating
feedback from discussions at the symposium. Work submitted to IFL may not be
simultaneously submitted to other venues; submissions must adhere to ACM SIGPLAN's
republication policy. The program committee will evaluate these submissions according
to their correctness, novelty, originality, relevance, significance, and clarity, and
will thereby determine whether the paper is accepted or rejected for the formal
proceedings. We plan to publish these proceedings in the International Conference
Proceedings Series of the ACM Digital Library, as in previous years. Moreover, the
proceedings will also be made publicly available as open access.
Important dates
Submission deadline of draft papers: 17 August 2021
Notification of acceptance for presentation: 19 August 2021
Registration deadline: 30 August 2021
IFL Symposium: 1-3 September 2021
Submission of papers for proceedings: 6 December 2021
Notification of acceptance: 3 February 2022
Camera-ready version: 15 March 2022
Submission details
All contributions must be written in English. Papers must use the ACM two-column
conference format, which can be found at:
http://www.acm.org/publications/proceedings-template
(For LaTeX users, start your document with
\documentclass[format=sigconf]{acmart}.)
Note that this format has a rather long but limited list of packages that can be
used. Please make sure that your document adheres to this list.
The submission Web page for IFL21 is
https://easychair.org/conferences/?conf=ifl21
Peter Landin Prize
The Peter Landin Prize is awarded to the best paper presented at the
symposium every year. The honoured article is selected by the program
committee
based on the submissions received for the formal review process. The prize
carries a cash award equivalent to 150 Euros.
Organisation
IFL 2021 Chairs: Pieter Koopman and Peter Achten, Radboud University, The
Netherlands
IFL Publicity chair: Pieter Koopman, Radboud University, The Netherlands
PC:
Peter Achten (co-chair) - Radboud University, Netherlands
Thomas van Binsbergen - University of Amsterdam, Netherlands
Edwin Brady - University of St. Andrews, Scotland
Laura Castro - University of A Coruña, Spain
Youyou Cong - Tokyo Institute of Technology, Japan
Olaf Chitil - University of Kent, England
Andy Gill - University of Kansas, USA
Clemens Grelck - University of Amsterdam, Netherlands
John Hughes - Chalmers University, Sweden
Pieter Koopman (co-chair) - Radboud University, Netherlands
Cynthia Kop - Radboud University, Netherlands
Jay McCarthy - University of Massachusetts Lowell, USA
Neil Mitchell - Facebook, England
Jan De Muijnck-Hughes - Glasgow University, Scotland
Keiko Nakata - SAP Innovation Center Potsdam, Germany
Jurriën Stutterheim - Standard Chartered, Singapore
Simon Thompson - University of Kent, England
Melinda Tóth - Eötvös Loránd University, Hungary
Phil Trinder - Glasgow University, Scotland
Meng Wang - University of Bristol, England
Viktória Zsók - Eötvös Loránd University, Hungary
Virtual symposium
Because of the Covid-19 pandemic, this year IFL 2021 will be an online event,
consisting of paper presentations, discussions and virtual social gatherings.
Registered participants can take part from anywhere in the world.
Registration
Please use the link below to register for IFL 2021:
https://docs.google.com/forms/d/e/1FAIpQLSdMFjo-GumKjk4i7szs7n4DhWqKt96t8ofIqshfQFrf4jnvsA/viewform?usp=sf_link
Thanks to the sponsors and the support of Radboud University, registration is free of charge.
From jaro.reinders at gmail.com Fri Jul 9 07:03:59 2021
From: jaro.reinders at gmail.com (Jaro Reinders)
Date: Fri, 9 Jul 2021 09:03:59 +0200
Subject: [Haskell-cafe] [ANN] parser-unbiased-choice-monad-embedding -
the best parsing library;
it is based on arrows (was: Pearl! I just proved theorem about impossibility
of monad transformer for parsing with (1) unbiased choice and (2) ambiguity
checking before running embedded monadic action (also, I THREAT I will
create another parsing lib))
In-Reply-To: <1625797102.508724549@f108.i.mail.ru>
References: <1625797102.508724549@f108.i.mail.ru>
Message-ID: <9412b74c-13f9-160c-8e16-d3120dcb9cd0@gmail.com>
You might also want to check out 'uu-parsinglib' [1]. It is also unbiased and it has
some features not mentioned in your comparison table, most notably error correction
and online (lazy) results. The lazy results in particular can allow you to write
parsers that run in constant memory [2]. The paper "Combinator Parsing: A Short
Tutorial" by Doaitse Swierstra describes the ideas and implementation (the advanced
features start in section 4) [3]. Unfortunately, I think it is no longer maintained.
There are also some other parsing libraries besides Earley that can deal with
left recursion, namely 'gll' [4] and 'grammatical-parsers' [5]. It might be worth
adding them to the comparison.
Cheers,
Jaro
[1] https://hackage.haskell.org/package/uu-parsinglib
[2]
https://discourse.haskell.org/t/memory-usage-for-backtracking-in-infinite-stream-parsing/1384/12?u=jaror
[3] http://www.cs.uu.nl/research/techreps/repo/CS-2008/2008-044.pdf
[4] https://hackage.haskell.org/package/gll
[5] https://hackage.haskell.org/package/grammatical-parsers
On 09-07-2021 04:18, Askar Safin via Haskell-Cafe wrote:
> Hi.
>
> I announce my parsing library https://hackage.haskell.org/package/parser-unbiased-choice-monad-embedding .
> I think it is the best parsing library, and you should always use it instead of other solutions. I will tell
> you why. You may check the comparison table: https://paste.debian.net/1203863/ (if you don't
> understand the table, don't worry; come back to it after reading this mail).
>
> My library is a solution to the problem described in this e-mail thread, so read it for motivation:
> https://mail.haskell.org/pipermail/haskell-cafe/2021-June/134094.html .
>
> Now let me describe my solution in detail.
>
> I will distinguish parser errors (i. e. "no parse" or "ambiguous parse") and semantic errors
> (i. e. "division by zero", "undefined identifier", "type mismatch", etc).
>
> So, now I will show you a parser with unbiased choice which allows monad embedding.
>
> As a very good introduction to arrows I recommend this:
> https://ocharles.org.uk/guest-posts/2014-12-21-arrows.html .
>
> We start from classic parser with this type:
>
> newtype ParserClassic t a = ParserClassic ([t] -> [(a, [t])])
>
> You can make it be instance of Functor, Applicative, Monad and Alternative.
>
> This type is similar to ReadS from base
> ( https://hackage.haskell.org/package/base-4.15.0.0/docs/Text-ParserCombinators-ReadP.html#t:ReadS )
> and to "type Parser = StateT String []" from example here:
> https://hackage.haskell.org/package/transformers-0.5.5.0/docs/Control-Monad-Trans-Class.html .
> I will not give more information; feel free to find it on the internet.
>
> Now let's replace "a" with a Kleisli arrow "b -> m c". We will get this:
>
> newtype Parser1 t m b c = Parser1 ([t] -> [(b -> m c, [t])])
>
> Here is the resulting parsing library and an example: https://godbolt.org/z/qsrdKefjT (backup:
> https://paste.debian.net/1203861/ ). We can use this parser in Applicative style. And when we
> need to lift something into the embedded monad, we resort to Arrow style. I didn't test this code much.
>
> Parser1 cannot be a Monad (I proved this in previous letters).
>
> At this point someone may ask: why do we need arrows? Can we achieve the same effect using
> Applicative only? Yes, we can. Here is the result: https://godbolt.org/z/ocY3csWjs (backup:
> https://paste.debian.net/1203862/ ); I will call this method "anti-arrow". But we have two
> problems here: first, I think this method makes parser code uglier. Second, this method is
> limited, and I will show you why later.
>
> Okay, now back to arrows.
>
> Still, we have these problems (I mean our arrow code):
> - Handling left-recursive grammars is tricky (it is possible, but not as simple as for right-recursive
> ones)
> - Parsing errors (as opposed to semantic error messages) are not great. I.e. if there are no valid
> parses, we don't get any additional information; we don't know which token caused the error
> - We don't track locations, so semantic errors are not great either (i.e. we want to have location
> info to embed into semantic error messages)
> - I suspect this parsing library will have very bad speed, possibly O(exp(input size))
>
> So, let's combine our ideas with package Earley ( https://hackage.haskell.org/package/Earley ).
> Earley has type "Prod r e t a". Let's replace "a" with Kleisli arrow "b -> m c".
> Also let's wrap this Kleisli arrow to "L" from srcloc ( https://hackage.haskell.org/package/srcloc )
> to get location info. Also, we wrap "t" to "L", too.
>
> Thus we get this type:
>
> newtype ArrowProd r e t m b c = ArrowProd (Prod r e (L t) (L (b -> m c)))
>
> Here is the resulting library with an example: https://hackage.haskell.org/package/parser-unbiased-choice-monad-embedding .
> (I didn't test it much.) (I recommend reading the Earley docs first.) (My library is designed to be used
> with an external lexer; lexer-applicative is recommended.)
>
> The example also uses lexer-applicative, because lexer-applicative automatically wraps tokens into "L".
>
> So, what have we got? We achieved the original goals, i.e.:
> - We have a combinator parser with unbiased choice. Unfortunately, it is not monadic, but it is
> Applicative and Arrow
> - We can embed a monad, for example, to handle semantic errors
> - We can detect parsing errors before executing the embedded monadic action
>
> Additionally we solved the 4 remaining problems mentioned above, i.e.:
> - Handling left-recursive grammars is as simple as handling right-recursive ones (thanks to Earley's
> RecursiveDo)
> - Parsing errors are OK
> - We track locations and we can embed them into semantic errors
> - We have relatively good speed thanks to Earley
>
> What else? We can test a grammar for ambiguity using
> https://hackage.haskell.org/package/Earley-0.13.0.1/docs/Text-Earley.html#v:upTo (I didn't wrap
> this function, but that can easily be done).
>
> Personally I think that my parsing library is simply the best. :) And that one should always use it
> instead of all others. Why do I think this? Because:
> - We all know that CFG is better than PEG, i.e. unbiased choice is better than biased
> - I don't want to merely produce an AST; I want to process information while I parse, and this
> processing will uncover some semantic errors
> - So I want unbiased choice with handling of semantic errors
> - The only existing solution which can do this is my library (and also "happy")
> - But "happy" doesn't automatically track locations (as far as I know it tracks lines, but not columns)
> - So the best parsing solution is my library :)
>
> My library has another advantage over happy: by extensive use of Alternative's "many" (and
> arrow banana brackets) you can write things that are impossible to write in happy.
> [... rest of the quoted message trimmed; it repeats the original message above verbatim ...]
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
>
From borgauf at gmail.com Fri Jul 9 17:10:10 2021
From: borgauf at gmail.com (Galaxy Being)
Date: Fri, 9 Jul 2021 12:10:10 -0500
Subject: [Haskell-cafe] Help understanding John Hughes' "The Design of a
Pretty-printing Library"
Message-ID:
Getting stuck in *Real World Haskell*'s chapter on pretty printing (chap
5), I decided to take a look at one of the suggested sources, i.e., John
Hughes' *The Design of a Pretty-printing Library*.
Not that I fully got the first three sections, still, let me skip ahead to
section 4, *Designing a Sequence Type*, where we're designing a supposedly
generic sequence type, I'm guessing in the same universal, fundamental way
that Peano's Axioms give a generic underpinning of the natural numbers.
Good. I get it. This is why Haskell is unique. So he says we take the
following *signature* as our starting point
nil :: Seq a
unit :: a -> Seq a
cat :: Seq a -> Seq a -> Seq a
list :: Seq a -> [a]
where nil, unit, and cat give us ways to build sequences, and list
is an *observation* [explained before]. The correspondence with the usual
list operations is
nil = []
unit x = [x]
cat = (++)
These operations are to satisfy the following laws:
xs `cat` ( ys `cat` zs ) = ( xs `cat` ys ) `cat` zs
nil `cat` xs = xs
xs `cat` nil = xs
list nil = []
list ( unit x `cat` xs ) = x : list xs
My first question is why must we satisfy these laws? Where did these laws
come from? Is this a universal truth thing? I've heard the words *monoid* and
*semigroup* tossed around. Are they germane here?
Next in *4.1 Term Representation* I'm immediately lost with the concept of
*term*. On a hunch, I borrowed the book *Term Rewriting and All That* by Baader
and Nipkow, which I'm sure goes very deep into the concept of terms, but I'm
not there yet. What is meant by *term*? As 4.1 continues, I find this
passage confusing:
The most direct way to represent values of [the?] sequence type is just as
terms of the algebra [huh?], for example using
data Seq a = Nil | Unit a | Seq a `Cat` Seq a
But this trivial representation does not exploit the algebraic laws that we know to
hold [huh?], and moreover the list observation will be a little tricky to
define (ideally we would like to implement observations by very simple,
non-recursive functions: the real work should be done in the
implementations of the Seq operators themselves). Instead, we may choose a
restricted subset of terms -- call them simplified forms -- into which
every term can be put using the algebraic laws. Then we can represent
sequences using a datatype that represents the syntax of simplified forms.
In this case, there is an obvious candidate for simplified forms: terms of
the form nil and unit x `cat` xs , where xs is also in simplified form.
Simplified forms can be represented using the type
data Seq a = Nil | a `UnitCat` Seq a
with the interpretation
Nil = nil
x `UnitCat` xs = unit x `cat` xs
All of this presupposes much math lore that I'm hereby asking someone on
this mailing list to point me towards. I threw in a few [huh?]s at specific
points, but in general, I'm not there yet. Any explanations appreciated.
⨽
Lawrence Bottorff
Grand Marais, MN, USA
borgauf at gmail.com
From bob at redivi.com Fri Jul 9 18:59:01 2021
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 9 Jul 2021 11:59:01 -0700
Subject: [Haskell-cafe] Help understanding John Hughes' "The Design of a
Pretty-printing Library"
In-Reply-To:
References:
Message-ID:
On Fri, Jul 9, 2021 at 10:11 AM Galaxy Being wrote:
> Getting stuck in *Real World Haskell*'s chapter on pretty printing (chap
> 5), I decided to take a look at one of the suggested sources, i.e., John
> Hughes' *The Design of a Pretty-printing Library
> *.
> Not that I fully got the first three section, still, let me skip ahead to
> section 4, *Designing a Sequence Type *where we're designing a supposedly
> generic sequence type, I'm guessing in the same universal, fundamental way
> that Peano's Axioms give a generic underpinning of the natural numbers.
> Good. I get it. This is why Haskell is unique. So he says we take the
> following *signature* as our starting point
>
> nil :: Seq a
> unit :: a -> Seq a
> cat :: Seq a -> Seq a -> Seq a
> list :: Seq a -> [a]
>
> where nil , unit , and cat give us ways to build sequences, and list is
> an *observation *[explained before]. The correspondence with the usual
> list operations is
>
> nil = []
> unit x = [x]
> cat = (++)
>
> These operations are to satisfy the following laws:
>
> xs `cat` ( ys `cat` zs ) = ( xs `cat` ys ) `cat` zs
> nil `cat` xs = xs
> xs `cat` nil = xs
> list nil = []
> list ( unit x `cat` xs ) = x : list xs
>
> My first question is why must we satisfy these laws? Where did these laws
> come from? Is this a universal truth thing? I've heard the word *monoid *and
> *semigroup* tossed around. Is this germane here?
>
This is how the paper defines what the word Sequence means in a
mathematical way; otherwise there might be some ambiguity about what a
Sequence should be. The laws are the design for the Sequence type. Some of
them also happen to be the laws that define Monoid and Semigroup.
The first law is associativity, which means Sequence can implement
Semigroup. The next two laws are the identity laws, which means Sequence
can implement Monoid. https://wiki.haskell.org/Monoid#The_basics
The last two laws say that you can take any Sequence and make a list out of
it without losing any information (all elements in the same order). This is
also related to Foldable but it's stronger because of the ordering.
https://wiki.haskell.org/Foldable_and_Traversable
Reading all of the laws together (Monoid + Foldable with ordering) you can
infer that Sequence is isomorphic to list. With mconcat you can turn a list
into a Sequence and with list you can get the original list back. This also
means that Sequence should be able to lawfully (but perhaps not as
efficiently) implement any other typeclass or operation that is implemented
for list.
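As a concrete illustration of the points above, here is a small runnable sketch of the paper's naive term representation (section 4.1): the first three laws are exactly the Semigroup/Monoid laws, but on this representation they hold only "up to" the list observation.

```haskell
-- Naive term representation: one constructor per operation.
data Seq a = Nil | Unit a | Cat (Seq a) (Seq a)
  deriving (Eq, Show)

-- The observation: flatten a term to an ordinary list.
list :: Seq a -> [a]
list Nil       = []
list (Unit x)  = [x]
list (Cat l r) = list l ++ list r

-- Associativity and identity are the Semigroup/Monoid laws, but
-- note that e.g. Cat Nil s is a *different term* than s; the laws
-- only hold after applying 'list'.
instance Semigroup (Seq a) where (<>) = Cat
instance Monoid (Seq a) where mempty = Nil
```

For example, Cat Nil (Unit 1) and Unit 1 are distinct terms, yet both flatten to [1], which is exactly why the paper moves on to a representation where the laws hold on the nose.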
> Next in *4.1 Term Representation* I'm immediately lost with the concept of *term*.
> On a hunch, I borrowed a book *Term Rewriting and All That* from Baader
> and Nipkow which I'm sure goes very deep into the concept of terms, but I'm
> not there yet. What is meant by *term*? As 4.1 continues, confusing is
> this passage
>
> The most direct way to represent values of [the?] sequence type is just as
> terms of the
> algebra [huh?], for example using
>
> data Seq a = Nil | Unit a | Seq a `Cat` Seq a
>
> But this trivial representation does not exploit the algebraic laws that
> we know to
> hold [huh?], and moreover the list observation will be a little tricky to
> define (ideally we would like to implement observations by very simple,
> non-recursive functions: the real work should be done in the
> implementations of the Seq operators themselves). Instead, we may choose a
> restricted subset of terms -- call them simplified forms -- into which
> every term can be put using the algebraic laws. Then we can represent
> sequences using a datatype that represents the syntax of simplified forms.
>
This essentially states that the Unit a constructor is redundant: in the
simplified form, unit x becomes x `UnitCat` Nil. Having fewer possible
representations makes a data type much easier to reason about and usually
also simplifies its implementation.
> In this case, there is an obvious candidate for simplified forms: terms of
> the form nil and unit x `cat` xs , where xs is also in simplified form.
> Simplified forms can be represented using the type
>
> data Seq a = Nil | a `UnitCat` Seq a
>
> with the interpretation
>
> Nil = nil
> x `UnitCat` xs = unit x `cat` xs
>
> All of this presupposes much math lore that I'm hereby asking someone on
> this mailing list to point me towards. I threw in a few [huh?] as specific
> points, but in general, I'm not there yet. Any explanations appreciated.
>
This section of the paper explains how a first attempt at implementing a
Sequence type from first principles could end up with a representation that
is indistinguishable from the list type other than the names (Nil = [];
UnitCat = (:)). The important part is the end of *4.1*, where we learn why
this representation doesn't have the desired performance characteristics
for this use case; then in *4.2* a more suitable representation is derived.
-bob
From jclites at mac.com Fri Jul 9 19:36:36 2021
From: jclites at mac.com (Jeff Clites)
Date: Fri, 9 Jul 2021 12:36:36 -0700
Subject: [Haskell-cafe] Help understanding John Hughes' "The Design of a
Pretty-printing Library"
In-Reply-To:
References:
Message-ID: <331CAEC9-BA0C-462B-8ABF-E59BAA2095D8@mac.com>
> On Jul 9, 2021, at 10:10 AM, Galaxy Being wrote:
>
> Getting stuck in Real World Haskell's chapter on pretty printing (chap 5), I decided to take a look at one of the suggested sources, i.e., John Hughes' The Design of a Pretty-printing Library. Not that I fully got the first three section, still, let me skip ahead to section 4, Designing a Sequence Type where we're designing a supposedly generic sequence type, I'm guessing in the same universal, fundamental way that Peano's Axioms give a generic underpinning of the natural numbers.
Sort of. But he explicitly says he's not trying to define a new sequence type (more general than lists or something), but rather to see whether we could start from some requirements and from there deduce the list type. This is in the context of "deriving functional programs from specifications", as an example case. I didn't read the whole paper, so I'm not sure if this is directly relevant to designing a pretty-printing library, or if it's just warm-up. I suspect that the paper is more about deriving programs from specifications than it is about pretty printing.
> Good. I get it. This is why Haskell is unique.
BTW this isn’t how everyone approaches writing Haskell programs on a day-to-day basis.
> So he says we take the following signature as our starting point
>
> nil :: Seq a
> unit :: a -> Seq a
> cat :: Seq a -> Seq a -> Seq a
> list :: Seq a -> [a]
>
> where nil , unit , and cat give us ways to build sequences, and list is an observation [explained before]. The correspondence with the usual list operations is
>
> nil = []
> unit x = [x]
> cat = (++)
>
> These operations are to satisfy the following laws:
>
> xs `cat` ( ys `cat` zs ) = ( xs `cat` ys ) `cat` zs
> nil `cat` xs = xs
> xs `cat` nil = xs
> list nil = []
> list ( unit x `cat` xs ) = x : list xs
>
> My first question is why must we satisfy these laws? Where did these laws come from? Is this a universal truth thing?
These are just his requirements for how you’d expect a sequence to behave. (Concatenation is associative and appending an empty sequence does nothing, etc.)
> I've heard the word monoid and semigroup tossed around. Is this germane here?
The first three are the criteria for being a monoid, but that doesn’t explain anything because it doesn’t tell you why you’d want your sequence type to be a monoid. So it’s something to notice but it doesn’t explain the requirements. The requirements he just made up, from thinking about the concept of a sequence.
> Next in 4.1 Term Representation I'm immediately lost with the concept of term. On a hunch, I borrowed a book Term Rewriting and All That from Baader and Nipkow which I'm sure goes very deep into the concept of terms, but I'm not there yet. What is meant by term?
Oh no it’s much simpler. “Term” is just the math/logic version of the concept “value”. In programming, you have types (like Int) and values (like 3). In logic-speak 3 is a term (to distinguish it from “3 + 5”, which is an expression or a formula). A term is a mathematical object.
> As 4.1 continues, confusing is this passage
>
> The most direct way to represent values of [the?] sequence type is just as terms of the
> algebra [huh?], for example using
He’s talking about how you’d map the logical concepts into Haskell--how you would "represent the terms as values" or how you would decide what Haskell value would faithfully represent the conceptual term. (Basically, “value of the type” is the Haskell side, and “terms of the algebra” is the math/logic side.)
> data Seq a = Nil | Unit a | Seq a `Cat` Seq a
The nil/unit/cat from above are functions for constructing a sequence, so you could take that literally and represent a sequence by just making each of those functions be constructors. That’s sort of the simplest thing you could do, so he’s trying that first.
So in this approach:
nil = Nil
unit x = Unit x
cat x y = Cat x y
> But this trivial representation does not exploit the algebraic laws that we know to hold [huh?]
He’s just saying that doing it this simple way doesn’t satisfy the equalities we want above.
For instance:
nil `cat` xs
results in:
Cat Nil xs
which is not the same as xs (but rather, it's a new data structure with xs as part of it).
And then he goes on to build toward a different representation that satisfies the requirements.
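To make that failure concrete, here is a small self-contained sketch of the naive representation together with the `list` observation. (The code is my own reconstruction of the idea, not the paper's exact text.)

```haskell
-- The literal term representation: each algebra operation becomes a constructor.
data Seq a = Nil | Unit a | Seq a `Cat` Seq a

-- The observation: flatten a term into an ordinary list.
list :: Seq a -> [a]
list Nil         = []
list (Unit x)    = [x]
list (s `Cat` t) = list s ++ list t
```

As terms, `Nil `Cat` xs` and `xs` are different trees, so the law `nil `cat` xs = xs` fails at the representation level; it only holds "up to observation", i.e. `list (Nil `Cat` xs) == list xs`.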
I haven't looked at the Real World Haskell book in a while, but this paper is assuredly much more conceptual/abstract than the book, so you are probably getting help from something harder than where you started. It's interesting but keep in mind that research papers are often not simple to understand (and a book is at least trying to be).
I hope that helps a bit.
Jeff
From safinaskar at mail.ru Fri Jul 9 21:49:38 2021
From: safinaskar at mail.ru (Askar Safin)
Date: Sat, 10 Jul 2021 00:49:38 +0300
Subject: [Haskell-cafe] [ANN] parser-unbiased-choice-monad-embedding -
	the best parsing library; it is based on arrows (was: Pearl! I just
	proved theorem about impossibility of monad transformer for parsing
	with (1) unbiased choice and (2) ambiguity checking before running
	embedded monadic action (also, I THREAT I will create another
	parsing lib))
In-Reply-To: <9412b74c-13f9-160c-8e16-d3120dcb9cd0@gmail.com>
References: <1625797102.508724549@f108.i.mail.ru>
<9412b74c-13f9-160c-8e16-d3120dcb9cd0@gmail.com>
Message-ID: <1625867378.604099887@f126.i.mail.ru>
Friday, 9 July 2021, 10:08 +03:00 from "Jaro Reinders":
> You might also want to check out 'uu-parsinglib' [1]
Thanks for the answer. It is essential for me to have unbiased choice, the ability to embed a monad, and the ability to check parsing errors first and then semantic errors. I proved that this is possible with arrows only (in my previous letter and in the June letters). So the libraries you mentioned will help me only if they are arrow-based. I downloaded these libraries (uu-parsinglib, gll, grammatical-parsers) and found no line similar to "instance Arrow". So these libraries are not for me.
==
Askar Safin
http://safinaskar.com
https://sr.ht/~safinaskar
https://github.com/safinaskar
From jaro.reinders at gmail.com Fri Jul 9 23:01:16 2021
From: jaro.reinders at gmail.com (Jaro Reinders)
Date: Sat, 10 Jul 2021 01:01:16 +0200
Subject: [Haskell-cafe] [ANN] parser-unbiased-choice-monad-embedding -
the best parsing library;
it is based on arrows (was: Pearl! I just proved theorem about
impossibility of monad transformer for parsing with (1)
unbiased choice and (2) ambiguity checking before running
embedded monadic action (also, I THREAT I will create another
parsing lib))
In-Reply-To: <1625867378.604099887@f126.i.mail.ru>
References: <1625797102.508724549@f108.i.mail.ru>
<9412b74c-13f9-160c-8e16-d3120dcb9cd0@gmail.com>
<1625867378.604099887@f126.i.mail.ru>
Message-ID:
I was mostly replying to your claim:
> I think it is best parsing library, and you should always use it instead of other solutions.
uu-parsinglib is indeed not based on arrows, but it does have some other features that make it stand out.
So my previous mail was mostly to show that there are other interesting points in the design space.
On July 9, 2021 11:49:38 PM GMT+02:00, Askar Safin wrote:
>Friday, 9 July 2021, 10:08 +03:00 from "Jaro Reinders":
>> You might also want to check out 'uu-parsinglib' [1]
>
>Thanks for answer. It is essential for me to have unbiased choice, ability to embed a monad and ability to check parsing errors first and then semantic errors. I proved that this is possible with arrows only (in my previous letter and in June letters). So libraries you mentioned will help me only if they are arrow-based. I downloaded this libraries (uu-parsinglib, gll, grammatical-parsers) and found no line similar to "instance Arrow". So this libraries are not for me
>
>==
>Askar Safin
>http://safinaskar.com
>https://sr.ht/~safinaskar
>https://github.com/safinaskar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cdsmith at gmail.com Sat Jul 10 01:37:04 2021
From: cdsmith at gmail.com (Chris Smith)
Date: Fri, 9 Jul 2021 21:37:04 -0400
Subject: [Haskell-cafe] Invitation: Virtual Haskell Cohack,
July 10 (Saturday)
In-Reply-To:
References:
Message-ID:
Last minute reminder: the first Virtual Haskell Cohack is happening
tomorrow. Please review times and RSVP at
https://www.meetup.com/NY-Haskell/events/279067287/, and sign up to get the
Zoom link for the event.
On Fri, Jun 25, 2021 at 10:48 PM Chris Smith wrote:
> Hello everyone,
>
> I'd like to formally invite you to the Virtual Haskell Cohack, which will
> be running monthly beginning Saturday, July 10. Please review times and
> RSVP at https://www.meetup.com/NY-Haskell/events/279067287/ so that I
> know how many people are coming. I would need to upgrade my Zoom account
> if we exceed 100 people, but that would be amazing. Please make that
> happen!
>
> This is an event for the Haskell community to get together and work on
> collaborative projects. This could mean:
>
> - Teaching or learning Haskell
> - Writing code or docs on your favorite project and introducing it to
> others
> - Learning about other people's projects and getting some mentorship
> to contribute.
> - Pair programming on programming challenges like Advent of Code or
> Project Euler.
> - Just hanging out and geeking out talking category theory or GHC
> profiling techniques or whatever.
>
> The full agenda is listed at the link above, but it also includes
> lightning talks, which you can propose when you RSVP or just wait and
> volunteer to give one at the event. If you want to be sure you make it on
> the list, mention that you'd like to give a talk in your RSVP and you'll
> get priority on the signup list.
>
> Haskell-adjacent technologies are also welcome, so if you'd like to come
> hack in Elm, Idris, Agda, Coq, Ocaml, or whatever, you are welcome. The
> only rule we have is to be kind and welcoming to others.
>
> Thanks, and hope to see you there.
>
From kc1956 at gmail.com Sat Jul 10 03:53:05 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Fri, 9 Jul 2021 20:53:05 -0700
Subject: [Haskell-cafe] What can I use for a weighted graph? I thought
Data.Graph would work but I see no provision for edge weights?
Message-ID:
Hi
What can I use for a weighted graph? I thought Data.Graph would work but I
see no provision for edge weights?
From lemming at henning-thielemann.de Sat Jul 10 06:27:06 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sat, 10 Jul 2021 08:27:06 +0200 (CEST)
Subject: [Haskell-cafe] What can I use for a weighted graph? I thought
Data.Graph would work but I see no provision for edge weights?
In-Reply-To:
References:
Message-ID: <1b3eebe-6a3f-2e47-326c-683933bb8f84@henning-thielemann.de>
On Fri, 9 Jul 2021, Casey Hawthorne wrote:
> What can I use for a weighted graph? I thought Data.Graph would work but I see no provision for edge weights?
fgl or my comfort-graph package
From kc1956 at gmail.com Sat Jul 10 06:47:33 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Fri, 9 Jul 2021 23:47:33 -0700
Subject: [Haskell-cafe] What can I use for a weighted graph? I thought
Data.Graph would work but I see no provision for edge weights?
In-Reply-To: <1b3eebe-6a3f-2e47-326c-683933bb8f84@henning-thielemann.de>
References:
<1b3eebe-6a3f-2e47-326c-683933bb8f84@henning-thielemann.de>
Message-ID:
Thank you!
On Fri., Jul. 9, 2021, 11:27 p.m. Henning Thielemann, <
lemming at henning-thielemann.de> wrote:
>
> On Fri, 9 Jul 2021, Casey Hawthorne wrote:
>
> > What can I use for a weighted graph? I thought Data.Graph would work but
> I see no provision for edge weights?
>
> fgl or my comfort-graph package
>
From safinaskar at mail.ru Sat Jul 10 22:49:04 2021
From: safinaskar at mail.ru (Askar Safin)
Date: Sun, 11 Jul 2021 01:49:04 +0300
Subject: [Haskell-cafe] I solved my reversible parsing problem
	(was: How to do reversible parsing?)
In-Reply-To: <7b44d537-123d-cd6c-0893-e9049ec005d0@htwk-leipzig.de>
References: <2933bd03-6c59-495a-d464-149fff943dc0@htwk-leipzig.de>
<7b44d537-123d-cd6c-0893-e9049ec005d0@htwk-leipzig.de>
Message-ID: <1625957344.441799369@f743.i.mail.ru>
Hi.
I found a solution to the problem I stated in this letter: https://mail.haskell.org/pipermail/haskell-cafe/2021-January/133275.html . And now I will describe it.
I considered combinator libraries for reversible parsing, such as this one: https://hackage.haskell.org/package/invertible-syntax . And I rejected them, because they don't check the grammar for ambiguity statically, i.e. they cannot check whether a grammar is ambiguous before seeing input.
So I wrote this: https://paste.debian.net/1204012/ (tested with ghc 8.10.4, kleene-0.1, lattices-2.0.2, regex-applicative-0.3.4, QuickCheck-2.14.2, containers-0.6.2.1, check-cfg-ambiguity-0.0.0.1 [this is my package!], Earley-0.13.0.1, deepseq-1.4.4.0).
What is this? It is a reversible lexer (function "lexer") and a context-free parser (function "contextFree"). They are similar to alex+happy (or flex+bison). They are dynamic, i.e. they accept a dynamic description of the target language. Both functions output a pair (parser, printer).
In the case of lexing, these two functions are guaranteed to satisfy these equations (slightly simplified):
parse :: text -> Maybe tokens
print :: tokens -> Maybe text
parse s == Just a ==> print a /= Nothing
print a == Just s ==> parse s == Just a
In the case of context-free parsing and printing, the functions satisfy these equations (slightly simplified):
parse :: tokens -> [ast]
print :: ast -> Maybe tokens
a `elem` parse s ==> print a /= Nothing
print a == Just s ==> a `elem` parse s
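The lexing round-trip laws translate directly into checkable properties. Here is a hedged sketch using a toy parse/print pair of my own invention (single decimal digits as tokens), not the actual functions from the paste:

```haskell
import Data.Char (isDigit)

-- Toy reversible "lexer": text is String, tokens are Ints (one per digit).
parse :: String -> Maybe [Int]
parse s
  | all isDigit s = Just (map (read . (:[])) s)
  | otherwise     = Nothing

print' :: [Int] -> Maybe String
print' ts
  | all (\t -> t >= 0 && t <= 9) ts = Just (concatMap show ts)
  | otherwise                       = Nothing

-- The two laws from above, written as QuickCheck-style properties:
prop_parsePrint :: String -> Bool
prop_parsePrint s = case parse s of
  Just a  -> print' a /= Nothing   -- parse s == Just a ==> print a /= Nothing
  Nothing -> True

prop_printParse :: [Int] -> Bool
prop_printParse a = case print' a of
  Just s  -> parse s == Just a     -- print a == Just s ==> parse s == Just a
  Nothing -> True
```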
Additionally, "contextFree" performs heuristic ambiguity checking, so in most cases there is no more than one possible parse.
"lexer" parses using regular expressions (package "kleene") with the usual "maximal munch" rule. "contextFree" takes a CFG and parses using the Earley algorithm from the package "Earley". The function "contextFree" outputs an AST as opposed to a parse tree, i.e. "1 + (2 * 3)" and "1 + 2 * 3" give the same output.
My library is dynamic, i.e. it is designed for the case when you don't know your target language at compilation time. For example, my library is perfectly suited to the case of Isabelle's inner language, described in the original mail. In the more common case (i.e. when the target language is fixed) my library will not be so pleasant to deal with, because "contextFree" always outputs the same AST type, named "AST", i.e. you will need an additional stage to convert this universal AST type to your target type. This problem could be fixed in some hypothetical library based on Template Haskell, which would take the grammar description at compile time and generate the needed code.
I will not give additional details. Feel free to explore the code. Consider this code to be in the public domain.
In the original letter I stated 5 goals. Here is their status:
1. Solved
2. Mostly solved (the lexer is unambiguous; the context-free stage is checked using a heuristic algorithm)
3. Solved
4. Solved
5. Solved (unfortunately, my solution is always dynamic, i.e. it perfectly fits the "Isabelle's inner language" case, but more "normal" cases require boilerplate)
If you like this library, it is possible you will like my other libraries:
- https://hackage.haskell.org/package/check-cfg-ambiguity - checks CFG for ambiguity
- https://hackage.haskell.org/package/parser-unbiased-choice-monad-embedding - another parsing library. It doesn't support reversible parsing. It is designed for the static case, i.e. for when you know your target language beforehand. Check it out to see how it differs from other libraries.
==
Askar Safin
http://safinaskar.com
https://sr.ht/~safinaskar
https://github.com/safinaskar
From travis.cardwell at extrema.is Mon Jul 12 05:58:40 2021
From: travis.cardwell at extrema.is (Travis Cardwell)
Date: Mon, 12 Jul 2021 14:58:40 +0900
Subject: [Haskell-cafe] Haskell books index with RSS
Message-ID:
Dear Café,
I have often wished that I could subscribe to RSS feeds that notify me
when new books are published about a certain topic or by a certain
author. I recently implemented the idea in the article system of my
personal website and created an index of Haskell books.
https://www.extrema.is/articles/haskell-books
The index provides a simple UI for browsing by tag, and each book page
has basic information and links. By subscribing to the following RSS
feed, you get a notification when a new book is added to the index.
https://www.extrema.is/articles/tag/index:haskell-books.rss
This is a humble first implementation, but I am posting here in case
others find it useful.
Regards,
Travis
From kc1956 at gmail.com Mon Jul 12 07:02:00 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Mon, 12 Jul 2021 00:02:00 -0700
Subject: [Haskell-cafe] What can I use for a weighted graph? I thought
Data.Graph would work but I see no provision for edge weights?
In-Reply-To:
References:
<1b3eebe-6a3f-2e47-326c-683933bb8f84@henning-thielemann.de>
Message-ID:
Hi
Can't I use Data.Graph and use Data.Map to hold the edge weights and have
operations that tie the edges to their weights in a combined data structure?
FDS ~ Functional Data Structure
Making an FDS using two or more FDSs
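That combination works; here is a minimal sketch of the idea, pairing a Data.Graph adjacency structure with a Map from edges to weights. (The names and the record layout are my own, assuming only the containers package.)

```haskell
import Data.Graph (Graph, buildG)
import Data.Map (Map)
import qualified Data.Map as Map

-- A combined structure: the graph topology plus a weight for each edge.
data WeightedGraph w = WeightedGraph
  { wgGraph   :: Graph
  , wgWeights :: Map (Int, Int) w
  }

-- Build both parts from a single edge list, keeping them consistent.
fromEdges :: (Int, Int) -> [((Int, Int), w)] -> WeightedGraph w
fromEdges bnds es = WeightedGraph
  { wgGraph   = buildG bnds (map fst es)
  , wgWeights = Map.fromList es
  }

-- An operation that ties an edge to its weight.
edgeWeight :: WeightedGraph w -> (Int, Int) -> Maybe w
edgeWeight g e = Map.lookup e (wgWeights g)
```

The Data.Graph side still gives you reachability, topological sort, etc., while weight-aware algorithms consult the Map alongside it.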
On Fri., Jul. 9, 2021, 11:47 p.m. Casey Hawthorne, wrote:
> Thank you!
>
> On Fri., Jul. 9, 2021, 11:27 p.m. Henning Thielemann, <
> lemming at henning-thielemann.de> wrote:
>
>>
>> On Fri, 9 Jul 2021, Casey Hawthorne wrote:
>>
>> > What can I use for a weighted graph? I thought Data.Graph would work
>> but I see no provision for edge weights?
>>
>> fgl or my comfort-graph package
>>
>
From lanablack at amok.cc Mon Jul 12 18:24:57 2021
From: lanablack at amok.cc (Lana Black)
Date: Mon, 12 Jul 2021 18:24:57 +0000
Subject: [Haskell-cafe] Check a lack of a constraint?
Message-ID: <12973336.O9o76ZdvQC@glow>
Hello cafe,
Is it possible in Haskell to check a lack of a certain constraint?
For example,
```
foo :: C => a
foo = undefined
```
Here `foo` can only be compiled if called with C satisfied. How do I write the
opposite, so that `foo` is only possible to use when C is not satisfied?
With best regards.
From lemming at henning-thielemann.de Mon Jul 12 19:30:11 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Mon, 12 Jul 2021 21:30:11 +0200 (CEST)
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <12973336.O9o76ZdvQC@glow>
References: <12973336.O9o76ZdvQC@glow>
Message-ID: <64afb69e-857-b95c-652-61d3296d35f2@henning-thielemann.de>
On Mon, 12 Jul 2021, Lana Black wrote:
> Hello cafe,
>
> Is it possible in Haskell to check a lack of a certain constraint?
>
> For example,
>
> ```
> foo :: C => a
> foo = undefined
>
> ```
>
> Here `foo` can only be compiled if called with C satisfied. How do I write the
> opposite, so that `foo` is only possible to use when C is not satisfied?
Do you mean "C a"?
Is it sensible to want this? Any instance that you import transitively
will reduce the applicability of 'foo'.
From ietf-dane at dukhovni.org Mon Jul 12 20:24:09 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Mon, 12 Jul 2021 16:24:09 -0400
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <12973336.O9o76ZdvQC@glow>
References: <12973336.O9o76ZdvQC@glow>
Message-ID:
> On 12 Jul 2021, at 2:24 pm, Lana Black wrote:
>
> Is it possible in Haskell to check a lack of a certain constraint?
Such a check is semantically dubious, because the complete list of
instances for a given type is unknowable; given orphan instances,
an instance could be defined in some module that has not been
imported at the call site. So wanting to do this suggests the
possibility of a design issue.
> For example,
>
> ```
> foo :: C => a
> foo = undefined
>
> ```
However, it is possible to get something along those lines with
a closed type family and an explicit list of verboten types:
{-# LANGUAGE DataKinds
           , FlexibleContexts
           , TypeFamilies
           , UndecidableInstances #-}

import GHC.TypeLits (ErrorMessage(..), TypeError)

type family Filtered a where
    Filtered Int = TypeError (Text "Ints not welcome here")
    Filtered a   = a

foo :: (Show a, a ~ Filtered a) => a -> String
foo = show
As seen in:
λ> foo ('a' :: Char)
"'a'"
λ> foo (1 :: Int)
<interactive>:2:1: error:
    • Ints not welcome here
    • In the expression: foo (1 :: Int)
      In an equation for ‘it’: it = foo (1 :: Int)
--
Viktor.
From ietf-dane at dukhovni.org Mon Jul 12 20:59:46 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Mon, 12 Jul 2021 16:59:46 -0400
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To:
References: <12973336.O9o76ZdvQC@glow>
Message-ID: <9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
> On 12 Jul 2021, at 4:24 pm, Viktor Dukhovni wrote:
>
> However, it is possible to get something along those lines with
> a closed type family and an explicit list of verboten types:
Somewhat cleaner (no complaints from -Wall, and the Filtered type family
now returns a constraint):
{-# LANGUAGE ConstraintKinds
           , DataKinds
           , FlexibleContexts
           , TypeFamilies
           , TypeOperators
           , UndecidableInstances #-}

import GHC.TypeLits (ErrorMessage(..), TypeError)
import Data.Kind (Constraint)

type family Filtered a :: Constraint where
    Filtered Int = TypeError ('ShowType Int ':<>: 'Text "s not welcome here")
    Filtered a   = ()

foo :: (Show a, Filtered a) => a -> String
foo = show
--
Viktor.
From hecate at glitchbra.in Tue Jul 13 06:46:42 2021
From: hecate at glitchbra.in (Hécate)
Date: Tue, 13 Jul 2021 08:46:42 +0200
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
References: <12973336.O9o76ZdvQC@glow>
<9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
Message-ID:
Oh, very nice approach, Viktor. It really seems easier than having a
custom typeclass for which the blessed types have an instance, when the
set of verboten types is considerably smaller than the set of allowed types.
Le 12/07/2021 à 22:59, Viktor Dukhovni a écrit :
>
>> On 12 Jul 2021, at 4:24 pm, Viktor Dukhovni wrote:
>>
>> However, it is possible to get something along those lines with
>> a closed type family and an explicit list of verboten types:
> Somewhat cleaner (no complaints from -Wall, and the Filtered type family
> now returns a constraint):
>
> {-# LANGUAGE ConstraintKinds
> , DataKinds
> , FlexibleContexts
> , TypeFamilies
> , TypeOperators
> , UndecidableInstances
> #-}
>
> import GHC.TypeLits (ErrorMessage(..), TypeError)
> import Data.Kind (Constraint)
>
> type family Filtered a :: Constraint where
> Filtered Int = TypeError ('ShowType Int ':<>: 'Text "s not welcome here")
> Filtered a = ()
>
> foo :: (Show a, Filtered a) => a -> String
> foo = show
>
--
Hécate ✨
🐦: @TechnoEmpress
IRC: Hecate
WWW: https://glitchbra.in
RUN: BSD
From sylvain at haskus.fr Tue Jul 13 09:15:54 2021
From: sylvain at haskus.fr (Sylvain Henry)
Date: Tue, 13 Jul 2021 11:15:54 +0200
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <12973336.O9o76ZdvQC@glow>
References: <12973336.O9o76ZdvQC@glow>
Message-ID: <6de1d18f-984a-a22e-eb5c-81c66e8dd4a4@haskus.fr>
Hi,
I've proposed something like this in the past (which was rightfully
rejected). You may be interested in the discussion here:
https://github.com/ghc-proposals/ghc-proposals/pull/22
Sylvain
On 12/07/2021 20:24, Lana Black wrote:
> Hello cafe,
>
> Is it possible in Haskell to check a lack of a certain constraint?
>
> For example,
>
> ```
> foo :: C => a
> foo = undefined
>
> ```
>
> Here `foo` can only be compiled if called with C satisfied. How do I write the
> opposite, so that `foo` is only possible to use when C is not satisfied?
>
> With best regards.
>
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From lanablack at amok.cc Tue Jul 13 15:07:39 2021
From: lanablack at amok.cc (Lana Black)
Date: Tue, 13 Jul 2021 15:07:39 +0000
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
References: <12973336.O9o76ZdvQC@glow>
<9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
Message-ID: <5514870.LvFx2qVVIh@glow>
On Monday, 12 July 2021 20:59:46 UTC Viktor Dukhovni wrote:
> > On 12 Jul 2021, at 4:24 pm, Viktor Dukhovni
> > wrote:
> >
> > However, it is possible to get something along those lines with
>
> > a closed type family and an explicit list of verboten types:
> Somewhat cleaner (no complaints from -Wall, and the Filtered type family
> now returns a constraint):
>
> {-# LANGUAGE ConstraintKinds
> , DataKinds
> , FlexibleContexts
> , TypeFamilies
> , TypeOperators
> , UndecidableInstances
> #-}
>
> import GHC.TypeLits (ErrorMessage(..), TypeError)
> import Data.Kind (Constraint)
>
> type family Filtered a :: Constraint where
> Filtered Int = TypeError ('ShowType Int ':<>: 'Text "s not welcome
> here") Filtered a = ()
>
> foo :: (Show a, Filtered a) => a -> String
> foo = show
Thank you! I know this seems like an extreme case, and I doubt I will ever use
your example in any real application.
My question was prompted by the package called reflection
(https://hackage.haskell.org/package/reflection-2.1.6/docs/Data-Reflection.html),
which allows one to implicitly pass data to functions via a typeclass
dictionary. The big issue with it, however, is that you can pass values of the
same type multiple times, thereby shooting yourself in the foot somewhere.
From the manual:
>give :: forall a r. a -> (Given a => r) -> r
>Reify a value into an instance to be recovered with given.
>You should only give a single value for each type. If multiple instances are
in scope, then the behavior is implementation defined.
I was curious whether it would be possible to allow `give` to be used only
once in the same call stack with something like
give :: forall a r. Not (Given a) => a -> (Given a => r) -> r
If this even makes sense.
From ietf-dane at dukhovni.org Tue Jul 13 17:39:04 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Tue, 13 Jul 2021 13:39:04 -0400
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <5514870.LvFx2qVVIh@glow>
References: <12973336.O9o76ZdvQC@glow>
<9B6128B1-B354-4B6A-8B46-58D3F23EC1A8@dukhovni.org>
<5514870.LvFx2qVVIh@glow>
Message-ID:
On Tue, Jul 13, 2021 at 03:07:39PM +0000, Lana Black wrote:
> > type family Filtered a :: Constraint where
> > Filtered Int = TypeError ('ShowType Int ':<>: 'Text "s not welcome here")
> > Filtered a = ()
> >
> > foo :: (Show a, Filtered a) => a -> String
> > foo = show
>
> Thank you! I know this seems like an extreme case and I doubt I will ever use
> your example in any real application.
Indeed, since this is generally a rather odd thing to do.
> My question was prompted by the package called reflection (https://
> hackage.haskell.org/package/reflection-2.1.6/docs/Data-Reflection.html), that
> allows to implicitly pass data to functions via a typeclass dictionary. The
> big issue with it however is that you can pass values of same type multiple
> times, therefore shooting yourself in the foot somewhere.
This is only a problem if these multiple times are *nested*:
module Test (foo) where

import Data.Reflection

foo :: Int -> Int
foo x = give x given

bar :: Int -> Int
bar x = give x $
    let y :: Int
        y = given
     in give (y + 5) given
In the above, you can call `foo` as many times as you like, with
separate values, but `bar` does not behave as one might wish.
> I was curious whether it would be possible to allow `give` to be used only
> once in the same call stack with something like
>
> give :: forall a r. Not (Given a) => a -> (Given a => r) -> r
>
> If this even makes sense.
If you're concerned about nested uses of `Given` the simplest solution
is to just use `reify` and `reflect` and avoid `given`:
baz :: Int -> Int
baz x = reify x $ \p ->
    let y :: Int
        y = reflect p
     in reify (y + 5) reflect
--
Viktor.
From lanablack at amok.cc Tue Jul 13 17:56:34 2021
From: lanablack at amok.cc (Lana Black)
Date: Tue, 13 Jul 2021 17:56:34 +0000
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To:
References: <12973336.O9o76ZdvQC@glow> <5514870.LvFx2qVVIh@glow>
Message-ID: <2974970.tdWV9SEqCh@glow>
On Tuesday, 13 July 2021 17:39:04 UTC Viktor Dukhovni wrote:
>
> This is only a problem if these multiple times are *nested*:
Yes, this is exactly what I mean by "within the same call stack".
>
> If you're concerned about nested uses of `Given` the simplest solution
> is to just use `reify` and `reflect` and avoid `given`:
Frankly, I'd rather stick to multireaders. Reflection seems to me like a worse
case of extreme language abuse.
From ietf-dane at dukhovni.org Tue Jul 13 18:01:12 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Tue, 13 Jul 2021 14:01:12 -0400
Subject: [Haskell-cafe] Check a lack of a constraint?
In-Reply-To: <2974970.tdWV9SEqCh@glow>
References: <12973336.O9o76ZdvQC@glow> <5514870.LvFx2qVVIh@glow>
<2974970.tdWV9SEqCh@glow>
Message-ID:
> On 13 Jul 2021, at 1:56 pm, Lana Black wrote:
>
>> If you're concerned about nested uses of `Given` the simplest solution
>> is to just use `reify` and `reflect` and avoid `given`:
>
> Frankly, I'd rather stick to multireaders. Reflection seems to me like a worse
> case of extreme language abuse.
When clear and effective alternatives exist, by all means keep it simple.
With reflection you get run-time type classes, which can be useful, but
I agree should generally not be the first thing you reach for...
--
Viktor.
From a.pelenitsyn at gmail.com Tue Jul 13 19:43:35 2021
From: a.pelenitsyn at gmail.com (Artem Pelenitsyn)
Date: Tue, 13 Jul 2021 15:43:35 -0400
Subject: [Haskell-cafe] Haskell books index with RSS
In-Reply-To:
References:
Message-ID:
Hey Travis,
Cool idea!
Just curious, what does the free tag mean? Given that I see it, say, on
Haskell in Depth, it's not free as in free beer, I gather.
--
Best, Artem
On Mon, Jul 12, 2021, 1:59 AM Travis Cardwell via Haskell-Cafe <
haskell-cafe at haskell.org> wrote:
> Dear Café,
>
> I have often wished that I could subscribe to RSS feeds that notify me
> when new books are published about a certain topic or by a certain
> author. I recently implemented the idea in the article system of my
> personal website and created an index of Haskell books.
>
> https://www.extrema.is/articles/haskell-books
>
> The index provides a simple UI for browsing by tag, and each book page
> has basic information and links. By subscribing to the following RSS
> feed, you get a notification when a new book is added to the index.
>
> https://www.extrema.is/articles/tag/index:haskell-books.rss
>
> This is a humble first implementation, but I am posting here in case
> others find it useful.
>
> Regards,
>
> Travis
From olf at aatal-apotheke.de Tue Jul 13 23:09:30 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Wed, 14 Jul 2021 01:09:30 +0200
Subject: [Haskell-cafe] Check a lack of a constraint?
Message-ID: <4afd83cf21b3107a2687ba0533f7f5323cea2898.camel@aatal-apotheke.de>
>
> Hello cafe,
>
> Is it possible in Haskell to check a lack of a certain constraint?
>
> For example,
>
> ```
> foo :: C => a
> foo = undefined
>
> ```
>
> Here `foo` can only be compiled if called with C satisfied. How do I write the
> opposite, so that `foo` is only possible to use when C is not satisfied?
>
> With best regards.
Indeed I found myself wanting this, too. Such a feature might reduce
the amount of overlapping instances, but of course it bears the danger
that others pointed out.
For example, suppose you have a type
T :: (* -> *)
and the library author of T already defined an instance
instance C a => C (T a).
Now suppose you have a concrete type A which you know can never have a
(lawful) C A instance, but you can define an instance C (T A). The
latter would be formally overlapping with the former, even if you know
that the former will never be implemented.
Thus, instead of local non-satisfiability, a token for the proof that
an instance cannot exist might be more useful (and safer). This
would correspond to case 2 in Sylvain Henry's GHC proposal. I have no
idea, however, how such a proof could be presented [*] in a way that
cannot be circumvented by a third author and lead to unsoundness.
As a concrete example, I defined an instance
MonadParsec (StateT String Maybe)
which makes a wonderfully fast parser that you can swap in for a
Megaparsec parser, but it is overlapping because of
MonadParsec Maybe => MonadParsec (StateT s Maybe)
although it is absurd to try to make Maybe an instance of MonadParsec.
Another example: Think about why the mtl library has an instance
MonadFoo m => MonadFoo (BarT m)
for every pair (Foo,Bar) of transformers.
(And there are quadratically many of these!)
It is precisely because a catch-all instance like
(MonadTrans t, MonadFoo m) => MonadFoo (t m)
will be overlapping as long as we can't rule out MonadFoo (t m) by
other means.
Olaf
[*] The logically natural way would be to have a Void constraint, so
that we can say:
MonadParsec Maybe => Void
From travis.cardwell at extrema.is Tue Jul 13 23:29:22 2021
From: travis.cardwell at extrema.is (Travis Cardwell)
Date: Wed, 14 Jul 2021 08:29:22 +0900
Subject: [Haskell-cafe] Haskell books index with RSS
In-Reply-To:
References:
Message-ID:
Hi Artem,
On Wed, Jul 14, 2021 at 4:43 AM Artem Pelenitsyn wrote:
> Cool idea!
Thanks!
> Just curious, what does the free tag means? Given that I see it, say,
> on Haskell in Depth, it's not free as in free beer, I gather.
There were a lot of non-obvious design decisions to make in order to
organize the Haskell books content, and this is one of them. I ended up
using the "free" tag for all books that can be read in full without
cost, including those that can only be read for free online.
The "Haskell in Depth" book can be read from the publisher page:
https://www.manning.com/books/haskell-in-depth
I see the following in the left column on that page:
FREE You can see this entire book for free. Click the table of
contents to start reading.
Clicking that takes you to the table of contents, a little bit further
down the page. Clicking on any of the chapter or section titles opens
the book in a modal interface.
I still think that this usage is the best of the possibilities that I
have thought of, but it can indeed be puzzling. Sorry about that!
If you would like to see the meaning of any other tags, I explicitly
define them on the meta page:
https://www.extrema.is/articles/haskell-books/meta#tags
Cheers,
Travis
From guthrie at miu.edu Wed Jul 14 00:38:17 2021
From: guthrie at miu.edu (Gregory Guthrie)
Date: Wed, 14 Jul 2021 00:38:17 +0000
Subject: [Haskell-cafe] Haskell books index with RSS
In-Reply-To:
References:
Message-ID:
Very nice, thanks.
One additional feature which would be useful is an ability for people to give ratings and leave comments.
Is a book current? Useful? Advanced? Etc.
I saw your disclaimer about "advanced" ratings, but IMHO, although subjective, multiple such opinions are informative.
From travis.cardwell at extrema.is Wed Jul 14 10:41:03 2021
From: travis.cardwell at extrema.is (Travis Cardwell)
Date: Wed, 14 Jul 2021 19:41:03 +0900
Subject: [Haskell-cafe] Haskell books index with RSS
In-Reply-To:
References:
Message-ID:
On Wed, Jul 14, 2021 at 9:38 AM Gregory Guthrie wrote:
> One additional feature which would be useful is an ability for people
> to give ratings and leave comments.
>
> Is a book current? Useful, advanced, ....etc.
> I saw your disclaimer for "advanced" ratings, but IMHO although
> subjective, multiple such opinions are informative.
Thank you very much for the feedback! I agree that such information
would be very useful for people.
I implemented the index on my website because my (WIP) web framework
made it easy to do. I have since realized that it would be more useful
as separate software. I am thinking about creating a program that works
like a static site generator for this kind of index, and perhaps GitHub
would be a convenient place to host a Haskell books index using the
software. Additions and corrections could then be submitted via pull
requests, and multiple maintainers could serve as editors. The static
content could be hosted using GitHub Pages, and I am certain that GitHub
has better uptime than my VPS. Dedicated software would also make it
easy for others to create and maintain similar indexes for all sorts of
topics.
Comments could also be submitted and moderated using GitHub pull
requests. This would require each contributor to have a GitHub account
as well as be comfortable with creating pull requests. This might be a
high bar for beginners, but feedback and contributions could be accepted
via issues or email as well.
I think that allowing people to submit ratings would be more challenging
to implement because ratings are more easily gamed than comments. A
website could use GitHub OAuth for authentication and track ratings in a
database (or other form of persistence), but I do not know of an
acceptable way to implement ratings on a (standalone) static site
without raising the bar for contribution even higher.
I suggest GitHub, by the way, because many developers already have
GitHub accounts. The software would not be coupled with GitHub (or even
Git), however, so other services or websites could be used to manage the
source as well as host the generated web assets and RSS feeds.
I will continue to think about the design. Thanks again for the
feedback!
From benjamin.redelings at gmail.com Fri Jul 16 00:36:08 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Thu, 15 Jul 2021 17:36:08 -0700
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
Message-ID:
Hi,
My program BAli-Phy implements probabilistic programming with models
written as Haskell programs.
http://www.bali-phy.org/models.php
Unlike [1], models are *lazy* in the sense that you can generate an
infinite list of random variables, as long as you only access a finite
number. I am also using MCMC for inference, not particle filters.
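The infinite-but-lazy behaviour described above can be sketched in stock Haskell. The tiny LCG below is a stand-in for a real random source (chosen so the sketch needs only base, not BAli-Phy's runtime); the point is that the stream is infinite and each draw is computed only when demanded:

```haskell
-- A tiny linear congruential generator stands in for a real RNG.
step :: Int -> Int
step s = 6364136223846793005 * s + 1442695040888963407

-- An infinite, lazily produced stream of pseudo-uniform draws in [0,1).
uniforms :: Int -> [Double]
uniforms seed = go (step seed)
  where go s = fromIntegral (s `mod` 1000000) / 1000000 : go (step s)

main :: IO ()
main = print (take 5 (uniforms 42))  -- only five draws are ever computed
```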
BAli-Phy implements a Haskell interpreter in C++ that records execution
traces of Haskell programs. When a random variable is modified (during
MCMC), the parts of the execution trace that depend on the changed
variables are invalidated and recomputed. This saves work by not
recomputing the probability of the new state from scratch.
Here's an example of a model written in Haskell:
https://github.com/bredelings/BAli-Phy/blob/master/tests/prob_prog/regression/LinearRegression.hs
WARNING: There are lots of sharp edges. First and foremost, no type
system! (yet)
-BenRI
[1] Practical probabilistic programming with monads,
https://dl.acm.org/doi/10.1145/2804302.2804317
From Juan.Casanova at ed.ac.uk Fri Jul 16 01:10:36 2021
From: Juan.Casanova at ed.ac.uk (CASANOVA Juan)
Date: Fri, 16 Jul 2021 01:10:36 +0000
Subject: [Haskell-cafe] Compiled program running (extremely) slower than
interpreted equivalent (no results produced)
Message-ID:
Hello,
I have a (fairly complicated) Haskell program.
The main aspect I believe is relevant about it is that it produces an infinite output (something like an infinite list evaluated lazily, except I use my own data structures with better properties).
What I want is to see how fast the elements in the "list" are produced and check that they are correct, without needing to reach any particular point. Producing even just a couple of elements of this list takes A LOT of computation.
I normally run everything interpreted, and this works fine. However, it is a tad too slow and I wanted to profile it, for which I need to compile it.
Here is my typical way to run it interpreted:
* stack ghci +RTS -M500m -RTS
* :load CESQResolverEval.hs
* main
This produces 4-5 outputs within a few seconds.
When I try to compile it:
* stack ghc --rts-options "-M500m" CESQResolverEval.hs
* stack exec ./CESQResolverEval
This dies. Moreover, if I run without the RTS options and/or let it run for a while, it completely kills my computer and I have to restart it.
I initially thought it might be an output-flushing issue, but:
* I added a stdout flush after each individual element of the "list", and it changes nothing.
* I even tried changing what main does so that it stops after the first result. This works when interpreted (takes less than a second), but still the program produces nothing and kills my CPU when compiled.
It may be relevant that loading my program into GHCi takes about 30 seconds.
The only thing I've been able to find online is this: https://www.reddit.com/r/haskell/comments/3hu3sd/how_is_it_possible_that_compiled_code_is_slower/
But that seems to be about complicated flags and their implications. As you can see, my flags are fairly simple.
Does anyone have any idea what this could be about, or how to avoid it? My purpose is to profile the program to see where most of the time is being spent, but I cannot do that without compiling it, and if the compiled code runs much slower than the interpreted code, then it seems absurd to even try to profile it (it must not be doing the same thing as the interpreted code?).
Thanks in advance,
Juan.
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. Is e buidheann carthannais a th' ann an Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.
From olf at aatal-apotheke.de Fri Jul 16 08:30:15 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Fri, 16 Jul 2021 10:30:15 +0200 (CEST)
Subject: [Haskell-cafe] lost the ability to provide haddocks for hackage
Message-ID: <205504178.72291.1626424215829@webmail.strato.de>
Dear Cafe,
it seems I have lost the ability to provide documentation for my hackage packages. While it is known that hackage sometimes fails to build documentation, the upload page [1] provides a guide to build and upload documentation. Stack does not have this ability yet [2], but it has the convenient option --no-haddock-deps that I can use as a last resort when a dependency fails to build its Haddocks. I'd rather present documentation with broken links than no documentation at all. Cabal does not seem to provide such an option, and adding --keep-going and/or --enable-per-component to the suggested cabal command in [1] still aborts when one dependency fails to build.
Is there something I've missed? If there is a way to build hackage documentation in the presence of haddock errors in dependencies, could someone please update [1] and [3]? Can [4] be used to circumvent my problem? Perhaps add a guide to tweak the links generated by stack haddock to be hackage-compatible, as suggested in [2]?
Strangely, the dependency package my cabal v2-haddock command fails on does have documentation on hackage. Would it be possible to pull those docs from hackage and use them as a drop-in replacement?
Thanks,
Olaf
[1] https://hackage.haskell.org/upload
[2] https://github.com/commercialhaskell/stack/issues/737
[3] https://cabal.readthedocs.io/en/3.4/cabal-commands.html#cabal-v2-haddock
[4] http://neilmitchell.blogspot.co.uk/2014/10/fixing-haddock-docs-on-hackage.html
From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Fri Jul 16 09:05:29 2021
From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis)
Date: Fri, 16 Jul 2021 10:05:29 +0100
Subject: [Haskell-cafe] Compiled program running (extremely) slower than
interpreted equivalent (no results produced)
In-Reply-To:
References:
Message-ID: <20210716090529.GC28094@cloudinit-builder>
On Fri, Jul 16, 2021 at 01:10:36AM +0000, CASANOVA Juan wrote:
> Here is my typical way to run it interpreted:
>
> * stack ghci +RTS -M500m -RTS
> * :load CESQResolverEval.hs
> * main
>
> This produces 4-5 outputs within a few seconds.
>
> When I try to compile it:
>
> * stack ghc --rts-options "-M500m" CESQResolverEval.hs
> * stack exec ./CESQResolverEval
>
> This dies. Moreover, if I run without the RTS options and/or let it
> run for a while, it completely kills my computer and I have to
> restart it.
Sounds like you have a space leak. I don't know why the space leak
would be exhibited in 'stack ghci' but not 'stack ghc'. The first
thing I would try would be to compile with '-O0', '-O1' and '-O2' and
see if any of those make a difference (I don't know what 'stack ghc'
uses by default.).
Tom
From viercc at gmail.com Fri Jul 16 10:30:22 2021
From: viercc at gmail.com (宮里洸司)
Date: Fri, 16 Jul 2021 19:30:22 +0900
Subject: [Haskell-cafe] Compiled program running (extremely) slower than
interpreted equivalent (no results produced)
In-Reply-To: <20210716090529.GC28094@cloudinit-builder>
References:
<20210716090529.GC28094@cloudinit-builder>
Message-ID:
I can't be sure without looking at your program directly, but
deleting all build artifacts (*.hi, *.o, the executable file, and
*.hi-boot files if there are any) before compiling might resolve the issue.
What "stack ghc" does is start GHC with the appropriate configuration
(mostly which packages are to be used). It does not recompile modules
when compilation flags change, only when the source files they depend
on have changed since the last compilation.
This means GHC might have compiled module A with -O0, B with -O2,
C with profiling on, etc. This mix is known to make optimizations fail
often.
2021年7月16日(金) 18:06 Tom Ellis :
>
> On Fri, Jul 16, 2021 at 01:10:36AM +0000, CASANOVA Juan wrote:
> > Here is my typical way to run it interpreted:
> >
> > * stack ghci +RTS -M500m -RTS
> > * :load CESQResolverEval.hs
> > * main
> >
> > This produces 4-5 outputs within a few seconds.
> >
> > When I try to compile it:
> >
> > * stack ghc --rts-options "-M500m" CESQResolverEval.hs
> > * stack exec ./CESQResolverEval
> >
> > This dies. Moreover, if I run without the RTS options and/or let it
> > run for a while, it completely kills my computer and I have to
> > restart it.
>
> Sounds like you have a space leak. I don't know why the space leak
> would be exhibited in 'stack ghci' but not 'stack ghc'. The first
> thing I would try would be to compile with '-O0', '-O1' and '-O2' and
> see if any of those make a difference (I don't know what 'stack ghc'
> uses by default.).
>
> Tom
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
--
/* Koji Miyazato */
From olf at aatal-apotheke.de Fri Jul 16 15:27:12 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Fri, 16 Jul 2021 17:27:12 +0200
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
Message-ID:
> Hi,
>
> My program BAli-Phy implements probabilistic programming with models
> written as Haskell programs.
>
> http://www.bali-phy.org/models.php
Dear Benjamin,
last time you announced BAli-Phy I pestered you with questions about
semantics. In the meantime there was a discussion [1] on this list
regarding desirable properties of probabilistic languages and monads in
general. A desirable property of any probabilistic language is that
when you define a distribution but map a constant function over it,
then this has the same computational cost as returning the constant
directly. Can you say anything about that?
Cheers,
Olaf
[1]
https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
From Juan.Casanova at ed.ac.uk Fri Jul 16 19:10:09 2021
From: Juan.Casanova at ed.ac.uk (CASANOVA Juan)
Date: Fri, 16 Jul 2021 19:10:09 +0000
Subject: [Haskell-cafe] Compiled program running (extremely) slower than
interpreted equivalent (no results produced)
In-Reply-To:
References:
<20210716090529.GC28094@cloudinit-builder>,
Message-ID:
Well, thanks for the tips.
Just for the sake of acknowledgment... it turns out it was something even more stupid. The program I was running was not the one I had just compiled, but an old version. The way I was compiling it produced no executable, only the .hi and .o files, so the executable I ran was an old version from a previous compile that did not produce any results.
Once I re-compiled it properly, it worked fine, and is slightly faster than the interpreted version.
Your comments were still useful and will be taken into account in the following days, though. So thank you again.
Juan.
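For anyone who hits the same stale-executable trap: removing old artifacts and naming the output explicitly avoids it. A hedged sketch (the module and executable names are the ones from this thread; flags after "--" are passed straight to GHC):

```shell
# Remove stale build artifacts so an old executable cannot be run by mistake.
rm -f *.hi *.o CESQResolverEval

# Compile with an explicit output name; -rtsopts allows runtime RTS flags.
stack ghc -- -O2 -rtsopts -o CESQResolverEval CESQResolverEval.hs

# Run with the same heap cap used in GHCi.
./CESQResolverEval +RTS -M500m -RTS
```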
________________________________
From: 宮里洸司
Sent: 16 July 2021 11:30
To: Haskell Cafe ; CASANOVA Juan
Subject: Re: [Haskell-cafe] Compiled program running (extremely) slower than interpreted equivalent (no results produced)
I can't be sure without looking at your program directly, but
deleting all build artifacts (*.hi, *.o, the executable file, and
*.hi-boot files if there's) before compiling might resolve the issue.
What "stack ghc" does is starting GHC with appropriate configuration
(mostly which packages are to be used.) It does not recompile modules
when compilation flags were changed, just only when the source file it
depends on is changed after last compilation.
This means GHC might have compiled module A with -O0, B with -O2,
C with profiling on, etc. This mix is known to make optimizations fail
often.
2021年7月16日(金) 18:06 Tom Ellis :
>
> On Fri, Jul 16, 2021 at 01:10:36AM +0000, CASANOVA Juan wrote:
> > Here is my typical way to run it interpreted:
> >
> > * stack ghci +RTS -M500m -RTS
> > * :load CESQResolverEval.hs
> > * main
> >
> > This produces 4-5 outputs within a few seconds.
> >
> > When I try to compile it:
> >
> > * stack ghc --rts-options "-M500m" CESQResolverEval.hs
> > * stack exec ./CESQResolverEval
> >
> > This dies. Moreover, if I run without the RTS options and/or let it
> > run for a while, it completely kills my computer and I have to
> > restart it.
>
> Sounds like you have a space leak. I don't know why the space leak
> would be exhibited in 'stack ghci' but not 'stack ghc'. The first
> thing I would try would be to compile with '-O0', '-O1' and '-O2' and
> see if any of those make a difference (I don't know what 'stack ghc'
> uses by default.).
>
> Tom
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
--
/* Koji Miyazato */
From benjamin.redelings at gmail.com Fri Jul 16 23:35:56 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Fri, 16 Jul 2021 16:35:56 -0700
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
In-Reply-To:
References:
Message-ID: <96bb4f3d-4cbb-540f-0651-4fcc4afeb10e@gmail.com>
Hi Olaf,
Are you asking if
run $ (const y) <$> normal 0 1
has the same cost as
run $ return y
for some interpreter `run`?
Yes, the cost is the same. In do-notation, we would have
run $ do
x <- normal 0 1
return $ (const y x)
Since `const y` never forces `x`, no time is spent evaluating `run $
normal 0 1`. That is basically what I mean by saying that the language
is lazy.
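The property can be demonstrated with a toy pure sampler in ordinary Haskell. The `Sample` newtype below is a hypothetical stand-in for BAli-Phy's interpreter, not its actual implementation: mapping `const` over a distribution never forces the draw, which `trace` makes visible.

```haskell
import Debug.Trace (trace)

-- A toy sampler: a function from an (infinite) supply of uniforms.
newtype Sample a = Sample { runSample :: [Double] -> a }

instance Functor Sample where
  fmap f (Sample g) = Sample (f . g)

-- The trace fires only if the draw is actually forced.
normal :: Double -> Double -> Sample Double
normal mu sigma = Sample (\(u:_) -> trace "sample forced!" (mu + sigma * u))

main :: IO ()
main =
  -- Prints 7 and nothing else: const discards the draw,
  -- so the trace message never appears.
  print (runSample (fmap (const (7 :: Int)) (normal 0 1)) [0.5])
```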
-BenRI
On 7/16/21 8:27 AM, Olaf Klinke wrote:
>> Hi,
>>
>> My program BAli-Phy implements probabilistic programming with models
>> written as Haskell programs.
>>
>> http://www.bali-phy.org/models.php
> Dear Benjamin,
>
> last time you announced BAli-Phy I pestered you with questions about
> semantics. In the meantime there was a discussion [1] on this list
> regarding desirable properties of probabilistic languages and monads in
> general. A desirable property of any probabilistic language is that
> when you define a distribution but map a constant function over it,
> then this has the same computational cost as returning the constant
> directly. Can you say anything about that?
>
> Cheers,
> Olaf
>
> [1]
> https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mikolaj at well-typed.com Sat Jul 17 08:05:44 2021
From: mikolaj at well-typed.com (Mikolaj Konarski)
Date: Sat, 17 Jul 2021 10:05:44 +0200
Subject: [Haskell-cafe] lost the ability to provide haddocks for hackage
In-Reply-To: <205504178.72291.1626424215829@webmail.strato.de>
References: <205504178.72291.1626424215829@webmail.strato.de>
Message-ID:
Hi Olaf,
Nobody offered a workaround here and I can't find any (recent) related
ticket, so perhaps open a feature request at the cabal bug tracker?
Kind regards,
Mikolaj
On Fri, Jul 16, 2021 at 10:34 AM Olaf Klinke wrote:
>
> Dear Cafe,
>
> it seems I lost the ability to provide documentation for my hackage packages. While it is known that hackage sometimes fails to build documentation, the upload page [1] provides a guide to build and upload documentation. Stack does not have this ability yet [2] but it has the convenient option --no-haddock-deps that I can use as last resort when a dependency fails to build its Haddocks. I'd rather present documentation with broken links than no documentation at all. Cabal does not seem to provide such an option, while adding --keep-going and/or --enable-per-component to the suggested cabal command in [1] still aborts when one dependency fails to build.
>
> Is there something I've missed? If there is a way to build hackage-documentation in presence of haddock errors in dependencies, please someone update [1] and [3]. Can [4] be used to circumvent my problem? Perhaps add a guide to tweak the links generated by stack haddock to be hackage-compatible, as suggested in [2]?
>
> Strangely, the dependency package my cabal v2-haddock command fails on does have documentation on hackage. Would it be possible to pull these docs from hackage and use as a drop-in replacement?
>
> Thanks,
> Olaf
>
> [1] https://hackage.haskell.org/upload
> [2] https://github.com/commercialhaskell/stack/issues/737
> [3] https://cabal.readthedocs.io/en/3.4/cabal-commands.html#cabal-v2-haddock
> [4] http://neilmitchell.blogspot.co.uk/2014/10/fixing-haddock-docs-on-hackage.html
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From olf at aatal-apotheke.de Sat Jul 17 19:29:43 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Sat, 17 Jul 2021 21:29:43 +0200 (CEST)
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
In-Reply-To: <96bb4f3d-4cbb-540f-0651-4fcc4afeb10e@gmail.com>
References:
<96bb4f3d-4cbb-540f-0651-4fcc4afeb10e@gmail.com>
Message-ID:
>>
>> On 7/16/21 8:27 AM, Olaf Klinke wrote:
>>
>>> Hi,
>>>
>>> My program BAli-Phy implements probabilistic programming with models
>>> written as Haskell programs.
>>>
>>> http://www.bali-phy.org/models.php
>>
>> Dear Benjamin,
>>
>> last time you announced BAli-Phy I pestered you with questions about
>> semantics. In the meantime there was a discussion [1] on this list
>> regarding desirable properties of probabilistic languages and monads in
>> general. A desirable property of any probabilistic language is that
>> when you define a distribution but map a constant function over it,
>> then this has the same computational cost as returning the constant
>> directly. Can you say anything about that?
>>
>> Cheers,
>> Olaf
>>
>> [1]
>>
>https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
>>
>>
>>
>
>On Fri, 16 Jul 2021, Benjamin Redelings wrote:
>
>
> Hi Olaf,
>
> Are you asking if
>
> run $ (const y) <$> normal 0 1
>
> has the same cost as
>
> run $ return y
>
> for some interpreter `run`?
>
> Yes, the cost is the same. In do-notation, we would have
>
> run $ do
> x <- normal 0 1
> return $ (const y x)
>
> Since `const y` never forces `x`, no time is spent evaluating `run $ normal 0 1`. That is basically
> what I mean by saying that the language is lazy.
>
> -BenRI
>
Awesome! That is something you cannot have with (random-number-)state
based implementations, as far as I know, because
x <- normal 0 1
at least splits the random number generator. Hence running the above ten
thousand times, even without evaluating the x, has a non-negligible
cost. So how did you implement the laziness you described above? Do you
have thunks just like in Haskell?
Last time I remarked that the online documentation contains no proper
definition of the model language. Some examples with explanations of
individual lines are not enough, IMHO. That appears not to have changed
since the last release. So why don't you include a list of keywords and
built-in functions and their meaning/semantics?
Regards,
Olaf
From eborsboom at fpcomplete.com Tue Jul 20 13:58:23 2021
From: eborsboom at fpcomplete.com (Emanuel Borsboom)
Date: Tue, 20 Jul 2021 13:58:23 +0000
Subject: [Haskell-cafe] ANN: stack-2.7.3
Message-ID: <74C4C491-AAD2-46C3-AA79-2B14EC5FB750@fpcomplete.com>
See https://haskellstack.org/ for installation and upgrade instructions.
**Changes since v2.7.1:**
Other enhancements:
* `stack upgrade` will download from `haskellstack.org` before trying
`github.com`. See
[#5288](https://github.com/commercialhaskell/stack/issues/5288)
* `stack upgrade` makes fewer assumptions about archive format. See
[#5288](https://github.com/commercialhaskell/stack/issues/5288)
* Add a `--no-run` flag to the `script` command when compiling.
Bug fixes:
* GHC source builds work properly for recent GHC versions again. See
[#5528](https://github.com/commercialhaskell/stack/issues/5528)
* `stack setup` always looks for the unpacked directory name to support
different tar file naming conventions. See
[#5545](https://github.com/commercialhaskell/stack/issues/5545)
* Bump `pantry` version for better OS support. See
[pantry#33](https://github.com/commercialhaskell/pantry/issues/33)
* When building the sanity check for a new GHC install, make sure to clear
`GHC_PACKAGE_PATH`.
* Specifying GHC RTS flags in the `stack.yaml` no longer fails with an error.
[#5568](https://github.com/commercialhaskell/stack/pull/5568)
* `stack setup` will look in sandboxed directories for executables, not
relying on `findExecutables`. See
[GHC issue 20074](https://gitlab.haskell.org/ghc/ghc/-/issues/20074)
* Track changes to `setup-config` properly to avoid reconfiguring on every
change. See [#5578](https://github.com/commercialhaskell/stack/issues/5578)
**Thanks to all our contributors for this release:**
* Andreas Källberg
* Artur Gajowy
* Felix Yan
* fwcd
* Ketzacoatl
* Matt Audesse
* Michael Snoyman
* milesfrain
* parsonsmatt
* skforg
From benjamin.redelings at gmail.com Tue Jul 20 18:37:01 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Tue, 20 Jul 2021 11:37:01 -0700
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy & strict
monads?
Message-ID:
Hi,
I'm working on a probabilistic programming language with Haskell syntax
[1]. I am trying to figure out how to intermingle lazy and strict
monadic code without requiring ugly-looking markers on the lazy code.
Does anybody have insights on this?
1. I'm taking a monadic approach that is similar to [2], but I'm using a
lazy interpreter. This allows code such as the following, which would
not terminate under a strict interpreter:
run_lazy $ do
xs <- sequence $ repeat $ normal 0 1
return $ take 10 xs
Here "xs" is an infinite list of Normal(0,1) random variables, of which
only 10 are returned. In a strict interpreter the line for "xs" never
completes. But in a lazy interpreter, it works fine.
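For readers who want to see this termination behaviour in stock Haskell: with a fully lazy monad such as `Identity`, `sequence $ repeat _` is productive, and taking a prefix terminates. This sketch uses `Identity` in place of the post's `Sample` monad:

```haskell
import Data.Functor.Identity (Identity (..))

-- An infinite list of trivial "actions"; Identity's bind is fully
-- lazy, so sequencing them is productive.
xs :: [Int]
xs = runIdentity (sequence (repeat (Identity 1)))

main :: IO ()
main = print (take 10 xs)  -- [1,1,1,1,1,1,1,1,1,1]
```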
2. However, a lazy interpreter causes problems when trying to introduce
*observation* statements (aka conditioning statements) into the monad
[3]. For example,
run_lazy $ do
x <- normal 0 1
y <- normal x 1
z <- normal y 1
2.0 `observe_from` normal z 1
return y
In the above code fragment, y will be forced because it is returned, and
y will force x. However, the "observe_from" statement will never be
forced, because it does not produce a result that is demanded.
3. My current approach is to use TWO monads -- one for random sampling
(Sample a), and one for observations (Observe a). The random sampling
monad can be lazy, because for random samples there is no need to force
a sampling event if the result is never used. The observation monad is
strict, because all the observations must be forced.
So this WORKS fine. However... the code looks ugly :-( Help?
4a. One idea is to nest the lazy code within the strict monad, using
some kind of tag "sample :: Sample a -> Observe a". Then we could write
something in the (Observe a) monad like:
run_strict $ do
w <- sample $ sequence $ repeat $ normal 0 1
x <- sample $ normal 0 1
2.0 `observe_from` normal x 1
y <- sample $ normal x 1
z <- sample $ normal y 1
2.0 `observe_from` normal z 1
return y
When the "run_strict" interpreter encounters a statement of the form
(sample $ _ ), it switches to the "run_lazy" interpreter for that statement.
In this case, the `observe_from` statement IS forced because it is in
the strict (Observe a) monad. Maybe somewhat surprisingly w, x, y, and
z are forced (ugh!) -- by the outer strict interpreter, not the inner
lazy interpreter. However, the internal components of w are NOT forced,
so the program is able to terminate.
QUESTION: Is there some way of doing this without manually writing
"sample" in front of all the sampling operations?
QUESTION: is there a way of doing this where the "sample $ _" lines do
NOT have their result forced?
4b. In order to write "sample" less, it is possible to factor the
sampling code into a separate function (here called "prior"):
prior :: Sample ([Double], Double, Double, Double)
prior = do
w <- sequence $ repeat $ normal 0 1
x <- normal 0 1
y <- normal x 1
z <- normal y 1
return (w,x,y,z)
model :: Observe Double
model = do
(w,x,y,z) <- sample $ prior
2.0 `observe_from` normal x 1
2.0 `observe_from` normal z 1
return y
This does mean that you have to write "sample" only once... but it (i)
splits the function in half and (ii) forces you to explicitly pass
(w,x,y,z) between the two functions. That obfuscates the code for no
benefit.
Interestingly, the logical conclusion of this movement of code from
(Observe a) to (Sample a) is to move EVERYTHING to the Sample monad:
prior :: Sample (Observe (), Double)
prior = do
w <- sequence $ repeat $ normal 0 1
x <- normal 0 1
let observations1 = [2.0 `observe_from` normal x 1]
y <- normal x 1
z <- normal y 1
let observations2 = [2.0 `observe_from` normal z 1]++observations1
return (observations2,y)
model :: Observe Double
model = do
(observations, y) <- sample $ prior
sequence_ observations
return y
Now, we have moved all the observations into the (Sample a) monad, but
using horrible syntax! Ugh :-( Also, now the function "model" is
basically the same for all models -- the entire model has been moved
into the prior.
QUESTION: is there some way to write a monadic function like "prior"
while accumulating observations in a list?
Thanks for any insights!
-BenRI
[1] http://bali-phy.org/models.php
[2] Practical probabilistic programming with monads,
https://dl.acm.org/doi/10.1145/2804302.2804317
[3] https://github.com/tweag/monad-bayes/issues/32
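One possible answer to the last QUESTION, sketched with the lazy `Writer` from transformers. All names here (`Obs`, `observeFrom`, the constants standing in for sampled values) are hypothetical, and a real implementation would combine this with the `Sample` monad; the point is that `tell` accumulates the observations as the prior runs, so no tuple of observations needs to be threaded by hand:

```haskell
import Control.Monad.Trans.Writer.Lazy (Writer, runWriter, tell)

-- Stand-in for an observation: (observed value, distribution mean).
type Obs = (Double, Double)

-- A model accumulates observations in a lazy Writer as it runs.
type Model a = Writer [Obs] a

observeFrom :: Double -> Double -> Model ()
observeFrom v mu = tell [(v, mu)]

model :: Model Double
model = do
  let x = 0.3           -- stand-in for "x <- normal 0 1"
  observeFrom 2.0 x
  let y = x + 1         -- stand-in for "y <- normal x 1"
  observeFrom 2.0 y
  return y

main :: IO ()
main = do
  let (y, obs) = runWriter model
  print y             -- 1.3
  print (length obs)  -- 2
```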
From benjamin.redelings at gmail.com Tue Jul 20 18:57:25 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Tue, 20 Jul 2021 11:57:25 -0700
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
In-Reply-To:
References:
<96bb4f3d-4cbb-540f-0651-4fcc4afeb10e@gmail.com>
Message-ID:
On 7/17/21 12:29 PM, Olaf Klinke wrote:
>>>
>>> On 7/16/21 8:27 AM, Olaf Klinke wrote:
>>>
>>>> Hi,
>>>>
>>>> My program BAli-Phy implements probabilistic programming with
>>>> models written as Haskell programs.
>>>>
>>>> http://www.bali-phy.org/models.php
>>>
>>> Dear Benjamin,
>>> last time you announced BAli-Phy I pestered you with questions about
>>> semantics. In the meantime there was a discussion [1] on this list
>>> regarding desirable properties of probabilistic languages and monads in
>>> general. A desirable property of any probabilistic language is that
>>> when you define a distribution but map a constant function over it,
>>> then this has the same computational cost as returning the constant
>>> directly. Can you say anything about that?
>>> Cheers,
>>> Olaf
>>>
>>> [1]
>> https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
>>
>>>
>>>
>>>
>>
>> On Fri, 16 Jul 2021, Benjamin Redelings wrote:
>>
>>
>> Hi Olaf,
>>
>> Are you asking if
>>
>> run $ (const y) <$> normal 0 1
>> has the same cost as
>>
>> run $ return y
>>
>> for some interpreter `run`?
>>
>> Yes, the cost is the same. In do-notation, we would have
>>
>> run $ do
>> x <- normal 0 1
>> return $ (const y x)
>>
>> Since `const y` never forces `x`, no time is spent evaluating `run $
>> normal 0 1`. That is basically
>> what I mean by saying that the language is lazy.
>>
>> -BenRI
>>
>
> Awesome! That is something you can not have with (random number-)state
> based implementations, as far as I know, because
> x <- normal 0 1
> at least splits the random number generator. Hence running the above
> ten thousand times even without evaluating the x does have a
> non-neglegible cost. So how did you implement the lazyness you
> described above? Do you have thunks just like in Haskell?
>
> Last time I remarked that the online documentation contains no proper
> definition of the model language. Some examples with explanations of
> individual lines are not enough, IMHO. That appears not to have
> changed since the last release. So why don't you include a list of
> keywords and built-in functions and their meaning/semantics?
>
> Regards,
> Olaf
Hi Olaf,
1. My VM is based on Sestoft (1997) "Deriving a lazy abstract machine".
However, when a closure is changeable, we do not overwrite the closure
with its result. Instead we allocate a new heap location for the
result and record a pointer from the closure to its result. This
allows us to erase results when modifiable variables change, while
retaining reduction steps that do not depend on the modifiable variables.
The VM does have thunks. This means that there is an O(1) cost for "x
<- normal 0 1" in the program fragment above. So, it is more precise to
say that
run $ (const y) <$> normal 0 1
has the same cost as
run $ let x = (run $ normal 0 1) in return y
I think for my purposes, an O(1) cost is fine. But if you want NO cost
at all, then I think this would require code optimization to eliminate
the thunk allocated for x.
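What "lazy" means here can be sketched with a toy sampling monad (invented names, not BAli-Phy's implementation): bind never forces the sample, so mapping `const y` over a distribution never evaluates the draw.

```haskell
import Debug.Trace (trace)

-- A toy lazy "sampling" monad: just a wrapped value, bound lazily.
newtype Sample a = Sample a

instance Functor Sample where
  fmap f (Sample x) = Sample (f x)   -- f x is a thunk; x is not forced

instance Applicative Sample where
  pure = Sample
  Sample f <*> Sample x = Sample (f x)

instance Monad Sample where
  Sample x >>= k = k x               -- lazy bind: x is not forced

run :: Sample a -> a
run (Sample x) = x

-- the trace fires only if the sample is ever forced
normal :: Double -> Double -> Sample Double
normal mu _sigma = Sample (trace "sampled!" mu)

-- forcing demo prints 42.0 and never prints "sampled!"
demo :: Double
demo = run (const 42 <$> normal 0 1)
```

The thunk for the unused sample is still allocated, which is the O(1) cost described above.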
2. In response to the question about splitting the random number
generator, I have a few thoughts.
2a. Splitting the random number generator should lead to an O(1) cost
per unforced random sample. I think an O(1) cost is fine, unless the
overhead is very high.
2b. In my code, the VM gets random numbers from a runtime library API
that delivers true random numbers. This could be implemented by a
hardware instruction, for example. In practice it is delivered by a
pseudorandom number generator with its own internal state. However, for
my purposes, I think both are fine.
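For contrast, Olaf's point about state-based implementations can be sketched with a splittable generator (a toy PRNG with made-up constants, standing in for e.g. System.Random's `split`): the split happens on every draw even when the sample itself is never forced.

```haskell
-- A toy splittable PRNG; the constants and names are illustrative,
-- not from any library. Only the cost model matters here.
newtype Gen = Gen Int

splitGen :: Gen -> (Gen, Gen)
splitGen (Gen s) =
  ( Gen (s * 6364136223846793005 + 1442695040888963407)
  , Gen (s * 2862933555777941757 + 3037000493) )

-- A state-passing "sampler": even when the sample is never forced,
-- each draw must split the generator so the rest of the program gets
-- an independent one. That split is the unavoidable per-draw cost.
draw :: Gen -> (Double, Gen)
draw g =
  let (g1, g2) = splitGen g
      x = fromIntegral (let Gen s = g1 in s `mod` 1000) / 1000  -- thunk
  in (x, g2)

-- ten thousand unforced draws still perform ten thousand splits
discardMany :: Int -> Gen -> Gen
discardMany 0 g = g
discardMany n g = discardMany (n - 1) (snd (draw g))
```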
3. I would be happy to add more documentation, if it is actually helpful
in figuring out the language. When learning HTML, I did not read the
formal specification, but modified existing examples. Are the examples
actually hard to follow? I am afraid that if I try to write "formal"
specifications, there will always be someone who declares them
improper and not formal enough. But if the documentation is simply bad
(which is probably true), I would be happy to improve it. Does that
make sense?
> So why don't you include a list of keywords and built-in functions and
> their meaning/semantics?
This seems reasonable. I will try to do that.
BTW, part of the confusion might stem from the fact that the language
has two monads: a strict monad for observations, and a lazy monad for
random sampling. Perhaps you would have insights on my question in my
previous e-mail about combining lazy and strict monads?
take care,
-BenRI
From csaba.hruska at gmail.com Wed Jul 21 14:00:22 2021
From: csaba.hruska at gmail.com (Csaba Hruska)
Date: Wed, 21 Jul 2021 16:00:22 +0200
Subject: [Haskell-cafe] Fwd: Haskell program introspection tooling
development.
In-Reply-To:
References:
Message-ID:
Hello,
I'm using the external STG interpreter
to introspect the
runtime behaviour of Haskell programs. Lately I've added an initial
call-graph construction feature that I plan to refine further.
https://twitter.com/csaba_hruska/status/1417486380536582151
Is there anyone who has dynamic analysis related research ambitions and
wants to study Haskell program runtime behaviour in detail?
If so then it would be great to talk.
Cheers,
Csaba
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From olf at aatal-apotheke.de Wed Jul 21 21:59:30 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Wed, 21 Jul 2021 23:59:30 +0200 (CEST)
Subject: [Haskell-cafe] Lazy Probabilistic Programming System in Haskell
In-Reply-To:
References:
<96bb4f3d-4cbb-540f-0651-4fcc4afeb10e@gmail.com>
Message-ID:
On Tue, 20 Jul 2021, Benjamin Redelings wrote:
>
> On 7/17/21 12:29 PM, Olaf Klinke wrote:
>>>>
>>>> On 7/16/21 8:27 AM, Olaf Klinke wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> My program BAli-Phy implements probabilistic programming with models
>>>>> written as Haskell programs.
>>>>>
>>>>> http://www.bali-phy.org/models.php
>>>>
>>>> Dear Benjamin,
>>>> last time you announced BAli-Phy I pestered you with questions about
>>>> semantics. In the meantime there was a discussion [1] on this list
>>>> regarding desirable properties of probabilistic languages and monads in
>>>> general. A desirable property of any probabilistic language is that
>>>> when you define a distribution but map a constant function over it,
>>>> then this has the same computational cost as returning the constant
>>>> directly. Can you say anything about that?
>>>> Cheers,
>>>> Olaf
>>>>
>>>> [1]
>>> https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
>>>>
>>>>
>>>>
>>>
>>> On Fri, 16 Jul 2021, Benjamin Redelings wrote:
>>>
>>>
>>> Hi Olaf,
>>>
>>> Are you asking if
>>>
>>> run $ (const y) <$> normal 0 1
>>> has the same cost as
>>>
>>> run $ return y
>>>
>>> for some interpreter `run`?
>>>
>>> Yes, the cost is the same. In do-notation, we would have
>>>
>>> run $ do
>>> x <- normal 0 1
>>> return $ (const y x)
>>>
>>> Since `const y` never forces `x`, no time is spent evaluating `run $
>>> normal 0 1`. That is basically
>>> what I mean by saying that the language is lazy.
>>>
>>> -BenRI
>>>
>>
>> Awesome! That is something you can not have with (random number-)state
>> based implementations, as far as I know, because
>> x <- normal 0 1
>> at least splits the random number generator. Hence running the above ten
>> thousand times, even without evaluating the x, has a non-negligible
>> cost. So how did you implement the laziness you described above? Do you
>> have thunks just like in Haskell?
>>
>> Last time I remarked that the online documentation contains no proper
>> definition of the model language. Some examples with explanations of
>> individual lines are not enough, IMHO. That appears not to have changed
>> since the last release. So why don't you include a list of keywords and
>> built-in functions and their meaning/semantics?
>>
>> Regards,
>> Olaf
>
> Hi Olaf,
>
> 1. My VM is based on Sestoft (1997) "Deriving a lazy abstract machine".
> However, when a closure is changeable, we do not overwrite the closure with
> its result. Instead we allocate a new heap location for the result,
> and record a pointer from the closure to its result. This allows us to erase
> results when modifiable variables change, while retaining reduction steps
> that do not depend on the modifiable variables.
>
> The VM does have thunks. This means that there is an O(1) cost for "x <-
> normal 0 1" in the program fragment above. So, it is more precise to say
> that
>
> run $ (const y) <$> normal 0 1
>
> has the same cost as
>
> run $ let x = (run $ normal 0 1) in return y
>
> I think for my purposes, an O(1) cost is fine. But if you want NO cost at
> all, then I think this would require code optimization to eliminate the thunk
> allocated for x.
>
> 2. In response to the question about splitting the random number generator, I
> have a few thoughts.
>
> 2a. Splitting the random number generator should lead to an O(1) cost per
> unforced random sample. I think an O(1) cost is fine, unless the overhead is
> very high.
>
> 2b. In my code, the VM gets random numbers from a runtime library api that
> delivers true random numbers. This could be implemented by a hardware
> instruction, for example. In practice it is delivered by a pseudorandom
> number generator with its own internal state. However, for my purposes, I
> think both are fine.
Yes, O(1) cost is probably okay, as long as the constant is low.
Not being an expert on sampling, I don't know how expensive generator
splits really are. If one has very deeply nested models, that might
eventually lead to problems.
>
> 3. I would be happy to add more documentation, if it is actually helpful in
> figuring out the language. When learning HTML, I did not read the formal
> specification, but modified existing examples.
My first port of call, to stay with this analogy, would be the HTML tag
reference list.
> Are the examples actually
> hard to follow? I am afraid that if I try and write "formal" specifications,
> then there will always be someone who declares them improper and not formal
> enough.
At least there should be an explanation of what you, as the author, think
the operational semantics are. Whether this is formally proven or
formalized is secondary.
> But if the documentation is simply bad (which is probably true), I
> would be happy to improve it. Does that make sense?
Examples should in any case be shown early and often. And to be fair,
the documentation is probably good for the intended audience,
bioinformaticians. Since my interest is more theoretical, I was missing
some bits. But don't let that worry you too much. During my time in
bioinformatics I definitely spent too much time worrying about the
theoretical foundations. Tried to tell a statistician about monads once
and got back a bewildered stare.
>
>> So why don't you include a list of keywords and built-in functions and
>> their meaning/semantics?
> This seems reasonable. I will try to do that.
>
> BTW, part of the confusion might stem from the fact that the language has two
> monads: a strict monad for observations, and a lazy monad for random
> sampling. Perhaps you would have insights on my question in my previous
> e-mail about combining lazy and strict monads?
>
> take care,
>
>
> -BenRI
>
>
I do remember that I was confused about your typing. The model language
has no typing information attached. What would an example model look like
with type annotation? How do your two monads compare with traditional
monadic-probabilistic code, where there is only one monad of random
values?
Olaf
From kc1956 at gmail.com Thu Jul 22 02:20:38 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Wed, 21 Jul 2021 19:20:38 -0700
Subject: [Haskell-cafe] From Rabhi's & Lapalme's book 'Algorithms - A
Functional Programming Approach' I'm getting a non-exhaustive pattern match
Message-ID:
From Rabhi's & Lapalme's book 'Algorithms - A Functional Programming
Approach' (1999), I'm getting a non-exhaustive pattern match:
weight x y g = w where (Just w) = g!(x,y)
I can't figure out why
From raoknz at gmail.com Thu Jul 22 02:33:55 2021
From: raoknz at gmail.com (Richard O'Keefe)
Date: Thu, 22 Jul 2021 14:33:55 +1200
Subject: [Haskell-cafe] From Rabhi's & Lapalme's book 'Algorithms - A
Functional Programming Approach' I'm getting a non-exhaustive pattern match
In-Reply-To:
References:
Message-ID:
where (Just
On Thu, 22 Jul 2021 at 14:21, Casey Hawthorne wrote:
>
> From Rabhi's & Lapalme's book 'Algorithms - A Functional Programming Approach' 1999, I'm getting a non-exhaustive pattern match
>
> weight x y g = w where (Just w) = g!(x,y)
>
> I can't figure out why
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From mukeshtiwari.iiitm at gmail.com Thu Jul 22 02:57:44 2021
From: mukeshtiwari.iiitm at gmail.com (mukesh tiwari)
Date: Thu, 22 Jul 2021 12:57:44 +1000
Subject: [Haskell-cafe] From Rabhi's & Lapalme's book 'Algorithms - A
Functional Programming Approach' I'm getting a non-exhaustive pattern match
In-Reply-To:
References:
Message-ID:
g!(x, y) can return 'Nothing' as well, but you are not considering that
case, hence the non-exhaustive pattern.
Best,
Mukesh
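Concretely, assuming `g` is an array of `Maybe` values as in the book's graph representation (the `Graph` type here is a guess for illustration), the partial definition and a total alternative look like:

```haskell
import Data.Array (Array, listArray, (!))
import Data.Maybe (fromMaybe)

-- Hypothetical adjacency table in the book's style: Nothing marks "no edge".
type Graph = Array (Int, Int) (Maybe Int)

-- the partial version from the question: the lazy pattern binding
-- fails at runtime as soon as w is demanded and g!(x,y) is Nothing
weight :: Int -> Int -> Graph -> Int
weight x y g = w where Just w = g ! (x, y)

-- a total alternative: make the missing-edge case explicit
weightOr :: Int -> Int -> Int -> Graph -> Int
weightOr def x y g = fromMaybe def (g ! (x, y))
```

Note the pattern binding in `where` is lazy, so the failure surfaces only when `w` is actually demanded, not at the call site.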
On Thu, Jul 22, 2021 at 12:21 PM Casey Hawthorne wrote:
>
> From Rabhi's & Lapalme's book 'Algorithms - A Functional Programming Approach' 1999, I'm getting a non-exhaustive pattern match
>
> weight x y g = w where (Just w) = g!(x,y)
>
> I can't figure out why
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
From olf at aatal-apotheke.de Thu Jul 22 06:15:24 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Thu, 22 Jul 2021 08:15:24 +0200 (CEST)
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy &
strict monads?
Message-ID:
> However, a lazy interpreter causes problems when trying to introduce
> *observation* statements (aka conditioning statements) into the monad
> [3]. For example,
>
> run_lazy $ do
>   x <- normal 0 1
>   y <- normal x 1
>   z <- normal y 1
>   2.0 `observe_from` normal z 1
>   return y
>
> In the above code fragment, y will be forced because it is returned, and
> y will force x. However, the "observe_from" statement will never be
> forced, because it does not produce a result that is demanded.
I'm very confused. If the observe_from statement is never demanded, then
what semantics should it have? What is the type of observe_from? It seems
it is
a -> m a -> m ()
for whatever monad m you are using. But conditioning usually is a function
Observation a -> Dist a -> Dist a
so you must use the result of the conditioning somehow. And isn't the
principle of Monte Carlo to approximate the posterior by sampling from it?
I tend to agree with your suggestion that observations and sampling cannot
be mixed (in the same do-notation); rather, the latter have to be collected
into a prior, which is then conditioned on an observation.
What is the semantic connection between your sampling and observation
monads? What is the connection between both and the semantic probability
distributions? I claim that once you have typed everything, it becomes
clear where the problem is.
Olaf
P.S. It has always bugged me that probabilists use elements and
events interchangeably, while this can only be done on discrete
spaces. So above I would rather like to write
(2.0==) `observe_from` (normal 0 1)
which is still a nonsensical statement if (normal 0 1) is a continuous
distribution where each point set has probability zero.
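The typing sketched above, `Observation a -> Dist a -> Dist a`, can be made concrete with a toy discrete-distribution type (a sketch with invented names, not BAli-Phy's monads), taking an observation to be a predicate as in the P.S.:

```haskell
-- A toy discrete distribution as weighted samples.
newtype Dist a = Dist { runDist :: [(a, Double)] }

-- Conditioning: keep only samples consistent with the observation,
-- then renormalize the remaining weights. The result is itself a
-- Dist, so it must be used downstream -- it cannot simply be dropped.
observe :: (a -> Bool) -> Dist a -> Dist a
observe p (Dist xs) =
  let kept  = [ (x, w) | (x, w) <- xs, p x ]
      total = sum (map snd kept)
  in Dist [ (x, w / total) | (x, w) <- kept ]
```

On discrete spaces this is well defined; for a continuous `normal 0 1` the event `(2.0==)` has probability zero, which is exactly the objection raised above.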
From kc1956 at gmail.com Fri Jul 23 23:27:20 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Fri, 23 Jul 2021 16:27:20 -0700
Subject: [Haskell-cafe] What would be the equivalent in Haskell of Knuth's
Dancing Links method on a doubly linked circular list?
Message-ID:
Hi
What would be the equivalent in Haskell of Knuth's Dancing Links method on
a doubly linked circular list?
From lemming at henning-thielemann.de Fri Jul 23 23:33:04 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sat, 24 Jul 2021 01:33:04 +0200 (CEST)
Subject: [Haskell-cafe] What would be the equivalent in Haskell of
Knuth's Dancing Links method on a doubly linked circular list?
In-Reply-To:
References:
Message-ID: <43cdbc8d-8ad4-4ce7-a21b-1166f8da7c32@henning-thielemann.de>
On Fri, 23 Jul 2021, Casey Hawthorne wrote:
> What would be the equivalent in Haskell of Knuth's Dancing Links method
> on a doubly linked circular list?
I have written an exact set cover solver using just bit manipulation.
Linked lists would require much more memory in most cases.
https://hackage.haskell.org/package/set-cover
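The bit-manipulation idea can be sketched as follows (illustrative code, not the set-cover package's actual API): represent each candidate set as an Integer bit mask, and search for a disjoint family of masks that exactly fills the target mask.

```haskell
import Data.Bits ((.&.))

-- Exact cover over Integer bit masks: `target` has one bit per element
-- still to cover; each candidate set is (label, mask). A solution is a
-- family of pairwise-disjoint masks summing to the original target.
exactCover :: Integer -> [(String, Integer)] -> [[String]]
exactCover 0 _ = [[]]                   -- everything covered
exactCover target sets =
  [ name : rest
  | (i, (name, mask)) <- zip [0 :: Int ..] sets
  , mask /= 0
  , mask .&. target == mask             -- fits entirely inside the uncovered part
    -- only later sets may join, so each solution is enumerated once
  , rest <- exactCover (target - mask) (drop (i + 1) sets)
  ]
```

Because a chosen mask is a subset of `target`, plain subtraction removes exactly its bits; disjointness of the solution comes for free from the subset test.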
From kc1956 at gmail.com Sat Jul 24 20:00:14 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Sat, 24 Jul 2021 13:00:14 -0700
Subject: [Haskell-cafe] What would be the equivalent in Haskell of
Knuth's Dancing Links method on a doubly linked circular list?
In-Reply-To: <43cdbc8d-8ad4-4ce7-a21b-1166f8da7c32@henning-thielemann.de>
References:
<43cdbc8d-8ad4-4ce7-a21b-1166f8da7c32@henning-thielemann.de>
Message-ID:
Hi
First
Would using a Bloom Filter work?
Depending on size of filter, one can still get a low percentage of false
positives
I saw Bloom Filters in Real World Haskell
Second
I saw the idea of using dictionaries in Python instead of dancing links here
https://www.cs.mcgill.ca/~aassaf9/python/algorithm_x.html
Which should also be applicable to Haskell
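The dictionary formulation linked above translates directly to Haskell's persistent maps and sets (a sketch with invented names): no mutable links are needed, because backtracking just reuses the old map.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Knuth's Algorithm X with Data.Map/Data.Set in place of dancing links.
-- `rows` maps each row label to the set of columns it covers;
-- `uncovered` is the set of columns still to cover.
solve :: (Ord r, Ord c) => Map.Map r (Set.Set c) -> Set.Set c -> [[r]]
solve rows uncovered
  | Set.null uncovered = [[]]                       -- success: all covered
  | otherwise =
      [ r : rest
      | (r, cs) <- Map.toList rows
      , col `Set.member` cs                         -- r covers the chosen column
        -- drop every row that clashes with r's columns
      , let rows' = Map.filter (Set.null . Set.intersection cs) rows
      , rest <- solve rows' (uncovered `Set.difference` cs)
      ]
  where col = Set.findMin uncovered                 -- deterministic column choice
```

Knuth's standard example (rows A..F over columns 1..7) yields the single cover B, D, F; picking the column with the fewest covering rows instead of `findMin` would recover the usual heuristic.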
On Fri., Jul. 23, 2021, 4:33 p.m. Henning Thielemann, <
lemming at henning-thielemann.de> wrote:
>
> On Fri, 23 Jul 2021, Casey Hawthorne wrote:
>
> > What would be the equivalent in Haskell of Knuth's Dancing Links method
> > on a doubly linked circular list?
>
> I have written an exact set cover solver using just bit manipulation.
> Linked lists would require much more memory in most cases.
>
> https://hackage.haskell.org/package/set-cover
>
From kc1956 at gmail.com Sat Jul 24 20:33:29 2021
From: kc1956 at gmail.com (Casey Hawthorne)
Date: Sat, 24 Jul 2021 13:33:29 -0700
Subject: [Haskell-cafe] What would be the equivalent in Haskell of
Knuth's Dancing Links method on a doubly linked circular list?
In-Reply-To:
References:
<43cdbc8d-8ad4-4ce7-a21b-1166f8da7c32@henning-thielemann.de>
Message-ID:
Hi
Corrected
First
Would using a Bloom Filter work?
Depending on size of filter, one can still get a low percentage of false
positives
I saw Bloom Filters in Real World Haskell
Could one use a second Bloom Filter for deletions from the first?
Second
I saw the idea of using dictionaries in Python instead of dancing links here
https://www.cs.mcgill.ca/~aassaf9/python/algorithm_x.html
Which should also be applicable to Haskell
On Sat., Jul. 24, 2021, 1:00 p.m. Casey Hawthorne, wrote:
> Hi
>
> First
> Would using a Bloom Filter work?
> Depending on size of filter, one can still get a low percentage of false
> positives
> I saw Bloom Filters in Rwal World Haskell
>
>
> Second
> I saw the idea of using dictionaries in Python instead of dancing links
> here
>
> https://www.cs.mcgill.ca/~aassaf9/python/algorithm_x.html
>
> Which should also be applicable to Haskell
>
>
> On Fri., Jul. 23, 2021, 4:33 p.m. Henning Thielemann, <
> lemming at henning-thielemann.de> wrote:
>
>>
>> On Fri, 23 Jul 2021, Casey Hawthorne wrote:
>>
>> > What would be the equivalent in Haskell of Knuth's Dancing Links method
>> > on a doubly linked circular list?
>>
>> I have written an exact set cover solver using just bit manipulation.
>> Linked lists would require much more memory in most cases.
>>
>> https://hackage.haskell.org/package/set-cover
>>
>
From frase at frase.id.au Sun Jul 25 05:50:03 2021
From: frase at frase.id.au (Fraser Tweedale)
Date: Sun, 25 Jul 2021 15:50:03 +1000
Subject: [Haskell-cafe] base64-bytestring memory corruption bug
Message-ID:
Hello,
I want to bring to wider attention a memory bug present in
base64-bytestring[1]. In summary, in some cases too few bytes are
allocated for the output when performing base64url decoding. This
can lead to memory corruption (which I have observed[2]), and
possibly crashes (which I have not observed).
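For context, the kind of output-length arithmetic a base64url decoder must get right can be sketched as follows (a general illustration of the calculation, not base64-bytestring's actual code or its specific bug):

```haskell
-- Decoded length for UNPADDED base64url input of n characters: every
-- full group of 4 input chars yields 3 output bytes; a trailing group
-- of 2 or 3 chars yields 1 or 2 more bytes. Allocating less than this
-- lets the decoder write past the end of its output buffer.
decodedLen :: Int -> Maybe Int
decodedLen n = case n `mod` 4 of
  0 -> Just (3 * q)
  2 -> Just (3 * q + 1)
  3 -> Just (3 * q + 2)
  _ -> Nothing                 -- length ≡ 1 (mod 4) is never valid
  where q = n `div` 4
```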
I submitted a pull request[3] that fixes the issue some days ago,
but have not yet received a response from the maintainers. I
understand that maintainers may be busy or unavailable, and that is
fine. So I am posting here mainly to ensure that USERS are aware of
the issue.
To maintainers: let me know if I can provide further assistance to
resolve this issue and release a fix.
[1] https://github.com/haskell/base64-bytestring/issues/44
[2] https://github.com/frasertweedale/hs-jose/issues/102
[3] https://github.com/haskell/base64-bytestring/pull/45
Thanks,
Fraser
From leah at vuxu.org Mon Jul 26 12:16:55 2021
From: leah at vuxu.org (Leah Neukirchen)
Date: Mon, 26 Jul 2021 14:16:55 +0200
Subject: [Haskell-cafe] Munich Virtual Haskell Meeting, 2021-07-28 @ 19:30
Message-ID: <87tukh9snc.fsf@vuxu.org>
Dear all,
This week, our monthly Munich Haskell Meeting will take place again
on Wednesday, July 28 on Jitsi at 19:30 CEST.
**Due to bad weather for outside meetings, this meeting will take place online!**
For details see here:
https://muenchen.haskell.bayern/dates.html
A Jitsi link to join the room is provided on the page.
Everybody is welcome, especially the Haskellers from Bavaria that do
not usually come to our Munich meetings due to travel distance!
cu,
--
Leah Neukirchen https://leahneukirchen.org/
From stuart.hungerford at gmail.com Wed Jul 28 00:25:06 2021
From: stuart.hungerford at gmail.com (Stuart Hungerford)
Date: Wed, 28 Jul 2021 10:25:06 +1000
Subject: [Haskell-cafe] Looking for exemplars of "boring" or "simple"
Haskell projects...
Message-ID:
Greetings Haskellers,
I've been reading more about the ideas behind "simple" or "boring"
Haskell and I am wondering if there are libraries or projects that
exemplify the ideas behind these approaches?
TIA,
Stu
From lemming at henning-thielemann.de Wed Jul 28 06:59:19 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Wed, 28 Jul 2021 08:59:19 +0200 (CEST)
Subject: [Haskell-cafe] Looking for exemplars of "boring" or "simple"
Haskell projects...
In-Reply-To:
References:
Message-ID:
On Wed, 28 Jul 2021, Stuart Hungerford wrote:
> I've been reading more about the ideas behind "simple" or "boring"
> Haskell and I am wondering if there are libraries or projects that
> exemplify the ideas behind these approaches?
If it counts, I maintain some libraries that are essentially Haskell 98 +
hierarchical modules. I hoped this would help users of non-GHC compilers,
but it seems there are currently no maintained Haskell compilers other
than GHC and its variants. At least, Hugs is still available in Debian.
https://hackage.haskell.org/package/alsa-core
https://hackage.haskell.org/package/alsa-pcm
https://hackage.haskell.org/package/alsa-seq
https://hackage.haskell.org/package/apportionment
https://hackage.haskell.org/package/audacity
https://hackage.haskell.org/package/battleship-combinatorics
https://hackage.haskell.org/package/bibtex
https://hackage.haskell.org/package/blas-ffi
https://hackage.haskell.org/package/board-games
https://hackage.haskell.org/package/bool8
https://hackage.haskell.org/package/buffer-pipe
https://hackage.haskell.org/package/cabal-flatpak
https://hackage.haskell.org/package/cabal-sort
https://hackage.haskell.org/package/calendar-recycling
https://hackage.haskell.org/package/car-pool
https://hackage.haskell.org/package/check-pvp
https://hackage.haskell.org/package/checksum
https://hackage.haskell.org/package/combinatorial
https://hackage.haskell.org/package/comfort-graph
https://hackage.haskell.org/package/concurrent-split
https://hackage.haskell.org/package/cpuid
https://hackage.haskell.org/package/cutter
https://hackage.haskell.org/package/data-accessor-transformers
https://hackage.haskell.org/package/data-accessor
https://hackage.haskell.org/package/data-ref
https://hackage.haskell.org/package/database-study
https://hackage.haskell.org/package/doctest-exitcode-stdio
https://hackage.haskell.org/package/doctest-extract
https://hackage.haskell.org/package/doctest-lib
https://hackage.haskell.org/package/dsp
https://hackage.haskell.org/package/enumset
https://hackage.haskell.org/package/equal-files
https://hackage.haskell.org/package/event-list
https://hackage.haskell.org/package/explicit-exception
https://hackage.haskell.org/package/fftw-ffi
https://hackage.haskell.org/package/gnuplot
https://hackage.haskell.org/package/group-by-date
https://hackage.haskell.org/package/guarded-allocation
https://hackage.haskell.org/package/hackage-processing
https://hackage.haskell.org/package/hgl-example
https://hackage.haskell.org/package/http-monad
https://hackage.haskell.org/package/iff
https://hackage.haskell.org/package/internetmarke
https://hackage.haskell.org/package/interpolation
https://hackage.haskell.org/package/jack
https://hackage.haskell.org/package/lapack-ffi-tools
https://hackage.haskell.org/package/lapack-ffi
https://hackage.haskell.org/package/latex
https://hackage.haskell.org/package/lazyio
https://hackage.haskell.org/package/llvm-ffi-tools
https://hackage.haskell.org/package/llvm-ffi
https://hackage.haskell.org/package/llvm-pkg-config
https://hackage.haskell.org/package/markov-chain
https://hackage.haskell.org/package/mbox-utility
https://hackage.haskell.org/package/med-module
https://hackage.haskell.org/package/midi-alsa
https://hackage.haskell.org/package/midi-music-box
https://hackage.haskell.org/package/midi
https://hackage.haskell.org/package/mohws
https://hackage.haskell.org/package/monoid-transformer
https://hackage.haskell.org/package/netlib-ffi
https://hackage.haskell.org/package/non-empty
https://hackage.haskell.org/package/non-negative
https://hackage.haskell.org/package/numeric-quest
https://hackage.haskell.org/package/opensoundcontrol-ht
https://hackage.haskell.org/package/pathtype
https://hackage.haskell.org/package/poll
https://hackage.haskell.org/package/pooled-io
https://hackage.haskell.org/package/prelude-compat
https://hackage.haskell.org/package/prelude2010
https://hackage.haskell.org/package/probability
https://hackage.haskell.org/package/quickcheck-transformer
https://hackage.haskell.org/package/reactive-balsa
https://hackage.haskell.org/package/reactive-banana-bunch
https://hackage.haskell.org/package/reactive-jack
https://hackage.haskell.org/package/reactive-midyim
https://hackage.haskell.org/package/sample-frame
https://hackage.haskell.org/package/set-cover
https://hackage.haskell.org/package/shell-utility
https://hackage.haskell.org/package/sound-collage
https://hackage.haskell.org/package/sox
https://hackage.haskell.org/package/soxlib
https://hackage.haskell.org/package/split-record
https://hackage.haskell.org/package/spreadsheet
https://hackage.haskell.org/package/stm-split
https://hackage.haskell.org/package/storable-enum
https://hackage.haskell.org/package/storable-record
https://hackage.haskell.org/package/storable-tuple
https://hackage.haskell.org/package/storablevector-carray
https://hackage.haskell.org/package/storablevector
https://hackage.haskell.org/package/supercollider-ht
https://hackage.haskell.org/package/supercollider-midi
https://hackage.haskell.org/package/tagchup
https://hackage.haskell.org/package/toilet
https://hackage.haskell.org/package/unicode
https://hackage.haskell.org/package/unique-logic
https://hackage.haskell.org/package/unsafe
https://hackage.haskell.org/package/utility-ht
https://hackage.haskell.org/package/wraxml
https://hackage.haskell.org/package/xml-basic
https://hackage.haskell.org/package/youtube
If they are not strictly Haskell 98, I hope they are still "simple"
enough.
From stuart.hungerford at gmail.com Wed Jul 28 07:25:18 2021
From: stuart.hungerford at gmail.com (Stuart Hungerford)
Date: Wed, 28 Jul 2021 17:25:18 +1000
Subject: [Haskell-cafe] Looking for exemplars of "boring" or "simple"
Haskell projects...
In-Reply-To:
References:
Message-ID:
Holy mackerel that’s a treasure trove of learning opportunities right there.
On Wed, 28 Jul 2021 at 4:59 pm, Henning Thielemann <
lemming at henning-thielemann.de> wrote:
>
> On Wed, 28 Jul 2021, Stuart Hungerford wrote:
>
> > I've been reading more about the ideas behind "simple" or "boring"
> > Haskell and I am wondering if there are libraries or projects that
> > exemplify the ideas behind these approaches?
>
> If it counts, I maintain some libraries that are essentially Haskell 98 +
> hierarchical modules. I hoped that it helps users of non-GHC compilers but
> it seems there are currently no maintained Haskell compilers other than
> GHC and variants. At least, Hugs is still available in Debian.
>
> https://hackage.haskell.org/package/alsa-core
> https://hackage.haskell.org/package/alsa-pcm
> https://hackage.haskell.org/package/alsa-seq
> https://hackage.haskell.org/package/apportionment
> https://hackage.haskell.org/package/audacity
> https://hackage.haskell.org/package/battleship-combinatorics
> https://hackage.haskell.org/package/bibtex
> https://hackage.haskell.org/package/blas-ffi
> https://hackage.haskell.org/package/board-games
> https://hackage.haskell.org/package/bool8
> https://hackage.haskell.org/package/buffer-pipe
> https://hackage.haskell.org/package/cabal-flatpak
> https://hackage.haskell.org/package/cabal-sort
> https://hackage.haskell.org/package/calendar-recycling
> https://hackage.haskell.org/package/car-pool
> https://hackage.haskell.org/package/check-pvp
> https://hackage.haskell.org/package/checksum
> https://hackage.haskell.org/package/combinatorial
> https://hackage.haskell.org/package/comfort-graph
> https://hackage.haskell.org/package/concurrent-split
> https://hackage.haskell.org/package/cpuid
> https://hackage.haskell.org/package/cutter
> https://hackage.haskell.org/package/data-accessor-transformers
> https://hackage.haskell.org/package/data-accessor
> https://hackage.haskell.org/package/data-ref
> https://hackage.haskell.org/package/database-study
> https://hackage.haskell.org/package/doctest-exitcode-stdio
> https://hackage.haskell.org/package/doctest-extract
> https://hackage.haskell.org/package/doctest-lib
> https://hackage.haskell.org/package/dsp
> https://hackage.haskell.org/package/enumset
> https://hackage.haskell.org/package/equal-files
> https://hackage.haskell.org/package/event-list
> https://hackage.haskell.org/package/explicit-exception
> https://hackage.haskell.org/package/fftw-ffi
> https://hackage.haskell.org/package/gnuplot
> https://hackage.haskell.org/package/group-by-date
> https://hackage.haskell.org/package/guarded-allocation
> https://hackage.haskell.org/package/hackage-processing
> https://hackage.haskell.org/package/hgl-example
> https://hackage.haskell.org/package/http-monad
> https://hackage.haskell.org/package/iff
> https://hackage.haskell.org/package/internetmarke
> https://hackage.haskell.org/package/interpolation
> https://hackage.haskell.org/package/jack
> https://hackage.haskell.org/package/lapack-ffi-tools
> https://hackage.haskell.org/package/lapack-ffi
> https://hackage.haskell.org/package/latex
> https://hackage.haskell.org/package/lazyio
> https://hackage.haskell.org/package/llvm-ffi-tools
> https://hackage.haskell.org/package/llvm-ffi
> https://hackage.haskell.org/package/llvm-pkg-config
> https://hackage.haskell.org/package/markov-chain
> https://hackage.haskell.org/package/mbox-utility
> https://hackage.haskell.org/package/med-module
> https://hackage.haskell.org/package/midi-alsa
> https://hackage.haskell.org/package/midi-music-box
> https://hackage.haskell.org/package/midi
> https://hackage.haskell.org/package/mohws
> https://hackage.haskell.org/package/monoid-transformer
> https://hackage.haskell.org/package/netlib-ffi
> https://hackage.haskell.org/package/non-empty
> https://hackage.haskell.org/package/non-negative
> https://hackage.haskell.org/package/numeric-quest
> https://hackage.haskell.org/package/opensoundcontrol-ht
> https://hackage.haskell.org/package/pathtype
> https://hackage.haskell.org/package/poll
> https://hackage.haskell.org/package/pooled-io
> https://hackage.haskell.org/package/prelude-compat
> https://hackage.haskell.org/package/prelude2010
> https://hackage.haskell.org/package/probability
> https://hackage.haskell.org/package/quickcheck-transformer
> https://hackage.haskell.org/package/reactive-balsa
> https://hackage.haskell.org/package/reactive-banana-bunch
> https://hackage.haskell.org/package/reactive-jack
> https://hackage.haskell.org/package/reactive-midyim
> https://hackage.haskell.org/package/sample-frame
> https://hackage.haskell.org/package/set-cover
> https://hackage.haskell.org/package/shell-utility
> https://hackage.haskell.org/package/sound-collage
> https://hackage.haskell.org/package/sox
> https://hackage.haskell.org/package/soxlib
> https://hackage.haskell.org/package/split-record
> https://hackage.haskell.org/package/spreadsheet
> https://hackage.haskell.org/package/stm-split
> https://hackage.haskell.org/package/storable-enum
> https://hackage.haskell.org/package/storable-record
> https://hackage.haskell.org/package/storable-tuple
> https://hackage.haskell.org/package/storablevector-carray
> https://hackage.haskell.org/package/storablevector
> https://hackage.haskell.org/package/supercollider-ht
> https://hackage.haskell.org/package/supercollider-midi
> https://hackage.haskell.org/package/tagchup
> https://hackage.haskell.org/package/toilet
> https://hackage.haskell.org/package/unicode
> https://hackage.haskell.org/package/unique-logic
> https://hackage.haskell.org/package/unsafe
> https://hackage.haskell.org/package/utility-ht
> https://hackage.haskell.org/package/wraxml
> https://hackage.haskell.org/package/xml-basic
> https://hackage.haskell.org/package/youtube
>
> If they are not strictly Haskell 98, I hope they are still "simple"
> enough.
>
From nikhil at acm.org Wed Jul 28 15:00:32 2021
From: nikhil at acm.org (Rishiyur Nikhil)
Date: Wed, 28 Jul 2021 11:00:32 -0400
Subject: [Haskell-cafe] Re. Looking for exemplars of "boring" or "simple"
Haskell projects...
Message-ID:
A couple of years ago I wrote "Forvis", a "complete" semantics of the
RISC-V instruction set in "extremely elementary" Haskell.
It's free and open-source, at:
https://github.com/rsnikhil/Forvis_RISCV-ISA-Spec
"Extremely Elementary" Haskell: small subset of Haskell 98. No
type classes, no monads (except the top-level driver that uses
standard IO), no GHC extensions, nothing. Just vanilla function
definitions and vanilla algebraic types (a "Miranda" subset).
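For a flavour of what that subset looks like, here is a hypothetical toy fragment in the same style (the names and the three-operation ALU are mine, not Forvis's): one vanilla algebraic type and plain pattern-matching function definitions, nothing else.

```haskell
-- A toy ALU in the "extremely elementary" style: a vanilla algebraic
-- type and first-order pattern-matching functions; no type classes,
-- no monads, no GHC extensions.
data Op = ADD | SUB | MUL
  deriving (Show, Eq)

alu :: Op -> Integer -> Integer -> Integer
alu ADD x y = x + y
alu SUB x y = x - y
alu MUL x y = x * y

main :: IO ()
main = print (alu SUB 7 2)  -- prints 5
```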
Motivation: appeal to hardware designers, compiler writers, and others
who are keenly interested in a clearly readable and executable precise
spec of RISC-V semantics, but are not at all interested in learning
Haskell.
It's about 12K-13K lines of Haskell, all in one 'src/' directory.
Full "standard ISA" coverage:
- Unprivileged ISA:
- RV32I (32-bit) and RV64I (64-bit) basic Integer instructions
- Standard ISA extensions M (Integer Mult/Div), A (Atomics), C
(Compressed) and FD (Single- and Double-precision floating
point)
- Privileged ISA: Modes M (Machine), S (Supervisor) and U (User),
including full complement of CSRs (Control and Status Registers).
This includes full trap and interrupt handling, and RISC-V
Virtual Memory schemes Sv32, Sv39 and Sv48.
I've tested it on all the standard RISC-V ISA tests (all pass), booted a
Linux kernel (about 200 million RISC-V instructions), and run the much
smaller real-time OS FreeRTOS. I'm sure it'll work for any other
RISC-V software as well. For this, the sources contain additional
code to package the "CPU" into a small "system", by adding Haskell
models of memory and a UART.
I haven't looked at it since early 2020, but it should all still work fine.
Nikhil
From stuart.hungerford at gmail.com Thu Jul 29 23:12:56 2021
From: stuart.hungerford at gmail.com (Stuart Hungerford)
Date: Fri, 30 Jul 2021 09:12:56 +1000
Subject: [Haskell-cafe] Re. Looking for exemplars of "boring" or "simple"
Haskell projects... (Rishiyur Nikhil)
In-Reply-To:
References:
Message-ID:
On Thu, Jul 29, 2021 at 10:07 PM wrote:
> [...]
> A couple of years ago I wrote "Forvis", a "complete" semantics of the
> RISC-V instruction set in "extremely elementary" Haskell.
>
> It's free and open-source, at:
> https://github.com/rsnikhil/Forvis_RISCV-ISA-Spec
>
> "Extremely Elementary" Haskell: small subset of Haskell 98. No
> type classes, no monads (except the top-level driver that uses
> standard IO), no GHC extensions, nothing. Just vanilla function
> definitions and vanilla algebraic types (a "Miranda" subset).
>
> Motivation: appeal to hardware designers, compiler writers, and others
> who are keenly interested in a clearly readable and executable precise
> spec of RISC-V semantics, but are not at all interested in learning
> Haskell.
Thanks for that, I'll have a look.
Stu
From benjamin.redelings at gmail.com Fri Jul 30 06:35:13 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Thu, 29 Jul 2021 23:35:13 -0700
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy &
strict monads?
In-Reply-To:
References:
Message-ID: <89e88e9f-b956-6722-fefe-ed34ccd0f862@gmail.com>
Hi Olaf,
I think you need to look at two things:
1. The Giry monad, and how it deals with continuous spaces.
2. The paper "Practical Probabilistic Programming with Monads" -
https://doi.org/10.1145/2804302.2804317
Also, observing 2.0 from a continuous distribution is not nonsensical.
-BenRI
On 7/21/21 11:15 PM, Olaf Klinke wrote:
>> However, a lazy interpreter causes problems when trying to introduce
>> *observation* statements (aka conditioning statements) into the monad
>> [3]. For example,
>>
>> run_lazy $ do
>>   x <- normal 0 1
>>   y <- normal x 1
>>   z <- normal y 1
>>   2.0 `observe_from` normal z 1
>>   return y
>>
>> In the above code fragment, y will be forced because it is returned, and
>> y will force x. However, the "observe_from" statement will never be
>> forced, because it does not produce a result that is demanded.
>
> I'm very confused. If the observe_from statement is never demanded,
> then what semantics should it have? What is the type of observe_from?
> It seems it is
> a -> m a -> m ()
> for whatever monad m you are using. But conditioning usually is a
> function
> Observation a -> Dist a -> Dist a
> so you must use the result of the conditioning somehow. And isn't the
> principle of Monte Carlo to approximate the posterior by sampling from
> it? I tend to agree with your suggestion that observations and
> sampling can not be mixed (in the same do-notation) but the latter
> have to be collected in a prior, then conditioned by an observation.
>
> What is the semantic connection between your sample and observation
> monad? What is the connection between both and the semantic
> probability distributions? I claim that once you have typed
> everything, it becomes clear where the problem is.
>
> Olaf
>
> P.S. It has always bugged me that probabilists use elements and events
> interchangeably, while this can only be done on discrete spaces. So
> above I would rather like to write
> (2.0==) `observe_from` (normal 0 1)
> which still is a non-sensical statement if (normal 0 1) is a
> continuous distribution where each point set has probability zero.
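The forcing behaviour described in the quoted fragment can be reproduced in miniature with a toy lazy writer monad (my own sketch; `LazyW` and `observeFrom` are illustrative names, not BAli-Phy's actual `run_lazy` machinery). The observation's payload is an `error`, yet extracting only the returned value never triggers it:

```haskell
-- A toy lazy "writer" monad: statements whose output is never
-- demanded are never forced.
newtype LazyW a = LazyW { runLazyW :: (a, [String]) }

instance Functor LazyW where
  fmap f (LazyW ~(x, w)) = LazyW (f x, w)

instance Applicative LazyW where
  pure x = LazyW (x, [])
  LazyW ~(f, w1) <*> LazyW ~(x, w2) = LazyW (f x, w1 ++ w2)

instance Monad LazyW where
  LazyW ~(x, w) >>= k = let LazyW ~(y, w') = k x in LazyW (y, w ++ w')

-- An "observation" whose log entry blows up if it is ever forced.
observeFrom :: Double -> LazyW ()
observeFrom v = LazyW ((), [error ("observation of " ++ show v ++ " forced!")])

demo :: LazyW Double
demo = do
  observeFrom 2.0   -- its result is never demanded below
  return 3.0

main :: IO ()
main = print (fst (runLazyW demo))  -- prints 3.0; the observation is skipped
```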
From benjamin.redelings at gmail.com Fri Jul 30 06:55:15 2021
From: benjamin.redelings at gmail.com (Benjamin Redelings)
Date: Thu, 29 Jul 2021 23:55:15 -0700
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy &
strict monads?
In-Reply-To: <89e88e9f-b956-6722-fefe-ed34ccd0f862@gmail.com>
References:
<89e88e9f-b956-6722-fefe-ed34ccd0f862@gmail.com>
Message-ID: <91ce6419-8492-b845-4951-a19f31f609d4@gmail.com>
The idea of changing observation to look like `Observation a -> Dist a
-> Dist a` is interesting, but I am not sure if this works in practice.
Generally you cannot actually produce an exact sample from a
distribution plus an observation. MCMC, for example, produces
collections of samples that you can average against, and the error
decreases as the number of samples increases. But you can't generate a
single point that is a sample from the posterior.
Maybe it would be possible to use separate types for distributions
from which you cannot directly sample? Something like
`Observation a -> SampleableDist a -> NonsampleableDist a`.
I will think about whether this would solve the problem with laziness...
-BenRI
On 7/29/21 11:35 PM, Benjamin Redelings wrote:
> Hi Olaf,
>
> I think you need to look at two things:
>
> 1. The Giry monad, and how it deals with continuous spaces.
>
> 2. The paper "Practical Probabilistic Programming with Monads" -
> https://doi.org/10.1145/2804302.2804317
>
> Also, observing 2.0 from a continuous distribution is not nonsensical.
>
> -BenRI
>
> On 7/21/21 11:15 PM, Olaf Klinke wrote:
>>> However, a lazy interpreter causes problems when trying to introduce
>>> *observation* statements (aka conditioning statements) into the monad
>>> [3]. For example,
>>>
>>> run_lazy $ do
>>>   x <- normal 0 1
>>>   y <- normal x 1
>>>   z <- normal y 1
>>>   2.0 `observe_from` normal z 1
>>>   return y
>>>
>>> In the above code fragment, y will be forced because it is returned,
>>> and
>>> y will force x. However, the "observe_from" statement will never be
>>> forced, because it does not produce a result that is demanded.
>>
>> I'm very confused. If the observe_from statement is never demanded,
>> then what semantics should it have? What is the type of observe_from?
>> It seems it is
>> a -> m a -> m ()
>> for whatever monad m you are using. But conditioning usually is a
>> function
>> Observation a -> Dist a -> Dist a
>> so you must use the result of the conditioning somehow. And isn't the
>> principle of Monte Carlo to approximate the posterior by sampling
>> from it? I tend to agree with your suggestion that observations and
>> sampling can not be mixed (in the same do-notation) but the latter
>> have to be collected in a prior, then conditioned by an observation.
>>
>> What is the semantic connection between your sample and observation
>> monad? What is the connection between both and the semantic
>> probability distributions? I claim that once you have typed
>> everything, it becomes clear where the problem is.
>>
>> Olaf
>>
>> P.S. It has always bugged me that probabilists use elements and
>> events interchangeably, while this can only be done on discrete
>> spaces. So above I would rather like to write
>> (2.0==) `observe_from` (normal 0 1)
>> which still is a non-sensical statement if (normal 0 1) is a
>> continuous distribution where each point set has probability zero.
From olf at aatal-apotheke.de Fri Jul 30 21:33:29 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Fri, 30 Jul 2021 23:33:29 +0200 (CEST)
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy &
strict monads?
In-Reply-To: <25286850-f924-4694-c886-3ba5890512dc@gmail.com>
References:
<25286850-f924-4694-c886-3ba5890512dc@gmail.com>
Message-ID:
On Thu, 29 Jul 2021, Benjamin Redelings wrote:
> Hi Olaf,
>
> I think you need to look at two things:
>
> 1. The Giry monad, and how it deals with continuous spaces.
I believe I understand the Giry monad well, and it is my measuring stick
for functional probabilistic programming. Even better suited for
programming ought to be the valuation monad, because that is a monad on
domains, the [*] semantic spaces of the lambda calculus. Unfortunately, to
my knowledge, attempts to find a cartesian closed category of domains
which is also closed under this probabilistic powerdomain construction
have so far been unsuccessful.
[*] There are of course other semantics, domains being one option.
>
> 2. The paper "Practical Probabilistic Programming with Monads" -
> https://doi.org/10.1145/2804302.2804317
Wasn't that what you linked to in your original post? As I said above,
measure spaces are the wrong target category, in my opinion. There is too
much non-constructive stuff in there. See the work of Culbertson and
Sturtz, which is categorically nice but not very constructive.
>
> Also, observing 2.0 from a continuous distribution is not nonsensical.
>
> -BenRI
Perhaps I am being too much of a point-free topologist here. Call me
pedantic. Or I don't understand sampling at all. To me, a point is an
idealised object, only open sets really exist and are observable. If the
space is discrete, points are open sets. But on the real line you can not
measure with infinite precision, so any observation must contain an
interval. That aligns very nicely with functional programming, where only
finite parts of infinite lazy structures are ever observable, and these
finite parts are the open sets in the domain semantics.
So please explain how observing 2.0 from a continuous distribution is not
nonsensical.
Olaf
>
> On 7/21/21 11:15 PM, Olaf Klinke wrote:
>>> However, a lazy interpreter causes problems when trying to introduce
>>> *observation* statements (aka conditioning statements) into the monad
>>> [3]. For example,
>>>
>>> run_lazy $ do
>>>   x <- normal 0 1
>>>   y <- normal x 1
>>>   z <- normal y 1
>>>   2.0 `observe_from` normal z 1
>>>   return y
>>>
>>> In the above code fragment, y will be forced because it is returned, and
>>> y will force x. However, the "observe_from" statement will never be
>>> forced, because it does not produce a result that is demanded.
>>
>> I'm very confused. If the observe_from statement is never demanded, then
>> what semantics should it have? What is the type of observe_from? It seems
>> it is
>> a -> m a -> m ()
>> for whatever monad m you are using. But conditioning usually is a function
>> Observation a -> Dist a -> Dist a
>> so you must use the result of the conditioning somehow. And isn't the
>> principle of Monte Carlo to approximate the posterior by sampling from it?
>> I tend to agree with your suggestion that observations and sampling can not
>> be mixed (in the same do-notation) but the latter have to be collected in a
>> prior, then conditioned by an observation.
>>
>> What is the semantic connection between your sample and observation monad?
>> What is the connection between both and the semantic probability
>> distributions? I claim that once you have typed everything, it becomes
>> clear where the problem is.
>>
>> Olaf
>>
>> P.S. It has always bugged me that probabilists use elements and events
>> interchangeably, while this can only be done on discrete spaces. So above I
>> would rather like to write
>> (2.0==) `observe_from` (normal 0 1)
>> which still is a non-sensical statement if (normal 0 1) is a continuous
>> distribution where each point set has probability zero.
>
From stuart.hungerford at gmail.com Sat Jul 31 01:28:30 2021
From: stuart.hungerford at gmail.com (Stuart Hungerford)
Date: Sat, 31 Jul 2021 11:28:30 +1000
Subject: [Haskell-cafe] Sets, typeclasses and functional dependencies
Message-ID:
Hi Haskellers,
After reading:
https://stackoverflow.com/questions/34790721/where-is-the-set-type-class
https://stackoverflow.com/questions/25191659/why-is-haskell-missing-obvious-typeclasses
https://stackoverflow.com/questions/11508642/haskell-how-can-i-define-a-type-class-for-sets
I can see why Haskell base does not provide typeclasses for sets. I'm
wondering now, though, what a Set typeclass would look like if I did
create one. There are several approaches using various language
extensions, and to me this approach using functional dependencies seems
simpler than the others:
class Setish set a | set -> a where
  empty :: set
  singleton :: a -> set
  -- and so on.
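For concreteness, here is a compilable sketch of such a fundep class (the instance for containers' `Data.Set` is my addition, not part of the question):

```haskell
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}
import qualified Data.Set as Set

-- The functional dependency "set -> a" says the element type is
-- determined by the set type.
class Setish set a | set -> a where
  empty :: set
  singleton :: a -> set

instance Ord a => Setish (Set.Set a) a where
  empty = Set.empty
  singleton = Set.singleton

main :: IO ()
main = print (Set.toList (singleton 'x' :: Set.Set Char))
```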
My question is how does the functional dependency in Setish interact
with "extra" types needed for richer set operations like finding the
powerset or taking a cartesian product? Something like:
class Setish set a | set -> a where
  empty :: set
  singleton :: a -> set
  power_set :: set -> ?something
  product :: set -> ?something -> ?something
  -- and so on.
TIA,
Stu
From olf at aatal-apotheke.de Sat Jul 31 08:22:20 2021
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Sat, 31 Jul 2021 10:22:20 +0200 (CEST)
Subject: [Haskell-cafe] Help on syntactic sugar for combining lazy &
strict monads?
In-Reply-To: <91ce6419-8492-b845-4951-a19f31f609d4@gmail.com>
References:
<89e88e9f-b956-6722-fefe-ed34ccd0f862@gmail.com>
<91ce6419-8492-b845-4951-a19f31f609d4@gmail.com>
Message-ID:
On Thu, 29 Jul 2021, Benjamin Redelings wrote:
> The idea of changing observation to look like `Observation a -> Dist a ->
> Dist a` is interesting, but I am not sure if this works in practice.
> Generally you cannot actually produce an exact sample from a distribution
> plus an observation. MCMC, for example, produces collections of samples that
> you can average against, and the error decreases as the number of samples
> increases. But you can't generate a single point that is a sample from the
> posterior.
Isn't that a problem of the concrete implementation of the probability
monad (MCMC)? Certainly not something that I, as the modeller, would
like to be concerned with. Other implementations might not have this
limitation.
Are we talking about the same thing, the mathematical conditional
probability? This has the type I described, so it should not be
an unusual design choice. Moreover, it is a partial function. What happens
if I try to condition on an impossible observation in Bali-phy?
>
> Maybe it would be possible to use separate types for distributions
> from which you cannot directly sample? Something like `Observation a ->
> SampleableDist a -> NonsampleableDist a`.
I haven't seen types of much else in your system, so I can not provide
meaningful insights here. My concern would be that parts of models become
tainted as Nonsampleable too easily, a trapdoor operation. But this is
just guesswork, I don't grasp sampling well enough, apparently. Here is my
attempt, please correct me if this is wrong.
Sampling is a monad morphism from a "true" monad of probability
distributions (e.g. the Giry monad) to a state monad transforming a
source of randomness, in such a way that any infinite stream of samples
has the property that, for any measurable set U, the probability measure of
U equals the limit of the fraction of samples falling inside U, as the
prefix of the infinite stream grows to infinity.
Conditioning on an observation O should translate to the state monad in a
way so that the sample-producer is now forbidden to output anything
outside O.
Hence conditioning on an impossible observation must produce a state
transformer that never outputs anything: the bottom function.
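That translation can be sketched concretely (a toy construction of mine, not BAli-Phy's: a sampler as a state transformer on an integer seed, with conditioning implemented as rejection sampling):

```haskell
-- Sampling as a state transformer over a seed.
newtype Sampler a = Sampler { runSampler :: Integer -> (a, Integer) }

instance Functor Sampler where
  fmap f (Sampler s) = Sampler (\seed -> let (x, s') = s seed in (f x, s'))

instance Applicative Sampler where
  pure x = Sampler (\seed -> (x, seed))
  Sampler sf <*> Sampler sx = Sampler (\seed ->
    let (f, s1) = sf seed
        (x, s2) = sx s1
    in (f x, s2))

instance Monad Sampler where
  Sampler s >>= k = Sampler (\seed ->
    let (x, s1) = s seed in runSampler (k x) s1)

-- A crude linear congruential generator standing in for the source
-- of randomness.
nextSeed :: Integer -> Integer
nextSeed s = (6364136223846793005 * s + 1442695040888963407) `mod` (2 ^ 62)

uniformR :: Integer -> Integer -> Sampler Integer
uniformR lo hi = Sampler (\seed ->
  let s' = nextSeed seed
  in (lo + s' `mod` (hi - lo + 1), s'))

-- Conditioning as rejection: resample until the observation holds.
-- Conditioning on an impossible observation (e.g. const False) never
-- produces output -- the bottom function, as argued above.
observe :: (a -> Bool) -> Sampler a -> Sampler a
observe p s = do
  x <- s
  if p x then pure x else observe p s

main :: IO ()
main = print (fst (runSampler (observe even (uniformR 1 10)) 2021))
```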
>
> I will think about whether this would solve the problem with laziness...
>
> -BenRI
>
> On 7/29/21 11:35 PM, Benjamin Redelings wrote:
>> Hi Olaf,
>>
>> I think you need to look at two things:
>>
>> 1. The Giry monad, and how it deals with continuous spaces.
>>
>> 2. The paper "Practical Probabilistic Programming with Monads" -
>> https://doi.org/10.1145/2804302.2804317
>>
>> Also, observing 2.0 from a continuous distribution is not nonsensical.
>>
>> -BenRI
>>
>> On 7/21/21 11:15 PM, Olaf Klinke wrote:
>>>> However, a lazy interpreter causes problems when trying to introduce
>>>> *observation* statements (aka conditioning statements) into the monad
>>>> [3]. For example,
>>>>
>>>> run_lazy $ do
>>>>   x <- normal 0 1
>>>>   y <- normal x 1
>>>>   z <- normal y 1
>>>>   2.0 `observe_from` normal z 1
>>>>   return y
>>>>
>>>> In the above code fragment, y will be forced because it is returned, and
>>>> y will force x. However, the "observe_from" statement will never be
>>>> forced, because it does not produce a result that is demanded.
>>>
>>> I'm very confused. If the observe_from statement is never demanded, then
>>> what semantics should it have? What is the type of observe_from? It seems
>>> it is
>>> a -> m a -> m ()
>>> for whatever monad m you are using. But conditioning usually is a function
>>> Observation a -> Dist a -> Dist a
>>> so you must use the result of the conditioning somehow. And isn't the
>>> principle of Monte Carlo to approximate the posterior by sampling from it?
>>> I tend to agree with your suggestion that observations and sampling can
>>> not be mixed (in the same do-notation) but the latter have to be collected
>>> in a prior, then conditioned by an observation.
>>>
>>> What is the semantic connection between your sample and observation
>>> monad? What is the connection between both and the semantic probability
>>> distributions? I claim that once you have typed everything, it becomes
>>> clear where the problem is.
>>>
>>> Olaf
>>>
>>> P.S. It has always bugged me that probabilists use elements and events
>>> interchangeably, while this can only be done on discrete spaces. So above
>>> I would rather like to write
>>> (2.0==) `observe_from` (normal 0 1)
>>> which still is a non-sensical statement if (normal 0 1) is a continuous
>>> distribution where each point set has probability zero.
>
From lemming at henning-thielemann.de Sat Jul 31 09:14:29 2021
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sat, 31 Jul 2021 11:14:29 +0200 (CEST)
Subject: [Haskell-cafe] Sets, typeclasses and functional dependencies
In-Reply-To:
References:
Message-ID: <5982b2a2-c5d1-b2ef-8b32-d9d691c1fb75@henning-thielemann.de>
On Sat, 31 Jul 2021, Stuart Hungerford wrote:
> I can see why Haskell base does not provide typeclasses for sets. I'm
> wondering now though if I did create a Set typeclass what it would
> look like. There's several approaches using various language
> extensions and to me this approach using functional dependencies seems
> simpler than the other approaches:
>
> class Setish set a | set -> a where
>   empty :: set
>   singleton :: a -> set
>   -- and so on.
A more modern approach would use type functions:
class Setish set where
  type Element set
  empty :: set
  singleton :: Element set -> set
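A minimal, compilable version of that class together with an instance for containers' `Data.Set` (the instance is my addition, for illustration):

```haskell
{-# LANGUAGE TypeFamilies #-}
import qualified Data.Set as Set

-- The associated type family "Element" plays the role of the
-- functional dependency: the set type determines its element type.
class Setish set where
  type Element set
  empty :: set
  singleton :: Element set -> set

instance Setish (Set.Set a) where
  type Element (Set.Set a) = a
  empty = Set.empty
  singleton = Set.singleton

main :: IO ()
main = print (Set.toList (singleton 3 :: Set.Set Int))  -- prints [3]
```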
> My question is how does the functional dependency in Setish interact
> with "extra" types needed for richer set operations like finding the
> powerset or taking a cartesian product?
powerset would need a type like:
powerset ::
  (Powersettish powerset, Element powerset ~ set, Setish set) =>
  set -> powerset
with an extra Powersettish class.
However, the type checker could not guess the exact powerset type; it
could be, e.g.,
powerset :: Set a -> Set (Set a)
or
powerset :: Set a -> [Set a]
If you want the powerset type to be determined by the set type, then you
might define
class Setish set where
  ...
  type Powerset set
  powerset :: set -> Powerset set
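One way to make this concrete (my sketch, assuming containers' `Data.Set`; note a powerset of an n-element set has 2^n elements, so this is only practical for small sets):

```haskell
{-# LANGUAGE TypeFamilies #-}
import qualified Data.Set as Set

-- An associated type family picks the powerset type for each set type.
class Setish set where
  type Powerset set
  powerset :: set -> Powerset set

instance Ord a => Setish (Set.Set a) where
  type Powerset (Set.Set a) = Set.Set (Set.Set a)
  -- Fold over the elements: for each one, keep every subset both with
  -- and without it.
  powerset = Set.foldr step (Set.singleton Set.empty)
    where step x acc = acc `Set.union` Set.map (Set.insert x) acc

main :: IO ()
main = print (Set.size (powerset (Set.fromList [1, 2, 3 :: Int])))  -- prints 8
```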
From stuart.hungerford at gmail.com Sat Jul 31 10:14:36 2021
From: stuart.hungerford at gmail.com (Stuart Hungerford)
Date: Sat, 31 Jul 2021 20:14:36 +1000
Subject: [Haskell-cafe] Sets, typeclasses and functional dependencies
In-Reply-To: <5982b2a2-c5d1-b2ef-8b32-d9d691c1fb75@henning-thielemann.de>
References:
<5982b2a2-c5d1-b2ef-8b32-d9d691c1fb75@henning-thielemann.de>
Message-ID:
On Sat, Jul 31, 2021 at 7:14 PM Henning Thielemann
wrote:
> [...]
> A more modern approach would use type functions:
>
> class Setish set where
>   type Element set
>   empty :: set
>   singleton :: Element set -> set
Thanks for the pointer.
> > My question is how does the functional dependency in Setish interact
> > with "extra" types needed for richer set operations like finding the
> > powerset or taking a cartesian product?
>
> powerset would need a type like:
>
> powerset ::
>   (Powersettish powerset, Element powerset ~ set, Setish set) =>
>   set -> powerset
>
> with an extra Powersettish class.
>
> However, the type checker could not guess the exact powerset type, it
> could be, e.g.
> powerset :: Set a -> Set (Set a)
> or
> powerset :: Set a -> [Set a]
Okay, I'm starting to see why the "Set" typeclass examples I could
find don't include a powerset or cartesian product method.
Thanks again,
Stu