From damien.mattei at gmail.com  Fri Mar  1 09:57:32 2019
From: damien.mattei at gmail.com (Damien Mattei)
Date: Fri, 1 Mar 2019 10:57:32 +0100
Subject: [Haskell-cafe] show in monad
In-Reply-To: <0ac89f99-b922-04b5-9736-34568864f900@tu-dortmund.de>
References: <56c5af80-4b6f-c288-f8ea-ea8adeb5019e@tu-dortmund.de> <0ac89f99-b922-04b5-9736-34568864f900@tu-dortmund.de>
Message-ID:

It's not entirely clear: reading back the thread, it first seemed that it is not
possible to display values with show inside the monad's bind operator, and now it
seems that adding the constraint and changing the type of flatten will do it.
So I changed the definition of flatten, but it does not compile:

flatten :: Show a => Prob (Prob a) -> Prob a
flatten (Prob xs) = trace (" flatten " ++ (show xs))
                    Prob $ concat $ map multAll xs
    where multAll (Prob innerxs,p) = trace (" multAll p= " ++ (show p) ++ " ")
                                     map (\(x,r) -> (x,p*r)) innerxs

Prelude> :load monade.hs
[1 of 1] Compiling Main             ( monade.hs, interpreted )

monade.hs:37:13: error:
    • No instance for (Show b) arising from a use of ‘flatten’
      Possible fix:
        add (Show b) to the context of
          the type signature for:
            (>>=) :: forall a b. Prob a -> (a -> Prob b) -> Prob b
    • In the second argument of ‘trace’, namely ‘flatten’
      In the expression: trace " Monad Prob >>= " flatten (fmap f m)
      In an equation for ‘>>=’:
          m >>= f = trace " Monad Prob >>= " flatten (fmap f m)
   |
37 |             flatten (fmap f m)
   |             ^^^^^^^

Failed, no modules loaded.

Damien

On Thu, Feb 28, 2019 at 4:58 PM Jos Kusiek wrote:

> You do not need to change the Show instance. The one generated by deriving
> Show is fine. As I said, you need to change the type of flatten and add the
> constraint.
>
> flatten :: Show a => Prob (Prob a) -> Prob a
>
> On 28.02.19 15:30, Damien Mattei wrote:
>
> even with a definition of show I cannot use it in flatten:
>
> import Control.Monad
> import Data.Ratio
> import Data.List (all)
> import Debug.Trace
>
> newtype Prob a = Prob { getProb :: [(a,Rational)] } -- deriving Show
>
> instance Show a => Show (Prob a) where
>   show (Prob [(x,r)]) = ((show x) ++ " _ " ++ (show r))
>
> instance Functor Prob where
>   fmap f (Prob xs) = trace " Functor Prob "
>                      Prob $ map (\(x,p) -> (f x,p)) xs
>
> flatten :: Prob (Prob a) -> Prob a
> flatten (Prob xs) = trace (" flatten " ++ (show xs))
>                     Prob $ concat $ map multAll xs
>     where multAll (Prob innerxs,p) = trace (" multAll p= " ++ (show p) ++ " ")
>                                      map (\(x,r) -> (x,p*r)) innerxs
>
> monade.hs:23:44: error:
>     • No instance for (Show a) arising from a use of ‘show’
>       Possible fix:
>         add (Show a) to the context of
>           the type signature for:
>             flatten :: forall a. Prob (Prob a) -> Prob a
>     • In the second argument of ‘(++)’, namely ‘(show xs)’
>       In the first argument of ‘trace’, namely
>         ‘(" flatten " ++ (show xs))’
>       In the expression: trace (" flatten " ++ (show xs)) Prob
>    |
> 23 | flatten (Prob xs) = trace (" flatten " ++ (show xs))
>    |                                            ^^^^^^^
>
> Failed, no modules loaded.
>
> it seems the show I defined is not in the context of flatten???
>
> damien
>
> On Thu, Feb 28, 2019 at 12:57 PM Jos Kusiek wrote:
>
>> You simply cannot do that. To be more precise, you cannot use show inside
>> the bind operator on Prob (but you could use it in flatten). Deriving Show
>> creates a Show instance which looks something like this:
>>
>> instance Show a => Show (Prob a) where ...
>>
>> This instance needs "a" to instantiate Show, so you can only use show
>> with Prob types, where "a" is an instance of Show itself, e.g. Prob Int.
>> Your flatten function does not guarantee that "a" is an instance of Show. >> The type says, any type for "a" will do it. You can easily restrict that >> with a class constraint: >> >> flatten :: Show a => Prob (Prob a) -> Prob a >> >> But now you have a problem with the bind operator. You can no longer use >> flatten here. The bind operator for Prob has the following type: >> >> (>>=) :: Prob a -> (a -> Prob b) -> Prob b >> >> There are no constraints here and you cannot add any constraints. The >> type is predefined by the Monad class. So it is not guaranteed, that this >> Prob type has a show function and you cannot guarantee it in any way. So >> you cannot use show on your first parameter type (Prob a) or your result >> type (Prob b) inside the bind or any function that is called by bind. >> >> On 28.02.19 11:00, Damien Mattei wrote: >> >> just for tracing the monad i have this : >> >> import Control.Monad >> >> import Data.Ratio >> import Data.List (all) >> import Debug.Trace >> >> newtype Prob a = Prob { getProb :: [(a,Rational)] } deriving Show >> >> instance Functor Prob where >> fmap f (Prob xs) = trace " Functor Prob " >> Prob $ map (\(x,p) -> (f x,p)) xs >> >> >> t >> >> >> flatten :: Prob (Prob a) -> Prob a >> flatten (Prob xs) = trace (" flatten " ++ show xs) >> Prob $ concat $ map multAll xs >> where multAll (Prob innerxs,p) = trace " multAll " >> map (\(x,r) -> (x,p*r)) innerxs >> >> >> instance Applicative Prob where >> pure = trace " Applicative Prob return " return >> (<*>) = trace " Applicative Prob ap " ap >> >> instance Monad Prob where >> return x = trace " Monad Prob return " >> Prob [(x,1%1)] >> m >>= f = trace " Monad Prob >>= " >> flatten (fmap f m) >> fail _ = trace " Monad Prob fail " >> Prob [] >> >> >> {- >> instance Applicative Prob where >> >> pure a = Prob [(a,1%1)] >> >> Prob fs <*> Prob as = Prob [(f a,x*y) | (f,x) <- fs, (a,y) <- as] >> >> >> instance Monad Prob where >> >> Prob as >>= f = Prob [(b,x*y) | (a,x) <- as, let Prob bs = f a, (b,y) >> <- bs] >> >> -} >> >> >> >> in this : >> >> flatten :: Prob (Prob a) -> Prob a >> flatten (Prob xs) = trace (" flatten " ++ show xs) >> Prob $ concat $ map multAll xs >> where multAll (Prob innerxs,p) = trace " multAll " >> map (\(x,r) -> (x,p*r)) innerxs >> >> >> i have this error: >> >> [1 of 1] Compiling Main ( monade.hs, interpreted ) >> >> monade.hs:22:43: error: >> • No instance for (Show a) arising from a use of ‘show’ >> Possible fix: >> add (Show a) to the context of >> the type signature for: >> flatten :: forall a. Prob (Prob a) -> Prob a >> • In the second argument of ‘(++)’, namely ‘show xs’ >> In the first argument of ‘trace’, namely ‘(" flatten " ++ show xs)’ >> In the expression: trace (" flatten " ++ show xs) Prob >> | >> 22 | flatten (Prob xs) = trace (" flatten " ++ show xs) >> | ^^^^^^^ >> Failed, no modules loaded. >> >> how can i implement a show for xs ? >> regards, >> damien >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to:http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> >> >> -- >> Dipl.-Inf. Jos Kusiek >> >> Technische Universität Dortmund >> Fakultät 4 - Informatik / Lehrstuhl 1 - Logik in der Informatik >> Otto-Hahn-Straße 12, Raum 3.020 >> 44227 Dortmund >> >> Tel.: +49 231-755 7523 >> >> > -- > Dipl.-Inf. 
Jos Kusiek > > Technische Universität Dortmund > Fakultät 4 - Informatik / Lehrstuhl 1 - Logik in der Informatik > Otto-Hahn-Straße 12, Raum 3.020 > 44227 Dortmund > > Tel.: +49 231-755 7523 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.i at gmx.at Fri Mar 1 15:16:35 2019 From: m.i at gmx.at (Miguel) Date: Fri, 1 Mar 2019 16:16:35 +0100 Subject: [Haskell-cafe] wiki-account-request Message-ID: <20190301151635.kkpq2m4x7fkbftqj@megaloman.softwarefools.com> Hi, Not receiving any response from the other channel I hereby request a wiki.haskell.org account with the preferred username: "oo_miguel" Thanks in Advance, Michal Idziorek Ref: https://wiki.haskell.org/index.php?title=Special:UserLogin "If you would like an account please email 'wiki-account-request' (at the domain haskell dot org) or, if you find that unresponsive, on the haskell-cafe mailing list." From simon at joyful.com Sat Mar 2 02:25:04 2019 From: simon at joyful.com (Simon Michael) Date: Fri, 1 Mar 2019 18:25:04 -0800 Subject: [Haskell-cafe] ANN: hledger-1.14 Message-ID: <8C18825D-85D1-4703-B8D1-E2749D2B5EB6@joyful.com> I'm pleased to announce the release of hledger 1.14! Thank you release contributors Jakob Schöttl and Jakub Zárybnický. hledger is a robust, cross-platform plain text accounting tool, for tracking money, time, stocks, cryptocurrencies or any other commodity, using double-entry accounting, private or shared plain text files, revision control, and command-line, curses or web UIs. Find out more at http://hledger.org and http://plaintextaccounting.org. Release notes summary: ---------------------- Inclusive balance assertions, commodities command, --invert option, JSON get/add support in hledger-web. For full details, see http://hledger.org/release-notes. Getting started: ---------------- All install methods are described at http://hledger.org/download . (system packages, windows binaries, docker, nix, cabal, stack, hledger-install bash script..) Some of these might take a few days to become up to date. Tutorials and all docs and support options are linked at http://hledger.org . Get help via chat in #hledger on Freenode: http://irc.hledger.org Or connect via Matrix: http://riot.hledger.org New and old users, contributors, sponsors, positive/negative feedback, always welcome! Best, -Simon From lazybonesxp at gmail.com Sat Mar 2 11:29:54 2019 From: lazybonesxp at gmail.com (=?UTF-8?B?0JjQstCw0L0g0J3QuNC60L7Qu9Cw0LXQsg==?=) Date: Sat, 2 Mar 2019 14:29:54 +0300 Subject: [Haskell-cafe] PR to "double-conversion" Message-ID: Hi Bryan! May I ask you to take a look at my PR to your double-conversion library? I’ve made it about six months ago and sent you an email. A few days ago I updated it. In the PR mentioned above I’m adding several things which need your attention: 1. Conversion to bytestring and text builder. 2. Conversion of Float (using type family, but without performance degrading). 3. More benchmarks. 4. Update of the original double-conversion library. Now a bunch of 20k doubles is converted to strict bytestring in 5-8 ms using bytestring builder instead of >200-300ms using just bytestring. Cheers, Rinat -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard.b.golden at gmail.com Mon Mar 4 06:11:38 2019 From: howard.b.golden at gmail.com (Howard B. 
Golden) Date: Sun, 3 Mar 2019 22:11:38 -0800 Subject: [Haskell-cafe] Haskell wiki account request response time Message-ID: Hi Michal, You requested a Haskell wiki account at Fri, 1 Mar 2019 16:08:10 +0100 > Date: Fri, 1 Mar 2019 16:08:10 +0100 > From: Miguel > To: wiki-account-request at haskell.org > Subject: wiki-account-request > oo_miguel Then, 8 minutes, 25 seconds later, you sent this message to Haskell-Cafe: > From: m.i at gmx.at (Miguel) > Date: Fri, 1 Mar 2019 16:16:35 +0100 > Subject: [Haskell-cafe] wiki-account-request > Message-ID: <20190301151635.kkpq2m4x7fkbftqj at megaloman.softwarefools.com> > Hi, > Not receiving any response from the other channel I hereby request a > wiki.haskell.org account with the preferred username: "oo_miguel" > Thanks in Advance, > Michal Idziorek > Ref: > https://wiki.haskell.org/index.php?title=Special:UserLogin > "If you would like an account please email 'wiki-account-request' > (at the domain haskell dot org) or, if you find that unresponsive, > on the haskell-cafe mailing list." At Fri, 1 Mar 2019 18:36:00 +0100, Henk-Jan_van_Tuyl created your account. This was 2 hours and 28 minutes after your initial request. Generally, we create accounts within 24 hours of their request, often much sooner. All of us who create accounts are volunteers. We create accounts manually to prevent spam. Regards, Howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From porrifolius at gmail.com Mon Mar 4 07:05:06 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 4 Mar 2019 20:05:06 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols Message-ID: Hi, this will probably be a somewhat rambling, open-ended question... my apologies in advance. There are some concrete questions after I explain what I'm trying to achieve, and some of them may even make sense! I'm planning on writing a multi-party, distributed protocol. I would like to informally model the protocol itself and the environment it would run in and then simulate it to see if it achieves what I expect. Then it would need to be implemented and tested of course. I think a formal, mathematical model of the protocol would be beyond my abilities. I'm looking for any advice or tool/library recommendations I can get on the modelling and simulating part of the process and how that work could be leveraged in the implementation to reduce the work and help ensure correctness. I have some ideas of how this could work, but I don't know if I'm approaching it with a suitably Haskell-like mindset... What I envisage at the moment is defining the possible behaviour of each party, who are not homogeneous, as well as things which form the external environment of the parties but will impact their behaviour. By external things I mean, for example, communication links which may drop or reorder packets, storage devices which may lose or corrupt data, machines that freeze or slew their clock... that sort of thing. I'm also picturing a 'universal coordinator' that controls interaction of the parties/environment during simulation. Modelling each party and external as a Harel-like statechart seems plausible. Perhaps some parts could be simpler FSMs, but I think many will be reasonably complex and involve internal state. It would be nice if the simulation could, to the limit of given processing time, exhaustively check the model rather than just randomly, as per quickcheck. 
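A very rough sketch of the kind of coordinator-driven exhaustive stepping being
described here (every type and name below is invented purely for illustration,
not taken from any existing library); the paragraph that follows elaborates the
idea:

data Party s = Party { current :: s, moves :: s -> [s] }

-- All worlds reachable in one step: each party independently either stays
-- put or takes one of the transitions it currently offers.
step :: [Party s] -> [[Party s]]
step = mapM choices
  where
    choices p = p : [ p { current = s' } | s' <- moves p (current p) ]

-- Exhaustively explore every combination of simultaneous moves to a given
-- depth, collecting all reachable worlds.
explore :: Int -> [Party s] -> [[Party s]]
explore 0 world = [world]
explore n world = concatMap (explore (n - 1)) (step world)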
If at each point the universal coordinator got a list of possible next states from each party/external it could simulate the simultaneity/sequencing of events by enumerating all combinations, of every size, of individual party/external transitions and then recursing with each separate combination applied as being the next set of simultaneous state transitions. To reduce the space to exhaustively search it would be nice if any values that the parties used, and were being tested by, were abstracted. Not sure what the technical term is but I mean asserting relationships between variables in a given domain, rather than their actual value. For example, imagine a distributed vote between two nodes where each node must vote for a random integer greater than their last vote. Each node is a party in the protocol. In the current branch of the exhaustive search of the protocol a relation between node 1's existing vote, v1, and node 2's existing vote, v2, has already been asserted: v1>v2. So when the coordinator enumerates the possibilities for sets of next-state transitions, with each node n asserting vn'>vn in their individual party state transition, it will prune any search branch which doesn't satisfy the union of the assertions (v1>v2, v1'>v1, v2'>v2), and recurse into the search branch alternatives with v1'v2'. So, some questions... Is what I'm suggesting even vaguely sensible way to approach the problem in Haskell? Or am I getting carried away and some neat tricks zipping list monoids or something will get the job done? Is there some fantastic tool/library which already does everything I want? Are there any good tools or libraries in Haskell for defining and manipulating statecharts? The sessiontypes, and sessiontypes-distributed, libraries are cool, representing the protocol at the type level... but these only handle point-to-point, two-party protocols, right? I expect that extending them would be a _serious_ amount of work. Can quickcheck do the sort of exhaustive coverage testing that I'd like? Again, not sure of the correct terminology but can quickcheck generate tests at a variable-relation assertion level? I.e. inspect the operators and boolean tests over a domain (+, ==, >=, append, length, contains, etc) and explore test cases according to whether certain relations between variables hold. Any feelings/opinions about whether Oleg Kiselyov's Typed Tagless Final Interpreter (http://okmij.org/ftp/tagless-final/index.html) work might be a useful mechanism to define the behaviour of parties within the protocol (the statecharts, basically) and then have different interpreters that enable both the testing and implementation of the protocol, to keep them more closely aligned? I read that work years ago and have wanted to try it out ever since, so I may be overlooking the fact that it's totally unsuitable! Perhaps superseded now, but if it can allow more of the protocol to be expressed at the type level that would be good. Yeah, rambling. Sorry about that. Thanks! porrifolius From K.Bleijenberg at lijbrandt.nl Mon Mar 4 11:50:13 2019 From: K.Bleijenberg at lijbrandt.nl (Kees Bleijenberg) Date: Mon, 4 Mar 2019 12:50:13 +0100 Subject: [Haskell-cafe] cannot catch exception Message-ID: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl> Hi all, The program reads lots of small Text files. readCDFile handles the encoding. Below is the simplest version of readCDFile. 
If I call readCDFile "/home/kees/freeDB/inputError/" "blah" (the file blah does not exist) I get:

Left "MyError: /home/kees/freeDB/inputError/blah: openBinaryFile: does not exist (No such file or directory)".

The exception is caught by exceptionHandler.

If I call readCDFile "/home/kees/freeDB/inputError/" "67129209" I get

freeDB: Cannot decode byte '\xa0': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream.

The exception is not caught by exceptionHandler (no "MyError: " in front). The file 67129209 is indeed badly encoded.

I'm using SomeException. Still, this 'bad encoding exception' is not caught. Why?

Kees

import qualified Data.Text as T
import System.FilePath.Posix
import qualified Data.Text.Encoding as TE
import qualified Data.ByteString.Lazy as B
import Prelude hiding (catch)
import Control.Exception

main :: IO ()
main = do
  res <- readCDFile "/home/kees/freeDB/inputError/" "67129209"
  print res

readCDFile :: FilePath -> FilePath -> IO (Either String T.Text)
readCDFile baseDir fn = do
  catch ( do
    buffer <- B.readFile (combine baseDir fn)
    let bufferStrict = B.toStrict buffer
    return $ Right $ TE.decodeUtf8 bufferStrict
    ) exceptionHandler

exceptionHandler :: SomeException -> IO (Either String T.Text)
exceptionHandler e = do let err = show e
                        return $ Left $ "MyError: " ++ err

From dsf at seereason.com  Mon Mar  4 13:35:23 2019
From: dsf at seereason.com (David Fox)
Date: Mon, 4 Mar 2019 05:35:23 -0800
Subject: [Haskell-cafe] cannot catch exception
In-Reply-To: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl>
References: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl>
Message-ID:

This fixes it by forcing the evaluation of the decode where it can be caught:

return $ Right $! TE.decodeUtf8 bufferStrict

or

Right <$> evaluate (TE.decodeUtf8 bufferStrict)

On Mon, Mar 4, 2019 at 3:50 AM Kees Bleijenberg wrote:

> Hi all,
>
> The program reads lots of small Text files. readCDFile handles the
> encoding. Below is the simplest version of readCDFile.
>
> If I call readCDFile "/home/kees/freeDB/inputError/" "blah" (the file blah
> does not exist) I get:
>
> Left "MyError: /home/kees/freeDB/inputError/blah: openBinaryFile: does not
> exist (No such file or directory)". The exception is caught by
> exceptionHandler.
>
> If I call readCDFile "/home/kees/freeDB/inputError/" "67129209" I get
> freeDB: Cannot decode byte '\xa0': Data.Text.Internal.Encoding.decodeUtf8:
> Invalid UTF-8 stream. The exception is not caught by exceptionHandler (No
> “MyError: ” in front). The file 67129209 is indeed badly encoded.
>
> I’m using SomeException. Still, this ‘bad encoding exception’ is not
> caught. Why?
> > > > Kees > > > > import qualified Data.Text as T > > import System.FilePath.Posix > > import qualified Data.Text.Encoding as TE > > import qualified Data.ByteString.Lazy as B > > import Prelude hiding (catch) > > import Control.Exception > > > > main :: IO () > > main = do > > res <- readCDFile "/home/kees/freeDB/inputError/" "67129209" > > print res > > > > readCDFile :: FilePath -> FilePath -> IO (Either String T.Text) > > readCDFile baseDir fn = do > > catch ( do > > buffer <- B.readFile (combine baseDir fn) > > let bufferStrict = B.toStrict buffer > > return $ Right $ TE.decodeUtf8 bufferStrict > > ) exceptionHandler > > > > exceptionHandler :: SomeException -> IO (Either String T.Text) > > exceptionHandler e = do let err = show e > > return $ Left $ "MyError: " ++ err > > > > > > > Virusvrij. > www.avast.com > > <#m_4240515626836095125_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Mar 4 14:15:57 2019 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 4 Mar 2019 16:15:57 +0200 Subject: [Haskell-cafe] cannot catch exception In-Reply-To: References: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl> Message-ID: Another approach would be to use the Data.Text.IO.hGetContents function on a file handle that explicitly sets its character encoding to UTF-8. This is what we do in rio: https://www.stackage.org/haddock/lts-13.10/rio-0.1.8.0/src/RIO.Prelude.IO.html#readFileUtf8 On Mon, Mar 4, 2019 at 3:36 PM David Fox wrote: > This fixes it by forcing the evaluation of the decode where it can be > caught: > > return $ Right $! TE.decodeUtf8 bufferStrict > > or > > Right <$> evaluate (TE.decodeUtf8 bufferStrict) > > > > On Mon, Mar 4, 2019 at 3:50 AM Kees Bleijenberg < > K.Bleijenberg at lijbrandt.nl> wrote: > >> Hi all, >> >> >> >> The program reads lots of small Text files. readCDFile handles the >> encoding. Below is the simplest version of readCDFile. >> >> If I call readCDFile "/home/kees/freeDB/inputError/" "blah" (the file >> blah does not exist) I get: >> >> Left "MyError: /home/kees/freeDB/inputError/blah: openBinaryFile: does >> not exist (No such file or directory)". The exception is caught by >> exceptionHandler >> >> If I call readCDFile "/home/kees/freeDB/inputError/" "67129209" I get >> freeDB: Cannot decode byte '\xa0': Data.Text.Internal.Encoding.decodeUtf8: >> Invalid UTF-8 stream. The exception is not caught by exceptionHandler (No >> “MyError: ” in front). The file 67129209 is indeed bad encoded. >> >> I’am using SomeException. Still, this ‘bad encoding exception’ is not >> caught. Why? 
>> >> >> >> Kees >> >> >> >> import qualified Data.Text as T >> >> import System.FilePath.Posix >> >> import qualified Data.Text.Encoding as TE >> >> import qualified Data.ByteString.Lazy as B >> >> import Prelude hiding (catch) >> >> import Control.Exception >> >> >> >> main :: IO () >> >> main = do >> >> res <- readCDFile "/home/kees/freeDB/inputError/" "67129209" >> >> print res >> >> >> >> readCDFile :: FilePath -> FilePath -> IO (Either String T.Text) >> >> readCDFile baseDir fn = do >> >> catch ( do >> >> buffer <- B.readFile (combine baseDir fn) >> >> let bufferStrict = B.toStrict buffer >> >> return $ Right $ TE.decodeUtf8 bufferStrict >> >> ) exceptionHandler >> >> >> >> exceptionHandler :: SomeException -> IO (Either String T.Text) >> >> exceptionHandler e = do let err = show e >> >> return $ Left $ "MyError: " ++ err >> >> >> >> >> >> >> Virusvrij. >> www.avast.com >> >> <#m_152412307694788344_m_4240515626836095125_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Mon Mar 4 14:33:14 2019 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 04 Mar 2019 09:33:14 -0500 Subject: [Haskell-cafe] Informal modelling and simulation of protocols Message-ID: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> > Is there some fantastic tool/library which already does everything I want? Have you looked into model checkers like Spin, which was developed for the very purpose of exhaustively checking protocols? See spinroot.com Doug McIlroy From K.Bleijenberg at lijbrandt.nl Mon Mar 4 20:26:41 2019 From: K.Bleijenberg at lijbrandt.nl (Kees Bleijenberg) Date: Mon, 4 Mar 2019 21:26:41 +0100 Subject: [Haskell-cafe] cannot catch exception In-Reply-To: References: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl> Message-ID: <000001d4d2c8$93fed2b0$bbfc7810$@lijbrandt.nl> David, I tried your first suggestion $!. Nothing changed. When I tried ‘Right <$> evaluate (TE.decodeUtf8 bufferStrict)’ success. handleException catches the exception. I don’t understand why. Maybe the documentation for the evaluate function below has to do with it: There is a subtle difference between evaluate x and return $! x, analogous to the difference between throwIO and throw. If the lazy value x throws an exception, return $! x will fail to return an IO action and will throw an exception instead. evaluate x, on the other hand, always produces an IO action; that action will throw an exception upon execution iff x throws an exception upon evaluation. I don’t fully understand this, but evaluate works. Thanks! Kees readCDFile :: FilePath -> FilePath -> IO (Either String T.Text) readCDFile baseDir fn = do catch ( do buffer <- B.readFile (combine baseDir fn) --reads strict the whole file let bufferStrict = B.toStrict buffer return $ Right $! 
TE.decodeUtf8 bufferStrict -- this doesn’t work
      Right <$> evaluate (TE.decodeUtf8 bufferStrict) -- this does
      liftM Right $ evaluate (TE.decodeUtf8 bufferStrict) -- this works too
    ) exceptionHandler

From: David Fox [mailto:dsf at seereason.com]

This fixes it by forcing the evaluation of the decode where it can be caught:

return $ Right $! TE.decodeUtf8 bufferStrict

or

Right <$> evaluate (TE.decodeUtf8 bufferStrict)

From allbery.b at gmail.com  Mon Mar  4 20:36:37 2019
From: allbery.b at gmail.com (Brandon Allbery)
Date: Mon, 4 Mar 2019 15:36:37 -0500
Subject: [Haskell-cafe] cannot catch exception
In-Reply-To: <000001d4d2c8$93fed2b0$bbfc7810$@lijbrandt.nl>
References: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl> <000001d4d2c8$93fed2b0$bbfc7810$@lijbrandt.nl>
Message-ID:

The problem is non-strict evaluation aka "laziness". "return" doesn't force
evaluation of the pure expression you give to it (actual I/O involving its
value generally would, but return just puts a wrapper around it), so its
evaluation is forced later, outside of your code to catch it, when the value
is actually examined. In extreme cases you end up using deepseq's rnf to
force full evaluation, but here it's enough to force evaluation to WHNF with
($!) before handing the result off to return. With a lazy ByteString you
might need rnf, since ($!) would only force part of the structure of the
list of chunks, but you would need to force all of the chunks' contents to
trigger the decode exception.
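A minimal, self-contained sketch of the fix being discussed, purely for
illustration (the helper name is invented; only the base, bytestring and text
packages are assumed):

import Control.Exception (SomeException, evaluate, try)
import qualified Data.ByteString as BS
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- Read a file and decode it as UTF-8, forcing the decode inside IO so a
-- bad byte sequence is caught here instead of escaping as a lazy thunk.
readFileUtf8Either :: FilePath -> IO (Either String T.Text)
readFileUtf8Either path = do
  bytes  <- BS.readFile path
  result <- try (evaluate (TE.decodeUtf8 bytes)) :: IO (Either SomeException T.Text)
  return (either (Left . show) Right result)

Data.Text.Encoding.decodeUtf8' is an alternative that returns
Either UnicodeException Text directly, avoiding the exception machinery
altogether.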
On Mon, Mar 4, 2019 at 3:27 PM Kees Bleijenberg wrote:

> David,
>
> I tried your first suggestion $!. Nothing changed.
> When I tried ‘Right <$> evaluate (TE.decodeUtf8 bufferStrict)’ success.
> handleException catches the exception.
> I don’t understand why. Maybe the documentation for the evaluate function
> below has to do with it:
>
> There is a subtle difference between evaluate x and return $! x, analogous
> to the difference between throwIO and throw. If the lazy value x throws an
> exception, return $! x will fail to return an IO action and will throw an
> exception instead. evaluate x, on the other hand, always produces an IO
> action; that action will throw an exception upon *execution* iff x throws
> an exception upon *evaluation*.
>
> I don’t fully understand this, but evaluate works. Thanks!
>
> Kees
>
> readCDFile :: FilePath -> FilePath -> IO (Either String T.Text)
> readCDFile baseDir fn = do
>   catch ( do
>     buffer <- B.readFile (combine baseDir fn) -- reads strict the whole file
>     let bufferStrict = B.toStrict buffer
>     return $ Right $! TE.decodeUtf8 bufferStrict -- this doesn’t work
>     Right <$> evaluate (TE.decodeUtf8 bufferStrict) -- this does
>     liftM Right $ evaluate (TE.decodeUtf8 bufferStrict) -- this works too
>     ) exceptionHandler
>
> From: David Fox [mailto:dsf at seereason.com]
> This fixes it by forcing the evaluation of the decode where it can be caught:
>
> return $ Right $! TE.decodeUtf8 bufferStrict
>
> or
>
> Right <$> evaluate (TE.decodeUtf8 bufferStrict)

-- 
brandon s allbery kf8nh
allbery.b at gmail.com

From ietf-dane at dukhovni.org  Mon Mar  4 22:02:26 2019
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Mon, 4 Mar 2019 17:02:26 -0500
Subject: [Haskell-cafe] cannot catch exception
In-Reply-To: <000001d4d2c8$93fed2b0$bbfc7810$@lijbrandt.nl>
References: <000601d4d280$6d9cdab0$48d69010$@lijbrandt.nl> <000001d4d2c8$93fed2b0$bbfc7810$@lijbrandt.nl>
Message-ID: <20190304220226.GN916@straasha.imrryr.org>

On Mon, Mar 04, 2019 at 09:26:41PM +0100, Kees Bleijenberg wrote:

> readCDFile :: FilePath -> FilePath -> IO (Either String T.Text)
> readCDFile baseDir fn = do
>   catch ( do
>     buffer <- B.readFile (combine baseDir fn) -- reads strict the whole file
>     let bufferStrict = B.toStrict buffer
>     return $ Right $! TE.decodeUtf8 bufferStrict -- this doesn’t work
>     Right <$> evaluate (TE.decodeUtf8 bufferStrict) -- this does
>     liftM Right $ evaluate (TE.decodeUtf8 bufferStrict) -- this works too
>     ) exceptionHandler

Take a close look at the three increasingly strict examples below:

ghci> do { x <- return $ Right $ undefined ; putStrLn "delayed..."; case x of { Right _ -> return 1; _ -> return 0 } }
delayed...
1

ghci> do { x <- return $ Right $! undefined; putStrLn "delayed..."; case x of { Right _ -> return 1; _ -> return 0 } }
delayed...
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
  error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
  undefined, called at <interactive>:8:29 in interactive:Ghci3

ghci> do { x <- return $! Right $! undefined ; putStrLn "delayed..."; case x of { Right _ -> return 1; _ -> return 0 } }
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
  error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
  undefined, called at <interactive>:9:30 in interactive:Ghci3

While "Right $! x" is strict in x, "return $ Right $! x" is not! You need
"return $! Right $! x" to make that happen.

-- 
Viktor.

From Graham.Hutton at nottingham.ac.uk  Tue Mar  5 10:09:23 2019
From: Graham.Hutton at nottingham.ac.uk (Graham Hutton)
Date: Tue, 5 Mar 2019 10:09:23 +0000
Subject: [Haskell-cafe] Second call for papers, MPC 2019, Portugal
Message-ID:

Dear all,

The next Mathematics of Program Construction (MPC) conference will be held
in Portugal in October 2019, co-located with the Symposium on Formal
Methods (FM). Paper submission is 3rd May 2019. Please share, and submit
your best papers!
Best wishes, Graham Hutton Program Chair, MPC 2019 ====================================================================== *** SECOND CALL FOR PAPERS -- MPC 2019 *** 13th International Conference on Mathematics of Program Construction 7-9 October 2019, Porto, Portugal Co-located with Formal Methods 2019 https://tinyurl.com/MPC-Porto ====================================================================== TIMELINE: Abstract submission 26th April 2019 Paper submission 3rd May 2019 Author notification 14th June 2019 Camera ready copy 12th July 2019 Conference 7-9 October 2019 KEYNOTE SPEAKERS: Assia Mahboubi INRIA, France Annabelle McIver Macquarie University, Australia BACKGROUND: The International Conference on Mathematics of Program Construction (MPC) aims to promote the development of mathematical principles and techniques that are demonstrably practical and effective in the process of constructing computer programs. MPC 2019 will be held in Porto, Portugal from 7-9 October 2019, and is co-located with the International Symposium on Formal Methods, FM 2019. Previous conferences were held in Königswinter, Germany (2015); Madrid, Spain (2012); Québec City, Canada (2010); Marseille, France (2008); Kuressaare, Estonia (2006); Stirling, UK (2004); Dagstuhl, Germany (2002); Ponte de Lima, Portugal (2000); Marstrand, Sweden (1998); Kloster Irsee, Germany (1995); Oxford, UK (1992); Twente, The Netherlands (1989). SCOPE: MPC seeks original papers on mathematical methods and tools put to use in program construction. Topics of interest range from algorithmics to support for program construction in programming languages and systems. Typical areas include type systems, program analysis and transformation, programming language semantics, security, and program logics. The notion of a 'program' is interpreted broadly, ranging from algorithms to hardware. Theoretical contributions are welcome, provided that their relevance to program construction is clear. Reports on applications are welcome, provided that their mathematical basis is evident. We also encourage the submission of 'programming pearls' that present elegant and instructive examples of the mathematics of program construction. SUBMISSION: Submission is in two stages. Abstracts (plain text, maximum 250 words) must be submitted by 26th April 2019. Full papers (pdf, formatted using the llncs.sty style file for LaTex) must be submitted by 3rd May 2019. There is no prescribed page limit, but authors should strive for brevity. Both abstracts and papers will be submitted using EasyChair. Papers must present previously unpublished work, and not be submitted concurrently to any other publication venue. Submissions will be evaluated by the program committee according to their relevance, correctness, significance, originality, and clarity. Each submission should explain its contributions in both general and technical terms, clearly identifying what has been accomplished, explaining why it is significant, and comparing it with previous work. Accepted papers must be presented in person at the conference by one of the authors. The proceedings of MPC 2019 will be published in the Lecture Notes in Computer Science (LNCS) series, as with all previous instances of the conference. Authors of accepted papers will be expected to transfer copyright to Springer for this purpose. After the conference, authors of the best papers from MPC 2019 and MPC 2015 will be invited to submit revised versions to a special issue of Science of Computer Programming (SCP). 
For any queries about submission please contact the program chair, Graham Hutton . PROGRAM COMMITTEE: Patrick Bahr IT University of Copenhagen, Denmark Richard Bird University of Oxford, UK Corina Cîrstea University of Southampton, UK Brijesh Dongol University of Surrey, UK João F. Ferreira University of Lisbon, Portugal Jennifer Hackett University of Nottingham, UK William Harrison University of Missouri, USA Ralf Hinze University of Kaiserslautern, Germany Zhenjiang Hu National Institute of Informatics, Japan Graham Hutton (chair) University of Nottingham, UK Cezar Ionescu University of Oxford, UK Mauro Jaskelioff National University of Rosario, Argentina Ranjit Jhala University of California, USA Gabriele Keller Utrecht University, The Netherlands Ekaterina Komendantskaya Heriot-Watt University, UK Chris Martens North Carolina State University, USA Bernhard Möller University of Augsburg, Germany Shin-Cheng Mu Academia Sinica, Taiwan Mary Sheeran Chalmers University of Technology, Sweden Alexandra Silva University College London, UK Georg Struth University of Sheffield, UK CONFERENE VENUE: The conference will be held at the Alfândega Porto Congress Centre, a 150 year old former custom's house located in the historic centre of Porto on the bank of the river Douro. The venue was renovated by a Pritzer prize winning architect and has received many awards. LOCAL ORGANISERS: José Nuno Oliveira University of Minho, Portugal For any queries about local issues please contact the local organiser, José Nuno Oliveira . ====================================================================== This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law. From will.yager at gmail.com Tue Mar 5 10:11:27 2019 From: will.yager at gmail.com (Will Yager) Date: Tue, 5 Mar 2019 18:11:27 +0800 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> References: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> Message-ID: <61B2B1A3-0CAA-4DD2-B8FD-5406B715B536@gmail.com> http://hackage.haskell.org/package/dejafu might do what you want > On Mar 4, 2019, at 10:33 PM, Doug McIlroy wrote: > > >> Is there some fantastic tool/library which already does everything I want? > > Have you looked into model checkers like Spin, which was developed > for the very purpose of exhaustively checking protocols? See spinroot.com > > Doug McIlroy > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From damien.mattei at gmail.com Tue Mar 5 10:41:53 2019 From: damien.mattei at gmail.com (Damien Mattei) Date: Tue, 5 Mar 2019 11:41:53 +0100 Subject: [Haskell-cafe] creating standalone executable with haskell Message-ID: i want to run Haskell code on a system i'm not administrator??? 
If I compile my Haskell program rather than use the interpreter (I never
compile it), can I run the executable on another Linux platform which has
nothing of Haskell installed on it (no runtime library, etc.)?
I already did that with Racket Scheme; could it be done with the GHC
compiler, or should I ask the system admin to install the Haskell platform?

regards,
damien

From bneijt at gmail.com  Tue Mar  5 12:29:30 2019
From: bneijt at gmail.com (Bram Neijt)
Date: Tue, 5 Mar 2019 13:29:30 +0100
Subject: [Haskell-cafe] creating standalone executable with haskell
In-Reply-To:
References:
Message-ID:

For binaries there is the concept of a static binary that does not require
any libraries. It is bound to the kernel ABI, but you could try that
approach: https://www.fpcomplete.com/blog/2016/10/static-compilation-with-stack

Good luck,

Bram (Now also on the list)

On Tue, 5 Mar 2019, 11:42 Damien Mattei wrote:

> i want to run Haskell code on a system i'm not administrator???
> if i compile my haskell program rather than use interpreter (i never
> compile it) can i run the executable on another Linux platform which has
> nothing of haskell installed on it ( no runtime library,etc...) ?
> i already did that with Racket Scheme could it be done with GHC compiler
> or should i ask the system admin to install haskell platform?
> regards,
> damien

From mail at nh2.me  Tue Mar  5 12:33:46 2019
From: mail at nh2.me (Niklas Hambüchen)
Date: Tue, 5 Mar 2019 13:33:46 +0100
Subject: [Haskell-cafe] creating standalone executable with haskell
In-Reply-To:
References:
Message-ID:

The two topics of creating standalone executables, and you not being a
system admin, are very separate and don't have much to do with each other.

On the admin topic: You can set up a full Haskell build environment in your
home directory even if you are not an admin.

On the standalone executable topic: Most Haskell compiled executables can
easily run on other Linux systems. In the default build, a Haskell
executable has only a few runtime dependencies via dynamic linking.
Here's an example for a HelloWorld program when compiled with `ghc --make`:

% ldd Hello
        linux-vdso.so.1 =>  (0x00007ffcd9dea000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3799dcd000)
        libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f3799b4d000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f3799945000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3799741000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3799524000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f379915a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f379a0d6000)

As you can see, a few C libraries must be present on the system. They are
already present on most systems. Again, you don't need to be admin to get
those; if they are missing, you can also ship them along with your
executable and set LD_LIBRARY_PATH.

In general, executables created this way on one Linux flavour work
reasonably well on other Linux flavours, but not always. An example (if I
remember correctly) where it doesn't work is CentOS 6 vs.
newer Debian; one has libgmp.so.3 and one libgmp.so.6. This means that if you are using the default dynamic linking of C dependencies, you may have to ship "a few" (usually two) flavours of your executable. In general, executables created on one OS version work well on newer ones, e.g. something created on Ubuntu 16.04 will work well on newer Ubuntus. You can also link everything statically. Then your executable should work on any Linux system. But you need to learn a few more things to do so; I try to make it as convenient as possible with my project https://github.com/nh2/static-haskell-nix/ Niklas From ivanperezdominguez at gmail.com Tue Mar 5 13:29:41 2019 From: ivanperezdominguez at gmail.com (Ivan Perez) Date: Tue, 5 Mar 2019 08:29:41 -0500 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: Hi Partly related; for recent work we did two things: - Model part of the collision avoidance logic of a satellite as a state machine, implement it in haskell (14 locs) and use small check to verify that the state transitions fulfill the properties we expect for collision avoidance. - Model small sats as Monadic Stream Functions [1], implement a communication protocol on top (also using MSFs), and use quickcheck + fault injection to verify distributed consensus in the presence of faults. The next (immediate) step will be to bound the trace length and use smallcheck or something that exhausts the input space. I have also implemented Raft using MSFs, and the idea is to verify properties in a similar way. I know this is vague. Let me know if you have any questions or want more details. Ivan [1] https://github.com/ivanperez-keera/dunai [2] https://arc.aiaa.org/doi/abs/10.2514/6.2019-1187 On Mon, 4 Mar 2019 at 02:05, P Orrifolius wrote: > Hi, > > this will probably be a somewhat rambling, open-ended question... my > apologies in advance. > There are some concrete questions after I explain what I'm trying to > achieve, and some of them may even make sense! > > I'm planning on writing a multi-party, distributed protocol. I would > like to informally model the protocol itself and the environment it > would run in and then simulate it to see if it achieves what I expect. > Then it would need to be implemented and tested of course. > I think a formal, mathematical model of the protocol would be beyond > my abilities. > > I'm looking for any advice or tool/library recommendations I can get > on the modelling and simulating part of the process and how that work > could be leveraged in the implementation to reduce the work and help > ensure correctness. > > > I have some ideas of how this could work, but I don't know if I'm > approaching it with a suitably Haskell-like mindset... > What I envisage at the moment is defining the possible behaviour of > each party, who are not homogeneous, as well as things which form the > external environment of the parties but will impact their behaviour. > By external things I mean, for example, communication links which may > drop or reorder packets, storage devices which may lose or corrupt > data, machines that freeze or slew their clock... that sort of thing. > I'm also picturing a 'universal coordinator' that controls interaction > of the parties/environment during simulation. > > Modelling each party and external as a Harel-like statechart seems > plausible. Perhaps some parts could be simpler FSMs, but I think many > will be reasonably complex and involve internal state. 
> > It would be nice if the simulation could, to the limit of given > processing time, exhaustively check the model rather than just > randomly, as per quickcheck. > If at each point the universal coordinator got a list of possible next > states from each party/external it could simulate the > simultaneity/sequencing of events by enumerating all combinations, of > every size, of individual party/external transitions and then > recursing with each separate combination applied as being the next set > of simultaneous state transitions. > > To reduce the space to exhaustively search it would be nice if any > values that the parties used, and were being tested by, were > abstracted. Not sure what the technical term is but I mean asserting > relationships between variables in a given domain, rather than their > actual value. > For example, imagine a distributed vote between two nodes where each > node must vote for a random integer greater than their last vote. > Each node is a party in the protocol. > In the current branch of the exhaustive search of the protocol a > relation between node 1's existing vote, v1, and node 2's existing > vote, v2, has already been asserted: v1>v2. > So when the coordinator enumerates the possibilities for sets of > next-state transitions, with each node n asserting vn'>vn in their > individual party state transition, it will prune any search branch > which doesn't satisfy the union of the assertions (v1>v2, v1'>v1, > v2'>v2), and recurse into the search branch alternatives with v1' or v1'=v2', or v1'>v2'. > > > > So, some questions... > > Is what I'm suggesting even vaguely sensible way to approach the > problem in Haskell? Or am I getting carried away and some neat tricks > zipping list monoids or something will get the job done? > > Is there some fantastic tool/library which already does everything I want? > > Are there any good tools or libraries in Haskell for defining and > manipulating statecharts? > > The sessiontypes, and sessiontypes-distributed, libraries are cool, > representing the protocol at the type level... but these only handle > point-to-point, two-party protocols, right? > I expect that extending them would be a _serious_ amount of work. > > Can quickcheck do the sort of exhaustive coverage testing that I'd like? > > Again, not sure of the correct terminology but can quickcheck generate > tests at a variable-relation assertion level? I.e. inspect the > operators and boolean tests over a domain (+, ==, >=, append, length, > contains, etc) and explore test cases according to whether certain > relations between variables hold. > > Any feelings/opinions about whether Oleg Kiselyov's Typed Tagless > Final Interpreter (http://okmij.org/ftp/tagless-final/index.html) work > might be a useful mechanism to define the behaviour of parties within > the protocol (the statecharts, basically) and then have different > interpreters that enable both the testing and implementation of the > protocol, to keep them more closely aligned? > I read that work years ago and have wanted to try it out ever since, > so I may be overlooking the fact that it's totally unsuitable! > Perhaps superseded now, but if it can allow more of the protocol to be > expressed at the type level that would be good. > > > Yeah, rambling. Sorry about that. > > > Thanks! 
> porrifolius > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue Mar 5 17:00:38 2019 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 5 Mar 2019 12:00:38 -0500 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: TLA plus might suit you! I've actually had some success going back and forth between copattern style code and TLA specs, its not easy and not perfect, but it works i've been slowly experimenting with modelling protocols as coinductive / copattern style trick https://www.reddit.com/r/haskell/comments/4aju8f/simple_example_of_emulating_copattern_matching_in/ is a small toy exposition i did i am messing around with some reusable abstractions, though its definitely a challenging problem On Mon, Mar 4, 2019 at 2:05 AM P Orrifolius wrote: > Hi, > > this will probably be a somewhat rambling, open-ended question... my > apologies in advance. > There are some concrete questions after I explain what I'm trying to > achieve, and some of them may even make sense! > > I'm planning on writing a multi-party, distributed protocol. I would > like to informally model the protocol itself and the environment it > would run in and then simulate it to see if it achieves what I expect. > Then it would need to be implemented and tested of course. > I think a formal, mathematical model of the protocol would be beyond > my abilities. > > I'm looking for any advice or tool/library recommendations I can get > on the modelling and simulating part of the process and how that work > could be leveraged in the implementation to reduce the work and help > ensure correctness. > > > I have some ideas of how this could work, but I don't know if I'm > approaching it with a suitably Haskell-like mindset... > What I envisage at the moment is defining the possible behaviour of > each party, who are not homogeneous, as well as things which form the > external environment of the parties but will impact their behaviour. > By external things I mean, for example, communication links which may > drop or reorder packets, storage devices which may lose or corrupt > data, machines that freeze or slew their clock... that sort of thing. > I'm also picturing a 'universal coordinator' that controls interaction > of the parties/environment during simulation. > > Modelling each party and external as a Harel-like statechart seems > plausible. Perhaps some parts could be simpler FSMs, but I think many > will be reasonably complex and involve internal state. > > It would be nice if the simulation could, to the limit of given > processing time, exhaustively check the model rather than just > randomly, as per quickcheck. > If at each point the universal coordinator got a list of possible next > states from each party/external it could simulate the > simultaneity/sequencing of events by enumerating all combinations, of > every size, of individual party/external transitions and then > recursing with each separate combination applied as being the next set > of simultaneous state transitions. > > To reduce the space to exhaustively search it would be nice if any > values that the parties used, and were being tested by, were > abstracted. 
Not sure what the technical term is but I mean asserting > relationships between variables in a given domain, rather than their > actual value. > For example, imagine a distributed vote between two nodes where each > node must vote for a random integer greater than their last vote. > Each node is a party in the protocol. > In the current branch of the exhaustive search of the protocol a > relation between node 1's existing vote, v1, and node 2's existing > vote, v2, has already been asserted: v1>v2. > So when the coordinator enumerates the possibilities for sets of > next-state transitions, with each node n asserting vn'>vn in their > individual party state transition, it will prune any search branch > which doesn't satisfy the union of the assertions (v1>v2, v1'>v1, > v2'>v2), and recurse into the search branch alternatives with v1' or v1'=v2', or v1'>v2'. > > > > So, some questions... > > Is what I'm suggesting even vaguely sensible way to approach the > problem in Haskell? Or am I getting carried away and some neat tricks > zipping list monoids or something will get the job done? > > Is there some fantastic tool/library which already does everything I want? > > Are there any good tools or libraries in Haskell for defining and > manipulating statecharts? > > The sessiontypes, and sessiontypes-distributed, libraries are cool, > representing the protocol at the type level... but these only handle > point-to-point, two-party protocols, right? > I expect that extending them would be a _serious_ amount of work. > > Can quickcheck do the sort of exhaustive coverage testing that I'd like? > > Again, not sure of the correct terminology but can quickcheck generate > tests at a variable-relation assertion level? I.e. inspect the > operators and boolean tests over a domain (+, ==, >=, append, length, > contains, etc) and explore test cases according to whether certain > relations between variables hold. > > Any feelings/opinions about whether Oleg Kiselyov's Typed Tagless > Final Interpreter (http://okmij.org/ftp/tagless-final/index.html) work > might be a useful mechanism to define the behaviour of parties within > the protocol (the statecharts, basically) and then have different > interpreters that enable both the testing and implementation of the > protocol, to keep them more closely aligned? > I read that work years ago and have wanted to try it out ever since, > so I may be overlooking the fact that it's totally unsuitable! > Perhaps superseded now, but if it can allow more of the protocol to be > expressed at the type level that would be good. > > > Yeah, rambling. Sorry about that. > > > Thanks! > porrifolius > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed Mar 6 01:21:09 2019 From: ben at well-typed.com (Ben Gamari) Date: Tue, 05 Mar 2019 20:21:09 -0500 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration Message-ID: <8736o0ssqo.fsf@smart-cactus.org> Hi everyone, Over the past few weeks we have been hard at work sorting out the last batch of issues in GHC's Trac-to-GitLab import [1]. 
At this point I believe we have sorted out the issues which are necessary to perform the final migration: * We are missing only two tickets (#1436 and #2074 which will require a bit of manual intervention to import due to extremely large description lengths) * A variety of markup issues have been resolved * More metadata is now preserved via labels. We may choose to reorganize or eliminate some of these labels in time but it's easier to remove metadata after import than it is to reintroduce it. The logic which maps Trac metadata to GitLab labels can be found here [2] * We now generate a Wiki table of contents [3] which is significantly more readable than GitLab's default page list. This will be updated by a cron job until underlying GitLab pages list becomes more readable. * We now generate redirects for Trac ticket and Wiki links (although this isn't visible in the staging instance) * Milestones are now properly closed when closed in Trac * Mapping between Trac and GitLab usernames is now a bit more robust As in previous test imports, we would appreciate it if you could have a look over the import and let us know of any problems your encounter. If no serious issues are identified with the staging site we plan to proceed with the migration this coming weekend. The current migration plan is to perform the final import on gitlab.haskell.org on Saturday, 9 March 2019. This will involve both gitlab.haskell.org and ghc.haskell.org being down for likely the entirety of the day Saturday and likely some of Sunday (EST time zone). Read-only access will be available to gitlab.staging.haskell.org for ticket lookup while the import is underway. After the import we will wait at least a week or so before we begin the process of decommissioning Trac, which will be kept in read-only mode for the duration. Do let me know if the 9 March timing is problematic. Cheers, - Ben [1] https://gitlab.staging.haskell.org/ghc/ghc [2] https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227 [3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Wed Mar 6 06:32:44 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 6 Mar 2019 09:32:44 +0300 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <8736o0ssqo.fsf@smart-cactus.org> References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: This look great, thanks to everyone involved! Some feedback: - When I click to the "Wiki" link on the left it opens "Home" page and I don't know how to go to the index from there. I think we may want index to be the home page for the wiki? - Redirects don't seem to work: https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/heap-objects - Comparing these two pages: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects?redirectedfrom=Commentary/Rts/HeapObjects https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/storage/heap-objects The Gitlab page doesn't have images that Trac page has. Secondly, the "_|_" string used in the Trac page is migrated as italic "|" in Gitlab. Ömer Ben Gamari , 6 Mar 2019 Çar, 04:21 tarihinde şunu yazdı: > > Hi everyone, > > Over the past few weeks we have been hard at work sorting out the > last batch of issues in GHC's Trac-to-GitLab import [1]. 
At this point I > believe we have sorted out the issues which are necessary to perform the > final migration: > > * We are missing only two tickets (#1436 and #2074 which will require a > bit of manual intervention to import due to extremely large > description lengths) > > * A variety of markup issues have been resolved > > * More metadata is now preserved via labels. We may choose to > reorganize or eliminate some of these labels in time but it's easier > to remove metadata after import than it is to reintroduce it. The > logic which maps Trac metadata to GitLab labels can be found here [2] > > * We now generate a Wiki table of contents [3] which is significantly > more readable than GitLab's default page list. This will be updated > by a cron job until underlying GitLab pages list becomes more > readable. > > * We now generate redirects for Trac ticket and Wiki links (although > this isn't visible in the staging instance) > > * Milestones are now properly closed when closed in Trac > > * Mapping between Trac and GitLab usernames is now a bit more robust > > As in previous test imports, we would appreciate it if you could have a > look over the import and let us know of any problems your encounter. > > If no serious issues are identified with the staging site we plan to > proceed with the migration this coming weekend. The current migration > plan is to perform the final import on gitlab.haskell.org on Saturday, 9 > March 2019. > > This will involve both gitlab.haskell.org and ghc.haskell.org being down > for likely the entirety of the day Saturday and likely some of Sunday > (EST time zone). Read-only access will be available to > gitlab.staging.haskell.org for ticket lookup while the import is > underway. > > After the import we will wait at least a week or so before we begin the > process of decommissioning Trac, which will be kept in read-only mode > for the duration. > > Do let me know if the 9 March timing is problematic. > > Cheers, > > - Ben > > > [1] https://gitlab.staging.haskell.org/ghc/ghc > [2] https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227 > [3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From tdammers at gmail.com Wed Mar 6 10:55:15 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Wed, 6 Mar 2019 11:55:15 +0100 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: <20190306105513.wmozsv7qnlmv2jzd@nibbler> On Wed, Mar 06, 2019 at 09:32:44AM +0300, Ömer Sinan Ağacan wrote: > - Redirects don't seem to work: > https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/heap-objects I believe this is an unfortunate result of the way we migrate wiki pages. The way that works is that we don't actually parse the original Trac markup; instead, we scrape the rendered HTML directly from the live Trac instance, and massage that into GitLab markup. This has a few interesting consequences: 1. "Wiki processors", such as for example dynamically-generated TOCs and issue lists, get to run on the Trac instance as we request the page, and thus capture a snapshot of the dynamic data at the time of migration. 2. 
Redirects, being implemented as such wiki processors, cause client-side redirects, which our scraper will not follow. Hence, the converted page is based on an HTML page body that you don't normally get to see, and no actual redirect is generated on the GitLab side of things. 3. The scraper only looks at what is normally the actual page content; any additional UI generated outside of the main content element is ignored. Hence, when Trac generates links to the redirect target for clients that do not support client-side redirects, those links don't make it into the converted page. 4. Because redirects are usually the last thing to be added to a page, that page's history ends there, and becomes the "current" version on the GitLab side. So we end up with what you're seeing: a nonsensical page that contains the fallback content, a somewhat cryptic question asking whether it should redirect, and no way to answer that question. Since GitLab doesn't have an equivalent to those "wiki processors", and AFAIK does not cater for such redirects, the question is how we should handle these. I can think of several options: 1. Do nothing; when anyone complains, fix the offending pages manually (either by converting the useless redirect message into a proper hyperlink, or by manually adding a rewrite entry to the nginx configuration). 2. Generate a list of redirecting pages from the Trac dataset, either as part of the import (2a), or with some grep/sed/awk magic based on the converted git repo after the fact (2b); then use that list to generate suitable nginx redirects. 3. Extend the import script to detect redirects, and special-case those so that they render as proper links to the redirect target. 4. Do more research and see if there is a way to make GitLab redirect based on wiki content, then extend the import script like in step 3, but render redirecting pages to use the (currently hypothetical) redirect feature. Personally, I'm inclined to say let's go with option 2b: run the import, then grep for 'redirect(wiki:', and massage that into nginx redirects. TL;DR: the import currently ignores Trac wiki redirects, and I'm not sure what the best way is to deal with this. From ben at well-typed.com Wed Mar 6 11:02:45 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 06 Mar 2019 06:02:45 -0500 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: On March 6, 2019 1:32:44 AM EST, "Ömer Sinan Ağacan" wrote: >This look great, thanks to everyone involved! > >Some feedback: > >- When I click to the "Wiki" link on the left it opens "Home" page and >I don't >know how to go to the index from there. I think we may want index to be >the > home page for the wiki? > Yes, I do think we at least want to link to the index from the wiki home page. >- Redirects don't seem to work: >https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/heap-objects > Yes this needs to be fixed. -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From ben at well-typed.com Wed Mar 6 11:09:35 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 06 Mar 2019 06:09:35 -0500 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <20190306105513.wmozsv7qnlmv2jzd@nibbler> References: <8736o0ssqo.fsf@smart-cactus.org> <20190306105513.wmozsv7qnlmv2jzd@nibbler> Message-ID: <58274B81-E1C1-4607-ABFD-9427967B5B40@well-typed.com> The lacking redirect support is unfortunate. 
In my opinion this is something we will need to handle going forward as well; a one time solution like adding nginx redirects doesn't seem like the right approach to me. I would rather advocate either option 3 or one of the following options: 5. Detect redirects and convert them to symbolic links in the repo 6. Request redirect support in the gitlab wiki. On March 6, 2019 5:55:15 AM EST, Tobias Dammers wrote: >On Wed, Mar 06, 2019 at 09:32:44AM +0300, Ömer Sinan Ağacan wrote: >> - Redirects don't seem to work: >> >https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/heap-objects > >I believe this is an unfortunate result of the way we migrate wiki >pages. The way that works is that we don't actually parse the original >Trac markup; instead, we scrape the rendered HTML directly from the >live >Trac instance, and massage that into GitLab markup. > >This has a few interesting consequences: > >1. "Wiki processors", such as for example dynamically-generated TOCs >and >issue lists, get to run on the Trac instance as we request the page, >and >thus capture a snapshot of the dynamic data at the time of migration. >2. Redirects, being implemented as such wiki processors, cause >client-side redirects, which our scraper will not follow. Hence, the >converted page is based on an HTML page body that you don't normally >get >to see, and no actual redirect is generated on the GitLab side of >things. >3. The scraper only looks at what is normally the actual page content; >any additional UI generated outside of the main content element is >ignored. Hence, when Trac generates links to the redirect target for >clients that do not support client-side redirects, those links don't >make it into the converted page. >4. Because redirects are usually the last thing to be added to a page, >that page's history ends there, and becomes the "current" version on >the >GitLab side. So we end up with what you're seeing: a nonsensical page >that contains the fallback content, a somewhat cryptic question asking >whether it should redirect, and no way to answer that question. > >Since GitLab doesn't have an equivalent to those "wiki processors", and >AFAIK does not cater for such redirects, the question is how we should >handle these. I can think of several options: > >1. Do nothing; when anyone complains, fix the offending pages manually >(either by converting the useless redirect message into a proper >hyperlink, or by manually adding a rewrite entry to the nginx >configuration). >2. Generate a list of redirecting pages from the Trac dataset, either >as >part of the import (2a), or with some grep/sed/awk magic based on the >converted git repo after the fact (2b); then use that list to generate >suitable nginx redirects. >3. Extend the import script to detect redirects, and special-case those >so that they render as proper links to the redirect target. >4. Do more research and see if there is a way to make GitLab redirect >based on wiki content, then extend the import script like in step 3, >but >render redirecting pages to use the (currently hypothetical) redirect >feature. > >Personally, I'm inclined to say let's go with option 2b: run the >import, >then grep for 'redirect(wiki:', and massage that into nginx redirects. > >TL;DR: the import currently ignores Trac wiki redirects, and I'm not >sure what the best way is to deal with this. 
>_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Wed Mar 6 11:11:49 2019 From: me at ara.io (Ara Adkins) Date: Wed, 6 Mar 2019 11:11:49 +0000 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <8736o0ssqo.fsf@smart-cactus.org> References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: Super excited for this! Thank you to everyone whose put in so much hard work to get it done! One question: what is happening with the trac tickets mailing list? I imagine it’ll be going away, but for those of us that use it to keep track of things is there a recommended alternative? Best, _ara > On 6 Mar 2019, at 01:21, Ben Gamari wrote: > > Hi everyone, > > Over the past few weeks we have been hard at work sorting out the > last batch of issues in GHC's Trac-to-GitLab import [1]. At this point I > believe we have sorted out the issues which are necessary to perform the > final migration: > > * We are missing only two tickets (#1436 and #2074 which will require a > bit of manual intervention to import due to extremely large > description lengths) > > * A variety of markup issues have been resolved > > * More metadata is now preserved via labels. We may choose to > reorganize or eliminate some of these labels in time but it's easier > to remove metadata after import than it is to reintroduce it. The > logic which maps Trac metadata to GitLab labels can be found here [2] > > * We now generate a Wiki table of contents [3] which is significantly > more readable than GitLab's default page list. This will be updated > by a cron job until underlying GitLab pages list becomes more > readable. > > * We now generate redirects for Trac ticket and Wiki links (although > this isn't visible in the staging instance) > > * Milestones are now properly closed when closed in Trac > > * Mapping between Trac and GitLab usernames is now a bit more robust > > As in previous test imports, we would appreciate it if you could have a > look over the import and let us know of any problems your encounter. > > If no serious issues are identified with the staging site we plan to > proceed with the migration this coming weekend. The current migration > plan is to perform the final import on gitlab.haskell.org on Saturday, 9 > March 2019. > > This will involve both gitlab.haskell.org and ghc.haskell.org being down > for likely the entirety of the day Saturday and likely some of Sunday > (EST time zone). Read-only access will be available to > gitlab.staging.haskell.org for ticket lookup while the import is > underway. > > After the import we will wait at least a week or so before we begin the > process of decommissioning Trac, which will be kept in read-only mode > for the duration. > > Do let me know if the 9 March timing is problematic. > > Cheers, > > - Ben > > > [1] https://gitlab.staging.haskell.org/ghc/ghc > [2] https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227 > [3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. 
From ben at well-typed.com Wed Mar 6 11:21:34 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 06 Mar 2019 06:21:34 -0500 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: <91121787-E6C0-45E4-BB7B-B25FBF2B440E@well-typed.com> On March 6, 2019 6:11:49 AM EST, Ara Adkins wrote: >Super excited for this! Thank you to everyone whose put in so much hard >work to get it done! > >One question: what is happening with the trac tickets mailing list? I >imagine it’ll be going away, but for those of us that use it to keep >track of things is there a recommended alternative? > The ghc-commits list will continue to work. The ghc-tickets list is a good question. I suspect that under gitlab there will be less need for this list but we may still want to continue maintaining it regardless for continuity's sake. Thoughts? Cheers, - Ben >Best, >_ara > >> On 6 Mar 2019, at 01:21, Ben Gamari wrote: >> >> Hi everyone, >> >> Over the past few weeks we have been hard at work sorting out the >> last batch of issues in GHC's Trac-to-GitLab import [1]. At this >point I >> believe we have sorted out the issues which are necessary to perform >the >> final migration: >> >> * We are missing only two tickets (#1436 and #2074 which will require >a >> bit of manual intervention to import due to extremely large >> description lengths) >> >> * A variety of markup issues have been resolved >> >> * More metadata is now preserved via labels. We may choose to >> reorganize or eliminate some of these labels in time but it's >easier >> to remove metadata after import than it is to reintroduce it. The >> logic which maps Trac metadata to GitLab labels can be found here >[2] >> >> * We now generate a Wiki table of contents [3] which is significantly >> more readable than GitLab's default page list. This will be updated >> by a cron job until underlying GitLab pages list becomes more >> readable. >> >> * We now generate redirects for Trac ticket and Wiki links (although >> this isn't visible in the staging instance) >> >> * Milestones are now properly closed when closed in Trac >> >> * Mapping between Trac and GitLab usernames is now a bit more robust >> >> As in previous test imports, we would appreciate it if you could have >a >> look over the import and let us know of any problems your encounter. >> >> If no serious issues are identified with the staging site we plan to >> proceed with the migration this coming weekend. The current migration >> plan is to perform the final import on gitlab.haskell.org on >Saturday, 9 >> March 2019. >> >> This will involve both gitlab.haskell.org and ghc.haskell.org being >down >> for likely the entirety of the day Saturday and likely some of Sunday >> (EST time zone). Read-only access will be available to >> gitlab.staging.haskell.org for ticket lookup while the import is >> underway. >> >> After the import we will wait at least a week or so before we begin >the >> process of decommissioning Trac, which will be kept in read-only mode >> for the duration. >> >> Do let me know if the 9 March timing is problematic. 
>> >> Cheers, >> >> - Ben >> >> >> [1] https://gitlab.staging.haskell.org/ghc/ghc >> [2] >https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227 >> [3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From me at ara.io Wed Mar 6 11:33:39 2019 From: me at ara.io (Ara Adkins) Date: Wed, 6 Mar 2019 11:33:39 +0000 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <91121787-E6C0-45E4-BB7B-B25FBF2B440E@well-typed.com> References: <8736o0ssqo.fsf@smart-cactus.org> <91121787-E6C0-45E4-BB7B-B25FBF2B440E@well-typed.com> Message-ID: Personally I would like to see it continued, but it may not be worth the work if I’m in a minority here. A potential stopgap would be to ‘watch’ the GHC project on our gitlab instance, but I can’t see any way to decide to get emails for notifications rather than having to check in at GitLab all the time. _ara > On 6 Mar 2019, at 11:21, Ben Gamari wrote: > > > >> On March 6, 2019 6:11:49 AM EST, Ara Adkins wrote: >> Super excited for this! Thank you to everyone whose put in so much hard >> work to get it done! >> >> One question: what is happening with the trac tickets mailing list? I >> imagine it’ll be going away, but for those of us that use it to keep >> track of things is there a recommended alternative? >> > The ghc-commits list will continue to work. > > The ghc-tickets list is a good question. I suspect that under gitlab there will be less need for this list but we may still want to continue maintaining it regardless for continuity's sake. Thoughts? > > Cheers, > > - Ben > > > >> Best, >> _ara >> >>> On 6 Mar 2019, at 01:21, Ben Gamari wrote: >>> >>> Hi everyone, >>> >>> Over the past few weeks we have been hard at work sorting out the >>> last batch of issues in GHC's Trac-to-GitLab import [1]. At this >> point I >>> believe we have sorted out the issues which are necessary to perform >> the >>> final migration: >>> >>> * We are missing only two tickets (#1436 and #2074 which will require >> a >>> bit of manual intervention to import due to extremely large >>> description lengths) >>> >>> * A variety of markup issues have been resolved >>> >>> * More metadata is now preserved via labels. We may choose to >>> reorganize or eliminate some of these labels in time but it's >> easier >>> to remove metadata after import than it is to reintroduce it. The >>> logic which maps Trac metadata to GitLab labels can be found here >> [2] >>> >>> * We now generate a Wiki table of contents [3] which is significantly >>> more readable than GitLab's default page list. This will be updated >>> by a cron job until underlying GitLab pages list becomes more >>> readable. >>> >>> * We now generate redirects for Trac ticket and Wiki links (although >>> this isn't visible in the staging instance) >>> >>> * Milestones are now properly closed when closed in Trac >>> >>> * Mapping between Trac and GitLab usernames is now a bit more robust >>> >>> As in previous test imports, we would appreciate it if you could have >> a >>> look over the import and let us know of any problems your encounter. 
>>> >>> If no serious issues are identified with the staging site we plan to >>> proceed with the migration this coming weekend. The current migration >>> plan is to perform the final import on gitlab.haskell.org on >> Saturday, 9 >>> March 2019. >>> >>> This will involve both gitlab.haskell.org and ghc.haskell.org being >> down >>> for likely the entirety of the day Saturday and likely some of Sunday >>> (EST time zone). Read-only access will be available to >>> gitlab.staging.haskell.org for ticket lookup while the import is >>> underway. >>> >>> After the import we will wait at least a week or so before we begin >> the >>> process of decommissioning Trac, which will be kept in read-only mode >>> for the duration. >>> >>> Do let me know if the 9 March timing is problematic. >>> >>> Cheers, >>> >>> - Ben >>> >>> >>> [1] https://gitlab.staging.haskell.org/ghc/ghc >>> [2] >> https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227 >>> [3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. From tdammers at gmail.com Wed Mar 6 12:05:28 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Wed, 6 Mar 2019 13:05:28 +0100 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <58274B81-E1C1-4607-ABFD-9427967B5B40@well-typed.com> References: <8736o0ssqo.fsf@smart-cactus.org> <20190306105513.wmozsv7qnlmv2jzd@nibbler> <58274B81-E1C1-4607-ABFD-9427967B5B40@well-typed.com> Message-ID: <20190306120527.mrinpp7j5j3koxk6@nibbler> On Wed, Mar 06, 2019 at 06:09:35AM -0500, Ben Gamari wrote: > The lacking redirect support is unfortunate. In my opinion this is something we will need to handle going forward as well; a one time solution like adding nginx redirects doesn't seem like the right approach to me. > > I would rather advocate either option 3 or one of the following options: > > 5. Detect redirects and convert them to symbolic links in the repo > 6. Request redirect support in the gitlab wiki. OK, I'll see what I can do about option 3. Option 5 is something that I believe we can still do after the fact if need be. Option 6, I think, we should do anyway, because we will want that feature for future pages, and the solutions outlined so far only take care of existing pages. From tdammers at gmail.com Wed Mar 6 13:29:23 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Wed, 6 Mar 2019 14:29:23 +0100 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: <20190306120527.mrinpp7j5j3koxk6@nibbler> References: <8736o0ssqo.fsf@smart-cactus.org> <20190306105513.wmozsv7qnlmv2jzd@nibbler> <58274B81-E1C1-4607-ABFD-9427967B5B40@well-typed.com> <20190306120527.mrinpp7j5j3koxk6@nibbler> Message-ID: <20190306132922.43jpj6vtqwnzbkr3@nibbler> For context: there is a total of 22 pages that use the redirect feature. So it may actually be feasible to just do this manually. On Wed, Mar 06, 2019 at 01:05:28PM +0100, Tobias Dammers wrote: > On Wed, Mar 06, 2019 at 06:09:35AM -0500, Ben Gamari wrote: > > The lacking redirect support is unfortunate. 
In my opinion this is something we will need to handle going forward as well; a one time solution like adding nginx redirects doesn't seem like the right approach to me. > > > > I would rather advocate either option 3 or one of the following options: > > > > 5. Detect redirects and convert them to symbolic links in the repo > > 6. Request redirect support in the gitlab wiki. > > OK, I'll see what I can do about option 3. Option 5 is something that I > believe we can still do after the fact if need be. Option 6, I think, we > should do anyway, because we will want that feature for future pages, > and the solutions outlined so far only take care of existing pages. > -- Tobias Dammers - tdammers at gmail.com From damien.mattei at gmail.com Wed Mar 6 15:11:52 2019 From: damien.mattei at gmail.com (Damien Mattei) Date: Wed, 6 Mar 2019 16:11:52 +0100 Subject: [Haskell-cafe] creating standalone executable with haskell In-Reply-To: References: Message-ID: thank for all the advices, i will keep all that in mind when it's time to update the DB on the server. regards, damin On Tue, Mar 5, 2019 at 1:33 PM Niklas Hambüchen wrote: > The two topics of creating standalone executables, and you not being a > system admin, are very separate and don't have much to do with each other. > > On the admin topic: > You can set up a full Haskell build environment in your home directory > even if you are not an admin. > > On the standalone executable topic: > Most Haskell compiled executables can easily run on other Linux systems. > In the default build, a Haskell executable has only a few runtime > dependencies via dynamic linking. Here's and example for a HelloWorld > program when compiled with `ghc --make`: > > % ldd Hello > linux-vdso.so.1 => (0x00007ffcd9dea000) > libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3799dcd000) > libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 > (0x00007f3799b4d000) > librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f3799945000) > libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3799741000) > libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 > (0x00007f3799524000) > libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f379915a000) > /lib64/ld-linux-x86-64.so.2 (0x00007f379a0d6000) > > As you can see, a few C libraries must be present on the system. > They are already present on most systems. > Again, you don't need to be admin to get those; if they are missing, you > can also ship them along with your executable and set LD_LIBRARY_PATH. > > In general, executables created this way on one Linux flavour work > reasonably well on other Linux flavours, but not always. > And example (if I remember correctly) where it doesn't work, is Centos 6 > vs. newer Debian; one has libgmp.so.3 and one libgmp.so.6. > This means that if you are using the default dynamic linking of C > dependencies, you may have to ship "a few" (usually two) flavours of your > executable. > In general, executables created on one OS version work well on newer ones, > e.g. something created on Ubuntu 16.04 will work well on newer Ubuntus. > > You can also link everything statically. > Then your executable should work on any Linux system. > But you need to learn a few more things to do so; I try to make it as > convenient as possible with my project > https://github.com/nh2/static-haskell-nix/ > > Niklas > -------------- next part -------------- An HTML attachment was scrubbed... 
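To make the static-linking route described above concrete, a rough sketch only (the exact flags vary by GHC version and distribution, and glibc-based systems still warn about NSS lookups at run time, which is part of why musl-based setups such as static-haskell-nix exist):

% ghc -O2 --make Hello.hs
%   # default build: dynamically linked C libraries, as in the ldd listing above
% ghc -O2 -static -optl-static -optl-pthread --make Hello.hs
%   # ask GHC and the linker for a fully static binary
% ldd Hello
        not a dynamic executable

A binary built the second way, when the static C libraries are available, needs no shared libraries at all and runs on essentially any Linux of the same architecture.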
URL: From ietf-dane at dukhovni.org Wed Mar 6 16:13:39 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Wed, 6 Mar 2019 11:13:39 -0500 Subject: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration In-Reply-To: References: <8736o0ssqo.fsf@smart-cactus.org> Message-ID: > On Mar 6, 2019, at 1:32 AM, Ömer Sinan Ağacan wrote: > > - Comparing these two pages: > > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects?redirectedfrom=Commentary/Rts/HeapObjects > https://gitlab.staging.haskell.org/ghc/ghc/wikis/commentary/rts/storage/heap-objects > > The Gitlab page doesn't have images that Trac page has. Secondly, the "_|_" > string used in the Trac page is migrated as italic "|" in Gitlab. The missing "images" (structure layout diagrams, ...) do make it difficult to follow the exposition. I do hope those are ultimately migrated. -- Viktor. From newsletters at kaushikc.org Thu Mar 7 01:13:54 2019 From: newsletters at kaushikc.org (Kaushik Chakraborty) Date: Wed, 06 Mar 2019 20:13:54 -0500 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: <6d82987f-fb1b-46e2-97c5-3cee27498ce3@www.fastmail.com> Hi, > - Model small sats as Monadic Stream Functions [1], implement a communication protocol on top (also using MSFs), and use quickcheck + fault injection to verify distributed consensus in the presence of faults. The next (immediate) step will be to bound the trace length and use smallcheck or something that exhausts the input space. I have also implemented Raft using MSFs, and the idea is to verify properties in a similar way. I am interested to know the "fault injection" part. Did you inject the same randomly as in general Chaos Engg style or used formal techniques like LDFI (lineage driven fault injection )? Is there some reference code that you can share. Thanks in advance. Cheers! Kaushik https://keybase.io/kaushikc/ On Tue, Mar 5, 2019, at 19:00, Ivan Perez wrote: > Hi > > Partly related; for recent work we did two things: > > - Model part of the collision avoidance logic of a satellite as a state machine, implement it in haskell (14 locs) and use small check to verify that the state transitions fulfill the properties we expect for collision avoidance. > > - Model small sats as Monadic Stream Functions [1], implement a communication protocol on top (also using MSFs), and use quickcheck + fault injection to verify distributed consensus in the presence of faults. The next (immediate) step will be to bound the trace length and use smallcheck or something that exhausts the input space. I have also implemented Raft using MSFs, and the idea is to verify properties in a similar way. > > I know this is vague. Let me know if you have any questions or want more details. > > Ivan > > [1] https://github.com/ivanperez-keera/dunai > [2] https://arc.aiaa.org/doi/abs/10.2514/6.2019-1187 > > On Mon, 4 Mar 2019 at 02:05, P Orrifolius wrote: >> Hi, >> >> this will probably be a somewhat rambling, open-ended question... my >> apologies in advance. >> There are some concrete questions after I explain what I'm trying to >> achieve, and some of them may even make sense! >> >> I'm planning on writing a multi-party, distributed protocol. I would >> like to informally model the protocol itself and the environment it >> would run in and then simulate it to see if it achieves what I expect. >> Then it would need to be implemented and tested of course. 
>> I think a formal, mathematical model of the protocol would be beyond >> my abilities. >> >> I'm looking for any advice or tool/library recommendations I can get >> on the modelling and simulating part of the process and how that work >> could be leveraged in the implementation to reduce the work and help >> ensure correctness. >> >> >> I have some ideas of how this could work, but I don't know if I'm >> approaching it with a suitably Haskell-like mindset... >> What I envisage at the moment is defining the possible behaviour of >> each party, who are not homogeneous, as well as things which form the >> external environment of the parties but will impact their behaviour. >> By external things I mean, for example, communication links which may >> drop or reorder packets, storage devices which may lose or corrupt >> data, machines that freeze or slew their clock... that sort of thing. >> I'm also picturing a 'universal coordinator' that controls interaction >> of the parties/environment during simulation. >> >> Modelling each party and external as a Harel-like statechart seems >> plausible. Perhaps some parts could be simpler FSMs, but I think many >> will be reasonably complex and involve internal state. >> >> It would be nice if the simulation could, to the limit of given >> processing time, exhaustively check the model rather than just >> randomly, as per quickcheck. >> If at each point the universal coordinator got a list of possible next >> states from each party/external it could simulate the >> simultaneity/sequencing of events by enumerating all combinations, of >> every size, of individual party/external transitions and then >> recursing with each separate combination applied as being the next set >> of simultaneous state transitions. >> >> To reduce the space to exhaustively search it would be nice if any >> values that the parties used, and were being tested by, were >> abstracted. Not sure what the technical term is but I mean asserting >> relationships between variables in a given domain, rather than their >> actual value. >> For example, imagine a distributed vote between two nodes where each >> node must vote for a random integer greater than their last vote. >> Each node is a party in the protocol. >> In the current branch of the exhaustive search of the protocol a >> relation between node 1's existing vote, v1, and node 2's existing >> vote, v2, has already been asserted: v1>v2. >> So when the coordinator enumerates the possibilities for sets of >> next-state transitions, with each node n asserting vn'>vn in their >> individual party state transition, it will prune any search branch >> which doesn't satisfy the union of the assertions (v1>v2, v1'>v1, >> v2'>v2), and recurse into the search branch alternatives with v1'> or v1'=v2', or v1'>v2'. >> >> >> >> So, some questions... >> >> Is what I'm suggesting even vaguely sensible way to approach the >> problem in Haskell? Or am I getting carried away and some neat tricks >> zipping list monoids or something will get the job done? >> >> Is there some fantastic tool/library which already does everything I want? >> >> Are there any good tools or libraries in Haskell for defining and >> manipulating statecharts? >> >> The sessiontypes, and sessiontypes-distributed, libraries are cool, >> representing the protocol at the type level... but these only handle >> point-to-point, two-party protocols, right? >> I expect that extending them would be a _serious_ amount of work. 
>> >> Can quickcheck do the sort of exhaustive coverage testing that I'd like? >> >> Again, not sure of the correct terminology but can quickcheck generate >> tests at a variable-relation assertion level? I.e. inspect the >> operators and boolean tests over a domain (+, ==, >=, append, length, >> contains, etc) and explore test cases according to whether certain >> relations between variables hold. >> >> Any feelings/opinions about whether Oleg Kiselyov's Typed Tagless >> Final Interpreter (http://okmij.org/ftp/tagless-final/index.html) work >> might be a useful mechanism to define the behaviour of parties within >> the protocol (the statecharts, basically) and then have different >> interpreters that enable both the testing and implementation of the >> protocol, to keep them more closely aligned? >> I read that work years ago and have wanted to try it out ever since, >> so I may be overlooking the fact that it's totally unsuitable! >> Perhaps superseded now, but if it can allow more of the protocol to be >> expressed at the type level that would be good. >> >> >> Yeah, rambling. Sorry about that. >> >> >> Thanks! >> porrifolius >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ian at zenhack.net Thu Mar 7 05:15:02 2019 From: ian at zenhack.net (Ian Denhardt) Date: Thu, 07 Mar 2019 00:15:02 -0500 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: <155193570256.866.1299519749156433494@localhost.localdomain> This library may be of interest: https://hackage.haskell.org/package/dejafu Quoting P Orrifolius (2019-03-04 02:05:06) > Hi, > > this will probably be a somewhat rambling, open-ended question... my > apologies in advance. > There are some concrete questions after I explain what I'm trying to > achieve, and some of them may even make sense! > > I'm planning on writing a multi-party, distributed protocol. I would > like to informally model the protocol itself and the environment it > would run in and then simulate it to see if it achieves what I expect. > Then it would need to be implemented and tested of course. > I think a formal, mathematical model of the protocol would be beyond > my abilities. > > I'm looking for any advice or tool/library recommendations I can get > on the modelling and simulating part of the process and how that work > could be leveraged in the implementation to reduce the work and help > ensure correctness. > > > I have some ideas of how this could work, but I don't know if I'm > approaching it with a suitably Haskell-like mindset... > What I envisage at the moment is defining the possible behaviour of > each party, who are not homogeneous, as well as things which form the > external environment of the parties but will impact their behaviour. > By external things I mean, for example, communication links which may > drop or reorder packets, storage devices which may lose or corrupt > data, machines that freeze or slew their clock... 
that sort of thing. > I'm also picturing a 'universal coordinator' that controls interaction > of the parties/environment during simulation. > > Modelling each party and external as a Harel-like statechart seems > plausible. Perhaps some parts could be simpler FSMs, but I think many > will be reasonably complex and involve internal state. > > It would be nice if the simulation could, to the limit of given > processing time, exhaustively check the model rather than just > randomly, as per quickcheck. > If at each point the universal coordinator got a list of possible next > states from each party/external it could simulate the > simultaneity/sequencing of events by enumerating all combinations, of > every size, of individual party/external transitions and then > recursing with each separate combination applied as being the next set > of simultaneous state transitions. > > To reduce the space to exhaustively search it would be nice if any > values that the parties used, and were being tested by, were > abstracted. Not sure what the technical term is but I mean asserting > relationships between variables in a given domain, rather than their > actual value. > For example, imagine a distributed vote between two nodes where each > node must vote for a random integer greater than their last vote. > Each node is a party in the protocol. > In the current branch of the exhaustive search of the protocol a > relation between node 1's existing vote, v1, and node 2's existing > vote, v2, has already been asserted: v1>v2. > So when the coordinator enumerates the possibilities for sets of > next-state transitions, with each node n asserting vn'>vn in their > individual party state transition, it will prune any search branch > which doesn't satisfy the union of the assertions (v1>v2, v1'>v1, > v2'>v2), and recurse into the search branch alternatives with v1' or v1'=v2', or v1'>v2'. > > > > So, some questions... > > Is what I'm suggesting even vaguely sensible way to approach the > problem in Haskell? Or am I getting carried away and some neat tricks > zipping list monoids or something will get the job done? > > Is there some fantastic tool/library which already does everything I want? > > Are there any good tools or libraries in Haskell for defining and > manipulating statecharts? > > The sessiontypes, and sessiontypes-distributed, libraries are cool, > representing the protocol at the type level... but these only handle > point-to-point, two-party protocols, right? > I expect that extending them would be a _serious_ amount of work. > > Can quickcheck do the sort of exhaustive coverage testing that I'd like? > > Again, not sure of the correct terminology but can quickcheck generate > tests at a variable-relation assertion level? I.e. inspect the > operators and boolean tests over a domain (+, ==, >=, append, length, > contains, etc) and explore test cases according to whether certain > relations between variables hold. > > Any feelings/opinions about whether Oleg Kiselyov's Typed Tagless > Final Interpreter (http://okmij.org/ftp/tagless-final/index.html) work > might be a useful mechanism to define the behaviour of parties within > the protocol (the statecharts, basically) and then have different > interpreters that enable both the testing and implementation of the > protocol, to keep them more closely aligned? > I read that work years ago and have wanted to try it out ever since, > so I may be overlooking the fact that it's totally unsuitable! 
> Perhaps superseded now, but if it can allow more of the protocol to be > expressed at the type level that would be good. > > > Yeah, rambling. Sorry about that. > > > Thanks! > porrifolius > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From monkleyon at gmail.com Thu Mar 7 13:27:37 2019 From: monkleyon at gmail.com (MarLinn) Date: Thu, 7 Mar 2019 14:27:37 +0100 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: 2cents: AFAIK in existing general-purpose simulation frameworks the type of simulation you need for this type of problem is typically implemented in a way that very closely resembles "classical" FRP, particularly it's definition of events. The framework basically manages huge queues of time-action-pairs. So several of the FRP libraries might be a good candidate. I haven't had a chance to experiment with MSF's yet, but seeing as they are a development in FRP space, Ivan's approach sounds like quite a good idea. Cheers. On 05/03/2019 14.29, Ivan Perez wrote: > Hi > > Partly related; for recent work we did two things: > > - Model part of the collision avoidance logic of a satellite as a state > machine, implement it in haskell (14 locs) and use small check to verify > that the state transitions fulfill the properties we expect for collision > avoidance. > > - Model small sats as Monadic Stream Functions [1], implement a > communication protocol on top (also using MSFs), and use quickcheck + fault > injection to verify distributed consensus in the presence of faults. The > next (immediate) step will be to bound the trace length and use smallcheck > or something that exhausts the input space. I have also implemented Raft > using MSFs, and the idea is to verify properties in a similar way. > > I know this is vague. Let me know if you have any questions or want more > details. > > Ivan > > [1] https://github.com/ivanperez-keera/dunai > [2] https://arc.aiaa.org/doi/abs/10.2514/6.2019-1187 From a.pelenitsyn at gmail.com Thu Mar 7 20:14:05 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Thu, 7 Mar 2019 15:14:05 -0500 Subject: [Haskell-cafe] Coercible between data A a b = A a b and (a, b) In-Reply-To: <79ADCC30-CCC9-4FEB-A055-C2C3584016B9@iki.fi> References: <79ADCC30-CCC9-4FEB-A055-C2C3584016B9@iki.fi> Message-ID: Hello Oleg, Is there a good, somewhat up-to-date source of reference to find out what exactly is different? Is it the STG paper from 1992 (“Implementing lazy functional languages on stock hardware”)? -- Best wishes, Artem Pelenitstyn On Mon, 25 Feb 2019 at 10:17 Oleg Grenrus wrote: > Two data types are `Coercible` (as in Data.Coerce) if their representation > in memory is the same: they are representiationally equivalent You ask for > looser structural isomorphism, consider e.g. > > data B b = B {-# UNPACK #-} !Int b > > and > > (Int, b) > > The values of these types have quite different representation/memory > layout. > > - Oleg > > On 25 Feb 2019, at 16.38, Georgi Lyubenov wrote: > > Greetings! > > Is there any reason behind/what is the reason behind > ``` > data A a b = A a b > ``` > and > ``` > (a, b) > ``` > > not being coercible? > > And in general all n-ary constructors not being coercible to one another? > > Thanks in advance! 
> > ======= > Georgi > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Sun Mar 10 00:35:57 2019 From: erkokl at gmail.com (Levent Erkok) Date: Sat, 9 Mar 2019 16:35:57 -0800 Subject: [Haskell-cafe] [ANNOUNCE] New release of SBV (v8.1) Message-ID: Levent Erkok Thu, Jul 20, 2017, 1:10 AM to Haskell I'm pleased to announce v8.1 release of SBV, a library for integrating SMT solvers into Haskell. This is a release with many new features. Some highlights: * Support for symbolic sum-types: Maybe/Either. (Thanks to Joel Burget for initiating these.) * Support for symbolic sets. * Support for uninterpreted-function values in models. * Support for validation of models coming from SMT-solvers for high-assurance. * An implementation of Dijkstra's weakest-precondition logic and a toy language for showing how to do Hoare style total-correctness proofs in SBV. * A new and better interface to the optimization engine supporting arbitrary metric spaces. * A bunch of other fixes, improvements, and examples. Full release notes are here: https://github.com/LeventErkok/sbv/blob/master/CHANGES.md Hackage: https://hackage.haskell.org/package/sbv Homepage: http://leventerkok.github.io/sbv/ Feedback and bug reports are most welcome. Happy proving! -Levent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis.raddle at gmail.com Sun Mar 10 20:28:18 2019 From: dennis.raddle at gmail.com (Dennis Raddle) Date: Sun, 10 Mar 2019 13:28:18 -0700 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" Message-ID: I have a thought about OO vs. functional programming . I'm moderately experienced with functional languages such as Haskell and Purescript and have used OO for many years at work (Python, C++). I'm giving tutoring lessons in Python to a professional who wants to use programming as part of his work. (He's an entrepreneur.) He is eager to understand functional and OO styles within Python--he keeps asking about the real benefit of each, not content with a superficial approach. I've explained about "encapsulation": gathering related functions together and minimizing coupling to the rest of the program. He asks "why do we encapsulate?" I've explained that one reason is to help we, the programmers, be confident that our code is correct. For example, in a class we try to isolate the implementation details from the rest of the program, and the compiler helps enforce that.So when we're writing our code and confirming to ourselves it's correct, we can make simplifying assumptions. We know the rest of the program won't modify our private variables and won't access us in any way other than the interface we've presented to the world. I explained that a large program is like a corporation, and pieces of the program are like individual employees. When we write a class or function, in a sense we "become" that employee and "forget about" a lot of the details of other employees. 
(i.e., encapsulation). Then has asked why functional programming is helpful. I explained about referential transparency. Say we're a function: then other "employees" (other functions) can trust us with simplifying assumptions, like no side-effects. Then this thought occurred to me, which is nifty but maybe not the whole story. """ Say we're a class. Then the simplifying assumptions of OO allow *us* to trust the *rest of the program* won't mess with us. Say we're a function. Then the simplifying assumptions of functional style help the *rest of the program* trust that *we* won't mess with it. """ D -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at jackkelly.name Sun Mar 10 21:20:04 2019 From: jack at jackkelly.name (Jack Kelly) Date: Mon, 11 Mar 2019 08:20:04 +1100 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: (Dennis Raddle's message of "Sun, 10 Mar 2019 13:28:18 -0700") References: Message-ID: <87va0q1l6j.fsf@jackkelly.name> Dennis Raddle writes: > I have a thought about OO vs. functional programming -snip- > > """ > Say we're a class. Then the simplifying assumptions of OO allow *us* to > trust the *rest of the program* won't mess with us. > > Say we're a function. Then the simplifying assumptions of functional style > help the *rest of the program* trust that *we* won't mess with it. > """ Interesting. Perhaps this is why it feels like OO scales for a little bit, but seems to hit diminishing returns? You get _some_ encapsulation, but "the rest of the program" grows faster than "us/we", and pure FP gives us guarantees about "the rest of the program", which is a larger set? -- Jack From ian at zenhack.net Sun Mar 10 21:33:58 2019 From: ian at zenhack.net (Ian Denhardt) Date: Sun, 10 Mar 2019 17:33:58 -0400 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: <87va0q1l6j.fsf@jackkelly.name> References: <87va0q1l6j.fsf@jackkelly.name> Message-ID: <155225363879.14683.7892964679949379208@localhost.localdomain> Quoting Jack Kelly (2019-03-10 17:20:04) > > Say we're a class. Then the simplifying assumptions of OO allow *us* to > > trust the *rest of the program* won't mess with us. Worth noting that modules do this for us in Haskell. Encapsulation of implementation details is something that languages of all stripes give you in one way or another. From rri at silentyak.com Sun Mar 10 21:50:59 2019 From: rri at silentyak.com (Ramnath R Iyer) Date: Sun, 10 Mar 2019 14:50:59 -0700 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: Dennis, I suspect there are a few tangentially related ideas that are getting clubbed together in your explanation, but the analogies are a nice way to go. 1. Why do we encapsulate (aka "abstract")? This is a matter of convenience, as humans - we have a need to group things together into categories and patterns so that we have fewer things we can hold in our minds. See http://psychclassics.yorku.ca/Miller/. 2. How do we encapsulate? Like any problem, there are several different solutions that offer different degrees of correctness (with some tradeoffs). OO and functional are two kinds of solutions; they are not mutually exclusive in the real world (example, combinations of these, modules, packages, micro-services et cetera - we usually try take multiple approaches to solve this problem). 3. What are the 'degrees of correctness' mentioned above? What are the tradeoffs? 
The basic problem is that of leaky abstractions - we try to categorize complicated reality into categories or patterns, but often the details that we want to hide end up becoming important enough that they can't stay hidden in practice. OO and functional take different approaches to abstraction; OO relies on categorization based on 'what things are' (aka objects) whereas functional relies on categorization based on 'what actions can be done' (aka functions). Obviously, there is nothing inherently natural or unnatural about either, but objects do naturally encapsulate state (memory of past actions) and when coded inappropriately can lead to 'hidden mutable state'. When state is hidden and can change silently (think multi-threaded, multi-process, multiple micro-services), the behavior of the program depends very much on details that we want to have kept hidden. OO is closer to the 'real world'; the benefit is arguably a lower barrier to entry for being productive, and a greater opportunity for creating leaky abstractions. What makes functional different? Well, the programming idioms encourage not permitting hidden mutable state. In certain languages, this is enforced by the language compiler. The primary benefit of functional here is in limiting the programmer's ability to create leaky abstractions. The tradeoffs w.r.t. composition are probably worthy of a more thorough explanation by itself on how functions compose and locks don't etc. 4. What about coupling? Coupling is about how one component's program code varies with another. You could achieve a good tradeoff with the points above and still end up with tightly coupled code. What this means is that when you need to achieve some practical change relevant to the purpose of the software (ex: a new feature), you end up having to change many different parts of the software. The changes are not reasonably isolated such that you can minimize the cost of the change or the amount of the 'stuff' the person making the change needs to understand (or learn) in order to be successful. We reduce coupling by making the commonly required changes easy. -- RRI On Sun, Mar 10, 2019 at 1:29 PM Dennis Raddle wrote: > I have a thought about OO vs. functional programming . > > I'm moderately experienced with functional languages such as Haskell and > Purescript and have used OO for many years at work (Python, C++). > > I'm giving tutoring lessons in Python to a professional who wants to use > programming as part of his work. (He's an entrepreneur.) He is eager to > understand functional and OO styles within Python--he keeps asking about > the real benefit of each, not content with a superficial approach. > > I've explained about "encapsulation": gathering related functions together > and minimizing coupling to the rest of the program. > > He asks "why do we encapsulate?" I've explained that one reason is to help > we, the programmers, be confident that our code is correct. > > For example, in a class we try to isolate the implementation details from > the rest of the program, and the compiler helps enforce that.So when we're > writing our code and confirming to ourselves it's correct, we can make > simplifying assumptions. We know the rest of the program won't modify our > private variables and won't access us in any way other than the interface > we've presented to the world. > > I explained that a large program is like a corporation, and pieces of the > program are like individual employees. 
When we write a class or function, > in a sense we "become" that employee and "forget about" a lot of the > details of other employees. (i.e., encapsulation). > > Then has asked why functional programming is helpful. I explained about > referential transparency. Say we're a function: then other "employees" > (other functions) can trust us with simplifying assumptions, like no > side-effects. > > Then this thought occurred to me, which is nifty but maybe not the whole > story. > > > """ > Say we're a class. Then the simplifying assumptions of OO allow *us* to > trust the *rest of the program* won't mess with us. > > Say we're a function. Then the simplifying assumptions of functional style > help the *rest of the program* trust that *we* won't mess with it. > """ > > D > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Ramnath R Iyer -------------- next part -------------- An HTML attachment was scrubbed... URL: From porrifolius at gmail.com Sun Mar 10 22:48:15 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 11:48:15 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> References: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> Message-ID: On Tue, 5 Mar 2019 at 03:33, Doug McIlroy wrote: > > > > Is there some fantastic tool/library which already does everything I want? > > Have you looked into model checkers like Spin, which was developed > for the very purpose of exhaustively checking protocols? See spinroot.com Thanks for lead. Spin does seem to be an appropriate tool for the job. I was/am sort of hoping the protocol modelling and implementation would both be in Haskell to get some reuse, but the consensus seems to be that Spin's modelling language is pretty easy to use so maybe it would be no extra work overall. I'm sure I've still got the implementation and protocol itself somewhat confused, so what state is truly required for the protocol is a bit muddled. Not certain how I would represent it in spin, just need to play around with it I guess. That's partly what I was getting at with my 'modelling operators over domains' ramble... trying to avoid increasing the state space by enumerating through all data type values without abstracting the protocol model by just checking boundary cases. Or at least abstracting in a disciplined way that makes me confident all boundary cases are checked, not just those I happened to think of and model. But perhaps Spin already has some state space compression tricks of that nature and I'm worrying over nothing. Anyway, thanks for the pointer! From porrifolius at gmail.com Mon Mar 11 00:20:20 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 13:20:20 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: On Wed, 6 Mar 2019 at 02:30, Ivan Perez wrote: > > Hi > > Partly related; for recent work we did two things: > > - Model part of the collision avoidance logic of a satellite as a state machine, implement it in haskell (14 locs) and use small check to verify that the state transitions fulfill the properties we expect for collision avoidance. 
> > - Model small sats as Monadic Stream Functions [1], implement a communication protocol on top (also using MSFs), and use quickcheck + fault injection to verify distributed consensus in the presence of faults. The next (immediate) step will be to bound the trace length and use smallcheck or something that exhausts the input space. I have also implemented Raft using MSFs, and the idea is to verify properties in a similar way. > > I know this is vague. Let me know if you have any questions or want more details. An appropriately vague response for a vague question! :) No, that's good, thanks. I have considered using an FRP library for the _implementation_ but I couldn't really get my head around how I would use it for modelling and checking the protocol itself. I was considering reflex as it seems to be well regarded, and the browser/mobile capabilities appear to be good which could prove useful for me in the future. I'm not sure how reflex is classified but I guess it's a 'classic' FRP system, so perhaps the issues I was facing would be analogous to those when using MSF/dunai. Your email has made me take a second look at a FRP option, and I think I was seeing problems where there aren't any. One difficulty I perceived was the heterogeneity of the participants in the protocol and the multi-point communications, well beyond a simple sending/receiving distinction. That worries me less now. It seems that if I could come up with a statechart description of each participant it's probably a fairly mechanical process to represent that in a FRP network. Perhaps that explains why I couldn't find many libraries to do with more 'elaborate' state representations (as in statecharts are more elaborate than FSM/automata/etc). Maybe once you reach that complexity they're not the best way to think about the problem in Haskell. The 'complex heterogeneity' reinforced my main problem, which now I think might just be a case premature optimisation. It seemed that if I wanted to enumerate all states while testing the protocol I would need to 'branch' each participant whenever they changed and recursively test each temporal alternative... and that probably involved building, modifying, reorganising, restoring networks on the fly. Maybe achievable, but I couldn't immediately see how. But a simple succession of root-to-tip searches/executions of that tree could be perfectly adequate. Just have to try it out but I think I was getting carried away. Anyway, after that additional vagueness... a bit of a tangent. :) One problem I have with the Haskell ecosystem is I end up falling down the rabbit-hole whenever I start researching something. And within each rabbit-hole I find more interesting-thing-to-research-rabbit-holes, and within them... etc, etc. Soon I'm so deep down the rabbit-holes I'm smothered in quantum foam and don't know which way is up. On my last foray I started with MSF and came upon Streamly: https://hackage.haskell.org/package/streamly-0.6.0 Are you familiar with it? To my inexperienced eye it seems quite similar to MSF/dunai, do you have any thoughts on it? Relative merits in different use cases? Reflex appeals to me because of the possibility of learning it and leveraging it's UI layers in other projects. But for a protocol implementation it feels like maybe I should be using something more, um... stripped back/fundamental. Again, thanks for the reply. 
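As a postscript, to make the 'branch each participant whenever it changes and walk every temporal alternative' idea above a bit more concrete, here is a toy, library-free sketch of bounded state-space enumeration. Every name in it is invented purely for illustration, and a real model would need deduplication, fairness and so on:

-- A participant is just its current state plus a non-deterministic
-- step function listing every state it could move to next.
data Participant s = Participant
  { pState :: s
  , pStep  :: s -> [s]
  }

-- One scheduling round: pick any single participant and let it take
-- any one of its possible steps (a simple interleaving semantics).
roundSteps :: [Participant s] -> [[Participant s]]
roundSteps ps =
  [ before ++ [p { pState = s' }] ++ after
  | (before, p, after) <- splits ps
  , s' <- pStep p (pState p)
  ]
  where
    splits xs = [ (take i xs, xs !! i, drop (i + 1) xs)
                | i <- [0 .. length xs - 1] ]

-- Every configuration reachable within a bounded number of rounds.
reachable :: Int -> [Participant s] -> [[s]]
reachable 0 ps = [map pState ps]
reachable n ps =
  map pState ps : concatMap (reachable (n - 1)) (roundSteps ps)

A single root-to-tip run is then just one path through what reachable enumerates exhaustively; a checker like Spin automates roughly this kind of search, with far smarter state compression on top.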
From porrifolius at gmail.com Mon Mar 11 00:38:05 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 13:38:05 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: <61B2B1A3-0CAA-4DD2-B8FD-5406B715B536@gmail.com> References: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> <61B2B1A3-0CAA-4DD2-B8FD-5406B715B536@gmail.com> Message-ID: Hi, Will (and Ian) On Tue, 5 Mar 2019 at 23:11, Will Yager wrote: > > http://hackage.haskell.org/package/dejafu might do what you want Thanks for the suggestion, that does look useful. >From a quick look at the documentation... am I right in thinking that this test framework only covers concurrency when you're directly using IOVars and STM? Any concurrency that might involve other primitives, or IOVar/STM uses in libraries, wouldn't be testable? Unless the primitives/library had also been written to use the dejafu io classes and monads of course... which might be a good idea in and of itself. Definitely looks good for helping ensure correctness at my protocol implementation layer, and I can conceive of building a protocol model testable by it, so thanks for the link. Need to think more about the relative merits of that approach vs Spin vs FRP. Thanks. From porrifolius at gmail.com Mon Mar 11 01:04:38 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 14:04:38 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: On Wed, 6 Mar 2019 at 06:00, Carter Schonwald wrote: > > TLA plus might suit you! I've actually had some success going back and forth between copattern style code and TLA specs, its not easy and not perfect, but it works > > i've been slowly experimenting with modelling protocols as coinductive / copattern style trick > https://www.reddit.com/r/haskell/comments/4aju8f/simple_example_of_emulating_copattern_matching_in/ is a small toy exposition i did > > i am messing around with some reusable abstractions, though its definitely a challenging problem I will be honest and confess that I can't get my head around it. Not your fault though! :) I think I have a basic understanding of co-(recursion|induction|patterns) so, at a lay-person's level of understanding, can I conceptually think of these things as streams very similar to, for example, those processed by the Monadic Stream Functions that Ivan Perez mentions in another email? And then there is some additional analytical or performance advantages, maybe especially in cases of infinite streams, achieved by representing them in this manner? I think I'll need to study up to really appreciate it but, at the very least, I'm thankful for the link to your post as your reference to the machines library led me off down an interesting research rabbit-hole. :) Thanks! From porrifolius at gmail.com Mon Mar 11 01:13:03 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 14:13:03 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: <6d82987f-fb1b-46e2-97c5-3cee27498ce3@www.fastmail.com> References: <6d82987f-fb1b-46e2-97c5-3cee27498ce3@www.fastmail.com> Message-ID: On Thu, 7 Mar 2019 at 14:14, Kaushik Chakraborty wrote: > > Hi, > > - Model small sats as Monadic Stream Functions [1], implement a communication protocol on top (also using MSFs), and use quickcheck + fault injection to verify distributed consensus in the presence of faults. 
The next (immediate) step will be to bound the trace length and use smallcheck or something that exhausts the input space. I have also implemented Raft using MSFs, and the idea is to verify properties in a similar way. > > > I am interested to know the "fault injection" part. Did you inject the same randomly as in general Chaos Engg style or used formal techniques like LDFI (lineage driven fault injection > )? Is there some reference code that you can share. > Thanks in advance. I'm interested in that part of the system as well. For what it's worth I was envisaging explicitly modelling each external process/component that could experience faults so that I could enumerate those failure states, and all combinations of them, exhaustively. I've also given passing thought to modelling faults within the internal processes so I could see what happens when dealing with Byzantine failures. Definitely something I should read up on. From ivanperezdominguez at gmail.com Mon Mar 11 01:45:19 2019 From: ivanperezdominguez at gmail.com (Ivan Perez) Date: Sun, 10 Mar 2019 21:45:19 -0400 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: On Sun, 10 Mar 2019 at 20:20, P Orrifolius wrote: > On Wed, 6 Mar 2019 at 02:30, Ivan Perez > wrote: > > > > Hi > > > > Partly related; for recent work we did two things: > > > > - Model part of the collision avoidance logic of a satellite as a state > machine, implement it in haskell (14 locs) and use small check to verify > that the state transitions fulfill the properties we expect for collision > avoidance. > > > > - Model small sats as Monadic Stream Functions [1], implement a > communication protocol on top (also using MSFs), and use quickcheck + fault > injection to verify distributed consensus in the presence of faults. The > next (immediate) step will be to bound the trace length and use smallcheck > or something that exhausts the input space. I have also implemented Raft > using MSFs, and the idea is to verify properties in a similar way. > > > > I know this is vague. Let me know if you have any questions or want more > details. > > An appropriately vague response for a vague question! :) > > No, that's good, thanks. I have considered using an FRP library for > the _implementation_ but I couldn't really get my head around how I > would use it for modelling and checking the protocol itself. Well, the basic idea of MSFs is that they are just a function: MSF m a b = a -> m (b, MSF m a b). That means we can very easily prove properties about them. For example, we've proven the arrow laws ( http://www.cs.nott.ac.uk/~psxip1/papers/msfmathprops.pdf). If you want to mechanize proofs, it's helpful to have support for coinduction in your language. > I was > considering reflex as it seems to be well regarded, and the > browser/mobile capabilities appear to be good which could prove useful > for me in the future. I'm not sure how reflex is classified but I > guess it's a 'classic' FRP system, so perhaps the issues I was facing > would be analogous to those when using MSF/dunai. > Well, here my position would be a bit biased, of course. But there's good reason why we created dunai. Our position with dunai is that there is a more fundamental construct that a lot of existing ideas emerge from. We have seen that many FRP implementations can functionally be expressed in terms of MSFs, or MSFs + some extension (which you are more than welcome to implement, since the library is extensible by design). We use this for GUI-stuff. 
All the time. It's actually one of the initial use cases that motivated creating dunai. Plus, you can run Yampa on top of dunai. So, we have Yampa mobile games running linked using bearriver (a Yampa-compatible FRP layer). Yampa (and dunai) also runs on the browser (see the GHCjs branch of haskanoid). I have also a layer for the GUI-oriented reactive library Keera Hails. It's quite fast too. For a recent paper, I had to run complex physics simulations in Yampa, and this was clocked at 60Hz rendering (vsync) and about 10000 simulation cycles per frame (meaning 60K simulation cycles per second). Even though MSFs look arrowized, an MSF with unit input is a stream, and with time in the environment it is a signal. It's also a functor and an applicative. So you can write the same classic FRP style, if you want. Your email has made me take a second look at a FRP option, and I think > I was seeing problems where there aren't any. > > One difficulty I perceived was the heterogeneity of the participants > in the protocol and the multi-point communications, well beyond a > simple sending/receiving distinction. > That worries me less now. It seems that if I could come up with a > statechart description of each participant it's probably a fairly > mechanical process to represent that in a FRP network. > Perhaps that explains why I couldn't find many libraries to do with > more 'elaborate' state representations (as in statecharts are more > elaborate than FSM/automata/etc). Maybe once you reach that > complexity they're not the best way to think about the problem in > Haskell. > > The 'complex heterogeneity' reinforced my main problem, which now I > think might just be a case premature optimisation. It seemed that if > I wanted to enumerate all states while testing the protocol I would > need to 'branch' each participant whenever they changed and > recursively test each temporal alternative... and that probably > involved building, modifying, reorganising, restoring networks on the > fly. Maybe achievable, but I couldn't immediately see how. But a > simple succession of root-to-tip searches/executions of that tree > could be perfectly adequate. > Just have to try it out but I think I was getting carried away. > > Anyway, after that additional vagueness... a bit of a tangent. :) > One problem I have with the Haskell ecosystem is I end up falling down > the rabbit-hole whenever I start researching something. And within > each rabbit-hole I find more > interesting-thing-to-research-rabbit-holes, and within them... etc, > etc. Soon I'm so deep down the rabbit-holes I'm smothered in quantum > foam and don't know which way is up. > On my last foray I started with MSF and came upon Streamly: > https://hackage.haskell.org/package/streamly-0.6.0 > Are you familiar with it? I have not explored it yet. But it seems related. Just for info, there's a package called Rhine that extends Dunai with type-safe parallelism and asynchronicity. It ensures that communications specify buffers that are compatible. To my inexperienced eye it seems quite > similar to MSF/dunai, do you have any thoughts on it? Relative merits > in different use cases? > Reflex appeals to me because of the possibility of learning it and > leveraging it's UI layers in other projects. But for a protocol > implementation it feels like maybe I should be using something more, > um... stripped back/fundamental. > > Again, thanks for the reply. > My pleasure! Let me know if you have any questions. 
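To make the MSF definition above a bit more concrete, here is a minimal counting example. It assumes dunai's Data.MonadicStreamFunction exporting MSF, embed and feedback; the exact names are from memory, so treat it as a sketch rather than copy-paste-ready code.

import Control.Arrow (arr)
import Data.MonadicStreamFunction (MSF, embed, feedback)

-- "Receive a good/bad message, emit the running count of good ones."
-- The count is local state threaded through 'feedback'.
countGood :: Monad m => MSF m Bool Int
countGood = feedback 0 $ arr $ \(isGood, n) ->
  let n' = if isGood then n + 1 else n
  in (n', n')

-- 'embed' runs an MSF over a finite input trace, which is exactly the
-- shape a bounded property test wants.
main :: IO ()
main = do
  counts <- embed countGood [True, False, True, True]
  print counts   -- [1,1,2,3]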
I'd recommend that you take a look at these papers: - "FRP Refactored": https://dl.acm.org/citation.cfm?id=2976010 (the original MSF paper, shows classic and arrowized FRP) - "Rhine: FRP with type-level clocks": https://dl.acm.org/citation.cfm?id=3242757 (asynchronicity, concurrency and parallelism with MSFs) - "Testing and debugging FRP": https://dl.acm.org/citation.cfm?id=3110246 (quickcheck + temporal logic + FRP) - "Fault Tolerant FRP": https://dl.acm.org/citation.cfm?id=3236791 I'm happy to provide copies of these papers if you want them :) My website ( http://www.cs.nott.ac.uk/~psxip1/) should have links to all of them. All the best, Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From porrifolius at gmail.com Mon Mar 11 01:47:49 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 14:47:49 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: On Fri, 8 Mar 2019 at 02:28, MarLinn wrote: > AFAIK in existing general-purpose simulation frameworks the type of > simulation you need for this type of problem is typically implemented in > a way that very closely resembles "classical" FRP, particularly it's > definition of events. The framework basically manages huge queues of > time-action-pairs. Interesting. So it's possible that the Spin model checker takes a similar approach, along with all sorts of state-space compression tricks and such like. > So several of the FRP libraries might be a good candidate. I haven't had > a chance to experiment with MSF's yet, but seeing as they are a > development in FRP space, Ivan's approach sounds like quite a good idea. I am coming (back) around to some sort of FRP modelling... it'd be hubris to expect to top Spin, but as long as I'm learning it's fine I suppose. It appears to me now that a FRP network of streams may allow quite a lot of re-use between protocol modelling and implementation... it may be pretty straight forward to substitute the 2 alternative sub-networks of the modelling FRP that carried out "receive (good|bad) message -> emit message count+(1|0)" with the implementation "receive message -> check signature -> write to disk -> etc -> emit message count+(1|0)". And it occurs to me now that using CRDTs, and Lamport or vector clocks, to manage variables/state on the individual simulation runs could provide quite an effective state-space compression, collapse the tree of runs down to a graph. Thanks for the 2 cents. From porrifolius at gmail.com Mon Mar 11 02:07:19 2019 From: porrifolius at gmail.com (P Orrifolius) Date: Mon, 11 Mar 2019 15:07:19 +1300 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: Message-ID: On Mon, 11 Mar 2019 at 14:45, Ivan Perez wrote: > > My pleasure! Let me know if you have any questions. I dare say I will. :) Lots of pointers there, I'll take a look and try to digest it. A couple of points there that are already correcting/clarifying things for me I think. I see more quantum foam in my future... but I appreciate the help nonetheless. Thanks! :) From raoknz at gmail.com Mon Mar 11 02:23:05 2019 From: raoknz at gmail.com (Richard O'Keefe) Date: Mon, 11 Mar 2019 15:23:05 +1300 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: Serious question: does Python _have_ encapsulation? 
I know about the single-underscore convention and the double-underscore convention, but even double- underscore attributes and methods can be accessed freely from the outside if you use the class-name prefix. That is, it makes it difficult to breach encapsulation by accident, but it doesn't take the Lock-Picking Lawyer from YouTube to breach encapsulation on purpose without breaking stride. On Mon, 11 Mar 2019 at 09:29, Dennis Raddle wrote: > I have a thought about OO vs. functional programming . > > I'm moderately experienced with functional languages such as Haskell and > Purescript and have used OO for many years at work (Python, C++). > > I'm giving tutoring lessons in Python to a professional who wants to use > programming as part of his work. (He's an entrepreneur.) He is eager to > understand functional and OO styles within Python--he keeps asking about > the real benefit of each, not content with a superficial approach. > > I've explained about "encapsulation": gathering related functions together > and minimizing coupling to the rest of the program. > > He asks "why do we encapsulate?" I've explained that one reason is to help > we, the programmers, be confident that our code is correct. > > For example, in a class we try to isolate the implementation details from > the rest of the program, and the compiler helps enforce that.So when we're > writing our code and confirming to ourselves it's correct, we can make > simplifying assumptions. We know the rest of the program won't modify our > private variables and won't access us in any way other than the interface > we've presented to the world. > > I explained that a large program is like a corporation, and pieces of the > program are like individual employees. When we write a class or function, > in a sense we "become" that employee and "forget about" a lot of the > details of other employees. (i.e., encapsulation). > > Then has asked why functional programming is helpful. I explained about > referential transparency. Say we're a function: then other "employees" > (other functions) can trust us with simplifying assumptions, like no > side-effects. > > Then this thought occurred to me, which is nifty but maybe not the whole > story. > > > """ > Say we're a class. Then the simplifying assumptions of OO allow *us* to > trust the *rest of the program* won't mess with us. > > Say we're a function. Then the simplifying assumptions of functional style > help the *rest of the program* trust that *we* won't mess with it. > """ > > D > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raoknz at gmail.com Mon Mar 11 02:28:29 2019 From: raoknz at gmail.com (Richard O'Keefe) Date: Mon, 11 Mar 2019 15:28:29 +1300 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: PS: surely one of the main claims for both OO and FP is reuse and composition. OO tends to do composition by either object-level application (Smalltalk, Ruby, Javascript, Java) or type-level application (Eiffel, Swift, Java, C++). not exhaustive lists. On Mon, 11 Mar 2019 at 15:23, Richard O'Keefe wrote: > Serious question: does Python _have_ encapsulation? 
> I know about the single-underscore convention and > the double-underscore convention, but even double- > underscore attributes and methods can be accessed > freely from the outside if you use the class-name > prefix. That is, it makes it difficult to breach > encapsulation by accident, but it doesn't take the > Lock-Picking Lawyer from YouTube to breach > encapsulation on purpose without breaking stride. > > On Mon, 11 Mar 2019 at 09:29, Dennis Raddle > wrote: > >> I have a thought about OO vs. functional programming . >> >> I'm moderately experienced with functional languages such as Haskell and >> Purescript and have used OO for many years at work (Python, C++). >> >> I'm giving tutoring lessons in Python to a professional who wants to use >> programming as part of his work. (He's an entrepreneur.) He is eager to >> understand functional and OO styles within Python--he keeps asking about >> the real benefit of each, not content with a superficial approach. >> >> I've explained about "encapsulation": gathering related functions >> together and minimizing coupling to the rest of the program. >> >> He asks "why do we encapsulate?" I've explained that one reason is to >> help we, the programmers, be confident that our code is correct. >> >> For example, in a class we try to isolate the implementation details from >> the rest of the program, and the compiler helps enforce that.So when we're >> writing our code and confirming to ourselves it's correct, we can make >> simplifying assumptions. We know the rest of the program won't modify our >> private variables and won't access us in any way other than the interface >> we've presented to the world. >> >> I explained that a large program is like a corporation, and pieces of the >> program are like individual employees. When we write a class or function, >> in a sense we "become" that employee and "forget about" a lot of the >> details of other employees. (i.e., encapsulation). >> >> Then has asked why functional programming is helpful. I explained about >> referential transparency. Say we're a function: then other "employees" >> (other functions) can trust us with simplifying assumptions, like no >> side-effects. >> >> Then this thought occurred to me, which is nifty but maybe not the whole >> story. >> >> >> """ >> Say we're a class. Then the simplifying assumptions of OO allow *us* to >> trust the *rest of the program* won't mess with us. >> >> Say we're a function. Then the simplifying assumptions of functional >> style help the *rest of the program* trust that *we* won't mess with it. >> """ >> >> D >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis.raddle at gmail.com Mon Mar 11 02:40:08 2019 From: dennis.raddle at gmail.com (Dennis Raddle) Date: Sun, 10 Mar 2019 19:40:08 -0700 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: Indeed, Python does not enforce encapsulation. It has weakly enforced types as well (duck typing for most objects). But it's still an opportunity to follow an "encapsulation" convention and explain the purpose, while telling my student he won't see the full OO implementation until he gets into Java or C++. 
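For contrast, the usual Haskell answer to Richard's question is the module export list: if the constructor isn't exported, the representation is sealed off by the compiler rather than by convention. A minimal sketch (the module and all names in it are made up):

module Counter (Counter, new, tick, value) where

-- The constructor is deliberately not exported, so code outside this
-- module cannot build or inspect a Counter except through the API below.
newtype Counter = Counter Int

new :: Counter
new = Counter 0

tick :: Counter -> Counter
tick (Counter n) = Counter (n + 1)

value :: Counter -> Int
value (Counter n) = n

Outside the module, writing Counter 42 or pattern matching on Counter simply won't compile, so this is one lock that can't be picked even on purpose.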
I want to comment on Ramnath's point that encapsulation helps us hold fewer things in our minds. I would agree that's part of it, but in a general sense programming always involves more than we can hold in our minds and never requires us to hold all of it at once. Consider a mathematical proof that runs to 300 pages. We don't need to hold the whole thing in our minds to create and understand it--but it's still worth finding a way to simplify it so it takes fewer pages or less advanced math. Although we rarely prove a program's correctness in a strict sense, we probably give some thought to a rough justification of the correctness of our design choices, even if such a justification requires more thoughts than we can hold at once. There is a benefit in encapsulating so that the justification requires fewer thoughts -- say 20 thoughts instead of 200 thoughts. Now, I'm not sure this is any closer to the heart of the matter. Maybe Ramnath is more essentially correct. Comments welcome. D On Sun, Mar 10, 2019 at 7:23 PM Richard O'Keefe wrote: > Serious question: does Python _have_ encapsulation? > I know about the single-underscore convention and > the double-underscore convention, but even double- > underscore attributes and methods can be accessed > freely from the outside if you use the class-name > prefix. That is, it makes it difficult to breach > encapsulation by accident, but it doesn't take the > Lock-Picking Lawyer from YouTube to breach > encapsulation on purpose without breaking stride. > > On Mon, 11 Mar 2019 at 09:29, Dennis Raddle > wrote: > >> I have a thought about OO vs. functional programming . >> >> I'm moderately experienced with functional languages such as Haskell and >> Purescript and have used OO for many years at work (Python, C++). >> >> I'm giving tutoring lessons in Python to a professional who wants to use >> programming as part of his work. (He's an entrepreneur.) He is eager to >> understand functional and OO styles within Python--he keeps asking about >> the real benefit of each, not content with a superficial approach. >> >> I've explained about "encapsulation": gathering related functions >> together and minimizing coupling to the rest of the program. >> >> He asks "why do we encapsulate?" I've explained that one reason is to >> help we, the programmers, be confident that our code is correct. >> >> For example, in a class we try to isolate the implementation details from >> the rest of the program, and the compiler helps enforce that.So when we're >> writing our code and confirming to ourselves it's correct, we can make >> simplifying assumptions. We know the rest of the program won't modify our >> private variables and won't access us in any way other than the interface >> we've presented to the world. >> >> I explained that a large program is like a corporation, and pieces of the >> program are like individual employees. When we write a class or function, >> in a sense we "become" that employee and "forget about" a lot of the >> details of other employees. (i.e., encapsulation). >> >> Then has asked why functional programming is helpful. I explained about >> referential transparency. Say we're a function: then other "employees" >> (other functions) can trust us with simplifying assumptions, like no >> side-effects. >> >> Then this thought occurred to me, which is nifty but maybe not the whole >> story. >> >> >> """ >> Say we're a class. Then the simplifying assumptions of OO allow *us* to >> trust the *rest of the program* won't mess with us. 
>> >> Say we're a function. Then the simplifying assumptions of functional >> style help the *rest of the program* trust that *we* won't mess with it. >> """ >> >> D >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Mon Mar 11 04:50:27 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 11 Mar 2019 00:50:27 -0400 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: <20190311045027.GH916@straasha.imrryr.org> On Sun, Mar 10, 2019 at 01:28:18PM -0700, Dennis Raddle wrote: > Then this thought occurred to me, which is nifty but maybe not the whole > story. > > """ > Say we're a class. Then the simplifying assumptions of OO allow *us* to > trust the *rest of the program* won't mess with us. > > Say we're a function. Then the simplifying assumptions of functional style > help the *rest of the program* trust that *we* won't mess with it. > """ I am curious what readers of this thread make of: "Object-Oriented Programming is Bad": https://www.youtube.com/watch?v=QM1iUe6IofM The main thesis appears to be that OO encapsultion makes code harder to read, and fails to deliver on the benefits of encapsulation of state when objects interact with other objects. -- Viktor. From aquagnu at gmail.com Mon Mar 11 07:18:10 2019 From: aquagnu at gmail.com (PY) Date: Mon, 11 Mar 2019 09:18:10 +0200 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> 10.03.2019 23:50, Ramnath R Iyer wrote: > whereas functional relies on categorization based on 'what actions can > be done' (aka functions) that's what interfaces are for. From damien.mattei at gmail.com Mon Mar 11 07:51:28 2019 From: damien.mattei at gmail.com (Damien Mattei) Date: Mon, 11 Mar 2019 08:51:28 +0100 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> References: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> Message-ID: I think functional programming really help simplifying the things: you need to do a complex thing, so you build it with more simple functions that you compose, you make a simple function and reuse it in a more complex one... for each function you have arguments in input and get an object (list, array, int ,string....) in output. OO programming seems at first glance to provide a way to do this by calling methods of a class that are encapsulated in it, but when using a method of a class you always have to instantiate an object, you have to set some things before doing the work. For any simple things your minds tends to create virtual or real classes that you will perhaps have the needs,example you needs a square object, nothing else, but you will tends to create first edges, as class of square, and put square in a super class named polygons which will also be in a class of.... you understand the problem? ;-) so after another people reading your code will say, "hey what's all this mess?" 
, and you're code will be hard to read , and even for you when making big program you will have tons of classes, inheritance and will move in a maze of classes!!! a nightmare.... sometimes, i experienced it! at the opposite functional programming is so easy, so clear for your mind. If you had strong typing which almost always the case in OO programming, non immutable object like in C++, copy constructor, object requiring canonical normal form of Copine, OO programming can become really stressing to develop with in my opinion. Damien On Mon, Mar 11, 2019 at 8:18 AM PY wrote: > 10.03.2019 23:50, Ramnath R Iyer wrote: > > whereas functional relies on categorization based on 'what actions can > > be done' (aka functions) > that's what interfaces are for. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aquagnu at gmail.com Mon Mar 11 10:55:29 2019 From: aquagnu at gmail.com (PY) Date: Mon, 11 Mar 2019 12:55:29 +0200 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> Message-ID: 11.03.2019 09:51, Damien Mattei wrote: > a method of a class you always have to instantiate an object... not always, static methods do not need this. > so after another people reading your code will say,  "hey what's all this mess?" , and you're code will be hard to read , and even for you when making big program you will have tons of classes, inheritance and will move in a maze of classes!!! a nightmare.... sometimes, i experienced it! at the opposite functional programming is so easy, so clear for your mind. Absolutely no matter what you slice the code: functions in modules or methods in classes. In Haskell we also have a lot of functions, modules, types, etc. But the cause of the complexity is not linked directly to the number of entities (and in most cases it's not): programming in Smalltalk/Java/VB/Python/etc is simple and fast and increasing of the number of used classes and methods does not need more specific knowledge of the language and can be done by newbie who already knows the syntax, as you said it before - it's enough to instantiate the object and to call some its method! By the way, Smalltalk, as I know, even has not DI - I suppose due to image :) And this is false for Haskell: it's not enough to know language syntax (it's relatively simple): each library can involve own level of abstraction, own eDSLs, etc. And if somebody built his library on arrows, pipes, free monads, etc - it's not enough to know language's syntax only. Imagine a big house built with simple and same bricks. And some Baroque theater where anything is complex and unique. So, languages like Haskell are more complex and need more time to learn and create valuable applications. From damien.mattei at gmail.com Mon Mar 11 12:42:53 2019 From: damien.mattei at gmail.com (Damien Mattei) Date: Mon, 11 Mar 2019 13:42:53 +0100 Subject: [Haskell-cafe] [OT] thoughts about OO vs. 
functional "philosophy" In-Reply-To: References: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> Message-ID: when i was talking about functional programming, i was thinking to Scheme not Haskell, when you use scheme there is no strong typing, no typing at all in almost cases... when you use map it's map, no need to ask yourself if it is mapM or Prelude.map or anything else.... there is no gothic cathedral only old strong roman church style ;-) On Mon, Mar 11, 2019 at 11:55 AM PY wrote: > 11.03.2019 09:51, Damien Mattei wrote: > > a method of a class you always have to instantiate an object... > > not always, static methods do not need this. > > > so after another people reading your code will say, "hey what's all > this mess?" , and you're code will be hard to read , and even for you > when making big program you will have tons of classes, inheritance and > will move in a maze of classes!!! a nightmare.... sometimes, i > experienced it! at the opposite functional programming is so easy, so > clear for your mind. > > Absolutely no matter what you slice the code: functions in modules or > methods in classes. In Haskell we also have a lot of functions, modules, > types, etc. But the cause of the complexity is not linked directly to > the number of entities (and in most cases it's not): programming in > Smalltalk/Java/VB/Python/etc is simple and fast and increasing of the > number of used classes and methods does not need more specific knowledge > of the language and can be done by newbie who already knows the syntax, > as you said it before - it's enough to instantiate the object and to > call some its method! By the way, Smalltalk, as I know, even has not DI > - I suppose due to image :) > > And this is false for Haskell: it's not enough to know language syntax > (it's relatively simple): each library can involve own level of > abstraction, own eDSLs, etc. And if somebody built his library on > arrows, pipes, free monads, etc - it's not enough to know language's > syntax only. Imagine a big house built with simple and same bricks. And > some Baroque theater where anything is complex and unique. > > So, languages like Haskell are more complex and need more time to learn > and create valuable applications. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tozkanli2023 at gmail.com Mon Mar 11 13:42:13 2019 From: tozkanli2023 at gmail.com (=?UTF-8?Q?Tarik_=C3=96ZKANLI?=) Date: Mon, 11 Mar 2019 16:42:13 +0300 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> Message-ID: When you get a deep run-time type incompatibility with a difficult to debug case, the Gothic Church that you talked about may change it's direction and function and chase after you at a very inappropriate midnight. :) On Mon, 11 Mar 2019 at 15:43, Damien Mattei wrote: > when i was talking about functional programming, i was thinking to Scheme > not Haskell, when you use scheme there is no strong typing, no typing at > all in almost cases... when you use map it's map, no need to ask yourself > if it is mapM or Prelude.map or anything else.... there is no gothic > cathedral only old strong roman church style ;-) > > On Mon, Mar 11, 2019 at 11:55 AM PY wrote: > >> 11.03.2019 09:51, Damien Mattei wrote: >> > a method of a class you always have to instantiate an object... >> >> not always, static methods do not need this. 
>> >> > so after another people reading your code will say, "hey what's all >> this mess?" , and you're code will be hard to read , and even for you >> when making big program you will have tons of classes, inheritance and >> will move in a maze of classes!!! a nightmare.... sometimes, i >> experienced it! at the opposite functional programming is so easy, so >> clear for your mind. >> >> Absolutely no matter what you slice the code: functions in modules or >> methods in classes. In Haskell we also have a lot of functions, modules, >> types, etc. But the cause of the complexity is not linked directly to >> the number of entities (and in most cases it's not): programming in >> Smalltalk/Java/VB/Python/etc is simple and fast and increasing of the >> number of used classes and methods does not need more specific knowledge >> of the language and can be done by newbie who already knows the syntax, >> as you said it before - it's enough to instantiate the object and to >> call some its method! By the way, Smalltalk, as I know, even has not DI >> - I suppose due to image :) >> >> And this is false for Haskell: it's not enough to know language syntax >> (it's relatively simple): each library can involve own level of >> abstraction, own eDSLs, etc. And if somebody built his library on >> arrows, pipes, free monads, etc - it's not enough to know language's >> syntax only. Imagine a big house built with simple and same bricks. And >> some Baroque theater where anything is complex and unique. >> >> So, languages like Haskell are more complex and need more time to learn >> and create valuable applications. >> > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen at chordify.net Mon Mar 11 14:08:29 2019 From: jeroen at chordify.net (Jeroen Bransen) Date: Mon, 11 Mar 2019 15:08:29 +0100 Subject: [Haskell-cafe] Apply Haskell at scale in a music start-up Message-ID: * Dear Haskellers, Chordify is hiring! Chordify is a young and fast growing music e-learning platform that helps musicians to play their favorite music. We automatically analyse the chords of a piece of music and display them in an intuitive player. Try it yourself at: https://chordify.net/or download one of our apps https://chordify.net/app The cool thing is: our backend serving our apps and website has been written mostly in Haskell. With over 8 million users per month we apply functional programming at scale. We hope to broaden our team with a functional programmer. We are looking for people who are pro-active, independent, and creative to improve Chordify. You’d have the opportunity to be productive with advanced type-system features and powerful GHC extensions. Our back-end is powered by libraries like Servant, Persistent and Esqueleto and we distribute computation using Cloud Haskell. If you are interested in working at Chordify, please have a look at: https://jobs.chordify.net/ All the best, Jeroen Bransen * -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Mon Mar 11 16:10:35 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 11 Mar 2019 12:10:35 -0400 Subject: [Haskell-cafe] [OT] thoughts about OO vs. 
functional "philosophy" In-Reply-To: References: <5175370c-be1f-5e2e-e671-d4399dab0af6@gmail.com> Message-ID: On Mon, Mar 11, 2019 at 6:55 AM PY wrote: > And this is false for Haskell: it's not enough to know language syntax > (it's relatively simple): each library can involve own level of > abstraction, own eDSLs, etc. And if somebody built his library on > arrows, pipes, free monads, etc - it's not enough to know language's > syntax only. Imagine a big house built with simple and same bricks. And > some Baroque theater where anything is complex and unique. > > So, languages like Haskell are more complex and need more time to learn > and create valuable applications. > OO has its own version of this… more insidiously. You're prone to see a class hierarchy and think you understand it up front because it's all familiar things, but every application in effect has its own distinct notion of what a given class means, exposed as either custom methods or custom implementations thereof implementing the app's specific logic. Haskell's FP style makes you expose this directly. (But it's the same amount of complexity underneath, so ultimately not really different; it just taxes the programmer in different ways.) -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Mar 11 17:29:16 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 11 Mar 2019 13:29:16 -0400 Subject: [Haskell-cafe] Haskell Symposium: Early Track due this Friday, March 15 Message-ID: ================================================================================ ACM SIGPLAN CALL FOR SUBMISSIONS Haskell Symposium 2019 Berlin, Germany 22--23 August, 2019 http://www.haskell.org/haskell-symposium/2019/ ================================================================================ The ACM SIGPLAN Haskell Symposium 2019 will be co-located with the 2019 International Conference on Functional Programming (ICFP). **NEW THIS YEAR**: We will be using a lightweight double-blind reviewing process. See further information below. The Haskell Symposium presents original research on Haskell, discusses practical experience and future development of the language, and promotes other forms of declarative programming. Topics of interest include: * Language design, with a focus on possible extensions and modifications of Haskell as well as critical discussions of the status quo; * Theory, such as formal semantics of the present language or future extensions, type systems, effects, metatheory, and foundations for program analysis and transformation; * Implementations, including program analysis and transformation, static and dynamic compilation for sequential, parallel, and distributed architectures, memory management, as well as foreign function and component interfaces; * Libraries, that demonstrate new ideas or techniques for functional programming in Haskell; * Tools, such as profilers, tracers, debuggers, preprocessors, and testing tools; * Applications, to scientific and symbolic computing, databases, multimedia, telecommunication, the web, and so forth; * Functional Pearls, being elegant and instructive programming examples; * Experience Reports, to document general practice and experience in education, industry, or other contexts; * System Demonstrations, based on running software rather than novel research results. 
Regular papers should explain their research contributions in both general and technical terms, identifying what has been accomplished, explaining why it is significant, and relating it to previous work, and to other languages where appropriate. Experience reports and functional pearls need not necessarily report original academic research results. For example, they may instead report reusable programming idioms, elegant ways to approach a problem, or practical experience that will be useful to other users, implementers, or researchers. The key criterion for such a paper is that it makes a contribution from which other Haskellers can benefit. It is not enough simply to describe a standard solution to a standard programming problem, or report on experience where you used Haskell in the standard way and achieved the result you were expecting. System demonstrations should summarize the system capabilities that would be demonstrated. The proposals will be judged on whether the ensuing session is likely to be important and interesting to the Haskell community at large, whether on grounds academic or industrial, theoretical or practical, technical, social or artistic. Please contact the program chair with any questions about the relevance of a proposal. Submission Details ================== Early and Regular Track ----------------------- The Haskell Symposium uses a two-track submission process so that some papers can gain early feedback. Strong papers submitted to the early track are accepted outright, and the others will be given their reviews and invited to resubmit to the regular track. Papers accepted via the early and regular tracks are considered of equal value and will not be distinguished in the proceedings. Although all papers may be submitted to the early track, authors of functional pearls and experience reports are particularly encouraged to use this mechanism. The success of these papers depends heavily on the way they are presented, and submitting early will give the program committee a chance to provide feedback and help draw out the key ideas. Formatting ---------- Submitted papers should be in portable document format (PDF), formatted using the ACM SIGPLAN style guidelines. Authors should use the `acmart` format, with the `sigplan` sub-format for ACM proceedings. For details, see: http://www.sigplan.org/Resources/Author/#acmart-format It is recommended to use the `review` option when submitting a paper; this option enables line numbers for easy reference in reviews. Functional pearls, experience reports, and demo proposals should be labelled clearly as such. Lightweight Double-blind Reviewing ---------------------------------- Haskell Symposium 2019 will use a lightweight double-blind reviewing process. To facilitate this, submitted papers must adhere to two rules: 1. Author names and institutions must be omitted, and 2. References to authors’ own related work should be in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized). 
In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. A reviewer will learn the identity of the author(s) of a paper after a review is submitted. Page Limits ----------- The length of submissions should not exceed the following limits: Regular paper: 12 pages Functional pearl: 12 pages Experience report: 6 pages Demo proposal: 2 pages There is no requirement that all pages are used. For example, a functional pearl may be much shorter than 12 pages. In all cases, the list of references is not counted against these page limits. Deadlines --------- Early track: Submission deadline: 15 March 2019 (Fri) Notification: 19 April 2019 (Fri) Regular track and demos: Submission deadline: 10 May 2019 (Fri) Notification: 21 June 2019 (Fri) Camera-ready deadline for accepted papers: 30 June 2019 (Sun) Deadlines are valid anywhere on Earth. Submission ---------- Submissions must adhere to SIGPLAN's republication policy (http://sigplan.org/Resources/Policies/Republication/), and authors should be aware of ACM's policies on plagiarism (https://www.acm.org/publications/policies/plagiarism). The paper submission deadline and length limitations are firm. There will be no extensions, and papers violating the length limitations will be summarily rejected. Papers should be submitted through HotCRP at: https://haskell19.hotcrp.com/ Improved versions of a paper may be submitted at any point before the submission deadline using the same web interface. Supplementary material: Authors have the option to attach supplementary material to a submission, on the understanding that reviewers may choose not to look at it. This supplementary material should not be submitted as part of the main document; instead, it should be uploaded as a separate PDF document or tarball. Supplementary material should be uploaded at submission time, not by providing a URL in the paper that points to an external repository. Authors are free to upload both anonymized and non-anonymized supplementary material. Anonymized supplementary material will be visible to reviewers immediately; non-anonymized supplementary material will be revealed to reviewers only after they have submitted their review of the paper and learned the identity of the author(s). Resubmitted Papers: Authors who submit a revised version of a paper that has previously been rejected by another conference have the option to attach an annotated copy of the reviews of their previous submission(s), explaining how they have addressed these previous reviews in the present submission. If a reviewer identifies him/herself as a reviewer of this previous submission and wishes to see how his/her comments have been addressed, the principal editor will communicate to this reviewer the annotated copy of his/her previous review. Otherwise, no reviewer will read the annotated copies of the previous reviews. Travel Support ============== Student attendees with accepted papers can apply for a SIGPLAN PAC grant to help cover travel expenses. PAC also offers other support, such as for child-care expenses during the meeting or for travel costs for companions of SIGPLAN members with physical disabilities, as well as for travel from locations outside of North America and Europe. For details on the PAC program, see its web page (http://pac.sigplan.org). 
Proceedings =========== Accepted papers will be included in the ACM Digital Library. Authors must grant ACM publication rights upon acceptance (http://authors.acm.org/main.html). Authors are encouraged to publish auxiliary material with their paper (source code, test data, etc.); they retain copyright of auxiliary material. Accepted proposals for system demonstrations will be posted on the symposium website but not formally published in the proceedings. All accepted papers and proposals will be posted on the conference website one week before the meeting. Publication date: The official publication date of accepted papers is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. Program Committee ================= Ki-Yung Ahn Hannam University Christiaan Baaij QBayLogic B.V. José Manuel Calderón Trilla Galois, Inc Benjamin Delaware Purdue University Richard Eisenberg (chair) Bryn Mawr College Jennifer Hackett University of Nottingham Kazutaka Matsuda Tohoku University Trevor McDonell Utrecht University Ivan Perez NIA / NASA Formal Methods Nadia Polikarpova University of California, San Diego Norman Ramsey Tufts University Christine Rizkallah University of New South Wales Eric Seidel Bloomberg LP Alejandro Serrano Mena Utrecht University John Wiegley Dfinity Foundation Thomas Winant Well-Typed LLP Ningning Xie University of Hong Kong If you have questions, please contact the chair at: rae at richarde.dev ================================================================================ From heetsankesara3 at gmail.com Mon Mar 11 17:37:09 2019 From: heetsankesara3 at gmail.com (Heet Sankesara) Date: Mon, 11 Mar 2019 23:07:09 +0530 Subject: [Haskell-cafe] Machine Learning Library in Haskell Message-ID: Hello community, Myself Heet sankesara. I am a machine learning practitioner. I've been doing it for a year. I recently learned Haskell to implement Markov Logic Networks. I found the language intuitive. It is far easier to express logic and formulas in Haskell as compared to languages like Python or R. So I decided to do some data science using it but I couldn't because there is no library like Sklearn in Python. The few libraries I found are mostly disoriented and undermanaged. I want to work on machine learning library which can be used easily and efficiently by machine learning practitioners. It would be helpful for everyone to have a dedicated machine learning library. For a practitioner, it would be easier to tweak and test the model and try different algorithms quickly. For the community, the dedicated library leads the developers to focus on it and improve it which would result in a more efficient and flexible library. The list of algorithms I am planning to implement are as follows: 1. Linear and Logistic Regression 2. Ridge Regression 3. Perceptron 4. SVM classifier and regressor (Both Linear and Non-Linear) 5. Stochastic Gradient Descent 6. K-means clustering and KNN classifier 7. Naive Bayes 8. Decision trees 9. Random Forest 10. Gradient boosting 11. Ada boost 12. Voting classifier 13. Neural Network 14. Gradient Descent, Momentum, Nesterov accelerated gradient 15. Adaptive Moment Estimation (Adam optimizer) Please consider this idea for GSoC this year. I am be happy to talk about the idea and possible algorithms that can be implemented in the upcoming summer. 
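To give a feel for the kind of interface I have in mind, here is a purely illustrative sketch; every class, type and function name below is hypothetical and not a commitment to any particular design:

type Sample = [Double]   -- one feature vector
type Target = Double

-- The idea is that every model shares a tiny fit/predict vocabulary,
-- so practitioners can swap algorithms without rewriting their code.
class Estimator model where
  fit     :: [(Sample, Target)] -> model
  predict :: model -> Sample -> Target

-- A deliberately trivial instance: always predict the mean target.
newtype MeanModel = MeanModel Double

instance Estimator MeanModel where
  fit examples =
    let ys = map snd examples
    in MeanModel (sum ys / fromIntegral (length ys))
  predict (MeanModel m) _ = m

-- usage: predict (fit trainingData :: MeanModel) newSample

The real library would of course need matrices, hyperparameters and so on; the sketch is only about keeping the surface API small and uniform.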
With regards, Heet Sankesara -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at bsima.me Mon Mar 11 20:22:55 2019 From: ben at bsima.me (Ben Sima) Date: Mon, 11 Mar 2019 13:22:55 -0700 Subject: [Haskell-cafe] Apply Haskell at scale in a music start-up In-Reply-To: References: Message-ID: <87pnqx40v4.fsf@bsima.me> Jeroen Bransen writes: > If you are interested in working at Chordify, please have a look at: > https://jobs.chordify.net/ The advert in the link seems to indicate the job is based in the Netherlands. Does Chordify accept remote work as well? From ian at zenhack.net Mon Mar 11 20:52:07 2019 From: ian at zenhack.net (Ian Denhardt) Date: Mon, 11 Mar 2019 16:52:07 -0400 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> <61B2B1A3-0CAA-4DD2-B8FD-5406B715B536@gmail.com> Message-ID: <155233752744.8343.1679387872621141214@localhost.localdomain> Quoting P Orrifolius (2019-03-10 20:38:05) > On Tue, 5 Mar 2019 at 23:11, Will Yager wrote: > > > > http://hackage.haskell.org/package/dejafu might do what you want > > Thanks for the suggestion, that does look useful. > From a quick look at the documentation... am I right in thinking that > this test framework only covers concurrency when you're directly using > IOVars and STM? Any concurrency that might involve other primitives, > or IOVar/STM uses in libraries, wouldn't be testable? Unless the > primitives/library had also been written to use the dejafu io classes > and monads of course... which might be a good idea in and of itself. Yeah, if you have dependencies not written in terms of those type classes, it may be difficult to integrate. If it's just your own code, it shouldn't be too hard to go through and swap in the type class constraints everywhere, but I do wish Haskell made it easier to wedge an abstraction layer like that under existing libraries after the fact :(. -Ian From mike at barrucadu.co.uk Mon Mar 11 21:31:41 2019 From: mike at barrucadu.co.uk (Michael Walker) Date: Mon, 11 Mar 2019 21:31:41 +0000 Subject: [Haskell-cafe] Informal modelling and simulation of protocols In-Reply-To: References: <201903041433.x24EXEi7001004@coolidge.cs.Dartmouth.EDU> <61B2B1A3-0CAA-4DD2-B8FD-5406B715B536@gmail.com> Message-ID: Hi, > On Tue, 5 Mar 2019 at 23:11, Will Yager wrote: > > > > http://hackage.haskell.org/package/dejafu might do what you want > > Thanks for the suggestion, that does look useful. > From a quick look at the documentation... am I right in thinking that > this test framework only covers concurrency when you're directly using > IOVars and STM? Any concurrency that might involve other primitives, > or IOVar/STM uses in libraries, wouldn't be testable? Unless the > primitives/library had also been written to use the dejafu io classes > and monads of course... which might be a good idea in and of itself. > Yes, that's the case. It's a bit of an unfortunate limitation but I don't currently have a way around it. You may find it interesting to look at adjoint.io's libraft, which uses dejafu for some tests: https://github.com/adjoint-io/raft/blob/master/test/TestDejaFu.hs The way they do it is by writing their library in terms of some "MonadRaft" classes, and give a concrete implementation of those classes in terms of dejafu's ConcIO monad for testing. -- Michael Walker (http://www.barrucadu.co.uk) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdsmith at gmail.com Tue Mar 12 00:56:14 2019 From: cdsmith at gmail.com (Chris Smith) Date: Mon, 11 Mar 2019 20:56:14 -0400 Subject: [Haskell-cafe] Hard cases for "Note 5" in Haskell layout parsing Message-ID: Quick question. I'm trying to resolve Haskell layout in order to provide better syntax highlighting, but without committing to building a full Haskell parser. The reason this is hard at face value is because of the "Note 5" in the relevant part of the Haskell Report, which says that a layout context closes whenever (a) the next token without closing the layout context would not be the start of any valid Haskell syntax, but (b) the current statement followed by an implicit '}' could be valid Haskell syntax. This rule is why it's okay to write "let x = 5 in x^2", even though let introduces a new layout syntax: the "in" implicitly closes it because "x = 5 in" isn't the start of any valid syntax, so the layout context is implicitly closed before the "in". Now for my question. Does anyone know other cases besides let/in where this commonly comes up? Everywhere I've seen this before uses let/in as an example, but then concludes that full parsing is needed and gives up on simpler answers. But the specific example with let/in is easily handled with a special-purpose rule that closes layout contexts as needed when an "in" keyword shows up. I cannot seem to construct any other examples, because other layout-introducing keywords have their contents at the end of syntactic element. Is there something I've missed here? I don't even care if it's a situation where something is technically valid but a horrible edge case. I'm interested in realistic counterexamples. Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Mar 12 01:35:14 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 11 Mar 2019 21:35:14 -0400 Subject: [Haskell-cafe] Hard cases for "Note 5" in Haskell layout parsing In-Reply-To: References: Message-ID: Consider the case where someone has done: let x = do ... There's also the opposite edge case, which people doing one-liners in ghci or lambdabot run into fairly often: ... do ...; let x = 5; ... Where the semicolon continues the let bindings, not the do; you must use braces to disambiguate. On Mon, Mar 11, 2019 at 8:56 PM Chris Smith wrote: > Quick question. I'm trying to resolve Haskell layout in order to provide > better syntax highlighting, but without committing to building a full > Haskell parser. The reason this is hard at face value is because of the > "Note 5" in the relevant part of the Haskell Report, which says that a > layout context closes whenever (a) the next token without closing the > layout context would not be the start of any valid Haskell syntax, but (b) > the current statement followed by an implicit '}' could be valid Haskell > syntax. This rule is why it's okay to write "let x = 5 in x^2", even > though let introduces a new layout syntax: the "in" implicitly closes it > because "x = 5 in" isn't the start of any valid syntax, so the layout > context is implicitly closed before the "in". > > Now for my question. Does anyone know other cases besides let/in where > this commonly comes up? Everywhere I've seen this before uses let/in as an > example, but then concludes that full parsing is needed and gives up on > simpler answers. 
But the specific example with let/in is easily handled > with a special-purpose rule that closes layout contexts as needed when an > "in" keyword shows up. I cannot seem to construct any other examples, > because other layout-introducing keywords have their contents at the end of > syntactic element. > > Is there something I've missed here? I don't even care if it's a > situation where something is technically valid but a horrible edge case. > I'm interested in realistic counterexamples. > > Thanks, > Chris > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rri at silentyak.com Tue Mar 12 04:44:45 2019 From: rri at silentyak.com (Ramnath R Iyer) Date: Mon, 11 Mar 2019 21:44:45 -0700 Subject: [Haskell-cafe] [OT] thoughts about OO vs. functional "philosophy" In-Reply-To: References: Message-ID: To give credit where due, I finally recalled where I had come across that particular reference to encapsulation. Chapter 1.3, https://github.com/hmemcpy/milewski-ctfp-pdf. On Sun, Mar 10, 2019 at 19:40 Dennis Raddle wrote: > Indeed, Python does not enforce encapsulation. It has weakly enforced > types as well (duck typing for most objects). But it's still an opportunity > to follow an "encapsulation" convention and explain the purpose, while > telling my student he won't see the full OO implementation until he gets > into Java or C++. > > I want to comment on Ramnath's point that encapsulation helps us hold > fewer things in our minds. I would agree that's part of it, but in a > general sense programming always involves more than we can hold in our > minds and never requires us to hold all of it at once. > > Consider a mathematical proof that runs to 300 pages. We don't need to > hold the whole thing in our minds to create and understand it--but it's > still worth finding a way to simplify it so it takes fewer pages or less > advanced math. > > Although we rarely prove a program's correctness in a strict sense, we > probably give some thought to a rough justification of the correctness of > our design choices, even if such a justification requires more thoughts > than we can hold at once. There is a benefit in encapsulating so that the > justification requires fewer thoughts -- say 20 thoughts instead of 200 > thoughts. > > Now, I'm not sure this is any closer to the heart of the matter. Maybe > Ramnath is more essentially correct. Comments welcome. > > D > > > > On Sun, Mar 10, 2019 at 7:23 PM Richard O'Keefe wrote: > >> Serious question: does Python _have_ encapsulation? >> I know about the single-underscore convention and >> the double-underscore convention, but even double- >> underscore attributes and methods can be accessed >> freely from the outside if you use the class-name >> prefix. That is, it makes it difficult to breach >> encapsulation by accident, but it doesn't take the >> Lock-Picking Lawyer from YouTube to breach >> encapsulation on purpose without breaking stride. >> >> On Mon, 11 Mar 2019 at 09:29, Dennis Raddle >> wrote: >> >>> I have a thought about OO vs. functional programming . >>> >>> I'm moderately experienced with functional languages such as Haskell and >>> Purescript and have used OO for many years at work (Python, C++). 
>>> >>> I'm giving tutoring lessons in Python to a professional who wants to use >>> programming as part of his work. (He's an entrepreneur.) He is eager to >>> understand functional and OO styles within Python--he keeps asking about >>> the real benefit of each, not content with a superficial approach. >>> >>> I've explained about "encapsulation": gathering related functions >>> together and minimizing coupling to the rest of the program. >>> >>> He asks "why do we encapsulate?" I've explained that one reason is to >>> help we, the programmers, be confident that our code is correct. >>> >>> For example, in a class we try to isolate the implementation details >>> from the rest of the program, and the compiler helps enforce that.So when >>> we're writing our code and confirming to ourselves it's correct, we can >>> make simplifying assumptions. We know the rest of the program won't modify >>> our private variables and won't access us in any way other than the >>> interface we've presented to the world. >>> >>> I explained that a large program is like a corporation, and pieces of >>> the program are like individual employees. When we write a class or >>> function, in a sense we "become" that employee and "forget about" a lot of >>> the details of other employees. (i.e., encapsulation). >>> >>> Then has asked why functional programming is helpful. I explained about >>> referential transparency. Say we're a function: then other "employees" >>> (other functions) can trust us with simplifying assumptions, like no >>> side-effects. >>> >>> Then this thought occurred to me, which is nifty but maybe not the whole >>> story. >>> >>> >>> """ >>> Say we're a class. Then the simplifying assumptions of OO allow *us* to >>> trust the *rest of the program* won't mess with us. >>> >>> Say we're a function. Then the simplifying assumptions of functional >>> style help the *rest of the program* trust that *we* won't mess with it. >>> """ >>> >>> D >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. >> >> _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Ramnath R Iyer -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdsmith at gmail.com Tue Mar 12 05:34:21 2019 From: cdsmith at gmail.com (Chris Smith) Date: Tue, 12 Mar 2019 01:34:21 -0400 Subject: [Haskell-cafe] Hard cases for "Note 5" in Haskell layout parsing In-Reply-To: References: Message-ID: Thanks for the answer, Brandon. I don't think either of these is a counterexample, though. Both seem to be handled just fine by the ad hoc rule of breaking layout contexts for Note 5 only when an "in" otherwise would not match a "let". Maybe I'm missing something, though, if you're expecting to fill in the "..." parts in some clever way. Do you have a complete example where this wouldn't be enough? On Mon, Mar 11, 2019 at 9:35 PM Brandon Allbery wrote: > Consider the case where someone has done: let x = do ... > > There's also the opposite edge case, which people doing one-liners in ghci > or lambdabot run into fairly often: ... do ...; let x = 5; ... 
> Where the semicolon continues the let bindings, not the do; you must use > braces to disambiguate. > > On Mon, Mar 11, 2019 at 8:56 PM Chris Smith wrote: > >> Quick question. I'm trying to resolve Haskell layout in order to provide >> better syntax highlighting, but without committing to building a full >> Haskell parser. The reason this is hard at face value is because of the >> "Note 5" in the relevant part of the Haskell Report, which says that a >> layout context closes whenever (a) the next token without closing the >> layout context would not be the start of any valid Haskell syntax, but (b) >> the current statement followed by an implicit '}' could be valid Haskell >> syntax. This rule is why it's okay to write "let x = 5 in x^2", even >> though let introduces a new layout syntax: the "in" implicitly closes it >> because "x = 5 in" isn't the start of any valid syntax, so the layout >> context is implicitly closed before the "in". >> >> Now for my question. Does anyone know other cases besides let/in where >> this commonly comes up? Everywhere I've seen this before uses let/in as an >> example, but then concludes that full parsing is needed and gives up on >> simpler answers. But the specific example with let/in is easily handled >> with a special-purpose rule that closes layout contexts as needed when an >> "in" keyword shows up. I cannot seem to construct any other examples, >> because other layout-introducing keywords have their contents at the end of >> syntactic element. >> >> Is there something I've missed here? I don't even care if it's a >> situation where something is technically valid but a horrible edge case. >> I'm interested in realistic counterexamples. >> >> Thanks, >> Chris >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen at chordify.net Tue Mar 12 08:40:44 2019 From: jeroen at chordify.net (Jeroen Bransen) Date: Tue, 12 Mar 2019 09:40:44 +0100 Subject: [Haskell-cafe] Apply Haskell at scale in a music start-up In-Reply-To: <87pnqx40v4.fsf@bsima.me> References: <87pnqx40v4.fsf@bsima.me> Message-ID: <0e484e9e-785f-8e13-81e6-a39fc92c287f@chordify.net> >> If you are interested in working at Chordify, please have a look at: >> https://jobs.chordify.net/ > The advert in the link seems to indicate the job is based in the > Netherlands. Does Chordify accept remote work as well? * Chordify is based in the Netherlands, and not everyone who would like to work at Chordify can or is willing to relocate to this nice, but relatively rainy part of the world. Therefore, you are not the first person to ask whether working remotely is an option. We have do not have principal reasons to reject remote working, but having two offices in the Netherlands, and using all digital means available to streamline our communication, our experience is that nothing beats face-to-face contact. Also, we really try to make our office a pleasant and fun place, and we believe it adds value to be part of our office culture. Therefore, we prefer working in our office over working remotely. That said, working part-time or working from home is something that we do on a regular basis. 
* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkarni at gmail.com Tue Mar 12 14:21:02 2019 From: jkarni at gmail.com (Julian Arni) Date: Tue, 12 Mar 2019 15:21:02 +0100 Subject: [Haskell-cafe] ANN: Radicle Message-ID: Today we're releasing the alpha version of Radicle. Radicle is a p2p code collaboration platform (think Github replacement). Issues, patches (like PRs) and optionally code are distributed via IPFS, without a centralized server, and with tooling that's meant to work directly from your terminal. There's a lot more info on the webpage [0]. Radicle is written in Haskell (minus the parts that are written in the Radicle language itself, though the language is implemented in Haskell), and open-source, which is why I thought to post here. [0] www.radicle.xyz -------------- next part -------------- An HTML attachment was scrubbed... URL: From mats.rauhala at gmail.com Wed Mar 13 07:05:48 2019 From: mats.rauhala at gmail.com (Mats Rauhala) Date: Wed, 13 Mar 2019 09:05:48 +0200 Subject: [Haskell-cafe] ANN: Radicle In-Reply-To: References: Message-ID: <20190313070548.5nowafg5pnvadolq@peitto> Maybe add a note to https://github.com/ipfs/awesome-ipfs as well? From K.Bleijenberg at lijbrandt.nl Thu Mar 14 10:18:43 2019 From: K.Bleijenberg at lijbrandt.nl (Kees Bleijenberg) Date: Thu, 14 Mar 2019 11:18:43 +0100 Subject: [Haskell-cafe] How to use alex with param -g in cabal Message-ID: <000001d4da4f$4d4f6f30$e7ee4d90$@lijbrandt.nl> Hi all, Alex is in the build-tools part of the cabal file. The docs of Alex says that invoking Alex with -g makes the resulting lexer significantly faster and smaller. How can I pass the -g parameter in the cabal file? Kees --- Dit e-mailbericht is gecontroleerd op virussen met Avast antivirussoftware. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanessa.mchale at iohk.io Thu Mar 14 14:34:18 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Thu, 14 Mar 2019 09:34:18 -0500 Subject: [Haskell-cafe] How to use alex with param -g in cabal In-Reply-To: <000001d4da4f$4d4f6f30$e7ee4d90$@lijbrandt.nl> References: <000001d4da4f$4d4f6f30$e7ee4d90$@lijbrandt.nl> Message-ID: I think cabal passes -g by default. But if not you can something like: https://github.com/vmchale/atspkg/blob/master/cabal.project#L30 Cheers, Vanessa McHale On 3/14/19 5:18 AM, Kees Bleijenberg wrote: > > Hi all, > >   > > Alex is in the build-tools part of the cabal file. The docs of Alex > says that invoking Alex with –g makes the resulting lexer > significantly faster and smaller. How can I pass the –g parameter in > the cabal file? > >   > > Kees > > > > Virusvrij. www.avast.com > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From vanessa.mchale at iohk.io Fri Mar 15 03:54:58 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Thu, 14 Mar 2019 22:54:58 -0500 Subject: [Haskell-cafe] Annoying problem when pattern matching negative integer literals and wildcards Message-ID: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> I have the following program: module Bug ( encryptionResult ) where data EncryptionResult = HasEncryption                       | EncryptionUnknown encryptionResult :: Int -> EncryptionResult encryptionResult 1 = HasEncryption encryptionResult -1 = EncryptionUnknown encryptionResult _ = error "Internal error." When I try to compile it with GHC I get [1 of 1] Compiling Bug              ( Bug.hs, Bug.o ) Bug.hs:9:1: error:     Multiple declarations of ‘encryptionResult’     Declared at: Bug.hs:7:1                  Bug.hs:9:1   | 9 | encryptionResult _ = error "Internal error."   | ^^^^^^^^^^^^^^^^ I can replicate this in Hugs, viz. ERROR "Bug.hs":7 - "encryptionResult" multiply defined However, everything compiles fine when I write module Bug ( encryptionResult ) where data EncryptionResult = HasEncryption                       | EncryptionUnknown encryptionResult :: Int -> EncryptionResult encryptionResult 1 = HasEncryption encryptionResult -1 = EncryptionUnknown or module Bug ( encryptionResult ) where data EncryptionResult = HasEncryption                       | EncryptionUnknown encryptionResult :: Int -> EncryptionResult encryptionResult 1 = HasEncryption encryptionResult 0 = EncryptionUnknown encryptionResult _ = error "Internal error." Am I doing something obviously screwy? This seems like a pretty annoying feature on the language (to the point where I assumed it was a GHC bug until I got the same behavior with Hugs) and I can't figure out why it exists. Cheers, Vanessa McHale -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From vanessa.mchale at iohk.io Fri Mar 15 03:57:11 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Thu, 14 Mar 2019 22:57:11 -0500 Subject: [Haskell-cafe] Annoying problem when pattern matching negative integer literals and wildcards In-Reply-To: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> References: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> Message-ID: <772dadbb-5bf9-f3e8-5a66-365964838d67@iohk.io> Ah nevermind, I figured it out: the second bit was being treated as a definition for (-) On 3/14/19 10:54 PM, Vanessa McHale wrote: > I have the following program: > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption >                       | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult -1 = EncryptionUnknown > encryptionResult _ = error "Internal error." > > When I try to compile it with GHC I get > > [1 of 1] Compiling Bug              ( Bug.hs, Bug.o ) > > Bug.hs:9:1: error: >     Multiple declarations of ‘encryptionResult’ >     Declared at: Bug.hs:7:1 >                  Bug.hs:9:1 >   | > 9 | encryptionResult _ = error "Internal error." >   | ^^^^^^^^^^^^^^^^ > > I can replicate this in Hugs, viz. 
> > ERROR "Bug.hs":7 - "encryptionResult" multiply defined > > However, everything compiles fine when I write > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption >                       | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult -1 = EncryptionUnknown > > or > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption >                       | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult 0 = EncryptionUnknown > encryptionResult _ = error "Internal error." > > Am I doing something obviously screwy? This seems like a pretty annoying > feature on the language (to the point where I assumed it was a GHC bug > until I got the same behavior with Hugs) and I can't figure out why it > exists. > > Cheers, > Vanessa McHale -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From allbery.b at gmail.com Fri Mar 15 03:59:06 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 14 Mar 2019 23:59:06 -0400 Subject: [Haskell-cafe] Annoying problem when pattern matching negative integer literals and wildcards In-Reply-To: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> References: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> Message-ID: Maybe one of the numeric literal extensions will help. The root problem is that, without parentheses or one of the extensions, this ends up defining something different than you intended: (-) with `encryptionResult` as a local binding. On Thu, Mar 14, 2019 at 11:55 PM Vanessa McHale wrote: > I have the following program: > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption > | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult -1 = EncryptionUnknown > encryptionResult _ = error "Internal error." > > When I try to compile it with GHC I get > > [1 of 1] Compiling Bug ( Bug.hs, Bug.o ) > > Bug.hs:9:1: error: > Multiple declarations of ‘encryptionResult’ > Declared at: Bug.hs:7:1 > Bug.hs:9:1 > | > 9 | encryptionResult _ = error "Internal error." > | ^^^^^^^^^^^^^^^^ > > I can replicate this in Hugs, viz. > > ERROR "Bug.hs":7 - "encryptionResult" multiply defined > > However, everything compiles fine when I write > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption > | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult -1 = EncryptionUnknown > > or > > module Bug ( encryptionResult ) where > > data EncryptionResult = HasEncryption > | EncryptionUnknown > > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult 0 = EncryptionUnknown > encryptionResult _ = error "Internal error." > > Am I doing something obviously screwy? This seems like a pretty annoying > feature on the language (to the point where I assumed it was a GHC bug > until I got the same behavior with Hugs) and I can't figure out why it > exists. 
> > Cheers, > Vanessa McHale > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Fri Mar 15 04:47:44 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Fri, 15 Mar 2019 00:47:44 -0400 Subject: [Haskell-cafe] Annoying problem when pattern matching negative integer literals and wildcards In-Reply-To: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> References: <5d595ee5-dea1-ff3d-13ab-28b046ec38c7@iohk.io> Message-ID: <20190315044744.GF3822@straasha.imrryr.org> On Thu, Mar 14, 2019 at 10:54:58PM -0500, Vanessa McHale wrote: > encryptionResult :: Int -> EncryptionResult > encryptionResult 1 = HasEncryption > encryptionResult -1 = EncryptionUnknown > encryptionResult _ = error "Internal error." The way I keep it straight, is that just as one must write: let x = encryptionResult (-1) to evaluate "encryptionResult" at (-1), one must also write: encryptionResult :: Int -> EncryptionResult encryptionResult 1 = HasEncryption encryptionResult (-1) = EncryptionUnknown encryptionResult _ = error "Internal error." to define the function, because binary minus takes precedence over unary minus (which in turn takes precedence over sections, thus (subtract 1), not (- 1)). -- Viktor. From K.Bleijenberg at lijbrandt.nl Fri Mar 15 11:16:14 2019 From: K.Bleijenberg at lijbrandt.nl (Kees Bleijenberg) Date: Fri, 15 Mar 2019 12:16:14 +0100 Subject: [Haskell-cafe] initialization of AlexUserState Message-ID: <000001d4db20$80b7c2b0$82274810$@lijbrandt.nl> Hi All, I have a Alex lexer with wrapper "monadUserState-bytestring". The AlexUserState = AlexUserState {sheetNames:: {String], currSheet:: Int}. The lexer should check whether a found sheetname is in sheetNames. The sheetNames from AlexUserState come from outside. I want to invoke the lexer many times with different AlexUserStates. Therefore the alexInitUserState function is not what I want. I'de like to initialize the AlexUserState while starting the lexer, i.e. as a parameter to runAlex. Can that be done? Kees --- Dit e-mailbericht is gecontroleerd op virussen met Avast antivirussoftware. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From P.Achten at cs.ru.nl Fri Mar 15 12:15:47 2019 From: P.Achten at cs.ru.nl (Peter Achten) Date: Fri, 15 Mar 2019 13:15:47 +0100 Subject: [Haskell-cafe] [TFP'19] second call for papers: Trends in Functional Programming 2019, 12-14 June 2019, Vancouver, BC, CA Message-ID: -------------------------------- 2 N D C A L L F O R P A P E R S -------------------------------- ====== TFP 2019 ====== 20th Symposium on Trends in Functional Programming 12-14 June, 2019 Vancouver, BC, CA https://www.tfp2019.org/index.html == Important Dates == Submission Deadline for pre-symposium formal review Thursday, March 28, 2019 Sumbission Deadline for Draft Papers Thursday, May 9, 2019 Notification for pre-symposium submissions Thursday, May 2, 2019 Notification for Draft Papers Tuesday, May 14, 1029 TFPIE Tuesday, June 11, 2019 Symposium Wednesday, June 12, 2019 – Friday, June 14, 2019 Notification of Student Paper Feedback Friday June 21, 2019 Submission Deadline for revised Draft Papers (post-symposium formal review) Thursday, August 1, 2019 Notification for post-symposium submissions Thursday, October 24, 2019 Camera Ready Deadline (both pre- and post-symposium) Friday, November 29, 2019 The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trends in the area. It aspires to be a lively environment for presenting the latest research results, and other contributions (see below at scope). Please be aware that TFP uses two distinct rounds of submissions (see below at submission details). TFP 2019 will be the main event of a pair of functional programming events. TFP 2019 will be accompanied by the International Workshop on Trends in Functional Programming in Education (TFPIE), which will take place on June 11. == Scope == The symposium recognizes that new trends may arise through various routes. As part of the Symposium's focus on trends we therefore identify the following five article categories. High-quality articles are solicited in any of these categories: Research Articles: Leading-edge, previously unpublished research work Position Articles: On what new trends should or should not be Project Articles: Descriptions of recently started new projects Evaluation Articles: What lessons can be drawn from a finished project Overview Articles: Summarizing work with respect to a trendy subject Articles must be original and not simultaneously submitted for publication to any other forum. They may consider any aspect of functional programming: theoretical, implementation-oriented, or experience-oriented. Applications of functional programming techniques to other languages are also within the scope of the symposium. Topics suitable for the symposium include, but are not limited to: Functional programming and multicore/manycore computing Functional programming in the cloud High performance functional computing Extra-functional (behavioural) properties of functional programs Dependently typed functional programming Validation and verification of functional programs Debugging and profiling for functional languages Functional programming in different application areas: security, mobility, telecommunications applications, embedded systems, global computing, grids, etc. 
Interoperability with imperative programming languages Novel memory management techniques Program analysis and transformation techniques Empirical performance studies Abstract/virtual machines and compilers for functional languages (Embedded) domain specific languages New implementation strategies Any new emerging trend in the functional programming area If you are in doubt on whether your article is within the scope of TFP, please contact the TFP 2019 program chairs, William J. Bowman and Ron Garcia. == Best Paper Awards == To reward excellent contributions, TFP awards a prize for the best paper accepted for the formal proceedings. TFP traditionally pays special attention to research students, acknowledging that students are almost by definition part of new subject trends. A student paper is one for which the authors state that the paper is mainly the work of students, the students are listed as first authors, and a student would present the paper. A prize for the best student paper is awarded each year. In both cases, it is the PC of TFP that awards the prize. In case the best paper happens to be a student paper, that paper will then receive both prizes. == Instructions to Author == Papers must be submitted at: https://easychair.org/conferences/?conf=tfp2019 Authors of papers have the choice of having their contributions formally reviewed either before or after the Symposium. == Pre-symposium formal review == Papers to be formally reviewed before the symposium should be submitted before an early deadline and receive their reviews and notification of acceptance for both presentation and publication before the symposium. A paper that has been rejected in this process may still be accepted for presentation at the symposium, but will not be considered for the post-symposium formal review. == Post-symposium formal review == Papers submitted for post-symposium review (draft papers) will receive minimal reviews and notification of acceptance for presentation at the symposium. Authors of draft papers will be invited to submit revised papers based on the feedback received at the symposium. A post-symposium refereeing process will then select a subset of these articles for formal publication. == Paper categories == There are two types of submission, each of which can be submitted either for pre-symposium or post-symposium review: Extended abstracts. Extended abstracts are 4 to 10 pages in length. Full papers. Full papers are up to 20 pages in length. Each submission also belongs to a category: research position project evaluation overview paper Each submission should clearly indicate to which category it belongs. Additionally, a draft paper submission—of either type (extended abstract or full paper) and any category—can be considered a student paper. A student paper is one for which primary authors are research students and the majority of the work described was carried out by the students. The submission should indicate that it is a student paper. Student papers will receive additional feedback from the PC shortly after the symposium has taken place and before the post-symposium submission deadline. Feedback is only provided for accepted student papers, i.e., papers submitted for presentation and post-symposium formal review that are accepted for presentation. If a student paper is rejected for presentation, then it receives no further feedback and cannot be submitted for post-symposium review. == Format == Papers must be written in English, and written using the LNCS style. 
For more information about formatting please consult the Springer LNCS web site (http://www.springer.com/lncs). == Program Committee == Program Co-chairs William J. Bowman University of British Columbia Ronald Garcia University of British Columbia Matteo Cimini University of Massachusetts Lowell Ryan Culpepper Czech Technical Institute Joshua Dunfield Queen's University Sam Lindley University of Edinburgh Assia Mahboubi INRIA Nantes Christine Rizkallah University of New South Wales Satnam Singh Google AI Marco T. Morazán Seton Hall University John Hughes Chalmers University and Quviq Nicolas Wu University of Bristol Tom Schrijvers KU Leuven Scott Smith Johns Hopkins University Stephanie Balzer Carnegie Mellon University Viktória Zsók Eötvös Loránd University -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanessa.mchale at iohk.io Fri Mar 15 14:43:08 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Fri, 15 Mar 2019 09:43:08 -0500 Subject: [Haskell-cafe] initialization of AlexUserState In-Reply-To: <000001d4db20$80b7c2b0$82274810$@lijbrandt.nl> References: <000001d4db20$80b7c2b0$82274810$@lijbrandt.nl> Message-ID: I would just add a function setAlexUserState :: AlexUserState -> Alex () setAlexUserState = ... and then write another run :: AlexUserState -> Alex () run st = do     setAlexUserState st     alexMonadScan or what have you On 3/15/19 6:16 AM, Kees Bleijenberg wrote: > > Hi All, > >   > > I have a Alex lexer with wrapper "monadUserState-bytestring". The > AlexUserState = AlexUserState {sheetNames:: {String], currSheet:: > Int}.  The lexer should check whether a found sheetname is in > sheetNames. The sheetNames from AlexUserState come from outside. I > want to invoke the lexer many times with different AlexUserStates. >  Therefore the alexInitUserState function is not what I want. I’de > like to initialize the AlexUserState while starting the lexer, i.e. as > a parameter to runAlex. Can that be done? > >   > > Kees > > > > Virusvrij. www.avast.com > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From capn.freako at gmail.com Fri Mar 15 17:45:12 2019 From: capn.freako at gmail.com (David Banas) Date: Fri, 15 Mar 2019 10:45:12 -0700 Subject: [Haskell-cafe] Preferred location for symlink-bindir? Message-ID: Hi all, In moving to the cabal 'new-install' command, is there a preferred definition for the ‘symlink-bindir’ cabal configuration variable on Mac OS X? Thanks, -db From jaysinhp at gmail.com Sun Mar 17 08:21:20 2019 From: jaysinhp at gmail.com (Jaysinh shukla) Date: Sun, 17 Mar 2019 13:51:20 +0530 Subject: [Haskell-cafe] Learn you a Haskell for great good! Group readnig Message-ID: Respected Member, I am trying to learn Haskell from the book Learn you a Haskell for great good! [1]. I livestream my reading sessions at this [2] Twitch channel. Group reading is helping me to learn better. It is way more productive than reading alone. If you are learning Haskell, please join us. 
If you already know Haskell, please join to teach. Teaching will give you a good revision. Don't worry about the commitment to attend each and every session. It is fine to attend sessions on the day you are available and skip the rest. The fun part is, all the sessions will be on weekends. You can watch videos of past sessions here [3]. I am trying to maintain an event calendar for upcoming sessions here [4]. Please share these resources with whoever learning the Haskell. Don't hesitate to ask your questions here or in a personal mail. Thanks! 1: http://learnyouahaskell.com 2: https://www.twitch.tv/jaysinhp 3: https://www.twitch.tv/jaysinhp/videos 4: https://www.twitch.tv/jaysinhp/events From trebla at vex.net Mon Mar 18 00:25:34 2019 From: trebla at vex.net (Albert Y. C. Lai) Date: Sun, 17 Mar 2019 20:25:34 -0400 Subject: [Haskell-cafe] Hard cases for "Note 5" in Haskell layout parsing In-Reply-To: References: Message-ID: Does this help? x1 = case do Nothing; Nothing of _ -> () x2 = case do Nothing Nothing of _ -> () In fact in f2 play with all ways of positioning the line "of _ -> ()" to see there is almost no wrong placement! On 2019-03-11 8:56 p.m., Chris Smith wrote: > Quick question.  I'm trying to resolve Haskell layout in order to > provide better syntax highlighting, but without committing to building a > full Haskell parser.  The reason this is hard at face value is because > of the "Note 5" in the relevant part of the Haskell Report, which says > that a layout context closes whenever (a) the next token without closing > the layout context would not be the start of any valid Haskell syntax, > but (b) the current statement followed by an implicit '}' could be valid > Haskell syntax.  This rule is why it's okay to write "let x = 5 in x^2", > even though let introduces a new layout syntax: the "in" implicitly > closes it because "x = 5 in" isn't the start of any valid syntax, so the > layout context is implicitly closed before the "in". > > Now for my question.  Does anyone know other cases besides let/in where > this commonly comes up?  Everywhere I've seen this before uses let/in as > an example, but then concludes that full parsing is needed and gives up > on simpler answers.  But the specific example with let/in is easily > handled with a special-purpose rule that closes layout contexts as > needed when an "in" keyword shows up.  I cannot seem to construct any > other examples, because other layout-introducing keywords have their > contents at the end of syntactic element. > > Is there something I've missed here?  I don't even care if it's a > situation where something is technically valid but a horrible edge > case.  I'm interested in realistic counterexamples. > > Thanks, > Chris > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > From damien.mattei at gmail.com Mon Mar 18 10:52:33 2019 From: damien.mattei at gmail.com (Damien Mattei) Date: Mon, 18 Mar 2019 11:52:33 +0100 Subject: [Haskell-cafe] Fwd: [Fwd: linux version compatible with GHC 4.3 ?] 
In-Reply-To: <9ed1b6a440d354b627a40c2a4c817413ff631272.camel@univ-cotedazur.fr> References: <9ed1b6a440d354b627a40c2a4c817413ff631272.camel@univ-cotedazur.fr> Message-ID: ---------- Forwarded message --------- From: Damien Mattei Date: Mon, Mar 18, 2019 at 11:46 AM Subject: [Fwd: linux version compatible with GHC 4.3 ?] To: ---------- Forwarded message ---------- From: Damien Mattei To: Haskell Cafe Cc: Bcc: Date: Mon, 18 Mar 2019 11:39:25 +0100 Subject: linux version compatible with GHC 4.3 ? Hi, does anyone know a linux distrib where haskell platform installs out of the box? it fails on fedora core 28 (32bits) : [root at localhost Downloads]# ghci /usr/local/haskell/ghc-8.4.3-i386/lib/ghc-8.4.3/bin/ghc: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory but works on older system such as CentOS Linux release 7.2.1511 (Core) 64bits: [mattei at moita ~]$ cat /etc/redhat-release CentOS Linux release 7.2.1511 (Core) [mattei at moita ~]$ ghci GHCi, version 8.4.3: http://www.haskell.org/ghc/ :? for help Prelude> does it works on Ubuntu? Damien -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.pelenitsyn at gmail.com Mon Mar 18 22:58:36 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 18 Mar 2019 18:58:36 -0400 Subject: [Haskell-cafe] Fwd: [Fwd: linux version compatible with GHC 4.3 ?] In-Reply-To: References: <9ed1b6a440d354b627a40c2a4c817413ff631272.camel@univ-cotedazur.fr> Message-ID: Hello Damien! Here are a couple of points: ---------- Forwarded message ---------- > From: Damien Mattei > Subject: linux version compatible with GHC 4.3 ? > Did you mean 8.4.3? All major modern Linux distributions are compatible with this version. But you might run into installation issues, just like with *any other piece of software*. > does anyone know a linux distrib where haskell platform installs out of > the box? > ... > I see you were able to install GHC on CentOS 7.2, but not on Fedora Core 28. It happens. I can't really tell what might have gone wrong with your installation on Fedora without additional details, such as: what did you actually do for the installation. But it definitely should be possible to install GHC on Fedora. > does it works on Ubuntu? > Yes, I'm using Ubuntu with several different versions of GHC including 8.4.3. I use HVR's ppa for installation: https://launchpad.net/~hvr/+archive/ubuntu/ghc -- Best wishes, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Tue Mar 19 00:12:42 2019 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 18 Mar 2019 20:12:42 -0400 Subject: [Haskell-cafe] Fwd: [Fwd: linux version compatible with GHC 4.3 ?] In-Reply-To: References: <9ed1b6a440d354b627a40c2a4c817413ff631272.camel@univ-cotedazur.fr> Message-ID: <14499C96-82AD-4EAB-934D-17398A852A66@dukhovni.org> > On Mar 18, 2019, at 6:58 PM, Artem Pelenitsyn wrote: > > I see you were able to install GHC on CentOS 7.2, but not on Fedora Core 28. It happens. I can't really tell what might have gone wrong with your installation on Fedora without additional details, such as: what did you actually do for the installation. But it definitely should be possible to install GHC on Fedora. https://github.com/commercialhaskell/stack/issues/1012#issuecomment-232206593 -- Viktor. 
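P.S. If I remember the outcome of that issue correctly, the problem is that the GHC bindist is linked against libtinfo.so.5, which recent Fedora releases (28 included) no longer ship by default; installing the ncurses compatibility package (something like "dnf install ncurses-compat-libs", assuming the package still carries that name) is the usual workaround for the libtinfo error Damien reported.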
From dennis.raddle at gmail.com Tue Mar 19 01:58:46 2019 From: dennis.raddle at gmail.com (Dennis Raddle) Date: Mon, 18 Mar 2019 18:58:46 -0700 Subject: [Haskell-cafe] sublime-haskell and hsdev Message-ID: I'm just now learning Sublime Text 3 and trying to install sublime-haskell. It requires me to build hsdev, which I did with "cabal install hsdev". However the latter operation installs version > 3 of hsdev, while sublime-haskell requires < 3. So first question, how do I build and install an earlier version of hsdev? Also, I'm not clear if sublime-haskell is integrated with stack. I use stack for everything and I've been using Intero up til now in Emacs. The docs mention stack but seem to be clearer about cabal. D -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Tue Mar 19 06:02:32 2019 From: jo at durchholz.org (Joachim Durchholz) Date: Tue, 19 Mar 2019 07:02:32 +0100 Subject: [Haskell-cafe] Fwd: [Fwd: linux version compatible with GHC 4.3 ?] In-Reply-To: <14499C96-82AD-4EAB-934D-17398A852A66@dukhovni.org> References: <9ed1b6a440d354b627a40c2a4c817413ff631272.camel@univ-cotedazur.fr> <14499C96-82AD-4EAB-934D-17398A852A66@dukhovni.org> Message-ID: Am 19.03.19 um 01:12 schrieb Viktor Dukhovni: > > >> On Mar 18, 2019, at 6:58 PM, Artem Pelenitsyn wrote: >> >> I see you were able to install GHC on CentOS 7.2, but not on Fedora Core 28. It happens. I can't really tell what might have gone wrong with your installation on Fedora without additional details, such as: what did you actually do for the installation. But it definitely should be possible to install GHC on Fedora. > > https://github.com/commercialhaskell/stack/issues/1012#issuecomment-232206593 I switched to installing language tools somewhere in my home directory long ago, because - more frequent breakage than for other types of packages - I want a newer version anyway - common permission issues when installing plugins/libraries My guess about the reasons: - package tools don't have a concept of user-installable extension so many language tools are broken by design (I'm looking at you, Elipse-on-Debian) - the packaged version tends to be older than what you wanted; the faster the language evolves, the more this is a problem - language tools tend to be complicated and fragile, so they have a higher chance of breaking anyway - if there's an issue, the easiest fix for most users is to just install the language tools in the home directory, instead of fixing the package (which often is too slow to solve the immediate problem - this might be different for rolling-release distros, I never tried one of these) Regards, Jo From schernichkin at gmail.com Tue Mar 19 21:58:07 2019 From: schernichkin at gmail.com (=?UTF-8?B?0KHRgtCw0L3QuNGB0LvQsNCyINCn0LXRgNC90LjRh9C60LjQvQ==?=) Date: Wed, 20 Mar 2019 00:58:07 +0300 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary; ) Message-ID: I was investigating Haskell’s binary serialization libraries and has found that lot of them (binary, cereal and protocol-buffers) has pretty similar design. They capable to consume lazy bytestrings and either produce result, report error or produce partial result, indicating that more input data required. Such design leads to quite complex readers, and often ends up with producing closures in runtime. 
If you take a close look at binary’s microbenches (binary in general bit faster than cereal, so I focused on binary) you will discover, that it triggers GC even during simple tasks, such as reading a bunch of Word64 and summing results. And it relatively slow. Binary’s performance limited to hundreds of megabytes per second even in simple scenarios, but actual memory bandwidth should allow us process gigabytes per second. So, I decided to try a different approach. My approach is based on two assumptions so if it does not suit your case it obliviously will not work for you. First – when dealing with binary data we always know exactly how many bytes we want to read. This happens because we either read a fixed size primitive type or length-prefixed variable size data (possibly not “prefixed”, but you always know expected message length from somewhere). Second – chunks of data are continuous. I.e. we working with strict bytestrings more often. Even when you processing chunked stream of data with unknown number of messages (such as Mesos RecordIO Format) you should always be able to allocate continuous memory buffer for a single entity you want to deserialize. If it’s ok for you lets go on. First thing I’ve created was so called static reader. Static – because reader knows how mush bytes it wants to consume at compile time. Such readers suitable for reading combinations of primitive values. Here the implementation https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Static.hs Static reader is somewhat monadic (you can use do-syntax with RebindableSyntax extension like here https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Lev/Reader/Static.hs ) but it actually indexed monad and it unfortunately incompatible with traditional monads. Offsets and sizes are calculates at type level during the compile time and according to microbences static reader does not introduced any overhead compared to hand-written reader using the low-level indexInt64OffAddr# function ( https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Handwritten.hs ) Next thing was a dynamic reader. Dynamic readers are composed from static readers, they are fully monadic and allow you to read length-prefixed data structures (you read length with wrapped static reader and then perform normal monadic binding to supply a reader which reads a value). Here the implementation https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Dynamic.hs I’ve discovered, that typeclasses are surprisingly fast. So, it’s completely ok to have something like “(Cursor c) => … Reader” where cursor contains functions for overflow checking and memory buffer progression. I was not able to achieve same performance when I passed buffer callbacks explicitly, possibly because of inlining of specialization magic. So, I've created ByteStringCursor ( https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/ByteString.hs ) which allows to read strict bytestrings. I suppose that cursors introduce flexibility: you may read strict bytestring or you may define custom cursor for reading custom memory buffers. Finally, I’ve written some microbenches for my cases and also ported one microbench from binary. And my implementation was able to run 10x-15x faster than binary on strict bytestring. Moreover, it does not trigger GC during the run at all! 
You may find the project and all the benches here https://github.com/PROTEINE-INSAIDERS/lev-tolstoy So, what's the purpose of my message? It’s quite unlikely that I’ll release my code on the Hackage. This is because I’m a hobbyist Haskell developer and doing it all in my free time. But before mr. Putin will close Internet in Russia I’d like to demonstrate that it’s possible to write very fast binary deserializer taking in account several assumptions which seems quite reasonably for me. So, feel free to borrow the ideas, or share the ideas, if you know how it could be implemented better. -- Sincerely, Stanislav Chernichkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clintonmead at gmail.com Tue Mar 19 23:55:40 2019 From: clintonmead at gmail.com (Clinton Mead) Date: Wed, 20 Mar 2019 10:55:40 +1100 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary; ) In-Reply-To: References: Message-ID: Sounds fascinating. I don't currently have a strong need for a fast binary serialiser but your explorations inside the GHC optimiser sound very interesting. If you can get around to doing a blog/presentation on your explorations and techniques I'll sure be interested and I'm sure would be many in the Haskell community. On Wed, Mar 20, 2019 at 8:58 AM Станислав Черничкин wrote: > I was investigating Haskell’s binary serialization libraries and has found > that lot of them (binary, cereal and protocol-buffers) has pretty similar > design. They capable to consume lazy bytestrings and either produce > result, report error or produce partial result, indicating that more input > data required. Such design leads to quite complex readers, and often ends > up with producing closures in runtime. If you take a close look at binary’s > microbenches (binary in general bit faster than cereal, so I focused on > binary) you will discover, that it triggers GC even during simple tasks, > such as reading a bunch of Word64 and summing results. And it relatively > slow. Binary’s performance limited to hundreds of megabytes per second even > in simple scenarios, but actual memory bandwidth should allow us process > gigabytes per second. So, I decided to try a different approach. > > > > My approach is based on two assumptions so if it does not suit your case > it obliviously will not work for you. > > > > First – when dealing with binary data we always know exactly how many > bytes we want to read. This happens because we either read a fixed size > primitive type or length-prefixed variable size data (possibly not > “prefixed”, but you always know expected message length from somewhere). > > > > Second – chunks of data are continuous. I.e. we working with strict > bytestrings more often. Even when you processing chunked stream of data > with unknown number of messages (such as Mesos RecordIO Format) you > should always be able to allocate continuous memory buffer for a single > entity you want to deserialize. > > > > If it’s ok for you lets go on. First thing I’ve created was so called > static reader. Static – because reader knows how mush bytes it wants to > consume at compile time. Such readers suitable for reading combinations of > primitive values. 
Here the implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Static.hs Static > reader is somewhat monadic (you can use do-syntax with RebindableSyntax extension > like here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Lev/Reader/Static.hs ) > but it actually indexed monad and it unfortunately incompatible with > traditional monads. Offsets and sizes are calculates at type level during > the compile time and according to microbences static reader does not > introduced any overhead compared to hand-written reader using the > low-level indexInt64OffAddr# function ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Handwritten.hs > ) > > > > Next thing was a dynamic reader. Dynamic readers are composed from static > readers, they are fully monadic and allow you to read length-prefixed data > structures (you read length with wrapped static reader and then perform > normal monadic binding to supply a reader which reads a value). Here the > implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Dynamic.hs > > > > > I’ve discovered, that typeclasses are surprisingly fast. So, it’s > completely ok to have something like “(Cursor c) => … Reader” where cursor > contains functions for overflow checking and memory buffer progression. I > was not able to achieve same performance when I passed buffer callbacks explicitly, > possibly because of inlining of specialization magic. So, I've created > ByteStringCursor ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/ByteString.hs ) > which allows to read strict bytestrings. I suppose that cursors introduce > flexibility: you may read strict bytestring or you may define custom > cursor for reading custom memory buffers. > > > > Finally, I’ve written some microbenches for my cases and also ported one > microbench from binary. And my implementation was able to run 10x-15x > faster than binary on strict bytestring. Moreover, it does not trigger GC > during the run at all! You may find the project and all the benches here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy > > > > So, what's the purpose of my message? It’s quite unlikely that I’ll > release my code on the Hackage. This is because I’m a hobbyist Haskell > developer and doing it all in my free time. But before mr. Putin will > close Internet in Russia I’d like to demonstrate that it’s possible to write > very fast binary deserializer taking in account several assumptions which > seems quite reasonably for me. So, feel free to borrow the ideas, or share > the ideas, if you know how it could be implemented better. > > -- > Sincerely, Stanislav Chernichkin. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.reich at gmail.com Wed Mar 20 00:42:45 2019 From: ryan.reich at gmail.com (Ryan Reich) Date: Tue, 19 Mar 2019 17:42:45 -0700 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary; ) In-Reply-To: References: Message-ID: I will enjoy learning more about optimization in Haskell from this repo. Thanks for writing (it and your email)! 
Ryan Reich On Tue, Mar 19, 2019, 14:58 Станислав Черничкин wrote: > I was investigating Haskell’s binary serialization libraries and has found > that lot of them (binary, cereal and protocol-buffers) has pretty similar > design. They capable to consume lazy bytestrings and either produce > result, report error or produce partial result, indicating that more input > data required. Such design leads to quite complex readers, and often ends > up with producing closures in runtime. If you take a close look at binary’s > microbenches (binary in general bit faster than cereal, so I focused on > binary) you will discover, that it triggers GC even during simple tasks, > such as reading a bunch of Word64 and summing results. And it relatively > slow. Binary’s performance limited to hundreds of megabytes per second even > in simple scenarios, but actual memory bandwidth should allow us process > gigabytes per second. So, I decided to try a different approach. > > > > My approach is based on two assumptions so if it does not suit your case > it obliviously will not work for you. > > > > First – when dealing with binary data we always know exactly how many > bytes we want to read. This happens because we either read a fixed size > primitive type or length-prefixed variable size data (possibly not > “prefixed”, but you always know expected message length from somewhere). > > > > Second – chunks of data are continuous. I.e. we working with strict > bytestrings more often. Even when you processing chunked stream of data > with unknown number of messages (such as Mesos RecordIO Format) you > should always be able to allocate continuous memory buffer for a single > entity you want to deserialize. > > > > If it’s ok for you lets go on. First thing I’ve created was so called > static reader. Static – because reader knows how mush bytes it wants to > consume at compile time. Such readers suitable for reading combinations of > primitive values. Here the implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Static.hs Static > reader is somewhat monadic (you can use do-syntax with RebindableSyntax extension > like here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Lev/Reader/Static.hs ) > but it actually indexed monad and it unfortunately incompatible with > traditional monads. Offsets and sizes are calculates at type level during > the compile time and according to microbences static reader does not > introduced any overhead compared to hand-written reader using the > low-level indexInt64OffAddr# function ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Handwritten.hs > ) > > > > Next thing was a dynamic reader. Dynamic readers are composed from static > readers, they are fully monadic and allow you to read length-prefixed data > structures (you read length with wrapped static reader and then perform > normal monadic binding to supply a reader which reads a value). Here the > implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Dynamic.hs > > > > > I’ve discovered, that typeclasses are surprisingly fast. So, it’s > completely ok to have something like “(Cursor c) => … Reader” where cursor > contains functions for overflow checking and memory buffer progression. I > was not able to achieve same performance when I passed buffer callbacks explicitly, > possibly because of inlining of specialization magic. 
So, I've created > ByteStringCursor ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/ByteString.hs ) > which allows to read strict bytestrings. I suppose that cursors introduce > flexibility: you may read strict bytestring or you may define custom > cursor for reading custom memory buffers. > > > > Finally, I’ve written some microbenches for my cases and also ported one > microbench from binary. And my implementation was able to run 10x-15x > faster than binary on strict bytestring. Moreover, it does not trigger GC > during the run at all! You may find the project and all the benches here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy > > > > So, what's the purpose of my message? It’s quite unlikely that I’ll > release my code on the Hackage. This is because I’m a hobbyist Haskell > developer and doing it all in my free time. But before mr. Putin will > close Internet in Russia I’d like to demonstrate that it’s possible to write > very fast binary deserializer taking in account several assumptions which > seems quite reasonably for me. So, feel free to borrow the ideas, or share > the ideas, if you know how it could be implemented better. > > -- > Sincerely, Stanislav Chernichkin. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Wed Mar 20 05:28:39 2019 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 20 Mar 2019 06:28:39 +0100 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary; ) In-Reply-To: References: Message-ID: Am 19.03.19 um 22:58 schrieb Станислав Черничкин: > possibly because of inlining of > specialization magic. I have seen people validate this kind of assumption, by looking at the various intermediate representations of code. If you're after performance, you may want to do that. You may find that things work differently than expected, with the potential for becoming a bottleneck when stuff is scaled up (more varied datatypes in the input stream, or more complex target datatypes, stuff hidden behind "oh I don't need GC yet", that kind of finding). Regards, Jo From drkoster at qq.com Wed Mar 20 06:20:08 2019 From: drkoster at qq.com (=?gb18030?B?RHIuS29zdGVy?=) Date: Wed, 20 Mar 2019 14:20:08 +0800 Subject: [Haskell-cafe] =?gb18030?q?Re=A3=BA__Yet_another_binary_reader_?= =?gb18030?q?=2810x_faster_thanHackage=27s_binary=3B_=29?= Message-ID: These two assumptions are basically the most challenging issues stop people from getting more performance, e.g. If somehow you want to support incremental parsing, a primitive like `ensureN` probably need to do something like ``` ensureN n = Parser $ \ buf ... k -> if n < length buf then k buf ... else partial buf ... k ``` The k appear twice inside `ensureN` 's both branch, which will stop parsers from getting inline sooner or later since inlining k into both branch will produce exponential amount of code, so instead GHC will allocate k and make a jump, the best we can get is probably to try make it a joint point, so no allocation is needed, but that's the best you can get from argument passing buffers. 
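To make the shape of that problem concrete, here is a small self-contained sketch of the kind of CPS parser being described; the Parser and Result types are invented for illustration and are not taken from any particular library. The success continuation k is needed in both branches of ensureN, so GHC cannot inline a large downstream parser into both without duplicating code; at best the shared continuation is compiled as a join point (a local jump target) rather than an allocated closure.

```haskell
{-# LANGUAGE RankNTypes #-}
module EnsureNSketch where

import qualified Data.ByteString as B

-- Result of running a parser: finished, needs more input, or failed.
data Result r
  = Done B.ByteString r                 -- leftover input and the value
  | Partial (B.ByteString -> Result r)  -- resume when more input arrives
  | Fail String

-- A CPS parser: it is given the current buffer and a success continuation.
newtype Parser a = Parser
  { runParser :: forall r.
      B.ByteString -> (B.ByteString -> a -> Result r) -> Result r }

-- Ensure at least @n@ bytes are available before continuing.  Note that the
-- continuation @k@ occurs in both branches: once applied directly, and once
-- captured by the closure handed back in 'Partial'.
ensureN :: Int -> Parser ()
ensureN n = Parser $ \buf k ->
  if n <= B.length buf
    then k buf ()
    else Partial $ \more -> runParser (ensureN n) (B.append buf more) k
```

Dropping the Partial constructor, i.e. assuming the whole input is already in one strict buffer as the original post does, removes the second occurrence of k, which is part of why the non-incremental design is easier for GHC to optimise.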
Regards Han Dong ------------------ 原始邮件 ------------------ 发件人: "Joachim Durchholz"; 发送时间: 2019年3月20日(星期三) 中午1:28 收件人: "haskell-cafe"; 主题: Re: [Haskell-cafe] Yet another binary reader (10x faster thanHackage's binary; ) Am 19.03.19 um 22:58 schrieb Станислав Черничкин: > possibly because of inlining of > specialization magic. I have seen people validate this kind of assumption, by looking at the various intermediate representations of code. If you're after performance, you may want to do that. You may find that things work differently than expected, with the potential for becoming a bottleneck when stuff is scaled up (more varied datatypes in the input stream, or more complex target datatypes, stuff hidden behind "oh I don't need GC yet", that kind of finding). Regards, Jo _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.kolera at gmail.com Wed Mar 20 06:45:57 2019 From: ben.kolera at gmail.com (Ben Kolera) Date: Wed, 20 Mar 2019 16:45:57 +1000 Subject: [Haskell-cafe] Data.Functor.Compose Combinator Message-ID: Hi, I was wondering if this function exists somewhere (i'm not attached to the name): (<<$>>) :: (Functor f) => (a -> g b) -> f a -> Compose f g b (<<$>>) mkG = Compose . fmap mkG It's handy if you're working with an applicative (e.g Reflex.Dynamic) and want to layer on top a validation type function that returns a different applicative that need composing. So I figure that it must exist somewhere! :) E.g. data Person = Person Text Email nameTextDyn :: Dynamic t Text emailTextDyn :: Dynamic t Text validEmail :: Text -> Validation (NonEmpty Text) Email validPersonDyn :: Dynamic t (Validation (NonEmpty Text) Person = getCompose $ Person <$> (pure <<$>> name) <*> (validEmail <<$>> emailTextDyn) Cheers, Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From schernichkin at gmail.com Wed Mar 20 13:23:02 2019 From: schernichkin at gmail.com (=?UTF-8?B?0KHRgtCw0L3QuNGB0LvQsNCyINCn0LXRgNC90LjRh9C60LjQvQ==?=) Date: Wed, 20 Mar 2019 16:23:02 +0300 Subject: [Haskell-cafe] =?utf-8?q?Re=EF=BC=9A_Yet_another_binary_reader_?= =?utf-8?q?=2810x_faster_thanHackage=27s_binary=3B_=29?= In-Reply-To: References: Message-ID: >These two assumptions are basically the most challenging issues stop people from getting more performance I guess so, that is why I've decided to introduce it. Actually this design was inspired by my job task. I'm working on graph processing library written in Scala, which stores large amount of data in of-heap memory in binary format. I want to read this data. Storable is not good enough because there is a variable-length data types as well. On other hand all always available loaded in memory and no incremental parsing required. So, my approach is something between Storable and fully featured serializes such as binary/cereal. ср, 20 мар. 2019 г. в 09:20, Dr.Koster : > These two assumptions are basically the most challenging issues stop > people from getting more performance, e.g. If somehow you want to support > incremental parsing, a primitive like `ensureN` probably need to do > something like > > ``` > ensureN n = Parser $ \ buf ... k -> > if n < length buf then k buf ... > else partial buf ... 
k > ``` > > The k appear twice inside `ensureN` 's both branch, which will stop > parsers from getting inline sooner or later since inlining k into both > branch will produce exponential amount of code, so instead GHC will > allocate k and make a jump, the best we can get is probably to try make it > a joint point, so no allocation is needed, but that's the best you can get > from argument passing buffers. > > Regards > Han Dong > > > ------------------ 原始邮件 ------------------ > *发件人:* "Joachim Durchholz"; > *发送时间:* 2019年3月20日(星期三) 中午1:28 > *收件人:* "haskell-cafe"; > *主题:* Re: [Haskell-cafe] Yet another binary reader (10x faster > thanHackage's binary; ) > > Am 19.03.19 um 22:58 schrieb Станислав Черничкин: > > possibly because of inlining of > > specialization magic. > > I have seen people validate this kind of assumption, by looking at the > various intermediate representations of code. > If you're after performance, you may want to do that. You may find that > things work differently than expected, with the potential for becoming a > bottleneck when stuff is scaled up (more varied datatypes in the input > stream, or more complex target datatypes, stuff hidden behind "oh I > don't need GC yet", that kind of finding). > > Regards, > Jo > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Sincerely, Stanislav Chernichkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From olf at aatal-apotheke.de Wed Mar 20 19:35:04 2019 From: olf at aatal-apotheke.de (Olaf Klinke) Date: Wed, 20 Mar 2019 20:35:04 +0100 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary) Message-ID: <73E05D15-BF23-4DEC-AE3F-BBEE2C576A10@aatal-apotheke.de> My (admittedly limited) experience with parsers, of which deserializers are a special case, is that with complicated data structures the majority of computing time is spent transforming the raw data (i.e. chunks of bytes) into meaningful information such as Doubles, Timestamps, tree structures and so on. Of course you could argue that such conversions are not the scope of the deserializer and that these operations must be performed regardless which library one chooses. Nevertheless it somewhat relativizes the importance of the choice of serializing library. Your benchmarks do not go beyond adding integers as far as I can see, which is a relatively cheap operation. I'd be interested in a benchmark where e.g. a huge Set or an array of custom records are used. You seem to work in bioinformatics. There are plenty of examples from this field, e.g. genomic annotations. Cheers, Olaf From schernichkin at gmail.com Wed Mar 20 20:58:14 2019 From: schernichkin at gmail.com (=?UTF-8?B?0KHRgtCw0L3QuNGB0LvQsNCyINCn0LXRgNC90LjRh9C60LjQvQ==?=) Date: Wed, 20 Mar 2019 23:58:14 +0300 Subject: [Haskell-cafe] Yet another binary reader (10x faster than Hackage's binary; ) In-Reply-To: References: Message-ID: Recently I found http://hackage.haskell.org/package/store which I missed during my previous investigation. 
It based on same principles and runs at same speed as my prototype. And it ready-to-use. It seems I just reinvented what was done before by FP Complete :) ср, 20 мар. 2019 г. в 00:58, Станислав Черничкин : > I was investigating Haskell’s binary serialization libraries and has found > that lot of them (binary, cereal and protocol-buffers) has pretty similar > design. They capable to consume lazy bytestrings and either produce > result, report error or produce partial result, indicating that more input > data required. Such design leads to quite complex readers, and often ends > up with producing closures in runtime. If you take a close look at binary’s > microbenches (binary in general bit faster than cereal, so I focused on > binary) you will discover, that it triggers GC even during simple tasks, > such as reading a bunch of Word64 and summing results. And it relatively > slow. Binary’s performance limited to hundreds of megabytes per second even > in simple scenarios, but actual memory bandwidth should allow us process > gigabytes per second. So, I decided to try a different approach. > > > > My approach is based on two assumptions so if it does not suit your case > it obliviously will not work for you. > > > > First – when dealing with binary data we always know exactly how many > bytes we want to read. This happens because we either read a fixed size > primitive type or length-prefixed variable size data (possibly not > “prefixed”, but you always know expected message length from somewhere). > > > > Second – chunks of data are continuous. I.e. we working with strict > bytestrings more often. Even when you processing chunked stream of data > with unknown number of messages (such as Mesos RecordIO Format) you > should always be able to allocate continuous memory buffer for a single > entity you want to deserialize. > > > > If it’s ok for you lets go on. First thing I’ve created was so called > static reader. Static – because reader knows how mush bytes it wants to > consume at compile time. Such readers suitable for reading combinations of > primitive values. Here the implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Static.hs Static > reader is somewhat monadic (you can use do-syntax with RebindableSyntax extension > like here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Lev/Reader/Static.hs ) > but it actually indexed monad and it unfortunately incompatible with > traditional monads. Offsets and sizes are calculates at type level during > the compile time and according to microbences static reader does not > introduced any overhead compared to hand-written reader using the > low-level indexInt64OffAddr# function ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/bench/Bench/Handwritten.hs > ) > > > > Next thing was a dynamic reader. Dynamic readers are composed from static > readers, they are fully monadic and allow you to read length-prefixed data > structures (you read length with wrapped static reader and then perform > normal monadic binding to supply a reader which reads a value). Here the > implementation > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/Dynamic.hs > > > > > I’ve discovered, that typeclasses are surprisingly fast. So, it’s > completely ok to have something like “(Cursor c) => … Reader” where cursor > contains functions for overflow checking and memory buffer progression. 
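As an editorial illustration of what such a Cursor class might look like (hypothetical names, not the library's actual definition): the class supplies exactly the two things a reader needs, an overflow check and a way to advance through the buffer.

```haskell
module CursorSketch where

import qualified Data.ByteString as B
import qualified Data.ByteString.Unsafe as BU
import Data.Word (Word8)

-- A cursor abstracts over where the bytes come from.
class Cursor c where
  ensureBytes :: Int -> c -> Either String c  -- fail unless n more bytes remain
  nextByte    :: c -> (Word8, c)              -- read one byte and advance

-- A cursor over a strict ByteString: current offset plus the buffer.
data BSCursor = BSCursor !Int !B.ByteString

instance Cursor BSCursor where
  ensureBytes n cur@(BSCursor off bs)
    | off + n <= B.length bs = Right cur
    | otherwise              = Left "not enough input"
  nextByte (BSCursor off bs) =
    (BU.unsafeIndex bs off, BSCursor (off + 1) bs)
```

Because the reader is written against the class, GHC can specialise it per cursor type instead of passing a record of callbacks around at runtime, which matches the observation above that the typeclass version optimised better.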
I > was not able to achieve same performance when I passed buffer callbacks explicitly, > possibly because of inlining of specialization magic. So, I've created > ByteStringCursor ( > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy/blob/master/src/Lev/Reader/ByteString.hs ) > which allows to read strict bytestrings. I suppose that cursors introduce > flexibility: you may read strict bytestring or you may define custom > cursor for reading custom memory buffers. > > > > Finally, I’ve written some microbenches for my cases and also ported one > microbench from binary. And my implementation was able to run 10x-15x > faster than binary on strict bytestring. Moreover, it does not trigger GC > during the run at all! You may find the project and all the benches here > https://github.com/PROTEINE-INSAIDERS/lev-tolstoy > > > > So, what's the purpose of my message? It’s quite unlikely that I’ll > release my code on the Hackage. This is because I’m a hobbyist Haskell > developer and doing it all in my free time. But before mr. Putin will > close Internet in Russia I’d like to demonstrate that it’s possible to write > very fast binary deserializer taking in account several assumptions which > seems quite reasonably for me. So, feel free to borrow the ideas, or share > the ideas, if you know how it could be implemented better. > > -- > Sincerely, Stanislav Chernichkin. > -- Sincerely, Stanislav Chernichkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanessa.mchale at iohk.io Thu Mar 21 02:22:03 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Wed, 20 Mar 2019 21:22:03 -0500 Subject: [Haskell-cafe] [ANN] libarchive Message-ID: <8f1cb18c-b4fd-f03e-4219-42b690027e6a@iohk.io> Hi all, I just released libarchive (http://hackage.haskell.org/package/libarchive), which can be used to create and unpack tar archives as well as several other formats. It has a reasonably complete high-level API and full bindings to the C library. Cheers, Vanessa McHale -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From shiv369shiv at gmail.com Thu Mar 21 09:37:14 2019 From: shiv369shiv at gmail.com (Shiv) Date: Thu, 21 Mar 2019 15:07:14 +0530 Subject: [Haskell-cafe] More graph algorithms for Alga & Streaming JSON/YAML parser Message-ID: Dear mentors, I am Shiv Pratap Singh from IIIT-Allahabad, India. I am currently in 4th year and Computer Science Undergraduate. After lot of brainstorming and carefully choosing the project idea based on my skills/expertise/experience, I am highly interested in project idea mentioned in Subject of the mail. As the proposal submission date is very close, I am keen to discuss the idea in depth so that various subtasks can be finalized .If any prior contribution/patch is requirement for applying, please let me know. Please mention some resources links in order to gain insights on project. Hoping to work together! Thank you, Shiv -------------- next part -------------- An HTML attachment was scrubbed... URL: From shiv369shiv at gmail.com Thu Mar 21 19:12:41 2019 From: shiv369shiv at gmail.com (Shiv) Date: Fri, 22 Mar 2019 00:42:41 +0530 Subject: [Haskell-cafe] More graph algorithms for Alga & Streaming JSON/YAML parser In-Reply-To: References: Message-ID: Any help? please on Graph algorithms On Thu, 21 Mar 2019 at 15:07, Shiv wrote: > Dear mentors, > > I am Shiv Pratap Singh from IIIT-Allahabad, India. 
I am currently in > 4th year and Computer Science Undergraduate. After lot of brainstorming and > carefully choosing the project idea based on my > skills/expertise/experience, I am highly interested in project idea > mentioned in Subject of the mail. > > As the proposal submission date is very close, I am keen to discuss the > idea in depth so that various subtasks can be finalized .If any prior > contribution/patch is requirement for applying, please let me know. > > Please mention some resources links in order to gain insights on project. > > Hoping to work together! > > Thank you, > > Shiv > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aeroboy94 at gmail.com Fri Mar 22 07:56:12 2019 From: aeroboy94 at gmail.com (Arian van Putten) Date: Fri, 22 Mar 2019 08:56:12 +0100 Subject: [Haskell-cafe] More graph algorithms for Alga & Streaming JSON/YAML parser In-Reply-To: References: Message-ID: Hey shiv, I'm not from the Summer of Code Committee, But as far as I understand registration for these projects will open on March 25 on https://summerofcode.withgoogle.com Where you can register. More information can be found on https://summer.haskell.org And specific inquiries are probably better to be asked to the emails on https://summer.haskell.org/contact.html instead of Haskell cafe. I hope that helps and hope you will have a fun summer of code! On Thu, Mar 21, 2019, 20:13 Shiv wrote: > Any help? please on Graph algorithms > > On Thu, 21 Mar 2019 at 15:07, Shiv wrote: > >> Dear mentors, >> >> I am Shiv Pratap Singh from IIIT-Allahabad, India. I am currently in >> 4th year and Computer Science Undergraduate. After lot of brainstorming and >> carefully choosing the project idea based on my >> skills/expertise/experience, I am highly interested in project idea >> mentioned in Subject of the mail. >> >> As the proposal submission date is very close, I am keen to discuss the >> idea in depth so that various subtasks can be finalized .If any prior >> contribution/patch is requirement for applying, please let me know. >> >> Please mention some resources links in order to gain insights on project. >> >> Hoping to work together! >> >> Thank you, >> >> Shiv >> > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffbrown.the at gmail.com Fri Mar 22 18:25:00 2019 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Fri, 22 Mar 2019 13:25:00 -0500 Subject: [Haskell-cafe] RedMonk ranks Haskell among top 20 most popular programming languages Message-ID: based on GitHub and StackOverflow. https://redmonk.com/sogrady/2019/03/20/language-rankings-1-19/ -- Jeff Brown | Jeffrey Benjamin Brown Website | Facebook | LinkedIn (spammy, so I often miss messages here) | Github -------------- next part -------------- An HTML attachment was scrubbed... URL: From scooter.phd at gmail.com Fri Mar 22 22:21:11 2019 From: scooter.phd at gmail.com (Scott Michel) Date: Fri, 22 Mar 2019 18:21:11 -0400 Subject: [Haskell-cafe] sublime-haskell and hsdev In-Reply-To: References: Message-ID: <5c955fd5.1c69fb81.d3996.3963@mx.google.com> Dennis: I would suggest installing LSP and haskell-ide-engine. It works very reliably under Unix and MacOS. 
I’m personally having some issues with it under Windows 10 Home Edition. ‘stack’, in particular, will bail out because it tries to recursively delete a directory, but the directory does not appear to be empty. But, if you restart the build process again, it will succeed. I’m just not sure why I don’t get much feedback from hie-8.6.4, even though it’s running… ‘hsdev’ was good until the developer went off and bumped the major version number without a lot of notice. ‘hsdev-3’ had a few build issues as well, IIRC. V/R -scooter Sent from Mail for Windows 10 From: Dennis Raddle Sent: Monday, March 18, 2019 9:59 PM To: haskell-cafe Subject: [Haskell-cafe] sublime-haskell and hsdev I'm just now learning Sublime Text 3 and trying to install sublime-haskell. It requires me to build hsdev, which I did with "cabal install hsdev". However the latter operation installs version > 3 of hsdev, while sublime-haskell requires < 3. So first question, how do I build and install an earlier version of hsdev? Also, I'm not clear if sublime-haskell is integrated with stack. I use stack for everything and I've been using Intero up til now in Emacs. The docs mention stack but seem to be clearer about cabal. D -------------- next part -------------- An HTML attachment was scrubbed... URL: From leah at vuxu.org Sun Mar 24 17:08:50 2019 From: leah at vuxu.org (Leah Neukirchen) Date: Sun, 24 Mar 2019 18:08:50 +0100 Subject: [Haskell-cafe] Munich Haskell Meeting, 2019-03-27 @ 19:30 Message-ID: <87h8bsb3od.fsf@vuxu.org> Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Wednesday, March 27 at Rumpler at 19h30. For details see here: http://muenchen.haskell.bayern/dates.html If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-mar-2019/ Everybody is welcome! cu & see you perhaps tomorrow at Munich Lambda meetup, -- Leah Neukirchen http://leahneukirchen.org/ From ionut.g.stan at gmail.com Mon Mar 25 08:25:37 2019 From: ionut.g.stan at gmail.com (=?UTF-8?Q?Ionu=c8=9b_G=2e_Stan?=) Date: Mon, 25 Mar 2019 10:25:37 +0200 Subject: [Haskell-cafe] Munich Haskell Meeting, 2019-03-27 @ 19:30 In-Reply-To: <87h8bsb3od.fsf@vuxu.org> References: <87h8bsb3od.fsf@vuxu.org> Message-ID: <8c8d445c-d98e-d5b9-c571-5bfeb1e34faf@gmail.com> Hi, Leah! I'm in Munich for a few days, but my German is very weak. Do people feel comfortable talking in English at these events? I'm tempted to join to get a feel of the local community and see what people are working on here in Munich. Cheers! On 24/03/2019 19:08, Leah Neukirchen wrote: > Dear all, > > Next week, our monthly Munich Haskell Meeting will take place again on > Wednesday, March 27 at Rumpler at 19h30. For details see here: > > http://muenchen.haskell.bayern/dates.html > > If you plan to join, please add yourself to this dudle so we can > reserve enough seats! It is OK to add yourself to the dudle > anonymously or pseudonymously. > > https://dudle.inf.tu-dresden.de/haskell-munich-mar-2019/ > > Everybody is welcome! > > cu & see you perhaps tomorrow at Munich Lambda meetup, > -- Ionuț G. 
Stan | http://igstan.ro | http://bucharestfp.ro From alex at slab.org Mon Mar 25 15:53:02 2019 From: alex at slab.org (Alex McLean) Date: Mon, 25 Mar 2019 16:53:02 +0100 Subject: [Haskell-cafe] Munich Haskell Meeting, 2019-03-27 @ 19:30 In-Reply-To: <8c8d445c-d98e-d5b9-c571-5bfeb1e34faf@gmail.com> References: <87h8bsb3od.fsf@vuxu.org> <8c8d445c-d98e-d5b9-c571-5bfeb1e34faf@gmail.com> Message-ID: I'm giving a talk tonight in Munich in English (my German is non-existent sadly!) https://www.meetup.com/Munich-Lambda/events/259261769/ I've been along to a Haskell Munich meetup before as well and was made to feel welcome! take care On Mon, 25 Mar 2019 at 09:26, Ionuț G. Stan wrote: > > Hi, Leah! > > I'm in Munich for a few days, but my German is very weak. Do people feel > comfortable talking in English at these events? I'm tempted to join to > get a feel of the local community and see what people are working on > here in Munich. > > Cheers! > > On 24/03/2019 19:08, Leah Neukirchen wrote: > > Dear all, > > > > Next week, our monthly Munich Haskell Meeting will take place again on > > Wednesday, March 27 at Rumpler at 19h30. For details see here: > > > > http://muenchen.haskell.bayern/dates.html > > > > If you plan to join, please add yourself to this dudle so we can > > reserve enough seats! It is OK to add yourself to the dudle > > anonymously or pseudonymously. > > > > https://dudle.inf.tu-dresden.de/haskell-munich-mar-2019/ > > > > Everybody is welcome! > > > > cu & see you perhaps tomorrow at Munich Lambda meetup, > > > > -- > Ionuț G. Stan | http://igstan.ro | http://bucharestfp.ro > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- blog: http://slab.org/ From m at jaspervdj.be Tue Mar 26 15:13:44 2019 From: m at jaspervdj.be (Jasper Van der Jeugt) Date: Tue, 26 Mar 2019 22:13:44 +0700 Subject: [Haskell-cafe] GSoC 2019 Student Applications now open Message-ID: <20190326151344.GB2632@kakigori> Hi all, We'd like to remind you that Google has opened student applications for Google Summer of Code 2019 [1]. Applications can be submitted through their dashboard [2]. If you are thinking of applying, there is a list of ideas submitted by our wonderful community [3] that can serve as inspiration. Of course, we also welcome other proposals! Please contact us [4] if you have any other question about the program. Warm regards Jasper Van der Jeugt on behalf of the Haskell.Org Committee [1]: https://summerofcode.withgoogle.com/ [2]: https://summerofcode.withgoogle.com/dashboard/ [3]: https://summer.haskell.org/ideas.html [4]: https://summer.haskell.org/contact.html From javran.c at gmail.com Wed Mar 27 07:16:33 2019 From: javran.c at gmail.com (Javran Cheng) Date: Wed, 27 Mar 2019 00:16:33 -0700 Subject: [Haskell-cafe] a bug in Read instance of Int? Message-ID: Hi Cafe, Not sure if there are existing discussions, but I recently run into a problem with Read instance of Int: Prelude> :set -XTypeApplications Prelude> reads @Int "123." [(123,".")] Prelude> reads @Int "123.aaa" [(123,".aaa")] Prelude> reads @Int "123.456aaa" [] -- I expected this to be [(123,".456aaa")] Prelude> reads @Double "123.234aaa" [(123.234,"aaa")] Further investigation shows that [realNumber]( http://hackage.haskell.org/package/base/docs/src/GHC.Read.html#readNumber) is used for Read instance of Int. 
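As an aside, for code that only needs the prefix-parsing behaviour shown in the GHCi session above, a tiny ReadP helper avoids the shared lexer entirely; intPrefix is a made-up name, not something from a library.

```haskell
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP (munch1, readP_to_S)

-- Parse only the leading run of digits as an Int, leaving the rest of the
-- input untouched.
--   ghci> intPrefix "123.456aaa"  ==>  [(123,".456aaa")]
intPrefix :: ReadS Int
intPrefix = readP_to_S (read <$> munch1 isDigit)
```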
I think what happened is that when the leading parser input can be parsed as a floating number, it will do so and commit to that decision, making backtracking impossible. I do understand that Read just need to be able to parse whatever Show can produce and is not designed to deal with raw inputs, but this is still a surprising behavior to me. When I'm using Text.ParserCombinators.ReadP, I really appreciates it that I can use `readP_to_S read` to parse simple values (integers and floating points in particular), but it bothers me that parsing Int from "123.aaa" is fine but "123.1aa" will fail simply because the not-yet-consumed part of input is different. Javran -------------- next part -------------- An HTML attachment was scrubbed... URL: From P.Achten at cs.ru.nl Wed Mar 27 08:48:48 2019 From: P.Achten at cs.ru.nl (Peter Achten) Date: Wed, 27 Mar 2019 09:48:48 +0100 Subject: [Haskell-cafe] [TFPIE'19] Call for papers: Trends in Functional Programming in Education 2019, 11 June 2019, Vancouver, BC, CA Message-ID: <86cdefa7-993b-a689-00cb-0b968de6f30f@cs.ru.nl> TFPIE 2019 Call for papers http://www.staff.science.uu.nl/~hage0101/tfpie2019/index.html (June 11th, University of British Columbia, Vancouver Canada, co-located with TFP 2019) TFPIE 2019 welcomes submissions describing techniques used in the classroom, tools used in and/or developed for the classroom and any creative use of functional programming (FP) to aid education in or outside Computer Science. Topics of interest include, but are not limited to:   FP and beginning CS students   FP and Computational Thinking   FP and Artificial Intelligence   FP in Robotics   FP and Music   Advanced FP for undergraduates   FP in graduate education   Engaging students in research using FP   FP in Programming Languages   FP in the high school curriculum   FP as a stepping stone to other CS topics   FP and Philosophy   The pedagogy of teaching FP   FP and e-learning: MOOCs, automated assessment etc.   Best Lectures Ð more details below In addition to papers, we are requesting best lecture presentations. What's your best lecture topic in an FP related course? Do you have a fun way to present FP concepts to novices or perhaps an especially interesting presentation of a difficult topic? In either case, please consider sharing it. Best lecture topics will be selected for presentation based on a short abstract describing the lecture and its interest to TFPIE attendees. The length of the presentation should be comparable to that of a paper. On top of the lecture itself, the presentation can also provide commentary on the lecture. Submissions Potential presenters are invited to submit an extended abstract (4-6 pages) or a draft paper (up to 16 pages) in EPTCS style. The authors of accepted presentations will have their preprints and their slides made available on the workshop's website. Papers and abstracts can be submitted via easychair at the following link: https://easychair.org/conferences/?conf=tfpie2019 After the workshop, presenters will be invited to submit (a revised version of) their article for review. The PC will select the best articles that will be published in the Electronic Proceedings in Theoretical Computer Science (EPTCS). Articles rejected for presentation and extended abstracts will not be formally reviewed by the PC. Dates Submission deadline:          May  14th 2019, Anywhere on Earth. 
Notification:                 May  20th Workshop:                     June 11th Submission for formal review: August 18th 2019, Anywhere on Earth Notification of full article: October 6th Camera ready:                 November 1st Program Committee Alex Gerdes           - University of Gothenburg / Chalmers Jurriaan Hage (Chair) - Utrecht University Pieter Koopman        - Radboud University, the Netherlands Elena Machkasova      - University of Minnesota, Morris, USA Heather Miller        - Carnegie Mellon University and EPFL Lausanne Prabhakar Ragde       - University of Waterloo, Waterloo, Ontario, Canada Simon Thompson        - University of Kent, UK Sharon Tuttle         - Humboldt State University, Arcata, USA Note: information on TFP is available at https://www.tfp2019.org/index.html From icfp.publicity at googlemail.com Thu Mar 28 02:45:25 2019 From: icfp.publicity at googlemail.com (Sam Tobin-Hochstadt) Date: Wed, 27 Mar 2019 22:45:25 -0400 Subject: [Haskell-cafe] Call for Tutorial Proposals: ICFP 2019 Message-ID: <5c9c3545571ff_79832ab973eda5c41006ee@homer.mail> CALL FOR TUTORIAL PROPOSALS ICFP 2019 24th ACM SIGPLAN International Conference on Functional Programming August 18 - 23, 2019 Berlin, Germany https://icfp19.sigplan.org/ The 24th ACM SIGPLAN International Conference on Functional Programming will be held in Berlin, Germany on August 18-23, 2019. ICFP provides a forum for researchers and developers to hear about the latest work on the design, implementations, principles, and uses of functional programming. Proposals are invited for tutorials, lasting approximately 3 hours each, to be presented during ICFP and its co-located workshops and other events. These tutorials are the successor to the CUFP tutorials from previous years, but we also welcome tutorials whose primary audience is researchers rather than practitioners. Tutorials may focus either on a concrete technology or on a theoretical or mathematical tool. Ideally, tutorials will have a concrete result, such as "Learn to do X with Y" rather than "Learn language Y". Tutorials may occur after ICFP co-located with the associated workshops, from August 22 till August 23. ---------------------------------------------------------------------- Submission details Deadline for submission: May 10th, 2019 Notification of acceptance: May 17th, 2019 Prospective organizers of tutorials are invited to submit a completed tutorial proposal form in plain text format to the ICFP 2018 workshop co-chairs (Jennifer Hackett and Christophe Scholliers), via email to icfp-workshops-2019 at googlegroups.com by May 10th, 2019. Please note that this is a firm deadline. Organizers will be notified if their event proposal is accepted by May 17, 2019. The proposal form is available at: http://www.icfpconference.org/icfp2019-files/icfp19-tutorials-form.txt ---------------------------------------------------------------------- Selection committee The proposals will be evaluated by a committee comprising the following members of the ICFP 2019 organizing committee. 
Tutorials Co-Chair: Jennifer Hackett (University of Nottingham) Tutorials Co-Chair: Christophe Scholliers (University of Ghent) General Chair: Derek Dreyer (MPI-SWS) Program Chair: François Pottier ( Inria, France) ---------------------------------------------------------------------- Further information Any queries should be addressed to the tutorial co-chairs ( Jennifer Hackett and Christophe Scholliers), via email to icfp-workshops-2019 at googlegroups.com From omeragacan at gmail.com Thu Mar 28 10:07:42 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 28 Mar 2019 13:07:42 +0300 Subject: [Haskell-cafe] a bug in Read instance of Int? In-Reply-To: References: Message-ID: Hi, I'm not sure if this is a bug, but I think the problem is in the lexer, not in the part that that tries to convert a single token to a value of the given type. Example: λ:1> lex "123" [("123","")] λ:2> lex "123.aaa" [("123",".aaa")] λ:3> lex "123.456aaa" [("123.456","aaa")] So in "123.aaa" you actually convert "123" to an Int, but in "123.456aaa" you try to convert "123.456" to an Int, which fails. Not sure how hard would it be to improve this (while keeping things standard-compliant) or whether it's worth the effort. Ömer Javran Cheng , 27 Mar 2019 Çar, 10:17 tarihinde şunu yazdı: > > Hi Cafe, > > Not sure if there are existing discussions, but I recently run into a problem with Read instance of Int: > > Prelude> :set -XTypeApplications > Prelude> reads @Int "123." > [(123,".")] > Prelude> reads @Int "123.aaa" > [(123,".aaa")] > Prelude> reads @Int "123.456aaa" > [] -- I expected this to be [(123,".456aaa")] > Prelude> reads @Double "123.234aaa" > [(123.234,"aaa")] > > Further investigation shows that [realNumber](http://hackage.haskell.org/package/base/docs/src/GHC.Read.html#readNumber) is used for Read instance of Int. > I think what happened is that when the leading parser input can be parsed as a floating number, it will do so and commit to that decision, making backtracking impossible. > > I do understand that Read just need to be able to parse whatever Show can produce and is not designed to deal with raw inputs, but this is still a surprising behavior to me. > When I'm using Text.ParserCombinators.ReadP, I really appreciates it that I can use `readP_to_S read` to parse simple values (integers and floating points in particular), > but it bothers me that parsing Int from "123.aaa" is fine but "123.1aa" will fail simply because the not-yet-consumed part of input is different. > > Javran > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From byorgey at gmail.com Thu Mar 28 11:09:21 2019 From: byorgey at gmail.com (Brent Yorgey) Date: Thu, 28 Mar 2019 06:09:21 -0500 Subject: [Haskell-cafe] a bug in Read instance of Int? In-Reply-To: References: Message-ID: The 2010 Haskell Report specifies that (x,"") is an element of (readsPrec d (showsPrec d x "")) Notice it does NOT say that (x,s) is an element of (readsPrec d (showsPrec d x s)) which would be false, as your example shows by setting x = 123 and s = ".456". read really isn't meant to be a general-purpose parser; it only works on sequences of lexical tokens. 
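The Report-level guarantee quoted above can be written down as a QuickCheck property; this is a sketch covering only the reads/shows case, i.e. precedence 0.

```haskell
import Test.QuickCheck (quickCheck)

-- The round trip the Report promises: showing a value and reading it back,
-- with nothing appended, recovers the value with empty leftover input.
prop_readShowRoundTrip :: Int -> Bool
prop_readShowRoundTrip x = (x, "") `elem` (reads (show x) :: [(Int, String)])

main :: IO ()
main = quickCheck prop_readShowRoundTrip
```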
As Ömer points out, reads @Int and reads @Double both depend on 'lex', and there is no way for 'lex' to know what kind of thing reads is ultimately trying to parse. If you changed lex so that lex "123.456aaa" returns [("123", ".456aaa")] then reads @Double would break. I suppose you could try to change lex so that lex "123.456aaa" returns [("123", ".456aaa"), ("123.456", "aaa")] but this seems like a lot of work for little benefit. If you are running into this kind of issue with Read instances, it strongly suggests to me that you should be using a proper parsing library rather than Read. -Brent On Thu, Mar 28, 2019 at 5:08 AM Ömer Sinan Ağacan wrote: > Hi, > > I'm not sure if this is a bug, but I think the problem is in the lexer, > not in > the part that that tries to convert a single token to a value of the given > type. > Example: > > λ:1> lex "123" > [("123","")] > > λ:2> lex "123.aaa" > [("123",".aaa")] > > λ:3> lex "123.456aaa" > [("123.456","aaa")] > > So in "123.aaa" you actually convert "123" to an Int, but in "123.456aaa" > you > try to convert "123.456" to an Int, which fails. > > Not sure how hard would it be to improve this (while keeping things > standard-compliant) or whether it's worth the effort. > > Ömer > > Javran Cheng , 27 Mar 2019 Çar, 10:17 tarihinde şunu > yazdı: > > > > Hi Cafe, > > > > Not sure if there are existing discussions, but I recently run into a > problem with Read instance of Int: > > > > Prelude> :set -XTypeApplications > > Prelude> reads @Int "123." > > [(123,".")] > > Prelude> reads @Int "123.aaa" > > [(123,".aaa")] > > Prelude> reads @Int "123.456aaa" > > [] -- I expected this to be [(123,".456aaa")] > > Prelude> reads @Double "123.234aaa" > > [(123.234,"aaa")] > > > > Further investigation shows that [realNumber]( > http://hackage.haskell.org/package/base/docs/src/GHC.Read.html#readNumber) > is used for Read instance of Int. > > I think what happened is that when the leading parser input can be > parsed as a floating number, it will do so and commit to that decision, > making backtracking impossible. > > > > I do understand that Read just need to be able to parse whatever Show > can produce and is not designed to deal with raw inputs, but this is still > a surprising behavior to me. > > When I'm using Text.ParserCombinators.ReadP, I really appreciates it > that I can use `readP_to_S read` to parse simple values (integers and > floating points in particular), > > but it bothers me that parsing Int from "123.aaa" is fine but "123.1aa" > will fail simply because the not-yet-consumed part of input is different. > > > > Javran > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From claude at mathr.co.uk Thu Mar 28 17:01:59 2019 From: claude at mathr.co.uk (Claude Heiland-Allen) Date: Thu, 28 Mar 2019 17:01:59 +0000 Subject: [Haskell-cafe] art exhibition in London, April 2019 Message-ID: <0dc0ed68-bc9e-730c-1dde-f5aa62660f0d@mathr.co.uk> Hi all, I have an art exhibition opening soon in London, UK. https://sonicelectronicsfestival.org/exhibition/ Featuring: - raytracing in curved space - badly-played tetris - generative techno - audiovisual sliding tile puzzle - interactive graph-directed IFS - zooming hybrid fractals Opening 11th April 2019 6pm, until 27th April, at Chalton Gallery, 96 Chalton Street, Camden, London NW1 1HJ, UK.  Check website for times. Two of the works are realized with Haskell: - "Wedged" implements aesthetic constraints to search tetris packings - "Hybrids" uses a formula compiler to generate C code for rendering fractals Curated by Laura Netz. Supported using public funding by the National Lottery through Arts Council England. Claude -- https://mathr.co.uk From vanessa.mchale at iohk.io Thu Mar 28 18:26:22 2019 From: vanessa.mchale at iohk.io (Vanessa McHale) Date: Thu, 28 Mar 2019 13:26:22 -0500 Subject: [Haskell-cafe] Resources/papers on lazy I/O Message-ID: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> Hi all, I recently finished up writing streaming facilities via libarchive bindings and lazy bytestrings. It ended up working nicely - reading from a file lazily and then unpacking the archive was more efficient in time + allocations than reading the file all at once. That got me thinking - what exactly is wrong with lazy I/O? I've seen Oleg Kiselyov's paper (http://okmij.org/ftp/Haskell/#lazyIO-not-True) and I've run into issues myself (basically the issue here: https://stackoverflow.com/questions/31342012/read-and-writing-to-file-in-haskell), but none of those seem so pathological - the second issue could be better resolved with linear types! Are the any explanations of why Haskell *does* use lazy I/O? Laziness allows symmetries between values and generators of values - surely it is not *that* immoral to enforce this even in the IO monad? Cheers, Vanessa McHale -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From jo at durchholz.org Thu Mar 28 18:43:16 2019 From: jo at durchholz.org (Joachim Durchholz) Date: Thu, 28 Mar 2019 19:43:16 +0100 Subject: [Haskell-cafe] Resources/papers on lazy I/O In-Reply-To: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> References: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> Message-ID: Am 28.03.19 um 19:26 schrieb Vanessa McHale: > That got me thinking - what exactly is wrong with lazy I/O? I've seen > Oleg Kiselyov's paper (http://okmij.org/ftp/Haskell/#lazyIO-not-True) > and I've run into issues myself (basically the issue here: > https://stackoverflow.com/questions/31342012/read-and-writing-to-file-in-haskell), > but none of those seem so pathological - the second issue could be > better resolved with linear types! As far as I recall, IO was the nasty loophole in Haskell initially, and after lots of ugly hackery and living with the pain, various approaches emerged. Today's IO system was one of the, and there were two contenders. 
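For readers wondering what the issues mentioned in the question look like in practice, the classic pitfall (a generic example, not tied to any poster's code) fits in a few lines: the handle is closed before the lazily returned contents are forced.

```haskell
import System.IO

-- hGetContents returns the file's contents lazily; closing the handle before
-- the string is forced silently truncates it, so 'contents' ends up empty.
main :: IO ()
main = do
  h <- openFile "input.txt" ReadMode
  contents <- hGetContents h
  hClose h                 -- the file has not actually been read yet
  putStrLn contents        -- too late: the unread remainder was discarded
```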
IO won because it was the first one to become useful, and while at least one of the others made it to the proof-of-concept stages, IO had started to mature already and the contender never managed to get much mind share. So... it's part historical accident, part "nobody had a better idea at the time". Which means that there could well be better alternatives, but it's unclear how to judge whether some alternative is really better, so even people with an interest in the field tend to research on other topics. Regards, Jo From ian at zenhack.net Thu Mar 28 18:51:50 2019 From: ian at zenhack.net (Ian Denhardt) Date: Thu, 28 Mar 2019 14:51:50 -0400 Subject: [Haskell-cafe] Resources/papers on lazy I/O In-Reply-To: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> References: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> Message-ID: <155379911032.7058.8544784724675684287@localhost.localdomain> The basic problem is just that it's error prone when you're doing things that are non-trivial wrt the lifetime of the file. Part of the original motivation for Haskell's "purity" was that lazy evaluation and side-effects are hard to think about. In languages that allow it, "don't mix laziness and effects" is a bit of common folk-wisdom. See e.g: https://stuartsierra.com/2015/08/25/clojure-donts-lazy-effects If I'm writing a simple program that just reads in some data, does stuff, and spits it back out, I usually don't stress about it. But for long running programs, file descriptor leaks can be a problem, and between that and needing to be async-exception safe, the usual resource management strategies in Haskell make the whole thing pretty dicey. Quoting Vanessa McHale (2019-03-28 14:26:22) > the second issue could be better resolved with linear types! How would this work? Not sure I follow. From javran.c at gmail.com Fri Mar 29 03:22:42 2019 From: javran.c at gmail.com (Javran Cheng) Date: Thu, 28 Mar 2019 20:22:42 -0700 Subject: [Haskell-cafe] a bug in Read instance of Int? In-Reply-To: References: Message-ID: Thanks for the responses! If Haskell report says so, it's fine to leave it as it is - I do have it in mind that Read isn't supposed to be used this way. To be fair, "(x,s) is an element of (readsPrec d (showsPrec d x s))" is indeed a stronger property than it needs to be. But still, it is worth taking a note as concerns when using `readS_to_P reads`. Cheers, Javran On Thu, Mar 28, 2019 at 4:09 AM Brent Yorgey wrote: > The 2010 Haskell Report specifies that > > (x,"") is an element of (readsPrec d (showsPrec d x "")) > > Notice it does NOT say that > > (x,s) is an element of (readsPrec d (showsPrec d x s)) > > which would be false, as your example shows by setting x = 123 and s = > ".456". > > read really isn't meant to be a general-purpose parser; it only works on > sequences of lexical tokens. As Ömer points out, reads @Int and reads > @Double both depend on 'lex', and there is no way for 'lex' to know what > kind of thing reads is ultimately trying to parse. If you changed lex so > that lex "123.456aaa" returns [("123", ".456aaa")] then reads @Double would > break. I suppose you could try to change lex so that lex "123.456aaa" > returns [("123", ".456aaa"), ("123.456", "aaa")] but this seems like a lot > of work for little benefit. If you are running into this kind of issue > with Read instances, it strongly suggests to me that you should be using a > proper parsing library rather than Read. 
> > -Brent > > On Thu, Mar 28, 2019 at 5:08 AM Ömer Sinan Ağacan > wrote: > >> Hi, >> >> I'm not sure if this is a bug, but I think the problem is in the lexer, >> not in >> the part that that tries to convert a single token to a value of the >> given type. >> Example: >> >> λ:1> lex "123" >> [("123","")] >> >> λ:2> lex "123.aaa" >> [("123",".aaa")] >> >> λ:3> lex "123.456aaa" >> [("123.456","aaa")] >> >> So in "123.aaa" you actually convert "123" to an Int, but in "123.456aaa" >> you >> try to convert "123.456" to an Int, which fails. >> >> Not sure how hard would it be to improve this (while keeping things >> standard-compliant) or whether it's worth the effort. >> >> Ömer >> >> Javran Cheng , 27 Mar 2019 Çar, 10:17 tarihinde şunu >> yazdı: >> > >> > Hi Cafe, >> > >> > Not sure if there are existing discussions, but I recently run into a >> problem with Read instance of Int: >> > >> > Prelude> :set -XTypeApplications >> > Prelude> reads @Int "123." >> > [(123,".")] >> > Prelude> reads @Int "123.aaa" >> > [(123,".aaa")] >> > Prelude> reads @Int "123.456aaa" >> > [] -- I expected this to be [(123,".456aaa")] >> > Prelude> reads @Double "123.234aaa" >> > [(123.234,"aaa")] >> > >> > Further investigation shows that [realNumber]( >> http://hackage.haskell.org/package/base/docs/src/GHC.Read.html#readNumber) >> is used for Read instance of Int. >> > I think what happened is that when the leading parser input can be >> parsed as a floating number, it will do so and commit to that decision, >> making backtracking impossible. >> > >> > I do understand that Read just need to be able to parse whatever Show >> can produce and is not designed to deal with raw inputs, but this is still >> a surprising behavior to me. >> > When I'm using Text.ParserCombinators.ReadP, I really appreciates it >> that I can use `readP_to_S read` to parse simple values (integers and >> floating points in particular), >> > but it bothers me that parsing Int from "123.aaa" is fine but "123.1aa" >> will fail simply because the not-yet-consumed part of input is different. >> > >> > Javran >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > To (un)subscribe, modify options or view archives go to: >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > Only members subscribed via the mailman list are allowed to post. >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > -- Javran (Fang) Cheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From brucker at spamfence.net Fri Mar 29 07:55:06 2019 From: brucker at spamfence.net (Achim D. Brucker) Date: Fri, 29 Mar 2019 07:55:06 +0000 Subject: [Haskell-cafe] Open Position: Lecturer in Cybersecurity - University of Exeter Message-ID: <20190329075506.chf7jt2low4gekhi@kandagawa.home.brucker.ch> Dear all, As part of the recent expansion of the Department of Computer Science (www.ex.ac.uk/computer-science/) at the University of Exeter, we are recruiting for a new Lecturer in Cybersecurity. You will join a growing department and will contribute to a new research focus in cybersecurity. This is a *unique* opportunity to join a new cybersecurity group as founding member and to influence its future development. Application in all areas of cybersecurity are welcome. 
Please apply by 4th of April 2019! See the full announcement and apply here: https://jobs.exeter.ac.uk/hrpr_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=458120OCP0&WVID=3817591jNg Feel free to contact me for informal inquires about the post. Best, Achim -- Dr. Achim D. Brucker | Chair of Cybersecurity | University of Exeter https://www.brucker.ch | https://logicalhacking.com/blog @adbrucker | @logicalhacking From S.J.Thompson at kent.ac.uk Fri Mar 29 11:34:46 2019 From: S.J.Thompson at kent.ac.uk (Simon Thompson) Date: Fri, 29 Mar 2019 11:34:46 +0000 Subject: [Haskell-cafe] Resources/papers on lazy I/O In-Reply-To: <155379911032.7058.8544784724675684287@localhost.localdomain> References: <5a4dbbf5-4f6b-3d8a-7d57-de330892c228@iohk.io> <155379911032.7058.8544784724675684287@localhost.localdomain> Message-ID: <66D9844F-BC2C-4465-8C5C-B531ABAA548F@kent.ac.uk> I wrote about this in the context of Miranda some time ago … https://kar.kent.ac.uk/20889/1/interactive_thompson.pdf Simon > On 28 Mar 2019, at 18:51, Ian Denhardt wrote: > > The basic problem is just that it's error prone when you're doing things > that are non-trivial wrt the lifetime of the file. Part of the original > motivation for Haskell's "purity" was that lazy evaluation and > side-effects are hard to think about. In languages that allow it, "don't > mix laziness and effects" is a bit of common folk-wisdom. See e.g: > > https://stuartsierra.com/2015/08/25/clojure-donts-lazy-effects > > If I'm writing a simple program that just reads in some data, does > stuff, and spits it back out, I usually don't stress about it. But for > long running programs, file descriptor leaks can be a problem, and > between that and needing to be async-exception safe, the usual resource > management strategies in Haskell make the whole thing pretty dicey. > > Quoting Vanessa McHale (2019-03-28 14:26:22) > >> the second issue could be better resolved with linear types! > > How would this work? Not sure I follow. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson at kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt -------------- next part -------------- An HTML attachment was scrubbed... URL: From rrnewton at gmail.com Sat Mar 30 18:24:15 2019 From: rrnewton at gmail.com (Ryan Newton) Date: Sat, 30 Mar 2019 14:24:15 -0400 Subject: [Haskell-cafe] [Job] Cloudseal seeks Haskell Software Engineers Message-ID: Dear fellow Haskellers, We're hiring! See below. -Ryan Newton Cloudseal co-founder, Professor of Computer Science (on leave), Functional programmer for 28 years, Haskeller for 10 *========================================================* We are looking for a Haskell software engineer who is passionate about developer technologies and changing how the world tests and runs software. Cloudseal is an early-stage startup, a spin-out from academic research on reliable (deterministic) software execution. Our core technology is a new containerization mechanism for Linux/x86 programs that enables reproducible builds and functional tests that never flake. 
From rrnewton at gmail.com Sat Mar 30 18:24:15 2019
From: rrnewton at gmail.com (Ryan Newton)
Date: Sat, 30 Mar 2019 14:24:15 -0400
Subject: [Haskell-cafe] [Job] Cloudseal seeks Haskell Software Engineers
Message-ID:

Dear fellow Haskellers,

We're hiring! See below.

-Ryan Newton
Cloudseal co-founder, Professor of Computer Science (on leave),
Functional programmer for 28 years, Haskeller for 10

*========================================================*

We are looking for a Haskell software engineer who is passionate about developer technologies and changing how the world tests and runs software. Cloudseal is an early-stage startup, a spin-out from academic research on reliable (deterministic) software execution. Our core technology is a new containerization mechanism for Linux/x86 programs that enables reproducible builds and functional tests that never flake. You will be joining a growing team with decades of collective experience contributing to developer tools, including the GHC compiler and ecosystem, as well as developer products at Intel and Microsoft.

*Skills sought*
*-------------------*
We are seeking curious and independent people ready to become involved with every part of product development. We are a Haskell+Rust shop, corresponding to the high- and low-level components of our stack, respectively, with this position focused on the former. Familiarity with low-level Linux systems software is not required but would be a positive. Experience with building services (authentication, billing) is a plus for this role. You should be comfortable with modern infrastructure essentials like AWS, Docker, CI, etc. and have passable sysadmin skills. Ability to work independently as part of a distributed development team is essential.

This full-time position carries an option for remote work (within the U.S.), but geographic proximity to Philadelphia, Indianapolis, or Bloomington, IN would be preferred. During a ramp-up phase we will work on site together at a higher rate (25-50% travel), after which in-person team meetups will be less frequent.

*Benefits*
*------------*
- Competitive salary & fringe benefits
- Significant early-stage stock options
- Flexible working hours and vacation time

*Applying*
*-------------*
Email your resume to jobs at cloudseal.io. In your message, please share links to your previous work, such as running websites or open source contributions.

From mightybyte at gmail.com Sun Mar 31 17:35:47 2019
From: mightybyte at gmail.com (MightyByte)
Date: Sun, 31 Mar 2019 17:35:47 +0000
Subject: [Haskell-cafe] Compose Second CFP & Keynote Speaker Announcement
Message-ID:

The April 23 submission deadline for Compose NYC is rapidly approaching, so get your submissions in soon. We are also excited to announce that we now have two confirmed keynote speakers:

David Spivak - Compositional Graphical Logic
Donya Quick - Making Algorithmic Music

Compose is a conference focused specifically on strongly typed functional programming languages. It will be held in New York on Monday and Tuesday, June 24-25, 2019. Registration will be open shortly.

http://www.composeconference.org/2019

To get a sense of Compose, you can check out the great talks from past conferences:
https://www.youtube.com/channel/UC0pEknZxL7Q1j0Ok8qImWdQ

Below is our call for presentations.
http://www.composeconference.org/2019/cfp/

In past years, we have also hosted an unconference over the weekend adjacent to the conference. The unconference details this year have not been finalized yet, but if you're interested you may want to keep that in mind when making travel plans.

***

Compose Conference NYC 2019
Second Call for Presentations
June 24-25, 2019
New York City

The audience for Compose is people using Haskell, PureScript, OCaml, F#, SML, and other strongly typed functional programming languages who are looking to increase their skills or learn new technologies and libraries. Presentations should be aimed at teaching or introducing new ideas or tools. We are also interested in presentations aimed at taking complex concepts, such as program derivation, and putting them into productive use. However, presentations on anything that you suspect our audience may find interesting are welcome.
The following are some of the types of talks we would welcome:

*Library/Tool Talks* — Exploring the uses of a powerful toolkit or library, be it for parsing, testing, data access and analysis, or anything else.

*Production Systems* — Experience reports on deploying functional techniques in real systems; insights revealed, mistakes made, lessons learned.

*Theory made Practical* — Just because it's locked away in papers doesn't mean it's hard! Accessible lectures on classic results and why they matter to us today. Such talks can include simply introducing the principles of a field of research so as to help the audience read up on it in the future; from abstract machines to program derivation to branch-and-bound algorithms, the sky's the limit.

Check out the Compose YouTube channel ( https://www.youtube.com/playlist?list=PLNoHgLVTxtaoolkQo4hLy4ZsA1prUJ51m ) to see videos of talks we've had previously and get an idea of the kinds of topics we usually feature.

We also welcome presentations for more formal tutorials. Tutorials should be aimed at a smaller audience of beginner-to-novice understanding, and ideally include hands-on exercises.

The due date for submissions is *April 23, 2019*. We will send out notice of acceptance by *April 30th*. We prefer that submissions be via the EasyChair website ( https://easychair.org/conferences/?conf=compose2019 ). Please suggest a title, and describe the topic you intend to speak on. Talks can be either 30 or 45 minutes; please indicate how much time you would prefer to take. You may submit multiple talks if you have several ideas and are unsure which would be the most likely to be accepted. Accepted talks will be asked to submit slides for review prior to the conference.

Feel free to include any additional information on both your expertise and interesting elements of your topic that would be appropriate for inclusion in the public abstract. Furthermore, if your abstract doesn't feel "final"—don't worry! We'll work with you to polish it up.

If you want to discuss your presentation(s) before submitting, or to further nail down what you intend to speak on, please feel free to contact us at n... at composeconference.org ( n... at composeconference.org ). We're happy to work with you, even if you are a new or inexperienced speaker, to help your talk be great.

Diversity

We would like to put an emphasis on soliciting a diverse set of speakers - anything you can do to distribute information about this CFP and encourage submissions from under-represented groups would be greatly appreciated. We welcome your contributions and encourage you to apply!

Best,
Doug

Sent via Superhuman ( https://sprh.mn/?vip=mightybyte at gmail.com )

From mlang at delysid.org Sun Mar 31 22:50:49 2019
From: mlang at delysid.org (Mario Lang)
Date: Mon, 01 Apr 2019 00:50:49 +0200
Subject: [Haskell-cafe] I fell in love with System.Console.Haskeline.getExternalPrint
Message-ID: <878swuhd4m.fsf@fx.blind.guru>

Hi.

Since I discovered getExternalPrint, I found a ton of use cases for it. In particular, it makes it possible to write background threads that report stuff to the console without disturbing the prompt.

I ended up writing a sort of uGHCi with the help of hint and the unreleased master branch of haskeline[1] to make externalPrint available from within the interpreter. Combined with Shh, this opens the door for a lot of useful functionality.
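The basic pattern is tiny: grab the printer once inside the InputT block, hand it to whatever background thread needs it, and keep using the ordinary prompt functions. A minimal, self-contained sketch of just that pattern (the five-second ticker is only for illustration):

import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import System.Console.Haskeline

main :: IO ()
main = runInputT defaultSettings $ do
  eprint <- getExternalPrint        -- String -> IO (), safe to call from other threads
  tid <- liftIO . forkIO . forever $ do
    threadDelay 5000000             -- stand-in for real background work
    eprint "tick\n"                 -- printed without clobbering the prompt
  let loop = do
        ml <- getInputLine "% "
        case ml of
          Nothing   -> liftIO (killThread tid)
          Just line -> outputStrLn line >> loop
  loop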
Here is a simplified example based on shell programming with the help of Shh:

% let watch r p = forkIO . forever $ printProc p >> readIORef r >>= OS.sleep
% delay <- newIORef 10
% clock <- watch delay OS.date
Sat Mar 2 21:32:28 CET 2019
Sat Mar 2 21:32:38 CET 2019
% writeIORef delay 5
Sat Mar 2 21:32:48 CET 2019
Sat Mar 2 21:32:53 CET 2019
Sat Mar 2 21:32:58 CET 2019
Sat Mar 2 21:33:03 CET 2019
% killThread clock

printProc uses externalPrint from haskeline to print the output of a shell command to the console without disturbing the prompt. The OS module in this example simply exports all executables as Haskell functions, thanks to the TH magic from Shh.

I am relying on a pretty crude hack to make this work:

(rFd, wFd) <- liftIO createPipe
eprint <- getExternalPrint -- from haskeline
liftIO . forkIO . forever $ do
  (s, bc) <- fdRead rFd 1024
  eprint s
-- ...
-- define a function in the interpreter using hint
runStmt $ "let externalPrint s = fdWrite (read " <> show (show wFd) <> ") s >> pure ()"

This hack is basically the whole magic of my own hand-rolled uGHCi. I'd love not to reinvent the wheel there, and just be able to use standard GHCi to make use of externalPrint.

The question is, would a similar thing be possible to implement in GHCi directly, and if so, what would be required to make this work? I am likely far too much of a rookie to get this working on my own, so I am asking for help. What steps should I follow to eventually achieve my goal? I guess submitting a feature request would be a start. However, I want progress, so I am wondering:

* The pipe trick is likely too hacky for GHCi. Are there any other portable alternatives for getting data from within the interpreter to the Haskell process running it?
* Or is there a way to serialize an IO action into the interpreter that I've missed?

The problem here is the boundary between the process that runs the interpreter and the interpreter itself. I am a bit hazy on terminology here, but as I see it, getExternalPrint returns a function that has internal state. So it isn't really possible to make such a function available from within the interpreter. Hence the pipe hack above, which just sends the *argument* to externalPrint from the interpreter to the process running it.

Any insights that might help me make that available in standard GHCi? I really think this is a pretty unique feature that would enable all sorts of interesting interactive code.

[1] Haskeline < 0.8.0 doesn't allow combining InterpreterT from hint with InputT because of the way exceptions are done. The master branch of haskeline fixes that, so finally you can have a transformer stack that combines both, allowing for pretty simple interactive Haskell interpreters with readline functionality. Thanks for that!

--
CYa,
⡍⠁⠗⠊⠕

From allbery.b at gmail.com Sun Mar 31 23:13:57 2019
From: allbery.b at gmail.com (Brandon Allbery)
Date: Sun, 31 Mar 2019 19:13:57 -0400
Subject: [Haskell-cafe] I fell in love with System.Console.Haskeline.getExternalPrint
In-Reply-To: <878swuhd4m.fsf@fx.blind.guru>
References: <878swuhd4m.fsf@fx.blind.guru>
Message-ID:

You might consider that ghci is just a client of ghc-api that's built into ghc for convenience; see the ghci-ng package on hackage, which is a standalone version used to experiment with new ghci features. ghci itself has a separate interpreter, in other words.
(Note also -fexternal-interpreter, which relies on this to move the backend to a different process or even different host for cross-platform work; you might want to look into customization at that level.) On Sun, Mar 31, 2019 at 6:51 PM Mario Lang wrote: > Hi. > > Since I discovered getExternalPrint, I found a ton of use cases for it. > In particular, it makes it possible to write background threads that > report stuff to the console without disturbing the prompt. > > I ended up writing a sort of uGHCi with the help of hint and the > unreleased master branch of haskeline[1] to make externalPrint available > from within the interpreter. Combined with Shh, this opens the door for > a lot of useful functionality. Here is a simplified example based > on shell programming with the help of Shh: > > % let watch r p = forkIO . forever $ printProc p >> readIORef r >>= > OS.sleep > % delay <- newIORef 10 > % clock <- watch delay OS.date > Sat Mar 2 21:32:28 CET 2019 > Sat Mar 2 21:32:38 CET 2019 > % writeIORef delay 5 > Sat Mar 2 21:32:48 CET 2019 > Sat Mar 2 21:32:53 CET 2019 > Sat Mar 2 21:32:58 CET 2019 > Sat Mar 2 21:33:03 CET 2019 > % killThread clock > > printProc uses externalPrint from haskeline to print the output of a > shell command to the console without disturbing the prompt. > The OS module in this example simply exports all executables as haskell > functions, thanks to the TH magic from Shh. > > I am relying on a pretty crude hack to make this work: > > (rFd, wFd) <- liftIO createPipe > eprint <- getExternalPrint -- from haskeline > liftIO . forkIO . forever $ do > (s, bc) <- fdRead rFd 1024 > eprint s > -- ... > -- define a function in the interpreter using hint > runStmt $ "let externalPrint s = fdWrite (read " <> show (show wFd) <> > ") s >> pure ()" > > This hack is basically the whole magic of my own hand-rolled uGHCi. > > I'd love to not reinvent the wheel there, and just be able to use > standard GHCi to make use of externalPrint. > > Question is, would a similar thing be possible to implement in GHCi > directly, and if so, what would be required to make this work? > I am likely far too much a rooky to get this working on my own, so I am > asking for help. What steps should I follow to eventually achieve my > goal? I guess submitting a feature request would be a start. > However, I want progress, so I am wondering: > > * The pipe trick is likely too hacky for GHCi. Are there any other > portable alternatives for getting data from within the interpreter to > the haskell process running it? > * Or is there a way to serialize an IO action into the interpreter that > I've missed? > > The problem here is the boundary between the process that runs > the interpreter, and the interpreter itself. I am a bit whacky on > terminology here, but as I see it, getExternalPrint returns a function that > has internal state. So it isn't really possible to make such a function > available from within the interpreter. Hence, the pipe hack above, > which just sends the *argument* to externalPrint from the interpreter to > the process running it. > > Any insights that might help me make that available in standard > GHCi? I really think this is a pretty unique feature that would enable > all sorts of interesting interactive code. > > [1] Haskeline < 0.8.0 doesn't allow to combine IntterpreterT from hint > with InputT because of the way exceptions are done. 
The master branch > of haskeline fixes that, so finally you can have a transformer stack > that combines both, allowing for pretty simple interactive haskell > interpreters with readline functionality. Thanks for that! > > -- > CYa, > ⡍⠁⠗⠊⠕ > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- brandon s allbery kf8nh allbery.b at gmail.com