[Haskell-cafe] Distributing Haskell on a cluster
felipe zapata
tifonzafel at gmail.com
Sun Mar 15 22:53:02 UTC 2015
Hi all,
I have posted the following question on Stack Overflow, but so far I have
not received an answer:
http://stackoverflow.com/questions/29039815/distributing-haskell-on-a-cluster
I have a piece of code that processes files:
processFiles :: [FilePath] -> (FilePath -> IO ()) -> IO ()
This function spawns an async process that executes an IO action. That IO
action must be submitted to a cluster through a job scheduling system (e.g.
Slurm).
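For reference, here is a minimal sketch of what processFiles looks like,
written with the async package (simplified; the body is only illustrative):

import Control.Concurrent.Async (mapConcurrently)
import Control.Monad (void)

-- Run the supplied IO action for every input file, one async per file.
-- In the real program each action ends up being submitted to the cluster.
processFiles :: [FilePath] -> (FilePath -> IO ()) -> IO ()
processFiles files action = void (mapConcurrently action files)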
Because I must go through the job scheduling system, it is not possible to
use Cloud Haskell to distribute the closure. Instead, the program writes a
new *Main.hs* containing the desired computations, which is copied to the
cluster node together with all the modules that Main depends on and then
executed remotely with "runhaskell Main.hs [opts]". The async process should
then periodically poll the job scheduling system (using *threadDelay*
between checks) to find out whether the job is done; a sketch of that
submit-and-poll step is below.
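To make that concrete, this is roughly the shape of the submit-and-poll step
I have in mind. It is only a sketch: the sbatch/squeue invocations are
assumptions about a typical Slurm setup, and error handling (e.g. for a job
id that has already left the queue) is omitted.

import Control.Concurrent (threadDelay)
import System.Process (readProcess)

-- Submit the generated Main.hs through Slurm and return the job id.
-- "--wrap" turns the command string into a batch script; "--parsable"
-- makes sbatch print only the job id (possibly followed by ";<cluster>").
submitMain :: FilePath -> [String] -> IO String
submitMain mainHs opts = do
  out <- readProcess "sbatch"
           ["--parsable", "--wrap", unwords ("runhaskell" : mainHs : opts)]
           ""
  return (takeWhile (/= ';') (concat (lines out)))

-- Poll the queue with threadDelay until the job is no longer listed.
-- `squeue -h -j <id>` prints a line while the job is pending or running
-- and nothing once it has left the queue.
waitForJob :: String -> IO ()
waitForJob jobId = loop
  where
    loop = do
      out <- readProcess "squeue" ["-h", "-j", jobId] ""
      if null (words out)
        then return ()
        else threadDelay (30 * 1000000) >> loop  -- check again in 30 s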
Is there a way to avoid creating a new Main? Can I serialize the IO action
and somehow execute it on the node?
Best,
Felipe