[Haskell-cafe] Haskell and Big Data
depot051 at gmail.com
Thu Dec 19 17:15:18 UTC 2013
Have you taken a look at the hlearn and statistics packages? HLearn is even easy to
parallelize on a cluster, because its training results are designed to be
composable: you can create two models, train them separately,
and finally combine them. You can also use another database such as Redis or
Cassandra, both of which have Haskell bindings, as a backend. For parallelizing across
clusters, hdph is also good.
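The composability described above amounts to the trained model forming a monoid. Here is a minimal sketch of the idea; the names (MeanModel, train, mean) are illustrative and not HLearn's actual API:

```haskell
-- Hypothetical sketch of "composable training": a running-mean model
-- whose training results form a Monoid, so models trained on separate
-- data shards can be merged exactly, without retraining.

data MeanModel = MeanModel { count :: !Int, total :: !Double }

-- Merging two models just adds their sufficient statistics.
instance Semigroup MeanModel where
  MeanModel n1 s1 <> MeanModel n2 s2 = MeanModel (n1 + n2) (s1 + s2)

instance Monoid MeanModel where
  mempty = MeanModel 0 0

-- "Training" computes the sufficient statistics of one shard.
train :: [Double] -> MeanModel
train xs = MeanModel (length xs) (sum xs)

-- Query the combined model.
mean :: MeanModel -> Double
mean (MeanModel n s) = s / fromIntegral n

main :: IO ()
main = do
  let shard1 = train [1, 2, 3]   -- trained on node 1
      shard2 = train [4, 5]      -- trained on node 2
      merged = shard1 <> shard2  -- combined, no retraining needed
  print (mean merged)            -- identical to training on all the data
```

Because `<>` is associative, the shards can be combined in any grouping, which is exactly what makes a map-reduce style cluster deployment straightforward.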
I personally prefer Python for data science, because it has much more mature
packages and is more interactive and more effective than Haskell or Scala
(not kidding: you can compile the core data structures and algorithms to C
with the Python-like Cython and call them from Python, and exploit GPUs for
acceleration with Theano). Spark also has an unfinished Python binding.
On 2013/12/18 at 3:41 PM, "jean-christophe mincke" <
jeanchristophe.mincke at gmail.com> wrote:
> Hello Cafe,
> Big Data is a bit trendy these days.
> Does anybody know about plans to develop a Haskell ecosystem in that field?
> I.e. tools such as Storm or Spark (possibly on top of Cloud Haskell) or,
> at least, bindings to tools which exist in other languages.
> Thank you
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org