[Haskell-cafe] Haskell and Big Data
carter.schonwald at gmail.com
Thu Dec 19 17:26:09 UTC 2013
There are a number of Haskell projects for large scale data analysis that
are likely to be released over the coming months.
On the high-performance front, it's worth noting that pretty much every
usable Python tool for GPU computing has to replicate the typing discipline
that Haskell libraries like accelerate get for free.
On Thursday, December 19, 2013, He-chien Tsai wrote:
> Have you taken a look at the hlearn and statistics packages? It's even easy to
> parallelize hlearn on a cluster, because its training results are designed to be
> composable: you can create two models, train them separately,
> and finally combine them. You can also use another database, such as Redis or
> Cassandra, both of which have Haskell bindings, as a backend. For parallelizing on
> clusters, hdph is also good.
> I personally prefer Python for data science because it has much more
> mature packages and is more interactive and more effective than Haskell
> and Scala (not kidding: you can generate compiled C for core data structures
> and algorithms with the Python-like Cython and call it from Python, and
> exploit GPUs for acceleration via Theano). Spark also has an unfinished Python binding.
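The "composable training" idea above can be sketched in plain Haskell: partial training results form a monoid, so chunks of data can be trained independently (even on different machines) and merged afterwards. The names below (`MeanModel`, `trainMean`) are illustrative, not HLearn's actual API; a running mean stands in for a real statistical model.

```haskell
-- A minimal sketch of monoid-composable training, in the HLearn spirit.
-- A "model" summarizing data by its mean: partial summaries combine
-- associatively, so train-per-chunk then merge equals training on all data.
data MeanModel = MeanModel { count :: !Int, total :: !Double }
  deriving (Show, Eq)

instance Semigroup MeanModel where
  MeanModel c1 t1 <> MeanModel c2 t2 = MeanModel (c1 + c2) (t1 + t2)

instance Monoid MeanModel where
  mempty = MeanModel 0 0

-- "Train" on a chunk of data: just a fold into the summary.
trainMean :: [Double] -> MeanModel
trainMean xs = MeanModel (length xs) (sum xs)

-- Query the trained model.
mean :: MeanModel -> Double
mean (MeanModel c t) = t / fromIntegral c

main :: IO ()
main = do
  let chunkA   = trainMean [1, 2, 3]   -- trained on one node
      chunkB   = trainMean [4, 5, 6]   -- trained on another
      combined = chunkA <> chunkB      -- merged, no retraining
  print (mean combined)  -- same result as mean (trainMean [1..6])
```

Because `(<>)` is associative, the merge order across a cluster doesn't matter, which is what makes this style map cleanly onto frameworks like hdph.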
> On 2013/12/18 at 3:41 PM, "jean-christophe mincke" <
> jeanchristophe.mincke at gmail.com> wrote:
> > Hello Cafe,
> > Big Data is a bit trendy these days.
> > Does anybody know about plans to develop a Haskell ecosystem in that
> > domain? I.e. tools such as Storm or Spark (possibly on top of Cloud Haskell) or,
> at least, bindings to tools which exist in other languages.
> > Thank you
> > Regards
> > J-C
> > _______________________________________________
> > Haskell-Cafe mailing list
> > Haskell-Cafe at haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell-cafe