[Haskell-cafe] Loading a csv file with ~200 columns into Haskell Record

Mario Blažević mblazevic at stilo.com
Wed Oct 4 13:32:34 UTC 2017


On 2017-09-30 09:30 PM, Guru Devanla wrote:
> ...
> I am not looking to replicate the Pandas data-frame functionality in 
> Haskell. First thing I want to do is reach out to the 'record' data 
> structure. Here are some ideas I have:
> 
> 1.  I need to declare all these 100+ columns into multiple record 
> structures.
> 2.  Some of the columns can have NULL/NaN values. Therefore, some of the 
> attributes of the record structure would be 'Maybe' values. Now, I could 
> drop some columns during load and cut down the number of attributes I 
> create per record structure.
> 3.  Create a dictionary of each record structure which will help me 
> index into them.
> 
> I would like some feedback on the first two points. There seems to be a 
> lot of boilerplate code I have to generate for creating 100s of record 
> attributes. Is this the only sane way to do this? What other patterns 
> should I consider while solving such a problem?
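[A sketch of the plain-record approach from points 1 and 2, using only base. The record, field names, and three-column slice are invented for illustration; a real loader for a 200-column file would use a CSV library such as cassava rather than naive comma splitting.]

```haskell
import Text.Read (readMaybe)

-- Hypothetical three-column slice of the 200-column file.
data Trade = Trade
  { tradeSymbol :: String
  , tradePrice  :: Maybe Double  -- NULL/NaN cells become Nothing
  , tradeQty    :: Maybe Int
  }

-- Split one CSV line on commas (no quoting support; a real loader
-- would use a CSV library).
splitOnCommas :: String -> [String]
splitOnCommas s = case break (== ',') s of
  (field, ',' : rest) -> field : splitOnCommas rest
  (field, _)          -> [field]

-- readMaybe turns unparsable cells such as "NULL" into Nothing.
parseTrade :: String -> Maybe Trade
parseTrade line = case splitOnCommas line of
  [sym, price, qty] ->
    Just (Trade sym (readMaybe price) (readMaybe qty))
  _ -> Nothing
```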


	I can only offer a suggestion for point #2. Have a look at the README 
for the rank2classes package. You'd still need to generate the 
boilerplate code for the 100+ record fields, but only once.

http://hackage.haskell.org/package/rank2classes
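[To illustrate the idea behind that suggestion, here is a sketch using only base, not rank2classes itself; the record and field names are invented. The record is parameterized over a functor f, so one declaration serves both complete rows (f ~ Identity) and rows with NULL/NaN cells (f ~ Maybe). rank2classes then supplies rank-2 analogues of Functor/Applicative/Traversable over such records so the one declaration can be mapped and traversed generically.]

```haskell
import Data.Functor.Identity (Identity (..))

-- Hypothetical three-field slice of the 100+-field record,
-- parameterized over a functor f.
data Row f = Row
  { rowSymbol :: f String
  , rowPrice  :: f Double
  , rowQty    :: f Int
  }

-- f ~ Identity: every column is present.
fullRow :: Row Identity
fullRow = Row (Identity "AAPL") (Identity 153.2) (Identity 100)

-- f ~ Maybe: the same declaration accommodates NULL/NaN cells,
-- with no second record type.
sparseRow :: Row Maybe
sparseRow = Row (Just "AAPL") Nothing (Just 100)
```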

