[Haskell-cafe] Storing big datasets

Joachim Durchholz jo at durchholz.org
Sat May 7 12:44:48 UTC 2016


On 07.05.2016 at 12:48, David Turner wrote:
> B-trees are good for storing data on disk, and something like Postgres is
> an extremely efficient implementation of a B-tree supporting atomic updates
> and the like. I'd use that!

The original question was about standard hardware (i.e. still including 
rotating rust) and ~50 updates/second.
I'd assume that's doable with an ACID-compliant DB on standard 
hardware, though it leaves little room for inefficiencies, so you need 
to know what you're doing in SQL.
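
A minimal sketch of what one such update might look like from Haskell, 
using the postgresql-simple bindings; the counters table, its columns, 
and the bumpCounter helper are hypothetical, just to make the shape 
concrete:

{-# LANGUAGE OverloadedStrings #-}
import Database.PostgreSQL.Simple

-- Hypothetical schema, for illustration only:
--   CREATE TABLE counters (key text PRIMARY KEY, value bigint NOT NULL);
--
-- A single parameterized UPDATE against the primary key.  Outside an
-- explicit transaction, each call is its own commit, i.e. one fsync
-- per update, and that fsync is roughly what limits the sustainable
-- rate on a rotating disk.
bumpCounter :: Connection -> String -> Int -> IO ()
bumpCounter conn key delta = do
  _ <- execute conn
         "UPDATE counters SET value = value + ? WHERE key = ?"
         (delta, key)
  return ()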

In a later update, he corrected the specs to 1,000-2,000 updates/second, 
and I believe that's impossible on a single standard HDD. 
I don't know whether Mikhail considers SSDs part of a standard configuration.

Now, transaction rates aren't the same as write rates: on rotating 
disks the per-transaction cost is dominated by the commit's fsync, not 
by the individual writes. If he can batch multiple writes into one 
transaction, PostgreSQL or any other RDBMS might actually work.
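
A minimal sketch of that batching idea, again with postgresql-simple 
and the same hypothetical counters table: collect the pending writes 
and flush them inside a single transaction, so the commit's fsync is 
paid once per batch rather than once per write.

{-# LANGUAGE OverloadedStrings #-}
import Database.PostgreSQL.Simple

-- Flush a whole batch of pending (delta, key) updates in one
-- transaction.  The commit (and its fsync) happens once per batch,
-- so 1,000-2,000 logical writes/second can ride on a much lower
-- transaction rate.
flushBatch :: Connection -> [(Int, String)] -> IO ()
flushBatch conn updates =
  withTransaction conn $
    mapM_ (execute conn
             "UPDATE counters SET value = value + ? WHERE key = ?")
          updates

On the server side, PostgreSQL's group commit and the synchronous_commit 
setting attack the same fsync bottleneck, the latter at the cost of 
weakened durability guarantees.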

