[Haskell-cafe] Can a GC delay TCP connection formation?
Jeff Shaw
shawjef3 at gmail.com
Tue Nov 27 20:02:12 CET 2012
Hello Timothy and others,
One of my clients hosts their HTTP clients in an Amazon cloud, so even
when they turn on persistent HTTP connections, they use many
connections. Usually they only end up sending one HTTP request per TCP
connection. My specific problem is that they want a response in 120 ms
or so, and at times they are unable to complete a TCP connection in that
amount of time. I'm looking at on the order of 100 TCP connections per
second, and on the order of 1000 HTTP requests per second (other clients
do benefit from persistent HTTP connections).
Once each minute, a thread of my program updates a global state, stored
in an IORef, and updated with atomicModifyIORef', based on query results
via HDBC-odbc. The query results are strict, and atomicModifyIORef'
should receive the updated state already evaluated. I reduced the amount
of time that query took from tens of seconds to just a couple, and for
some reason that reduced the proportion of TCP timeouts drastically. The
approximate before and after TCP timeout proportions are 15% and 5%. I'm
not sure why this reduction in timeouts resulted from the query time
improving, but this discovery has me on the task of removing all
database code from the main program and into a cron job. My best guess
is that HDBC-odbc somehow disrupts other communications while it waits
for the DB server to respond.
To respond to Ertugrul, I'm compiling with -threaded, and running with
+RTS -N.
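Concretely, that means building and running along these lines (Main.hs
is a placeholder for the actual program; -rtsopts is needed so the
binary accepts +RTS flags at run time):

    ghc -threaded -rtsopts Main.hs
    ./Main +RTS -N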
I hope this helps describe my problem. I can probably come up with some
hard information if requested, e.g. ThreadScope output.
Jeff
On 11/27/2012 10:55 AM, timothyhobbs at seznam.cz wrote:
> Could you give us more info on what your constraints are? Is it
> necessary that you have a certain number of connections per second, or
> is it necessary that the connection results very quickly after some
> other message is received?