One-shot semantics in GHC event manager

Kazu Yamamoto (山本和彦) kazu at iij.ad.jp
Tue Oct 21 04:15:46 UTC 2014


Hi,

>> Andreas - want me to go ahead and get you some hardware to test Ben's
>> patch in the mean time? This way we'll at least not leave it hanging
>> until the last moment...
> 
> I will also try this with two 20-core machines connected 10G on
> Monday.

I measured the performance of GHC head, 7.8.3, and 7.8.3 + Ben's patch
set.

Server: witty 8080 -r -a -s +RTS -N<n> *1
Measurement tool: weighttp -n 100000 -c 1000 -k -t 19 http://192.168.0.1:8080/
Measurement env: two 20-core (w/o HT) machines directly connected via 10G Ethernet

Here are the results (req/s):

-N<n>          1       2        4        8        16
---------------------------------------------------------
head           92,855  155,957  306,813  498,613  527,034
7.8.3          86,494  160,321  310,675  494,020  510,751
7.8.3+ben      37,608   69,376  131,686  237,783  333,946

head and 7.8.3 have almost the same performance, but I saw a
significant performance regression with Ben's patch set.
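(For readers skimming the thread: the feature under test changes how an
fd is armed in the event manager. Below is a minimal sketch of one-shot
registration, assuming the post-patch GHC.Event API where registerFd
takes a Lifetime argument (OneShot vs. MultiShot); the exact exports may
differ in the final patch, so treat it as illustrative only.)

-- Minimal sketch (assumptions: GHC.Event exports Lifetime(..) and a
-- registerFd taking a Lifetime; the threaded RTS is in use, otherwise
-- getSystemEventManager returns Nothing).
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import GHC.Event (Lifetime (OneShot), evtRead, getSystemEventManager,
                  registerFd)
import System.Posix.Types (Fd)

-- Block until the given fd becomes readable. With OneShot the
-- kernel-side registration is dropped after the first event, so the
-- caller re-registers only when it actually needs to wait again.
waitReadOneShot :: Fd -> IO ()
waitReadOneShot fd = do
  Just mgr <- getSystemEventManager
  done <- newEmptyMVar
  _ <- registerFd mgr (\_key _evt -> putMVar done ()) fd evtRead OneShot
  takeMVar done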

*1 https://github.com/kazu-yamamoto/witty/blob/master/README.md

P.S.

- Scalability is not linear, as you can see.
- prefork (witty -n <n>) got a much better result than Mio (witty +RTS
  -N<n>): 677,837 req/s for witty 8080 -r -a -s -n 16. (A rough sketch
  of the prefork idea follows below.)
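Here is a rough sketch of what I mean by prefork-style accepting,
under the assumption that each worker binds its own SO_REUSEPORT
listener (witty's actual -n code may do this differently, e.g. by
forking OS processes):

-- Rough sketch (assumptions: SO_REUSEPORT is available and the
-- network package's ReusePort option works on the target OS). Each
-- worker owns its own listening socket, so accepts are load-balanced
-- by the kernel instead of funnelling through one shared accept loop.
import Control.Concurrent (forkOn, getNumCapabilities)
import Control.Monad (forM_, forever)
import Network.Socket

preforkServer :: PortNumber -> (Socket -> IO ()) -> IO ()
preforkServer port handler = do
  n <- getNumCapabilities
  forM_ [0 .. n - 1] $ \i -> forkOn i $ do
    sock <- socket AF_INET Stream defaultProtocol
    setSocketOption sock ReuseAddr 1
    setSocketOption sock ReusePort 1      -- one listener per worker
    bind sock (SockAddrInet port 0)       -- 0 = INADDR_ANY
    listen sock 1024
    forever $ do
      (conn, _peer) <- accept sock
      handler conn                        -- handler should fork/close conn

The caller has to keep the main thread alive (e.g. block on an MVar);
each handler is expected to fork per connection and close its socket.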

--Kazu

