suboptimal ghc code generation in IO vs equivalent pure code case

Harendra Kumar harendra.kumar at gmail.com
Sat May 14 21:18:18 UTC 2016


I have stared at the Cmm and assembly output quite a bit. As expected, there is
no trace of the state token in either. Here is what is happening.

In the IO case the entire original list is evaluated and unfolded onto the
stack first. During the recursion the stack holds as many closure pointers as
there are elements in the list, with the last element of the list on top. Only
when we finish recursing over the original list does the stack unwind, and the
closures for the new list are then created in reverse order. This is all quite
evident from the Cmm dump output.
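To make the pattern concrete, here is a minimal sketch (an assumed shape for
illustration, not the actual benchmark code): a straightforward monadic map
whose recursive call is not in tail position, so the stack grows by one frame
per element and the output conses are only built while the stack unwinds.

    -- Hedged sketch only; the real benchmark code may differ.
    mapIO :: (a -> IO b) -> [a] -> IO [b]
    mapIO _ []     = return []
    mapIO f (x:xs) = do
      y  <- f x
      ys <- mapIO f xs   -- non-tail call: one stack frame per element
      return (y : ys)    -- the cons is built only after the recursion returns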

This process retains a lot of heap and stack memory (proportional to the size
of the list), which forces the GC to do a lot of walking, fixing and copying.
I suspect that is where the additional cost comes from; as the list grows,
this cost grows nonlinearly. It also explains why, at smaller list sizes, the
IO version performs not just on par with the pure version but a tad better:
if GC overhead is left out of the picture, this code is in fact more
efficient.
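For contrast, a hedged sketch of why a pure, lazy map avoids this retention
(assuming the pure version is essentially an ordinary lazy map, which is not
shown here): each output cons is produced on demand, so with an incremental
consumer only a bounded part of the input and output is live at any time and
the GC has very little to walk or copy.

    -- Hedged sketch for contrast; assumes the pure version is a lazy map.
    mapPure :: (a -> b) -> [a] -> [b]
    mapPure _ []     = []
    mapPure f (x:xs) = f x : mapPure f xs   -- consumer drives production lazily

The difference shows up directly in the RTS statistics (+RTS -s), as in the GC
figures quoted below.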

-harendra

On 15 May 2016 at 01:56, David Feuer <david.feuer at gmail.com> wrote:

> The state token is zero-width and should therefore be erased altogether in
> code generation.
> On May 14, 2016 4:21 PM, "Tyson Whitehead" <twhitehead at gmail.com> wrote:
>
>> On 14/05/16 02:31 PM, Harendra Kumar wrote:
>>
>>> The difference seems to be entirely due to memory pressure. At list size
>>> 1000 both the pure version and the IO version perform equally, but as the
>>> size of the list increases the pure version scales linearly while the IO
>>> version degrades superlinearly. Here are the execution times per list
>>> element in ns as the list size increases:
>>>
>>> Size of list    Pure (ns)    IO (ns)
>>> 1000              8.7          8.3
>>> 10000             8.7         18
>>> 100000            8.8         63
>>> 1000000           9.3        786
>>>
>>> This seems to be due to increased GC activity in the IO case. The GC
>>> stats for list size 1 million are:
>>>
>>> IO case:     %GC time  66.1%  (61.1% elapsed)
>>> Pure case:   %GC time   2.6%  ( 3.3% elapsed)
>>>
>>> Not sure if there is a way to write this code in the IO monad that can
>>> reduce this overhead.
>>>
>>
>> Something to be aware of is that GHC currently can't pass multiple return
>> values in registers (that may not be a 100% accurate statement, but it is a
>> reasonable high-level summary; see the ticket for details):
>>
>> https://ghc.haskell.org/trac/ghc/ticket/2289
>>
>> This can bite you with the IO monad, as having to pass around the world
>> state token turns single return values into multiple return values (i.e.,
>> the new state token plus the returned value).
>>
>> I haven't actually dug into your code to see if this is part of the
>> problem, but figured I would mention it.
>>
>> Cheers!  -Tyson

