<div dir="ltr">Thank you for your extended explanation. Let me reply my thoughts inline below.<div><br><div class="gmail_quote"><div dir="ltr">pá 15. 6. 2018 v 0:22 odesílatel Li-yao Xia <<a href="mailto:lysxia@gmail.com" target="_blank">lysxia@gmail.com</a>> napsal:<br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The purpose of the size parameter is not well-defined formally, but it <br>
is a very convenient knob to easily tune the test suite in various <br>
situations, that is definitely worth the negligible cost of having it <br>
around unconditionally.<br></blockquote><div><br></div><div>That's what has been disturbing me. Is it a bound on the size of a data structure? On the amount of memory it occupies (which can be quite different due to laziness)? Or a time bound on the tested algorithm's running time?</div><div><br></div><div>If we take a specific example, a typical recursive data structure is some kind of tree. When generating a random binary tree, should the size bound its height? Or its average height? Or the number of its elements?</div><div><br></div><div>As the size bound itself is not really defined, everyone uses it differently, so in the end all we know is that there is <i>some</i> bound and that we can make it larger or smaller, but that's really all.</div><div><br></div>
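<div>To make the ambiguity concrete, here is a minimal sketch of two equally plausible sized generators for a toy binary tree (the Tree type and both generator names are mine; sized, oneof and choose are QuickCheck's):</div><div><pre>
import Test.QuickCheck

data Tree = Leaf | Node Tree Tree
  deriving Show

-- Interpretation 1: the size parameter bounds the height.
genBoundedHeight :: Gen Tree
genBoundedHeight = sized go
  where
    go 0 = pure Leaf
    go n = oneof [pure Leaf, Node <$> go (n - 1) <*> go (n - 1)]

-- Interpretation 2: the size parameter is the number of
-- internal nodes, split between the two subtrees.
genBoundedNodes :: Gen Tree
genBoundedNodes = sized go
  where
    go 0 = pure Leaf
    go n = do
      k <- choose (0, n - 1)
      Node <$> go k <*> go (n - 1 - k)
</pre></div><div>Both are reasonable readings of "size", yet at size 30 the first can produce on the order of 2^30 nodes while the second produces exactly 30.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">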
<br>
Without a size parameter, a fixed distribution means that if we want the <br>
generator to cover all possible cases, we have to keep a small <br>
probability of generating humongous examples and thus go OOM. We can <br>
avoid that by making the size of generated values bounded by the size <br>
parameter (or a function thereof), which we can increase easily when <br>
more resources become available.<br></blockquote><div><br></div><div>That's true, but I'd assume that if we use something like the <a href="https://en.wikipedia.org/wiki/Exponential_distribution">exponential distribution</a>, the probability of an example that'd cause OOM can be really tiny. (Possibly smaller than the chance that there is a human-introduced bug causing OOM, for example.)</div>
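<div>A minimal sketch of what I mean, drawing a list length from a geometric distribution (the discrete counterpart of the exponential) instead of from the size parameter; both function names are mine:</div><div><pre>
import Test.QuickCheck

-- P(len = n) = (1 - p)^n * p, so the probability of a huge
-- example decays exponentially with its size.
geometricLength :: Double -> Gen Int
geometricLength p = go 0
  where
    go n = do
      x <- choose (0, 1) :: Gen Double
      if x < p then pure n else go (n + 1)

-- A list generator with one fixed distribution: every length is
-- possible, but OOM-sized examples are vanishingly rare.
fixedDistList :: Arbitrary a => Gen [a]
fixedDistList = do
  n <- geometricLength 0.05  -- mean length (1 - p) / p = 19
  vectorOf n arbitrary
</pre></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">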
<br>
Furthermore, if we really want to generate very large examples, the only <br>
way with a fixed distribution is to wait longer. Instead, using a size <br>
parameter, we can make smaller values less likely to target further <br>
regions in the search space more accurately.<br></blockquote><div><br></div><div>That's probably the most convincing argument for me. Without a size bound, as the size of a structure increases, the likelihood of its appearance must approach zero. To test larger structures as well, it makes sense to do something like "pick a size between 1 and 1M and then generate a structure of that size", so we have a naturally occurring bound in such tests. But I still have doubts whether it wouldn't be better to either:</div><div><ul><li>Express this limit explicitly for generators of such structures of variable size, like in vectorOf (see the sketch after this list), or</li><li>Define the meaning of 'size' more rigorously to make the behavior more consistent among different data structures.</li></ul></div>
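<div>A sketch of the first option, with the limit as an ordinary argument in the style of vectorOf (listUpTo and prop_example are names I made up):</div><div><pre>
import Test.QuickCheck

-- The caller states the bound explicitly, so nothing depends on
-- an implicitly threaded size parameter.
listUpTo :: Int -> Gen a -> Gen [a]
listUpTo limit g = do
  n <- choose (0, limit)
  vectorOf n g

-- e.g. lists of up to a million elements, bounded by construction:
prop_example :: Property
prop_example =
  forAll (listUpTo 1000000 (arbitrary :: Gen Int)) $ \xs ->
    length xs <= 1000000
</pre></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">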
<br>
Some properties can take a while to test on larger examples (either <br>
because of the generators or the actual verification process) so we <br>
might like to keep the size small during development, and raise it again <br>
once we're done.<br>
<br>
The Arbitrary class assigns a "default" generator for every type. While <br>
it is not always a good choice, having a parameter to tweak makes <br>
Arbitrary more generally useful.<br></blockquote><div><br></div><div>Here I'd argue that since size has no precisely defined meaning, tweaking Arbitrary instances by modifying the size parameter produces very vague results. If one needs at least some level of confidence in the behavior of a test, it's usually necessary to use parametrized functions that document the distribution of the values depending on their parameters.</div>
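<div>For example, a parametrized generator in the style I mean, where the arguments document the distribution directly instead of reinterpreting an opaque size (genAsciiString is a name I made up):</div><div><pre>
import Test.QuickCheck

-- | Strings whose length is uniform in [minLen, maxLen] and whose
-- characters are uniform over printable ASCII. The distribution is
-- fully described by the signature and this comment; no hidden size.
genAsciiString :: Int -> Int -> Gen String
genAsciiString minLen maxLen = do
  n <- choose (minLen, maxLen)
  vectorOf n (choose (' ', '~'))
</pre></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">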
<br>
As for your last point, small examples are faster to generate and check, <br>
so it seems like a decent strategy to start small by default.<br></blockquote><div><br></div><div>I'd say that this applies only while fixing a bug, as you mentioned above. As long as the test is failing, starting with a smaller example indeed makes it fail faster. But for tests in continuous integration, for example, we expect all of them to succeed, and we need to test both small and large inputs, so there the advantage of starting with small ones vanishes.</div><div><br></div>
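<div>In CI one can of course compensate by running the same property at several size ranges; quickCheckWith, stdArgs and maxSize are real QuickCheck API, and prop_reverse is just a stand-in property:</div><div><pre>
import Test.QuickCheck

prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

main :: IO ()
main = do
  quickCheck prop_reverse                                  -- default sizes, up to 100
  quickCheckWith stdArgs { maxSize = 10000 } prop_reverse  -- much larger inputs
</pre></div><div><br></div><div>Best,<br>Petr</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">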
<br>
Li-yao<br>
</blockquote></div></div></div>