[web-devel] questions about ResponseEnumerator
Gregory Collins
greg at gregorycollins.net
Sun Oct 23 17:09:12 CEST 2011
On Sat, Oct 22, 2011 at 10:20 PM, Michael Snoyman <michael at snoyman.com> wrote:
>
> I think Greg's/Snap's approach of a separate timeout for the status
> and headers is right on the money. It should never take more than one
> timeout cycle to receive a full set of headers, regardless of how
> slow the user's connection is, and given a reasonable timeout
> setting from the user (anything over 2 seconds should be fine, I'd
> guess, and our default is 30 seconds).
That's fairly uncontroversial.
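For concreteness, here's roughly what that policy looks like. This is
a sketch only -- 'recvHeaders' stands in for whatever header parser
the server uses, and none of these names come from the actual Snap
code:

    import System.Timeout (timeout)
    import qualified Data.ByteString as B

    -- The status line and headers get one fixed deadline. Nothing
    -- tickles the timeout during this phase: either the full header
    -- block arrives within 'secs' seconds or the connection dies.
    readHeadersWithDeadline :: Int             -- deadline, in seconds
                            -> IO B.ByteString -- header parser (assumed)
                            -> IO B.ByteString
    readHeadersWithDeadline secs recvHeaders = do
        m <- timeout (secs * 1000000) recvHeaders
        case m of
          Nothing -> fail "timed out reading request headers"
          Just bs -> return bs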
> The bigger question is what we do about the request body. A simple
> approach might be that if we receive a packet from the client that
> is smaller than a certain size (user-defined; maybe 2048 bytes is a
> good default), it does not tickle the timeout at all. Obviously this
> means a malicious program could be devised to send precisely 2048
> bytes per timeout cycle... but I don't think there's any way to do
> better than this.
This doesn't really work either. I've already posted code in this
thread for what I think is the only reasonable option, which is rate
limiting. The way we've implemented rate limiting (sketched below) is:
1) any individual data packet must arrive within N seconds (the
   usual timeout);
2) when you receive a packet, you compute the observed data rate in
   bytes per second -- if it's lower than X bytes/sec (where X is a
   policy decision left up to the user), the connection is killed;
3) the check from 2) only kicks in after Y seconds, to cover cases
   where the client needs to do some expensive initial setup. Y is
   also a policy decision.
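In pseudo-Haskell, the checks from 2) and 3) boil down to something
like the following. This is an illustration, not the code I posted
earlier in the thread: the names are made up, rule 1) is just the
ordinary per-packet timeout and isn't shown, and how the server
samples the clock is left to its event loop.

    -- Per-connection policy check: X is the minimum acceptable rate,
    -- Y the grace period. Returns False when the connection should
    -- be killed.
    connectionSurvives :: Double  -- X: minimum rate, in bytes/sec
                       -> Double  -- Y: grace period, in seconds
                       -> Double  -- seconds since the transfer began
                       -> Int     -- bytes received so far
                       -> Bool
    connectionSurvives minRate grace elapsed bytesSoFar
        | elapsed < grace = True                -- 3) grace period
        | otherwise       = rate >= minRate     -- 2) enforce the floor
      where
        rate = fromIntegral bytesSoFar / elapsed

You'd run this on every packet and kill the connection as soon as it
returns False.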
> We *have* to err on the side of allowing attacks; otherwise we'll
> end up disconnecting valid requests.
I don't agree with this. Some kinds of "valid" requests are
indistinguishable from attacks. You need to decide what's more
important: letting some guy on a 30-kilobit packet radio connection
upload a big file, or letting someone DoS your server.
> In other words, here's how I'd see the timeout code working:
>
> 1. A timeout is created at the beginning of a connection, and not
> tickled at all until all the request headers are read in.
> 2. Every time X (default: 2048) bytes of the request body are read,
> the timeout is tickled.
Note that this is basically a crude form of rate limiting (at X/T
bytes per second, where T is the timeout interval). Why not do it
"properly"?
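(Concretely: with the defaults quoted above, X = 2048 bytes and a
30-second timeout T, tickling once per X bytes already enforces

    2048 bytes / 30 sec  ~=  68 bytes/sec

so it's a rate floor in disguise, just one with no grace period and
no way to tune the rate independently of the timeout.)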
G
--
Gregory Collins <greg at gregorycollins.net>