[web-devel] questions about ResponseEnumerator

Greg Weber greg at gregweber.info
Sun Oct 23 21:35:27 CEST 2011


On Sun, Oct 23, 2011 at 11:37 AM, Gregory Collins
<greg at gregorycollins.net> wrote:

> On Sun, Oct 23, 2011 at 6:55 PM, Greg Weber <greg at gregweber.info> wrote:
> > Apache is considered vulnerable to slowloris because it has a limited
> > thread pool. Nginx is not considered vulnerable to slowloris because
> > it uses an evented architecture and by default drops connections that
> > have not been completed after 60 seconds. Technically we say our
> > Haskell web servers are using threads, but they are managed by a very
> > fast evented system. So we can hold many unused connections open like
> > Nginx and should not be vulnerable if we have a timeout that cannot be
> > tickled. This could make for an interesting benchmark - how many
> > slowloris connections can we take on? The code from Kazu makes just
> > one connection - it does not demonstrate a successful slowloris
> > attack, just one successful slowloris connection.
>
> Slowloris causes problems with any scarce resource -- threads in a
> pool, as you mentioned, but a bigger problem for us is running out of
> file descriptors. If the client is allowed to hold connections open
> for long enough, an attacker should be able to run the server out of
> file descriptors using only a handful of machines.
>
>
Good point. The equation for the number of nodes needed to pull off a
slowloris attack becomes:

    number of nodes needed =
        process file descriptor limit / connections per IP address
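
For example, with a typical default ulimit of 1024 file descriptors and
a hypothetical cap of 20 connections per IP address, an attacker needs
1024 / 20 = about 52 distinct addresses just to hold every descriptor
open at once.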

With a 60 second request timeout (like nginx), each slowloris connection
must deliver a complete request within 60 seconds or be dropped. This
also means the server will be logging information about the attack while
it is in progress.

Stopping Slowloris
* 60 second hard timeout (a minimal sketch is below)
* reject partial GET/HEAD requests
* require all the headers for any request to be sent at once
* increase the ulimit for the web server
* limit requests per IP address
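
Here is a minimal sketch of the hard timeout in Haskell - illustrative
only, not Warp's actual code. withHardTimeout and the handler it wraps
are hypothetical names; the point is that the timer is started once and
never reset, so trickling bytes cannot tickle it:

    import Control.Exception (finally)
    import Control.Monad (void)
    import Network.Socket (Socket, close)
    import System.Timeout (timeout)

    -- The clock starts when the connection is accepted and is never
    -- reset, so a client trickling one byte at a time still gets cut
    -- off at the 60 second mark.
    withHardTimeout :: (Socket -> IO ()) -> Socket -> IO ()
    withHardTimeout handler sock =
        void (timeout (60 * 1000000) (handler sock))  -- microseconds
            `finally` close sock  -- always give the descriptor back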

Limiting per IP can be done with iptables, but I don't think iptables
would know about the HTTP request method. Limiting the number of POST
requests per IP address to something small seems like an easy way to
stop a simple Slowloris attack without having to worry much about the
proxy/gateway issue. We will need to add a deploy instruction about
increasing the ulimit.
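
A sketch of what in-process per-IP limiting could look like (all of the
names here are hypothetical, and it only handles IPv4; a real server
would hook this into its accept loop):

    import Control.Concurrent.MVar (MVar, modifyMVar, modifyMVar_)
    import qualified Data.Map as Map
    import Network.Socket (HostAddress, SockAddr (..))

    type ConnCounts = MVar (Map.Map HostAddress Int)

    maxPerAddr :: Int
    maxPerAddr = 20  -- hypothetical cap; proxy/NAT traffic may need more

    -- A raw SockAddr includes the client's ephemeral port, which would
    -- defeat per-address counting, so pull out just the IPv4 host.
    peerHost :: SockAddr -> Maybe HostAddress
    peerHost (SockAddrInet _ host) = Just host
    peerHost _                     = Nothing  -- IPv6 etc. not handled

    -- Returns True if the connection is admitted; each successful
    -- acquire must be paired with a release when the connection closes.
    acquire :: ConnCounts -> HostAddress -> IO Bool
    acquire counts host = modifyMVar counts $ \m ->
        let n = Map.findWithDefault 0 host m
        in if n >= maxPerAddr
              then return (m, False)
              else return (Map.insert host (n + 1) m, True)

    release :: ConnCounts -> HostAddress -> IO ()
    release counts host =
        modifyMVar_ counts (return . Map.adjust (subtract 1) host)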

If we wanted to, as a final precaution, we could try to detect when we are
at the ulimit and start dropping connections.
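
Roughly, that detection could look like this (a sketch; acceptLoop and
handler are made-up names): when accept fails because the process is out
of descriptors (EMFILE), the IO error type is ResourceExhausted, and the
loop can back off instead of dying:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Exception (catch, throwIO)
    import Control.Monad (forever, void)
    import GHC.IO.Exception (IOErrorType (ResourceExhausted), IOException)
    import Network.Socket (SockAddr, Socket, accept)
    import System.IO.Error (ioeGetErrorType)

    -- When the process is out of file descriptors, accept throws an
    -- IOException whose error type is ResourceExhausted (EMFILE).
    -- Pause briefly instead of crashing; a fancier version could start
    -- closing the oldest open connections here.
    acceptLoop :: Socket -> (Socket -> SockAddr -> IO ()) -> IO ()
    acceptLoop listener handler = forever $
        (do (conn, peer) <- accept listener
            void (forkIO (handler conn peer)))
        `catch` \e ->
            if ioeGetErrorType e == ResourceExhausted
                then threadDelay 100000  -- 0.1s for descriptors to free
                else throwIO (e :: IOException)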


> > If we limit the number of connections per IP address, a slowloris
> > attack will require the coordination of thousands of nodes, making
> > it highly impractical. Although there may be a potential issue with
> > proxies (AOL at least used to do this, but I think just for GET)
> > wanting to make lots of connections.
>
> Yep -- this is one possible solution, although you're right about the
> proxy/NAT gateway issue potentially being a problem for some
> applications. Ultimately I think to handle this "well enough", we just
> need to be able to handle timeouts properly to deter low-grade script
> kiddies with a couple of machines at their disposal. Attackers with
> real botnets are going to be hard to stop no matter what.
>
> G
> --
> Gregory Collins <greg at gregorycollins.net>
