There are two nginx modules that limit connections: one is limit_zone and the other is limit_req_zone. Both can limit connections, but what exactly is the difference?
The following is the explanation given on the nginx official website
limit_req_zone
Limit frequency of connections from a client.
This module allows you to limit the number of requests for a given
session, or as a special case, with one address.
Restriction done using leaky bucket.
limit_zone
Limit simultaneous connections from a client.
This module makes it possible to limit the number of simultaneous
connections for the assigned session or as a special case, from one
address.
Taken literally, limit_req_zone limits a client's request frequency using the leaky bucket principle (this module allows you to limit the number of requests for a given session, or from a single address), while limit_zone limits the number of concurrent connections from a client (this module limits the number of simultaneous connections for a given session, or from a single address).
So one limits concurrent connections and the other limits request frequency. On the surface there seems to be little difference, so let's look at the actual effect.
I added these two parameters on my test machine; the following is part of my configuration file:
http {
    limit_zone one $binary_remote_addr 10m;
    #limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
    server
    {
        …
        limit_conn one 1;
        #limit_req zone=req_one burst=120;
        ……
    }
}
To explain limit_zone one $binary_remote_addr 10m; — here one declares the zone's name, $binary_remote_addr is used in place of the $remote_addr variable (its fixed, compact encoding lets more client states fit in the zone), and 10m is the shared-memory size for storing session states.
limit_conn one 1; limits each client to 1 concurrent connection.
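As an aside (my addition, based on later nginx releases rather than the version tested here): limit_zone was eventually deprecated in favor of limit_conn_zone, which uses the same zone=name:size syntax as limit_req_zone. A minimal sketch of the equivalent configuration:

```nginx
http {
    # one shared-memory zone, keyed by the client address
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        # at most 1 concurrent connection per client address
        limit_conn addr 1;
    }
}
```

The semantics match the limit_zone one $binary_remote_addr 10m; plus limit_conn one 1; configuration above.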
Test the limit_zone module first
I found another machine to run the test with ab. The command is:
ab -c 100 -t 10 http://192.168.6.26/test.php
The content of test.php is just phpinfo().
Look at the access in the log
[screenshot: http://upload.server110.com/image/20130821/0923421Q2-0.jpg]
It seems it cannot strictly enforce one concurrent connection per client (a reader told me this is because the test file itself is too small; do test it yourself if you have time). As the log shows, apart from a few 200s the responses are basically all 503, and most of the concurrent requests get 503.
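To tally results like these, count the status codes in the access log with awk. A small self-contained sketch (the sample lines below are made-up stand-ins for a real combined-format log, where the status code is field 9):

```shell
# Count responses by status code; field 9 of nginx's combined log format
# is the HTTP status. Against a real log you would run:
#   awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
counts=$(printf '%s\n' \
  '192.168.6.1 - - [21/Aug/2013:09:23:42 +0800] "GET /test.php HTTP/1.0" 200 1024' \
  '192.168.6.1 - - [21/Aug/2013:09:23:42 +0800] "GET /test.php HTTP/1.0" 503 213' \
  '192.168.6.1 - - [21/Aug/2013:09:23:43 +0800] "GET /test.php HTTP/1.0" 503 213' |
  awk '{print $9}' | sort | uniq -c | sort -rn)
echo "$counts"
```

This prints a count per status code, most frequent first, which makes the 200/503 ratio obvious at a glance.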
I ran ab for a while longer and found another situation:
[screenshot: http://upload.server110.com/image/20130821/0923425D7-1.jpg]
It seems that as the request count rises the behavior changes as well; it does not fully achieve the effect described in the module's documentation.
Look at the current number of TCP connections:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
TIME_WAIT 29
FIN_WAIT1 152
FIN_WAIT2 2
ESTABLISHED 26
SYN_RECV 16
Next, test limit_req_zone. The configuration file changes slightly:
http {
    #limit_zone one $binary_remote_addr 10m;
    limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
    server
    {
        …
        #limit_conn one 1;
        limit_req zone=req_one burst=120;
        ……
    }
}
Restart nginx.
To explain briefly: rate=1r/s means each address may only make one request per second, per the leaky bucket principle (corrected from my earlier "token bucket" thanks to reader Bingbing).
burst=120 means up to 120 excess requests may wait in the bucket; the bucket drains at one request per second, and once it is full, further requests return 503.
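As a back-of-envelope check (my own rough model, not something from the nginx docs): over a run of t seconds, roughly rate*t requests drain out of the queue, and up to burst more can be waiting in it, so everything beyond about burst + rate*t gets a 503:

```shell
# Rough model of limit_req with rate=1r/s, burst=120 under "ab -c 100 -t 30":
# the queue drains at $rate req/s and holds at most $burst waiting requests,
# so roughly burst + rate*duration requests avoid a 503 during the run.
rate=1
burst=120
duration=30
accepted=$((burst + rate * duration))
echo "at most ~$accepted requests accepted in ${duration}s; the rest get 503"
```

With the numbers above that is about 150 requests, which matches the pattern seen later: the longer the run, the larger the share of 503s.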
Test it:
ab -c 100 -t 10 http://192.168.6.26/test.php
Take a look at the access log at this time
[screenshot: http://upload.server110.com/image/20130821/0923421A1-2.jpg]
It is indeed one request per second. What about running it longer? Increase the time from 10 seconds to 30 seconds:
[screenshot: http://upload.server110.com/image/20130821/0923424213-3.jpg]
By this point 120 is apparently not enough: there are many 503s, and two other situations appeared as well, see the pictures.
[screenshot: http://upload.server110.com/image/20130821/09234212X-4.jpg]
It’s very likely that some requests in the queue timed out without being responded to, but I’m not sure if that’s the case.
[screenshot: http://upload.server110.com/image/20130821/0923421I5-5.jpg]
The client couldn't wait any longer and disconnected, which nginx logs as status 499.
Look at the current number of TCP connections:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
TIME_WAIT 51
FIN_WAIT1 5
ESTABLISHED 155
SYN_RECV 12
Although this makes nginx process only one request per second, many requests still sit in the queue waiting to be processed, and they also tie up a lot of TCP connections, as the output of the command above shows.
What about adding nodelay?
limit_req zone=req_one burst=120 nodelay;
After adding nodelay, requests exceeding the burst size will directly return 503, as shown in the figure
[screenshot: http://upload.server110.com/image/20130821/0923423N8-6.jpg]
It still averages one request per second, but the excess requests no longer wait in the queue to be processed as before; they return 503 immediately.
Current TCP connections:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
TIME_WAIT 30
FIN_WAIT1 15
SYN_SENT 7
FIN_WAIT2 1
ESTABLISHED 40
SYN_RECV 37
The number of connections is lower than before.
Through this test I found that neither module imposes an absolute limit, but both clearly help reduce concurrency and limit connections. Whether to use one of them in a production environment, or both together, depends on your needs.
That's the end of the test. If anything in this article is wrong, please correct me promptly; thank you.