Several points in the second group of charts deserve analysis:
The production-environment results fall between the blue line and the red line, so they need no separate analysis.
“Longest Response Time” here is actually the time within which 99% of all requests complete, which shields the metric from a few outliers.
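To see why a 99th-percentile cutoff shields outliers while a raw maximum does not, here is a minimal sketch (the sample data and the `percentile` helper are my own illustration, not from the original test):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * pct / 100))
    return ordered[rank - 1]

# 99 fast requests plus one pathological outlier (made-up numbers)
response_times_ms = [15] * 99 + [5000]

print(max(response_times_ms))             # 5000 -- raw max is dominated by the outlier
print(percentile(response_times_ms, 99))  # 15   -- p99 ignores the single outlier
```

A single stuck request thus no longer distorts the reported “longest” response time.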
As pressure increases, response-time spikes are to be expected, but how much is acceptable? At the 2009 System Architects Conference, Tencent’s Qiu Yuepeng mentioned the “1-3-10 principle of user speed experience” in his talk “Flexible Operation of Massive SNS Websites”:
Roughly speaking, if a 3-second response time is the standard, nginx can handle no more than 10,000 concurrent connections; if a 10-second response time is the standard, it can handle fewer than 15,000. Of course, situations differ: if your users cannot stand even 0.3 seconds, that is another story.
Suppose instead that I am willing to wait as long as the server returns content without errors such as “connection reset” or “server no response” — how many concurrent connections can nginx handle then? I ran a test myself: 20,000 long connections plus 20,000 short connections hitting nginx at the same time. The result?
nginx held up and did not crash. I tried to increase the pressure further, but could not complete the test and gave up.
As the saying goes, it is not unfamiliar goods you should fear, but a side-by-side comparison. So how does the famous Apache fare? Before answering, take a look at this post — “Guess the maximum concurrency Apache can reach on such a Linux server”. The server in that post is better than mine, yet more than 70% of respondents thought it could not break the 3,000 mark. Well, “don’t trust the advertisement, check the actual effect”.
My Apache runs in worker mode, configured as follows:
ServerLimit          1000
ThreadLimit          11000
StartServers         40
MaxClients           30000
MinSpareThreads      1000
MaxSpareThreads      1000
ThreadsPerChild      300
MaxRequestsPerChild  0
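Under the worker MPM these directives constrain each other: MaxClients may not exceed ServerLimit × ThreadsPerChild, and ThreadsPerChild may not exceed ThreadLimit. A quick sanity check of the values above (my own arithmetic, not part of the original post):

```python
# Directives from the Apache worker-MPM configuration above
config = {
    "ServerLimit": 1000,
    "ThreadLimit": 11000,
    "StartServers": 40,
    "MaxClients": 30000,
    "MinSpareThreads": 1000,
    "MaxSpareThreads": 1000,
    "ThreadsPerChild": 300,
    "MaxRequestsPerChild": 0,
}

# Hard ceiling on simultaneous clients: processes * threads per process
thread_ceiling = config["ServerLimit"] * config["ThreadsPerChild"]

assert config["ThreadsPerChild"] <= config["ThreadLimit"]
assert config["MaxClients"] <= thread_ceiling

print(thread_ceiling)  # 300000 -- MaxClients 30000 needs only 100 worker processes
```

So the configured MaxClients of 30,000 is comfortably within the process/thread ceiling and is not what limited Apache in the test.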
Apache short connection results (1/10 sample display)
Apache’s result graph looks similar to nginx’s, but note the x-axis: its maximum is 10,000, while nginx’s is 20,000. That is because at 10,000 concurrent connections Apache can no longer cope: either swap is exhausted or connections time out.
I put the nginx and Apache charts together for easy comparison:
The charts show that as a simple web server serving static content, nginx performs better than Apache, especially in pressure resistance, bandwidth, and resource consumption. This may be why many large websites like to put nginx on the front end.