It is said that nginx outperforms Apache under heavy load, so I downloaded it to try it out for myself.
I downloaded, compiled, and installed it. My build process is slightly unusual:
1. Remove debugging information: in the file $nginx_setup_path/auto/cc/gcc, comment out the line CFLAGS="$CFLAGS -g".
2. Since only web-server performance is being tested, FastCGI is not installed.
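Step 1 above can also be scripted. A minimal sketch, assuming GNU sed; to keep it runnable anywhere, the demo operates on a throwaway stand-in for auto/cc/gcc under /tmp rather than a real nginx source tree:

```shell
# Demo: comment out the CFLAGS="$CFLAGS -g" line, as step 1 does by hand.
# Build a throwaway stand-in for auto/cc/gcc first.
mkdir -p /tmp/nginx-demo/auto/cc
printf 'CFLAGS="$CFLAGS -g"\n' > /tmp/nginx-demo/auto/cc/gcc

# Prefix the matching line with '#' (GNU sed; '&' re-inserts the matched text).
sed -i 's|^CFLAGS="$CFLAGS -g"|#&|' /tmp/nginx-demo/auto/cc/gcc

cat /tmp/nginx-demo/auto/cc/gcc   # prints: #CFLAGS="$CFLAGS -g"
```

In a real build you would run the sed command inside the unpacked nginx source directory before running ./configure.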
./configure \
    --prefix=/opt/nginx \
    --user=www \
    --group=www \
    --with-http_stub_status_module \
    --with-http_ssl_module \
    --without-http_fastcgi_module
After the installation completed, I copied a set of static HTML pages from the production environment onto the nginx server. My nginx.conf is as follows:
worker_processes 8;
worker_rlimit_nofile 102400;

events {
    use epoll;
    worker_connections 204800;
}

http {
    include       mime.types;               # value lost in the original; stock default shown
    default_type  application/octet-stream; # value lost in the original; stock default shown
    sendfile      on;                       # value lost in the original
    tcp_nopush    on;                       # value lost in the original
    charset GBK;
    keepalive_timeout 60;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    open_file_cache max=102400;             # further parameters lost in the original

    server {
        location / {
            root  ...;                      # document root lost in the original
            index ...;                      # value lost in the original
        }
        location = ... {                    # status location path lost in the original
            stub_status on;
            access_log  off;
        }
        error_page ...;                     # values lost in the original
    }
}
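One thing worth noticing in this configuration: worker_rlimit_nofile (102400) is a per-worker file-descriptor limit, while worker_connections is 204800, and each connection needs at least one descriptor, so the descriptor limit is the effective cap. A quick sanity-check sketch, with the numbers copied from the config above:

```shell
# Compare the configured per-worker connection count against the
# per-worker file-descriptor limit from nginx.conf above.
worker_rlimit_nofile=102400
worker_connections=204800

if [ "$worker_connections" -gt "$worker_rlimit_nofile" ]; then
  echo "warning: worker_connections ($worker_connections) exceeds worker_rlimit_nofile ($worker_rlimit_nofile)"
else
  echo "ok"
fi
```

With these values the check prints the warning: in practice nginx will run out of file descriptors well before reaching 204800 connections per worker.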
[Figure: first group of charts for nginx (requests and transfer rate vs. concurrency); original image 0Z63U295-2.png lost]

For the first group of charts, a few points need explanation.

"Concurrency" on the horizontal axis is the number of simultaneous client connections generated by the load tool.

The typical example of long (keep-alive) connections is HTTP 1.1, while HTTP 1.0 is the typical short-connection case.

On the "transfer rate" chart, a vertical-axis reading of 100,000 (KB/sec) is roughly 100 MB/sec, enough to come close to saturating a gigabit link (ignoring CSMA/CD and other protocol overhead).
For the second group of charts, a few points also need explanation. The production-environment result should fall between the blue line and the red line, so it needs no separate analysis. "Longest Response Time" actually takes the time within which 99% of all requests complete, which filters out a few outliers. As pressure increases, a spike in response time is to be expected, but how much is acceptable? At the 2009 System Architects Conference, Tencent's Qiu Yuepeng mentioned the "1-3-10 principle of user speed experience" in his talk "Flexible Operation of Massive SNS Websites":
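The 99% cutoff can be illustrated with a toy example. A minimal sketch (the response times below are made-up numbers, not data from this test): taking the value below which 99% of samples fall hides a single pathological outlier.

```shell
# Toy data: ten response times in ms, one of them a 500 ms outlier.
printf '%s\n' 12 15 13 14 500 16 13 12 14 15 > /tmp/times.txt

# 99th percentile via sort + awk: index = int(N * 0.99), clamped to >= 1.
sort -n /tmp/times.txt | awk '{ a[NR] = $1 }
  END { i = int(NR * 0.99); if (i < 1) i = 1; print a[i] }'
# prints 16: the 500 ms outlier is shielded
```

This is why "Longest Response Time" as defined here stays smooth even when a handful of requests go badly wrong.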
Put simply, if a 3-second response time is the standard, nginx can handle no more than 10,000 concurrent connections; if 10 seconds is the standard, it can handle fewer than 15,000. Of course, situations differ: if your users cannot stand even 0.3 seconds, that is another story. Suppose instead that users are willing to wait as long as the server returns content without errors such as "connection reset" or "server not responding"; how many concurrent connections can nginx handle then? I ran a test myself: 20,000 long connections plus 20,000 short connections hitting nginx at the same time. What was the result?
Nginx held up and did not crash. I tried to raise the pressure further, but the test could no longer run to completion, so I gave up.
As the saying goes, quality shows only in comparison. So how does the famous Apache fare? Before that, you may want to look at this post: "Guess what such a Linux server can handle". My Apache uses the worker MPM; the configuration is as follows (the original showed it as an image, which has been lost):
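Since the original configuration image did not survive, here is a sketch of what a typical worker-MPM section looks like. The values below are the stock Apache 2.2 defaults, shown for illustration only, not the author's actual settings:

```apache
# Typical Apache worker-MPM settings (stock defaults, for illustration only)
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
```

For a stress test like this one, MaxClients and ThreadsPerChild are the knobs that bound how many concurrent connections Apache can accept.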
[Figure: Apache short-connection results (1/10 of samples shown)]
I put the nginx and Apache charts together for easy comparison:

[Figure: nginx vs. Apache comparison charts; original image 0Z63U637-13.png lost]

From the charts we can see that when nginx acts as a plain web server, that is, serving static content, its performance is better than Apache's, especially in resistance to load, bandwidth, and resource consumption. This may be why many large websites like to put nginx at the front end.
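The "nginx at the front end" pattern that the conclusion refers to can be sketched in a few lines of nginx config. A minimal example (the paths and backend address are placeholders, not taken from this article):

```nginx
# Front-end nginx: serve static files directly, proxy everything else
# to a backend application server (address is an example).
server {
    listen 80;

    location /static/ {
        root /var/www;                      # static content served by nginx itself
    }

    location / {
        proxy_pass http://127.0.0.1:8080;   # dynamic requests go upstream
        proxy_set_header Host $host;
    }
}
```

This way the component that is best at static content and high concurrency absorbs the bulk of the connections, while the backend only sees the requests that actually need application logic.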
This article is from the internet and does not represent 1024programmer's position. Please indicate the source when reprinting: https://www.1024programmer.com/performance-efficiency-test-comparison-between-nginx-and-apache/