
Performance efficiency test comparison between Nginx and Apache

Nginx is said to outperform Apache under heavy load, so I downloaded a copy to try it out.

I downloaded, compiled, and installed it. My build process is slightly unusual:

1. Remove debugging information: in $nginx_setup_path/auto/cc/gcc, comment out the line CFLAGS=”$CFLAGS -g”.

2. Since only web-server performance is being tested, FastCGI is not installed.
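Step 1 can be automated with sed. The demo below works on a scratch copy so it is self-contained; in the real source tree the target file is auto/cc/gcc:

```shell
# make a scratch copy of the relevant line (the real file is auto/cc/gcc)
printf 'CFLAGS="$CFLAGS -g"\n' > /tmp/cc_gcc_demo

# comment the line out in place, keeping a .bak backup
sed -i.bak 's/^CFLAGS="$CFLAGS -g"$/#&/' /tmp/cc_gcc_demo

cat /tmp/cc_gcc_demo     # -> #CFLAGS="$CFLAGS -g"
```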

./configure \
  --prefix=/opt/nginx \
  --user=www \
  --group=www \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --without-http_fastcgi_module

After installation I copied a batch of static HTML pages from the production environment onto the nginx server. My nginx.conf is as follows:

worker_processes 8;
worker_rlimit_nofile 102400;

events
{
    use epoll;
    worker_connections 204800;
}

http
{
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    charset GBK;
    keepalive_timeout 60;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    open_file_cache max=102400 inactive=20s;

    server
    {
        listen 80;

        location /
        {
            root /tmp/webapps/;
            index index.html index.htm;
        }

        location = /NginxStatus
        {
            stub_status on;
            access_log off;
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html
        {
            ...
        }
    }
}

[Figure: first group of charts — nginx long- and short-connection test results]
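The /NginxStatus location above exposes nginx's stub_status counters. The response has a fixed, documented layout, so the interesting numbers are easy to pull apart; the sample figures below are made up for illustration:

```shell
# a stub_status response has this documented shape; the numbers are made up
status='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

# pull out a few counters with awk
printf '%s\n' "$status" | awk '
    NR == 1 { print "active="   $3 }   # currently open connections
    NR == 3 { print "requests=" $3 }   # total requests served
    NR == 4 { print "waiting="  $6 }   # idle keep-alive connections
'
```

Against a live server, the same parsing works on `curl -s http://127.0.0.1/NginxStatus`.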

There are several points worth analyzing in the first group of charts.

“Concurrency Level” does not correspond to a number of browsers or users; it should be understood as the number of concurrent connections. A browser typically opens 3~10 connections when visiting a page, so under normal circumstances 10,000 concurrent connections corresponds, very roughly, to 1,000~3,000 users.
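The arithmetic behind that rough estimate, assuming 3~10 connections per browser:

```shell
# back-of-the-envelope: users implied by 10,000 concurrent connections
conns=10000
echo "roughly $((conns / 10)) to $((conns / 3)) users"
```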

HTTP/1.1 is the typical representative of long (keep-alive) connections, while HTTP/1.0 represents short connections. Browsers supporting HTTP/1.1 have been ubiquitous for years, so why test short connections at all? First, in real browsing a “long” connection is never as long as the “long” connections in an ab test, so the short-connection score serves as a bottom line. Second, some scanning tools mostly use short connections, and since we live on the Internet we have to “take care of” them too. In a production environment, real performance therefore falls somewhere in the range between the red line and the blue line. Where exactly? “That cannot be said in too much detail.”
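The short- and long-connection runs correspond to ab without and with its -k (keep-alive) flag. The URL and request counts below are illustrative, not the article's exact invocation:

```shell
URL="http://127.0.0.1/index.html"     # hypothetical test target

# short connections: one request per TCP connection (HTTP/1.0-style)
short_cmd="ab -n 100000 -c 10000 $URL"

# long connections: keep-alive enabled via -k (HTTP/1.1-style)
long_cmd="ab -n 100000 -c 10000 -k $URL"

printf '%s\n%s\n' "$short_cmd" "$long_cmd"
```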

Regarding the vertical axis of the “transfer rate” graph: 100,000 corresponds to roughly 100 MB/sec, the so-called 100M network (ignoring the losses caused by CSMA/CD), while a gigabit network, by my tests, lands somewhere between 400,000 and 500,000. In other words, if the nginx server's outbound bandwidth is a 100-megabit network, the bottleneck is the network rather than nginx.

For the second group of pictures, there are several places that need to be analyzed:

As before, the production-environment result will fall between the blue line and the red line, so it needs no separate analysis.

“Longest Response Time” here actually takes the time within which 99% of all requests complete, which screens out outliers.
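That 99%-of-requests cut can be read straight off ab's percentile table, or computed from raw timings. A sketch with made-up samples in milliseconds, where the single 500 ms outlier gets screened out:

```shell
# made-up response times (ms); the 500 ms outlier should be excluded
printf '%s\n' 12 15 18 20 22 25 30 35 40 500 | sort -n |
    awk '{ a[NR] = $1 }
         END { i = int(NR * 0.99); if (i < 1) i = 1; print a[i] "ms" }'
```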

A spike in response time as pressure increases is to be expected, but how much is acceptable? At the 2009 System Architects Conference, Tencent’s Qiu Yuepeng mentioned the “1-3-10 principle of user speed experience” in his talk “Flexible Operation of Massive SNS Websites”:

Roughly speaking, with a 3-second response time as the standard, nginx can handle no more than 10,000 concurrent connections; with a 10-second standard, fewer than 15,000. Of course, situations differ; if your users cannot tolerate even 0.3 seconds, that is another story.

Suppose I assume that as long as the server returns content, without errors like “connection reset” or “server no response”, users are willing to wait. How many concurrent connections can nginx handle then? I ran a test myself: 20,000+20,000 long connections plus 20,000 short connections, all pressing on nginx at the same time. The result?

Nginx held up and did not crash. I tried to increase the pressure further, but could not finish the test and gave up.

As the saying goes, it is not ignorance you fear but comparison. How does the famous Apache fare? Before that, you can take a look at this post: “Guess the maximum Apache concurrency of such a Linux server?” The server mentioned in the post is even better than mine, yet more than 70% of respondents thought it would not break the 3,000 mark. Let's “look at the results instead of the advertisement”.

My Apache uses the worker MPM, configured as follows:


ServerLimit 1000

ThreadLimit 11000

StartServers 40

MaxClients 30000

MinSpareThreads 1000

MaxSpareThreads 1000

ThreadsPerChild 300

MaxRequestsPerChild 0
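As I understand the worker MPM, MaxClients must not exceed ServerLimit × ThreadsPerChild, and ThreadLimit must be at least ThreadsPerChild. A quick sanity check of the values above:

```shell
ServerLimit=1000; ThreadsPerChild=300; MaxClients=30000; ThreadLimit=11000

# check: MaxClients <= ServerLimit * ThreadsPerChild, ThreadLimit >= ThreadsPerChild
[ $((ServerLimit * ThreadsPerChild)) -ge "$MaxClients" ] &&
[ "$ThreadLimit" -ge "$ThreadsPerChild" ] &&
echo "ok: $((MaxClients / ThreadsPerChild)) processes cover MaxClients"
```

So 100 of the 1000 allowed processes suffice for 30,000 clients; the limits are consistent.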

Apache short connection results (1/10 sample display)


The shape of the Apache result graph is similar to nginx's, but note the abscissa: its maximum is 10,000, versus 20,000 for nginx. This is because once the test reached 10,000 concurrent connections, Apache could no longer bear the load; either swap was exhausted or connections timed out.


I put the nginx and Apache charts together for easy comparison:


The charts show that as a pure web server serving static content, nginx performs better than Apache, especially in load resistance, bandwidth, and resource consumption. This is probably why many large websites like to put nginx at the front end.


This article is from the internet and does not represent 1024programmer's position. Please indicate the source when reprinting: https://www.1024programmer.com/performance-efficiency-test-comparison-between-nginx-and-apache/
