
Nginx Toss Notes (HTTP performance test, compared with Apache)

It is said that nginx ("engine x") performs better than Apache under heavy load, so I downloaded a copy to experiment with.

Download, compile, and install. My build process was slightly unusual:

1. Remove debugging information: edit the file $nginx_setup_path/auto/cc/gcc and comment out the line CFLAGS="$CFLAGS -g".

2. Since only static web-serving performance is being tested, FastCGI is not installed.

./configure \
  --prefix=/opt/nginx \
  --user=www \
  --group=www \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --without-http_fastcgi_module

After installation, I copied a batch of static HTML pages from the production environment onto the nginx server. My nginx.conf is as follows:

worker_processes 8;
worker_rlimit_nofile 102400;

events
{
    use epoll;
    worker_connections 204800;
}

http
{
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    charset GBK;
    keepalive_timeout 60;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    open_file_cache max=102400 inactive=20s;

    server
    {
        listen 80;

        location /
        {
            root /tmp/webapps/;
            index index.html index.htm;
        }

        location = /NginxStatus
        {
            stub_status on;
            access_log off;
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html
        {
            root html;
        }
    }
}
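The /NginxStatus location above exposes the stub_status module's plain-text counters, which are handy for watching the server during a load test. As a minimal sketch, here is how that report can be parsed (the sample text below is illustrative, but the field labels follow stub_status's actual output format; in practice you would fetch http://server/NginxStatus instead of hardcoding it):

```python
# Parse the plain-text report served by nginx's stub_status module.
# The sample below is hardcoded for illustration; in a real script,
# fetch it from the /NginxStatus location configured above.
import re

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(text):
    """Return the stub_status counters as a dict of ints."""
    active = re.search(r"Active connections:\s+(\d+)", text)
    # The three bare numbers are lifetime accepts / handled / requests.
    a, h, r = re.search(r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text).groups()
    rd, wr, wt = re.search(
        r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)", text
    ).groups()
    return {
        "active": int(active.group(1)),
        "accepts": int(a), "handled": int(h), "requests": int(r),
        "reading": int(rd), "writing": int(wr), "waiting": int(wt),
    }

stats = parse_stub_status(sample)
print(stats["active"], stats["requests"])  # 291 31070465
```

Watching the "Waiting" counter during a keep-alive run is a quick way to see how many idle persistent connections the server is holding.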

To keep the operating system from becoming the bottleneck, kernel parameters were tuned as follows:

[root@logserver etc]# cat sysctl.conf | grep -v "^$" | grep -v "^#"
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
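A couple of sanity checks on the values above (my own arithmetic, with the numbers copied from the sysctl.conf listing and from nginx.conf): the ephemeral port range caps how many concurrent connections a single client IP can hold open to one server ip:port, which matters once ab and nginx share a machine, and fs.file-max must comfortably cover all worker file descriptors.

```python
# Sanity checks on the sysctl tuning above; the numbers are copied
# from the sysctl.conf listing and the nginx.conf shown earlier.
settings = {
    "net.ipv4.ip_local_port_range": (1024, 65000),
    "fs.file-max": 6553600,
}

# Ephemeral ports available to a load generator on this box:
low, high = settings["net.ipv4.ip_local_port_range"]
ephemeral_ports = high - low + 1
print(ephemeral_ports)  # 63977

# fs.file-max must cover worker_processes * worker_rlimit_nofile:
workers_fds = 8 * 102400  # from nginx.conf
print(settings["fs.file-max"] >= workers_fds)  # True
```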

My server is fairly old: a DELL 2850 with two Intel(R) Xeon(TM) 2.80GHz CPUs (the OS recognizes 4 CPUs) and 4GB of memory. The OS is:

[root@logserver etc]# uname -a

Linux logserver 2.6.9-78.ELsmp #1 SMP Thu Jul 24 23:54:48 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

[root@logserver etc]# cat /etc/redhat-release

CentOS release 4.7 (Final)

The test tool is Apache's ab, used to simulate large numbers of concurrent connections. I originally ran the simulated clients in another virtual machine, but as the pressure increased I crushed the client before I crushed nginx -_-, so in the end I had to run the load generator on the server itself.

The test script is roughly as follows:

ab -n 100000 -c <client_number> [-k] http://************/cms/index.html

The size of index.html is: 123784 bytes
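Each ab run prints a summary block, and collecting many runs into a spreadsheet is easier with a small parser. A sketch follows; the sample output below is fabricated for illustration, but the field labels ("Requests per second", "Transfer rate") match ab's real output format:

```python
# Extract the headline numbers from an ab summary block.
# The sample text is fabricated; the labels follow ab's real format.
import re

ab_output = """Concurrency Level:      1000
Time taken for tests:   12.345 seconds
Complete requests:      100000
Requests per second:    8100.45 [#/sec] (mean)
Transfer rate:          97000.12 [Kbytes/sec] received
"""

def parse_ab(text):
    """Return (requests/sec, transfer rate in Kbytes/sec)."""
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", text).group(1))
    rate = float(re.search(r"Transfer rate:\s+([\d.]+)", text).group(1))
    return rps, rate

rps, rate_kbs = parse_ab(ab_output)
print(rps, rate_kbs)  # 8100.45 97000.12
```

Feeding each run's output through a parser like this is how per-concurrency-level series such as the ones charted below can be assembled.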

I organized the test data into an Excel sheet (offered for download in the original post), sampled as follows:

nginx short connection test results (1/20 sample shown)

nginx long connection test results (1/20 sample shown)

The raw numbers alone may be dull, so let's look at the charts:

Several points in the first set of charts deserve analysis.

"Concurrency Level" does not map one-to-one to browsers or users; it should be read as the number of concurrent connections. A browser typically opens 3~10 connections when visiting a page, so under normal circumstances a "client number" of 10,000 can very roughly be taken as 1,000~3,000 users.

The typical representative of long (keep-alive) connections is HTTP/1.1, and of short connections, HTTP/1.0. Browsers supporting HTTP/1.1 have been ubiquitous for a long time, so why test short connections at all? First, in real browsing a "long" connection is never as long-lived as the one in an ab test, so the short-connection score serves as a "floor"; second, some scanning tools mostly use short connections, and since we want to be on the Internet we have to "accommodate" them. In the production environment, therefore, real performance falls somewhere in the range between the red line and the blue line. Where exactly? "That cannot be stated too precisely."

Regarding the vertical axis of the "transfer rate" chart: 100,000 is equivalent to 100MB/sec, the commonly cited 100M network (ignoring losses caused by CSMA/CD); a Gigabit network, in my tests, comes in at roughly 400,000~500,000. In other words, if the egress bandwidth of the nginx server is a 100M network, the bottleneck is the network rather than nginx.
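As a back-of-envelope cross-check of that bandwidth ceiling (my own arithmetic, not part of the original test data): with the 123,784-byte test page, a link delivering 100 MB/sec can carry only about 800 full pages per second, ignoring HTTP and TCP overhead, which supports the conclusion that the network saturates before nginx does.

```python
# Back-of-envelope: pages per second that saturate a link of a given
# capacity, ignoring protocol overhead. PAGE_BYTES is the size of the
# index.html used in the test above.
PAGE_BYTES = 123784

def pages_per_sec(link_mb_per_sec):
    """Whole pages deliverable per second at the given MB/s capacity."""
    return link_mb_per_sec * 1_000_000 // PAGE_BYTES

print(pages_per_sec(100))  # 807
```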

This article is from the internet and does not represent 1024programmer's position. Please indicate the source when reprinting: https://www.1024programmer.com/nginx-toss-notes-http-performance-test-compared-with-apache/

author: admin
