
Nginx+PHP-fpm environment performance parameter optimization method

1. Larger worker_processes values help, but the performance gain becomes negligible beyond a certain number.

2. worker_cpu_affinity: spreading the worker processes evenly across all CPUs performs better than letting each worker process be scheduled across CPUs. Independent of the PHP workload, the tests performed best when worker_processes was twice the number of CPU cores.
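As an illustration only, a matching nginx.conf fragment for a 2-core machine (the exact values are assumptions, not taken from the article's configuration):

worker_processes 4;
worker_cpu_affinity 01 10 01 10;   # spread 4 workers evenly over 2 CPUs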

3. A unix domain socket (a shared-memory style transport) performs better than a TCP port between nginx and php-fpm.

Without the backlog tuned, request throughput jumps by an order of magnitude, but the error rate exceeds 50%.

With the backlog tuned, performance improves by roughly 10%.
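A minimal sketch of the switch, using the /tmp/php-fpm.socket path that appears in the error message in point 4 (directive names follow the current ini-style php-fpm configuration):

; php-fpm configuration
listen = /tmp/php-fpm.socket

# nginx location block
fastcgi_pass unix:/tmp/php-fpm.socket;   # instead of: fastcgi_pass 127.0.0.1:9000;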

4. Tuning the backlog of nginx, php-fpm, and the kernel reduces the number of "connect() to unix:/tmp/php-fpm.socket failed (11: Resource temporarily unavailable) while connecting to upstream" errors.

nginx:

In the server block of the configuration file:

listen 80 default backlog=1024;

php-fpm:

In the configuration file:

listen.backlog = 2048

Kernel parameters:

In /etc/sysctl.conf; these values must not be lower than the configuration above:

net.ipv4.tcp_max_syn_backlog = 4096

net.core.netdev_max_backlog = 4096
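After editing /etc/sysctl.conf, the new values can be loaded with the standard command (not quoted in the original article):

sysctl -p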

5. Running multiple php-fpm master instances on a single server increases fpm's processing capacity and reduces the probability of errors being returned.

Multi-instance startup method, using multiple configuration files:

/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm.conf &

/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm1.conf &
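Each configuration file must point its pool at a distinct listen address; a hedged sketch matching the socket paths used in the upstream block below (the exact file contents are assumptions):

; /usr/local/php/etc/php-fpm.conf
listen = /var/www/php-fpm.sock

; /usr/local/php/etc/php-fpm1.conf
listen = /var/www/php-fpm1.sock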

nginx fastcgi configuration

upstream phpbackend {
    # server 127.0.0.1:9000 weight=100 max_fails=10 fail_timeout=30;
    # server 127.0.0.1:9001 weight=100 max_fails=10 fail_timeout=30;
    # server 127.0.0.1:9002 weight=100 max_fails=10 fail_timeout=30;
    # server 127.0.0.1:9003 weight=100 max_fails=10 fail_timeout=30;
    server unix:/var/www/php-fpm.sock weight=100 max_fails=10 fail_timeout=30;
    server unix:/var/www/php-fpm1.sock weight=100 max_fails=10 fail_timeout=30;
    server unix:/var/www/php-fpm2.sock weight=100 max_fails=10 fail_timeout=30;
    server unix:/var/www/php-fpm3.sock weight=100 max_fails=10 fail_timeout=30;
    # server unix:/var/www/php-fpm4.sock weight=100 max_fails=10 fail_timeout=30;
    # server unix:/var/www/php-fpm5.sock weight=100 max_fails=10 fail_timeout=30;
    # server unix:/var/www/php-fpm6.sock weight=100 max_fails=10 fail_timeout=30;
    # server unix:/var/www/php-fpm7.sock weight=100 max_fails=10 fail_timeout=30;
}

location ~ \.php$ {
    fastcgi_pass phpbackend;
    # fastcgi_pass unix:/var/www/php-fpm.sock;
    fastcgi_index index.php;
    ……….
}
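After changing the upstream or location configuration, the usual step (not shown in the original article) is to validate and reload nginx:

nginx -t
nginx -s reload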

6. Test environment and results

Memory: 2 GB

Swap: 2 GB

CPU: 2 cores, Intel(R) Xeon(R) CPU E5405 @ 2.00GHz

The tests were run remotely with ab (ApacheBench); the test program is a PHP string-processing script.
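The exact ab invocation is not given in the article; a representative command for the 100,000-request, 500-concurrency case would be (host and script name are hypothetical):

ab -n 100000 -c 500 http://testserver/string_test.php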

1) With 4 php-fpm instances, 8 nginx worker_processes (4 per CPU), nginx backlog 1024, php backlog 2048, kernel backlog 4096, unix domain socket connections, and all other parameters unchanged:

Performance and error rate are reasonably balanced and acceptable. Beyond 4 fpm instances performance begins to decline, and the error rate does not drop noticeably.

The conclusion is that performance is higher when the number of fpm instances, the number of worker_processes, and the number of CPUs keep a multiple relationship.

The parameters that affect performance and error rate are: the number of php-fpm instances, the number of nginx worker_processes, fpm's max_requests, php's backlog, and the use of a unix domain socket.

With 100,000 requests, there were no errors at 500 concurrency; at 1000 concurrency the error rate was 0.9%.

500 concurrency:

Time taken for tests:   25 seconds (avg.)
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    4000 [#/sec] (mean, avg.)
Time per request:       122.313 [ms] (mean)
Time per request:       0.245 [ms] (mean, across all concurrent requests)
Transfer rate:          800 [Kbytes/sec] received (avg.)

1000 concurrency:

Time taken for tests:   25 seconds (avg.)
Complete requests:      100000
Failed requests:        524
   (Connect: 0, Length: 524, Exceptions: 0)
Write errors:           0
Non-2xx responses:      524
Requests per second:    3903.25 [#/sec] (mean)
Time per request:       256.197 [ms] (mean)
Time per request:       0.256 [ms] (mean, across all concurrent requests)
Transfer rate:          772.37 [Kbytes/sec] received

2) With all other parameters unchanged, replacing the unix domain socket with a TCP port connection, the results are as follows:

500 concurrency:

Concurrency Level:      500
Time taken for tests:   26.934431 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    3712.72 [#/sec] (mean)
Time per request:       134.672 [ms] (mean)
Time per request:       0.269 [ms] (mean, across all concurrent requests)
Transfer rate:          732.37 [Kbytes/sec] received

1000 concurrency:

Concurrency Level:      1000
Time taken for tests:   28.385349 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Requests per second:    3522.94 [#/sec] (mean)
Time per request:       283.853 [ms] (mean)
Time per request:       0.284 [ms] (mean, across all concurrent requests)
Transfer rate:          694.94 [Kbytes/sec] received

Compared with 1), this is roughly a 10% performance drop.

7. (5.16) Raising fpm's max_requests parameter to 1000 reduces the errors at 1000 concurrency to fewer than 200.

The transfer rate is around 800 [Kbytes/sec].
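In the php-fpm pool configuration this corresponds to the pm.max_requests directive (the article's config file format may differ on older php-fpm builds); a minimal sketch:

pm.max_requests = 1000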
