Commands to restart and stop php-fpm under PHP 5.4
Commands for restarting and stopping php-fpm under PHP 5.4:

Locate the PHP binary: which php  (here: /usr/bin/php)
Count php-fpm processes: ps aux | grep -c php-fpm
Check memory-related settings: /usr/bin/php -i | grep mem
Restart php-fpm through the init script: /etc/init.d/php-fpm restart

The phpinfo() output also shows the loaded PHP configuration, e.g. Loaded Configuration File: /etc/php.ini

================================

To control php-fpm by signal, first open the php-fpm.conf configuration file and check the configured pid path (not the installation path), then substitute that path in the commands below.

[root@DO-SG-H1 ~]# ps aux | grep php-fpm
root  11799  0.0  0.0 103248  880 pts/0 S+  13:51 0:00 grep --color php-fpm
root  11973  0.0  0.0 417748  964 ?     Ss  Jun01 0:20 php-fpm: master process (/etc/php-fpm.conf)

cat /etc/php-fpm.conf shows: pid = /var/run/php-fpm/php-fpm.pid

Start php-fpm:   /usr/local/php/sbin/php-fpm
Stop php-fpm:    kill -INT `cat /var/run/php-fpm/php-fpm.pid`
Restart php-fpm: kill -USR2 `cat /var/run/php-fpm/php-fpm.pid`
Verify with:     ps aux | grep -c php-fpm

===============================

Note that find / -name 'php-fpm' -type d only returns /var/log/php-fpm and /var/run/php-fpm; these are not the installation path. Use which php (here /usr/bin/php) to locate the binary instead.
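Putting the signal commands together, here is a minimal control-script sketch; it assumes the pid path from the php-fpm.conf above and the /usr/local/php/sbin/php-fpm binary location, both of which will differ on other installs.

    #!/bin/sh
    # Minimal php-fpm control sketch (assumed paths; adjust to your php-fpm.conf)
    PIDFILE=/var/run/php-fpm/php-fpm.pid

    case "$1" in
      start)
        /usr/local/php/sbin/php-fpm          # launch a new master process
        ;;
      stop)
        kill -INT "$(cat "$PIDFILE")"        # INT: fast shutdown of master and workers
        ;;
      reload)
        kill -USR2 "$(cat "$PIDFILE")"       # USR2: graceful restart, workers finish current requests
        ;;
      *)
        echo "Usage: $0 {start|stop|reload}"
        ;;
    esac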
Nginx learning: merging request connections to speed up website access
As one of the best web servers in the world, Nginx's advantages are self-evident. Let's talk about how Nginx merges request connections.

Tips: when we browse the web, an important factor in browsing speed is the browser's concurrency, i.e. how many requests the browser issues at the same time while loading a page. The limit on concurrent requests is applied per domain name: only a certain number of requests to the same domain may be in flight at once, and requests beyond that number are held back until earlier ones complete.

First, the concurrent-connection limits of the major browsers, and the likely reasons behind them: because of the TCP protocol, a client machine has only about 65,535 local ports available for outbound connections, and the operating system also limits the number of half-open connections to keep its TCP/IP stack from being exhausted quickly, so it is better for the browser not to open too many TCP connections but to reuse the TCP…
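The excerpt cuts off at connection reuse; in Nginx that idea maps onto the keepalive directives. A minimal illustrative config follows, with placeholder values and a placeholder backend address that are not taken from the article.

    http {
        keepalive_timeout  65;               # seconds an idle client connection stays open
        keepalive_requests 100;              # max requests served over one client connection

        upstream backend {
            server 127.0.0.1:8080;           # placeholder application server
            keepalive 32;                    # idle connections kept open to the upstream
        }

        server {
            listen 80;
            location / {
                proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
                proxy_set_header Connection "";  # clear "close" so the upstream connection is reused
                proxy_pass http://backend;
            }
        }
    }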
Nginx learning: how to prevent traffic attacks
Scenario: I recently finished the load-balancing cluster configuration for a report query system, with two session-management strategies implemented on Ehcache and Redis respectively. We all know that server resources are limited while client requests are unlimited (malicious attacks not excluded). To make sure most requests can be answered normally, some client requests have to be given up, and for that we use Nginx's rate-limiting features, which greatly relieve pressure on the server so that other, normal requests can still be served. The question is how to implement basic rate limiting with Nginx, for example limiting a single IP to 50 requests per second. With the Nginx limiting modules we can arrange for a 503 error to be returned to the client once the number of concurrent connections exceeds our setting, which is very effective against CC attacks; combined with an iptables firewall, CC attacks can basically be ignored.

How to use: the following configuration goes in the http block.

# Limit requests
limit_req_zone $binary_remote_addr $uri zone=api_read:20m rate=50r/s;
# Configure a connection zone per IP
limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
# Configure…
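The excerpt stops before the zones are applied; a sketch of how they are typically wired into a server block follows. The zone names come from the excerpt, while the burst value, connection cap and backend address are illustrative. Note that a key built from several variables, as in $binary_remote_addr $uri above, requires Nginx 1.7.6 or later.

    server {
        listen 80;
        location /api/ {
            limit_req  zone=api_read burst=10;   # allow short bursts, reject the excess
            limit_conn perip_conn 20;            # at most 20 concurrent connections per IP
            limit_req_status  503;               # status code returned when a limit trips
            limit_conn_status 503;
            proxy_pass http://127.0.0.1:8080;    # placeholder backend
        }
    }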
Nginx learning: reverse-proxying WebSocket, a configuration example
Written at the beginning: last year I built an app for scoring competitions. One specific requirement was that teachers in the same group could communicate with each other and be notified in time of what operations other group members had performed (only for certain operations, of course).

Implementation plan: use WebSocket, which is fairly mature by now. The WebSocket protocol is an option for building web apps that need real-time two-way communication between client and server. As part of HTML5, WebSocket makes developing such apps much easier than the older approaches. Most current browsers support it, including Firefox, IE, Chrome, Safari and Opera, and more and more server-side frameworks support it as well.

WebSocket cluster: in a real production environment the WebSocket servers must be both high-performance and highly available, so the WebSocket protocol needs a load-balancing layer in front of it. NGINX has supported WebSocket since version 1.3 and can act as a reverse proxy and load balancer for WebSocket applications.

Nginx configuration note: according to the official documentation, Nginx only supports reverse-proxying WebSocket from version 1.3 onwards, so to use this feature you must upgrade to…
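The excerpt breaks off before the configuration itself; the standard pattern from the Nginx documentation looks like the sketch below, where the upstream addresses, path and timeout are placeholders rather than the article's values.

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream websocket_backend {
        server 192.168.1.10:8010;    # placeholder WebSocket servers
        server 192.168.1.11:8010;
    }

    server {
        listen 80;
        location /ws/ {
            proxy_pass http://websocket_backend;
            proxy_http_version 1.1;                          # the WebSocket handshake needs HTTP/1.1
            proxy_set_header Upgrade $http_upgrade;          # forward the Upgrade header
            proxy_set_header Connection $connection_upgrade; # "upgrade" for WebSocket, "close" otherwise
            proxy_read_timeout 300s;                         # keep idle tunnels open longer than the 60s default
        }
    }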
Deep Dive into Nginx: How We Designed for Performance and Scale
NGINX excels in network applications because of its unique design. Many web and application servers are simple thread- or process-based frameworks; NGINX stands out with its mature event-driven framework, which can handle thousands of concurrent connections on modern hardware. The NGINX Internals infographic starts at the top of the process framework and works its way down, showing how NGINX handles multiple connections within a single process and exploring how it all works.

Scenario setup: the NGINX process model. To understand this design, we first need to know how NGINX runs. NGINX has a master process that handles privileged operations such as reading the configuration file and binding ports, plus a set of worker processes and helper processes.

# service nginx restart
Restarting nginx
# ps -ef --forest | grep nginx
root  32475     1  0 13:36 ?  00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 32476 32475  0 13:36 ?  00:00:00  \_ nginx: worker process

When the configuration is reloaded, new worker processes accept connections and handle network traffic with the new configuration, while the old worker processes are told to shut down gracefully: they stop accepting new connections…
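For context, the master/worker split described above is shaped by a handful of directives in nginx.conf; the following minimal sketch uses common default values, not figures from the article.

    user  nginx;
    worker_processes  auto;          # one worker per CPU core is the usual choice

    events {
        worker_connections  1024;    # connections each event-driven worker can hold open
    }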
Number of concurrent connections, number of requests, number of concurrent users
Number of concurrent connections, SBC (Simultaneous Browser Connections): the client initiates requests to the server and establishes TCP connections; the total number of TCP connections held to the server in a given second is the number of concurrent connections.

Number of requests, QPS (Query Per Second) / RPS (Request Per Second): the request count goes by either abbreviation, QPS or RPS, and its unit is requests per second; query here is simply another word for request. The number of requests refers to the GET/POST/HEAD requests the client sends to the HTTP service after the connection has been established. Once the server returns a result, there are two possibilities: if the HTTP response carries Connection: close, this TCP connection is closed; if it carries Connection: keep-alive, the connection is not closed and further requests can be sent over it, which reduces the number of concurrent TCP connections.

How do we measure server performance? Usually what we test is QPS, the number of requests per second. However, in order to…
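As a concrete illustration of measuring QPS (not part of the original article), Apache Bench is one common choice: -n sets the total number of requests, -c the number of concurrent clients, and -k enables HTTP keep-alive; the "Requests per second" line in its output is the measured QPS.

    # 10,000 requests from 100 concurrent clients, reusing connections via keep-alive
    ab -n 10000 -c 100 -k http://example.com/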