Nginx+PHP+MySQL dual-machine mutual backup, fully automatic switching solution
In a production application, an "Nginx+PHP+MySQL" interface data server plays a very important role. If that server's hardware, or Nginx, or MySQL fails and cannot be recovered within a short time, the consequences are serious. To avoid this single point of failure, I designed the following solution and wrote a failover.sh script, achieving dual-machine mutual backup with fully automatic switching; failover takes only a few seconds to a few tens of seconds.

1. Dual-machine mutual backup and fully automatic switching scheme:

1. Topology diagram (image not included in this excerpt):

2. Explanation:
(1) Assume the external domain name blog.zyan.cc resolves to the external virtual IP 72.249.146.214, and the internal hosts file maps db10 to the internal virtual IP 192.168.146.214.
(2) By default the primary machine is bound to both the internal and external virtual IPs and the standby machine acts as backup. When MySQL, Nginx, or the primary server itself fails and becomes unreachable, the standby machine automatically takes over both virtual IPs. Both servers start the daemon process /usr/bin/nohup /bin/sh responsible for…
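The failover.sh script itself is not shown in this excerpt. The following is only a minimal sketch of the watchdog idea as run on the standby machine; the peer address, netmask, interface name and check interval are assumptions, not taken from the author's script.

#!/bin/sh
# Hypothetical failover watchdog sketch (not the author's failover.sh).
PEER=192.168.146.213        # assumed real IP of the primary machine
VIP_LAN=192.168.146.214     # internal virtual IP from the article
VIP_WAN=72.249.146.214      # external virtual IP from the article
IFACE=eth0                  # assumed network interface

while true; do
    # Treat the primary as failed when Nginx (80) or MySQL (3306) stops answering.
    if ! nc -z -w 3 "$PEER" 80 || ! nc -z -w 3 "$PEER" 3306; then
        # Take over both virtual IPs on this standby machine...
        ip addr add "$VIP_LAN"/24 dev "$IFACE" 2>/dev/null
        ip addr add "$VIP_WAN"/24 dev "$IFACE" 2>/dev/null
        # ...and send gratuitous ARP so switches/routers learn the new owner.
        arping -q -c 3 -A -I "$IFACE" "$VIP_LAN"
        arping -q -c 3 -A -I "$IFACE" "$VIP_WAN"
    fi
    sleep 5
done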
Priority of nginx location blocks in configuration
location expression types:
~    performs a regular expression match, case-sensitive.
~*   performs a regular expression match, case-insensitive.
^~   is an ordinary prefix match; if it is the best prefix match, no regular expressions are checked.
=    performs an exact match on an ordinary string.
@    defines a named location, used only for internal redirects, e.g. with error_page or try_files.

location priority description
The order of location blocks in the nginx configuration matters little (except among regular expressions); what matters is the type of the location expression. Among ordinary prefix strings, the longer matching string wins. In order of priority:
First priority: the exact-match type (=) has the highest priority. Once a match is found, no further matching is done.
Second priority: the ^~ type. Once such a prefix is the best match, no further (regex) matching is done.
Third priority: the regular expression types (~, ~*) come next. If several regex locations could match, they are checked in the order they appear in the configuration file and the first match wins.
Fourth priority: ordinary prefix string matching; the longest matching prefix is used.

location priority example
The configuration items are as follows: location = / {…
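The excerpt cuts off before the example; below is a hedged sketch of how these rules interact (the URIs and location blocks are illustrative, not taken from the article):

location = / {
    # exact match: only the request "/" lands here
}
location ^~ /static/ {
    # prefix match that suppresses regex checks:
    # /static/logo.png is served here even though it also matches the regex below
}
location ~* \.(png|jpg)$ {
    # case-insensitive regex: /images/photo.PNG lands here
}
location / {
    # plain prefix match: everything not caught above
}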
Use varnish+nginx+lua to build a website downgrading system
Foreword
When a website's database goes down, the consequences are usually serious: the entire site becomes essentially unusable. For some websites it would already be good enough to keep providing basic browsing while the database is down. This article tries to build a website downgrade system with varnish + nginx + lua to reach that goal.

Downgrade goals
The goal of the downgrade solution is to show users cached page data when a fatal failure occurs on the website (for example, a 500 error makes the service unavailable), so that basic browsing remains available.
1. Only basic browsing is provided.
2. The data shown is the data visible in the non-logged-in state.
3. Both manual and automatic downgrade are supported. Automatic downgrade triggers when the backend returns 500 errors (excluding 503) above a certain threshold within a period of time; manual downgrade is performed from the control interface.

Downgrade plan
Storage: varnish is used as the storage; it uses physical memory efficiently while maintaining good performance.
Updating: a crond script extracts request URLs from the nginx access log and then sends requests to varnish to refresh its cache. The…
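The excerpt stops before the concrete configuration. As a minimal sketch of the routing idea only (the ports, upstream names and the use of error_page rather than lua logic are my assumptions, not the article's actual implementation), nginx can fall back to a varnish cache when the backend returns a 500:

upstream backend  { server 127.0.0.1:8080; }   # hypothetical application backend
upstream degraded { server 127.0.0.1:6081; }   # hypothetical varnish instance

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Let nginx intercept backend errors and serve the degraded copy instead.
        proxy_intercept_errors on;
        error_page 500 = @degraded;
    }

    location @degraded {
        proxy_pass http://degraded;
    }
}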
Use nginx to limit frequent crawling by web crawlers
The amount of crawling by spiders increased sharply and pushed the server load very high, so in the end nginx's ngx_http_limit_req_module module is used to limit Baidu Spider's crawl frequency: Baidu Spider is allowed 200 requests per minute, and any extra crawl requests get a 503. nginx configuration:

#Global configuration
limit_req_zone $anti_spider zone=anti_spider:60m rate=200r/m;

#Inside a server block
limit_req zone=anti_spider burst=5 nodelay;
if ($http_user_agent ~* "baiduspider") {
    set $anti_spider $http_user_agent;
}

Parameter description: rate=200r/m in the limit_req_zone directive means only 200 requests are processed per minute. burst=5 in the limit_req directive allows a burst of up to 5 requests beyond that rate; nodelay means those burst requests are served immediately instead of being delayed, and once the burst is also used up any new request is answered with 503 directly. The if block checks whether the user agent is Baidu Spider and, if so, assigns a value to the variable $anti_spider; since requests with an empty key are not limited, only Baidu Spider is restricted. For detailed parameter descriptions see the official documentation: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone This module uses a leaky bucket algorithm to limit requests. For the leaky bucket algorithm,…
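Putting the directives in their full context, a minimal sketch (server_name, root and the optional limit_req_status line are my additions for illustration, not from the article):

http {
    # Keyed on $anti_spider: requests where the variable is empty are not limited.
    limit_req_zone $anti_spider zone=anti_spider:60m rate=200r/m;

    server {
        listen 80;
        server_name example.com;        # hypothetical

        if ($http_user_agent ~* "baiduspider") {
            set $anti_spider $http_user_agent;
        }

        location / {
            limit_req zone=anti_spider burst=5 nodelay;
            limit_req_status 503;       # optional; 503 is already the default
            root /var/www/html;         # hypothetical
        }
    }
}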
Super-detailed analysis of Nginx configuration files – data organization
#Run user
user nobody;
#Worker processes, usually set equal to the number of CPUs
worker_processes 1;

#Global error log and PID file
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

#Working mode and connection limits
events {
    #epoll is a form of I/O multiplexing,
    #only available on Linux kernels 2.6 and above; it can greatly improve nginx performance
    use epoll;

    #Maximum number of concurrent connections for a single worker process
    worker_connections 1024;

    # The total concurrency is the product of worker_processes and worker_connections,
    # i.e. max_clients = worker_processes * worker_connections
    # When acting as a reverse proxy, max_clients = worker_processes * worker_connections / 4
    # Why divide by 4 for a reverse proxy? It is best described as an empirical value.
    # Under those conditions the maximum number of connections nginx can normally handle is, for example, 4 * 8000 = 32000
    # The worker_connections value is also related to the amount of physical memory,
    # and because concurrency is bound by IO, the value of max_clients must be less than the maximum number of files the system can open
    #…
Analyzing the data cache design of large web systems
1. Preface
In a high-traffic web system, caching is almost indispensable, but designing a proper and efficient caching solution is not easy. Next we will discuss what deserves attention when designing an application system's cache, including the choice of cache type, the characteristics and key metrics of common cache systems, the design of cached object structures and invalidation strategies, and cache object compression, so that readers who need it, especially beginners, can quickly and systematically pick up the relevant knowledge.

2. The bottleneck of the database
2.1 Data volume
The amount of data a relational database can comfortably hold is relatively small. Taking our commonly used MySQL as an example, the number of rows in a single table should generally be kept within 20 million, and possibly lower if the business logic is complex. Even a large commercial database like Oracle cannot easily store enough data to satisfy a large Internet system with tens of millions or even hundreds of millions of users.

2.2 TPS
In actual development we often find that a relational database's TPS bottleneck is exposed more easily than other bottlenecks, especially for large-scale web…
Install PHP 7 alpha and yaf on Windows and Linux
Windows
1. To install PHP 7 alpha on Windows you only need to download it from the official site http://windows.php.net/qa/ and configure it directly; if you are not sure how, you can download PHP Manager and configure it with that. Then test it: open a command-line window, go to your PHP 7 directory and run php -m. If an error window pops up instead of the module list (screenshot omitted from this excerpt), it is because you are missing the Visual C++ Redistributable Package for Visual Studio 2015; download it from http://www.microsoft.com/zh-CN/download/details.aspx?id=46881 and everything will be fine after installation.

Linux
On Linux the installation is the same as before, but you will find that PHP 7 alpha is not the same as the earlier dev builds: the --with-mysql option is gone, i.e. that extension is no longer built, and mysqli and PDO are to be used from now on. Download the installation package from the official https://downloads.php.net/~ab/:

wget https://downloads.php.net/~ab/php-7.0.0alpha1.tar.gz
#unpack
tar zxf php-7.0.0alpha1.tar.gz
#enter the directory
cd php-7.0.0alpha1
#configure
./configure --prefix=/usr/local/php7 \
--with-config-file-path=/usr/local/php7/etc \
--enable-fpm \
--with-fpm-user=www \
--with-fpm-group=www \
--with-mysqli=/usr/local/mysql/bin/mysql_config \
--with-pdo-mysql=/usr/local/mysql/ \
--with-iconv-dir \
--with-freetype-dir \
--with-jpeg-dir \
--with-png-dir \
--with-zlib \
--with-libxml-dir \
--disable-rpath \
--enable-bcmath \
--enable-shmop \
--enable-sysvsem \
--enable-inline-optimization \
--with-curl \
--enable-mbregex \
--enable-mbstring \
…
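The excerpt is cut off before the yaf part of the title. As a hedged sketch only (the source path is hypothetical, and the exact yaf branch/version for PHP 7 is not given in the excerpt), building a PECL-style extension such as yaf against this PHP 7 install generally follows the phpize routine:

# Hypothetical: build yaf from its source directory against the PHP 7 just installed
cd /usr/local/src/yaf                  # assumed path to the yaf source
/usr/local/php7/bin/phpize
./configure --with-php-config=/usr/local/php7/bin/php-config
make && make install
# then enable the extension in php.ini
echo "extension=yaf.so" >> /usr/local/php7/etc/php.ini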
Explain in detail how to configure Nginx+PHP correctly
Suppose we implement a front controller with PHP, or to put it bluntly, a single entry point: all PHP requests are sent to the same file, which then implements the routing by parsing "REQUEST_URI". At this point, many tutorials will teach you to configure Nginx+PHP like this:

server {
    listen 80;
    server_name foo.com;
    root /path;
    location / {
        index index.html index.htm index.php;
        if (!-e $request_filename) {
            rewrite . /index.php last;
        }
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /path$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
    }
}

There are quite a few mistakes in it, or at least bad smells; see how many you can find. … It helps to understand how directives are inherited in the Nginx configuration file: the configuration is divided into blocks, the common ones being "http", "server", "location" and so on from the outside in, and the default inheritance is from the outside in, meaning an inner block automatically gets the value of the outer block as its default. Let's start with the "index" directive. In the problem configuration it is defined in…
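The excerpt ends before the fixes themselves. As a sketch of the commonly recommended alternative for this pattern (not necessarily the exact configuration the article finally arrives at), try_files replaces the if/rewrite and $document_root replaces the hard-coded path:

server {
    listen 80;
    server_name foo.com;
    root /path;
    index index.html index.htm index.php;

    location / {
        # try_files avoids "if" and falls through to the front controller
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # $document_root follows the root directive instead of repeating /path
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}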
How to start a Linux daemon
A "daemon" is a process that keeps running in the background. This article explains how to start a web application as a daemon process.

1. The origin of the problem
After a web application is written, the next thing is to start it and keep it running in the background. That is not as easy as it sounds. For example, below is the simplest possible Node application, server.js, only 6 lines:

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World');
}).listen(5000);

You start it from the command line:

$ node server.js

Everything looks fine, and everyone can happily access port 5000. However, once you exit the command-line window, the application exits with it and can no longer be accessed. How can it become a daemon of the system, a service, and keep running there?

2. Foreground tasks and background tasks
The script started above is a "foreground job". It monopolizes the command-line window; other commands can only be executed after it finishes or is aborted manually. The first step towards becoming a daemon is to turn it into a "background job":

$ node server.js &

As long as the symbol & is added to the end of the…
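The excerpt stops mid-explanation of &. As a small follow-on sketch using standard shell tools (not necessarily the exact commands the full article settles on), nohup combined with & keeps the process alive after the terminal closes and captures its output:

$ nohup node server.js > server.log 2>&1 &
# nohup makes the process ignore the SIGHUP sent when the terminal closes;
# stdout and stderr are redirected to server.log, and & puts the job in the background.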
How to correctly configure Nginx+PHP under Linux
Suppose we use PHP to implement a front controller, or to put it bluntly, a single entry point: all PHP requests are sent to the same file, which then implements the routing by parsing "REQUEST_URI". The configuration is generally written as follows; at this point, many tutorials will teach you to configure Nginx+PHP like this:

server {
    listen 80;
    server_name foo.com;
    root /path;
    location / {
        index index.html index.htm index.php;
        if (!-e $request_filename) {
            rewrite . /index.php last;
        }
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /path$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
    }
}

There are quite a few mistakes in it, or at least bad smells; see how many you can find. It helps to understand how directives are inherited in the Nginx configuration file: the configuration is divided into blocks, the common ones being "http", "server", "location" and so on from the outside in, and the default inheritance is from the outside in, that is to say, an inner block automatically gets the value of the outer block as its default. Let's start with the "index" directive. In the problem configuration it…
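The excerpt breaks off at the start of the "index" discussion. As a brief sketch of the inheritance point being introduced (the location blocks are illustrative), the index directive can be declared once at the server level and is inherited by every location, instead of being repeated inside location /:

server {
    listen 80;
    server_name foo.com;
    root /path;
    # declared once here, inherited by all location blocks below
    index index.html index.htm index.php;

    location / {
        # no index directive needed here
    }
    location /foo/ {
        # nor here
    }
}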