Nginx Tinkering Notes (HTTP performance test, compared with Apache)
It is said that nginx ("engine x") performs better than Apache under high load, so I downloaded a copy to experiment with. Download, compile, and install; my build process is slightly unusual:

1. Remove debugging information: edit $nginx_setup_path/auto/cc/gcc and comment out the line CFLAGS="$CFLAGS -g".
2. Since only web-server performance is being tested, FastCGI is not installed:

./configure \
  --prefix=/opt/nginx \
  --user=www \
  --group=www \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --without-http_fastcgi_module

After the installation completed, I copied a batch of static HTML pages from the production environment onto the nginx server. My nginx.conf is configured as follows:

worker_processes 8;
worker_rlimit_nofile 102400;
events {
    use epoll;
    worker_connections 204800;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    charset GBK;
    keepalive_timeout 60;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    open_file_cache max=102400 inactive=20s;
    server {
        listen 80;
        location / {
            root /tmp/webapps/;
            index index.html index.htm;
        }
        location = /NginxStatus {
            stub_status on;
            access_log off;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

To keep the operating system from becoming a bottleneck, the parameters…
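The excerpt cuts off before the test itself. As a hypothetical illustration only (the original does not state which tool or flags were used), a load sweep with ApacheBench over rising concurrency might be scripted like this; the target URL, request count, and concurrency ladder are all invented:

```shell
# Sketch: sweep ApacheBench over rising concurrency against the nginx box.
# HOST, REQUESTS, and the concurrency ladder are illustrative guesses.
HOST="http://10.0.0.1/index.html"
REQUESTS=100000

build_ab_cmd() {
    # -n total requests, -c concurrency, -k keep-alive (matches keepalive_timeout 60)
    echo "ab -n $REQUESTS -c $1 -k $HOST"
}

for conc in 100 1000 5000 10000; do
    build_ab_cmd "$conc"      # printed here; pipe to sh to actually run the sweep
done
```

Printing the commands first (rather than running them blindly) makes it easy to review the ladder before hammering a production-like box.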
Nginx Tinkering Notes (HTTP performance test, compared with Apache), continued
For the second group of charts, several points deserve analysis. The production-environment result should fall between the blue line and the red line, so it needs no further discussion. "Longest Response Time" is actually the time within which 99% of all requests complete, which filters out outliers. As pressure increases, response-time spikes are to be expected, but how much is acceptable? At the 2009 System Architects Conference, Tencent's Qiu Yuepeng mentioned the "1-3-10 principle of user speed experience" in his talk "Flexible Operation of Massive SNS Websites". Roughly speaking, if a 3-second response time is taken as the standard, nginx can handle no more than 10,000 concurrent connections; with a 10-second standard, fewer than 15,000 concurrent connections. Of course, situations differ: if your users cannot tolerate even 0.3 seconds, that is another story. If I assume that as long as the server produces no "connection reset" or "server no response" errors, and content can still be returned, I am willing to…
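The "time within which 99% of requests complete" metric above is easy to compute from raw per-request timings. A minimal sketch, assuming the timings (in milliseconds) sit one per line in a file; the file path and sample values are made up:

```shell
# Compute the 99th-percentile response time from per-request timings (ms),
# one value per line. File name, units, and sample data are assumptions.
p99() {
    sort -n "$1" | awk '
        { t[NR] = $1 }
        END {
            idx = int(NR * 0.99);          # index of the 99% mark
            if (idx < 1) idx = 1;
            print t[idx];
        }'
}

printf '%s\n' 12 15 18 20 22 25 30 40 55 900 > /tmp/timings.txt
p99 /tmp/timings.txt    # with 10 samples the 99% mark lands on the 9th sorted value: 55
```

Note how the single 900 ms outlier does not move the p99 figure, which is exactly why the article prefers this over the true maximum.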
Nginx + Tomcat + Memcached load balancing with session sharing
Last time, in "Alibaba Cloud Linux: Nginx integrates Tomcat for load balancing", we only implemented load balancing itself, not session sharing.

1. Overview. Our systems often need to save user login information, for which there are the cookie and session mechanisms: cookies store user information on the client, while sessions store it on the server. If the browser does not support cookies, or the user has disabled them, cookies cannot be used; different browsers also store cookies differently. So we store user information in server-side sessions instead. In the previous section we introduced deploying a Tomcat cluster. How can every Tomcat in the cluster retrieve the session data for the same user's requests? We use Memcached to manage the sessions. Memcached is a high-performance distributed memory object caching system. Below we introduce Nginx + Tomcat + Memcached session sharing.

2. Tomcat, Nginx, and Memcached configuration.
Step 1: Memcached installation and deployment — see this article.
Step 2: Nginx installation and deployment — see this article.
Step 3: Tomcat and JDK installation and environment configuration — Tomcat and JDK installation and deployment…
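The excerpt defers the Tomcat wiring to later steps. Session sharing between Tomcats via Memcached is commonly done with the memcached-session-manager (msm) library; the following is a hedged sketch of the Manager element it adds to Tomcat's context.xml — the node names, addresses, and non-sticky setting are assumptions, not values from the article:

```shell
# Sketch: write a context.xml fragment for memcached-session-manager (msm).
# The memcached node addresses and non-sticky mode are illustrative assumptions.
cat > /tmp/context-msm.xml <<'EOF'
<Context>
  <!-- msm stores each session in memcached so any Tomcat in the cluster can load it -->
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"
           sticky="false"
           requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$"/>
</Context>
EOF
grep -c MemcachedBackupSessionManager /tmp/context-msm.xml
```

The msm jars must also be dropped into each Tomcat's lib directory for this Manager class to resolve.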
Binding Nginx wildcard-resolved subdomains to subdirectories
The website directory structure:

# tree /home/wwwroot/linuxeye.com
/home/wwwroot/linuxeye.com
├── bbs
│   └── index.html
└── www
    └── index.html

2 directories, 2 files

/home/wwwroot/linuxeye.com is the default source-code path under the nginx installation directory: bbs holds the forum program and www holds the homepage program. Put the corresponding programs into those paths, so that http://www.linuxeye.com serves the homepage and http://bbs.linuxeye.com serves the forum; other second-level domains follow by analogy. There are two methods; method 1 is recommended.

Method 1:

server {
    listen 80;
    server_name ~^(?<subdomain>.+)\.linuxeye\.com$;
    access_log /data/wwwlogs/linuxeye.com_nginx.log combined;
    index index.html index.htm index.php;
    root /home/wwwroot/linuxeye.com/$subdomain/;
    location ~ \.php$ {
        fastcgi_pass unix:/dev/shm/php-cgi.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|ico)$ {
        expires 30d;
    }
    location ~ .*\.(js|css)?$ {
        expires 7d;
    }
}

Method 2:

server {
    listen 80;
    server_name *.linuxeye.com;
    access_log /home/wwwlogs/linuxeye.com_nginx.log combined;
    index index.html index.htm index.php;
    if ($host ~* ^([^.]+)\.([^.]+\.[^.]+)$) {
        set $subdomain $1;
        set $domain $2;
    }
    location / {
        root /home/wwwroot/linuxeye.com/$subdomain/;
        index index.php index.html index.htm;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/dev/shm/php-cgi.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~…
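The subdomain-to-directory mapping in method 1 can be sanity-checked outside nginx. A small POSIX-shell sketch of the same extraction logic (host names are examples; unlike the nginx regex, this simple version keeps only the first label of a multi-level subdomain):

```shell
# Mimic the server_name capture: pull the subdomain off a Host header value
# and map it to a document root, as the nginx config does with $subdomain.
extract_subdomain() {
    case $1 in
        *.linuxeye.com) echo "${1%%.*}" ;;   # first label, e.g. bbs
        *)              echo "no-match" ;;
    esac
}

extract_subdomain bbs.linuxeye.com   # -> bbs, served from .../linuxeye.com/bbs/
extract_subdomain www.linuxeye.com   # -> www, served from .../linuxeye.com/www/
```

Any DNS name not under linuxeye.com falls through to "no-match", mirroring how a non-matching Host never enters this server block.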
How to make Nginx require username and password authentication for access
At the end of the year I took some time to study the ELK log analysis and monitoring stack, mainly because I read the following passage and benefited a lot from it:

How much do you value your server logs?
1. We have no logs.
2. There are logs, but we hardly control what gets written to them.
3. We frequently fine-tune the logs, outputting only what is wanted and useful.
4. We frequently monitor the logs, which both helps fine-tune them and catches program problems early.

If you only reach point 1, you may as well wash up and go to sleep. Many companies have reached points 2 and 3; their server programs have been running for a long time and are fairly stable, so there is little need to spend much more time on logging. For a newly launched product, though, I think reaching point 4 early is necessary. And how do you work with the logs?
1. As said, we have no logs.
2. On the live servers, tail + grep…
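Although the excerpt wanders into logging, the question in the title is answered by nginx's auth_basic directives plus an htpasswd-style file. A minimal sketch; the username, password, realm, and file paths are placeholders, not values from the article:

```shell
# Create a user:hash line using Apache's MD5 scheme (apr1), then an nginx
# snippet pointing at it. User, password, and paths are illustrative only.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret123)" > /tmp/htpasswd

cat > /tmp/auth.conf <<'EOF'
location /admin/ {
    auth_basic           "Restricted";        # realm shown in the browser prompt
    auth_basic_user_file /tmp/htpasswd;       # file created above
}
EOF

grep -c auth_basic_user_file /tmp/auth.conf
```

Apache's htpasswd tool produces the same file format; openssl is used here only to avoid the httpd dependency.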
A Lua implementation of a "fastest first" NGINX load-balancing strategy
Recently I implemented a load-balancing strategy on NGINX that gives priority to the backend with the shortest response time. Saying "NGINX" is not quite accurate: I actually relied on many of the modules that Openresty bundles into NGINX, so it is more accurate to say it is based on Openresty. The "fastest first" Lua implementation is available as part of my NGINX configuration in dynamic-upstream-weight.lua. Besides the Lua code, two global key-value caches are needed as the data basis for adjusting the balancing decisions; for configuration details, see my NGINX site configuration for blog.jamespan.me. To make this work I modified lua-upstream-nginx-module, adding a Lua API for changing an upstream server's weight; for details, see commit 6b40d40a4 of JamesPan/lua-upstream-nginx-module. The "fastest first" strategy is really a deliberately "unbalanced" strategy. Projected onto real life, it is as if the more capable a person is, the more work piles onto them, until one day they are overwhelmed: the legendary "the capable do more work". I am trying to write an NGINX module in C to implement this "fastest first" strategy.…
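The core idea, weights inversely proportional to measured response time, can be sketched numerically without any Lua. Assuming per-backend average response times (the backend addresses, timings, and the scaling constant K are all invented):

```shell
# Sketch: derive upstream weights as K / avg_response_time (ms), the
# inverse-latency idea behind "fastest first". Backends and timings invented.
weights() {
    awk -v K=1000 '{ printf "%s %d\n", $1, K / $2 }' <<'EOF'
10.0.0.11 50
10.0.0.12 100
10.0.0.13 250
EOF
}
weights
# 10.0.0.11 gets weight 20, .12 gets 10, .13 gets 4: the fastest box takes most traffic
```

Feeding such weights back through a weight-setting API each sampling interval is what turns a static upstream into the "unbalanced" scheme the article describes.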
Nginx + Tomcat: static/dynamic separation and load balancing
0. Preparation
A Debian environment, with Nginx installed (default installation), a web project, Tomcat installed (default installation), and so on.

1. The nginx.conf configuration file

# Define the user and group Nginx runs as. If the server is exposed externally,
# use a low-privilege user to limit the damage from an intrusion.
# user www www;

# Number of worker processes; setting it equal to the number of CPU cores is recommended.
worker_processes 8;

# Enable the global error log.
error_log /var/log/nginx/error.log info;

# PID file.
pid /var/run/nginx.pid;

# Maximum number of file descriptors an Nginx process may open; keep it consistent
# with ulimit -n. Under high concurrency, remember to adjust ulimit -n and related
# system parameters as well; this value alone does not decide the limit.
worker_rlimit_nofile 65535;

events {
    # Use the epoll model to improve performance.
    use epoll;
    # Maximum number of connections per worker process.
    worker_connections 65535;
}

http {
    # Extension-to-MIME-type mapping table.
    include mime.types;
    # Default type.
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # Logs.
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    # gzip-compressed transfer.
    gzip on;
    gzip_min_length 1k;  # minimum 1K
    gzip_buffers 16…
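The excerpt cuts off before the part that actually separates static from dynamic traffic. A hedged sketch of what such upstream/server blocks typically look like: the upstream address, root path, and extension list are assumptions, not the article's values:

```shell
# Sketch of the static/dynamic split: static files served by nginx directly,
# everything dynamic proxied to Tomcat. Addresses and paths are placeholders.
cat > /tmp/split.conf <<'EOF'
upstream tomcat_pool {
    server 127.0.0.1:8080 weight=1;
}
server {
    listen 80;
    # static resources: nginx serves them itself, with a long expiry
    location ~ .*\.(html|js|css|gif|jpg|png)$ {
        root /var/www/static;
        expires 30d;
    }
    # everything else (JSP, servlets) goes to Tomcat
    location / {
        proxy_pass http://tomcat_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
grep -c proxy_pass /tmp/split.conf
```

Adding more `server` lines to the upstream block is what turns this static/dynamic split into a load-balanced Tomcat pool.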
ngxtop: a handy tool for monitoring Nginx in real time from the command line
An Nginx web server running in a production environment needs real-time monitoring. Network-monitoring software such as Nagios, Zabbix, and Munin does support Nginx monitoring. But if you do not need the comprehensive reporting or long-term statistics those tools provide, and just want a quick, simple way to watch the requests hitting an Nginx server, I recommend a command-line tool called ngxtop. ngxtop borrows from the famous top command in both interface and name: it parses Nginx (or other) log files in real time and displays the results in a top-like interface. So how do you monitor an Nginx web server in real time with ngxtop?

Installing ngxtop on Linux
First install pip on the Linux system (note: ngxtop is written in Python), then install ngxtop with:
$ sudo pip install ngxtop

Using ngxtop
The basic usage is:
ngxtop [options]
ngxtop [options] (print|top|avg|sum)
ngxtop info

Some common options:
-l: specify the full path to the log file (Nginx or Apache2)
-f: log format
--no-follow: process the currently written log file…
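The counting ngxtop does can be approximated with standard tools, which also clarifies what it is parsing. A sketch over the default combined log format; the sample log lines below are fabricated:

```shell
# Approximate "top requested paths": count hits per URI in a combined-format
# access log. The sample log lines are fabricated for illustration.
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [10/Oct/2015:13:55:36 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl"
1.2.3.4 - - [10/Oct/2015:13:55:37 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl"
5.6.7.8 - - [10/Oct/2015:13:55:38 +0800] "GET /about.html HTTP/1.1" 404 169 "-" "curl"
EOF
# field 7 of the combined format is the request path
awk '{ count[$7]++ } END { for (p in count) print count[p], p }' /tmp/access.log | sort -rn
```

ngxtop does the same tallying continuously (following the file like `tail -f`) and redraws the screen, which is what the one-off pipeline above cannot do.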
A real-time Nginx log monitoring system based on Storm
Background: UAE (UC App Engine) is a PaaS platform inside UC, with an overall architecture somewhat similar to CloudFoundry, including:
- Rapid deployment: supports Node.js, Play!, PHP, and other frameworks
- Information transparency: operations processes, system status, business status
- Gray-release trial and error: gray release by IP, gray release by region
- Basic services: key-value storage, MySQL high availability, image platform, etc.

UAE itself is not the protagonist here and will not be introduced in detail. Hundreds of web applications run on UAE, and all their requests are routed through it; the daily Nginx access logs amount to terabytes. How do we monitor each business's access trends, advertising data, page latency, access quality, custom reports, and exception alarms? Hadoop can satisfy the statistics requirements, but not second-level real-time latency; Spark Streaming felt like overkill, and we had no Spark engineering experience; writing our own distributed program makes scheduling cumbersome and forces us to handle scaling and message flow ourselves. In the end our technology choice was Storm: relatively lightweight, flexible, convenient for message passing, and easy to scale. In addition, since UC has many clusters, cross-cluster log transmission is also a fairly big problem.…
sysguard: an Nginx module that protects the server from high load
Nginx ("engine x") is a high-performance HTTP and reverse-proxy server, as well as an IMAP/POP3/SMTP proxy server, released under a BSD-like license. It was developed by Russian programmer Igor Sysoev for Rambler.ru, Russia's large portal and search engine (Russian: Рамблер) and second-most-visited site. The first public version, 0.1.0, was released on October 4, 2004, and version 1.0.4 on June 1, 2011. Nginx is known for its stability, rich feature set, simple configuration, and low system-resource consumption: it occupies little memory and has strong concurrency capability, indeed better than comparable web servers. Mainland-Chinese sites using nginx include Baidu, Sina, NetEase, Tencent, and others. If nginx is attacked, or traffic suddenly surges, high load or memory exhaustion can bring the server down and ultimately make the site inaccessible. The…
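The sysguard module (shipped with Tengine, Taobao's nginx fork, and also available as a patch for stock nginx) addresses exactly this scenario: it rejects requests once system load or memory pressure crosses a threshold. A hedged sketch of its typical directives; the threshold values and the /loadlimit location are examples, not recommendations:

```shell
# Sketch of ngx_http_sysguard configuration: when the load average or swap
# usage passes a threshold, requests are diverted to a lightweight response.
# Threshold values and the /loadlimit location are illustrative only.
cat > /tmp/sysguard.conf <<'EOF'
server {
    listen 80;
    sysguard on;
    sysguard_load load=10.5 action=/loadlimit;      # trip at load average 10.5
    sysguard_mem swapratio=20% action=/loadlimit;   # trip at 20% swap used
    location /loadlimit {
        return 503;    # cheap static answer while the box recovers
    }
}
EOF
grep -c sysguard /tmp/sysguard.conf
```

Serving a trivial 503 under overload costs almost nothing, which is what lets the machine recover instead of dying under the full request load.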