
Nginx+Tomcat dynamic and static separation to achieve load balancing

0. Preliminary preparation

A Debian environment is used. Install Nginx (default installation) and Tomcat (default installation), and have a web project ready to deploy.

1. The nginx.conf configuration file

# Define the user and group that Nginx runs as. If the server is exposed externally, use a low-privilege user to limit the damage from an intrusion

# user www www;

#Number of Nginx worker processes; recommended to equal the total number of CPU cores

worker_processes 8;

#Global error log and its level

error_log /var/log/nginx/error.log info;

#Process file

pid /var/run/nginx.pid;

#The maximum number of file descriptors an Nginx process may open; keep it consistent with ulimit -n

#Under high concurrency, raise ulimit -n and the related system parameters together with this value; this directive alone is not enough

worker_rlimit_nofile 65535;

events {

#Use the epoll model to improve performance

use epoll;

#The maximum number of connections for a single process

worker_connections 65535;

}

http{

#Extension and file type mapping table

include mime.types;

#default type

default_type application/octet-stream;

sendfile on;

tcp_nopush on;

tcp_nodelay on;

keepalive_timeout 65;

types_hash_max_size 2048;

#log

access_log /var/log/nginx/access.log;

error_log /var/log/nginx/error.log;

#gzip compressed transfer

gzip on;

gzip_min_length 1k; #minimum 1K

gzip_buffers 16 64K;

gzip_http_version 1.1;

gzip_comp_level 6;

gzip_types text/plain application/x-javascript text/css application/xml application/javascript;

gzip_vary on;

# load balancing group

# static server group

upstream static.zh-jieli.com {

server 127.0.0.1:808 weight=1;

}

#Dynamic server group

upstream zh-jieli.com {

server 127.0.0.1:8080;

#server 192.168.8.203:8080;

}

#Configure proxy parameters

proxy_redirect off;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

client_max_body_size 10m;

client_body_buffer_size 128k;

proxy_connect_timeout 65;

proxy_send_timeout 65;

proxy_read_timeout 65;

proxy_buffer_size 4k;

proxy_buffers 4 32k;

proxy_busy_buffers_size 64k;

#cache configuration

proxy_cache_key '$host:$server_port$request_uri';

proxy_temp_file_write_size 64k;

proxy_temp_path /dev/shm/JieLiERP/proxy_temp_path;

proxy_cache_path /dev/shm/JieLiERP/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;

proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;

server {

listen 80;

server_name erp.zh-jieli.com;

location / {

index index; #The default homepage is /index

#proxy_pass http://jieli;

}

location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff)$ {

proxy_cache cache_one;

proxy_cache_valid 200 304 302 5d;

proxy_cache_valid any 5d;

proxy_cache_key '$host:$server_port$request_uri';

add_header X-Cache '$upstream_cache_status from $host';

proxy_pass http://static.zh-jieli.com;

#All static files are directly read from the hard disk

# root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF ;

expires 30d; # cache for 30 days

}

#Other pages reverse proxy to the tomcat container

location ~ .*$ {

index index;

proxy_pass http://zh-jieli.com;

}

}

server {

listen 808;

server_name static;

location / {

}

location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff)$ {

#All static files are directly read from the hard disk

root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF ;

expires 30d; # cache for 30 days

}

}

}
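Before applying a configuration like this, it is worth checking that it parses cleanly. A minimal sketch, assuming nginx is on the PATH (the helper name reload_nginx is my own, not part of nginx):

```shell
# Reload nginx only when the configuration parses cleanly.
# reload_nginx is a hypothetical helper, not an nginx command.
reload_nginx() {
  # -t tests the config file; -s reload applies it without dropping connections
  nginx -t -c "${1:-/etc/nginx/nginx.conf}" && nginx -s reload
}

# Usage (run as root): reload_nginx /etc/nginx/nginx.conf
```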

After basically configuring this file, the load balancing can be realized, but the various relationships inside take more effort to understand. The cache-related directives from the configuration above are:

proxy_temp_path /dev/shm/JieLiERP/proxy_temp_path;

proxy_cache_path /dev/shm/JieLiERP/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;

proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;

location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff)$ {

proxy_cache cache_one;

proxy_cache_valid 200 304 302 5d;

proxy_cache_valid any 5d;

proxy_cache_key '$host:$server_port$request_uri';

add_header X-Cache '$upstream_cache_status from $host';

proxy_pass http://192.168.8.203:808;

expires 30d; # cache for 30 days

}

After these two configurations it basically works. Here are a few precautions, which are also problems that troubled me for a long time.

The proxy_ignore_headers directive in the first snippet above is important: if the <head> of the HTML pages in the web project specifies cache-control directives, those responses will not be cached unless proxy_ignore_headers is configured. Another point is that the file system permissions under /dev/shm are granted only to the root user by default, so chmod -R 777 /dev/shm is needed. This is not a very safe approach; if a specific user group can be granted access instead, that user is set in the first line of the configuration:

user www www;
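A less drastic alternative to chmod 777 can be sketched as follows, assuming the worker user is www (both the paths and the user come from the configuration above; adjust them to your system):

```shell
# Create the cache directories up front and hand them to the nginx
# worker user instead of opening /dev/shm to everyone.
CACHE_ROOT=${CACHE_ROOT:-/dev/shm/JieLiERP}
NGINX_USER=${NGINX_USER:-www}

mkdir -p "$CACHE_ROOT/proxy_temp_path" "$CACHE_ROOT/proxy_cache_path"

# chown needs root; fall back to a notice when run unprivileged
chown -R "$NGINX_USER:$NGINX_USER" "$CACHE_ROOT" 2>/dev/null \
  || echo "run as root to chown $CACHE_ROOT to $NGINX_USER"
chmod -R 750 "$CACHE_ROOT"
```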

The add_header X-Cache line in the second snippet above adds a response header so you can check whether the cache was hit.
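That header can also be checked from the command line. A sketch (the URL in the usage line is a placeholder; point it at any static asset served through this nginx):

```shell
# Print the X-Cache response header for a URL; with the configuration
# above, expect something like "MISS from <host>" on the first request
# and "HIT from <host>" once the object is cached.
x_cache() {
  curl -sI "$1" | awk 'tolower($1) == "x-cache:" { sub(/^[^ ]+ /, ""); print }'
}

# Usage: x_cache http://erp.zh-jieli.com/some/asset.css
```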

If we rm -rf all files under /dev/shm/JieLiERP/proxy_*, note that when testing repeatedly, nginx -s reload is required to re-read the configuration (or restart the service): rm -rf only deletes the cache files on disk, while the cached index structure is still held inside the nginx process. Without a reload, the stale structure remains and the cached resources become inaccessible.
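That clearing procedure can be sketched as (paths from the configuration above; reloading nginx requires root):

```shell
# Remove the cached files, then reload nginx so its in-memory cache
# index is rebuilt -- deleting the files alone leaves stale entries.
CACHE_ROOT=${CACHE_ROOT:-/dev/shm/JieLiERP}
rm -rf "$CACHE_ROOT"/proxy_temp_path "$CACHE_ROOT"/proxy_cache_path

if command -v nginx >/dev/null 2>&1; then
  nginx -s reload
else
  echo "nginx not found; reload or restart it manually"
fi
```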

So remember to reload. The following is the running effect.

On the first visit, the resource is fetched from the backend (the screenshot showed the X-Cache header).

For the second visit, press Ctrl+Shift+R in the browser to force a refresh; the X-Cache header now reports the cache status.

You can see the effect here. Let's take a look inside /dev/shm.
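Inside /dev/shm, cached objects live under proxy_cache_path; each file is named after the MD5 of the proxy_cache_key, and levels=1:2 splits that hash into subdirectories. A sketch with a hypothetical key:

```shell
# With proxy_cache_key '$host:$server_port$request_uri', a request for
# /index.css would hash a key like this one (hypothetical example).
key='erp.zh-jieli.com:80/index.css'
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)

# levels=1:2 -> first level is the last hash character,
# second level is the two characters before it
last1=${hash#"${hash%?}"}
last2=$(printf '%s' "$hash" | cut -c30-31)
echo "cache file: proxy_cache_path/$last1/$last2/$hash"
```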

It's almost over here. Finally, one more critical technique: clustering. This is where upstream comes in; you have already seen it in the configuration file at the beginning:

# load balancing group

# static server group

upstream static {

server 127.0.0.1:808 weight=1;

server 192.168.8.203:808 weight=1;

}

#Dynamic server group

upstream dynamic {

server 127.0.0.1:8080;

#server 192.168.8.203:8080;

}

The block above defines the cluster groups. upstream is the keyword; static and dynamic are the names of the two server groups. Taking the first as an example, server 127.0.0.1:808 is a backend address, and weight=1 is its weight; you can list as many servers as you like. I have tested this personally: if one node in the cluster goes down, the system keeps running. For other polling rules, refer to further material online; not much more to say here. As for how to use a group: change proxy_pass http://192.168.8.203:808 to proxy_pass http://static; and load balancing is achieved.

That's all for now. By configuring the parts above according to your own needs, you can achieve load balancing within a single machine room. One disadvantage of this approach is that if the front-end nginx crashes, the machines behind it become unreachable, so the front end itself needs to be load balanced across multiple nginx instances in multiple machine rooms. That is another topic which I have not researched yet; I'll talk about it when I have the chance.

If the dynamic server group needs to keep user state, there is a problem: sessions. For example, after I log in on server1, the next poll of the dynamic group may assign me to server2, forcing a fresh login. A workaround is to change the polling rule to hash on the client's IP, so each user is always assigned the same server. The configuration is as follows:

upstream dynamic{

ip_hash;

server 127.0.0.1:8080;

server 192.168.0.203:8080;

}

In this way, one user corresponds to one server node, so the repeated-login problem disappears. Another solution is to use a cache system for unified session storage and management. I haven't tried that approach myself; there are related articles in the reference materials, so you can look into it.


This article is from the internet and does not represent 1024programmer's position. Please indicate the source when reprinting: https://www.1024programmer.com/nginxtomcat-dynamic-and-static-separation-to-achieve-load-balancing/

Author: admin
