
A Lua implementation of a “fastest first” NGINX load balancing strategy

Recently I implemented a load balancing strategy on NGINX that gives priority to the backend service with the shortest response time. Strictly speaking, it is not quite accurate to call it NGINX: I actually relied on many of the modules that OpenResty bundles into NGINX, so it is more accurate to say the implementation is based on OpenResty.

The “fastest first” Lua implementation is available as part of my NGINX configuration, in dynamic-upstream-weight.lua. Besides the Lua code, we also need two global key-value caches as the data basis for adjusting the load balancing strategy. For configuration details, see my NGINX site configuration for blog.jamespan.me.
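In OpenResty, such global key-value caches are typically declared with the lua_shared_dict directive, which creates shared memory zones visible to all worker processes. The names and sizes below are placeholder assumptions for illustration, not the ones from my actual configuration:

```
http {
    # Hypothetical shared caches: one accumulates response times
    # per backend, the other counts requests per backend.
    lua_shared_dict rt_sum   1m;
    lua_shared_dict rt_count 1m;
}
```

In Lua these zones are then reachable as ngx.shared.rt_sum and ngx.shared.rt_count from any request-handling phase.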

In order to implement this load balancing feature, I modified lua-upstream-nginx-module and added a Lua API for modifying the weight of an upstream server. For details, see commit 6b40d40a4 of JamesPan/lua-upstream-nginx-module.

The “fastest first” load balancing strategy is really a kind of deliberately unbalanced strategy. Projected onto real life, it means the more capable a person is, the more work ends up on their plate, until one day they are overwhelmed: the legendary “let the capable do more work”. I am also trying to write an NGINX module in C to implement this “fastest first” strategy, and out of the train of thought above I decided to name it labor. Once the module is done, we will be able to use it like this, without invasive changes to the site configuration:

upstream backend {
    labor;
    server back1.example.com;
    server back2.example.com;
}

Background

Yep, I’m blogging again.

I manually deployed the blog’s Docker image onto my two ECS instances, added the two newly deployed containers to the blog’s backend list, and then ran into an interesting problem caused by network latency.

Blog deployment structure

Needless to say, when NGINX proxies traffic to the backend on the same ECS instance as itself, the response time is the shortest, with DOMContentLoaded under 500 ms. When traffic is proxied to GitHub or DaoCloud, it barely finishes within a second or so, with DOMContentLoaded between 800 ms and 1200 ms. Hence the idea of making the reverse proxy prefer the fast servers.

Implementation

Talk is cheap, show me the code.

Then again, I already showed the code right at the beginning, so what follows is all talk and no code.

In the beginning, the “fastest first” strategy was just an idea floating around in my head. To turn the idea into a prototype in the shortest possible time, writing an NGINX module directly in C was not realistic; at the very least it was too difficult for me. Thanks to the great work of agentzh and the OpenResty community, we can program NGINX in Lua.

Knowing that I could use Lua lowered the difficulty considerably, but I had never actually used Lua before.

“What is it like to write a usable software in a language you have never used before?”

“It seems to be back in the first year of college when I first started to learn programming. I think of running in the sunset that day, that is my lost youth.”

After some exploration, I found that the lua-upstream-nginx-module in OpenResty provides APIs for operating on upstream configurations, but mainly for reading; the only write operation is taking a backend offline. That is nowhere near enough, so I had to modify the code myself and add several APIs to support modifying a server’s weight, effective_weight, and current_weight attributes.
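With such an API in place, the log-phase Lua can read each peer’s measured response time and nudge its weight. The sketch below only illustrates the idea: upstream.get_primary_peers is a real lua-upstream-nginx-module function, but set_peer_weight, the shared-dict names, and the scaling formula are my hypothetical stand-ins, not necessarily what the commit above actually adds.

```lua
-- Hypothetical sketch; runs in a log_by_lua* context inside OpenResty.
local upstream = require "ngx.upstream"  -- from lua-upstream-nginx-module

local rt_sum   = ngx.shared.rt_sum      -- hypothetical shared caches
local rt_count = ngx.shared.rt_count

local function adjust_weights(u)
    local peers = assert(upstream.get_primary_peers(u))
    for i, peer in ipairs(peers) do
        local sum   = rt_sum:get(peer.name) or 0
        local count = rt_count:get(peer.name) or 0
        if count > 0 then
            local avg = sum / count  -- average response time in seconds
            -- Faster backends (smaller avg) get a larger weight.
            -- The scaling constant is arbitrary, for illustration only.
            local weight = math.max(1, math.floor(100 / (avg * 1000 + 1)))
            -- set_peer_weight is the hypothetical API added in my fork;
            -- peer ids are 0-based, matching set_peer_down's convention.
            upstream.set_peer_weight(u, i - 1, weight)
        end
    end
end

adjust_weights("backend")
```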

The subsequent development and debugging consisted of implementing the “fastest first” rules above in Lua inside NGINX, and wrestling with assorted small problems in log_by_lua and log_by_lua_block; in the end I found that the most reliable approach is to write the Lua in a separate file and call it with log_by_lua_file.
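Wired into the site configuration, that looks roughly like the following; the file path here is a placeholder, not my actual layout:

```
server {
    location / {
        proxy_pass http://backend;
        # The log phase runs after the response is sent: a good place
        # to record $upstream_response_time and adjust upstream weights.
        log_by_lua_file /etc/nginx/lua/dynamic-upstream-weight.lua;
    }
}
```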

Some interesting things

Finally, let me share the Dockerfile I use to package OpenResty; there are a few interesting things in it.

Right at the start, it installs several packages. The first three are runtime dependencies of OpenResty, while perl is a build dependency. Don’t ask me why; maybe agentzh is just very good with perl~

apk add openssl pcre libgcc perl

After that, it downloads the OpenResty source tarball and unpacks it, then downloads my modified module to replace the original one. That is followed by a long ./configure invocation before compiling.
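In Dockerfile form, that stage looks roughly like the sketch below. The version placeholder, URL, and configure flags are assumptions for illustration; the real values live in my Dockerfile:

```
# Hypothetical sketch of the build stage
RUN wget https://openresty.org/download/openresty-VERSION.tar.gz \
 && tar xzf openresty-VERSION.tar.gz \
 && cd openresty-VERSION \
 # swap in the patched lua-upstream-nginx-module before configuring
 && rm -rf bundle/lua-upstream-nginx-module-* \
 && ./configure --prefix=/usr/local/openresty --with-pcre-jit \
 && make && make install
```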

In fact, when we do not know how to compile a software package, we can first look in the distribution’s package repository for its build script, which is far more reliable than the various “how to compile and install XXX” posts on the Internet. See, for example, Alpine Linux’s NGINX APKBUILD.

Another interesting point concerns Docker’s log collection mechanism. The applications we write usually log directly to specific files, and NGINX is no exception, with fairly fixed log directories and files. Docker, however, only collects what is written to standard output and standard error, which is rather painful, so a few workarounds have emerged.

The first, clumsy workaround is to change the start command to a custom script that uses tail in the background to follow the log files and print them to standard output, and then starts the application. The second is the same trick in a different bottle, just a little less clumsy: someone wrote a Golang program called dockerize with a similar effect, except that it can be written directly in the Dockerfile’s CMD without introducing an extra script.
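With dockerize, the CMD might look like this. This is a sketch: dockerize does provide -stdout and -stderr flags for tailing files to the container’s output streams, but check its README for the exact invocation:

```
CMD ["dockerize", "-stdout", "/var/log/nginx/access.log", "-stderr", "/var/log/nginx/error.log", "nginx", "-g", "daemon off;"]
```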

Later, in some Dockerfile whose origin I no longer remember, I saw an amazing trick: symlink the log files to standard output and standard error, so the NGINX logs go straight to Docker’s log collector, and there is no need to worry about logs piling up inside the container after it has been running for a long time.

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

When I first needed to compile OpenResty, I tried building locally, but got stuck downloading the compiler and build dependencies; nothing would download, which was maddening. Then I had an idea: I committed the Dockerfile that prepares the build environment to GitHub, and Lingqueyun kicked off an automatic build. The whole image took a dozen or so minutes to build; once it was done, I pulled the image and had a usable build environment. I’m so smart!


Source: https://www.1024programmer.com/lua-implementation-of-faster-first-of-nginx-load-balancing-strategy/
