
NGINX -- HTTP & TCP Load Balancing

I. HTTP Load Balancing

1. Overview

When an application runs as multiple instances, nginx is commonly placed in front of them as a load balancer that distributes incoming traffic across the instances, in order to increase throughput, reduce latency, optimize resource utilization, and provide fault tolerance.

2. The simplest HTTP load-balancing configuration

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    upstream myapp2 {
        server srva1.example.com;
        server srva2.example.com;
        server srva3.example.com;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
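In the snippet above only myapp1 is actually referenced. As a minimal sketch (the /app2/ location is hypothetical and not part of the original example), the second group could be wired up in the same server block:

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
    }

    # hypothetical path, added only to illustrate routing to the second group
    location /app2/ {
        proxy_pass http://myapp2;
    }
}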

3. Load-balancing methods supported by nginx

1. round-robin -- the default method; requests are handed to the servers in turn, and if no weights are set the traffic is spread evenly across them
2. least-connected (least_conn) -- the next request goes to the server with the fewest active connections
3. ip-hash (ip_hash) -- a hash function of the client's IP address determines which server receives the request, so requests from the same client always reach the same server
They are configured as follows:

upstream myapp1 {
    ip_hash;
    server ip1;
    server ip2;
    server ip3;
}

upstream myapp2 {
    least_conn;
    server ip1;
    server ip2;
    server ip3;
}

# round robin is the default, so no extra directive is needed
upstream myapp3 {
    server ip1;
    server ip2;
    server ip3;
}
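If one of the servers in an ip_hash group has to be taken out of rotation temporarily (for maintenance, for example), it can be marked with the down parameter so that the hashing of the remaining client addresses is preserved. A minimal sketch:

upstream myapp1 {
    ip_hash;
    server ip1;
    server ip2 down;   # temporarily excluded from the rotation
    server ip3;
}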

4. Setting server weights (weight)

With the round-robin method, nginx distributes requests evenly across the upstream servers if no weight is set. nginx also lets you skew this distribution by assigning a weight to a server, as follows:

upstream myapp3 {
    server ip1 weight=3;
    server ip2;
    server ip3;
}

With this configuration, out of every 5 incoming requests nginx sends 3 to ip1, 1 to ip2 and 1 to ip3; in other words, ip1 receives 60% of the traffic while ip2 and ip3 receive 20% each.
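Weights are not limited to round robin; in recent nginx versions they can also be combined with the least-connected and ip-hash methods. A minimal sketch (the group name myapp4 and the server names are placeholders):

upstream myapp4 {
    least_conn;
    server ip1 weight=3;   # preferred as long as its active connection count allows
    server ip2;
    server ip3;
}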

5. Health checks and the server directive syntax

The configuration below shows the most common server parameters (weight, failure thresholds, slow start, DNS re-resolution, backup servers) together with the health_check directive; the meaning of each parameter is noted in the comments:

http {
    upstream dynamic {
        zone upstream_dynamic 64k;       # shared memory zone holding the group's run-time state

        server backend1.example.com      weight=5;                        # receives 5x the traffic of a weight=1 server
        server backend2.example.com:8080 fail_timeout=5s slow_start=30s;  # slow_start ramps traffic back up after recovery
        server 192.0.2.1                 max_fails=3;                     # unavailable after 3 failed attempts within fail_timeout
        server backend3.example.com      resolve;                         # re-resolve the DNS name periodically
        server backend4.example.com      service=http resolve;            # resolve DNS SRV records

        server backup1.example.com:8080  backup;                          # used only when the primary servers are unavailable
        server backup2.example.com:8080  backup;
    }

    server {
        location / {
            proxy_pass http://dynamic;
            health_check;                # enable active health checks for this upstream group
        }
    }
}

The general form of the server directive is:

Syntax:  server address [parameters];
Default: —
Context: upstream
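Note that the health_check directive above performs active health checks and is only available in the commercial NGINX Plus subscription (as is slow_start). Open-source nginx relies on passive health checks instead: the max_fails and fail_timeout parameters of the server directive, which take a server out of rotation after repeated failures. A minimal sketch:

upstream dynamic {
    # after 3 failed attempts within 30s, the server is skipped for the next 30s
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}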

II. TCP Load Balancing

1. Introduction

The ngx_stream_core_module module, which provides TCP and UDP load balancing, was introduced in version 1.9.0. It is not built by default, so to use it nginx has to be compiled with the --with-stream configure parameter (you can check whether an existing binary already includes it by looking for --with-stream in the output of nginx -V):

configure   ...  --with-stream  ...

2. The simplest TCP load-balancing configuration

worker_processes auto;

error_log /var/log/nginx/error.log info;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        # consistent hashing on the client address keeps a client on the same backend
        hash $remote_addr consistent;

        server backend1.example.com:12345 weight=5;
        server 127.0.0.1:12345            max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }

    upstream dns {
        server 192.168.0.1:53535;
        server dns.example.com:53;
    }

    # TCP proxy: connections accepted on port 12345 are forwarded to the backend group
    server {
        listen 12345;
        proxy_connect_timeout 1s;   # give up if a backend does not accept the connection within 1s
        proxy_timeout 3s;           # close the session after 3s without data in either direction
        proxy_pass backend;
    }

    # UDP proxy for DNS traffic
    server {
        listen 127.0.0.1:53 udp reuseport;
        proxy_timeout 20s;
        proxy_pass dns;
    }

    # proxying to a UNIX-domain socket
    server {
        listen [::1]:12345;
        proxy_pass unix:/tmp/stream.socket;
    }
}
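The stream module also supports the least-connected method. As a minimal sketch (the listen port and host names are placeholders), a TCP upstream balanced by active connections could look like this:

stream {
    upstream tcp_backend {
        least_conn;                        # pick the backend with the fewest active connections
        server backend1.example.com:12345;
        server backend2.example.com:12345;
    }

    server {
        listen 12346;
        proxy_pass tcp_backend;
    }
}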