
"An In-Depth Analysis of Linux keepalived and LVS", Part 3: Load Scheduling Algorithms

Date: 2014-12-21 19:30  Source: linux.it.net.cn  Author: IT
VII) Load scheduling algorithms
 
1) Round Robin (rr)
The scheduler uses the "round robin" algorithm to hand external requests to the real servers in the cluster one after another, in order. Every server is treated equally, regardless of its actual number of connections or its system load.
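For reference, the rr service that keepalived builds from its configuration could also be created by hand with ipvsadm. A minimal sketch, assuming the DR forwarding mode and port 80 used throughout this article:

# create the virtual service 10.1.1.166:80 with the rr scheduler
ipvsadm -A -t 10.1.1.166:80 -s rr
# add both real servers in direct-routing mode (-g) with equal weight
ipvsadm -a -t 10.1.1.166:80 -r 10.1.1.163:80 -g -w 1
ipvsadm -a -t 10.1.1.166:80 -r 10.1.1.164:80 -g -w 1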
 
Let's look at the effect of round-robin scheduling. On the director, watch the IPVS table in a loop:
while ((1)); do ipvsadm -l; sleep 1; done
 
Client test:
ab -n 1000 -c 100 http://10.1.1.166/
Note:
ab requests the home page of 10.1.1.166 1000 times, with 100 concurrent connections.
 
The LVS director shows the following:
while ((1)); do ipvsadm -l; sleep 1; done
   IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      0          0         
  -> 10.1.1.163:www               Route   1      0          0         
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      5          95        
  -> 10.1.1.163:www               Route   1      6          94        
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      6          138       
  -> 10.1.1.163:www               Route   1      5          140       
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      38         174       
  -> 10.1.1.163:www               Route   1      37         176       
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      5          290       
  -> 10.1.1.163:www               Route   1      3          293       
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      19         483       
  -> 10.1.1.163:www               Route   1      19         483       
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www rr
  -> 10.1.1.164:www               Route   1      0          502       
  -> 10.1.1.163:www               Route   1      0          502       
 
We can see that the 1000 connections are distributed evenly across the two machines.
 
Note:
ActiveConn is the number of active connections, i.e. TCP connections in the ESTABLISHED state.
InActConn counts the TCP connections in every state other than ESTABLISHED.
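To see the individual connection entries from which these two counters are derived, the IPVS connection table can be listed as well:

# list the connection table; the state column shows ESTABLISHED, FIN_WAIT, TIME_WAIT, etc.
ipvsadm -L -n -c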
 
2) Weighted Round Robin (wrr)
With the "weighted round robin" algorithm the scheduler uses each real server's weight to compute a ratio and sends proportionally more requests to the servers with higher weights.
If no weight is defined, the weight defaults to 1 and wrr behaves exactly like rr.
Let's change the weights as follows:
virtual_server 10.1.1.166 80 {
        delay_loop 6
        lb_algo wrr
        lb_kind DR
        #persistence_timeout 60
        protocol TCP
        real_server 10.1.1.163 80 {
                weight 10
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 10.1.1.164 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}
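Restarting keepalived applies the new weights. A weight can also be adjusted on a running director directly with ipvsadm, without touching the keepalived configuration (a sketch for the DR service above; keepalived may restore its own configured value the next time it reconfigures the service):

# raise the weight of 10.1.1.163 on the live virtual service to 10
ipvsadm -e -t 10.1.1.166:80 -r 10.1.1.163:80 -g -w 10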
 
 
Now we run the test again.
Client:
ab -n 1000 -c 100 http://10.1.1.166/
 
On the LVS director:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wrr
  -> 10.1.1.164:www               Route   5      0          0         
  -> 10.1.1.163:www               Route   10     0          0         
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wrr
  -> 10.1.1.164:www               Route   5      0          31        
  -> 10.1.1.163:www               Route   10     1          60        
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wrr
  -> 10.1.1.164:www               Route   5      0          39        
  -> 10.1.1.163:www               Route   10     0          77        
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wrr
  -> 10.1.1.164:www               Route   5      24         172       
  -> 10.1.1.163:www               Route   10     49         344       
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wrr
  -> 10.1.1.164:www               Route   5      3          251       
  -> 10.1.1.163:www               Route   10     6          502  
  
Here LVS spreads the requests across the two real servers according to their weights: 10.1.1.163 has weight 10 and 10.1.1.164 has weight 5, so 10.1.1.163 handles roughly twice as many requests as 10.1.1.164.
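The 2:1 split is easier to confirm from the cumulative per-server counters than from snapshots of ActiveConn/InActConn:

# zero the counters before a test run, then compare the Conns column afterwards
ipvsadm -Z
ipvsadm -L -n --stats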
 
 
3) Least Connection (lc)
This algorithm assigns each new connection request to the server that currently has the fewest connections. Least-connection scheduling is a dynamic algorithm: it estimates a server's load from the number of connections it is currently serving.
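A rough sketch of the selection rule: lc ranks every real server by an overhead value usually described as ActiveConn*256 + InActConn (treat the exact constant as an assumption) and picks the smallest. The next choice can be estimated straight from the ipvsadm output:

# print the real server lc would pick next (smallest ActiveConn*256 + InActConn)
ipvsadm -L -n | awk '$1 == "->" && $2 != "RemoteAddress:Port" {print $5*256+$6, $2}' | sort -n | head -1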
 
We change the LVS scheduling algorithm as follows:
virtual_server 10.1.1.166 80 {
        delay_loop 6
        lb_algo lc
        lb_kind DR
        #persistence_timeout 60
        protocol TCP
        real_server 10.1.1.163 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 10.1.1.164 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}
 
Now test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
 
Observe the connection state on the LVS director:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lc
  -> 10.1.1.164:www               Route   5      54         2717      
  -> 10.1.1.163:www               Route   5      19         2730      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lc
  -> 10.1.1.164:www               Route   5      9          3038      
  -> 10.1.1.163:www               Route   5      35         2981      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lc
  -> 10.1.1.164:www               Route   5      45         3533      
  -> 10.1.1.163:www               Route   5      18         3579
 
Once the connection count on one real server climbs, the scheduler starts sending more of the new requests to the other real server; the load is kept balanced through this repeated adjustment.
 
 
4) Weighted Least Connection (wlc)
This algorithm is a superset of least-connection scheduling, with each server's weight representing its processing capacity. It therefore copes better with real servers whose processing capacities differ.
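The weight enters the same calculation: wlc prefers the server with the smallest (ActiveConn*256 + InActConn) / weight (again, take the exact formula as an assumption based on the usual LVS description). A quick way to see which server is currently favoured:

# rank real servers by (ActiveConn*256 + InActConn) / Weight; the smallest value wins
ipvsadm -L -n | awk '$1 == "->" && $2 != "RemoteAddress:Port" && $4 > 0 {print ($5*256+$6)/$4, $2}' | sort -n | head -1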
 
We change the LVS scheduling algorithm as follows:
virtual_server 10.1.1.166 80 {
        delay_loop 6
        lb_algo wlc
        lb_kind DR
        #persistence_timeout 60
        protocol TCP
        real_server 10.1.1.163 80 {
                weight 10
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 10.1.1.164 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}
 
Now test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
 
Observe the connection state on the LVS director:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wlc
  -> 10.1.1.164:www               Route   5      42         1914      
  -> 10.1.1.163:www               Route   10     22         2489      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wlc
  -> 10.1.1.164:www               Route   5      9          2067      
  -> 10.1.1.163:www               Route   10     35         2723      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wlc
  -> 10.1.1.164:www               Route   5      53         2342      
  -> 10.1.1.163:www               Route   10     2          3159      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www wlc
  -> 10.1.1.164:www               Route   5      58         2350      
  -> 10.1.1.163:www               Route   10     0          3153      
 
Under weighted least-connection scheduling the server with the higher weight still receives more connections, but because the choice is still made on a least-connection basis, the effect of the weights is less pronounced than with wrr.
 
 
5) Locality-Based Least Connection (lblc)

The algorithm assumes that any server can handle any request. Its design goal is, while keeping the servers' load roughly balanced, to schedule requests with the same destination IP address to the same server, which improves access locality and main-memory/cache hit rates on each server and thereby increases the processing capacity of the whole cluster.
If the selected server is overloaded, an available server is chosen by the "least connections" rule and the request is sent to that server.
 
Here we adjust the LVS scheduling algorithm as follows:
virtual_server 10.1.1.166 80 {
        delay_loop 6
        lb_algo lblc
        lb_kind DR
        #persistence_timeout 60
        protocol TCP
        real_server 10.1.1.163 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 10.1.1.164 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}
 
Test:
ab -c 1 -n 10000 http://10.1.1.166/index.html
 
Check on the LVS director:
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lblc
  -> 10.1.1.164:www               Route   5      0          2182      
  -> 10.1.1.163:www               Route   5      0          0         
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lblc
  -> 10.1.1.164:www               Route   5      1          2436      
  -> 10.1.1.163:www               Route   5      0          0         
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www lblc
  -> 10.1.1.164:www               Route   5      1          2622      
  -> 10.1.1.163:www               Route   5      0          0         
  
Note:
The lblc algorithm forwards every request for the VIP to 10.1.1.164, following the rule that requests for the same destination IP address go to the same machine.
If we raise the concurrency to 100, the requests are spread across both real servers: once the selected server is overloaded, the "least connections" rule picks another available machine to handle the requests (the re-test command is shown below).
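The higher-concurrency re-test is the same ab invocation as before, only with -c raised to 100:

ab -c 100 -n 10000 http://10.1.1.166/index.html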
 
 
 
6) Locality-Based Least Connection with Replication (lblcr)

It is essentially the same as LBLC; the only difference is that it maintains a mapping from a destination IP address to a set of servers, whereas LBLC maps a destination IP address to a single server.
The main weakness of LBLC is that a single cache server may not be able to keep up with the requests for a "hot" site. LBLC then selects another cache server from the pool by the "least connections" rule and maps the hot site to it; that server soon becomes overloaded as well, and the process repeats, choosing yet another cache server. As a result, a copy of the hot site can end up on every cache server, which lowers the efficiency of the cache servers. lblcr avoids this by mapping the hot site to a set of servers that grows as the load requires, instead of migrating it from one server to the next.
 
 
The test method is the same as for the lblc algorithm and is not repeated here; only the scheduler line in the configuration changes, as shown below.
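If you do want to try it, only the scheduler line in the virtual_server block used for the lblc test needs to change:

        lb_algo lblcr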
 
 
7) Destination Hashing (dh)
This algorithm also balances load by destination IP address, but it is a static mapping: a hash function maps each destination IP address to a server.
Destination-hashing scheduling takes the destination IP address of the request as the hash key and looks up the corresponding server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise null is returned (the request is dropped).
 
Test:
ab -c 1 -n 10000 http://10.1.1.166/index.html
 
Check on the LVS director:
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www dh
  -> 10.1.1.164:www               Route   5      0          1924      
  -> 10.1.1.163:www               Route   5      0          0         
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www dh
  -> 10.1.1.164:www               Route   5      0          2086      
  -> 10.1.1.163:www               Route   5      0          0         
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www dh
  -> 10.1.1.164:www               Route   5      0          2240      
  -> 10.1.1.163:www               Route   5      0          0       
  
 
Note: the LVS director forwards every request to 10.1.1.164, because the hash function maps the destination 10.1.1.166 (the VIP) to 10.1.1.164.
If we use 10.1.1.167 as the VIP instead, the requests land on 10.1.1.163; this may be related to whether the IP address is odd or even.
 
Next we add 10.1.1.167 as a second VIP and test again.
Add 10.1.1.167 to the virtual_ipaddress block:
virtual_ipaddress {
                10.1.1.166
                10.1.1.167
        }
}
 
And add a new virtual_server block:
virtual_server 10.1.1.167 80 {
        delay_loop 6
        lb_algo dh
        lb_kind DR
        #persistence_timeout 60
        protocol TCP
        real_server 10.1.1.163 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 10.1.1.164 80 {
                weight 5
                TCP_CHECK {
                        connect_timeout 10
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}
 
Client test:
ab -c 100 -n 10000 http://10.1.1.167/index.html
 
The results are as follows:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www dh
  -> 10.1.1.164:www               Route   5      0          0         
  -> 10.1.1.163:www               Route   5      0          0         
TCP  10.1.1.167:www dh
  -> 10.1.1.164:www               Route   5      0          0         
  -> 10.1.1.163:www               Route   5      65         2930      
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www dh
  -> 10.1.1.164:www               Route   5      0          0         
  -> 10.1.1.163:www               Route   5      0          0         
TCP  10.1.1.167:www dh
  -> 10.1.1.164:www               Route   5      0          0         
  -> 10.1.1.163:www               Route   5      0          10054     
 
This time LVS forwards all the connections to 10.1.1.163 via the dh algorithm.
 
8) Source Hashing (sh)

This algorithm is the exact opposite of destination-hashing: it takes the source IP address of the request as the hash key and looks up the corresponding server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise null is returned (the request is dropped).
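Before running ab, the static mapping for a single client can be verified with one request and the cumulative counters (a small sketch using the same tools as above):

ipvsadm -Z                                  # on the director: zero the counters
ab -n 1 -c 1 http://10.1.1.166/index.html   # on the client: send a single request
ipvsadm -L -n --stats                       # on the director: exactly one real server's Conns increases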
 
Test (run from the client 10.1.1.165):
ab -c 100 -n 10000 http://10.1.1.166/index.html
 
  
  
Note: with the sh scheduling algorithm the hash function is computed over the client's source IP address (10.1.1.165), and all the requests are forwarded to 10.1.1.163.
What happens if we issue the requests from 10.1.1.22 instead?
The traffic is then forwarded to 10.1.1.164, as shown below.
Client test (from 10.1.1.22):
ab -c 100 -n 10000 http://10.1.1.166/index.html
 
The result:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www sh
  -> 10.1.1.164:www               Route   5      2          3629      
  -> 10.1.1.163:www               Route   5      0          0         
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.166:www sh
  -> 10.1.1.164:www               Route   5      0          10033     
  -> 10.1.1.163:www               Route   5      0          0  
  
The requests from 10.1.1.22 are indeed forwarded to 10.1.1.164 this time.
