
CentOS 6.4 LVS + keepalived High-Availability Load Balancing Configuration

Date: 2015-01-17 02:34  Source: blog.51cto.com  Author: songxj
 
 
1: Test Environment
Server: Dell R720
Virtualization: KVM
Four virtual machines: ipvs01, ipvs02, web01, web02
[root@KVM01 ~]# virsh list
 Id    Name                 State
----------------------------------------------------
 1     SN-web01             running
 2     SN-web02             running
 3     SN-ipvs01            running
 4     SN-ipvs02            running
 
Network layout:
ipvs01: 192.168.40.90
ipvs02: 192.168.40.91
web01:  192.168.40.86
web02:  192.168.40.87
VIP:    192.168.40.6
 
OS versions:
Physical host: CentOS 6.4 64-bit
Virtual machines: CentOS 6.4 64-bit



2: Service Configuration
CentOS 6.4 already carries ipvsadm 1.25 and keepalived 1.2.7 in its base repository; for this test both were installed with yum.
 
Configuration on the master LVS server, ipvs01:
[root@ipvs01 ~]# yum list >yum.list
[root@ipvs01 ~]# cat yum.list|grep ipvs
ipvsadm.x86_64                         1.25-10.el6                       @base
[root@ipvs01 ~]# cat yum.list |grep keepalived
keepalived.x86_64                       1.2.7-3.el6                        @base
[root@ipvs01 ~]# yum install ipvsadm keepalived
[root@ipvs01 ~]# ipvsadm    # running ipvsadm once loads the ip_vs kernel module
[root@ipvs01 ~]# lsmod | grep ip_vs    # check that the ip_vs modules are loaded
ip_vs_rr                1420  1 
ip_vs_wrr               2179  0 
ip_vs                 115643  5 ip_vs_rr,ip_vs_wrr
libcrc32c               1246  1 ip_vs
ipv6                  321422  11 ip_vs
[root@ipvs01 ~]# cd /etc/keepalived/
[root@ipvs01 keepalived]# cp -a keepalived.conf ./keepalived.bak
[root@ipvs01 keepalived]# vi keepalived.conf
 
 
! Configuration File for keepalived
 
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_MASTER
}
 
vrrp_instance VI_1 {
    state MASTER             # role: MASTER here; set to BACKUP on the standby
    interface eth0           # interface VRRP advertisements are sent on
    virtual_router_id 51     # virtual router ID; must match across the VRRP group
    priority 100             # priority; the MASTER's must be higher than the BACKUP's
    advert_int 1             # advertisement interval: 1 second
    authentication {         # peer authentication
        auth_type PASS       # authentication type: PASS or AH
        auth_pass 1111       # password; must match across the VRRP group
    }
    virtual_ipaddress {      # virtual service address(es); several allowed, one per line
        192.168.40.6
    }
}
 
virtual_server 192.168.40.6 80 {    # virtual service address and port
    delay_loop 6             # health-check interval, in seconds
    lb_algo rr               # scheduling algorithm; rr = round-robin
    lb_kind DR               # LVS forwarding mode: DR (the others are NAT and TUN)
    nat_mask 255.255.255.0   # netmask
    #persistence_timeout 50  # session persistence: requests within 50 s go to the same
                             # node; keep it commented out while testing so the
                             # balancing effect is visible
    protocol TCP             # protocol: TCP or UDP
 
    real_server 192.168.40.86 80 {  # real server node 1: IP and port
        weight 5             # weight; a higher value receives more connections
        TCP_CHECK {          # TCP health check for this real server
            connect_timeout 3       # connection timeout: 3 seconds
            nb_get_retry 3          # number of retries: 3
            delay_before_retry 3    # delay between retries, in seconds
            connect_port 80         # port to check
        }
    }
    real_server 192.168.40.87 80 {
        weight 5
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
 
On the backup server ipvs02, copy the master's configuration file over and change two settings:
state BACKUP
priority 90     (any value lower than the master's 100)
Everything else stays the same.
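The two edits on the backup can also be scripted; a minimal sketch, in which the /tmp demo file is hypothetical and stands in for the copy of /etc/keepalived/keepalived.conf brought over from ipvs01:

```shell
# Demo file standing in for the master's keepalived.conf copied to ipvs02
cfg=/tmp/keepalived.conf.demo
printf 'state MASTER\npriority 100\n' > "$cfg"

# Turn the master's two settings into the backup's
sed -i -e 's/state MASTER/state BACKUP/' \
       -e 's/priority 100/priority 90/' "$cfg"
cat "$cfg"
```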
 


3: Real Server Configuration
web01 and web02; installation of the httpd service is omitted here.
[root@web01 ~]# cat /var/www/html/index.html 
<h1>WEB01/192.168.40.86</h1>
[root@web02 ~]# cat /var/www/html/index.html 
<h1>WEB02/192.168.40.87</h1>
 
Run the following commands on both web01 and web02:
ifconfig lo:0 192.168.40.6 netmask 255.255.255.255 up   # /32: the VIP on lo must not cover the whole subnet
route add -host 192.168.40.6 dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
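In practice these commands are usually wrapped in a small start/stop script so the VIP and the ARP settings can be torn down again; a sketch using this article's VIP (the /tmp path is only for illustration; a real deployment would put it under /etc/init.d):

```shell
# Write a minimal realserver start/stop wrapper and syntax-check it
cat > /tmp/realserver.sh <<'EOF'
#!/bin/sh
VIP=192.168.40.6
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 up
    route add -host $VIP dev lo:0
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    route del -host $VIP dev lo:0 2>/dev/null
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    ;;
esac
EOF
sh -n /tmp/realserver.sh && echo "syntax OK"
```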
 
After these commands run, the routing table contains a host route for the VIP pointing at the lo interface:
[root@web01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.40.6    0.0.0.0         255.255.255.255 UH    0      0        0 lo
 


4: Testing LVS
Start the keepalived service on ipvs01 and ipvs02.
Start the httpd service on web01 and web02.
On the master LVS server, LVS status output like the following shows that the configuration has taken effect:
[root@ipvs01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.40.6:80 rr persistent 50
  -> 192.168.40.86:80             Route   5      0          0         
  -> 192.168.40.87:80             Route   5      0          0  
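The real-server lines of that output can also be checked mechanically; a sketch, where a here-doc stands in for the live `ipvsadm -ln` call:

```shell
# Stub reproducing the `ipvsadm -ln` output above; on the LVS host,
# replace the function body with: ipvsadm -ln
ipvsadm_out() {
cat <<'EOF'
TCP  192.168.40.6:80 rr persistent 50
  -> 192.168.40.86:80             Route   5      0          0
  -> 192.168.40.87:80             Route   5      0          0
EOF
}
# Print each real server with its weight (field 2 = address, field 4 = weight)
ipvsadm_out | awk '/->/ { print $2, "weight=" $4 }' > /tmp/rs.txt
cat /tmp/rs.txt
```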
 
As `ip a` shows, the VIP currently sits on ipvs01, while on the backup server `ip a` shows no VIP:
[root@ipvs01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:06:88:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.90/24 brd 192.168.40.255 scope global eth0
    inet 192.168.40.6/32 scope global eth0
    inet6 fe80::5054:ff:fe06:88f4/64 scope link 
       valid_lft forever preferred_lft forever
 
>Load-balancing test:
Opening http://192.168.40.6/ in a browser and refreshing shows the WEB01 and WEB02 pages alternating (screenshots omitted).
 
>HA failover test:
Stop the keepalived service on ipvs01, then check `ip a` on ipvs02:
[root@ipvs01 ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@SN349_ipvs02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:45:f8:78 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.91/24 brd 192.168.40.255 scope global eth0
    inet 192.168.40.6/32 scope global eth0
    inet6 fe80::5054:ff:fe45:f878/64 scope link 
       valid_lft forever preferred_lft forever
 
At this point ipvs02 has bound the VIP and taken over service from ipvs01.
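Which node currently holds the VIP can be checked from a script in the same way; a sketch, with a here-doc standing in for `ip -4 addr show eth0` (here reproducing ipvs02's post-failover state from above):

```shell
# Stub reproducing ipvs02's addresses after failover; on a real node,
# replace the function body with: ip -4 addr show eth0
addr_show() {
cat <<'EOF'
    inet 192.168.40.91/24 brd 192.168.40.255 scope global eth0
    inet 192.168.40.6/32 scope global eth0
EOF
}
# The node holding the VIP is the active MASTER
if addr_show | grep -q 'inet 192.168.40.6/'; then
    echo "VIP present: this node currently serves 192.168.40.6"
else
    echo "VIP absent: this node is standing by"
fi > /tmp/vip.txt
cat /tmp/vip.txt
```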

