Linux Dual-NIC Bonding
Date: 2014-08-28 19:46 Source: linux.it.net.cn Author: it
Linux dual-NIC bonding uses the bonding driver to aggregate two physical NICs into a single virtual network device that shares one IP address, providing load balancing and NIC redundancy. It is generally recommended that both NICs use the same chipset model.
The following exercise uses Red Hat Enterprise Linux 5.4 in a VMware virtual machine with two NICs added.
1. RHEL 5.x supports bonding out of the box, which can be verified with:
[root@server1 ~]# modinfo bonding
filename: /lib/modules/2.6.18-164.el5/kernel/drivers/net/bonding/bonding.ko
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.4.0
version: 3.4.0
license: GPL
srcversion: 3D3684A1DE11F2E8B0D4E80
depends: ipv6
vermagic: 2.6.18-164.el5 SMP mod_unload 686 REGPARM 4KSTACKS gcc-4.1
parm: max_bonds:Max number of bonded devices (int)
parm: num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
parm: num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation : 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
parm: xmit_hash_policy:XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes: none (default), active, backup or all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC. none (default), active or follow (charp)
module_sig: 883f3504a8b7aed18758d6145e112aa1909f62632a4a9b30e790b7b31a74bde31a772fa4909f40969e891a448344afce6ded18dd8e6ddf11a4
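The mode and miimon parameters listed above can also be passed on the command line when loading the module by hand, which is a quick way to confirm the driver accepts them before writing any configuration files (an optional check; the module is removed again afterwards so it does not interfere with the persistent setup below):
[root@server1 ~]# modprobe bonding mode=0 miimon=100
[root@server1 ~]# lsmod | grep bonding
[root@server1 ~]# modprobe -r bonding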
2. Create the virtual bonding interface and configure its interface settings:
[root@server1 ~]# cd /etc/sysconfig/network-scripts/
[root@server1 network-scripts]# cp ifcfg-lo ifcfg-bond0
[root@server1 network-scripts]# vim ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.0.254
NETMASK=255.255.255.0
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=100"
In BONDING_OPTS, mode=0 selects the load-balancing (round-robin) mode; a value of 1 would select the active-backup (redundancy) mode instead. miimon=100 makes the driver check the link state every 100 ms, so if one link goes down, traffic moves to the other.
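Once the bond is up (after step 4), the active mode and polling interval can be read back through sysfs as a sanity check; these are the standard bonding sysfs attributes on 2.6.18-era kernels, and for mode 0 the mode file should read balance-rr:
[root@server1 ~]# cat /sys/class/net/bond0/bonding/mode
balance-rr 0
[root@server1 ~]# cat /sys/class/net/bond0/bonding/miimon
100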
[root@server1 network-scripts]# vim ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@server1 network-scripts]# vim ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
3. Configure the system to load the bonding module for bond0 at boot time by appending one line to /etc/modprobe.conf:
[root@server1 ~]# vim /etc/modprobe.conf
alias eth0 vmxnet
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
# Added by VMware Tools
install pciehp /sbin/modprobe -q --ignore-install acpiphp; /bin/true
install pcnet32 (/sbin/modprobe -q --ignore-install vmxnet || /sbin/modprobe -q --ignore-install pcnet32 $CMDLINE_OPTS);/bin/true
alias eth1 vmxnet
alias bond0 bonding
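With the alias in place, the mapping can be checked without rebooting: asking modprobe for bond0 should pull in the bonding module (note that the BONDING_OPTS from ifcfg-bond0 are applied separately, when the network service brings the interface up):
[root@server1 ~]# modprobe bond0
[root@server1 ~]# lsmod | grep bonding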
4. Restart the network service and check the bonding status:
[root@server1 ~]# /etc/init.d/network restart
[root@server1 ~]# ifconfig
bond0 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48
inet addr:192.168.0.254 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe41:e148/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:2913 errors:0 dropped:0 overruns:0 frame:0
TX packets:2338 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:274948 (268.5 KiB) TX bytes:357004 (348.6 KiB)
eth0 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2289 errors:0 dropped:0 overruns:0 frame:0
TX packets:2176 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:209531 (204.6 KiB) TX bytes:326498 (318.8 KiB)
Interrupt:59 Base address:0x2024
eth1 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:629 errors:0 dropped:0 overruns:0 frame:0
TX packets:175 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:65717 (64.1 KiB) TX bytes:32508 (31.7 KiB)
Interrupt:51 Base address:0x20a4
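Note that bond0, eth0, and eth1 all report the same hardware address (00:0C:29:41:E1:48): in this mode the bond assigns one slave's MAC to every member. A quick way to spot-check this:
[root@server1 ~]# ifconfig | grep HWaddr
bond0 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48
eth0 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48
eth1 Link encap:Ethernet HWaddr 00:0C:29:41:E1:48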
[root@server1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:0c:29:41:e1:48
Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:0c:29:41:e1:52
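Before relying on the bond, it is worth simulating a link failure: keep a continuous ping to 192.168.0.254 running from another host, take one slave down, and confirm the ping continues (a sketch using this example's interface names; the exact /proc output after the failure depends on how the slave is taken down):
[root@server1 ~]# ifdown eth0                  # simulate losing one slave
[root@server1 ~]# cat /proc/net/bonding/bond0  # bond0 should stay up on eth1
[root@server1 ~]# ifup eth0                    # restore the slave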
This completes the NIC load-balancing setup using bonding.