
Highly available MySQL with corosync + pacemaker + drbd

Date: 2016-03-19 19:00  Source: linux.it.net.cn  Author: IT
The topology is shown below. Note that MySQL master/slave replication is not configured here; only high availability of the primary MySQL instance is covered.

[Figure: cluster topology diagram]

node1:
  heartbeat: 172.16.0.11
  drbd: 10.1.1.11
  static IP: 192.168.1.166

node2:
  heartbeat: 172.16.0.12
  drbd: 10.1.1.12
  static IP: 192.168.1.167

VIP: 192.168.1.161
mysql: 5.5.35
linux: CentOS 6.4 (64-bit)

Prerequisites: on both hosts, SELinux is disabled, all iptables rules are flushed, and time is synchronized. The required iptables rules can be added back afterwards.

[root@node1 tools]#cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.166 node1.example.com node1
192.168.1.167 node2.example.com node2
[root@node1 tools]#yum install pacemaker corosync rpm-build -y
[root@node1 tools]#rpm -ivh crmsh-1.2.6-4.el6.src.rpm
[root@node1 tools]#rpm -ivh pssh-2.3.1-2.el6.src.rpm
[root@node1 tools]# cd /root/rpmbuild/SPECS
[root@node1 SPECS]#rpmbuild -bb crmsh.spec # install any missing rpm packages as prompted
[root@node1 SPECS]#rpmbuild -bb pssh-CentOS_CentOS-6.spec # install any missing rpm packages as prompted
The resulting rpm packages are generated under /root/rpmbuild/RPMS/x86_64.
[root@node1 SPECS]# yum install python-lxml  cluster-glue-libs-devel pacemaker-libs-devel asciidoc  autoconf automake libtool redhat-rpm-config  -y
[root@node1 SPECS]# yum install python-devel python-setuptools python-setuptools-devel -y
[root@node1 SPECS]# cd /root/rpmbuild/RPMS/x86_64
[root@node1 SPECS]# scp crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm node1:/tools
[root@node1 SPECS]# scp crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm node2:/tools
[root@node1 tools]# yum --nogpgcheck localinstall pssh-2.3.1-2.el6.x86_64.rpm crmsh-1.2.6-4.el6.x86_64.rpm
Create the backing device for the DRBD resource:
[root@node1 ~]# pvcreate /dev/sdb1
[root@node1 ~]# vgcreate vg_data /dev/sdb1
[root@node1 ~]# lvcreate -L 5G -n lv_data vg_data
Repeat the same steps on node2.
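As a quick sanity check before handing the LV to DRBD, the `lvs` output can be parsed to confirm the volume exists at the expected size. A minimal sketch; the sample line is hypothetical and stands in for real `lvs --noheadings -o lv_name,vg_name,lv_size` output on a node:

```shell
# Confirm that lv_data exists in vg_data. The sample line below is
# hypothetical; on a real node replace it with:
#   lvs --noheadings -o lv_name,vg_name,lv_size vg_data
lvs_output="  lv_data vg_data 5.00g"

lv_name=$(echo "$lvs_output" | awk '{print $1}')
vg_name=$(echo "$lvs_output" | awk '{print $2}')
lv_size=$(echo "$lvs_output" | awk '{print $3}')

if [ "$lv_name" = "lv_data" ] && [ "$vg_name" = "vg_data" ]; then
    echo "OK: $vg_name/$lv_name present, size $lv_size"
else
    echo "ERROR: expected vg_data/lv_data not found" >&2
fi
```

The same check run on both nodes catches a mismatched backing device before `drbdadm create-md` is ever called.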

2. Configure corosync

[root@node1 tools]# cd /etc/corosync
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# vim corosync.conf
compatibility: whitetank # compatible with older OpenAIS ("whitetank") releases
totem { # totem protocol: cluster membership and heartbeat messaging
  version: 2   # configuration version
  secauth: on   # enable message authentication
  threads: 0      # worker threads; the default is fine
       # rrp_mode: passive # Totem Redundant Ring Protocol: joins the nodes over multiple redundant rings so the cluster tolerates a failed network. Three modes: active, passive, none (the default). "active" sends every message over all n rings, so each message is received n times; "passive" sends each message over one of the n rings, so it is received once. Only useful with multiple heartbeat networks (to make the heartbeat path robust); with a single heartbeat network leave it unset, which means none.
  interface {
    ringnumber: 0 # redundant ring number
    bindnetaddr: 172.16.0.0 # network address of the heartbeat network to bind to
    mcastaddr: 226.94.1.1  # heartbeat multicast address
    mcastport: 5405   # heartbeat multicast port
    ttl: 1 # multicast TTL (how many hops the packets may traverse)
  }
  #      interface {
  #	     ringnumber: 1
  #	     bindnetaddr: 192.168.1.0
  #	    mcastaddr: 226.94.1.2
  #	    mcastport: 5406
  #	   ttl: 1
  #      }
}
logging {
  fileline: off # include source file and line in log messages
  to_stderr: no # log to standard error
  to_logfile: yes # log to a file
  to_syslog: no  # log to syslog
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on # prefix log messages with a timestamp
  logger_subsys {
    subsys: AMF
    debug: off
  }
}
amf {
  mode: disabled
}
service {
  ver: 0
  name: pacemaker # start pacemaker together with corosync (plugin mode)
}
aisexec {
  user: root
  group: root
}
[root@node1 corosync]# corosync-keygen # generate the auth key from /dev/random; if entropy is low you may need to type on the keyboard or install/remove some packages to generate enough randomness
[root@node1 corosync]# scp corosync.conf authkey node2:/etc/corosync/
[root@node1 ~]#/etc/init.d/corosync start
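If `corosync-keygen` blocks for a long time, the node is simply short on entropy for /dev/random. A common workaround (an assumption on my part, not part of the original walkthrough) is to create the 128-byte key from /dev/urandom directly:

```shell
# corosync's authkey is simply 128 bytes of random data, root-owned,
# mode 0400. Drawing from /dev/urandom avoids blocking on /dev/random.
# On a real node the target path is /etc/corosync/authkey.
dd if=/dev/urandom of=authkey bs=128 count=1 2>/dev/null
chmod 0400 authkey
ls -l authkey
```

The resulting file is then copied to the peer exactly as in the `scp` step above.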

Verify that corosync is working:

Check that the corosync engine started correctly:

[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check that the initial membership notifications went out correctly:

[root@node1 ~]# grep  TOTEM  /var/log/cluster/corosync.log
corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
corosync [TOTEM ] The network interface [172.16.0.11] is now up.
corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check whether any errors occurred during startup. The errors below mean that pacemaker will soon no longer be supported as a corosync plugin and that CMAN is recommended as the cluster infrastructure layer instead; they can be safely ignored here.

[root@node1 ~]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Apr 02 21:55:08 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Apr 02 21:55:08 corosync [pcmk  ] ERROR: process_ais_conf:  Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN

Check the corosync processes:

[root@node1 ~]# ps auxf 
corosync
 \_ /usr/libexec/pacemaker/cib    # cluster information base
 \_ /usr/libexec/pacemaker/stonithd  # STONITH (fencing) daemon
 \_ /usr/libexec/pacemaker/lrmd    # local resource manager; runs resource agents locally
 \_ /usr/libexec/pacemaker/attrd   # manages cluster and resource attributes
 \_ /usr/libexec/pacemaker/pengine # policy engine
 \_ /usr/libexec/pacemaker/crmd  # cluster resource management daemon

Check the current ring status:

[root@node1 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 184553644
RING ID 0
    id  = 172.16.0.11
    status  = ring 0 active with no faults
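When automating health checks, the ring status can be grepped for the "no faults" marker. A small sketch; the sample string stands in for live `corosync-cfgtool -s` output:

```shell
# Check a corosync ring status line for faults. On a real node,
# replace the sample variable with: corosync-cfgtool -s | grep status
ring_status="    status  = ring 0 active with no faults"

if echo "$ring_status" | grep -q "no faults"; then
    ring_ok=yes
    echo "OK: ring 0 healthy"
else
    ring_ok=no
    echo "WARNING: ring 0 reports faults" >&2
fi
```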

Repeat the same steps on node2.

CRM checks:

[root@node1 corosync]# crm_verify -L -V  # check the CRM configuration for errors; since there is no STONITH device, this may report STONITH-related errors
[root@node1 corosync]# crm configure property stonith-enabled=false
In a two-node cluster the total number of votes is even. After a heartbeat failure (split-brain), neither node can reach the required quorum, and the default no-quorum policy stops all cluster services. To avoid this, set the no-quorum policy to ignore:
[root@node1 corosync]# crm configure property  no-quorum-policy=ignore
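The arithmetic behind this setting is worth spelling out: a partition keeps quorum only with more than half of the expected votes. A sketch in plain shell arithmetic:

```shell
# Votes needed for quorum: floor(votes/2) + 1.
quorum_needed() {
    echo $(( $1 / 2 + 1 ))
}

# In a 2-node cluster a split leaves 1 vote on each side, but 2 are
# needed -- so by default both partitions stop their services. That is
# exactly what no-quorum-policy=ignore works around.
echo "2 votes: quorum needs $(quorum_needed 2)"
# With 3 or more nodes the surviving majority keeps quorum on its own,
# so the ignore policy is not needed there.
echo "3 votes: quorum needs $(quorum_needed 3)"
```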
[root@node1 corosync]# crm configure  show
node node1.example.com
node node2.example.com
property $id="cib-bootstrap-options" \
  dc-version="1.1.10-14.el6_5.2-368c726" \
  cluster-infrastructure="classic openais (with plugin)" \
  expected-quorum-votes="2" \
  stonith-enabled="false" \
  no-quorum-policy="ignore"

crm also supports interactive configuration, as shown below:

[Screenshot: interactive crm shell session]

3. Install and configure DRBD

On CentOS 6 you need the drbd and drbd-kmdl rpm packages (on CentOS 5, drbd and kmod-drbd). The drbd-kmdl version must match the running kernel exactly; sites such as rpmfind and rpmsearch can help locate the right build.

DRBD is a software-based, shared-nothing storage replication solution that mirrors block devices between servers. Its core functionality is implemented in the kernel.

rpmfind only offered drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm, which did not match the running kernel 2.6.32-358.el6.x86_64, so there was no choice but to upgrade the kernel to 2.6.32-358.23.2.el6 (download kernel-2.6.32-358.23.2.el6.x86_64.rpm).

[root@node1 ~]# yum install kernel-2.6.32-358.23.2.el6.x86_64.rpm -y
It is best to verify that the new kernel entry was added to grub, then reboot.
[root@node1 bak]# ls
drbd-8.4.3-33.el6.x86_64.rpm  drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@node1 bak]# rpm -ivh drbd*
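Since a mismatched drbd-kmdl module simply fails to load, the kernel/package match is worth checking explicitly before installing. A sketch using only string handling; the hard-coded variables stand in for `uname -r` and the actual rpm filename:

```shell
# Extract the kernel release embedded in a drbd-kmdl package name and
# compare it to the running kernel. Hard-coded strings stand in for
# `uname -r` and the real rpm filename on a live node.
kernel="2.6.32-358.el6.x86_64"
pkg="drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm"

# Strip the "drbd-kmdl-" prefix, then drop everything from the drbd
# version ("-8.4.3...") onward, leaving the kernel release the module
# was built for.
pkg_kernel=${pkg#drbd-kmdl-}
pkg_kernel=${pkg_kernel%%-8.4.3*}

# The package name omits the arch suffix, so compare against the
# kernel release without ".x86_64".
if [ "$pkg_kernel" = "${kernel%.x86_64}" ]; then
    echo "MATCH: $pkg targets kernel $kernel"
else
    echo "MISMATCH: package is for $pkg_kernel, running $kernel" >&2
fi
```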

DRBD configuration files: /etc/drbd.conf and /etc/drbd.d/*

[root@node1 ~]#cat /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
DRBD's three replication protocols:

A (asynchronous): a write is considered complete once the data has reached the local disk and has been placed in the local TCP send buffer, awaiting transmission.
B (semi-synchronous): a write is considered complete once the data has reached the local disk and the peer's TCP stack has received it into its memory buffer.
C (synchronous): a write is considered complete only when both the local disk and the peer's disk have written the data.
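Protocol C is the safest choice and is what the common section below uses cluster-wide. For illustration only (hypothetical resource name, not part of this setup), a single resource could override the protocol, e.g. to trade durability for latency:

```
resource fastdata {     # hypothetical resource, for illustration only
    protocol A;         # per-resource override of the common protocol
    # ... device/disk/address definitions as in a normal resource ...
}
```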

[root@node1 ~]# vim /etc/drbd.d/global_common.conf
global {
  usage-count no; # do not participate in DRBD usage statistics
}
common {
  protocol C; # synchronous replication
  handlers { # handlers invoked in response to specific events
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
  }
  startup {
         wfc-timeout 300; # seconds the init script blocks at startup waiting for the peer to appear; the default 0 means wait forever
         degr-wfc-timeout 300; # the same kind of wait limit, but applied when a degraded cluster (one that was left with only a single node) reboots
  }
  disk {
    on-io-error detach; # on an I/O error, detach the backing device and continue in diskless mode
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "mydrbd";
  }
  syncer {
    rate 1000M;
  }
}
[root@node1 ~]# vim /etc/drbd.d/mydata.res
resource mydata {
  on node1.example.com {
    device /dev/drbd0;
    disk   /dev/vg_data/lv_data;
    address 10.1.1.11:7789;
    meta-disk internal;
  }
  on node2.example.com {
    device /dev/drbd0;
    disk /dev/vg_data/lv_data;
    address 10.1.1.12:7789;
    meta-disk internal;
  }
}

The two configuration files must be identical on both nodes.

[root@node1 ~]# drbdadm create-md mydata # run on both nodes
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
[root@node1 ~]# /etc/init.d/drbd start # run on both nodes
[root@node1 ~]# drbdadm primary --force mydata # run on node1 only
[root@node1 drbd.d]# drbd-overview
  0:mydata/0  Connected Primary/Secondary UpToDate/UpToDate C r----- 
[root@node1 drbd.d]#mkfs.ext4 /dev/drbd0
[root@node1 drbd.d]#mount /dev/drbd0 /mnt
[root@node1 drbd.d]#cp /etc/passwd /mnt
[root@node1 drbd.d]#umount /mnt
[root@node1 drbd.d]#drbdadm secondary mydata
[root@node1 ~]# drbd-overview
 0:mydata/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
Test on node2:
[root@node2 ~]#drbdadm primary mydata
[root@node2 ~]#mount /dev/drbd0 /mnt
[root@node2 ~]# ls /mnt
lost+found  passwd   # passwd is there; verify its contents if you like
[root@node2 ~]# drbd-overview
0:mydata/0  Connected Primary/Secondary UpToDate/UpToDate C r----- /mnt ext4 5.0G 138M 4.6G 3%

Then test the same sequence in the other direction; if that also works, DRBD is configured correctly.

[root@node2 ~]# umount /mnt
Stop DRBD on both nodes and make sure it does not start at boot (the cluster manager will control it from now on):
[root@node1 ~]#/etc/init.d/drbd stop
[root@node1 ~]#chkconfig drbd off
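When scripting these checks, the role and disk-state fields of `drbd-overview` can be parsed directly. A sketch against a sample output line (hypothetical; on a real node pipe `drbd-overview` in instead):

```shell
# Parse the connection state, roles (local/peer) and disk states out of
# a drbd-overview line and warn if the resource is not fully synced.
line="0:mydata/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----"

cstate=$(echo "$line" | awk '{print $2}')   # connection state
roles=$(echo "$line" | awk '{print $3}')    # local/peer roles
dstates=$(echo "$line" | awk '{print $4}')  # local/peer disk states

echo "connection=$cstate roles=$roles disks=$dstates"

if [ "$cstate" = "Connected" ] && [ "$dstates" = "UpToDate/UpToDate" ]; then
    echo "OK: resource is connected and fully synced"
else
    echo "WARN: resource not healthy" >&2
fi
```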

4. Configure the DRBD cluster resource

[root@node1 ~]# crm conf
crm(live)configure# primitive mysql_drbd ocf:linbit:drbd params drbd_resource=mydata op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100 on-fail=restart
crm(live)configure# ms ms_mysql_drbd mysql_drbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Thu Apr  3 01:11:31 2014
Last change: Thu Apr  3 01:11:23 2014 via cibadmin on node1.example.com
Stack: classic openais (with plugin)
Current DC: node2.example.com - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.example.com node2.example.com ]
 Master/Slave Set: ms_mysql_drbd [mysql_drbd]
     Masters: [ node2.example.com ]
     Slaves: [ node1.example.com ]

master-max: how many master instances may run in the cluster; master-node-max: how many master instances may run on a single node.

clone-max: how many clone instances exist; clone-node-max: how many clone instances may run on a single node.

A master/slave resource is just a special kind of clone resource, one with a master/slave relationship.

Verify DRBD:

[root@node1 drbd.d]# drbd-overview
  0:mydata/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
Run a quick failover test:
[root@node2 ~]# crm
crm(live)# node
crm(live)node# standby
[root@node2 ~]# crm status
Node node2.example.com: standby
Online: [ node1.example.com ]
 Master/Slave Set: ms_mysql_drbd [mysql_drbd]
   Masters: [ node1.example.com ]
   Stopped: [ node2.example.com ]
[root@node1 ~]# drbd-overview
0:mydata/0  WFConnection Primary/Unknown UpToDate/DUnknown C r-----
[root@node2 ~]# crm node online
[root@node2 ~]# crm status
Online: [ node1.example.com node2.example.com ]
 Master/Slave Set: ms_mysql_drbd [mysql_drbd]
   Masters: [ node1.example.com ]
   Slaves: [ node2.example.com ]
[root@node2 ~]# drbd-overview
 0:mydata/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
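If you script these switchover tests, the node currently holding the master role can be pulled out of the status output. A sketch against sample text (on a live node pipe `crm status` in instead of the here-variable):

```shell
# Extract the nodes holding the DRBD master and slave roles from
# crm status output. The sample text stands in for the live command.
crm_out="Master/Slave Set: ms_mysql_drbd [mysql_drbd]
     Masters: [ node1.example.com ]
     Slaves: [ node2.example.com ]"

master=$(echo "$crm_out" | awk '/Masters:/ {print $3}')
slave=$(echo "$crm_out" | awk '/Slaves:/ {print $3}')
echo "master=$master slave=$slave"
```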

The DRBD resource is now in place. Next, create a Filesystem resource so that /dev/drbd0 is automatically mounted at /mydata on whichever node holds the master role.

[root@node1 ~]# mkdir /mydata # create on node2 as well
[root@node1 ~]# crm conf
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata  op monitor interval=20 timeout=40 op start timeout=60 op stop timeout=60 on-fail=restart
crm(live)configure# verify
crm(live)configure#colocation mystore_with_ms-mysql-drbd inf: mystore ms_mysql_drbd:Master
crm(live)configure# order ms-mysql-drbd_before_mystore inf: ms_mysql_drbd:promote mystore:start
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Thu Apr  3 01:49:23 2014
Last change: Thu Apr  3 01:48:40 2014 via cibadmin on node1.example.com
Stack: classic openais (with plugin)
Current DC: node2.example.com - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1.example.com node2.example.com ]
 Master/Slave Set: ms_mysql_drbd [mysql_drbd]
     Masters: [ node1.example.com ]
     Slaves: [ node2.example.com ]
 mystore    (ocf::heartbeat:Filesystem):    Started node1.example.com

[Screenshots: crm status output showing the DRBD master/slave set and the mystore filesystem resource]

As the output above shows, the DRBD and filesystem resources are configured successfully.

5. Install MySQL

[root@node2 ~]# yum install bison gcc gcc-c++ autoconf automake ncurses-devel cmake -y
[root@node2 ~]#groupadd -r mysql
[root@node2 ~]#useradd  -g mysql -r -d /mydata/data mysql
[root@node2 ~]#cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_DATADIR=/mydata/data \
-DSYSCONFDIR=/etc \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_BLACKHOLE_STORAGE_ENGINE=1 \
-DWITH_READLINE=1 \
-DWITH_SSL=system \
-DWITH_ZLIB=system \
-DWITH_LIBWRAP=0 \
-DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci
make && make install

The above is just for reference; I took a shortcut and copied over a tarball of MySQL already built on another machine.

[root@node2 local]#tar  -zxvf mysql.tar.gz
[root@node2 local]#cd mysql
[root@node2 mysql]#cp support-files/my-large.cnf /etc/my.cnf
[root@node2 mysql]#cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node2 mysql]#scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
[root@node2 mysql]#chmod a+x /etc/rc.d/init.d/mysqld
[root@node2 mysql]#vim /etc/my.cnf
datadir=/mydata/data
[root@node2 mysql]# chkconfig --add mysqld
[root@node2 mysql]# chkconfig mysqld off
[root@node2 mysql]# /etc/init.d/mysqld start
MySQL starts successfully.

Then stop it again: /etc/init.d/mysqld stop

Switch the DRBD primary role away from node2 and test whether mysqld can also start on node1:

[root@node1 local]# useradd -r -u 306 mysql
[root@node1 local]#tar  -zxvf mysql.tar.gz
[root@node1 local]#cd mysql
[root@node1 mysql]#cp support-files/my-large.cnf /etc/my.cnf
[root@node1 mysql]#cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node1 mysql]#chmod a+x /etc/rc.d/init.d/mysqld
[root@node1 mysql]#vim /etc/my.cnf
datadir=/mydata/data
[root@node1 mysql]# chkconfig --add mysqld
[root@node1 mysql]# chkconfig mysqld off
[root@node1 mysql]# /etc/init.d/mysqld start

If it starts successfully on both nodes, the MySQL installation is complete.

6. Configure the MySQL service resource

[root@node1 mysql]# crm conf
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip="192.168.1.161" nic="eth0" op monitor interval="20" timeout="20" on-fail="restart"
crm(live)configure# primitive myserver lsb:mysqld
crm(live)configure# group mysql vip mystore myserver
crm(live)configure# order vip-mystore_before_myserver inf: (vip mystore) myserver
crm(live)configure#verify
crm(live)configure#commit

[Screenshot: crm status after committing the mysql group]

[root@node1 mysql]# crm node standby

[Screenshot: crm status after the standby switch; the resources fail over to the other node]

A network-failure test (ifdown eth0) also triggers a failover correctly. One shortcoming of this experiment: the mysql resource really ought to use an OCF resource agent rather than an LSB init script, but the OCF agent did not work in my tests, so I had no choice but to fall back to lsb:mysqld.
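When pacemaker manages an LSB script, that script must follow LSB exit-code conventions, notably `status` returning 0 when the service is running and 3 when it is stopped; a non-compliant script is a common reason resource monitoring misbehaves. A self-contained sketch using a stub init script (hypothetical; on a real node run the same checks against /etc/init.d/mysqld):

```shell
# Create a stub init script that mimics LSB behaviour, then verify the
# exit codes pacemaker relies on: status -> 0 (running) or 3 (stopped).
cat > ./svc <<'EOF'
#!/bin/sh
case "$1" in
    start)  touch ./svc.pid; exit 0 ;;
    stop)   rm -f ./svc.pid; exit 0 ;;
    status) [ -f ./svc.pid ] && exit 0 || exit 3 ;;
    *)      exit 2 ;;
esac
EOF
chmod +x ./svc

./svc stop
if ./svc status; then stopped=0; else stopped=$?; fi
./svc start
if ./svc status; then running=0; else running=$?; fi

echo "status when stopped: $stopped (expect 3), when running: $running (expect 0)"
```

If /etc/init.d/mysqld fails this check, that alone can explain erratic monitor results under lsb:mysqld.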


