
Deploying Ceph on CentOS 6.3


I. Background
Machines in a Ceph deployment fall into two categories: clients and non-clients (mds, monitor, osd).
A client only needs Ceph enabled when the kernel is built, while the other three roles additionally require building the Ceph userspace source (download: http://ceph.com/download/). For osd nodes, remember to also enable the btrfs filesystem (building it as a kernel module is fine).
Kernel version recommendations: http://ceph.com/docs/master/install/os-recommendations/#glibc
 
 
II. Machine allocation
IP              Role           Hostname                Notes
222.31.76.209   client         localhost.localdomain
222.31.76.178   mds & monitor  ceph_mds
222.31.76.74    osd            ceph_osd0
222.31.76.67    osd            ceph_osd1
222.31.76.235   osd            ceph_osd2

 
Operating system: CentOS 6.3
Kernel version: linux-3.8.8.tar.xz (stable, 2013-04-17)
Ceph version: ceph-0.60.tar.gz (01-Apr-2013 17:42)
 
III. Build and configuration
(1) client
1. Build the latest 3.8.8 kernel
#make mrproper
#make menuconfig //requires the ncurses-devel package: #yum install ncurses-devel. Remember to enable ceph and btrfs in the configuration.
#make all //on a multi-core machine (e.g. 4 cores) you can use #make -j8 to build with multiple jobs and speed up the build
#make modules_install
#make install
Once the build finishes, edit /etc/grub.conf and reboot into the new kernel. That completes the installation and configuration of the client.
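To double-check that the Ceph and btrfs options really made it into the kernel configuration, you can grep the generated .config. A minimal sketch, run from the kernel source tree (CONFIG_CEPH_FS and CONFIG_CEPH_LIB are the usual symbols for the Ceph client, CONFIG_BTRFS_FS for btrfs):
#grep -E "CONFIG_CEPH_FS|CONFIG_CEPH_LIB|CONFIG_BTRFS_FS" .config //each should show =y (built in) or =m (module)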

 
 
(2) mds/monitor/osd
1. Build the latest 3.8.8 kernel (same as for the client)
 
 
2. Build the Ceph source
#tar -xvf ceph-0.60.tar.gz
#cd ceph-0.60
#./autogen.sh
#./configure --without-tcmalloc
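Instead of fixing the missing dependencies one at a time as configure complains, you can install the yum-available development packages (collected from the errors documented below) in a single pass; cryptopp, snappy and leveldb are not in the stock CentOS 6.3 repositories and still have to come from downloaded rpm packages as shown below:
#yum install libuuid-devel keyutils-libs-devel fuse-devel libedit-devel libatomic_ops-devel libaio-devel boost-devel expat-devel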
If configure stops with one of the errors below, the corresponding development package is missing; install it and re-run configure:
checking whether -lpthread saves the day... yes
checking for uuid_parse in -luuid... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libuuid not found
See `config.log' for more details.
Install it: #yum install libuuid-devel
checking for __res_nquery in -lresolv... yes
checking for add_key in -lkeyutils... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libkeyutils not found
See `config.log' for more details.
Install it: #yum install keyutils-libs-devel
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... no
checking for NSS... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no suitable crypto library found
See `config.log' for more details.
Install it (from downloaded rpm packages):
#rpm -ivh cryptopp-5.6.2-2.el6.x86_64.rpm
#rpm -ivh cryptopp-devel-5.6.2-2.el6.x86_64.rpm
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... -lcryptopp
checking for NSS... no
configure: using cryptopp for cryptography
checking for FCGX_Init in -lfcgi... no
checking for fuse_main in -lfuse... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no FUSE found (use --without-fuse to disable)
See `config.log' for more details.
Install it: #yum install fuse-devel
checking for fuse_main in -lfuse... yes
checking for fuse_getgroups... no
checking jni.h usability... no
checking jni.h presence... no
checking for jni.h... no
checking for LIBEDIT... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: No usable version of libedit found.
See `config.log' for more details.
Install it: #yum install libedit-devel
checking for LIBEDIT... yes
checking atomic_ops.h usability... no
checking atomic_ops.h presence... no
checking for atomic_ops.h... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no libatomic-ops found (use --without-libatomic-ops to disable)
See `config.log' for more details.
Install it: #yum install libatomic_ops-devel (alternatively, follow the hint and disable it with #./configure --without-tcmalloc --without-libatomic-ops)
checking for LIBEDIT... yes
checking for snappy_compress in -lsnappy... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libsnappy not found
See `config.log' for more details.
Install it (from downloaded rpm packages):
#rpm -ivh snappy-1.0.5-1.el6.x86_64.rpm
#rpm -ivh snappy-devel-1.0.5-1.el6.x86_64.rpm
checking for snappy_compress in -lsnappy... yes
checking for leveldb_open in -lleveldb... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libleveldb not found
See `config.log' for more details.
Install it (from downloaded rpm packages):
#rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
#rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm
checking for leveldb_open in -lleveldb... yes
checking for io_submit in -laio... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libaio not found
See `config.log' for more details.
Install it: #yum install libaio-devel
checking for sys/wait.h that is POSIX.1 compatible... yes
checking boost/spirit/include/classic_core.hpp usability... no
checking boost/spirit/include/classic_core.hpp presence... no
checking for boost/spirit/include/classic_core.hpp... no
checking boost/spirit.hpp usability... no
checking boost/spirit.hpp presence... no
checking for boost/spirit.hpp... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: "Can't find boost spirit headers"
See `config.log' for more details.
Install it: #yum install boost-devel
checking if more special flags are required for pthreads... no
checking whether to check for GCC pthread/shared inconsistencies... yes
checking whether -pthread is sufficient with -shared... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
The output above shows that #./configure --without-tcmalloc now completes successfully and generates the Makefile; next comes the actual build:
#make -j8
If the build stops with the following error, expat-devel is not installed:
CXX osdmaptool.o
CXXLD osdmaptool
CXX ceph_dencoder-ceph_dencoder.o
test/encoding/ceph_dencoder.cc: In function 'int main(int, const char**)':
test/encoding/ceph_dencoder.cc:196: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
CXX ceph_dencoder-rgw_dencoder.o
In file included from rgw/rgw_dencoder.cc:6:
rgw/rgw_acl_s3.h:9:19: error: expat.h: No such file or directory
In file included from rgw/rgw_acl_s3.h:12,
from rgw/rgw_dencoder.cc:6:
rgw/rgw_xml.h:62: error: 'XML_Parser' does not name a type
make[3]: *** [ceph_dencoder-rgw_dencoder.o] Error 1
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make: *** [all-recursive] Error 1
Install it and re-run the build: #yum install expat-devel
CXXLD ceph-dencoder
CXXLD cephfs
CXXLD librados-config
CXXLD ceph-fuse
CCLD rbd-fuse
CCLD mount.ceph
CXXLD rbd
CXXLD rados
CXXLD ceph-syn
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making all in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
Output like the above means the build succeeded; now install Ceph:
#make install
libtool: install: ranlib /usr/local/lib/rados-classes/libcls_kvs.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/local/lib/rados-classes
----------------------------------------------------------------------
Libraries have been installed in:
/usr/local/lib/rados-classes

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
test -z "/usr/local/lib/ceph" || /bin/mkdir -p "/usr/local/lib/ceph"
/usr/bin/install -c ceph_common.sh '/usr/local/lib/ceph'
make[4]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making install in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/usr/local/share/man/man8" || /bin/mkdir -p "/usr/local/share/man/man8"
/usr/bin/install -c -m 644 ceph-osd.8 ceph-mds.8 ceph-mon.8 mkcephfs.8 ceph-fuse.8 ceph-syn.8 crushtool.8 osdmaptool.8 monmaptool.8 ceph-conf.8 ceph-run.8 ceph.8 mount.ceph.8 radosgw.8 radosgw-admin.8 ceph-authtool.8 rados.8 librados-config.8 rbd.8 ceph-clsinfo.8 ceph-debugpack.8 cephfs.8 ceph-dencoder.8 ceph-rbdnamer.8 rbd-fuse.8 '/usr/local/share/man/man8'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/man'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
This completes the build and installation of Ceph.
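As a quick sanity check that the install landed where expected (a minimal sketch; paths assume the default --prefix=/usr/local), confirm the binaries are on the PATH and report their version:
#which ceph ceph-osd ceph-mon ceph-mds
#ceph -v //should print something like: ceph version 0.60 (...)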


 

3. Configure Ceph
Every node except the client needs a ceph.conf configuration file, and it must be identical on all of them. The file has to live in the ceph configuration directory: /etc/ceph, or /usr/local/etc/ceph if you did not change the prefix when running ./configure.
#cp ./src/sample.* /usr/local/etc/ceph/
#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config
#cp ./src/init-ceph /etc/init.d/ceph
#mkdir /var/log/ceph //holds the logs; ceph does not create this directory by itself yet
Notes:
① When setting up each server, the files you mainly need to edit are the two under /usr/local/etc/ceph/: ceph.conf (the cluster configuration) and fetch_config (a sync script meant to propagate ceph.conf to the other nodes via scp; I found it of little use and later wrote a small script of my own, see the sketch below).
② On the osd nodes, besides loading the btrfs module you also need btrfs-progs (#yum install btrfs-progs) so that the mkfs.btrfs command is available. You also have to create a partition or logical volume on each osd node for Ceph to use: either a disk partition (e.g. /dev/sda2) or a logical volume (e.g. /dev/mapper/VolGroup-lv_ceph), as long as it matches what ceph.conf refers to. For one way to create the logical volume, see the example after the ceph.conf listing below.
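The replacement sync script mentioned in note ① can be as simple as an scp loop. A minimal sketch (the node list comes from the allocation table above; logging in as root over the passwordless ssh set up in step 4 is an assumption):
#cat > /usr/local/bin/sync_ceph_conf.sh << 'EOF'
#!/bin/bash
# push the local ceph.conf to every other node in the cluster
for node in ceph_mds ceph_osd0 ceph_osd1 ceph_osd2; do
    scp /usr/local/etc/ceph/ceph.conf root@${node}:/usr/local/etc/ceph/ceph.conf
done
EOF
#chmod +x /usr/local/bin/sync_ceph_conf.sh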
[root@ceph_mds ceph]# cat /usr/local/etc/ceph/ceph.conf
;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the init.d start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; The variables $type, $id and $name are available to use in paths
; $type = The type of daemon, possible values: mon, mds and osd
; $id = The ID of the daemon, for mon.alpha, $id will be alpha
; $name = $type.$id

; For example:
; osd.0
; $type = osd
; $id = 0
; $name = osd.0

; mon.beta
; $type = mon
; $id = beta
; $name = mon.beta

; global
[global]
; enable secure authentication
; auth supported = cephx

; allow ourselves to open a lot of files
max open files = 131072

; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog

; set up pid files
pid file = /var/run/ceph/$name.pid

; If you want to run a IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true

; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id

; If you are using for example the RADOS Gateway and want to have your newly created
; pools a higher replication level, you can set a default
;osd pool default size = 3

; You can also specify a CRUSH rule for new pools
; Wiki: http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH
;osd pool default crush rule = 0

; Timing is critical for monitors, but if you want to allow the clocks to drift a
; bit more, you can specify the max drift.
;mon clock drift allowed = 1

; Tell the monitor to backoff from this warning for 30 seconds
;mon clock drift warn backoff = 30

; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = ceph_mds
mon addr = 222.31.76.178:6789

; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps it's secret encryption keys
keyring = /data/keyring.$name

; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.alpha]
host = ceph_mds

; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
sudo = true
; This is where the osd expects its data
osd data = /data/osd$id

; Ideally, make the journal a separate disk or partition.
; 1-10GB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/$name/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /data/$name/journal
osd journal size = 1000 ; journal size, in megabytes

; If you want to run the journal on a tmpfs (don't), disable DirectIO
;journal dio = false

; You can change the number of recovery operations to speed up recovery
; or slow it down if your machines can't handle it
; osd recovery max active = 3

; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20


; ### The below options only apply if you're using mkcephfs
; ### and the devs options
; The filesystem used on the volumes
osd mkfs type = btrfs
; If you want to specify some other mount options, you can do so.
; for other filesystems use 'osd mount options $fstype'
osd mount options btrfs = rw,noatime
; The options used to format the filesystem via mkfs.$fstype
; for other filesystems use 'osd mkfs options $fstype'
; osd mkfs options btrfs =


[osd.0]
host = ceph_osd0

; if 'devs' is not specified, you're responsible for
; setting up the 'osd data' dir.
btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.1]
host = ceph_osd1

btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.2]
host = ceph_osd2

btrfs devs = /dev/mapper/VolGroup-lv_ceph
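For note ② above, one way to create the logical volume that the btrfs devs lines refer to is with LVM. A minimal sketch (the spare disk /dev/sdb and the 100G size are assumptions; per the ceph.conf above, mkcephfs will format the device with mkfs.btrfs because osd mkfs type = btrfs):
#pvcreate /dev/sdb //assumption: an unused disk on the osd node
#vgcreate VolGroup /dev/sdb //or extend an existing VolGroup with vgextend
#lvcreate -L 100G -n lv_ceph VolGroup //results in /dev/mapper/VolGroup-lv_ceph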


 
 
4. Configure the network
① Set each node's hostname and make the nodes able to reach one another by hostname
Edit /etc/sysconfig/network to set the node's own hostname;
Edit /etc/hosts to map the other nodes' hostnames to their IPs;
After rebooting, verify with the hostname command.
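For example, on the mds/monitor node the two files would end up looking roughly like this (IPs and hostnames taken from the allocation table in section II; the stock localhost entries in /etc/hosts are omitted):
#cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ceph_mds
#cat /etc/hosts
222.31.76.178   ceph_mds
222.31.76.74    ceph_osd0
222.31.76.67    ceph_osd1
222.31.76.235   ceph_osd2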
② Make every node able to ssh to the others without a password
This is the standard public/private key mechanism: for one node to log in to another, it first hands over its public key, and the other node then verifies its identity against that key.
Example: run the following on node A
#ssh-keygen -d
The command creates several files under "~/.ssh"; the one that matters here is id_dsa.pub, node A's public key. Append its contents to the authorized_keys file under "~/.ssh/" on the target node B (create the file if it does not exist); after that, node A can ssh into B without being asked for a password.
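Concretely, a minimal sketch with ceph_osd0 standing in for node B (ssh-copy-id is a convenient shortcut where it is available; the manual variant does the same thing):
#ssh-keygen -d //accept the defaults; creates ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub
#ssh-copy-id -i ~/.ssh/id_dsa.pub root@ceph_osd0
or, by hand:
#scp ~/.ssh/id_dsa.pub root@ceph_osd0:/tmp/
#ssh root@ceph_osd0 "mkdir -p ~/.ssh && cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys"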

