Deploying and Configuring Ceph on CentOS 6.3

I. Background
Machines in a Ceph deployment fall into two roles: clients and non-clients (mds, monitor, osd).
A client only needs Ceph enabled in its kernel build; the other three roles additionally require building the Ceph userspace sources (download: http://ceph.com/download/). OSD nodes also need the btrfs filesystem (building it as a kernel module is fine).
II. Machine Allocation
Operating system: CentOS 6.3
Kernel version: linux-3.8.8.tar.xz (stable, 2013-04-17); Ceph version: ceph-0.60.tar.gz (01-Apr-2013 17:42)
III. Compilation and Configuration
(1) Client
1. Build the latest kernel, 3.8.8
#make mrproper
#make menuconfig //requires the ncurses-devel package: #yum install ncurses-devel. Remember to enable Ceph and btrfs here.
#make all //on a multi-core machine (e.g. 4 cores), #make -j8 builds with parallel jobs and is much faster
#make modules_install
#make install
After the build finishes, edit /etc/grub.conf and reboot into the new kernel. That completes the client-side installation.
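Before moving on, it is worth verifying that the new kernel really has Ceph and btrfs available (a quick check, assuming both were built as modules rather than built in):
#uname -r //should report 3.8.8
#modprobe ceph; modprobe btrfs
#lsmod | grep -E 'ceph|btrfs' //both modules should now be listed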
(2) mds/monitor/osd
1. Build the latest kernel, 3.8.8 (same as for the client)
2. Build the Ceph sources
#tar -xvf ceph-0.60.tar.gz
#cd ceph-0.60
#./autogen.sh
#./configure --without-tcmalloc
If configure stops with one of the errors below, the named dependency is missing; install it and rerun:
checking whether -lpthread saves the day... yes
checking for uuid_parse in -luuid... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libuuid not found
See `config.log' for more details.
Install: #yum install libuuid-devel
checking for __res_nquery in -lresolv... yes
checking for add_key in -lkeyutils... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libkeyutils not found
See `config.log' for more details.
Install: #yum install keyutils-libs-devel
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... no
checking for NSS... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no suitable crypto library found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh cryptopp-5.6.2-2.el6.x86_64.rpm
#rpm -ivh cryptopp-devel-5.6.2-2.el6.x86_64.rpm
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... -lcryptopp
checking for NSS... no
configure: using cryptopp for cryptography
checking for FCGX_Init in -lfcgi... no
checking for fuse_main in -lfuse... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no FUSE found (use --without-fuse to disable)
See `config.log' for more details.
Install: #yum install fuse-devel
checking for fuse_main in -lfuse... yes
checking for fuse_getgroups... no
checking jni.h usability... no
checking jni.h presence... no
checking for jni.h... no
checking for LIBEDIT... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: No usable version of libedit found.
See `config.log' for more details.
Install: #yum install libedit-devel
checking for LIBEDIT... yes
checking atomic_ops.h usability... no
checking atomic_ops.h presence... no
checking for atomic_ops.h... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no libatomic-ops found (use --without-libatomic-ops to disable)
See `config.log' for more details.
Install: #yum install libatomic_ops-devel (or, as the message suggests, skip it with #./configure --without-tcmalloc --without-libatomic-ops)
checking for LIBEDIT... yes
checking for snappy_compress in -lsnappy... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libsnappy not found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh snappy-1.0.5-1.el6.x86_64.rpm
#rpm -ivh snappy-devel-1.0.5-1.el6.x86_64.rpm
checking for snappy_compress in -lsnappy... yes
checking for leveldb_open in -lleveldb... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libleveldb not found
See `config.log' for more details.
Install (from downloaded rpm packages):
#rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
#rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm
checking for leveldb_open in -lleveldb... yes
checking for io_submit in -laio... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libaio not found
See `config.log' for more details.
Install: #yum install libaio-devel
checking for sys/wait.h that is POSIX.1 compatible... yes
checking boost/spirit/include/classic_core.hpp usability... no
checking boost/spirit/include/classic_core.hpp presence... no
checking for boost/spirit/include/classic_core.hpp... no
checking boost/spirit.hpp usability... no
checking boost/spirit.hpp presence... no
checking for boost/spirit.hpp... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: "Can't find boost spirit headers"
See `config.log' for more details.
Install: #yum install boost-devel
checking if more special flags are required for pthreads... no
checking whether to check for GCC pthread/shared inconsistencies... yes
checking whether -pthread is sufficient with -shared... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
Output like the above means #./configure --without-tcmalloc succeeded and a Makefile was generated. Now start the actual build:
#make -j8
If the build fails with the following error, expat-devel is not installed:
CXX osdmaptool.o
CXXLD osdmaptool
CXX ceph_dencoder-ceph_dencoder.o
test/encoding/ceph_dencoder.cc: In function 'int main(int, const char**)':
test/encoding/ceph_dencoder.cc:196: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
CXX ceph_dencoder-rgw_dencoder.o
In file included from rgw/rgw_dencoder.cc:6:
rgw/rgw_acl_s3.h:9:19: error: expat.h: No such file or directory
In file included from rgw/rgw_acl_s3.h:12, from rgw/rgw_dencoder.cc:6:
rgw/rgw_xml.h:62: error: 'XML_Parser' does not name a type
make[3]: *** [ceph_dencoder-rgw_dencoder.o] Error 1
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make: *** [all-recursive] Error 1
Install: #yum install expat-devel
CXXLD ceph-dencoder
CXXLD cephfs
CXXLD librados-config
CXXLD ceph-fuse
CCLD rbd-fuse
CCLD mount.ceph
CXXLD rbd
CXXLD rados
CXXLD ceph-syn
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making all in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
Output like the above means the build succeeded; now install Ceph:
#make install
libtool: install: ranlib /usr/local/lib/rados-classes/libcls_kvs.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/local/lib/rados-classes
----------------------------------------------------------------------
Libraries have been installed in:
   /usr/local/lib/rados-classes
If you ever happen to want to link against installed libraries ...
See any operating system documentation about shared libraries for ...
At this point Ceph has been compiled and installed successfully.
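One pitfall worth noting: with the default prefix the libraries land in /usr/local/lib, which the runtime linker on CentOS does not search out of the box. If the ceph binaries later complain about missing shared libraries, the following should fix it (a sketch; the path comes from the install output above):
#echo /usr/local/lib > /etc/ld.so.conf.d/ceph.conf
#ldconfig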
3. Configure Ceph
Every node except the client needs the configuration file ceph.conf, and it must be identical across nodes. The file is expected under /etc/ceph; if you did not change the prefix at ./configure time, the installed copy lives under /usr/local/etc/ceph.
#cp ./src/sample.* /usr/local/etc/ceph/
#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config
#cp ./src/init-ceph /etc/init.d/ceph
#mkdir /var/log/ceph //for logs; Ceph does not yet create this directory by itself
Notes:
① When deploying each server, the main files to edit are the two under /usr/local/etc/ceph/: ceph.conf (the cluster configuration file) and fetch_config (a sync script that scp's ceph.conf to each node; I found it didn't actually work, so I later wrote my own script).
② For OSD nodes, besides loading the btrfs module you must install btrfs-progs (#yum install btrfs-progs) to get the mkfs.btrfs command. Each OSD node also needs a partition or logical volume for Ceph: a disk partition (e.g. /dev/sda2) or a logical volume (e.g. /dev/mapper/VolGroup-lv_ceph) both work, as long as it matches what ceph.conf says; a sketch of creating one follows.
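A minimal LVM sketch (the spare disk /dev/sdb and the 100G size are hypothetical; the volume group VolGroup is assumed to exist already, as on a default CentOS install):
#pvcreate /dev/sdb //initialize the spare disk as an LVM physical volume
#vgextend VolGroup /dev/sdb //add it to the existing volume group (or create one with vgcreate)
#lvcreate -L 100G -n lv_ceph VolGroup //create the logical volume Ceph will use
#ls /dev/mapper/VolGroup-lv_ceph //this is the device path referenced in ceph.conf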
[root@ceph_mds ceph]# cat /usr/local/etc/ceph/ceph.conf
; Sample ceph ceph.conf file.
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the init.d start/stop script will ...
; The variables $type, $id and $name are available to use in paths
; For example:
;   mon.beta

; global
        ; allow ourselves to open a lot of files
        ; set log file
        ; set up pid files
        ; If you want to run a IPv6 cluster, set this to true. Dual-stack isn't possible

; monitors
        ; If you are using for example the RADOS Gateway and want to have your newly created
        ; You can also specify a CRUSH rule for new pools
        ; Timing is critical for monitors, but if you want to allow the clocks to drift a ...
        ; Tell the monitor to backoff from this warning for 30 seconds
        ; logging, for debugging monitor crashes, in order of ...

[mon.0]

; mds
        ; mds logging to debug issues.

[mds.alpha]

; osd
        ; Ideally, make the journal a separate disk or partition.
        ; If you want to run the journal on a tmpfs (don't), disable DirectIO
        ; You can change the number of recovery operations to speed up recovery
        ; osd logging to debug osd issues, in order of likelihood of being ...
        ; ### The below options only apply if you're using mkcephfs

[osd.0]
        ; if 'devs' is not specified, you're responsible for ...

[osd.1]
        btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.2]
        btrfs devs = /dev/mapper/VolGroup-lv_ceph
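The dump above survives only as comments and section headers, so here is a minimal sketch of the kind of ceph.conf this deployment implies. The hostnames, monitor address, and data paths are taken from the command output later in this article; everything else is illustrative, not my exact file:
[global]
    pid file = /var/run/ceph/$name.pid
    log file = /var/log/ceph/$name.log

[mon]
    mon data = /data/mon$id

[mon.0]
    host = ceph_mds
    mon addr = 222.31.76.178:6789

[mds.alpha]
    host = ceph_mds

[osd]
    osd data = /data/osd$id
    osd journal = /data/osd$id/journal

[osd.0]
    host = ceph_osd0
    btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.1]
    host = ceph_osd1
    btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.2]
    host = ceph_osd2
    btrfs devs = /dev/mapper/VolGroup-lv_ceph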
4. Configure the network
① Set each node's hostname, and make the nodes reachable from one another by hostname
Edit /etc/sysconfig/network to set the node's own hostname;
Edit /etc/hosts to map the other nodes' hostnames to their IPs;
Reboot, then verify with the hostname command. An example mapping is sketched below.
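For example, the /etc/hosts entries might look like this (only the monitor's address, 222.31.76.178, appears in this article; the three OSD addresses are made up for illustration):
222.31.76.178   ceph_mds
222.31.76.179   ceph_osd0
222.31.76.180   ceph_osd1
222.31.76.181   ceph_osd2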
② Allow passwordless ssh between all nodes
This relies on public/private key pairs: to access another node, you first hand it your public key, and it then authenticates you against that key.
Example: on node A, run
#ssh-keygen -d
This generates several files under ~/.ssh; the one we need is id_dsa.pub, node A's public key. Append its contents to the authorized_keys file under ~/.ssh/ on node B (create the file if it doesn't exist), and node A can then ssh to node B without a password.
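For example, to push the monitor's key to ceph_osd0 (repeat for every pair of nodes that needs passwordless access):
#cat ~/.ssh/id_dsa.pub | ssh root@ceph_osd0 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
#ssh root@ceph_osd0 hostname //should now log in without prompting for a password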
5. Create the file system and start it. Run the following on the monitor node!
#mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
I hit the following problems:
(1) scp: /etc/ceph/ceph.conf: No such file or directory
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
[/usr/local/etc/ceph/fetch_config /tmp/fetched.ceph.conf.2693]
The authenticity of host 'ceph_mds (127.0.0.1)' can't be established.
RSA key fingerprint is a7:c8:b8:2e:86:ea:89:ff:11:93:e9:29:68:b5:7c:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph_mds' (RSA) to the list of known hosts.
ceph.conf                                     100% 4436     4.3KB/s   00:00
temp dir is /tmp/mkcephfs.tIHQnX8vkw
preparing monmap in /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: generated fsid f998ee83-9eba-4de2-94e3-14f235ef840c
epoch 0
fsid f998ee83-9eba-4de2-94e3-14f235ef840c
last_changed 2013-05-31 08:22:52.972189
created 2013-05-31 08:22:52.972189
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.tIHQnX8vkw/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.0b3c65941572123eb704d9d614411fc1
scp: /etc/ceph/ceph.conf: No such file or directory
Fix: write a script that syncs the configuration file to both /etc/ceph and /usr/local/etc/ceph on every node (create the /etc/ceph directory by hand first):
[root@ceph_mds ceph]# cat cp_ceph_conf.sh
cp /usr/local/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/etc/ceph/ceph.conf
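The same thing can be written as a loop, which is easier to extend when a node is added (a sketch using the node names above):
#!/bin/sh
CONF=/usr/local/etc/ceph/ceph.conf
cp $CONF /etc/ceph/ceph.conf
for node in ceph_osd0 ceph_osd1 ceph_osd2; do
    scp $CONF root@$node:/usr/local/etc/ceph/ceph.conf
    scp $CONF root@$node:/etc/ceph/ceph.conf
done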
(2)
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.hz1EcPJjtu
preparing monmap in /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: generated fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
epoch 0
fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
last_changed 2013-05-31 08:39:48.198656
created 2013-05-31 08:39:48.198656
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.hz1EcPJjtu/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.2e991ed41f1cdca1149725615a96d0be
umount: /data/osd0: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
Fix: create the missing keyring file by hand:
#mkdir /data
#touch /data/keyring.mds.alpha
[Created successfully]
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.v9vb0zOmJ5
preparing monmap in /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: generated fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
epoch 0
fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
last_changed 2013-05-31 08:50:21.797571
created 2013-05-31 08:50:21.797571
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.v9vb0zOmJ5/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.8912ed2e34cfd2477c2549354c03faa3
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
[Start]
#/etc/init.d/ceph -a start //if necessary, stop the firewall first (#service iptables stop)
[root@ceph_mds ceph]# /etc/init.d/ceph -a start
=== mon.0 ===
Starting Ceph mon.0 on ceph_mds...
starting mon.0 rank 0 at 222.31.76.178:6789/0 mon_data /data/mon0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
=== mds.alpha ===
Starting Ceph mds.alpha on ceph_mds...
starting mds.alpha at :/0
=== osd.0 ===
Mounting Btrfs on ceph_osd0:/data/osd0
Scanning for Btrfs filesystems
Starting Ceph osd.0 on ceph_osd0...
starting osd.0 at :/0 osd_data /data/osd0 /data/osd.0/journal
=== osd.1 ===
Mounting Btrfs on ceph_osd1:/data/osd1
Scanning for Btrfs filesystems
Starting Ceph osd.1 on ceph_osd1...
starting osd.1 at :/0 osd_data /data/osd1 /data/osd.1/journal
=== osd.2 ===
Mounting Btrfs on ceph_osd2:/data/osd2
Scanning for Btrfs filesystems
Starting Ceph osd.2 on ceph_osd2...
starting osd.2 at :/0 osd_data /data/osd2 /data/osd.2/journal
[Check the Ceph cluster status]
[root@ceph_mds ceph]# ceph -s
   health HEALTH_OK
   monmap e1: 1 mons at {0=222.31.76.178:6789/0}, election epoch 2, quorum 0 0
   osdmap e7: 3 osds: 3 up, 3 in
    pgmap v432: 768 pgs: 768 active+clean; 9518 bytes data, 16876 KB used, 293 GB / 300 GB avail
   mdsmap e4: 1/1/1 up {0=alpha=up:active}
[root@ceph_mds ceph]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    300M     293M      16876        0
POOLS:
Question: the space accounting looks wrong. "ceph -s" reports 300 GB, but "ceph df" reports 300M.
6. Mount on the client
#mkdir /mnt/ceph
#mount -t ceph ceph_mds:/ /mnt/ceph
I hit the following errors:
(1)
[root@localhost ~]# mount -t ceph ceph_mds:/ /mnt/ceph/
mount: wrong fs type, bad option, bad superblock on ceph_mds:/,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so
Check with #dmesg:
ceph: Unknown symbol ceph_con_keepalive (err 0)
ceph: Unknown symbol ceph_create_client (err 0)
ceph: Unknown symbol ceph_calc_pg_primary (err 0)
ceph: Unknown symbol ceph_osdc_release_request (err 0)
ceph: Unknown symbol ceph_con_open (err 0)
ceph: Unknown symbol ceph_flags_to_mode (err 0)
ceph: Unknown symbol ceph_msg_last_put (err 0)
ceph: Unknown symbol ceph_caps_for_mode (err 0)
ceph: Unknown symbol ceph_copy_page_vector_to_user (err 0)
ceph: Unknown symbol ceph_msg_new (err 0)
ceph: Unknown symbol ceph_msg_type_name (err 0)
ceph: Unknown symbol ceph_pagelist_truncate (err 0)
ceph: Unknown symbol ceph_release_page_vector (err 0)
ceph: Unknown symbol ceph_check_fsid (err 0)
ceph: Unknown symbol ceph_pagelist_reserve (err 0)
ceph: Unknown symbol ceph_pagelist_append (err 0)
ceph: Unknown symbol ceph_calc_object_layout (err 0)
ceph: Unknown symbol ceph_get_direct_page_vector (err 0)
ceph: Unknown symbol ceph_osdc_wait_request (err 0)
ceph: Unknown symbol ceph_osdc_new_request (err 0)
ceph: Unknown symbol ceph_pagelist_set_cursor (err 0)
ceph: Unknown symbol ceph_calc_file_object_mapping (err 0)
ceph: Unknown symbol ceph_monc_got_mdsmap (err 0)
ceph: Unknown symbol ceph_osdc_readpages (err 0)
ceph: Unknown symbol ceph_con_send (err 0)
ceph: Unknown symbol ceph_zero_page_vector_range (err 0)
ceph: Unknown symbol ceph_osdc_start_request (err 0)
ceph: Unknown symbol ceph_compare_options (err 0)
ceph: Unknown symbol ceph_msg_dump (err 0)
ceph: Unknown symbol ceph_buffer_new (err 0)
ceph: Unknown symbol ceph_put_page_vector (err 0)
ceph: Unknown symbol ceph_pagelist_release (err 0)
ceph: Unknown symbol ceph_osdc_sync (err 0)
ceph: Unknown symbol ceph_destroy_client (err 0)
ceph: Unknown symbol ceph_copy_user_to_page_vector (err 0)
ceph: Unknown symbol __ceph_open_session (err 0)
ceph: Unknown symbol ceph_alloc_page_vector (err 0)
ceph: Unknown symbol ceph_monc_do_statfs (err 0)
ceph: Unknown symbol ceph_monc_validate_auth (err 0)
ceph: Unknown symbol ceph_osdc_writepages (err 0)
ceph: Unknown symbol ceph_parse_options (err 0)
ceph: Unknown symbol ceph_str_hash (err 0)
ceph: Unknown symbol ceph_pr_addr (err 0)
ceph: Unknown symbol ceph_buffer_release (err 0)
ceph: Unknown symbol ceph_con_init (err 0)
ceph: Unknown symbol ceph_destroy_options (err 0)
ceph: Unknown symbol ceph_con_close (err 0)
ceph: Unknown symbol ceph_msgr_flush (err 0)
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
I noticed the client's mount had no ceph filesystem type at all (no mount.ceph helper), while every other node we configured did have mount.ceph, so I built the latest ceph-0.60 on the client as well.
(2) After building and installing ceph-0.60, mount still failed the same way; checking dmesg:
#dmesg | tail
Key type ceph unregistered
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
libceph: no secret set (for auth_x protocol)
libceph: error -22 on auth protocol 2 init
libceph: client4102 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
It finally turned out that mounting requires a user name and secret key; the exact command is:
#mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
[root@localhost ~]# mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
parsing options: name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
The name and secret values in this command come from the monitor's /etc/ceph/keyring file:
[root@ceph_mds ceph]# cat /etc/ceph/keyring
[client.admin]
        key = AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
Notes:
1. To mount the Ceph file system you may use the mount command if you know the monitor host IP address(es), or use the mount.ceph utility to resolve the monitor host name(s) into IP address(es) for you.
2. mount options
-v, --verbose: verbose mode.
-o, --options opts: options are specified with a -o flag followed by a comma-separated string of options.
3. mount.ceph参考:http://ceph.com/docs/master/man/8/mount.ceph/
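Note that mount.ceph also accepts a secretfile option, which keeps the key out of the command line and shell history. A sketch (the file path here is my own choice):
#echo "AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==" > /etc/ceph/admin.secret
#chmod 600 /etc/ceph/admin.secret
#mount -t ceph ceph_mds:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret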
Check the mount on the client:
[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G   13G   35G  27% /
tmpfs                         2.0G     0  2.0G   0% /dev/shm
/dev/sda1                     477M   48M  405M  11% /boot
/dev/mapper/VolGroup-lv_home  405G   71M  385G   1% /home
222.31.76.178:/               300G  6.1G  294G   3% /mnt/ceph
P.S. Posts online say that to avoid typing the secret every time, you can add the following to ceph.conf (remember to sync it to the other nodes). In my tests it had no effect, so for now I mount as shown above; if anyone can see what I got wrong, please let me know.
[mount /]
allow = %everyone
[Solution]
According to the official documentation at http://ceph.com/docs/master/rados/operations/authentication/, to really disable authentication at mount time you need to add the following under [global] in the configuration file:
For version 0.51 and later:
auth cluster required = none
auth service required = none
auth client required = none
For version 0.50 and earlier:
auth supported = none
Official note: "If your cluster environment is relatively safe, you can offset the computation expense of running authentication. We do not recommend it. However, it may be easier during setup and/or troubleshooting to temporarily disable authentication."
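In context, the top of the synced ceph.conf would then look like this (a sketch for 0.51 and later; restart the daemons after syncing):
[global]
    auth cluster required = none
    auth service required = none
    auth client required = none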
That completes the Ceph installation and configuration; the distributed file system is now usable under /mnt/ceph on the client.
I will be running functional verification tests on Ceph next; stay tuned for the test report!
———————————————————————
[References]
1. Installation
Installing ceph 0.47.2 on 64-bit CentOS 6.2: http://blog.csdn.net/frank0712105003/article/details/7631035
Install Ceph on CentOS 5.5: http://blog.csdn.net/gurad2008/article/details/6270804
2. Configuration
Source article: http://blog.csdn.net/pc620/article/details/9002045