Next, prepare the environment for the deploy node and the other nodes.
I. Set up the ceph-deploy node:
[root@localhost ~]# ssh 172.17.0.1
root@172.17.0.1's password:
Last login: Thu Nov 27 15:05:54 2014 from 172.17.42.1
[root@65a05647c55d ~]# yum install -y ceph-deploy
II. Set up the nodes (run on every node, including the deploy node):
The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a particular user, that user must have passwordless sudo privileges, because ceph-deploy needs to install software and configuration files without prompting for passwords.
1. Configure clock synchronization (not needed in this Docker-based example)
# crontab -e
8 * * * * /usr/sbin/ntpdate asia.pool.ntp.org && /sbin/hwclock --systohc
2. Install the SSH server (not needed in this Docker-based example; it is already installed)
yum install -y openssh-server openssh-clients
3. Create a ceph user that the deploy node will use to manage the other nodes. (The deploy node also needs this user.)
[root@localhost ~]# ssh 172.17.0.2
root@172.17.0.2's password:
Last login: Thu Nov 27 15:06:29 2014 from 172.17.42.1
[root@136dd8993abd ~]# useradd ceph
Set the password:
[root@136dd8993abd ~]# echo 'ceph:cephpwd' | chpasswd
Configure sudo:
[root@136dd8993abd ~]# yum install -y sudo
[root@136dd8993abd ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
Verify that the sudo configuration is correct:
[root@136dd8993abd ~]# su - ceph
[ceph@136dd8993abd ~]$ sudo date
Thu Nov 27 15:41:53 GMT 2014
..............
4. Generate an SSH key for the ceph user on the deploy node
The current node is the deploy node:
[root@deploy ~]# ip addr show v1
64: v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 4a:e5:cd:99:f5:c8 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global v1
valid_lft forever preferred_lft forever
inet6 fe80::48e5:cdff:fe99:f5c8/64 scope link
valid_lft forever preferred_lft forever
Generate the key as the ceph user; do not enter a passphrase:
[root@deploy ~]# su - ceph
Last login: Fri Nov 28 11:14:46 GMT 2014 on pts/0
[ceph@deploy ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
32:3b:89:89:36:28:e7:01:3e:1b:80:ce:70:de:ca:34 ceph@deploy
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|.                |
|= . o S          |
|*= o o =         |
|o*E + +          |
|.*+= .           |
| .+              |
+-----------------+
Copy the ceph user's key to the ceph user on every node (including the deploy node itself):
[ceph@deploy ~]$ ssh-copy-id ceph@172.17.0.1
....................
[ceph@deploy ~]$ ssh-copy-id ceph@172.17.0.8
The authenticity of host '172.17.0.8 (172.17.0.8)' can't be established.
ECDSA key fingerprint is db:5c:6b:2a:bc:9e:3e:31:24:1b:c0:8d:5f:96:f2:e0.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@172.17.0.8's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph@172.17.0.8'"
and check to make sure that only the key(s) you wanted were added.
..................
Verify that the ceph user can log in to every node without a password:
[ceph@deploy ~]$ ssh ceph@172.17.0.1 date
Fri Nov 28 11:22:04 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.2 date
Fri Nov 28 11:22:06 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.3 date
Fri Nov 28 11:22:08 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.4 date
Fri Nov 28 11:22:09 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.5 date
Fri Nov 28 11:22:10 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.6 date
Fri Nov 28 11:22:12 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.7 date
Fri Nov 28 11:22:13 GMT 2014
[ceph@deploy ~]$ ssh ceph@172.17.0.8 date
Fri Nov 28 11:22:15 GMT 2014
5. Configure the nodes' network interfaces to start automatically (not needed in this example):
Ceph OSDs peer with each other and report to Ceph Monitors over the network. If networking is off by default, the Ceph cluster cannot come online during bootup until you enable networking.
The default configuration on some distributions (e.g., CentOS) has the networking interface(s) off by default. Ensure that, during boot up, your network interface(s) turn(s) on so that your Ceph daemons can communicate over the network. For example, on Red Hat and CentOS, navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-{iface} file has ONBOOT set to yes.
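Nothing needs to change in this Docker-based example, but as a minimal sketch of the check described above (assuming a node named osd1 whose interface is eth0; both names are illustrative), it might look like this:
[root@osd1 ~]# grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=no
[root@osd1 ~]# sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
[root@osd1 ~]# grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes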
6. Configure hostname resolution; run on all nodes, including the deploy node.
[root@deploy ~]# yum install -y hostname
[root@deploy ~]# vi /etc/hosts
172.17.0.1 deploy
172.17.0.2 mon1
172.17.0.3 mon2
172.17.0.4 mon3
172.17.0.5 osd1
172.17.0.6 osd2
172.17.0.7 osd3
172.17.0.8 osd4
127.0.0.1 localhost
.......................
7. Configure the firewall (the required ports are already open in this example, so no entries need to be added)
Ceph Monitors communicate using port 6789 by default. Ceph OSDs communicate in a port range of 6800:7100 by default; see the Network Configuration Reference for details. Ceph OSDs can use multiple network connections to communicate with clients, monitors, other OSDs for replication, and other OSDs for heartbeats.
On some distributions (e.g., RHEL), the default firewall configuration is fairly strict. You may need to adjust your firewall settings to allow inbound requests so that clients in your network can communicate with daemons on your Ceph nodes.
For firewalld on RHEL 7, add port 6789 for Ceph Monitor nodes and ports 6800:7100 for Ceph OSDs to the public zone, and make the setting permanent so that it survives a reboot. For iptables, add port 6789 for Ceph Monitors and ports 6800:7100 for Ceph OSDs, and once you have finished, make the changes persistent on each node so that they remain in effect when the nodes reboot.
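Not needed in this example, but as a hedged sketch of the two approaches just described (assuming the public zone for firewalld and an interface named eth0 for iptables; eth0 is an assumption, and mon1/osd1 stand for any monitor/OSD node from the hosts file above):
[root@mon1 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
[root@osd1 ~]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
[root@mon1 ~]# firewall-cmd --reload
Or, with iptables (then save the rules so they persist across reboots):
[root@mon1 ~]# iptables -A INPUT -i eth0 -p tcp --dport 6789 -j ACCEPT
[root@osd1 ~]# iptables -A INPUT -i eth0 -p tcp -m multiport --dports 6800:7100 -j ACCEPT
[root@osd1 ~]# service iptables save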
8. Configure tty (all nodes).
[root@deploy ~]# visudo -f /etc/sudoers
Comment out the following line:
#Defaults requiretty
9. Disable SELinux (all nodes)
# setenforce 0
# vi /etc/selinux/config
SELINUX=disabled
10. Add the EPEL repository (all nodes)
http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/repoview/epel-release.html
http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[root@ ~]# yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.1 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.2 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.3 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.4 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.5 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.6 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.7 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
[ceph@deploy ~]$ ssh 172.17.0.8 sudo yum install -y http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm