1 Environment
1.1 Cluster
ceph-mon1, public network: 192.168.1.131, used for monitoring and for deploying the Ceph cluster
The number of monitor nodes should be odd; to get the installation done faster, only one monitor is configured here, and more can be added later (see the sketch in section 3.6).
ceph-osd1, public network: 192.168.1.141, cluster network: 192.168.56.50
ceph-osd2, public network: 192.168.1.142, cluster network: 192.168.56.51
ceph-osd3, public network: 192.168.1.143, cluster network: 192.168.56.52
Each OSD node has three disks for storing data:
Disk1 (data1, for data) = 10GB
Disk2 (data2, for data) = 10GB
Disk3 (data3, for data) = 10GB
1.2 Installation environment
1.2.1 Operating system:
CentOS 7, with the minimal install type selected. A minimal install may be missing some common packages, such as ifconfig, wget, and scp.
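If needed, those tools can be installed afterwards; a minimal sketch (on CentOS 7, net-tools provides ifconfig and openssh-clients provides scp):
# optional: restore common tools missing from a minimal install
yum -y install net-tools wget openssh-clients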
1.2.2 Ceph version:
Jewel
2 Operating system configuration
Scope: run on all hosts
2.1 Set the hostname
hostnamectl set-hostname ceph-mon1
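The other hosts are renamed the same way, using the names from section 1.1:
# run on the corresponding host
hostnamectl set-hostname ceph-osd1
hostnamectl set-hostname ceph-osd2
hostnamectl set-hostname ceph-osd3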
2.2 Configure the yum repositories
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
2.3 Add the Ceph yum repository
vi /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
2.4 Install the Ceph client packages
yum makecache
yum install ceph ceph-radosgw rdate -y
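To confirm the packages landed on each node, a quick check such as the following can be run:
# optional sanity check after installation
ceph --version
rpm -qa | grep ceph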
2.5 Disable SELinux and the firewall
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
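To verify, something like the following can be run (getenforce should report Permissive now, and Disabled after a reboot):
# optional verification
getenforce
systemctl is-active firewalld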
2.6 Synchronize time between the nodes
yum -y install rdate
rdate -s time-a.nist.gov
echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
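To check that the time server is reachable without touching the clock, rdate can print the remote time first:
# optional: print the remote time without setting it
rdate -p time-a.nist.gov
date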
2.7 Prepare the disks
The disks can simply be added through VirtualBox; add three 10GB disks to each OSD node.
3 Deploy the Ceph cluster
3.1 Check the partition layout
[root@ceph-osd1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 49.5G  0 part
  ├─centos-root 253:0    0 47.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   10G  0 disk
sdc               8:32   0   10G  0 disk
sdd               8:48   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
3.2 Install ceph-deploy
Scope: ceph-mon1 node
ceph-deploy is used to deploy a Ceph cluster quickly; the cluster can of course also be deployed manually.
# Install ceph-deploy
yum -y install ceph-deploy
3.2.1 Check the ceph and ceph-deploy versions
[root@ceph-mon1 ~]# ceph --version
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)   -- the Jewel release
[root@ceph-mon1 ~]# ceph-deploy --version
1.5.36
3.3 Configure the hosts file
Scope: ceph-mon1 node
vi /etc/hosts

192.168.1.131 ceph-mon1
192.168.1.141 ceph-osd1
192.168.1.142 ceph-osd2
192.168.1.143 ceph-osd3
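So that the OSD nodes can also resolve each other by name, the same file can be copied out to them; a sketch (each scp will prompt for a password until SSH keys are set up in section 3.8):
scp /etc/hosts root@ceph-osd1:/etc/hosts
scp /etc/hosts root@ceph-osd2:/etc/hosts
scp /etc/hosts root@ceph-osd3:/etc/hosts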
3.4 Create a working directory
Scope: ceph-mon1 node
[root@ceph-mon1 ~]# mkdir /ceph-cluster
[root@ceph-mon1 ~]# cd /ceph-cluster/
[root@ceph-mon1 ceph-cluster]#
3.5 Deploy the Ceph cluster
Specify the monitor node:
[root@ceph-mon1 ceph-cluster]# ceph-deploy new ceph-mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph-mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fa392b2a230>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at ····
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
3.5.1 A configuration file and a log file are generated in the directory
[root@ceph-mon1 ceph-cluster]# ll
total 12
-rw-r--r-- 1 root root  200 Oct 26 04:16 ceph.conf
-rw-r--r-- 1 root root 2970 Oct 26 04:16 ceph-deploy-ceph.log
-rw------- 1 root root   73 Oct 26 04:16 ceph.mon.keyring
3.5.2 Inspect the configuration file
The configuration file records some information about the monitor node:
[root@ceph-mon1 ceph-cluster]# cat ceph.conf
[global]
fsid = 3fa8936a-118a-49aa-b31c-c6c728cb3b71
mon_initial_members = ceph-mon1
mon_host = 192.168.1.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
3.5.3 Modify the configuration file
Add public_network to ceph.conf according to your own IP layout, and slightly relax the allowed clock drift between monitors (the default is 0.05s; here it is raised to 2s):
[root@ceph-mon1 ceph-cluster]# echo public_network=192.168.1.0/24 >> ceph.conf
[root@ceph-mon1 ceph-cluster]# echo mon_clock_drift_allowed = 2 >> ceph.conf
[root@ceph-mon1 ceph-cluster]# cat ceph.conf
[global]
fsid = 3fa8936a-118a-49aa-b31c-c6c728cb3b71
mon_initial_members = ceph-mon1
mon_host = 192.168.1.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network=192.168.1.0/24
mon_clock_drift_allowed = 2
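Since the OSD nodes in section 1.1 also have a dedicated cluster network, replication traffic can be split off from client traffic in the same way, using Ceph's standard cluster_network option and the addresses from section 1.1; a sketch:
echo cluster_network=192.168.56.0/24 >> ceph.conf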
3.6 Deploy the monitor
[root@ceph-mon1 ceph-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
······
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpBtiEvK
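When more monitor hosts become available later (to reach the recommended odd count), they can be joined from this same directory; a sketch, assuming a hypothetical ceph-mon2 prepared the same way as ceph-mon1 in section 2:
ceph-deploy mon add ceph-mon2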
3.7 Check the cluster status
[root@ceph-mon1 ceph-cluster]# ceph -s
    cluster 3fa8936a-118a-49aa-b31c-c6c728cb3b71
     health HEALTH_ERR
            no osds
     monmap e1: 1 mons at {ceph-mon1=192.168.1.131:6789/0}
            election epoch 3, quorum 0 ceph-mon1
     osdmap e1: 0 osds: 0 up, 0 in        -- no OSD information yet, since the OSDs have not been deployed
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
3.8 Deploy the OSDs
The command takes a few minutes to run.
[root@ceph-mon1 ceph-cluster]# ceph-deploy --overwrite-conf osd prepare \
    ceph-osd1:/dev/sdb ceph-osd1:/dev/sdc ceph-osd1:/dev/sdd \
    ceph-osd2:/dev/sdb ceph-osd2:/dev/sdc ceph-osd2:/dev/sdd \
    ceph-osd3:/dev/sdb ceph-osd3:/dev/sdc ceph-osd3:/dev/sdd \
    --zap-disk
Note: --zap-disk wipes any existing partition table on the target disks; if the disks on the OSD nodes have never been partitioned, the parameter is not required. The disks here have not been partitioned or formatted.
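If the disks had carried old partitions, they could also be wiped explicitly before preparing them; a sketch for a single disk:
ceph-deploy disk zap ceph-osd1:/dev/sdb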
I did not set up passwordless SSH here, so the password has to be entered several times during execution; to avoid that, set up SSH keys:
ssh-keygen -t rsa
ssh-copy-id root@ceph-osd1
ssh-copy-id root@ceph-osd2
ssh-copy-id root@ceph-osd3
After the deployment succeeds, check the cluster status again:
[root@ceph-mon1 ceph-cluster]# ceph -s
    cluster 3fa8936a-118a-49aa-b31c-c6c728cb3b71
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph-mon1=192.168.1.131:6789/0}
            election epoch 3, quorum 0 ceph-mon1
     osdmap e51: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v147: 64 pgs, 1 pools, 0 bytes data, 0 objects
            306 MB used, 45674 MB / 45980 MB avail
                  64 active+clean
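The remaining HEALTH_WARN comes from the default rbd pool having only 64 PGs: with 3 replicas across 9 OSDs that is 64 × 3 / 9 ≈ 21 PGs per OSD, below the minimum of 30. A sketch of one way to clear it, raising the pool to 128 PGs (128 × 3 / 9 ≈ 42):
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128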
The cluster deployment is now complete!