A quick note on a small problem I ran into. I had done some maintenance on the cluster: stopping one OSD while keeping the cluster from migrating data in the meantime, so I set the noout flag:
[root@ceph-osd1 XinFusion]# ceph osd set noout
Afterwards the warning below showed up, and I had no idea how to get rid of it. Sigh.
noout,sortbitwise flag(s) set
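For context, noout tells the monitors not to mark a down OSD "out" after the usual timeout, so stopping the daemon doesn't trigger recovery/backfill. The stop itself looks something like this (the OSD id and the systemd unit name are just examples; SysV-era installs use /etc/init.d/ceph stop osd.0 instead):

systemctl stop ceph-osd@0    # stop the OSD being serviced; it stays "in" thanks to noout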
[root@ceph-osd1 XinFusion]# ceph -s
    cluster 21ed0f42-69d2-450c-babf-b1a44c1b82e4
     health HEALTH_WARN
            noout,sortbitwise flag(s) set
            1 mons down, quorum 0,1,2 ceph-osd1,ceph-osd2,ceph-osd3
     monmap e7: 4 mons at {ceph-osd1=192.168.1.141:6789/0,ceph-osd2=192.168.1.142:6789/0,ceph-osd3=192.168.1.143:6789/0,ceph-osd4=192.168.1.145:6789/0}
            election epoch 92, quorum 0,1,2 ceph-osd1,ceph-osd2,ceph-osd3
     osdmap e733: 10 osds: 10 up, 10 in
            flags noout,noin,sortbitwise
      pgmap v558367: 640 pgs, 3 pools, 1853 MB data, 497 objects
            6106 MB used, 398 GB / 404 GB avail
                 640 active+clean
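By the way, you don't have to read the whole status to see which flags are set; grepping the osdmap dump is quicker, and ceph health detail expands each warning on its own line:

ceph osd dump | grep flags    # prints the flags line, e.g. "flags noout,noin,sortbitwise"
ceph health detail            # lists each HEALTH_WARN item separately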
I checked the official documentation and found the fix here:
http://docs.ceph.com/docs/hammer/rados/troubleshooting/troubleshooting-osd/
[root@ceph-osd1 XinFusion]# ceph osd unset noout
unset noout
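The earlier osdmap flags also included noin (while it's set, OSDs that come up won't be marked back "in"); it is cleared the same way, which would explain the osdmap epoch advancing from e733 to e735 by the next status check:

ceph osd unset noin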
That did it.
[root@ceph-osd1 XinFusion]# ceph -s
    cluster 21ed0f42-69d2-450c-babf-b1a44c1b82e4
     health HEALTH_WARN
            1 mons down, quorum 0,1,2 ceph-osd1,ceph-osd2,ceph-osd3
     monmap e7: 4 mons at {ceph-osd1=192.168.1.141:6789/0,ceph-osd2=192.168.1.142:6789/0,ceph-osd3=192.168.1.143:6789/0,ceph-osd4=192.168.1.145:6789/0}
            election epoch 92, quorum 0,1,2 ceph-osd1,ceph-osd2,ceph-osd3
     osdmap e735: 10 osds: 10 up, 10 in
            flags sortbitwise
      pgmap v558399: 640 pgs, 3 pools, 1853 MB data, 497 objects
            6106 MB used, 398 GB / 404 GB avail
                 640 active+clean
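For next time, the whole maintenance round trip is short; a sketch assuming systemd units, with osd.0 as an example:

ceph osd set noout           # freeze out-marking before maintenance
systemctl stop ceph-osd@0    # service the OSD
systemctl start ceph-osd@0
ceph osd unset noout         # back to normal once the OSD is up again

The leftover HEALTH_WARN above is unrelated to the flags: the fourth monitor (ceph-osd4) is down. ceph quorum_status shows which mons are out of quorum, and restarting that mon daemon on its host clears the warning.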