Environment: CentOS 7.9 minimal install; software patches applied; SELinux and the firewall disabled.
A freshly deployed Ceph cluster contains only a single, system-generated storage pool by default:
[root@ceph1 ~]# ceph osd lspools
1 device_health_metrics
[root@ceph1 ~]# ceph osd pool ls
device_health_metrics
First, create a default RBD storage pool named rbd and associate it with the rbd application; only then can RBD be used normally:
[root@ceph1 ~]# ceph osd pool create rbd
pool 'rbd' created
[root@ceph1 ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[root@ceph1 ~]# ceph osd pool ls
device_health_metrics
rbd
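To confirm the association took effect, the pool's application metadata can be queried (a minimal check; the JSON output shown is representative):

[root@ceph1 ~]# ceph osd pool application get rbd
{
    "rbd": {}
}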
Basic storage pool management commands
ceph osd pool    # OSD storage pool management command
  ls: list OSD storage pools; equivalent to (ceph osd lspools)
  detail: list OSD storage pools with detailed information
  application    # manage a pool's application association; applications that can be associated: [cephfs, rbd, rgw]
    get PoolName: show a pool's application association; with no pool name, shows the associations of every pool
    enable PoolName ApplicationName: associate an application with a pool
    disable PoolName ApplicationName: remove an application association from a pool
  create PoolName: create an OSD storage pool
  delete PoolName: delete an OSD storage pool; mon_allow_pool_delete must be set to true in the Monitor configuration, otherwise the deletion is refused (see the sketch after the rename example below)
  rename Srcpool Destpool: rename an OSD storage pool
detail: list OSD storage pools with detailed information
[root@ceph-t1 ~]# ceph osd pool ls
device_health_metrics
rbd
a
b
m
[root@ceph-t1 ~]# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 139 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 2 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 113 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 3 'a' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 134 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 4 'b' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 132 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 5 'm' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 131 flags hashpspool stripe_width 0 application rbd
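Any individual field from a detail line can also be read back with ceph osd pool get (a quick sketch; size and pg_num are standard keys, and the values match the rbd pool above):

[root@ceph-t1 ~]# ceph osd pool get rbd size
size: 3
[root@ceph-t1 ~]# ceph osd pool get rbd pg_num
pg_num: 32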
[root@ceph-t1 ~]# ceph osd pool ls
device_health_metrics
rbd
a
b
m
[root@ceph-t1 ~]# ceph osd pool create test
pool 'test' created
[root@ceph-t1 ~]# ceph osd pool ls
device_health_metrics
rbd
a
b
m
test
[root@ceph-t1 ~]# ceph osd pool ls
device_health_metrics
rbd
a
b
m
test
[root@ceph-t1 ~]# ceph osd pool rename test test2
pool 'test' renamed to 'test2'
[root@ceph-t1 ~]# ceph osd pool ls
device_health_metrics
rbd
a
b
m
test2
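Deletion is guarded twice: mon_allow_pool_delete must be true, and the pool name has to be given twice together with --yes-i-really-really-mean-it. A sketch of removing the test2 pool created above (the config set step assumes a cluster with the centralized configuration database, i.e. Nautilus or later):

[root@ceph-t1 ~]# ceph config set mon mon_allow_pool_delete true
[root@ceph-t1 ~]# ceph osd pool delete test2 test2 --yes-i-really-really-mean-it
pool 'test2' removed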
After an RBD storage pool is created, it must be initialized before RBD can use it:
[root@ceph1 ~]# rbd pool init rbd
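The same two steps apply to any additional RBD pool; a minimal sketch for a hypothetical pool named data (the pool name is only for illustration). Note that rbd pool init should also tag the pool with the rbd application, in which case the explicit application enable step above becomes redundant:

[root@ceph1 ~]# ceph osd pool create data
pool 'data' created
[root@ceph1 ~]# rbd pool init data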
Basic RBD management commands
rbd    # RADOS Block Device (RBD) image management command
  ls (list): list all block device images (a long-format -l sketch follows the create example below)
    PoolName: list the images in the specified RBD pool
  info PoolName/RBDName: show an image's size and other details; with no pool specified, images in the default pool (rbd) are shown
  create    # image creation command
    -s, --size Size    # set the image capacity; the default unit is M; valid units: [M, G, T]
    PoolName/RBDName: set the image name; with no pool specified, the image is created in the default pool (rbd)
  resize -s Size PoolName/RBDName: resize an image (grow or shrink; shrinking additionally requires --allow-shrink, see the sketch after the resize example below); with no pool specified, operates on the default pool (rbd)
  rm (remove) PoolName/RBDName: delete an image; with no pool specified, deletes from the default pool (rbd)
PoolName: list the images in the specified RBD pool
[root@ceph-t1 ~]# rbd ls
liuzhe
liuzhe1
luyefeng
luyefeng1
xieliping
xieliping1
[root@ceph-t1 ~]# rbd ls rbd
liuzhe
liuzhe1
luyefeng
luyefeng1
xieliping
xieliping1
[root@ceph-t1 ~]# rbd info luyefeng
rbd image 'luyefeng':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 87918a340e63
        block_name_prefix: rbd_data.87918a340e63
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sat Mar 27 14:16:52 2021
        access_timestamp: Sat Mar 27 14:16:52 2021
        modify_timestamp: Sat Mar 27 14:16:52 2021
-s, --size Size    # set the image capacity; the default unit is M; valid units: [M, G, T]
PoolName/RBDName: set the image name; with no pool specified, the image is created in the default pool (rbd)
[root@ceph-t1 ~]# rbd create -s 100G test/test1
[root@ceph-t1 ~]# rbd ls
liuzhe
liuzhe1
luyefeng
luyefeng1
xieliping
xieliping1
[root@ceph-t1 ~]# rbd ls test
test1
[root@ceph-t1 ~]# rbd info test/test1
rbd image 'test1':
        size 100 GiB in 25600 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d722c21760cc
        block_name_prefix: rbd_data.d722c21760cc
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Mar 29 17:55:49 2021
        access_timestamp: Mon Mar 29 17:55:49 2021
        modify_timestamp: Mon Mar 29 17:55:49 2021
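rbd ls also accepts -l (--long) to include each image's size and format in the listing; a minimal sketch against the test pool just created (the column layout is representative of an Octopus-era client):

[root@ceph-t1 ~]# rbd ls -l test
NAME   SIZE     PARENT  FMT  PROT  LOCK
test1  100 GiB            2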
[root@ceph-t1 ~]# rbd info test/test1
rbd image 'test1':
        size 100 GiB in 25600 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d722c21760cc
        block_name_prefix: rbd_data.d722c21760cc
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Mar 29 17:55:49 2021
        access_timestamp: Mon Mar 29 17:55:49 2021
        modify_timestamp: Mon Mar 29 17:55:49 2021
[root@ceph-t1 ~]# rbd resize -s 200G test/test1
Resizing image: 100% complete...done.
[root@ceph-t1 ~]# rbd info test/test1
rbd image 'test1':
        size 200 GiB in 51200 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d722c21760cc
        block_name_prefix: rbd_data.d722c21760cc
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Mar 29 17:55:49 2021
        access_timestamp: Mon Mar 29 17:55:49 2021
        modify_timestamp: Mon Mar 29 17:55:49 2021
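Shrinking uses the same resize subcommand, but because it can discard data the client insists on an explicit --allow-shrink flag (a hedged sketch, reversing the growth above):

[root@ceph-t1 ~]# rbd resize -s 100G --allow-shrink test/test1
Resizing image: 100% complete...done.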
[root@ceph-t1 ~]# rbd ls test
test1
[root@ceph-t1 ~]# rbd rm test/test1
Removing image: 100% complete...done.
[root@ceph-t1 ~]# rbd ls test
[root@ceph-t1 ~]#