A complete Ceph cluster can provide block storage, a file system, and object storage. This section focuses on how to use the RBD block-storage features flexibly. The cluster used throughout looks like this:
$ ceph -s
cluster:
id: 537175bb-51de-4cc4-9ee3-b5ba8842bff2
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 2m)
mgr: ceph-node1(active, since 2d), standbys: ceph-node2
mds: mycephfs:1 {0=ceph-node2=up:active} 1 up:standby
osd: 12 osds: 12 up (since 30h), 12 in (since 30h)
rgw: 2 daemons active (ceph-node1, ceph-node2)
task status:
data:
pools: 7 pools, 201 pgs
objects: 212 objects, 6.9 KiB
usage: 12 GiB used, 1.2 TiB / 1.2 TiB avail
pgs: 201 active+clean
RBD (Block Storage): Introduction and Usage
An introduction to RBD
RBD is short for RADOS Block Device. RBD block storage is the most stable and most widely used of Ceph's storage types. An RBD block device behaves like a disk: it can be formatted and mounted. RBD devices offer snapshots, multiple replicas, cloning, and consistency, and their data is striped across multiple OSDs in the Ceph cluster.
How RBD works
- The client creates a pool, specifying the number of PGs, then creates an RBD device in it and maps/mounts it;
- When the user writes data, Ceph splits it into chunks, 4 MiB each by default, each named with an object prefix plus a sequence number;
- Each object is assigned to a PG by a hash algorithm;
- Following the CRUSH algorithm, the PG picks 3 OSDs (assuming a replica count of 3) and stores the object on all three; this concrete set of OSDs is what the PGP concept refers to (see the verification sketch after this list);
- Each OSD formats its disk with a file system (XFS here), and storing an object on that file system amounts to storing a file such as rbd0.object1.file.
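This object layout can be observed directly once the image below exists. A minimal verification sketch, assuming the myrbd1 pool created in the next step; the object name here is illustrative, take a real one from the rados ls output:
## List the image's backing objects (the rbd_data prefix comes from rbd info)
$ rados -p myrbd1 ls | grep rbd_data
## Show which PG and which OSDs a given object maps to
$ ceph osd map myrbd1 rbd_data.121a89b2ad1f2.0000000000000000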
For a node to map, create, or otherwise use RBD, two conditions must be met (a minimal setup sketch follows this list):
- it must be able to talk to the Ceph cluster, which means it needs the ceph.conf file;
- it must have the Ceph client components, i.e. the ceph-common package.
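A sketch of that client setup, assuming a CentOS/RHEL client (use apt on Debian/Ubuntu) and that ceph-node1 holds the cluster configuration:
## Install the Ceph client components
# yum install -y ceph-common
## Copy the cluster configuration (plus the admin keyring for admin operations)
# scp ceph-node1:/etc/ceph/ceph.conf /etc/ceph/
# scp ceph-node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/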
(1) Create a pool and an image
## Create the pool
$ ceph osd pool create myrbd1 64 64
pool 'myrbd1' created
## Enable the rbd application on this pool
$ ceph osd pool application enable myrbd1 rbd
enabled application 'rbd' on pool 'myrbd1'
## Initialize the pool for RBD
$ rbd pool init -p myrbd1
## Create a 10 GiB block image (--size is in MiB)
$ rbd create --size 10240 myrbd1/image01
## List the images in the pool
$ rbd ls --pool myrbd1
image01
## Show the image's details
$ rbd info myrbd1/image01
rbd image 'image01':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 121a89b2ad1f2
block_name_prefix: rbd_data.121a89b2ad1f2
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon Feb 21 11:24:07 2022
access_timestamp: Mon Feb 21 11:24:07 2022
modify_timestamp: Mon Feb 21 11:24:07 2022
(2) Map the image into the client machine's kernel
## Map into the kernel
# rbd map myrbd1/image01
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable myrbd1/image01 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
The map fails: many RBD features depend on kernel support, and the message says this kernel does not support the object-map, fast-diff, and deep-flatten features.
Disable those features:
rbd feature disable myrbd1/image01 object-map fast-diff deep-flatten
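To avoid disabling features on every new image, one option is to create images with only the kernel-friendly layering feature; setting rbd_default_features = 1 (the layering bit) in the [client] section of ceph.conf should have the same effect. A sketch, where image02 is a hypothetical name:
## Create an image with only the layering feature enabled
$ rbd create --size 10240 --image-feature layering myrbd1/image02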
Map again:
# rbd map myrbd1/image01
## Check the machine's block devices
# lsblk | grep rbd
rbd0 252:0 0 10G 0 disk
## List mapped rbd images.
# rbd showmapped
id pool namespace image snap device
0 myrbd1 image01 - /dev/rbd0
(3) Format, mount, test, and persist
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /mnt/
Test reads and writes:
# cd /mnt
# for((i=1;i<=20;i++));do echo $i > $i.txt;done
# ls
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
# cat 1.txt
1
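This step's heading also mentions persistence. One common approach, assuming the rbdmap service shipped with ceph-common, is an entry in /etc/ceph/rbdmap plus an fstab line, so the mapping and mount survive a reboot:
## /etc/ceph/rbdmap — one image to map at boot per line
myrbd1/image01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
## Enable the service and mount via the udev-created symlink
# systemctl enable rbdmap
# echo '/dev/rbd/myrbd1/image01 /mnt xfs _netdev 0 0' >> /etc/fstab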
Unmount and unmap the image:
# umount /dev/rbd0
# rbd unmap myrbd1/image01
If unmapping fails with a device-busy error, add the -o force option.
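For example, with the image used above:
# rbd unmap -o force myrbd1/image01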
Map the image on another machine and check whether the data is still there:
# rbd map myrbd1/image01
# mount /dev/rbd0 /mnt/rbd/
# cd /mnt/rbd
# ls
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
# cat 1.txt
1
The test is complete: no data was lost.
(4) Resize the RBD image's storage capacity
$ rbd resize myrbd1/image01 --size 15G
Resizing image: 100% complete...done.
$ rbd info myrbd1/image01
rbd image 'image01':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
image01 has been resized to 15 GiB, but from inside the mount it still shows 10 GiB:
# df -h | grep mnt
/dev/rbd0 10G 33M 10G 1% /mnt
After a resize the kernel sees the new size immediately, so there is no need to re-map; just grow the file system in place (XFS uses xfs_growfs; ext3/ext4 use resize2fs):
# xfs_growfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=16, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2621440 to 3932160
# df -h | grep mnt
/dev/rbd0 15G 34M 15G 1% /mnt
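For comparison, had the image been formatted with ext4, the equivalent refresh would be:
# resize2fs /dev/rbd0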
Using RBD snapshots
(1) Snapshot: a point-in-time copy of the image; if the system runs into trouble, the image can be rolled back to the state the snapshot captured.
Create a snapshot:
$ rbd snap create myrbd1/image01@snap01
$ rbd snap list myrbd1/image01
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 snap01 10 GiB Tue Feb 22 11:33:32 2022
$ rbd info myrbd1/image01@snap01
rbd image 'image01':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 1
id: 121a89b2ad1f2
block_name_prefix: rbd_data.121a89b2ad1f2
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Mon Feb 21 11:24:07 2022
access_timestamp: Mon Feb 21 11:24:07 2022
modify_timestamp: Mon Feb 21 11:24:07 2022
protected: False
Simulate data loss, then restore from the snapshot:
# ls
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
# rm -f *
# cd ..
# umount /dev/rbd0
# rbd unmap myrbd1/image01
## Roll back
$ rbd snap rollback myrbd1/image01@snap01
Rolling back to snapshot: 100% complete...done.
Note: before rolling back, unmount the RBD device and remove the kernel mapping, as done above, and only then run the rollback.
Re-map and re-mount, then verify the data is restored:
# rbd map myrbd1/image01
# mount /dev/rbd0 /mnt/rbd/
# ls
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
The data has been restored successfully.
(2) Clone: create an identical copy of a given block device.
Cloning must be based on a snapshot of the block device; here we demonstrate it with snap01, created in the previous step.
Put the snapshot into the protected state (only a protected snapshot can be cloned):
## Protect the snapshot
$ rbd snap protect myrbd1/image01@snap01
## Check the details; the key change is that protected is now True
$ rbd info myrbd1/image01@snap01
rbd image 'image01':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 1
...
protected: True
Clone a new block device from the snapshot:
## Clone
# rbd clone myrbd1/image01@snap01 myrbd1/image01_clone
## List all current images
# rbd ls myrbd1
image01
image01_clone
## Show the cloned block device's details
# rbd info myrbd1/image01_clone
rbd image 'image01_clone':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 8549a76ff74f
block_name_prefix: rbd_data.8549a76ff74f
format: 2
features: layering
op_features:
flags:
create_timestamp: Tue Mar 29 14:40:51 2022
access_timestamp: Tue Mar 29 14:40:51 2022
modify_timestamp: Tue Mar 29 14:40:51 2022
parent: myrbd1/image01@snap01
overlap: 10 GiB
Compared with an ordinary image, the clone's info has one extra line, parent: myrbd1/image01@snap01, which records its parent relationship.
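The relationship is also visible from the parent's side: rbd children lists every clone of a snapshot, which here should report myrbd1/image01_clone:
$ rbd children myrbd1/image01@snap01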
Break the parent relationship so the clone becomes independent, then mount-test it:
## Flatten: detach from the parent block device
# rbd flatten myrbd1/image01_clone
Image flatten: 100% complete...done.
## On another machine, map the cloned device
# rbd map myrbd1/image01_clone
/dev/rbd1
# mount /dev/rbd1 /mnt/
Note: a block device cloned from a snapshot keeps the original device's file-system UUID, so the clone and the original cannot be mounted on the same host, or they will conflict:
/dev/rbd0: UUID="14e60e20-3620-4177-a453-400b00caad1b" TYPE="xfs"
/dev/rbd1: UUID="14e60e20-3620-4177-a453-400b00caad1b" TYPE="xfs"
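Since the file system here is XFS, one workaround is to mount the clone with the nouuid option, or to regenerate its UUID while it is unmounted (the mount point /mnt/clone is illustrative):
## Mount despite the duplicate UUID
# mount -o nouuid /dev/rbd1 /mnt/clone
## Or give the clone a new UUID permanently
# xfs_admin -U generate /dev/rbd1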
Deleting RBD snapshots
Since snap01 was protected for the clone demo, unprotect it first (a protected snapshot cannot be removed):
$ rbd snap unprotect myrbd1/image01@snap01
$ rbd snap remove myrbd1/image01@snap01
Removing snap: 100% complete...done.
$ rbd snap list myrbd1/image01
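To drop every snapshot of an image in one go, rbd also provides snap purge (protected snapshots still need to be unprotected first):
$ rbd snap purge myrbd1/image01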
Importing and exporting RBD images
1. Export an RBD image
# rbd export myrbd1/image01 /home/nfs/image01_bak
Exporting image: 100% complete...done.
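On top of a full export, changes between snapshots can be shipped incrementally with export-diff/import-diff; a sketch, where snap02 is a hypothetical later snapshot:
## Export only the changes made since snap01
# rbd export-diff --from-snap snap01 myrbd1/image01@snap02 /home/nfs/image01.diff
## Apply the diff on the destination
# rbd import-diff /home/nfs/image01.diff myrbd1/image01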
2. Import the image and restore the data
First delete image01:
## All of the image's snapshots must be removed first
# rbd snap remove myrbd1/image01@snap01
Removing snap: 100% complete...done.
## Delete the image
# rbd remove myrbd1/image01
Removing image: 100% complete...done.
Re-import the image:
# rbd import /home/nfs/image01_bak myrbd1/image01 --image-format 2
Importing image: 100% complete...done.
Map and mount again to check whether the data survived:
# rbd map myrbd1/image01
/dev/rbd0
# mount /dev/rbd0 /mnt/
# ls /mnt/
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
All the files are back; export/import preserved the data.
Mounting RBD as a normal user
First, recall that for a node to map, create, or otherwise use RBD, two conditions must be met:
- it must be able to talk to the Ceph cluster, which means it needs the ceph.conf file;
- it must have the Ceph client components, i.e. the ceph-common package.
(1) View existing users:
$ ceph auth list
installed auth entries:
mds.ceph-node1
key: AQBoFg9iVjO6DhAADSCzL/Njv16XONHBAPuRLA==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
mds.ceph-node2
key: AQCGFg9icS7TCBAAGvFfz/C5Av0P6Hu1Ws5SUw==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
...
By default, Ceph creates a corresponding user for every component, and each user has its own set of capabilities.
Capability types: for each daemon type (mon, osd, mds, mgr) a user can be granted allow r (read), allow w (write), allow x (execute class methods), allow * (full access), or a predefined profile such as the profile mds seen above.
Get the capability information for a specific user:
$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQA0bgdi30YtIRAABefFzbzVQw0vCttrjeqirA==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
(2) Create a normal user and grant it the corresponding capabilities
## Create the client.vfan user, granting read on MON and read/write on the myrbd1 pool
$ ceph auth add client.vfan mon 'allow r' osd 'allow rw pool=myrbd1'
added key for client.vfan
$ ceph auth get client.vfan
exported keyring for client.vfan
[client.vfan]
key = AQCq2CViGig9HxAA64YqzL85idfTn3TRzE18dQ==
caps mon = "allow r"
caps osd = "allow rw pool=myrbd1"
## If the user already exists, this command simply returns the user name and key in keyring-file format. Grant read on MON, read/write on OSD
$ ceph auth get-or-create client.tom mon 'allow r' osd 'allow rw pool=myrbd1'
[client.tom]
key = AQDh2SVi+cFJJhAAXZomi67Sfd7vbbUE9MmHDw==
## If the user already exists, this command returns only the key. Grant read on MON, read/write on OSD
$ ceph auth get-or-create-key client.tony mon 'allow r' osd 'allow rw pool=myrbd1'
AQAV2iVizi0pLhAA1RXFbxcYeDZqJxtKW9ZzSg==
(3) Transfer the user's key to the client host
When a client accesses the Ceph cluster, Ceph looks up keys in these four preset keyring files:
/etc/ceph/<cluster name>.<user type>.<user id>.keyring # keyring for a single user
/etc/ceph/<cluster name>.keyring # keyring holding multiple users
/etc/ceph/keyring # multi-user keyring with no cluster name defined
/etc/ceph/keyring.bin # compiled binary keyring
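A multi-user keyring of the second form can be assembled with ceph-authtool, assuming the per-user keyring files already exist:
## Create an empty keyring, then merge a user's keyring into it
$ ceph-authtool -C /etc/ceph/ceph.keyring
$ ceph-authtool /etc/ceph/ceph.keyring --import-keyring ceph.client.vfan.keyring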
Create the keyring file for the normal user:
$ ceph auth get client.vfan -o ceph.client.vfan.keyring
$ cat ceph.client.vfan.keyring
[client.vfan]
key = AQCq2CViGig9HxAA64YqzL85idfTn3TRzE18dQ==
caps mon = "allow r"
caps osd = "allow rw pool=myrbd1"
This file can also be used to restore the user: ceph auth import -i ceph.client.vfan.keyring re-imports it, and with this method the user's key value does not change.
Copy the keyring file to the client host:
$ sudo scp ceph.client.vfan.keyring ceph-node2:/etc/ceph/
(4) Test cluster access from the client as the normal user
# ceph --user vfan -s
cluster:
id: 537175bb-51de-4cc4-9ee3-b5ba8842bff2
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 2m)
mgr: ceph-node1(active, since 2d), standbys: ceph-node2
mds: mycephfs:1 {0=ceph-node2=up:active} 1 up:standby
osd: 12 osds: 12 up (since 30h), 12 in (since 30h)
rgw: 2 daemons active (ceph-node1, ceph-node2)
task status:
data:
pools: 8 pools, 265 pgs
objects: 273 objects, 14 MiB
usage: 8.1 GiB used, 792 GiB / 800 GiB avail
pgs: 265 active+clean
By default Ceph commands act as the admin user; --user selects another user, as shown above.
(5) Mount RBD as the normal user
Because the user was created without the x capability on OSDs, re-grant the capabilities now:
$ ceph auth caps client.vfan mon 'allow r' osd 'allow rwx pool=myrbd1'
updated caps for client.vfan
Map into the kernel and mount the RBD device:
$ sudo rbd map myrbd1/image01 --user vfan
$ sudo mount /dev/rbd0 /mnt/
$ ls /mnt/
10.txt 12.txt 14.txt 16.txt 18.txt 1.txt 2.txt 4.txt 6.txt 8.txt
11.txt 13.txt 15.txt 17.txt 19.txt 20.txt 3.txt 5.txt 7.txt 9.txt
Unmap:
$ sudo rbd unmap myrbd1/image01 --user vfan
(6) All other operations, such as resizing images, creating snapshots, and cloning, work the same as with the admin user; just append --user [user name] to each command. For example:
$ rbd resize myrbd1/image01 --size 15G --user vfan
Resizing image: 100% complete...done.
That concludes the tour of common RBD features; the next sections cover CephFS and object storage.