From f4a9ba4df60cc5115c544243bfc1f60dae317038 Mon Sep 17 00:00:00 2001
From: iProbe
Date: Sun, 25 Jun 2023 22:00:30 +0800
Subject: [PATCH] Add '存储/ceph/p版本安装/9-为ceph集群添加OSD.md'
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 存储/ceph/p版本安装/9-为ceph集群添加OSD.md | 51 ++++++++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 存储/ceph/p版本安装/9-为ceph集群添加OSD.md

diff --git a/存储/ceph/p版本安装/9-为ceph集群添加OSD.md b/存储/ceph/p版本安装/9-为ceph集群添加OSD.md
new file mode 100644
index 0000000..03e0b33
--- /dev/null
+++ b/存储/ceph/p版本安装/9-为ceph集群添加OSD.md
@@ -0,0 +1,51 @@
+> Note: Before adding an OSD, it is recommended to first wipe the disk back to a raw, unpartitioned state.
+```shell
+## https://rook.github.io/docs/rook/v1.10/Getting-Started/ceph-teardown/?h=sgdisk#zapping-devices
+DISK="/dev/sdX"
+
+## Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
+sgdisk --zap-all $DISK
+
+## Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present
+dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
+
+## SSDs may be better cleaned with blkdiscard instead of dd
+blkdiscard $DISK
+
+## Inform the OS of partition table changes
+partprobe $DISK
+```
+```shell
+## List which disks are usable on each Ceph node; check the `AVAILABLE` column
+# ceph orch device ls
+HOST    PATH          TYPE  DEVICE ID                                             SIZE  AVAILABLE  REFRESHED  REJECT REASONS
+ceph01  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000             107G  Yes        17m ago
+ceph01  /dev/sda      hdd   VMware_Virtual_SATA_Hard_Drive_00000000000000000001   107G  Yes        17m ago
+ceph02  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000             107G  Yes        18m ago
+ceph02  /dev/sda      hdd   VMware_Virtual_SATA_Hard_Drive_00000000000000000001   107G  Yes        18m ago
+ceph03  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000             107G  Yes        25m ago
+ceph03  /dev/sda      hdd   VMware_Virtual_SATA_Hard_Drive_00000000000000000001   107G  Yes        25m ago
+ceph04  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000             107G  Yes        17m ago
+ceph04  /dev/sda      hdd   VMware_Virtual_SATA_Hard_Drive_00000000000000000001   107G  Yes        17m ago
+ceph05  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000             107G  Yes        17m ago
+ceph05  /dev/sda      hdd   VMware_Virtual_SATA_Hard_Drive_00000000000000000001   107G  Yes        17m ago
+
+## Next, prepare the OSD disks
+## Wipe the specified disk back to a raw, unpartitioned disk
+# blkdiscard /dev/nvme0n2
+# cephadm shell ceph orch device zap ceph01 /dev/sda
+## Then wipe the disks on the remaining nodes in the same way
+...
+
+## Add the OSDs
+# ceph orch daemon add osd ceph01:/dev/nvme0n2
+# ceph orch daemon add osd ceph01:/dev/sda
+# ceph orch daemon add osd ceph02:/dev/nvme0n2
+# ceph orch daemon add osd ceph02:/dev/sda
+# ceph orch daemon add osd ceph03:/dev/nvme0n2
+# ceph orch daemon add osd ceph03:/dev/sda
+# ceph orch daemon add osd ceph04:/dev/nvme0n2
+# ceph orch daemon add osd ceph04:/dev/sda
+# ceph orch daemon add osd ceph05:/dev/nvme0n2
+# ceph orch daemon add osd ceph05:/dev/sda
+```
\ No newline at end of file
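
After the OSD daemons have been added, it is worth confirming that they came up and that the cluster is rebalancing onto them. The commands below are not part of the patch above; they are a minimal sanity check using standard Ceph CLI calls, assuming they are run on a node with admin keyring access (for example inside `cephadm shell`).
```shell
## Confirm the new OSD daemons were created and are running on each host
# ceph orch ps --daemon-type osd

## Check that all OSDs are up/in and placed under the expected hosts
# ceph osd tree

## Overall cluster status; expect HEALTH_OK once backfill/rebalance finishes
# ceph -s
```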