# Initializing a minimal cluster with cephadm bootstrap

The cephadm bootstrap process creates a small Ceph cluster on a single node, consisting of one Ceph monitor and one Ceph mgr, plus monitoring components such as prometheus and node-exporter.
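
Before bootstrapping, it can be worth confirming by hand the prerequisites that the bootstrap pre-flight check looks for (a container runtime, lvm2, and time synchronization). A rough sketch, assuming docker and chronyd as on the host below:

```bash
# Container runtime and LVM tooling must be installed
which docker || which podman
which lvcreate
# Time synchronization must be active (chronyd in this example)
systemctl is-active chronyd
# cephadm can also run its own host suitability check
cephadm check-host
```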


Bootstrap, specifying the mon IP, the cluster network, and the initial dashboard user name and password:

```bash
# cephadm bootstrap --mon-ip 192.168.59.241  --cluster-network 10.168.59.0/24 --initial-dashboard-user admin --initial-dashboard-password demo2023
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 2e1228b0-0781-11ee-aa8a-000c2921faf1
Verifying IP 192.168.59.241 port 3300 ...
Verifying IP 192.168.59.241 port 6789 ...
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.59.0/24
Setting cluster_network to 10.168.59.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

            URL: https://ceph01:8443/
           User: admin
       Password: p5tuqo17we

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/2e1228b0-0781-11ee-aa8a-000c2921faf1/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

       sudo /usr/sbin/cephadm shell --fsid 2e1228b0-0781-11ee-aa8a-000c2921faf1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

       sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

       ceph telemetry on

For more information see:

       https://docs.ceph.com/docs/master/mgr/telemetry/
```
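
Once bootstrap finishes, the cluster state and the daemons it deployed can be checked through the cephadm shell mentioned in the output above, for example:

```bash
# Run the ceph CLI in a cephadm shell container (uses /etc/ceph/ceph.conf and the admin keyring)
cephadm shell -- ceph -s        # overall cluster status
cephadm shell -- ceph orch ps   # daemons cephadm has deployed so far
```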

The dashboard user name and password can also be specified at bootstrap time with `--initial-dashboard-user admin --initial-dashboard-password demo2023`, as in the command above.
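
If the dashboard password needs to be changed after bootstrap, the dashboard module can do it from the CLI. A rough sketch, with the password value and temporary file path as placeholders:

```bash
# Write the new password to a file, apply it to the dashboard admin user, then clean up
echo -n 'NewDashboardPass123' > /tmp/dashboard-pass
ceph dashboard ac-user-set-password admin -i /tmp/dashboard-pass
rm -f /tmp/dashboard-pass
```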

Bootstrap leaves the following files in /etc/ceph/:

```bash
# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  ceph.pub  rbdmap
```

- `ceph.client.admin.keyring` is the keyring with Ceph administrator privileges.
- `ceph.conf` is the minimal configuration file.
- `ceph.pub` is a public SSH key; once copied to the other nodes it allows passwordless login to them (see the sketch below).
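
For example, installing the key on a new node might look like this (the host name ceph02 is a placeholder):

```bash
# Install the cluster's public SSH key on the new node's root account
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
```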

When the cluster has five or more Ceph nodes, five of them are used as mon nodes by default; this can be seen in the `count:5` placement shown by `ceph orch ls`:

```bash
# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  7m ago     46m  count:1
crash                           1/1  7m ago     46m  *
grafana        ?:3000           1/1  7m ago     46m  count:1
mgr                             1/2  7m ago     46m  count:2
mon                             1/5  7m ago     46m  count:5
node-exporter  ?:9100           1/1  7m ago     46m  *
prometheus     ?:9095           1/1  7m ago     46m  count:1
```
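
If a different mon layout is wanted, the default placement can be adjusted through the orchestrator. A rough sketch (the host names ceph01/ceph02/ceph03 are placeholders):

```bash
# Ask the orchestrator for three mons instead of the default five
ceph orch apply mon 3
# Or pin the mons to specific hosts
ceph orch apply mon --placement="ceph01,ceph02,ceph03"
```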

After the mon is initialized the cluster is still in HEALTH_WARN: there are no OSDs yet, and there is only one MON and one MGR, so the next step is to add more Ceph nodes (see the sketch after the status output below).

```bash
# ceph -s
  cluster:
    id:     67ccccf2-07f6-11ee-a1c2-000c2921faf1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01 (age 9m)
    mgr: ceph01.sdqukl(active, since 7m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
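
A rough sketch of those next steps, using placeholder host names and IPs (ceph02/ceph03); each new host must already have the cluster's `ceph.pub` key installed as shown earlier:

```bash
# Register the additional hosts with the orchestrator
ceph orch host add ceph02 192.168.59.242
ceph orch host add ceph03 192.168.59.243
# Review the disks cephadm can see, then let it create OSDs on all eligible devices
ceph orch device ls
ceph orch apply osd --all-available-devices
```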