Setting Up a Ceph Quincy Release (17) Cluster

by Vittorio_Lee 2023. 9. 20.

Installation environment

Rocky Linux release 8.8 (Green Obsidian)
Ceph Quincy 17.x

Preliminary setup

# vi /etc/hosts

10.101.0.32 ceph-1
10.101.0.8 ceph-2
10.101.0.28 ceph-3

Register these entries on every server, then set each node's hostname:
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3
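
With the /etc/hosts entries and hostnames in place, it is worth confirming that every node can reach the others by name. A minimal check using the names above (the output should show the addresses registered in /etc/hosts):

# getent hosts ceph-2
# ping -c 1 ceph-3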


Time synchronization must be set up in advance!

# yum install chrony
# systemctl enable chronyd
# systemctl start chronyd
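
Before bootstrapping, it is a good idea to confirm that chrony is actually synchronized. A quick sanity check:

# chronyc tracking
# chronyc sources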

Next, Python 3 and Podman need to be installed on all servers (the Ceph services run in containers). Docker can be used instead of Podman.

# dnf install python39 -y
# python3 --version
Python 3.9.7

Install Podman:

# dnf install podman -y
# podman -v

podman version 4.0.2

Install the cephadm tool, which will be used to create and configure the cluster.

# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
# chmod +x cephadm
# ./cephadm add-repo --release quincy
Writing repo to /etc/yum.repos.d/ceph.repo...
Enabling EPEL...
Completed adding repo.

# ./cephadm install
Installing packages ['cephadm']...
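
To confirm that the tool is now installed in the system path, a quick check (the exact version string depends on the build; on this setup the package lands in /usr/sbin):

# which cephadm
/usr/sbin/cephadm
# cephadm version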

 

cephadm bootstrap is used to create the Ceph cluster. This command starts the Ceph services and the first monitor on the specified node, creates the cluster, and generates the keys, configuration files, and so on.

In a production environment it is recommended to use a separate network for replication traffic between OSDs, so the corresponding network interface must be configured on the hosts first.
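
A rough sketch of what preparing a dedicated interface with NetworkManager could look like; the connection name eth1 and the address are placeholders and must be adapted to the actual environment (in the run below no separate interface was configured, which is why bootstrap later warns that 10.101.0.0/24 is not configured locally):

# ip -4 addr show
# nmcli connection modify eth1 ipv4.method manual ipv4.addresses 10.101.0.132/24
# nmcli connection up eth1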

# cephadm bootstrap --mon-ip 10.101.0.32 --cluster-network 10.101.0.0/24
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.4.1 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 84fb8d9e-576b-11ee-96a6-fa163e37f532
Verifying IP 10.101.0.32 port 3300 ...
Verifying IP 10.101.0.32 port 6789 ...
Mon IP `10.101.0.32` is in CIDR network `10.101.0.0/21`
Mon IP `10.101.0.32` is in CIDR network `10.101.0.0/21`
The cluster CIDR network 10.101.0.0/24 is not configured locally.
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Enabling firewalld service ceph-mon in current zone...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 10.101.0.0/21
Setting cluster_network to 10.101.0.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
firewalld ready
Enabling firewalld service ceph in current zone...
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

     URL: https://ceph-1:8443/
    User: admin
Password: tbf25ru27t

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/84fb8d9e-576b-11ee-96a6-fa163e37f532/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

sudo /usr/sbin/cephadm shell --fsid 84fb8d9e-576b-11ee-96a6-fa163e37f532 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.

 

# podman ps
CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED             STATUS             PORTS       NAMES
19e5fbaef671  quay.io/ceph/ceph:v17                                                                      -n mon.ceph-1 -f ...  4 minutes ago       Up 4 minutes                   ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-mon-ceph-1
73fe11bcefa0  quay.io/ceph/ceph:v17                                                                      -n mgr.ceph-1.jgq...  4 minutes ago       Up 4 minutes                   ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-mgr-ceph-1-jgqwhj
dffef93b4fe4  quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232  -n client.ceph-ex...  2 minutes ago       Up 2 minutes                   ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-ceph-exporter-ceph-1
d031ae6982bc  quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232  -n client.crash.c...  2 minutes ago       Up 2 minutes                   ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-crash-ceph-1
34e192555938  quay.io/prometheus/node-exporter:v1.3.1                                                    --no-collector.ti...  About a minute ago  Up About a minute              ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-node-exporter-ceph-1
2b8ffca0f2df  quay.io/prometheus/prometheus:v2.33.4                                                      --config.file=/et...  About a minute ago  Up About a minute              ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-prometheus-ceph-1
f93e68e3f3b3  quay.io/prometheus/alertmanager:v0.23.0                                                    --cluster.listen-...  42 seconds ago      Up 43 seconds                  ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-alertmanager-ceph-1
4c9d10b997a8  quay.io/ceph/ceph-grafana:8.3.5                                                            /bin/bash             38 seconds ago      Up 38 seconds                  ceph-ed19efc2-5759-11ee-ae2a-fa163e37f532-grafana-ceph-1


Check the cluster status:

# cephadm install ceph-common
# ceph -s
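
A couple of optional follow-up checks once the CLI is available: the overall health and the daemons managed by the orchestrator.

# ceph health
# ceph orch ps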


Generate an SSH key

# ssh-keygen -t rsa    (press Enter three times to accept the defaults)
- path where the key is stored (default: $HOME/.ssh/id_rsa)
- passphrase (an optional extra password, empty by default)
- passphrase confirmation
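
If the prompts are not needed at all, the same key can be generated non-interactively, for example with the default path and an empty passphrase:

# ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""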


In this example the first node is used for administrative tasks, so the cluster SSH public key generated in /etc/ceph (ceph.pub) must be copied to the remaining nodes so that cephadm can manage them.

# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3

# ssh-copy-id -f -i /root/.ssh/id_rsa.pub root@ceph-1
# ssh-copy-id -f -i /root/.ssh/id_rsa.pub root@ceph-2
# ssh-copy-id -f -i /root/.ssh/id_rsa.pub root@ceph-3
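
Before adding the hosts, it is worth verifying that passwordless SSH now works from ceph-1; each command should simply print the remote hostname:

# ssh root@ceph-2 hostname
# ssh root@ceph-3 hostname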




Add the hosts to the Ceph cluster:

# ceph orch host add ceph-2
Added host 'ceph-2' with addr '10.101.0.8'
# ceph orch host add ceph-3
Added host 'ceph-3' with addr '10.101.0.28'

# ceph orch host label add ceph-2 _admin
# ceph orch host label add ceph-3 _admin

# scp /etc/ceph/ceph.conf root@ceph-2:/etc/ceph/
ceph.conf                                                    100%  263   175.6KB/s   00:00    
# scp /etc/ceph/ceph.conf root@ceph-3:/etc/ceph/
ceph.conf                                                    100%  263   248.1KB/s   00:00   

# scp /etc/ceph/ceph.client.admin.keyring root@ceph-2:/etc/ceph/
ceph.client.admin.keyring                                    100%  151   123.0KB/s   00:00    
# scp /etc/ceph/ceph.client.admin.keyring root@ceph-3:/etc/ceph/
ceph.client.admin.keyring                                    100%  151    85.2KB/s   00:00   

 

# ceph orch host ls
HOST    ADDR         LABELS  STATUS  
ceph-1  10.101.0.32  _admin          
ceph-2  10.101.0.8   _admin          
ceph-3  10.101.0.28  _admin          
3 hosts in cluster
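
Before creating the OSDs, the orchestrator can list which block devices it considers available on each host (here /dev/vdb is expected to show up as available):

# ceph orch device ls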



# ceph orch daemon add osd ceph-1:/dev/vdb
Created osd(s) 0 on host 'ceph-1'
# ceph orch daemon add osd ceph-2:/dev/vdb
Created osd(s) 1 on host 'ceph-2'
# ceph orch daemon add osd ceph-3:/dev/vdb
Created osd(s) 2 on host 'ceph-3'

 



# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.05846  root default                              
-3         0.01949      host ceph-1                           
 0    hdd  0.01949          osd.0        up   1.00000  1.00000
-5         0.01949      host ceph-2                           
 1    hdd  0.01949          osd.1        up   1.00000  1.00000
-7         0.01949      host ceph-3                           
 2    hdd  0.01949          osd.2        up   1.00000  1.00000

With that, the cluster setup is complete!
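
As a final sanity check, the cluster should now report three OSDs up and in, and per-OSD usage can be inspected as well:

# ceph -s
# ceph osd df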

