I. Prerequisites
Perform the following steps on all nodes.
1. Configure hostnames (run the matching command on each node)
hostnamectl set-hostname ceph01
hostnamectl set-hostname ceph02
hostnamectl set-hostname ceph03
2. Configure /etc/hosts
[root@ceph01 ceph-ansible]# cat /etc/hosts
192.168.65.175 ceph01
192.168.65.176 ceph02
192.168.65.177 ceph03
3. Configure passwordless SSH
Passwordless SSH must be configured from ceph01 to all server and client nodes (including ceph01 itself).
[root@ceph01 ceph-ansible]# ssh-keygen -t rsa
[root@ceph01 ceph-ansible]# for i in {1..3}; do ssh-copy-id ceph0$i; done
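To confirm the keys were distributed correctly, a loop like the following (an optional check, not part of the original steps) should print each hostname without prompting for a password:
for i in {1..3}; do ssh ceph0$i hostname; done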
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
5. Disable SELinux
[root@ceph01 ceph-ansible]# setenforce 0
[root@ceph01 ceph-ansible]# vi /etc/selinux/config
SELINUX=disabled
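The same change can also be made non-interactively, which is convenient when repeating it on every node; a small sketch using sed:
## persistently disable SELinux without opening an editor
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config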
6. Configure the yum repositories
[root@ceph01 ceph-ansible]# vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
Configure the Aliyun base repository
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Configure the EPEL repository
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
Rebuild the yum cache
yum clean all && yum makecache
II. Configure the NTP service
Perform the following steps on all nodes.
# yum -y install ntp ntpdate
# cd /etc && mv ntp.conf ntp.conf.bak
# vi /etc/ntp.conf
server ntp1.aliyun.com
Start the NTP service:
systemctl start ntpd
systemctl enable ntpd
systemctl status ntpd
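Optionally, verify that ntpd is actually talking to its upstream source; ntpq lists the configured peers and their sync state:
ntpq -p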
On all nodes except ceph01, force a one-time time sync against the server (ceph01):
ntpdate ceph01
On all nodes except ceph01, write the time to the hardware clock so it persists across reboots.
hwclock -w
Add the following entry to the root crontab (crontab -e) so the time is synced automatically every 10 minutes:
*/10 * * * * /usr/sbin/ntpdate ntp1.aliyun.com
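If you prefer not to open the crontab editor, the same entry can be appended non-interactively; a sketch:
(crontab -l 2>/dev/null; echo '*/10 * * * * /usr/sbin/ntpdate ntp1.aliyun.com') | crontab -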
III. Install Ansible
The following steps are performed only on the ceph01 node.
1. Install ceph-ansible
Install Ansible:
yum -y install ansible
Make sure git is available on the system; it can be installed as follows:
yum -y install git
Set git's http.sslVerify option to false to skip certificate verification:
git config --global http.sslVerify false
Download ceph-ansible. The stable-6.0 branch requires Ansible 2.10 or later, so use a 5.0 or 4.0 release instead.
ceph-ansible project: https://github.com/ceph/ceph-ansible
ceph-ansible documentation: https://docs.ceph.com/projects/ceph-ansible/en/latest/
Releases: https://github.com/ceph/ceph-ansible/releases
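One way to fetch it is the release tarball; a minimal sketch, assuming version v4.0.72 (the version referenced later in this guide), unpacked under /opt with a symlink so that both /opt/ceph-ansible and /opt/ceph-ansible-4.0.72 resolve:
cd /opt
wget https://github.com/ceph/ceph-ansible/archive/refs/tags/v4.0.72.tar.gz
tar -zxvf v4.0.72.tar.gz
ln -s /opt/ceph-ansible-4.0.72 /opt/ceph-ansible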
2. Install the ceph-ansible dependencies
Install python-pip:
yum install -y python-pip
Upgrade pip to the latest version:
pip install --upgrade pip
Check and install the required Python packages:
cd /opt/ceph-ansible/
pip install -r requirements.txt
3. Resolve environment dependencies
1) Install the dependency on all nodes:
yum install -y yum-plugin-priorities
2) Create the inventory of service nodes:
vi /opt/ceph-ansible/hosts
[mons]
192.168.65.175
192.168.65.176
192.168.65.177
[mgrs]
192.168.65.175
192.168.65.176
192.168.65.177
[osds]
192.168.65.175
192.168.65.176
192.168.65.177
#[mdss]
#192.168.65.175
#[rgws]
#192.168.65.175
#[clients]
#192.168.65.175
#[grafana-server]
#192.168.65.175
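Before running any playbook, it can be worth checking that Ansible reaches every node in the inventory; this connectivity test is an optional extra, not part of the original procedure:
cd /opt/ceph-ansible
ansible -i hosts all -m ping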
IV. Modify the ceph-ansible configuration files
1. Rename the configuration templates
When deploying with Ansible, the appropriate playbook must be passed to the ansible-playbook command, so the playbook template needs to be renamed and its contents adjusted to meet the requirements of the cluster deployment (the rename is shown after the variable files below).
Ceph cluster settings are configured through the Ansible variables provided by ceph-ansible.
All options and their defaults live in the group_vars directory, with one file per type of Ceph daemon.
cd /opt/ceph-ansible/group_vars/
cp mons.yml.sample mons.yml
cp mgrs.yml.sample mgrs.yml
cp mdss.yml.sample mdss.yml
cp rgws.yml.sample rgws.yml
cp osds.yml.sample osds.yml
cp clients.yml.sample clients.yml
cp all.yml.sample all.yml
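The playbook rename mentioned above is done the same way; site.yml.sample ships in the top-level ceph-ansible directory, and the ansible-playbook run later expects site.yml:
cd /opt/ceph-ansible
cp site.yml.sample site.yml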
2. Add ceph.conf configuration parameters
In all.yml, the ceph_conf_overrides variable can be used to override options already set in ceph.conf or to add new ones.
Open all.yml and edit the ceph_conf_overrides parameter:
vim all.yml
Add the following:
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
    osd_pool_default_size: 3
  mon:
    mon_allow_pool_create: true
3. Define the OSDs
The OSD devices are declared in osds.yml:
devices:
  - /dev/sdb
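Before running the playbook, it is worth confirming that /dev/sdb exists and is an empty disk on every OSD node; an optional check using the Ansible shell module:
ansible -i hosts osds -m shell -a 'lsblk /dev/sdb'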
4. Deploy the Ceph mon, mgr, and osd daemons
[root@ceph01 ceph-ansible-4.0.72]# ansible-playbook -i hosts site.yml
Check the status of the mon and mgr daemons.
## systemctl status ceph-mon@<hostname>
[root@ceph01 ~]# systemctl status ceph-mon@ceph01
● [email protected] - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since 二 2024-12-24 18:46:23 CST; 1s ago
Process: 48878 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 48878 (code=exited, status=1/FAILURE)
12月 24 18:46:23 ceph01 systemd[1]: Unit [email protected] entered failed state.
12月 24 18:46:23 ceph01 systemd[1]: [email protected] failed.
[root@ceph01 ~]# systemctl status ceph-mgr@ceph01
● [email protected] - Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: inactive (dead)
12月 24 18:46:57 ceph01 systemd[1]: [/usr/lib/systemd/system/[email protected]:15] Unknown lvalue 'LockPersonality' in section 'Service'
12月 24 18:46:57 ceph01 systemd[1]: [/usr/lib/systemd/system/[email protected]:18] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
12月 24 18:46:57 ceph01 systemd[1]: [/usr/lib/systemd/system/[email protected]:21] Unknown lvalue 'ProtectControlGroups' in section 'Service'
12月 24 18:46:57 ceph01 systemd[1]: [/usr/lib/systemd/system/[email protected]:23] Unknown lvalue 'ProtectKernelModules' in section 'Service'
12月 24 18:46:57 ceph01 systemd[1]: [/usr/lib/systemd/system/[email protected]:24] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
[root@ceph01 ~]# cd /opt/ceph-ansible-4.0.72/
Check the cluster status:
[root@ceph01 ceph-ansible-4.0.72]# ceph -s
  cluster:
    id:     71ee23cd-03ea-41a6-be45-0cebabd63d5c
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 20m)
    mgr: ceph02(active, since 20m), standbys: ceph01, ceph03
    osd: 3 osds: 3 up (since 17m), 3 in (since 17m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:
The HEALTH_WARN above is raised because the monitors still allow insecure global_id reclaim; the fix is to disable this insecure mode:
[root@ceph01 ceph-ansible-4.0.72]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@ceph01 ceph-ansible-4.0.72]# ceph -s
  cluster:
    id:     71ee23cd-03ea-41a6-be45-0cebabd63d5c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 23m)
    mgr: ceph02(active, since 23m), standbys: ceph01, ceph03
    osd: 3 osds: 3 up (since 21m), 3 in (since 21m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:
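As an optional final check (not part of the original steps), the OSD layout and any remaining health details can be inspected with:
ceph osd tree
ceph health detail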