How do I install Ceph using ceph-ansible on Red Hat?
-
There are a number of prerequisites to follow when installing Ceph on Red Hat. These are described in the link below; the workflow diagram is useful to follow.
After completing the prerequisites, the following command installs the ceph-ansible package and all its dependencies. Run all of this on your Ceph management node.
$ sudo yum install -y ceph-ansible
Add hosts to the /etc/ansible/hosts file
$ sudo vi /etc/ansible/hosts
Add entries for your monitor nodes and object storage daemon (OSD) nodes. These nodes should already have been created.
$ cat /etc/ansible/hosts
[mons]
mon-node1
mon-node2
mon-node3
[osds]
osd-node1
osd-node2
osd-node3
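If you prefer not to edit the system-wide file, the same inventory can be kept in a project-local file and passed to Ansible with -i; a minimal sketch:

```shell
# Sketch: a project-local copy of the inventory above, so ad-hoc
# commands can use it via -i instead of /etc/ansible/hosts.
cat > hosts <<'EOF'
[mons]
mon-node1
mon-node2
mon-node3

[osds]
osd-node1
osd-node2
osd-node3
EOF
# then, for example: ansible -i hosts all -m ping
```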
Since this is an Ansible deployment of Ceph, use Ansible to check that the nodes are responding. The prerequisites cover creating an Ansible user and setting up SSH key authentication.
$ ansible all -m ping
osd-node2 | success >> {
    "changed": false,
    "ping": "pong"
}
osd-node1 | success >> {
    "changed": false,
    "ping": "pong"
}
osd-node3 | success >> {
    "changed": false,
    "ping": "pong"
}
mon-node2 | success >> {
    "changed": false,
    "ping": "pong"
}
mon-node3 | success >> {
    "changed": false,
    "ping": "pong"
}
mon-node1 | success >> {
    "changed": false,
    "ping": "pong"
}
To prevent prompting for SSH key acceptance, create the .ansible.cfg file and add host_key_checking = False.
$ vi /home/ceph/.ansible.cfg
Add the lines below
[defaults]
host_key_checking = False
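The same edit can be done non-interactively with a heredoc; a sketch (note it overwrites any existing ~/.ansible.cfg, so back yours up first):

```shell
# Write the per-user Ansible config in one step.
# Warning: this overwrites an existing ~/.ansible.cfg.
cat > "$HOME/.ansible.cfg" <<'EOF'
[defaults]
host_key_checking = False
EOF
```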
Configure Ceph Global Settings
Create a directory under the home directory so Ansible can write the keys
$ cd ~
$ mkdir ceph-ansible-keys
Copy the sample config to a file called "all".
# cd /usr/share/ceph-ansible/group_vars/
# cp all.sample all
# vi all
Set the following values in the all file.
generate_fsid: false
fetch_directory: ~/ceph-ansible-keys
ceph_stable_rh_storage: true
ceph_stable_rh_storage_cdn_install: true
cephx: true
monitor_interface: eth0
journal_size: 4096
public_network: 10.50.20.0/24
cluster_network: 10.50.10.0/24
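For reference, here is the same edit applied non-interactively. This sketch writes to a local ./group_vars directory so it can be tried safely anywhere; on the admin node the real target is /usr/share/ceph-ansible/group_vars/all, and the interface and network values are examples to adjust for your environment:

```shell
# Illustrative only: writes to ./group_vars rather than the real
# /usr/share/ceph-ansible/group_vars location.
mkdir -p group_vars
cat > group_vars/all <<'EOF'
generate_fsid: false
fetch_directory: ~/ceph-ansible-keys
ceph_stable_rh_storage: true
ceph_stable_rh_storage_cdn_install: true
cephx: true
monitor_interface: eth0
journal_size: 4096
public_network: 10.50.20.0/24
cluster_network: 10.50.10.0/24
EOF
```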
If you want to enable Calamari, do the following:
# cd /usr/share/ceph-ansible/group_vars/
# cp mons.sample mons
# vi mons
Add/Modify the line
calamari: true
Configure Ceph OSD settings
Check the devices on one of the OSD nodes.
$ ssh osd-node1 lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  100G  0 disk
xvdc    202:32   0  100G  0 disk
xvdd    202:48   0  100G  0 disk
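In the output above, xvdb, xvdc and xvdd are whole disks with no partitions, which makes them OSD candidates. A quick way to filter for such disks on a node is sketched below (verify the result by eye before handing any device to Ceph):

```shell
# List whole disks that have no partitions or other children --
# likely OSD candidates. PKNAME is the parent device of each entry,
# so any disk that appears as a parent is dropped from the list.
lsblk -rno NAME,TYPE,PKNAME | awk '
  $2 == "disk" { disks[$1] = 1 }    # remember every whole disk
  $3 != ""     { delete disks[$3] } # drop disks that have children
  END { for (d in disks) print d }
'
```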
Create an OSD config file from the sample file and edit it as required; your device names may differ.
# cd /usr/share/ceph-ansible/group_vars/
# cp osds.sample osds
# vi osds
Add/Modify entries
devices:
  - /dev/xvdb
  - /dev/xvdc
  - /dev/xvdd
journal_collocation: true
Deploy the cluster. At the end of the playbook output, the PLAY RECAP will show whether everything ran successfully.
# cd /usr/share/ceph-ansible
# cp site.yml.sample site.yml
# ansible-playbook site.yml -u ceph
...
...
PLAY RECAP ********************************************************************
mon-node1 : ok=91  changed=18 unreachable=0 failed=0
mon-node2 : ok=91  changed=18 unreachable=0 failed=0
mon-node3 : ok=91  changed=17 unreachable=0 failed=0
osd-node1 : ok=164 changed=16 unreachable=0 failed=0
osd-node2 : ok=164 changed=16 unreachable=0 failed=0
osd-node3 : ok=164 changed=16 unreachable=0 failed=0
Now you have a cluster. You can check the status of your cluster from one of the monitor nodes.
$ ssh mon-node1
$ ceph health
HEALTH_OK
© Lightnetics 2024