C Example Procedure of Manual Ceph Installation
The following procedure shows the commands that you need to install a Ceph storage cluster manually.
Generate the key secrets for the Ceph services you plan to run. You can use the following command to generate them:
python -c "import os ; import struct ; import time; import base64 ; \
  key = os.urandom(16) ; header = struct.pack('<hiih',1,int(time.time()),0,len(key)) ; \
  print base64.b64encode(header + key)"
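The command above uses Python 2 syntax. On systems where only Python 3 is available, a minimal equivalent sketch (assuming a python3 binary) is:

cephadm > python3 -c "import os, struct, time, base64 ; \
  key = os.urandom(16) ; header = struct.pack('<hiih',1,int(time.time()),0,len(key)) ; \
  print(base64.b64encode(header + key).decode())"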
Add the keys to the related keyrings. First for client.admin, then for the monitors, and then for other related services, such as OSD, Object Gateway, or MDS:

cephadm > ceph-authtool -n client.admin \
  --create-keyring /etc/ceph/ceph.client.admin.keyring \
  --cap mds 'allow *' --cap mon 'allow *' --cap osd 'allow *'
ceph-authtool -n mon. \
  --create-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
  --set-uid=0 --cap mon 'allow *'
ceph-authtool -n client.bootstrap-osd \
  --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
  --cap mon 'allow profile bootstrap-osd'
ceph-authtool -n client.bootstrap-rgw \
  --create-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring \
  --cap mon 'allow profile bootstrap-rgw'
ceph-authtool -n client.bootstrap-mds \
  --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring \
  --cap mon 'allow profile bootstrap-mds'
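To verify the result, you can list the contents of a keyring, for example the admin keyring created above:

cephadm > ceph-authtool --list /etc/ceph/ceph.client.admin.keyring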
Create a monmap, a database of all the monitors in a cluster:

monmaptool --create --fsid eaac9695-4265-4ca8-ac2a-f3a479c559b1 \
  /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-02 192.168.43.60 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-03 192.168.43.96 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-04 192.168.43.80 /tmp/tmpuuhxm3/monmap
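Optionally, print the monmap to confirm that the FSID and all three monitors were recorded:

cephadm > monmaptool --print /tmp/tmpuuhxm3/monmap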
Create a new keyring and import the keys from the admin and monitor keyrings into it. Then use them to start the monitors:
cephadm > ceph-authtool --create-keyring /tmp/tmpuuhxm3/keyring \
  --import-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring
ceph-authtool /tmp/tmpuuhxm3/keyring \
  --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo -u ceph ceph-mon --mkfs -i osceph-03 \
  --monmap /tmp/tmpuuhxm3/monmap --keyring /tmp/tmpuuhxm3/keyring
systemctl restart ceph-mon@osceph-03
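If the monitor does not start, a quick check is whether ceph-mon --mkfs populated the monitor's data directory (the default location for this example host):

cephadm > ls -l /var/lib/ceph/mon/ceph-osceph-03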
Check the monitor's state in systemd:

root # systemctl show --property ActiveState ceph-mon@osceph-03
Check if Ceph is running and reports the monitor status:

cephadm > ceph --cluster=ceph \
  --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
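On the monitor node itself, the ceph daemon shorthand queries the same admin socket without spelling out its path:

cephadm > ceph daemon mon.osceph-03 mon_status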
Check the specific services' status using the existing keys:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin -f json-pretty status
[...]
ceph --connect-timeout 5 \
  --keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
  --name mon. -f json-pretty status
Import the keyrings from the existing Ceph services and check the status:

cephadm > ceph auth import -i /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-rgw/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-mds/ceph.keyring
ceph --cluster=ceph \
  --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin -f json-pretty status
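To confirm that the imported keys are now known to the cluster, list all authentication entities:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin auth list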
Prepare disks/partitions for OSDs, using the XFS file system:

cephadm > ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
  --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdb
ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
  --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdc
[...]
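Before activating, you can review how ceph-disk partitioned the devices:

cephadm > ceph-disk list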
Activate the partitions:

cephadm > ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
ceph-disk -v activate --mark-init systemd --mount /dev/vdc1
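After activation, a quick sanity check is to confirm that the new OSDs appear in the CRUSH tree and are up:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd tree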
For SUSE Enterprise Storage version 2.1 and earlier, create the default pools:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .users.swift 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .intent-log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .rgw.gc 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .users.uid 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .rgw.control 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .users 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .usage 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd pool create .rgw 16 16
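You can verify that the pools were created by listing them:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin osd lspools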
Create the Object Gateway instance key from the bootstrap key:

cephadm > ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-rgw \
  --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create \
  client.rgw.0dc1e13033d2467eace46270f0048b39 osd 'allow rwx' mon 'allow rw' \
  -o /var/lib/ceph/radosgw/ceph-rgw.rgw_name/keyring
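To verify the generated key and its capabilities (using the instance ID created above):

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin auth get client.rgw.0dc1e13033d2467eace46270f0048b39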
Enable and start Object Gateway:

root # systemctl enable ceph-radosgw@rgw.rgw_name
systemctl start ceph-radosgw@rgw.rgw_name
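To check that the gateway is running, query its systemd unit; optionally, probe the HTTP frontend, assuming the default civetweb port 7480 (adjust if your rgw frontends setting differs):

root # systemctl status ceph-radosgw@rgw.rgw_name
curl http://localhost:7480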
Optionally, create the MDS instance key from the bootstrap key, then enable and start it:

cephadm > ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-mds \
  --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create \
  mds.mds.rgw_name osd 'allow rwx' mds 'allow' mon 'allow profile mds' \
  -o /var/lib/ceph/mds/ceph-mds.rgw_name/keyring
systemctl enable ceph-mds@mds.rgw_name
systemctl start ceph-mds@mds.rgw_name
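An MDS started without a file system typically registers as a standby daemon, which you can confirm with:

cephadm > ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
  --name client.admin mds stat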