3 Operating Ceph Services #
You can operate Ceph services either using systemd or using DeepSea.
3.1 Operating Ceph Cluster Related Services using systemd #
Use the systemctl command to operate all Ceph related
services. The operation takes place on the node you are currently logged in
to. You need root privileges to operate Ceph services.
3.1.1 Starting, Stopping, and Restarting Services using Targets #
To simplify starting, stopping, and restarting all the services of a
particular type (for example all Ceph services, or all MONs, or all OSDs)
on a node, Ceph provides the following systemd unit files:
cephadm > ls /usr/lib/systemd/system/ceph*.target
ceph.target
ceph-osd.target
ceph-mon.target
ceph-mgr.target
ceph-mds.target
ceph-radosgw.target
ceph-rbd-mirror.target
To start/stop/restart all Ceph services on the node, run:
root # systemctl start ceph.target
root # systemctl stop ceph.target
root # systemctl restart ceph.target
To start/stop/restart all OSDs on the node, run:
root # systemctl start ceph-osd.target
root # systemctl stop ceph-osd.target
root # systemctl restart ceph-osd.target
Commands for the other targets are analogous.
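To check which of these targets are currently loaded on a node, you can query systemd directly. This is standard systemctl usage, not specific to Ceph:
root # systemctl list-units --type=target 'ceph*'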
3.1.2 Starting, Stopping, and Restarting Individual Services #
You can operate individual services using the following parameterized
systemd unit files:
ceph-osd@.service
ceph-mon@.service
ceph-mds@.service
ceph-mgr@.service
ceph-radosgw@.service
ceph-rbd-mirror@.service
To use these commands, you first need to identify the name of the service you want to operate. See Section 3.1.3, “Identifying Individual Services” to learn more about service identification.
To start/stop/restart the osd.1 service, run:
root # systemctl start ceph-osd@1.service
root # systemctl stop ceph-osd@1.service
root # systemctl restart ceph-osd@1.service
Commands for the other service types are analogous.
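The instance name after the @ sign depends on the daemon type: OSDs use their numeric ID, while Ceph Monitors, for example, are addressed by host name (compare Section 3.1.4, “Service Status”). A sketch, assuming a monitor running on a host named mon1 (a hypothetical name, substitute your own):
root # systemctl restart ceph-mon@mon1.service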
3.1.3 Identifying Individual Services #
You can find out the names/numbers of a particular type of service in
several ways. The following commands provide results for services
matching ceph* and lrbd*. You can run them on
any node of the Ceph cluster.
To list all (even inactive) services of type ceph* and
lrbd*, run:
root # systemctl list-units --all --type=service ceph* lrbd*
To list only the inactive services, run:
root # systemctl list-units --all --state=inactive --type=service ceph* lrbd*
You can also use salt to query services across multiple
nodes:
root@master # salt TARGET cmd.shell \
"systemctl list-units --all --type=service ceph* lrbd* | sed -e '/^$/,$ d'"Query storage nodes only:
root@master # salt -I 'roles:storage' cmd.shell \
'systemctl list-units --all --type=service ceph* lrbd*'
3.1.4 Service Status #
You can query systemd for the status of services. For example:
root # systemctl status ceph-osd@1.service
root # systemctl status ceph-mon@HOSTNAME.service
Replace HOSTNAME with the host name the daemon is running on.
If you do not know the exact name/number of the service, see Section 3.1.3, “Identifying Individual Services”.
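If a service is in a failed state, its recent log messages usually explain why. Ceph daemons log to the systemd journal, so you can inspect them with the standard journalctl tool. A minimal sketch for osd.1; adjust the unit name to the service in question:
root # journalctl -u ceph-osd@1.service
root # journalctl -u ceph-osd@1.service --since '1 hour ago'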
3.2 Restarting Ceph Services using DeepSea #
After applying updates to the cluster nodes, the affected Ceph related services need to be restarted. Normally, restarts are performed automatically by DeepSea. This section describes how to restart the services manually.
Tip: Watching the Restart
The process of restarting the cluster may take some time. You can watch the events on the Salt event bus by running:
root@master # salt-run state.event pretty=True
Another command to monitor active jobs is:
root@master # salt-run jobs.active
3.2.1 Restarting All Services #
Warning: Interruption of Services
If Ceph related services (specifically iSCSI or NFS Ganesha) are configured as single points of access with no High Availability setup, restarting them results in a temporary outage as viewed from the client side.
Tip: Samba not Managed by DeepSea
Because DeepSea and openATTIC do not currently support Samba deployments, you need to manage Samba related services manually. For more details, see Book “Deployment Guide”, Chapter 13 “Exporting Ceph Data via Samba”.
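For example, assuming the Samba services use the standard smb.service unit name of SUSE Linux Enterprise (verify the unit name on your deployment), you can operate them with plain systemctl:
root # systemctl status smb.service
root # systemctl restart smb.service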
To restart all services on the cluster, run the following command:
root@master # salt-run state.orch ceph.restart
All roles you have configured restart in the following order: Ceph Monitor, Ceph Manager, Ceph OSD, Metadata Server, Object Gateway, iSCSI Gateway, NFS Ganesha. To keep the downtime low and to find potential issues as early as possible, nodes are restarted sequentially. For example, only one Ceph Monitor node is restarted at a time.
If the cluster is in a degraded, unhealthy state, the command waits for it to recover before proceeding.
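Before and after such a cluster-wide restart, it is worth confirming that the cluster is healthy. The standard Ceph status commands serve this purpose; run them on any node with an admin keyring:
root@master # ceph -s
root@master # ceph health detail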
3.2.2 Restarting Specific Services #
To restart a specific service on the cluster, run:
root@master # salt-run state.orch ceph.restart.service_name
For example, to restart all Object Gateways, run:
root@master # salt-run state.orch ceph.restart.rgw
You can use the following targets:
root@master # salt-run state.orch ceph.restart.mon
root@master # salt-run state.orch ceph.restart.mgr
root@master # salt-run state.orch ceph.restart.osd
root@master # salt-run state.orch ceph.restart.mds
root@master # salt-run state.orch ceph.restart.rgw
root@master # salt-run state.orch ceph.restart.igw
root@master # salt-run state.orch ceph.restart.ganesha
3.3 Shutdown and Restart of the Whole Ceph Cluster #
Shutting down and restarting the cluster may be necessary in the case of a planned power outage. To stop all Ceph related services and restart the cluster without issues, follow the steps below.
Procedure 3.1: Shutting Down the Whole Ceph Cluster #
Shut down or disconnect any clients accessing the cluster.
To prevent CRUSH from automatically rebalancing the cluster, set the cluster to
noout:
root@master # ceph osd set noout
Disable safety measures:
root@master # salt-run disengage.safety
Stop all Ceph services in the following order:
Stop NFS Ganesha:
root@master # salt -C 'I@roles:ganesha and I@cluster:ceph' ceph.terminate.ganesha
Stop Object Gateways:
root@master # salt -C 'I@roles:rgw and I@cluster:ceph' ceph.terminate.rgw
Stop Metadata Servers:
root@master # salt -C 'I@roles:mds and I@cluster:ceph' ceph.terminate.mds
Stop iSCSI Gateways:
root@master # salt -C 'I@roles:igw and I@cluster:ceph' ceph.terminate.igw
Stop Ceph OSDs:
root@master # salt -C 'I@roles:storage and I@cluster:ceph' ceph.terminate.storage
Stop Ceph Managers:
root@master # salt -C 'I@roles:mgr and I@cluster:ceph' ceph.terminate.mgr
Stop Ceph Monitors:
root@master # salt -C 'I@roles:mon and I@cluster:ceph' ceph.terminate.mon
Power off all cluster nodes:
root@master # salt -C 'G@deepsea:*' cmd.run "shutdown -h"
Procedure 3.2: Starting the Whole Ceph Cluster #
Power on the Admin Node.
Power on the Ceph Monitor nodes.
Power on the Ceph OSD nodes.
Unset the previously set noout flag:
root@master # ceph osd unset noout
Power on all configured gateways.
Power on or connect cluster clients.
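After completing the procedure above, you may want to verify that the cluster returns to a healthy state and that the noout flag was really cleared. A minimal check using standard Ceph commands (the flags line of ceph osd dump should no longer list noout):
root@master # ceph -s
root@master # ceph osd dump | grep flags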