16 Operating Ceph Services #
You can operate Ceph services either using systemd or using DeepSea.
16.1 Operating Ceph Cluster Related Services Using systemd #
Use the systemctl command to operate all Ceph related services. The operation takes place on the node you are currently logged in to. You need root privileges to operate Ceph services.
16.1.1 Starting, Stopping, and Restarting Services Using Targets #
To simplify starting, stopping, and restarting all the services of a
particular type (for example all Ceph services, or all MONs, or all OSDs)
on a node, Ceph provides the following systemd unit files:
cephadm@adm > ls /usr/lib/systemd/system/ceph*.target
ceph.target
ceph-osd.target
ceph-mon.target
ceph-mgr.target
ceph-mds.target
ceph-radosgw.target
ceph-rbd-mirror.target
To start/stop/restart all Ceph services on the node, run:
root # systemctl start ceph.target
root # systemctl stop ceph.target
root # systemctl restart ceph.target
To start/stop/restart all OSDs on the node, run:
root # systemctl start ceph-osd.target
root # systemctl stop ceph-osd.target
root # systemctl restart ceph-osd.target
Commands for the other targets are analogous.
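For example, to restart only the Ceph Monitor daemons on the node, use the ceph-mon.target unit file listed above:
root # systemctl restart ceph-mon.target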
16.1.2 Starting, Stopping, and Restarting Individual Services #
You can operate individual services using the following parameterized
systemd unit files:
ceph-osd@.service
ceph-mon@.service
ceph-mds@.service
ceph-mgr@.service
ceph-radosgw@.service
ceph-rbd-mirror@.service
To use these commands, you first need to identify the name of the service you want to operate. See Section 16.1.3, “Identifying Individual Services” to learn more about identifying services.
To start, stop or restart the osd.1 service, run:
root # systemctl start ceph-osd@1.service
root # systemctl stop ceph-osd@1.service
root # systemctl restart ceph-osd@1.service
Commands for the other service types are analogous.
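For example, to restart the Ceph Monitor daemon on a node, use the ceph-mon@.service unit with the node's host name as the instance name:
root # systemctl restart ceph-mon@HOSTNAME.service
Replace HOSTNAME with the host name the daemon is running on.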
16.1.3 Identifying Individual Services #
You can find out the names/numbers of a particular type of service in
several ways. The following commands provide results for
ceph* services. You can run them on any node of the
Ceph cluster.
To list all (even inactive) services of type ceph*, run:
root # systemctl list-units --all --type=service ceph*
To list only the inactive services, run:
root # systemctl list-units --all --state=inactive --type=service ceph*
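Similarly, to list only the active services, change the state filter:
root # systemctl list-units --state=active --type=service ceph*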
You can also use salt to query services across multiple
nodes:
root@master # salt TARGET cmd.shell \
"systemctl list-units --all --type=service ceph* | sed -e '/^$/,$ d'"
Query storage nodes only:
root@master # salt -I 'roles:storage' cmd.shell \
'systemctl list-units --all --type=service ceph*'
16.1.4 Service Status #
You can query systemd for the status of services. For example:
root # systemctl status ceph-osd@1.service
root # systemctl status ceph-mon@HOSTNAME.service
Replace HOSTNAME with the host name the daemon is running on.
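For example, for a Ceph Monitor running on a node named mon1 (a host name used here only for illustration), the command is:
root # systemctl status ceph-mon@mon1.service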
If you do not know the exact name/number of the service, see Section 16.1.3, “Identifying Individual Services”.
16.2 Restarting Ceph Services Using DeepSea #
After applying updates to the cluster nodes, the affected Ceph related services need to be restarted. Normally, restarts are performed automatically by DeepSea. This section describes how to restart the services manually.
Tip: Watching the Restart
The process of restarting the cluster may take some time. You can watch the events on the Salt event bus by running:
root@master # salt-run state.event pretty=True
Another command to monitor active jobs is:
root@master # salt-run jobs.active
16.2.1 Restarting All Services #
Warning: Interruption of Services
If Ceph related services—specifically iSCSI or NFS Ganesha—are configured as single points of access with no High Availability setup, restarting them will result in their temporary outage as viewed from the client side.
Tip: Samba Not Managed by DeepSea
Because DeepSea and the Ceph Dashboard do not currently support Samba deployments, you need to manage Samba related services manually. For more details, see Chapter 29, Exporting Ceph Data via Samba.
To restart all services on the cluster, run the following command:
root@master # salt-run state.orch ceph.restart
For DeepSea prior to version 0.8.4, the Metadata Server, iSCSI Gateway, Object Gateway, and NFS Ganesha services restart in parallel.
For DeepSea 0.8.4 and newer, all roles you have configured restart in the following order: Ceph Monitor, Ceph Manager, Ceph OSD, Metadata Server, Object Gateway, iSCSI Gateway, NFS Ganesha. To keep downtime low and to find potential issues as early as possible, nodes are restarted sequentially. For example, only one Ceph Monitor node is restarted at a time.
The command waits for the cluster to recover if the cluster is in a degraded, unhealthy state.
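To follow the cluster's health state while the services are being restarted, you can additionally use the standard Ceph status commands, for example:
root@master # ceph health
root@master # ceph -s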
16.2.2 Restarting Specific Services #
To restart a specific service on the cluster, run:
root@master # salt-run state.orch ceph.restart.SERVICE_NAME
For example, to restart all Object Gateways, run:
root@master # salt-run state.orch ceph.restart.rgw
You can use the following targets:
root@master # salt-run state.orch ceph.restart.mon
root@master # salt-run state.orch ceph.restart.mgr
root@master # salt-run state.orch ceph.restart.osd
root@master # salt-run state.orch ceph.restart.mds
root@master # salt-run state.orch ceph.restart.rgw
root@master # salt-run state.orch ceph.restart.igw
root@master # salt-run state.orch ceph.restart.ganesha
The restart orchestration checks whether the installed binary is newer than the currently running one, or whether configuration changes exist for the daemon, and restarts the daemon only in those cases. If you run one of the above commands and nothing happens, neither of these conditions is met. See Section 16.1.2, “Starting, Stopping, and Restarting Individual Services” for more information.
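If you need to restart a daemon even though neither condition is met, restart it directly with systemctl on the node it runs on, as described in Section 16.1.2, “Starting, Stopping, and Restarting Individual Services”. For example, to restart the osd.1 service:
root # systemctl restart ceph-osd@1.service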
16.3 Shutdown and Start of the Whole Ceph Cluster #
There are occasions, for example a planned power outage, when you need to stop all Ceph related services in the cluster in the recommended order and then start them again.
Procedure 16.1: Shutting Down the Whole Ceph Cluster #
Shut down or disconnect any clients accessing the cluster.
To prevent CRUSH from automatically rebalancing the cluster, set the cluster to noout:
root@master # ceph osd set noout
Disable safety measures and run the ceph.shutdown runner:
root@master # salt-run disengage.safety
root@master # salt-run state.orch ceph.shutdown
Power off all cluster nodes:
root@master # salt -C 'G@deepsea:*' cmd.run "shutdown -h"
Procedure 16.2: Starting the Whole Ceph Cluster #
Power on the Admin Node.
Power on the Ceph Monitor nodes.
Power on the Ceph OSD nodes.
Unset the previously set noout flag:
root@master # ceph osd unset noout
Power on all configured gateways.
Power on or connect cluster clients.
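After all nodes, gateways, and clients are back online, you can verify that the cluster has returned to a healthy state, for example:
root@master # ceph -s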