21 Ceph Manager Modules #
The architecture of the Ceph Manager (refer to Book “Deployment Guide”, Chapter 1 “SUSE Enterprise Storage 6 and Ceph”, Section 1.2.3 “Ceph Nodes and Daemons” for a brief introduction) allows extending its functionality via modules, such as 'dashboard' (see Part II, “Ceph Dashboard”), 'prometheus' (see Chapter 18, Monitoring and Alerting), or 'balancer'.
To list all available modules, run:
cephadm@adm > ceph mgr module ls
{
"enabled_modules": [
"restful",
"status"
],
"disabled_modules": [
"dashboard"
]
}
To enable or disable a specific module, run:
cephadm@adm > ceph mgr module enable MODULE-NAME
For example:
cephadm@adm > ceph mgr module disable dashboard
To list the services that the enabled modules provide, run:
cephadm@adm > ceph mgr services
{
"dashboard": "http://myserver.com:7789/",
"restful": "https://myserver.com:8789/"
}
21.1 Balancer #
The balancer module optimizes the placement group (PG) distribution across OSDs for a more balanced deployment. Although the module is enabled by default, balancing itself remains inactive until you turn it on. It supports the following two modes: 'crush-compat' and 'upmap'.
Tip: Current Balancer Configuration
To view the current balancer configuration, run:
cephadm@adm > ceph balancer status
Important: Supported Mode
We currently only support the 'crush-compat' mode because the 'upmap' mode requires an OSD feature that prevents any pre-Luminous OSDs from connecting to the cluster.
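The output of ceph balancer status is similar to the following. The exact set of fields and their values vary between Ceph releases; this example is for orientation only:
{
"active": false,
"mode": "crush-compat",
"plans": []
}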
21.1.1 The 'crush-compat' Mode #
In 'crush-compat' mode, the balancer adjusts the OSDs' reweight-sets to achieve improved distribution of the data. It moves PGs between OSDs, temporarily causing a HEALTH_WARN cluster state resulting from misplaced PGs.
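While PGs are being moved, ceph health reports the misplaced objects, for example as follows. The numbers are purely illustrative and the exact message wording differs between Ceph releases:
cephadm@adm > ceph health
HEALTH_WARN 7625/55236 objects misplaced (13.805%)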
Tip: Mode Activation
Although 'crush-compat' is the default mode, we recommend activating it explicitly:
cephadm@adm > ceph balancer mode crush-compat
21.1.2 Planning and Executing Data Balancing #
Using the balancer module, you can create a plan for data balancing. You can then execute the plan manually, or let the balancer balance PGs continuously.
The decision whether to run the balancer in manual or automatic mode depends on several factors, such as the current data imbalance, cluster size, PG count, and I/O activity. We recommend creating an initial plan and executing it at a time of low I/O load in the cluster, because the initial imbalance will probably be considerable and it is good practice to keep the impact on clients low. After an initial manual run, consider activating automatic mode and monitoring the rebalance traffic under normal I/O load. The improvements in PG distribution need to be weighed against the rebalance traffic caused by the balancer.
Tip: Movable Fraction of Placement Groups (PGs)
During the process of balancing, the balancer module throttles PG movements so that only a configurable fraction of PGs is moved at a time. The default is 5%. To adjust the fraction, for example to 9%, run the following command:
cephadm@adm > ceph config set mgr target_max_misplaced_ratio .09
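If you want to double-check the value currently in effect, you can read it back. This is an optional verification step using the generic ceph config facility:
cephadm@adm > ceph config get mgr target_max_misplaced_ratio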
To create and execute a balancing plan, follow these steps:
1. Check the current cluster score:
cephadm@adm > ceph balancer eval
2. Create a plan. For example, 'great_plan':
cephadm@adm > ceph balancer optimize great_plan
3. See what changes the 'great_plan' will entail:
cephadm@adm > ceph balancer show great_plan
4. Check the potential cluster score if you decide to apply the 'great_plan':
cephadm@adm > ceph balancer eval great_plan
5. Execute the 'great_plan' once:
cephadm@adm > ceph balancer execute great_plan
6. Observe the cluster balancing with the ceph -s command. If you are satisfied with the result, activate automatic balancing:
cephadm@adm > ceph balancer on
If you later decide to deactivate automatic balancing, run:
cephadm@adm > ceph balancer off
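During and after balancing, the data movement is visible in the pgs section of the ceph -s output, similar to the following abbreviated and purely illustrative example:
cephadm@adm > ceph -s
[...]
data:
pgs: 2500/360000 objects misplaced (0.694%)
998 active+clean
26 active+remapped+backfilling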
Tip: Automatic Balancing without Initial Plan
You can activate automatic balancing without executing an initial plan. In such a case, expect a potentially long-running rebalancing of placement groups.
21.2 Telemetry Module #
The telemetry module sends anonymous data about the cluster in which it is running back to the Ceph project.
This opt-in component reports counters and statistics on how the cluster has been deployed, the Ceph version, the distribution of the hosts, and other parameters that help the project gain a better understanding of the way Ceph is used. It does not contain any sensitive data such as pool names, object names, object contents, or host names.
The purpose of the telemetry module is to provide an automated feedback loop for the developers, helping to quantify adoption rates and track usage, and to point out things that need to be better explained or validated during configuration in order to prevent undesirable outcomes.
Note
The telemetry module requires the Ceph Manager nodes to have the ability to push data over HTTPS to the upstream servers. Ensure your corporate firewalls permit this action.
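As a basic check, you can test outbound HTTPS connectivity from a Ceph Manager node. The host name below assumes the default upstream telemetry endpoint and is only an illustration; adapt the test if your environment routes traffic through a proxy:
cephadm@adm > curl -I https://telemetry.ceph.com/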
To enable the telemetry module:
cephadm@adm > ceph mgr module enable telemetry
Note
This command only enables you to view your data locally; it does not share your data with the Ceph community.
To allow the telemetry module to start sharing data:
cephadm@adm > ceph telemetry on
To disable telemetry data sharing:
cephadm@adm > ceph telemetry off
To generate a JSON report that can be printed:
cephadm@adm > ceph telemetry show
To add a contact and description to the report:
cephadm@adm > ceph config set mgr mgr/telemetry/contact 'John Doe john.doe@example.com'
cephadm@adm > ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
The module compiles and sends a new report every 24 hours by default. To adjust this interval:
cephadm@adm > ceph config set mgr mgr/telemetry/interval HOURS
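For example, to compile and send a report every three days, set the interval to 72 hours (the value is only an illustration):
cephadm@adm > ceph config set mgr mgr/telemetry/interval 72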