7 Customizing the Default Configuration
You can change the default cluster configuration generated in Stage 2 (refer
to DeepSea Stages Description). For example, you may need to
change network settings, or the software that is installed on the Admin Node by
default. You can perform the former by modifying the pillar updated after
Stage 2, while the latter is usually done by creating a custom
sls file and adding it to the pillar. Details are
described in the following sections.
7.1 Using Customized Configuration Files
This section lists several tasks that require adding/changing your own
sls files. Such a procedure is typically used when you
need to change the default deployment process.
Tip: Prefix Custom .sls Files
Your custom .sls files belong in the same subdirectory as DeepSea's .sls
files. To prevent your .sls files from being overwritten by ones newly added
from the DeepSea package, prefix their names with the
custom- string.
7.1.1 Disabling a Deployment Step
If you address a specific task outside of the DeepSea deployment process and therefore need to skip it, create a 'no-operation' file following this example:
Procedure 7.1: Disabling Time Synchronization
Create /srv/salt/ceph/time/disabled.sls with the following content and save it:

disable time setting:
  test.nop
Edit /srv/pillar/ceph/stack/global.yml, add the following line, and save it:

time_init: disabled
Verify by refreshing the pillar and running the step:
root@master # salt target saltutil.pillar_refresh
root@master # salt 'admin.ceph' state.apply ceph.time
admin.ceph:
  Name: disable time setting - Function: test.nop - Result: Clean

Summary for admin.ceph
------------
Succeeded: 1
Failed:    0
------------
Total states run: 1

Note: Unique ID
The task ID 'disable time setting' may be any message that is unique within an
sls file. Prevent ID collisions by specifying unique descriptions.
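For illustration, a minimal sketch of two no-operation states with distinct IDs (both IDs are made up for this example):

disable time setting:
  test.nop

disable time sync check:
  test.nop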
7.1.2 Replacing a Deployment Step
If you need to replace the default behavior of a specific step with a
custom one, create a custom sls file with replacement
content.
By default /srv/salt/ceph/pool/default.sls creates an
rbd image called 'demo'. In our example, we do not want this image to be
created, but we need two images: 'archive1' and 'archive2'.
Procedure 7.2: Replacing the demo rbd Image with Two Custom rbd Images
Create /srv/salt/ceph/pool/custom.sls with the following content and save it:

wait:
  module.run:
    - name: wait.out
    - kwargs:
        'status': "HEALTH_ERR"
    - fire_event: True
archive1:
  cmd.run:
    - name: "rbd -p rbd create archive1 --size=1024"
    - unless: "rbd -p rbd ls | grep -q archive1$"
    - fire_event: True
archive2:
  cmd.run:
    - name: "rbd -p rbd create archive2 --size=768"
    - unless: "rbd -p rbd ls | grep -q archive2$"
    - fire_event: True

The wait module will pause until the Ceph cluster no longer has a status of HEALTH_ERR. In fresh installations, a Ceph cluster may have this status until a sufficient number of OSDs become available and the creation of pools has completed.

The rbd command is not idempotent. If the same creation command is re-run after the image exists, the Salt state will fail. The unless statement prevents this.

To call the newly created custom file instead of the default, edit /srv/pillar/ceph/stack/ceph/cluster.yml, add the following line, and save it:

pool_init: custom
Verify by refreshing the pillar and running the step:
root@master # salt target saltutil.pillar_refresh
root@master # salt 'admin.ceph' state.apply ceph.pool
Note: Authorization
The creation of pools or images requires sufficient authorization. The
admin.ceph minion has an admin keyring.
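One illustrative way to confirm that the keyring is in place, assuming the default keyring path:

root@master # salt 'admin.ceph' cmd.run 'ls -l /etc/ceph/ceph.client.admin.keyring'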
Tip: Alternative Way
Another option is to change the variable in
/srv/pillar/ceph/stack/ceph/roles/master.yml instead.
Using this file will reduce the clutter of pillar data for other minions.
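A sketch of this alternative, assuming the same pool_init variable is valid in the role-specific file:

# /srv/pillar/ceph/stack/ceph/roles/master.yml
pool_init: custom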
7.1.3 Modifying a Deployment Step
Sometimes you may need a specific step to perform some additional tasks. We do not recommend modifying the related state file, as doing so may complicate a future upgrade. Instead, create a separate file to carry out the additional tasks, identical to what was described in Section 7.1.2, “Replacing a Deployment Step”.
Name the new sls file descriptively. For example, if you
need to create two rbd images in addition to the demo image, name the file
archive.sls.
Procedure 7.3: Creating Two Additional rbd Images
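This procedure assumes that /srv/salt/ceph/pool/archive.sls already exists; a sketch of its likely content, reusing the two image states from Procedure 7.2 (without the wait state or the default demo image):

archive1:
  cmd.run:
    - name: "rbd -p rbd create archive1 --size=1024"
    - unless: "rbd -p rbd ls | grep -q archive1$"
    - fire_event: True
archive2:
  cmd.run:
    - name: "rbd -p rbd create archive2 --size=768"
    - unless: "rbd -p rbd ls | grep -q archive2$"
    - fire_event: True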
Create /srv/salt/ceph/pool/custom.sls with the following content and save it:

include:
  - .archive
  - .default
Tip: Include Precedence
In this example, Salt will create the archive images and then create the demo image. The order does not matter in this example. To change the order, reverse the lines after the include: directive.

You can add the include line directly to archive.sls and all the images will be created as well. However, regardless of where the include line is placed, Salt processes the steps in the included file first. Although this behavior can be overridden with require and order statements, a separate file that includes the others guarantees the order and reduces the chances of confusion.

Edit /srv/pillar/ceph/stack/ceph/cluster.yml, add the following line, and save it:

pool_init: custom
Verify by refreshing the pillar and running the step:
root@master # salt target saltutil.pillar_refresh
root@master # salt 'admin.ceph' state.apply ceph.pool
7.1.4 Modifying a Deployment Stage
If you need to add a completely separate deployment step, create three new
files: an sls file that performs the command, an
orchestration file, and a custom file which aligns the new step with the
original deployment steps.
For example, if you need to run logrotate on all minions
as part of the preparation stage:
First create an sls file and include the
logrotate command.
Procedure 7.4: Running logrotate on all Salt minions
Create a directory such as /srv/salt/ceph/logrotate.

Create /srv/salt/ceph/logrotate/init.sls with the following content and save it:

rotate logs:
  cmd.run:
    - name: "/usr/sbin/logrotate /etc/logrotate.conf"

Verify that the command works on a minion:
root@master # salt 'admin.ceph' state.apply ceph.logrotate
Because the orchestration file needs to run before all other preparation steps, add it to Stage 0 (the prep stage):
Create /srv/salt/ceph/stage/prep/logrotate.sls with the following content and save it:

logrotate:
  salt.state:
    - tgt: '*'
    - sls: ceph.logrotate

Verify that the orchestration file works:
root@master # salt-run state.orch ceph.stage.prep.logrotate
The last file is the custom one, which combines the additional step with the original steps:
Create /srv/salt/ceph/stage/prep/custom.sls with the following content and save it:

include:
  - .logrotate
  - .master
  - .minion
Override the default behavior. Edit /srv/pillar/ceph/stack/global.yml, add the following line, and save the file:

stage_prep: custom
Verify that Stage 0 works:
root@master # salt-run state.orch ceph.stage.0
Note: Why global.yml?
The global.yml file is chosen over
cluster.yml because during the
prep stage, no minion belongs to the Ceph cluster yet
and therefore has no access to any settings in cluster.yml.
7.1.5 Updates and Reboots during Stage 0
During Stage 0 (refer to DeepSea Stages Description for more information on DeepSea stages), the Salt master and Salt minions may optionally reboot because newly updated packages, for example the kernel, require a system reboot.
The default behavior is to install available new updates and not reboot the nodes, even in the case of kernel updates.
You can change the default update/reboot behavior of DeepSea stage 0 by
adding/changing the stage_prep_master and
stage_prep_minion options in the
/srv/pillar/ceph/stack/global.yml file.
stage_prep_master sets the behavior of the Salt master, and
stage_prep_minion sets the behavior of all minions. All
available parameters are:
- default
Install updates without rebooting.
- default-update-reboot
Install updates and reboot after updating.
- default-no-update-reboot
Reboot without installing updates.
- default-no-update-no-reboot
Do not install updates or reboot.
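The two options can be combined independently. An illustrative pairing that updates and reboots the Salt master while leaving the minions completely untouched:

stage_prep_master: default-update-reboot
stage_prep_minion: default-no-update-no-reboot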
For example, to prevent the cluster nodes from installing updates and
rebooting, edit /srv/pillar/ceph/stack/global.yml and
add the following lines:
stage_prep_master: default-no-update-no-reboot
stage_prep_minion: default-no-update-no-reboot
Tip: Values and Corresponding Files
The values of stage_prep_master correspond to file names
located in /srv/salt/ceph/stage/0/master, while
values of stage_prep_minion correspond to files in
/srv/salt/ceph/stage/0/minion:
root@master # ls -l /srv/salt/ceph/stage/0/master
default-no-update-no-reboot.sls
default-no-update-reboot.sls
default-update-reboot.sls
[...]

root@master # ls -l /srv/salt/ceph/stage/0/minion
default-no-update-no-reboot.sls
default-no-update-reboot.sls
default-update-reboot.sls
[...]
7.2 Modifying Discovered Configuration
After you complete Stage 2, you may want to change the discovered configuration. To view the current settings, run:

root@master # salt target pillar.items

The output of the default configuration for a single minion is usually similar to the following:
----------
available_roles:
- admin
- mon
- storage
- mds
- igw
- rgw
- client-cephfs
- client-radosgw
- client-iscsi
- mds-nfs
- rgw-nfs
- master
cluster:
ceph
cluster_network:
172.16.22.0/24
fsid:
e08ec63c-8268-3f04-bcdb-614921e94342
master_minion:
admin.ceph
mon_host:
- 172.16.21.13
- 172.16.21.11
- 172.16.21.12
mon_initial_members:
- mon3
- mon1
- mon2
public_address:
172.16.21.11
public_network:
172.16.21.0/24
roles:
- admin
- mon
- mds
time_server:
admin.ceph
time_service:
ntp
The above-mentioned settings are distributed across several configuration
files. The directory structure with these files is defined in the
/srv/pillar/ceph/stack/stack.cfg file. The
following files usually describe your cluster:
- /srv/pillar/ceph/stack/global.yml
Affects all minions in the Salt cluster.
- /srv/pillar/ceph/stack/ceph/cluster.yml
Affects all minions in the Ceph cluster called ceph.
- /srv/pillar/ceph/stack/ceph/roles/role.yml
Affects all minions that are assigned the specific role in the ceph cluster.
- /srv/pillar/ceph/stack/ceph/minions/MINION_ID.yml
Affects the individual minion.
Note: Overwriting Directories with Default Values
There is a parallel directory tree that stores the default configuration
setup in /srv/pillar/ceph/stack/default. Do not change
values here, as they are overwritten.
The typical procedure for changing the collected configuration is the following:
Find the location of the configuration item you need to change. For example, if you need to change a cluster-related setting such as the cluster network, edit the file /srv/pillar/ceph/stack/ceph/cluster.yml.

Save the file.
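For example, the relevant line in cluster.yml might then look as follows (the subnet is illustrative, matching the sample output above):

cluster_network: 172.16.22.0/24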
Verify the changes by running:

root@master # salt target saltutil.pillar_refresh

and then:

root@master # salt target pillar.items
7.2.1 Enabling IPv6 for Ceph Cluster Deployment
Since IPv4 network addressing is prevalent, you need to enable IPv6 as a customization. DeepSea has no auto-discovery of IPv6 addressing.
To configure IPv6, set the public_network and
cluster_network variables in the
/srv/pillar/ceph/stack/global.yml file to valid IPv6
subnets. For example:
public_network: fd00:10::/64
cluster_network: fd00:11::/64
Then run DeepSea stage 2 and verify that the network information matches
the setting. Stage 3 will generate the ceph.conf with
the necessary flags.
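A sketch of that verification, reusing the stage invocation and pillar query shown elsewhere in this chapter:

root@master # salt-run state.orch ceph.stage.2
root@master # salt target pillar.items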
Important: No Support for Dual Stack
Ceph does not support dual stack—running Ceph simultaneously on
IPv4 and IPv6 is not possible. DeepSea validation will reject a mismatch
between public_network and
cluster_network or within either variable. The following
example will fail the validation.
public_network: "192.168.10.0/24 fd00:10::/64"
Tip: Avoid Using fe80::/10 link-local Addresses
Avoid using fe80::/10 link-local addresses. All network
interfaces have an assigned fe80 address and require an
interface qualifier for proper routing. Either assign IPv6 addresses
allocated to your site or consider using fd00::/8.
These addresses are part of the ULA range and are not globally routable.