13 SUSE Enterprise Storage 6 on Top of SUSE CaaS Platform 4 Kubernetes Cluster #
Warning: Technology Preview
Running a containerized Ceph cluster on SUSE CaaS Platform is a technology preview. Do not deploy it on a production Kubernetes cluster; this configuration is not supported.
This chapter describes how to deploy containerized SUSE Enterprise Storage 6 on top of a SUSE CaaS Platform 4 Kubernetes cluster.
13.1 Considerations #
Before you start deploying, consider the following points:
To run Ceph in Kubernetes, SUSE Enterprise Storage 6 uses an upstream project called Rook (https://rook.io/).
Depending on the configuration, Rook may consume all unused disks on all nodes in a Kubernetes cluster.
The setup requires privileged containers.
13.2 Prerequisites #
The minimum requirements and prerequisites to deploy SUSE Enterprise Storage 6 on top of a SUSE CaaS Platform 4 Kubernetes cluster are as follows:
A running SUSE CaaS Platform 4 cluster. You need an account with a SUSE CaaS Platform subscription. You can activate a 60-day free evaluation at https://www.suse.com/products/caas-platform/download/MkpwEt3Ub98~/?campaign_name=Eval:_CaaSP_4.
At least three SUSE CaaS Platform worker nodes, with at least one additional disk attached to each worker node as storage for the OSD. We recommend four SUSE CaaS Platform worker nodes.
At least one OSD per worker node, with a minimum disk size of 5 GB.
Access to SUSE Enterprise Storage 6. You can get a trial subscription at https://www.suse.com/products/suse-enterprise-storage/download/.
Access to a workstation that has access to the SUSE CaaS Platform cluster via kubectl. We recommend using the SUSE CaaS Platform master node as the workstation.
Ensure that the SUSE-Enterprise-Storage-6-Pool and SUSE-Enterprise-Storage-6-Updates repositories are configured on the management node so that you can install the rook-k8s-yaml RPM package.
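For example, you can check that both repositories are visible on the management node (the exact repository aliases may differ depending on how they were registered):
root # zypper lr | grep SUSE-Enterprise-Storage-6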
13.3 Get Rook Manifests #
The Rook orchestrator uses configuration files in YAML format called manifests. The manifests you need are included in the rook-k8s-yaml RPM package. You can find this package in the SUSE Enterprise Storage 6 repository. Install it by running the following:
root # zypper install rook-k8s-yaml
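To see which manifest files the package installed, you can query the RPM database; this is a generic RPM query, not a required deployment step:
root # rpm -ql rook-k8s-yaml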
13.4 Installation #
Rook-Ceph includes two main components: the 'operator', which is run by Kubernetes and enables the creation of Ceph clusters, and the Ceph 'cluster' itself, which is created and partially managed by the operator.
13.4.1 Configuration #
13.4.1.1 Global Configuration #
The manifests used in this setup install all Rook and Ceph components in the 'rook-ceph' namespace. If you need to change it, adapt all references to the namespace in the Kubernetes manifests accordingly.
Depending on which features of Rook you intend to use, alter the 'Pod
Security Policy' configuration in common.yaml to
limit Rook's security requirements. Follow the comments in the manifest
file.
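For orientation, the following is a minimal, hypothetical PodSecurityPolicy sketch illustrating the kind of settings involved; the actual policy shipped in common.yaml is more extensive, so treat this only as an example:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-privileged-example   # hypothetical name, not the one in common.yaml
spec:
  privileged: true                # Rook requires privileged containers
  hostNetwork: false              # set to true only for hostNetwork deployments
  volumes:
    - '*'
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny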
13.4.1.2 Operator Configuration #
The manifest operator.yaml configures the Rook
operator. Normally, you do not need to change it. Find more information
following the comments in the manifest file.
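If you do need to adjust the operator, changes are typically made to the environment variables of the operator Deployment. A minimal sketch of the relevant fragment, raising the log level for troubleshooting (it assumes the ROOK_LOG_LEVEL variable found in upstream Rook manifests):
spec:
  template:
    spec:
      containers:
        - name: rook-ceph-operator
          env:
            - name: ROOK_LOG_LEVEL
              value: "DEBUG"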
13.4.1.3 Ceph Cluster Configuration #
The manifest cluster.yaml is responsible for
configuring the actual Ceph cluster that will run in Kubernetes. Find
a detailed description of all available options in the upstream Rook
documentation at
https://rook.io/docs/rook/v1.0/ceph-cluster-crd.html.
By default, Rook is configured to use all nodes that are not tainted
with node-role.kubernetes.io/master:NoSchedule and will
obey configured placement settings (see
https://rook.io/docs/rook/v1.0/ceph-cluster-crd.html#placement-configuration-settings).
The following example disables such behavior and only uses the nodes
explicitly listed in the nodes section:
storage:
useAllNodes: false
nodes:
- name: caasp4-worker-0
- name: caasp4-worker-1
- name: caasp4-worker-2
Note
By default, Rook is configured to use all free and empty disks on each node as Ceph storage.
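If you also want to restrict which disks Rook consumes, you can disable useAllDevices and list devices per node. A minimal sketch, where the device name sdb is a placeholder for a disk in your environment:
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: caasp4-worker-0
      devices:
        - name: sdb   # use only this disk on this node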
13.4.1.4 Documentation #
The Rook-Ceph upstream documentation at https://rook.github.io/docs/rook/v1.3/ceph-storage.html contains detailed information about more advanced deployments. Use it as a reference for understanding the basics of Rook-Ceph before attempting more advanced configurations.
Find more details about the SUSE CaaS Platform product at https://documentation.suse.com/suse-caasp/4.0/.
13.4.2 Create the Rook Operator #
Install the Rook-Ceph common components, CSI roles, and the Rook-Ceph operator by executing the following command on the SUSE CaaS Platform master node:
root # kubectl apply -f common.yaml -f operator.yaml
common.yaml will create the 'rook-ceph' namespace,
Ceph Custom Resource Definitions (CRDs) (see
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to make Kubernetes aware of Ceph Objects (for example, 'CephCluster'), and
the RBAC roles and Pod Security Policies (see
https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
which are necessary for allowing Rook to manage the cluster-specific
resources.
Tip: hostNetwork and hostPorts Usage
Allowing the usage of hostNetwork in the PodSecurityPolicy is required
when you set hostNetwork: true in the Cluster Resource Definition;
in that case, allowing the usage of hostPorts is also required.
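For orientation, enabling host networking in the Cluster Resource Definition looks roughly as follows in the Rook v1.0 schema; verify the exact structure against the upstream documentation for your version:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    hostNetwork: true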
Verify the installation by running kubectl get pods -n
rook-ceph on the SUSE CaaS Platform master node, for example:
root # kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-agent-57c9j 1/1 Running 0 22h
rook-ceph-agent-b9j4x 1/1 Running 0 22h
rook-ceph-operator-cf6fb96-lhbj7 1/1 Running 0 22h
rook-discover-mb8gv 1/1 Running 0 22h
rook-discover-tztz4 1/1 Running 0 22h
13.4.3 Create the Ceph Cluster #
After you modify cluster.yaml according to your needs,
you can create the Ceph cluster. Run the following command on the SUSE CaaS Platform
master node:
root # kubectl apply -f cluster.yaml
Watch the 'rook-ceph' namespace to see the Ceph cluster being created.
You will see as many Ceph Monitors as configured in the
cluster.yaml manifest (default is 3), one Ceph Manager, and
as many Ceph OSDs as you have free disks.
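One way to follow the progress is to watch the pods in that namespace; once the cluster resource exists, you can also query it directly (both are standard kubectl usage):
root # kubectl get pods --namespace rook-ceph --watch
root # kubectl get cephcluster --namespace rook-ceph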
Tip: Temporary OSD Pods
While bootstrapping the Ceph cluster, you will see some pods with the
name
rook-ceph-osd-prepare-NODE-NAME
run for a while and then terminate with the status 'Completed'. As their
name implies, these pods provision Ceph OSDs. They are not deleted, so
you can inspect their logs after they terminate. For example:
root # kubectl get pods --namespace rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-agent-57c9j 1/1 Running 0 22h
rook-ceph-agent-b9j4x 1/1 Running 0 22h
rook-ceph-mgr-a-6d48564b84-k7dft 1/1 Running 0 22h
rook-ceph-mon-a-cc44b479-5qvdb 1/1 Running 0 22h
rook-ceph-mon-b-6c6565ff48-gm9wz 1/1 Running 0 22h
rook-ceph-operator-cf6fb96-lhbj7 1/1 Running 0 22h
rook-ceph-osd-0-57bf997cbd-4wspg 1/1 Running 0 22h
rook-ceph-osd-1-54cf468bf8-z8jhp 1/1 Running 0 22h
rook-ceph-osd-prepare-caasp4-worker-0-f2tmw 0/2 Completed 0 9m35s
rook-ceph-osd-prepare-caasp4-worker-1-qsfhz 0/2 Completed 0 9m33s
rook-ceph-tools-76c7d559b6-64rkw 1/1 Running 0 22h
rook-discover-mb8gv 1/1 Running 0 22h
rook-discover-tztz4 1/1 Running 0 22h
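To inspect the output of one of the completed provisioning pods, you can fetch its logs; the pod name below is taken from the listing above and will differ in your cluster:
root # kubectl logs --namespace rook-ceph --all-containers rook-ceph-osd-prepare-caasp4-worker-0-f2tmw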
13.5 Using Rook as Storage for Kubernetes Workload #
Rook allows you to use three different types of storage:
- Object Storage
Object storage exposes an S3 API to the storage cluster for applications to put and get data. Refer to https://rook.io/docs/rook/v1.0/ceph-object.html for a detailed description.
- Shared File System
A shared file system can be mounted with read/write permission from multiple pods. This is useful for applications that are clustered using a shared file system. Refer to https://rook.io/docs/rook/v1.0/ceph-filesystem.html for a detailed description.
- Block Storage
Block storage allows you to mount storage to a single pod. Refer to https://rook.io/docs/rook/v1.0/ceph-block.html for a detailed description. A minimal example follows this list.
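The following is a minimal block storage sketch, modeled on the upstream Rook v1.0 examples: a replicated CephBlockPool plus a StorageClass using the flex-based provisioner. Names such as replicapool and rook-ceph-block are illustrative; verify the provisioner and parameters against the linked documentation for your Rook version:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3           # three data replicas across hosts
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block   # flex provisioner used by Rook v1.0
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: ext4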
13.6 Uninstalling Rook #
To uninstall Rook, follow these steps:
Delete any Kubernetes applications that are consuming Rook storage.
Delete all object, file, and/or block storage artifacts that you created by following Section 13.5, “Using Rook as Storage for Kubernetes Workload”.
Delete the Ceph cluster, operator, and related resources:
root # kubectl delete -f cluster.yaml
root # kubectl delete -f operator.yaml
root # kubectl delete -f common.yaml
Delete the data on hosts:
root # rm -rf /var/lib/rook
If necessary, wipe the disks that were used by Rook. Refer to https://rook.github.io/docs/rook/v1.3/ceph-teardown.html for more details.
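As one illustration of wiping a disk, the upstream teardown documentation uses sgdisk to zap the partition table; the device path below is a placeholder, so double-check that you are targeting a disk that was actually used by Rook:
root # sgdisk --zap-all /dev/sdX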