Integration with external systems might require you to install additional packages on the base OS. Please refer to Section 3.1, “Software Installation”.
SUSE CaaS Platform offers SUSE Enterprise Storage as a storage solution for its containers. This chapter describes the steps required for successful integration.
Before you start with integrating SUSE Enterprise Storage, you need to ensure the following:
The SUSE CaaS Platform cluster must have ceph-common and xfsprogs installed on all nodes.
You can check this by running rpm -q ceph-common and rpm -q xfsprogs.
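If either package is missing, install it before proceeding. The following is a minimal sketch; it assumes the nodes use zypper and have access to the required repositories. Refer to Section 3.1, “Software Installation” for the supported installation procedure.
# Check that the required packages are present
rpm -q ceph-common xfsprogs
# Install them if they are missing (assumes registered SUSE repositories)
sudo zypper install ceph-common xfsprogs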
The SUSE CaaS Platform cluster must be able to communicate with all of the following SUSE Enterprise Storage nodes: the master, the monitor nodes, the OSD nodes, and the metadata server (if you need a shared file system). For more details refer to the SUSE Enterprise Storage documentation: https://documentation.suse.com/ses/6/.
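As a quick connectivity check, you can verify from a cluster node that the Ceph monitor port (6789 by default) is reachable. This is only an illustrative sketch; the addresses below are placeholders and must be replaced with the addresses of your SUSE Enterprise Storage monitor nodes, and it requires nc (netcat) to be installed.
# Replace with the addresses of your Ceph monitor nodes
for mon in 172.28.0.19 172.28.0.5 172.28.0.18; do
    nc -z -w 3 "$mon" 6789 && echo "$mon: monitor port reachable" || echo "$mon: NOT reachable"
done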
The steps will differ in small details depending on whether you are using RBD or CephFS.
RBD, also known as the Ceph Block Device or RADOS Block Device, facilitates the storage of block-based data in the Ceph distributed storage system. The procedure below describes steps to take when you need to use a RADOS Block Device in a Kubernetes Pod.
On a node with Ceph admin credentials, create an RBD pool and a volume image:
ceph osd pool create myPool 64 64
rbd pool init myPool
rbd create -s 2G myPool/image
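To confirm that the image was created as expected, you can inspect it. This is an optional check:
# List the images in the pool and show details of the new image
rbd ls myPool
rbd info myPool/image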
Create a Block Device User, and record the key:
ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
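The command above prints the key base64-encoded, ready to be pasted into the Kubernetes Secret below. If you want to double-check the created user and its capabilities afterwards, you can query the Ceph cluster (optional):
# Show the created user, its key and its capabilities
ceph auth get client.myPoolUser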
Create the Secret containing the client.myPoolUser key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user
  namespace: default
type: kubernetes.io/rbd
data:
  key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1Jreplace== 1
1. The block device user key from the Ceph cluster.
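Save the manifest to a file and create the Secret in the cluster. The file name below is only an example:
# Create the Secret and confirm that it exists
kubectl apply -f ceph-user-secret.yaml
kubectl get secret ceph-user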
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-inline
spec:
  containers:
  - name: ceph-rbd-inline
    image: opensuse/leap
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/ceph_rbd 1
      name: volume
  volumes:
  - name: volume
    rbd:
      monitors:
      - 10.244.2.136:6789 2
      - 10.244.3.123:6789
      - 10.244.4.7:6789
      pool: myPool 3
      image: image 4
      user: myPoolUser 5
      secretRef:
        name: ceph-user 6
      fsType: ext4
      readOnly: false
Once the pod is running, check the volume is mounted:
kubectl exec -it pod/ceph-rbd-inline -- df -k | grep rbd
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/rbd0        1998672  6144   1976144   1% /mnt/ceph_rbd
The following procedure describes how to use RBD in a Persistent Volume:
On a node with Ceph admin credentials, create an RBD pool and a volume image:
ceph osd pool create myPool 64 64
rbd pool init myPool
rbd create -s 2G myPool/image
Create a Block Device User, and record the key:
ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
Create the Secret containing the client.myPoolUser key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user
  namespace: default
type: kubernetes.io/rbd
data:
  key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1Jreplace== 1
1. The block device user key from the Ceph cluster.
Create the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 2Gi 1
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - 172.28.0.25:6789 2
    - 172.28.0.21:6789
    - 172.28.0.6:6789
    pool: myPool 3
    image: image 4
    user: myPoolUser 5
    secretRef:
      name: ceph-user 6
    fsType: ext4
    readOnly: false
1. The size of the volume image. Refer to Setting requests and limits for local ephemeral storage for the supported suffixes.
2. A list of Ceph monitor node IP addresses and ports. The default port is 6789.
3. The Ceph pool name.
4. The Ceph volume image name.
5. The Ceph pool user.
6. The name of the Kubernetes Secret that contains the Ceph pool user key.
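Apply the manifest and verify that the Persistent Volume has been registered. The file name below is only an example:
kubectl apply -f ceph-rbd-pv.yaml
kubectl get pv ceph-rbd-pv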
Create the Persistent Volume Claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-pv
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: ceph-rbd-pv
Deleting the Persistent Volume Claim does not remove the RBD volume in the Ceph cluster.
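Once created, the claim should bind to the Persistent Volume defined above. You can confirm this with an optional check:
# The claim should report STATUS "Bound" once it is attached to ceph-rbd-pv
kubectl get pvc ceph-rbd-pv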
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv
spec:
  containers:
  - name: ceph-rbd-pv
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/ceph_rbd 1
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: ceph-rbd-pv 2
Once the pod is running, check the volume is mounted:
kubectl exec -it pod/ceph-rbd-pv -- df -k | grep rbd
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/rbd0        1998672  6144   1976144   1% /mnt/ceph_rbd
The following procedure describes how to use RBD in a Storage Class:
On a node with Ceph admin credentials, create the RBD pool:
ceph osd pool create myPool 64 64
Create a Block Device User to use as pool admin and record the key:
ceph auth get-or-create-key client.myPoolAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=myPool' | tr -d '\n' | base64
Create a Block Device User to use as pool user and record the key:
ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
Create the Secret containing the block device pool admin key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin
type: kubernetes.io/rbd
data:
  key: QVFCa0ZJVmZBQUFBQUJBQUp2VzdLbnNIOU1yYll1R0p6T2Zreplace== 1
1. The block device pool admin key from the Ceph cluster.
Create the Secret containing the block device pool user key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user
type: kubernetes.io/rbd
data:
  key: QVFCa0ZJVmZBQUFBQUJBQUp2VzdLbnNIOU1yYll1R0p6T2Zreplace== 1
1. The block device pool user key from the Ceph cluster.
Create the Storage Class:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: ceph-rbd-sc
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.28.0.19:6789, 172.28.0.5:6789, 172.28.0.18:6789 1
  adminId: myPoolAdmin 2
  adminSecretName: ceph-admin 3
  adminSecretNamespace: default
  pool: myPool 4
  userId: myPoolUser 5
  userSecretName: ceph-user 6
Create the Persistent Volume Claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-sc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 1
1. The requested volume size. Refer to Setting requests and limits for local ephemeral storage for the supported suffixes.
Deleting the Persistent Volume Claim does not remove the RBD volume in the Ceph cluster.
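Because the claim is provisioned dynamically through the Storage Class, a matching Persistent Volume is created for you. You can verify the Storage Class, the claim, and the dynamically created volume with an optional check:
# The Storage Class should be listed as the default, and the claim should become Bound
kubectl get storageclass
kubectl get pvc ceph-rbd-sc
kubectl get pv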
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-sc
spec:
  containers:
  - name: ceph-rbd-sc
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/ceph_rbd 1
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: ceph-rbd-sc 2
Once the pod is running, check the volume is mounted:
kubectl exec -it pod/ceph-rbd-sc -- df -k | grep rbd
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/rbd0        1998672  6144   1976144   1% /mnt/ceph_rbd
The procedure below describes how to use CephFS in a Pod.
Create a Ceph user for CephFS access and record the key:
ceph auth get-or-create-key client.myCephFSUser mds 'allow *' mgr 'allow *' mon 'allow r' osd 'allow rw pool=cephfs_metadata,allow rwx pool=cephfs_data' | tr -d '\n' | base64
The cephfs_data pool should already exist from the SES deployment. If it does not, you can create and initialize it with:
ceph osd pool create cephfs_data 256 256
ceph osd pool create cephfs_metadata 64 64
ceph fs new cephfs cephfs_metadata cephfs_data
Multiple file systems within a Ceph cluster is still an experimental feature and is disabled by default. Setting up more than one file system requires this feature to be enabled. See Create a Ceph File System for instructions on creating additional file systems.
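Before continuing, you can confirm that the file system and its data and metadata pools exist on the Ceph cluster (optional check):
# List the CephFS file systems and their data/metadata pools
ceph fs ls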
Create the Secret containing the CephFS user key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user
data:
  key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1J4ZStreplace== 1
1. The CephFS user key from the Ceph cluster.
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-inline
spec:
  containers:
  - name: cephfs-inline
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/cephfs 1
      name: volume
  volumes:
  - name: volume
    cephfs:
      monitors:
      - 172.28.0.19:6789 2
      - 172.28.0.5:6789
      - 172.28.0.18:6789
      user: myCephFSUser 3
      secretRef:
        name: ceph-user 4
      readOnly: false
Once the pod is running, check the volume is mounted:
kubectl exec -it pod/cephfs-inline -- df -k | grep cephfs
Filesystem 1K-blocks Used Available Use% Mounted on
172.28.0.19:6789,172.28.0.5:6789,172.28.0.18:6789:/
79245312 0 79245312 0% /mnt/cephfs
The following procedure describes how to attach a CephFS static persistent volume to a pod:
Create a Ceph user for CephFS access and record the key:
ceph auth get-or-create-key client.myCephFSUser mds 'allow *' mgr 'allow *' mon 'allow r' osd 'allow rw pool=cephfs_metadata,allow rwx pool=cephfs_data' | tr -d '\n' | base64
The cephfs_data pool should already exist from the SES deployment. If it does not, you can create and initialize it with:
ceph osd pool create cephfs_data 256 256
ceph osd pool create cephfs_metadata 64 64
ceph fs new cephfs cephfs_metadata cephfs_data
Multiple file systems within a Ceph cluster is still an experimental feature and is disabled by default. Setting up more than one file system requires this feature to be enabled. See Create a Ceph File System for instructions on creating additional file systems.
Create the Secret that contains the CephFS user key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user
data:
  key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1J4ZStreplace== 1
1. The CephFS user key from the Ceph cluster.
Create the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 2Gi 1
  accessModes:
  - ReadWriteOnce
  cephfs:
    monitors:
    - 172.28.0.19:6789 2
    - 172.28.0.5:6789
    - 172.28.0.18:6789
    user: myCephFSUser 3
    secretRef:
      name: ceph-user 4
    readOnly: false
1. The desired volume size. Refer to Setting requests and limits for local ephemeral storage for the supported suffixes.
2. A list of Ceph monitor node IP addresses and ports. The default port is 6789.
3. The CephFS user name.
4. The name of the Kubernetes Secret that contains the CephFS user key.
Create the Persistent Volume Claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pv
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 1
1. The requested volume size.
Deleting the Persistent Volume Claim does not remove the CephFS volume in the Ceph cluster.
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pv
spec:
  containers:
  - name: cephfs-pv
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /mnt/cephfs 1
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: cephfs-pv 2
Once the pod is running, check that the CephFS volume is mounted:
kubectl exec -it pod/cephfs-pv -- df -k | grep cephfs
Filesystem 1K-blocks Used Available Use% Mounted on
172.28.0.19:6789,172.28.0.5:6789,172.28.0.18:6789:/
79245312 0 79245312 0% /mnt/cephfs
For integration with SUSE Cloud Application Platform, refer to: Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.