This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE CaaS Platform 4.5.2

Architecture Description

This guide describes the architecture of SUSE CaaS Platform 4.5.2.

Publication Date: 2019-05-06

Warning

This document is a work in progress.

The content in this document is subject to change without notice.

Note

This guide assumes a configured SUSE Linux Enterprise Server 15 SP2 environment.

Copyright © 2006 — 2020 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™, etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.

1 Product Description

SUSE CaaS Platform is a Cloud Native Computing Foundation (CNCF) certified Kubernetes distribution on top of SUSE Linux Enterprise.

SUSE CaaS Platform automates the orchestration and management of containerized applications and services with powerful Kubernetes capabilities, including:

  • Workload scheduling optimizes hardware utilization while taking the container requirements into account.

  • Service proxies provide single IP addresses for services and distribute the load between containers.

  • Application scaling up and down accommodates changing loads.

  • Non-disruptive rollout/rollback of new applications and updates enables frequent changes without downtime.

  • Health monitoring and management supports application self-healing and ensures application availability.

In addition, SUSE CaaS Platform simplifies the platform operator’s experience, with everything you need to get up and running quickly, and to manage the environment effectively in production. It provides:

  • A complete container execution environment, container runtime (CRI-O), and container image registries (registry.suse.com).

  • Application ecosystem support with SUSE Linux Enterprise container base images, and access to tools and services offered by SUSE Ready for CaaS Platform partners and the Kubernetes community.

  • Enhanced datacenter integration features that enable you to plug Kubernetes into new or existing infrastructure, systems, and processes.

  • End-to-end security, implemented holistically across the full stack, including network policies via Cilium, PodSecurityPolicies, and Kubernetes RBAC.

  • Advanced platform management that simplifies platform installation, configuration, re-configuration, monitoring, maintenance, updates, and recovery.

  • Enterprise hardening including comprehensive interoperability testing, support for thousands of platforms, and world-class platform maintenance and technical support.

You can deploy SUSE CaaS Platform onto physical servers or use it on virtual machines. After deployment, it is immediately ready to run and provides a highly-scalable cluster.

SUSE CaaS Platform inherits the benefits of SUSE Linux Enterprise and uses tools and technologies well known to system administrators, such as cloud-init and AutoYaST.

For more information, including a list of the various components which make up SUSE CaaS Platform, please refer to the Release Notes at https://www.suse.com/releasenotes/.

Figure: SUSE CaaSP Components

1.1 Product Release Cycle

Product releases are aligned with SUSE Linux Enterprise and Kubernetes releases. Major version jumps are determined by the SUSE Linux Enterprise release cycle, minor version jumps by the quarterly Kubernetes releases. Individual maintenance updates receive patchlevel numbering.

Versioning scheme: x.y.z

  • x - Base OS version

  • y - Kubernetes version

  • z - Patchlevel (including new features)

Version  Base OS                              Kubernetes version  Patchlevel
4.0.0    SUSE Linux Enterprise Server 15 SP2  1.15                0
4.0.1    SUSE Linux Enterprise Server 15 SP2  1.15                1
…
4.1.0    SUSE Linux Enterprise Server 15 SP2  1.16                0
…
5.0.0    SUSE Linux Enterprise 15 SP2         tbd                 0

2 Supported Platforms

Note

SUSE CaaS Platform currently only runs on x86_64 architectures.

The following platforms are currently supported:

  • SUSE OpenStack Cloud 8

  • VMware ESXi 6.7

  • KVM

  • Bare Metal x86_64

  • Amazon Web Services (technological preview)

3 The SUSE CaaS Platform stack

All long-lived components run in containers, with the exception of:

  • Kubelet

  • CRI-O

All Kubernetes components critical to the infrastructure are created inside the kube-system namespace. As a user, you can create as many namespaces as necessary for your operations and enforce different permissions using RBAC rules.

kubeadm drives the deployment of new machines; it runs on demand, uncontainerized, during the bootstrap and join procedures.

3.1 Base Operating System

  • SLE 15 SP2

    • Kernel: kernel-default 4.12.14 or greater

    • Filesystem: XFS or BTRFS

3.1.1 Software Management

SUSE CaaS Platform is distributed as a dedicated repository. All required packages to deploy a node to the cluster are installable via a pattern. This pattern will automatically be installed by skuba when bootstrapping or joining a node.

  • Extension to base OS image

  • Software distribution channels

  • Packages

  • Container ecosystem

    • Helm

    • SUSE container registry

3.1.1.1 Upgrades of OS components (automated)

By default, SUSE CaaS Platform clusters automatically apply all patches that are marked as non-interactive. These patches are considered safe to apply since they should not cause any side effects.

However, some patches require the nodes to be rebooted in order to become active. This is the case, for example, for kernel updates or certain package updates (such as glibc).

The nodes of the cluster carry some metadata that is kept up to date by SUSE CaaS Platform. This metadata can be used by the cluster administrator to answer these questions:

  • Does the node need to be rebooted to make some updates active?

  • Does the node have non-interactive updates pending?

  • When was the last check performed?

Cluster administrators can get a quick overview of each cluster node by using one of the following methods.

UI mode:

  • Open a standard Kubernetes UI (e.g. the Kubernetes Dashboard).

  • Click on the node.

  • Look at the annotations associated with the node.

Text mode:

  • Go to a machine with a working kubectl (meaning the user can connect to the cluster).

  • Ensure you have the caasp kubectl plugin installed. This is a simple statically linked binary that SUSE distributes alongside skuba.

  • Execute the kubectl caasp cluster status command (or skuba cluster status).

The output of the command will look like this:

NAME      OS-IMAGE                              KERNEL-VERSION           KUBELET-VERSION   CONTAINER-RUNTIME   HAS-UPDATES   HAS-DISRUPTIVE-UPDATES   CAASP-RELEASE-VERSION
master0   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
master1   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
master2   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
worker0   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
worker1   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
worker2   SUSE Linux Enterprise Server 15 SP2   4.12.14-197.29-default   v1.18.6           cri-o://1.18.2      no            no                       4.5.0
3.1.1.1.1 Node reboots

Some updates require the node to be rebooted to make them active.

The SUSE CaaS Platform cluster is configured by default to take advantage of kured. This service looks for nodes that have to be rebooted and, before performing the actual reboot, takes care of draining the node.

Kured reboots one node at a time. This ensures that the remaining worker nodes are not saturated when they adopt the workloads of the node being rebooted, and that etcd always stays healthy in the case of control plane nodes.

Cluster administrators can integrate kured statistics into a Prometheus instance and create alerts, charts, and other customizations.

Cluster administrators can also fine-tune the kured deployment to prevent its agent from rebooting machines where special workloads are running. For example, it is possible to prevent kured from rebooting nodes running computational workloads until their pods are done. To achieve that, cluster users must be instructed to add special labels to the pods they do not want to see interrupted by a node reboot.
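As a sketch of this mechanism, the upstream kured project provides a blocking pod selector flag; the label values "runtime=long,cost=expensive" below are arbitrary examples, not SUSE defaults:

```yaml
# Fragment of the kured DaemonSet spec (hypothetical label values):
# kured will not reboot a node while any pod matching this selector
# is running on it.
spec:
  template:
    spec:
      containers:
      - name: kured
        command:
        - /usr/bin/kured
        - --blocking-pod-selector=runtime=long,cost=expensive
```

Pods carrying the labels runtime=long and cost=expensive would then postpone the reboot of their node until they terminate.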

3.1.1.1.2 Interactive upgrades

Some updates might disrupt running workloads. This document refers to these interactive updates as "disruptive upgrades".

Cluster administrators do not have to worry about disruptive upgrades. SUSE CaaS Platform applies them automatically while making sure no disruption is caused to the cluster: nodes with disruptive upgrades are updated one at a time, similar to when nodes are automatically rebooted. Moreover, SUSE CaaS Platform takes care of draining and cordoning a node before these updates are applied, and of uncordoning it afterwards.

Disruptive upgrades can take some time to be applied automatically due to their sensitive nature (nodes are updated one by one). Cluster operators can always see the status of all nodes by looking at the annotations of the Kubernetes nodes (see previous section).

By looking at node annotations a cluster administrator can answer the following questions:

  • Does the node have pending disruptive upgrades?

  • Are the disruptive upgrades being applied?

3.1.1.2 Upgrades of OS components (not automated)

It is possible to disable automatic patch application completely if the user wants to inspect every patch that will be applied to the cluster, so that they are in complete control of when the cluster is patched.

By default, SUSE CaaS Platform clusters have some updates applied automatically, and nodes can also be rebooted automatically under some circumstances. To prevent that from happening, nodes that should not be rebooted automatically can be annotated accordingly. Any user with rights to annotate nodes is able to configure this behavior.

Cluster administrators can use software like SUSE Manager to check the patching level of the underlying operating system of any node. When rebooting nodes, it is important to take the following considerations into account:

  • Ensure nodes are drained (and thus cordoned) before they are rebooted, and uncordon the nodes once they are back.

  • Reboot master/etcd nodes one by one. Wait for the rebooted node to come back and make sure etcd is in a healthy state before moving to the next etcd node (this can be done using etcdctl).

  • Do not reboot too many worker nodes at the same time, to avoid the remaining ones being swamped by workloads.

Cluster administrators have to follow the very same steps whenever a node has an interactive (i.e. disruptive) upgrade pending.
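These steps can be sketched as follows. The node name, SSH user, and etcd endpoint are placeholders, and etcdctl must be pointed at the etcd client port with the proper certificates:

```shell
# Drain (and thereby cordon) the node before rebooting it
kubectl drain worker0 --ignore-daemonsets

# Reboot the node
ssh sles@worker0 sudo reboot

# For master/etcd nodes only: verify etcd health before
# moving on to the next node
etcdctl --endpoints=https://<MASTER_IP>:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
        --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
        endpoint health

# Once the node is back, make it schedulable again
kubectl uncordon worker0
```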

3.1.1.3 Upgrades of the Kubernetes platform

The cluster administrator can check whether a new Kubernetes version distributed by SUSE is available and, if there is one, upgrade the cluster in a controlled way.

  • To find out whether a new Kubernetes version is available, execute the following command on a machine that has access to the cluster definition folder:

    • skuba cluster upgrade plan

  • If a new version is available, it will be reported in the terminal.

  • To start the upgrade of the cluster, all commands should be executed from a machine that contains the cluster definition folder, so that an administrative kubeconfig file is available:

    • skuba node upgrade apply --user sles --sudo --target <IP_ADDRESS/FQDN>

    • This command confirms the target version to which the cluster will be upgraded if the process is continued.

    • Upgrade all control plane nodes first, applying the command to each control plane node, one by one.

    • Upgrade all worker nodes last, applying the command to each worker node, one by one.

3.2 Software Versions

SUSE CaaS Platform 4 ships with various components and their respective dependencies. For a full list, refer to the Release Notes.

3.3 Networking

At the networking level, several concepts need to be defined:

  • Container Network Interface (CNI)

    The default plugin providing CNI in SUSE CaaS Platform is Cilium. The CNI plugin forms an overlay network, allowing pods running on different machines in the cluster to communicate with each other transparently.

  • Network policies

    Network policies provide security in terms of routing within the cluster. They define how groups of pods are allowed to communicate with each other and with other network endpoints.

    • Network policies allow for fine grained restrictions on what networks are reachable, both in ingress and egress traffic.

  • Kube proxy

    The kube proxy is a service that runs containerized on all machines of the cluster and allows for pod-to-service communication. When you expose a service and some other pod consumes it, the kube proxy is responsible for setting up the rules on every machine so that the service is reachable from within the consuming pod.

  • Envoy

    Envoy is a high-performance service and edge proxy that provides L7 policy support. Envoy is application-language neutral and hence can work alongside services written in different languages, even though its core is written in C++. Envoy supports Go extensions and WASM, which makes it more adoptable by other applications. Envoy supports multiple protocols such as HTTP/1.1, HTTP/2, gRPC, MongoDB, and DynamoDB, performs L3/L4 filtering, and provides advanced load balancing comparable to NGINX, as well as health checking, observability, and service discovery.

Cilium 1.6 uses Envoy for the L7 Network Policies.

  • Cilium KVStore free operation

    Cilium can now operate entirely without a KVstore in the context of Kubernetes.

  • Cilium Socket-based load-balancing

    Cilium’s socket-based load-balancing combines the advantages of client-side and network-based load-balancing to provide a transparent load-balancing service through Kubernetes: the mapping from the Service IP to the endpoint IP is done only once, during connection establishment.

  • Cilium Generic CNI chaining

    The CNI chaining framework introduced in the Cilium 1.6 release allows running Cilium alongside other CNI plugins such as Weave, Calico, Flannel, the AWS VPC CNI, or the Lyft CNI plugin. This makes it possible to use the eBPF-based security policy enforcement and its features while the legacy CNI plugin in use provides the basic networking support.

  • Cilium Native AWS ENI mode

    Cilium 1.6 has introduced a new IPAM mode, the AWS ENI mode, that works together with a new operator-based design. This helps when running services in AWS: IPAM is managed from operator-defined pools while network policy enforcement is provided by Cilium.

  • Cilium Policy scalability improvements

    Cilium 1.6 provides an improved and scalable policy system that decouples handling of policy and identity definitions while moving to an entirely incremental model.
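Returning to the network policy concept above: the following is a minimal Kubernetes NetworkPolicy that restricts ingress to a group of pods. All names and labels here are hypothetical examples, not defaults shipped with SUSE CaaS Platform:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend    # only pods with this label may connect
```

With Cilium as the CNI plugin, such policies are enforced at the eBPF level; Cilium additionally offers its own resources for L7 rules, which rely on Envoy as described above.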

4 Deployment Scenarios

4.1 Kubernetes components

In Kubernetes there are two different machine types:

  • Control plane (called "Masters")

  • Workers

Control plane machines are responsible for running the main Kubernetes components, this includes:

  • etcd

    • etcd is a distributed key value store. It’s where all data from the cluster is persisted.

  • API server

    • The API server is responsible for serving all the API endpoints that are used by the different Kubernetes components and clients.

  • Main controllers

    • The main controllers are responsible for most of the core functionality from Kubernetes. When you create a Deployment, a controller will be responsible for creating the ReplicaSet, and in the same way, Pods will be created out of the ReplicaSet by a controller as well.

  • Scheduler

    • The scheduler is the component that assigns pods to the different nodes based on a number of restrictions and is aware of individual and collective resource requirements.

Both the control plane and worker nodes run an agent called kubelet and a container runtime (CRI-O). The kubelet is responsible for talking to the container runtime and for managing the lifecycle of the Pods assigned to its machine. The container runtime is the component that creates the containers themselves.

4.2 High Availability Considerations

The default deployment aims for a "High Availability" (HA) Kubernetes service. In order to achieve HA, several control plane nodes are required.

Not every number greater than 1 is optimal for HA, and the choice impacts fault tolerance. The reason for this is the etcd distributed key-value store:

Cluster size  Failure tolerance
1             0
2             0
3             1
4             1
5             2
6             2
7             3
8             3
9             4

Given that etcd runs only on the control plane nodes, having 2 control plane nodes provides an HA solution that is fault tolerant for the Kubernetes components but not for the etcd cluster. If one of those two control plane nodes goes down, the cluster storage will be unavailable and the cluster will not be able to accept new changes (already running workloads will not suffer, but the cluster cannot react to new changes from that point on).

In order to provide a fault tolerant HA environment you must have an odd number of control plane nodes.

A minimum of 3 master nodes is required in order to tolerate the complete loss of one control plane node.
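The table follows from etcd's quorum rule: a cluster of n members needs a majority (floor(n/2) + 1) of its members to stay writable, so it tolerates the loss of floor((n-1)/2) members. A quick shell check reproduces the table:

```shell
# Print quorum size and failure tolerance for etcd cluster sizes 1..9
for n in $(seq 1 9); do
  echo "size=$n quorum=$(( n / 2 + 1 )) tolerance=$(( (n - 1) / 2 ))"
done
```

This also shows why an even member count buys nothing: going from 3 to 4 members raises the quorum from 2 to 3 while the failure tolerance stays at 1.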

Control plane nodes are only part of the whole picture. Most components talk to the API server, and the API server must be exposed on all master nodes so that clients and the rest of the cluster components can reach it.

4.2.1 Load Balancer

The most reasonable way to achieve fault tolerant exposure of the API servers is a load balancer. The load balancer points to all the control plane nodes; the clients and Kubernetes components talk to the load balancer, which performs health checks against all control plane nodes and maintains an active pool of backends.
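As a sketch of such a setup (HAProxy is one possible choice, not mandated by SUSE CaaS Platform; all addresses and names are hypothetical), a TCP load balancer forwarding to the apiserver port on all control plane nodes could look like this:

```
# /etc/haproxy/haproxy.cfg (fragment, hypothetical addresses)
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube_apiserver
    bind *:6443
    default_backend kube_masters

backend kube_masters
    balance roundrobin
    option tcp-check
    server master0 10.0.0.10:6443 check
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
```

Health checks ("check") remove an unreachable apiserver from the backend pool automatically.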

If only one load balancer is deployed this creates a single point of failure. For a complete HA solution, more than one load balancer is required.

Important

If your environment only contains one load balancer it cannot be considered highly available or fault tolerant, since the load balancer becomes the single point of failure.

4.3 Testing / POC

The smallest possible deployment comes without a load balancer and with the minimum number of nodes to be considered a cluster. This deployment type is in no way suitable for production use and has no fault tolerance whatsoever.

  • One master machine

  • One worker machine

Although not recommended, it is also possible to create a POC or testing environment with a single master machine.

4.4 Default Deployment

The default minimal HA scenario requires 5 nodes:

  • 2 Load Balancers

  • 3 Masters

plus the number of workers necessary to host your workloads.

  • Requires:

    • Persistent IP addresses on all nodes.

    • NTP server provided on the host network.

    • DNS entry that resolves to the load balancer VIP.

    • LDAP server or OIDC provider (Active Directory, GitLab, GitHub, etc.)

  • (Optional) "Infrastructure node"

    • LDAP server if LDAP integration is desired and your organization does not have an LDAP server.

    • Local RMT server to synchronize RPMs.

    • Local mirror of the SUSE container registry (registry.suse.com)

    • Local mirror of the SUSE helm chart repository.

4.5 Air Gapped Deployment

In air gapped environments, the "Infrastructure node" is mandatory: it hosts a local RMT server mirroring the SUSE CaaS Platform repositories, a mirror of the SUSE container registry, and a mirror of the SUSE helm chart repository.

4.6 Control plane nodes certificates

Certificates are stored under /etc/kubernetes/pki on the control plane nodes.

4.6.1 CA certificates

Path         Valid for  Common Name  Description
ca.crt       1 year     kubernetes   Kubernetes global CA
etcd/ca.crt  1 year     etcd-ca      etcd global CA

4.6.2 CA certificate keys

Path         Key type  Key length (bits)
ca.key       RSA       2048
etcd/ca.key  RSA       2048

4.6.3 Certificates

Path                          Valid for  CN                             Parent CA   O (Subject)     Kind            Extra SANs
apiserver-kubelet-client.crt  1 year     kube-apiserver-kubelet-client  kubernetes  system:masters  client          -
etcd/healthcheck-client.crt   1 year     kube-etcd-healthcheck-client   etcd-ca     system:masters  client          -
etcd/server.crt               1 year     master                         etcd-ca     -               server          Hostname, IP address, localhost, 127.0.0.1, 0:0:0:0:0:0:0:1
etcd/peer.crt                 1 year     master                         etcd-ca     -               server, client  Hostname, IP address, localhost, 127.0.0.1, 0:0:0:0:0:0:0:1
apiserver.crt                 1 year     kube-apiserver                 kubernetes  -               server          Hostname, IP address, Control Plane Address, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local
apiserver-etcd-client.crt     1 year     kube-apiserver-etcd-client     etcd-ca     system:masters  client          -

4.6.4 Certificate keys

Path                          Key type  Key length (bits)
apiserver.key                 RSA       2048
apiserver-kubelet-client.key  RSA       2048
apiserver-etcd-client.key     RSA       2048
etcd/server.key               RSA       2048
etcd/healthcheck-client.key   RSA       2048
etcd/peer.key                 RSA       2048

Please refer to the SUSE CaaS Platform Administration Guide for more information on the rotation of certificates.

4.7 Worker nodes certificates

The CA certificate for the cluster is stored under /etc/kubernetes/pki on the worker nodes.

When a new worker machine joins the cluster, the kubelet performs a TLS bootstrap: it requests a certificate from the cluster using a bootstrap token, the request is automatically approved by the cluster, and a certificate is created. The new worker downloads this certificate and writes it to disk, and the kubelet uses this client certificate from then on to contact the apiserver.
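On the cluster side, these bootstrap certificate signing requests can be inspected from any machine with a working kubectl. The output shown in the comments is illustrative only:

```shell
# List certificate signing requests and their approval state
kubectl get csr
# NAME        AGE   REQUESTOR                 CONDITION
# csr-h2x7k   1m    system:bootstrap:abcdef   Approved,Issued
```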

4.7.1 CA certificates

The CA certificate is downloaded from the cluster (it is present in the cluster-info secret inside the kube-public namespace). Since this information is public, there are no restrictions on downloading the CA certificate.

This certificate is saved under /etc/kubernetes/pki/ca.crt.

Path                              Valid for  Common Name          Description
/var/lib/kubelet/pki/kubelet.crt  1 year     worker-ca@random-id  Kubelet CA

4.7.2 CA certificate keys

Path         Key type  Key length (bits)
kubelet.key  RSA       2048

4.7.3 Certificates

Certificates are stored under /var/lib/kubelet/pki on the worker nodes.

Path                        Valid for  CN                  Parent CA            O (Subject)   Kind    Extra SANs  Notes
kubelet-client-current.pem  1 year     system:node:worker  kubernetes           system:nodes  client  -           Symlink to kubelet-client-timestamp.pem
kubelet.crt                 1 year     worker@random-id    worker-ca@random-id  -             server  Hostname    -

4.7.4 Certificate keys

Path         Key type  Key length (bits)
kubelet.key  RSA       2048

Please refer to the SUSE CaaS Platform Administration Guide for more information on the rotation of certificates.

4.8 Cluster Management

Cluster lifecycle is managed using skuba. It enables you to manage nodes in your cluster:

  • Bootstrap a new cluster

  • Join new machines to the cluster

    • Master nodes

    • Worker nodes

  • Remove nodes from the cluster

    • Master nodes

    • Worker nodes

4.8.1 Creating a cluster definition

Creating a cluster definition is the first step to initialize your cluster. You can execute this task as follows:

~/clusters$ skuba cluster init --control-plane 10.86.3.149 <CLUSTER_NAME>
[init] configuration files written to /home/my-user/clusters/<CLUSTER_NAME>

This operation happens strictly offline and will generate a folder structure like the following:

~/clusters > tree <CLUSTER_NAME>/
<CLUSTER_NAME>/
├── addons
│   ├── cilium
│   │   ├── base
│   │   │   └── cilium.yaml
│   │   └── patches
│   ├── cri
│   │   └── default_flags
│   ├── dex
│   │   ├── base
│   │   │   └── dex.yaml
│   │   └── patches
│   ├── gangway
│   │   ├── base
│   │   │   └── gangway.yaml
│   │   └── patches
│   ├── kured
│   │   ├── base
│   │   │   └── kured.yaml
│   │   └── patches
│   └── psp
│       ├── base
│       │   └── psp.yaml
│       └── patches
├── kubeadm-init.conf
└── kubeadm-join.conf.d
    ├── master.conf.template
    └── worker.conf.template

18 directories, 9 files

At this point, you can inspect all generated files, and if desired you can experiment with custom settings using declarative management of Kubernetes objects with Kustomize.

To provide custom settings in the form of a strategic merge patch or a JSON 6902 patch, go to the addon's patches directory (for example addons/dex/patches) and create a custom settings file there (for example addons/dex/patches/custom.yaml).

Read https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchstrategicmerge and https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchjson6902 for more information. For example, the following custom settings file is a strategic merge patch against the oidc-dex deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oidc-dex
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 5

4.8.2 Bootstrapping the first node of the cluster

From within your cluster definition folder, you can bootstrap your first master machine:

~/clusters/<CLUSTER_NAME>$ skuba node bootstrap --user sles --sudo --target <IP_ADDRESS/FQDN> <NODENAME>

This operation will read the kubeadm-init.conf file inside your cluster definition, will forcefully set certain settings to the ones required by SUSE CaaS Platform and will bootstrap the node remotely.

Prior to bootstrap it’s possible for you to tweak the configuration that will be used to create the cluster. You can:

  • Tweak the default Pod Security Policies or create extra ones. If you place extra Pod Security Policies in the addons/psp/base folder, those will be created as well when the bootstrap is completed. You can also modify the default ones and/or remove them.

  • Inspect the kubeadm-init.conf and set extra configuration settings supported by kubeadm. The latest supported version is v1beta1.

After this operation has completed several modifications will have happened on your cluster definition folder:

  • The kubeadm-init.conf file will contain the final complete contents used to bootstrap the node, so you can inspect what exact configuration was used to bootstrap the node.

  • An admin.conf file will be created in your cluster definition folder. This is a kubeconfig file that has complete access to the cluster and uses client certificates for authenticating against the cluster.

4.8.3 Adding master nodes to the cluster

Adding new master nodes to the cluster can be achieved by executing the following skuba command:

~/clusters/<CLUSTER_NAME>$ skuba node join --role master --user sles --sudo --target <IP_ADDRESS/FQDN> <NODENAME>

This operation will try to read the kubeadm-join.conf.d/<IP_ADDRESS/FQDN>.conf file if it exists. This allows you to set specific settings for this new node prior to joining it (a similar procedure to how kubeadm-init.conf file behaves when bootstrapping). If this file does not exist, skuba will read the kubeadm-join.conf.d/master.conf.template instead and will create this file automatically when skuba node join is called.

This operation will increase the etcd member count by one. It is recommended to always keep an odd number of master nodes because, as described in previous sections, an even number of nodes does not improve fault tolerance.

4.8.4 Adding worker nodes to the cluster

Adding new worker nodes to the cluster can be achieved by executing the following skuba command:

~/clusters/<CLUSTER_NAME>$ skuba node join --role worker --user sles --sudo --target <IP_ADDRESS/FQDN> <NODENAME>

This operation will try to read the kubeadm-join.conf.d/<IP_ADDRESS/FQDN>.conf file if it exists. This allows you to set specific settings for this new node prior to joining it (a similar procedure to how kubeadm-init.conf file behaves when bootstrapping). If this file does not exist, skuba will read the kubeadm-join.conf.d/worker.conf.template instead and will create this file automatically when skuba node join is called.

4.8.5 Removing nodes from the cluster

Removing nodes from the cluster requires you to execute skuba from a folder containing an admin.conf file, because this operation is performed exclusively through Kubernetes; no access to the node being removed (or to any other node) is required.

To remove a node, execute the following command:

~/clusters/<CLUSTER_NAME>$ skuba node remove <NODENAME>

If the node to be removed is a master, specific actions will be executed automatically, like removing its etcd member from the cluster. Note that this node cannot be added back to this or any other skuba-initiated Kubernetes cluster without being reinstalled first.

5 Glossary

AWS

Amazon Web Services. A broadly adopted cloud platform run by Amazon.

BPF

Berkeley Packet Filter. Technology used by Cilium to filter network traffic at the level of packet processing in the kernel.

CA

Certificate or Certification Authority. An entity that issues digital certificates.

CIDR

Classless Inter-Domain Routing. Method for allocating IP addresses and IP routing.

CNI

Container Networking Interface. Creates a generic plugin-based networking solution for containers based on spec files in JSON format.

CRD

Custom Resource Definition. Functionality to define non-default resources for Kubernetes pods.

FQDN

Fully Qualified Domain Name. The complete domain name for a specific computer, or host, on the internet, consisting of two parts: the hostname and the domain name.

GKE

Google Kubernetes Engine. Managed container orchestration built on Kubernetes by Google. Similar, for example, to Amazon Elastic Kubernetes Service (Amazon EKS) and Azure Kubernetes Service (AKS).

HPA

Horizontal Pod Autoscaler. Based on CPU usage, HPA controls the number of pods in a deployment/replica or stateful set or a replication controller.

KVM

Kernel-based Virtual Machine. Linux native virtualization tool that allows the kernel to function as a hypervisor.

LDAP

Lightweight Directory Access Protocol. A client/server protocol used to access and manage directory information. It reads and edits directories over IP networks and runs directly over TCP/IP using simple string formats for data transfer.

OCI

Open Containers Initiative. A project under the Linux Foundation with the goal of creating open industry standards around container formats and runtime.

OIDC

OpenID Connect. Identity layer on top of the OAuth 2.0 protocol.

OLM

Operator Lifecycle Manager. Open Source tool for managing operators in a Kubernetes cluster.

POC

Proof of Concept. A project directed at proving the feasibility of a design concept.

PSP

Pod Security Policy. PSPs are cluster-level resources that control security-sensitive aspects of the pod specification.

PVC

Persistent Volume Claim. A request for storage by a user.
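A typical PVC manifest is short; the claim name and size below are examples:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce   # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
```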

RBAC

Role-Based Access Control. An approach to restricting system access to authorized users based on defined roles.
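In Kubernetes, RBAC is expressed as Role and RoleBinding (or ClusterRole and ClusterRoleBinding) objects. The role name and user below are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane               # placeholder user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```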

RMT

Repository Mirroring Tool. The successor of SMT. Helps optimize the management of SUSE Linux Enterprise software updates and subscription entitlements.

RPO

Recovery Point Objective. Defines the maximum interval of time that may pass between two backup points before normal business operations can no longer be resumed.

RSA

Rivest-Shamir-Adleman. An asymmetric encryption technique that uses a key pair, a public key and a private key, to perform encryption and decryption.

RTO

Recovery Time Objective. Defines the time (typically as a service level from an SLA) within which backup-relevant incidents must be handled.

SLA

Service Level Agreement. A contractual clause or set of clauses that determines the guaranteed handling of support or incidents by a software vendor or supplier.

SMT

SUSE Subscription Management Tool. Helps to manage software updates, maintain corporate firewall policies, and meet regulatory compliance requirements in SUSE Linux Enterprise 11 and 12. Replaced by RMT and SUSE Manager in newer SUSE Linux Enterprise versions.

SMTP

Simple Mail Transfer Protocol. A communication protocol for electronic mail transmission.

STS

StatefulSet. Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods for a "stateful" application.

TOML

Tom’s Obvious, Minimal Language. Configuration file format used for configuring container registries for CRI-O.
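As an illustration, a registry mirror configuration for CRI-O might be expressed in the containers-registries.conf v2 TOML format; the registry and mirror hostnames are placeholders:

```toml
# /etc/containers/registries.conf (version 2 format)
unqualified-search-registries = ["registry.suse.com", "docker.io"]

[[registry]]
location = "registry.example.com"
insecure = false

  # Pull attempts go to the mirror first, falling back to the
  # primary location above if the mirror is unavailable.
  [[registry.mirror]]
  location = "mirror.example.com:5000"
```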

VPA

Vertical Pod Autoscaler. VPA automatically sets the values for resource requests and container limits based on usage.

VPC

Virtual Private Cloud. A division of a public cloud that supports private cloud computing, offering more control over virtual networks and an isolated environment for sensitive workloads.

A Contributors

The contents of this document are edited by the technical writers for SUSE CaaS Platform, together with original works created by its contributors.

B GNU Licenses

This appendix contains the GNU Free Documentation License version 1.2.

B.1 GNU Free Documentation License

Copyright © 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

B.1.1 0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

B.1.2 1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

B.1.3 2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

B.1.4 3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

B.1.5 4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

  1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

  2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

  3. State on the Title page the name of the publisher of the Modified Version, as the publisher.

  4. Preserve all the copyright notices of the Document.

  5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

  6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

  7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.

  8. Include an unaltered copy of this License.

  9. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

  10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

  11. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

  12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

  13. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

  14. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

  15. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

B.1.6 5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

B.1.7 6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

B.1.8 7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

B.1.9 8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

B.1.10 9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

B.1.10.1 10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.

B.1.10.2 ADDENDUM: How to use this License for your documents

Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with…Texts." line with this:

with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
