Software can be installed in three basic layers:
Linux RPM packages, the kernel, and so on, installed via AutoYaST, Terraform, or Zypper.
Software that helps or controls the execution of workloads in Kubernetes.
The container image itself, where what can be installed, and how, depends entirely on the actual makeup of the container. Please refer to your respective container image documentation for further details.
Installation of software in container images is beyond the scope of this document.
Applications that will be deployed to Kubernetes will typically contain all the required software to be executed. In some cases, especially when it comes to the hardware layer abstraction (storage backends, GPU), additional packages must be installed on the underlying operating system outside of Kubernetes.
The following examples show the installation of the packages required for Ceph; please adjust the list of packages and repositories to whichever software you need to install.
While you can install any software package from the SLES ecosystem, this falls outside of the support scope for SUSE CaaS Platform.
During the rollout of nodes, you can use either AutoYaST or Terraform (depending on your chosen deployment type) to automatically install packages on all nodes.
For example, to install additional packages required by the Ceph storage backend you can modify
your autoyast.xml or tfvars.yml files to include the additional repositories and instructions to
install xfsprogs and ceph-common.
tfvars.yml
# EXAMPLE:
# repositories = {
#   repository1 = "http://example.my.repo.com/repository1/"
#   repository2 = "http://example.my.repo.com/repository2/"
# }
repositories = {
  ....
}

# Minimum required packages. Do not remove them.
# Feel free to add more packages
packages = [
  "kernel-default",
  "-kernel-default-base",
  "xfsprogs",
  "ceph-common"
]
autoyast.xml
<!-- install required packages -->
<software>
  <image/>
  <products config:type="list">
    <product>SLES</product>
  </products>
  <instsource/>
  <patterns config:type="list">
    <pattern>base</pattern>
    <pattern>enhanced_base</pattern>
    <pattern>minimal_base</pattern>
    <pattern>basesystem</pattern>
  </patterns>
  <packages config:type="list">
    <package>ceph-common</package>
    <package>xfsprogs</package>
  </packages>
</software>
To install software on existing cluster nodes, you must use zypper on each node individually.
Simply log in to a node via SSH and run:
sudo zypper in ceph-common xfsprogs
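If the same packages are needed on several existing nodes, a small loop over SSH can save repetition. This is a minimal sketch only; the node names are placeholders and it assumes SSH access and sudo rights on each node:
# Hypothetical node names; replace with the nodes of your cluster.
for node in master00 worker00 worker01; do
    ssh "$node" "sudo zypper --non-interactive install ceph-common xfsprogs"
done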
As of SUSE CaaS Platform 4.5.2, Helm 3 is the default and provided by the package repository.
To install, run the following command from the location where you normally run skuba commands:
sudo zypper install helm3
The process for migrating an installation from Helm 2 to Helm 3 has been documented and tested by the Helm community; refer to https://github.com/helm/helm-2to3 for details. The migration requires:
A healthy SUSE CaaS Platform 4.5.x installation with applications deployed using Helm 2 and Tiller.
A system on which skuba and helm version 2 have previously been run.
The procedure below requires an available internet connection to install the 2to3 plugin. If the installation is in an air-gapped environment, the system may need to be temporarily moved out of the air-gapped environment.
These instructions are written for a single cluster managed from a single Helm 2 installation. If more than one cluster is being managed by this installation of Helm 2, please refer to https://github.com/helm/helm-2to3 for further details and do not perform the clean-up step until all clusters have been migrated.
This is a procedure for migrating a SUSE CaaS Platform 4.5 deployment that has used Helm 2 to deploy applications.
Install the helm3 package in the same location where you normally run skuba commands (alongside the helm2 package):
sudo zypper in helm3
Install the 2to3 plugin:
helm3 plugin install https://github.com/helm/helm-2to3.git
Back up the Helm 2 data found in the following:
Helm 2 home folder.
Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured to use Secrets.
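As a minimal sketch of such a backup, assuming the default Helm 2 home folder (~/.helm) and the default ConfigMap storage backend in the kube-system namespace, you could archive both like this:
# Back up the Helm 2 home folder (default location assumed).
tar -czf helm2-home-backup.tar.gz -C "$HOME" .helm
# Export the Tiller release ConfigMaps from the cluster.
kubectl get configmaps --namespace kube-system --selector "OWNER=TILLER" --output yaml > helm2-releases-backup.yaml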
Move configuration from 2 to 3:
helm3 2to3 move config
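If you would rather preview what will be moved before running the command above, the 2to3 plugin documents a --dry-run flag for this subcommand:
# Preview the configuration move without changing anything.
helm3 2to3 move config --dry-run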
After the move, if you have installed any custom plugins, then check that they work fine with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.
If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:
helm3 repo remove <my-custom-repo>
helm3 repo add <my-custom-repo> <url-to-custom-repo>
helm3 repo update
Migrate Helm releases (deployed charts) in place:
helm3 2to3 convert RELEASE
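RELEASE is a placeholder for the name of a deployed release. If there are several releases to migrate, a sketch like the following converts each one; it assumes the Helm 2 client is still available as the helm2 command:
# --short prints only the release names reported by Helm 2.
helm2 list --short | while read -r release; do
    helm3 2to3 convert "$release"
done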
Clean up Helm 2 data:
Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.
helm3 2to3 cleanup
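Here as well, the plugin's --dry-run flag can be used to review what the clean-up would remove before actually running it:
# Show what would be cleaned up without deleting anything.
helm3 2to3 cleanup --dry-run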
You may now uninstall the helm2 package and, from now on, use the helm command line provided by the helm3 package.
sudo zypper remove helm2
If you are performing the migration in an air-gapped environment, you must manually install the "developer" version of the 2to3 plugin.
Install the helm3 package in the same location where you normally run skuba commands (alongside the helm2 package):
sudo zypper in helm3
Download the latest release from https://github.com/helm/helm-2to3/releases
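On a machine with internet access, this can be done with curl; the version (0.7.0) and asset name below match the archive used in the next step and may differ for a newer release:
# Download a specific helm-2to3 release archive (v0.7.0 assumed here).
curl -L -O https://github.com/helm/helm-2to3/releases/download/v0.7.0/helm-2to3_0.7.0_linux_amd64.tar.gz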
On your internal workstation, unpack the archive file:
mkdir ./helm-2to3
tar -xvf helm-2to3_0.7.0_linux_amd64.tar.gz -C ./helm-2to3
Install the plugin:
export HELM_LINTER_PLUGIN_NO_INSTALL_HOOK=true
helm plugin install ./helm-2to3
The expected output should contain a message like:
Development mode: not downloading versioned release. Installed plugin: 2to3
Now copy the installed plugin to a subdirectory to allow manual execution:
cd $HOME/.helm/plugins/helm-2to3/
mkdir bin
cp 2to3 bin/2to3
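To check that the plugin registered correctly, you can list the installed plugins with the same helm binary used for the installation above; the 2to3 plugin should appear in the output:
# The 2to3 plugin should be listed here.
helm plugin list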
Back up the Helm 2 data found in the following:
Helm 2 home folder.
Release data from the cluster. Refer to How Helm Uses ConfigMaps to Store Data for details on how Helm 2 stores release data in the cluster. This should apply similarly if Helm 2 is configured to use Secrets.
Move configuration from 2 to 3:
helm3 2to3 move config
After the move, if you have installed any custom plugins, then check that they work fine with Helm 3. If needed, remove and re-add them as described in https://github.com/helm/helm-2to3.
If you have configured any local helm chart repositories, you will need to remove and re-add them. For example:
helm3 repo remove <my-custom-repo>
helm3 repo add <my-custom-repo> <url-to-custom-repo>
helm3 repo update
Migrate Helm releases (deployed charts) in place:
helm3 2to3 convert RELEASE
Clean up Helm 2 data:
Tiller will be cleaned up, and Helm 2 will not be usable on this cluster after cleanup.
helm3 2to3 cleanup
You may now uninstall the helm2 package and, from now on, use the helm command line provided by the helm3 package.
sudo zypper remove helm2