Chapter 1. Maintenance

Table of Contents

Keeping the Nodes Up-To-Date
Service Order on SUSE OpenStack Cloud Start-up or Shutdown
Upgrading from SUSE OpenStack Cloud Crowbar 8 to SUSE OpenStack Cloud Crowbar 9
Requirements
Unsupported configurations
Upgrading Using the Web Interface
Upgrading from the Command Line
Troubleshooting Upgrade Issues
Recovering from Compute Node Failure
Bootstrapping the Compute Plane
Bootstrapping the MariaDB Galera Cluster with Pacemaker when a node is missing
Updating MariaDB with Galera
Load Balancer: Octavia Administration
Removing load balancers
Periodic OpenStack Maintenance Tasks
Rotating Fernet Tokens

Keeping the Nodes Up-To-Date

Keeping the nodes in SUSE OpenStack Cloud Crowbar up-to-date requires an appropriate setup of the update and pool repositories and the deployment of either the Updater barclamp or the SUSE Manager barclamp. For details, see the section called “Update and Pool Repositories”, the section called “Deploying Node Updates with the Updater Barclamp”, and the section called “Configuring Node Updates with the SUSE Manager Client Barclamp”.

If one of those barclamps is deployed, patches are installed on the nodes. Patches that do not require a reboot will not cause a service interruption. If a patch (for example, a kernel update) requires a reboot after installation, services running on the rebooted machine will be unavailable within SUSE OpenStack Cloud. Therefore, we strongly recommend installing such patches during a maintenance window.
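To judge whether a maintenance window is needed, you can check a node for pending patches and for a required reboot. The following is a hedged sketch using standard zypper commands; run it on the node itself (the `needs-rebooting` subcommand is only available on recent zypper versions):

```shell
# List applicable patches; entries flagged as requiring a restart
# indicate that a reboot will be needed after installation:
zypper list-patches

# On recent zypper versions, check whether the running system still
# needs a reboot because of previously installed updates:
zypper needs-rebooting

# List processes that are still using files deleted by an update,
# that is, services that should be restarted:
zypper ps -s
```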

Maintenance Mode

As of SUSE OpenStack Cloud Crowbar 8, it is not possible to put your entire SUSE OpenStack Cloud into Maintenance Mode (for example, limiting all users to read-only operations on the control plane), as OpenStack does not support this. However, when Pacemaker is deployed to manage HA clusters, it should be used to place services and cluster nodes into maintenance mode before performing maintenance work on them. For more information, see the SUSE Linux Enterprise High Availability documentation.
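As a hedged sketch, the crmsh shell that ships with SUSE Linux Enterprise High Availability can place a single node, or the whole cluster, into maintenance mode. The node name below is a placeholder:

```shell
# Put one cluster node into maintenance mode; resources running on it
# become unmanaged and Pacemaker stops monitoring them:
crm node maintenance node1

# ... perform the maintenance work on node1 ...

# Return the node to normal operation:
crm node ready node1

# Alternatively, put the entire cluster into maintenance mode:
crm configure property maintenance-mode=true
# ...and switch it back afterwards:
crm configure property maintenance-mode=false
```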

Consequences when Rebooting Nodes

Administration Server

While the Administration Server is offline, it is not possible to deploy new nodes. However, rebooting the Administration Server has no effect on starting instances or on instances already running.

Control Nodes

The consequences of rebooting a Control Node depend on the services running on that node:

Database, keystone, RabbitMQ, glance, nova:  No new instances can be started.

swift:  No object storage data is available. If glance uses swift, it will not be possible to start new instances.

cinder, Ceph:  No block storage data is available.

neutron:  No new instances can be started. The network will be unavailable on running instances.

horizon:  horizon will be unavailable. Instances can still be started and managed with the command-line tools.
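While horizon is down, the OpenStack command-line clients can take over. The sketch below assumes an RC file with valid credentials (the file name is a placeholder) on a host where the clients are installed:

```shell
# Load the OpenStack credentials into the environment:
source .openrc

# List instances and their status:
openstack server list

# Start a stopped instance (INSTANCE is a name or ID placeholder):
openstack server start INSTANCE

# Reboot a running instance:
openstack server reboot INSTANCE
```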

Compute Nodes

Whenever a Compute Node is rebooted, all instances running on that particular node will be shut down and must be manually restarted. Therefore, we recommend evacuating the node by migrating its instances to another node before rebooting it.
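Evacuating a Compute Node before a reboot can be sketched as follows. Node and instance names are placeholders, and the exact client syntax (for example, the live-migration option) may vary between OpenStack releases:

```shell
# Disable the compute service so the scheduler places no new
# instances on the node that is about to be rebooted:
openstack compute service set --disable node1 nova-compute

# List the instances currently running on that node:
openstack server list --host node1 --all-projects

# Live-migrate each instance; without a target host, the scheduler
# picks a suitable destination:
openstack server migrate --live-migration INSTANCE

# After the reboot, re-enable the compute service:
openstack compute service set --enable node1 nova-compute
```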