If you need to restart your complete SUSE OpenStack Cloud (for example, after a complete shutdown or a power outage), first ensure that the external Ceph cluster is started, available, and healthy. Then start nodes and services in the order documented below; a scripted power-on sketch follows the start-up list.
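Before powering on any cloud node, it can help to confirm the Ceph cluster state from the command line. The following is a minimal sketch, assuming SSH access as root to a Ceph admin or monitor node; the hostname ceph-admin is a placeholder for one of your cluster's nodes.

```bash
# Minimal health check for the external Ceph cluster.
# "ceph-admin" is a hypothetical hostname; substitute one of your
# cluster's admin/monitor nodes.
ssh root@ceph-admin 'ceph health'   # expect: HEALTH_OK

# More detail if the cluster is not healthy: monitor quorum,
# OSD up/in counts, and placement group states.
ssh root@ceph-admin 'ceph -s'
```

Proceed with the start-up sequence only once the cluster reports HEALTH_OK.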
Service Order on Start-up
Control Node/Cluster on which the Database is deployed
Control Node/Cluster on which RabbitMQ is deployed
Control Node/Cluster on which keystone is deployed
For swift:
Storage Node on which the swift-storage role is deployed
Storage Node on which the swift-proxy role is deployed
Any remaining Control Node/Cluster. The following additional rules apply:
The Control Node/Cluster on which the neutron-server role is deployed needs to be started before starting the node/cluster on which the neutron-l3 role is deployed.
The Control Node/Cluster on which the nova-controller role is deployed needs to be started before starting the node/cluster on which heat is deployed.
Compute Nodes
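The start-up sequence can be scripted if your nodes support out-of-band power control. The sketch below uses ipmitool as one example of such a mechanism; all BMC hostnames and credentials are hypothetical placeholders, and the fixed pause between steps is a crude stand-in for real readiness checks, so adapt both the node list and the wait logic to your deployment.

```bash
#!/bin/bash
# Sketch: power on nodes in the documented start-up order via IPMI.
# Every BMC hostname and credential below is a placeholder.
ORDERED_BMCS=(
  db-node-bmc        # 1. Control Node/Cluster with the Database
  rabbitmq-node-bmc  # 2. Control Node/Cluster with RabbitMQ
  keystone-node-bmc  # 3. Control Node/Cluster with keystone
  swift-storage-bmc  # 4. Storage Node with swift-storage
  swift-proxy-bmc    # 5. Storage Node with swift-proxy
  neutron-node-bmc   # 6. Remaining Control Nodes (neutron-server before
  heat-node-bmc      #    neutron-l3, nova-controller before heat)
  compute-01-bmc     # 7. Compute Nodes
  compute-02-bmc
)

for bmc in "${ORDERED_BMCS[@]}"; do
  ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis power on
  # Crude pacing only; in practice, verify the node's services are up
  # before starting the next one (see the verification sketch below).
  sleep 120
done
```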
If multiple roles are deployed on a single Control Node, the services on that node are automatically started in the correct order. If more than one node has multiple roles, make sure the nodes are started following the order listed above as closely as possible.
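After a node comes up, you can verify that its services started cleanly before moving on to the next node. A small sketch, assuming systemd-managed services and, for clustered Control Nodes, a Pacemaker cluster stack:

```bash
# List any systemd units that failed to start on this node.
# An empty list means all enabled services came up.
systemctl --failed

# On a clustered Control Node, additionally check resource status via
# Pacemaker (run on one cluster member; the cluster stack must be up).
crm status
```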
To shut down SUSE OpenStack Cloud, terminate nodes and services in the order documented below (which is the reverse of the start-up order).
Service Order on Shutdown
Compute Nodes
Control Node/Cluster on which heat is deployed
Control Node/Cluster on which the nova-controller role is deployed
Control Node/Cluster on which the neutron-l3 role is deployed
All Control Node(s)/Cluster(s) on which none of the following services is deployed: Database, RabbitMQ, and keystone.
For swift:
Storage Node on which the swift-proxy role is deployed
Storage Node on which the swift-storage role is deployed
Control Node/Cluster on which keystone is deployed
Control Node/Cluster on which RabbitMQ is deployed
Control Node/Cluster on which the Database is deployed
If required, gracefully shut down the external Ceph cluster (one way to do this is sketched below)
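A commonly used way to quiesce a Ceph cluster before power-off is to prevent it from marking OSDs out and rebalancing data while its daemons stop. This is only a sketch, assuming systemd-managed Ceph daemons; consult the documentation for your Ceph release before relying on it.

```bash
# Run on a Ceph admin/monitor node once all OpenStack clients are down.
# Prevent OSDs from being marked out and data from rebalancing while
# the cluster is powered off.
ceph osd set noout
ceph osd set norebalance

# Stop all Ceph daemons (repeat on each cluster node).
systemctl stop ceph.target

# After powering the cluster back on, clear the flags again:
#   ceph osd unset norebalance
#   ceph osd unset noout
```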