12 Bare Metal #
The Bare Metal service provides physical hardware management features.
12.1 Introduction #
The Bare Metal service provides physical hardware as opposed to virtual machines. It also provides several reference drivers, which leverage common technologies like PXE and IPMI, to cover a wide range of hardware. The pluggable driver architecture also allows vendor-specific drivers to be added for improved performance or functionality not provided by reference drivers. The Bare Metal service makes physical servers as easy to provision as virtual machines in a cloud, which in turn will open up new avenues for enterprises and service providers.
12.2 System architecture #
The Bare Metal service is composed of the following components:
An admin-only RESTful API service, by which privileged users, such as operators and other services within the cloud control plane, may interact with the managed bare-metal servers.
A conductor service, which conducts all activity related to bare-metal deployments. Functionality is exposed via the API service. The Bare Metal service conductor and API service communicate via RPC.
Various drivers that support heterogeneous hardware, which enable features specific to unique hardware platforms and leverage divergent capabilities via a common API.
A message queue, which is a central hub for passing messages, such as RabbitMQ. It should use the same implementation as that of the Compute service.
A database for storing information about the resources. Among other things, this includes the state of the conductors, nodes (physical servers), and drivers.
When a user requests to boot an instance, the request is passed to the Compute service via the Compute service API and scheduler. The Compute service hands the request over to the Bare Metal service, where it passes from the Bare Metal service API to the conductor, which invokes a driver to provision a physical server for the user.
12.3 Bare Metal deployment #
- PXE deploy process
- Agent deploy process
12.4 Use Bare Metal #
1. Install the Bare Metal service.
2. Set up the Bare Metal driver in the compute node's `nova.conf` file.
3. Set up the TFTP folder and prepare the PXE boot loader file.
4. Prepare the bare metal flavor.
5. Register the nodes with the correct drivers.
6. Configure the driver information.
7. Register the ports information.
8. Use the `openstack server create` command to kick off the bare metal provision.
9. Check the nodes' provision state and power state.
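As an illustration of step 2, a minimal `nova.conf` fragment on the compute node might look like the following. This is a sketch, not a complete configuration: the option names follow the upstream Nova Ironic driver documentation for recent releases, and the authentication values are placeholders you must adapt to your deployment.

```ini
[DEFAULT]
# Use the Ironic driver instead of the default libvirt driver
compute_driver = ironic.IronicDriver

[ironic]
# Credentials the Compute service uses to talk to the Bare Metal API
# (all values below are deployment-specific placeholders)
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
username = ironic
password = IRONIC_PASSWORD
```

Check the configuration reference for your release, as option names and defaults have changed between releases.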
12.4.1 Use multitenancy with Bare Metal service #
12.4.1.1 Use multitenancy with Bare Metal service #
Multitenancy allows creating a dedicated project network that extends the
current Bare Metal (ironic) service capability of providing flat
networks. Multitenancy works in conjunction with the Networking (neutron)
service to allow provisioning of a bare metal server onto the project network.
Therefore, multiple projects can get isolated instances after deployment.

The Bare Metal service provides the `local_link_connection` information to the
Networking service ML2 driver. The ML2 driver uses that information to plug the
specified port into the project network.
| Field | Description |
|---|---|
| `switch_id` | Required. Identifies a switch and can be an LLDP-based MAC address or an OpenFlow-based `datapath_id`. |
| `port_id` | Required. Port ID on the switch, for example, Gig0/1. |
| `switch_info` | Optional. Used to distinguish different switch models or other vendor-specific identifiers. |
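As a client-side illustration of the table above, the sketch below assembles and sanity-checks a `local_link_connection` dictionary before it would be passed to a port-create call. This helper is hypothetical (it is not part of ironic), and the MAC/datapath-id patterns are simplifying assumptions.

```python
import re

# Fields of local_link_connection as described in the table above.
REQUIRED = ("switch_id", "port_id")
OPTIONAL = ("switch_info",)

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.IGNORECASE)

def validate_local_link_connection(llc):
    """Return a list of problems found in a local_link_connection dict."""
    problems = []
    for key in REQUIRED:
        if not llc.get(key):
            problems.append("missing required field: %s" % key)
    for key in llc:
        if key not in REQUIRED + OPTIONAL:
            problems.append("unknown field: %s" % key)
    # switch_id may be an LLDP-based MAC address; flag values that are
    # neither a MAC nor a plausible OpenFlow datapath id (16 hex digits).
    switch_id = llc.get("switch_id", "")
    if switch_id and not (MAC_RE.match(switch_id)
                          or re.fullmatch(r"[0-9a-f]{16}", switch_id, re.IGNORECASE)):
        problems.append("switch_id is neither a MAC nor a datapath id")
    return problems

llc = {"switch_id": "aa:bb:cc:dd:ee:ff", "port_id": "Gig0/1",
       "switch_info": "cisco-3750"}
print(validate_local_link_connection(llc))  # []
```

Catching a malformed `switch_id` or a missing `port_id` before calling the API saves a round trip, since the Bare Metal API rejects ports with incomplete `local_link_connection` data.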
12.4.1.1.1 Configure Networking service ML2 driver #
To enable the Networking service ML2 driver, edit the
`/etc/neutron/plugins/ml2/ml2_conf.ini` file:

1. Add the name of your ML2 driver.
2. Add the vendor ML2 plugin configuration options.

```ini
[ml2]
...
mechanism_drivers = my_mechanism_driver

[my_vendor]
param_1 = ...
param_2 = ...
param_3 = ...
```

For more details, see Networking service mechanism drivers.
12.4.1.1.2 Configure Bare Metal service #
After you configure the Networking service ML2 driver, configure the Bare Metal service:

1. Edit `/etc/ironic/ironic.conf` for the `ironic-conductor` service. Set the
   `network_interface` node field to a valid network driver that is used to
   switch, clean, and provision networks.

   ```ini
   [DEFAULT]
   ...
   enabled_network_interfaces=flat,neutron

   [neutron]
   ...
   cleaning_network_uuid=$UUID
   provisioning_network_uuid=$UUID
   ```

   Warning: The `cleaning_network_uuid` and `provisioning_network_uuid`
   parameters are required for the `neutron` network interface. If they are
   not set, `ironic-conductor` fails to start.

2. Set `neutron` to use the Networking service ML2 driver:

   ```
   $ ironic node-create -n $NAME --network-interface neutron --driver agent_ipmitool
   ```

3. Create a port with appropriate `local_link_connection` information. Set the
   `pxe_enabled` port attribute to `True` to create network ports for the
   `pxe_enabled` ports only:

   ```
   $ ironic --ironic-api-version latest port-create -a $HW_MAC_ADDRESS \
     -n $NODE_UUID -l switch_id=$SWITCH_MAC_ADDRESS \
     -l switch_info=$SWITCH_HOSTNAME -l port_id=$SWITCH_PORT --pxe-enabled true
   ```
12.5 Troubleshooting #
12.5.1 No valid host found error #
Problem#
Sometimes /var/log/nova/nova-conductor.log contains the following error:
NoValidHost: No valid host was found. There are not enough hosts available.
The message No valid host was found means that the Compute service
scheduler could not find a bare metal node suitable for booting the new
instance.
This usually indicates a mismatch between the resources the Compute service expects to find and the resources the Bare Metal service advertised to it.
Solution#
If you get this message, check the following:
1. Introspection should have succeeded for the node, or you should have
   entered the required bare-metal node properties manually. For each node
   listed by the `ironic node-list` command, use:

   ```
   $ ironic node-show <IRONIC-NODE-UUID>
   ```

   and make sure that the `properties` JSON field has valid values for the
   keys `cpus`, `cpu_arch`, `memory_mb` and `local_gb`.

2. The flavor in the Compute service that you are using must not exceed the
   bare-metal node properties above for the required number of nodes. Use:

   ```
   $ openstack flavor show FLAVOR
   ```

3. Make sure that enough nodes are in the `available` state according to the
   `ironic node-list` command. Nodes in the `manageable` state usually mean
   they have failed introspection.

4. Make sure the nodes you are going to deploy to are not in maintenance
   mode. Use the `ironic node-list` command to check. A node automatically
   going to maintenance mode usually means incorrect credentials for this
   node. Check them and then remove maintenance mode:

   ```
   $ ironic node-set-maintenance <IRONIC-NODE-UUID> off
   ```

5. It takes some time for node information to propagate from the Bare Metal
   service to the Compute service after introspection. Our tooling usually
   accounts for it, but if you did some steps manually there may be a period
   of time when nodes are not available to the Compute service yet. Check
   that the `openstack hypervisor stats show` command correctly shows the
   total amount of resources in your system.
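The flavor-versus-node comparison described above can be sketched as a small function. This is a simplification of what the Compute scheduler actually does (the real scheduler applies filters and weighers); the dictionary keys mirror the node `properties` keys named above, while the flavor keys (`vcpus`, `ram_mb`, `disk_gb`) are illustrative names chosen for this sketch.

```python
def flavor_fits_node(flavor, node_properties):
    """Return True if a bare-metal node's properties can satisfy a flavor.

    Compares the resources named in the checklist above: cpus, memory_mb
    and local_gb, plus the CPU architecture. Simplified sketch only.
    """
    if flavor.get("vcpus", 0) > node_properties.get("cpus", 0):
        return False
    if flavor.get("ram_mb", 0) > node_properties.get("memory_mb", 0):
        return False
    if flavor.get("disk_gb", 0) > node_properties.get("local_gb", 0):
        return False
    arch = flavor.get("cpu_arch")
    if arch and arch != node_properties.get("cpu_arch"):
        return False
    return True

node = {"cpus": 8, "cpu_arch": "x86_64", "memory_mb": 16384, "local_gb": 100}
small = {"vcpus": 4, "ram_mb": 8192, "disk_gb": 40, "cpu_arch": "x86_64"}
too_big = {"vcpus": 16, "ram_mb": 8192, "disk_gb": 40}

print(flavor_fits_node(small, node))    # True
print(flavor_fits_node(too_big, node))  # False
```

If no registered node passes a check like this for the requested flavor, the scheduler has no candidate host and reports the `NoValidHost` error described above.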