6 Boot from SAN and Multipath Configuration #
6.1 Introduction #
For information about supported hardware for multipathing, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 2 “Hardware and Software Support Matrix”, Section 2.2 “Supported Hardware Configurations”.
When exporting a LUN to a node for boot from SAN, ensure that the LUN is presented with LUN ID 0, and complete any setup dialog in the firmware that is necessary for the node to boot the OS from LUN 0.
Any hosts that are connected to 3PAR storage must have a host persona of 2-generic-alua set on the 3PAR. Refer to the 3PAR documentation for the steps necessary to check this and change it if necessary.
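On the 3PAR side, checking and adjusting the persona can be done from the 3PAR CLI. The session below is an illustrative sketch (the host name myhost is a placeholder); consult the 3PAR documentation for the authoritative syntax:

```
cli% showhost -persona myhost      # display the persona currently assigned
cli% sethost -persona 2 myhost     # assign persona 2 (generic-alua) if needed
```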
iSCSI boot from SAN is not supported. For more information on the use of Cinder with multipath, see Section 22.1.3, “Multipath Support”.
To allow SUSE OpenStack Cloud 8 to use volumes from a SAN, you have to specify configuration options for both the installation and the OS configuration phase. In all cases, the devices that are utilized are devices for which multipath is configured.
6.2 Install Phase Configuration #
For FC-connected nodes, and for FCoE nodes whose network processor is from the Emulex family (such as the 650FLB), the following changes need to be made.
In each stanza of servers.yml, insert a line stating boot-from-san: true:

```yaml
- id: controller2
  ip-addr: 192.168.10.4
  role: CONTROLLER-ROLE
  server-group: RACK2
  nic-mapping: HP-DL360-4PORT
  boot-from-san: true
```

This uses the disk /dev/mapper/mpatha as the default device on which to install the OS.

In the disk input models, specify the devices that will be used via their multipath names, which will be of the form /dev/mapper/mpatha, /dev/mapper/mpathb, and so on:

```yaml
volume-groups:
  - name: ardana-vg
    physical-volumes:
      # NOTE: 'sda_root' is a templated value. This value is checked in
      # os-config and replaced by the partition actually used on sda,
      # for example sda1 or sda5.
      - /dev/mapper/mpatha_root
    ...
  - name: vg-comp
    physical-volumes:
      - /dev/mapper/mpathb
```
Instead of using Cobbler, you need to provision a baremetal node manually using the following procedure.
Assign a static IP to the node.
Use the ip addr command to list active network interfaces on your system:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
```

Identify the network interface that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network-scripts. For example, for the eno1 interface, open the /etc/sysconfig/network-scripts/ifcfg-eno1 file and edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none:

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
```
Reboot the SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
Add the ardana user and home directory:

```shell
root # useradd -m -d /var/lib/ardana -U ardana
```

Allow the user ardana to run sudo without a password by creating the /etc/sudoers.d/ardana file with the following configuration:

```
ardana ALL=(ALL) NOPASSWD:ALL
```
When you start installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, you need to copy the Cloud Lifecycle Manager public key to the SLES node to enable passwordless SSH access. One way of doing this is to copy the file ~/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook. Ensure that there is global read access to the file /var/lib/ardana/.ssh/authorized_keys.

Use the following command to test passwordless SSH from the deployer and check the ability to remotely execute sudo commands:

```shell
ssh stack@SLES_NODE_IP "sudo tail -5 /var/log/messages"
```
6.2.1 Deploying the Cloud #
Run the configuration processor:

```shell
tux > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
```

For automated installation, you can specify the required parameters. For example, the following command disables encryption by the configuration processor:

```shell
ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""
```

Use the following playbook to create a deployment directory:

```shell
tux > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
```

To ensure that all existing non-OS partitions on the nodes are wiped prior to installation, you need to run the wipe_disks.yml playbook. The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used in any other case, it may not wipe all of the expected partitions. This step is not required if you are using clean machines.
Before you run the wipe_disks.yml playbook, you need to make the following changes in the deployment directory.

In the ~/scratch/ansible/next/ardana/ansible/roles/diskconfig/tasks/get_disk_info.yml file, locate the following line:

```
shell: ls -1 /dev/mapper/ | grep "mpath" | grep -v {{ wipe_disks_skip_partition }}$ | grep -v {{ wipe_disks_skip_partition }}[0-9]
```

Replace it with:

```
shell: ls -1 /dev/mapper/ | grep "mpath" | grep -v {{ wipe_disks_skip_partition }}$ | grep -v {{ wipe_disks_skip_partition }}[0-9] | grep -v {{ wipe_disks_skip_partition }}_part[0-9]
```

In the ~/scratch/ansible/next/ardana/ansible/roles/multipath/tasks/install.yml file, set the multipath_user_friendly_names variable value to yes for all occurrences.
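The variable change above can be scripted with sed. The following is a minimal sketch that flips every occurrence in a local copy of the file; the sample file name and contents are illustrative only, not the real playbook:

```shell
# Work on a throwaway sample to illustrate the edit; in practice the target is
# ~/scratch/ansible/next/ardana/ansible/roles/multipath/tasks/install.yml.
cat > install-sample.yml <<'EOF'
- name: set multipath defaults
  vars:
    multipath_user_friendly_names: no
- name: another task
  vars:
    multipath_user_friendly_names: no
EOF

# Replace the value of every occurrence of the variable with "yes".
sed -i 's/\(multipath_user_friendly_names:[[:space:]]*\).*/\1yes/' install-sample.yml

grep 'multipath_user_friendly_names' install-sample.yml
```

Review the result with grep before running the playbook, since a stray occurrence left at no would silently disable friendly names for that task.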
Run the wipe_disks.yml playbook:

```shell
tux > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml
```

If you have used an encryption password when running the configuration processor, use the command below, and enter the encryption password when prompted:

```shell
ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml --ask-vault-pass
```

Run the site.yml playbook:

```shell
tux > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts site.yml
```

If you have used an encryption password when running the configuration processor, use the command below, and enter the encryption password when prompted:

```shell
ansible-playbook -i hosts/verb_hosts site.yml --ask-vault-pass
```
The step above runs osconfig to configure the cloud and ardana-deploy to deploy the cloud. Depending on the number of nodes, this step may take considerable time to complete.
6.3 QLogic FCoE restrictions and additional configurations #
If you are using network cards such as the QLogic FlexFabric 536 and 630 series, there are additional OS configuration steps to support the import of LUNs, as well as some restrictions on supported configurations.
The restrictions are:
Only one network card can be enabled in the system.
The FCoE interfaces on this card are dedicated to FCoE traffic. They cannot have IP addresses associated with them.
NIC mapping cannot be used.
In addition to the configuration options above, you also need to specify the FCoE interfaces for the install and OS configuration phases. There are three places where you need to add configuration options for FCoE support:
In servers.yml, which is used for configuration of the system during OS install, FCoE interfaces need to be specified for each server. In particular, the MAC addresses of the FCoE interfaces need to be given, not the symbolic names (for example, eth2):

```yaml
- id: compute1
  ip-addr: 10.245.224.201
  role: COMPUTE-ROLE
  server-group: RACK2
  mac-addr: 6c:c2:17:33:4c:a0
  ilo-ip: 10.1.66.26
  ilo-user: linuxbox
  ilo-password: linuxbox123
  boot-from-san: True
  fcoe-interfaces:
    - 6c:c2:17:33:4c:a1
    - 6c:c2:17:33:4c:a9
```

Important: NIC mapping cannot be used.
For the osconfig phase, you will need to specify the fcoe-interfaces as a peer of network-interfaces in the net_interfaces.yml file:

```yaml
- name: CONTROLLER-INTERFACES
  fcoe-interfaces:
    - name: fcoe
      devices:
        - eth2
        - eth3
  network-interfaces:
    - name: eth0
      device:
        name: eth0
      network-groups:
        - EXTERNAL-API
        - EXTERNAL-VM
        - GUEST
        - MANAGEMENT
```

Important: The MAC addresses specified in the fcoe-interfaces stanza in servers.yml must correspond to the symbolic names used in the fcoe-interfaces stanza in net_interfaces.yml. Also, to satisfy the FCoE restriction outlined in Section 6.3, “QLogic FCoE restrictions and additional configurations” above, there can be no overlap between the devices in fcoe-interfaces and those in network-interfaces in the net_interfaces.yml file. In the example, eth2 and eth3 are fcoe-interfaces, while eth0 is in network-interfaces.

As part of the initial install from an ISO, additional parameters need to be supplied on the kernel command line:

```
multipath=true partman-fcoe/interfaces=<mac address1>,<mac address2> disk-detect/fcoe/enable=true --- quiet
```
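For example, with the two FCoE MAC addresses from the servers.yml example above, the appended parameters would look like the following (substitute your own addresses):

```
multipath=true partman-fcoe/interfaces=6c:c2:17:33:4c:a1,6c:c2:17:33:4c:a9 disk-detect/fcoe/enable=true --- quiet
```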
Since NIC mapping is not used to guarantee the order of the networks across the system, the installer will remap the network interfaces in a deterministic fashion as part of the install. As part of the installer dialog, if DHCP is not configured for an interface, it is necessary to confirm that the appropriate interface is assigned the IP address. The network interfaces may not have the names expected when installing via an ISO. When you are asked to apply an IP address to an interface, press Alt–F2 and, in the console window, run the command ip a to examine the interfaces and their associated MAC addresses. Make a note of the interface name with the expected MAC address and use this in the subsequent dialog. Press Alt–F1 to return to the installation screen. Note that the names of the interfaces may have changed after the installation completes; these names are used consistently in any subsequent operations.
Therefore, even if FCoE is not used for boot from SAN (for example, it is used only for cinder), it is recommended that fcoe-interfaces be specified as part of the install (without the multipath or disk-detect options).
Alternatively, you need to run
osconfig-fcoe-reorder.yml before
site.yml or osconfig-run.yml is
invoked to reorder the networks in a similar manner to the installer. In
this case, the nodes will need to be manually rebooted for the network
reorder to take effect. Run osconfig-fcoe-reorder.yml
in the following scenarios:
If you have used a third-party installer to provision your bare-metal nodes
If you are booting from a local disk (that is one that is not presented from the SAN) but you want to use FCoE later, for example, for cinder.
To run the playbook:

```shell
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts osconfig-fcoe-reorder.yml
```
If you do not run osconfig-fcoe-reorder.yml, you will
encounter a failure in osconfig-run.yml.
If you are booting from a local disk, the LUNs that will be imported over
FCoE will not be visible before site.yml or
osconfig-run.yml has been run. However, if you need to
import the LUNs before this, for instance, in scenarios where you need to
run wipe_disks.yml (run this only after first running
bm-reimage.yml), then you can run the
fcoe-enable playbook across the nodes in question. This
will configure FCoE and import the LUNs presented to the nodes.
```shell
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/verb_hosts fcoe-enable.yml
```
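After the playbook completes, you can check on a node that FCoE is up and that the LUNs have been imported. The commands below are a sketch and assume the standard fcoe-utils and multipath-tools packages are installed:

```
sudo fcoeadm -i      # list FCoE interfaces and their state
sudo multipath -ll   # the imported LUNs should appear as mpath devices
```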
6.4 Installing the SUSE OpenStack Cloud 8 ISO for Nodes That Support Boot From SAN #
During manual installation of SUSE Linux Enterprise Server 12 SP3, select the desired SAN disk and create an LVM partitioning scheme that meets SUSE OpenStack Cloud requirements: that is, it has an ardana-vg volume group and an ardana-vg-root logical volume. For further information on partitioning, see Section 3.3, “Partitioning”.

After the installation is completed and the system is booted up, open the file /etc/multipath.conf and edit the defaults as follows:

```
defaults {
    user_friendly_names yes
    bindings_file "/etc/multipath/bindings"
}
```

Open the /etc/multipath/bindings file and map the expected device names to the SAN disks selected during installation. In SUSE OpenStack Cloud, the naming convention is mpatha, mpathb, and so on. For example:

```
mpatha-part1 360000000030349030-part1
mpatha-part2 360000000030349030-part2
mpatha-part3 360000000030349030-part3
mpathb-part1 360000000030349000-part1
mpathb-part2 360000000030349000-part2
```
Reboot the machine to enable the changes.
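After the reboot, you can verify that the bindings took effect by listing the multipath devices; the commands below are a sketch assuming multipath-tools is installed:

```
sudo multipath -ll    # devices should now be named mpatha, mpathb, ...
ls /dev/mapper/       # partitions follow the mpatha-part1 naming convention
```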