18 Using libvirt with Ceph #
The libvirt library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
libvirt, developers and system administrators can focus on a common
management framework, common API, and common shell interface
(virsh) to many different hypervisors, including
QEMU/KVM, Xen, LXC, and VirtualBox.
Ceph block devices support QEMU/KVM. You can use Ceph block devices
with software that interfaces with libvirt. Cloud solutions typically use
libvirt to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph
block devices via librbd.
To create VMs that use Ceph block devices, use the procedures in the
following sections. In the examples, we have used
libvirt-pool for the pool name,
client.libvirt for the user name, and
new-libvirt-image for the image name. You may use any
value you like, but ensure you replace those values when executing commands
in the subsequent procedures.
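If you script the procedures that follow, the example names can be kept in shell variables so they are easy to replace. The sketch below is illustrative only; the variable names are not part of Ceph or libvirt, and it also shows how the libvirt ID relates to the full Ceph user name:

```shell
# Illustrative helper variables for the example names used in this chapter.
# The variable names themselves are arbitrary, not part of Ceph or libvirt.
POOL_NAME="libvirt-pool"
CEPH_USER="client.libvirt"        # full Ceph name; libvirt uses only the ID part
IMAGE_NAME="new-libvirt-image"

# libvirt authenticates with the ID ("libvirt"): the Ceph name minus "client."
LIBVIRT_ID="${CEPH_USER#client.}"

# QEMU-style RBD spec built from the pieces above
echo "rbd:${POOL_NAME}/${IMAGE_NAME}:id=${LIBVIRT_ID}"
```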
18.1 Configuring Ceph #
To configure Ceph for use with libvirt, perform the following steps:
Create a pool. The following example uses the pool name
libvirt-pool with 128 placement groups.

cephadm > ceph osd pool create libvirt-pool 128 128

Verify that the pool exists.

cephadm > ceph osd lspools

Create a Ceph user. The following example uses the Ceph user name
client.libvirt and references libvirt-pool.

cephadm > ceph auth get-or-create client.libvirt mon 'profile rbd' osd \
 'profile rbd pool=libvirt-pool'

Verify that the name exists.

cephadm > ceph auth list

Note
libvirt will access Ceph using the ID libvirt, not the Ceph name
client.libvirt. See
http://docs.ceph.com/docs/master/rados/operations/user-management/#user
for a detailed explanation of the difference between ID and name.

Use QEMU to create an image in your RBD pool. The following example uses
the image name new-libvirt-image and references libvirt-pool.

Tip: Keyring File Location
The libvirt user key is stored in a keyring file placed in the
/etc/ceph directory. The keyring file needs an appropriate name that
includes the name of the Ceph cluster it belongs to. For the default
cluster name 'ceph', the keyring file name is
/etc/ceph/ceph.client.libvirt.keyring.

If the keyring does not exist, create it with:

cephadm > ceph auth get client.libvirt > /etc/ceph/ceph.client.libvirt.keyring

root # qemu-img create -f raw rbd:libvirt-pool/new-libvirt-image:id=libvirt 2G

Verify that the image exists.

cephadm > rbd -p libvirt-pool ls
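The steps above can be collected into one script. The following is a dry-run sketch: the `run` helper only prints each command, so it is safe to execute anywhere; change it as noted in the comment to actually run the commands on an admin node.

```shell
#!/bin/sh
# Dry-run sketch of the Section 18.1 steps; names are the chapter's examples.
run() { echo "$@"; }    # replace with: run() { "$@"; } to execute for real

run ceph osd pool create libvirt-pool 128 128
run ceph osd lspools
run ceph auth get-or-create client.libvirt mon 'profile rbd' \
    osd 'profile rbd pool=libvirt-pool'
run qemu-img create -f raw rbd:libvirt-pool/new-libvirt-image:id=libvirt 2G
run rbd -p libvirt-pool ls
```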
18.2 Preparing the VM Manager #
You may use libvirt without a VM manager, but you may find it simpler to
create your first domain with virt-manager.
Install a virtual machine manager.

root # zypper in virt-manager

Prepare or download an OS image of the system you want to run virtualized.

Launch the virtual machine manager.

virt-manager
18.3 Creating a VM #
To create a VM with virt-manager, perform the following
steps:
Choose the connection from the list, right-click it, and select New.

Import the existing disk image by providing the path to the existing
storage. Specify the OS type, memory settings, and the name of the
virtual machine, for example libvirt-virtual-machine.

Finish the configuration and start the VM.
Verify that the newly created domain exists with
sudo virsh list. If needed, specify the connection string, such as

virsh -c qemu+ssh://root@vm_host_hostname/system list

 Id    Name                           State
-----------------------------------------------
[...]
 9     libvirt-virtual-machine        running

Log in to the VM and stop it before configuring it for use with Ceph.
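As a scripted sketch of that verification, the domain state can be parsed from `virsh list`-style output. The output below is hard-coded so the example runs anywhere; on a real host, capture it with `sudo virsh list` instead:

```shell
#!/bin/sh
# Illustrative check: parse `virsh list`-style output for a running domain.
# Hard-coded sample; on a real host use: virsh_output=$(sudo virsh list)
virsh_output=' Id    Name                           State
-----------------------------------------------
 9     libvirt-virtual-machine        running'

domain="libvirt-virtual-machine"
if printf '%s\n' "$virsh_output" | grep -q "${domain}[[:space:]]*running"; then
    echo "${domain} is running"
fi
```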
18.4 Configuring the VM #
In this chapter, we focus on configuring VMs for integration with Ceph
using virsh. virsh commands often
require root privileges (sudo); if you run them without, they will not return
appropriate results or notify you that root privileges are required. For a
reference of virsh commands, refer to
Virsh Command
Reference.
Open the configuration file with
virsh edit vm-domain-name.

root # virsh edit libvirt-virtual-machine

Under <devices> there should be a <disk> entry.

<devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/path/to/image/recent-linux.img'/>
        <target dev='vda' bus='virtio'/>
        <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>

Replace /path/to/image/recent-linux.img with the path to the OS image.

Important
Use sudo virsh edit instead of a text editor. If you edit the
configuration file under /etc/libvirt/qemu with a text editor,
libvirt may not recognize the change. If there is a discrepancy between
the contents of the XML file under /etc/libvirt/qemu and the result of
sudo virsh dumpxml vm-domain-name, then your VM may not work properly.

Add the Ceph RBD image you previously created as a <disk> entry.

<disk type='network' device='disk'>
    <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='monitor-host' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
</disk>

Replace monitor-host with the name of your host, and replace the pool
and/or image name as necessary. You may add multiple <host> entries for
your Ceph monitors. The dev attribute is the logical device name that
will appear under the /dev directory of your VM. The optional bus
attribute indicates the type of disk device to emulate. The valid
settings are driver specific (for example ide, scsi, virtio, xen, usb,
or sata). See Disks for details of the <disk> element and its child
elements and attributes.

Save the file.
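For reference, a <disk> entry pointing at several monitors could look like the following sketch; the host names are placeholders for your own monitor nodes:

```xml
<disk type='network' device='disk'>
    <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='mon1.example.com' port='6789'/>
        <host name='mon2.example.com' port='6789'/>
        <host name='mon3.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
</disk>
```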
If your Ceph cluster has authentication enabled (it does by default),
you must generate a secret. Open an editor of your choice and create a
file called secret.xml with the following content:

<secret ephemeral='no' private='no'>
    <usage type='ceph'>
        <name>client.libvirt secret</name>
    </usage>
</secret>

Define the secret.

root # virsh secret-define --file secret.xml
<uuid of secret is output here>

Get the client.libvirt key and save the key string to a file.

cephadm > ceph auth get-key client.libvirt | sudo tee client.libvirt.key

Set the UUID of the secret.

root # virsh secret-set-value --secret uuid of secret \
 --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

You must also set the secret manually by adding the following <auth>
entry to the <disk> element you entered earlier (replacing the uuid
value with the result from the command line example above).

root # virsh edit libvirt-virtual-machine

Then, add the <auth></auth> element to the domain configuration file:

...
</source>
<auth username='libvirt'>
    <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
</auth>
<target ...
The example ID is libvirt, not the Ceph name client.libvirt generated
in step 2 of Section 18.1, “Configuring Ceph”. Ensure you use the ID
component of the Ceph name you generated. If for some reason you need
to regenerate the secret, you will need to execute
sudo virsh secret-undefine uuid before executing
sudo virsh secret-set-value again.
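The secret workflow above can be sketched as one dry-run sequence; the `run` helper only prints each command, the UUID is the chapter's example value rather than one from a real cluster, and BASE64_KEY_HERE stands in for the key string:

```shell
#!/bin/sh
# Dry-run of the secret workflow; `run` only prints the commands.
run() { echo "$@"; }
UUID="9ec59067-fdbc-a6c0-03ff-df165c0587b8"   # example value from this chapter

run virsh secret-define --file secret.xml
run ceph auth get-key client.libvirt
run virsh secret-set-value --secret "$UUID" --base64 "BASE64_KEY_HERE"
```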
18.5 Summary #
Once you have configured the VM for use with Ceph, you can start the VM. To verify that the VM and Ceph are communicating, you may perform the following procedures.
Check to see if Ceph is running:

cephadm > ceph health

Check to see if the VM is running:

root # virsh list

Check to see if the VM is communicating with Ceph. Replace
vm-domain-name with the name of your VM domain:

root # virsh qemu-monitor-command --hmp vm-domain-name 'info block'

Check to see if the device from <target dev='vda' bus='virtio'/>
appears under /dev or under /proc/partitions:

cephadm > ls /dev
cephadm > cat /proc/partitions
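The final check can likewise be scripted; the /proc/partitions content below is a hard-coded sample so the sketch runs anywhere, and on the real VM you would read the live file instead:

```shell
#!/bin/sh
# Illustrative check for the virtual disk inside the VM.
# On the real VM use: partitions=$(cat /proc/partitions)
partitions='major minor  #blocks  name
 253        0    2097152 vda'

if printf '%s\n' "$partitions" | grep -qw 'vda'; then
    echo "vda present"
fi
```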