Virtualization
- WHAT?
By means of virtualization, you can run multiple virtual machines on a single bare-metal host.
- WHY?
Sharing host hardware between multiple virtualized guests significantly saves resources.
- EFFORT
It takes less than 15 minutes of your time to understand the concept of virtualization.
1 Introduction to virtualization #
Virtualization is a technology that provides a way for a machine (VM Host Server) to run another operating system (VM Guest) on top of the host operating system.
1.1 How does virtualization work? #
The primary component of VM Host Server that enables virtualization is a hypervisor. A hypervisor is a layer of software that runs directly on VM Host Server's hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.
1.2 Benefits of virtualization #
Virtualization offers many advantages while providing the same service as a hardware server.
Virtualization reduces the cost of your infrastructure. Servers are mainly used to provide a service to a customer, and a virtualized operating system can provide the same service with the following advantages:
Less hardware: you can run several operating systems on a single host, so hardware maintenance is reduced.
Less power/cooling: less hardware means you spend less on electric power, backup power, and cooling when you need to provide more services.
Space savings: you save data center space because you do not need additional hardware servers (you run fewer servers than services).
Less management: using VM Guests simplifies the administration of your infrastructure.
Agility and productivity: virtualization provides migration capabilities, live migration, and snapshots. These features reduce downtime and offer an easy way to move a service from one place to another without service interruption.
2 Installation of virtualization components #
To run a virtualization server (VM Host Server) that can host multiple guest systems (VM Guests), you need to install required virtualization components on the server. These components vary depending on which virtualization technology you want to use.
You can install the virtualization tools required to run a VM Host Server either when installing the system (see the manual installation), or on an already installed system by installing a virtualization pattern. The latter option is described below:
> sudo zypper install -t pattern PATTERN_NAME
Replace the PATTERN_NAME with one of the following values:
kvm_server
Installs a basic VM Host Server with the KVM and QEMU environments.
kvm_tools
Installs libvirt tools for managing and monitoring VM Guests in the KVM environment.
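For example, to set up a complete KVM host in one step, you can install both patterns together (a minimal sketch using the pattern names listed above):
> sudo zypper install -t pattern kvm_server kvm_tools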
3 Virtualization modes #
There are two basic modes of hosting VM Guests on virtual machines: full virtualization mode and paravirtual mode.
- Full virtualization (FV)
FV lets virtual machines run unmodified operating systems. It uses either Binary Translation or hardware-assisted virtualization technology, such as AMD* Virtualization or Intel* Virtualization Technology, to improve performance on processors that support it. In FV mode, VM Guest is also called the Hardware Virtual Machine (HVM).
Tip: Certain guest operating systems hosted in full virtualization mode can be configured to use drivers from the SUSE Virtual Machine Drivers Pack (VMDP) instead of drivers included in the operating system. Running virtual machine drivers improves performance on guest operating systems, such as Windows Server 2003.
- Paravirtualization (PV)
PV normally requires that guest operating systems are modified for the virtualization environment. VM Guests running in paravirtual mode have better performance than those running under full virtualization. Operating systems currently modified to run in paravirtual mode are called paravirtualized operating systems and include SLES.
- PV on HVM (PVHVM)
PVHVM enhances HVM (see Full virtualization (FV)) with paravirtualized drivers, and handling of paravirtualized interrupts and timers.
4 Virtualization limits and support #
QEMU is only supported when used for virtualization together with the KVM hypervisor. The TCG accelerator is not supported, even when it is distributed within SUSE products. Users must not rely on QEMU TCG to provide guest isolation or any security guarantees. See also https://qemu-project.gitlab.io/qemu/system/security.html.
4.1 Architecture support #
4.1.1 KVM hardware requirements #
SUSE supports KVM full virtualization on AMD64/Intel 64, AArch64, IBM Z and IBM LinuxONE hosts.
On the AMD64/Intel 64 architecture, KVM is designed around hardware virtualization features included in AMD* (AMD-V) and Intel* (VT-x) CPUs. It supports virtualization features of chipsets and PCI devices, such as an I/O Memory Mapping Unit (IOMMU) and Single Root I/O Virtualization (SR-IOV). You can test whether your CPU supports hardware virtualization with the following command:
> egrep '(vmx|svm)' /proc/cpuinfo
If this command returns no output, your processor either does not support hardware virtualization, or this feature has been disabled in the BIOS or firmware.
The following Web sites identify AMD64/Intel 64 processors that support hardware virtualization: https://ark.intel.com/Products/VirtualizationTechnology (for Intel CPUs), and https://products.amd.com/ (for AMD CPUs).
On the Arm architecture, Armv8-A processors include support for virtualization.
On the Arm architecture, we only support running QEMU/KVM via the CPU model host (it is named host-passthrough in Virtual Machine Manager or libvirt).
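In practice, this means selecting the host CPU model when starting or defining a guest. Both of the following are sketches; remaining options are omitted and the names are placeholders:
> qemu-system-aarch64 -machine virt -accel kvm -cpu host ...   # plain QEMU
> virt-install --cpu host-passthrough ...                      # libvirt/virt-install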
The KVM kernel modules only load if the CPU hardware virtualization features are available.
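You can quickly confirm this on a running host (a minimal check; the module name depends on the CPU vendor):
> lsmod | grep kvm   # expect kvm_intel or kvm_amd in the output
> ls /dev/kvm        # this device node exists when KVM is usable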
Additional RAM is needed for each virtualized guest. It should be at least the same amount that is needed for a physical installation. It is also strongly recommended to have at least one processor core or hyper-thread for each running guest.
AArch64 is a continuously evolving platform. It does not have a traditional standards and compliance certification program to enable interoperability with operating systems and hypervisors. Ask your vendor for the support statement on SUSE Linux Enterprise Server.
Running KVM hypervisor on the POWER platform is not supported.
4.2 Hypervisor limits #
New features and virtualization limits for KVM are outlined in the Release Notes for each Service Pack (SP).
Only packages that are part of the official repositories for SUSE Linux Enterprise Server are supported. Conversely, all optional subpackages and plug-ins (for QEMU, libvirt) provided at SUSE Package Hub are not supported.
For the maximum total number of virtual CPUs per host, see the recommendations in the Virtualization Best Practices Guide. The total number of virtual CPUs should be proportional to the number of available physical CPUs.
4.2.1 KVM limits #
Supported (and tested) virtualization limits of a SUSE Linux Enterprise Server 16.0 host running Linux guests on AMD64/Intel 64. For other operating systems, refer to the specific vendor.
| Maximum virtual CPUs per VM | 768 |
| Maximum memory per VM | 4 TiB |
KVM host limits are identical to SUSE Linux Enterprise Server (see the corresponding section of release notes), except for:
Maximum virtual CPUs per VM: see recommendations in the Virtualization Best Practices Guide regarding the overcommitment of physical CPUs. The total virtual CPUs should be proportional to the available physical CPUs.
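To apply this recommendation, compare the host's physical CPU count with the total number of vCPUs you have allocated (a quick check using standard tools):
> lscpu | grep '^CPU(s):'          # logical CPUs available on the host
> virsh nodeinfo | grep 'CPU(s)'   # the same information as reported by libvirt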
4.3 Guest VM restrictions and limitations (secure VM) #
Be aware of the following functionalities and features that are not available, or limited, for guest VMs, especially when deployed within secure VM environments. These limitations are crucial for maintaining the enhanced security provided by the underlying hardware and software configurations. A quick host readiness check is shown after this list.
Secure Boot (AMD side): Secure Boot functionality is not supported on the AMD platform for guest VMs within this secure environment. This means that guest VMs cannot leverage the UEFI Secure Boot mechanism to verify the digital signatures of boot components, which typically helps prevent the loading of unauthorized or malicious software during the boot process. Users should consider alternative methods for ensuring software integrity post-boot.
VM migration: The live migration of virtual machines between hosts is currently not supported. This implies that planned maintenance, load balancing, or disaster recovery scenarios requiring VM movement without downtime will need to involve a full shutdown and restart of the guest VM on the new host. This limitation is often a consequence of maintaining the cryptographic isolation and attestation state of secure VMs.
Suspend/restore: The ability to suspend a VM's execution state to disk and later restore it is not available. This impacts operational flexibility, as VMs cannot be paused and resumed seamlessly. Any interruption to a guest VM's operation will require a full shutdown and a fresh boot cycle, losing the immediate operational state.
Pass-through devices: Direct pass-through of host devices (such as GPUs, network cards, or storage controllers) to the guest VM is not supported. This limitation restricts scenarios where guest VMs require exclusive, high-performance access to specific hardware components. Workloads that heavily rely on direct hardware interaction, like certain graphical applications or specialized I/O operations, may experience reduced performance or incompatibility.
VM reboot: The internal reboot functionality for guest VMs is not supported. If a guest VM requires a restart, it must be fully shut down and then started again from the host management interface. This ensures that the secure state of the VM is properly re-established upon each boot, rather than relying on an internal reset that might bypass certain security checks.
Memory ballooning: Memory ballooning, which allows dynamic adjustment of VM memory by reclaiming unused guest memory back to the host, is not supported. This means that the allocated memory for a guest VM will remain fixed, regardless of its actual usage. Consequently, memory overcommitment strategies, where the sum of allocated VM memory exceeds the physical host memory, cannot be effectively utilized, potentially leading to less efficient memory utilization on the host.
Hotplug CPU/memory: The hotplugging (adding or removing) of CPU cores or memory modules while the VM is running is not supported. Any changes to the vCPU or memory configuration of a guest VM will require a full shutdown and a restart of the VM for the changes to take effect. This affects the agility and flexibility in dynamically scaling resources for running workloads.
Virtio graphics: Only Virtio block devices (for storage) and network devices are supported. Virtio graphics are not available for guest VMs in this environment. This implies that guest VMs will rely on basic graphics emulation, which may not provide optimal performance for graphically intensive applications, user interfaces, or remote desktop protocols requiring accelerated graphics.
Huge pages: The use of huge pages for memory allocation within the guest VM is not supported. Huge pages can improve performance by reducing Translation Lookaside Buffer (TLB) misses, especially for applications with large memory footprints. Without huge page support, memory management might incur slightly higher overhead, which could subtly impact the performance of memory-intensive applications.
vCPU limit (AMD SNP): The number of virtual CPUs (vCPUs) that can be assigned to a guest VM is limited to 255 when utilizing AMD Secure Nested Paging (SNP). This specific limitation is imposed by the AMD SNP architecture to maintain the integrity and performance characteristics of the secure execution environment. Workloads requiring more than 255 vCPUs cannot be deployed on these secure VMs.
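To check whether an AMD host is ready to run such secure guests, the kvm_amd module exposes its state via sysfs (a minimal check; the sev_snp parameter is only present on recent kernels):
> cat /sys/module/kvm_amd/parameters/sev       # Y or 1 when SEV is enabled
> cat /sys/module/kvm_amd/parameters/sev_snp   # Y or 1 when SEV-SNP is enabled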
4.4 Supported host environments (hypervisors) #
This section describes the support status of SUSE Linux Enterprise Server 16.0 running as a guest operating system on top of different virtualization hosts (hypervisors).
| SUSE Linux Enterprise Server | Hypervisors |
|---|---|
| SUSE Linux Enterprise Server 12 SP5 | KVM (SUSE Linux Enterprise Server 15 SP6 guest must use UEFI boot) |
| SUSE Linux Enterprise Server 15 SP3 to SP7 | KVM |
| Windows Server 2019, 2022, 2025 | Hyper-V |
You can also search in the SUSE YES certification database.
Support for SUSE host operating systems is full L3 (both for the guest and host), according to the respective product lifecycle.
SUSE provides full L3 support for SUSE Linux Enterprise Server guests within third-party host environments.
Support for the host and cooperation with SUSE Linux Enterprise Server guests must be provided by the host system's vendor.
4.5 Supported guest operating systems #
This section lists the support status for guest operating systems virtualized on top of SUSE Linux Enterprise Server 16.0 for KVM hypervisors.
Microsoft Windows guests can be rebooted by libvirt/virsh only if paravirtualized drivers are installed in the guest. Refer to https://www.suse.com/products/vmdriverpack/ for more details on downloading and installing PV drivers.
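With the PV drivers in place, a reboot request from the host looks as follows (a sketch; the domain name win2022 is a placeholder):
> virsh reboot win2022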
- SUSE Linux Enterprise Server 12 SP5
- SUSE Linux Enterprise Server 15 SP3, 15 SP4, 15 SP5, 15 SP6, 15 SP7
- SUSE Linux Enterprise Micro 6.0, 6.1, 6.2
- Windows Server 2022, 2025
- Oracle Linux 7, 8
- SLED 15 SP3
- Windows 10 / 11
Refer to the SUSE Multi-Linux Support documentation at https://documentation.suse.com/liberty for the list of available combinations and supported releases. In other combinations, L2 support is provided, but fixes are available only if feasible. SUSE fully supports the host OS (hypervisor); guest OS issues need to be supported by the respective OS vendor. If an issue fix involves both the host and guest environments, the customer needs to approach both SUSE and the guest VM OS vendor.
All guest operating systems are supported both fully virtualized and paravirtualized. The exception is Windows systems, which are only supported fully virtualized (but they can use PV drivers: https://www.suse.com/products/vmdriverpack/), and OES operating systems, which are supported only paravirtualized.
All guest operating systems are supported both in 32-bit and 64-bit environments, unless stated otherwise.
4.5.1 Availability of paravirtualized drivers #
To improve the performance of the guest operating system, paravirtualized drivers are provided when available. Although they are not required, it is strongly recommended to use them.
The paravirtualized drivers are available as follows:
- Red Hat
Available since Red Hat Enterprise Linux 5.4. Starting from Red Hat Enterprise Linux 7.2, Red Hat removed the PV drivers.
- Windows
SUSE has developed Virtio-based drivers for Windows, which are available in the Virtual Machine Driver Pack (VMDP). For more information, see https://www.suse.com/products/vmdriverpack/.
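To verify that a Linux guest is actually using the paravirtualized drivers, check for loaded virtio modules from within the guest (a minimal check):
> lsmod | grep virtio   # expect entries such as virtio_net, virtio_blk or virtio_scsi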
4.6 Supported VM migration scenarios #
SUSE Linux Enterprise Server supports migrating a virtual machine from one physical host to another.
4.6.1 Offline migration scenarios #
SUSE supports offline migration (powering off a guest VM, then moving it to a host running a different SLE product), for example from SLE 12 to SLE 15 SPX. The following host operating system combinations are fully supported (L3) for migrating guests from one host to another; a sample procedure is sketched after the table:
| Target SLES host | 12 SP3 | 12 SP4 | 12 SP5 | 15 GA | 15 SP1 | 15 SP2 | 15 SP3 | 15 SP4 | 15 SP5 | 15 SP6 |
|---|---|---|---|---|---|---|---|---|---|---|
| Source SLES host | ||||||||||
| 12 SP3 | ✓ | ✓ | ✓ | ✓ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| 12 SP4 | ❌ | ✓ | ✓ | ✓1 | ✓ | ❌ | ❌ | ❌ | ❌ | ❌ |
| 12 SP5 | ❌ | ❌ | ✓ | ❌ | ✓ | ✓ | ❌ | ❌ | ❌ | ❌ |
| 15 GA | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ | ✓ | ❌ | ❌ | ❌ |
| 15 SP1 | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ | ✓ | ❌ | ❌ | ❌ |
| 15 SP2 | ❌ | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ | ✓ | ❌ | ❌ |
| 15 SP3 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ | ✓ | ✓ |
| 15 SP4 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ | ✓ |
| 15 SP5 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✓ | ✓ |
| 15 SP6 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✓ |
| ✓ | Fully compatible and fully supported |
| ✓1 | Supported for KVM hypervisor only |
| ❌ | Not supported |
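An offline migration essentially copies the guest's definition and disk image to the target host (a minimal sketch; the domain name vm1, the host name target-host, and the image path are placeholders):
> virsh shutdown vm1            # power off the guest
> virsh dumpxml vm1 > vm1.xml   # export the guest definition
> scp vm1.xml target-host:/tmp/
> scp /var/lib/libvirt/images/vm1.qcow2 target-host:/var/lib/libvirt/images/
> ssh target-host "virsh define /tmp/vm1.xml && virsh start vm1"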
4.6.2 Live migration scenarios #
This section lists the support status of live migration scenarios when running virtualized on top of SLES. The following host operating system combinations are fully supported (L3, according to the respective product life cycle).
SUSE always supports live migration of virtual machines between hosts running SLES with successive service pack numbers. For example, from SLES 16 to 16.1.
SUSE strives to support live migration of virtual machines from a host running a service pack under LTSS to a host running a newer service pack, within the same major version of SUSE Linux Enterprise Server. SUSE only performs minimal testing of LTSS-to-newer migration scenarios and recommends thorough on-site testing before attempting to migrate critical virtual machines. A sample invocation is shown after the table below.
SLES 15 SP6 includes kernel patches and tooling to enable Intel TDX Confidential Computing technology in the product. As this technology is not yet fully ready for a production environment, it is provided as a technology preview.
| Target SLES host | 15 SP7 | 16 |
|---|---|---|
| Source SLES host | ||
| 15 SP7 | ✓ | ❌ |
| 16 | ❌ | ✓2 |
| ✓ | Fully compatible and fully supported |
| ✓2 | When available |
| ❌ | Not supported |
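A typical live migration between two KVM hosts managed by libvirt looks like this (a sketch; vm1 and target-host are placeholders):
> virsh migrate --live --persistent vm1 qemu+ssh://target-host/system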
4.7 Feature support #
Nested virtualization allows you to run a virtual machine inside another VM while still using hardware acceleration from the host. It has low performance and adds complexity to debugging. Nested virtualization is normally used for testing purposes. In SUSE Linux Enterprise Server, nested virtualization is a technology preview. It is only provided for testing and is not supported. Bugs can be reported, but they are treated with low priority. Any attempt to live migrate or to save or restore VMs in the presence of nested virtualization is also explicitly unsupported.
Post-copy is a live migration method intended to get the VM running on the destination host as soon as possible, with the VM's RAM transferred gradually in the background as needed. Under certain conditions, this can be an optimization compared to the traditional pre-copy method. However, it comes with a major drawback: an error occurring during the migration (especially a network failure) can cause the whole VM RAM contents to be lost. Therefore, we recommend using only pre-copy in production; post-copy can be used for testing and experimentation in cases where losing the VM state is not a major concern.
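For experimentation, libvirt exposes post-copy through additional options to the same migrate command (a sketch; vm1 and target-host are placeholders):
> virsh migrate --live --postcopy vm1 qemu+ssh://target-host/system
> virsh migrate-postcopy vm1   # run from a second terminal to switch the ongoing migration to post-copy mode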
4.7.1 Guest feature support #
Hotplugging of virtual network and virtual block devices, and resizing, shrinking and restoring dynamic virtual memory are supported in KVM only when PV drivers are used (VMDP).
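For example, with the PV drivers present, devices and memory can be adjusted at runtime (a sketch; the domain name vm1 and the image path are placeholders):
> virsh attach-disk vm1 /var/lib/libvirt/images/extra.qcow2 vdb --driver qemu --subdriver qcow2 --live   # hotplug a virtual disk
> virsh setmem vm1 2097152 --live   # resize memory to 2 GiB (value in KiB)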
For machines that support Intel FlexMigration, CPU-ID masking and faulting allow for more flexibility in cross-CPU migration.
For KVM, a detailed description of supported limits, features, recommended settings and scenarios, and other useful information is maintained in kvm-supported.txt. This file is part of the KVM package and can be found in /usr/share/doc/packages/qemu-kvm.
| Features | KVM FV guest |
|---|---|
| Virtual network and virtual block device hotplugging | ✓ |
| Virtual CPU hotplug | ❌ |
| Virtual CPU overcommit | ✓ |
| Dynamic virtual memory resize | ✓ |
| VM save and restore | ✓ |
| VM live migration | ✓ |
| VM snapshot | ✓ |
| Advanced debugging with GDB | ✓ |
| Memory ballooning | ❌ |
| PCI pass-through | ✓ |
| AMD SEV and SEV-SNP | ✓ [2] |
| ✓ | Fully compatible and fully supported |
| ❌ | Not supported |
| [1] | NetWare guests are excluded. |
| [2] | See https://documentation.suse.com/sles/html/SLES-amd-sev/article-amd-sev.html. |
5 For more information #
For further steps in virtualization, refer to the following sources:
6 Legal Notice #
Copyright © 2006–2025 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see https://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
