Virtual machines require CPUs, memory, storage, and network access, similar to physical machines. This recipe will show you how to set up a basic KVM environment for easy resource management through libvirt.
A storage pool is a virtual container limited by two factors: the maximum size allowed by qemu-kvm, and the amount of disk space available on the host. A storage pool may never exceed the size of the disk on the host.
For this recipe, you will need a volume of at least 2 GB mounted on /vm and access to an NFS server and export. We'll use NetworkManager to create a bridge, so ensure that you don't disable NetworkManager, and have bridge-utils installed.
Let's have a look into managing storage pools and networks.
In order to create storage pools, we need to provide the necessary details to libvirt so that it can create them. You can do this as follows:
Create a localfs storage pool using virsh on /vm, as follows:
~]# virsh pool-define-as --name localfs-vm --type dir --target /vm
Create the mount point for the NFS storage pool:
~]# mkdir -p /nfs/vm
Create an NFS storage pool using virsh, with its source on nfsserver:/export/vm, as follows:
~]# virsh pool-define-as --name nfs-vm --type netfs --source-host nfsserver --source-path /export/vm --target /nfs/vm
Make both storage pools start automatically at boot:
~]# virsh pool-autostart localfs-vm
~]# virsh pool-autostart nfs-vm
Start the storage pools:
~]# virsh pool-start localfs-vm
~]# virsh pool-start nfs-vm
~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 localfs-vm           active     yes
 nfs-vm               active     yes
At some point in time, you will need to know how much space you have left in your storage pool.
Get the information of the storage pool by executing the following:
~]# virsh pool-info --pool <pool name>
Name:           nfs-vm
UUID:           some UUID
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       499.99 GiB
Allocation:     307.33 GiB
Available:      192.66 GiB
As you can see, this command easily shows you its disk space allocation and availability.
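If you want to use these figures in a script, the Capacity and Allocation fields are easy to parse with awk. In this sketch the pool-info output is hard-coded in a here-document (taken from the run above) so it works without a hypervisor; on a real host, pipe `virsh pool-info --pool <pool name>` straight into the awk program instead.

```shell
# Stand-in for `virsh pool-info`, hard-coded so the sketch runs without
# libvirt; replace with a real virsh call on an actual host.
pool_info() {
cat <<'EOF'
Capacity:       499.99 GiB
Allocation:     307.33 GiB
Available:      192.66 GiB
EOF
}

# Compute the percentage of the pool that is currently allocated.
pool_info | awk '
  /^Capacity/   { cap = $2 }
  /^Allocation/ { alloc = $2 }
  END           { printf "%d%% used\n", alloc / cap * 100 }'
# prints: 61% used
```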
Be careful though: if you use a filesystem that supports sparse files, these numbers will most likely be incorrect, and you will have to calculate the actual sizes yourself.
To detect whether a file is sparse, run ls -lhs against the file. The -s option adds an extra (first) column showing the exact space that the file is occupying, as follows:
~]# ls -lhs myfile
121M -rw-------. 1 root root 30G Jun 10 10:27 myfile
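You can see this effect for yourself by creating a sparse file with truncate and comparing its apparent size to the blocks it actually occupies (the /tmp path and filename below are just examples):

```shell
# Create a 1 GiB sparse file; no real disk blocks are allocated yet.
truncate -s 1G /tmp/sparse-demo.img

# %s is the apparent size in bytes, %b the number of 512-byte blocks
# actually allocated; for a freshly created sparse file, %b is (close to)
# zero even though %s reports the full gigabyte.
stat -c 'apparent=%s bytes, allocated=%b blocks' /tmp/sparse-demo.img

# Clean up.
rm -f /tmp/sparse-demo.img
```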
Sometimes, storage is phased out, so it needs to be removed from the host.
You have to ensure that no guest is using volumes on the storage pool before proceeding, and you need to remove all the remaining volumes from the storage pool. Here's how to do this:
Remove all remaining volumes from the pool:
~]# virsh vol-delete --pool <pool name> --vol <volume name>
Stop the storage pool:
~]# virsh pool-destroy --pool <pool name>
Delete the storage pool:
~]# virsh pool-delete --pool <pool name>
Before creating the virtual networks, we need to build a bridge over our existing network interface. For the sake of convenience, this NIC will be called eth0. Ensure that you record your current network configuration, as we'll destroy it and recreate it on the bridge.
Unlike storage pools, networks must be defined through an XML configuration file; there is no command similar to pool-create-as for networks. Perform the following steps:
Create the bridge:
~]# nmcli connection add type bridge autoconnect yes con-name bridge-eth0 ifname bridge-eth0
Remove the existing connection for eth0:
~]# nmcli connection delete eth0
Configure the bridge with the IP address, gateway, and DNS servers that you recorded earlier:
~]# nmcli connection modify bridge-eth0 ipv4.addresses <ip address/cidr> ipv4.method manual
~]# nmcli connection modify bridge-eth0 ipv4.gateway <gateway ip address>
~]# nmcli connection modify bridge-eth0 ipv4.dns <dns servers>
Add eth0 as a slave to the bridge:
~]# nmcli connection add type bridge-slave autoconnect yes con-name slave-eth0 ifname eth0 master bridge-eth0
For starters, we'll take a look at how to create a NATed network similar to the one that is configured out of the box and called default:
Create the network XML configuration file /tmp/net-nat.xml, as follows:
<network>
  <name>NATted</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.0.2' end='192.168.0.254'/>
    </dhcp>
  </ip>
</network>
Define the network in KVM, as follows:
~]# virsh net-define /tmp/net-nat.xml
Now, let's create a bridged network that uses the network bound to this bridge, through the following steps:
Create the network XML configuration file /tmp/net-bridge-eth0.xml, with the following contents:
<network>
  <name>bridge-eth0</name>
  <forward mode="bridge" />
  <bridge name="bridge-eth0" />
</network>
Define the network in KVM, as follows:
~]# virsh net-define /tmp/net-bridge-eth0.xml
There's one more type of network that is worth mentioning: the isolated network. This network is only accessible to guests defined in this network as there is no connection to the "real" world.
Create the network XML configuration file /tmp/net-local.xml, with the following contents:
<network>
  <name>isolated</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <domain name='isolated'/>
</network>
Define the network in KVM, as follows:
~]# virsh net-define /tmp/net-local.xml
Creating networks in this way registers them with libvirt, but it does not activate them or make them start automatically at boot. You need to perform these additional steps for each network:
~]# virsh net-autostart <network name>
~]# virsh net-start <network name>
~]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 bridge-eth0          active     yes           yes
 default              inactive   no            yes
 isolated             active     yes           yes
 NATted               active     yes           yes
On some occasions, the networks are phased out; in this case, we need to remove the network from our setup.
Prior to executing this, you need to ensure that no guest is using the network that you want to remove. Perform the following steps to remove the networks:
~]# virsh net-destroy --network <network name>
~]# virsh net-undefine --network <network name>
As you can see, it's easy to create multiple storage pools using the pool-define-as command. Every type of storage pool needs more, or fewer, arguments. In the case of the NFS storage pool, we need to specify the NFS server and export; this is done by specifying --source-host and --source-path, respectively.
Creating networks is a bit more complex, as it requires you to create an XML configuration file. When you want a network connected transparently to your physical network, you can only use bridged networks, as it is impossible to bind a virtual network directly to a physical interface.
The storage backends created in this recipe are not the only options. Libvirt also supports the following backend pools:
Local storage pools are directly connected to the physical machine. They include local directories, disks, partitions, and LVM volume groups. Local storage pools are not suitable for enterprises as these do not support live migration.
Network storage pools include storage shared through standard protocols over a network. This is required when we migrate virtual machines between physical hosts. The supported network storage protocols are Fibre Channel-based LUNs, iSCSI, NFS, GFS2, and SCSI RDMA.
By defining the storage pools and networks in libvirt, you ensure the availability of the resources for your guest. If, for some reason, the resource is unavailable, the KVM will not attempt to start the guests that use these resources.
When checking out the man page for virsh (1), you will find commands similar to net-define and pool-define: net-create and pool-create (and pool-create-as). The net-create command, like pool-create and pool-create-as, creates transient (temporary) resources, which will be gone when libvirt is restarted. On the other hand, net-define and pool-define (as well as pool-define-as) create persistent (permanent) resources, which will still be there after you restart libvirt.
You can find out more on libvirt storage backend pools at https://libvirt.org/storage.html
More information on libvirt networking can be found at http://wiki.libvirt.org/page/Networking