Linux has been prevalent in the enterprise data center for many years, providing the base operating system for LAMP stacks, Web servers, proxies, firewalls, load balancers and many other workloads. With ease of use and documentation improving, many Linux distributions have seen a rise in adoption over the past decade or so. Somewhere along that timeline, we also introduced virtualization into our data centers, and with that come a few things to keep in mind when running Linux in a virtual machine.
Logical volume management
Logical volume management (LVM) is a technology included in many recent Linux distributions that gives administrators a number of disk and partition management capabilities. Some of its striping features -- extending or striping data across multiple disks -- may be less relevant in the virtualization world, since users typically store data on the same storage area network or data store. That said, LVM provides other useful functionality as well. By enabling LVM, administrators gain the ability to perform online file system extensions -- growing partitions and file systems on the fly while keeping them online and accessible. Also, depending on strict compliance requirements, LVM allows us to perform volume-based snapshots for backup and recovery purposes without invoking the vSphere-based functionality.
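If you do go the LVM route, the online grow is straightforward once the virtual disk has been extended in vSphere. Here is a minimal sketch, assuming a hypothetical layout where /dev/sdb is the disk that was grown and /dev/vg0/data is the logical volume -- your device, volume group and volume names will differ:
echo 1 > /sys/class/block/sdb/device/rescan   # rescan the resized virtual disk (device name is an assumption)
pvresize /dev/sdb                             # let LVM claim the new physical volume space
lvextend --resizefs -L +10G /dev/vg0/data     # grow the logical volume and its file system while mounted
The first command makes the guest see the resized disk, pvresize exposes the new space to LVM, and lvextend grows both the volume and the file system on it without taking anything offline.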
My advice is to partition your VM with LVM if you have a strict availability policy on your workload and take advantage of the online resizing functionality. If you do not require a high amount of uptime or do not plan on running separate partitions within your Linux installation, then leave LVM disabled, as the complexity will far outweigh the benefits.
Partition options
A default installation of Linux usually prompts the user to simply use one partition for all the files. This is OK in some instances, but as you tweak and improve your VM's security and performance, having separate partitions for items such as /tmp, /var, /home and /usr can start to make sense -- especially if you want each partition to have different mount options. Mount options are applied to partitions by specifying them on the corresponding line within the /etc/fstab file, as shown below:
UUID=0aef28b9-3d11-4ab4-a0d4-d53d7b4d3aa4 /tmp ext4 defaults,noexec 1 2
If we take the example of a Web server, one of the most common use cases for Linux in a virtual machine, we can quickly see how some of the "default" mounting options end up hurting our security and performance initiatives.
Noatime/atime/relatime: These are mount options that dictate how the timestamps on the files contained within the partition are handled. On older Linux distributions, 'atime' was the default, meaning the OS would write a timestamp to the file's metadata every time the file was read (yes, simply a read invokes this). In terms of a Web server that is serving up files to the world all day long, you can imagine the overhead of this process. By specifying 'noatime' on the partition housing your Web server's data, you alleviate the server of this overhead by not updating access times at all. The default option on newer distributions is 'relatime,' which brings the best of both worlds by only updating the access time when the modification time happens to be newer.
Noexec/exec: This disables or enables the execution of binary files on a given partition. In terms of our Web server example, it makes perfect sense to mount a /tmp partition with 'noexec'. In fact, many hardening guides suggest using this option to improve security.
Use caution when changing access time parameters: some applications, such as mail servers, require full 'atime' behavior. In the Web server example, as long as access and security guidelines allow it, mount your Web server data with 'noatime.' In terms of 'noexec,' use this option wisely, as many automated installers and packages extract to /tmp and execute from there. It is something that can easily be turned on and off, but at the very least, I'd specify 'noexec' for /tmp.
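To tie the two options together, hypothetical /etc/fstab entries for the Web server example might look like the following -- the UUIDs and mount points are placeholders for your own values:
UUID=<web-data-uuid> /var/www ext4 defaults,noatime 1 2
UUID=<tmp-uuid> /tmp ext4 defaults,noatime,noexec,nosuid 1 2
You can also flip an option on a live system without rebooting -- for example, mount -o remount,exec /tmp before running an installer that needs to execute from /tmp, and mount -o remount,noexec /tmp afterward.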
VMXNET3 and PVSCSI
Utilizing the VMXNET3 network adapter and the paravirtualized SCSI disk adapter has been the recommendation inside a VM for quite some time now. On a Windows-based VM, we can simply specify these and the drivers are installed automatically with VMware Tools. Linux presents a few challenges around utilizing this hardware. First, newer versions of Linux distributions quite often come with their own driver for the VMXNET3 adapter and will use it by default, even if VMware Tools is installed.
Older Linux distributions may contain an outdated version of the VMXNET3 driver, which might not give you the full feature set included in the VMware Tools version. VMware's KB2020567 outlines how to enable certain features within the VMXNET3 driver. If you want to install the VMXNET3 driver that ships with VMware Tools, you can specify the following option during the VMware Tools install:
./vmware-install.pl --clobber-kernel-modules=vmxnet3
The paravirtualized SCSI adapter is a great way to gain a little extra throughput at a lower CPU cost and is usually recommended as the adapter of choice. Make sure to check the supported OS list before making this choice to ensure your kernel or distribution is supported by the paravirtual SCSI adapter.
I advise admins to use VMXNET3 and PVSCSI, if possible. If you're using an older kernel, install the VMware Tools VMXNET3 version. If you're using a newer kernel, use the native Linux driver inside your distribution.
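To verify which driver a guest is actually using, a couple of standard commands will tell you -- eth0 is an assumption here; substitute your interface name:
ethtool -i eth0    # shows the driver name and version bound to the interface
modinfo vmxnet3    # shows the vmxnet3 module the kernel has available
The first command shows what the interface is running right now; the second shows the version of the module on disk, which helps you decide whether the native driver or the VMware Tools version is the better fit.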
Memory management
The Linux OS is constantly moving memory pages from physical memory into its local swap partition. This is by design and, in fact, VMware does the same thing with its own memory management functionality. But Linux memory management behaves somewhat differently and will move memory pages even if physical memory -- which is now virtual memory -- is available. To cut down on swap activity inside a Linux VM, we can adjust the 'swappiness' value. A higher value means pages are swapped out more aggressively, while a lower value means memory is left alone more often. To adjust this value, simply add the line "vm.swappiness=##" inside /etc/sysctl.conf and reboot -- replacing ## with your desired value.
I prefer to change this value to a lower number than the default of 60. There is no point in having both the OS and vSphere managing your memory swap. Again, it really depends on the application, but I normally set this to a value between 15 and 20.
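As a quick sketch, using 15 purely as an example value, you can apply the change immediately and make it persistent in one go:
sysctl -w vm.swappiness=15                    # takes effect on the running system
echo "vm.swappiness=15" >> /etc/sysctl.conf   # persists the setting across reboots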
I/O scheduler
Just as ESXi does a great job managing memory, it also has its own special sauce as it pertains to scheduling I/O and writes to disk. Again, the Linux OS duplicates some of this functionality within itself. As of kernel 2.6, most distributions have been utilizing Completely Fair Queuing (CFQ) as the default I/O scheduler. The others available are NOOP, Anticipatory and Deadline. VMware explains just how to change this value and why you might want to, as there is no sense in scheduling I/O twice. In short, the default I/O scheduler used within your Linux kernel can be switched by appending an elevator switch to your grub kernel entry.
There's no need to schedule within the OS, and then schedule once more in the hypervisor. I suggest using the NOOP I/O scheduler, as it does nothing to optimize the disk I/O, allowing vSphere to manage all of it.
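To see what a disk is currently using and try NOOP on a live system, you can poke at sysfs -- /dev/sda is an assumption; check each of your disks:
cat /sys/block/sda/queue/scheduler           # lists available schedulers, with the active one in brackets
echo noop > /sys/block/sda/queue/scheduler   # switches the scheduler at runtime for this disk only
The runtime change does not survive a reboot; append elevator=noop to the kernel line in your grub configuration to make it permanent.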
Remove unused hardware and disable unnecessary services
How many times have you used that virtual floppy disk inside of your VMs in the past year? How about that internal PC speaker? If you don't plan on using these devices, the simple answer is to blacklist them from loading within the Linux kernel. The commands to blacklist the floppy driver are as follows:
echo "blacklist floppy" | tee /etc/modprobe.d/blacklist-floppy.conf
rmmod floppy
update-initramfs -u
Also, there is no need to stop at unused hardware. While you're at it, you might as well disable any virtual consoles that you probably aren't using. This can be done by commenting out tty lines within /etc/inittab, as shown below.
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
#3:23:respawn:/sbin/getty 38400 tty3
#4:23:respawn:/sbin/getty 38400 tty4
#5:23:respawn:/sbin/getty 38400 tty5
#6:23:respawn:/sbin/getty 38400 tty6
I suggest that you get rid of the floppy disk. Keep in mind that you will also have to remove the hardware from the VM's configuration, as well as disable it within the VM's BIOS. Some other modules you can safely blacklist include mptctl (RAID configuration monitoring), pcspkr (the PC speaker), snd_pcm, snd_page_alloc, snd_timer, snd and soundcore (all having to do with sound), coretemp (CPU temperature monitoring), and parport and parport_pc (parallel ports).
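The same pattern used above for the floppy module works for any of these. For example, to blacklist the PC speaker (the .conf file name here is just a convention):
echo "blacklist pcspkr" | tee /etc/modprobe.d/blacklist-pcspkr.conf
rmmod pcspkr
update-initramfs -u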
As always, be sure you aren't actually utilizing any of these services before blacklisting them. Also, I always leave a couple of virtual consoles enabled in the event that I might use them, but six is a bit overkill.
These are just a few items to keep in mind when running Linux virtualized in a VMware environment. In terms of performance gains, the age-old saying of "it depends" applies to each and every one of them. You may see a performance increase from some of these tweaks and degradation from others. As always, it's good practice to test any of these changes in a lab environment before implementing them in production. Technology is constantly changing and, with it, so are the best practices. So, if you have any other tips or tricks, leave them in the comments below.