New technology often requires changes in all areas of an organization's overall infrastructure. Virtualization is no exception.
By understanding the storage-related needs of virtual machines, storage administrators can help their virtual environments scale and keep pace with demand. While some of the requirements for a virtualized environment are new, many of them involve the same storage best practices that are used for physical machines. When designing a storage infrastructure for virtual machines, it's important to measure performance statistics and to consider storage space and performance.
While administrators often focus on CPU and memory constraints, storage-related performance is also a common bottleneck in a virtualized environment. In some ways, virtual machines can be managed like physical ones. After all, each virtual machine runs its own operating system, applications and services.
But other considerations must be taken into account when designing a storage infrastructure. By understanding the unique needs of virtual machines, storage managers can build a reliable, scalable data center infrastructure to support them.

Analyzing disk performance requirements
For many types of applications, the primary consideration around which the storage infrastructure is designed is I/O operations per second (IOPS). IOPS measures the number of read and write operations performed, but this statistic alone does not capture every characteristic of the storage requirements. Further considerations related to storage performance include the type of disk I/O activity.
For example, since virtual disks that are stored on network-based storage arrays must support disk activity for guest operating systems, the average I/O request size tends to be small. In addition, I/O requests are frequent and often random in nature. Paging can also create a lot of traffic on memory-constrained host servers. Other considerations will be workload-specific. For example, when designing the storage infrastructure it's a good idea to measure the percentage of read vs. write operations. This information will be useful when designing RAID configurations, as a low percentage of writes might indicate that overhead related to calculating and writing parity data will be minimal.
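As a rough illustration of measuring the read/write mix, cumulative per-device counters (such as the reads-completed and writes-completed fields Linux exposes in /proc/diskstats) can be sampled over an interval. The helper below is a minimal sketch with illustrative counter values, not a production monitor.

```python
# Sketch: estimate IOPS and the read/write mix from two samples of
# cumulative (reads_completed, writes_completed) counters, such as
# those in Linux's /proc/diskstats. Values here are illustrative.

def io_profile(before, after, interval_s):
    """Return (total IOPS, read percentage) between two counter samples.

    before/after: (reads_completed, writes_completed) cumulative counts.
    interval_s:   seconds elapsed between the two samples.
    """
    reads = after[0] - before[0]
    writes = after[1] - before[1]
    total = reads + writes
    iops = total / interval_s
    read_pct = 100.0 * reads / total if total else 0.0
    return iops, read_pct

# Example: 9,000 reads and 1,000 writes over a 10-second window
iops, read_pct = io_profile((120_000, 45_000), (129_000, 46_000), 10)
print(f"{iops:.0f} IOPS, {read_pct:.0f}% reads")  # 1000 IOPS, 90% reads
```

A workload profile like the 90% reads above would suggest that parity-calculation overhead on writes will be a minor factor in RAID design.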
Now multiply all these statistics by the number of virtual machines (VMs) being supported on a single storage device, and you are faced with a real potential for large traffic jams. The solution? Optimize the storage solution for small, non-sequential I/O operations. Most importantly, distribute VMs based on their levels and types of disk utilization. Performance monitoring can help generate the information you need.
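One simple way to distribute VMs by disk utilization is a greedy heuristic: place the heaviest VMs first, each on the currently least-loaded device. The sketch below assumes hypothetical VM names and IOPS figures taken from performance monitoring.

```python
# Sketch: greedy placement of VMs across storage devices by measured
# IOPS, so that no single device absorbs all the heavy hitters.
# VM names and IOPS figures are hypothetical.

def place_vms(vm_iops, device_count):
    """Assign each VM to the currently least-loaded device (greedy)."""
    loads = [0] * device_count   # running IOPS total per device
    placement = {}               # VM name -> device index
    # Placing the heaviest VMs first gives the heuristic its best shot.
    for vm, iops in sorted(vm_iops.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))
        placement[vm] = target
        loads[target] += iops
    return placement, loads

vms = {"db01": 800, "web01": 150, "web02": 140, "mail01": 400, "file01": 300}
placement, loads = place_vms(vms, 2)
print(placement)
print(loads)  # [940, 850] -- the two heaviest VMs end up on different devices
```

A real placement decision would also weigh read/write mix, sequential vs. random patterns, and capacity, but the same balancing idea applies.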
Network-based storage approaches
Many IT infrastructures use a combination of NAS, SAN and iSCSI-based storage to support their physical servers. These methods can still be used for hosting virtual machines, since most virtualization platforms provide support for them. For example, SAN- or iSCSI-based volumes that are attached to a physical host server can be used to store virtual machine configuration files, virtual hard disks and related data.
Note, however, that by default, the storage is attached to the host and not to the guest VM. Storage managers should keep track of which VMs reside on which physical volumes for backup and management purposes.
In addition to providing storage at the host level, guest operating systems (depending on their capabilities) can take advantage of NAS- and iSCSI-based storage. With this approach, VMs connect directly to network-based storage. The potential drawback is that guest operating systems can be highly sensitive to latency, and even relatively small delays can lead to guest operating system crashes or file system corruption.
Leveraging storage functions to improve availability and performance
With virtualization now allowing organizations to place multiple mission-critical workloads on the same servers, these companies are using storage functions to improve reliability, availability and performance. Implementing RAID-based striping across arrays of many disks can significantly improve performance. The array's block size should be matched to the most common size of I/O operations. However, more disks means more chances for failures. So features such as multiple parity drives and hot standby drives are a must.
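The trade-off between more spindles, parity overhead and hot spares can be sized with simple arithmetic. The sketch below assumes RAID 6 semantics (two drives' worth of capacity reserved for parity); the drive counts and sizes are illustrative.

```python
# Sketch: usable capacity of a parity array with hot spares, assuming
# RAID 6 semantics (two drives' worth of capacity reserved for parity,
# surviving any two simultaneous drive failures). Figures are illustrative.

def raid6_usable_tb(total_drives, drive_tb, hot_spares=0):
    """Usable capacity in TB after parity and hot spares are set aside."""
    data_drives = total_drives - hot_spares - 2  # 2 drives' capacity to parity
    if data_drives < 1:
        raise ValueError("not enough drives for a RAID 6 set")
    return data_drives * drive_tb

# 12 x 4 TB drives with 1 hot spare: (12 - 1 - 2) * 4 = 36 TB usable
print(raid6_usable_tb(12, 4, hot_spares=1))  # 36
```

The cost of the extra protection is visible here: three of the twelve drives contribute no usable capacity, but the array tolerates two failures plus an automatic rebuild onto the spare.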
Fault tolerance can be implemented through the use of multi-pathing for storage connections. For NAS and iSCSI solutions, storage managers should look into having multiple physical network connections and implementing failover and load-balancing features by using network adapter teaming.
Finally, it's a good idea for host servers to have dedicated network connections to their storage arrays. While you can often get by with shared connections in low-utilization scenarios, the load placed by virtual machines can be significant and can increase latency.
Planning for backups
Storage administrators will need to back up many of their virtual machines. Apart from allocating the necessary storage space, they must develop a method for dealing with exclusively locked virtual disk files. There are two main approaches:
- Guest-level backups: In this approach, VMs are treated like physical machines. Generally, you would install backup agents within VMs, define backup sources and destinations, and then let them go to work. The benefit of this approach is that only important data is backed up, thereby reducing required storage space. However, your backup solution must be able to support all potential guest operating systems and versions. Furthermore, the complete recovery process can involve many steps, including reinstalling and reconfiguring the guest OS.
- Host-level backups: Virtual machines are conveniently packaged into a few important files. Generally, this includes the VM configuration file and virtual disks. You can simply copy these files to another location. The most compatible approach involves stopping or pausing the VM, copying the necessary files and then restarting the VM.
However, this can require downtime. Numerous first- and third-party solutions are able to back up VMs while they're "hot," thereby eliminating service interruptions. Regardless of the method used, replacing a failed or lost VM is easy: Simply restore the necessary files to the same or another host server and you should be ready to go.
The biggest drawback to host-level backups is in the area of storage requirements. Because each backup captures complete guest operating systems along with applications and data, you'll be allocating a great deal of space.
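The stop-copy-start approach above can be scripted. The sketch below builds the command sequence as a dry-run plan rather than executing it; "vmctl" and the file paths are hypothetical placeholders for whatever stop/start tooling and file layout your virtualization platform actually provides.

```python
# Sketch of a cold (stop/copy/start) host-level backup as a dry-run
# command plan. "vmctl" and the paths are hypothetical placeholders;
# substitute your platform's actual management commands.

def cold_backup_plan(vm_name, vm_files, dest_dir):
    """Return the ordered commands for a stop-copy-start backup."""
    plan = [["vmctl", "stop", vm_name]]          # quiesce the VM first
    for f in vm_files:
        plan.append(["cp", f, dest_dir])          # copy config + virtual disks
    plan.append(["vmctl", "start", vm_name])     # bring the VM back online
    return plan

plan = cold_backup_plan(
    "web01",
    ["/vms/web01/web01.cfg", "/vms/web01/disk0.vhd"],
    "/backup/web01/",
)
for cmd in plan:
    print(" ".join(cmd))
```

Generating a plan first makes it easy to review (or log) exactly what will happen during the downtime window before running the commands for real.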
Options such as the ability to perform snapshot-based backups can also be useful. However, storage administrators should thoroughly test the solution and should look for explicitly stated virtualization support from their vendors. Backups must be consistent to a point in time, and solutions that are not virtualization-aware might neglect to flush information stored in the guest OS's cache.
About the author: Anil Desai is the author of numerous technical books focusing on the Windows server platform, virtualization, Active Directory, SQL Server and IT management. His most recent books include The Rational Guide to Managing Microsoft Virtual Server and The Rational Guide to Scripting Microsoft Virtual Server. He has made dozens of conference presentations at national events and is also a contributor to technical magazines.