Network I/O latency is an important measurement, especially for latency-sensitive applications in industries such as finance and trading, healthcare, transportation, and telecom (in particular, applications using Voice over IP). The end-to-end latency requirements of these applications range from microseconds to milliseconds, and some of them, such as Voice over IP applications, are more sensitive to jitter than to end-to-end latency. An important measurement to consider for vSphere performance in this scope is therefore the latency and jitter added by virtualization.
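To make the jitter metric concrete, the sketch below computes a smoothed interarrival jitter estimate from a series of per-packet delay samples, following the estimator defined in RFC 3550 (the RTP specification), which is widely used for Voice over IP. The function name and the choice of microsecond units are illustrative, not from the original text.

```python
def rfc3550_jitter(delays_us):
    """Smoothed interarrival jitter estimate (RFC 3550, Section 6.4.1).

    delays_us: per-packet transit-delay samples in microseconds.
    Returns the running jitter estimate after the last sample.
    """
    jitter = 0.0
    for prev, cur in zip(delays_us, delays_us[1:]):
        d = abs(cur - prev)
        # Exponential smoothing with gain 1/16, as specified by RFC 3550.
        jitter += (d - jitter) / 16.0
    return jitter
```

A perfectly steady stream (constant delay) yields zero jitter; a stream whose delay varies from packet to packet yields a positive estimate, even if its average latency is identical.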
Packets sent from or received by a virtual machine experience some increase in network I/O latency compared to a bare-metal (native) environment, because they must traverse an extra layer of the virtualization stack. This additional latency can come from several sources. The most common ones include:
- Emulation Overhead
- Packet Processing
- Scheduling
- Virtual Interrupt Coalescing
- Network Bandwidth Contention
- Halt-Wakeup Physical CPU
- Halt-Wakeup Virtual CPU
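The combined effect of these sources shows up in end-to-end round-trip measurements. The sketch below is not VMware's test harness; it is a minimal, self-contained round-trip-time microbenchmark, assuming a UDP echo over loopback, of the kind you could run inside a virtual machine and on a native host to compare median and tail latency.

```python
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    """Echo every datagram back to its sender until the socket closes."""
    while True:
        try:
            data, addr = sock.recvfrom(64)
        except OSError:
            return  # Socket closed; stop the server thread.
        sock.sendto(data, addr)

def measure_rtt_us(samples=200):
    """Round-trip a 1-byte UDP datagram over loopback `samples` times.

    Returns (median, 99th-percentile) RTT in microseconds.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # OS-assigned port
    threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.connect(server.getsockname())
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        client.send(b"x")
        client.recv(64)
        rtts.append((time.perf_counter() - t0) * 1e6)
    client.close()
    server.close()
    rtts.sort()
    return rtts[len(rtts) // 2], rtts[int(len(rtts) * 0.99)]
```

Reporting a tail percentile alongside the median matters here: scheduling delays and halt-wakeup costs tend to inflate the tail of the latency distribution more than its center.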
VMware designed and ran several performance experiments to show under what conditions each source of latency becomes dominant and how it affects end-to-end latency. The results can help you make design choices that allow your environment to meet its end-to-end latency requirements.
Network I/O Latency on VMware vSphere 5