Simplifying Network Virtualization
In today’s world of data center consolidation, the move to server virtualization has happened more quickly than most would have imagined. The primary benefit of virtualization is a reduction in the number of physical servers, yielding direct cost savings in server hardware, space, power, and cooling. Server virtualization also has a direct impact on the underlying network: virtual machine mobility requires extending Layer 2 VLANs between racks within a data center, or between geographically separate data centers. These moves typically require network configuration changes, and in many cases the traffic takes a non-optimal path between data centers.
As enterprises begin to build their own private cloud computing environments, network virtualization is a key component of overall success. To realize the benefits of cloud computing, such as anywhere, anytime application access and the ability to add resources and services transparently, a virtualized data center backbone becomes essential. Cloud computing will stress the network in new ways, and keeping the network infrastructure ahead of that demand requires a new paradigm for data center design.
Network virtualization is required to support the growing needs of the data center: cloud computing, workload mobility (e.g., virtual machine mobility), tighter control of traffic flows, efficient use of bandwidth, and a reduction in the amount of network equipment needed. The key is to virtualize the network without adding complexity. That is the goal of both Shortest Path Bridging (SPB) and Transparent Interconnection of Lots of Links (TRILL): a more robust Layer 2 topology that eliminates Spanning Tree while supporting both multipath forwarding and localized failure resolution. Both of these emerging technologies promise to do just that.
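The multipath advantage over Spanning Tree can be illustrated with a small sketch. The four-switch topology, node names, and `equal_cost_paths` helper below are hypothetical, and plain hop count stands in for the IS-IS link-state computation that SPB and TRILL actually use; the point is only to show why equal-cost shortest paths let traffic use links that Spanning Tree would block.

```python
from collections import deque

# Hypothetical four-switch topology: A connects to D through
# two equally long paths (via B and via C). Uniform link costs,
# so hop count serves as the shortest-path metric.
TOPOLOGY = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def equal_cost_paths(graph, src, dst):
    """Enumerate every shortest (equal-cost) path from src to dst."""
    best = None          # length of the shortest path found so far
    paths = []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        # BFS explores paths in length order; skip anything longer
        # than an already-found shortest path.
        if best is not None and len(path) > best:
            continue
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:  # avoid loops
                queue.append(path + [nxt])
    return paths

print(equal_cost_paths(TOPOLOGY, "A", "D"))
# Two equal-cost paths: A-B-D and A-C-D.
```

Under Spanning Tree, one of the two A–D paths would be blocked to prevent loops, leaving half the bandwidth idle; a link-state fabric computes both shortest paths and can forward traffic across them simultaneously.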