Virtualization: Virtualization is the ability to make resources that reside on disparate physical entities available through virtual connections to those resources. When resources are virtualized, performance and availability can exceed what any single physical entity offers. For example, data may be backed up in multiple data centers, yet to an end-user the backed-up data resides on a single "virtual disk." Similarly, an end-user might want to run a compute-intensive application on a supercomputer; the network, in conjunction with a parallel message-passing paradigm, may support this application over several hundred server blades, many of which sit in different locations. This is known as server virtualization, or processor virtualization. Topologically, virtualization can be one-to-many, many-to-one, or, in some cases, many-to-many.
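To make the one-to-many storage example concrete, the minimal Python sketch below shows a single "virtual disk" that replicates every write across several backing stores and serves reads from whichever replica holds the block. All names (BackendStore, VirtualDisk, "dc-east") are purely illustrative assumptions, not a real storage API:

    class BackendStore:
        """Stand-in for a block store in one data center."""
        def __init__(self, name):
            self.name = name
            self.blocks = {}

        def put(self, block_id, data):
            self.blocks[block_id] = data

        def get(self, block_id):
            return self.blocks.get(block_id)

    class VirtualDisk:
        """Presents several backends to the end-user as one disk."""
        def __init__(self, backends):
            self.backends = backends

        def write(self, block_id, data):
            # Synchronous replication: every backend receives the block.
            for b in self.backends:
                b.put(block_id, data)

        def read(self, block_id):
            # Any replica can satisfy the read.
            for b in self.backends:
                data = b.get(block_id)
                if data is not None:
                    return data
            raise KeyError(block_id)

    disk = VirtualDisk([BackendStore("dc-east"), BackendStore("dc-west")])
    disk.write(0, b"backup me")
    assert disk.read(0) == b"backup me"

The end-user sees only the VirtualDisk interface; where the replicas live is the network's concern.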

In the one-to-many paradigm, a single customer is connected to multiple storage systems or servers virtualized across a network, and it is the network's responsibility to provide this virtualization of resources. The intra-data-center version of this problem has been widely studied and can be solved efficiently for static traffic demands. As demands become dynamic, however, the problem grows enormously complex, and known solutions are non-trivial and suboptimal. Achieving hard SLAs (service-level agreements) is difficult in such situations, and often a heuristic approach that uses load balancers to create virtual servers is proposed as an approximate solution (a sketch of such a policy appears below). The virtualization can occur at the network, data, or transport (physical) layer. Communication between data centers, both storage and processing centers, also requires significant provisioning and control-plane effort: synchronous backup between data centers and distributed processing across server blades in different locations are essential to the success of this approach.

In the many-to-one paradigm, multiple end-users are connected to a single network-attached entity, such as a data center. The engineering problem lies in segregating and creating boundaries within the data center to meet the service-level agreement of each customer. A second problem arises when customers are not static and move between the membership domains of data centers.
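The load-balancing heuristic mentioned above can be sketched in a few lines of Python. The least-loaded dispatch policy, the server names, and the connection bookkeeping are assumptions for illustration; production load balancers add health checks, session affinity, and SLA-aware weighting:

    class LoadBalancer:
        """Makes several physical servers look like one virtual server."""
        def __init__(self, servers):
            # Track the number of active connections per server.
            self.active = {s: 0 for s in servers}

        def dispatch(self):
            # Least-loaded policy: pick the server with the fewest
            # active connections.
            server = min(self.active, key=self.active.get)
            self.active[server] += 1
            return server

        def release(self, server):
            self.active[server] -= 1

    lb = LoadBalancer(["blade-1", "blade-2", "blade-3"])
    assignments = [lb.dispatch() for _ in range(6)]
    print(assignments)   # requests spread evenly across the blades

Such a scheme approximates, but cannot guarantee, hard SLAs, which is precisely why it is described above as a heuristic.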

The many-to-many virtualization case is quite complicated, with both a static and a dynamic membership option. In the static option, a large number of end-users consume services provided by multiple network-attached entities. The dynamic option involves a fluid set of end-users and/or network-attached devices. Many-to-many virtualization is thus significantly more complex and demands extensive network control.
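One technique commonly used to tame dynamic membership (our illustrative choice here, not something prescribed above) is consistent hashing: end-users are placed on a hash ring of network-attached entities, so that when an entity joins or leaves, only the users mapped to it are reassigned. A minimal sketch:

    import bisect, hashlib

    def _h(key):
        # Map a string onto the hash ring.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self):
            self._keys, self._nodes = [], []

        def add(self, node):
            k = _h(node)
            i = bisect.bisect(self._keys, k)
            self._keys.insert(i, k)
            self._nodes.insert(i, node)

        def remove(self, node):
            i = self._nodes.index(node)
            del self._keys[i]
            del self._nodes[i]

        def lookup(self, user):
            # The first node clockwise from the user's hash serves the user.
            i = bisect.bisect(self._keys, _h(user)) % len(self._nodes)
            return self._nodes[i]

    ring = Ring()
    for dc in ["dc-1", "dc-2", "dc-3"]:
        ring.add(dc)
    before = {u: ring.lookup(u) for u in ["alice", "bob", "carol"]}
    ring.remove("dc-2")                  # an entity leaves the pool
    after = {u: ring.lookup(u) for u in before}
    moved = [u for u in before if before[u] != after[u]]
    print(moved)  # only users that were on dc-2 get remapped

That only a small fraction of users is remapped on each membership change hints at why the fully dynamic case still requires so much control-plane machinery.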

Consolidation of Resources: By definition, consolidation of resources implies the ability of a network to facilitate virtualization. At a certain level of abstraction, consolidation and virtualization appear paradoxical, yet we cannot do one without the other. Consolidation means optimizing network resources and reducing footprint: if virtualization tells us how to implement one-to-many, then consolidation helps us compute how many is, indeed, many! Hence, consolidation is a micro-engineering problem compared to virtualization. At the network layer, the routing/switching fabric is well entrenched and provides consolidated services, complete with service differentiation, multi-layer support, and resiliency. Consolidation at the data layer is around the corner: startups claim to consolidate resources using customized ASIC designs that yield consolidated data-layer switches. Considerable design effort must go into layer-2 switches for cloud computing, and the proposed Carrier Ethernet solutions may go a long way toward making those switches more application-aware, better managed (not just in the telecom sense, but also at the application level), and cost-efficient.
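The "how many is many" question can be made concrete with a back-of-the-envelope sizing calculation. The sketch below packs per-tenant peak demands onto boxes of fixed capacity using first-fit-decreasing bin packing; the demand figures, the 50-unit capacity, and the 0.8 utilization ceiling are all illustrative assumptions:

    def consolidate(demands, capacity, max_util=0.8):
        """Return the boxes produced by first-fit-decreasing packing."""
        budget = capacity * max_util     # leave headroom for bursts
        bins = []                        # each bin is one physical box
        for d in sorted(demands, reverse=True):
            for b in bins:
                if sum(b) + d <= budget:
                    b.append(d)          # fits in an existing box
                    break
            else:
                bins.append([d])         # open a new box
        return bins

    demands = [12, 7, 30, 22, 9, 15, 4]   # e.g., Gb/s per tenant
    boxes = consolidate(demands, capacity=50)
    print(len(boxes), boxes)              # how many is "many" here

First-fit-decreasing is a classic approximation; exact bin packing is NP-hard, which mirrors the point above that consolidation is a micro-engineering problem in its own right.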