The desire to consolidate datacenter I/O has existed for several decades. It has been driven by I/O hardware providers who did not want to develop multiple versions of the same I/O card for different computer buses, and by end users who did not want to stock multiple cards or manage different fabrics with different tools. The problems that I/O consolidation needs to solve are:

Figure 1. Consolidated Ethernet and Fibre Channel Infrastructure

  • Standing up a server today costs more than the server itself, and the process is measured in weeks.
  • A lack of server I/O slots (and redundant host adapters) means that single points of failure exist at the host/network interface.
  • Most servers are over-provisioned with I/O resources, since they are provisioned for peak I/O loads rather than average loads.
  • Leaf switches have proliferated in the datacenter to consolidate physical server connections and reduce the demand for high-cost director ports.

Key Solution Components

To be effective, an I/O consolidation solution must transparently consolidate protocols in the rack, while at the same time virtualizing connections from the rack to the network, and consolidating traffic from servers to the network fabrics, as explained below.

Protocol Consolidation: This allows multiple server protocols to be carried on a single high-performance transport. Such a capability should:

  1. transport all server protocols (Ethernet, Fibre Channel, FCoE, InfiniBand, SAS/SATA, etc.);
  2. provide bandwidth far in excess of the demands of any single protocol; and
  3. be transparent to vendor-supplied I/O drivers.

When this capability is successfully implemented, it eliminates the single-point-of-failure problem caused by the lack of server I/O slots, without requiring proprietary solutions.

Connection Virtualization: This concept allows the network ports, addresses, and other resources presented to the server to be logically separated from those the network actually provides. This capability has several benefits:

Figure 2. Consolidated and Shared I/O Infrastructure: MR-IOV enabled I/O is shared among servers, and top-of-rack switches are moved to end of row.

  1. ports and addresses can be provisioned for servers without support from network and storage administrators;
  2. multiple physical ports can be aggregated into a single logical port; and
  3. changes in network technologies and topologies can be made without affecting the servers attached to them.
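To make the idea concrete, connection virtualization can be sketched as a thin mapping layer between a server-visible identity and the physical ports behind it. The sketch below is an illustrative model only; the class and field names are hypothetical, not an actual product API:

```python
# Illustrative model of connection virtualization: the server is provisioned
# against a stable logical identity, while the physical ports behind it
# can be swapped or aggregated without the server noticing.
class LogicalPort:
    def __init__(self, virtual_address, physical_ports):
        self.virtual_address = virtual_address      # identity presented to the server
        self.physical_ports = list(physical_ports)  # aggregated physical links

    def replace_physical(self, old, new):
        """Swap a hardware port without disturbing the server-visible address."""
        idx = self.physical_ports.index(old)
        self.physical_ports[idx] = new

# The server only ever sees the virtual address.
lp = LogicalPort("50:01:43:80:11:22:33:44", ["phys0", "phys1"])
lp.replace_physical("phys1", "phys7")   # e.g., a fabric upgrade
print(lp.virtual_address)               # unchanged: the server is unaffected
```

Because the server binds only to the virtual address, a network technology or topology change reduces to a `replace_physical` step that no server administrator needs to coordinate.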

Traffic Consolidation: This capability breaks the 1:1 link (or 1:2 link for redundancy) between server protocols and the number of I/O cards required to support them. This successfully addresses the over-provisioning issue that is common in I/O systems.
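A rough back-of-the-envelope calculation shows why breaking the 1:1 link reduces over-provisioning. All figures below are illustrative assumptions, not measurements from the article:

```python
# Assumed workload: 20 servers, each peaking at 8 Gb/s but averaging 2 Gb/s.
servers = 20
peak_per_server = 8      # Gb/s, assumed
avg_per_server = 2       # Gb/s, assumed
link_capacity = 10       # Gb/s per adapter port, assumed

# Dedicated I/O: every server must be provisioned for its own peak.
dedicated_ports = servers * -(-peak_per_server // link_capacity)  # ceil division

# Consolidated I/O: a shared pool is sized for the aggregate average plus
# headroom, since server peaks rarely coincide.
headroom = 1.5           # assumed burst factor
pool_bandwidth = servers * avg_per_server * headroom
shared_ports = -(-int(pool_bandwidth) // link_capacity)

print(dedicated_ports, shared_ports)  # 20 vs 6 ports under these assumptions
```

The exact savings depend on the workload mix, but the structural point holds: dedicated adapters scale with per-server peaks, while a consolidated pool scales with the aggregate average.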

I/O Consolidation Approaches

There are two possible places to address rack-level I/O consolidation in the datacenter:

  • Access Layer - The solution is implemented at the interface between the server and the network;
  • Network Layer - The solution is implemented within the fabric itself.

Cisco has positioned Fibre Channel over Ethernet (FCoE) as a successful network-layer approach to rack-level I/O consolidation. Cisco’s logic is that by unifying both Ethernet and Fibre Channel into a single protocol, one can reduce both the type and number of host adapters and switches required in the network.

While FCoE does reduce the types of switches and host adapters in a network, a recent Gartner study ("Myth: A Single FCoE Data Center Network = Fewer Ports, Less Complexity and Lower Costs," ID# G00174456, 11 March 2010) casts doubt on whether it would actually reduce the number of these devices in a network. Also, because FCoE does not address protocols other than Ethernet and Fibre Channel, there can still be single points of failure due to a lack of slots for InfiniBand, SAS, or other host adapters.

FCoE also does not reduce the time required to add a server to a network, since it doesn't provide connection virtualization; nor does it future-proof the network, since the FCoE link (10Gb/s) is no faster than existing Ethernet and only slightly faster than existing Fibre Channel (8Gb/s). These facts make FCoE (and other network-layer approaches) only partially successful as rack-level I/O consolidation solutions.

Access-Level I/O Consolidation

There are two primary implementations of access-level I/O consolidation that are either on the market today, or are entering the market soon:

  • InfiniBand Consolidation: Similar to FCoE, it consolidates Ethernet and Fibre Channel traffic over a network topology. Its greatest weakness is that today's implementations are limited to 20Gb/s InfiniBand (IB) transport. With only a 2X speed improvement over 10GbE, it has limited impact on the fabric port count per server and requires relatively high-cost IB host adapters. Offerings do not consolidate direct-attach storage protocols (SAS, SATA). Finally, InfiniBand I/O consolidation requires non-industry-standard drivers for its I/O devices, locking customers into the solution.
  • PCI Express Consolidation: PCIe I/O consolidation is the only solution that meets all of the requirements for an effective rack-level I/O solution: protocol consolidation using 40Gb/sec PCIe and vendor-standard drivers; connection virtualization with server personalities; and traffic consolidation across all server protocols (FC, FCoE, Ethernet, IB, SAS, and SATA). It solves all the issues that rack-level I/O consolidation needs to address: eliminating server-network single points of failure, reduction of the time to stand up servers, elimination of over-provisioning, and reduction in the number of leaf switches and host adapters.


I/O Virtualization Standards Support

I/O Virtualization (IOV) typically requires three elements: a host I/O device for transmitting an I/O protocol out of the server, a switch device that can recognize the protocol and map the I/O connection, and an end point I/O device or target that can accept the I/O connection from the server and translate the data.

IOV technology is differentiated from other client/server technologies such as Ethernet because it consolidates more than one or two protocols. It can be argued that there are three standards for IOV: Multi-Root I/O Virtualization (MR-IOV), Fibre Channel over Ethernet (FCoE), and InfiniBand (IB). Of these three, FCoE and IB use networking protocols to consolidate Ethernet and Fibre Channel over a single network transport. MR-IOV uses the native PCI Express protocol to transport any I/O over a PCIe switch fabric to a target device that would otherwise be physically installed inside a server. In addition, an MR-IOV enabled I/O device can accept connections from multiple physical servers connected to the switch fabric.

The MR-IOV Standard

The MR-IOV standard was completed by the PCI-SIG in May 2008 and provides the blueprint for I/O device vendors to enable multiple rack and blade servers to access a single, shared Multi-Root Aware (MRA) I/O card over PCI Express. As shown in Figure 2, the server blades do not contain PCIe endpoint devices but instead connect their root ports to an MRA switch. Top-of-rack fabric switches are replaced by an MRA switch which contains the server endpoint I/O devices.

Each MRA I/O device supports many Virtual Functions (VFs), which, to the server, appear as independent PCI Express devices. A VF can be thought of as one of many independent views of a single I/O adapter. The number of VFs per physical I/O device varies by vendor implementation, but can be 128 or higher, allowing an entire rack of servers to share a single I/O device rather than dedicating I/O to every server. The MRA PCIe switch controls the mapping of server connections to virtual adapters; although the connections are typically static, they can be reconfigured or remapped depending on application requirements.
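Conceptually, the MRA switch maintains a table assigning each server's root port to a Virtual Function of the shared device. The sketch below is an illustrative model of that bookkeeping, not the MR-IOV register interface; the 128-VF count and 40-server rack are assumed examples:

```python
# Illustrative model of an MRA switch mapping servers to Virtual Functions
# of one shared MR-IOV device. The standard leaves the VF count to the
# vendor; 128 is used here as an assumed example.
class MRASwitch:
    def __init__(self, num_vfs=128):
        self.free_vfs = list(range(num_vfs))
        self.mapping = {}                 # server root port -> VF number

    def attach(self, server):
        """Statically map a server's root port to the next free VF."""
        vf = self.free_vfs.pop(0)
        self.mapping[server] = vf
        return vf

    def remap(self, server):
        """Release and reassign a VF, e.g. when application needs change."""
        self.free_vfs.append(self.mapping.pop(server))
        return self.attach(server)

switch = MRASwitch()
for n in range(40):                       # an assumed rack of 40 servers
    switch.attach(f"server{n}")
print(len(switch.mapping), len(switch.free_vfs))  # 40 mapped, 88 VFs spare
```

The same table explains the "typically static but remappable" behavior described above: a remap is just releasing one VF entry and claiming another, with no change to the server's physical cabling.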

The MRA switch supports not only IOV-enabled adapters but also any other PCI Express device, including Single Root I/O Virtualization (SR-IOV) adapters used in virtual machine environments, non-IOV adapters such as GPUs and Flash storage devices, and PCIe bridges. Even though non-IOV adapters cannot be shared, they can still be accessed on a 1:1 basis by servers, with their connections remapped among servers as a pool of I/O resources.

In summary, an effective rack-level I/O consolidation approach must be implemented at the access layer. Cisco’s FCoE approach, while providing network consolidation, does not address most I/O consolidation needs. InfiniBand alternatives provide access-layer implementations, but also have significant shortcomings that do not address all of the issues in rack-level I/O consolidation. Only PCI Express I/O consolidation addresses all of these issues without proprietary solution tie-in.

This article was written by Mike Lance, Director of Product Marketing, NextIO (Austin, TX).