Beamforming is critical to initiatives by the U.S. Federal Communications Commission (FCC) to increase spectrum capacity and extend cellular service and coverage through combined satellite and terrestrial systems. The technology electronically steers data streams to and from a satellite via a combination of an antenna array on the satellite and very sophisticated, ground-based computational engines.
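As a conceptual illustration only (not Mercury's implementation), digital beamforming applies a complex phase weight to each antenna element's sample stream and sums the results, steering the array's sensitivity toward a chosen direction. A minimal sketch for a hypothetical uniform linear array, with element count, spacing, and steering angle chosen purely for illustration:

```python
import numpy as np

def steering_weights(n_elements, spacing_wl, theta_rad):
    """Phase weights that steer a uniform linear array toward angle theta.

    spacing_wl is the element spacing in wavelengths (e.g. 0.5).
    """
    k = 2 * np.pi * spacing_wl * np.sin(theta_rad)
    return np.exp(-1j * k * np.arange(n_elements))

def beamform(element_streams, weights):
    """Weighted sum across antenna elements (rows) for each time sample."""
    return weights @ element_streams

# Hypothetical scenario: 8-element array, tone arriving from 20 degrees.
n, samples = 8, 64
theta = np.deg2rad(20.0)
t = np.arange(samples)
# Each element sees the same tone with a direction-dependent phase offset.
phase = 2 * np.pi * 0.5 * np.sin(theta) * np.arange(n)
streams = np.exp(1j * (0.1 * t[None, :] + phase[:, None]))

out = beamform(streams, steering_weights(n, 0.5, theta))
# When steered at the source, coherent gain equals the element count.
assert np.allclose(np.abs(out), n)
```

Steering a different beam is just a change of weights, which is why an FPGA array can retarget hundreds of beams without any mechanical motion.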
Mercury Computer Systems secured an $8.6 million contract to develop a ground-based satellite beamforming platform in which much of the computational burden is carried on the ground and the results are communicated to the satellite. Providing these services requires upwards of 15 TeraOPS of computational performance to support the steering of hundreds of beams in the satellite. The standards-based communications platform is designed around multiple carrier-grade, field-programmable gate array (FPGA)-based compute blades in an AdvancedTCA (ATCA) chassis. In its full configuration, the system interconnects more than 100 latest-generation FPGAs with one-half terabit of streaming I/O, delivering one of the highest-performing signal-processing platforms in existence.
Mercury's customer, who is developing the ground-based beamformer, wanted to leverage the high-performance capabilities of ATCA technology. A series of industry-standard specifications for next-generation, carrier-grade communications equipment, ATCA let the customer take advantage of a broad and growing ecosystem, with processor blades and chassis, for example, available from a number of vendors, and FPGA compute blades available from Mercury. While a proprietary system could have handled the computing requirements, designers would not have been able to select from a variety of suppliers for the chassis, host processors, hard drives, and other support functions. Although some COTS system architectures could have handled the requirements, they would not have been suitable for reasons of scalability, modularity, and economics: Mercury would have had to customize those infrastructures substantially, which was not feasible from a cost perspective.
Modularity is another reason ATCA made sense for a solution that must expand as system requirements grow. ATCA accommodates small, hot-swappable AdvancedMC (AMC) modules on the front panel; here, AMCs serve as host processors and hard drives. Rear transition modules (RTMs) plug into the back of the chassis and route the antenna data streams into and out of the system. The system also uses custom-built ATCA blades for the FPGAs that perform the computing. Together, these options give system designers a number of ways to expand the system and make it more cost-effective.
In addition, the ATCA Intelligent Platform Management Interface (IPMI) infrastructure can be leveraged for management, upgrades, health monitoring, and alarm reporting, a significant advantage compared to competing standards, particularly for telecommunications applications. All of these tasks can be done speaking the same language, although there are different "dialects" within IPMI that require attention to detail during system integration.
FPGA: Evolving Computing
The two major components of the beamformer, the Analog Conversion Unit (ACU) and the Beamformer Computational Unit (BCU), form the central part of the 14-slot system (Figure 1). Based on Mercury's analysis of the customer's requirements and the suggested system components, designers determined that maximizing satellite receive power with beam-shaping, which yields more antenna gain and less interference while leveraging existing low-power wireless devices, required:
- 300 Gbps of continuous I/O capacity in each direction. This implies 600 Gbps of intra-system, bidirectional capacity. This translates to 25 Gbps in each direction per FPGA board with 12 boards in each shelf.
- 15 TeraOPS of continuous computing per shelf. The beamforming is accomplished using either 25 SX55 FPGAs at 200 MHz or 12 SX55 FPGAs at 400 MHz.
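The budgets above can be cross-checked with simple arithmetic, using only the figures quoted in this article:

```python
# Per-shelf I/O budget: 300 Gbps each direction across 12 FPGA boards.
boards_per_shelf = 12
io_each_direction_gbps = 300
assert 2 * io_each_direction_gbps == 600            # bidirectional Gbps
assert io_each_direction_gbps / boards_per_shelf == 25.0  # Gbps per board

# Why either FPGA option meets the same 15 TeraOPS target: the aggregate
# number of FPGA clock cycles per second is nearly identical in both.
agg_200 = 25 * 200e6   # 25 SX55 FPGAs at 200 MHz -> 5.0e9 FPGA-cycles/s
agg_400 = 12 * 400e6   # 12 SX55 FPGAs at 400 MHz -> 4.8e9 FPGA-cycles/s
print(agg_200, agg_400)
```

At a fixed number of parallel operations per device per cycle, the two configurations therefore deliver essentially the same throughput; the choice between them is a clock-rate versus device-count trade-off.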
The ACU comprises 12 Analog Conversion Engines (ACEs) and two host processor modules. Each host processor module is a Gigabit Ethernet-based switch card with a Pentium M processor and a hard drive plugged into AMC sites. The system is hosted from one slot, with a backup in the other. The BCU consists of 12 Beamformer Conversion Engines (BCEs), which serve as the BCU's compute blades, and two host processors. Figure 2 shows a BCE block diagram and how traffic flows in through the rear transition module (RTM) before being routed out to the FX60s.
As shown in Figure 1, connectivity to the satellite and between the ACU and BCU occurs via a fiber-optic rear transition module (FOM). A rear transition connector on the FOM brings the fibers in from the antenna, which is communicating to the satellite. Following conversion from analog to digital in the ACU, traffic reaches the BCU for the actual beamforming.
Each BCE has 10 FPGAs. In the past, customers might have used ASICs, but advances in FPGA technology have made FPGAs viable for high-end compute demands. An FPGA-based solution shortens time to market and gives the customer the ability to continue tuning applications after the system is deployed.
Designing a system with more than 100 FPGAs in one 14-slot chassis (Figure 3) meant pushing past the ATCA 200 W per-slot thermal limit. Mercury worked closely with its chassis partner to achieve more than 200 W per slot. Fewer FPGAs per board would have eased the cooling challenge, but would have required additional chassis. Early in architecting the solution, Mercury worked closely with the customer to compress five separate chassis into two, which in turn demanded more cooling per slot.
Mercury developed FPGA Communications Infrastructure Firmware specifically for this beamforming application, making it possible to switch among the 100 or so FPGAs in real time with very low latency, so voice quality is maintained. The communications infrastructure is customized for beamforming applications, enabling reconfigurable segmentation and re-assembly of I/O streams in time and space.
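Mercury's firmware itself is proprietary, but the segmentation-and-reassembly idea can be illustrated in a few lines: a continuous I/O stream is split into fixed-size segments distributed round-robin across compute engines (space) and tagged with sequence numbers so that results returning in any order can be reassembled into the original stream (time). A hypothetical sketch, with segment sizes and engine counts chosen only for illustration:

```python
from itertools import cycle

def segment(stream, seg_len, n_engines):
    """Split a stream into (engine, seq, chunk) tuples, round-robin across engines."""
    engines = cycle(range(n_engines))
    return [(next(engines), seq, stream[i:i + seg_len])
            for seq, i in enumerate(range(0, len(stream), seg_len))]

def reassemble(tagged_segments):
    """Restore the original stream order using the sequence tags."""
    ordered = sorted(tagged_segments, key=lambda t: t[1])
    return [x for _engine, _seq, chunk in ordered for x in chunk]

samples = list(range(20))
segs = segment(samples, seg_len=4, n_engines=3)
# Segments may come back from the engines in any order; reassembly
# recovers the original sequence from the tags.
segs.reverse()
assert reassemble(segs) == samples
```

In the real system the "engines" are FPGAs and the tagging and routing happen in firmware at wire speed, but the invariant is the same: order is carried in metadata, not in arrival time.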
The system has advanced platform management software for remote operation, administration, and management of FPGA applications. The goal was to make applications highly available and highly reliable, and to improve the serviceability and visibility of field-deployed applications. Carrier-grade Linux, deployed on processors throughout the system, makes up the system's Linux Support Package.
While the system addresses a telecommunications application, Mercury discovered during the development process that the ATCA architecture is suitable for a number of other applications. For example, beamforming for a base station is quite similar to beamforming for a radar system, as both involve deploying massive numbers of FPGAs. Mercury is extending the ATCA infrastructure to other high-end computing applications and markets.
This article was written by Greg Tiedemann, Director of Business Development and Systems Engineering for Mercury Computer Systems' Communications Computing Segment in Chelmsford, MA. For more information, contact Mr. Tiedemann at