The task-specific nature of an embedded system application typically defines a narrow scope of performance requirements. But the range of options for achieving those requirements is broad — from multicore processors and rugged single board computers (SBCs) to I/O devices and the bus systems that tie everything together. And the choices to be made are critical in their impact on cost, on performance efficiency in compute-intensive operations, and on the ability to function reliably in hot, cold, dusty or wet environments.
The following look at processor, board, software and interface capabilities built to the CompactPCI (CPCI) and CPCI Express (CPCIe) standards offers some insights into the opportunities and tradeoffs — in terms of processing power, functionality, speed and compatibility — as well as the ability to withstand demanding processes in extreme application environments.
Building on the CPCI Standard
Although their roots lie in the telecom industry, CPCI and the subsequent CPCIe standard have found their way into a diverse set of applications spanning the industrial, military and medical sectors.
From the beginning, the compact footprint of 3U CPCI boards offered obvious space-saving advantages for more streamlined control system packaging in space-constrained embedded system applications. Today's availability of powerful SBCs built on the high-speed serial links of CPCIe, in that same compact 3U form factor, enables users to further relieve data bottlenecks, even within complex and demanding communications, graphics or digital/analog signal processing applications.
A significant advantage of the CPCIe bus is that it uses full-duplex, point-to-point high-speed serial link connections instead of forcing all data through a common parallel bus connection. For rugged performance, the 3U CPCI form factor offers a robust solution with the excellent shock and vibration characteristics of the Eurocard design and a high-density pin-and-socket connector that provides good mechanical stability.
Best of all, the backward compatibility of CPCIe with earlier PCI standards simplifies upgrades within legacy systems, while hybrid backplane solutions provide a cost-efficient migration path without having to scrap or modify legacy CPCI peripheral cards that still function acceptably through a CPCI bus.
Shifts in Design Strategy
Enhancing performance and reliability in mission-critical embedded system applications means dealing with a variety of operating conditions and environments. Some strategies involve protective steps designed to mitigate the physical impact of heat, shock, vibration or other environmental conditions. Others involve preventive strategies that seek to avoid problems in the first place.
But there's more to designing "rugged" applications than just dealing with a hostile environment. It's also a matter of accommodating more complex processing demands without compromising system stability, under the ever-present pressure to improve cost-effective productivity while minimizing expense and power consumption.
SBC designs integrating multi-core processors and the CPCIe bus interface in a robust package are one strategy for solving the challenges of embedded systems designed for demanding industrial environments. Multi-core processing delivers ample performance without the complications of excessive power consumption, heat build-up or increased latency. One such solution, the F18 3U CompactPCI/Express Core 2 Duo SBC from MEN Micro Inc., incorporates a high-performance Intel® Core™ 2 Duo processor running at 2.6 GHz to deliver 24,178 MIPS and 16,525 MFLOPS in industry-standard testing.
The reduced power consumption, improved performance per watt and ruggedized board construction of such a configuration offer advantages for a variety of demanding applications. These range from mission-critical industrial control and automation in the hostile environments of steel mills or manufacturing plants to compute-intensive digital image processing in CAD, video processing, modeling and rendering functions for data-intensive scientific, medical and communications applications.
Power Consumption: As the clock speed of a processor increases, power dissipation becomes a problem. Faster clock speeds typically require higher input voltages, and since each of a processor's many transistors leaks a small amount of current, the cumulative effect becomes problematic. Multi-core processors, using two or more cores and more cache, deliver comparable or better performance at lower power than leading-edge CPUs running at the highest available clock speeds. This improves performance per watt, reduces heat generation and provides for longer life in battery-powered mobile applications.
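The trade-off described above follows from the classic CMOS dynamic-power approximation, P ≈ C·V²·f. A minimal sketch, using purely illustrative (hypothetical) capacitance, voltage and frequency figures rather than data for any specific processor, shows why two slower cores can draw less power than one fast one:

```python
def dynamic_power(c_eff, v, f):
    # Classic CMOS dynamic-power approximation: P ~ C_eff * V^2 * f
    return c_eff * v ** 2 * f

# Hypothetical figures: one core at 3.2 GHz and 1.3 V versus
# two cores at 1.6 GHz and 1.1 V (a lower clock permits a lower voltage).
single = dynamic_power(1.0, 1.3, 3.2e9)
dual = 2 * dynamic_power(1.0, 1.1, 1.6e9)

print(dual < single)  # the dual-core configuration draws less dynamic power
```

Because voltage enters the formula squared, even a modest voltage reduction at the lower clock outweighs the cost of doubling the core count.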
Heat: Overheating issues are not limited to ambient temperature. Thermal management in embedded systems must deal with the heat generated by the operation of the system itself, as well as with the ambient temperature of the application environment. Design considerations such as heat sinks and thermal watchdogs that supervise processor and board temperature are two protective strategies. Using multi-core processors is, in addition, a preventive strategy that keeps processing throughput high while minimizing power draw and the associated heat generation.
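The thermal-watchdog idea can be sketched as a simple supervisory loop. The thresholds and the sensor/throttle/shutdown hooks below are hypothetical stand-ins, not any real board-support API:

```python
import time

THROTTLE_C = 85.0  # hypothetical thresholds for this sketch
SHUTDOWN_C = 95.0

def supervise(read_temp_c, throttle, shutdown, poll_s=1.0, max_polls=10):
    """Minimal thermal-watchdog loop; the three callbacks are
    caller-supplied hooks (assumed interfaces, not a real driver)."""
    for _ in range(max_polls):
        t = read_temp_c()
        if t >= SHUTDOWN_C:
            shutdown()      # last-resort protective action
            return "shutdown"
        if t >= THROTTLE_C:
            throttle()      # reduce clock or load before it gets worse
        time.sleep(poll_s)
    return "ok"

# Simulated sensor ramping up past both thresholds
readings = iter([70.0, 88.0, 96.0])
events = []
result = supervise(lambda: next(readings),
                   throttle=lambda: events.append("throttle"),
                   shutdown=lambda: events.append("shutdown"),
                   poll_s=0.0)
print(result, events)  # shutdown ['throttle', 'shutdown']
```

A real implementation would typically run this in firmware or a high-priority task and read an on-board sensor, but the protect-then-shut-down escalation is the same.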
Latency: Multi-core technology provides greater processor density in a thermally restricted chassis and, in some cases, reduces latency. With multiple cores, it is possible to dedicate one or more of them to time-critical tasks and reduce latency by reducing the queuing of high-priority tasks.
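Dedicating a core to a time-critical task is commonly done with CPU affinity. A minimal sketch using the Linux-only scheduler interface (a no-op on other platforms):

```python
import os

def pin_to_core(core_id):
    """Pin the calling process to a single core so a time-critical
    task is not queued behind other work. Linux-only; returns the
    resulting affinity set, or None where the call is unavailable."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_id})  # 0 = the current process
        return os.sched_getaffinity(0)
    return None

print(pin_to_core(0))  # e.g. {0} on Linux
```

In an RTOS the same effect is usually achieved through task-to-core binding in the scheduler configuration rather than a system call.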
Using advanced smart cache with up to 4 MB of shared L2 cache — as used to deliver the performance results of the previously cited MEN Micro SBC and Intel processor — significantly reduces the latency of access to frequently used data. This improves performance and efficiency by increasing the probability that each execution core of a multi-core processor can access data from a higher-performance, more efficient cache subsystem.
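The benefit of keeping frequently used data in a faster store can be illustrated in software terms with a memoization cache — an analogy for the hit/miss behavior described above, not the hardware mechanism itself:

```python
from functools import lru_cache

calls = 0  # counts slow "main memory" accesses

@lru_cache(maxsize=None)
def fetch(key):
    global calls
    calls += 1          # only cache misses reach the slow path
    return key * 2

for _ in range(3):
    fetch(7)            # first call misses; the repeats hit the cache

print(calls)  # 1
```

As with a shared L2 cache, repeated accesses to the same data are served from the fast layer, so the expensive path is taken only once.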
Also, choosing storage devices from OEMs who already use hyper-threading technology enables the control software to migrate to multi-core applications more easily. That's because hyper-threading enables a single core to execute two software threads in parallel, utilizing processor resources that would otherwise sit idle.
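Software written around independent threads maps naturally onto hyper-threaded or multi-core hardware. A minimal two-thread sketch (note that in CPython the GIL limits true parallelism for CPU-bound work, so this illustrates the programming model rather than the hardware speed-up):

```python
import threading

results = {}

def worker(name, data):
    # Each software thread handles an independent chunk of work
    results[name] = sum(data)

t1 = threading.Thread(target=worker, args=("a", range(100)))
t2 = threading.Thread(target=worker, args=("b", range(100, 200)))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["a"] + results["b"])  # 19900
```

Code already decomposed this way needs little rework when the threads are later scheduled onto separate cores or hardware threads.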
Other Integration Considerations
Board Design: Regardless of the processor architecture chosen, there are several aspects to consider when specifying an embedded computing solution. Confirm the minimum cooling airflow requirements needed to complement the heat-sink design concepts referenced earlier. In mobile or stationary applications that may be subjected to shock and vibration, specifying boards with soldered components can provide a higher degree of reliability according to applicable DIN, EN or IEC industry standards. And in applications that may operate outside of a controlled environment, where dust, moisture or condensation could be concerns, specifying a conformal coating provides an added level of protection against the elements.
Peripheral Integration: When specifying an initial SBC configuration, it's important to ensure that boards provide sufficient mass storage, graphics processing and I/O options. Boards pre-configured for parallel and serial IDE (PATA/SATA), multiple USB 2.0 ports, Ethernet channels and various video and high-definition audio ports offer ample options for immediate use and future expansion. And boards that combine 3U CPCI ruggedness with high-bandwidth CPCIe capability complement increased bus performance with the enhanced data transfer rates and throughput required by complex communication, powerful visualization and signal-processing applications.
Software: How productive a customized embedded system is in a demanding application environment has as much to do with the efficiency of its software as with the robustness of its hardware. An SBC built around a multi-core design provides software flexibility that contributes to productivity as well as to reliability.
Virtualization technology allows a single physical machine to function as multiple "virtual" machines. A layer of Virtual Machine Monitor (VMM) system software manages the execution of multiple operating systems without incurring significant emulation costs. The multiple operating systems share the hardware, with full transparency to the operating systems and application software (Figure B).
In a multi-core processing application, each of those separate operating systems can be dedicated to a specific processor core. For example, one core can support a real-time operating system (RTOS) while another is dedicated to a general-purpose operating system (GPOS). From a reliability perspective, there is then no need to interrupt RTOS operations if a GUI running on the GPOS crashes and forces that operating system to reboot.
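That isolation benefit can be sketched with ordinary operating-system processes, where a crash in one does not disturb the other — an analogy for core-partitioned RTOS/GPOS operation, not virtualization itself, and the two workloads below are hypothetical stand-ins:

```python
import subprocess
import sys

# Stand-in for the GPOS-side GUI: this process crashes.
gui = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('GUI crash')"],
    capture_output=True, text=True)

# Stand-in for the RTOS-side control task: it keeps running regardless.
ctrl = subprocess.run(
    [sys.executable, "-c", "print('control alive')"],
    capture_output=True, text=True)

print(gui.returncode != 0)   # True: the GUI process failed
print(ctrl.stdout.strip())   # control alive
```

In a virtualized system the VMM provides this same fault containment between guest operating systems, with the added ability to partition hardware resources such as cores between them.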
Virtualization technology increases system stability, scalability and serviceability. It also allows legacy software to run more efficiently. Applications can run as multitasking, distributed-processing or threaded applications depending on system status and needs (Figure C).
Multi-core processing, virtualization and hyper-threading functionality must be supported by a compatible CPU, chip set and BIOS. Various compiler, analyzer and cluster tools are available to support these applications.
Putting It All Together
Whether an embedded system project calls for upgrading an existing application or developing an entirely new one, today's building blocks of COTS boards, peripherals and software make it possible to satisfy multiple needs while avoiding potential problems. A multi-core architecture, implemented in an appropriately rugged SBC format with combined CPCI/CPCIe compatibility, holds significant promise for improving reliability, productivity and performance-per-watt efficiency. And a growing range of product and support options offers ever greater opportunities for reliability in rugged applications, from mission-critical real-time operations to compute-intensive tasks such as multimedia processing.