Embedded system designers often find themselves trapped by CPU design choices they made years earlier, since switching costs can be astronomical. Hardware development often involves processor-specific interface chips and board design. Software switching costs can be even more onerous, since CPU architectures drive the purchase of development tools and the requirement for customized software. However, a couple of industry trends have opened up the CPU platform, and system designers are gaining the ability to mix and match CPU suppliers or even change CPU instruction sets to optimize products across a wider range of applications.
As always, semiconductor integration drives the technology trends, as more system functions are pulled into fewer chips. While many high-volume markets might end up with a single-chip SOC, most embedded designs have unique requirements and are better served by a general-purpose embedded processor that interfaces to hardware components specific to each system design. For the most flexibility in these moderate-volume products, many system designers have turned to the Computer-on-Module (COM) approach, which allows a single carrier board design to serve a range of products and enables rapid adoption of new CPU technology through standards-based daughter cards. The trend toward higher levels of integration has commoditized the advanced peripherals for high-speed interconnect technology, allowing the newest COM standards to include multiple channels of PCI Express, Gigabit Ethernet, USB 3.0, and DisplayPort.
With the broad adoption of new standards, such as COM Express Rev. 2.0, embedded system designers have now been decoupled from the CPU-specific legacy interfaces that were rooted in their PC heritage. The new interfaces have broad industry support and share technical characteristics for high-level, packet-based interconnect with a layer of hardware abstraction above the physical and link-layer bus architectures of the past. It doesn't matter what sort of CPU is processing the data, since these new interfaces connect at a data-transfer level and are supported by a myriad of CPU types. Most of the COM standards still include support for general-purpose I/O and allow some PC-type functionality.
While this article focuses on the hardware aspects of CPU selection, the software trend toward abstraction is obviously also underway. Almost all embedded operating systems offer support for both x86 (Intel, AMD and VIA) and ARM-based CPUs from a variety of vendors. Application source code has become much more portable as cross-platform development tools and libraries have become the norm. Even binary portability has become easier as more applications adopt technologies like Java and HTML5. The trend toward software abstraction may even offer opportunities for other CPU architectures (MIPS, PPC, Tensilica, etc.) to stay competitive in markets dominated by x86 and ARM.
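As a minimal sketch of that source-level portability (the file name and toolchain invocations below are illustrative, not taken from the article), the same C program builds unchanged for x86 or ARM targets; the predefined compiler macros are the only architecture-specific lines, and even they are optional:

/* hello_arch.c -- the same source builds for x86 or ARM targets. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) || defined(__i386__)
    const char *arch = "x86";          /* set by x86 compilers        */
#elif defined(__aarch64__) || defined(__arm__)
    const char *arch = "ARM";          /* set by ARM compilers        */
#else
    const char *arch = "another ISA";  /* MIPS, PPC, etc. also build  */
#endif
    printf("Hello from an embedded %s build\n", arch);
    return 0;
}

Compiling with "gcc hello_arch.c" on an x86 host or with a typical ARM cross compiler such as "arm-linux-gnueabihf-gcc hello_arch.c" yields the same behavior on either CPU family, which is exactly the kind of portability the cross-platform toolchains provide.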
Open Standards and Software Abstraction
What are the technical implications of these trends? To answer this, look no further than Kontron’s announcement of broad support for ARM processors. As the market leader in COM Express, Kontron AG has primarily supported CPUs from its “strategic partner” Intel. However, the industry trends have opened an opportunity for Kontron to supply COM boards incorporating ARM-based CPUs from Texas Instruments. Kontron has also announced broader support for COM Express boards with AMD CPUs, and it will compete with dozens of other COM board suppliers offering standards-based support for CPUs from all three x86 CPU vendors. Many of these COM board suppliers already deliver ARM-based products using the Qseven COM standard to take advantage of a legacy-free architecture for mobile applications. While Intel still covers the widest range of features and performance, the growth of these COM standards allows system designers to tailor each product to incorporate the CPU module that delivers the best optimization of performance, power and cost. Intel will need to accelerate innovation to keep up with these new competitive threats.
With standards-based modules and competitive pressure from AMD, VIA and ARM, Intel has less ability to restrict features to higher-priced CPUs. In the past, an Intel-based embedded systems company would need to move up to a higher-priced Intel CPU (and chipset) to access features like faster memory, 64-bit processing, virtualization, advanced power management, hardware encryption, etc. Most of these features are designed into the CPU and then turned off to allow price separation. As long as most embedded systems companies remain based on Intel, the pricing strategy affects everyone equally. However, the industry trends are opening up every COM Express and Qseven socket to competition, and other CPU vendors may offer high-end features without charging the same premium as Intel.
Industry Trends in Embedded Computing
VIA Technologies has supplied x86 CPUs to embedded markets for over a decade, and the company’s Nano architecture is well positioned to take advantage of the industry trends that reduce CPU switching costs, allowing VIA to compete for COM Express and Qseven sockets.
For VIA, the technical advantage stems from the ability to deliver high-performance features that are only available in high-end Intel CPUs. As an example, Intel uses the Atom architecture to address price-sensitive embedded markets, while reserving the Intel Core and Xeon processor families for higher-priced applications. However, the available Atom CPUs are currently limited to 800 MHz memory, so VIA’s Nano-based systems have 33% more memory bandwidth using 1066 MHz memory. The Atom comparison is further strained by the difference in microarchitecture, since Atom’s dual-issue, in-order architecture puts it at a disadvantage against VIA Nano’s 3-issue, out-of-order design. While Intel has kept VIA Nano at bay by using dual-core and hyper-threading, the newest VIA CPUs offer both dual- and quad-core versions.
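The memory-bandwidth claim is easy to check with back-of-the-envelope arithmetic. The short program below is an illustrative sketch that assumes a single-channel, 64-bit memory interface for both platforms and uses the 800 MHz and 1066 MHz memory speeds cited above:

/* bandwidth.c -- peak DDR bandwidth for the memory speeds cited above,
 * assuming a single-channel, 64-bit (8-byte) memory interface. */
#include <stdio.h>

int main(void)
{
    const double bus_bytes = 8.0;        /* 64-bit memory bus          */
    const double atom_mts  = 800.0e6;    /* 800 MT/s (Atom platform)   */
    const double nano_mts  = 1066.0e6;   /* 1066 MT/s (VIA Nano)       */

    double atom_bw = atom_mts * bus_bytes;   /* peak bytes per second  */
    double nano_bw = nano_mts * bus_bytes;

    printf("800 MHz memory:  %.1f GB/s peak\n", atom_bw / 1e9);
    printf("1066 MHz memory: %.1f GB/s peak\n", nano_bw / 1e9);
    printf("Advantage: %.0f%%\n", 100.0 * (nano_bw - atom_bw) / atom_bw);
    return 0;
}

The result, 6.4 GB/s versus roughly 8.5 GB/s, works out to the 33% advantage mentioned above; real-world throughput depends on the memory controller and workload, so the figure should be read as a theoretical peak.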
To find an Intel-based COM Express module that gets better performance than modules based on VIA’s Nano X2 or VIA QuadCore processors, a system designer would need to move up to Intel Core. However, the power consumption increases dramatically over Atom, unless the number of cores, clock rate, and memory speed are constrained — reducing the performance advantage over the VIA-based module.
Hardware encryption has become one of the high-end features that Intel reserves for its premium CPUs. Intel supports encryption through AES-NI (new instructions), but Intel’s embedded product list appears to only offer this feature for Core i5 and higher. From its earliest CPUs, VIA has promoted hardware encryption as a must-have feature and even had its “PadLock” implementation validated by the US National Institute of Standards and Technology (NIST). Hopefully, Intel’s support for hardware encryption will encourage more system designers to improve security features, since software-based techniques have been less secure and consumed too much power. Competitive pressure from VIA may help make hardware encryption a standard feature that Intel enables for all of its CPUs.
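Whether a given part actually exposes these engines can be checked in software. The fragment below is a rough sketch for x86 targets built with GCC or Clang: the AES-NI flag is the documented CPUID leaf 1, ECX bit 25, while the VIA PadLock probe uses the vendor-specific 0xC0000001 leaf and the bit positions shown should be treated as an assumption outside of VIA parts:

/* crypto_probe.c -- report hardware-crypto support via CPUID (GCC/Clang, x86). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Standard leaf 1: ECX bit 25 indicates Intel AES-NI. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("AES-NI:      %s\n", (ecx & (1u << 25)) ? "yes" : "no");

    /* Centaur/VIA extended range: leaf 0xC0000001, EDX bits 6 and 7,
     * reports the PadLock Advanced Cryptography Engine as present and
     * enabled (assumed bit layout; see lead-in note above). */
    if (__get_cpuid_max(0xC0000000, NULL) >= 0xC0000001) {
        __cpuid(0xC0000001, eax, ebx, ecx, edx);
        printf("PadLock ACE: %s\n",
               ((edx >> 6) & 3u) == 3u ? "enabled" : "no");
    } else {
        printf("PadLock ACE: not reported\n");
    }
    return 0;
}

An application can use a probe like this to fall back to a software cipher when neither engine is available, which is how most crypto libraries already handle the feature split across CPU vendors.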
Future Trends
If the industry trends of semiconductor integration and hardware/software abstraction continue, where does this lead? The integration trends obviously have an endpoint when all silicon content resides on a single chip. However, history has shown that every embedded market needs slightly different peripherals and interfaces. Only the highest-volume markets will get their own SOC, leaving most designs in the same state as today with a general-purpose, embedded CPU and application-specific hardware. While there may be some industry attempts to create a common package and pin-out for vendors of embedded CPU silicon, the COM approach offers technical advantages and should see even greater adoption. Even if the silicon content on a COM device is reduced to a single chip, the same semiconductor chip can be used for several different COM pin-outs (similar to the multiple connector types in COM Express).
For moderate-volume embedded applications, the extra flexibility for design re-use with a replaceable module is likely to continue as the driving factor that overcomes most advantages from soldering a CPU to the main board. Since the CPU will continue to be easily replaced, the switching costs should continue to drop as standard interfaces become widely adopted, and software abstraction continues. To avoid commoditization, the CPU vendors will face greater pressure to innovate new must-have features, while also trying to beat competitors on performance and power consumption. For embedded system designers, the trends are all positive, since CPU competition will spur industry growth and lead to opportunities to create entirely new types of products.
This article was written by J. Scott Gardner, President and Principal Analyst, Advantage Engineering LLC (Austin, TX). For more information, contact Mr. Gardner at