Embedded systems and desktop PCs have had a love-hate relationship over the years. The PC has been the source of significant technological advances that have enabled embedded systems to evolve to their current levels of sophistication, using the faster processors and highly integrated functionality of the CPU cores available today. The PC world has also spun off I/O buses, both serial and parallel, that have enabled embedded systems designers to expand and configure their system I/O. On the other hand, the embedded industry has often been wary of adopting PC technology because of the short life cycles some PC technologies experience.
The many changing phases that the PC world passes through have been too cyclical for the stodgier, slower-to-evolve embedded market. But once in a while, just as it appears the PC market is moving on to the next technological whiz-bang idea, embedded developers realize what a gem the PC market has in one of its technologies and move to adopt it. Such is the case with USB.
USB, thought by some to have already been supplanted by the much faster and sexier PCIe bus, has been gaining considerable momentum, as evidenced by CPU manufacturers continuing to increase the number of USB ports in their chipsets. Furthermore, I/O vendors have become seduced by the beauty of its simplicity as embedded users begin to expand their understanding of how to design a system with USB. The staying power of this technology in embedded systems becomes obvious as one drills deeper into its capabilities.
Rethinking System Partitioning.
As USB makes its way into the embedded world, it has begun to shape how we think about system design. If you are familiar with the PC/104 world, you'll recall that the CPU was assigned the task of using IRQ interrupts over the PC/104 bus to manage I/O devices. The CPU received a signal from an I/O device that a control function needed to be serviced based on a given priority. In such a system, there would typically be several tasks the CPU was charged with managing, such as network communication, processing a data log file, writing to a disk drive, or maintaining a display screen.
Was it ever really a good idea to have the same CPU that handled data processing, network servicing, graphical displays, and many other tasks also be responsible for servicing an interrupt to shut down a conveyor belt moving gravel into a dump truck, or other critical equipment? That debate has gone on for years and still continues. Truthfully, the marriage of these two extremes — control and data processing — was the result of the availability of excess computing power. In those days, it was easier and cheaper to make advancements by simply pushing clock speeds to the extreme.
However, the world is changing. The excessive clock rates of the PC consume too much power and generate too much heat in an era where the word "Green" is starting to take precedence. But we cannot simply reduce clock speeds and compromise system performance. No, everything must perform better and faster than its predecessors. So what can we do?
Using serial buses to replace the ancient parallel data bus is an obvious answer. Interrupts over parallel buses are passé compared to the ease of implementing much faster serial interfaces such as USB. With USB 2.0 High Speed data rates at 480 Mbit/s and USB 3.0 SuperSpeed expected to enter the market at 4.8 Gbit/s, the speed gains over the old parallel buses are enormous. But what about heat and power consumption? Don't you still need a powerful CPU to handle all this USB activity, you ask? The solution comes from an ancient Latin saying that means "Divide and Conquer." With devices as simple as 8-bit microcontrollers starting to include USB ports, why not make use of this cheap, powerful tool?
The advancements toward a better embedded system design proposed by StackableUSB are inevitable and will be realized sooner or later. It is a concept we are familiar with in all aspects of our lives, but we have traditionally turned a blind eye to it in the computing world due to our hunger for fast and powerful CPUs.
Think of an embedded system as your favorite restaurant for a moment. At some point in time, someone had an idea to start a business selling food. To set his dreams in motion, this person would have carried out most, if not all, of the required tasks himself, with very little or no help except perhaps a hired server. This was the PC/104 world, and the CPU was the person who dreamt of starting a restaurant. It served all the requirements of the system with very little or no help from the outside world. The hired server is like an analog-to-digital converter, serving the system by bringing information into the core just as the server at the restaurant brings an order back to the kitchen.
After a few years, the restaurant wants to expand, serve more people, build a larger menu, and offer additional services it did not offer before. To do this, the person who originally started the restaurant hires an accountant, a sous chef, a floor manager, three more cooks, a decorator, and more wait staff. By hiring these people, the owner is now able to divide all the tasks required to run the restaurant (the system) among different people and focus his time on doing a better job on the more critical tasks, while spending less time and energy than before. In doing so, the owner has also created a more efficient system that allows the restaurant to operate more economically than before.
This is the same concept that StackableUSB applies to embedded computing. In an embedded system where USB is the primary bus, individual tasks can be micro-managed by providing a low-power MCU on the I/O side at very little to no additional cost. Consider a GPS module meant to attach to a stack. If it is equipped with an onboard MCU, all the data processing can be handled on the client side, and the host CPU only needs to read a few registers from the client to get the information it needs, freeing the host's system bus for more critical tasks than servicing and trying to make sense of raw GPS data. This concept can be applied to any device-side application, creating a perfect Host-Client system harmony. With fewer cumbersome tasks to handle on the host side, lower-power CPUs can now be used to reduce cost and power consumption while increasing the overall life and efficiency of the entire system.
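As a rough illustration of the host side of that exchange, the following sketch uses libusb to pull an already-parsed position fix from the client with a single vendor-specific control transfer. The vendor/product IDs, the request code, and the fix structure are hypothetical stand-ins for whatever register map a real GPS module's firmware would publish.

/*
 * Hypothetical host-side read of a parsed GPS fix from a StackableUSB
 * client.  The VID/PID, request code, and data layout are invented for
 * illustration; a real module defines its own register map.
 */
#include <stdint.h>
#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define GPS_VID        0x1234          /* hypothetical vendor ID         */
#define GPS_PID        0x5678          /* hypothetical product ID        */
#define REQ_READ_FIX   0x01            /* hypothetical "read fix" request */

struct gps_fix {                       /* filled in by the client's MCU  */
    int32_t  lat_1e7;                  /* latitude  in degrees * 1e7     */
    int32_t  lon_1e7;                  /* longitude in degrees * 1e7     */
    uint8_t  num_sats;                 /* satellites used in the fix     */
    uint8_t  fix_valid;                /* nonzero when the fix is usable */
} __attribute__((packed));

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(ctx, GPS_VID, GPS_PID);
    if (!dev) { fprintf(stderr, "GPS client not found\n"); return 1; }

    struct gps_fix fix;
    /* One control transfer replaces parsing a raw NMEA stream: the
     * client's MCU has already decoded the sentences for us.          */
    int n = libusb_control_transfer(dev,
                LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_VENDOR |
                LIBUSB_RECIPIENT_DEVICE,
                REQ_READ_FIX, 0, 0,
                (unsigned char *)&fix, sizeof(fix), 1000);

    if (n == (int)sizeof(fix) && fix.fix_valid)
        printf("lat %.7f  lon %.7f  sats %u\n",
               fix.lat_1e7 / 1e7, fix.lon_1e7 / 1e7, fix.num_sats);

    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}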
Interrupts and USB.
Although IRQ-style interrupts are passé, it is important to emphasize that USB does support interrupt transfers. However, a USB interrupt transfer is not equivalent to an interrupt at one of the IRQ inputs of the host processor. As with all USB transfers, the host must check for and initiate the interrupt transfer. The device can make the interrupt transfer data available when an event occurs, but the transfer does not start until the host checks for a pending interrupt and requests the data. The host is obligated to poll for the interrupt status at a specified periodic interval in order to guarantee the interrupt transfer latency. This interval is established during enumeration, and the host polls the device at that rate continuously.
The allowable range for interrupt transfer latency, or host-polling interval, varies with the USB bus speed. The smallest possible interrupt latency that can be achieved between a device and the host is 125 μs for a High Speed device. However, many factors may prevent the host processor from checking the interrupt status at the requested interval. OS design, driver design, application software design, CPU speed, bus activity, and bandwidth may all limit the host's ability to meet its obligation to poll for interrupts within the required interval.
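For a sense of the numbers, the polling interval a device requests during enumeration comes from the bInterval field of its interrupt endpoint descriptor. The helper functions below are illustrative only, but the arithmetic follows the USB 2.0 framing rules: full-speed intervals are expressed in 1 ms frames, while high-speed intervals are powers of two of the 125 μs microframe, which is where the 125 μs minimum latency above comes from.

/*
 * Sketch of how a device's requested bInterval maps to the host's
 * polling period under USB 2.0 framing rules.
 */
#include <stdint.h>
#include <stdio.h>

/* Full-speed interrupt endpoint: bInterval is the period in milliseconds. */
static uint32_t fs_period_us(uint8_t bInterval)
{
    return (uint32_t)bInterval * 1000u;
}

/* High-speed interrupt endpoint: period = 2^(bInterval-1) microframes,
 * each microframe being 125 us, so bInterval = 1 yields 125 us.          */
static uint32_t hs_period_us(uint8_t bInterval)
{
    return 125u << (bInterval - 1);
}

int main(void)
{
    printf("FS, bInterval=10 : %u us\n", (unsigned)fs_period_us(10)); /* 10 ms  */
    printf("HS, bInterval=1  : %u us\n", (unsigned)hs_period_us(1));  /* 125 us */
    printf("HS, bInterval=4  : %u us\n", (unsigned)hs_period_us(4));  /* 1 ms   */
    return 0;
}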
So, if IRQ interrupts are not available and there is built-in latency with USB polling interrupts, how do you implement a control system with USB? The answer to this question also lies in "dividing and conquering." By shifting some degree of control over to the device side, you not only reduce the system load on the host CPU, but the need to service IRQ-style interrupts between boards is eliminated. Critical parts of a system that require interrupt handling can easily be handled on the client side and their status passed to the host.
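A minimal firmware sketch of this division of labor might look like the following. The emergency-stop ISR and the motor_*/usb_* helpers are hypothetical stubs standing in for whatever GPIO and USB device stack a particular microcontroller vendor provides; the point is that the time-critical action happens locally, and the host only learns the outcome on its next scheduled poll.

/*
 * Client-side sketch: the I/O board's MCU services the time-critical
 * event itself and merely reports status over a USB interrupt-IN
 * endpoint.  All helpers below are placeholders for a real MCU's
 * motor-driver and USB device stack.
 */
#include <stdint.h>

#define STATUS_BELT_STOPPED  0x01

static volatile uint8_t status;        /* shared with the USB endpoint */

/* Stubs standing in for the MCU vendor's drivers. */
static void motor_disable(void)                                  { /* drive GPIO */ }
static void usb_init(void)                                       { }
static void usb_task(void)                                       { }
static void usb_interrupt_in_load(volatile uint8_t *p, int len)  { (void)p; (void)len; }

/* Hypothetical pin-change ISR: the conveyor is stopped immediately,
 * with no round trip to the host CPU involved.                        */
void emergency_stop_isr(void)
{
    motor_disable();
    status |= STATUS_BELT_STOPPED;
}

int main(void)
{
    usb_init();
    for (;;) {
        /* Keep the interrupt-IN endpoint loaded with the latest status;
         * the host picks it up at its enumerated polling interval.     */
        usb_interrupt_in_load(&status, sizeof(status));
        usb_task();
    }
}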
The Future.
Mirroring the concept of multi-core processor technology at the system level in this fashion has tremendous advantages. Breaking each task down into a micro-architecture that distributes device functions and allows more granular power control at each level is becoming more popular in all aspects of computing.
One reason this makes sense today is that it allows system designers to reduce power consumption, which reduces the costs of running a system in addition to the heat it may generate.
For many, board-to-board communication has been a poor man's version of multi-core processing. Each board plugged into a stack has been charged with a control, measurement, and/or monitoring task for the embedded system. Now, with StackableUSB, the embedded systems industry is one step closer to achieving the perfect balance of task delegation between host and client devices. In the end, a multi-core mentality can help design engineers develop more efficient, targeted embedded control systems paired with highly modular software.
This article was written by Omair Khan, Hardware Engineer, Micro/sys, Inc. (Montrose, CA) and Susan Wooley, Chairman of the Board, StackableUSB (Montrose, CA). For more information, contact Mr. Khan at