As PC/104 celebrates its 20th anniversary as an open standard this year, it continues to grow in terms of new design-ins, applications, and integration of the latest technology. In today’s fast-paced, throwaway world, this is a remarkable achievement. PC/104 users typically demand a long product life cycle of seven years or more, so for two decades these small, stackable, embedded computer systems have found applications in military, medical, industrial, transportation, communications, pipeline, mining, utility, and a host of other industries. However, as technology becomes more powerful and complex, implementation challenges arise. This has never been truer than for the different strategies for implementing PCI Express on PC/104-size modules.
In the Beginning
PC/104 has grown and matured from technologies originally developed for the desktop and mobile markets. In 1992, when the first PC/104 specification was written, only the ISA bus was available as the building block for the first generation of stackable embedded PC modules. There was no PCI, USB, SATA, PCI Express (PCIe), wireless networking, or other technologies that we now take for granted. Years later, the 32-bit, 33 MHz PCI bus was added to support higher-speed I/O on PC/104 modules, resulting in the second-generation specifications called PC/104-Plus (both PCI and ISA buses) and PCI-104 (PCI bus only). This was implemented by adding a second, 120-pin connector placed on the opposite side of the board from the original 104-pin PC/104 connector. Now, with even higher-bandwidth PCI Express technology, we have seen the dawn of the third generation of stackable PC/104 modules.
But since high-speed peripheral interconnects have moved from PCI to PCI Express, the question became “What is the best way to implement PCI Express on a PC/104-size card?” A thorny problem arose over where to put the new PCIe connector on the board. Connectors with good high-speed signal integrity are required to carry the differential-pair signals; the rugged pin-in-socket style connectors certainly cannot be reused. Also, when you add PCIe support, do you remove one or both of the PC/104 and PCI-104 connectors? If you remove those connectors, then what happens to compatibility, and what is the migration path for all the existing boards being manufactured or deployed in systems in the field?
Parting of the Ways
In 2007, numerous philosophical and architectural disagreements arose within the PC/104 Embedded Consortium’s Technical Committee during discussions of how to add a stackable PCI Express connector to PC/104 modules. Eventually the companies separated into two groups. Both groups agreed that one connector should remain for migration of existing systems, but they disagreed about which one should be replaced with the PCIe connector. One group felt that the low-cost PC/104 connector should remain, while the other believed it should be the higher-bandwidth PCI-104 connector. The former’s logic was “why support both PCI and PCIe expansion on a single module,” since doing so seemed redundant and costly, and the migration is transparent to application software. They argued that 70 to 80 percent of I/O modules in the vast ecosystem supported the PC/104 connector rather than the PCI-104. They said their solution would be an evolutionary migration with the least impact on current users.
The other group asked why support the existing PC/104 connector rather than the PCI-104 connector, since most current chipsets no longer directly support the ISA signals and they must be generated by a bus bridge chip on the LPC (Low Pin Count) bus. They said an equally viable solution would be to support existing PC/104 cards by adding another card to the stack with a PCI-to-ISA bridge chip. Both groups agreed that, over time, the PC/104 and PCI-104 connector-based solutions would decrease as the newer stackable PCIe solutions increased. However, no one could project a timeline for this transition.
A second area of disagreement was which signals should be supported on the new PCIe connector. If the first- and second-generation PC/104-Plus connectors would be gradually phased out, shouldn’t some low-speed I/O interface signals be included on the PCI Express connector as well? If so, which ones? With many high- and low-speed interfaces available, such as USB, LPC, I2C, SMBus, SPI, SATA, Ethernet, and CAN, the committee faced many options.
The answer to these questions is highly dependent upon the application. Selection was a controversial and difficult task, since at that time the companies did not know about upcoming low-power processors, including Intel’s Atom, AMD’s Fusion, VIA’s Nano, and DM&P’s Vortex86 product families. Nor did they know which mix of PCI Express, USB, LPC, SPI, and other I/O interfaces would be supported on these new chips. Even today, the mix varies greatly across semiconductor companies and chipset generations.
A third point of contention was how wide the PCIe links should be. One group argued for the “fat pipe” used in VPX and COM Express architectures, a very large step supporting both x1 and x16 PCIe lanes for applications requiring very high-performance graphics and/or speed-intensive I/O. The other group, concerned about size, weight, power, and cost, thought that PCIe already offered so much more capability than PC/104-Plus that a combination of x1 and x4 PCIe lanes would be more than sufficient for most applications. More bandwidth supports more powerful processors, but those processors require more power and more cooling of boards in a stack, even requiring heat pipes in some cases. So, what is the correct balance of performance, cost, upward migration, thermal issues, and complexity? Again, it is a function of the application requirements.
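To put the lane-width debate in perspective, the theoretical peak bandwidths can be sketched from the standard raw link rates (PCI at 32 bits/33 MHz; PCIe Gen1 at 2.5 GT/s and Gen2 at 5 GT/s per lane, both using 8b/10b encoding). This is a rough illustration of the numbers behind the argument, not part of either group's specification:

```python
# Illustrative peak-bandwidth comparison (theoretical maxima, before
# protocol overhead). PCIe Gen1/Gen2 use 8b/10b encoding, so each
# lane delivers 1 payload byte per 10 transfer bits.

def pcie_mb_per_s(gigatransfers, lanes):
    """Peak payload bandwidth in MB/s for 8b/10b-encoded PCIe generations."""
    return gigatransfers * 1e9 / 10 * lanes / 1e6

pci_32_33 = 4 * 33e6 / 1e6           # 32-bit, 33 MHz PCI  -> 132 MB/s (shared bus)
gen1_x1   = pcie_mb_per_s(2.5, 1)    # PCIe Gen1 x1        -> 250 MB/s per direction
gen2_x4   = pcie_mb_per_s(5.0, 4)    # PCIe Gen2 x4        -> 2000 MB/s
gen2_x16  = pcie_mb_per_s(5.0, 16)   # PCIe Gen2 x16       -> 8000 MB/s

print(pci_32_33, gen1_x1, gen2_x4, gen2_x16)
```

Even a single Gen1 x1 link roughly doubles the bandwidth of the shared PCI bus, which is the basis of the second group's claim that x1 and x4 links suffice for most embedded I/O, while the x16 "fat pipe" mainly benefits graphics-heavy applications.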
Both groups agreed upon the connector technology: Samtec’s Q2 double-row, high-speed, 15.24 mm Q-Strip connector system with a ground blade in the center. These connectors require a relatively small amount of board real estate yet can currently support up to Generation 2 PCI Express data rates of 5 Gbps through a stack of up to four boards. This data rate has been tested and verified by Samtec’s Signal Integrity group and is the enabling technology that allows PCIe to be supported within a stack of boards. The connector also allows high- and low-speed signals to be mixed on the same connector.
These first three issues became showstoppers, and the two groups were unable to reach a consensus. One group’s philosophy was to support low- to moderate-performance processing applications while the other focused on moderate- to high-speed processing applications; that difference, plus the disagreement over whether to keep the PC/104 or the PCI-104 connector, resulted in the companies agreeing to disagree and going their separate ways. Consequently, one group of board manufacturers stayed with the PC/104 Consortium and the other started a group called the Small Form Factor Special Interest Group (SFF-SIG). The SIG consisted primarily of the companies that had previously defined and introduced the successful EPIC form factor standard.
In early 2008, the SFF-SIG introduced SUMIT-ISM. SUMIT stands for Standard Unified Modular Interconnect Technology, and it defines a connector and pinout for stacking modules together. On two 52-pin SUMIT connectors it provides PCIe x1, PCIe x4, USB, LPC, SPI, and SMBus channels. ISM stands for Industry Standard Module, a new name for the popular 90 × 96 mm board outline, intended to phase out the confusing practice of using the single name “PC/104” to refer sometimes to a bus and at other times to a board outline. SUMIT-ISM is defined as an Industry Standard Module with the scalable SUMIT connectors (one or both, 52 pins each) placed in the same area as the PCI-104 connector.
A PC/104 connector is also supported in the SUMIT-ISM legacy configuration to allow PC/104 modules to be included in a stack of boards. SUMIT-ISM expansion thus maintains direct support for the vast number of PC/104 expansion I/O modules and enclosures, which is necessary for the installed base of hundreds of system manufacturers who would rather not reengineer their ISA-based software at this time. The SUMIT-ISM architects did not assume that compatibility with PCI-104 modules was needed, since moving from PCI to PCIe doesn’t require application software changes. However, a recent update to the ISM definition uses slotted mounting holes, allowing a SUMIT-ISM module to be defined and built with either the PC/104 or the PCI-104 connector for legacy support.
Also around the same time in 2008, the PC/104 Embedded Consortium introduced PCI/104-Express and PCIe/104, both focused on very high-performance applications. They are based upon a three-bank connector with a total of 156 pins. In Bank 1, the PCIe/104 bus carries four PCI Express x1 links, two USB 2.0 ports, SMBus, and some control signals. Banks 2 and 3 contain only the PCI Express x16 link. PCI/104-Express is defined with both the PCI-104 and PCIe/104 connectors so that it can connect directly to PCI-104 cards in a stack of boards. Since the PCIe/104 bus is located where the PC/104 ISA bus connector was previously situated, a mezzanine module with a PCI-to-ISA bridge and BIOS code is needed to allow existing PC/104 modules to operate in the stack, increasing the stack height by one card.
In 2011, the Consortium introduced a second pin definition called PCIe/104 Type 2, with the 2008 definition now referred to as Type 1. Bank 1 of the connector remained the same as before, but Banks 2 and 3 were changed: the PCI Express x16 link is replaced with two x4 PCIe links, two SATA ports, USB 3.0, LPC, and battery power. Other than graphics cards, most existing I/O board designs for PCIe/104 will be compatible with both Types, since they use only Bank 1 signals.
As newer technology and faster high-speed serial buses have been standardized for the commercial/consumer PC market, it has been necessary to create a bridge from the past to the future for PC/104-based products. The PC/104 Embedded Consortium and the SFF-SIG share a common goal: to grow the overall PC/104 marketplace by providing new PCI Express-based solutions that add to the PC/104 ecosystem. Their architectures are similar, yet differ in ways that yield different performance, size, thermal, and price trade-offs according to their target applications and markets.