UCLA researchers at the Center for Heterogeneous Integration and Performance Scaling (CHIPS) say that computers powered by traditional integrated circuit chips are reaching their limits and that a redesign is needed. They aim to fundamentally rethink the components of electronics to enable new generations of systems that are faster, cheaper, smaller, and more powerful, as well as flexible and implantable.

Every two years since 1965, the number of transistors on a computer chip has roughly doubled as transistors have gotten exponentially smaller. In practical terms, this doubling, dubbed Moore's Law, has meant faster, smaller, cheaper, and more powerful computers. But as transistors approach the smallest sizes that material properties and physics allow, the doubling is set to stall.
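As a rough back-of-the-envelope illustration (my own worked example, not a figure from the researchers), doubling every two years means the transistor count after $t$ years is roughly

\[
N(t) \approx N_0 \cdot 2^{t/2}, \qquad \text{e.g.}\quad 2{,}300 \times 2^{(2021-1971)/2} = 2{,}300 \times 2^{25} \approx 7.7 \times 10^{10}.
\]

Starting from the roughly 2,300 transistors on the first commercial microprocessor in 1971, fifty years of doubling predicts on the order of tens of billions of transistors per chip, which is indeed the scale of today's largest chips.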

As computer chips have grown more powerful, now crammed with billions of transistors, the layout of chips on a circuit board hasn't changed much. Each chip sits relatively far from its neighbors, with visible printed traces connecting them. This arrangement not only devotes a relatively large amount of space to non-functional area, it also slows things down.

“The energy it takes to communicate between chips has not changed; if anything, it has gone up. It’s a bit like having old, outdated roads trying to support a growing, bustling city,” said Subramanian Iyer, UCLA professor of electrical and computer engineering, and director of CHIPS.

Iyer and his colleagues are replacing run-of-the-mill circuit boards with silicon wafers. Since transistors themselves are made of silicon, the researchers can precisely align individual integrated circuits, called dies, onto each wafer. This allows all of the dies to act as if they were one giant chip the size of the wafer, which can be as large as 70,000 square millimeters, about the same area as a large dinner plate. In contrast, the largest chips made today are about 100 times smaller, at around 700 square millimeters.

Today's large single chips, known as systems-on-chip (SoCs), are expensive to make because all of their functions must be manufactured at the same time in the same process. If you need both memory and processors, for example, you have to compromise the performance of one, the other, or both to make them in a single process. The platforms being developed at UCLA can instead use dies from a variety of sources, mixed and matched as needed.

“In separate processes, we can make memory and processor dies and not make any sacrifices in performance for either component,” Iyer said. Moreover, the dies can be packed together far more closely than bulky chips, and the silicon wires that connect them can be mere nanometers wide. A square of Silicon Interconnect Fabric (Si-IF), as they’ve named it, is many times smaller than a corresponding printed circuit board with the same capabilities.

Si-IF has other advantages over printed circuit boards: it weighs less, larger networks can be assembled in a smaller space, and, using a technology the team calls Flextrate, it can be made completely flexible. These properties, the researchers say, make Si-IF and Flextrate particularly well suited to medical devices. The team is collaborating with physicians and biomedical engineers to develop applications for the technology.

At Cal State Los Angeles, physiology and neuroscience researcher Selvan Joseph is using the Flextrate platform to design sensors that can track the physical movements and muscle activity of patients with movement disorders, or those recovering from spinal cord injuries. Flextrate allows him to put these sensors into a small flexible patch that patients can wear unobtrusively on their skin. “We can send this device home with our patients and then collect data remotely in real time in their day-to-day lives,” said Joseph.

The CHIPS team is also working on ways to make supercomputers and artificial intelligence machines more powerful. Tools called inference engines are behind much of the "smart" electronics in our daily lives: an inference engine collects data and draws logical conclusions from it. Search engines, virtual assistants like Apple's Siri or Amazon's Alexa, and self-driving cars all rely on them. But their speed is limited by the fact that, to make an inference, a device must relay information back and forth between its memory and its processing centers.

Iyer and his colleagues have developed a charge trap transistor (CTT) that combines memory and processing in one device, so the constant back-and-forth is no longer needed, saving both time and energy. "We can make inferences in a CTT 100 times more efficiently than the most efficient inference engine out there," Iyer said. "We've demonstrated this on a small scale and are starting to add on more bells and whistles now."
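To make that bottleneck concrete, here is a toy sketch, not the team's CTT design; every name, size, and accounting rule below is invented for illustration. It contrasts a conventional inference step, where the full set of model weights must cross the memory bus before the processor can use them, with a compute-in-memory step, where only the small input and output vectors move.

```python
# Toy illustration of the memory-traffic argument (not the CHIPS/CTT implementation).
# All sizes and function names are made up; the point is only that conventional
# inference moves the whole weight matrix per step, while computing where the
# weights are stored moves only the small input and output vectors.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024))   # model parameters held in "memory"
activations = rng.standard_normal(1024)       # one input vector to run inference on

def von_neumann_inference(weights, activations):
    """Conventional style: every weight is read out of memory into the
    processor before it can be used, so traffic scales with model size."""
    bytes_moved = weights.nbytes + activations.nbytes  # traffic across the memory bus
    output = weights @ activations                     # compute happens after the transfer
    return output, bytes_moved

def in_memory_inference(weights, activations):
    """Compute-in-memory style: the multiply-accumulate happens where the
    weights are stored, so only the input and output vectors cross the boundary."""
    bytes_moved = activations.nbytes + weights.shape[0] * activations.itemsize
    output = weights @ activations                     # same math, modeled as happening in place
    return output, bytes_moved

_, traffic_conventional = von_neumann_inference(weights, activations)
_, traffic_in_memory = in_memory_inference(weights, activations)
print(f"conventional traffic: {traffic_conventional / 1e6:.1f} MB per inference")
print(f"in-memory traffic:    {traffic_in_memory / 1e3:.1f} KB per inference")
```

In this toy accounting, the conventional step moves the full 8 MB weight matrix for every inference while the in-memory version moves only kilobytes; this is the kind of traffic the CTT approach aims to eliminate by keeping computation inside the memory itself.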

Shrinking transistors have driven the technology leaps of the last fifty years, but Iyer says those days are over. The question is how technology will continue to improve without improvements in transistors. Making the leap to new kinds of devices, such as those that use Si-IF and CTTs, is one feasible answer, but it requires buy-in from technology companies. That's in part why CHIPS has organized a consortium with industry leaders to validate and shape the direction of the center.