Typical design for an emerging field-effect transistor made with nanomaterials. The movement of current from the source electrode (gold, top left) across an ultrathin channel (blue) to the drain electrode (gold, top right) is controlled by the source voltage and by the electric field produced by the gate electrode (gold, top center), which is separated from the channel by an insulating layer (light gray). At left: Atomic-thickness channel materials can be one-dimensional, such as carbon nanotubes, or two-dimensional layers. (Image: Z. Cheng/NIST)

To continue making smartphones, laptops, and other devices more powerful yet energy efficient, industry is intensely focused on identifying promising next-generation designs and materials for the principal building blocks of modern electronics: the tiny electrical on-off switches known as field-effect transistors (FETs). When deciding how to direct billions of funding dollars for next-generation transistors, investors will base many of their decisions on published research results.

But a dismaying amount of research on FETs currently suffers from inconsistent reporting and benchmarking of results, increasing risks of misleading conclusions and inaccurate claims that set false expectations for the field. This problem and possible solutions are outlined in an article published today by an international group of leading experts on semiconductor devices.

“Industry is trying to determine the right materials and designs to use,” said Curt Richter, a physicist at the National Institute of Standards and Technology (NIST) and a co-author of the new article. “They want to know exactly what to make and how to make it. But the industry is getting terribly frustrated, they tell us, because they see a promising piece of information in one publication and another promising piece in another publication, but they’re incompatible. They have no way to compare them. Given the enormous cost of adopting design innovations, the industry can’t afford to make a mistake. What they want is uniform benchmarking.”

Richter, former NIST associate Zhihui Cheng (now at Intel), and Aaron Franklin of Duke University are leading an effort to create and promote guidelines for uniform test methods and reporting standards. They and more than a dozen colleagues from industry, academia, and government labs described their recommendations in an article published in the July 29, 2022, issue of Nature Electronics.

The paper provides specific criteria for evaluating and describing each of eight key parameters critical to emerging designs for field-effect transistors.

The research community at work on emerging FET designs includes physicists, materials scientists, chemists, electrical engineers, and more — each approaching the subject a bit differently.

“At present, each group frequently has its own techniques and measurement methods,” Cheng said. “There are no uniform guidelines or metrics about how to measure and report a particular parameter. So, it is often very difficult to evaluate the significance of a reported result, and it is hard to tell whether the results are biased or incomplete.”

Inaccuracies in reporting are “not necessarily intentional,” said Franklin, the Addy Professor of Electrical and Computer Engineering at Duke. “But the impact that misreporting has on the field cannot be overstated. In addition to the negative effect on industry, it also affects the decisions made by funding agencies, program managers, and others who influence the direction of research in academic and government labs. Properly extracting and then keeping new findings in the proper context is critical to making true progress.

“It’s really a matter of providing education that is currently lacking. There’s no textbook out there about how to properly extract these parameters for emerging devices. You could think of our paper as a sort of abstract for such a textbook.”

Absent universal guidelines, the authors explain, it is too easy to deliver misleading results. For example, one key parameter of device performance is the relationship between the applied voltage it takes to turn the transistor “on” (that is, to get current moving through the channel between the source electrode and the drain electrode) and how much the current increases as that voltage is ramped up.

“There is a transition voltage as the current goes up from the lowest to the maximum and it’s not a straight line,” Richter said. “It has little variations in curvature. You want the slope of that curve to be as steep as possible so that you can work with smaller voltages to turn the current on. Some researchers will report the one spot where the slope is steep instead of reporting the entire voltage span. That misleads people into believing that you can operate at lower power.”

“It’s like you’re running a 100-meter race and you only report the last 10 meters where you run the fastest,” Cheng said.
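The slope-extraction pitfall Richter and Cheng describe can be sketched numerically. The quantity involved is often called the subthreshold swing: the gate voltage required for a tenfold increase in drain current, in millivolts per decade, where a smaller number means a steeper turn-on. This minimal Python sketch uses synthetic, hypothetical transfer-curve data (not measurements from the paper) to show how quoting only the single steepest point of the curve yields a far more flattering number than averaging over the full voltage span:

```python
import numpy as np

# Hypothetical transfer curve: gate voltage (V) and drain current (A).
# The current is a toy function chosen so the turn-on slope varies
# along the curve, as it does in real devices; these numbers are
# illustrative only, not data from the Nature Electronics paper.
vg = np.linspace(0.0, 0.5, 26)
id_ = 1e-12 * 10 ** (14 * vg + 2 * np.sin(6 * vg))

log_id = np.log10(id_)

# Point-by-point subthreshold swing, dVg / d(log10 Id), in mV/decade.
ss_point = np.diff(vg) / np.diff(log_id) * 1e3

# Cherry-picked value: the single steepest spot on the curve
# (analogous to reporting only the last 10 meters of the race).
ss_best = ss_point.min()

# Fairer value: the swing averaged over the entire turn-on span,
# from the lowest current measured to the maximum.
ss_avg = (vg[-1] - vg[0]) / (log_id[-1] - log_id[0]) * 1e3

print(f"steepest-point swing: {ss_best:.0f} mV/decade")
print(f"full-span swing:      {ss_avg:.0f} mV/decade")
```

With this toy curve the steepest-point value comes out well below the full-span average, which is exactly the gap that makes a device look like it can operate at lower power than it actually can.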

Or researchers may attribute a positive result to novel channel characteristics “when in fact it is actually determined by the geometry of the transistor and the non-semiconducting materials,” Franklin said. “Reporting must be done in the proper context of the dimensions and materials of the transistor, rather than simply attributing everything, by default, to the semiconductor channel.”

Unless scientists run enough tests with enough variations to account for all factors, the results may be deceptive. That poses difficulties for many labs: it can take many months to create and characterize a new material or emerging design and to make even one or two samples. So constructing enough variations of a device to enable reliable comparisons requires considerable time and resources.

But the effort must be made, the authors say, to avoid the downside of misreporting. “It’s often the case that once a paper is published, everybody believes it,” Richter said. “It becomes gospel. And if your research gets a different answer, you have to work ten times harder to overcome the effect of the first publication.”

Too many inadequate or misleading reports can prompt a sort of “gold rush” mentality similar to what occurred in the late 1990s and early 2000s with a then-emerging technology: carbon nanotubes (CNTs). Wildly enthusiastic early reports convinced many that CNTs would become the successors to silicon semiconductor elements in microelectronics. But when initial claims proved overstated, interest dried up along with funding.

“CNTs are a hugely instructive example,” Franklin said. “So much hype and overstated claims led to disillusionment and frustration rather than steady, collaborative, and accurate progress. An entire field of research was negatively impacted by overstated claims. After a frenzy of activity, the eventual distaste resulted in a massive shift of research funding and attention going to other materials. It has taken a decade to bring deserved attention back to CNTs, and even then, many feel it is not enough.”

Uniform benchmarking and reporting can minimize those sorts of convulsive effects and help scientists convince the research community that they have made genuine progress. “By using these guidelines,” the authors conclude, “it should be possible to comprehensively and consistently reveal, highlight, discuss, compare, and evaluate device performance.”