Artificial intelligence queries require approximately 10 times the electricity of traditional internet searches, and generating original music, photos, and videos requires much more. With the rise of increasingly complex AI models and cloud-based applications, data centers are becoming power-hungry giants. According to a study released in May 2025 by the Electric Power Research Institute (EPRI), data centers could consume up to 9 percent of U.S. electricity generation by 2030, more than double the amount used today.

From sprawling data centers to edge devices, the computational demands of AI models are pushing hardware to its limits, consuming vast amounts of electricity and generating heat. Some scientists are pushing the boundaries of materials science to create hardware that’s faster, cooler, and more efficient.

To address the staggering power and energy demands of AI, engineers at the University of Houston (UH) have developed a revolutionary new thin-film material that promises to make AI devices significantly faster while dramatically cutting energy consumption.

The breakthrough, detailed in the journal ACS Nano, introduces a specialized two-dimensional (2D) thin-film dielectric, or electric insulator, designed to replace traditional, heat-generating components in integrated circuit chips. This new thin-film material, which does not store electricity, aims to reduce the significant energy cost and heat produced by the high-performance computing necessary for AI.

In this interview, Alamgir Karim, Dow Chair and Welch Foundation Professor at the William A. Brookshire Department of Chemical and Biomolecular Engineering at UH, who is leading the research, discusses the new approach, its benefits, and how Nobel Prize-winning chemistry enabled this discovery.

Tech Briefs: What challenges does the surge in AI-related power consumption pose for society?

Alamgir Karim: AI technologies rely on extremely dense calculations, and the energy required to perform these computations — and to cool the hardware that enables them — is increasing very rapidly. This growth places substantial pressure on existing power grids, raises carbon emissions, and significantly increases the operating costs of data centers. If these trends continue, the electricity demands of AI could approach the current scale of entire nations. As a society, we must find ways to support the advancement of AI while ensuring that its energy footprint does not outpace our infrastructure or environmental goals. Achieving this balance will require major improvements in chip-level efficiency; without such innovations, the trajectory of AI development may become environmentally and economically difficult to sustain.

This is the two-dimensional thin-film electric insulator designed in a University of Houston lab to make AI faster and reduce power consumption. (Image: UH)
Tech Briefs: Are current efforts sufficient to offset the growing energy demands from AI and cloud computing?

Karim: Current efforts — such as designing efficient data centers, deploying advanced cooling technologies, and expanding renewable power usage — are valuable steps, but they are not sufficient to counter the pace at which AI and cloud computing energy demands are rising. The challenge is that AI workloads are increasing exponentially, driven by sophisticated models, higher data throughput, and the rapid expansion of AI services. In contrast, efficiency improvements at the infrastructure level tend to be incremental. Even with optimized server hardware and greener electricity sources, the underlying physics of today’s chips requires substantial energy to move information across densely packed circuits. This means that without breakthroughs at the chip-materials and fabrication level — where the energy of each signal can be fundamentally reduced — we risk continually “chasing” a moving target. To narrow the gap, we need innovations that cut energy use per computation, not just better ways of powering or cooling existing systems.

Tech Briefs: Your team has developed a new thin-film material that promises to make AI devices significantly faster while dramatically cutting energy consumption. Can you tell us more about it?

Karim: We created an ultra-thin insulating material, known as a low-k dielectric, that reduces the energy lost when electrical signals travel across a chip. In simple terms, it allows AI processors to send information faster while wasting far less electricity as heat. What makes our approach unique is the use of self-assembled polymer structures to create extremely uniform nanoscale pores. These pores lower the material’s dielectric constant, which directly increases chip speed and efficiency. Compared to conventional dielectrics, our films are lighter, more thermally stable, and easier to integrate with modern semiconductor manufacturing. This material can help AI chips run cooler, faster, and far more efficiently.
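The speed and energy benefits Karim describes follow from basic electrostatics: interconnect capacitance scales linearly with the dielectric constant k, and both first-order RC signal delay and switching energy scale with that capacitance. A minimal sketch of the scaling, using illustrative numbers only: k = 3.9 is the standard value for conventional SiO2, while the k = 2.0 used for the porous film, and the wire geometry, resistance, and supply voltage, are assumptions for comparison, not figures from the paper.

```python
# Illustrative scaling of interconnect delay and switching energy
# with the dielectric constant k of the insulating film.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def wire_capacitance(k, area_m2, spacing_m):
    """Parallel-plate estimate: C = k * eps0 * A / d."""
    return k * EPS0 * area_m2 / spacing_m

def rc_delay(resistance_ohm, capacitance_f):
    """First-order RC signal delay: tau = R * C."""
    return resistance_ohm * capacitance_f

def switching_energy(capacitance_f, v_dd):
    """Energy dissipated per 0-to-1 transition: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * v_dd**2

# Hypothetical interconnect geometry and operating point (not from the paper).
AREA, SPACING, R_WIRE, V_DD = 1e-12, 50e-9, 100.0, 0.8

c_sio2 = wire_capacitance(3.9, AREA, SPACING)   # conventional SiO2
c_lowk = wire_capacitance(2.0, AREA, SPACING)   # assumed porous low-k film

# Delay and energy both scale linearly with k, so dropping k from 3.9
# to 2.0 cuts both by the same factor, about 49 percent.
print(rc_delay(R_WIRE, c_lowk) / rc_delay(R_WIRE, c_sio2))
print(switching_energy(c_lowk, V_DD) / switching_energy(c_sio2, V_DD))
```

Because both quantities are linear in C, any reduction in k passes straight through to delay and energy, which is why the dielectric constant is the figure of merit here.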

Reviewing the breakthrough material that promises to make AI faster and use less energy are Professor Alamgir Karim and doctoral student Saurabh Tiwary. (Image: UH)
Tech Briefs: What are the biggest technical hurdles in scaling this thin-film dielectric technology from lab prototypes to mass production for AI chips?

Karim: The biggest challenge is ensuring that the material performs flawlessly at the enormous scale and precision required by modern chip manufacturers. AI processors contain billions of tightly interconnected features, and even minute variations in a dielectric layer can influence speed, heat generation, and long-term reliability. Scaling this technology means achieving highly uniform pore structures across full wafers, maintaining mechanical robustness during high-temperature processing, and ensuring seamless compatibility with existing semiconductor fabrication steps. Another hurdle is encouraging manufacturers to adopt a new material without interrupting established workflows, which requires extensive reliability testing, lifetime studies, and close collaboration with industry partners. While the scientific foundation is strong, translating it into high-volume production demands rigorous engineering, sustained validation, and broad industry alignment.

Tech Briefs: How do the energy savings from these low-k materials compare to current industry standards, and what impact could this have on the carbon footprint of AI data centers globally?

Karim: Our low-k material has the potential to reduce signal delay and energy loss on chips by a substantial margin compared to current commercial dielectrics. Even modest improvements at the insulation level translate into meaningful reductions in total power usage, because trillions of data transfers occur every second within an AI processor. When scaled across thousands of servers, this efficiency gain can cut data-center energy consumption by double-digit percentages — an impact large enough to shift operational costs and sustainability metrics. As AI data centers are projected to rival major industrial sectors in electricity use, such reductions directly lower global carbon emissions and cooling demands. Ultimately, improving the efficiency of the smallest nanoscale components on a chip can cascade into significant environmental and economic benefits worldwide.
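The cascade Karim describes, from nanoscale savings to facility-level impact, can be made concrete with a back-of-envelope estimate. Every input below is an illustrative assumption rather than a figure from the research: a 50 percent cut in interconnect switching energy (roughly what halving the dielectric constant would give), interconnects accounting for 40 percent of chip power, and chips drawing 60 percent of total facility power, with the rest going to cooling and overhead.

```python
# Back-of-envelope: how a chip-level dielectric improvement propagates
# to whole-data-center energy use. Every input here is an assumed,
# illustrative figure, not a measured result from the UH work.
def datacenter_savings(interconnect_cut, interconnect_share, chip_share):
    """Fractional facility-level savings from an interconnect-level cut.

    interconnect_cut   -- fraction of interconnect energy saved
    interconnect_share -- fraction of chip power spent in interconnects
    chip_share         -- fraction of facility power drawn by chips
    """
    return interconnect_cut * interconnect_share * chip_share

# Assumed: 50% interconnect saving, 40% of chip power in interconnects,
# chips drawing 60% of facility power.
saving = datacenter_savings(0.50, 0.40, 0.60)
print(f"{saving:.1%}")  # prints 12.0%
```

Under these assumptions a nanoscale materials change yields a double-digit cut in facility energy, which is the order of magnitude Karim cites; the actual figure depends on workload and architecture.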

Tech Briefs: Could this breakthrough influence the design of next-generation AI hardware architectures, and if so, what changes might we see in chip design over the next decade?

Karim: Yes. Faster, low-energy signal transmission opens the door to new chip architectures that were previously limited by heat buildup and power constraints. Designers could stack more processing layers, place components closer together, or run them at higher speeds without overheating. Over the next decade, we may see larger AI accelerators, more efficient 3D-integrated chips, and architectures optimized for continuous high-performance workloads. This material also supports the trend toward specialized AI chips that require ultra-fast memory and interconnects. Essentially, better insulation at the nanoscale gives engineers new freedom to rethink how chips are organized and how much performance can be packed into each square millimeter.

Tech Briefs: What role did Nobel Prize-winning chemistry play in enabling this discovery, and how might similar interdisciplinary collaborations accelerate progress in AI energy efficiency?

Karim: Nobel Prize–winning chemistry played a foundational role in this discovery. Dr. Omar Yaghi’s pioneering work in reticular chemistry — the science of stitching molecular building blocks into precise, porous crystalline frameworks — laid the conceptual groundwork for modern covalent organic frameworks (COFs). Our thin-film dielectric leverages these same principles: by designing molecules that link together in a predictable, highly ordered network, we can create films with controlled nanoscale porosity that dramatically lower the dielectric constant. This level of structural precision is essential for reducing energy loss in AI chips. The success of this material reflects how breakthroughs at the molecular scale can ripple upward into transformative advances in computing. More broadly, it shows how interdisciplinary collaboration between chemistry, materials science, and semiconductor engineering can accelerate progress in making AI far more energy efficient.

The team created the new material with carbon and other light elements forming covalently bonded sheetlike films with highly porous crystalline structures. (Image: UH)
Tech Briefs: What policy or technological solutions do you see as most critical to addressing the staggering power and energy demands of AI in the future?

Karim: On the technology side, the most important solutions are breakthroughs that make chips fundamentally more efficient — new materials, better interconnects, and architectures optimized for AI workloads. Advances in cooling, renewable-powered data centers, and energy-aware AI algorithms will also help. From a policy standpoint, we need incentives for energy-efficient semiconductor manufacturing, support for university–industry research partnerships, and standards that encourage adoption of low-carbon computing technologies. Without coordinated action, AI’s energy consumption could strain electrical grids and slow digital progress. The goal should be to grow AI capability while shrinking the power required for each computation — and that requires both scientific innovation and smart policy.

This article was written by Chitra Sethi, Editorial Director, SAE Media Group.




This article first appeared in the February, 2026 issue of Tech Briefs Magazine (Vol. 50 No. 2).
