Researchers exploring the possibilities of nuclear energy need ways to understand how materials respond to radiation.
A team from the University of Wisconsin–Madison and Oak Ridge National Laboratory is offering the necessary training, but the recipient of the instruction is not your typical student.
Engineering graduate Wei Li, Oak Ridge staff scientist Kevin Field, and UW–Madison materials science and engineering professor Dane Morgan are teaching computers to quickly detect microscopic radiation damage within materials under consideration for nuclear reactors.
A structure damaged by radiation often resembles a cratered lunar surface. Using the machine-learning approach, Morgan and his colleagues trained a neural network to recognize a very specific type of radiation damage known as dislocation loops – atomic-scale defects difficult for even experienced experts to spot.
After training on 270 images, the neural network, working together with a machine-learning algorithm called a cascade object detector, correctly identified and classified roughly 86 percent of the dislocation loops in a set of test images. (Human experts, by comparison, found 80 percent of the defects.)
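The 86-versus-80-percent comparison rests on a simple detection score. As an illustration only (not the team's actual evaluation code), the sketch below scores a set of predicted loop centers against hand-labeled ground truth; the coordinates are invented, and a fixed pixel tolerance stands in for whatever matching criterion the study used:

```python
import numpy as np

def detection_recall(predicted, ground_truth, tol=5.0):
    """Fraction of ground-truth defects matched by at least one
    prediction within `tol` pixels. Each prediction may claim only
    one ground-truth defect (a simple recall metric)."""
    predicted = list(predicted)
    matched = 0
    for gx, gy in ground_truth:
        for i, (px, py) in enumerate(predicted):
            if (gx - px) ** 2 + (gy - py) ** 2 <= tol ** 2:
                matched += 1
                del predicted[i]  # prediction is used up
                break
    return matched / len(ground_truth) if ground_truth else 1.0

# Hypothetical defect centers (in pixels) for one micrograph
truth = [(10, 10), (40, 42), (80, 15), (60, 70), (25, 55)]
preds = [(11, 9), (41, 44), (79, 16), (90, 90)]  # one miss, one false alarm

print(detection_recall(preds, truth))  # 3 of 5 matched -> 0.6
```

A real evaluation would also track false positives (precision), since an algorithm that marks everything as a defect would score perfect recall.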
“This is just the beginning,” said Morgan in a recent press release from the university. “In the future, I believe images from many instruments will pass through a machine learning algorithm for initial analysis before being considered by humans.”
Morgan spoke with Tech Briefs about how the artificial-intelligence approach stacks up against the human eye.
Tech Briefs: What was the inspiration behind this work?
Prof. Dane Morgan, UW–Madison materials science and engineering: Broadly, the inspiration is to take the transformative power of machine learning and apply it to accelerate science and engineering research. In fact, I have founded a group here at UW–Madison dedicated to engaging undergraduates in this pursuit, called the Informatics Skunkworks. The motivation for this particular application is the potential and needs of advanced nuclear technologies.
Tech Briefs: Why look at nuclear/radiation damage specifically?
Prof. Morgan: Nuclear energy represents an amazing, carbon-free energy source which is undergoing a lot of exciting, new exploration to develop the next generation of reactors. Many of these reactor concepts, however, require a better understanding of the way materials respond to radiation. Researchers around the world are using techniques like electron microscopy to understand radiation damage, but it is a slow, tedious, and error-prone process. Our team saw a chance to greatly accelerate this part of the overall research, thereby supporting more rapid development of new, safe, cost-effective, and carbon-free nuclear power sources.
Tech Briefs: What kinds of defects are being caught?
Prof. Morgan: Right now, we are focusing on a dislocation known as a 111 loop. A dislocation is a defect in the otherwise very regular organization of atoms in a crystal. A 111 loop has a particular orientation – an extra plane of atoms inserted into the crystal structure of a ferritic/martensitic steel.
Soon, however, we hope to include multiple defects such as 100 loops, so-called “black spot defects,” and potentially grain boundaries and pre-existing dislocations as well. The idea is that a researcher in this field could process an image and every defect of possible interest would be identified, its location and dimensions measured, and then the results would be reported and saved for easy analysis.
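The workflow Morgan describes (process an image, locate every defect, measure its position and dimensions, and report the results) can be sketched with a toy connected-component pass over a binary defect mask. This is an illustrative stand-in, not the team's pipeline: the synthetic mask, the 4-connectivity choice, and the report fields are all assumptions.

```python
import numpy as np
from collections import deque

def measure_defects(mask):
    """Label connected regions of a binary defect mask (4-connectivity)
    and report each region's centroid, bounding box, and pixel area."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    reports = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Breadth-first flood fill to collect one region
                pixels, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                reports.append({
                    "centroid": (sum(ys) / len(ys), sum(xs) / len(xs)),
                    "bbox": (min(ys), min(xs), max(ys), max(xs)),
                    "area_px": len(pixels),
                })
    return reports

# Tiny synthetic mask containing two separate "defects"
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True   # 2x2 blob
mask[5, 4:7] = True     # 1x3 blob
for report in measure_defects(mask):
    print(report)
```

In practice the binary mask would itself come from the trained detector, and pixel measurements would be converted to physical units using the microscope's calibration.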
Tech Briefs: Is machine vision better than the human eye at detecting this kind of damage?
Prof. Morgan: It is not actually clear whether humans or machines are fundamentally better at detecting these types of defects. Computers, however, have a number of advantages over humans for this kind of work. First, computers are very fast and efficient, able to identify defects potentially orders of magnitude faster at negligible cost, and, once trained, they never need to be trained again – unlike new human researchers joining the field.
Tech Briefs: Are computers prone to error?
Prof. Morgan: Computers are less error-prone, since they don’t get tired, distracted, or confused. So even if the computer and the best-trained human are comparable, the computer is likely to be more accurate when dealing with large amounts of data. Furthermore, some defect detection is subjective, since a blurry image may be hard to identify as one defect type or another. Even when such identifications are ambiguous, the computer at least provides a consistent standard that can easily be applied anywhere.
Tech Briefs: How important is scalability to the machine-learning approach?
Prof. Morgan: The computer analysis is scalable. As electron microscopes grow in their capabilities, they will produce far more images, far faster, than humans can analyze. A modern electron microscope can already produce tens of thousands of images in a single experiment. To keep pace with our own instruments, we have to automate aspects of the analysis.
Tech Briefs: In a press release from the university you said, “This is just the beginning.” What role do you envision machine learning tools playing in the future?
Prof. Morgan: I envision that machine learning models will become the standard for a first pass at analysis, freeing up researchers to spend their time looking at select surprising results rather than every piece of data. This is valuable, since it will make us better at what we already do, but I think the most exciting applications will come from how this enables us to do things we presently cannot do.
For example, one could imagine these algorithms running in real time on an instrument, allowing a researcher to guide where they image – to explore heterogeneous distributions of defects, or how defect behavior changes near other microstructural features like grain boundaries and precipitates. Doing such analysis in real time would greatly enhance the value of the data a researcher could collect.
One could also imagine these algorithms processing time-series movies of systems evolving under irradiation, potentially even tracking the evolution of each defect, including its growth and motion, and providing a window into defect evolution that is far more detailed than what we have now. Beyond radiation effects, these approaches could be applied to all kinds of microstructural analysis, allowing researchers to automate and accelerate exploration of everything from nanoparticle structure to defect chemistry to dislocation evolution. I think that over the next 10–20 years we will see machine-learning algorithms permeating our analysis of materials data in all kinds of new and surprising ways.
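Tracking each defect across a time series, as Morgan envisions, could in its simplest form be greedy nearest-neighbor linking of detected centers from frame to frame. The sketch below is a minimal illustration under that assumption; the coordinates and the `max_jump` threshold are invented, and real trackers would also handle defects that merge, split, or disappear:

```python
def track_defects(frames, max_jump=10.0):
    """Greedy frame-to-frame linking of defect centers: each active
    track is extended to the nearest unclaimed detection in the next
    frame within `max_jump` pixels; unmatched detections start new
    tracks (e.g., defects nucleating mid-experiment)."""
    tracks = [[c] for c in frames[0]]
    active = list(range(len(tracks)))
    for frame in frames[1:]:
        remaining = list(frame)
        next_active = []
        for ti in active:
            last = tracks[ti][-1]
            best, best_d = None, max_jump
            for j, c in enumerate(remaining):
                d = ((last[0] - c[0]) ** 2 + (last[1] - c[1]) ** 2) ** 0.5
                if d <= best_d:
                    best, best_d = j, d
            if best is not None:
                tracks[ti].append(remaining.pop(best))
                next_active.append(ti)
        for c in remaining:  # unmatched detections: new defects
            tracks.append([c])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks

# Hypothetical defect centers detected in three successive frames
frames = [
    [(10, 10), (50, 50)],
    [(12, 11), (49, 52)],
    [(14, 12), (48, 54), (80, 80)],  # a new defect appears
]
print(track_defects(frames))
```

Each returned track is the list of positions one defect occupied over time, from which growth and motion could be measured.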
What do you think? Is it only “just the beginning” for machine learning? Share your comments below.