Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing. Determining the ideal parameters (e.g., printing speed, how much material the printer deposits) that consistently print a new material effectively is often a case of manual trial-and-error.
However, MIT researchers have now used artificial intelligence (AI) to streamline this procedure. The team developed a machine-learning system that uses computer vision to watch the manufacturing process and correct errors in how it handles the material in real-time.
The researchers used simulations to teach a neural network how to adjust printing parameters to minimize error, then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D-printing controllers they compared it to.
This approach sidesteps the expensive process of printing thousands — or millions — of real objects to train the neural network. The work could enable engineers to more easily incorporate novel materials into their prints, which could help develop objects with special electrical or chemical properties. It could also help technicians adjust the printing process on the fly.
“This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” said senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT, who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“If you have manufacturing machines that are more intelligent, they can adapt to the changing environment in the workplace in real-time, to improve the yields or the accuracy of the system. You can squeeze more out of the machine,” added Matusik.
Determining the ideal parameters of a digital manufacturing process can be one of its most expensive steps because of this trial-and-error. And once a technician finds a combination that works, those parameters are only ideal for that specific situation.
Using a machine-learning system is fraught with challenges. First, the researchers needed to measure what was happening on the printer in real-time. To do so, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system shines light onto the material as it is deposited and, based on how much light passes through, calculates the material's thickness.
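The article does not give the exact optical model, but thickness-from-transmitted-light measurements are commonly modeled with the Beer-Lambert law. A minimal sketch under that assumption (the function name and attenuation coefficient are illustrative, not from the paper):

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation_coeff=2.0):
    """Estimate deposited-material thickness from transmitted light.

    Assumes Beer-Lambert attenuation: I = I0 * exp(-mu * t),
    so t = -ln(I / I0) / mu. The coefficient `mu` would have to be
    calibrated for each material; 2.0 is a placeholder.
    """
    # Clip to avoid log(0) and ratios above 1 from sensor noise.
    ratio = np.clip(transmitted / incident, 1e-9, 1.0)
    return -np.log(ratio) / attenuation_coeff

# Dimmer transmitted light implies a thicker deposit.
thin = estimate_thickness(transmitted=0.8, incident=1.0)
thick = estimate_thickness(transmitted=0.2, incident=1.0)
```

Calibrating `attenuation_coeff` against deposits of known thickness would turn the relative estimate into an absolute one.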
The controller then processes the images it receives from the vision system and, based on errors, adjusts the feed rate and direction of the printer.
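The actual controller is a learned policy, but the closed-loop idea — measure thickness, compare to target, correct the feed rate — can be sketched with a simple proportional rule (all names, the gain, and the rate bounds are hypothetical):

```python
def adjust_feed_rate(feed_rate, measured_thickness, target_thickness,
                     gain=0.5, min_rate=0.1, max_rate=2.0):
    """Proportional stand-in for the learned control policy.

    If the deposit is too thick, slow the material feed;
    if too thin, speed it up. The result is clamped to the
    printer's allowable feed-rate range.
    """
    error = measured_thickness - target_thickness
    new_rate = feed_rate - gain * error
    return max(min_rate, min(max_rate, new_rate))
```

In the researchers' system, this hand-tuned rule is replaced by a neural network that maps vision-system observations directly to feed-rate and direction adjustments.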
To train the controller, researchers used reinforcement learning, in which the model learns through trial-and-error with a reward. The model was tasked with selecting printing parameters that would create a certain object in a simulated environment. After being shown the expected output, the model was rewarded when the parameters it chose minimized the error between its print and the expected outcome.
In this case, an error means the model either dispensed too much material, placing it in areas that should have been left open, or did not dispense enough, leaving open spots that should be filled. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more accurate.
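The reward described above can be sketched as a penalty on both kinds of error, treating a layer as a binary occupancy grid. This is a simplified illustration, not the paper's exact reward function:

```python
import numpy as np

def print_reward(printed, target):
    """Negative count of mismatched cells between printed and target layers.

    `printed` and `target` are binary occupancy grids (1 = material).
    Over-deposition (material where the target is open) and
    under-deposition (open spots that should be filled) are
    penalized equally; a perfect print scores 0.
    """
    over = np.logical_and(printed == 1, target == 0).sum()
    under = np.logical_and(printed == 0, target == 1).sum()
    return -int(over + under)

target = np.array([[1, 1, 0],
                   [1, 1, 0]])
printed = np.array([[1, 1, 1],   # one cell over-filled
                    [1, 0, 0]])  # one cell under-filled
# print_reward(printed, target) → -2 (one over- plus one under-deposited cell)
```

A reinforcement-learning agent trained against a reward like this would adjust its printing parameters toward policies whose simulated prints match the expected output.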
In practice, conditions typically change due to slight variations or noise in the printing process, so the researchers created a numerical model that approximates noise from the 3D printer. They used this to add noise to the simulation, which led to more realistic results.
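One common way to make a simulator more realistic, consistent with the noise model described above, is to perturb each commanded deposition with random noise. A minimal sketch, assuming simple Gaussian noise (the paper's actual noise model is fit to the real printer):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_deposition(commanded_amount, noise_std=0.05):
    """Return the amount of material the simulated printer actually
    deposits: the commanded amount plus Gaussian noise, a stand-in
    for the numerical noise model fit to the physical printer."""
    return commanded_amount + rng.normal(0.0, noise_std)
```

Training the control policy against this perturbed simulator helps it tolerate the slight variations it will encounter on real hardware.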
When the team tested the controller, it printed objects more accurately than any other control method they evaluated, and it performed especially well at infill printing. Other controllers deposited so much material that the printed object bulged up, but the researchers’ controller adjusted the printing path so the object stayed level.
Next, the researchers want to develop controllers for other manufacturing processes; see how the approach can be modified for scenarios where there are multiple layers of material, or multiple materials being printed at once; and use AI to recognize and adjust for viscosity in real-time.