Soon-Jo Chung is Bren Professor of Aerospace in the Division of Engineering and Applied Science (EAS) at Caltech and a research scientist at the Jet Propulsion Laboratory. He and his team developed a method that uses a deep neural network to help autonomous drones “learn” to land more safely and smoothly.

Tech Briefs: What motivated you to start this project?

Soon-Jo Chung: We talk a lot about AI and machine learning these days, but it is still a challenge to apply these techniques to complex dynamic physical systems such as drones, biologically inspired robots, or spacecraft. So, we started looking at the intersection of machine learning and dynamic robotic systems that operate in complex environments. We wanted to focus on aerodynamics and flight dynamics, so we chose drones. They are subject to wind gusts, turbulence, and additional ground effects when they land or take off, or when they fly close to walls or tables. I started working on this problem together with Caltech AI experts Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences, and Yisong Yue, Assistant Professor of Computing and Mathematical Sciences, thanks to the newly created Center for Autonomous Systems and Technologies (CAST) at Caltech. We were a team of machine learning people, robotics people, and control researchers, and we worked together to bridge the gap between safety and practicality when using machine learning to control dynamic physical systems.

Tech Briefs: Is there a future for your approach beyond this specific application?

Dr. Chung: Absolutely. We’re focusing only on landing for now because that way we can repeatably generate wind gusts and turbulence, or, more precisely, ground effects: we can land an aircraft over and over and see how our controller performs. However, in the very near future (in fact, we’re already working on it), we have a giant array of 1,296 computer fans to generate all different sorts of winds, including swells, vortex flow, and turbulent air like a hurricane. We can generate uniform flow and wind gusts anywhere within that volume, so we can fly our drone and test our AI-based controller against these very challenging wind disturbances.

Tech Briefs: Will you test in-flight stabilization as well as landing?

Dr. Chung: That’s our current work. We need to generate wind gusts, and we have the right facility to do so without having to deploy drones outdoors, which takes a lot of time and where the winds would be random anyway.

Tech Briefs: Did you generate these winds when you tested the lander?

Dr. Chung: No, not yet. When you’re landing in different orientations, speeds, and altitudes, you essentially end up generating these very different complex wind patterns.

Tech Briefs: Are the drones you tested fully autonomous?

Dr. Chung: Yes. Although we have our own custom fully autonomous drones, for this experiment, we used a commercial off-the-shelf drone with our custom neural lander controller implemented.

Tech Briefs: What inputs are fed to the controller for feedback?

Dr. Chung: We have a deep neural network as a feedforward prediction term in the controller. When you train that neural network, the input is the drone’s full state as a function of time: position, 3D velocity in x, y, and z, orientation (attitude), and angular rate. The output is the additional force the drone will need because of the landing and the interaction with the ground. That output commands the controller to adjust the rotor speeds to provide the required additional force. Since our drone is a quadrotor, we can individually control the four rotor speeds.
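
To make that structure concrete, here is a minimal sketch of what such a learned feedforward term might look like. It is not the actual Neural Lander implementation; the 12-dimensional state layout, network size, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not the published Neural Lander code).
# The network maps the drone's state to a predicted residual force, e.g. ground
# effect. Assumed 12-D state: position (3), velocity (3), attitude as
# roll/pitch/yaw (3), and angular rate (3).
class ResidualForceNet(nn.Module):
    def __init__(self, state_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted disturbance force (N) in x, y, z
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


# Training sketch: in practice the states and measured residual forces would
# come from logged landing flights; random placeholders stand in for them here.
model = ResidualForceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.randn(256, 12)   # placeholder flight states
forces = torch.randn(256, 3)    # placeholder measured residual forces
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(states), forces)
    loss.backward()
    optimizer.step()
```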

Tech Briefs: Are these standard PID controllers?

Dr. Chung: No. Ours is a nonlinear controller, where we take into account the nonlinear dynamics. The largest of these are caused by the ground effects, which are very difficult to predict or model. We decided to bypass the effort to develop a sophisticated system-identification model, since there is no model of ground effects in the current academic literature that captures the landing orientation and speed in a 3D world. Instead, we are using a deep neural network to predict that behavior. The word prediction is very important; we use the prediction for the controller design. The “baseline” we use for comparison is a nonlinear tracking controller, which is not a standard PID but is more sophisticated, in that it can handle the nonlinear dynamics. It does have a proportional term, a derivative term, and even an integral term, but in a nonlinear form.
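
As a rough illustration of how the prediction term enters the control law, the sketch below combines proportional, derivative, and integral feedback on the tracking error with the learned disturbance estimate. The gains, signs, and exact structure are assumptions for illustration, not the published controller.

```python
import numpy as np

def tracking_force(m, p, v, p_des, v_des, a_des, e_int, f_hat,
                   Kp=8.0, Kd=4.0, Ki=0.5, g=np.array([0.0, 0.0, -9.81])):
    """Total world-frame force the rotors must produce (illustrative form only).

    m                    : vehicle mass (kg)
    p, v                 : measured position and velocity (3-vectors)
    p_des, v_des, a_des  : desired position, velocity, and acceleration
    e_int                : running integral of the position error (kept by the caller)
    f_hat                : disturbance force predicted by the neural network
                           (pass zeros to recover a baseline without learning)
    """
    e_p = p_des - p                     # position tracking error
    e_v = v_des - v                     # velocity tracking error
    f_nominal = m * (a_des + Kp * e_p + Kd * e_v + Ki * e_int) - m * g
    return f_nominal - f_hat            # cancel the predicted ground-effect force
```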

Tech Briefs: According to your press release, your system decreases vertical error by 100%. Could you explain what that means?

Dr. Chung: We input a desired landing trajectory for the controller to follow. With the baseline controller, the drone couldn’t reach the ground, which means there was a steady-state error. Because the baseline controller has maybe a 10-cm error, the drone hovers above the pad and then drops instead of smoothly landing. Our controller, on the other hand, was able to track the desired trajectory with a much smaller error and then land safely. Reducing that steady-state vertical error essentially to zero is what we mean when we say our controller reduced the error by 100%.

Tech Briefs: Are there practical uses for your technique?

Dr. Chung: There are a lot, because all drones have to land at some point. In fact, landing is the trickiest maneuver for any aircraft, as you know. That is especially true for any vertical take-off and landing (VTOL) craft like a quadrotor drone or a helicopter. Landing presents the greatest challenge because of the ground effects, and also because of the difficulty of perfectly controlling the orientation of the VTOL aircraft under additional disturbances such as sidewinds. So, we are tackling that problem first. But at the same time, we are looking at an autonomous flying ambulance that can actually carry human passengers. The bigger the drone, the more complex and challenging the ground effects become. So, our deep neural controller will significantly contribute to enhancing the safety of this kind of drone system.

The second application I want to mention is a project with a scientist at Caltech in which an autonomous drone is deployed in a box on a remote island, for example. The drone can take off autonomously from the box, survey the island, and take a lot of images that can be processed into a 3D map. Then it can autonomously land on a very small landing pad very precisely. That’s an example of how the neural controller complements a precise navigation system: even if you have a very precise navigation system, without a good controller the drone can’t land on the landing pad precisely.

Tech Briefs: When might this be commercialized?

Dr. Chung: Because we have already implemented our software in actual hardware, pushing it into the commercial domain would not take long. The main question is making sure that our algorithm is scalable and can be translated to the many different types of drones. If there is interest in this kind of technology, I expect that within a couple of years you will see deep neural network-based control systems applied to commercial drones.

Tech Briefs: What other physical systems besides drones do you see deep neural networks being used in?

Dr. Chung: There are a lot of joint projects between Caltech and JPL that are pushing the envelope of autonomy in space. Spacecraft dynamics are slower than drone dynamics because the time scales in space are longer. But there is a huge challenge: once you launch a spacecraft, it’s very difficult to communicate with it directly or control it remotely. You therefore need an increased level of autonomy, especially for travel into deep space, to Mars or even beyond. A particular project I’m working on is developing an autonomous navigation system for a spacecraft approaching a comet. There is some similarity between our neural lander work and that kind of project. We’re developing a neural network (machine learning)-based prediction model for the navigation system, so that from successive video images of an unknown comet you can generate a shape model of it, along with the relative location of your spacecraft and the position and orientation of the comet, in real time. For this, we use a sophisticated neural network model to improve the accuracy of our estimation system. And obviously, if a spacecraft has to land on a planet or on a small body like a comet, our neural lander system would be very applicable.
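
As a very rough sketch of the kind of image-driven estimation loop described here (the model interface, inputs, and filtering step are all assumptions for illustration, not the actual JPL system):

```python
import numpy as np

def run_navigation_loop(frames, pose_model, alpha=0.3):
    """Fuse per-frame neural pose predictions into a smoothed real-time track.

    frames     : sequence of camera images of the comet
    pose_model : assumed callable mapping an image to a relative pose estimate
                 [x, y, z, roll, pitch, yaw] of the spacecraft w.r.t. the comet
    alpha      : blending weight of a simple exponential smoother
    """
    smoothed, track = None, []
    for frame in frames:
        pose = np.asarray(pose_model(frame))  # learned per-frame estimate
        smoothed = pose if smoothed is None else (1 - alpha) * smoothed + alpha * pose
        track.append(smoothed)
    return track
```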

An edited version of this interview appeared in the August issue of Tech Briefs.