An ultra-low-power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC), which operates on milliwatts of power, could help intelligent swarm robots operate for hours instead of minutes.

A robotic car controlled by an ultra-low-power hybrid chip is shown in an arena to demonstrate its ability to learn and collaborate with another robot. (Photo: Allison Carter, Georgia Tech)

To conserve power, the chips use a hybrid digital-analog time-domain processor in which the pulse width of signals encodes information. The neural network IC accommodates both model-based programming and collaborative reinforcement learning, potentially providing the small robots with better capabilities for reconnaissance, search-and-rescue, and other missions.
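To make the idea of time-domain processing concrete, the sketch below shows, in purely illustrative Python, how a value can be carried by how long a signal stays high rather than by a digital word or an analog voltage level. The pulse width, scale factor, and weighted-sum routine are assumptions for illustration only; they do not describe the chip's actual circuitry.

```python
# Conceptual sketch of time-domain encoding (not the chip's actual design):
# a normalized value is represented by a pulse width in seconds.

T_MAX = 1e-3  # assumed full-scale pulse width: 1 ms corresponds to a value of 1.0

def encode(value: float) -> float:
    """Map a normalized value in [0, 1] to a pulse width in seconds."""
    return max(0.0, min(1.0, value)) * T_MAX

def decode(pulse_width: float) -> float:
    """Recover the normalized value from a measured pulse width."""
    return pulse_width / T_MAX

def weighted_sum(values, weights):
    """A weighted sum, the core of a neural-network layer, expressed as
    an accumulation of pulse widths scaled by their weights."""
    total_time = sum(encode(v) * w for v, w in zip(values, weights))
    return total_time / T_MAX

print(weighted_sum([0.2, 0.8, 0.5], [0.5, 0.3, 0.2]))  # ~0.44
```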

Small robotic cars powered by the ASICs demonstrated the use of inertial and ultrasound sensors to determine their location and detect objects around them. Information from the sensors goes to the hybrid ASIC, which serves as the “brain” of the vehicles. The ASIC's commands then go to a Raspberry Pi controller, which drives the electric motors.
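The control flow described above can be pictured as a simple sense-decide-act loop. The sketch below uses hypothetical function names and thresholds (the article does not describe the actual software stack) to show how sensor readings feed the ASIC's decision and how the resulting command reaches the motors through the Raspberry Pi.

```python
# Illustrative control loop with placeholder interfaces; all names and
# values here are assumptions, not the demonstration vehicles' real code.

import time

def read_sensors():
    """Placeholder for inertial and ultrasound readings."""
    return {"imu": (0.0, 0.0, 0.0), "ultrasound_m": 1.2}

def asic_infer(readings):
    """Placeholder for the hybrid ASIC's neural-network output."""
    steer = -0.5 if readings["ultrasound_m"] < 0.5 else 0.0  # veer away from close obstacles
    return {"throttle": 0.3, "steer": steer}

def drive_motors(command):
    """Placeholder for the Raspberry Pi forwarding commands to the motor drivers."""
    print(f"throttle={command['throttle']:.2f} steer={command['steer']:.2f}")

for _ in range(100):
    drive_motors(asic_infer(read_sensors()))
    time.sleep(0.05)  # ~20 Hz control loop
```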


In palm-sized robots, three major systems consume power: the motors and controllers used to drive and steer the wheels, the processor, and the sensing system. In the cars built for the demonstration, the low-power ASIC means that the motors consume the bulk of the power. The team is working on motors that use microelectromechanical systems (MEMS) technology and can operate with far less power than conventional motors. Palm-sized robots with such efficient motors and controllers could run for several hours on AA batteries.
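A back-of-the-envelope estimate illustrates why the motors, not the milliwatt-class processor, dominate the power budget. Every number in the sketch below is an assumption chosen for illustration; the article gives no specific power or battery figures.

```python
# Illustrative runtime estimate with assumed numbers. Two AA alkaline
# cells store roughly 2 x 1.5 V x 2.5 Ah ≈ 7.5 Wh.

battery_wh = 2 * 1.5 * 2.5   # ≈ 7.5 Wh

# Assumed average power draws in watts
processor_w = 0.010          # milliwatt-class hybrid ASIC
sensing_w = 0.050            # inertial + ultrasound sensors
motors_w = 1.0               # hypothetical efficient motors and drivers

total_w = processor_w + sensing_w + motors_w
print(f"Estimated runtime: {battery_wh / total_w:.1f} h")  # ≈ 7 h
```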

The system can be programmed to follow model-based algorithms, and it can also learn from its environment through a reinforcement-learning scheme that rewards better performance over time. The neural network starts with a predetermined set of weights so the robot does not crash immediately or report erroneous information. When it is deployed in a new location, the environment will contain some structures the robot already recognizes and others the system will have to learn. The system then makes decisions on its own and gauges the effectiveness of each decision to optimize its motion.
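The article does not name the learning algorithm, but the idea of starting from pre-trained weights and refining behavior from rewards can be sketched with ordinary tabular Q-learning. The environment, states, actions, and starting values below are all assumptions for illustration.

```python
# Minimal reinforcement-learning sketch: pre-set values give sensible
# behavior from the start, and on-board updates improve it over time.

import random

ACTIONS = ["forward", "left", "right"]

# Assumed pre-trained starting values: mildly prefer driving forward.
q = {(s, a): (0.5 if a == "forward" else 0.1)
     for s in ("clear", "obstacle") for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Toy environment: penalize driving straight into an obstacle."""
    reward = -1.0 if (state == "obstacle" and action == "forward") else 0.1
    next_state = random.choice(["clear", "obstacle"])
    return reward, next_state

state = "clear"
for _ in range(1000):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    reward, next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print(max(ACTIONS, key=lambda a: q[("obstacle", a)]))  # learns to turn rather than drive ahead
```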

For more information, contact John Toon at 404-894-6986.