Angel Perez Garcia, a student at NTNU in Norway, uses the movements of his eyes, eyebrows, and other parts of his face to control a robot. "With my eyebrows, I can select which of the robot's joints I want to move," explains Angel. Facial grimaces generate strong electrical activity across the head, which can be picked up as EEG signals, and the same happens when Angel concentrates on a symbol, such as a flashing light, on a computer monitor. In both cases, electrodes read the activity, and the signals are then interpreted by a processor that sends a message to the robot, making it move in a pre-defined way.
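The selection step described above can be sketched in a few lines. This is a hypothetical illustration, not NTNU's actual system: it assumes each deliberate facial movement shows up as a burst of amplitude on an electrode channel, and each detected burst cycles the selection to the next robot joint. The joint names, threshold, and sample values are all made up.

```python
# Illustrative sketch: a burst of electrical activity (e.g. an eyebrow
# raise) cycles the selection to the next robot joint. All names and
# thresholds here are hypothetical.

JOINTS = ["base", "shoulder", "elbow", "wrist"]

def detect_burst(samples, threshold=100.0):
    """Treat any amplitude above the threshold as a deliberate movement."""
    return max(abs(s) for s in samples) > threshold

def select_joint(current_index, samples):
    """Advance to the next joint whenever a burst is detected."""
    if detect_burst(samples):
        return (current_index + 1) % len(JOINTS)
    return current_index

# A quiet window leaves the selection unchanged; a burst window
# moves it from "base" to "shoulder".
idx = 0
idx = select_joint(idx, [2.0, -3.5, 1.2])     # quiet window
idx = select_joint(idx, [5.0, 140.0, -20.0])  # burst window
print(JOINTS[idx])
```

A real system would of course filter and classify the signals rather than apply a single threshold, but the dispatch logic, mapping detected events to joint selections, has this shape.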
The next challenge was to find out how a robot can be trained to imitate human movements. This was solved with a system in which the robot is guided by a Kinect camera of the type used in gaming technology. To demonstrate, a user stands about a meter and a half in front of the camera, raises their right hand, and makes a clicking sound. This causes the camera to register the person and trace the movements of their hand. When they move their hand up and to the right, the robot imitates the movement.
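The mirroring step can be sketched as a coordinate mapping: the camera reports the hand position in its own image frame, and that position is scaled linearly into the robot's workspace so the robot follows the hand. The frame size and workspace limits below are invented illustrative values, not those of the actual setup.

```python
# Hypothetical mapping from a camera's pixel frame to a robot workspace.
CAMERA_FRAME = (640, 480)               # assumed image size in pixels
WORKSPACE = ((-0.5, 0.5), (0.0, 1.0))   # assumed robot x/y limits (meters)

def hand_to_robot_target(hand_px):
    """Map an (x, y) pixel position to a robot (x, y) target in meters."""
    target = []
    for value, size, (lo, hi) in zip(hand_px, CAMERA_FRAME, WORKSPACE):
        fraction = value / size              # 0.0 .. 1.0 across the frame
        target.append(lo + fraction * (hi - lo))
    return tuple(target)

# Moving the hand to the center of the frame sends the robot
# to the center of its workspace.
print(hand_to_robot_target((320, 240)))  # -> (0.0, 0.5)
```

In practice the Kinect reports 3-D skeleton joints rather than raw pixels, but the idea is the same: each tracked hand position becomes a target position for the robot.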
The Kinect camera has built-in algorithms that can trace the movements of a hand. The data is translated into the position they want the robot to assume, and a communications link is set up between the sensors in the camera and the robot. In this way, the robot receives a reference along the lines of "we want you to move over here," and a built-in regulator then computes how it can achieve the movement and how much current the motor requires to carry it out.
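The "built-in regulator" idea can be illustrated with the simplest possible controller: a proportional regulator that drives the motor in proportion to the error between the reference position and the current position. The gain, command limit, and crude motor model below are all assumptions for the sketch; a real robot controller is far more involved.

```python
# Illustrative proportional regulator: the motor command is proportional
# to the position error, clipped to the motor's allowed command range.

def p_controller(reference, position, gain=2.0, limit=1.0):
    """Return a clipped motor command driving position toward reference."""
    error = reference - position
    command = gain * error
    return max(-limit, min(limit, command))

# Simulate a joint stepping toward a 0.5 m reference over a few cycles,
# using a crude stand-in for the motor and joint dynamics.
position = 0.0
for _ in range(20):
    command = p_controller(0.5, position)
    position += 0.1 * command   # toy plant model: command moves the joint
print(round(position, 3))       # converges close to the 0.5 reference
```

This captures the loop the paragraph describes: the reference comes in from the camera, the regulator compares it with the current state, and the resulting command determines how hard the motor is driven.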