
Technological advances have always been geared toward improving efficiency in industry, medicine, everyday life and a host of other fields, and robotics has been a crucial factor in that push. Scientists from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology are developing a robot that does not need to be physically operated in order to carry out commands.

Instead, it reads commands from the mind of its handler by collecting data on brain activity using electroencephalography (EEG). The researchers used EEG to monitor brain activity and detect when the human observer noticed an error by the robot; the robot then responded by correcting the mistake. Rather than having to look at something corresponding with the robot's task, the handler only has to think about the task.
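For illustration only, here is a minimal Python sketch of that kind of feedback loop. Everything in it (the `read_eeg_window` and `looks_like_error` placeholders and the toy `Robot` class) is invented for this sketch and is not the MIT system's code.

```python
# Minimal sketch of the closed-loop idea described above: stream EEG windows,
# flag ones that look like an error signal, and tell the robot to switch its
# choice. All names here are hypothetical placeholders.
import random

def read_eeg_window():
    """Pretend EEG acquisition: return a short window of samples."""
    return [random.gauss(0.0, 1.0) for _ in range(64)]

def looks_like_error(window, threshold=2.5):
    """Stand-in detector: treat a large deflection as an error-related signal."""
    return max(abs(s) for s in window) > threshold

class Robot:
    def __init__(self):
        self.choice = "object_A"

    def correct(self):
        # Switch to the other object when the observer signals a mistake.
        self.choice = "object_B" if self.choice == "object_A" else "object_A"

robot = Robot()
for _ in range(100):                 # supervision loop
    window = read_eeg_window()
    if looks_like_error(window):     # observer's brain flagged a mistake
        robot.correct()
print("final choice:", robot.choice)
```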

[Check out: 5 things that robots can never do better than humans]

How are these robots mind-controlled?

Researchers created a feedback system that allows the robot to correct its errors with ease. The mechanism works in real time thanks to machine-learning algorithms that can classify brain waves within 10 to 20 milliseconds.
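To make that classification step concrete, the sketch below trains a simple linear discriminant classifier to label short EEG windows as "error signal" or "no error". The choice of classifier and the synthetic data are assumptions for illustration, not the team's actual pipeline.

```python
# A sketch of the classification step, not the authors' pipeline: train a
# linear classifier on labelled EEG windows. Linear discriminant analysis is
# a common choice for ErrP detection and is assumed here; the data is fake.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_epochs, n_features = 400, 48             # e.g. channels x time points, flattened

# Synthetic stand-in for labelled EEG epochs: "error" epochs get a small offset.
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)      # 1 = observer saw an error
X[y == 1] += 0.3

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Once trained, labelling a single new window is one fast linear operation,
# which is what makes millisecond-scale, real-time classification feasible.
print("held-out accuracy:", clf.score(X_test, y_test))
```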


According to CSAIL director Daniela Rus, the senior author of the team's research paper, the handler only has to mentally agree or disagree with what the robot is doing. The research was published on Monday and will be presented at the IEEE International Conference on Robotics and Automation in Singapore.

In the study, the researchers describe how they collected EEG data from volunteers as those individuals watched a common industrial robot choose which of two objects to pick.
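A typical way to prepare such recordings for analysis is to cut out short windows time-locked to the moments the robot commits to a choice. The sketch below shows the idea; the sampling rate, channel count and event times are made-up values for illustration only.

```python
# Hedged sketch of per-trial "epoching": extract a short window around each
# moment the robot made its choice. All numbers here are assumptions.
import numpy as np

fs = 256                                   # assumed sampling rate in Hz
eeg = np.random.randn(8, 60 * fs)          # 8 channels, 60 s of fake data
event_samples = [2 * fs, 10 * fs, 25 * fs] # moments the robot made a choice

def epoch(data, onset, pre=0.2, post=0.8, fs=fs):
    """Return the window from `pre` s before to `post` s after the event."""
    start, stop = onset - int(pre * fs), onset + int(post * fs)
    return data[:, start:stop]

epochs = np.stack([epoch(eeg, s) for s in event_samples])
print(epochs.shape)   # (n_events, n_channels, n_samples per epoch)
```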

Those puzzling brainwaves

Frank Guenther's neuroscience laboratory had previously studied the brain signals monkeys produce when a mistake is made, and the objective here was to investigate those signals in human beings and how they could be used to interact with robots. Noticing an error is something the human brain does automatically, yet it is surprisingly complicated and difficult to program into a machine. The researchers therefore capitalize on the fact that a watching human brain detects an error almost instantly.

In practical applications, this could help with supervising robots in factories or in driverless cars, because a robot can be programmed to react the moment its human operator notices a mistake. The monitoring of brain signals is, of course, just one piece of the puzzle.

Fully controlling a robot with the human brain would take more than that. However, this research on intuitive human-to-robot interaction opens up possibilities for real-world applications that require human oversight.

Refining the system

Ultimately, though, the system's accuracy needs a lot of improvement. In real-time experiments, it classified brain signals as error-related potentials (ErrPs) with only slightly better than 50 percent success, which means that nearly half the time it would fail to notice the corrections from the observer.

In offline analysis, the system got it right only about 65 percent of the time. However, when the machine missed an ErrP signal and failed to correct course, the human observer produced a second, stronger ErrP, according to research scientist Stephanie Gil. When those secondary signals were analyzed offline, performance improved significantly and could reach upwards of 90 percent in the future.
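The sketch below illustrates that two-pass idea with a toy detector; the scoring function and thresholds are hypothetical and only meant to show how a stronger secondary signal becomes easier to catch when the first pass misses.

```python
# Illustrative sketch (not the authors' code) of the two-pass idea: if the
# first window after the robot's choice is not classified as an ErrP but the
# robot was wrong, the observer tends to produce a second, stronger ErrP,
# which a second detection pass can catch.
def detect_errp(window, threshold):
    """Stand-in detector: score a window and compare against a threshold."""
    score = sum(abs(s) for s in window) / len(window)
    return score > threshold

def supervise(primary_window, secondary_window):
    if detect_errp(primary_window, threshold=1.0):
        return "correct course"                    # caught on the first pass
    if detect_errp(secondary_window, threshold=1.0):
        return "correct course (secondary ErrP)"   # caught on the second pass
    return "keep going"

# The secondary signal is stronger, so even a simple detector picks it up.
print(supervise(primary_window=[0.2, -0.3, 0.1],
                secondary_window=[1.8, -2.1, 1.5]))
```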

Naturally, the next step is to detect those secondary signals in real time and move closer to the objective of controlling robots accurately. That could prove problematic, since the system needs to be told when to look out for the ErrP signal. At present, this is done using mechanical switches that are activated when the robot arm starts to move.
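The sketch below shows that gating idea in miniature: the detector only scans a short stretch of windows opened by an external trigger standing in for the mechanical switch. The function names and window count are assumptions for illustration.

```python
# Sketch of gated detection: only look for ErrPs during the window that
# follows an external trigger (here a flag standing in for the switch that
# fires when the robot arm starts to move). Names are hypothetical.
import random

def arm_started_moving():
    """Placeholder for the mechanical-switch trigger described above."""
    return True

def classify_window():
    """Stand-in ErrP detector over one short EEG window."""
    return random.random() > 0.95     # rarely fires in this toy example

def watch_for_errp(n_windows=25):
    """Only scan the fixed number of windows that follow the trigger."""
    return any(classify_window() for _ in range(n_windows))

if arm_started_moving():              # the switch tells the system *when* to look
    print("ErrP detected:", watch_for_errp())
```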

[More: Robots in human skin: robots will soon look even more like us]
