Aptima Develops Sense-making System for Robots Inspired by Neuroscience
Robots can be ideal in situations where humans are not, like entering a radioactive facility or disposing of explosive ordnance. Even so, these mobile machines still require considerable human oversight, as they lack the intelligence to respond to the unexpected on their own.
If robots are to become more autonomous and useful, how can they be made to understand the surroundings and situations that would be obvious to humans?
To address this challenge, Aptima has developed Cognitive Patterns, a knowledge-based, collaborative sense-making system for robots to better recognize, adapt to, and intelligently work with their human counterparts in novel situations. What makes Cognitive Patterns distinct is an architecture that borrows from the neuroscience of human perception and sense-making. First, the high-level knowledge on board the robot is combined with lower level sensor data so the robot can recognize a situation as much as possible on its own, just as humans do. Second, when confronted with ambiguous information or scenarios that don’t match its current knowledge, the system blends existing concepts to generate new knowledge for the robot, akin to the sense-making mind. Networked with the robot, the human operator can adjust how it categorizes objects, people, and environments, boosting the robot’s high-level knowledge and ability to draw conclusions from its sensory data.
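The three-part loop described above — match sensor data against high-level knowledge, blend existing concepts when nothing matches, and let a networked operator adjust categories — can be illustrated with a toy sketch. This is purely hypothetical code written for this article: the class names, feature vectors, and blending rule are invented assumptions, not Aptima's actual Cognitive Patterns implementation.

```python
# Illustrative sketch only: hypothetical names and logic, not the real system.
from dataclasses import dataclass


@dataclass
class Concept:
    """High-level knowledge: a named prototype in feature space."""
    name: str
    prototype: list[float]  # e.g. averaged sensor features


def similarity(a: list[float], b: list[float]) -> float:
    """Bottom-up match score: negative squared distance, higher is better."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))


class SenseMaker:
    """Toy top-down/bottom-up loop: match sensor data to known concepts,
    blending concepts into new knowledge when nothing matches well."""

    def __init__(self, concepts: list[Concept], threshold: float = -1.0):
        self.concepts = concepts
        self.threshold = threshold

    def recognize(self, observation: list[float]) -> Concept:
        ranked = sorted(self.concepts,
                        key=lambda c: similarity(observation, c.prototype),
                        reverse=True)
        best = ranked[0]
        if similarity(observation, best.prototype) >= self.threshold:
            return best  # recognized on its own, no operator needed
        # Ambiguous input: blend the two nearest concepts into new knowledge.
        a, b = ranked[0], ranked[1]
        blended = Concept(
            name=f"{a.name}+{b.name}",
            prototype=[(x + y) / 2 for x, y in zip(a.prototype, b.prototype)],
        )
        self.concepts.append(blended)
        return blended

    def operator_adjust(self, old_name: str, new_name: str) -> None:
        """Networked human operator relabels a category, boosting knowledge."""
        for c in self.concepts:
            if c.name == old_name:
                c.name = new_name
```

In this sketch, an observation close to a known prototype is recognized directly, while a novel observation triggers concept blending; the operator hook stands in for the human adjusting how the robot categorizes what it sees.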
Aptima, which applies expertise in human-inspired machine systems, developed the Cognitive Patterns prototype for DARPA's Defense Sciences Office and the US Army Research Laboratory’s Cognitive Robotics team. The ROS-compliant technology is expected to advance a new class of robots with higher level decision-making, in turn, lowering pre-mission preparation costs, minimizing the need for human intervention, and increasing mission flexibility.
“Even with their state-of-the-art sensors, robots aren’t capable of recognizing what they haven’t seen before, which severely limits their usefulness,” said Webb Stacy, Aptima’s Principal Investigator for the Cognitive Patterns contract. “They’re designed to operate from the bottom up. If the images hitting a robot’s camera don’t match what’s in its brain, it’s unable to understand what would be clear to us, which requires lots of ‘hand-holding’.
“Humans, on the other hand, make sense of the world from the top down. We blend concepts in our visual memory, or mind’s eye, allowing us to recognize a friend regardless of the clothing they wear, or to identify an object as a coffee cup despite the innumerable colors, shapes, and sizes cups come in,” Stacy added.
As an automated system, Cognitive Patterns combines both top-down and bottom-up processing, allowing the robot and human to each do what they do best. It matches sensory input to abstract patterns in a manner similar to the mechanisms used for visual perception in the human brain. The result is a rich situation model shared between robot and human that could not have been created by either alone. The robot needs to interact with the operator only to receive mission orders or when something unusual or unexpected occurs.
To see Cognitive Patterns in action, watch the demo video at https://vimeo.com/62734969.
Source: Aptima, Inc.