Two-handed robot system adapts to working in new environments
Thursday, 15 June 2017

A team at Germany's Bielefeld University has developed a robotic hand system that is able to familiarise itself with novel objects, learning how to grasp them.

The two-handed robot system, developed at the university’s Cluster of Excellence Cognitive Interaction Technology (CITEC), works without prior knowledge of the characteristics of the objects it handles, such as pieces of fruit or tools. What is learned from the technology could be applied to future service robots, for example, enabling them to adapt independently to working in new environments.

The robotic hands are modelled on human hands in shape and mobility, and are attached to a robotic “brain” that learns to distinguish everyday objects like pieces of fruit, dishes or stuffed animals by their colour or shape, as well as by other factors that matter when attempting to grasp them.
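The article does not spell out how the system represents objects, but as a purely illustrative sketch, re-identification by colour and shape could work as a nearest-neighbour match over simple feature vectors. The objects, features and values below are assumptions for illustration, not CITEC’s implementation:

```python
import numpy as np

# Hypothetical feature vectors (hue, roundness, size in cm) -- invented for
# illustration; these are not the features the CITEC system actually uses.
known_objects = {
    "banana": np.array([0.15, 0.2, 20.0]),
    "apple":  np.array([0.02, 0.9, 8.0]),
    "mug":    np.array([0.60, 0.5, 10.0]),
}

def identify(features: np.ndarray) -> str:
    """Re-identify an object as its nearest neighbour in feature space."""
    return min(known_objects,
               key=lambda name: np.linalg.norm(known_objects[name] - features))

print(identify(np.array([0.14, 0.25, 19.0])))  # -> banana
```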

The project is headed by neuroinformatics professor Dr Helge Ritter, working together with sports scientist and cognitive psychologist Professor Dr Thomas Schack and robotics Privatdozent Dr Sven Wachsmuth.

According to Dr Ritter, the system learns by trying things out and exploring on its own – just as babies approach new objects. This allows it to recognise, for example, that a banana can be held and a button can be pressed.

“The system learns to recognise such possibilities as characteristics, and constructs a model for interacting and re-identifying the object,” explained Dr Ritter.
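Dr Ritter’s description amounts to learning affordances by trial and error. A minimal hypothetical sketch of such an exploration loop might look like the following, where the action repertoire, the success test and the stored model are all invented for illustration:

```python
import random

ACTIONS = ["grasp", "press", "lift", "shake"]  # hypothetical action repertoire

def try_action(obj: str, action: str) -> bool:
    """Stand-in for executing an action on the real robot and sensing success."""
    outcomes = {("banana", "grasp"): True, ("banana", "lift"): True,
                ("button", "press"): True}
    return outcomes.get((obj, action), random.random() < 0.05)

def explore(obj: str, trials: int = 5) -> set[str]:
    """Learn an object's affordances by repeatedly trying each action."""
    return {action for action in ACTIONS
            if any(try_action(obj, action) for _ in range(trials))}

model = {obj: explore(obj) for obj in ["banana", "button"]}
print(model)  # e.g. {'banana': {'grasp', 'lift'}, 'button': {'press'}}
```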

To accomplish this, the team combined work in artificial intelligence with research from other disciplines. Dr Schack’s research group, for instance, investigated which characteristics study participants perceived to be significant in grasping actions. They found that weight hardly plays a role – instead, humans rely mostly on shape and size when differentiating objects.

In another study, test subjects’ eyes were covered and they had to handle cubes that differed in weight, shape and size. Infrared cameras recorded their hand movements, allowing the researchers to study how people touch an object and which strategies they prefer for identifying its characteristics.

The robotic system was also “trained” by a human learning mentor, who instructed the robot hands using gestures and voice commands on what objects it should inspect next.

Two monitors display how the system, drawing on colour cameras and depth sensors, perceives its surroundings and reacts to instructions from humans.
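The article does not describe CITEC’s perception pipeline, but a minimal sketch of how colour and depth streams can be combined is to segment whatever lies close to the camera in the depth image and then summarise the colour of those pixels; the synthetic frame below stands in for real camera input:

```python
import numpy as np

# Synthetic RGB-D frame standing in for real camera data.
h, w = 48, 64
depth = np.full((h, w), 2.0)           # background two metres away
rgb = np.zeros((h, w, 3))
depth[10:30, 20:40] = 0.5              # an object half a metre away
rgb[10:30, 20:40] = [0.9, 0.8, 0.1]    # yellowish, banana-like

# Segment everything closer than one metre and summarise its colour.
mask = depth < 1.0
mean_colour = rgb[mask].mean(axis=0)
print(f"object pixels: {mask.sum()}, mean RGB: {mean_colour.round(2)}")
```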

“In order to understand which objects they should work with, the robot hands have to be able to interpret not only spoken language, but also gestures,” explained Sven Wachsmuth, of CITEC’s Central Labs.

“And they also have to be able to put themselves in the position of a human and ask themselves whether they have understood correctly.”
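In the spirit of that description, here is a hypothetical sketch of resolving a spoken instruction against a pointing gesture, with the system asking back whenever the reference remains ambiguous. The function, scene and phrasing are invented for illustration:

```python
def resolve_reference(utterance: str, pointed_at: str | None,
                      scene: list[str]) -> str:
    """Combine speech and gesture to pick an object; ask back when unsure."""
    named = [obj for obj in scene if obj in utterance]
    if len(named) == 1:
        return named[0]                  # speech alone is unambiguous
    if pointed_at in scene:
        # The gesture supplies a candidate; verify it was understood correctly.
        return f"Did you mean the {pointed_at}?"
    return "Which object do you mean?"

scene = ["banana", "mug", "cube"]
print(resolve_reference("pick up the banana", None, scene))  # -> banana
print(resolve_reference("pick that up", "mug", scene))       # -> asks to confirm
```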