Taming industrial robots with a new human-machine interface
Thursday, 07 January 2016

Robotics researcher Madeline Gannon has developed gesture-based control software that allows industrial robots to see people and respond to them in a shared space, opening the way for robots to work more flexibly on a wider range of tasks.

Industrial robots are not just fast, powerful and precise, but also highly adaptable: changing a robot's toolset allows it to paint, handle materials, weld, and more. For the past half-century, however, they have been confined to the factory floor, restricted to repetitive, pre-programmed tasks with little to no awareness of anything outside their code. This lack of awareness has meant that they can only be used in highly controlled environments where unpredictable objects, such as people, are strictly isolated from their work zones.

In recent years, as industry has reached the limits of automation that removes the human entirely from the equation, collaborative robots have begun to emerge. This approach aims to use machines to augment human abilities rather than replace them. For that to happen, new interfaces between humans and machines are needed.

Gannon's Quipt project is one such effort, retrofitting collaborative intelligence onto industrial robots. It combines gesture-based control software, wearable markers and a motion capture system, allowing industrial robots to safely follow, mirror and avoid humans in collaborative tasks.

In her primary use case, Gannon augmented an ABB IRB 6700 industrial robot with a Vicon motion capture system. Her software then translated the motion capture data into corresponding movement commands for the robot, using an open-source library called Robo.Op.
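To make that pipeline concrete, the sketch below shows a sense-interpret-act loop of the kind Quipt implements. It is a minimal illustration in Python, not Gannon's code: the `read_marker_position` and `send_robot_target` functions are hypothetical placeholders standing in for the Vicon data stream and the Robo.Op bridge, each of which has its own API.

```python
import math
import time

# --- Hypothetical I/O layer -------------------------------------------
# Placeholders for the real Vicon motion capture stream and the Robo.Op
# bridge to the ABB controller; these names are illustrative, not actual
# library calls.

def read_marker_position():
    """Return the tracked marker's (x, y, z) position in mm (simulated)."""
    t = time.time()
    return (500.0 + 100.0 * math.sin(t), 200.0, 800.0)

def send_robot_target(x, y, z):
    """Stream a Cartesian target pose to the robot (printed here)."""
    print(f"move tool to ({x:.0f}, {y:.0f}, {z:.0f}) mm")

# --- Sense-interpret-act loop -----------------------------------------

STANDOFF = 300.0  # keep the tool 300 mm above the marker (made-up margin)

def control_loop(hz=30, seconds=10):
    """Poll the marker and retarget the robot at a fixed rate."""
    for _ in range(hz * seconds):
        mx, my, mz = read_marker_position()        # sense the person
        send_robot_target(mx, my, mz + STANDOFF)   # act: hover above them
        time.sleep(1.0 / hz)

if __name__ == "__main__":
    control_loop()
```

In the real system, the interpretation step is richer than a fixed offset: it applies the spatial behaviours described below.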

Quipt also visualises debugging data in an Android app, so a human collaborator has a mobile, continuous view of what the robot is seeing.

According to Gannon, Quipt achieves two outcomes: it makes industrial robots safe to use outside of the highly controlled environment of the factory, and it reduces the technical skill needed to program robots to carry out tasks.

Users wear markers made from retroreflective tape on the hand, around the neck, or elsewhere on the body. The motion capture system detects these markers, allowing the robot to "see" its human controller and respond appropriately.

Quipt effectively tells the robot how the person is moving and how it should move in response, using three primitive spatial behaviours: follow, mirror, and avoid. These are basic components of how people interact with one another while working together in a shared space, which means humans can intuitively read the robot's movements and anticipate where it is going next.
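Each of these behaviours can be expressed as a simple geometric rule on the tracked marker position. The sketch below is one plausible formulation under that reading, not Quipt's actual implementation: follow closes the gap to the person while holding a standoff distance, mirror reflects their position across a fixed plane, and avoid retreats whenever the person enters a safety radius. All distances are made-up values.

```python
import math

def follow(tool, marker, standoff=300.0):
    """Step toward the marker, stopping a fixed standoff distance away."""
    dx, dy, dz = (marker[i] - tool[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= standoff:
        return tool                       # already close enough: hold
    scale = (dist - standoff) / dist      # cover only the excess distance
    return tuple(tool[i] + scale * (marker[i] - tool[i]) for i in range(3))

def mirror(marker, plane_x=0.0):
    """Reflect the marker's position across a vertical plane at x = plane_x."""
    x, y, z = marker
    return (2.0 * plane_x - x, y, z)

def avoid(tool, marker, min_dist=500.0):
    """Retreat directly away from the marker if it breaches a safety radius."""
    dx, dy, dz = (tool[i] - marker[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9  # guard divide-by-zero
    if dist >= min_dist:
        return tool                       # person at a safe distance: hold
    scale = min_dist / dist               # push out to the safety radius
    return tuple(marker[i] + scale * (tool[i] - marker[i]) for i in range(3))

# A person standing 0.9 m from the tool along the x axis:
tool, person = (0.0, 0.0, 800.0), (900.0, 100.0, 800.0)
print(follow(tool, person))   # steps toward the person, stopping 300 mm short
print(mirror(person))         # the person's position reflected across x = 0
print(avoid(tool, person))    # unchanged: the person is beyond 500 mm
```

Keeping the repertoire to a few legible geometric rules is arguably what makes the interaction safe: the robot's response is predictable from the person's own movement.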

Madeline Gannon heads MADLAB.CC, a design collective exploring computational approaches to design, craft, and interaction. She is a researcher, designer, and educator at Carnegie Mellon University.