January 31, 2019, updated 1 Feb 2019 2:27pm

Soft robots that can see, feel and perceive: Human-style perception system comes to robotics

By Lucy Ingham

Engineers have developed a sophisticated perception system for soft robots that is modelled on the way we as humans process information about our bodies and the world around us.

Comprising a motion capture system, soft sensors, a soft robotic finger and a neural network, it is designed to form the foundation for robots that can interact with their environments without any external sensors, as humans do.

Motion capture systems provide a proxy for human vision, sensors mimic touch and the finger allows for interaction with the world around it. The neural network, meanwhile, processes the data from the other systems, becoming a proxy for a brain.

In this sense, it represents the beginnings of the technology needed to create robots with a human-like appearance, reminiscent of those in sci-fi movies.

Soft robots and arbitrary sensors

The soft robot perception system, developed by engineers at the University of California San Diego, was created in a manner inspired more by organic systems than by traditional machines.

Notably, when embedding sensors into the soft robotic finger, the engineers did not adopt a uniform, tailored approach, but instead placed them arbitrarily in order to support a wide range of motions.

They then used machine learning techniques to build a system that interprets these signals as actionable data, enabling the researchers to predict both the finger’s movement and the forces applied to it. The results are being used to develop predictive models that help process sensor data.
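The paper does not detail the network architecture, but the idea of learning a mapping from redundant, arbitrarily placed sensor readings to the finger's state can be sketched as follows. This is a minimal illustration on synthetic data, assuming hypothetical sensor counts, state variables, and a simple one-hidden-layer network; it is not the researchers' actual model.

```python
import numpy as np

# Hypothetical setup: 12 redundant sensors, 3 state variables to
# recover (e.g. tip x, tip y, contact force). All data is synthetic.
rng = np.random.default_rng(0)
n_sensors, n_outputs, n_samples = 12, 3, 2000

# Synthetic ground truth: sensor readings are a noisy nonlinear
# function of the underlying state, which the network must invert.
true_state = rng.uniform(-1, 1, size=(n_samples, n_outputs))
mix = rng.normal(size=(n_outputs, n_sensors))
sensors = np.tanh(true_state @ mix) \
    + 0.05 * rng.normal(size=(n_samples, n_sensors))

# One-hidden-layer MLP trained with plain full-batch gradient descent.
hidden, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(n_sensors, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_outputs))
b2 = np.zeros(n_outputs)

for epoch in range(500):
    h = np.tanh(sensors @ W1 + b1)   # hidden activations
    pred = h @ W2 + b2               # predicted finger state
    err = pred - true_state

    # Backpropagation of the mean-squared-error loss
    d_pred = 2 * err / n_samples
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    dW1 = sensors.T @ d_h
    db1 = d_h.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

h = np.tanh(sensors @ W1 + b1)
final_loss = np.mean((h @ W2 + b2 - true_state) ** 2)
print(f"final MSE: {final_loss:.4f}")
```

Because the sensors are redundant, the network can average out noise in individual channels, which is the robustness property Tolley alludes to below.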

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” explained study senior author Michael Tolley, a professor of mechanical and aerospace engineering at the University of California San Diego.

“Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

While the approach shows significant promise, there is a long way to go before it can be used to build complex, functioning robots. Next, the researchers plan to increase the number of sensors to create a surface more similar to biological skin.

The research was published this week in the journal Science Robotics.