Humans can easily identify an object by looking at it or by feeling it, but this remains a major challenge for machines. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed a robot that can identify an object either by sight or by touch.
The team equipped a KUKA robot arm with a tactile sensor called GelSight, originally developed by Ted Adelson's group at CSAIL. The collected information was then fed to an AI model, which was trained to build a relationship between visual and tactile information.
The team recorded a total of 12,000 videos of 200 objects. The videos were then broken down into still images, and the AI used this dataset to connect tactile and visual data.
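The data-preparation step described above can be sketched as follows. This is a hypothetical illustration, not the team's actual code: the function and file names are invented, and the frames are stand-ins for the stills extracted from each recorded video. The key idea is that each visual frame is paired with the tactile (GelSight) frame captured at the same moment, giving the model supervised visual/tactile correspondences.

```python
def pair_frames(visual_frames, tactile_frames):
    """Pair visual and tactile frames recorded at the same timestep."""
    if len(visual_frames) != len(tactile_frames):
        raise ValueError("camera and sensor streams must be recorded in lockstep")
    return list(zip(visual_frames, tactile_frames))

# Toy stand-ins for stills broken out of one recorded video.
visual = [f"rgb_frame_{i}.png" for i in range(5)]
tactile = [f"gel_frame_{i}.png" for i in range(5)]

# Each entry links what the camera saw to what the sensor felt;
# a cross-modal model can then be trained on these pairs.
dataset = pair_frames(visual, tactile)
```

Training on such pairs is what lets the model predict touch from sight and vice versa, as the researchers describe below.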
“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge”, says Yunzhu Li, CSAIL Ph.D. student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”
So far, the robot can reliably identify only a handful of known objects, and only in a controlled environment. The team's next step is to build a larger dataset so the robot can work in more diverse settings.
“Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘is this object hard or soft?’, or ‘if I lift this mug by its handle, how good will my grip be?’,” says Andrew Owens, a postdoctoral researcher at the University of California at Berkeley. “This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”