Robots of the future could be useful in homes and offices, thanks to MIT scientists who have developed an advanced computer vision system that lets machines inspect unfamiliar objects and accomplish specific tasks.
Breakthroughs in computer vision have enabled robots to make basic distinctions between objects. However, these systems do not truly understand objects’ shapes, so there is little a robot can do after a quick pick-up.
The new system created by researchers at the Massachusetts Institute of Technology (MIT) in the U.S., called Dense Object Nets (DON), looks at objects as collections of points that serve as a kind of visual roadmap.
This approach lets robots better understand and manipulate items, and, most importantly, allows them to pick out a specific object from a clutter of similar things. “Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said Lucas Manuelli, a PhD student at MIT.
The team views potential applications in manufacturing and in homes. “Imagine giving the system an image of a tidy house, and letting it clean while you’re at work, or using an image of dishes so that the system puts your plates away while you’re on vacation,” researchers said.
None of the training data was labelled by humans. Instead, the system is what the team calls “self-supervised”: it requires no human annotations.
The DON system essentially creates a series of coordinates on a given object, which serve as a kind of visual roadmap, to give the robot a better understanding of what it needs to grasp, and where.
The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualise an object’s 3D shape, similar to how panoramic photos are stitched together from multiple photos.
After training, if a person specifies a point on an object, the robot can take a photo of that object and identify the matching point, allowing it to pick up the object at that specified spot.
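The matching step described above can be sketched in a few lines. The following is a toy NumPy illustration, not MIT’s code: the descriptor arrays stand in for the per-pixel output a trained DON-style network would produce, and the function name and shapes are assumptions for the example.

```python
import numpy as np

def match_point(ref_descriptors, target_descriptors, ref_pixel):
    """Find the pixel in the target image whose descriptor is closest
    to the descriptor at ref_pixel in the reference image.

    ref_descriptors, target_descriptors: (H, W, D) arrays of per-pixel
    descriptors (stand-ins for a trained network's output).
    ref_pixel: (row, col) point chosen on the reference image.
    """
    d = ref_descriptors[ref_pixel]                  # descriptor at the chosen point, shape (D,)
    dist = np.linalg.norm(target_descriptors - d, axis=-1)  # (H, W) distances
    # Location of the best-matching descriptor in the target image.
    return tuple(int(i) for i in np.unravel_index(np.argmin(dist), dist.shape))

# Toy example: 4x4 "images" with 3-dimensional descriptors.
rng = np.random.default_rng(0)
ref = rng.normal(size=(4, 4, 3))
tgt = rng.normal(size=(4, 4, 3))
tgt[2, 1] = ref[0, 3]                 # plant an exact match at (2, 1)
print(match_point(ref, tgt, (0, 3)))  # -> (2, 1)
```

Because the descriptors are consistent across viewpoints, the same lookup finds the corresponding point even when the object has been moved or rotated between the two images.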