MIT model improves robots’ ability to handle, manipulate objects

An AI model built by researchers at the Massachusetts Institute of Technology helps robots better predict how they’ll interact with solid objects and liquids, improving their ability to mold deformable materials.

Yunzhu Li and Jiajun Wu, both graduate students at MIT’s Computer Science and Artificial Intelligence Laboratory, headed the study, which involved constructing and training a graph neural network to predict how a material’s individual particles move and reshape in response to touch. The full model is known as DPI-Nets, short for “dynamic particle interaction networks.”
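To give a flavor of the approach, the sketch below shows one message-passing step of a particle interaction network in PyTorch. It is a minimal illustration rather than the authors’ DPI-Nets code: the `InteractionStep` class, its layer sizes and the six-dimensional particle state (position plus velocity) are assumptions made for the example.

```python
# Minimal sketch of one message-passing step in a particle interaction
# network (illustrative only; not the authors' DPI-Nets implementation).
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    def __init__(self, state_dim=6, hidden=64):
        super().__init__()
        # Relation encoder: maps a pair of particle states to an "effect".
        self.relation = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        # Node updater: combines a particle's state with its summed effects.
        self.update = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, states, edges):
        # states: (N, state_dim) particle states; edges: (E, 2) index pairs.
        senders, receivers = edges[:, 0], edges[:, 1]
        effects = self.relation(
            torch.cat([states[senders], states[receivers]], dim=-1))
        # Sum the incoming effects for each receiving particle.
        agg = torch.zeros(states.size(0), effects.size(-1))
        agg.index_add_(0, receivers, effects)
        return states + self.update(torch.cat([states, agg], dim=-1))
```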

“Humans have an intuitive physics model in our heads where we can imagine how an object will behave if we push or squeeze it,” Li said in a press release. “Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots. We want to build this type of intuitive model for robots to enable them to do what humans can do.”

The authors first constructed a set of dynamic interaction graphs, in which thousands of nodes represent a material’s individual particles and edges link particles that influence one another, capturing the material’s complex behaviors. Those graphs served as the basis for the team’s neural network, which learned from how particle states evolved over time to predict how different materials would react to different levels of force. To make a prediction, the model implicitly accounts for a host of physical properties, such as a particle’s mass and elasticity, to estimate if and where the particle will move when disturbed.
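Because particles move, the graph can be rebuilt at every time step. Below is a minimal sketch of one such construction, assuming edges simply connect any two particles closer than a fixed radius; the paper’s exact neighborhood rule may differ.

```python
# Hedged sketch: build a dynamic interaction graph where particles are
# nodes and an edge links any two particles within a given radius.
import numpy as np

def build_graph(positions, radius=0.1):
    """positions: (N, 3) particle coordinates -> (E, 2) directed edges."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    src, dst = np.nonzero((dist < radius) & (dist > 0))  # skip self-edges
    return np.stack([src, dst], axis=1)

particles = np.random.rand(500, 3)           # toy particle cloud
edges = build_graph(particles, radius=0.15)  # rebuilt each step as particles move
```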

Li and colleagues tested the technique on a two-fingered robot dubbed “RiceGrip,” which they tasked with clamping target shapes out of deformable foam. The robot first used a depth-sensing camera and object-recognition techniques to identify the foam, then ran the authors’ neural network to estimate the positions of the material’s particles. The model added edges between neighboring particles, reconstructing the foam as a dynamic graph customized for deformable materials.
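The perception step amounts to converting a dense depth-camera point cloud into a sparser set of particle nodes. One common way to do that is voxel downsampling, shown in the hypothetical snippet below; `voxel_downsample` is an illustrative helper, not the authors’ pipeline, and the resulting particles could then be linked with a radius rule like the earlier `build_graph` sketch.

```python
# Sketch of the perception step: reduce a dense depth-camera scan to a
# manageable set of particle nodes by averaging points within each voxel.
import numpy as np

def voxel_downsample(points, voxel=0.02):
    """Keep one averaged point per occupied voxel of edge length `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

cloud = np.random.rand(20000, 3)     # stand-in for a depth-camera scan
particles = voxel_downsample(cloud)  # particle nodes for the graph
```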

Because the model had learned the material’s dynamics, the robot could anticipate how each touch would shift the particles in the graph. As it indented the foam, it continually matched the real-world positions of the particles against the target positions in the graph.
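That matching can be framed as a simple optimization: among candidate grip actions, pick the one whose predicted particle positions come closest to the target shape. The sketch below illustrates the idea with a toy `predict` function standing in for the learned dynamics model; the real system optimizes full gripper trajectories rather than a single scalar squeeze.

```python
# Sketch of shaping control as trajectory optimization over a learned model.
import numpy as np

def predict(particles, action):
    # Toy stand-in for the learned model: squeeze particles along one
    # axis by an amount proportional to the grip depth `action`.
    squeezed = particles.copy()
    squeezed[:, 0] *= (1.0 - action)
    return squeezed

def choose_action(particles, target, candidates):
    """Return the candidate action minimizing predicted shape error."""
    errors = [np.mean((predict(particles, a) - target) ** 2)
              for a in candidates]
    return candidates[int(np.argmin(errors))]

particles = np.random.rand(200, 3)
target = particles * np.array([0.5, 1.0, 1.0])  # toy target: half as wide
best = choose_action(particles, target, np.linspace(0.0, 0.9, 10))
```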

When the observed particles diverged from the model’s predictions, the model generated an error signal and adjusted its parameters to better match the real-world physics of the material in front of it. The authors said they plan to further develop the model and its capabilities in the near future.
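In code, such an online correction amounts to a gradient step on the prediction error. The minimal sketch below uses a stand-in linear model for brevity; the spirit is the same when the thing being nudged is a learned graph network.

```python
# Sketch of the online correction loop: backpropagate the gap between
# predicted and observed particle positions to adapt the model.
import torch

model = torch.nn.Linear(3, 3)  # stand-in for the learned dynamics model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

state = torch.rand(200, 3)     # previous particle positions
observed = torch.rand(200, 3)  # particle positions seen by the camera

predicted = model(state)
error = torch.nn.functional.mse_loss(predicted, observed)  # the "error signal"
opt.zero_grad()
error.backward()
opt.step()  # the model now better fits the material in front of the robot
```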

“When children are five months old, they already have different expectations for solids and liquids,” Wu said in the release. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Li and colleagues’ paper will be presented at the International Conference on Learning Representations in May.

""

After graduating from Indiana University-Bloomington with a bachelor’s in journalism, Anicka joined TriMed’s Chicago team in 2017 covering cardiology. Close to her heart is long-form journalism, Pilot G-2 pens, dark chocolate and her dog Harper Lee.
