3D Generative Model for Robot Gripper Form Design

The 3D shape of a robot's end-effector plays a critical role in determining its functionality and overall performance. Many industrial applications rely on task-specific gripper designs to ensure robustness and accuracy. However, manual hardware design is both costly and time-consuming, and the quality of the result depends on the engineer's experience and domain expertise, which can easily become outdated or inaccurate. The goal of this work is to use machine learning to automate the design of task-specific gripper fingers. We propose Fit2Form, a 3D generative design framework that generates pairs of finger shapes to maximize design objectives (i.e., grasp success, stability, and robustness) for target grasp objects. We model these design objectives by training a Fitness network to predict their values for pairs of gripper fingers and their corresponding grasp objects. The Fitness network then supervises a 3D Generative network that produces a pair of 3D finger geometries for the target grasp object. Our experiments demonstrate that the proposed framework generates parallel-jaw gripper finger shapes that achieve more stable and robust grasps than other general-purpose and task-specific gripper design algorithms.
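The training signal described above can be sketched in a few lines. The snippet below is a hypothetical, heavily simplified illustration (module names, voxel resolution, and network sizes are all illustrative, not the authors' actual code): a generator maps an object's voxel grid to a pair of finger voxel grids, a fitness network scores the (object, fingers) pair on grasp success, stability, and robustness, and freezing the fitness network while maximizing its predicted scores yields gradients for the generator.

```python
# Hypothetical sketch of the Fit2Form training signal (illustrative only).
import torch
import torch.nn as nn

VOXEL = 16  # toy resolution; the actual system uses larger 3D grids

class Generator(nn.Module):
    """Maps an object voxel grid to a pair of finger voxel grids."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 2, 3, padding=1), nn.Sigmoid(),  # two finger grids
        )

    def forward(self, obj):          # obj: (B, 1, V, V, V)
        return self.net(obj)         # fingers: (B, 2, V, V, V)

class FitnessNet(nn.Module):
    """Predicts (grasp success, stability, robustness) for object + fingers."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, 3), nn.Sigmoid(),  # three objectives in [0, 1]
        )

    def forward(self, obj, fingers):
        return self.net(torch.cat([obj, fingers], dim=1))

def generator_loss(gen, fit, obj):
    """Negative predicted fitness; the fitness network is frozen here so
    gradients only update the generator."""
    for p in fit.parameters():
        p.requires_grad_(False)
    scores = fit(obj, gen(obj))      # (B, 3)
    return -scores.mean()
```

In this setup the fitness network plays the role of a learned, differentiable proxy for the simulated grasp evaluation, which is what lets the shape generator be trained by gradient descent rather than by querying the simulator directly.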


To appear at Conference on Robot Learning 2020.
Paper can be found on ArXiv.


Code and instructions to download data can be found on Github.


@inproceedings{ha2020fit2form,
    title={{Fit2Form}: 3{D} Generative Model for Robot Gripper Form Design},
    author={Ha, Huy and Agrawal, Shubham and Song, Shuran},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2020}
}

Technical Summary Video (with audio)

Real robot results

The following is a comparison of Fit2Form with different baselines on the same target object:

Below are a few more examples of Fit2Form-generated fingers for various target objects:

Simulation results

Below are examples of Fit2Form-generated fingers for various objects in simulation:


Columbia University in the City of New York

* denotes equal contribution


We would like to thank Alberto Rodriguez, Hod Lipson, and Zhenjia Xu for fruitful discussions, and Google for the UR5 robot hardware. This work was supported in part by the Amazon Research Award, the Columbia School of Engineering, as well as the National Science Foundation under CMMI-2037101.


If you have any questions, please feel free to contact Huy Ha.