We present a learning-based scheme for estimating clothing tightness as well as the underlying human shape from clothed 3D human scans, robustly and accurately. Our approach maps the clothed human geometry/appearance to a geometry image that we call clothed-GI. To align clothed-GIs under different clothing, we extend the parametric human model and employ skeleton detection and warping for reliable alignment. For each pixel on the clothed-GI, we extract a feature vector comprising color/texture, position, normal, etc., and train a modified conditional GAN network for per-pixel tightness prediction on a comprehensive 3D clothing dataset. Our technique significantly improves the accuracy of human shape prediction, especially under loose and fitted clothing. We further demonstrate applications of our results to human/clothing segmentation, cloth retargeting, and animation.
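The per-pixel feature vector can be pictured as stacking aligned channel maps on the clothed-GI grid. A minimal sketch, assuming a simple (color, position, normal) channel layout; the exact channels and their ordering are an illustrative assumption, not the paper's specification:

```python
import numpy as np

# Hypothetical clothed-GI resolution for illustration
H, W = 4, 4
color = np.random.rand(H, W, 3)     # RGB texture sampled on the geometry image
position = np.random.rand(H, W, 3)  # 3D surface position per pixel
normal = np.random.rand(H, W, 3)    # surface normal per pixel
normal /= np.linalg.norm(normal, axis=-1, keepdims=True)  # unit-normalize

# Per-pixel feature vector: one 9-channel map fed to the tightness network
features = np.concatenate([color, position, normal], axis=-1)
print(features.shape)  # (4, 4, 9)
```

Because every channel lives on the same aligned clothed-GI grid, the network can predict tightness per pixel with ordinary image-to-image convolutions.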
[ArxivPage] [Paper] [Code] [Video] [BibTex]
The pipeline of TightCap. The first step warps our enhanced, clothing-adapted SMPL model to the scanned mesh. We then deform the warped mesh via Multi-Stage Alignment. Next, Tightness Prediction estimates the tightness map and clothing mask from the mapped clothed-GI. In the final step, Multi-layer Reconstruction recovers the body shape from the tightness predicted on the mesh and segments the clothing.
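The multi-layer reconstruction step can be sketched as displacing the clothed surface inward by the predicted tightness. This is a hypothetical simplification, assuming tightness acts as a per-pixel offset along the surface normal; the function name and signature are illustrative:

```python
import numpy as np

def recover_body_layer(clothed_pos, normals, tightness):
    """Sketch: move each clothed-GI pixel inward along its (unit) normal
    by the predicted tightness to obtain the body layer.

    clothed_pos: (H, W, 3) positions, normals: (H, W, 3) unit normals,
    tightness: (H, W) non-negative offsets.
    """
    return clothed_pos - tightness[..., None] * normals

# Toy example: a flat patch with all normals pointing +z
H, W = 2, 2
clothed_pos = np.zeros((H, W, 3))
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
tightness = np.full((H, W), 0.05)  # e.g. 5 cm of cloth-to-body distance

body = recover_body_layer(clothed_pos, normals, tightness)
print(body[0, 0])  # [ 0.    0.   -0.05]
```

In the actual system the recovered layer is constrained by the parametric body model; this sketch only illustrates the geometric role of the tightness map.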
The Clothing Tightness Dataset (CTD) is coming soon. Until then, you can download our sample dataset via this Google Drive link.
If this paper helps your research, please cite it in your publications: