TightCap: 3D Human Shape Capture
with Clothing Tightness

Xin Chen1,  Anqi Pang1,  Wei Yang2,  Lan Xu2,3,  Jingyi Yu1

1ShanghaiTech University,   2DGene,   3Hong Kong University of Science and Technology

[Teaser figure]

Abstract


We present a learning-based scheme for robustly and accurately estimating clothing tightness as well as the underlying human shape from clothed 3D human scans. Our approach maps the clothed human geometry and appearance onto a geometry image that we call clothed-GI. To align clothed-GIs across different clothing styles, we extend the parametric human model and employ skeleton detection and warping for reliable alignment. For each pixel of the clothed-GI, we extract a feature vector including color/texture, position, and normal, and train a modified conditional GAN for per-pixel tightness prediction on a comprehensive 3D clothing dataset. Our technique significantly improves the accuracy of human shape prediction, especially under loose and fitted clothing. We further demonstrate our results on human/clothing segmentation, clothing retargeting, and animation.


[arXiv] [Paper] [Code] [Video] [BibTeX]
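As a concrete illustration of the per-pixel formulation above, the sketch below assembles a clothed-GI feature image from color, position, and normal channels and queries a small stand-in generator for a tightness map. This is a minimal PyTorch sketch, not our released implementation: the channel layout, resolution, and `generator` architecture are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Assumed clothed-GI channel layout: RGB color (3) + 3D position (3) + normal (3).
# H x W is the geometry-image resolution; values here are illustrative.
H, W = 256, 256
color    = torch.rand(1, 3, H, W)   # per-pixel texture color
position = torch.rand(1, 3, H, W)   # per-pixel 3D surface position
normal   = torch.rand(1, 3, H, W)   # per-pixel surface normal

# Stack the per-pixel features into one conditional input image.
features = torch.cat([color, position, normal], dim=1)  # (1, 9, H, W)

# Stand-in for the modified conditional GAN generator (pix2pix-like).
generator = nn.Sequential(
    nn.Conv2d(9, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),  # 1-channel per-pixel tightness map
)

tightness_map = generator(features)  # (1, 1, H, W)
```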

Pipeline


[Pipeline figure]
The pipeline of TightCap. We first warp our enhanced, clothing-adapted SMPL model to the scanned mesh, then deform the warped mesh via Multi-Stage Alignment. Next, Tightness Prediction estimates the tightness map and clothing mask from the mapped clothed-GI. Finally, Multi-Layer Reconstruction recovers the body shape from the predicted tightness on the mesh and segments the clothing.
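The control flow of these four stages can be summarized as follows. Every function here is a hypothetical stub standing in for the corresponding stage, not code from the TightCap implementation:

```python
# A minimal control-flow sketch of the four TightCap stages; each stub below
# stands in for the actual method described in the paper.

def warp_clothing_adapted_smpl(scan):
    return scan  # stub: fit the enhanced SMPL via skeleton detection and warping

def multi_stage_alignment(warped, scan):
    return warped  # stub: deform the warped mesh onto the scanned mesh

def map_to_clothed_gi(mesh):
    return mesh  # stub: map geometry/appearance to a clothed-GI

def predict_tightness(clothed_gi):
    return None, None  # stub: cGAN outputs a tightness map and clothing mask

def multi_layer_reconstruction(mesh, tightness, mask):
    return mesh, mask  # stub: recover the body shape and segment the clothing

def capture(scan):
    warped = warp_clothing_adapted_smpl(scan)
    aligned = multi_stage_alignment(warped, scan)
    gi = map_to_clothed_gi(aligned)
    tightness, mask = predict_tightness(gi)
    return multi_layer_reconstruction(aligned, tightness, mask)
```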

CTD Dataset


Download

The Clothing Tightness Dataset (CTD) is coming soon. In the meantime, you can download a sample of the dataset via this Google Drive link.
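After downloading, a scan from the sample can be inspected with standard mesh tooling such as trimesh. The file name below is a hypothetical placeholder; the actual archive layout may differ.

```python
import trimesh

# Hypothetical path into the downloaded sample archive.
scan = trimesh.load("CTD_sample/scan_0001.obj", process=False)

print(scan.vertices.shape)        # (V, 3) scan vertex positions
print(scan.vertex_normals.shape)  # (V, 3) per-vertex normals
```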

A detailed introduction to the dataset is coming soon.

Results


[Results figure]

Citation


Please cite this paper in your publications if it helps your research:

https://arxiv.org/abs/1904.02601
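A BibTeX entry assembled from the title, author list, and arXiv ID on this page (the citation key and year are inferred from the arXiv ID):

```bibtex
@article{chen2019tightcap,
  title   = {TightCap: 3D Human Shape Capture with Clothing Tightness},
  author  = {Chen, Xin and Pang, Anqi and Yang, Wei and Xu, Lan and Yu, Jingyi},
  journal = {arXiv preprint arXiv:1904.02601},
  year    = {2019}
}
```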

License


The TightCap dataset is freely available for non-commercial use.
For commercial use, please contact Xin Chen or Jingyi Yu.