¹The Hong Kong University of Science and Technology
²Waseda University
³The University of Hong Kong
⁴Carnegie Mellon University
⁵Nanyang Technological University
⁶Texas A&M University
†Corresponding author
UNIC learns an instance-specific neural deformation field that deforms arbitrarily complex garments to follow unseen character poses in real time. We first encode the character poses of two consecutive frames into a compact latent space. We then sample a latent vector from the learned space and concatenate it with the garment vertex coordinates. Next, the concatenation is fed into an MLP-based deformation decoder that predicts deformation offsets for all vertices. Finally, a post-processing intersection-handling step is applied to resolve intersections between the avatar mesh and the deformed garment mesh.
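The pipeline above maps naturally to a small network. Below is a minimal sketch of how a two-frame pose encoder and a per-vertex MLP deformation decoder of this kind could be structured; the module names, layer widths, and latent dimension are illustrative assumptions, not the released UNIC implementation.

```python
# A minimal sketch (assumed architecture, not the authors' code) of the pipeline above.
import torch
import torch.nn as nn

class PoseEncoder(nn.Module):
    """Encodes the character poses of two consecutive frames into a compact latent vector."""
    def __init__(self, pose_dim: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, pose_prev: torch.Tensor, pose_curr: torch.Tensor) -> torch.Tensor:
        # pose_prev, pose_curr: (B, pose_dim) -> latent z: (B, latent_dim)
        return self.net(torch.cat([pose_prev, pose_curr], dim=-1))

class DeformationDecoder(nn.Module):
    """MLP mapping (vertex coordinate, pose latent) to a per-vertex deformation offset."""
    def __init__(self, latent_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # a 3D offset per vertex
        )

    def forward(self, verts: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # verts: (B, V, 3) garment vertices; z: (B, latent_dim) pose latent
        z_tiled = z[:, None, :].expand(-1, verts.shape[1], -1)   # share the latent across vertices
        offsets = self.net(torch.cat([verts, z_tiled], dim=-1))  # (B, V, 3) predicted offsets
        return verts + offsets                                   # deformed vertex positions

# Example usage with toy shapes:
# enc, dec = PoseEncoder(pose_dim=72), DeformationDecoder()
# z = enc(pose_prev, pose_curr)      # (1, 64)
# deformed = dec(rest_verts, z)      # (1, V, 3)
```

Because the decoder conditions each vertex independently on the shared pose latent, a design like this can accept garments of arbitrary topology and vertex count; intersection handling against the avatar mesh would still run as a post-process, as described above.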
Given garments of different topological complexity and geometric density, UNIC achieves simulation quality comparable to the "gold standard".
We present the input garment geometries and the simulation results of each method. Note that, for a fair comparison of simulation quality, we select the garment samples from each method's data domain that are closest to our dataset.
We test both learning-based methods and a physics-based simulation method on all four garment styles in our dataset, ranging from a simple T-shirt to a complex tulle skirt. In our experiments, UNIC consistently outperforms all other approaches in efficiency, including GPU-accelerated professional software (Marvelous Designer, 2009). Runtime is measured in frames per second (fps) on a single NVIDIA RTX 3090 GPU.
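For context, fps numbers like these can be obtained by timing repeated forward passes with explicit GPU synchronization. The sketch below is a generic timing harness under assumed inputs (a `model` callable and pre-loaded tensors); it is not the benchmarking script used for the paper's measurements.

```python
# A rough sketch of how per-frame inference speed (fps) could be timed on a GPU.
import time
import torch

@torch.no_grad()
def measure_fps(model, inputs, n_frames: int = 1000, warmup: int = 50) -> float:
    """Times repeated forward passes and returns frames per second."""
    model.eval()
    for _ in range(warmup):           # warm up kernels and the allocator before timing
        model(*inputs)
    torch.cuda.synchronize()          # make sure all queued GPU work is done
    start = time.perf_counter()
    for _ in range(n_frames):
        model(*inputs)
    torch.cuda.synchronize()          # wait for the last kernels to finish
    return n_frames / (time.perf_counter() - start)
```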
@InProceedings{zhao2025unic,
  title   = {UNIC: Neural Garment Deformation Field for Real-time Clothed Character Animation},
  author  = {Zhao, Chengfeng and Qi, Junbo and Dou, Zhiyang and Li, Minchen and Liu, Yuan},
  journal = {arXiv preprint arXiv:},
  year    = {2025}
}