# Utonia: Toward One Encoder for All Point Clouds
This repository contains the model weights for Utonia, a step toward a one-from-all and one-for-all point cloud encoder, presented in the paper Utonia: Toward One Encoder for All Point Clouds.
- Paper: Utonia: Toward One Encoder for All Point Clouds
- Project Page: https://pointcept.github.io/Utonia/
- Inference: https://github.com/Pointcept/Utonia
## Models
The default model takes [coord, color, normal] as input, and can also handle inputs that lack color or normal. If colors or normals are not available, set them to zeros.
| Model Size | Channels | Depths | Num Heads |
|---|---|---|---|
| Utonia | (54, 108, 216, 432, 576) | (3, 3, 3, 12, 3) | (3, 6, 12, 24, 32) |
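Since the model expects [coord, color, normal] features with missing channels zero-filled, input preparation can be sketched as follows. This is a minimal NumPy sketch under assumptions: the dict keys and the `(N, 3)` per-channel shapes are illustrative, not the repository's exact API (see the inference demo for the real interface).

```python
import numpy as np

def build_point_features(coord, color=None, normal=None):
    """Assemble a [coord, color, normal] input dict (hypothetical layout).

    Missing color/normal channels are zero-filled, as the model card
    recommends. Assumed shapes: coord (N, 3), color (N, 3), normal (N, 3).
    """
    coord = np.asarray(coord, dtype=np.float32)
    n = coord.shape[0]
    # Zero-fill absent channels so the encoder always sees 9 input dims.
    color = (np.zeros((n, 3), dtype=np.float32) if color is None
             else np.asarray(color, dtype=np.float32))
    normal = (np.zeros((n, 3), dtype=np.float32) if normal is None
              else np.asarray(normal, dtype=np.float32))
    return {"coord": coord, "color": color, "normal": normal}

# Example: a coordinate-only cloud gets zeroed color and normal channels.
points = build_point_features(np.random.rand(1024, 3))
print(points["color"].shape)   # (1024, 3)
print(points["normal"].sum())  # 0.0
```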
## Abstract
We dream of a future where point clouds from all domains can come together to shape a single model that benefits them all. Toward this goal, we present Utonia, a first step toward training a single self-supervised point transformer encoder across diverse domains, spanning remote sensing, outdoor LiDAR, indoor RGB-D sequences, object-centric CAD models, and point clouds lifted from RGB-only videos. Despite their distinct sensing geometries, densities, and priors, Utonia learns a consistent representation space that transfers across domains. This unification improves perception capability while revealing intriguing emergent behaviors that arise only when domains are trained jointly. Beyond perception, we observe that Utonia representations can also benefit embodied and multimodal reasoning: conditioning vision-language-action policies on Utonia features improves robotic manipulation, and integrating them into vision-language models yields gains on spatial reasoning. We hope Utonia can serve as a step toward foundation models for sparse 3D data, and support downstream applications in AR/VR, robotics, and autonomous driving.
## Usage
For detailed installation, data preparation, and testing instructions, please refer to the inference demo.
## Citation
If you find Utonia useful in your research, please cite the following papers:
```bib
@misc{pointcept2023,
    title={Pointcept: A Codebase for Point Cloud Perception Research},
    author={Pointcept Contributors},
    howpublished={\url{https://github.com/Pointcept/Pointcept}},
    year={2023}
}

@misc{zhang2026utoniaencoderpointclouds,
    title={Utonia: Toward One Encoder for All Point Clouds},
    author={Yujia Zhang and Xiaoyang Wu and Yunhan Yang and Xianzhe Fan and Han Li and Yuechen Zhang and Zehao Huang and Naiyan Wang and Hengshuang Zhao},
    year={2026},
    eprint={2603.03283},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2603.03283},
}
```