LiDAR-Net: A Real-scanned 3D Point Cloud Dataset for Indoor Scenes

Yanwen Guo¹, Yuanqi Li¹, Dayong Ren¹, Xiaohong Zhang¹, Jiawei Li¹, Liang Pu¹, Changfeng Ma¹, Xiaoyu Zhan¹, Jie Guo¹, Mingqiang Wei², Yan Zhang¹, Piaopiao Yu¹, Shuangyu Yang¹, Donghao Ji¹, Huisheng Ye¹, Hao Sun¹, Yansong Liu¹, Yinuo Chen¹, Jiaqi Zhu¹, Hongyu Liu¹

¹ Nanjing University   ² Nanjing University of Aeronautics and Astronautics

Overview

LiDAR-Net is a new real-scanned indoor point cloud dataset containing nearly 3.6 billion points with precise point-level annotations, covering an expansive area of 30,000 m². It spans three prevalent daily environments: learning, working, and living scenes. LiDAR-Net is characterized by a non-uniform point distribution, e.g., scanning holes and scanning lines. It also meticulously records and annotates scanning anomalies, including reflection noise and ghost artifacts, which stem from specular reflections on glass or metal surfaces and from distortions caused by moving people. This realistic representation of non-uniform distributions and anomalies significantly benefits the training of deep learning models, improving their generalization in practical applications. Crucially, our research identifies several fundamental challenges in understanding indoor point clouds, offering essential insights for future exploration in this field.
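
To make the per-point annotation structure concrete, below is a minimal sketch of how anomaly-annotated points might be filtered out before training. The file layout, field names, and the anomaly label ids used here are illustrative assumptions, not the official LiDAR-Net format; consult the dataset release for the actual label mapping.

import numpy as np

# Hypothetical anomaly label ids; the actual mapping is defined
# by the official LiDAR-Net release.
REFLECTION_NOISE = 254
GHOST = 255

def load_scan(path):
    """Load an N x 4 array: x, y, z, and a per-point semantic label."""
    data = np.load(path)  # assumes points are stored as a .npy array
    points = data[:, :3]
    labels = data[:, 3].astype(np.int64)
    return points, labels

def drop_anomalies(points, labels):
    """Remove points annotated as reflection noise or ghost artifacts."""
    keep = ~np.isin(labels, [REFLECTION_NOISE, GHOST])
    return points[keep], labels[keep]

points, labels = load_scan("scene_0001.npy")  # hypothetical file name
points, labels = drop_anomalies(points, labels)
print(f"{len(points)} points remain after anomaly filtering")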


Updates

[2023-11-05] Download links are now available.

[2023-11-01] Website created.


Citation

If you use the LiDAR-Net data or code, please cite:

@inproceedings{guo2024lidarnet,
  title={LiDAR-Net: A Real-scanned 3D Point Cloud Dataset for Indoor Scenes},
  author={Guo, Yanwen and Li, Yuanqi and Ren, Dayong and Zhang, Xiaohong and Li, Jiawei and Pu, Liang and Ma, Changfeng and Zhan, Xiaoyu and Guo, Jie and Wei, Mingqiang and Zhang, Yan and Yu, Piaopiao and Yang, Shuangyu and Ji, Donghao and Ye, Huisheng and Sun, Hao and Liu, Yansong and Chen, Yinuo and Zhu, Jiaqi and Liu, Hongyu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}