Abstract—Traditional accurate 3D reconstruction methods can be roughly divided into two categories according to the type of sensor. One relies on expensive laser scanners; the other is photogrammetry, which performs reconstruction using photos taken from different angles. In recent years, however, the rapid development of depth cameras has shed new light on the field. In this paper, we present a 3D reconstruction approach for indoor scenes using a very low-cost depth camera, the Microsoft Kinect sensor. Our system comprises four steps: data preprocessing, pose estimation of the sensor, fusion of the depth data, and 3D surface extraction. First, we project each frame of depth data back into space to obtain a point cloud. Second, we estimate the sensor pose using the ICP algorithm. Third, we fuse all of the depth data using a volumetric method. Finally, we extract the 3D surface from the global model.
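The back-projection mentioned in the first step follows the standard pinhole camera model. A minimal sketch is given below; the function name and the intrinsic parameters (fx, fy, cx, cy) are illustrative placeholders, as the actual calibration values for a Kinect sensor are not given in the abstract:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into a 3D point cloud
    using the pinhole model. fx, fy, cx, cy are placeholder camera
    intrinsics; real values would come from Kinect calibration."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels, which the Kinect reports as zero depth.
    return pts[pts[:, 2] > 0]

# Usage on a tiny synthetic 2x2 depth frame (one invalid pixel):
d = np.array([[1.0, 2.0],
              [0.0, 1.5]])
cloud = depth_to_pointcloud(d, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

Each valid pixel yields one 3D point, so the resulting cloud can be fed directly into the subsequent ICP registration step.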
Index Terms—3D reconstruction, 3D registration, depth camera, Kinect sensor, volumetric representation.
The authors are with the School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, Hubei, China.
Cite: Song Tiangang, Lyu Zhou, Ding Xinyang, and Wan Yi, "3D Surface Reconstruction Based on Kinect Sensor," International Journal of Computer Theory and Engineering, vol. 5, no. 3, pp. 567-573, 2013.