TY - GEN
T1 - 3D Body and Background Reconstruction in a Large-scale Indoor Scene using Multiple Depth Cameras
AU - Kobayashi, Daisuke
AU - Thomas, Diego
AU - Uchiyama, Hideaki
AU - Taniguchi, Rin Ichiro
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5/7
Y1 - 2019/5/7
N2 - 3D reconstruction of indoor scenes that contain a non-rigidly moving human body using depth cameras is a task of extraordinary difficulty. Despite intensive efforts from researchers in the 3D vision community, existing methods are still limited to reconstructing small-scale scenes. This is because of the difficulty of tracking the camera motion when a target person moves in a completely different direction. Due to the narrow field of view (FoV) of consumer-grade red-green-blue-depth (RGB-D) cameras, a target person (generally placed about 2-3 meters from the camera) covers most of the camera's FoV. Therefore, there are not enough features from the static background to track the motion of the camera. In this paper, we propose a system that reconstructs a moving human body and the background of an indoor scene using multiple depth cameras. Our system is composed of three Kinects that are set approximately in the same line and facing the same direction so that their FoVs do not overlap (to avoid interference). Owing to this setup, we capture images of a person moving in a large-scale indoor scene. The three Kinect cameras are calibrated with a robust method that uses three large non-parallel planes. A moving person is detected using human skeleton information and is reconstructed separately from the static background. By separating the human body and the background, static 3D reconstruction can be adopted for the static background area, while a method specialized for the human body can be used to reconstruct the 3D model of the moving person. Experimental results show the performance of the proposed system for human body reconstruction in a large-scale indoor scene.
AB - 3D reconstruction of indoor scenes that contain a non-rigidly moving human body using depth cameras is a task of extraordinary difficulty. Despite intensive efforts from researchers in the 3D vision community, existing methods are still limited to reconstructing small-scale scenes. This is because of the difficulty of tracking the camera motion when a target person moves in a completely different direction. Due to the narrow field of view (FoV) of consumer-grade red-green-blue-depth (RGB-D) cameras, a target person (generally placed about 2-3 meters from the camera) covers most of the camera's FoV. Therefore, there are not enough features from the static background to track the motion of the camera. In this paper, we propose a system that reconstructs a moving human body and the background of an indoor scene using multiple depth cameras. Our system is composed of three Kinects that are set approximately in the same line and facing the same direction so that their FoVs do not overlap (to avoid interference). Owing to this setup, we capture images of a person moving in a large-scale indoor scene. The three Kinect cameras are calibrated with a robust method that uses three large non-parallel planes. A moving person is detected using human skeleton information and is reconstructed separately from the static background. By separating the human body and the background, static 3D reconstruction can be adopted for the static background area, while a method specialized for the human body can be used to reconstruct the 3D model of the moving person. Experimental results show the performance of the proposed system for human body reconstruction in a large-scale indoor scene.
UR - http://www.scopus.com/inward/record.url?scp=85066297706&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85066297706&partnerID=8YFLogxK
U2 - 10.1109/APMAR.2019.8709280
DO - 10.1109/APMAR.2019.8709280
M3 - Conference contribution
AN - SCOPUS:85066297706
T3 - Proceedings of the 2019 12th Asia Pacific Workshop on Mixed and Augmented Reality, APMAR 2019
BT - Proceedings of the 2019 12th Asia Pacific Workshop on Mixed and Augmented Reality, APMAR 2019
A2 - Weng, Dongdong
A2 - Chan, Liwei
A2 - Lee, Youngho
A2 - Liang, Xiaohui
A2 - Sakata, Nobuchika
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th Asia Pacific Workshop on Mixed and Augmented Reality, APMAR 2019
Y2 - 28 March 2019 through 29 March 2019
ER -