TY - GEN
T1 - Revisiting Depth Image Fusion with Variational Message Passing
AU - Thomas, Diego
AU - Sirazitdinova, Ekaterina
AU - Sugimoto, Akihiro
AU - Taniguchi, Rin-Ichiro
PY - 2019/9
Y1 - 2019/9
AB - The running average approach has long been perceived as the best choice for fusing depth measurements captured by a consumer-grade RGB-D camera into a global 3D model. This strategy, however, assumes exact correspondences between points in a 3D model and points in the captured RGB-D images. Such an assumption does not hold in many cases because of errors in motion tracking, noise, occlusions, or inconsistent surface sampling during measurements. As a result, reconstructed 3D models suffer from unpleasant visual artifacts. In this paper, we revisit the depth fusion problem from a probabilistic viewpoint and formulate it as a probabilistic optimization using variational message passing in a Bayesian network. Our formulation enables us to fuse depth images robustly, accurately, and quickly for high-quality RGB-D keyframe creation, even when exact point correspondences are not available. Our formulation also allows us to smoothly combine depth and color information for further improvement without increasing computational cost. Quantitative and qualitative comparative evaluations on keyframes built for indoor scenes show that our proposed framework achieves promising results, reconstructing accurate 3D models at low computational cost while remaining robust against misalignment errors without post-processing.
UR - http://www.scopus.com/inward/record.url?scp=85075004249&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075004249&partnerID=8YFLogxK
U2 - 10.1109/3DV.2019.00044
DO - 10.1109/3DV.2019.00044
M3 - Conference contribution
T3 - Proceedings - 2019 International Conference on 3D Vision, 3DV 2019
SP - 328
EP - 337
BT - Proceedings - 2019 International Conference on 3D Vision, 3DV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th International Conference on 3D Vision, 3DV 2019
Y2 - 15 September 2019 through 18 September 2019
ER -