TY - GEN
T1 - Robust 2D-3D alignment based on geometrical consistency
AU - Hara, Kenji
AU - Kabashima, Yuuki
AU - Iwashita, Yumi
AU - Kurazume, Ryo
AU - Hasegawa, Tsutomu
N1 - Copyright:
Copyright 2008 Elsevier B.V., All rights reserved.
PY - 2007
Y1 - 2007
N2 - This paper presents a new algorithm for registering a 2D image with a 3D geometrical model that is robust to initial registration errors, aimed at reconstructing realistic 3D models of indoor scenes. A typical technique for estimating the pose of a 3D model in a 2D image relies on correspondences between 2D photometric edges and 3D geometrical edges projected onto the image. For indoor scenes, however, the features that can be extracted robustly from the 2D image and the jump edges of the geometrical model are limited, so it is difficult to find correct edge correspondences between the 2D image and the 3D model. For this reason, the relative pose usually has to be set manually close to the correct position beforehand. To overcome this problem, the proposed method first estimates the relative pose roughly by exploiting the geometrical consistency of 2D photometric edges back-projected onto the 3D model, and then applies the edge-based method for precise pose estimation once this rough estimation has converged. The performance of the proposed method is demonstrated through experiments on simulated models of indoor scenes and on actual environments measured with range and image sensors.
UR - http://www.scopus.com/inward/record.url?scp=47349114896&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=47349114896&partnerID=8YFLogxK
U2 - 10.1109/3DIM.2007.44
DO - 10.1109/3DIM.2007.44
M3 - Conference contribution
AN - SCOPUS:47349114896
SN - 0769529399
SN - 9780769529394
T3 - 3DIM 2007 - Proceedings 6th International Conference on 3-D Digital Imaging and Modeling
SP - 273
EP - 280
BT - Proceedings - 6th International Conference on 3-D Digital Imaging and Modeling, 3DIM 2007
T2 - 6th International Conference on 3-D Digital Imaging and Modeling, 3DIM 2007
Y2 - 21 August 2007 through 23 August 2007
ER -