TY - GEN
T1 - Automatic feature extraction using CNN for robust active one-shot scanning
AU - Sagawa, Ryusuke
AU - Shiba, Yuki
AU - Hirukawa, Takuto
AU - Ono, Satoshi
AU - Kawasaki, Hiroshi
AU - Furukawa, Ryo
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/1/1
Y1 - 2016/1/1
N2 - Active one-shot scanning techniques have been widely used for various applications. Stereo-based active one-shot scanning embeds positional information about the image plane of a projector into a projected pattern to retrieve correspondences entirely from a captured image. Many combinations of patterns and decoding algorithms for active one-shot scanning have been proposed. If the capturing environment does not satisfy the assumed conditions, such as the absence of strong external lights, reconstruction with those methods degrades because pattern decoding fails. In this paper, we propose a general reconstruction algorithm that can be used with any kind of pattern without strict assumptions. The technique is based on an efficient feature extraction function that can drastically reduce redundant information in the raw pixel values of patches of captured images. Shapes are reconstructed by efficiently finding correspondences between a captured image and the pattern using low-dimensional feature vectors. Such a function is created automatically by a convolutional neural network using a large database of pattern images that are efficiently synthesized on a GPU with a wide variation of depths and surface orientations. Experimental results show that our technique can be used for several existing patterns without any ad hoc algorithm or information about the scene or the sensor.
AB - Active one-shot scanning techniques have been widely used for various applications. Stereo-based active one-shot scanning embeds positional information about the image plane of a projector into a projected pattern to retrieve correspondences entirely from a captured image. Many combinations of patterns and decoding algorithms for active one-shot scanning have been proposed. If the capturing environment does not satisfy the assumed conditions, such as the absence of strong external lights, reconstruction with those methods degrades because pattern decoding fails. In this paper, we propose a general reconstruction algorithm that can be used with any kind of pattern without strict assumptions. The technique is based on an efficient feature extraction function that can drastically reduce redundant information in the raw pixel values of patches of captured images. Shapes are reconstructed by efficiently finding correspondences between a captured image and the pattern using low-dimensional feature vectors. Such a function is created automatically by a convolutional neural network using a large database of pattern images that are efficiently synthesized on a GPU with a wide variation of depths and surface orientations. Experimental results show that our technique can be used for several existing patterns without any ad hoc algorithm or information about the scene or the sensor.
UR - http://www.scopus.com/inward/record.url?scp=85019089519&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019089519&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2016.7899639
DO - 10.1109/ICPR.2016.7899639
M3 - Conference contribution
AN - SCOPUS:85019089519
T3 - Proceedings - International Conference on Pattern Recognition
SP - 234
EP - 239
BT - 2016 23rd International Conference on Pattern Recognition, ICPR 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd International Conference on Pattern Recognition, ICPR 2016
Y2 - 4 December 2016 through 8 December 2016
ER -