TY - GEN
T1 - Converting near infrared facial images to visible light images using skin pigment model
AU - Goh, Kimshing
AU - Matsukawa, Tetsu
AU - Okabe, Takahiro
AU - Sato, Yoichi
N1 - Publisher Copyright:
© 2013 MVA Organization. All rights reserved.
PY - 2013
Y1 - 2013
N2 - In this paper, we propose a physics-based method to synthesize facial images in visible wavelengths from multi-band near infrared (NIR) images. Studies on the photometric properties of human skin show that the melanin and hemoglobin components are the dominant factors affecting skin appearance under different light spectra. Specifically, the set of intensities observed at a surface point across varying wavelengths is represented as a linear combination of the two pigment components. Our proposed method learns the spectral basis vectors, which describe the absorbance due to the two pigments, from a multispectral image dataset by using Independent Component Analysis (ICA). Then, our method estimates the coefficients, which are the pixel-wise densities of the two pigments, from a multi-band NIR image, and finally converts it to a visible light (VIS) image. We demonstrate that our proposed method works well for real facial images even when only a small dataset is available for learning the basis vectors.
AB - In this paper, we propose a physics-based method to synthesize facial images in visible wavelengths from multi-band near infrared (NIR) images. Studies on the photometric properties of human skin show that the melanin and hemoglobin components are the dominant factors affecting skin appearance under different light spectra. Specifically, the set of intensities observed at a surface point across varying wavelengths is represented as a linear combination of the two pigment components. Our proposed method learns the spectral basis vectors, which describe the absorbance due to the two pigments, from a multispectral image dataset by using Independent Component Analysis (ICA). Then, our method estimates the coefficients, which are the pixel-wise densities of the two pigments, from a multi-band NIR image, and finally converts it to a visible light (VIS) image. We demonstrate that our proposed method works well for real facial images even when only a small dataset is available for learning the basis vectors.
UR - http://www.scopus.com/inward/record.url?scp=85042766228&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85042766228&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85042766228
SN - 9784901122139
T3 - Proceedings of the 13th IAPR International Conference on Machine Vision Applications, MVA 2013
SP - 153
EP - 156
BT - Proceedings of the 13th IAPR International Conference on Machine Vision Applications, MVA 2013
PB - MVA Organization
T2 - 13th IAPR International Conference on Machine Vision Applications, MVA 2013
Y2 - 20 May 2013 through 23 May 2013
ER -