TY - GEN
T1 - TU-Net and TDeepLab
T2 - 2nd International Conference on Multimedia Information Processing and Retrieval, MIPR 2019
AU - Iwashita, Yumi
AU - Nakashima, Kazuto
AU - Stoica, Adrian
AU - Kurazume, Ryo
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/4/22
Y1 - 2019/4/22
N2 - In this paper, we propose two novel deep learning-based terrain classification methods that are robust to illumination changes. The use of cameras is challenged by a variety of factors, the most important being changes in illumination. On the other hand, since the temperature of a terrain depends on its thermal characteristics, terrain classification can be aided by utilizing thermal information in addition to visible information. We therefore propose 'TU-Net (Two U-Net)', based on U-Net, and 'TDeepLab (Two DeepLab)', based on DeepLab, which combine visible and thermal images and implicitly train the network to be robust to illumination changes. To improve the network's learning capability, we extend the proposed methods to a Siamese-based method, which explicitly trains the network to be robust to illumination changes. We also investigate multiple options for fusing the visible and thermal images at the bottom, middle, or top layer of the network. We evaluate the proposed methods on a challenging new dataset consisting of visible and thermal images collected from 10 am until 5 pm (after sunset), and we demonstrate the effectiveness of the proposed methods.
AB - In this paper, we propose two novel deep learning-based terrain classification methods that are robust to illumination changes. The use of cameras is challenged by a variety of factors, the most important being changes in illumination. On the other hand, since the temperature of a terrain depends on its thermal characteristics, terrain classification can be aided by utilizing thermal information in addition to visible information. We therefore propose 'TU-Net (Two U-Net)', based on U-Net, and 'TDeepLab (Two DeepLab)', based on DeepLab, which combine visible and thermal images and implicitly train the network to be robust to illumination changes. To improve the network's learning capability, we extend the proposed methods to a Siamese-based method, which explicitly trains the network to be robust to illumination changes. We also investigate multiple options for fusing the visible and thermal images at the bottom, middle, or top layer of the network. We evaluate the proposed methods on a challenging new dataset consisting of visible and thermal images collected from 10 am until 5 pm (after sunset), and we demonstrate the effectiveness of the proposed methods.
UR - http://www.scopus.com/inward/record.url?scp=85065603700&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065603700&partnerID=8YFLogxK
U2 - 10.1109/MIPR.2019.00057
DO - 10.1109/MIPR.2019.00057
M3 - Conference contribution
AN - SCOPUS:85065603700
T3 - Proceedings - 2nd International Conference on Multimedia Information Processing and Retrieval, MIPR 2019
SP - 280
EP - 285
BT - Proceedings - 2nd International Conference on Multimedia Information Processing and Retrieval, MIPR 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 March 2019 through 30 March 2019
ER -