Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization

Kazuto Nakashima, Hojung Jung, Yuki Oto, Yumi Iwashita, Ryo Kurazume, Oscar Martinez Mozos

Research output: Contribution to journal › Article › Peer-reviewed

4 Citations (Scopus)

Abstract

Semantic place categorization, one of the essential tasks for autonomous robots and vehicles, enables self-decision and navigation in unfamiliar environments. Outdoor places in particular are more challenging targets than indoor ones because of perceptual variations, such as illuminance changing dynamically over 24 hours and occlusions caused by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs) that take omnidirectional depth/reflectance images obtained by 3D LiDARs as inputs. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The data are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and demonstrate the effectiveness of using both the depth and reflectance modalities. To analyze the trained deep networks, we visualize the learned features.
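The omnidirectional depth/reflectance inputs mentioned above are typically obtained by projecting each LiDAR return onto a spherical image grid. The following is a minimal sketch of such a projection, assuming a simple equirectangular mapping with hypothetical image size and vertical field-of-view parameters; the paper's actual projection may differ.

```python
import numpy as np

def panoramic_projection(points, reflectance, h=64, w=1024,
                         fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project a LiDAR point cloud (N, 3) and per-point reflectance (N,)
    onto panoramic depth and reflectance images of shape (h, w).
    Image size and vertical FOV here are illustrative assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # elevation angle

    # Map azimuth to column index and elevation to row index.
    u = ((yaw + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    v = (fov_up - pitch) / (fov_up - fov_down) * h
    v = np.clip(v, 0, h - 1).astype(int)

    depth_img = np.zeros((h, w), dtype=np.float32)
    refl_img = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return per pixel: write far points first,
    # so nearer points overwrite them.
    order = np.argsort(-depth)
    depth_img[v[order], u[order]] = depth[order]
    refl_img[v[order], u[order]] = reflectance[order]
    return depth_img, refl_img
```

The resulting two-channel image can then be fed to a standard 2D CNN, which is what makes LiDAR data amenable to image-based architectures.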

Original language: English
Pages (from-to): 750-765
Number of pages: 16
Journal: Advanced Robotics
Volume: 32
Issue number: 14
DOI
Publication status: In press - Jan 1 2018

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Science Applications
