TY - JOUR
T1 - Learning Viewpoint-Invariant Features for LiDAR-Based Gait Recognition
AU - Ahn, Jeongho
AU - Nakashima, Kazuto
AU - Yoshino, Koki
AU - Iwashita, Yumi
AU - Kurazume, Ryo
N1 - Publisher Copyright: Authors
PY - 2023
Y1 - 2023
N2 - Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation from the subjects of interest. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at a range of over 100 m in real time, have attracted significant attention in the fields of autonomous mobile robots and self-driving vehicles. Compared with general cameras, LiDAR sensors have rarely been used for biometrics due to the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs a two-scale spatial resolution to enhance robustness to varying point cloud densities. In addition, this method learns gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view and cross-distance challenges, as well as practical scenarios.
AB - Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation from the subjects of interest. In general, cameras are used for gait recognition systems, and previous studies have utilized depth information captured by RGB-D cameras, such as Microsoft Kinect. In recent years, multi-layer LiDAR sensors, which can obtain range images of a target at a range of over 100 m in real time, have attracted significant attention in the fields of autonomous mobile robots and self-driving vehicles. Compared with general cameras, LiDAR sensors have rarely been used for biometrics due to the low point cloud densities captured at long distances. In this study, we focus on improving the robustness of gait recognition using LiDAR sensors under confounding conditions, specifically addressing the challenges posed by viewing angles and measurement distances. First, our recognition model employs a two-scale spatial resolution to enhance robustness to varying point cloud densities. In addition, this method learns gait features from two invariant viewpoints (i.e., left-side and back views) generated by estimating the walking direction. Furthermore, we propose a novel attention block that adaptively recalibrates channel-wise weights to fuse the features from the aforementioned resolutions and viewpoints. Comprehensive experiments conducted on our dataset demonstrate that our model outperforms existing methods, particularly in cross-view and cross-distance challenges, as well as practical scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85177052673&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85177052673&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3333037
DO - 10.1109/ACCESS.2023.3333037
M3 - Article
AN - SCOPUS:85177052673
SN - 2169-3536
VL - 11
SP - 1
JO - IEEE Access
JF - IEEE Access
ER -