Real-World Autonomous Driving Control: An Empirical Study Using the Proximal Policy Optimization (PPO) Algorithm

Peng Zhao, Zhongxian Yuan, Kyaw Thu, Takahiko Miyazaki

Research output: Contribution to journal › Article › peer-review

Abstract

This article preprocesses environmental information and uses it as input to the Proximal Policy Optimization (PPO) algorithm. The algorithm is trained directly on a model vehicle in a real environment, allowing it to control the distance between the vehicle and surrounding objects. Training converges after approximately 200 episodes, demonstrating that the PPO algorithm can tolerate, to some extent, the uncertainty, noise, and interference of a real training environment. Furthermore, tests of the trained model in different scenarios show that even when the input information is preprocessed and does not provide a comprehensive view of the environment, the PPO algorithm can still effectively achieve the control objectives and accomplish challenging tasks.
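The paper's own implementation details are not reproduced here, but the core of any PPO training loop is the clipped surrogate objective, which limits how far each policy update can move from the data-collecting policy. The following is a minimal sketch of that objective in NumPy; the function name and the toy inputs are illustrative, not taken from the article.

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current policy and the policy that collected the data.
    advantages: estimated advantages for those actions.
    Returns the negated objective, i.e. a loss to minimize.
    """
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    # Clipping the ratio to [1 - eps, 1 + eps] caps the incentive to move
    # the policy far from the one that generated the training episodes.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Taking the elementwise minimum keeps the pessimistic bound.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy usage: identical policies give ratio 1, so the loss is just the
# negated mean advantage.
loss = ppo_clip_loss(np.zeros(3), np.zeros(3), np.array([1.0, 2.0, 3.0]))
```

This clipping is one plausible reason for the robustness the abstract reports: even with noisy real-world rewards and imperfect, preprocessed observations, each update stays within a bounded trust region around the previous policy.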

Original language: English
Pages (from-to): 887-899
Number of pages: 13
Journal: Evergreen
Volume: 11
Issue number: 2
Publication status: Published - Jun 2024

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Ceramics and Composites
  • Surfaces, Coatings and Films
  • Management, Monitoring, Policy and Law
