Introduction and control of subgoals in reinforcement learning

Junichi Murata, Yasuomi Abe, Keisuke Ota

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Reinforcement learning (RL) can be applied to a wide class of problems because it requires no information other than perceived states and rewards to find good action policies. However, it requires a large number of trials to acquire the optimal policy. To make RL faster, the use of subgoals is proposed. Since errors and ambiguity are inevitable in subgoal information provided by human designers, a mechanism is proposed that controls the use of subgoals. The method is applied to examples, and the results show that the use of subgoals is very effective in accelerating RL and that the proposed control mechanism successfully suppresses the potentially critical damage to RL performance caused by errors and ambiguity in the subgoal information.
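
The abstract does not specify the authors' algorithm, so the following is only a minimal sketch of the general idea: tabular Q-learning on a small grid world where a designer-supplied subgoal state adds a shaping bonus, together with a simple confidence weight that reduces the bonus when passing through the subgoal does not shorten episodes. The grid layout, the subgoal position, the weight-decay rule, and all constants are illustrative assumptions, not the method from the paper.

```python
# Sketch: Q-learning with a designer-supplied subgoal bonus whose influence is
# controlled by a confidence weight (all details are illustrative assumptions).
import random

random.seed(0)

SIZE = 6                       # 6x6 grid, start at (0, 0), goal at (5, 5)
GOAL = (SIZE - 1, SIZE - 1)
SUBGOAL = (2, 2)               # designer-suggested subgoal (may be wrong/ambiguous)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

Q = {}                         # state-action values
def q(s, a): return Q.get((s, a), 0.0)

def step(s, a):
    """Move on the grid; reward 1 at the goal, 0 elsewhere."""
    nx = min(max(s[0] + a[0], 0), SIZE - 1)
    ny = min(max(s[1] + a[1], 0), SIZE - 1)
    ns = (nx, ny)
    return ns, (1.0 if ns == GOAL else 0.0), ns == GOAL

alpha, gamma, eps = 0.1, 0.95, 0.1
subgoal_weight = 1.0           # confidence in the subgoal information
baseline_len = None            # running estimate of episode length

for episode in range(300):
    s, steps, visited_subgoal = (0, 0), 0, False
    while True:
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: q(s, act))
        ns, r, done = step(s, a)
        # Subgoal bonus, scaled by the current confidence weight (assumed scheme).
        if ns == SUBGOAL and not visited_subgoal:
            r += 0.2 * subgoal_weight
            visited_subgoal = True
        target = r + (0.0 if done else gamma * max(q(ns, a2) for a2 in ACTIONS))
        Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
        s, steps = ns, steps + 1
        if done or steps > 200:
            break
    # Illustrative control rule: if episodes that used the subgoal are no shorter
    # than the running average, trust the subgoal less; otherwise restore trust.
    if baseline_len is None:
        baseline_len = steps
    else:
        if visited_subgoal:
            subgoal_weight *= 0.9 if steps > baseline_len else 1.02
            subgoal_weight = min(subgoal_weight, 1.0)
        baseline_len = 0.95 * baseline_len + 0.05 * steps

print("final subgoal weight:", round(subgoal_weight, 3))
```

With a helpful subgoal the weight stays near 1.0 and learning is steered toward the goal; with a misleading subgoal the weight decays, so the bonus stops distorting the learned policy, which is the kind of safeguard the abstract's control mechanism is meant to provide.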

Original language: English
Title of host publication: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, AIA 2007
Pages: 329-334
Number of pages: 6
Publication status: Published - 2007
Event: IASTED International Conference on Artificial Intelligence and Applications, AIA 2007 - Innsbruck, Austria
Duration: Feb 12 2007 - Feb 14 2007

Publication series

Name: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, AIA 2007

Other

Other: IASTED International Conference on Artificial Intelligence and Applications, AIA 2007
Country/Territory: Austria
City: Innsbruck
Period: 2/12/07 - 2/14/07

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
