Abstract

S. Koenig and R.G. Simmons. Complexity Analysis of Real-Time Reinforcement Learning. In AAAI Conference on Artificial Intelligence (AAAI), pages 99-105, 1993.

Abstract: This paper analyzes the complexity of on-line reinforcement learning algorithms, namely asynchronous real-time versions of Q-learning and value-iteration, applied to the problem of reaching a goal state in deterministic domains. Previous work had concluded that, in many cases, tabula rasa reinforcement learning was exponential for such problems, or was tractable only if the learning algorithm was augmented. We show that, to the contrary, the algorithms are tractable with only a simple change in the task representation or initialization. We provide tight bounds on the worst-case complexity, and show how the complexity is even smaller if the reinforcement learning algorithms have initial knowledge of the topology of the state space or the domain has certain special properties. We also present a novel bi-directional Q-learning algorithm to find optimal paths from all states to a goal state and show that it is no more complex than the other algorithms.
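To make the abstract's "simple change in the task representation or initialization" concrete, below is a minimal sketch of one-step real-time Q-learning for goal-reaching in a deterministic domain, assuming an action-penalty representation (a reward of -1 for every action) and zero-initialized Q-values. This is a plausible reading of the setup the paper analyzes, not a transcription of it; the helpers actions(s) and step(s, a) are hypothetical stand-ins for the agent's interface to the domain.

```python
import random

def real_time_q_learning(start, goal, actions, step):
    """Drive the agent greedily from `start` to `goal`, learning as it goes.

    `actions(s)` lists the actions available in state s; `step(s, a)`
    executes a in s and returns the successor state. Both are hypothetical
    stand-ins for the agent's interface to a deterministic domain.
    """
    q = {}                 # tabula rasa: missing entries read as 0.0
    s, moves = start, 0
    while s != goal:
        acts = actions(s)
        # Greedy action selection, ties broken randomly.
        best = max(q.get((s, a), 0.0) for a in acts)
        a = random.choice([a for a in acts if q.get((s, a), 0.0) == best])
        s_next = step(s, a)
        # Deterministic domain, so a learning rate of 1 gives the exact
        # one-step backup; the goal state is absorbing with value 0.
        future = 0.0 if s_next == goal else max(
            q.get((s_next, b), 0.0) for b in actions(s_next))
        q[(s, a)] = -1.0 + future   # action-penalty reward of -1
        s, moves = s_next, moves + 1
    return q, moves
```

Under this representation, zero initialization keeps the Q-values admissible in the same sense as heuristic estimates in search, which is the kind of property behind the polynomial worst-case bounds the paper derives.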

Download the paper in PDF.

Download the paper in gzipped PostScript (large file).

Many publishers do not want authors to make their papers available electronically after the papers have been published. Please use the electronic versions provided here only if hardcopies are not yet available. If you have comments on any of these papers, please send me an email! Also, please send me your papers if we have common interests.

