Abstract

S. Koenig. Agent-Centered Search. Artificial Intelligence Magazine, 22(4): 109-131, 2001.

Abstract: In this article, we describe agent-centered search (sometimes also called real-time search or local search) and illustrate this planning paradigm with examples. Agent-centered search methods interleave planning and plan execution and restrict planning to the part of the domain around the current state of the agent, for example, the current location of a mobile robot or the current board position of a game. They can execute actions in the presence of time constraints and often have a small sum of planning and execution cost, both because they trade off planning and execution cost and because they allow agents to gather information early in nondeterministic domains, which reduces the amount of planning they have to perform for situations they never encounter. These advantages become important as more intelligent systems are interfaced with the world and have to operate autonomously in complex environments. Agent-centered search methods have been applied to a variety of domains, including traditional search, STRIPS-type planning, moving-target search, planning with totally and partially observable Markov decision process models, reinforcement learning, constraint satisfaction, and robot navigation. We discuss the design and properties of several agent-centered search methods, focusing on robot exploration and localization.
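
To make the paradigm concrete, below is a minimal sketch of one well-known agent-centered search method, Learning Real-Time A* (LRTA*), which repeatedly plans only around the agent's current state, updates a heuristic value, and executes a single action. The grid domain, unit costs, and Manhattan-distance heuristic are illustrative assumptions for this example, not details taken from the article.

```python
# Sketch of LRTA*, an agent-centered (real-time) search method, on an
# illustrative four-connected grid. Domain and heuristic are assumptions.

def lrta_star(start, goal, successors, cost, h):
    """Interleave local planning with plan execution until the goal is reached.

    successors(s) -> iterable of neighbor states of s
    cost(s, t)    -> nonnegative cost of moving from s to t
    h             -> dict mapping states to heuristic values (updated in place)
    """
    s = start
    path = [s]
    while s != goal:
        # Plan only around the current state: evaluate the immediate successors.
        best_t, best_f = None, float("inf")
        for t in successors(s):
            f = cost(s, t) + h.get(t, 0)
            if f < best_f:
                best_t, best_f = t, f
        # Learn: raise the stored heuristic value of the current state if it was too low.
        h[s] = max(h.get(s, 0), best_f)
        # Execute: move to the most promising successor, then re-plan from there.
        s = best_t
        path.append(s)
    return path


if __name__ == "__main__":
    # Illustrative 3x3 grid with one blocked cell.
    blocked = {(1, 1)}
    goal = (2, 2)

    def successors(s):
        x, y = s
        for t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= t[0] <= 2 and 0 <= t[1] <= 2 and t not in blocked:
                yield t

    cost = lambda s, t: 1  # unit action costs
    h = {(x, y): abs(goal[0] - x) + abs(goal[1] - y)  # Manhattan distance
         for x in range(3) for y in range(3)}

    print(lrta_star((0, 0), goal, successors, cost, h))
```

In this sketch, each planning episode looks only one step ahead of the current state, so planning cost per action stays small; the heuristic updates preserve the guarantee that the agent eventually reaches the goal in a finite, safely explorable domain.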

Download the paper in PDF.

Download the paper in gzipped PostScript (long download time).

Many publishers do not want authors to make their papers available electronically after the papers have been published. Please use the electronic versions provided here only if hardcopies are not yet available. If you have comments on any of these papers, please send me an email! Also, please send me your papers if we have common interests.
