Project "Fast Replanning (Incremental Heuristic Search)"
(scroll down for publications)

Agents (such as robots or video game characters) often operate in domains that are only incompletely known or that change dynamically. One way of dealing with incomplete information is to interleave search with action execution, in which case the agents need to replan quickly. To make search fast, we have studied both heuristic search methods with limited lookahead (agent-centered search, such as real-time heuristic search) and heuristic search methods that reuse information from previous searches (incremental heuristic search). Incremental heuristic search methods are often faster than A* searches from scratch, yet differ from other replanning methods (such as planning by analogy) in that their solution quality is as good as that of A* searches from scratch.

We have developed a variety of incremental heuristic search methods, including methods that work according to different principles than the few previously existing ones, analyzed their properties, and applied them in different settings to demonstrate their advantages. Existing settings include goal-directed navigation in initially unknown terrain. For example, we have been able to show that the worst-case travel distance of the navigation strategy behind Anthony Stentz's D* is close to optimal. Novel settings include STRIPS-type symbolic planning, mapping, reinforcement learning and any-angle search. For example, our Minimax LPA* seems to be the first incremental heuristic minimax search method and can speed up the parti-game algorithm substantially. Similarly, our PINCH system can substantially speed up one-time planning with HSP (a popular heuristic search-based planner at the time), and our SHERPA system can substantially speed up repeated optimal planning with HSP.
Finally, our Planning on Demand architecture demonstrated that incremental heuristic search allows the path-planning module of mobile robot architectures to handle initially unknown obstacles rather than having to delegate this task to the reactive-navigation module.
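The core idea behind incremental heuristic search — repairing only the part of the previous search that is affected by a change, instead of searching from scratch — can be illustrated with a minimal sketch of Lifelong Planning A* on a four-neighbor grid with unit edge costs. This is an illustrative simplification under our own naming (not one of the optimized implementations evaluated in our papers): it uses a binary heap with lazy deletion and assumes the start and goal cells are never blocked.

```python
import heapq

INF = float("inf")

class LPAStar:
    """Minimal Lifelong Planning A* sketch on a four-neighbor grid
    with unit edge costs (illustrative only)."""

    def __init__(self, width, height, start, goal, blocked=()):
        self.width, self.height = width, height
        self.start, self.goal = start, goal
        self.blocked = set(blocked)
        self.g = {}              # best start distances found so far
        self.rhs = {start: 0}    # one-step lookahead values
        self.queue = []          # binary heap with lazy deletion
        heapq.heappush(self.queue, (self.key(start), start))

    def h(self, s):
        # Manhattan-distance heuristic to the goal (consistent on this grid)
        return abs(s[0] - self.goal[0]) + abs(s[1] - self.goal[1])

    def key(self, s):
        k = min(self.g.get(s, INF), self.rhs.get(s, INF))
        return (k + self.h(s), k)

    def neighbors(self, s):
        x, y = s
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < self.width and 0 <= n[1] < self.height \
                    and n not in self.blocked:
                yield n

    def update_vertex(self, u):
        if u != self.start:
            self.rhs[u] = min((self.g.get(p, INF) + 1
                               for p in self.neighbors(u)), default=INF)
        if self.g.get(u, INF) != self.rhs.get(u, INF):
            heapq.heappush(self.queue, (self.key(u), u))

    def compute_shortest_path(self):
        # Expand only locally inconsistent vertices until the goal is consistent.
        while self.queue:
            k, u = self.queue[0]
            if k != self.key(u):              # stale heap entry: discard
                heapq.heappop(self.queue)
                continue
            goal_consistent = (self.g.get(self.goal, INF)
                               == self.rhs.get(self.goal, INF))
            if goal_consistent and k >= self.key(self.goal):
                break
            heapq.heappop(self.queue)
            g_u, rhs_u = self.g.get(u, INF), self.rhs.get(u, INF)
            if g_u > rhs_u:                   # overconsistent: settle u
                self.g[u] = rhs_u
                for s in self.neighbors(u):
                    self.update_vertex(s)
            elif g_u < rhs_u:                 # underconsistent: invalidate u
                self.g[u] = INF
                self.update_vertex(u)
                for s in self.neighbors(u):
                    self.update_vertex(s)
        return self.g.get(self.goal, INF)

    def block_cell(self, cell):
        # A newly observed obstacle: repair only the affected vertices
        # instead of replanning from scratch.
        self.blocked.add(cell)
        self.g.pop(cell, None)
        self.rhs.pop(cell, None)
        for s in self.neighbors(cell):
            self.update_vertex(s)
```

The first call to compute_shortest_path behaves like an A* search; after a cell is blocked, only the vertices whose start distances are affected by the change are re-expanded, which is where the speedup over searching from scratch comes from.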

Our D* Lite (which is different from D*) and our other incremental heuristic search methods seem to have become popular with a variety of research groups (without any involvement by us).

Most of our publications include a version of the following disclaimer: "It is important to realize that experimental results, such as the runtimes of the search algorithms, depend on a variety of factors, including implementation details (such as the data structures, tie-breaking strategies, and coding tricks used) and experimental setups (such as whether the gridworlds are four-neighbor or eight-neighbor gridworlds). We do not know of any better method for evaluating search algorithms than to implement them as best as possible, publish their runtimes, and let other researchers experiment with their own and thus potentially different implementations and experimental setups."

The reason for this disclaimer is that we still lack good computational models that can predict the performance of incremental heuristic search methods as a function of the search problems, implementation details and experimental setups. For example, D* Lite runs much faster than A* in some situations but also much slower in other situations. Its runtime advantage depends on the connectivity of the grid (for example, whether it is a four-neighbor or eight-neighbor grid) and the implementation of its priority queue (for example, whether the priority queue is implemented with a binary heap or with buckets). In general, runtime proxies, such as the number of expansions, cannot be used to evaluate incremental heuristic search methods since different methods need different amounts of time per expansion. Furthermore, incremental heuristic search methods typically operate on search problems that fit into memory; big-O analyses do not make sense for such small search problems, and implementation details really do matter and can have a large effect on the runtime. We definitely need to understand better which heuristic search method to use in which situation.
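As one illustration of how much the priority queue implementation can matter: when priorities are small non-negative integers (as with uniform edge costs on grids), a binary heap can be replaced by a bucket-based priority queue with amortized constant-time operations. The following sketch is illustrative only, under our own naming; it is not the implementation used in any of our experiments.

```python
from collections import deque

class BucketQueue:
    """Bucket-based priority queue for small non-negative integer
    priorities (illustrative sketch). Items with equal priority are
    returned in insertion order."""

    def __init__(self, max_priority):
        # One FIFO bucket per possible priority value.
        self.buckets = [deque() for _ in range(max_priority + 1)]
        self.cursor = 0   # lowest bucket that might be non-empty
        self.size = 0

    def push(self, priority, item):
        self.buckets[priority].append(item)
        self.cursor = min(self.cursor, priority)  # priority may drop below cursor
        self.size += 1

    def pop(self):
        # Advance the cursor past empty buckets; amortized O(1) per
        # operation when priorities do not decrease much over time.
        while not self.buckets[self.cursor]:
            self.cursor += 1
        self.size -= 1
        return self.cursor, self.buckets[self.cursor].popleft()
```

Pushing is constant time, and popping only scans forward over empty buckets, so the total scanning work is bounded by the priority range; a binary heap instead pays a logarithmic factor per operation but places no restrictions on the priorities.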

The game programming webpages on "Amit's Thoughts on Path-Finding and A*" link to this page.




The following code should be correct but is somewhat out of date. In particular, Lifelong Planning A* and D* Lite can now use buckets as their priority queue or, if they use a heap as their priority queue, break ties towards larger g-values. These improvements are not yet reflected in the code.
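For readers adapting the code themselves, one plausible way to break ties towards larger g-values in a heap-based priority queue is to negate the second key component, so that among vertices with equal f-values the one with the larger g-value is dequeued first. This is a hedged sketch under our own naming, not necessarily how the published implementations realize the tie-breaking rule.

```python
def lpa_key(g, rhs, h):
    """Vertex priority in the style of Lifelong Planning A* / D* Lite.
    The usual key is [min(g, rhs) + h; min(g, rhs)]; negating the second
    component makes a min-heap prefer larger g-values among equal f-values
    (illustrative sketch only)."""
    k = min(g, rhs)
    return (k + h, -k)
```

The first component still orders vertices by their f-value, so only the order among vertices with equal f-values changes.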

Class Project

Representative Overview Publications

Representative Publications on Heuristic Search

Representative Publications on Moving Target Search

Representative Publications on Symbolic Planning

Representative Publications on Robotics

Representative Publications on Reinforcement Learning

Some of this material is based upon work supported by the National Science Foundation under Grant No. 0350584. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Many publishers do not want authors to make their papers available electronically after the papers have been published. Please use the electronic versions provided here only if hardcopies are not yet available. If you have comments on any of these papers, please send me an email! Also, please send me your papers if we have common interests.

This page was automatically created by a bibliography maintenance system that was developed as part of an undergraduate research project, advised by Sven Koenig.

Home Page of Sven Koenig