Projects of Sven Koenig
See the bottom of this webpage for links to my research projects. A common
theme of these projects is search and planning under uncertainty and
incomplete information: how to represent decision-making tasks, which
preference models to use, and how to solve the resulting planning problems
efficiently. Our research tackles ...
- different ways of dealing with uncertainty and incomplete information:
One way of dealing with uncertainty is to model it probabilistically using
Markov decision process models, which results in probabilistic planning and
reinforcement-learning problems. This is what our POMDP-based robot navigation
architecture does. Another way is to use greedy on-line planning methods (such
as agent-centered search and assumption-based planning), which interleave
deterministic planning and plan execution. For example, we were the first
to analyze Greedy Mapping (an agent-centered search method) and Planning
with the Freespace Assumption (an assumption-based planning method); a small
sketch of the latter follows this list. A third way is for robots to leave
information on the ground, which is what our ant robots do.
- different preference models: Decision-support systems need to use
preference models similar to those of the human decision makers they
support; otherwise, they are of little use. However, most artificial intelligence
planners use simplistic preference models. Planning systems for crisis
situations (such as oil spills) or space applications, for example, typically
maximize the probability of achieving their goals or minimize the expected
plan-execution cost, even though human decision makers are risk-averse in
these situations. We have therefore studied how to plan with preference
models that are more realistic than those typically used; a small worked
example follows this list.
- fast planning despite complex preference models and large numbers of
contingencies: Systems that deal with uncertainty by interleaving planning
and plan execution need to plan fast since they have to plan repeatedly. To
make planning fast, we have studied both planning methods with limited
lookahead (such as agent-centered search) and planning methods that reuse
information from previous searches (such as replanning methods). For example,
we have introduced several new incremental heuristic search methods.
- coordinating teams of agents despite uncertainty and incomplete
information: Teams of agents are faster and more fault-tolerant than
single agents. We have therefore studied how agents can coordinate efficiently
under uncertainty. For example, we have helped to lay a theoretical foundation
for ant robots (which coordinate implicitly by reading trails left by other
robots) and auction robots (which coordinate explicitly via auctions; a toy
example follows this list). We have also improved distributed constraint
optimization methods.
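To make the assumption-based planning of the first bullet more concrete,
here is a minimal sketch of interleaving deterministic planning with plan
execution under the freespace assumption. Everything in it (the grid, the
sensor model, and all names) is made up for illustration; it also shows why
such systems must replan frequently, which is what motivates fast replanning
and incremental heuristic search, although this sketch simply searches from
scratch each time.

```python
# Minimal sketch of Planning with the Freespace Assumption on a 4-connected
# grid (illustrative only). The robot treats every cell it has not yet
# observed as traversable, plans a shortest path under that assumption,
# follows the path while sensing the four neighboring cells, and replans
# whenever the path turns out to be blocked.
from collections import deque

TRUE_GRID = [  # '#' = blocked, '.' = free (unknown to the robot at first)
    "....#.",
    ".##.#.",
    "....#.",
    ".#....",
    ".#.##.",
    "......",
]
ROWS, COLS = len(TRUE_GRID), len(TRUE_GRID[0])
START, GOAL = (0, 0), (5, 5)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield (nr, nc)

def plan(start, known_blocked):
    """Breadth-first search that assumes unknown cells are free."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == GOAL:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nxt in neighbors(cell):
            if nxt not in parent and nxt not in known_blocked:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable under current knowledge

def run():
    known_blocked = set()
    position = START
    trajectory = [position]
    while position != GOAL:
        path = plan(position, known_blocked)
        if path is None:
            print("Goal is unreachable.")
            return
        for step in path[1:]:
            # Sense the adjacent cells before moving.
            for cell in neighbors(position):
                if TRUE_GRID[cell[0]][cell[1]] == "#":
                    known_blocked.add(cell)
            if step in known_blocked:
                break  # the assumed-free cell is blocked: replan
            position = step
            trajectory.append(position)
    print("Reached the goal via", trajectory)

if __name__ == "__main__":
    run()
```

Because the robot replans after every surprise, the repeated searches
dominate the computation; this is exactly where incremental heuristic
search methods, which reuse information from previous searches, pay off.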
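The following toy computation illustrates the point about preference models:
two hypothetical plans are compared once by expected plan-execution cost
(risk-neutral) and once by the certainty equivalent under an exponential
utility function, a standard risk-averse preference model. The plans, their
probabilities and costs, and the risk-aversion parameter are invented for
this example.

```python
# A minimal illustration of how the choice of preference model changes the
# preferred plan. The two plans and the risk parameter are made-up numbers.
import math

# Each plan is a list of (probability, plan-execution cost) outcomes.
plan_a = [(1.0, 10.0)]                 # certain cost of 10
plan_b = [(0.9, 2.0), (0.1, 50.0)]     # usually cheap, occasionally disastrous

def expected_cost(plan):
    return sum(p * c for p, c in plan)

def certainty_equivalent(plan, risk_aversion):
    """Certainty equivalent under an exponential (risk-averse) utility
    function: the certain cost the decision maker finds equally good."""
    return math.log(sum(p * math.exp(risk_aversion * c)
                        for p, c in plan)) / risk_aversion

for name, plan in (("A", plan_a), ("B", plan_b)):
    print(name,
          "expected cost = %5.2f" % expected_cost(plan),
          "certainty equivalent (lambda=0.1) = %6.2f"
          % certainty_equivalent(plan, 0.1))

# A risk-neutral planner (minimizing expected cost) picks plan B
# (6.80 < 10.00), while a risk-averse decision maker picks plan A, because
# plan B's rare high-cost outcome drives its certainty equivalent to
# roughly 27.69.
```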
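Finally, a toy sketch of explicit coordination via auctions: targets are
auctioned one at a time, every robot bids the extra travel cost of appending
the target to its current route, and the lowest bidder wins. The robot
positions, targets, and the simple bidding rule are illustrative assumptions
only, not the specific auction mechanisms from our publications.

```python
# Minimal sketch of auction-based task allocation: targets are auctioned
# one at a time, and every robot bids the extra travel cost it would incur,
# measured here simply as the Manhattan distance from the last location on
# its current route. All positions and targets are made up.

robots = {"r1": (0, 0), "r2": (9, 0), "r3": (5, 8)}   # current locations
targets = [(1, 2), (8, 1), (4, 7), (9, 9), (0, 5)]    # tasks to visit

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

routes = {name: [pos] for name, pos in robots.items()}
unallocated = list(targets)
while unallocated:
    # Every robot bids on every remaining target; the cheapest
    # (robot, target) pair wins this round of the auction.
    winner, target, best_bid = None, None, float("inf")
    for name, route in routes.items():
        for t in unallocated:
            bid = manhattan(route[-1], t)   # marginal cost of appending t
            if bid < best_bid:
                winner, target, best_bid = name, t, bid
    routes[winner].append(target)
    unallocated.remove(target)

for name, route in routes.items():
    print(name, "visits", route[1:])
```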
We are currently pursuing the following research directions:
We previously pursued the following research directions:
(You can click on the project titles for more information and publications.)