S. Koenig, C. Muise and S. Sanner. Non-Traditional Objective Functions for MDPs. In IJCAI-18 Workshop on Goal Reasoning (GRW), 2018.

Abstract: Completely (and partially) observable Markov Decision Processes (MDPs) have been studied in operations research for many decades. Optimization problems in operations research are typically thought of as consisting of constraints (which determine the feasibility of solutions) and an objective function (which determines the quality of feasible solutions, such as their optimality). Thus, operations research has often studied different classes of objective functions and how they affect the process of finding optimal solutions, including its complexity. Not surprisingly, the early operations research literature on MDPs studied different objective functions for them. Later, AI researchers discovered MDPs as a good foundation for probabilistic (or, synonymously, decision-theoretic) planning. However, they have overall been more interested in exploiting the structure of MDPs for efficient planning with simple traditional cost-optimal goal-based objective functions than in studying more realistic and thus also more complex objective functions. In the following, we try to understand the reasons for this, outline both some early and current AI research on non-traditional objective functions, and conclude by charging AI researchers to focus more on realistic objective functions to make probabilistic planning more attractive for actual applications.