Extends the PBNF framework from the IJCAI-09 paper below to bounded-suboptimal and anytime settings. Also includes a proof of correctness for PBNF and two general pruning rules for suboptimal search. The talk slides include a comparison to the recently proposed HDA* variant of PRA*.
A summary of some of the results from the ICAPS-09 paper above and the IJCAI-09 paper below.
Presents and explains a counter-intuitive result: it can improve performance (particularly in a Fast Downward-style planner) to throw away work during anytime search!
Attempts to revive the idea of using "distance-to-go" estimates in heuristic search. Presents improved versions of A*epsilon and dynamically weighted A* and shows the importance of tie-breaking. Bonus feature: includes a much simpler proof of Likhachev, Gordon, and Thrun's result that weighted A* can drop duplicate nodes while maintaining bounded suboptimality.
A shorter version of this paper was published as: Jordan T. Thayer and Wheeler Ruml, Using Distance Estimates in Heuristic Search, Proceedings of the Nineteenth International Conference on Automated Planning and Scheduling (ICAPS-09), 2009. PDF (131kb).
A method for best-first search (e.g., A*) in a parallel shared-memory setting. We assume that the search will fit in main memory.
More detail on the second method discussed in the STAIR-08 paper below. If you care about quality bounds and your domain doesn't have tons of duplicate states, you should probably consider this simple method.
Presents two new approaches to solving shortest path problems when insufficient time is available to guarantee the optimal solution. Both approaches return solutions within a given factor of the optimal solution's cost.
A preliminary version of this work was presented at the Third North East Student Colloquium on Artificial Intelligence (NESCAI-08).
Presents a variation of best-first heuristic search called Bugsy, for use when insufficient time is available to find the optimal solution using A*. Unlike weighted A*, Anytime A* or ARA*, the algorithm explicitly tries to balance solution quality and computation time according to the user's utility function. This strategy is a useful alternative to anytime algorithms for resource-bounded computation.
Note: I have found a silly bug in the implementation used to obtain the results reported in this paper. The results from the buggy implementation were correctly reported, however, so the algorithm is even better than it looks. I am currently working to obtain new results.
A preliminary version of the work appeared as Wheeler Ruml and Elisabeth H. Crawford, Best-first Utility-Guided Search, Working Notes of the IJCAI-05 Workshop on Planning and Learning in A Priori Unknown or Dynamic Domains, 2005. PDF (151kb).
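The core idea can be sketched as best-first search ordered by an estimate of the user's utility rather than by cost alone. Below is a minimal sketch, assuming a cost weight w_cost, a time weight w_time, a cost-to-go heuristic h, and a distance-to-go estimate d (all names illustrative; this is not the paper's implementation):

```python
import heapq

def bugsy_search(start, goal, succ, h, d, w_cost=1.0, w_time=0.0, t_exp=1.0):
    """Best-first search ordered by estimated utility loss:
    u(n) = w_cost * (g(n) + h(n)) + w_time * d(n) * t_exp,
    where h estimates remaining solution cost and d estimates
    remaining search effort (distance-to-go) scaled by the time
    per expansion.  Lower u is expanded first."""
    frontier = [(0.0, 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step_cost in succ(node):
            ng = g + step_cost
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                u = w_cost * (ng + h(nxt)) + w_time * d(nxt) * t_exp
                heapq.heappush(frontier, (u, ng, nxt, path + [nxt]))
    return None

# demo: unit-cost 5x5 grid; with w_time = 0 this reduces to A*
def succ(n):
    x, y = n
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [((x + dx, y + dy), 1.0) for dx, dy in steps
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda n: float(abs(4 - n[0]) + abs(4 - n[1]))
cost, path = bugsy_search((0, 0), (4, 4), succ, manhattan, manhattan)
```

Raising w_time makes the search prefer nodes it believes are close to a goal in search effort, trading solution quality for time as the utility function dictates.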
Describes a very preliminary investigation of using learned models of algorithm behavior to guide reformulation of an optimization problem.
Presents the first gradient-following local search procedure that is complete: it is guaranteed to find a solution in bounded time if one exists, and it can declare a problem unsolvable. We present results on the problem of boolean satisfiability, but the framework applies to any finite-domain constraint satisfaction problem.
Presents the work described in the adaptive probing and best-leaf-first search papers below, but at a more leisurely pace. This is the original uncorrected version.
Presents a model-based approach to searching the trees that arise in combinatorial optimization and constraint satisfaction problems. The idea is to visit leaves in an efficient approximation of increasing predicted cost according to the model. This best-first framework unifies tree search for optimization with heuristic search for shortest-path problems.
A preliminary version was made available as Harvard CS Technical Report TR-01-02. PostScript (456kb) or PDF (173kb).
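The visit order at the heart of this framework can be made concrete with a toy sketch: given a model that assigns a cost to each child choice, enumerate leaves in nondecreasing predicted cost. (The algorithm in the paper approximates this order with cost-bounded depth-first passes; the priority queue below, a hypothetical stand-in, just makes the intended order explicit.)

```python
import heapq

def leaves_best_first(depth, child_cost):
    """Enumerate the leaves of a complete binary tree in nondecreasing
    predicted cost, where child_cost(d, rank) is the model's cost for
    taking child `rank` at depth d.  A partial path's predicted cost
    is the sum of its child-choice costs."""
    frontier = [(0.0, ())]          # (predicted cost, partial path)
    while frontier:
        cost, path = heapq.heappop(frontier)
        if len(path) == depth:
            yield cost, path
            continue
        for rank in (0, 1):
            heapq.heappush(frontier,
                           (cost + child_cost(len(path), rank),
                            path + (rank,)))

# demo: unit discrepancy cost -- leaves come out ordered by how many
# times the non-preferred (rank 1) child was taken
visits = list(leaves_best_first(3, lambda d, rank: float(rank)))
```

With a depth-sensitive or learned cost model plugged in for child_cost, the same loop yields the model-guided leaf orders discussed above.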
A preliminary investigation of the use of previous search experience on similar problems to improve the performance of adaptive probing. Adaptive probing can adapt to a variety of trees but wastes time learning basic features of the tree that are built into other algorithms, such as the fact that the heuristic is often helpful. By reusing the model that is learned on previous problems, the algorithm can avoid a lengthy delay in adapting to the problem while maintaining its flexibility.
A short note arguing that, contrary to the widespread preconception, stochastic search can be worthwhile in tree-structured search spaces and that tree representations that work well for complete search algorithms are not necessarily the best for incomplete methods. While conventional search trees select a single variable at each node and assign it a different value at each child, I suggest considering trees in which a different variable is chosen at each child. Experimental results are presented on the problem of number partitioning.
Presents a method for learning how to search a tree to find a good leaf. Rather than imposing a predetermined search order, as in depth-first search or depth-bounded discrepancy search, we attempt to learn the cost of a discrepancy on-line. The results seem promising, except for the case in which the heuristic node-ordering function is very accurate. This framework can also provide a rationalization of aspects of other proposed algorithms (HBSS, GRASP, and Ant Colony Optimization).
A preliminary version of this paper was made available as Harvard CS Technical Report TR-02-01. PostScript (226kb).
Compares the performance of a simple incomplete tree search algorithm with the popular incomplete graph search algorithm WalkSAT. To better reflect the way such algorithms are used in practice, we evaluate the ability of the algorithms to determine satisfiability given a bounded amount of time (rather than measuring the time needed to find a model in a problem known to be satisfiable). Our results indicate that, for the random problems on which WalkSAT excels, the tree search method can be just as effective on small problems, although it seems to scale poorly.
Presents a careful empirical comparison of algorithms for graph bisection that are based on a simple relative connectedness heuristic. One of the connectedness algorithms performs better than Kernighan-Lin or a simple spectral method in time-equated tests. This is an expanded version of the ALEX98 paper below.
Presents a simple algorithm for graph bisection (bipartitioning), and more importantly, describes a time-equated comparison against hundreds of runs of the standard Kernighan-Lin algorithm, which has a high variance.
We investigate the power of "decoders" which map the objects in a search space to actual solutions for the problem instances. This allows the search algorithm to explore the space of the inputs of the decoder, which may be richer or smoother than the space of solutions. This idea is popular in the genetic algorithm community, but can be applied separately from crossover. In this paper, we test three different decoders on the NP-complete problem of number partitioning, using four different search algorithms. The choice of decoder plays a much greater role in the effectiveness of the search than the choice of search algorithm.
The preliminary work for the above paper. Includes more representations, search algorithms, and experiments, but less analysis.
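To make the decoder idea concrete, here is a sketch of one plausible decoder for number partitioning (greedy placement of a permutation; illustrative only, not necessarily one of the decoders from the paper), together with a trivial search over the decoder's input space:

```python
import random

def greedy_decode(perm):
    """Decode a permutation of numbers into a two-way partition by
    greedily placing each number into the currently lighter subset.
    Returns the partition difference (0 is a perfect partition)."""
    a = b = 0
    for x in perm:
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)

def random_search(nums, decoder, trials=200, seed=1):
    """Search the decoder's *input* space (orderings of the numbers)
    rather than the space of partitions directly."""
    rng = random.Random(seed)
    best = decoder(nums)
    for _ in range(trials):
        perm = nums[:]
        rng.shuffle(perm)
        best = min(best, decoder(perm))
    return best

identity_diff = greedy_decode([8, 7, 6, 5, 4])   # decode the given order
searched_diff = random_search([8, 7, 6, 5, 4], greedy_decode)
```

Note that the search never manipulates partitions directly; any permutation decodes to a legal partition, which is exactly the smoothing effect discussed above.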
Adapts the technique of optimization in hindsight to on-line continual planning, in which actions are deterministic but goals arrive stochastically on-line. This naturally yields an on-line planner capable of both preemptive and delayed actions.
Extensions to our "high-speed manufacturing" planner to support execution failure (via replanning) and multiple objectives (via linear combination, tie-breaking, and pattern databases). Includes discussion of the demands of our real-world application and the first publicly published photo of our fanciest hardware prototype.
A short introduction to the work from the ICAPS-05 and ICAPS-06 papers below. Includes more detail on the concrete application and a performance comparison with two leading generic planning systems.
Proposes on-line planning as an important research direction and presents a generic on-line domain simulator for use by researchers to evaluate on-line planning algorithms. Introduces adaptations of two existing planning algorithms to the on-line setting.
A summary of this work also appeared in the booklet for the Second International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS-07).
Shows how multi-commodity network flow can be used to estimate throughput of manufacturing plants with complex job routing requirements while maintaining tractability.
A short paper summarizing differences we've noticed between academic and real-world planning research. Also reports a simple and effective temporal planning graph heuristic.
Discusses our on-line temporal planner in slightly more detail than the workshop paper below. Includes explicit regression rules! (Winner of the best application paper award.)
Presents a simple manufacturing domain that requires on-line integrated planning and scheduling and gives preliminary results from a hybrid state-space temporal planner. The planner uses state-space heuristic search and constraint-based temporal reasoning for handling resource contention.
Tests the abilities of various spreading-activation theories of lexical access to account for data from 50 Italian-speaking patients. An automated model-construction procedure is used to adapt previous proposals to Italian. Interactivity doesn't appear to be a crucial component as far as matching patient performance is concerned. We also discuss Dell et al.'s term "continuity".
Suggests additive clustering (Shepard and Arabie, 1979) as a principled and automatic way to obtain distributed feature representations for use in connectionist models of cognition. Presents a new algorithm for additive clustering, based on heuristic combinatorial optimization. A novel variation of the Kernighan-Lin search method, KL Break-Out, is described. Using extensive empirical tests on human and synthetic data, we find that the new algorithm outperforms previous methods, particularly on large problems.
A slightly different version is available as Assigning Features using Additive Clustering, Harvard CS Technical Report TR-04-01. Gzipped PostScript (138kb) or PDF (180kb).
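The additive-clustering model itself is simple to state: the predicted similarity of two objects is an additive constant plus the weights of all clusters containing both. The sketch below computes model similarities from a given clustering; the hard combinatorial part addressed in the paper (searching over cluster memberships and fitting weights) is omitted:

```python
def model_similarity(clusters, weights, c=0.0):
    """Additive-clustering model (Shepard and Arabie, 1979): the
    predicted similarity of objects i and j is the constant c plus
    the weights of every cluster that contains both.  `clusters` is
    a list of sets of object indices, `weights` the matching list of
    nonnegative cluster weights."""
    n = 1 + max(i for members in clusters for i in members)
    s = [[c] * n for _ in range(n)]
    for members, w in zip(clusters, weights):
        for i in members:
            for j in members:
                if i != j:
                    s[i][j] += w
    return s

# demo: three objects, two overlapping weighted clusters
sim = model_similarity([{0, 1}, {1, 2}], [2.0, 3.0], c=1.0)
```

Fitting such a model to an observed similarity matrix means choosing the 0/1 memberships and weights minimizing reconstruction error, which is the combinatorial optimization problem the KL Break-Out search attacks.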
Reports on a patient, D.M., whose speech errors are almost always fluent mispronunciations. His performance is likely to be the result of damage only to phonological representations or processes and not to semantic or lexical subsystems. Using computer simulations, we show that D.M.'s behavior challenges theories postulating global damage in aphasic patients and full interactivity in the lexicon.
Presents picture-naming data from thirteen patients and attempts to account for their performance using variations on several recently proposed models. By varying the assumptions of the models, we find that peripheral pragmatic assumptions are as important as theoretically more central components. We argue that there are strong limitations on the conclusions that can legitimately be drawn from simulation studies, but that close analysis of individual patients can allow sound testing of potentially more accurate models.
Evaluates a well-known computer simulation of how normal and brain-damaged people retrieve the pronunciation of words when speaking. We find that, even when we use a novel fitting algorithm to optimize the model's parameters, it does a relatively poor job of matching and predicting human performance. We also examine the logic of its developers' theoretical claims.
A variation of the patching technique from the Infocom paper below that is better for localization in large networks because no single global map is constructed.
A unified presentation of the material from the following MobiHoc and Infocom papers.
Improvements to the methods of the MobiHoc paper and simulations on a very challenging C-shaped layout. Includes the patching together of many local maps.
Using multidimensional scaling (MDS) to determine the locations of network nodes from information about who is within communication range of whom.
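The MDS step can be illustrated with a toy layout routine: given a matrix of estimated inter-node distances, recover coordinates whose pairwise distances match. The paper uses classical MDS; the pure-Python sketch below instead minimizes raw stress by gradient descent, as an illustrative stand-in:

```python
import math, random

def mds_layout(D, dim=2, iters=3000, lr=0.02, seed=0):
    """Place len(D) points in `dim` dimensions so that their pairwise
    distances approximate the target matrix D, by gradient descent on
    the raw stress  sum_{i<j} (||x_i - x_j|| - D[i][j])^2."""
    rng = random.Random(seed)
    n = len(D)
    X = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            grad = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                diff = [X[i][k] - X[j][k] for k in range(dim)]
                dist = math.sqrt(sum(c * c for c in diff)) or 1e-9
                coef = 2.0 * (dist - D[i][j]) / dist
                for k in range(dim):
                    grad[k] += coef * diff[k]
            for k in range(dim):
                X[i][k] -= lr * grad[k]
    return X

# demo: three nodes whose estimated distances form a 3-4-5 triangle
D = [[0, 3, 5], [3, 0, 4], [5, 4, 0]]
pts = mds_layout(D)
```

In the network setting, D would come from hop counts or range estimates between nodes in communication range, and the recovered layout is determined only up to rotation, translation, and reflection.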
Extends the Design Galleries approach to a three-dimensional interface, using a fly-through metaphor with an artificial horizon and a "shower curtain" to ease navigation and browsing.
A system for sampling and displaying the diversity of the large parameter spaces that commonly arise in computer graphics. This greatly eases the task of parameter tweaking, while also presenting all possible options. I worked mostly on the "arrangement" part of the system, which uses graph partitioning or multidimensional scaling to visualize the results of the sampling in a perceptually organized way. See also a related page at MERL.