
AAAI-23 Workshop on Multi-Agent Path Finding

Welcome!

Multi-Agent Path Finding (MAPF) is the problem of computing collision-free paths for a team of agents from their current locations to given destinations in a known environment. Application examples include autonomous aircraft towing vehicles, automated warehouse systems, office robots, and game characters in video games. Solving the MAPF problem optimally is NP-hard for many objectives, such as minimizing the sum of the travel costs or the makespan, and can even be NP-hard to approximate. Yet, practical systems must find high-quality collision-free paths for the agents quickly because shorter paths result in higher throughput or lower operating costs (since fewer agents are required). For information on MAPF, including tutorials, publications, software, benchmark instances, teaching material, and a mailing list, please direct your web browser to mapf.info.
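
For readers new to the problem, the following minimal sketch (not part of the workshop material; all function names are hypothetical) illustrates the core feasibility condition in grid-based MAPF: the agents' timed paths must avoid both vertex conflicts (two agents occupying the same cell at the same time step) and edge conflicts (two agents swapping cells between consecutive time steps).

from typing import List, Tuple

Cell = Tuple[int, int]  # (row, column) of a cell on a 4-connected grid

def has_conflict(path_a: List[Cell], path_b: List[Cell]) -> bool:
    """Return True if two timed paths collide (vertex or edge conflict)."""
    # Agents are assumed to wait at their last cell once their path ends.
    at = lambda path, t: path[min(t, len(path) - 1)]
    for t in range(max(len(path_a), len(path_b))):
        if at(path_a, t) == at(path_b, t):  # vertex conflict
            return True
        if t > 0 and at(path_a, t) == at(path_b, t - 1) and at(path_b, t) == at(path_a, t - 1):
            return True  # edge (swap) conflict
    return False

# Two agents trying to swap adjacent cells collide between time steps 0 and 1.
print(has_conflict([(0, 0), (0, 1)], [(0, 1), (0, 0)]))  # True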

In recent years, many researchers have explored different variants of the MAPF problem as well as different approaches with different properties. Also, different applications have been studied in artificial intelligence, robotics, and theoretical computer science. The purpose of this workshop is to bring these researchers together to present their research, discuss future research directions, and cross-fertilize the different communities. Researchers and practitioners whose research might apply to MAPF or who might be able to use MAPF techniques in their research are welcome.

All submissions that relate to collision-free path planning or navigation for multiple agents are welcome, including but not limited to:

  • Search-, rule-, reduction-, reactive-, and learning-based MAPF planners
  • Combination of MAPF and task allocation, scheduling, and execution monitoring
  • Variants and generalizations of the MAPF problem
  • Real-world applications of MAPF planners
  • Customization of MAPF planners for actual robots (including kinematic and dynamic constraints)
  • Multi-agent reinforcement learning for centralized and decentralized MAPF
  • MAPF for agents with motion uncertainty, communication limitations, and changing environments
  • Standardization of MAPF terminology and benchmarks

Submissions can contain relevant work in all possible stages, including work that was recently published, is under submission elsewhere, was only recently finished, or is still ongoing. Authors of papers published or under submission elsewhere are encouraged to submit these papers or short versions (including abstracts) of them to the workshop to educate other researchers about their work, as long as resubmissions are clearly labeled to avoid copyright violations. Position papers and surveys are also welcome. Submissions will go through a light review process to ensure a fit with the topic of the workshop and acceptable quality. Non-archival workshop notes will be produced containing the material presented at the workshop.

Information for Participants

This workshop is part of the 37th AAAI Conference on Artificial Intelligence (AAAI-23), which will be held in Washington, DC. We plan to hold the workshop in person and are exploring options for accommodating remote participation.

Submission Link: https://cmt3.research.microsoft.com/WoMAPF2023/Submission/Index

Important Dates:

  • Paper submission deadline: Oct 31, 2022 (extended from Oct 21, 2022)
  • Paper notification: Nov 18, 2022
  • Final version: Jan 4, 2023

Note: all deadlines are “anywhere on earth” (UTC-12)

Invited Speakers

Amanda Prorok
University of Cambridge

Learning-Based Methods for Multi-Agent Navigation

  • Abstract: In this talk, I discuss our work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. In my first case study, I show how we use GNNs to find a decentralized solution by learning what the agents need to communicate to one another. This communication-based policy achieves near-optimal performance, and combining it with an attention mechanism drastically improves generalization to very large-scale systems. But what if, instead of optimizing agent policies, we could modify the navigation environment? I next introduce an environment optimization approach that guarantees the existence of complete solutions, improving agent navigation success rates over heuristic methods. Finally, I discuss challenges in the transfer of learned policies to the real world.
  • Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College. She has been honoured with numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis at EPFL in Computer Science. She serves as Associate Editor for IEEE Robotics and Automation Letters (RA-L) and Associate Editor for Autonomous Robots (AURO). Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA, where she worked with Prof. Vijay Kumar. She completed her PhD at EPFL, Switzerland, with Prof. Alcherio Martinoli.

Guillaume Sartoretti
National University of Singapore

Distributed Learning towards Scalable and Cooperative Multi-Agent Pathfinding

  • Abstract: Multi-agent path finding (MAPF) is an indispensable component of large-scale robot deployments in numerous domains ranging from airport management to warehouse automation. However, as the number of agents (robots) in the system grows, so does the combinatorial complexity of coordinating them, drastically affecting the scalability of coupled planners and the solution quality of fully decoupled methods. As a new direction for research into scalable, high-quality MAPF, my work has embraced advances in distributed reinforcement learning (RL) to let multiple agents learn decentralized, collaborative policies in a time-efficient manner. In this talk, I will first introduce our recent AI-based frameworks for MAPF (PRIMAL) and lifelong MAPF (PRIMAL2), which combine deep RL with imitation learning from a complete, optimal planner. As a result, the policies learned by PRIMAL/PRIMAL2 naturally scale to near-arbitrary team sizes while exhibiting the type of coordinated behaviors usually only associated with coupled planners. The reactive nature of these policies is also particularly well suited for robotic deployments, especially in dynamic environments, as exhibited by our numerical and experimental results. I will then discuss our recent works, which focus on explicitly combining learning-based MAPF with conventional, complete planners towards highly scalable, yet provably complete MAPF. Finally, I will touch on some of our recent advances in communication learning, where agents are tasked with learning a movement policy as well as a communication protocol, allowing them to identify, encode, and share relevant information about their environment/intention towards true team-level cooperation in MAPF. If time permits, I will also present preliminary results of our most recent context-learning-based MAPF work, which aims at mitigating the effect of partial observability in decentralized, AI-based MAPF.
  • Guillaume Sartoretti joined the Mechanical Engineering department at the National University of Singapore (NUS) as an Assistant Professor in 2019. He founded and is directing the Multi-Agent Robotic Motion (MARMot) lab. Before that, he was a Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University (USA), where he worked with Prof. Howie Choset. He received his Ph.D. in robotics from EPFL (Switzerland) in 2016 for his dissertation on "Control of Agent Swarms in Random Environments," under the supervision of Prof. Max-Olivier Hongler. He also holds a B.S. and an M.S. degree in Mathematics and Computer Science from the University of Geneva (Switzerland). His research focuses on the distributed/decentralized coordination of numerous agents, at the interface between stochastic modelling, conventional control, and artificial intelligence. Applications range from multi-robot systems, where independent robots and systems need to coordinate their actions to achieve a common goal, to high-DoF articulated robots, where joints need to be carefully coupled during locomotion in rough terrain. In 2018, he was awarded a Manufacturing Futures Initiative (MFI) postdoctoral fellowship for his recent work on scalable multi-agent path finding via distributed reinforcement learning.

Jingjin Yu
Rutgers University

Stack Rearrangement, Rubik Tables, and Multi-Robot Routing

  • Abstract: In a basic Rubik table problem, there are $n$ item types, each with a multiplicity of $n$. These $n^2$ items are randomly placed in an $n \times n$ table, one in each cell. In a shuffle, a row (resp., column) of the table may be permuted to achieve an arbitrary configuration of the items in that row (resp., column). The Rubik table problem asks how many shuffles are needed to sort the table so that type $i$ items occupy column $i$ of the table. Somewhat surprisingly, $2n$ shuffles suffice. Rubik tables and their many generalizations enable the successful tackling of difficult sequential combinatorial rearrangement problems, including stack rearrangement and multi-robot path planning, resulting in polynomial-time algorithms for computing asymptotically optimal solutions that were, to our knowledge, previously not possible. In particular, using Rubik tables as a building block, we are able to achieve $(1, 1.5]$ asymptotic makespan optimality for multi-robot path planning on grids in polynomial time, with high probability. (A toy sketch of the table-and-shuffle setup appears after the speaker bio below.)
  • Jingjin Yu is an Associate Professor in the Department of Computer Science, Rutgers University at New Brunswick. He received his B.S. from the University of Science and Technology of China, and obtained his M.S. in Computer Science and Ph.D. in Electrical and Computer Engineering, both from the University of Illinois, where he briefly stayed as a postdoctoral researcher. Before joining Rutgers, he was a postdoctoral researcher at the Massachusetts Institute of Technology. He is broadly interested in the area of algorithmic robotics, focusing on issues related to optimality, complexity, and the design of efficient decision-making methods. He is a recipient of the NSF CAREER award and an Amazon Research Award.
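
For readers unfamiliar with the setup, here is a small illustrative sketch (not from the talk; function names are hypothetical) of the table-and-shuffle model described in the abstract above. It only encodes the rules of the problem, not the 2n-shuffle sorting argument.

import random

def is_sorted(table):
    """True when every column c contains only items of type c."""
    n = len(table)
    return all(table[r][c] == c for r in range(n) for c in range(n))

def shuffle_row(table, r, new_order):
    """One 'shuffle': rearrange the items within row r into new_order."""
    assert sorted(new_order) == sorted(table[r])  # a shuffle may only permute the row
    table[r] = list(new_order)

def shuffle_column(table, c, new_order):
    """One 'shuffle': rearrange the items within column c into new_order."""
    assert sorted(new_order) == sorted(row[c] for row in table)
    for r, item in enumerate(new_order):
        table[r][c] = item

# A random n x n instance: n copies of each item type 0, ..., n-1, one per cell.
n = 3
items = [t for t in range(n) for _ in range(n)]
random.shuffle(items)
table = [items[i * n:(i + 1) * n] for i in range(n)]

shuffle_row(table, 0, sorted(table[0]))  # one legal shuffle: sort row 0 in place
print(table, is_sorted(table))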

Workshop Schedule

Date: Tue, Feb 14, 2023.

Room: 145B

Remote: Underline

All times are EST.

08:25–08:30   Workshop Opening & Welcome Talk
08:30–09:20   Invited Talk from Guillaume Sartoretti, National University of Singapore

Tea Break (10 mins)

Presentation Session 1: MAPF Algorithms
09:30–09:40   Greedy Priority-Based Search for Suboptimal Multi-Agent Path Finding (Shao-Hung Chan; Roni Stern; Ariel Felner; Sven Koenig)
09:40–09:50   Multi-agent Pathfinding on Large Maps Using Graph Pruning: This Way or That Way? (Jiří Švancara; Philipp Obermeier; Matej Husár; Roman Barták; Torsten Schaub)
09:50–10:00   On Merging Agents in Multi-Agent Pathfinding Algorithms (Eli Boyarski; Shao-Hung Chan; Dor Atzmon; Ariel Felner; Sven Koenig)
10:00–10:10   Exact Any-Time Multi-Agent Path Finding Using Branch-and-Cut-and-Price and Large Neighborhood Search (Edward Lam; Daniel Harabor; Peter Stuckey; Jiaoyang Li)
10:10–10:20   To Wait, or Not to Wait, That Is the Question (Jiří Švancara; Etienne Tignon; Roman Barták; Torsten Schaub; Philipp Wanko; Roland Kaminski)

Tea Break (10 mins)

Presentation Session 2: Task Planning and Online MAPF
10:30–10:40   Priority Basis Task Allocation for Drone Swarms (Kolitha Warnakulasooriya; Aviv Segev; Ryan Benton)
10:40–10:50   Exact Multi-Agent Pickup and Delivery Using Branch-and-Cut-and-Price (Edward Lam; Peter Stuckey; Daniel Harabor)
10:50–11:00   Online Multi-Agent Path Finding: New Results (Jonathan Morag; Ariel Felner; Roni Stern; Dor Atzmon; Eli Boyarski)
11:00–11:10   Deadline-Aware Multi-Agent Tour Planning (Taoan Huang; Vikas Shivashankar; Michael Caldara; Joseph W Durham; Jiaoyang Li; Bistra Dilkina; Sven Koenig)
11:10–11:20   Optimally Solving the Multiple Watchman Route Problem with Heuristic Search (Yaakov Livne; Dor Atzmon; Shawn Skyler; Eli Boyarski; Amir Shapiro; Ariel Felner)
11:20–11:30   Multi-Agent Traveling Salesman Path Finding (Zhongqiang Ren; Sivakumar Rathinam; Howie Choset)

Tea Break (10 mins)

11:40–12:30   Invited Talk from Amanda Prorok, University of Cambridge

Lunch Break (1.5 hours)

Presentation Session 3: MAPF and Learning
14:00–14:10   No Panacea in Planning: Algorithm Selection for Suboptimal Multi-Agent Path Finding (Weizhe Chen; Zhihan Wang; Jiaoyang Li; Sven Koenig; Bistra Dilkina)
14:10–14:20   Multi-agent Collective Construction using 3D Decomposition (Akshaya Kesarimangalam Srinivasan; Shambhavi Singh; Geordan Gutow; Howie Choset; Bhaskar Vundurthy)
14:20–14:30   SocialMAPF: Optimal and Efficient Multi-Agent Path Finding with Strategic Agents for Social Navigation (Rohan Chandra; Rahul Bharadwaj Maligi Anantha Sesha Jayanth; Arya M Anantula; Joydeep Biswas)
14:30–14:40   Towards Learning Human-Like and Efficient Multi-Agent Path Finding (Ariel Kwiatkowski; Vicky Kalogeiton; Julien Pettré; Marie-Paule Cani)
14:40–14:50   Multi-Agent Path Finding via Tree LSTM (Yuhao Jiang; Kunjie Zhang; Qimai Li; Jiaxin Chen; Xiaolong Zhu)

Tea Break (10 mins)

15:00–15:50   Invited Talk from Jingjin Yu, Rutgers University

Tea Break (10 mins)

Presentation Session 4: MAPF Applications
16:00–16:10   Top Gun: Cooperative Multi-Agent Planning (James Chao; Wiktor Piotrowski; Mitch Manzanares; Douglas Lange)
16:10–16:20   Toward Efficient Physical and Algorithmic Design of Automated Garages (Teng Guo; Jingjin Yu)
16:20–16:30   Imagine All Objects Are Robots: A Multi-Agent Pathfinding Perspective on Manipulation Among Movable Objects (Dhruv M Saxena; Maxim Likhachev)
16:30–16:40   Improving Problem Decomposition and Regulation in Distributed Multi-Agent Path Finder (DMAPF) (Poom Pianpak; Son Tran)
16:40–16:50   Distributing Collaborative Multi-Robot Planning with Gaussian Belief Propagation (Aalok Patwardhan; Riku Murai; Andrew Davison)

16:50–17:00   Community Discussion, Closing Remarks & Start of Poster Session
17:00–18:00   Poster Session

Organizing Committee

  • Jiaoyang Li, Carnegie Mellon University, (jiaoyanl@andrew.cmu.edu)
  • Zhongqiang Ren, Carnegie Mellon University, (zhongqir@andrew.cmu.edu)
  • Han Zhang, University of Southern California, (zhan645@usc.edu)
  • Zhe Chen, Monash University, (zhe.chen@monash.edu)

Advisory Board

PC Members

  • Ariel Felner, Ben-Gurion University
  • Bojie Shen, Monash University
  • Christopher Leet, University of Southern California
  • Dor Atzmon, Ben-Gurion University
  • Edward Lam, Monash University
  • Hang Ma, Simon Fraser University
  • Jingjin Yu, Rutgers University
  • Jingkai Chen, Amazon Robotics
  • Jingyao Ren, University of Southern California
  • Jiří Švancara, Charles University
  • Keisuke Okumura, Tokyo Institute of Technology
  • Kiril Solovey, Technion - Israel Institute of Technology
  • Konstantin Yakovlev, Federal Research Center for Computer Science and Control RAS
  • Nathan R Sturtevant, University of Alberta
  • Pavel Surynek, Czech Technical University
  • Rishi Veerapaneni, Carnegie Mellon University
  • Roman Barták, Charles University
  • Roni Stern, Ben-Gurion University
  • Saman Ahmadi, RMIT University
  • Shao-Hung Chan, University of Southern California
  • Sumedh Pendurkar, Texas A&M University
  • Teng Guo, Rutgers University
  • Wolfgang Hoenig, TU Berlin
  • Yi Zheng, University of Southern California

Previous MAPF workshops


(last updated in 2022)