Abstract

O. Walker, F. Vanegas, F. Gonzalez and S. Koenig. A Deep Reinforcement Learning Framework for UAV Navigation in Indoor Environments. In IEEE Aerospace Conference (AeroConf), 2019.

Abstract: This paper presents a framework for UAV navigation in indoor environments using a deep-reinforcement-learning-based approach. The implementation models the problem as two separate problems: a Markov Decision Process (MDP) and a Partially Observable Markov Decision Process (POMDP), separating the search problem into high-level planning and low-level action under uncertainty. We apply deep learning techniques to this layered problem to produce policies for the framework that allow a UAV to plan, act, and react. The approach is simulated and visualised using Gazebo and is evaluated using policies trained with deep learning. Using recent deep learning techniques as the basis of the framework, our results indicate that it is capable of providing smooth navigation for a simulated UAV agent exploring an indoor environment with uncertainty in its position. Once extended to real-world operation, this framework could enable UAVs to be applied in an increasing number of applications, from underground mining and oil refinery surveys and inspections, to search and rescue missions and biosecurity surveys.
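The two-layer decomposition described in the abstract can be sketched as follows. This is only an illustrative outline under assumed names: the paper's actual layers are learned deep RL policies, whereas the hand-coded `plan_waypoints` and `low_level_step` functions below merely show how a high-level planner (MDP-style) can hand waypoints to a low-level controller that acts on a noisy position estimate (POMDP-style).

```python
import random

# High-level "MDP" layer (illustrative stand-in for a learned planning policy):
# plan a sequence of grid waypoints from start to goal.
def plan_waypoints(start, goal):
    """Greedy high-level plan: step one axis at a time toward the goal."""
    path = [start]
    x, y = start
    gx, gy = goal
    while (x, y) != (gx, gy):
        if x != gx:
            x += 1 if gx > x else -1
        else:
            y += 1 if gy > y else -1
        path.append((x, y))
    return path

# Low-level "POMDP" layer (illustrative stand-in for a learned control policy):
# the agent only sees a noisy estimate of its position, not the true state.
def noisy_observation(true_pos, noise=0.5, rng=None):
    """Simulate position uncertainty with Gaussian sensor noise."""
    rng = rng or random.Random(0)
    return (true_pos[0] + rng.gauss(0, noise),
            true_pos[1] + rng.gauss(0, noise))

def low_level_step(est_pos, waypoint):
    """Choose a unit action along the axis with the largest estimated error."""
    dx = waypoint[0] - est_pos[0]
    dy = waypoint[1] - est_pos[1]
    if abs(dx) >= abs(dy):
        return (1 if dx > 0 else -1, 0)
    return (0, 1 if dy > 0 else -1)

# Usage: the planner produces waypoints; the controller acts toward each one
# using only noisy observations of where the agent currently is.
waypoints = plan_waypoints((0, 0), (2, 1))
obs = noisy_observation((0, 0))
action = low_level_step(obs, waypoints[1])
```

In the paper itself, both layers are policies produced by deep learning and evaluated in Gazebo; the split shown here only mirrors the structural idea of planning over a fully observable abstraction while acting under partial observability.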

Download the paper in pdf.

Many publishers do not want authors to make their papers available electronically after the papers have been published. Please use the electronic versions provided here only if hardcopies are not yet available. If you have comments on any of these papers, please send me an email! Also, please send me your papers if we have common interests.


This page was automatically created by a bibliography maintenance system that was developed as part of an undergraduate research project, advised by Sven Koenig.