Thesis: Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots

This thesis was published in 2018.

Abstract

Robotic navigation in static, known environments is well understood. Operation in unknown, dynamic, or unstructured environments, however, still poses major challenges. In order to support humans in a varied set of applications, such as assistance for the elderly or disabled, transportation, search and rescue, or agriculture, operation in workspaces shared between robots and humans is a key factor. This thesis introduces four data-driven approaches for mobile robot navigation in challenging real-world environments. It covers the problems of navigating among humans in shared workspaces and of navigating through environments for which no map is available. In both areas, humans show outstanding capabilities by relying on their “common sense”, i.e. the experience gained over many years. The underlying goal of using data-driven approaches instead of classical hand-engineered solutions is to reduce the amount of hand-tuning while improving the navigation performance and social acceptance of robots.

When navigating in dynamic environments shared with other agents, forecasting the evolution of the environment is an important factor. Part A of this thesis therefore introduces two approaches for interaction-aware robot navigation. First, we present a framework that allows for fully cooperative robot motion planning in environments shared with pedestrians. A maximum entropy probability distribution over trajectories is learned from pedestrian demonstrations using inverse reinforcement learning, avoiding hand-tuning of the model parameters. Using this approach, cooperative real-world robot navigation is demonstrated in environments shared with pedestrians. The results highlight the importance of interaction-aware navigation strategies for improving the social compliance of robots.
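The maximum entropy formulation mentioned above can be illustrated with a minimal sketch: trajectories are scored by a weighted feature vector, the model assigns each candidate a probability proportional to the exponentiated score, and the inverse reinforcement learning gradient is the difference between expert and expected feature counts. All features, weights, and candidate trajectories below are illustrative placeholders, not the thesis's actual model.

```python
import numpy as np

# Illustrative setup: 5 candidate trajectories, each summarised by 3
# hand-picked features (e.g. path length, pedestrian clearance, smoothness).
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 3))
theta = np.array([-1.0, 0.5, 0.2])  # assumed cost weights, not learned values

def maxent_distribution(features, theta):
    """P(tau) proportional to exp(theta . f(tau)) -- the max-entropy model."""
    scores = features @ theta
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

def irl_gradient(features, theta, expert_feat):
    """MaxEnt IRL gradient: empirical minus expected feature counts."""
    p = maxent_distribution(features, theta)
    return expert_feat - p @ features

p = maxent_distribution(features, theta)
grad = irl_gradient(features, theta, features.mean(axis=0))
```

In practice the weights `theta` would be updated by gradient ascent on the demonstration likelihood until expert and model feature expectations match; this sketch only shows the two quantities that drive that update.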
The second contribution is a data-driven model for interaction-aware pedestrian prediction in real-world environments containing both static and dynamic obstacles, based on Long Short-Term Memory (LSTM) neural networks. The model is designed to be used with standard predict-and-react planners while still taking into account interactions among pedestrians. The presented results show state-of-the-art prediction accuracy, and the importance of taking static obstacles into account for pedestrian prediction is evaluated.

Part B of this thesis covers another challenging problem in mobile robot navigation: map-less end-to-end navigation. While classical approaches rely on the interplay of various modules and prior knowledge of a map, map-less navigation targets the flexible deployment of mobile robots in unknown environments. This thesis presents two data-driven approaches for target-driven end-to-end navigation. First, imitation learning based on expert demonstrations is used to find the complex mapping between sensor data and robot motion commands, represented by a neural network model. Second, this work is combined with deep reinforcement learning to obtain a more general and robust end-to-end navigation policy. Both approaches are trained purely in simulation and transferred to a real robotic platform. The results show successful navigation to the desired target locations using end-to-end policies relying on local perception only. An extensive evaluation and comparison to map-based motion planners is provided, both in simulation and in real-world experiments.
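The recurrent prediction idea can be sketched as follows: an LSTM cell consumes an observed (x, y) track step by step, and its final hidden state is mapped to a predicted position offset. This is a generic single-cell sketch with randomly initialised weights standing in for trained parameters; the thesis's actual architecture, inputs (e.g. obstacle context), and training procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
H, D = 8, 2   # hidden size, input dimension (x, y position)

# Random weights stand in for trained parameters (illustrative only).
Wx = rng.normal(scale=0.1, size=(4 * H, D))
Wh = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
Wout = rng.normal(scale=0.1, size=(D, H))  # maps hidden state to (dx, dy)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM step: input, forget, output gates and candidate cell state."""
    z = Wx @ x + Wh @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # updated cell state
    h = o * np.tanh(c)         # updated hidden state
    return h, c

def predict_next(track):
    """Encode an observed track, then output the predicted next position."""
    h, c = np.zeros(H), np.zeros(H)
    for x in track:
        h, c = lstm_step(x, h, c)
    return track[-1] + Wout @ h  # last observation plus predicted offset

track = np.array([[0.0, 0.0], [0.2, 0.1], [0.4, 0.2]])
next_pos = predict_next(track)
```

A trained version would roll this step forward over a prediction horizon and condition on neighbouring pedestrians and static obstacles; the sketch shows only the core recurrence.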

Details

  • Title: Learning to Navigate: Data-driven Motion Planning for Autonomous Ground Robots
  • Author: Pfeiffer, Mark
  • Date of publication: 01/01/2018