Deliverable 1.4: 1st Round Test Evaluation Report

In this deliverable we present the results of the first-round evaluations of all our crowdbots (Pepper, the smart wheelchair, cuyBot and Qolo). The evaluations are both quantitative and qualitative, ranging from tests of the efficacy of core components developed within the project that underpin the crowdbots (e.g. sensing, localisation, simulation, planning), to fully integrated tests of each platform for some of the scenarios identified in D1.1 (Specification of Scenarios Requirements).

The results demonstrate that the prototype crowdbots are currently able to operate in low-density crowds. However, throughout the evaluation, some common themes emerged. Our autonomous robots (Pepper and cuyBot), both of which require localisation, were able to operate well in low-density crowds over extended periods of evaluation. Complementary approaches (e.g. “visual-inertial ceiling-based localization” and “2D-LiDAR-based localization”) were used to increase robustness. However, as the crowd density increased, there were times when both localisation methods failed. In these instances, we would most likely need to rely upon a dead-reckoning approach until a new pose could be determined.
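
As a rough illustration of this fallback logic (a minimal sketch with hypothetical names, not the project's localisation stack), the pose estimate can be propagated by odometry whenever neither localisation method returns a fix:

```python
# Sketch of the fallback described above: use an absolute fix when either
# complementary localisation method succeeds, otherwise dead-reckon by
# integrating odometry increments from the last known pose.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float  # heading in radians

class FallbackLocalizer:
    def __init__(self, initial_pose: Pose):
        self.pose = initial_pose

    def update(self, ceiling_fix, lidar_fix, odom_delta):
        """ceiling_fix / lidar_fix: Pose, or None when that method failed.
        odom_delta: (distance travelled, heading change) since last call."""
        fix = ceiling_fix or lidar_fix          # complementary sources
        if fix is not None:
            self.pose = fix                     # absolute correction available
        else:
            d, dtheta = odom_delta              # dead reckoning
            self.pose = Pose(
                self.pose.x + d * math.cos(self.pose.theta),
                self.pose.y + d * math.sin(self.pose.theta),
                self.pose.theta + dtheta,
            )
        return self.pose
```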

Moreover, people’s perceptions of the autonomous robots (i.e. Pepper and cuyBot) are very different to those of our robots that are co-located with a human user (i.e. the smart wheelchair and Qolo). Consequently, we observed different crowd interaction behaviour in a number of scenarios: Pepper and cuyBot were treated more as mobile obstacles to be avoided, whilst the crowd was generally more patient with the wheelchair and Qolo. People would overtake and cut abruptly in front of cuyBot and Pepper, or even deliberately try to impede their motion. Conversely, people within the crowd would often wait to allow the wheelchair to pass, make eye contact with the user and in some cases even move proactively out of the way. Robot speed also appeared to play a substantial role in shaping crowd behaviour. This has implications for how we model crowd motion and subsequent robot planning in response to the type and speed of the crowdbot.

The results of this deliverable are currently being used to inform D1.3 (Specification of Scenarios Requirements Update). In particular, we are prioritising and refining the scenarios to be tested (D1.1), as well as developing a common set of metrics that will allow us to compare more meaningfully across robots, scenarios and algorithms.
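
As an example of what such metrics might look like (the concrete metric set is still being defined for D1.3, so the metric choices below are illustrative assumptions only):

```python
# Two illustrative trial metrics that could be computed identically for
# any robot, scenario or algorithm: total path length and the minimum
# robot-to-pedestrian clearance observed during the trial.
import math

def path_length(poses):
    """poses: list of (x, y) robot positions logged over a trial."""
    return sum(math.dist(a, b) for a, b in zip(poses, poses[1:]))

def min_clearance(robot_poses, pedestrian_poses):
    """pedestrian_poses: per-timestep lists of (x, y) pedestrian positions."""
    return min(
        math.dist(r, p)
        for r, peds in zip(robot_poses, pedestrian_poses)
        for p in peds
    )
```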

Download deliverable 1.4.

Deliverable 5.3: 2nd Updated and Extended Robot System

Work package 5 (WP5) develops a coherent theoretical and functional system architecture that can accommodate the targeted scenarios and facilitate the integration of the different work packages across all three robotic platforms. This process must be iterative and reviewed after each milestone so that these objectives remain on track throughout the project. Reaching them requires coordinating periodic integration activities among partners, updating the platforms and incorporating new developments. Specifically, we want to ensure that all project experiment needs are incorporated, both in the robot architecture model and in the hardware; that the quality and robustness of individual components are sufficient for subsequent commercial and academic use; and that the low-level sensor data from the robot and/or additional sensors is managed correctly so that high-level situation assessment becomes possible.

The focus of this deliverable is to propose a flexible architecture that can be easily adapted to the three robot platforms involved in the CROWDBOT project. The present document therefore extends the previously delivered D5.2.
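
A minimal sketch of the platform-abstraction idea, assuming illustrative interface names rather than the architecture actually specified in D5.3:

```python
# A common interface that each robot platform could implement, so that
# higher-level modules (e.g. situation assessment) remain
# platform-independent. Names and signatures are hypothetical.
from abc import ABC, abstractmethod

class RobotPlatform(ABC):
    @abstractmethod
    def read_sensors(self) -> dict:
        """Return the latest low-level sensor data (e.g. LiDAR, odometry)."""

    @abstractmethod
    def send_velocity(self, v: float, w: float) -> None:
        """Command forward speed v (m/s) and turn rate w (rad/s)."""

class SituationAssessment:
    """High-level module consuming sensor data via the common interface."""
    def __init__(self, platform: RobotPlatform):
        self.platform = platform

    def step(self):
        data = self.platform.read_sensors()
        # ... detect/track people, assess the situation, choose a command ...
        self.platform.send_velocity(0.0, 0.0)  # placeholder: stop
```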

Download deliverable 5.3.

Deliverable 2.2: Local Sensing 1st Prototype

In this report we present the first prototype of the perception pipeline developed for the CROWDBOT project. Currently, the focus of the pipeline is on detecting and tracking pedestrians in low- to medium-density scenarios using RGB-D cameras and 2D LiDAR sensors. We begin by reviewing the major detection and tracking methods used in the pipeline, as well as their ROS implementation. We then proceed with quantitative evaluations, presenting results on detection, tracking and run-time performance. Finally, we discuss ongoing work to extend the pipeline, including interactive data annotation tools, optical-flow-aided pedestrian tracking, and detailed person analysis modules.
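
By way of illustration only, a nearest-neighbour tracker of the kind such a pipeline might build on could look as follows; the class, gate threshold and data layout are hypothetical and do not correspond to our ROS interfaces:

```python
# Greedy nearest-neighbour association of per-frame pedestrian detections
# to track IDs. Motion prediction and track deletion are omitted for
# brevity; real trackers handle both.
import itertools
import math

class NNTracker:
    def __init__(self, gate=0.8):       # max association distance in metres
        self.gate = gate
        self.tracks = {}                # track id -> (x, y)
        self._ids = itertools.count()

    def update(self, detections):
        """detections: list of (x, y) pedestrian positions from one frame."""
        unmatched = list(detections)
        for tid, (tx, ty) in list(self.tracks.items()):
            if not unmatched:
                break
            # Pick the closest remaining detection for this track.
            d, best = min(
                (math.hypot(x - tx, y - ty), (x, y)) for x, y in unmatched
            )
            if d < self.gate:
                self.tracks[tid] = best
                unmatched.remove(best)
        for det in unmatched:           # spawn tracks for new pedestrians
            self.tracks[next(self._ids)] = det
        return self.tracks
```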

Download deliverable 2.2.

Deliverable 4.2: Crowd Simulator – Intermediate Version

The CROWDBOT project’s crowd simulators perform two essential roles for the safe navigation of robots in populated environments:

  1. Crowd simulation for short-term prediction of how the situation of people in the vicinity of the robot will evolve.
  2. Crowd simulation for testing and evaluating the navigation functions of a robot in a densely populated environment.

This deliverable reports on the developments made by the partner Inria to meet these two needs between months M1 and M20 of the project.
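
A toy example of the first role, assuming a simple constant-velocity model (the simulators developed in WP4 use considerably richer interaction models, and the function below is purely illustrative):

```python
# Short-term prediction of nearby pedestrians by constant-velocity
# extrapolation over a fixed horizon.
def predict_positions(positions, velocities, horizon=2.0, dt=0.1):
    """positions, velocities: lists of (x, y) per pedestrian.
    Returns one predicted trajectory per pedestrian over `horizon` seconds."""
    steps = int(horizon / dt)
    trajectories = []
    for (x, y), (vx, vy) in zip(positions, velocities):
        traj = [(x + vx * k * dt, y + vy * k * dt) for k in range(1, steps + 1)]
        trajectories.append(traj)
    return trajectories

# Example: one pedestrian walking at 1.2 m/s along x.
print(predict_positions([(0.0, 0.0)], [(1.2, 0.0)])[0][:3])
```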

Download deliverable 4.2.

Deliverable 3.1: 1st Release of Localization, Mapping & Local Motion Planning

This report details the mapping, localisation and path planning solutions developed for the CROWDBOT project between months M1 and M20. Each of the three technical components has been designed with the explicit goal of achieving robot navigation in crowded environments, where many existing methods struggle due to the high degree of dynamic motion around the robot. In particular, the three key challenges that we address are:

  • Generating clean and coherent maps of the static environment despite the presence of dynamic obstacles during mapping;
  • Achieving fast and accurate localisation when prior information on the robot pose is unavailable;
  • Executing low-latency local motion planning that balances the robot’s collision avoidance routines against making progress along the path (a toy sketch of this trade-off appears below).
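
The following sketch illustrates the trade-off named in the last bullet with an illustrative scoring function over candidate velocity commands; the weights and functional form are assumptions, not the planner released with this deliverable:

```python
# Score candidate (v, w) commands by progress toward the goal plus a
# saturated clearance reward, then pick the best-scoring command.
import math

def score(v, w, goal_heading, clearance, dt=0.5, alpha=1.0, beta=2.0):
    """v: forward speed (m/s); w: turn rate (rad/s); goal_heading: bearing
    of the goal relative to the robot (rad); clearance: distance to the
    nearest obstacle along the candidate arc (m)."""
    heading_after = w * dt                          # heading change over the horizon
    progress = v * math.cos(goal_heading - heading_after)
    safety = min(clearance, 2.0)                    # saturate beyond 2 m
    return alpha * progress + beta * safety

def best_command(candidates, goal_heading, clearance_fn):
    """candidates: list of (v, w) pairs; clearance_fn maps (v, w) to clearance."""
    return max(
        candidates,
        key=lambda c: score(c[0], c[1], goal_heading, clearance_fn(*c)),
    )
```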

Download deliverable 3.1.