Deliverable 1.5: Second Round Test Evaluation Report

In this final deliverable of WP1, we discuss the second series of evaluations performed with each of our integrated CrowdBots: the smart wheelchair, Qolo, Pepper, and cuyBot. The evaluations are both quantitative and qualitative, ranging from tests of the efficacy of the core components developed within the project that underpin the CrowdBots (sensing, localisation, simulation, and planning) to fully integrated tests of each platform in some of the scenarios identified in D1.3 “Specification of Scenarios Requirements Update”.

Two distinct evaluations were performed on the smart wheelchair. First, we validated our proposed reinforcement learning method for creating a more personalised shared control experience in the CrowdBot simulator. Second, we evaluated our previously proposed probabilistic shared control method PSC-DWAGVO (D3.6) on the physical wheelchair in a controlled sparse-crowd environment, before demonstrating its feasibility in a natural sparse crowd on the main UCL campus in London, UK.
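
For intuition, probabilistic shared control of this kind scores planner-generated candidate velocities by how well they match the user's joystick command while penalising unsafe motion. The minimal Python sketch below illustrates the principle only; the Gaussian intention model, the weights, and the candidate set are illustrative assumptions, not the actual PSC-DWAGVO implementation (see D3.6).

```python
import numpy as np

def shared_control(user_cmd, candidates, obstacle_costs, sigma=0.5, w_safety=1.0):
    """Pick the candidate velocity that best trades off user intention against
    obstacle cost (illustrative sketch; all weights are assumptions)."""
    user_cmd = np.asarray(user_cmd, dtype=float)
    # Likelihood that each safe candidate matches the user's intention,
    # modelled here as a Gaussian centred on the joystick command.
    intent = np.exp(-np.sum((candidates - user_cmd) ** 2, axis=1) / (2 * sigma ** 2))
    # Combine the intention likelihood with a safety term (lower cost = safer).
    scores = intent - w_safety * np.asarray(obstacle_costs, dtype=float)
    return candidates[np.argmax(scores)]

# Hypothetical usage: candidate (v, w) pairs from a DWA-style sampler.
candidates = np.array([[0.5, 0.0], [0.4, 0.3], [0.3, -0.3]])
print(shared_control(user_cmd=[0.5, 0.2], candidates=candidates,
                     obstacle_costs=[0.8, 0.1, 0.6]))
```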

Qolo was tested in natural crowds in Lausanne, Switzerland. Reactive navigation without a high-level planner was successful, and the shared control felt very natural, with people passing by without noticing the robot. By contrast, the direct point-to-point controller failed to interact properly with the environment and produced less intuitive behaviour. People detection was reported to work very well; tracking, however, became a bottleneck when many people surrounded Qolo. All controllers and the experimental datasets have been made publicly available, together with tools for analysing pedestrian and robot performance.
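
As an illustration of planner-free reactive navigation, a controller of this kind modulates a nominal velocity with a term that pushes the robot away from nearby pedestrians. The sketch below is a deliberately simplified repulsive law, not Qolo's actual controller; the safe distance, gain, and pedestrian representation are assumptions.

```python
import numpy as np

def reactive_command(nominal_v, pedestrians, safe_dist=1.0, gain=0.8):
    """Adjust a nominal 2-D velocity with a simple repulsive term for each
    nearby pedestrian (illustrative sketch only)."""
    v = np.asarray(nominal_v, dtype=float)
    for p in pedestrians:  # pedestrian positions relative to the robot
        p = np.asarray(p, dtype=float)
        d = np.linalg.norm(p)
        if 0.0 < d < safe_dist:
            # Push away from the pedestrian, more strongly when closer.
            v -= gain * (safe_dist - d) / safe_dist * (p / d)
    return v

print(reactive_command([1.0, 0.0], pedestrians=[[0.6, 0.1], [2.0, -1.0]]))
```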

For our fully autonomous CrowdBots, training robust sensor-to-control policies for robot navigation remains a challenging task. We trained two end-to-end and 18 unsupervised-learning-based architectures and compared them, along with existing approaches, on unseen test cases. We also demonstrated our approach on the Pepper robot in a real environment. Our results show that unsupervised learning methods are competitive with end-to-end methods, and they highlight the importance of components such as the input representation, predictive unsupervised learning, and latent features. Our models have been made publicly available, along with the training and testing environments and tools.
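
To make the unsupervised route concrete, the sketch below shows the general two-stage pattern: an encoder pretrained with an unsupervised objective (e.g. reconstruction or prediction) is frozen and feeds a small control head trained on the navigation task. The layer sizes, input shape, and names are illustrative assumptions, not our evaluated architectures.

```python
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Two-stage policy: a frozen, unsupervised-pretrained encoder feeds a
    small control head (illustrative shapes and sizes)."""
    def __init__(self, obs_dim=64, latent_dim=16, act_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(           # pretrained, then frozen
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.head = nn.Sequential(              # trained on the control task
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, act_dim), nn.Tanh(),  # normalised (v, w) command
        )

    def forward(self, obs):
        with torch.no_grad():                   # keep encoder features fixed
            z = self.encoder(obs)
        return self.head(z)

policy = LatentPolicy()
cmd = policy(torch.randn(1, 64))                # e.g. a flattened range scan
print(cmd)
```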

Finally, the compliant motion control on cuyBot was evaluated in physical experiments with human participants. These experiments demonstrated that the current implementation of compliant motion control on the cuyBot robot is safe, even with inexperienced users. Our results suggest that the concept of robots coming into close contact with humans may be accepted by the public, as long as compliant motion is implemented in a safe and intuitive way. Once participants had gained experience with the robot, their average attitude towards close proximity and physical contact was better than neutral.
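
For reference, compliant behaviour of this kind is commonly realised with an admittance law, in which measured contact forces drive the commanded velocity so the robot yields to a push. The one-dimensional sketch below illustrates the idea; the mass and damping parameters are illustrative assumptions, not cuyBot's tuned values.

```python
def admittance_step(v, f_ext, dt=0.01, mass=10.0, damping=20.0):
    """One step of a 1-D admittance law M*dv/dt + D*v = f_ext, so the robot
    yields to an external contact force (illustrative parameters)."""
    dv = (f_ext - damping * v) / mass
    return v + dv * dt

# Hypothetical contact: a sustained 15 N push makes the robot drift along
# with the force instead of rigidly resisting it.
v = 0.0
for _ in range(100):          # simulate 1 s of contact
    v = admittance_step(v, f_ext=15.0)
print(round(v, 3))            # approaches the steady state f/D = 0.75 m/s
```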

In summary, both of our semi-autonomous systems have shown promising results for the use of shared control in crowded environments. Likewise, our two fully autonomous CrowdBots have demonstrated both the feasibility of navigation in dynamic human environments and the safety and reasonable acceptance of our compliant motion control when making physical contact with pedestrians.