Project reports

This category collects all public deliverables of the CrowdBot project that have been published so far.

Deliverable 1.5: Second Round Test Evaluation Report

In this final deliverable of WP1, we discuss the second series of evaluations performed with each of our integrated crowdbots: the smart wheelchair, Qolo, Pepper, and cuyBot. The evaluations are both quantitative and qualitative and range from testing the efficacy of the core components that are developed within the project and underpin the crowdbots, namely: …

Deliverable 5.4: Final integrated system also including simulation, crowd navigation for experimental scenario

Work Package 5 (WP5) is concerned with developing a coherent theoretical and functional system architecture that can accommodate the targeted scenarios and facilitate the integration of the different work packages across all four robotic platforms. However, this process must be iterative and reviewable after each milestone so that these objectives remain guaranteed during …

Deliverable 4.3: Crowd simulator – final version

This report details the crowd simulation software and tools developed for the CrowdBot project and presents their final version, reached at the end of the project (month M42). The development of the crowd simulation tools was mainly carried out within Work Package 4 of the project. The simulation tools for …

Deliverable 2.3: Local sensing system

In this report we describe developments in the perception pipeline, which is crucial for safe robot navigation. In Section 2, we briefly recap the previous pipeline, composed of multi-modal (RGB cameras and LiDAR sensors) person detection and joint tracking in a unified world coordinate system, and we present an overview of the updated components.
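
The minimal sketch below illustrates the general shape of such a pipeline: detections from different sensors are projected into a shared world frame and greedily associated with existing tracks. All names and the association heuristic here are hypothetical stand-ins for illustration, not the deliverable's actual components.

```python
# Hypothetical sketch of multi-modal detection fusion and tracking in a
# unified world frame. Detection, to_world and associate are invented names.
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    position: np.ndarray          # (x, y) in the sensor frame
    sensor_to_world: np.ndarray   # 3x3 homogeneous 2-D transform

def to_world(det: Detection) -> np.ndarray:
    """Project a sensor-frame detection into the unified world frame."""
    p = np.array([det.position[0], det.position[1], 1.0])
    return (det.sensor_to_world @ p)[:2]

def associate(tracks: dict, world_points: list, gate: float = 0.5) -> None:
    """Greedy nearest-neighbour association of detections to tracks.

    A detection within `gate` metres of an existing track updates it;
    unmatched detections spawn new tracks."""
    next_id = max(tracks, default=-1) + 1
    for p in world_points:
        if tracks:
            tid = min(tracks, key=lambda t: np.linalg.norm(tracks[t] - p))
            if np.linalg.norm(tracks[tid] - p) < gate:
                tracks[tid] = 0.5 * tracks[tid] + 0.5 * p  # simple smoothing
                continue
        tracks[next_id] = p
        next_id += 1
```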

Deliverable 3.6: Shared control navigation

Work Package 3 of the CrowdBot project focuses on navigation. Half of our prototype crowdbots (Pepper and cuyBot) are designed to be fully autonomous, so their navigation algorithms must deal with both the global and local aspects of planning. The other two crowdbots (the smart wheelchair and Qolo), however, are designed to support human users, improving …
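
As a rough illustration of the shared-control idea behind user-supporting platforms such as the wheelchair and Qolo, the sketch below linearly blends a user's command with an autonomous avoidance command. The blending heuristic is an assumption made for this sketch, not the deliverable's formulation.

```python
# Minimal shared-control sketch: authority shifts from the user to the
# autonomous command as the nearest obstacle gets closer.
import numpy as np

def blend_commands(u_user: np.ndarray,
                   u_auto: np.ndarray,
                   dist_to_obstacle: float,
                   d_safe: float = 1.5) -> np.ndarray:
    """Return a velocity command mixing user intent and autonomy.

    alpha = 1 far from obstacles (pure user control); alpha -> 0 near
    obstacles (autonomy takes over)."""
    alpha = float(np.clip(dist_to_obstacle / d_safe, 0.0, 1.0))
    return alpha * u_user + (1.0 - alpha) * u_auto

# Example: the user drives straight while the planner suggests veering left.
cmd = blend_commands(np.array([0.8, 0.0]), np.array([0.4, 0.3]),
                     dist_to_obstacle=0.6)
```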

Deliverable 6.3: Proceedings of ESAB Workshops & Report on Ethical Protocols

This report describes the ethical protocols developed and followed during the CrowdBot project and all Ethical and Safety Advisory Board (ESAB) meetings held throughout the project. The ESAB served as an advisory committee, and we report here on the opinions of its several experts as a group. After these meetings, we organized an international workshop to …

Deliverable 3.4: Reactive Motion Planning

This report details the reactive navigation techniques developed for the CrowdBot project between months M1 and M30. We investigated three main technical components for achieving reactivity in different types of mobile and service robots navigating in crowded environments. Each of the three technical components is designed to complement high-level planning techniques …
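
To give a flavour of what a reactive layer on top of a high-level planner can look like, the sketch below locally deflects the planner's nominal velocity away from nearby pedestrians. The potential-field form is a generic stand-in for illustration, not one of the report's three components.

```python
# Generic reactive layer sketch: repulsive deflection of a nominal velocity.
import numpy as np

def reactive_velocity(v_nominal, robot_pos, people,
                      influence=2.0, gain=0.8):
    """Deflect the planner's velocity away from people within `influence` m.

    v_nominal: (2,) velocity from the high-level planner
    robot_pos: (2,) robot position in the world frame
    people:    list of (2,) pedestrian positions"""
    v = np.asarray(v_nominal, dtype=float).copy()
    pos = np.asarray(robot_pos, dtype=float)
    for p in people:
        offset = pos - np.asarray(p, dtype=float)
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < influence:
            # Repulsion grows as the pedestrian gets closer.
            v += gain * (1.0 / dist - 1.0 / influence) * offset / dist
    return v
```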

Deliverable 3.5: Social Navigation

This task brings socially aware navigation strategies to the commercial platform Pepper. Special focus was put on four factors. Safety: no physical harm. Comfort: absence of annoyance and stress for humans. Naturalness: similarity between robot and human behaviour patterns. Sociability: adherence to explicit high-level socio-cultural conventions. This deliverable reports on the developments made by SoftBank …
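
One illustrative way to operationalize these four factors is as weighted terms in a trajectory cost used by a local planner. The weights and per-factor costs below are assumptions made for this sketch, not SoftBank's actual scoring.

```python
# Hypothetical social-navigation cost: a weighted sum over the four factors.
def social_cost(costs: dict, weights: dict = None) -> float:
    """Weighted sum of per-factor costs for a candidate trajectory.

    Lower is better; safety is weighted highest so it dominates."""
    weights = weights or {"safety": 10.0, "comfort": 3.0,
                          "naturalness": 1.0, "sociability": 1.0}
    return sum(w * costs[k] for k, w in weights.items())

# Example: a close pass incurs a high comfort cost and loses to a detour.
close_pass = social_cost({"safety": 0.1, "comfort": 0.8,
                          "naturalness": 0.2, "sociability": 0.1})
detour = social_cost({"safety": 0.0, "comfort": 0.1,
                      "naturalness": 0.3, "sociability": 0.1})
assert detour < close_pass
```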

Deliverable 3.2: Robust Localization and Mapping

This report details the robust localization and mapping algorithms developed for the CrowdBot project between months M1 and M30. Our proposed solutions are designed with the explicit goal of achieving robot navigation in crowded environments, where many existing methods struggle due to the high degree of dynamic motion around the robot. This report primarily serves …
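
A common ingredient of crowd-robust mapping is to discard sensor returns caused by moving people before feeding the scan to a standard scan matcher, so that pedestrians do not corrupt the static map. The masking step below illustrates that general idea only; it is not the deliverable's algorithm.

```python
# Sketch: mask out laser points near tracked pedestrians before mapping.
import numpy as np

def filter_dynamic_points(scan_xy: np.ndarray,
                          person_positions: np.ndarray,
                          radius: float = 0.4) -> np.ndarray:
    """Drop scan points within `radius` metres of any tracked person.

    scan_xy:          (N, 2) laser points in the world frame
    person_positions: (M, 2) tracked person centres in the world frame"""
    if len(person_positions) == 0:
        return scan_xy
    # Pairwise distances between every scan point and every person.
    d = np.linalg.norm(scan_xy[:, None, :] - person_positions[None, :, :],
                       axis=2)
    return scan_xy[d.min(axis=1) > radius]
```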

Deliverable 3.3: Local Interaction Aware Motion Planning

State-of-the-art approaches to robot navigation among humans are typically restricted to planar movement actions. This work addresses the question of whether it can be beneficial to use interaction actions, such as speaking, touching, and gesturing, to allow robots to navigate in unstructured, crowded environments. To do so, we first identify challenging scenarios …
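
The sketch below shows one way a planner's discrete action set could be extended beyond planar motion with interaction actions of this kind. The action costs and success probabilities are invented for illustration and do not come from the deliverable.

```python
# Hypothetical sketch: choosing among motion and interaction actions when a
# person blocks the robot's path, by minimizing expected cost.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float          # time/effort cost of executing the action
    clears_path: float   # assumed probability the blocking person yields

ACTIONS = [
    Action("wait", cost=2.0, clears_path=0.2),
    Action("detour", cost=3.0, clears_path=1.0),
    Action("speak", cost=0.5, clears_path=0.6),
    Action("gesture", cost=0.5, clears_path=0.4),
]

def best_action(replan_cost: float = 4.0) -> Action:
    """Pick the action with the lowest expected cost: its own cost plus a
    replanning penalty incurred if the path does not clear."""
    return min(ACTIONS, key=lambda a: a.cost + (1 - a.clears_path) * replan_cost)
```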