In this deliverable we present the results of the first-round evaluations of all our crowdbots (Pepper, the smart wheelchair, cuyBot and Qolo). The evaluations are both quantitative and qualitative. They range from testing the efficacy of core components that are developed within the project and underpin the Crowdbots (e.g. sensing, localisation, simulation, planning), to fully integrated tests of each platform for some of the scenarios identified in D1.1 (Specification of Scenarios Requirements).
The results demonstrate that the prototype crowdbots are currently able to operate in low-density crowds. However, throughout the evaluation, some common themes emerged.
Our autonomous robots (Pepper and cuyBot), both of which require localisation,
were able to operate well in low-density crowds over extended periods of
evaluation. Complementary approaches (e.g. “visual-inertial ceiling-based
localization” and “2D-LiDAR-based localization”) were used to increase
robustness. However, as the crowd density increased there were times when both
localisation methods failed. In these instances, we would most likely need to
rely upon a dead-reckoning approach until a new pose could be determined.
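Such a fallback amounts to integrating wheel odometry until the localiser recovers. The sketch below is a minimal illustration of this idea for a differential-drive base; it is our own example, not code from the project, and all names are illustrative:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Propagate a planar pose (x, y, theta) from wheel odometry alone.

    v     : forward speed [m/s] from wheel encoders
    omega : yaw rate [rad/s]
    dt    : time step [s]
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    # Wrap heading to (-pi, pi]
    theta = (theta + omega * dt + math.pi) % (2 * math.pi) - math.pi
    return (x, y, theta)

# Integrate odometry while the localiser reports no fix:
# 10 steps of 0.1 s driving straight at 0.5 m/s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, v=0.5, omega=0.0, dt=0.1)
```

Because the errors of such an estimate grow without bound, it is only viable as a bridge until one of the localisation methods returns a new absolute pose.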
People’s perceptions of the autonomous robots (i.e. Pepper and cuyBot) are very
different to those of our robots that are co-located with a human user (i.e.
smart wheelchair and Qolo). Consequently, we observed different crowd
interaction behaviour in a number of scenarios, where the Pepper robot and
cuyBot were treated more as mobile obstacles to be avoided, whilst the human
crowd was generally more patient with the wheelchair and Qolo. This resulted in
behaviours where people would overtake and cut abruptly in front of cuyBot and
Pepper, or even deliberately try to impede their motion. Conversely, people
within the crowd would often wait to allow the wheelchair to pass, make eye
contact with the user and in some cases even move proactively out of the way.
The robot speed also appeared to play a substantial role in affecting the crowd
behaviour. This has implications for how we model crowd motion and subsequent
robot planning in response to the type and speed of the crowdbot.
The results of this deliverable are currently being used to inform D1.3 (Specification of Scenarios Requirements Update). In particular, we are prioritising and refining the scenarios to be tested (D1.1), as well as developing a common set of metrics, which will allow us to better compare between robots, scenarios and algorithms.
Download deliverable 1.4.
Work package 5 (WP5) is about developing a
coherent theoretical and functional system architecture that can accommodate
the targeted scenarios and facilitate the integration of the different work
packages across all three robotic platforms. However, this process must be
iterative and reviewable after each milestone so that these objectives remain
on track throughout the project. Achieving this requires coordinating periodic
integration activities among partners, updating the platforms, and
incorporating new developments. Specifically, we want to ensure that all
project experiment needs are incorporated, both in the robot architecture
model and in the hardware. At the same time, we must ensure that the quality
and robustness of individual components are sufficient for subsequent
commercial and academic use. Finally, we must ensure that low-level sensor
data from the robot and/or additional sensors is correctly managed so that
high-level situation assessment becomes possible.
The focus of this deliverable is to propose a flexible architecture that can be easily adapted to the three robot platforms involved in the CROWDBOT project. The present document therefore extends the previously delivered D5.2.
Download deliverable 5.3.
In this report we present our first prototype of the perception pipeline developed for the CROWDBOT project. Currently, the focus of the perception pipeline is on detecting and tracking pedestrians in low to medium density scenarios using RGB-D cameras and 2D LiDAR sensors. We begin with reviewing the major detection and tracking methods used in our perception pipeline, as well as their ROS implementation. Then we proceed with quantitative evaluations, presenting results on the detection, tracking, and run-time performance. Finally, we discuss some on-going work to extend the perception pipeline, including interactive data annotation tools, optical flow aided pedestrian tracking, and detailed person analysis modules.
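As a rough illustration of the tracking stage, the sketch below performs greedy nearest-neighbour association of new detections to existing tracks. It is a deliberately simplified stand-in, not the CROWDBOT tracker itself, and all function and variable names are ours:

```python
def associate(tracks, detections, gate=0.5):
    """Greedily match detections to tracks by Euclidean distance.

    tracks     : dict track_id -> (x, y) last known position [m]
    detections : list of (x, y) new detections [m]
    gate       : maximum matching distance [m]
    Returns {track_id: detection_index} for pairs closer than `gate`.
    """
    # Collect all candidate pairs within the gating distance
    pairs = []
    for tid, (tx, ty) in tracks.items():
        for di, (dx, dy) in enumerate(detections):
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d < gate:
                pairs.append((d, tid, di))
    # Assign closest pairs first, each track and detection used once
    pairs.sort()
    assigned, used = {}, set()
    for d, tid, di in pairs:
        if tid not in assigned and di not in used:
            assigned[tid] = di
            used.add(di)
    return assigned

tracks = {1: (0.0, 0.0), 2: (2.0, 0.0)}
dets = [(2.1, 0.1), (0.1, -0.1)]
matches = associate(tracks, dets)  # -> {1: 1, 2: 0}
```

In practice a tracker would also predict each track forward (e.g. with a Kalman filter) before association and spawn or delete tracks for unmatched detections; the gating and assignment step shown here is the common core.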
Download deliverable 2.2.
The CrowdBot project’s crowd simulators perform two essential roles for the safe
navigation of robots in populated environments:
- Crowd simulation for short-term prediction of the motion of people in the
vicinity of the robot.
- Crowd simulation for testing and evaluating the navigation functions of a
robot in a densely populated environment.
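The first role, short-term prediction, can be illustrated with a minimal constant-velocity extrapolation. This is a deliberately simple stand-in for the project's crowd simulator, with names and parameters of our own choosing:

```python
def predict_positions(people, horizon, dt):
    """Constant-velocity extrapolation of pedestrian positions.

    people  : list of (x, y, vx, vy) current state per person [m, m/s]
    horizon : number of future steps to predict
    dt      : step length [s]
    Returns one trajectory (list of (x, y)) per person.
    """
    trajs = []
    for x, y, vx, vy in people:
        traj = [(x + vx * dt * k, y + vy * dt * k)
                for k in range(1, horizon + 1)]
        trajs.append(traj)
    return trajs

# Two pedestrians, predicted 1 s ahead in 0.25 s steps
trajs = predict_positions(
    [(0.0, 0.0, 1.0, 0.0),    # walking along +x at 1 m/s
     (1.0, 1.0, 0.0, -1.0)],  # walking along -y at 1 m/s
    horizon=4, dt=0.25)
```

A real crowd simulator replaces the straight-line motion model with interaction-aware models so that predicted pedestrians avoid each other and the robot, but the interface, current states in and short-horizon trajectories out, stays the same.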
This deliverable reports on the developments made by the partner Inria to meet these two needs between months M1 and M20 of the project.
Download deliverable 4.2.
This deliverable details the mapping, localisation and path planning solutions
developed for the CROWDBOT project between months M1 and M20. Each of the three
technical components has been designed with the explicit goal of achieving
robot navigation in crowded environments,
where many existing methods struggle due to the high degree of dynamic motion
around the robot. In particular, the three key challenges that we address are:
- Generating clean and coherent maps of the static environment despite the presence of dynamic obstacles during mapping;
- Achieving fast and accurate localisation when prior information on the robot pose is unavailable;
- Executing low-latency local motion planning that balances the collision avoidance routines of the robot with making progress along the path.
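The third challenge, trading off collision avoidance against progress, can be sketched as a velocity-sampling planner that scores candidate commands by obstacle clearance and remaining distance to the goal. This is a toy illustration in the spirit of sampling-based local planners, not the planner developed in the deliverable:

```python
import math

def choose_velocity(candidates, obstacles, goal, pose,
                    w_clear=0.5, w_goal=1.0, dt=1.0):
    """Pick the (v, omega) command with the best cost after a short rollout.

    Cost rewards progress toward the goal and penalises small clearance;
    commands that would bring the robot within 0.3 m of an obstacle are
    discarded outright. The rollout here is a crude straight-line step.
    """
    x, y, th = pose
    best, best_cost = None, float("inf")
    for v, om in candidates:
        # One-step forward simulation of the command
        nx = x + v * math.cos(th) * dt
        ny = y + v * math.sin(th) * dt
        clear = min(math.hypot(nx - ox, ny - oy) for ox, oy in obstacles)
        if clear < 0.3:            # would come too close: reject
            continue
        to_goal = math.hypot(goal[0] - nx, goal[1] - ny)
        cost = w_goal * to_goal - w_clear * clear
        if cost < best_cost:
            best, best_cost = (v, om), cost
    return best

cmd = choose_velocity(
    candidates=[(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
    obstacles=[(1.2, 0.0)],       # a pedestrian 1.2 m ahead
    goal=(3.0, 0.0),
    pose=(0.0, 0.0, 0.0),
)
```

Here the fastest command is rejected because it would end too close to the pedestrian, and the planner settles on an intermediate speed: it keeps moving toward the goal rather than freezing, which is exactly the balance the deliverable targets.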
Download deliverable 3.1.
This document aims to
identify risks that may arise when a robot navigates through a crowd and to
provide a preliminary set of tools for identifying and evaluating potential
physical hazards arising from interaction with crowdbots.
For the scope of this
document, we define crowdbots as mobile robots operating in public, densely
populated spaces that are capable of establishing physical contact with human beings.
Excerpts of this report have been included in a journal article submitted to ACM Transactions on Human-Robot Interaction on December 19th, 2018.
Download deliverable 6.1.
As part of external stakeholder engagements, two types of experiments ─ user studies and robotic tests ─ are planned for the Crowdbot project. User studies are further classified as structured interviews and focus group engagements. Both forms of user study will be used to collect information and better understand the viewpoints and concerns of the various stakeholders that will be exposed to our robots. Examples of these stakeholders are potential users of our robots, robotics experts, property owners and venue managers in whose spaces our robots are likely to roam, and members of the general public that will interact with the robots. Several rounds of user studies are planned over the duration of the project. They work in tandem with robotic tests in several rounds of interview-test-interview cycles. Data collected from user studies assists the test team in selecting the specific robotic test cases that are most meaningful and relevant to the main project goal of safe navigation of robots among dense human crowds. After completion of the robotic tests, the collected data is used to inform stakeholders of observed robot-human interactions and to solicit their feedback in the next round of user studies.
Download deliverable 1.2.
Work package 5 (WP5) is about developing a coherent theoretical and functional
system architecture that can accommodate the targeted scenarios and facilitate
the integration of the different work packages across all three robotic
platforms. However, this process must be iterative and reviewable after each
milestone so that these objectives remain on track throughout the project.
Achieving this requires coordinating periodic integration activities among
partners, updating the platforms, and incorporating new developments.
Specifically, we want to ensure that all project experiment needs are
incorporated, both in the robot architecture model and in the hardware. At the
same time, we must ensure that the quality and robustness of individual
components are sufficient for subsequent commercial and academic use. Finally,
we must ensure that low-level sensor data from the robot and/or additional
sensors is correctly managed so that high-level situation assessment becomes
possible.
The focus of this deliverable is to propose a flexible architecture that can be
easily adapted to the three robot platforms involved in the CROWDBOT project,
although the hardware and software specifications it provides are limited to
the Pepper platform.
The present document extends the D5.1 System Architecture document delivered in M4: communication through message exchanges between the different modules is specified in section 3, and a deeper view of the system components and their functions is provided in section 4. The first prototype of Pepper, with additional sensors and augmented computational power to cover the experimental scenarios defined in D1.1 (Specification of Scenarios Requirements), is described in sections 6 and 7. Several experimental procedures and results backing up the design and architecture decisions are presented in section 5.
Download deliverable 5.2.
Scenarios are descriptors that portray use cases and operational procedures of mobile robots in human crowd environments such as hospitals, shopping malls, train stations and other public or private venues. Nowadays we are witnessing the presence of robots in both public and private places, but their efficacy and technological features remain rather modest due to their limited mobility and interaction with humans. The main focus of the Crowdbot project is to demonstrate safe and efficient mobile robot navigation in densely crowded human environments. This report details the various navigation scenarios we plan to test and validate as part of the overall project goals of research, innovation, ethics and a feasibility study for technology transfer in mobile robotics.
Download deliverable 1.1.
The CROWDBOT project aims for tight navigation of mobile robots in a dense crowd,
and thus physical interaction (both contact and non-contact) between a robot
and human crowd is anticipated. This report addresses our approach for
modeling, analysis and experimentation of robot-human physical interaction.
Here the term “physical” means that a robot will come close (i.e. non-contact)
or in contact with a human or humans while it navigates and moves amongst them.
Hence, physical interaction is all about physics, mechanics, locomotion and
possibly bodily harm. The other related term “social interaction” refers to
interpersonal, cultural and give-and-take exchange between two entities to
avoid collision; this topic is outside the scope of this report. Here, we
focus solely on robot-human physical interaction, commonly denoted in the
literature as pHRI (physical Human-Robot Interaction).
The report is divided into three main sections: 1) modeling of physical interactions, 2) bibliographic study of physical interactions and 3) case studies of physical interactions for robots that apply specifically to CROWDBOT.
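To give a flavour of what a physical-interaction model can look like, the sketch below implements a linear spring-damper contact force. This is a generic textbook contact model of our own choosing, not necessarily the one adopted in the deliverable:

```python
def contact_force(penetration, pen_rate, k=1000.0, c=50.0):
    """Linear spring-damper contact model.

    F = k * delta + c * delta_dot while in contact (delta > 0), else 0.
    penetration : interpenetration depth delta [m]
    pen_rate    : rate of change of penetration [m/s]
    k           : contact stiffness [N/m]
    c           : contact damping [N*s/m]
    The force is clamped at zero so damping can never make it adhesive.
    """
    if penetration <= 0.0:
        return 0.0
    return max(0.0, k * penetration + c * pen_rate)

# Static 1 cm penetration at the default stiffness: roughly 10 N
f = contact_force(0.01, 0.0)
```

Models of this form let a simulator estimate contact forces during robot-human brushes and bumps, which in turn feed the kind of hazard analysis the case studies in this report are concerned with.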
Download deliverable 4.1.