The 2025 ICRA Arena Challenge

About

As part of the ICRA 2025 workshop Advances in Social Navigation: Planning, HRI and Beyond, the Arena Challenge 2025 benchmarks recent state-of-the-art approaches to social navigation in crowded, dynamic, and collaborative environments, hosted on our Arena platform. Participants are invited to develop autonomous navigation solutions capable of efficiently maneuvering through environments filled with pedestrians, dynamic obstacles, and realistic social interactions.

Participants are required to design and develop navigation algorithms that can handle:

  • Dynamic and Complex Environments: Navigate through environments populated with moving pedestrians and unpredictable social dynamics.
  • Socially Aware Navigation: Respect social norms, avoid collisions, and interact safely with dynamic groups.
  • Scalable Difficulty Levels: Overcome environments that gradually increase in complexity and difficulty.

Competition Structure

The Arena Challenge dataset comprises 12 pre-generated dynamic navigation environments, ranging from moderately busy spaces to highly congested and complex scenarios, and an environment generator that can produce novel dynamic social environments. The task is to navigate a standardized mobile robot from a predefined start to a goal location while efficiently maneuvering through dynamic crowds and avoiding collisions. The robot is standardized with a 2D LiDAR sensor, a differential drive system with a maximum speed of 2 m/s, and onboard computational resources provided by the organizers. Participants will need to develop navigation systems that consume the standardized LiDAR input, perform all computation onboard using the provided resources, and output motion commands to drive the robot. Participants are welcome to use any approaches to tackle the social navigation problem, such as classical sampling-based or optimization-based planners, end-to-end learning techniques, or hybrid approaches.
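To illustrate the input/output contract described above (LiDAR ranges in, velocity commands out), here is a minimal reactive-controller sketch. The function name, scan layout, and thresholds are our own illustrative assumptions, not part of the challenge API; only the 2 m/s speed cap comes from the robot specification.

```python
import math

MAX_SPEED = 2.0  # m/s, the challenge's differential-drive speed cap

def reactive_cmd(ranges, angle_min, angle_increment, safe_dist=0.6):
    """Map one 2D LiDAR scan to a (linear, angular) velocity command.

    ranges: beam distances in meters; angle_min and angle_increment give
    each beam's bearing in radians, as in a typical laser-scan message.
    """
    closest_i = min(range(len(ranges)), key=lambda i: ranges[i])
    closest = ranges[closest_i]
    bearing = angle_min + closest_i * angle_increment  # bearing of nearest obstacle

    if closest < safe_dist:
        # Obstacle inside the safety bubble: slow down proportionally
        # and turn away from the obstacle's side.
        linear = MAX_SPEED * (closest / safe_dist) * 0.5
        angular = -math.copysign(1.0, bearing)
    else:
        linear, angular = MAX_SPEED, 0.0
    return linear, angular
```

A purely reactive rule like this ignores the goal and social context entirely; it only shows where a submission's planning logic plugs in between sensing and actuation.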

The following infrastructure will be provided by the competition organizers:

  • The 12 pre-generated dynamic navigation environments (see Figure 1)
  • The dynamic environment generator to produce novel social navigation scenarios
  • Baseline navigation systems including classical approaches (DWB, TEB) and our ROSNAV agent
  • A training pipeline running the standardized mobile robot in either Gazebo or Isaac Sim with Robot Operating System (ROS) Noetic on Ubuntu 20.04, optionally containerized via Docker or Singularity for fast, standardized setup and evaluation
  • A standardized evaluation pipeline to compete against other navigation systems
Figure 1: Exemplary test world in Arena 5.0

How to Participate

  1. Clone our Arena-Rosnav repository.
  2. Download the simulation environments from Arena-Simulation-Setup.
  3. Integrate your navigation algorithm using our Nav2 Template (see Fig. 2) and test it locally on your computer.
  4. Reach out to us so we can integrate and test it together!
  5. Push your code to your branch and open a pull request.

Note: You can continuously push updates until the submission deadline. On the day of the deadline, we will pull the latest version of each submission and run it on the test worlds.

How to integrate planners: a short tutorial — the figure below shows how our wrapper integrates your planner as a Nav2 plugin via the ROS 2 Nav2 interface. We have integrated a number of planners using this system (see the Arena 5.0 paper).

Figure 2: Architecture of our Nav2Py plugin

Participants will integrate their planner into our Arena Platform and upload their code on a dedicated branch for us to test.
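To give a feel for the shape of such an integration, here is a schematic planner class. The method names (`configure`, `compute_velocity`) and argument layout are illustrative assumptions only; the authoritative interface is defined by the Nav2 Template in the repository.

```python
import math

class MyPlanner:
    """Schematic planner skeleton. The interface shown here is an
    assumption for illustration, not the actual Nav2 Template API."""

    def configure(self, params):
        # Read tunables once at startup (2 m/s is the challenge speed cap).
        self.max_speed = params.get("max_speed", 2.0)

    def compute_velocity(self, pose, goal, scan):
        """Return a (linear, angular) command steering toward the goal.

        pose, goal: (x, y, theta) tuples; scan: LiDAR ranges (unused here).
        """
        dx, dy = goal[0] - pose[0], goal[1] - pose[1]
        heading_err = math.atan2(dy, dx) - pose[2]
        # Wrap the heading error into [-pi, pi].
        heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
        linear = min(self.max_speed, math.hypot(dx, dy))
        angular = 2.0 * heading_err  # simple proportional steering
        return linear, angular
```

Whatever runs inside `compute_velocity` — a sampling-based planner, an optimizer, or a learned policy — is entirely up to the participant; the wrapper only standardizes how inputs and outputs cross the ROS 2 boundary.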

Important: We have integrated a number of state-of-the-art planners in the past and have encountered issues such as overfitting, functionality errors, non-operational planners, outdated dependencies, and more. These challenges make reproducibility a key concern. Please reach out to us if you experience any problems during integration. Our team of dedicated researchers is available to assist you and can even retrain your algorithms on our servers. Feel free to contact us for any support you may need.

Evaluation Criteria and Calculation of Metrics

Algorithms will be evaluated based on:

  • Efficiency: How quickly and effectively the robot can navigate through the environment.
  • Safety: Avoidance of collisions with pedestrians and dynamic obstacles.
  • Social Compliance: Adherence to social norms and smooth navigation in collaborative scenarios.

Performance evaluation will be based on the following metrics:

1. Completion Time (T)

The time taken by the robot to navigate from the start to the goal location.

2. Collision Penalty (P)

A penalty incurred for each collision with dynamic obstacles or pedestrians, calculated as:

$$ P = N_c \times p $$

Where:

  • \( N_c \) = Number of collisions
  • \( p \) = Penalty per collision (a constant value determined by the organizers)
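In code, with a placeholder value for the constant \( p \) (the official value is chosen by the organizers):

```python
def collision_penalty(n_collisions, penalty_per_collision):
    """P = N_c * p: total penalty for a run with n_collisions collisions."""
    return n_collisions * penalty_per_collision

# e.g. 3 collisions at a (placeholder) penalty of 10 each:
# collision_penalty(3, 10.0) → 30.0
```

Since the final score adds \( T + P \), the constant \( p \) is effectively expressed in the same units as the completion time.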

3. Social Compliance Score (S)

A score reflecting the robot's adherence to social norms, such as maintaining appropriate distances from pedestrians and avoiding abrupt movements. This score ranges between 0 (non-compliant) and 1 (fully compliant).


🚀 Overall Performance Score

The overall performance score is calculated as follows:

$$ \text{Score} = \frac{1}{T + P} \times S $$

Where:

  • \( T \) = Completion time (in seconds)
  • \( P \) = Total collision penalty
  • \( S \) = Social compliance score

Note: A higher score indicates better performance, balancing efficiency, safety, and social compliance.
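Putting the metrics together, the scoring arithmetic can be sketched as follows; the default `penalty_per_collision` is a placeholder, not the official constant.

```python
def overall_score(completion_time, n_collisions, social_compliance,
                  penalty_per_collision=10.0):
    """Score = S / (T + P), where P = N_c * p.

    completion_time: T in seconds; social_compliance: S in [0, 1];
    penalty_per_collision: placeholder for the organizers' constant p.
    """
    penalty = n_collisions * penalty_per_collision  # P = N_c * p
    return social_compliance / (completion_time + penalty)
```

Under these placeholder numbers, a fast run with a couple of collisions can still outscore a slower clean one: `overall_score(120.0, 2, 0.9)` ≈ 0.0064 versus `overall_score(160.0, 0, 0.95)` ≈ 0.0059.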

Prize Money

Top-performing teams will be awarded prize money in recognition of their innovative and effective solutions.

👥 Who Should Participate?

Everyone is welcome to participate! Participants include, but are not limited to, researchers in robotics, AI, and autonomous systems; developers focusing on social navigation and dynamic path planning; and undergraduate or graduate students interested in testing how their planners perform in social scenarios.

Note: The planning approaches do not necessarily have to be specifically designed for social navigation. We are also interested in observing how classical navigation planners or other dynamic obstacle avoidance methods perform, as these approaches can sometimes prove more efficient.

🗓️ Timeline

  • Challenge Begins: 01 April 2025 (AoE)
  • Submission Deadline: 19 May 2025 (AoE)
  • Winner Announcement: 21 May 2025 (3pm ET)

Leaderboard: Arena Challenge 2025

The leaderboard ranks participating teams based on their overall performance scores, considering completion time, collision penalties, and social compliance.

Rank  Team Name       Completion Time (s)  Collision Penalty  Social Compliance Score  Final Score
1     DWB             120.5                2.0                0.95                     0.0076
2     ROSNAV (Ours)   135.2                1.5                0.90                     0.0065
3     DWB             140.8                3.0                0.92                     0.0063
4     Dragon          150.0                4.0                0.88                     0.0055
5     CrowdNav++      160.3                2.5                0.85                     0.0052

Note: The leaderboard will be continuously updated as teams submit and improve their solutions.

👥 Organizers


The Arena Challenge 2025 is organized by a dedicated team of experts in robotics, AI, and autonomous navigation. Our goal is to advance research in socially-aware navigation and contribute to real-world robotics applications.

  • Linh Kaestner, National University of Singapore
  • Volodymyr Sebyna, Technical University of Berlin
  • Huajian Zeng, Technical University of Munich
  • Huu Giang Nguyen, National University of Singapore
  • Allan Wang, Carnegie Mellon University
  • Nathan Tsoi, Yale University
  • Phani Teja Singamaneni, LAAS-CNRS
  • Erica Weng, Carnegie Mellon University
  • Jorge de Heuvel, University of Bonn
  • Anthony Francis, Logical Robotics