REVERIE Challenge @ CSIG 2022

New data, New channel, New rules!

Total Prizes: 200,000 RMB (~30,000 USD), supported by SK Group, South Korea

Updates:

·  30/07/2022
A Word version of the technical report template is provided here. You can also use LaTeX as long as your report contains the same content as the Word template.
·  28/07/2022
We have extended the deadline to August 5th, 23:59:59 UTC+8 to make up for the time lost due to the recent rule clarification.
·  27/07/2022
3D coordinates of viewpoints and objects (GPS) cannot be used as input data. However, you may use the rel_distance, rel_heading, and rel_elevation between viewpoints provided by the simulator. This means you do not have a global map at the beginning; you can build one as the agent navigates, perform pre-exploration, etc. (see the navigation-graph sketch after this updates list).
·  21/07/2022
For fair comparisons in the competition, agents are not permitted to use the test environments during training in any way.
·  14/07/2022
Note that the grounding model of Channel 1 uses object files in a different format, which has been updated here.
·  12/07/2022
For fair comparison, we will require the winner and runner-up to provide their inference code so that we can reproduce their results locally. Please prepare your code and running environment in advance.
·  21/05/2022
For participants in China, please refer to this link to sign up for this challenge. For participants from other countries, please send the following information to reverie.challenge@gmail.com to sign up: team name, team members, and institution/company/organization.
·  27/04/2022
We have updated the referring expression grounding model (see here) for Channel 1 and the corresponding baseline results on the leaderboard.
·  21/04/2022
(1) The submission method has changed. See the "Submission" section on the Challenge page. (2) The referring expression grounding model for Channel 1 is released. See the "Channel" section on the Challenge page.
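
As a concrete illustration of the 27/07/2022 clarification, the sketch below shows one way an agent could accumulate a topological map from the permitted relative measurements alone, without using global 3D coordinates. This is only a minimal sketch, not official starter code: the candidate field names (viewpointId, rel_heading, rel_elevation, rel_distance) are assumed to mirror the simulator's navigable-location records, and the class and function names are hypothetical.

```python
# Minimal sketch (unofficial) of building a navigation graph on the fly from
# the relative measurements the rules allow, instead of GPS coordinates.
# Assumed candidate fields: 'viewpointId', 'rel_heading', 'rel_elevation',
# 'rel_distance' (hypothetical record layout mirroring the simulator's
# navigable locations).
import heapq
import math
from collections import defaultdict


class OnlineTopoMap:
    """Topological map grown incrementally as the agent navigates."""

    def __init__(self):
        self.edges = defaultdict(dict)  # viewpointId -> {neighbourId: distance}
        self.rel_pose = {}              # (src, dst) -> (rel_heading, rel_elevation)

    def update(self, current_viewpoint, candidates):
        """Register edges from the current viewpoint to every observed candidate."""
        for cand in candidates:
            nbr = cand['viewpointId']
            self.edges[current_viewpoint][nbr] = cand['rel_distance']
            self.edges[nbr][current_viewpoint] = cand['rel_distance']
            self.rel_pose[(current_viewpoint, nbr)] = (
                cand['rel_heading'], cand['rel_elevation'])

    def shortest_path(self, start, goal):
        """Dijkstra over the partial graph; returns [] if goal is still unknown."""
        dist, prev = {start: 0.0}, {}
        queue = [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if u == goal:
                break
            if d > dist.get(u, math.inf):
                continue
            for v, w in self.edges[u].items():
                nd = d + w
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(queue, (nd, v))
        path, node = [], goal
        while node == start or node in prev:
            path.append(node)
            if node == start:
                return list(reversed(path))
            node = prev[node]
        return []
```

A map like this supports pre-exploration or backtracking to previously observed viewpoints while staying within the rules, since only the rel_* quantities returned by the simulator are stored.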


Introduction


The objective of the REVERIE Challenge is to benchmark the state of the art for the remote object grounding task defined in the paper, in the hope that it might drive progress towards more flexible and powerful human interaction with robots. The REVERIE task requires an intelligent agent to correctly localise a remote target object (one that cannot be observed from the starting location) specified by a concise high-level natural language instruction, such as 'bring me the blue cushion from the sofa in the living room'. In contrast to other embodied tasks such as Vision-and-Language Navigation (VLN) and Embodied Question Answering (EQA), REVERIE evaluates success based on explicit object grounding rather than the point navigation in VLN or the question answering in EQA. This more clearly reflects the need for robots to combine natural language understanding, visual navigation, and object grounding. More importantly, the concise instructions in REVERIE represent more practical tasks that humans would ask a robot to perform. These high-level instructions fundamentally differ from the fine-grained visuomotor instructions in VLN and would enable high-level reasoning and real-world applications. Moreover, compared to the Referring Expression (RefExp) task, which selects the desired object from a single image, REVERIE is far more challenging in that the target object is not visible in the initial view and must be discovered by actively navigating the environment. Hence, in REVERIE, there are at least an order of magnitude more object candidates to choose from.



New data, New channel, New rules

·  New data:
Instead of only considering objects within 3 meters of the viewpoint, in this challenge ALL visible objects are considered for object grounding.
·  New channel:
This year we offer two channels: Channel 1 uses our referring expression grounding model (provided here), and Channel 2 uses your own referring expression grounding model.
·  New rules:
  1. For each channel, a participant may join only one team. Each team can have at most six members.
  2. Send your results on the test split to reverie.challenge@gmail.com, and we will evaluate them for you. Each team may submit at most 5 times in total during the challenge. We provide an evaluation script here for self-evaluation on the val_seen and val_unseen splits.
  3. A technical report must be sent to reverie.challenge@gmail.com before the challenge deadline. A template will be provided.
  4. Results must be better than our baseline (see Leaderboard).
  5. Refer to the Challenge page for more details.


Prizes

·  Channel 1: 1 Champion with 50,000 RMB (~7,400 USD), 1 Runner-Up with 20,000 RMB (~3,000 USD), and 30,000 RMB (~4,400 USD) for the top 30% of teams (excluding the Champion and Runner-Up)
·  Channel 2: 1 Champion with 50,000 RMB (~7,400 USD), 1 Runner-Up with 20,000 RMB (~3,000 USD), and 30,000 RMB (~4,400 USD) for the top 30% of teams (excluding the Champion and Runner-Up)



Important Dates

·  Challenge starts: April 15, 2022, 0:00 UTC+0
·  Submission deadline: August 5, 2022, 23:59:59 UTC+8 (extended from July 31, 2022, 23:59 UTC+0)
·  Results Notice: August 10, 2022 (originally August 5, 2022)
·  Award Ceremony: one day between August 19 and 21, 2022



Results of REVERIE Challenge 2022

·  Winner Team: Zun Wang, Yi Wang, Yinan He, Yu Qiao
·  Winner Tech Report: download from here

Results of REVERIE Challenge 2021

·  Winner Team: Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, and Ivan Laptev
·  Winner Tech Report: download from here

Results of REVERIE Challenge 2020

·  Winner Team: Chen Gao, Jinyu Chen, Erli Meng, Liang Shi, Xiao Lu, and Si Liu
·  Winner Tech Report: download from here
·  Winner Talk: