REVERIE Challenge @ACL Workshop 2020 (ALVR)

Introduction

The objective of the REVERIE Challenge is to benchmark the state of the art for the remote object grounding task defined in the paper, in the hope of driving progress towards more flexible and powerful human interactions with robots. The REVERIE task requires an intelligent agent to correctly localise a remote target object (one that cannot be observed from the starting location) specified by a concise, high-level natural language instruction, such as 'bring me the blue cushion from the sofa in the living room'.

In contrast to other embodied tasks such as Vision-and-Language Navigation (VLN) and Embodied Question Answering (EQA), REVERIE measures success by explicit object grounding rather than by point navigation as in VLN or question answering as in EQA. This more directly reflects the need for robots to combine natural language understanding, visual navigation, and object grounding. More importantly, the concise instructions in REVERIE represent more practical tasks that humans would ask a robot to perform (see Dataset page). These high-level instructions fundamentally differ from the fine-grained visuomotor instructions in VLN, and would enable high-level reasoning and real-world applications.

Moreover, compared to the Referring Expression (RefExp) task, which selects the desired object from a single image, REVERIE is far more challenging: the target object is not visible in the initial view and must be discovered by actively navigating the environment. Hence, in REVERIE, there are at least an order of magnitude more object candidates to choose from.
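To make the success criterion concrete, the minimal sketch below illustrates how object-grounding success could be scored: an episode counts as successful only when the agent's selected object matches the annotated target, and the rate is averaged over episodes. The `EpisodeResult` fields and function names here are illustrative assumptions, not the official REVERIE evaluation code.

```python
# Illustrative sketch (not the official evaluation script): a REVERIE-style
# episode succeeds only if the agent grounds the correct object, regardless
# of how close it stops to the goal. All field names below are hypothetical.
from dataclasses import dataclass


@dataclass
class EpisodeResult:
    target_object_id: str     # ground-truth object annotated for the instruction
    predicted_object_id: str  # object the agent selected at its final viewpoint
    path_length: float        # metres travelled, useful for path-weighted metrics


def grounding_success(ep: EpisodeResult) -> bool:
    # Success hinges on explicit object grounding, not on reaching a point goal.
    return ep.predicted_object_id == ep.target_object_id


def success_rate(results: list[EpisodeResult]) -> float:
    # Fraction of episodes in which the correct remote object was grounded.
    return sum(grounding_success(r) for r in results) / max(len(results), 1)


if __name__ == "__main__":
    demo = [
        EpisodeResult("cushion_12", "cushion_12", 9.3),
        EpisodeResult("cushion_12", "pillow_04", 14.1),
    ]
    print(f"Success rate: {success_rate(demo):.2f}")  # -> 0.50
```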


Important Dates

· April 15, 2020: Dataset available for download (training, validation and test set)
· April 15, 2020: Website and call for participation ready
· April 15, 2020: Baseline code and models available for download
· June 15, 2020: Results submission deadline
· June 20, 2020: Paper submission deadline

Results

· Winner Team: Chen Gao, Jinyu Chen, Erli Meng, Liang Shi, Xiao Lu, and Si Liu
· Winner Tech Report: download from here
· Winner Talk: