Problem Statements
NeurIPS 2022 IGLU Challenge - RL Task
Use an RL agent to build a structure with natural language inputs
NeurIPS 2022 IGLU Challenge - NLP Task
Language-assisted Human-AI Collaboration
Important Links
New Multitask Hierarchical Baseline for RL Task
Introduction
The general goal of the IGLU challenge is to facilitate research on Human-AI collaboration through natural language. The aim of this edition is to build interactive agents that learn to solve a task while being provided with grounded natural language instructions in a collaborative environment. By interactive agents, we mean agents that can follow instructions in natural language and ask for clarification when needed. Ultimately, the agent should be able to quickly apply newly acquired skills, just as humans do when collaborating with each other. Despite all the recent progress in interactive problem solving, the task of interactive learning is far from solved. To facilitate research in this direction, we present IGLU: a voxel-based collaborative environment and a set of tasks for studying interactive grounded language understanding and learning.
In the IGLU setup, human and embodied AI agents have to exchange information using language to accomplish a common goal. Specifically, the human (the Architect) gets to see a 3D structure made of colored cubes and has to provide language instructions to the other agent (the Builder), who can place blocks and interact within the environment. The Builder can also ask clarifying questions of the Architect whenever the provided instructions are ambiguous. IGLU is naturally related, but not limited, to two main areas of AI research: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). With this challenge, we hope to bring the RL and NLU communities together to work towards a common goal: building language-grounded interactive agents (as demonstrated in the example below).
Top: the Architect's instruction was clear, so the Builder proceeds with placing a block. Bottom: the Builder asks a clarifying question, then proceeds.
If you are working on the IGLU task, consider reading and citing the following two papers:
@inproceedings{kiseleva2022interactive,
  title={Interactive Grounded Language Understanding in a Collaborative Environment: IGLU 2021},
  author={Kiseleva, Julia and Li, Ziming and Aliannejadi, Mohammad and Mohanty, Shrestha and ter Hoeve, Maartje and Burtsev, Mikhail and Skrynnik, Alexey and Zholus, Artem and Panov, Aleksandr and Srinet, Kavya and others},
  booktitle={NeurIPS 2021 Competitions and Demonstrations Track},
  pages={146--161},
  year={2022},
  organization={PMLR}
}
@article{kiseleva2022iglu,
  title={IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022},
  author={Kiseleva, Julia and Skrynnik, Alexey and Zholus, Artem and Mohanty, Shrestha and Arabzadeh, Negar and C{\^o}t{\'e}, Marc-Alexandre and Aliannejadi, Mohammad and Teruel, Milagro and Li, Ziming and Burtsev, Mikhail and {ter Hoeve}, Maartje and Volovikova, Zoya and Panov, {Aleksandr I.} and Sun, Yuxuan and Srinet, Kavya and Szlam, Arthur and Awadallah, {Ahmed Hassan}},
  journal={arXiv preprint arXiv:2205.13771},
  year={2022}
}
Tasks
Given the complexity of the challenge, we offer participants two tracks that can be tackled separately.
RL Task: Building Structures
This task is about following natural language instructions to build a target structure without ever seeing what it should look like at the end. The RL agent observes the environment from a first-person point of view and is able to move around and place blocks of different colors within a predefined building zone. Its task is provided as a dialog between an Architect and a Builder. Specifically, the dialog is split into two parts: the context utterances, which describe blocks placed previously, and the target utterances, which describe the rest of the blocks to be placed. At the end of an episode, the RL agent receives a score reflecting how complete the built structure is compared to the ground-truth target structure.
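To make the scoring idea concrete, here is a minimal sketch (an illustration only, not the official challenge metric) of an F1-style completeness score that compares the set of colored blocks the agent placed against the ground-truth target:

```python
# Illustrative sketch, NOT the official IGLU metric: an F1-style
# completeness score over colored blocks. Each block is represented
# as an (x, y, z, color) tuple.

def completeness_score(built, target):
    """Return (precision, recall, f1) of built blocks vs. target blocks."""
    built, target = set(built), set(target)
    if not built or not target:
        return 0.0, 0.0, 0.0
    overlap = len(built & target)       # correctly placed blocks
    precision = overlap / len(built)    # fraction of placed blocks that are correct
    recall = overlap / len(target)      # fraction of the target actually built
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

# Example: the target is an L-shape of 3 blue blocks; the agent placed
# 2 of them correctly plus one stray red block.
target = [(0, 0, 0, "blue"), (1, 0, 0, "blue"), (1, 1, 0, "blue")]
built = [(0, 0, 0, "blue"), (1, 0, 0, "blue"), (5, 0, 0, "red")]
p, r, f1 = completeness_score(built, target)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.67 0.67 0.67
```

Treating the structure as a set of block tuples makes the score invariant to the order in which blocks were placed, which matches the intuition of "how complete is the result" rather than "how was it built".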
Head over to the RL Task - Building Structures challenge page for more details and to get started!
The best-performing solutions for the Building Structures task will be further evaluated with a human in the loop, where the developed agents get a chance to interact with actual human users. The ultimate goal is to see whether the proposed offline evaluation correlates with human perceptions of the task. The winners will be nominated according to the offline evaluation used on the leaderboard. Check out the new Multitask Hierarchical Baseline to make your submission.
NLP Task: Asking Clarifying Questions
This task is about determining when to ask a clarifying question and what to ask. Given an instruction from the Architect (e.g., "Help me build a house."), the Builder needs to decide whether it has sufficient information to carry out the described task or whether further clarification is needed. For instance, the Builder might ask "What material should I use to build the house?" or "Where do you want it?". The NLP task is formulated independently of learning to interact with the 3D environment. The original instruction and its clarification can then be used as input for the Builder to guide its progress.
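One way to frame the first part of this task is as binary classification over instructions. The toy sketch below (a hypothetical illustration, not a provided baseline) uses a keyword heuristic as a stand-in for a learned model: instructions that mention no concrete color or count are flagged as candidates for a clarifying question.

```python
# Hypothetical illustration of framing "should the Builder ask a
# clarifying question?" as binary classification. A trivial keyword
# heuristic stands in for a trained model.

COLORS = ("red", "blue", "green", "yellow", "purple", "orange")

def needs_clarification(instruction: str) -> bool:
    """Toy stand-in for a classifier: flag instructions that mention
    neither a concrete color nor a block count."""
    text = instruction.lower()
    has_color = any(color in text for color in COLORS)
    has_count = any(token.isdigit() for token in text.split())
    return not (has_color or has_count)

print(needs_clarification("Help me build a house."))        # True
print(needs_clarification("Place 3 red blocks in a row."))  # False
```

A real submission would replace the heuristic with a model trained on the challenge's dialog data, but the input/output contract stays the same: instruction text in, ask-or-proceed decision out.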
The NLP Task will be released soon! Come back later for updates on this.
Timeline
- July: Releasing materials: IGLU framework and baselines code.
- 25th July: The warm-up phase of the competition begins! Participants are invited to start submitting their solutions.
- 13th August: End of the warm-up phase! The official competition begins.
- 21st October: Submission deadline for RL task. Submissions are closed and organizers begin the evaluation process.
- 31st October: Submission deadline for NLP task. Submissions are closed and organizers begin the evaluation process.
- November: Winners are announced and are invited to contribute to the competition writeup.
- 2nd-3rd of December: Presentation at NeurIPS 2022 (online/virtual).
Prizes
The challenge features a Total Cash Prize Pool of $16,500 USD.
This prize pool is divided as follows:
- NLP Task
- 1st place: $4,000 USD
- 2nd place: $1,500 USD
- 3rd place: $1,000 USD
- RL Task
- 1st place: $4,000 USD
- 2nd place: $1,500 USD
- 3rd place: $1,000 USD
- Research prizes: $3,500 USD
Task Winners. For each task, we will evaluate submissions as described in the Evaluation section. The three teams that score highest on this evaluation will receive prizes of $4,000 USD, $1,500 USD, and $1,000 USD.
Research prizes. We have reserved $3,500 USD of the prize pool to be given out at the organizersβ discretion to submissions that we think made a particularly interesting or valuable research contribution. If you wish to be considered for a research prize, please include some details on interesting research-relevant results in the README for your submission. We expect to award around 2-5 research prizes in total.
Authorship. In addition to the cash prizes, we will invite the top three teams from both the RL and NLP tasks to co-author a summary manuscript at the end of the competition. At our discretion, we may also include honourable mentions for academically interesting approaches. Honourable mentions will be invited to contribute a shorter section to the paper and will have their names included inline.
Team
The organizing team:
- Julia Kiseleva (Microsoft Research)
- Alexey Skrynnik (MIPT)
- Artem Zholus (MIPT)
- Shrestha Mohanty (Microsoft Research)
- Negar Arabzadeh (University of Waterloo)
- Marc-Alexandre Côté (Microsoft Research)
- Mohammad Aliannejadi (University of Amsterdam)
- Milagro Teruel (Microsoft Research)
- Ziming Li (Amazon Alexa)
- Mikhail Burtsev (DeepPavlov)
- Maartje ter Hoeve (University of Amsterdam)
- Zoya Volovikova (MIPT)
- Aleksandr Panov (MIPT)
- Yuxuan Sun (Meta AI)
- Kavya Srinet (Meta AI)
- Arthur Szlam (Meta AI)
- Ahmed Awadallah (Microsoft Research)
The advisory board:
- Tim Rocktäschel (UCL & DeepMind)
- Julia Hockenmaier (University of Illinois at Urbana-Champaign)
- Katja Hofmann (Microsoft Research)
- Bill Dolan (Microsoft Research)
- Ryen W. White (Microsoft Research)
- Maarten de Rijke (University of Amsterdam)
- Oleg Rokhlenko (Amazon Alexa Shopping)
- Sharada Mohanty (AICrowd)
Similar challenges
If you are interested in embodied agents in Minecraft-like environments, check out the ongoing MineRL Basalt competition. They offer cutting-edge pretrained agents ready to be fine-tuned!
Sponsors
Special thanks to our sponsors for their contributions.
Contact
We encourage participants to join our Slack workspace for discussions and questions.
You can also reach us at info@iglu-contest.net or via the AICrowd discussion forum.
Leaderboard
| Rank | Participant | Score  |
|------|-------------|--------|
| 01   | felipe_b    | 5.000  |
| 02   | betheredge  | 20.000 |
| 02   | 745H1N      | 20.000 |
| 03   |             | 31.000 |
| 04   | amulil      | 37.000 |