
Participate in the next edition of Neural MMO, the largest RL competition at NeurIPS 2022, returning for NeurIPS 2023!

NMMO 2.0 has been rebuilt for this edition with faster performance, a new focus on task completion, and a new RL baseline.

🚀Starterkit - Everything you need to submit.

📃Project Page  - Documentation, API reference, and tutorials.

📹WebViewer - A web replay viewer for our challenge.

📞Discord - Our main support and discussion channel.
📞WeChat - Our secondary support channel.

👥 Find Teammates 💬 Share feedback & queries 

🚂Competition Tracks

Your objective is to train agents to complete tasks they have never seen before, against opponents they have never seen before, on maps they have never seen before.
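
For orientation, the environment exposes a PettingZoo-style parallel API in which observations, actions, and rewards are dictionaries keyed by agent id. The snippet below is a minimal sketch of that interaction loop with a random placeholder policy; it assumes the `nmmo` package's default config, and the exact reset/step return signatures may vary slightly between versions.

```python
# Minimal sketch: step the Neural MMO environment with random actions.
# Assumes the PettingZoo-style parallel API of the `nmmo` package; exact
# return signatures may differ slightly between versions.
import nmmo

env = nmmo.Env()  # default config with a procedurally generated map

reset_out = env.reset()
# Some versions return just the observation dict, others (obs, infos).
obs = reset_out[0] if isinstance(reset_out, tuple) else reset_out

for _ in range(16):
    # Placeholder policy: sample a random (nested) action for every live agent.
    actions = {agent_id: env.action_space(agent_id).sample() for agent_id in obs}
    # Older APIs return (obs, rewards, dones, infos); newer ones a 5-tuple.
    obs, rewards, dones, *extras = env.step(actions)
```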

Reinforcement Learning: Customize the RL algorithm, model, and reward structure. 

Curriculum Generation: Design the task generator, task sampler, and reward using Python.

No Holds Barred Track: Bring it on! No restrictions; entrants provide their own compute and may win any way possible - except hacking our servers!

    - LLM Agents: Use GPT or local LLMs to generate scripted agents. We're still working on evaluation support for this; see the Documentation/Discord for details.

📆Competition Timeline

October: Competition launches - warm-up rounds vs. baselines

November-December: Main competition. Task completion is evaluated against other participants' agent policies, and competition tasks get harder over rounds.

TBA before NeurIPS: Submissions close; final evaluation of the top 16 submissions in each track.

NeurIPS: Winners notified!

🏆Awards and Prizes

$20K in prizes sponsored by Parametrix.ai. Winners will co-author the summary manuscript following the competition. The per-track and per-round prize split will be announced. Winners of the Reinforcement Learning and Curriculum Generation tracks are required to open-source the full code for their submissions; this is encouraged but not required for the No Holds Barred track.

Check out last year’s winners!

📏Rules

Reinforcement Learning Track: 

You may modify the model architecture, RL algorithm, and reward function (see the reward-shaping sketch below these rules).
You may not alter the training tasks or the sampling order of the training tasks. 
You may not precompute large amounts of work, for example, through neural architecture search or massive hyperparameter sweeps that tune to multiple significant digits. 

Winners will be required to open-source their code in order to be eligible for a cash prize and co-authorship. We will retrain your submission from scratch for 8 hours on an A100 with at least 12 cores. The compute limit is intended to make this track fair for academic labs and independent researchers.
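
As an illustration of the reward-function freedom in this track, below is a hedged sketch of a shaping wrapper that adds a small per-tick survival bonus on top of the environment's task rewards. The bonus value is a placeholder, and the dict-keyed step signature is an assumption about nmmo's PettingZoo-style API.

```python
# Hedged sketch for the RL track: reward shaping by subclassing the environment.
# The survival bonus is an illustrative placeholder, not part of the baseline.
import nmmo

class ShapedRewardEnv(nmmo.Env):
    SURVIVAL_BONUS = 0.01  # hypothetical shaping weight

    def step(self, actions):
        # Unpack defensively: older APIs return (obs, rewards, dones, infos),
        # newer ones may return a 5-tuple with terminations/truncations.
        obs, rewards, dones, *rest = super().step(actions)
        # Add a small bonus for every agent that is still alive this tick.
        rewards = {
            agent_id: r + (0.0 if dones.get(agent_id, False) else self.SURVIVAL_BONUS)
            for agent_id, r in rewards.items()
        }
        return (obs, rewards, dones, *rest)
```

The same pattern works for penalties (e.g., discouraging idling); only the reward dict changes, so the restriction on training tasks and their sampling order is untouched.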
 

Curriculum Generation Track: 

You may modify the generation and sampling of tasks as well as their rewards (a generator sketch follows these rules).
You may not alter the model architecture or RL algorithm. 
You may not precompute and upload a specific set of tasks through large scale simulation. 

Winners will be required to open-source their code in order to be eligible for a cash prize and co-authorship. We will retrain the baseline from scratch with your curriculum generator for 8 hours on an A100 with at least 12 cores. The compute limit is intended to make this track fair for academic labs and independent researchers.
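
To make the scope above concrete, here is a hedged sketch of a random task generator. The predicate name, event names, and task-spec layout are illustrative placeholders rather than the real nmmo task API; see the starter kit and documentation for the actual interface.

```python
# Hedged sketch for the Curriculum Generation track: sample task specifications
# of increasing difficulty. Names and the spec layout are placeholders.
import random

EVENTS = ["EAT_FOOD", "DRINK_WATER", "PLAYER_KILL", "HARVEST_ITEM"]  # assumed event names

def generate_curriculum(num_tasks=100, seed=0):
    """Return a list of task specs whose target counts ramp up over the curriculum."""
    rng = random.Random(seed)
    tasks = []
    for i in range(num_tasks):
        difficulty = 1 + i // 10  # raise the target every 10 tasks
        tasks.append({
            "predicate": "CountEvent",  # placeholder predicate name
            "kwargs": {"event": rng.choice(EVENTS), "N": 5 * difficulty},
            "sampling_weight": 1.0 / difficulty,  # sample easier tasks more often
        })
    return tasks
```

A generator along these lines plugs into the task sampler that the baseline trains against; the main design levers are how fast difficulty ramps and how the sampling weights trade off easy versus hard tasks.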
 

No Holds Barred Track: 

You may modify the model architecture, RL algorithm, reward function, task generation, task sampling, etc. and are not constrained by compute. 
Upload your trained model for evaluation. 

Winners are strongly encouraged but not required to open-source their code. 
Winners will be required to disclose their general approach in order to be eligible for co-authorship.
 

General Rules for All Tracks:

  1. Do not attempt to circumvent the submission limit by making multiple accounts or otherwise.
  2. Do not interfere with our leaderboard by uploading submissions that violate track-specific restrictions.
  3. Do not attempt to modify the stats recorded to the leaderboard or surreptitiously determine held-out tasks.
  4. Do not attempt to access or modify other participants' submissions.
  5. Do not write code that forms alliances with other participants' policies.
  6. Your participation is at our discretion. Harassing organizers, other participants, or other disruptive or rule-breaking behavior will result in a ban and forfeiture of any and all prizes.

🤖Team Members

Massachusetts Institute of Technology
Joseph Suarez
Phillip Isola

Carper AI
Kyoung Whan Choe
David Bloomin
Hao Xiang Li
Nikhil Pinnaparaju
Nishaanth Kanna
Daniel Scott
Ryan Sullivan
Rose S. Shuman
Lucas de Alcântara
Herbie Bradley
Louis Castricato

Parametrix.ai
Mudou Liu
Enhong Liu
Kirsty You
Yuhao Jiang
Qimai Li
Jiaxin Chen
Xiaolong Zhu

AICrowd
Dipam Chakraborty
Sharada Mohanty


Notebooks

Preparing the pkl submission file for a custom policy - kyoung_whan_choe
Manually create your curriculum - kyoung_whan_choe
Train and Submit from Colab - kyoung_whan_choe
Colab Starter Kit - joseph_suarez