📢 A newer, improved version of Flatland has launched!
🚂 Check out the Flatland 3 Challenge
Train a complete solution directly on Colab! DQN / PPO
📑 The Flatland Paper is out! Check it out
🚉 Introduction
This challenge tackles a key problem in the transportation world:
How to efficiently manage dense traffic on complex railway networks?
This is a real-world problem faced by many transportation and logistics companies around the world, such as the Swiss Federal Railways and Deutsche Bahn. Your contribution may shape the way modern traffic management systems are implemented, not only in railways but also in other areas of transportation and logistics!
This edition of the challenge is affiliated with the AMLD 2021 and ICAPS 2021 conferences.
🚂 Background
The Flatland challenge aims to address the problem of train scheduling and rescheduling by providing a simple grid world environment and allowing for diverse experimental approaches.
This is the third edition of this challenge. In the first edition, participants mainly used solutions from the operations research field. In subsequent editions, we encourage participants to use solutions that leverage recent progress in reinforcement learning.
Flatland: the core task of this challenge is to manage and maintain railway traffic in complex scenarios on complex networks
📜 Tasks
Your goal is to make all the trains arrive at their target destinations with minimal travel time. In other words, we want to minimize the number of steps it takes for each agent to reach its destination. In the simpler levels, the agents may achieve their goals using ad hoc decisions, but as the difficulty increases, the agents have to be able to plan ahead!
Problem example: this is a teaser of what we expect you to do
A central question when designing an agent is which observations it uses to make decisions. As a participant, you can either work with one of the provided base observations or, better, design an improved observation yourself!
These are the three provided observations (a minimal usage sketch follows the list):
- Global Observation: The whole scene is observed.
- Local Grid Observation: A local grid around the agent is observed.
- Tree Observation: The agent can observe its navigable path to some predefined depth.
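The snippet below is a minimal sketch (not official starter code) of plugging one of these observation builders into a Flatland environment. It assumes the flatland-rl 2.x API; generator names and constructor arguments may differ in Flatland 3, and the grid size, city count and agent count are arbitrary illustration values.

```python
# Minimal sketch: a RailEnv with the Tree observation (flatland-rl 2.x API assumed).
from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv

# Tree observation: each agent observes the branching rail graph ahead of it
# up to a fixed depth, optionally enriched by a shortest-path predictor.
obs_builder = TreeObsForRailEnv(
    max_depth=2,
    predictor=ShortestPathPredictorForRailEnv(max_depth=30),
)

env = RailEnv(
    width=30,                                            # illustration values
    height=30,
    rail_generator=sparse_rail_generator(max_num_cities=3),
    schedule_generator=sparse_schedule_generator(),
    number_of_agents=5,
    obs_builder_object=obs_builder,
)

# reset() returns a dict mapping agent handles to their observations,
# plus an info dict with per-agent status flags.
observations, info = env.reset()
```

The same environment works with GlobalObsForRailEnv or LocalObsForRailEnv by swapping the obs_builder_object; a custom observation only needs to implement the same builder interface.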
⚖ Evaluation metrics
The primary metric is the normalized return from your agents - the higher the better.
For each episode, the minimum possible value is 0.0, which occurs if none of the agents reach their goal. The maximum possible value is 1.0, which would occur if all the agents reached their targets in one time step, which is generally not achievable.
The agents have to solve as many episodes as possible. During each evaluation, the agents have to solve environments of increasingly large size. The evaluation stops when the agents don't perform well enough anymore, or after 8 hours, whichever comes first. Read the documentation for more details.
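As an illustration of the normalization, here is a hypothetical helper (not the official scoring code) that maps a raw episode return onto the [0, 1] range described above, assuming the standard Flatland reward of -1 per time step for every agent that has not yet reached its target.

```python
def normalized_return(sum_of_rewards: float, n_agents: int, max_steps: int) -> float:
    """Hypothetical normalization sketch, not the official evaluator code.

    Assuming each agent receives -1 per time step until it reaches its target:
    - if no agent ever arrives, the raw return is -n_agents * max_steps,
      which maps to 0.0;
    - if every agent arrived after a single step, the raw return would be
      about -n_agents, which maps to just below 1.0.
    """
    return 1.0 + sum_of_rewards / (n_agents * max_steps)
```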
🏆 Prizes
Winners will be invited to speak in the AMLD 2021 Competition Track.
More prizes for the AMLD 2021 Flatland challenge will be announced soon!
📅 Timeline
Here's the tentative timeline:
- January 15th - February 28th: Warm-Up Round
- The updated timeline will be announced.
There are no qualifying rounds: participants can join the challenge at any point until the final deadline. Prizes will be awarded according to the Round 2 ranking.
🚉 Next stops
The Flatland documentation contains everything you need to get started with this challenge!
Want to dive straight in?
🔗 Submit in 10 minutes
New to multi-agent reinforcement learning?
🔗 Step by step guide
Want to explore advanced solutions such as distributed training and imitation learning?
🔗 Research baselines
📑 Citation
The Flatland paper is out on arXiv!
🔗 Flatland-RL: Multi-Agent Reinforcement Learning on Trains
🍁 NeurIPS Talks
Flatland was one of the NeurIPS 2020 competitions and was presented both in the Competition Track and in the Deep RL Workshop. We also organized a NeurIPS Townhall where participants and organizers discussed their experiences.
The recordings of all these talks are now publicly available:
- Competition Design & Results (11 min)
- Winner Talk: Team An Old Driver (5 min)
- Winner Talk: Team JBR_HSE (5 min)
- Winner Talk: Team ai-team-flatland (5 min)
- "Real world applications of Flatland" Panel (8 min)
- Deep RL Workshop Talk (8 min)
- NeurIPS Townhall (2 hours)
📱 Contact
Join the Discord channel to exchange with other participants!
If you have a problem or a question for the organizers, either use the Discussion Forum or open an issue.
We strongly encourage you to use the public channels mentioned above for communication between participants and organizers. But if you're looking for a direct communication channel, feel free to reach out to us at:
- mohanty [at] aicrowd.com
- florian [at] aicrowd.com
- erik.nygren [at] sbb.ch
For press inquiries, please contact SBB Media Relations at press@sbb.ch
🤝 Partners