rl-on-trains-workshop

Flatland: Multi-Agent Reinforcement Learning on Trains

Scheduling trains is hard: railway networks are growing fast, and the decision-making methods commonly used don’t scale well. How can we solve this problem?

With machine learning, of course! In this workshop, we will use reinforcement learning to tackle this real-world challenge.

In the morning, we will introduce the main reinforcement learning methods. Participants will get familiar with them by solving toy problems. In the afternoon, participants will design their own agents, which will then compete with other people’s agents in a (friendly) competitive setting.

We will use the Flatland railway simulator, developed in collaboration with SBB, Deutsche Bahn and SNCF. We plan to invite SBB researchers to share their insights on this problem, as well as top participants from previous Flatland challenges.

Following this workshop, participants can take part in the other Flatland workshop organized by Deutsche Bahn and InstaDeep, which will introduce the bleeding-edge innovations they have been working on to tackle this problem.

Topic and relevance

In this tutorial, we will provide an introduction to reinforcement learning, followed by an explanation of one of its fundamental methods: Deep Q-Learning (DQN). We will then introduce prioritised experience replay (PER) and the intrinsic curiosity module (ICM) as extensions to DQN. We will provide Colab notebooks where participants can get familiar with these concepts by solving toy problems.
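To give a taste of these toy problems, here is a minimal sketch of the tabular Q-learning update that Deep Q-Learning builds on. The corridor environment and all hyperparameters below are illustrative assumptions for this sketch, not the workshop's actual notebooks.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Toy corridor: states 0..4, start in state 0, reward 1.0 upon reaching
# state 4 (the goal). Actions: 0 = left, 1 = right.
# Environment and hyperparameters are illustrative, not from the workshop.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic transition; the episode ends at the goal."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def greedy(q_values):
    """Argmax over actions with random tie-breaking."""
    best = max(q_values)
    return random.choice([a for a, q in enumerate(q_values) if q == best])

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for _ in range(200):  # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy exploration
        action = random.randrange(2) if random.random() < EPSILON else greedy(Q[state])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# The learned greedy policy should move right in every non-goal state.
policy = [greedy(Q[s]) for s in range(GOAL)]
```

DQN replaces the table with a neural network trained on the same temporal-difference target, sampling past transitions from a replay buffer; PER changes how that buffer is sampled, and ICM adds an intrinsic reward that encourages exploration.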

Participants will then get hands-on experience by building and tweaking agents in a competitive setting. We will use the AIcrowd platform to run this competition. The participants who reach the best scores will be invited on stage to explain their methods and insights to everyone.

Reinforcement learning is becoming increasingly relevant across a wide range of applications. It enables solutions for online learning problems and for domains where labelled training data is scarce. Due to its generality, reinforcement learning has found applications in many disciplines. Recently, reinforcement learning methods have achieved breakthroughs on complex tasks such as board games, video games, robotics, molecule discovery and chip design. In the web domain, news recommendation, automatic configuration of web systems and real-time bidding for online advertising are possible applications of reinforcement learning.

All presenters have a publishing background and practical experience, and have previously organized workshops and challenges on deep reinforcement learning. Three of the organizers ran previous editions of this workshop, which were very well received.

Interaction style

Hands-on tutorial

Intended audience and level

Technical skill level needed to attend the workshop: Intermediate. Each participant is expected to take an active part by designing and training RL agents on their own machine. Participants can also form teams, working together on a single agent from one laptop. Training will be done on the Google Colab service, which is free but requires a Google account. Participants should have a good knowledge of Python and at least a basic understanding of machine learning. No prior knowledge of reinforcement learning is expected. We will use the PyTorch framework, but we don’t expect participants to be familiar with it.

Participants will discover what reinforcement learning is, what it can do, and what its current limitations and prospects are. They will get hands-on experience by building and tweaking agents in a competitive setting.

Organisers

Florian Laurent, ML Engineer, AIcrowd
Christian Scheller, Research Associate, FHNW
Sharada Mohanty, CEO, AIcrowd
Yanick Schraner, Research Assistant and Master’s student, FHNW