AMLD2020 Accepted Challenges

Here is the list of AMLD2020 challenges and competitions. The challenges take place before the event. Selected solutions and challenge outcomes will be presented in a dedicated competition track at AMLD2020 on January 27 & 28, 2020, at EPFL in Switzerland, and selected participants will be invited to present and discuss their results during that track.

Note that the challenges are independently organized and depend solely on the organizing team of each competition. All information is subject to change. For more details, contact the organizers of each competition directly.

Flatland Challenge

Organizer

Erik Nygren
SBB (Swiss Federal Railways)

Organizer

Sharada Mohanty
AIcrowd

Start of competition

July 2019

End of competition

December 2019

The Flatland Challenge is a competition to foster progress in multi-agent reinforcement learning for the vehicle re-scheduling problem (VRSP). The challenge addresses a real-world problem faced by many transportation and logistics companies around the world, such as the Swiss Federal Railways (SBB).

Using reinforcement learning (or operations research methods), you solve different VRSP-related tasks in a simplified 2D multi-agent railway simulation environment. Your contribution might influence and shape the way modern traffic management systems (TMS) are implemented, not only in railways but also in other areas of transportation and logistics.
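As a rough illustration of how participants interact with the environment, the sketch below runs a random policy in a Flatland rail environment. It assumes the open-source flatland-rl package; the constructor arguments, default observation builder, and the exact return values of reset()/step() vary between library versions and are assumptions here.

# Random-policy loop against a Flatland rail environment (flatland-rl assumed).
import random

from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator

env = RailEnv(
    width=30,
    height=30,
    rail_generator=sparse_rail_generator(),  # assumed: default generator settings
    number_of_agents=5,
)

reset_out = env.reset()
# Some library versions return only the observations, others (observations, info).
obs = reset_out[0] if isinstance(reset_out, tuple) else reset_out

done = {"__all__": False}
while not done["__all__"]:
    # Random policy as a placeholder; actions 0-4 map to
    # do-nothing / turn left / go forward / turn right / stop.
    actions = {handle: random.randint(0, 4) for handle in obs}
    obs, rewards, done, info = env.step(actions)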

D’Avatar - Reincarnation of Personal Data Entities in Unstructured Text Datasets

Organizer

Balaji Ganesan
IBM Research

Start of competition

TBD

End of competition

TBD

Challenge URL

Link available soon

Named Entity Recognition (NER) and Entity Classification (EC) are well-known tasks in Natural Language Processing. Detection of Personal Data Entities (PDE) in unstructured text is a specialized form of NER and EC that is required for applications like data loss prevention, de-identification, and bias detection.

Because of recent advances in Deep Learning, it has become possible to detect PDEs in text with high precision and recall. However, this research requires large amounts of text containing personal information, which are hard to obtain for privacy reasons. While a few domain-specific datasets such as i2b2 and MIMIC exist, there are restrictions on their usage.

One feasible way to generate datasets for this research is to impute random, unrelated PDEs into already redacted data. This challenge aims to produce such a synthetic dataset, which could be made publicly available to the research community.
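As a purely illustrative sketch of the imputation idea (not the challenge's actual pipeline), the snippet below fills assumed redaction placeholders with randomly generated, unrelated personal data entities using the Faker library; the placeholder tags are an assumption, not the challenge's annotation scheme.

# Replace redaction placeholders with random, unrelated personal data entities.
import re

from faker import Faker  # pip install Faker

fake = Faker()

GENERATORS = {
    "NAME": fake.name,
    "EMAIL": fake.email,
    "ADDRESS": lambda: fake.address().replace("\n", ", "),
    "PHONE": fake.phone_number,
}

def impute_pdes(redacted_text):
    """Replace each hypothetical [TAG] placeholder with a freshly generated entity."""
    def _fill(match):
        generator = GENERATORS.get(match.group(1))
        return generator() if generator else match.group(0)
    return re.sub(r"\[(NAME|EMAIL|ADDRESS|PHONE)\]", _fill, redacted_text)

print(impute_pdes("Patient [NAME] can be reached at [EMAIL] or [PHONE]."))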

TrajNet Challenge

Organizer

Rodolphe Farrando
EPFL

Start of competition

TBD

End of competition

TBD

Challenge URL

Link available soon

Over the past few years, several methods for trajectory-based activity forecasting have been proposed. However, most techniques have been evaluated on a limited number of sequences. Further, these methods have been evaluated on different subsets of the available data, with different evaluation scripts, or in different coordinate systems (2D vs. 3D).

These inconsistencies have made it difficult to objectively compare forecasting techniques. One potential solution involves creating a standardized benchmark to serve as an objective measure of performance; despite their potential pitfalls, benchmarks hold great promise in addressing such comparison issues. There have been a limited number of attempts at trajectory forecasting benchmarks, such as the ETH and the UCY datasets.

However, a common way of presenting forecasting results requires both a standard dataset and standard evaluation metrics. We introduce TrajNet, a new, large-scale trajectory-based activity benchmark that uses a unified evaluation system to test state-of-the-art methods on various trajectory forecasting datasets.
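Benchmarks of this kind are commonly scored with average and final displacement errors (ADE/FDE); the NumPy sketch below computes these two metrics, under the assumption that they are representative of the benchmark's unified evaluation.

# Average Displacement Error (ADE) and Final Displacement Error (FDE).
# predictions and ground_truth have shape (num_agents, horizon, 2),
# i.e. 2D positions over the prediction horizon.
import numpy as np

def displacement_errors(predictions, ground_truth):
    # Euclidean distance at every predicted time step.
    per_step = np.linalg.norm(predictions - ground_truth, axis=-1)
    ade = per_step.mean()          # mean over agents and time steps
    fde = per_step[:, -1].mean()   # mean over agents at the final step
    return ade, fde

# Toy example: one agent, three future steps.
pred = np.array([[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]])
true = np.array([[[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]])
print(displacement_errors(pred, true))  # (1.0, 2.0)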

AutoTrain Competition

Organizer

Thijs Vogels
EPFL

Organizer

Martin Jaggi
EPFL

Start of first round

August 2019

End of first round

December 2019

Results of first round

December 2019

Submissions for live competition

January 2020

The AutoTrain competition aims to fully automate the training process of deep learning models. Submitted optimizers will be benchmarked on a set of unknown architecture/dataset pairs in a live competition at AMLD2020, following an earlier trial round.

The winning optimizers will be made publicly available as open source and will bring significant value to practitioners and researchers by removing the need for expensive hyperparameter tuning and by providing fair benchmarking of all optimizers.
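The submission interface is not specified in this listing, so the PyTorch sketch below is only a hypothetical illustration of what a hyperparameter-free training routine could look like: it receives a model and dataset it has never seen and must choose all optimization settings itself.

# Hypothetical shape of a submission: the function name and signature are
# assumptions for illustration, not the competition's actual interface.
import torch
from torch.utils.data import DataLoader, Dataset

def auto_train(model, train_set, loss_fn, epochs=3):
    # Everything must be chosen by the submission itself; this baseline simply
    # falls back to Adam with default settings and a fixed batch size.
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters())
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model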

Transfer Learning for International Crisis Response

Organizer

Navid Rekabsaz
Idiap Research Institute

Start of competition

TBD

End of competition

TBD

Challenge URL

Link available soon

The challenge stems from DEEP, a platform facilitating collaborative and joint analysis of unstructured data with elements of ML and NLP. The platform is open source and set up to serve the humanitarian community at large.

The DEEP project provides effective solutions to analyze and harvest data from secondary sources such as news articles, social media, and reports that are used by responders and analysts in humanitarian crises. During crises, rapidly identifying important information in the constantly growing body of available data is crucial for understanding the needs of affected populations and improving evidence-based decision making.

DEEP allows users to submit documents and applies a number of NLP processes such as extraction and classification of text snippets (sentences/paragraphs), Named Entity Recognition (NER), and document clustering. The challenge focuses on improving the text snippet classification feature of the platform.

Participants will be provided with data from all domains, consisting of text snippets and their corresponding target labels, where each domain has a different analytical framework (label set). The aim is to learn novel text classification models that can transfer knowledge across domains and, specifically, improve classification effectiveness in domains with little or no available training data. Ideally, transfer and joint learning methods provide a robust solution to the lack of data in data-sparse or new domains.
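As one illustration of such joint learning (an assumption, not the DEEP platform's actual model), the sketch below shares a text encoder across all domains while keeping a separate classification head per domain, so that data-sparse domains benefit from the shared representation.

# Illustrative multi-domain snippet classifier: the shared encoder is trained on
# all domains (where the transfer happens), while each domain keeps its own head
# sized to its own label set. Architecture and dimensions are assumptions.
from typing import Dict

import torch
import torch.nn as nn

class SharedEncoderClassifier(nn.Module):
    def __init__(self, vocab_size: int, labels_per_domain: Dict[str, int],
                 embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Shared across all domains.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.encoder = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # One output head per domain, i.e. per analytical framework.
        self.heads = nn.ModuleDict({
            domain: nn.Linear(hidden_dim, n_labels)
            for domain, n_labels in labels_per_domain.items()
        })

    def forward(self, token_ids: torch.Tensor, domain: str) -> torch.Tensor:
        features = self.encoder(self.embedding(token_ids))
        return self.heads[domain](features)

# Two hypothetical domains with different label sets sharing one encoder.
model = SharedEncoderClassifier(vocab_size=20000,
                                labels_per_domain={"domain_a": 12, "domain_b": 7})
logits = model(torch.randint(0, 20000, (4, 50)), domain="domain_a")  # shape (4, 12)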