Here is the list of AMLD2020 challenges and competitions. The challenges take place before the event. Selected solutions and challenge outcomes will be presented in a dedicated competition track at AMLD2020 on January 27 & 28, 2020, at EPFL in Switzerland. Selected challenge participants will be invited to present and discuss their results during the track.
Note that the challenges are independently organized and depend solely on the organizing team of each competition. All information is subject to change; contact the organizers of each competition directly for more information.
SBB (Swiss Federal Railways)
The Flatland Challenge is a competition to facilitate the progress of multi-agent reinforcement learning for any vehicle re-scheduling problem (VRSP). The challenge addresses a real-world problem faced by many transportation and logistics companies around the world (such as the Swiss Federal Railways (SBB)).
Using reinforcement learning (or operations research methods), you must solve different VRSP-related tasks in a simplified 2D multi-agent railway simulation environment. Your contribution might influence and shape the way modern traffic management systems (TMS) are implemented, not only in railways but also in other areas of transportation and logistics.
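To give a flavor of what a multi-agent episode loop in such a simulation looks like, here is a minimal toy sketch. Everything in it (the `ToyRailEnv` class, the greedy policy, the reward scheme) is an invented stand-in for illustration, not the actual Flatland API.

```python
import random

# Toy stand-in for a 2D multi-agent environment: each agent must reach
# its target cell on a grid. This is NOT the real Flatland API.
class ToyRailEnv:
    def __init__(self, width, height, n_agents, seed=0):
        rng = random.Random(seed)
        self.width, self.height = width, height
        self.positions = {i: (rng.randrange(width), rng.randrange(height))
                          for i in range(n_agents)}
        self.targets = {i: (rng.randrange(width), rng.randrange(height))
                        for i in range(n_agents)}

    def step(self, actions):
        # actions: agent id -> (dx, dy); move each agent, clamp to the grid
        rewards = {}
        for i, (dx, dy) in actions.items():
            x, y = self.positions[i]
            nx = min(max(x + dx, 0), self.width - 1)
            ny = min(max(y + dy, 0), self.height - 1)
            self.positions[i] = (nx, ny)
            rewards[i] = 1.0 if self.positions[i] == self.targets[i] else -0.01
        done = all(self.positions[i] == self.targets[i] for i in self.positions)
        return self.positions, rewards, done

def greedy_policy(pos, target):
    # Move one step toward the target along each axis (a trivial baseline;
    # an RL agent would learn this mapping instead).
    (x, y), (tx, ty) = pos, target
    return ((tx > x) - (tx < x), (ty > y) - (ty < y))

env = ToyRailEnv(10, 10, n_agents=3)
done, steps = False, 0
while not done and steps < 50:
    actions = {i: greedy_policy(env.positions[i], env.targets[i])
               for i in env.positions}
    _, _, done = env.step(actions)
    steps += 1
print(done, steps)
```

The real challenge replaces the greedy baseline with learned or optimized policies, and the toy grid with rail topologies where agents can block each other.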
Link available soon
Named Entity Recognition (NER) and Entity Classification (EC) are well-known tasks in Natural Language Processing. Detection of Personal Data Entities (PDE) in unstructured text is a specialized form of NER and EC that is required for applications like data loss prevention, de-identification, and bias detection.
Thanks to recent advances in Deep Learning, it has become possible to detect PDEs in text with high precision and recall. However, this research requires large amounts of text containing personal information, which is hard to obtain for privacy reasons. While a few domain-specific datasets such as i2b2 and MIMIC exist, there are restrictions on their usage.
One feasible way to generate datasets for this research is to impute random, unrelated PDEs into already-redacted data. This challenge aims to produce such a synthetic dataset that could be made publicly available to the research community.
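The imputation idea can be sketched in a few lines: replace redaction placeholders with random surrogate entities and record their spans, yielding labeled synthetic NER data. The placeholder tags and surrogate lists below are illustrative assumptions, not the challenge's actual format.

```python
import random
import re

# Illustrative surrogate pools; a real pipeline would use much larger,
# carefully curated lists of unrelated entities.
SURROGATES = {
    "NAME": ["Alice Meier", "Rahul Gupta", "Mei Chen"],
    "CITY": ["Lausanne", "Nairobi", "Osaka"],
    "DATE": ["2019-03-14", "2018-11-02", "2020-01-27"],
}

def impute(redacted, seed=0):
    """Replace each [TAG] placeholder with a random surrogate and
    return the synthetic text plus (start, end, label) annotations."""
    rng = random.Random(seed)
    pieces, spans, pos = [], [], 0
    for m in re.finditer(r"\[(NAME|CITY|DATE)\]", redacted):
        label = m.group(1)
        value = rng.choice(SURROGATES[label])
        pieces.append(redacted[pos:m.start()])
        start = sum(len(p) for p in pieces)  # offset of value in the output
        pieces.append(value)
        spans.append((start, start + len(value), label))
        pos = m.end()
    pieces.append(redacted[pos:])
    return "".join(pieces), spans

text = "Patient [NAME] was admitted in [CITY] on [DATE]."
synthetic, spans = impute(text)
print(synthetic)
print(spans)
```

Because the surrogates are random and unrelated to the original subjects, the resulting text carries NER-style supervision without exposing real personal data.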
Link available soon
Trajectory forecasting in dynamic scenes has become an important topic in recent years, driven by emerging applications of artificial intelligence such as autonomous cars and service robots.
In the past few years, several novel methods have been proposed for trajectory forecasting. However, most have been evaluated on limited data, and either on different subsets of the available data or in contrasting coordinate systems (2D vs. 3D), making it difficult to objectively compare the forecasting techniques.
One potential solution is a standardized benchmark that serves as an objective measure of performance. There have been only a limited number of attempts at trajectory forecasting benchmarks, such as the ETH and UCY datasets. Moreover, a good benchmark requires not only a standard dataset but also proper evaluation metrics.
In this challenge, we introduce TrajNet++, a new, large-scale trajectory-based benchmark that uses a unified evaluation system to test state-of-the-art methods on various trajectory-based activity forecasting datasets for a fair comparison.
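Two evaluation metrics that are standard in this literature are average displacement error (ADE) and final displacement error (FDE); they are sketched below as an assumption about what a unified evaluation might compute. Consult the benchmark's own evaluation code for the authoritative definitions.

```python
import math

# ADE: mean Euclidean distance between predicted and ground-truth
# positions over all predicted time steps.
def ade(pred, truth):
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

# FDE: Euclidean distance at the final predicted time step only.
def fde(pred, truth):
    return math.dist(pred[-1], truth[-1])

# Toy example: a prediction that goes straight while the truth turns.
pred  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
truth = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(ade(pred, truth), fde(pred, truth))  # 1.0 2.0
```

ADE rewards accuracy along the whole horizon, while FDE isolates the endpoint, so benchmarks typically report both.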
The AutoTrain competition aims to fully automate the training process of deep learning models. Submitted optimizers will be benchmarked on a set of unknown architecture/dataset pairs in a live competition at AMLD2020 (following an earlier trial round).
The winning optimizers will be made publicly available as open source and will bring significant value to practitioners and researchers, by removing the need for expensive hyperparameter tuning and by providing fair benchmarking of all optimizers.
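The benchmarking idea can be illustrated with a toy harness: each submitted optimizer is run, with no per-task tuning, on an objective it has never seen, and entries are ranked by the quality of the final iterate. The objective, the two optimizers, and the ranking rule below are invented stand-ins, not the competition's actual protocol.

```python
# Stand-in "unseen" objective: gradient of f(w) = (w - 3)^2,
# whose minimum is at w = 3.
def unseen_objective_grad(w):
    return 2.0 * (w - 3.0)

# Two illustrative fixed-hyperparameter entries.
def sgd(grad_fn, w0, steps=100, lr=0.1):
    w = w0
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def sgd_momentum(grad_fn, w0, steps=100, lr=0.1, mu=0.9):
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad_fn(w)
        w += v
    return w

def benchmark(optimizers, grad_fn, w0=0.0):
    # Run every entry from the same start and rank by distance of the
    # final iterate from the known optimum (w = 3 for this toy task).
    results = {name: opt(grad_fn, w0) for name, opt in optimizers.items()}
    return sorted(results.items(), key=lambda kv: abs(kv[1] - 3.0))

ranking = benchmark({"sgd": sgd, "sgd+momentum": sgd_momentum},
                    unseen_objective_grad)
print(ranking)
```

The point of the live format is that entries cannot overfit their hyperparameters to the benchmark tasks, since the architecture/dataset pairs are unknown until evaluation time.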
Idiap Research Institute
Joint IDP Profiling Service (JIPS)
Data Friendly Space
Link available soon
The challenge stems from DEEP, an open-source platform facilitating collaborative and joint analysis of unstructured data with elements of ML and NLP, set up to serve the humanitarian community at large.
The DEEP project provides effective solutions to analyze and harvest data from secondary sources such as news articles, social media, and reports that are used by responders and analysts in humanitarian crises. During crises, rapidly identifying important information in the constantly increasing volume of available data is crucial to understanding the needs of affected populations and improving evidence-based decision making.
DEEP allows users to submit documents and applies a number of NLP processes such as extraction and classification of text snippets (sentences/paragraphs), Named Entity Recognition (NER), and document clustering. The challenge focuses on improving the text snippet classification feature of the platform.
Participants will be provided with data from all domains, consisting of text snippets and their corresponding target labels, where each domain has a different analytical framework (set of target labels). The aim is to learn novel text classification models able to transfer knowledge across domains, and specifically to improve classification effectiveness in domains with little or no available training data. Ideally, transfer and joint learning methods provide a robust solution to the lack of data in data-sparse or new domains.
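As a minimal illustration of the joint-learning idea, the sketch below pools word statistics for shared labels across domains, so a label with little data in one domain borrows evidence from the others. The domains, labels, and snippets are invented, and the word-overlap classifier is a deliberately crude stand-in for the models the challenge is after.

```python
from collections import Counter, defaultdict

# Invented training snippets from two data-rich domains.
TRAIN = [
    ("domain_a", "food", "households report severe food shortages"),
    ("domain_a", "health", "clinics lack basic medical supplies"),
    ("domain_b", "food", "crop failure has reduced food access"),
    ("domain_b", "shelter", "families displaced from damaged homes"),
]

def train_pooled(examples):
    # Joint learning in miniature: pool word counts per label across
    # all domains instead of training each domain separately.
    counts = defaultdict(Counter)
    for _domain, label, text in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score each label by word overlap with its pooled profile.
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train_pooled(TRAIN)
# A snippet from a new, data-sparse domain still gets a sensible label
# because the "food" profile was learned jointly from other domains.
print(classify(model, "acute food shortages in the region"))
```

The real task is harder, since each domain's analytical framework defines a different label set, so transfer typically happens through shared text representations rather than directly shared labels as in this toy.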