AI & Trust
13:30-17:00 January 28
& 09:00-12:30 January 29

½ day × 2

Schedule

AI & Privacy

Introduction

13:30-13:35 January 28 · with Joseph Dureau

Technologies for Privacy-Preserving Machine Learning

13:35-14:00 January 28 · with Morten Dahl

Talk Title TBD

14:00-14:20 January 28 · with Peter Kairouz

When foes are friends: a privacy perspective on adversarial examples

14:20-14:40 January 28 · with Carmela Troncoso

Spoken Language Understanding on the Edge

14:40-15:00 January 28 · with Alice Coucke

Coffee Break

15:00-15:30 January 28

AI for Good

Introduction

15:30-15:35 January 28 · with Daniel Whitenack

Talk Title TBD

15:35-15:55 January 28 · with Jennifer Marsman

Solve for H: Leveraging AI to solve problems for human kind

15:55-16:15 January 28 · with Anna Bethke

Artificial intelligence for making earth accessible, searchable and insightful

16:15-16:35 January 28 · with Frank de Morsier

Talk Title TBD

16:35-16:55 January 28 · with Chris Benson

AI & Security

Introduction

09:00-09:01 January 29 · with Tereza Iofciu

A Marauder's Map of Security and Privacy in Machine Learning

09:01-09:25 January 29 · with Nicolas Papernot

Byzantine Machine Learning: Safeguarding AI from Data Poisoning and Hacked Machines

09:25-09:35 January 29 · with El Mahdi El Mhamdi

The past, present and future of generative models

09:35-10:00 January 29 · with Mihaela Rosca

Building a security ML-based startup from scratch

10:00-10:20 January 29 · with Raul Popa

How do attacks look in a world dominated by AI?

10:20-10:30 January 29 · with Sharada Mohanty

Coffee Break

10:30-11:00 January 29

Trusting AI

Introduction

11:00-11:05 January 29 · with Marisa Tschopp

Trusting AI and the Future of War

11:05-11:30 January 29 · with Anja Kaspersen

Trusting real-world AI applications

11:30-11:50 January 29 · with Marc Schöni

How people perceive AI - Trust & Explanation

11:50-12:10 January 29 · with Pearl Pu

Trusting AI - IBM's strategy, research, and practical approaches

12:10-12:30 January 29 · with Dorothea Wiesmann

Speakers

Alice Coucke

Senior Machine Learning Scientist, Snips

Anja Kaspersen

Director Office for Disarmament Affairs, United Nations

Anna Bethke

Head of AI for Social Good, Intel

Carmela Troncoso

Professor, EPFL

Chris Benson

Chief Strategist, Artificial Intelligence Programs, Lockheed Martin

Daniel Whitenack

Data Scientist, SIL International

Dorothea Wiesmann

Department Head, Cognitive Computing & Industry Solutions, IBM Research

El Mahdi El Mhamdi

PhD Student, EPFL

Frank de Morsier

Picterra

Jennifer Marsman

Principal Software Development Engineer, Microsoft

Joseph Dureau

CTO, Snips

Marc Schöni

Advanced Analytics & AI, Microsoft

Marisa Tschopp

Researcher, scip

Mihaela Rosca

Research Engineer, DeepMind

Morten Dahl

Research Scientist, OpenMined & Dropout Labs

Nicolas Papernot

Research Scientist, Google Brain

Pearl Pu

Professor, EPFL

Peter Kairouz

Researcher, Google

Raul Popa

CEO & Data Scientist, Typing DNA

Sharada Mohanty

PhD Student, EPFL / Co-Founder, crowdAI

Tereza Iofciu

Data Scientist, mytaxi

Details

Trust is indispensable to the prosperity and well-being of societies. For millennia, we have developed trust-building mechanisms to facilitate interactions. But as interactions become increasingly digital, many of these traditional mechanisms no longer function well, and trust breaks down. The resulting low levels of trust discourage us from engaging in new forms of interaction and constrain business opportunities.

We must therefore invent trust mechanisms that will contribute to prosperous and peaceful societies in the digital age, and Artificial Intelligence can clearly contribute to building trust in the digital world.


Subtrack AI & Privacy

AI often relies on Machine Learning, which requires massive training datasets. The current status quo in AI is cloud-based services that process the raw data of all users and learn from all or a subset of it. With the rise of IoT and the deployment of sensors (cameras, microphones, etc.) in our homes, this raw data is becoming more and more sensitive. The development of privacy legislation, together with progress in private machine learning and edge computing, opens the way to interesting alternatives. Key academic and industrial efforts are now being made to reconcile AI and privacy.


Subtrack AI for Good

In the midst of increasing concern over widespread misuse of data and the implications of AI, it's important to highlight those applying AI for social good. Whether it's helping African farmers detect diseased crops, advancing sustainability, or assisting the visually impaired, there are great stories out there that deserve to be shared. These stories combine impressive technical achievements with outcomes that are positive for humanity, and a few of them will be highlighted during the "AI for Good" subtrack.


Subtrack AI & Security

More and more AI and Machine Learning solutions are deployed across industries, affecting society at scale. The well-defined models of attack and defence in classical computer security do not transfer perfectly to ML systems and do not cope well with the variety of possible ML attacks. The attack surfaces are not yet clearly defined: minor alterations to the input data can be enough to manipulate or poison a system. Moreover, because the AI industry is still emerging, there are no established standards or formal definitions for testing and securing ML systems. This session brings together experts to highlight advances in the field of secure ML.


Subtrack Trusting AI

Trust is the social glue that enables humankind to progress through interaction with each other and the environment, including technology. In the AI context, there are various reasons why trust has become a very popular topic in research and practice. There is often no clear definition of an AI system's processes, performance, and especially its purpose with respect to the intentions of its provider. Furthermore, open questions regarding ethical standards, the notion of dual-use research, the lack of regulation, questionable data privacy, and the uncontested supremacy of the tech giants leave a feeling of uncertainty behind. The subtrack "Trusting AI" encompasses questions and issues from research and practice that look at the heterogeneous concept and associations of trust from an individual, relationship, situation, and/or process perspective: from initial trust formation and its antecedents to repairing a trust relationship once it is broken. Trust is one of the most critical influencing factors in the AI context.

Co-organizers

Center for Digital Trust

Olivier Crochat

Executive Director, C4DT EPFL

Daniel Whitenack

Data Scientist, SIL International

Marisa Tschopp

Researcher, scip

Joseph Dureau

CTO, Snips

Tereza Iofciu

Data Scientist, mytaxi

January 26-29, 2019

© 2018 Applied Machine Learning Days