Bio Michael Hofmann
Michael is a Software Engineering Manager at TomTom’s Autonomous Driving product unit. He leads multiple R&D and engineering teams working on automated perception for HD mapping using cutting-edge machine learning.
Prior to TomTom, he worked in the augmented reality domain, developing and leading the development of core algorithms for real-time object detection and tracking on mobile phones, as well as large-scale image retrieval.
Michael holds a PhD in computer science from the University of Amsterdam and has 16 years of experience in computer vision, machine learning, and software engineering.
AI for Mapmaking: Embedding Loss Generative Adversarial Networks for Lane Detection
Generating accurate high-definition maps through a scalable process is an important milestone for realizing reliable self-driving cars. This presentation is about our machine learning approach to understanding and semantically modeling the road environment for the purpose of high-definition mapping.
In particular, we will go into detail on our recent embedding-loss-driven adversarial training approach, which we use to efficiently impose desired structural consistencies on our dense semantic prediction of road dividers.
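As a rough illustration of the idea behind an embedding loss (this is a minimal sketch with toy data, not TomTom’s actual implementation), the generator can be trained to match the discriminator’s intermediate embeddings of its predicted lane map to those of the ground-truth map, rather than only fooling a binary real/fake classifier. The `discriminator_features` function below is a hypothetical stand-in for a real discriminator’s feature extractor:

```python
import numpy as np

def discriminator_features(label_map, weights):
    # Hypothetical stand-in for a discriminator's intermediate feature
    # extractor: a single linear projection followed by a ReLU.
    h = label_map.flatten() @ weights
    return np.maximum(h, 0.0)

def embedding_loss(pred, target, weights):
    # L2 distance between the discriminator embeddings of the predicted
    # and the ground-truth lane label maps. Matching in embedding space
    # encourages structurally consistent predictions.
    e_pred = discriminator_features(pred, weights)
    e_target = discriminator_features(target, weights)
    return float(np.mean((e_pred - e_target) ** 2))

rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 8))              # toy discriminator weights
target = rng.random((4, 4))                     # toy ground-truth lane mask
pred = target + 0.1 * rng.normal(size=(4, 4))   # imperfect prediction

loss = embedding_loss(pred, target, weights)    # > 0 for imperfect prediction
```

A perfect prediction yields zero embedding loss, so minimizing this term pushes the generator’s output toward the structure the discriminator has learned to represent.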
Bio Catherine Pelachaud
Catherine Pelachaud is Director of Research at CNRS in the ISIR laboratory, Sorbonne University. Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors, and socio-emotional agents. She is an associate editor of several journals, among them IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and ACM Transactions on Games. She has co-edited several books on virtual agents and emotion-oriented systems. She is a recipient of the ACM SIGAI Autonomous Agents Research Award 2015 and was awarded the title Doctor Honoris Causa by the University of Geneva in 2016. Her SIGGRAPH ’94 paper received the Influential Paper Award of IFAAMAS (the International Foundation for Autonomous Agents and Multiagent Systems).
Interacting with Socio-Emotional Characters
Socio-emotional characters are software entities endowed with communicative capabilities. Through verbal and nonverbal behaviours they can interact with human interlocutors. Through their nonverbal behaviours these characters can show their engagement in the interaction, convey their interpersonal attitude, or manage the impression they create. We have developed computational models that allow them to adapt their behaviours in real time to convey specific intentions and to respond to their interlocutors’ behaviours. While most work in human-agent interaction looks at dyadic situations, in everyday life we often interact in groups. Lately we have been developing models to control groups of characters: how they exchange speaking turns, and how having a group of characters arguing among themselves or with the user influences the outcome of the interaction. In this talk I will present the different works we are conducting in these areas. I will also describe our Embodied Virtual Agent platform Greta/VIB, in which these works are implemented.