
Vision-Based Activity Detection Inside the Vehicle

Abstract

As distracted driving causes many road fatalities, activity detection is crucial to automotive in-cabin sensing. In today's cars, cameras are used to monitor the driver's gaze. But while gaze aversion is a critical factor in driver distraction, studies show that the risk depends not only on where you look but also on how you engage.

Robust activity detection in a car requires fusing a multitude of signals, each with its own unique strengths. For example, body movement over time can show that a person is engaged in a secondary, non-driving-related task. To further understand the scene, this has to be combined with an appearance-based algorithm, such as a deep convolutional neural network that detects interaction with objects. On top of this whole-cabin view, well-established driver monitoring signals can further strengthen insight into occupant activity: lip movement can reveal that an occupant is speaking, and gaze direction can indicate talking or texting on a phone.
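To make the fusion idea concrete, here is a minimal sketch in PyTorch, not Smart Eye's implementation; the branch sizes, input dimensions, and choice of driver-monitoring signals are illustrative assumptions. It shows how a body-movement sequence, a whole-cabin image, and a small vector of driver-monitoring signals could feed a single activity classifier via late fusion.

```python
# Minimal late-fusion sketch for in-cabin activity detection (illustrative only):
# a temporal branch for body movement, an appearance branch for the cabin image,
# and a small vector of driver-monitoring signals (e.g. gaze angles, lip movement).
import torch
import torch.nn as nn


class ActivityFusionNet(nn.Module):
    def __init__(self, num_activities: int = 6):
        super().__init__()
        # Temporal branch: a GRU over a sequence of body-keypoint vectors
        # (assumed 17 keypoints x 2 coordinates = 34 features per frame).
        self.motion_rnn = nn.GRU(input_size=34, hidden_size=64, batch_first=True)
        # Appearance branch: a small CNN over the whole-cabin image.
        self.appearance_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Driver-monitoring branch: e.g. gaze yaw/pitch and a lip-movement score.
        self.dms_mlp = nn.Sequential(nn.Linear(3, 16), nn.ReLU())
        # Fusion head: concatenate all branch features and classify the activity.
        self.classifier = nn.Linear(64 + 32 + 16, num_activities)

    def forward(self, keypoint_seq, cabin_image, dms_signals):
        _, motion_h = self.motion_rnn(keypoint_seq)          # (1, B, 64)
        motion_feat = motion_h.squeeze(0)                    # (B, 64)
        appearance_feat = self.appearance_cnn(cabin_image)   # (B, 32)
        dms_feat = self.dms_mlp(dms_signals)                 # (B, 16)
        fused = torch.cat([motion_feat, appearance_feat, dms_feat], dim=1)
        return self.classifier(fused)                        # activity logits


# Example forward pass with dummy inputs: a 30-frame keypoint sequence,
# a 128x128 cabin image, and a 3-dimensional driver-monitoring vector.
model = ActivityFusionNet()
logits = model(torch.randn(2, 30, 34), torch.randn(2, 3, 128, 128), torch.randn(2, 3))
print(logits.shape)  # torch.Size([2, 6])
```

Late fusion is only one possible design choice here; the talk's point is that no single signal is sufficient on its own, and the combination is what makes the activity estimate robust.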

This talk will give insight into the steps of developing such an activity detection system – from data collection and annotation to integration on a System on Chip. Examples will be presented, along with useful tips and tricks for developing the various detection algorithms.

Mattias Ulmestrand

Technical Developer @ SMART EYE

Mattias works as a technical developer at Smart Eye, developing deep learning models for computer vision in embedded environments. Prior to working at Smart Eye, Mattias studied Engineering Physics with a master's in Complex Adaptive Systems at Chalmers University of Technology and ETH Zürich. In his spare time, Mattias enjoys working on personal projects in machine learning and related subjects, as well as playing and composing music.