Most major machine learning breakthroughs in recent years can be attributed to deep learning, a family of highly complex and nonlinear models. A major drawback of deep learning models, however, is that they are very hard to interpret: we know what a model predicts, but we have little insight into how it reasons. As more machine learning models are deployed in real-life scenarios, safety requirements on these systems have increased, especially in the area of model interpretability.
During this talk, you will get an introduction to model interpretability: why it is important, and why some of our most powerful models are so hard to understand. Different methods for interpreting models will be presented, together with some theoretical background on the topic as well as recent research in the area.
Machine Learning Engineer @ Annotell
The only speaker at this year’s conference who also appeared last year is the talented Marko Cotra. His talk last year was very popular, and we are glad that he wanted to join us again, using his pedagogical skills to help us understand machine learning models and their pros and cons. Marko comes from the machine learning startup Annotell, where he helps companies create and manage good training data.