We Need to Talk About ML Debugging

Room: Congress Hall / Room H1+H2 / Room G3 (to be released)
Time: 10:00
Theme: Machine Learning
Difficulty: D2

Machine learning systems often fail in the most frustrating way possible: nothing crashes, training looks fine, and yet real-world performance is poor. Debugging these silent failures is one of the hardest and most underestimated challenges in applied ML. Most practitioners learn through painful experience, slowly building a personal “bag of tricks” from past mistakes and relying on intuition to guess what went wrong. In this talk, we explore how this intuition-driven approach emerges, why it sometimes works, and why it breaks down as ML systems grow more complex. We unpack how availability bias shapes where engineers look first, what they ignore, and why so much debugging effort is spent in the wrong parts of the pipeline. Finally, the talk introduces a structured approach to ML debugging. It starts from visible symptoms and uses small, targeted experiments to guide the next steps. This transforms debugging from guesswork into a systematic reduction of the search space, giving attendees a framework they can apply directly in their own ML projects.
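One well-known example of a small, targeted experiment in this spirit (an illustration, not necessarily one from the talk) is trying to overfit a tiny batch: if the training loop cannot drive the loss near zero on a handful of learnable examples, the bug is in the model or optimization code rather than in data volume or deployment, which immediately shrinks the search space. A minimal sketch with a toy logistic-regression loop:

```python
import numpy as np

# Targeted experiment: can the training loop overfit 8 easy examples?
# If not, the model/optimizer code is suspect; data scale is irrelevant here.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))          # tiny batch of 8 examples
y = (X[:, 0] > 0).astype(float)      # synthetic, trivially learnable labels

w = np.zeros(4)
b = 0.0
lr = 0.5

def loss(w, b):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial = loss(w, b)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y) / len(y))   # gradient of the log-loss w.r.t. w
    b -= lr * np.mean(p - y)             # gradient w.r.t. the bias

final = loss(w, b)
# A healthy loop drives the tiny-batch loss close to zero; a flat loss
# curve here points the investigation at the model/optimization code.
print(f"initial loss {initial:.3f} -> final loss {final:.4f}")
```

The value of the experiment is not the number it prints but the branch it rules out: a passing result shifts attention to data and deployment, a failing one localizes the bug to a few dozen lines of training code.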

Speakers

Juliano Tusi Amaral Laganá Pinto

Staff Engineer
Modulai
Bio

Juliano is a Staff Engineer at Modulai, where he provides technical mentorship to machine learning engineers and develops ML systems across diverse industries using a range of ML techniques. He also founded Juliano Labs, an education business focused on high-quality workshops and courses on ML topics and structured problem-solving. Juliano brings a pragmatic and experiment-driven approach to ML debugging, shaped by 10 years of building and debugging ML pipelines across academia and industry.