A few years ago, building a machine learning system meant training your own model from scratch. Today, most teams start with an API call to a proprietary model, and that’s usually the right choice. But as products mature, the equation changes. Costs and control become limiting factors, fine-tuning becomes relevant, and open-source models are often considered.
In this talk, I’ll share what I’m seeing from working with AI startups and scaleups like Lovable, Suno, Reducto, and Cognition: when sticking with hosted APIs makes sense, when fine-tuning or switching to open source is the smarter move, and why some companies are still training their own models. We’ll look at the changing role of model training, and why it’s becoming rarer, yet more important than ever.

Rebecka Storm has a background in machine learning and has held data leadership roles at iZettle and Tink. After co-founding the data orchestration startup Twirl, which was acquired by Modal, she now works on AI infrastructure, including serverless GPUs for model training and inference, and sandboxes for executing AI-generated code.
In 2018, Rebecka co-founded Women in Data Science Sweden, an organization that promotes inclusivity through conferences, mentorship programs, a speaker database, and other initiatives to inspire and support women working in data.