LLMs are becoming great at generating code. But anyone who has used Claude Code, Lovable, or Cursor knows that generating code isn't enough. The code also has to run somewhere, ideally in an iterative loop where an agent writes code, executes it, observes the result, refines it, and repeats. That feedback loop is what has made coding agents far more useful for building software in just the past few months.
Supporting these loops at scale is a surprisingly hard problem, one that has led to the rise of a new infrastructure primitive: sandboxes. In this talk, I will explain what sandboxes are, how they differ from traditional container primitives, and how they are used in practice by vibe coding platforms like Lovable, background coding agents like Ramp’s Inspect, and large-scale reinforcement learning experiments at Meta. I will share practical lessons from building and operating these systems, and explain why this layer is quickly becoming central to the AI stack.

Rebecka Storm has a background in machine learning and has held data leadership roles at iZettle and Tink. After co-founding the data orchestration startup Twirl, which was acquired by Modal, she now works on AI infrastructure, including serverless GPUs for model training and inference, and sandboxes for executing AI-generated code. In 2018, Rebecka co-founded Women in Data Science Sweden, an organization that promotes inclusivity through conferences, mentorship programs, a speaker database, and other initiatives to inspire and support women working in data.