
Machine Learning / Data Science / Data Engineering
April 20, 2026 @ Svenska Mässan
Workshops, April 21 @ Svenska Mässan
GAIA organises a one-day conference for people interested in artificial intelligence and all things data. We create an environment for learning, networking, and knowledge-sharing around these shared interests among individuals, organisations, academia, and the public sector. The conference focuses on applied machine learning, data science, and data engineering. It is an event by practitioners for practitioners. Across the three tracks, we also cover adjacent topics relevant to building successful AI products and companies, such as business, legal, design, and ethics. The conference presents diverse content from enthusiastic domain experts. It covers developments within the field in Gothenburg and the Nordics, as well as global trends.
Watch our previous conference on our YouTube channel. The GAIA Conference 2026 is just around the corner. See you there!
Practitioners and academics will give fascinating talks. We expect to be inspired and learn about techniques, strategies, and tools commonly used by people in the field. We hope to leave the conference with a long list of new things to explore further!
Of course, food and drinks are included in the ticket price. We will provide you with breakfast, lunch, and fika, along with unlimited coffee and tea, to keep you sharp throughout the day. We recommend planning for some extra time after the closing remarks, as we will finish the day with bubbles!
We host the conference and the workshops at Svenska Mässan, conveniently located near the Korsvägen stop. As usual, this prime venue enables us to bring together a large number of attendees, partners, and startups.
We are honoured to have so many representatives from the AI community share their knowledge and thoughts. They will share their tips and experiences, and you will have the opportunity to meet with other enthusiasts who share similar problems and interests.
We are now releasing the speakers for the 2026 edition of our conference. The full program will be revealed soon.
Our tracks include deep technical presentations for expert practitioners, as well as sessions on adjacent fields like design, legal, business, and ethics, necessary for building AI products and companies. We also include inspirational and introductory talks for users and those looking to get into AI.
We label each talk with the prerequisites needed to appreciate it and the main topic covered, to help you pick the talks you want to attend.


At Jeppesen, we develop world-leading optimization solutions for the aviation industry, solving NP-hard problems like crew scheduling using methods such as Linear Programming and Mixed-Integer Programming.
In this talk, we will share insights from a recent exploration where our AI enablement and optimization teams collaborated to evaluate the potential of combining Evolutionary Algorithms with the creative power of Generative AI in our problem space. During this exploration, we pushed the boundaries of traditional optimization by enabling AI to mutate algorithms and code using biologically inspired algorithms. This fusion of "old meets new" not only sparked innovative solutions but also demonstrated the value of thinking outside the box when tackling optimization challenges and gave a taste of what the future could hold.
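As a purely illustrative sketch (not Jeppesen's actual system), the mutate-and-select loop behind this approach can be written in a few lines of Python, with a random perturbation standing in for the step where a generative model proposes a code or algorithm change:

```python
import random

def evolve(population, fitness, mutate, generations=100, seed=0):
    """Generic elitist evolutionary loop.

    `mutate` is a stand-in for the step where a generative model
    proposes a modified algorithm or code variant.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        children = [mutate(parent, rng) for parent in population]
        # Survival of the fittest: keep the best individuals.
        population = sorted(population + children, key=fitness, reverse=True)[: len(population)]
    return population

# Toy stand-in problem: evolve a number toward a target optimum.
target = 42.0
fitness = lambda x: -abs(x - target)
mutate = lambda x, rng: x + rng.gauss(0, 1)

best = evolve([0.0, 10.0, 100.0], fitness, mutate)[0]
```

In the setting described above, `mutate` would instead ask an LLM to rewrite a candidate algorithm, and `fitness` would score the resulting crew schedule.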
Alexander is a Data Scientist at Jeppesen ForeFlight, where he focuses on crew scheduling optimization for the aviation industry. His technical background is rooted in C++, and he currently explores the synergy between traditional optimization and AI. Specifically, he has been looking into how evolutionary algorithms, such as OpenEvolve, can be integrated into the optimization process. At GAIA, he will discuss how we can bridge these methods to solve complex scheduling challenges.


In this talk, we explore the potential of generative AI, specifically Large Language Models (LLMs) and Vision-Language Models (VLMs), to support the development and operation of safe autonomous driving.
The process begins by identifying critical scenarios in real-world data, then generating safety concepts to mitigate them, and ultimately implementing them in code. To enable this "from concept to code" workflow, we first examine the strengths and limitations of LLMs in automotive system engineering tasks. Building on these insights, we design multi-agent systems that enhance robustness, consistency, and validity of the generated artifacts. We then showcase a set of tailored pipelines, each supporting a different activity in the DevSafeOps cycle, from field data to code generation, refinement, and evaluation. This approach is not limited to rule-based software; it also extends to ML-based software units, enabled by integrating 3D Gaussian Splatting for synthetic data generation and VLMs to trigger data collection.
These AI-enabled pipelines accelerate automotive system and software engineering activities, illustrate how generative AI can integrate into the continuous software development process in automotive, and highlight the emerging role of LLMs in this field.
Ali Nouri is an AI expert at Volvo Cars, working on autonomous driving, and a researcher at Chalmers University of Technology. He has more than ten years of experience in autonomous driving systems. He represents Sweden in international ISO standardization efforts, including ISO 8800 (Safety of AI). His research focuses on accelerating DevOps for autonomous driving software development through generative AI.


This talk explores what happens when AI systems are designed not only to respond to our emotions but to influence them. Emotional AI tools, including chatbots and AI companions, increasingly use affective design, reward loops and adaptive emotional cues that can quietly shape user behavior. These techniques, grounded in behavioral psychology and neuroeconomics, raise new questions about autonomy, manipulation and the limits of traditional consent.
Ana Catarina de Alencar shows how emotionally responsive AI can trigger patterns of dependency, create subtle forms of influence and blur the boundary between persuasion and manipulation. When an AI system comforts, validates, or rewards users at precisely the right emotional moment, can we still say that consent is freely given? And what does informed consent mean when the system itself affects the user’s emotional state and decision-making?
Drawing from emerging real-world cases (including situations of emotional attachment to AI companions), the talk highlights why existing privacy and consent frameworks are insufficient for emotionally interactive technologies. Attendees will gain a clear understanding of how manipulative design manifests in Emotional AI, why it matters for ethics and regulation, and what needs to change to protect user autonomy.
Ana Catarina de Alencar is an international lawyer and ethicist based in Paris, working at the intersection of AI governance, regulation, and philosophy. She is the Resident Philosopher at The AI Collective, a San Francisco–based organization dedicated to shaping responsible AI futures. Ana is also a PhD researcher at Université de Lille, where she develops an interdisciplinary project on emotional AI, bridging law, philosophy, and neurobiology. She holds a Master’s degree in Philosophy and Technology of Law and is the author of several publications, including Artificial Intelligence, Ethics and Law (2022).


This keynote explores the intersection of visualization, artificial intelligence, and human decision support, focusing on how multimodal interfaces are shaping next-generation AI-enabled workflows. Effective visualization empowers users to interpret complex data, while AI augments human insight, making decision processes more robust and transparent. Multimodal interfaces—combining visual, auditory, and tactile channels—are emerging as pivotal tools in facilitating seamless human-AI collaboration, enabling users to interact with systems in more intuitive and meaningful ways.
To realize these advancements, Sweden requires a robust AI compute infrastructure capable of supporting both research and industrial innovation. The talk will highlight examples such as the build-up of Sferical AI to provide key industries in Sweden with state-of-the-art AI compute. The Wallenberg AI, Autonomous Systems and Software Program (WASP) plays a central role in this landscape, driving forward the development of advanced AI technologies and cultivating a new generation of experts. Through its comprehensive initiatives, WASP provides critical competence that bridges the gap between academic breakthroughs and real-world industrial adoption.
Anders Ynnerman is a renowned Swedish scientist specializing in scientific visualization, computer graphics, and visual AI. He currently serves as the Director of Strategic Research at the Knut and Alice Wallenberg Foundation and is Executive Chairman of Sferical AI. With a strong background in supercomputing infrastructure, Ynnerman has served as director of the National Supercomputer Centre (NSC), contributing significantly to the development of Sweden’s computational research capabilities. In addition, he has played a pivotal role in the Wallenberg AI, Autonomous Systems and Software Program (WASP)—having previously served as Program Director and now acting as Chair of WASP, Sweden’s largest research program. Ynnerman is also the leader of the national WISDOME project for Science Communication. His research base is as a Professor of Scientific Visualization at Linköping University, where he leads the Visualization Center C. Through these leadership positions, Ynnerman continues to advance the field of digital science communication, supercomputing, and innovative research in Sweden and beyond.


This presentation introduces the new Guidance Note NI692 developed by Bureau Veritas Marine & Offshore for assessing Machine Learning Systems (accessible here).
This Guidance Note provides recommendations for the transparent and trustworthy development and operation of machine learning systems in marine settings. It emphasizes human oversight and risk-based assessment across the system lifecycle, from data and design to deployment, monitoring, and maintenance.
Among the key highlights of the Guidance Note: NI692 builds upon the EU AI Act, ISO standards, and the IMO MASS Code, and it emphasizes key aspects of ML systems such as transparency, explainability, human oversight, and traceability.
Bérénice Le Glouanec holds a master's degree in Language Technology from the University of Gothenburg. She is the AI and data technical expert at Bureau Veritas Marine & Offshore, working within the Digital and Autonomous Ship team in the Rule Development department. She is responsible for drafting rule notes and recommendations, including NI692, a guideline that addresses the entire lifecycle of machine learning systems and ensures their trustworthiness. She also contributes to the International Association of Classification Societies (IACS) on the revision of Recommendation 183 on Ship Data Quality and follows AI standardization activities within AFNOR and CEN-CENELEC.


In this talk, The Good AI Lab and Doctors Without Borders share how they are building a long-term collaboration where AI is taught and developed openly, accountably, and in service of humanitarian work. They will outline how AI enablement, strategy, and applied research projects can fit into a single roadmap, and what they have learned so far about making AI useful in high-stakes, low-resource settings without losing sight of dignity, context, and constraints.
Carlo Rapisarda is a software engineer and co-founder of The Good AI Lab. He currently works at GeoGuessr, focusing on product engineering with an emphasis on mobile development and 3D/graphics experiences. Previously, he led applied AI initiatives at Framna. At The Good AI Lab, Carlo helps translate responsible-AI principles into deployable systems and practical training programs for humanitarian and public-interest partners, helping ensure that real-world AI deployments are useful, safe, and aligned with mission needs.


Marketing teams rarely pursue a single objective: lifting sales must coexist with brand‑awareness targets, customer‑acquisition caps, or sustainability constraints. Traditional media‑mix models (MMMs) treat each metric in isolation, leaving decision‑makers to reconcile incompatible recommendations by hand. This talk demonstrates how graphical Bayesian modelling enables simultaneous inference and optimization across multiple objectives, producing allocation strategies that respect every KPI—and clearly flag when trade‑offs become infeasible.
In a world where every euro of media spend must juggle revenue, reach, churn, and business constraints, classic media‑mix models fall short: they optimize one KPI and leave the rest to managerial guesswork. This talk reveals how graphical Bayesian modelling lets multiple causal MMMs—each focused on a different target variable—cohabit in a single, principled budget‑allocation problem.
We'll focus on the PyMC ecosystem, showing how PyTensor, PyMC, and PyMC-Marketing help solve this problem. We'll cover the advantages of graphical models and how to use them to build causal media mix models and perform complex operations in a very straightforward manner.
Attendees will leave with a reproducible notebook template, a principled framework for embedding several MMMs in one optimisation problem, and a checklist for detecting—and communicating—when certain goals cannot be met concurrently. The material assumes familiarity with Bayesian inference but will provide a concise refresher on PyMC syntax and the latest PyMC‑Marketing utilities.
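As a taste of the building blocks involved, here is a simplified NumPy sketch (ours, not the PyMC-Marketing API itself) of two transformations at the heart of most media mix models: carryover (adstock) and diminishing returns (saturation):

```python
import numpy as np

def geometric_adstock(spend, alpha):
    """Carryover: today's impact includes a decayed share of past spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + alpha * carry
        out[t] = carry
    return out

def logistic_saturation(x, lam):
    """Diminishing returns: doubling spend less than doubles the effect."""
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

# A single channel's weekly spend, transformed into an effect curve.
spend = np.array([100.0, 0.0, 0.0, 50.0])
effect = logistic_saturation(geometric_adstock(spend, alpha=0.5), lam=0.02)
```

In a Bayesian MMM, `alpha` and `lam` become latent parameters with priors, inferred jointly for every channel, which is what makes multi-objective allocation over several such models tractable in one graph.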
Carlos Trujillo is a marketing scientist and data scientist specializing in Bayesian statistics and advanced analytics. He has over seven years of experience applying statistical modeling and machine learning to real-world marketing problems, including work at Omnicom Media Group and Bolt, and he currently works at Wise. He is a contributor to PyMC-Marketing, an open-source Bayesian marketing analytics library within the PyMC ecosystem, where he helps develop tools for media mix modeling, budget optimization, and causal inference. His work bridges theory and practice, and he regularly collaborates with international teams while contributing to the open-source community and technical knowledge sharing.


This case concerns the development of an end-to-end geospatial pipeline that integrates curated spatial attributes with Husqvarna’s internal data to produce a coherent view of golf course environments. Developed in collaboration with Knowit Solutions, the pipeline establishes the technical foundation for geospatial analytics that drive business insights. Structured spatial indexing and robust metadata contribute to an annotated dataset that supports computer vision workflows in validating patterns and enriching geospatial analytics.
Husqvarna is further advancing its data-driven approach by using geospatial analytics to generate deeper insights into green spaces and understand regional variation. This evolving technical foundation is strengthening Husqvarna’s ability to combine visual and spatial information for more precise green space evaluation and identification, enabling more focused and informed decision-making across the organization.
Christos Marinos is a Data Engineer specializing in designing and building scalable, robust data platforms that enable analytics, AI, and machine learning products. He focuses on data extraction, transformation, and modeling, delivering reliable pipelines that support end-to-end AI and ML workflows. He is dedicated to creating data products that generate business value, working closely with stakeholders to align technical solutions with real needs. With a strong emphasis on data governance, scalability, and operational robustness, he builds maintainable platforms that support long-term, value-driven decision-making.


Unlabeled data is abundant in industry, but ground truth is scarce. Trafikverket captures hundreds of terabytes of high-resolution imagery annually to monitor Sweden’s railway network. The sheer volume of this data renders manual labeling—and therefore traditional supervised learning—unfeasible.
This talk explores how we are overcoming the "labeling bottleneck" by deploying a self-supervised foundation model trained on this massive, unlabeled archive. We will bridge the gap between theory and practice, starting with a comparison of Self-Supervised Learning (SSL) paradigms. Moving beyond theory, we will demonstrate the practical application of SSL in a large-scale industrial setting.
Attendees will learn how Trafikverket leverages SSL to learn robust visual representations, enabling the fine-tuning of models for critical downstream tasks—such as detecting rail cracks and identifying key infrastructure assets—with minimal labeled data.
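To make the objective concrete, here is a minimal NumPy sketch (our illustration, not Trafikverket's implementation) of the contrastive InfoNCE loss used by many SSL methods: embeddings of two augmented views of the same image are pulled together while all other pairings are pushed apart:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE: row i of z1 should match row i of z2 and no other row."""
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive pair for row i sits on the diagonal.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Two nearly identical "views" give a low loss; random pairings do not.
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
random_pairs = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In practice the embeddings come from a neural encoder applied to image augmentations; minimising this loss is what yields representations that fine-tune well with only a handful of labels.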
Edvin Listo Zec is a Senior Machine Learning Engineer at Eghed, where he applies advanced deep learning to critical business challenges. He holds a PhD from KTH Royal Institute of Technology, where his research focused on distributed deep learning. Previously, Edvin served as a Research Scientist at RISE Research Institutes of Sweden and a Visiting Researcher at NYU. With a background spanning representation learning and out-of-distribution generalization, he is dedicated to bridging the gap between theoretical research and impactful, sustainable applications in the physical world.


The talk will outline how Zenseact is using spatio-temporal Transformers to realize E2E driving with safety guardrails. We'll discuss opportunities and challenges that come with this approach.
Erik is currently Senior Director of AI and Perception at Zenseact. Prior to this role, he held several key technical leadership positions at Zenseact (and formerly Zenuity), including Chief AI Officer, Chief Architect, Product Area Owner for Computer Vision, and Technical Expert in Deep Learning.
From 2014 to 2017, he served as Director of Automated Driving and Preventive Safety at Autoliv’s global research division. Erik holds a PhD in superstring theory from Chalmers University of Technology (2006) and has eight years of experience in statistical accident research at Autoliv.


In the depths of a Boliden mine, 700 meters underground, running traditional cloud-first AI on heavy-duty machinery falls short of delivering results. In a collaboration with Volvo Group and academic partners, we validated a novel approach: turning the trucks themselves into intelligent and interactively queryable computational databases. By embedding a tiny combined main-memory database and computation engine directly onto heavy-duty mining vehicles, we transformed the fleet into a distributed system where data streams are analyzed and queried at the source in real-time.
This talk shares the architectural lessons learned from deploying this "database-on-wheels" model to monitor critical metrics like battery health, energy regeneration, and driving patterns in a connectivity-constrained environment.
Beyond the immediate deployment, this architecture offers a fundamental shift in how we build industrial AI. We will explore how exposing physical assets via a familiar SQL-like interface paves the way for the next generation of Agentic Workflows. Instead of dealing with rigid firmware cycles, autonomous agents can simply “query” the fleet for insights or update model weights as easily as updating a row in a database table. We will discuss the trade-offs of edge-native computational query processing and how this approach decouples rapid AI innovation from the slower engineering cycles of heavy machinery.
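A toy illustration of the pattern (using Python's built-in SQLite in place of the embedded engine described in the talk): telemetry is appended to an in-memory table on the vehicle, and an agent or operator queries it with plain SQL instead of waiting for a firmware release:

```python
import sqlite3

# In-memory database standing in for the on-vehicle engine.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE telemetry (
        ts        INTEGER,  -- seconds since start of shift
        battery   REAL,     -- state of charge, percent
        regen_kwh REAL      -- energy recovered since previous sample
    )
""")
samples = [(0, 96.0, 0.0), (60, 94.5, 0.4), (120, 91.0, 0.1), (180, 90.2, 0.9)]
db.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", samples)

# An agent "queries the fleet" for battery health and regeneration.
low_battery, total_regen = db.execute(
    "SELECT MIN(battery), SUM(regen_kwh) FROM telemetry WHERE ts >= 60"
).fetchone()
```

The table name and columns here are invented for the example; the point is the interface, where reading fleet state or updating model weights looks like ordinary database operations rather than an embedded-software release cycle.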
Erik Zeitler is a co-founder of Stream Analyze and a database systems engineer focused on real-time analytics on constrained edge devices. He holds a Ph.D. in Computer Science from Uppsala University, with research contributions to large-scale data stream processing. Before founding Stream Analyze, he led data infrastructure architecture at Klarna, enabling real-time risk and fraud systems at scale. Erik’s work spans academia and industry, with deployments ranging from cloud platforms to heavy-duty industrial vehicles.


At Jeppesen, we develop world-leading optimization solutions for the aviation industry, solving NP-hard problems like crew scheduling using methods such as Linear Programming and Mixed-Integer Programming.
In this talk, we will share insights from a recent exploration where our AI enablement and optimization teams collaborated to evaluate the potential of combining Evolutionary Algorithms with the creative power of Generative AI in our problem space. During this exploration, we pushed the boundaries of traditional optimization by enabling AI to mutate algorithms and code using biologically inspired algorithms. This fusion of "old meets new" not only sparked innovative solutions but also demonstrated the value of thinking outside the box when tackling optimization challenges and gave a taste of what the future could hold.
Jacob is a 27-year-old data scientist at Jeppesen ForeFlight, where he works as part of the AI Enablement Team. The team's mission is to elevate the organization's overall AI maturity, focusing mainly on helping developer teams overcome both immediate and long-term challenges.
As a pilot in his free time, he is passionate about blending his love for aviation with his burning interest in AI. Every day, he strives to explore how we can leverage predictive technologies to make smarter, more informed decisions in the aviation industry.


While LLMs and deep learning are all the rage nowadays, many problems still make some of these models infeasible. In cases such as network filtering, real-time security monitoring, etc., where one might need to make a decision within a few nanoseconds, there is no time to copy data to a userspace process, let alone to a GPU. This talk will outline work on building ML models to run in Linux kernel space, in particular, the eBPF virtual machine and the restrictions imposed on running models there. We will use a fairly naive dataset from Kaggle for malicious traffic detection and showcase how a model can be trained, compiled, and deployed in a real-life kernel without restarting the machine.
eBPF in and of itself poses a set of interesting constraints, including no floating-point arithmetic, loops the in-kernel verifier must be able to prove terminate, and a very limited stack.
What you get in return is a formally verified program (and model) that is guaranteed to be safe with respect to, e.g., out-of-bounds reads, has predictable runtime complexity, and, more importantly, can be deployed on any Linux machine with a fairly small set of capabilities, without having to modify the running kernel.
The talk is aimed at ML engineers and developers, with a little bit of SecOps, where I will outline the task and dataset with some flashbacks to caveats in traditional feature engineering, in particular for (network) packet inspection. There might be lessons on how NOT to build datasets and on the importance of domain expertise in both dataset creation and feature engineering. Additionally, Jesper will give an introduction to eBPF and kernel development, and share lessons learned from building code (and models) that is portable across eBPF while still maintaining backwards compatibility with the original embedded-C targets.
In some sense, this is an ode to classical ML and to how traditional feature engineering still has a place, even if it's a small one.
This work is not sponsored by any company; it is open-source and in the public domain.
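To give a flavour of those constraints, here is a hypothetical Python sketch (not the speaker's toolchain, and with invented feature names): because eBPF forbids floating point, a trained model's thresholds must be converted to fixed-point integers before any kernel-side code is generated:

```python
SCALE = 1 << 16  # 16.16 fixed-point representation

def to_fixed(x):
    """Convert a float threshold to a scaled integer."""
    return int(round(x * SCALE))

# Hypothetical trained stumps: (packet feature, float threshold, vote).
float_model = [("pkt_len", 128.5, 1), ("ttl", 30.0, -1), ("payload_entropy", 6.2, 1)]
fixed_model = [(f, to_fixed(t), v) for f, t, v in float_model]

def classify(features):
    """Integer-only inference, mirroring what the kernel program must do."""
    score = 0
    for name, threshold, vote in fixed_model:
        if features[name] * SCALE > threshold:  # compare in fixed point
            score += vote
    return 1 if score > 0 else 0  # 1 = flag packet as malicious

verdict = classify({"pkt_len": 1500, "ttl": 64, "payload_entropy": 7})
```

The same comparisons, unrolled with bounded iteration, are what the verifier can accept; the Python above only models the arithmetic, not the eBPF bytecode itself.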
Jesper is a freelancing Tech Lead and Senior Machine Learning Engineer with a career spanning AI/ML, software development, architecture, and research. His journey in machine learning goes back more than 20 years, from developing an embedded computer-vision system to, more recently, internet-scale infrastructure, where roughly 30% of internet traffic flows through his models. Over the years, his work has covered everything from natural-language processing to reinforcement learning (and everything in between). He is also an active contributor to many open-source projects, including the Linux kernel.


We are building AI systems everywhere—from customer service to clinical decision support—yet we do not fully trust even the chatbots. And we are wise not to trust them. The truth is that when we talk about "human-in-the-loop", we are giving our AI an F, and we are losing out on most of the automation and scalability. So if we can't trust a simple chatbot, how can we design enterprise-wide autonomous AI?
This talk dives into the root cause of those trust issues: the gap between probabilistic prediction and fact-based understanding, and why modern AI needs more than just bigger models doing the RAG time shake to become reliable.
I will dig into the concept of World Models: internal representations of reality. Every intelligent system—whether a worm, a human, or an AI—must build a small world inside itself. This internal model is what allows it to predict rather than just react. It’s how the brain, the worm, and the algorithm navigate and find their way in a messy world.
Through examples from various industries (including modelling global supply chains with millions of graph relationships), we’ll explore how grounding AI in structured knowledge dramatically improves its ability to reason, stay consistent, and avoid hallucination.
Attendees will learn how knowledge graphs, ontologies, and hybrid neuro-symbolic architectures create AI systems that can explain their answers, keep track of facts, and align with organisational logic. Finally, we’ll look at practical patterns for bringing reliability into real-world AI setups—from digital twins and decision support to enterprise agents that can be trusted with actual work.
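A toy sketch of the grounding idea (ours, for illustration only): instead of trusting a model's free-text answer, the system checks candidate facts against an explicit triple store and derives multi-hop answers from it:

```python
# A miniature knowledge graph of (subject, predicate, object) triples.
graph = {
    ("supplier_a", "ships_to", "plant_1"),
    ("plant_1", "produces", "component_x"),
    ("component_x", "used_in", "product_z"),
}

def is_grounded(fact):
    """Accept a claim only if it is backed by the graph."""
    return fact in graph

def upstream_of(product):
    """Multi-hop traversal: which suppliers feed into a product?"""
    used_in = {s for s, p, o in graph if p == "used_in" and o == product}
    produced_at = {s for s, p, o in graph if p == "produces" and o in used_in}
    return {s for s, p, o in graph if p == "ships_to" and o in produced_at}

# A hallucinated claim is rejected; a derivable answer is traceable.
hallucination = is_grounded(("supplier_a", "ships_to", "plant_9"))
suppliers = upstream_of("product_z")
```

Real deployments use ontologies and graph databases rather than Python sets, but the principle is the same: every answer can be traced back to stored facts, which is what lets the system explain itself and stay consistent.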
If your chatbot also has “trust issues,” this talk will help you understand why and what it takes to fix the relationship.
Johan Müllern-Aspegren is Emerging Tech Lead at Capgemini’s Applied Innovation Exchange Nordics. With a passion for innovation and shaking things up, he co-founded Imagine Scandinavia’s largest bootcamp for innovation, founded the Careful AI Test Lab in the City of Helsingborg, and established RIOT Labs at Capgemini to churn out bleeding-edge prototypes. Now part of the AI Futures Lab, he works hands-on with tomorrow’s AI capabilities while helping organisations build AI that is reliable, understandable, and genuinely useful. A sought-after mentor and speaker, he connects industry, academia, and startups to turn emerging technology into practical impact.


How do you guarantee that the source code produced by an AI has high quality? And how do you measure it?
In times when everyone can build advanced software applications, discussions tend to focus on how to do this as efficiently as possible. Should you use integrations like MCP or CLI, orchestrations like subagents or agent protocols, or a prompting strategy like Spec Driven Development? Benchmarks like SWE-Bench measure how effective AI agents are at finding workable solutions, but very few seem to be talking about how to produce high-quality source code that not only runs optimally in terms of performance and memory usage, but is also well-documented and beautifully structured.
In this talk, Johan Sanneblad presents his production-proven process for creating complex applications with very high source code quality, applications that have achieved hundreds of thousands of downloads. You will learn how to create source code with AI agents that can serve as a software foundation for decades, instead of just focusing on how to produce code as quickly as possible. This process is already being used by some of the largest companies in Sweden, migrating old applications to modern tech stacks and creating new services in record time.
Johan Sanneblad, PhD, has worked with software development for large global organizations like Apple, EA, Google, Lego, Microsoft, Sony and Yamaha. Before starting TokenTek, Johan was Director of Human-centered AI at RISE. One of his most recent applications, Notebook Navigator for Obsidian, was the #1 most downloaded new plugin for the text editor Obsidian in 2025.


OpenEuroLLM is Europe’s answer to closed-source AI. With a massive strategic allocation of EuroHPC’s total capacity, including access to the upcoming JUPITER supercomputer, we are building the next generation of open European foundation models. In this talk, we unpack the reality of R&D for this continental operation. We will detail AI Sweden’s full-stack leadership: from defining scaling laws and custom tokenizers to curating data pipelines and driving the critical post-training phase. Supported by Vinnova, national and European partners, and integrated with other European projects such as TrustLLM and DeployAI, this project ensures digital sovereignty and green computing while retaining top-tier AI talent here in Sweden and across Europe.
Jonas Lind (PhD) is the Head of Research for the NLU team at AI Sweden. With over 25 years of experience in language technology, his career spans academia, AI for intelligence applications, speech technology, and forensic voice biometrics. Before leading NLU initiatives for OpenEuroLLM, Jonas worked extensively in national security for governmental agencies and as a founder of his own private company. He now oversees the post-training and refinement of large-scale European foundation models, ensuring they meet rigorous standards for performance, safety, and digital sovereignty.


Machine learning systems often fail in the most frustrating way possible: nothing crashes, training looks fine, and yet real-world performance is poor. Debugging these silent failures is one of the hardest and most underestimated challenges in applied ML. Most practitioners learn through painful experience, slowly building a personal “bag of tricks” from past mistakes and relying on intuition to guess what went wrong.
In this talk, we explore how this intuition-driven approach emerges, why it sometimes works, and why it breaks down as ML systems grow more complex. We unpack how availability bias shapes where engineers look first, what they ignore, and why so much debugging effort is spent in the wrong parts of the pipeline.
Finally, the talk introduces a structured approach to ML debugging. It starts from visible symptoms and uses small, targeted experiments to guide the next steps. This transforms debugging from guesswork into a systematic reduction of the search space, giving attendees a framework they can apply directly in their own ML projects.
Juliano is a Staff Engineer at Modulai, where he provides technical mentorship to machine learning engineers and develops ML systems across diverse industries using a range of ML techniques. He also founded Juliano Labs, an education business focused on high-quality workshops and courses on ML topics and structured problem-solving. Juliano brings a pragmatic and experiment-driven approach to ML debugging, shaped by 10 years of building and debugging ML pipelines across academia and industry.


No data should go unnoticed in the company. When we stumbled upon old, forgotten audio files gathering digital dust, our eyes lit up—these seemingly idle recordings turned out to be a rich source of insights, just waiting to be analyzed. We quickly mapped out the plan: download the files, transcribe them, extract the useful details—and just like that, a whole beautiful project was ready to roll!
It may be hard to imagine all the little pitfalls, twists, and bumps that come with an initiative like this. How do you measure conversation quality when all you have are mono-channel audio files? How do you evaluate the AI system itself? How can prompts be updated safely without introducing risk? And how do you make the solution fully compliant? We faced these challenges, and we’re excited to share our journey. The session will cover how we used AI to ensure quality and compliance in the contact center and the simple but effective solutions we applied.
Justyna Krok is a Data Lead at B2 Impact S.A. She began her journey in data and modelling by earning both a bachelor’s and a master’s degree in Informatics and Econometrics from AGH University of Krakow. Since then, she has worked in data science and machine learning roles, designing and delivering data-driven and AI-based solutions to address challenges in the debt management domain. To further develop her expertise, she recently started a PhD focused on multi-agent AI systems.


Implementing conversational BI in the private sector is hard; doing it for a public sector school administration is a different beast entirely. In December 2025, we kicked off a pilot with Göteborgs Stad. By the time we stand on this stage, we will have 3 months of real-world data, user feedback, and likely a few scars to show for it.
This talk moves beyond the hype of “chatting with data” to explore the practical reality of deploying AI in a municipality. We will share the unpolished truth of our journey: navigating strict compliance, managing expectations for a diverse, non-technical user base, and the technical realities of bridging modern AI and municipal data. A transparent look at what happened when we tried to bring AI to the classroom administration.
Karl Alfredsson is the Head of Development and Transformation at the City of Gothenburg’s Department of Education and a PhD candidate at University West. His work sits at the intersection of artificial intelligence, digital transformation, and modern public governance. Through real-world experimentation and action research, he explores how AI and data-driven systems are reshaping responsibility, trust, and learning in public organizations. With a background spanning education, leadership, and creative technology, Karl brings a forward-looking, human-centered perspective on how the public sector can evolve in the era of intelligent systems.


In many real-world vision systems, performance is limited less by model architecture than by the cost and quality of the data. This talk introduces a practical approach for "min-maxing" your data: maximizing expressibility while minimizing sample count through information-based subset selection. Whether choosing calibration frames or deciding which images deserve annotation, selecting the most representative samples can lead to faster results, lower labeling effort, and more accurate models, especially when building few-shot or data-efficient pipelines.
Taigatech is a Gothenburg-based startup that is making sawmills around the world more efficient with AI. Deployments are fast-paced, and AI models must reach and maintain high quality quickly. In this context, smart data selection is not an academic luxury but a necessity for quick deliveries and maintainable neural networks.
Drawing on Taigatech's work in computer vision, I'll show how these techniques improve reliability and efficiency in production systems where data collection is expensive and conditions are harsh. Attendees will learn simple, deployable strategies for using their data more efficiently and building CV systems that perform better with less.
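To make the idea of information-based subset selection concrete, here is a minimal sketch of one classic heuristic: greedy farthest-point (k-center) selection over embedding vectors, which repeatedly picks the sample least represented by the subset chosen so far. The toy coordinates and the `greedy_kcenter` helper are illustrative assumptions, not Taigatech's actual method.

```python
from math import dist

def greedy_kcenter(points, k):
    """Greedy k-center selection: repeatedly add the point farthest
    from the already-selected subset (a classic 2-approximation)."""
    selected = [0]                      # seed with an arbitrary first point
    nearest = [dist(p, points[0]) for p in points]
    for _ in range(k - 1):
        nxt = max(range(len(points)), key=nearest.__getitem__)
        selected.append(nxt)
        for i, p in enumerate(points):  # keep distance to nearest selected
            d = dist(p, points[nxt])
            if d < nearest[i]:
                nearest[i] = d
    return selected

# Toy example: pick 3 representative points out of 6 "embeddings".
# Near-duplicates (indices 0/1, 2/3, 4/5) are covered by a single pick each.
points = [(0, 0), (0.1, 0), (10, 0), (10, 0.1), (5, 8), (5, 8.1)]
print(greedy_kcenter(points, 3))  # → [0, 3, 5]
```

In practice the same loop would run over image embeddings from a pretrained encoder, so each annotated or calibration frame adds as much new information as possible.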
Mattias first encountered machine learning and computer vision in his bachelor's thesis while studying Engineering Physics at Chalmers. Fascinated by the subject, he continued by studying Complex Adaptive Systems, specializing in machine learning.
Since then, he has worked in computer vision throughout his career, including serving as Taigatech's first hire and helping design multiple vision engines from the ground up. These products are used worldwide, improving yields and sustainability.
Beyond his professional work, he pursues several personal projects in machine learning, with a particular interest in reinforcement learning.


Unlabeled data is abundant in industry, but ground truth is scarce. Trafikverket captures hundreds of terabytes of high-resolution imagery annually to monitor Sweden’s railway network. The sheer volume of this data renders manual labeling—and therefore traditional supervised learning—infeasible.
This talk explores how we are overcoming the "labeling bottleneck" by deploying a self-supervised foundation model trained on this massive, unlabeled archive. We will bridge the gap between theory and practice, starting with a comparison of Self-Supervised Learning (SSL) paradigms before demonstrating the practical application of SSL in a large-scale industrial setting.
Attendees will learn how Trafikverket leverages SSL to learn robust visual representations, enabling the fine-tuning of models for critical downstream tasks—such as detecting rail cracks and identifying key infrastructure assets—with minimal labeled data.
Mladen Gibanica is a Senior Data Scientist at eghed, working primarily with Trafikverket to automate railway maintenance using computer vision. He was previously at Volvo Cars, initially as an industrial PhD student, and later as a data scientist working on projects from different domains.
Mladen holds a PhD in Applied Mechanics and has co-founded Ingenjörsarbete För Klimatet, a non-profit organisation conducting engineering work for a sustainable civilisation.


AI has transformed software development at unprecedented speed. AI-assisted development is now evolving into fully agentic systems that can orchestrate themselves and work independently. While autonomy is the goal, the missing link is grounding AI systems in high-fidelity, domain-specific context. Successful adoption, therefore, depends on leveraging MCP servers and specialized skills to anchor AI in the enterprise’s real-world environment.
This talk explores the technical and business foundations required to securely scale agentic capabilities. We will highlight the importance of clarifying intent through Specification-Driven Development, grounding AI models in domain context for reliable performance, and establishing a practical governance model to ensure the safe, compliant, and responsible integration of MCP servers.
Nasser Mohammadiha is an AI/ML expert specializing in the large-scale adoption of generative AI and enhancing R&D efficiency. With extensive experience spanning academia and industry, he is dedicated to fostering industry–academia collaboration and strengthening the Gothenburg AI ecosystem. Recently, his work has focused on high-impact R&D use cases, AI governance and risk mitigation, and building the internal communities necessary to scale generative AI across Ericsson.


Artificial Intelligence has become increasingly impactful in the life sciences, pushing scientific boundaries, as exemplified by the recent success of AlphaFold2. In this talk, I will provide an overview of how AI has impacted drug design in the last few years, where we are now, and what progress we can reasonably expect in the coming years. The presentation will focus on deep learning-based molecular de novo design; however, aspects of using multi-agent systems and the integration of AI with automation will also be covered.
Ola Engkvist, PhD, is Executive Director and Head of Molecular AI within Discovery Sciences at AstraZeneca R&D, where he leads the development and application of machine learning and artificial intelligence to accelerate drug design. He has published over 180 peer-reviewed scientific articles and is an ELLIS fellow at the European Laboratory for Learning and Intelligent Systems and a 2025 Clarivate Highly Cited Researcher. He holds an adjunct professorship in machine learning and AI for drug design at Chalmers University of Technology, serves as a Trustee of the Cambridge Crystallographic Data Centre, and is recognized for his work in pharmaceutical innovation.


Implementing conversational BI in the private sector is hard; doing it for a public sector school administration is a different beast entirely. In December 2025, we kicked off a pilot with Göteborgs Stad. By the time we stand on this stage, we will have 3 months of real-world data, user feedback, and likely a few scars to show for it.
This talk moves beyond the hype of “chatting with data” to explore the practical reality of deploying AI in a municipality. We will share the unpolished truth of our journey: navigating strict compliance, managing expectations for a diverse, non-technical user base, and the technical realities of bridging modern AI and municipal data. A transparent look at what happened when we tried to bring AI to the classroom administration.
P-A Gustafsson is the CEO and co-founder of Inblick.ai, an AI platform that turns organizational data into real-time, decision-ready insights. Today, his work focuses on applying AI in public-sector and educational contexts, helping school administrations and municipalities make better, faster decisions within strict governance and compliance frameworks. He began his career helping shape the classic video game Worms at Team17, joined DICE as one of its first designers, and later founded and exited the game studio Solidicon. Through his agency Future Memories, he has delivered 70+ projects for organizations including STC, Axfood and Mölndals Stad. P-A is also co-founder of Nestic.ai and an early investor and board member of health-tech scaleup Weekly Revolt.


A few years ago, building a machine learning system meant training your own model from scratch. Today, most teams start with an API call to a proprietary model, and that’s usually the right choice. But as products mature, the equation changes. Costs and control become limiting factors, fine-tuning becomes relevant, and open-source models are often considered.
In this talk, I’ll share what I’m seeing from working with AI startups and scaleups like Lovable, Suno, Reducto, and Cognition: when sticking with hosted APIs makes sense, when fine-tuning or switching to open source is the smarter move, and why some companies are still training their own models. We’ll look at the changing role of model training, and why it’s becoming rarer, but more important, than ever.
Rebecka Storm has a background in machine learning and has held data leadership roles at iZettle and Tink. After co-founding data orchestration startup Twirl, which was acquired by Modal, she now works on AI infrastructure, including serverless GPUs for model training and inference and sandboxes for executing AI-generated code. In 2018, Rebecka co-founded Women in Data Science Sweden, an organization that promotes inclusivity through conferences, mentorship programs, a speaker database, and other initiatives to inspire and support women working in data.


As the second-hand fashion industry grows rapidly, companies face increasing challenges in managing large product inventories—from identifying fashion items accurately to setting competitive prices across multiple resale channels. Bencha addresses these challenges with an automated pricing engine, real-time product identification, and a recommerce hub that guides clients on where, how, and at what price to sell their items. Under the hood, our systems process millions of unstructured product documents every day to power these capabilities.
For large-scale agentic and AI workflows, cost and service-level objectives (SLOs) such as latency and throughput quickly become bottlenecks when relying on large general-purpose models or proprietary APIs. At the same time, there is often headroom to improve accuracy and output quality for narrow, well-defined tasks. As model capabilities advance, one powerful pattern is to leverage large models for supervision while training much smaller specialized models that satisfy strict SLOs—with far lower variance, latency, and resource footprint—and still preserve quality or even outperform larger systems in domain-specific scenarios.
This talk focuses on how resource footprint, latency, throughput, and reliability constraints shape architectural and modeling choices in the context of open-weight AI. Demonstrated through a real-world production case study using Vision Language Models, it will detail Bencha’s systematic methodology for scoping an MVP and iterating on data curation, fine-tuning, and evaluation strategies to reflect production behavior rather than benchmark scores. Attendees will gain practical strategies for extracting maximum value at scale—especially when defaulting to costly proprietary APIs is not an option.
Robert Cedergren is Founding Head of AI at Bencha, where he architects the company’s AI platform. He is a hands-on ML/AI engineer with a strong foundation in Computer Vision, NLP, and Generative AI, and has built and scaled production machine learning systems at multiple startups—spanning complex training workflows to engineering cloud-native runtimes that deliver both low latency and high throughput. Robert holds an MSc in Machine Learning from KTH and thrives in high-ownership roles shipping production-grade, end-to-end AI systems.


AI workloads increasingly demand the highest levels of performance and security—requirements that virtualized clouds struggle to meet. This talk introduces Bare Metal: physically isolated, purpose-built infrastructure granting direct access to raw compute power without layers of virtualization. We'll dissect this single-tenant architecture from A to Z, focusing on true data sovereignty through ownership and control of the entire supply chain end-to-end: data centers, GPUs, and networks. We'll also explore confidential compute and how to eliminate the leakage risks and "noisy neighbor" unpredictability common in shared environments. As a proof-of-concept, we'll showcase AI Sweden's "Svea" project, which leverages this sovereign infrastructure for public sector needs.
Co-founder and CEO of Airon, a Swedish technology company pioneering AI factories that transform Sweden's abundant electricity into high-value computing power. Robert launched the first facility in 2020, well before the AI boom, securing early partnerships with Nvidia and positioning Airon ahead of most European competitors. Airon maintains true sovereignty over the entire stack end to end: designing, building, owning and operating its data centers, owning the GPUs, and controlling the networking infrastructure.


In this talk, The Good AI Lab and Doctors Without Borders share how they are building a long-term collaboration where AI is taught and developed openly, accountably, and in service of humanitarian work. They will outline how AI enablement, strategy, and applied research projects can fit into a single roadmap, and what they have learned so far about making AI useful in high-stakes, low-resource settings without losing sight of dignity, context, and constraints.
Sofia Giannotti works in project design and innovation at Médecins Sans Frontières (MSF) Italy, where she develops cross-departmental initiatives to strengthen organizational learning, stakeholder engagement, and social impact. She develops participatory formats for volunteers and youth and supports the integration of digital tools and data-informed approaches into MSF’s strategies.
With a background in Politics, Philosophy and Economics and a Master’s in International Humanitarian Action (NOHA), Sofia combines human-centered design and analytical thinking to foster innovation within complex humanitarian systems. She is passionate about building inclusive processes that enable organizations to adapt and create meaningful change.


In this presentation, Sofia Tapani explores the transformative journey of Statisticians and Data Scientists in the age of AI and Automation. Drawing from her own evolution—from a hands-on coder to a strategic leader in drug development—Sofia illustrates how Statisticians are no longer just technical experts but central drivers of innovation, quality, and decision-making in modern medicine.
This session will inspire attendees to rethink their role—not just as analysts, but as strategic catalysts in the AI-powered transformation of healthcare.
Sofia Tapani (PhD) is Executive Director of Statistics at AstraZeneca, leading strategic initiatives that advance the role of quantitative science in drug development. With a background in mathematics and biostatistics and a commitment to modernizing evidence generation, she champions the integration of advanced analytics and AI to improve decision‑making across clinical programs. Sofia is recognized for her contributions to industry thought leadership on the evolving role of statisticians in an AI‑driven future. She is passionate about fostering collaboration between statisticians, data scientists, and cross‑functional partners to accelerate impactful innovation.


In the depths of a Boliden mine, 700 meters underground, traditional cloud-first AI on heavy-duty machinery falls short of delivering results. In collaboration with Volvo Group and academic partners, we validated a novel approach: turning the trucks themselves into intelligent and interactively queryable computational databases. By embedding a tiny combined main-memory database and computation engine directly onto heavy-duty mining vehicles, we transformed the fleet into a distributed system where data streams are analyzed and queried at the source in real-time.
This talk shares the architectural lessons learned from deploying this "database-on-wheels" model to monitor critical metrics like battery health, energy regeneration, and driving patterns in a connectivity-constrained environment.
Beyond the immediate deployment, this architecture offers a fundamental shift in how we build industrial AI. We will explore how exposing physical assets via a familiar SQL-like interface paves the way for the next generation of Agentic Workflows. Instead of dealing with rigid firmware cycles, autonomous agents can simply “query” the fleet for insights or update model weights as easily as updating a row in a database table. We will discuss the trade-offs of edge-native computational query processing and how this approach decouples rapid AI innovation from the slower engineering cycles of heavy machinery.
Stefan Månsby is the CEO of Stream Analyze, an Edge AI company enabling intelligent decision-making on distributed devices. A serial co-founder and team builder, he co-founded Basefarm, growing it from a startup to a leading European managed services provider and has built innovation units, consulting practices, and AI teams throughout his 25+ year career. His experience spans CTO, CIO, and senior advisory roles at Orange Business and Basefarm, where he led projects in industrial IoT, fintech, and cybersecurity across three continents. And he still writes code.


This case concerns the development of an end-to-end geospatial pipeline that integrates curated spatial attributes with Husqvarna’s internal data to produce a coherent view of golf course environments. Developed in collaboration with Knowit Solutions, the pipeline establishes the technical foundation for geospatial analytics that drive business insights. Structured spatial indexing and robust metadata contribute to an annotated dataset that supports computer vision workflows in validating patterns and enriching geospatial analytics.
Husqvarna is further advancing its data-driven approach by using geospatial analytics to generate deeper insights into green spaces and understand regional variation. This evolving technical foundation is strengthening Husqvarna’s ability to combine visual and spatial information for more precise green space evaluation and identification, enabling more focused and informed decision-making across the organization.
Vivek Chaurasia is a full‑stack data scientist with a PhD in Computational Physics and extensive experience delivering AI solutions that drive measurable business value. Specializing in machine learning, computer vision, and large‑scale data engineering, Vivek has led end‑to‑end development of forecasting, segmentation, and prescriptive analytics systems used across global operations. Vivek builds scalable ML pipelines, strengthens data quality and governance, and translates complex analytics into actionable insights for commercial and technical stakeholders. With a strong track record of accelerating AI adoption, mentoring teammates, and shaping strategic data initiatives, Vivek consistently turns advanced analytics into tangible operational and financial impact.
Tickets are available now! Be an early bird until December 25 to get the best price. Secure your tickets by March 19, 2026, to avoid becoming a late bird! Stay tuned for updates, and be ready to join the excitement!
As always, we offer significant discounts for students! We refer all other participants to the General Admission tickets. If you have any questions about your purchase, please email us using the contact information at the bottom of the page.
Partners with a silver package or above and speakers will receive a discount code for additional tickets, applicable to regular general admission tickets. Restrictions apply.
All our prices are subject to 25% VAT, as we sell through Meetx.
On April 21, 2026, the day after the conference, we continue with a curated set of in-depth workshops at Svenska Mässan. We selected this year’s workshops to ensure relevance, quality, and practical value. Each session runs for two hours or more and is designed for focused, small-group interaction.
The workshops are intended for practitioners, leaders, and specialists who want to move from conference insights to concrete application. Unlike conference talks, these sessions allow time to:
Participation is limited to ensure meaningful discussion. Workshops vary in prerequisites. Some focus on leadership, governance, and strategy. Others require hands-on technical experience in software development, data science, or machine learning. Please review the target audience and prerequisites for each workshop before registering.
Each workshop requires a separate ticket and can be combined with the conference according to your interests. Light fika and coffee are served during the sessions.
Knowit · 300 SEK · April 21, 2026, 08:00
This workshop is about how organizations can lead and govern AI to drive business value while managing risk and regulatory responsibility. It focuses on AI leadership and strategy, using global AI management principles and ISO/IEC 42001 as a reference model. Participants will gain a clear, non-technical understanding of what an AI management system consists of and why it matters.
The workshop combines short presentations with interactive discussions and practical reflection. Participants will work with their own organizational context, explore leadership responsibilities, and discuss priorities and next steps for governing AI in a structured but pragmatic way.
From this workshop, you will gain a clear understanding of how AI should be led and governed at a strategic and leadership level. You will learn how leadership decisions, structures, and responsibilities shape successful and responsible use of AI across the organization. The workshop provides practical insight into priorities, roles, and next steps for leading AI—not only for executives, but for everyone involved in guiding AI initiatives. You will also benefit from shared learning and discussion with peers facing similar leadership challenges.
This workshop is especially relevant for executive teams and boards, AI and digitalization leaders, CIOs, CDOs, and CTOs, as well as risk, compliance, and security functions. It also targets business leaders responsible for AI-driven products or services who want to lead AI strategically while balancing growth, responsibility, and trust.
The workshop is hosted by Knowit, a leading Nordic consultancy working at the intersection of business, technology, and sustainability. Knowit supports organizations across the private and public sectors with digital transformation, AI strategy, and responsible use of technology. The company works closely with executive teams to turn advanced technologies into measurable business value, while ensuring trust, compliance, and long-term competitiveness in a rapidly changing regulatory landscape.
The workshop is hosted by Torbjörn Lindgren, Director of AI at Knowit Insight, and Gregor Tidholm, AI business development expert. Both work closely with leadership teams on AI strategy, governance, and value creation. They are also appointed national trainers and contributors to the development of the Swedish national training on ISO/IEC 42001 – AI Management Systems. Their combined experience bridges executive leadership, business innovation, and responsible AI governance in practice.


Callista Enterprise · 300 SEK · April 21, 2026, 08:00
Move beyond basic chatbots and unlock the potential of autonomous agents in your software. This hands-on workshop is tailored for developers ready to build systems that can reason, plan, and execute complex tasks. Learn how to architect, build, and evaluate practical “agentic” applications that go beyond simple text generation to perform real work. We will also describe concepts like human-in-the-loop and transparency.
In this workshop, you’ll learn these concepts hands-on through an applied programming session in TypeScript, using the Mastra agentic framework on an example application we have prepared for the participants.
Software developers interested in agentic applications; intermediate general programming skills are required.
Bring your own computer. You will need to install Node.js, Git and an IDE of your choice. Go to https://github.com/callistaenterprise/gaia-2026-agentic-workshop.git for code download, installation, and further instructions.
Callista offers expertise in architecture, frontend, and backend development. We are constantly seeking novel and innovative solutions within system development and select technologies and methodologies that demonstrate practicality and will benefit our clients. At Callista, we believe in sharing our knowledge and experience with our customers and colleagues in the industry through our assignments, meetups, blogs, and conferences.
Senior Software Engineer designing and building high-quality software since 2000 for customers like AstraZeneca, Volvo Cars, Volvo AB, Wireless Car, Telia, Qmatic and SpeedLedger. Has been learning about Machine Learning since 2017 and has held several talks on ML and AI at Callista’s developer conference CADEC. Currently supporting one of our customers in industrializing ML and AI. Has been part of the GAIA Conference Committee since 2019, chairing the Program Committee for the 2023 and 2024 conferences. Also tutored the workshop “Building LLM applications” at GAIA 2025.
Software Engineer designing and building robust and business-critical software solutions since 2016 for customers like Persomics, Volvo Cars, and Volvo AB. Began exploring machine learning in 2017, including its use in analyzing and processing large-scale image datasets in medical imaging. Presented on ML and AI at Callista’s developer conference, and most recently hosted a technical workshop on LLM applications at GAIA 2025.
Software Engineer specialized in frontend development, building applications using frameworks like ReactJS and React Native for daily living. Also has experience in native app development for both iOS and Android. Presented the talk Great Fun With Tiny ML at Callista’s developer conference 2023, and was involved in the workshop “Building LLM applications” at GAIA 2025.
Full-stack software engineer with a long history of building high-quality applications. He also has a background as an entrepreneur and Java teacher. In recent years, he has worked actively with language models and agentic programming. Presented the talk “Beyond chatbots - how to build next generation AI assistants” at Callista Developer Conference, CADEC 2026.


Smartr · 300 SEK · April 21, 2026, 10:30
This interactive workshop shares our proven methods for taking customers from AI ideas to functioning, implemented solutions. You'll gain concrete insights into how we tackle common challenges and ensure successful AI projects, with business value as the cornerstone throughout the process.
You will learn how to create better conditions for your PoC to become reality, how the process for AI initiatives differs from traditional software, and how to build confidence in realising the potential of AI in your business.
Target audience
This workshop is for those wanting to push AI initiatives in their organisation. It’s especially relevant for those in a decision-making role—business leaders, managers at team and company level, and anyone else with such a mandate.
Smartr is a specialist agency focused on helping forward-thinking organisations with every step of their AI journey—from strategy and first ideas to real, measurable results from development and implementations.
Louise Duker is the Chief Commercial Officer at Smartr, with 7+ years of experience in innovation and AI.
Fredrik Ring is a data and AI enabler at Smartr, with 7+ years of experience as a consultant conceptualising, building, and realising the potential of data and AI solutions.


Chalmers/GU, SystemWeaver, TReqs, and AI Sweden · 300 SEK · April 21, 2026, 10:30
For safety-critical and regulated products, there is an assurance challenge: how can AI support engineering work while keeping outputs auditable, reproducible, accountable, and aligned with the governed system baseline?
Modern systems engineering depends on consistent traceability across requirements, architecture, variants, and verification artefacts. At the same time, large language models (LLMs) enable powerful natural-language interaction and automation, but introduce well-known risks: non-deterministic outputs, hallucinations, weak provenance, and difficulty maintaining configuration control.
We will work hands-on with a small, anonymised system model. One representative (but widely applicable) example is an end-to-end chain such as hazards → safety goals → requirements → components → tests, enriched with related attributes. Typical questions—directly tied to the assurance challenge above—concern change impact, safety coverage, and gap identification.
In groups, participants will first answer these questions by manually navigating the model. They will then design how an AI assistant should answer the same questions, treating the system model as the ground truth and requiring the assistant to cite exact model elements (IDs/names) in its responses. A short optional demo lets you try a simple notebook; coding support will be provided.
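As a toy illustration of treating the system model as ground truth, the sketch below encodes a hazards → safety goals → requirements → components → tests chain as a small graph and answers coverage and gap questions by citing exact element IDs. All IDs (`SG-1`, `COMP-B`, …) and the `downstream` helper are invented for illustration and are not part of the workshop material.

```python
# Hypothetical traceability graph used as structured AI context.
# A real model (e.g. exported from SystemWeaver) would replace this dict.
TRACE = {
    "HAZ-1": {"type": "hazard", "links": ["SG-1"]},
    "SG-1": {"type": "safety_goal", "links": ["REQ-10", "REQ-11"]},
    "REQ-10": {"type": "requirement", "links": ["COMP-A"]},
    "REQ-11": {"type": "requirement", "links": ["COMP-B"]},
    "COMP-A": {"type": "component", "links": ["TEST-100"]},
    "COMP-B": {"type": "component", "links": []},  # gap: no downstream test
    "TEST-100": {"type": "test", "links": []},
}

def downstream(model, start, wanted_type):
    """Follow links from `start` and collect element IDs of `wanted_type`,
    so an assistant can cite exact model elements in its answer."""
    found, stack, seen = [], [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if model[node]["type"] == wanted_type:
            found.append(node)
        stack.extend(model[node]["links"])
    return sorted(found)

# "Which tests cover safety goal SG-1?" — answer grounded in model IDs
print(downstream(TRACE, "SG-1", "test"))  # → ['TEST-100']

# Gap identification: components under SG-1 with no downstream test
comps = downstream(TRACE, "SG-1", "component")
gaps = [c for c in comps if not downstream(TRACE, c, "test")]
print(gaps)  # → ['COMP-B']
```

The same traversal results, serialized with their IDs, could be handed to an LLM as verifiable context, keeping answers anchored to the governed baseline rather than to the model's imagination.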
Our aim is to support you in gaining knowledge of concrete patterns you can apply in your own environment: how to turn traceability links into a structured AI context, how to ask natural-language questions while keeping answers verifiable, and how to plan for accountability in joint AI/human coding activities. Based on this, you will define one AI-assisted workflow with your group, keeping in mind the clear boundaries: what the AI does, and what the engineer must review.
Additionally, you will get an opportunity to bounce your ideas off experts from industry, academia, and an innovation hub in the same workshop.
This workshop is for anyone who works with complex products and wants to explore how AI can support systems engineering in a trustworthy way, without losing control or traceability. Typical participants include engineers, systems and safety engineers, architects, product owners, project leaders, development managers, researchers, and AI/ML engineers. You do not need to be a SystemWeaver user or an AI expert—basic familiarity with requirements, components, tests, or modern AI assistants is enough. We will also adapt our approach to keep it interesting for experts.
The workshop is jointly hosted by Eric (Chalmers), Shahid and Jonas (SystemWeaver), Filip and Philipp (TReqs Technologies AB), and Mats (AI Sweden). Eric is a professor in software engineering at Chalmers University of Technology and the University of Gothenburg, focusing on requirements and traceability management in DevOps and AI-enabled systems. He will provide overall moderation and framing.
Shahid and Jonas represent SystemWeaver, working on practical LLM use cases for systems engineering and PLM. They bring a strong background in safety-critical development and traceability-driven engineering to the workshop, and focus on making AI assistance useful, explainable, and safe to adopt. They will drive the main technical part of the workshop.
Filip and Philipp represent TReqs Technologies, a new startup that brings requirements and traceability management to software repositories, to facilitate continuous compliance and accountable AI-enabled software development. They will provide a technical demo.
Mats is the director of AI Labs at AI Sweden. At AI Sweden, some 180 partners across all aspects of the Swedish ecosystem collaborate to accelerate the use of AI. This collaboration ranges from research and innovation, through adoption activities, to the development of talent and leaders. Mats will complement the workshop with deep knowledge and a broad overview of the Swedish and international AI landscape.
Chalmers University of Technology is a leading university in Sweden with a vision to become a world-leading technical university. The University of Gothenburg is one of the largest higher education institutions in Sweden, taking responsibility for societal development and a sustainable world. The Department of Computer Science and Engineering is shared between Chalmers and Gothenburg University. It is engaged in research and education across the full spectrum of computer science, computer engineering, AI, cybersecurity, software engineering, and interaction design, from foundations to applications.
SystemWeaver is a Swedish software company providing a graph-based platform for system engineering and product lifecycle management. It is used in industries where traceability matters—such as automotive, aerospace, and industrial systems—to manage requirements, architecture, variants, verification, and safety artefacts. In this workshop, we keep the approach tool-agnostic: the key idea is that if your system model is already a well-linked graph, it is ready to be “fed” to an AI assistant for trustworthy answers.
TReqs Technologies is a Swedish startup that provides capabilities for managing traceability in software repositories to support continuous compliance.
AI Sweden is the Swedish national centre for applied artificial intelligence. Its mission is to accelerate the use of AI for the benefit of our society, our competitiveness, and for everyone living in Sweden.





Göteborgsregionen
300 SEK
April 21, 2026 13:30
This workshop explores how AI, particularly AI agents and agent-based simulations, could be used to imagine and shape the future of welfare and public sector systems. We are interested in new ways of working, where AI is used not only as a tool or assistant, but as a means to simulate, visualize, and reason about complex organizational and social dynamics.
We will briefly present how we have so far explored agent-based simulations within the public sector, focusing on their potential to help leadership teams and public administrations reflect on the consequences of decisions, policies, and organizational structures.
By bringing together participants with diverse backgrounds, the workshop aims to explore how agent-based simulations could create value in areas such as education and social services, and what is required—technically, organizationally, and ethically—to make this possible. Through speculative design, we will think together about potential futures, including ideas such as digital twins of organizations used to explore possible futures rather than predict them.
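To make "a digital twin of an organisation used to explore possible futures" a little more concrete, here is a deliberately toy sketch (every number and rule in it is invented, not drawn from the workshop's actual simulations): staff agents accumulate workload, overloaded agents may leave, and a single policy knob produces alternative futures to discuss rather than predict.

```python
import random

# Toy agent-based sketch of an organisational "what if" (all numbers
# and rules invented): staff agents accumulate workload, overloaded
# agents may leave, and a hiring-policy knob changes the simulated future.

def simulate(hires_per_step, steps=20, seed=42):
    rng = random.Random(seed)
    staff = [0.0] * 10          # each agent's current workload backlog
    demand = 12.0               # incoming work per step, shared equally
    for _ in range(steps):
        staff = [w + demand / max(len(staff), 1) for w in staff]
        # Overloaded agents leave with some probability (attrition),
        # and everyone works off 10% of their backlog each step.
        staff = [w * 0.9 for w in staff if not (w > 3 and rng.random() < 0.2)]
        staff += [0.0] * hires_per_step   # policy knob: hiring rate
    return len(staff)           # headcount at the end of the horizon

# Two policy scenarios side by side, as material for discussion
# rather than prediction.
print("no hiring:", simulate(hires_per_step=0))
print("steady hiring:", simulate(hires_per_step=1))
```

Even at this scale, the point of the exercise is visible: the simulation does not tell you what will happen, it gives a leadership team a shared object to argue about, vary, and challenge.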
Participants will gain insights, perspectives, and shared visions of what the future of welfare and public services could look like when explored through AI, agents, and simulations. The workshop offers a space to think beyond current constraints and explore alternative ways of organizing, delivering, and governing welfare systems.
Through speculative design and collective reflection, participants will develop a deeper understanding of how agent-based simulations can be used to explore questions related to efficiency, cost-effectiveness, service quality, and time use, not as guaranteed outcomes, but as dimensions that can be examined, challenged, and discussed in simulated futures.
Participants will also gain practical inspiration for how such approaches could support learning, dialogue, and decision-making in complex organizations, as well as a richer vocabulary for discussing AI’s role in public sector transformation.
The workshop values diverse perspectives and is intentionally designed for a broad, mixed audience. We welcome participants from across roles, disciplines, and levels of experience, including developers, data scientists, designers, product owners, researchers, policymakers, public-sector leaders, strategists, and practitioners working in or around welfare and public services.
What matters is curiosity about how complex socio-technical systems work and an interest in exploring how AI-driven simulations and speculative design could shape the future of public-sector decision-making and welfare systems.
The Gothenburg Region Innovation Arena is a collaborative initiative where municipalities jointly explore AI-driven solutions to address key challenges facing the region’s public sector and welfare systems. A central focus is the supply of competence within welfare services. The work addresses long-term societal challenges, including an ageing population with increasing needs for care and support, the importance of enabling more young people to complete their education, and the growing difficulties of recruiting and retaining qualified staff, in a context of rising competition for talent across both the public and private sectors.
Johan and Henrik work at the Gothenburg Region (GR), where they focus on exploring how AI will shape Swedish welfare. Henrik has a background as an art and media teacher, with extensive experience in creativity, innovation, and digitalization, and now focuses on emerging technologies and foresight. Johan, with a background in EdTech, combines technology, pedagogy, and design to maximize learning. Today, he spends his time designing and developing AI solutions for the welfare sector.


Scaleout Systems
300 SEK
April 21, 2026 13:30
The rapid adoption of Large Language Models (LLMs) has transformed a wide range of industries, including healthcare, finance, education, software engineering, and public services. These models enable advanced capabilities such as natural language understanding, automated content generation, and intelligent decision support, making them a central component of modern data-driven systems. As organizations increasingly seek to fine-tune LLMs for domain-specific tasks, they face growing challenges related to computational cost, communication efficiency, and the management of large, distributed datasets.
In many real-world settings, the data required to adapt LLMs is decentralized, sensitive, and governed by regulatory, operational, or trust-related constraints, making centralized training infeasible. This has driven interest in scalable and communication-efficient training paradigms that respect data locality. Federated Learning (FL) offers a compelling solution by enabling collaborative model training without sharing raw data. However, applying FL to large-scale models such as LLMs introduces additional challenges, particularly related to training stability, convergence behavior, and heterogeneous client dynamics under strict communication and resource constraints.
This workshop addresses the above-mentioned challenges by combining Federated Learning with Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA, along with quantization strategies that significantly reduce model size and computation. We demonstrate how PEFT reduces the number of trainable parameters exchanged during federation, while quantization further lowers memory and communication costs, together enabling cross-site LLM fine-tuning on devices that previously lacked the capacity for such workloads.
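A back-of-the-envelope calculation shows why this combination matters for federation (the layer sizes are illustrative, not the workshop's actual setup): with LoRA, each frozen weight matrix W of shape (d_out, d_in) gains two small trainable factors A (r × d_in) and B (d_out × r), so each federated round exchanges only r·(d_in + d_out) parameters per matrix instead of d_out·d_in.

```python
# Parameters exchanged per federated round: full fine-tuning vs. LoRA
# adapters (toy layer sizes, chosen for illustration).

def full_params(d_out, d_in):
    """Trainable parameters when the whole weight matrix is updated."""
    return d_out * d_in

def lora_params(d_out, d_in, r):
    """Trainable parameters with LoRA: W stays frozen, only the
    low-rank factors A (r x d_in) and B (d_out x r) are trained."""
    return r * (d_in + d_out)

d = 4096    # hidden size of a typical transformer layer (illustrative)
r = 8       # LoRA rank

full = full_params(d, d)      # every entry of W
lora = lora_params(d, d, r)   # only the adapter factors
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

Quantization compounds this saving: sending the same adapters as 8-bit values instead of 32-bit floats cuts the communication payload by roughly another factor of four.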
The workshop uses the Scaleout AI Platform, a federated learning framework built for real-world, large-scale deployments. The platform supports heterogeneous compute environments, communication-efficient orchestration, and flexible deployment across on-premise, cloud, and edge infrastructures.
Participants will gain hands-on experience orchestrating distributed LLM fine-tuning using the Scaleout platform, applying PEFT and quantization to meet practical deployment constraints. By the end, attendees will have concrete skills and design insights for addressing the emerging challenges at the intersection of FL and LLMs. We will also share lessons and results from previous and ongoing projects across sectors including healthcare, finance, and defense.
The overall goal of this workshop is to bridge cutting-edge research and real-world deployment challenges in federated learning (FL), with a focus on LLM fine-tuning, parameter-efficient techniques, and quantization.
The workshop will feature a hands-on, 90-minute live session, providing a practical and engaging experience.
The presenters have extensive experience in conducting demos and workshops using Scaleout’s AI Platform. Scaleout has developed an FL platform for testing and trialing industrial use cases. The platform will be used during the workshop, and all participants will receive a free account. This will allow them to explore the platform during and after the workshop, empowering them to implement and test their strategies for real-world applications.
Introductory-level understanding of neural networks.
This workshop is designed for a diverse audience interested in exploring cutting-edge advancements in federated learning and its practical applications. It is ideal for:
PhD Students/Researchers/ML Engineers: Engage in hands-on learning, designed for PhD students, researchers, MLOps professionals, data engineers, and machine learning practitioners seeking to expand their skill set in decentralized AI. (Focus Area: Part 1 and Part 2)
ML/LLM Experts: Gain insights into PEFT and quantization techniques for LLMs in federated learning environments, covering both technical implementations and mathematical foundations. (Focus Area: Part 1 and Part 2)
Technology Experts: Explore the technical depth of the platform and its potential to address real-world scalability concerns for LLM use cases. (Focus Area: Part 1 and Part 2)
Business Leaders: Gain high-level insights into how federated learning can drive innovation while preserving data privacy and security. (Focus Area: Part 1)
Product Owners: Understand the opportunities and challenges of integrating federated learning into your product roadmap. (Focus Area: Part 1)
Whether you are a decision-maker exploring privacy-preserving AI solutions, an academic or industry researcher, or a technical professional interested in the practical aspects of federated learning and LLMs, this workshop offers valuable knowledge and actionable insights tailored to your needs.
Salman Toor: Associate Professor in Scientific Computing at Uppsala University and the co-founder and CTO of Scaleout Systems. He is an expert in distributed infrastructures, applied machine learning, and cybersecurity. Toor is one of the lead architects of the FEDn framework and heads the research and development initiatives at the company.
Jonas Frankemölle: Machine Learning Engineer at Scaleout Systems, where he helps organizations leverage federated learning to overcome challenges in data privacy and data accessibility. His work focuses on real-world applications of computer vision and large language models.


Partner with the 2026 GAIA Conference and connect with our vibrant AI community in Gothenburg and beyond. Our partnership tiers allow you to choose the one that best suits your needs. As a partner, you will connect with engaged AI professionals, showcase your offerings, and establish your leadership in the field.
For 2026, we revisit the Congress Hall at Svenska Mässan and expand with a third track. We expect to continue growing the conference, as all our previous conferences have sold out. We anticipate 1,200 attendees on April 20, 2026. We also offer our partners more ways to stand out beyond the traditional booth.
The GAIA Conference relies on partners like you to become a success. So, what are you waiting for? Partner with the 2026 GAIA Conference today to connect with our amazing AI community!
Read more about the partnership tiers in our info letter—GAIA Partnership Information 2026.
We have compiled the most frequently asked questions and their answers below to help you achieve the best possible GAIA experience. Please feel free to contact us if you have any further questions.
No. The Gothenburg Artificial Intelligence Alliance (GAIA) is a Swedish non-profit organisation. However, starting with the 2026 Conference, we will be selling through the company Meetx. Therefore, a VAT of 25% applies to all purchases from us.
The official language of the GAIA Conference is English, and all talks and workshops will be conducted in English, possibly with a healthy portion of the local "Gothenburg English" dialect.
Yes, the whole event is wheelchair accessible through ramps and elevators. Guide dogs are also welcome. Additional information is available on Svenska Mässan's website: https://svenskamassan.se/shared-contents/accessibility/. Please do not hesitate to contact us if you have any questions.
Yes, we record our talks and post them on our YouTube channel, usually within a few weeks after the event. This lets you watch talks across all tracks and revisit the day's key insights afterwards. However, individual speakers and companies may have restrictions on what we publish, so please attend the sessions that interest you the most. We do not record our workshops.
Yes, Gothia Towers includes a hotel so that you can stay in the same building. There are plenty of other options within walking distance. However, we do not offer any hotel packages directly.
You do not need to bring anything beyond your ticket to the conference. Some workshops expect you to bring your laptop, so check the details of the workshops you will attend. Comfortable shoes are always a conference recommendation! A wardrobe will be available during the conference, but please bring only what you need. We are not responsible for any items left there.
Yes, we offer a significant student discount for the main conference on April 20. The discount applies to all students, including PhD students, with a valid Mecenat Student ID.
The 2026 GAIA Conference, scheduled for April 20–21, 2026, comprises two parts: the main conference event on Monday, April 20, and the workshops on Tuesday, April 21. Both are located at Svenska Mässan in Gothenburg, Sweden. You can purchase tickets for the workshops and conference separately, allowing you to attend the sessions that best suit your interests. The GAIA Conference is an in-person event; meeting others in the AI, machine learning, data science, and data engineering community is integral to the experience.
We serve breakfast and lunch; therefore, please refrain from bringing or purchasing any food during the day. Additionally, we offer a sweet treat, known as "fika", in the afternoon and provide coffee and tea throughout the day. After the conference, we host our After Conference mingle, featuring sparkling wine and non-alcoholic options. Everything is included in the ticket price. Please ensure you register any food preferences when purchasing your ticket. We strive to accommodate all needs; however, the kitchen cannot accommodate non-medical dietary choices, such as the Ketogenic or Atkins diet. We place food orders two weeks before the conference, so we cannot guarantee that we will meet dietary requirements for tickets purchased after that time.
Our conference venue, Svenska Mässan, is the largest congress centre in Gothenburg and conveniently located in the Gothia Towers, Mässans Gata 24, next to Korsvägen, Scandinavium, and Liseberg. You can get to Svenska Mässan by tram (Korsvägen), bus (Korsvägen), or train (Liseberg Station). If you go by car, you can park at the Focus parking garage, from which you have an indoor walk to Svenska Mässan. The closest airport is Landvetter Airport (GOT). The 2026 GAIA Conference is located in the Congress Foyer, Congress Hall, room H1+H2, and room G3 on the northwest side of Svenska Mässan. You enter through Entrance 8, located next to Scandinavium, unless you arrive by car and park in the Focus parking garage.
You buy your tickets through this site. The workshops and the conference are separate, and tickets are sold individually for each event. Do not wait: all our previous conferences have sold out, and we increase prices closer to the event.
Are you wondering about the schedule or any of our sessions? Send an email to program@gaia.fish.
If you have questions regarding tickets, please email gaia@meetx.se, and we will be happy to help you.
Want to support the conference by becoming a partner? Reach out to us at partnership@gaia.fish for more information.
Are you wondering about the workshops? Send an email to workshops@gaia.fish.





































