TBD

Location: Svenska Mässan
Time: April 21, 2026 23:39
Duration: 120 min
Price: 300 SEK
Max Seats: 36

Description

For safety-critical and regulated products, there is an assurance challenge: how can AI support engineering work while keeping outputs auditable, reproducible, accountable, and aligned with the governed system baseline? 

Modern systems engineering depends on consistent traceability across requirements, architecture, variants, and verification artefacts. At the same time, large language models (LLMs) enable powerful natural-language interaction and automation, but introduce well-known risks: non-deterministic outputs, hallucinations, weak provenance, and difficulty maintaining configuration control.

We will work hands-on with a small, anonymised system model. One representative (but widely applicable) example is an end-to-end chain such as hazards → safety goals → requirements → components → tests, enriched with related attributes. Typical questions, directly tied to the assurance challenge above, concern change impact, safety coverage, and gap identification.

In groups, participants will first answer these questions by manually navigating the model. They will then design how an AI assistant should answer the same questions, treating the system model as the ground truth and requiring the assistant to cite exact model elements (IDs/names) in its responses. A short optional demo lets you try a simple notebook; coding support will be provided.
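To make the exercise concrete, a traceability chain like the one above can be sketched as a small linked graph. The following is a minimal illustration only; all element IDs, names, and links are invented, and a real model would come from your engineering tool. A change-impact question then becomes a graph traversal whose answer cites exact element IDs:

```python
# Hypothetical, minimal trace model: every ID and name below is invented
# for illustration and does not come from any real system model.
MODEL = {
    "HAZ-001": {"name": "Unintended acceleration", "type": "hazard",
                "links": ["SG-001"]},
    "SG-001":  {"name": "Prevent unintended torque", "type": "safety_goal",
                "links": ["REQ-010"]},
    "REQ-010": {"name": "Torque plausibility check", "type": "requirement",
                "links": ["COMP-3", "TEST-42"]},
    "COMP-3":  {"name": "TorqueMonitor", "type": "component",
                "links": ["TEST-42"]},
    "TEST-42": {"name": "Torque fault injection test", "type": "test",
                "links": []},
}

def downstream(element_id: str) -> list[str]:
    """Collect every element reachable from element_id (change impact)."""
    seen, stack = [], [element_id]
    while stack:
        current = stack.pop()
        for linked in MODEL[current]["links"]:
            if linked not in seen:
                seen.append(linked)
                stack.append(linked)
    return seen

# "If HAZ-001 changes, what is affected?" -- answered with citable IDs.
print(downstream("HAZ-001"))
```

The same traversal, run in reverse over the links, would answer coverage questions such as "which hazards are ultimately verified by TEST-42?".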

What you will gain

Our aim is to equip you with concrete patterns you can apply in your own environment: how to turn traceability links into a structured AI context, how to ask natural-language questions while keeping answers verifiable, and how to plan for accountability in joint AI/human coding activities. Building on this, you will define one AI-assisted workflow with your group, with clear boundaries: what the AI does, and what the engineer must review.
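One such pattern, turning traceability links into a structured AI context, can be sketched as serialising the model into a text block that an assistant must cite from. This is a minimal sketch under invented data (the IDs, names, and prompt wording are illustrative assumptions, not a prescribed format):

```python
# Sketch: serialise trace links into a prompt context an LLM can cite from.
# All element IDs and names are invented for illustration.
MODEL = {
    "REQ-010": {"name": "Torque plausibility check", "type": "requirement",
                "links": ["COMP-3", "TEST-42"]},
    "COMP-3":  {"name": "TorqueMonitor", "type": "component", "links": []},
    "TEST-42": {"name": "Torque fault injection test", "type": "test",
                "links": []},
}

def build_context(model: dict) -> str:
    """Render each element as one line: ID, type, name, outgoing links."""
    lines = ["System model (cite element IDs in every answer):"]
    for eid, elem in model.items():
        links = ", ".join(elem["links"]) or "none"
        lines.append(f"- {eid} [{elem['type']}] {elem['name']} -> links: {links}")
    return "\n".join(lines)

prompt = (build_context(MODEL)
          + "\n\nQuestion: Which tests verify REQ-010? Cite exact IDs.")
print(prompt)
```

Because every line carries a stable element ID, the assistant's answer can be checked against the governed model rather than taken on trust.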

Additionally, you will have the opportunity to bounce your ideas off experts from industry, academia, and an innovation hub during the workshop.

Target audience

This workshop is for anyone who works with complex products and wants to explore how AI can support systems engineering in a trustworthy way, without losing control or traceability. Typical participants include engineers, systems and safety engineers, architects, product owners, project leaders, development managers, researchers, and AI/ML engineers. You do not need to be a SystemWeaver user or an AI expert; basic familiarity with requirements, components, tests, or modern AI assistants is enough. We will also adapt our approach to keep it interesting for experts.

The hosts

The workshop is jointly hosted by Eric (Chalmers), Shahid and Jonas (SystemWeaver), Filip and Philipp (TReqs Technologies AB), and Mats (AI Sweden). Eric is a professor in software engineering at Chalmers University of Technology and the University of Gothenburg, focusing on requirements and traceability management in DevOps and AI-enabled systems. He will provide overall moderation and framing.

Shahid and Jonas represent SystemWeaver, working on practical LLM use cases for systems engineering and PLM. They bring a strong background in safety-critical development and traceability-driven engineering to the workshop, and focus on making AI assistance useful, explainable, and safe to adopt. They will drive the main technical part of the workshop.

Filip and Philipp represent TReqs Technologies, a new startup that brings requirements and traceability management to software repositories, to facilitate continuous compliance and accountable AI-enabled software development. They will provide a technical demo.

Mats is the director of AI Labs at AI Sweden, where some 180 partners from across the Swedish ecosystem collaborate to accelerate the use of AI. This collaboration ranges from research and innovation, via adoption activities, to the development of talent and leaders. Mats will complement the workshop with deep knowledge and a broad overview of the Swedish and international AI landscape.

Chalmers University of Technology is a leading university in Sweden with a vision to become a world-leading technical university. The University of Gothenburg is one of the largest higher education institutions in Sweden, taking responsibility for societal development and a sustainable world. The Department of Computer Science and Engineering is shared between Chalmers and Gothenburg University. It is engaged in research and education across the full spectrum of computer science, computer engineering, AI, cybersecurity, software engineering, and interaction design, from foundations to applications. 

SystemWeaver is a Swedish software company providing a graph-based platform for system engineering and product lifecycle management. It is used in industries where traceability matters—such as automotive, aerospace, and industrial systems—to manage requirements, architecture, variants, verification, and safety artefacts. In this workshop, we keep the approach tool-agnostic: the key idea is that if your system model is already a well-linked graph, it is ready to be “fed” to an AI assistant for trustworthy answers.

TReqs Technologies is a Swedish startup that provides capabilities for managing traceability in software repositories to support continuous compliance.

AI Sweden is the Swedish national centre for applied artificial intelligence. Its mission is to accelerate the use of AI for the benefit of our society, our competitiveness, and for everyone living in Sweden.

Hosts

Eric Knauss
Muhammad Shahid
Jonas Mellin
Filip Lindset
Philipp Svensson
Mats Nordlund