Federated Fine-Tuning of LLMs Using PEFT Techniques

Location: Svenska Mässan
Time: April 21, 2026 23:51
Duration: 150 min
Price: 300 SEK
Max Seats: 36

Description

The rapid adoption of Large Language Models (LLMs) has transformed a wide range of industries, including healthcare, finance, education, software engineering, and public services. These models enable advanced capabilities such as natural language understanding, automated content generation, and intelligent decision support, making them a central component of modern data-driven systems. As organizations increasingly seek to fine-tune LLMs for domain-specific tasks, they face growing challenges related to computational cost, communication efficiency, and the management of large, distributed datasets.

In many real-world settings, the data required to adapt LLMs is decentralized, sensitive, and governed by regulatory, operational, or trust-related constraints, making centralized training infeasible. This has driven interest in scalable and communication-efficient training paradigms that respect data locality. Federated Learning (FL) offers a compelling solution by enabling collaborative model training without sharing raw data. However, applying FL to large-scale models such as LLMs introduces additional challenges, particularly related to training stability, convergence behavior, and heterogeneous client dynamics under strict communication and resource constraints.
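
As a rough illustration of the core idea, the sketch below shows a single FedAvg-style round in which each site trains on data that never leaves it and only parameter updates are exchanged. The function names and the toy linear-regression task are illustrative placeholders, not the API of any particular framework.

    # Minimal FedAvg sketch (illustrative only): raw data stays on each site,
    # and only model parameters travel between clients and the server.
    import numpy as np

    def local_update(global_weights, local_X, local_y, lr=0.1, epochs=5):
        """Client side: a few gradient steps of linear regression on local data."""
        w = global_weights.copy()
        for _ in range(epochs):
            grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
            w -= lr * grad
        return w  # only the weights leave the client, never local_X or local_y

    def fedavg(client_weights, client_sizes):
        """Server side: size-weighted average of the client updates."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # One federated round over two simulated sites.
    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = fedavg(updates, [len(y) for _, y in sites])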

What will the workshop be about?

This workshop addresses the above-mentioned challenges by combining Federated Learning with Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA, along with quantization strategies that significantly reduce model size and computation. We demonstrate how PEFT reduces the number of trainable parameters exchanged during federation, while quantization further lowers memory and communication costs, together enabling cross-site LLM fine-tuning on devices that previously lacked the capacity for such workloads.
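
To make this concrete, the sketch below shows how a base LLM might be loaded in 4-bit precision and wrapped with LoRA adapters so that only a small fraction of parameters is trained and exchanged. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries; the model name and hyperparameters are placeholders, not the workshop's exact configuration.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # Load the base model in 4-bit to reduce memory on resource-constrained clients.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    base_model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.2-1B",  # placeholder model name
        quantization_config=bnb_config,
    )

    # Attach LoRA adapters; only these low-rank matrices are trained and federated.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of the full model

Because only the adapter weights need to be communicated in each federated round, the per-round payload typically shrinks from gigabytes to the order of megabytes, which is what makes cross-site fine-tuning feasible on modest hardware.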

The workshop uses the Scaleout AI Platform, a federated learning framework built for real-world, large-scale deployments. The platform supports heterogeneous compute environments, communication-efficient orchestration, and flexible deployment across on-premise, cloud, and edge infrastructures.

What you will gain

Participants will gain hands-on experience orchestrating distributed LLM fine-tuning using the Scaleout platform, applying PEFT and quantization to meet practical deployment constraints. By the end, attendees will have concrete skills and design insights for addressing the emerging challenges at the intersection of FL and LLMs. We will also share lessons and results from previous and ongoing projects across sectors including healthcare, finance, and defense.

The overall goal of this workshop is to bridge cutting-edge research and real-world deployment challenges in federated learning (FL), with a focus on LLM fine-tuning, parameter-efficient techniques, and quantization. 

Outline

The workshop will feature a hands-on session where participants will:

  1. Collaboratively train a machine learning model.
  2. Get an introduction to LLM fine-tuning and quantization.
  3. Examine the impact of PEFT on the training process.
  4. Learn how to write and test model fine-tuning units (see the sketch after this list).
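
In this context, a fine-tuning unit is essentially a small client-side routine that runs local adapter training and hands back only the adapter weights for aggregation. The sketch below is a framework-agnostic illustration under that assumption, using Hugging Face peft utilities; the function names are hypothetical and not the Scaleout or FEDn API.

    from peft import get_peft_model_state_dict, set_peft_model_state_dict

    def training_round(model, dataloader, optimizer, device="cuda"):
        """Hypothetical client-side fine-tuning unit: one local pass over the
        site's data, returning only the LoRA adapter weights for aggregation."""
        model.train()
        for batch in dataloader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        # Only the small adapter state dict is sent to the aggregator.
        return get_peft_model_state_dict(model)

    def apply_global_adapters(model, aggregated_state):
        """Load the averaged adapter weights back into the local model."""
        set_peft_model_state_dict(model, aggregated_state)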

The live hands-on session runs for 90 minutes, providing a practical and engaging experience.

The presenters have extensive experience conducting demos and workshops using the Scaleout AI Platform. Scaleout has developed an FL platform for testing and trialing industrial use cases. The platform will be used throughout the workshop, and all participants will receive a free account. This will allow them to explore the platform during and after the workshop, empowering them to implement and test their strategies for real-world applications.

Agenda (2 hours 30 minutes)

(Part 1)

(Part 2)

Prerequisites

Introductory-level understanding of neural networks.

Concepts

Software 

Hardware 

Intended audience

This workshop is designed for a diverse audience interested in exploring cutting-edge advancements in federated learning and its practical applications. It is ideal for:

PhD Students/Researchers/ML Engineers: Engage in hands-on learning, designed for PhD students, researchers, MLOps professionals, data engineers, and machine learning practitioners seeking to expand their skill set in decentralized AI. (Focus Area: Part 1 and Part 2)

ML/LLM Experts: Gain insights into PEFT and quantization techniques for LLMs in federated learning environments, covering both technical implementations and mathematical foundations. (Focus Area: Part 1 and Part 2)

Technology Experts: Explore the technical depth of the platform and its potential to address real-world scalability concerns for LLM use cases. (Focus Area: Part 1 and Part 2)

Business Leaders: Gain high-level insights into how federated learning can drive innovation while preserving data privacy and security. (Focus Area: Part 1)

Product Owners: Understand the opportunities and challenges of integrating federated learning into your product roadmap. (Focus Area: Part 1)

Whether you are a decision-maker exploring privacy-preserving AI solutions, an academic or industry researcher, or a technical professional interested in the practical aspects of federated learning and LLMs, this workshop offers valuable knowledge and actionable insights tailored to your needs.

The hosts

Salman Toor: Associate Professor in Scientific Computing at Uppsala University and the co-founder and CTO of Scaleout Systems. He is an expert in distributed infrastructures, applied machine learning and cybersecurity. Toor is one of the lead architects of the FEDn framework and heads the research and development initiatives at the company.

Jonas Frankemölle: Machine Learning Engineer at Scaleout Systems, where he helps organizations leverage federated learning to overcome challenges in data privacy and data accessibility. His work focuses on real-world applications of computer vision and large language models.

Support material
Relevant articles
  1. M. Ekmefjord, A. Ait-Mlouk, S. Alawadi, M. Åkesson, P. Singh, O. Spjuth, S. Toor, A. Hellander. Scalable federated learning with FEDn. https://doi.org/10.1109/CCGrid54584.2022.00065
  2. L. Ju, T. Zhang, S. Toor, A. Hellander. Accelerating Fair Federated Learning: Adaptive Federated Adam. https://doi.org/10.1109/TMLCN.2024.3423648
  3. S. Alawadi, A. Ait-Mlouk, S. Toor, A. Hellander. Toward efficient resource utilization at edge nodes in federated learning. https://doi.org/10.1007/s13748-024-00322-3
  4. L. Ju, M. Andersson, S. Fredriksson, E. Glöckner, A. Hellander, E. Vats, P. Singh. Exploiting the Asymmetric Uncertainty Structure of Pre-trained VLMs on the Unit Hypersphere. https://arxiv.org/abs/2505.11029
