The rapid adoption of Large Language Models (LLMs) has transformed a wide range of industries, including healthcare, finance, education, software engineering, and public services. These models enable advanced capabilities such as natural language understanding, automated content generation, and intelligent decision support, making them a central component of modern data-driven systems. As organizations increasingly seek to fine-tune LLMs for domain-specific tasks, they face growing challenges related to computational cost, communication efficiency, and the management of large, distributed datasets.
In many real-world settings, the data required to adapt LLMs is decentralized, sensitive, and governed by regulatory, operational, or trust-related constraints, making centralized training infeasible. This has driven interest in scalable and communication-efficient training paradigms that respect data locality. Federated Learning (FL) offers a compelling solution by enabling collaborative model training without sharing raw data. However, applying FL to large-scale models such as LLMs introduces additional challenges, particularly related to training stability, convergence behavior, and heterogeneous client dynamics under strict communication and resource constraints.
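To make the federated setting concrete, the sketch below illustrates the core of Federated Averaging (FedAvg), the canonical FL aggregation rule: each client trains on its own data and sends only model parameters to the server, which combines them weighted by local dataset size, so raw data never leaves the client. This is an illustrative, framework-agnostic sketch (the function name `federated_average` is ours), not the aggregation code used on any particular platform.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg: combine locally trained parameters, weighted by local data size.

    client_params: one list of np.ndarray layers per client (only parameters
                   are shared; the clients' raw data stays local).
    client_sizes:  number of local training examples per client.
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Weighted average, layer by layer, across all clients.
    return [
        sum(w * params[i] for w, params in zip(weights, client_params))
        for i in range(len(client_params[0]))
    ]

# Toy usage: two clients, one weight matrix each.
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
global_layers = federated_average(clients, client_sizes=[300, 100])
print(global_layers[0])  # all 0.75: the client with 3x more data dominates
```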
This workshop addresses these challenges by combining Federated Learning with Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA, complemented by quantization strategies that significantly reduce model size and computation. We demonstrate how PEFT reduces the number of trainable parameters exchanged during federation, while quantization further lowers memory and communication costs, together enabling cross-site LLM fine-tuning on devices that previously lacked the capacity for such workloads.
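As a concrete illustration of how these pieces fit together, the sketch below loads a 4-bit-quantized base model and attaches LoRA adapters using the Hugging Face transformers and peft libraries; only the small adapter matrices are trainable, so only they would need to be exchanged in a federated round. The model id and hyperparameters are placeholders rather than the workshop's exact configuration, and 4-bit loading assumes a CUDA-capable GPU with bitsandbytes installed.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantization: load the frozen base model in 4-bit NF4 to cut memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model id
    quantization_config=bnb_config,
)

# PEFT: LoRA adapters are the only trainable (and federated) parameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

With settings like these, the trainable parameters are typically well under 1% of the full model, shrinking per-round communication from gigabytes to a few megabytes.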
The workshop uses the Scaleout AI Platform, a federated learning framework built for real-world, large-scale deployments. The platform supports heterogeneous compute environments, communication-efficient orchestration, and flexible deployment across on-premise, cloud, and edge infrastructures.
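For orientation, the snippet below sketches how a federated training session can be launched against a FEDn network (the open-source framework behind the platform) from its Python APIClient. Method names follow recent FEDn documentation and should be verified against the installed version; the host, port, and file names are placeholders.

```python
from fedn import APIClient

# Connect to the FEDn controller (host and port are placeholders).
client = APIClient(host="localhost", port=8092)

# Upload the compute package (client-side training code) and a seed model.
client.set_active_package("package.tgz", helper="numpyhelper")
client.set_active_model("seed.npz")

# Launch a federated training session; see the FEDn docs for further options.
session = client.start_session(rounds=10)
print(session)
```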
Participants will gain hands-on experience orchestrating distributed LLM fine-tuning using the Scaleout platform, applying PEFT and quantization to meet practical deployment constraints. By the end, attendees will have concrete skills and design insights for addressing the emerging challenges at the intersection of FL and LLMs. We will also share lessons and results from previous and ongoing projects across sectors including healthcare, finance, and defense.
The overall goal of this workshop is to bridge cutting-edge research and real-world deployment challenges in FL, with a focus on LLM fine-tuning, parameter-efficient techniques, and quantization.
The workshop will feature a hands-on session in which participants orchestrate federated fine-tuning of an LLM on the Scaleout platform, applying PEFT and quantization under realistic deployment constraints.
This live session will run for 90 minutes, providing a practical and engaging experience.
The presenters have extensive experience conducting demos and workshops with the Scaleout AI Platform, which Scaleout has developed for testing and trialing industrial FL use cases. The platform will be used throughout the workshop, and all participants will receive a free account, allowing them to explore it during and after the session and to implement and test their own strategies for real-world applications.
(Part 1)
(Part 2)
Introductory-level understanding of neural networks.
Concepts
Software
Hardware
This workshop is designed for a diverse audience interested in exploring cutting-edge advancements in federated learning and its practical applications. It is ideal for:
PhD Students/Researchers/ML Engineers: Engage in hands-on learning designed for PhD students, researchers, MLOps professionals, data engineers, and machine learning practitioners seeking to expand their skill set in decentralized AI. (Focus Area: Part 1 and Part 2)
ML/LLM Experts: Gain insights into PEFT and quantization techniques for LLMs in federated learning environments, covering both technical implementations and mathematical foundations. (Focus Area: Part 1 and Part 2)
Technology Experts: Explore the technical depth of the platform and its potential to address real-world scalability concerns for LLM use cases. (Focus Area: Part 1 and Part 2)
Business Leaders: Gain high-level insights into how federated learning can drive innovation while preserving data privacy and security. (Focus Area: Part 1)
Product Owners: Understand the opportunities and challenges of integrating federated learning into your product roadmap. (Focus Area: Part 1)
Whether you are a decision-maker exploring privacy-preserving AI solutions, an academic or industry researcher, or a technical professional interested in the practical aspects of federated learning and LLMs, this workshop offers valuable knowledge and actionable insights tailored to your needs.
Salman Toor: Associate Professor in Scientific Computing at Uppsala University and co-founder and CTO of Scaleout Systems. He is an expert in distributed infrastructures, applied machine learning, and cybersecurity. Toor is one of the lead architects of the FEDn framework and heads the research and development initiatives at the company.
Jonas Frankemölle: Machine Learning Engineer at Scaleout Systems, where he helps organizations leverage federated learning to overcome challenges in data privacy and data accessibility. His work focuses on real-world applications of computer vision and large language models.