P-Tuning: A Parameter-Efficient Tuning Method to Boost LLM Performance

Abstract

As more LLMs become available, industries need techniques for solving real-world natural language tasks. It has been shown that model prompting methods can elicit good zero- and few-shot performance from LLMs and yield quality results on a variety of downstream natural language processing (NLP) tasks. However, hand-crafted prompts can only take you so far. In this talk, we will demonstrate how to adapt p-tuning, a prompt-learning method, to low-resource language settings. We use an improved version of p-tuning implemented in NVIDIA NeMo that enables continuous multitask learning of virtual prompts. In particular, we focus on adapting our English p-tuning workflow to Swedish.
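For readers unfamiliar with the technique, the sketch below illustrates the core idea behind p-tuning: a small trainable prompt encoder produces continuous "virtual token" embeddings that are prepended to the frozen LLM's input embeddings, so only the encoder's parameters are updated during training. This is a minimal, illustrative PyTorch sketch, not NeMo's actual prompt-learning API; the class name PTuningPromptEncoder and all dimensions here are hypothetical.

import torch
import torch.nn as nn

class PTuningPromptEncoder(nn.Module):
    """Learns continuous 'virtual prompt' embeddings through a small MLP.

    Only this module is trained; the base LLM stays frozen.
    (Illustrative sketch, not the NeMo implementation.)
    """
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # Trainable seed embeddings, one per virtual token position.
        self.embedding = nn.Embedding(num_virtual_tokens, hidden_size)
        # Reparameterization head (p-tuning uses an LSTM or MLP here).
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        ids = torch.arange(self.embedding.num_embeddings)
        prompts = self.mlp(self.embedding(ids))  # (num_tokens, hidden)
        return prompts.unsqueeze(0).expand(batch_size, -1, -1)

# Prepend the virtual prompts to (stand-in) frozen-LM token embeddings.
encoder = PTuningPromptEncoder(num_virtual_tokens=20, hidden_size=768)
token_embeds = torch.randn(4, 32, 768)    # batch of 4, 32 real tokens
virtual_prompts = encoder(batch_size=4)   # (4, 20, 768)
inputs_embeds = torch.cat([virtual_prompts, token_embeds], dim=1)
print(inputs_embeds.shape)                # torch.Size([4, 52, 768])

Because the base model's weights never change, a single frozen LLM can serve many tasks, each with its own learned set of virtual prompts, which is what makes the multitask setting described in the abstract practical.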

Zenodia Charpy

Senior Data Scientist @ NVIDIA

Zenodia Charpy is a senior deep learning data scientist at NVIDIA. Her expertise lies in training and deploying very large language models, with a focus on tackling challenges for non-English and low-resource languages such as Swedish, Danish, and Norwegian. She also explores parameter-efficient tuning techniques to boost LLM performance while grounding the factual correctness of LLM responses.