Statistics and Data Science Seminar: Defenses Against Backdoor Attacks in Federated Learning and Text Classification, by Yao Li

April 9, 2025

4:00 PM - 4:50 PM

Location

636 SEO

Address

Chicago, IL

Yao Li (UNC Chapel Hill): Defenses Against Backdoor Attacks in Federated Learning and Text Classification

As machine learning models become increasingly integrated into distributed and language-intensive applications, ensuring their integrity against backdoor attacks is paramount. This talk presents two defense strategies that target vulnerabilities in federated learning and large language models (LLMs). The first part introduces Trusted Aggregation (TAG), a robust defense mechanism for federated learning that leverages a small validation set to estimate permissible updates and filter out malicious contributions. TAG effectively mitigates backdoor risks while preserving task accuracy, even when up to 40% of client updates are adversarial. The second part addresses the threat of syntactic textual backdoor attacks in LLMs. We propose a novel token substitution strategy that alters semantic content while preserving syntactic structures, enabling the detection of both syntax-based and token-based triggers.
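The validation-based filtering idea behind TAG can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' exact method: the function name `trusted_aggregate`, the additive update model, and the single `tolerance` threshold are all assumptions made for illustration. The server applies each client's update to the current global weights, measures the change in loss on a small trusted validation set, and averages only the updates whose loss change stays within the permissible bound.

```python
import numpy as np

def trusted_aggregate(global_weights, client_updates, val_loss_fn, tolerance=0.05):
    """Hypothetical sketch of validation-set filtering, in the spirit of TAG.

    Updates that worsen validation loss by more than `tolerance` relative
    to the current global model are treated as potentially malicious and
    excluded; the rest are averaged into the new global weights.
    """
    baseline = val_loss_fn(global_weights)
    accepted = []
    for update in client_updates:
        candidate = global_weights + update
        # Permissible update: validation loss does not degrade beyond tolerance.
        if val_loss_fn(candidate) - baseline <= tolerance:
            accepted.append(update)
    if not accepted:
        # No update passed the filter; keep the current global model.
        return global_weights
    return global_weights + np.mean(accepted, axis=0)
```

In this toy setting, a client pushing the model far from the validation optimum is filtered out even when benign clients are in the minority of a round, which is the regime (up to 40% adversarial updates) the talk addresses.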

Contact

Ping-Shou Zhong

Date posted

Mar 31, 2025

Date updated

Mar 31, 2025