Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
March 14, 2024, 4:11 a.m. | Jianwei Li, Sheng Liu, Qi Lei
cs.CR updates on arXiv.org
Abstract: Language models trained via federated learning (FL) demonstrate impressive capabilities on complex tasks while protecting user privacy. Recent studies indicate that leveraging gradient information and prior knowledge can potentially reveal training samples within the FL setting. However, these investigations have overlooked the privacy risks tied to the intrinsic architecture of the models. This paper presents a two-stage privacy attack strategy that targets vulnerabilities in the architecture of contemporary language models, significantly enhancing attack …
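The abstract's premise, that shared gradients can reveal training samples, has a well-known minimal illustration: for a linear classifier with bias trained on a single sample under softmax cross-entropy, the gradient is rank one and the private input can be recovered exactly. The sketch below is illustrative only (it is not the paper's pooler-layer attack, and names like `recover_input` are hypothetical), but it shows why an honest-but-curious FL server holding raw gradients is dangerous:

```python
import numpy as np

# Setup: a single private sample (x, y) and a linear model logits = W @ x + b.
# For softmax cross-entropy, dL/db = p - y and dL/dW = (p - y) x^T, so the
# gradient the client uploads is rank one and leaks x directly.
rng = np.random.default_rng(0)
num_classes, dim = 5, 16

W = rng.normal(size=(num_classes, dim))
x = rng.normal(size=dim)                  # the private training sample
y = np.zeros(num_classes)
y[2] = 1.0                                # one-hot label

# --- Client side: compute the gradient it would send to the server ---
logits = W @ x
p = np.exp(logits - logits.max())
p /= p.sum()                              # softmax probabilities
grad_b = p - y                            # dL/db
grad_W = np.outer(grad_b, x)              # dL/dW = (p - y) x^T

# --- Server side: reconstruct x from the uploaded gradients alone ---
def recover_input(grad_W, grad_b):
    """Divide any row of dL/dW by the matching entry of dL/db to get x."""
    i = np.argmax(np.abs(grad_b))         # pick a well-conditioned row
    return grad_W[i] / grad_b[i]

x_hat = recover_input(grad_W, grad_b)
print(np.allclose(x_hat, x))              # exact recovery of the input
```

Real attacks on deep language models need iterative gradient matching plus priors rather than this closed-form trick, which is why the architectural leakage the paper studies is a separate, additional channel.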