April 2, 2024, 7:12 p.m. | Yu Sun, Gaojian Xiong, Xianxun Yao, Kailang Ma, Jian Cui

cs.CR updates on arXiv.org

arXiv:2401.11748v3 Announce Type: replace
Abstract: Deep gradient inversion attacks pose a serious threat to Federated Learning (FL) by accurately recovering private data from shared gradients. However, state-of-the-art attacks rely on the impractical assumption of access to large amounts of auxiliary data, which violates the basic data-partitioning principle of FL. In this paper, a novel method, Gradient Inversion Attack using Practical Image Prior (GI-PIP), is proposed under a revised threat model. GI-PIP exploits anomaly detection models to capture the underlying distribution from fewer …
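The abstract cuts off before the method's details, but the setting it attacks follows the standard gradient-matching formulation: the attacker optimizes dummy inputs so that the gradients they induce match the gradients a victim client shared. The following minimal PyTorch sketch shows that generic loop, not GI-PIP itself; the model, input shape, learning rate, and the assumption that the true label is already known are all illustrative, and the image-prior term that distinguishes GI-PIP is left as a placeholder comment.

import torch
import torch.nn.functional as F

# Illustrative model; any differentiable classifier works here.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),
)

# Victim side: a private batch (unknown to the attacker) and the
# gradients it shares in FL, which the attacker observes.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
g_true = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# Attacker side: optimize dummy data so its gradients match g_true.
# Assumes the label is known (often recoverable analytically).
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    g_dummy = torch.autograd.grad(
        F.cross_entropy(model(x_dummy), y_true),
        model.parameters(), create_graph=True)
    loss = sum(((gd - gt) ** 2).sum()
               for gd, gt in zip(g_dummy, g_true))
    # GI-PIP would add a prior term here, scored by an anomaly
    # detection model; the truncated abstract does not give its
    # exact form.
    loss.backward()
    opt.step()

Per the abstract, GI-PIP's contribution is exactly this prior: replacing the large auxiliary datasets that state-of-the-art attacks assume with a distribution captured by anomaly detection models from far fewer data.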

