Web: http://arxiv.org/abs/2007.06236

April 29, 2022, 1:20 a.m. | Balázs Pejó, András Tótth, Gergely Biczók

cs.CR updates on arXiv.org arxiv.org

Federated learning algorithms are developed both for efficiency and to ensure the privacy and confidentiality of personal and business data. Although no data is shared explicitly, recent studies have shown that the mechanism can still leak sensitive information. Hence, secure aggregation is used in many real-world scenarios to prevent attribution to specific participants. In this paper, we focus on the quality of individual training datasets and show that such quality information could be inferred and attributed to specific participants …
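The abstract refers to secure aggregation, which lets a server learn only the sum of clients' model updates, not any individual contribution. The snippet below is a minimal sketch of the standard pairwise additive-masking idea, not the paper's own protocol: every pair of clients agrees on a random mask that one adds and the other subtracts, so each masked update looks random on its own while the masks cancel in the sum. All names and parameters here are illustrative.

```python
import random

def secure_aggregate(updates, modulus=2**16):
    """Toy additive-masking secure aggregation over scalar updates.

    For each client pair (i, j), a shared random mask is added to
    client i's update and subtracted from client j's. Individually,
    masked values reveal nothing; summed, the masks cancel exactly.
    """
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            mask = random.randrange(modulus)
            masked[i] = (masked[i] + mask) % modulus
            masked[j] = (masked[j] - mask) % modulus
    # The server only ever sees the masked values; their sum
    # equals the true sum of updates (mod the chosen modulus).
    return sum(masked) % modulus

updates = [3, 7, 5]  # each client's (scalar) model update
assert secure_aggregate(updates) == sum(updates) % 2**16
```

Because the server observes only the aggregate, per-participant attribution is blocked at the protocol level, which is exactly why the paper's inference of per-participant dataset quality despite secure aggregation is noteworthy.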

