April 29, 2022, 1:20 a.m. | Balázs Pejó, András Tótth, Gergely Biczók

cs.CR updates on arXiv.org arxiv.org

Federated learning algorithms are developed both for efficiency and to ensure the privacy and confidentiality of personal and business data. Despite no data being shared explicitly, recent studies have shown that the mechanism can still leak sensitive information. Hence, secure aggregation is used in many real-world scenarios to prevent the attribution of updates to specific participants. In this paper, we focus on the quality of individual training datasets and show that such quality information can be inferred and attributed to specific participants …
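The secure aggregation mentioned above is commonly realized with pairwise masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server learns only the sum of updates, never any individual contribution. The sketch below is an illustrative toy version of that idea (in the spirit of standard masking-based protocols, not the paper's attack); the function names and the use of a shared seed in place of a real key agreement are simplifications for demonstration.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Build cancelling pairwise masks: for each pair (i, j), client i
    adds a random vector and client j subtracts the same vector, so the
    masks sum to zero across all clients."""
    rng = random.Random(seed)  # stands in for a pairwise key agreement
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += m[k]
                masks[j][k] -= m[k]
    return masks

def secure_aggregate(updates):
    """Each client submits update + mask; the server sums the masked
    vectors. The masks cancel, revealing only the aggregate."""
    n, dim = len(updates), len(updates[0])
    masks = pairwise_masks(n, dim)
    masked = [[u[k] + masks[i][k] for k in range(dim)]
              for i, u in enumerate(updates)]
    return [sum(row[k] for row in masked) for k in range(dim)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(secure_aggregate(updates))
```

Because the server only ever sees the masked vectors and their sum, no single client's update is directly attributable; the paper's point is that dataset *quality* can nonetheless be inferred and attributed even under such a scheme.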

