PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning. (arXiv:2205.11584v2 [cs.LG] UPDATED)
Aug. 30, 2022, 1:20 a.m. | Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi
cs.CR updates on arXiv.org
Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between …