May 25, 2022, 1:20 a.m. | Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi

cs.CR updates on arXiv.org

Group fairness ensures that the outcomes of machine learning (ML)-based
decision-making systems are not biased towards a particular group of people
defined by a sensitive attribute such as gender or ethnicity. Achieving group
fairness in Federated Learning (FL) is challenging because mitigating bias
inherently requires the sensitive attribute values of all clients, while
FL is aimed precisely at protecting privacy by not giving access to the
clients' data. As we show in this paper, this conflict between …
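To make the group-fairness notion above concrete, here is a minimal sketch of one commonly used criterion, demographic parity, measured as the gap in positive-prediction rates between two groups defined by a binary sensitive attribute. This is an illustrative example only; the function name and data are hypothetical and this is not the privacy-preserving FL method described in the paper.

```python
# Illustrative sketch: demographic parity difference as one group-fairness metric.
# Names and data below are hypothetical, not taken from the paper.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (e.g., gender)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_group1 = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_group0 - rate_group1)

# Example: a gap of 0 would indicate equal treatment under this metric.
preds = [1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # ~0.33
```

Computing such a metric requires the sensitive attribute values alongside the predictions, which is exactly the information FL clients are meant to keep private, hence the conflict the abstract describes.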

fairness lg privacy
