Nov. 7, 2022, 2:20 a.m. | Faisal Hamman, Jiahao Chen, Sanghamitra Dutta

cs.CR updates on arXiv.org

Existing regulations prohibit model developers from accessing protected
attributes (gender, race, etc.), often forcing fairness assessments on
populations whose protected groups are unknown. In such scenarios,
institutions often adopt a separation between the model developers (who train
models with no access to the protected attributes) and a compliance team (who
may have access to the entire dataset for auditing purposes). However, the
model developers might be allowed to test their models for bias by querying
the compliance team for …
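The separation the abstract describes can be sketched as a simple query interface: the compliance team holds the protected attributes and returns only an aggregate fairness metric to the developer. This is a minimal illustrative sketch, not the paper's method; the class name, the choice of demographic parity as the metric, and the binary group encoding are all assumptions.

```python
import numpy as np

class ComplianceAuditor:
    """Holds the protected attributes on the compliance side.
    Model developers never see them; they only receive aggregate metrics.
    Illustrative sketch -- names and metric choice are assumptions."""

    def __init__(self, protected):
        # protected: binary group labels known only to the compliance team
        self.protected = np.asarray(protected)

    def demographic_parity_gap(self, predictions):
        """Answer a developer's bias query: absolute difference in
        positive-prediction rates between the two protected groups."""
        preds = np.asarray(predictions)
        rate_a = preds[self.protected == 0].mean()
        rate_b = preds[self.protected == 1].mean()
        return abs(rate_a - rate_b)

# Developer side: query the auditor with model outputs only.
auditor = ComplianceAuditor(protected=[0, 0, 1, 1])
gap = auditor.demographic_parity_gap(predictions=[1, 0, 1, 1])
print(gap)  # |0.5 - 1.0| = 0.5
```

The paper's concern, as the abstract suggests, is that even such aggregate query answers may leak information about the protected attributes themselves.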

Tags: attributes, bias, leak, privacy
