Feb. 9, 2022, 2:20 a.m. | Sikha Pentyala, David Melanson, Martine De Cock, Golnoosh Farnadi

cs.CR updates on arXiv.org arxiv.org

Machine learning (ML) has become prominent in applications that directly
affect people's quality of life, including in healthcare, justice, and finance.
ML models have been found to exhibit discrimination based on sensitive
attributes such as gender, race, or disability. Assessing whether an ML model is
free of bias remains challenging, and by definition it must be done with
sensitive user characteristics that are the subject of anti-discrimination and
data protection law. Existing libraries for fairness auditing of ML models …
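As background to the kind of check such fairness-auditing libraries perform, a minimal sketch follows: it computes the demographic parity difference, i.e. the gap in positive-prediction rates between groups defined by a binary sensitive attribute. All names and the toy data are illustrative, not taken from the paper.

```python
# Hedged sketch: demographic parity difference, a common group-fairness
# metric. Function and variable names are illustrative assumptions.

def demographic_parity_difference(predictions, sensitive):
    """Absolute gap in positive-prediction rate between the two groups
    encoded in `sensitive` (0 or 1). `predictions` are binary outputs."""
    rate = {}
    for group in (0, 1):
        members = [p for p, s in zip(predictions, sensitive) if s == group]
        rate[group] = sum(members) / len(members)
    return abs(rate[0] - rate[1])

# Toy example: group 0 gets positive outcomes 75% of the time, group 1 only 25%.
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, sensitive))  # 0.5
```

The paper's point is precisely that computing such a metric requires access to the sensitive attribute column, which is what privacy and data protection law restricts.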

auditing fairness lg library privacy
