May 31, 2023, 1:10 a.m. | Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu

cs.CR updates on arXiv.org

Machine learning models have demonstrated promising performance in many areas. However, concerns that they can be biased against specific demographic groups hinder their adoption in high-stakes applications, so it is essential to ensure fairness in machine learning models. Most previous efforts require direct access to sensitive attributes to mitigate bias. Nonetheless, it is often infeasible to collect users' sensitive attributes at scale, given users' privacy concerns during data collection. Privacy mechanisms such as local differential privacy (LDP) …
