Synthetic faces could solve algorithmic bias and ethical data quandary: IDVerse
Sept. 25, 2023, 11:36 p.m. | Masha Borak
Biometric Update www.biometricupdate.com
For years, many of the AI systems used for facial recognition and identity verification have had a race and gender bias problem. These systems have shown higher misidentification rates for people with darker skin and have sometimes contributed to wrongful arrests. The main reason is the data the AI models are trained on, which disproportionately consists of white and male faces.
To counter this, some companies are enriching their data sets with fake faces. Generative AI can …
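The augmentation idea described above can be sketched as a simple balancing step: given per-group counts in a training set, compute how many synthetic faces each demographic group would need to match the largest group. This is an illustrative sketch, not IDVerse's actual pipeline; the group labels and counts below are hypothetical.

```python
from collections import Counter

def synthetic_quota(labels):
    """Return how many synthetic samples each demographic group needs
    so every group matches the size of the largest one."""
    counts = Counter(labels)
    target = max(counts.values())
    return {group: target - n for group, n in counts.items()}

# Hypothetical group labels for a small, imbalanced face dataset
labels = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
print(synthetic_quota(labels))  # {'A': 0, 'B': 50, 'C': 60}
```

A generative model would then be asked to produce that many synthetic faces per under-represented group before retraining.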
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Security Compliance Strategist
@ Grab | Petaling Jaya, Malaysia
Cloud Security Architect, Lead
@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)