In a military context, bias in AI reflects the decisions of its makers
March 15, 2024, 3:48 p.m. | Joel R. McConvey
Biometric Update www.biometricupdate.com
According to Ingvild Bode, associate professor at the Center for War Studies at the University of Southern Denmark, the artificial intelligence in lethal autonomous weapons systems (LAWS) and other military applications “will likely contain algorithmic biases” that could have “serious consequences.”
“Biases can lead to legal and moral harms as people of a certain age group, gender, or skin tone may be wrongfully assessed to be combatants,” writes Bode in the piece, which is based on her presentation to …
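Bode's point about wrongful combatant assessment is, at its core, a disparity in a classifier's error rates across demographic groups. A minimal, purely illustrative sketch of how such a disparity is measured (the groups and records below are fabricated toy data, not from the article; a false positive here would mean a non-combatant wrongly flagged):

```python
# Toy records: (group, true_label, predicted_label), where 1 = "combatant".
# Data is invented solely to illustrate a per-group false-positive-rate check.
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

def false_positive_rate(rows):
    """Share of true negatives (non-combatants) that the model flagged."""
    negatives = [r for r in rows if r[1] == 0]
    flagged = [r for r in negatives if r[2] == 1]
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 3))
```

On this toy data the rates differ (0.333 vs. 0.667): the model wrongly flags group_b's non-combatants twice as often, which is exactly the kind of harm the quote describes.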
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
IS Security Consultant, Governance - Risk - Compliance (M/F) - Strasbourg
@ Hifield | Strasbourg, France
Lead Security Specialist
@ KBR, Inc. | 8121 Lemmon Ave, Suite 550, Dallas, Texas, USA
SOC / CERT Consultant (M/F)
@ Hifield | Sèvres, France