In a military context, bias in AI reflects the decisions of its makers
March 15, 2024, 3:48 p.m. | Joel R. McConvey
Biometric Update www.biometricupdate.com
According to Ingvild Bode, associate professor at the Center for War Studies at the University of Southern Denmark, the artificial intelligence in lethal autonomous weapons systems (LAWS) and other military applications “will likely contain algorithmic biases” that could have “serious consequences.”
“Biases can lead to legal and moral harms, as people of a certain age group, gender, or skin tone may be wrongly assessed to be combatants,” writes Bode in the piece, which is based on her presentation to …