July 28, 2023, 2:24 p.m. | Bianca Gonzalez

Biometric Update www.biometricupdate.com


University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after gaining access to an account protected by voice biometrics using deepfake AI-generated audio recordings.
The research shows that a hacker can create a deepfake voice from as little as five minutes of the target's recorded speech, which can be taken from public posts on social media. Open-source AI software available on GitHub can generate deepfake audio capable of bypassing voice authentication.
He used the deepfake to expose a weakness in the Amazon …

