Feb. 7, 2024, 2:55 p.m. | Jeffrey Burt

Security Boulevard securityboulevard.com


IBM researchers have discovered a way to use generative AI tools to hijack live audio calls and manipulate what is being said without the speakers' knowledge. The "audio-jacking" technique – which combines large language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities – could be used by bad actors to manipulate conversations for financial gain, Chenta…
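The described attack chain – transcribe an intercepted utterance, let an LLM rewrite it when a trigger appears, then re-synthesize it in the speaker's cloned voice – can be sketched as a simple relay loop. This is a hypothetical illustration only: the stage functions, the account-number trigger, and `ATTACKER_ACCOUNT` are stand-ins, not IBM's actual implementation, and a real attack would call speech-to-text, LLM, and voice-cloning services at each stage.

```python
import re

# Hypothetical account number the attacker wants injected (illustration only).
ATTACKER_ACCOUNT = "9999-8888"

def speech_to_text(audio_chunk: str) -> str:
    # Stand-in: a real system would transcribe captured audio here.
    return audio_chunk

def llm_rewrite(transcript: str) -> str:
    # Stand-in for the LLM step: rewrite only when a trigger
    # (here, a bank-account-like number) appears; otherwise pass through.
    return re.sub(r"\d{4}-\d{4}", ATTACKER_ACCOUNT, transcript)

def text_to_speech(text: str) -> str:
    # Stand-in: a real attack would synthesize the rewritten sentence
    # in the original speaker's cloned voice.
    return text

def relay(audio_chunk: str) -> str:
    """Intercept one utterance; forward it unchanged unless the
    trigger fires, in which case forward a re-synthesized version."""
    transcript = speech_to_text(audio_chunk)
    rewritten = llm_rewrite(transcript)
    if rewritten == transcript:
        return audio_chunk            # benign speech passes through untouched
    return text_to_speech(rewritten)  # victim hears the manipulated audio

print(relay("Please wire the funds to account 1234-5678."))
```

Because only trigger-bearing utterances are rewritten, the rest of the call sounds normal, which is what makes the manipulation hard for either speaker to notice.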


The post IBM Shows How Generative AI Tools Can Hijack Live Calls appeared first on Security Boulevard.

