Feb. 7, 2024, 2:55 p.m. | Jeffrey Burt

Security Boulevard securityboulevard.com

IBM researchers have discovered a way to use generative AI tools to hijack live audio calls and manipulate what is being said without the speakers knowing. The "audio-jacking" technique – which chains large language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities – could be used by bad actors to manipulate conversations for financial gain, Chenta...
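The pipeline the blurb describes can be sketched as a man-in-the-middle loop over call audio: transcribe each utterance, let an LLM decide whether to rewrite it, then re-synthesize it in the speaker's cloned voice. The sketch below is a minimal conceptual illustration only; the `speech_to_text`, `llm_rewrite`, and `text_to_speech_cloned` functions are hypothetical string-based stand-ins for the real models at each stage, not IBM's implementation.

```python
# Conceptual sketch of the audio-jacking man-in-the-middle loop.
# All three stages are hypothetical placeholders operating on strings;
# a real attack would chain actual speech and language models here.

def speech_to_text(audio_chunk: str) -> str:
    # Placeholder: a real pipeline would transcribe the audio chunk.
    return audio_chunk

def llm_rewrite(transcript: str, trigger: str, replacement: str) -> str:
    # Placeholder for the LLM stage: only utterances containing the
    # trigger phrase (e.g. a bank account number) are altered; everything
    # else passes through unchanged, which keeps the swap hard to notice.
    if trigger in transcript:
        return transcript.replace(trigger, replacement)
    return transcript

def text_to_speech_cloned(text: str) -> str:
    # Placeholder: a real pipeline would synthesize the rewritten text
    # in the victim's cloned voice.
    return text

def audio_jack(audio_chunk: str, trigger: str, replacement: str) -> str:
    """Intercept one chunk of call audio and return the (possibly
    manipulated) audio that gets forwarded to the other party."""
    transcript = speech_to_text(audio_chunk)
    rewritten = llm_rewrite(transcript, trigger, replacement)
    return text_to_speech_cloned(rewritten)

# Example: silently redirect a payment by swapping the account number,
# while unrelated speech is passed through untouched.
print(audio_jack("send it to account 1234", "1234", "9999"))
print(audio_jack("how was your weekend?", "1234", "9999"))
```

Because benign utterances pass through verbatim, neither party hears anything unusual; only the sensitive detail is swapped, which is what makes the technique a social-engineering risk rather than an obvious disruption.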

The post IBM Shows How Generative AI Tools Can Hijack Live Calls appeared first on Security Boulevard.

