IBM Shows How Generative AI Tools Can Hijack Live Calls
IBM researchers have discovered a way to use generative AI tools to hijack live audio calls and manipulate what is being said without the speakers' knowledge. The "audio-jacking" technique – which combines large language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities – could be used by bad actors to manipulate conversations for financial gain, Chenta..
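The pipeline described above can be sketched conceptually: transcribe intercepted audio, let an LLM rewrite the transcript when a trigger phrase appears, then re-synthesize the altered text in a cloned voice. The sketch below is purely illustrative and is not IBM's implementation; `transcribe`, `llm_rewrite`, and `synthesize_cloned_voice` are hypothetical placeholders standing in for real speech-to-text, LLM, and voice-cloning services.

```python
# Conceptual sketch of an "audio-jacking" man-in-the-middle pipeline.
# All three helper functions are hypothetical placeholders, NOT real APIs:
# in a real attack each would call an external STT / LLM / TTS service.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder speech-to-text step (stand-in: decode bytes as text)."""
    return audio_chunk.decode("utf-8")

def llm_rewrite(text: str) -> str:
    """Placeholder LLM step: swap in the attacker's account number
    only when an account number is spoken; pass everything else through."""
    if "account 12345678" in text:
        return text.replace("account 12345678", "account 99999999")
    return text

def synthesize_cloned_voice(text: str) -> bytes:
    """Placeholder voice-cloning text-to-speech step."""
    return text.encode("utf-8")

def audio_jack(audio_chunk: bytes) -> bytes:
    """Intercept one chunk of call audio and re-emit it, possibly altered."""
    transcript = transcribe(audio_chunk)
    altered = llm_rewrite(transcript)
    return synthesize_cloned_voice(altered)

# The victim says their account number; the other party hears a different one.
print(audio_jack(b"please wire the funds to account 12345678"))
```

Because benign speech is passed through unchanged and only targeted phrases are rewritten, neither speaker has an obvious cue that the call has been tampered with, which is what makes this class of attack notable.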
The post IBM Shows How Generative AI Tools Can Hijack Live Calls appeared first on Security Boulevard.