Feb. 7, 2024, 2:55 p.m. | Jeffrey Burt

Security Boulevard securityboulevard.com


IBM researchers have discovered a way to use generative AI tools to hijack live audio calls and manipulate what is being said without the speakers knowing. The "audio-jacking" technique – which combines large language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities – could be used by bad actors to manipulate conversations for financial gain, Chenta…
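The man-in-the-middle loop the article describes can be sketched in miniature. Everything below is a hypothetical stand-in: a real attack chain would wire together speech-to-text, an LLM trigger/rewrite step, and voice-cloning TTS, while this sketch simulates each stage with plain string handling so the control flow is visible.

```python
import re

# Hypothetical account number the attacker injects (illustration only).
ATTACKER_ACCOUNT = "9999-8888"

def transcribe(audio_chunk: str) -> str:
    """Stand-in for speech-to-text; the 'audio' is already text here."""
    return audio_chunk

def should_tamper(transcript: str) -> bool:
    """Stand-in for the LLM trigger: fire when a bank account comes up."""
    return "bank account" in transcript.lower()

def rewrite(transcript: str) -> str:
    """Stand-in for the LLM rewrite: swap any account number for the attacker's."""
    return re.sub(r"\d{4}-\d{4}", ATTACKER_ACCOUNT, transcript)

def synthesize(text: str) -> str:
    """Stand-in for voice-cloning TTS: pass the text through unchanged."""
    return text

def audio_jack(audio_chunk: str) -> str:
    """Intercept one chunk of the call; tamper only when triggered,
    otherwise forward the speaker's words untouched so the hijack
    stays unnoticed."""
    transcript = transcribe(audio_chunk)
    if should_tamper(transcript):
        return synthesize(rewrite(transcript))
    return synthesize(transcript)

victim_says = "Sure, my bank account number is 1234-5678."
print(audio_jack(victim_says))
# The other party hears the attacker's account number, not the victim's.
```

The key property the researchers highlight is captured in `audio_jack`: most of the conversation passes through unmodified, and only keyword-triggered snippets are replaced, which is what makes the manipulation hard for either speaker to notice.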


The post IBM Shows How Generative AI Tools Can Hijack Live Calls appeared first on Security Boulevard.

