Jan. 29, 2024, 5:46 p.m. | Black Hat

Black Hat www.youtube.com

We'll show that prompt injections are more than a novelty or nuisance- in fact, a whole new generation of malware and manipulation can now run entirely inside of large language models like ChatGPT. As companies race to integrate them with applications of all kinds we will highlight the need to think thoroughly about the security of these new systems. You'll find out how your personal assistant of the future might be compromised and what consequences could ensue.

By: Sahar Abdelnabi …

ai malware applications can chatgpt companies fact integrate language language models large llms malware manipulation prompt race run

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Corporate Intern - Information Security (Year Round)

@ Associated Bank | US WI Remote

Senior Offensive Security Engineer

@ CoStar Group | US-DC Washington, DC