Jan. 29, 2024, 5:46 p.m. | Black Hat

Black Hat www.youtube.com

We'll show that prompt injections are more than a novelty or nuisance- in fact, a whole new generation of malware and manipulation can now run entirely inside of large language models like ChatGPT. As companies race to integrate them with applications of all kinds we will highlight the need to think thoroughly about the security of these new systems. You'll find out how your personal assistant of the future might be compromised and what consequences could ensue.

By: Sahar Abdelnabi …

ai malware applications can chatgpt companies fact integrate language language models large llms malware manipulation prompt race run

Network Security Administrator

@ Peraton | United States

IT Security Engineer 2

@ Oracle | BENGALURU, KARNATAKA, India

Sr Cybersecurity Forensics Specialist

@ Health Care Service Corporation | Chicago (200 E. Randolph Street)

Security Engineer

@ Apple | Hyderabad, Telangana, India

Cyber GRC & Awareness Lead

@ Origin Energy | Adelaide, SA, AU, 5000

Senior Security Analyst

@ Prenuvo | Vancouver, British Columbia, Canada