June 11, 2024, 5:09 p.m.

Cloud Security Alliance cloudsecurityalliance.org

Originally published by Abnormal Security. Written by Daniel Kelley.

Since the launch of ChatGPT nearly 18 months ago, cybercriminals have been able to leverage generative AI for their attacks. As part of its content policy, OpenAI created restrictions to stop the generation of malicious content. In response, threat actors have created their own generative AI platforms like WormGPT and FraudGPT, and they're also sharing ways to bypass the policies and "jailbreak" ChatGPT. In fact, entire section...

