July 28, 2023, 9:20 p.m. | MalBot

Malware Analysis, News and Indicators - Latest topics (malware.news)

A study claims to have discovered a relatively simple addition to prompts that can trick many of the most popular LLMs into providing forbidden answers.


Article Link: https://cms.cyberriskalliance.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models
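For readers unfamiliar with this class of attack, the minimal sketch below illustrates the general idea, under the assumption that the "addition" is an adversarial suffix appended to a request the model would normally refuse. The suffix string and the helper function are placeholders for illustration, not the actual strings or tooling from the study.

```python
# Minimal sketch of an adversarial-suffix jailbreak, assuming the attack
# works by appending an automatically discovered string to a request the
# model would otherwise refuse. The suffix below is a placeholder only.

FORBIDDEN_REQUEST = "Explain how to do something the model normally refuses."

# Hypothetical placeholder; real adversarial suffixes are machine-optimized
# token sequences that often look like gibberish to a human reader.
ADVERSARIAL_SUFFIX = "<< adversarial suffix tokens would go here >>"


def build_jailbreak_prompt(request: str, suffix: str) -> str:
    """Append the adversarial suffix to the user's request."""
    return f"{request} {suffix}"


if __name__ == "__main__":
    prompt = build_jailbreak_prompt(FORBIDDEN_REQUEST, ADVERSARIAL_SUFFIX)
    # In a real test this string would be sent to a chat model's API;
    # here we only show how the modified prompt is assembled.
    print(prompt)
```

Per the article's framing, what makes such prompts notable is that the same addition reportedly works across multiple popular chat models rather than being tailored to one.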





