all InfoSec news
Researchers find 'universal' jailbreak prompts for multiple AI chat models
July 28, 2023, 8:36 p.m. | Derek B. Johnson
Source: SC Magazine (Strategy feed) | www.scmagazine.com
A study claims to have discovered a relatively simple string that, when appended to a prompt, can trick many of the most popular LLMs into providing forbidden answers.
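To make the pattern the study describes concrete, below is a minimal Python sketch: a fixed "universal" suffix is appended to an otherwise refused request before the combined text is sent to a model. The suffix value and the send_to_model function here are hypothetical placeholders for illustration, not the study's actual string or any real API.

```python
# Minimal sketch of the attack pattern described above: a fixed
# "universal" adversarial suffix is appended to a forbidden request
# before the combined text is sent to a chat model.
# ADVERSARIAL_SUFFIX is a harmless placeholder, NOT the string from the
# study, and send_to_model() is a hypothetical stand-in for a chat API.

ADVERSARIAL_SUFFIX = "<placeholder adversarial suffix>"


def build_jailbreak_prompt(request: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
    """Append the fixed suffix to the user's request."""
    return f"{request} {suffix}"


def send_to_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call."""
    print(f"[model input] {prompt}")
    return "<model response>"


if __name__ == "__main__":
    # The same suffix is reused verbatim across prompts and models,
    # which is what makes such a jailbreak "universal".
    send_to_model(build_jailbreak_prompt("A request the model would normally refuse."))
```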
Jobs in InfoSec / Cybersecurity
Information System Security Officer (ISSO)
@ LinQuest | Boulder, Colorado, United States
Project Manager - Security Engineering
@ MongoDB | New York City
Security Continuous Improvement Program Manager (m/f/d)
@ METRO/MAKRO | Düsseldorf, Germany
Senior JavaScript Security Engineer, Tools
@ MongoDB | New York City
Principal Platform Security Architect
@ Microsoft | Redmond, Washington, United States
Staff Cyber Security Engineer (Emerging Platforms)
@ NBCUniversal | Englewood Cliffs, New Jersey, United States