ChatGPT jailbreak prompts proliferate on hacker forums
April 3, 2024, 12:05 a.m. | MalBot
Source: Malware Analysis, News and Indicators - Latest topics (malware.news)
Tactics include “tricking” the AI into believing it is in “development mode” or roleplaying.
Article Link: ChatGPT jailbreak prompts proliferate on hacker forums | SC Media