ChatGPT jailbreak prompts proliferate on hacker forums
April 2, 2024, 10:43 p.m. | Laura French
SC Magazine feed for Risk Management www.scmagazine.com
Tactics include “tricking” the AI into believing it is in “development mode” or roleplaying.
Tags: ai-benefits-risks, ai/ml, chatgpt, development, forums, generative ai, hacker, jailbreak, mode, phishing, prompts, tactics
More from www.scmagazine.com / SC Magazine feed for Risk Management
Leveling the cybersecurity playing field
1 day, 5 hours ago |
www.scmagazine.com
Report: Cat-phishing of legitimate websites on the rise
1 day, 14 hours ago |
www.scmagazine.com
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
IS Security Consultant, Governance - Risk - Compliance (M/F) - Strasbourg
@ Hifield | Strasbourg, France
Lead Security Specialist
@ KBR, Inc. | 8121 Lemmon Ave, Suite 550, Dallas, Texas, USA
SOC / CERT Consultant (M/F)
@ Hifield | Sèvres, France