all InfoSec news
'Skeleton Key' attack unlocks the worst of AI, says Microsoft
June 28, 2024, 6:38 a.m. | Thomas Claburn
The Register - Security www.theregister.com
Simple jailbreak prompt can bypass safety guardrails on major models
Microsoft on Thursday published details about Skeleton Key – a technique that bypasses the guardrails used by makers of AI models to prevent their generative chatbots from creating harmful content.…
ai models attack bypass can chatbots generative guardrails jailbreak key major makers microsoft prevent prompt safety simple skeleton
More from www.theregister.com / The Register - Security
TeamViewer says Russia broke into its corp IT network
2 days, 3 hours ago |
www.theregister.com
Unlock the future of security
2 days, 7 hours ago |
www.theregister.com
'Skeleton Key' attack unlocks the worst of AI, says Microsoft
2 days, 16 hours ago |
www.theregister.com
US lawmakers wave red flags over Chinese drone dominance
3 days, 9 hours ago |
www.theregister.com
Jobs in InfoSec / Cybersecurity
Technical Product Engineer
@ Palo Alto Networks | Tel Aviv-Yafo, Israel
Azure Cloud Architect
@ Version 1 | Dublin, Ireland
Junior Pen Tester
@ Vertiv | Pune, India
Information Security GRC Director
@ IQ-EQ | Hyderabad, India
Senior Technical Analyst
@ Fidelity International | Gurgaon Office
Security Engineer II
@ Microsoft | Redmond, Washington, United States