March 27, 2024, 10:21 p.m.

GovInfoSecurity.com RSS Syndication www.govinfosecurity.com

Crooks Are Recruiting AI Experts to Jailbreak Existing LLM Guardrails
Cybercrooks are exploring ways to develop custom, malicious large language models after existing tools such as WormGPT failed to meet their demands for advanced intrusion capabilities, security researchers say. Underground forums teem with hackers' discussions about how to bypass the guardrails of existing LLMs.

