April 13, 2023, 4:07 p.m. | Matt Burgess

Security Latest | www.wired.com

Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.

