all InfoSec news
GPT-3 'prompt injection' attack causes bad bot manners
Sept. 19, 2022, 1:37 p.m. | Brandon Vigliarolo
The Register - Security www.theregister.com
Also, EA goes kernel-deep to stop cheaters, PuTTY gets hijacked by North Korea, and more.
In Brief OpenAI's popular natural language model GPT-3 has a problem: It can be tricked into behaving badly by doing little more than telling it to ignore its previous orders.…
More from www.theregister.com / The Register - Security
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Cyber Consultant
@ Frazer-Nash Consultancy | Gloucester, England, United Kingdom
Senior Vulnerability Management Reporting & Analytics Developer
@ Baker Hughes | IN-KA-BANGALORE-NEON BUILDING WEST TOWER
Product Security Architect
@ ChargePoint | Italy