Dec. 6, 2023, 9:10 p.m.

GovInfoSecurity.com RSS Syndication www.govinfosecurity.com

Researchers Automate Tricking LLMs Into Providing Harmful Information
A small group of researchers says it has identified an automated method for jailbreaking OpenAI, Meta and Google large language models, and there is no obvious fix. Like the models it targets, the jailbreak technique itself depends on machine learning: it uses algorithms to search for inputs that force dangerous or undesirable responses. A hypothetical sketch of what such an automated search can look like follows below.
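The article does not spell out the researchers' method, but automated, learning-driven jailbreaks of this kind are commonly described as adversarial-suffix searches: a loop that keeps editing a short suffix appended to a prompt until the model's own loss signal suggests it will produce an affirmative answer. The code below is a minimal, hypothetical sketch of that idea, not the researchers' actual technique; the small open model (gpt2), the placeholder prompt and target string, and the random-substitution search loop are all assumptions made for illustration.

    # Hypothetical sketch of an automated adversarial-suffix search.
    # Assumption: the attack edits a suffix appended to the prompt so the model
    # becomes likely to emit an affirmative continuation. Not the actual method.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # small stand-in; the research targeted much larger LLMs
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    prompt = "<HARMFUL REQUEST PLACEHOLDER>"   # placeholder, not a real request
    target = " Sure, here is how"              # affirmative prefix the search tries to elicit
    suffix_ids = tok(" ! ! ! ! !", return_tensors="pt").input_ids[0]  # suffix being optimized

    def target_loss(suffix_ids: torch.Tensor) -> float:
        """Cross-entropy of the target continuation given prompt + adversarial suffix."""
        prompt_ids = tok(prompt, return_tensors="pt").input_ids[0]
        target_ids = tok(target, return_tensors="pt").input_ids[0]
        input_ids = torch.cat([prompt_ids, suffix_ids, target_ids]).unsqueeze(0)
        labels = input_ids.clone()
        labels[0, : len(prompt_ids) + len(suffix_ids)] = -100  # score only the target span
        with torch.no_grad():
            return model(input_ids, labels=labels).loss.item()

    # Greedy random search: try random single-token substitutions in the suffix
    # and keep any change that makes the target continuation more likely.
    best = target_loss(suffix_ids)
    for _ in range(200):
        cand = suffix_ids.clone()
        pos = torch.randint(len(cand), (1,)).item()
        cand[pos] = torch.randint(tok.vocab_size, (1,)).item()
        loss = target_loss(cand)
        if loss < best:
            best, suffix_ids = loss, cand

    print("suffix:", tok.decode(suffix_ids), "| loss:", round(best, 3))

Published attacks replace the random substitutions with gradient-guided candidate selection, which is what makes the search efficient enough to work against large models.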

