Researchers automated jailbreaking of LLMs with other LLMs
Help Net Security www.helpnetsecurity.com
AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can quickly jailbreak large language models (LLMs) in an automated fashion. “The method, known as the Tree of Attacks with Pruning (TAP), can be used to induce sophisticated models like GPT-4 and Llama-2 to produce hundreds of toxic, harmful, and otherwise unsafe responses to a user query (e.g. ‘how to build a bomb’) in mere minutes,” Robust Intelligence researchers …
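The tree-search-with-pruning idea behind TAP can be illustrated with a minimal sketch. This is not the researchers' implementation: every model call below (`attacker_branch`, `on_topic`, `target_response`, `judge_score`) is a hypothetical stub standing in for an attacker LLM, an evaluator LLM, and the target LLM, and the loop structure is an assumption based only on the article's description of branching and pruning.

```python
# Hedged sketch of a Tree of Attacks with Pruning (TAP)-style loop.
# All model calls are toy deterministic stubs; a real system would drive
# an attacker LLM, a target LLM, and an evaluator/judge LLM instead.

def attacker_branch(prompt, width=3):
    # Stub: a real attacker LLM would propose `width` refined attack prompts.
    return [f"{prompt} [variant {i}]" for i in range(width)]

def on_topic(prompt, goal):
    # Stub evaluator: prune candidates that drifted away from the attack goal.
    return goal in prompt

def target_response(prompt):
    # Stub target LLM.
    return f"response to: {prompt}"

def judge_score(response, goal):
    # Stub judge on a 0-10 scale; a real judge LLM would rate jailbreak success.
    return 10 if "variant 2" in response else 1

def tap(goal, max_depth=3, width=3, success=10):
    frontier = [goal]
    for _ in range(max_depth):
        # Branch: expand every surviving prompt into several refinements.
        candidates = [p for node in frontier for p in attacker_branch(node, width)]
        # Prune phase 1: drop off-topic prompts before querying the target.
        candidates = [p for p in candidates if on_topic(p, goal)]
        scored = [(judge_score(target_response(p), goal), p) for p in candidates]
        best = max(scored)
        if best[0] >= success:
            return best[1]  # a prompt judged to have jailbroken the target
        # Prune phase 2: keep only the highest-scoring prompts for the next depth.
        frontier = [p for s, p in sorted(scored, reverse=True)[:width]]
    return None
```

With the toy stubs above, the search finds the "winning" variant at the first depth; the point is only to show the branch-then-prune-twice shape the article attributes to TAP.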