Nov. 13, 2023, 2:10 a.m. | Nanna Inie, Jonathan Stray, Leon Derczynski

cs.CR updates on arXiv.org arxiv.org

Engaging in the deliberate generation of abnormal outputs from large language
models (LLMs) by attacking them is a novel human activity. This paper presents
a thorough exposition of how and why people perform such attacks. Using a
formal qualitative methodology, we interviewed dozens of practitioners from a
broad range of backgrounds, all contributors to this novel work of attempting
to cause LLMs to fail. We relate and connect this activity between its
practitioners' motivations and goals; the strategies and techniques …

attacks bind demon human language language models large llm llms methodology novel people qualitative red teaming theory

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Security Compliance Strategist

@ Grab | Petaling Jaya, Malaysia

Cloud Security Architect, Lead

@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)