Feb. 13, 2024, 5:10 a.m. | Sumeet Ramesh Motwani Mikhail Baranchuk Martin Strohmeier Vijay Bolina Philip H. S. Torr Lewis Hammond

cs.CR updates on arXiv.org arxiv.org

Recent capability increases in large language models (LLMs) open up applications in which teams of communicating generative AI agents solve joint tasks. This poses privacy and security challenges concerning the unauthorised sharing of information, or other unwanted forms of agent coordination. Modern steganographic techniques could render such dynamics hard to detect. In this paper, we comprehensively formalise the problem of secret collusion in systems of generative AI agents by drawing on relevant concepts from both the AI and security literature. …

agent ai agents applications challenges coordination cs.ai cs.cr detect forms generative generative ai hard information language language models large llms privacy privacy and security secret security security challenges sharing teams techniques unauthorised

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Corporate Intern - Information Security (Year Round)

@ Associated Bank | US WI Remote

Senior Offensive Security Engineer

@ CoStar Group | US-DC Washington, DC