Feb. 13, 2024, 5:10 a.m. | Sumeet Ramesh Motwani Mikhail Baranchuk Martin Strohmeier Vijay Bolina Philip H. S. Torr Lewis Hammond

cs.CR updates on arXiv.org arxiv.org

Recent capability increases in large language models (LLMs) open up applications in which teams of communicating generative AI agents solve joint tasks. This poses privacy and security challenges concerning the unauthorised sharing of information, or other unwanted forms of agent coordination. Modern steganographic techniques could render such dynamics hard to detect. In this paper, we comprehensively formalise the problem of secret collusion in systems of generative AI agents by drawing on relevant concepts from both the AI and security literature. …

agent ai agents applications challenges coordination cs.ai cs.cr detect forms generative generative ai hard information language language models large llms privacy privacy and security secret security security challenges sharing teams techniques unauthorised

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Data & Security Engineer Lead

@ LiquidX | Singapore, Central Singapore, Singapore

IT and Cyber Risk Control Lead

@ GXS Bank | Singapore - OneNorth

Consultant Senior en Gestion de Crise Cyber et Continuité d’Activité H/F

@ Hifield | Sèvres, France

Cyber Security Analyst (Weekend 1st Shift)

@ Fortress Security Risk Management | Cleveland, OH, United States

Senior Manager, Cybersecurity

@ BlueTriton Brands | Stamford, CT, US