Feb. 13, 2024, 5:10 a.m. | Jonathan Evertz Merlin Chlosta Lea Sch\"onherr Thorsten Eisenhofer

cs.CR updates on arXiv.org arxiv.org

Large Language Models (LLMs) are increasingly integrated with external tools. While these integrations can significantly improve the functionality of LLMs, they also create a new attack surface where confidential data may be disclosed between different components. Specifically, malicious tools can exploit vulnerabilities in the LLM itself to manipulate the model and compromise the data of other services, raising the question of how private data can be protected in the context of LLM integrations.
In this work, we provide a systematic …

attack attack surface can components compromise confidential confidentiality cs.cr cs.lg data exploit external integrations language language models large llm llms machine malicious may systems tools vulnerabilities whispers

Security Operations Engineer

@ Nokia | India

Machine Learning DevSecOps Engineer

@ Ford Motor Company | Mexico City, MEX, Mexico

Cybersecurity Defense Analyst 2

@ IDEMIA | Casablanca, MA, 20270

Executive, IT Security

@ CIMB | Cambodia

Cloud Security Architect - Microsoft (m/w/d)

@ Bertelsmann | Gütersloh, NW, DE, 33333

Senior Consultant, Cybersecurity - SOC

@ NielsenIQ | Chennai, India