April 3, 2024, 4 a.m.

Embrace The Red embracethered.com

About a year ago we looked at why developers can't intrinsically trust LLM responses, at the common threats AI chatbots face, and at the ways attackers can exploit them, including to exfiltrate data.
One of those threats is the unfurling of hyperlinks, which is common in chatbots and can lead to data exfiltration. So, let's shine more light on it, including practical guidance on how to mitigate it, using Slack apps as the example.
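As a concrete illustration of the mitigation: when a Slack app posts an LLM-generated response, the documented `unfurl_links` and `unfurl_media` parameters of the `chat.postMessage` Web API method can be set to `false`, so Slack never fetches a URL that a prompt-injected response may have smuggled in. This is a minimal sketch; the helper names and channel/token values are illustrative, not from the original post.

```python
# Sketch: suppress link unfurling when posting LLM output to Slack.
# unfurl_links / unfurl_media are documented chat.postMessage parameters;
# setting them to False stops Slack from fetching attacker-controlled URLs
# embedded in an injected LLM response.
import json
import urllib.request

SLACK_API_URL = "https://slack.com/api/chat.postMessage"


def build_safe_post(channel: str, llm_text: str) -> dict:
    """Build a chat.postMessage payload with unfurling disabled."""
    return {
        "channel": channel,
        "text": llm_text,
        "unfurl_links": False,   # no preview cards for plain links
        "unfurl_media": False,   # no inline media previews either
    }


def post_message(token: str, payload: dict) -> None:
    """Send the payload to Slack (illustrative; performs a real HTTP call)."""
    req = urllib.request.Request(
        SLACK_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )
    urllib.request.urlopen(req)
```

Disabling unfurling does not stop the model from emitting a malicious link, but it removes the automatic server-side fetch that turns a rendered link into a zero-click exfiltration channel.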

