e
April 15, 2024, 3:11 p.m. |

Embrace The Red embracethered.com

Google’s NotebookML is an experimental project that was released last year. It allows users to upload files and analyze them with a large language model (LLM).
However, it is vulnerable to Prompt Injection, meaning that uploaded files can manipulate the chat conversation and control what the user sees in responses.
There is currently no known solution to these kinds of attacks, so users can’t implicitly trust responses from large language model applications when untrusted data is involved.

apps can chat control conversation data data exfiltration exfiltration files google injection language large large language model llm llm apps project prompt prompt injection sees tables upload vulnerable

Sr. Cloud Security Engineer

@ BLOCKCHAINS | USA - Remote

Network Security (SDWAN: Velocloud) Infrastructure Lead

@ Sopra Steria | Noida, Uttar Pradesh, India

Senior Python Engineer, Cloud Security

@ Darktrace | Cambridge

Senior Security Consultant

@ Nokia | United States

Manager, Threat Operations

@ Ivanti | United States, Remote

Lead Cybersecurity Architect - Threat Modeling | AWS Cloud Security

@ JPMorgan Chase & Co. | Columbus, OH, United States