Sept. 16, 2023, 7 a.m. | Embrace The Red (embracethered.com)

What happens if an attacker calls an LLM tool or plugin recursively during an Indirect Prompt Injection? Could this be an issue and drive up costs, or DoS a system?
I tried it with ChatGPT, and it indeed works: the chatbot enters a loop! 😊

However, for ChatGPT users this isn’t really a threat, because:

- It’s subscription-based, so OpenAI would pay the bill.
- There seems to be a call limit of 10 times in a single conversation turn …
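
That observed per-turn call limit is essentially a budget on tool invocations, which is one way an LLM application can stop injected instructions from driving a recursive loop. Below is a minimal sketch of such a guard; the names (ToolCallBudget, dispatch_tool_call, the limit of 10) are illustrative assumptions, not part of ChatGPT's or any plugin API.

```python
# Hypothetical per-turn tool-call budget for an LLM tool/plugin dispatcher.
# All names here are illustrative, not a real API.

class ToolCallBudgetExceeded(Exception):
    """Raised when the model requests more tool calls than the per-turn budget allows."""


class ToolCallBudget:
    def __init__(self, max_calls_per_turn: int = 10):
        self.max_calls_per_turn = max_calls_per_turn
        self.calls_this_turn = 0

    def start_turn(self) -> None:
        # Reset the counter at the start of each user conversation turn.
        self.calls_this_turn = 0

    def charge(self, tool_name: str) -> None:
        # Count every tool invocation, including ones the model triggers
        # recursively in response to injected instructions.
        self.calls_this_turn += 1
        if self.calls_this_turn > self.max_calls_per_turn:
            raise ToolCallBudgetExceeded(
                f"Refusing call to {tool_name!r}: "
                f"budget of {self.max_calls_per_turn} calls per turn exhausted"
            )


def dispatch_tool_call(budget: ToolCallBudget, tool_name: str, handler, *args):
    budget.charge(tool_name)  # stop runaway loops before doing any work
    return handler(*args)


if __name__ == "__main__":
    budget = ToolCallBudget(max_calls_per_turn=10)
    budget.start_turn()
    try:
        # Simulate an injected payload that keeps requesting the same tool.
        for _ in range(50):
            dispatch_tool_call(budget, "fetch_url", lambda: "page contents")
    except ToolCallBudgetExceeded as exc:
        print(exc)  # the 11th call is refused instead of looping indefinitely
```

With a cap like this, an injected page can still waste a handful of calls, but it cannot drive unbounded cost or a denial-of-service loop within a single turn.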

