Feb. 8, 2024, 7:51 p.m. | /u/IncludeSec

cybersecurity www.reddit.com

Hi everyone! We just published part 2 of our series focusing on improving LLM security against prompt injection. In this release, we’re doing a deeper dive into transformers, attention, and how these topics play a role in prompt injection attacks. This post aims to provide more under-the-hood context about why prompt injection attacks are effective, and why they’re so difficult to mitigate.

[Improving LLM Security Against Prompt Injection: AppSec Guidance For Pentesters and Developers - Part 2](https://blog.includesecurity.com/2024/02/improving-llm-security-against-prompt-injection-appsec-guidance-for-pentesters-and-developers-part-2/)

appsec attacks attention cybersecurity developers dive doing guidance injection injection attacks llm play prompt prompt injection prompt injection attacks release role security series topics transformers

Information Assurance Security Specialist (IASS)

@ OBXtek Inc. | United States

Cyber Security Technology Analyst

@ Airbus | Bengaluru (Airbus)

Vice President, Cyber Operations Engineer

@ BlackRock | LO9-London - Drapers Gardens

Cryptography Software Developer

@ Intel | USA - AZ - Chandler

Lead Consultant, Geology

@ WSP | Richmond, VA, United States

BISO Cybersecurity Director

@ ABM Industries | Alpharetta, GA, United States