May 11, 2023, 9:06 p.m. | LiveOverflow

LiveOverflow www.youtube.com

After we explored attacking LLMs, in this video we finally talk about defending against prompt injections. Is it even possible?

Buy my shitty font (advertisement): shop.liveoverflow.com

Watch the complete AI series:
https://www.youtube.com/playlist?list=PLhixgUqwRTjzerY4bJgwpxCLyfqNYwDVB

Language Models are Few-Shot Learners: https://arxiv.org/pdf/2005.14165.pdf
A Holistic Approach to Undesired Content Detection in the Real World: https://arxiv.org/pdf/2208.03274.pdf

Chapters:
00:00 - Intro
00:43 - AI Threat Model?
01:51 - Inherently Vulnerable to Prompt Injections
03:00 - It's not a Bug, it's a Feature!
04:49 - Don't Trust User …

advertisement bug defense injection llms prompt injection series shop threat threat model video vulnerable watch

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Corporate Intern - Information Security (Year Round)

@ Associated Bank | US WI Remote

Senior Offensive Security Engineer

@ CoStar Group | US-DC Washington, DC