May 11, 2023, 9:06 p.m. | LiveOverflow

LiveOverflow www.youtube.com

After we explored attacking LLMs, in this video we finally talk about defending against prompt injections. Is it even possible?

Buy my shitty font (advertisement): shop.liveoverflow.com

Watch the complete AI series:
https://www.youtube.com/playlist?list=PLhixgUqwRTjzerY4bJgwpxCLyfqNYwDVB

Language Models are Few-Shot Learners: https://arxiv.org/pdf/2005.14165.pdf
A Holistic Approach to Undesired Content Detection in the Real World: https://arxiv.org/pdf/2208.03274.pdf

Chapters:
00:00 - Intro
00:43 - AI Threat Model?
01:51 - Inherently Vulnerable to Prompt Injections
03:00 - It's not a Bug, it's a Feature!
04:49 - Don't Trust User …

advertisement bug defense injection llms prompt injection series shop threat threat model video vulnerable watch

Senior Security Engineer - Detection and Response

@ Fastly, Inc. | US (Remote)

Application Security Engineer

@ Solidigm | Zapopan, Mexico

Defensive Cyber Operations Engineer-Mid

@ ISYS Technologies | Aurora, CO, United States

Manager, Information Security GRC

@ OneTrust | Atlanta, Georgia

Senior Information Security Analyst | IAM

@ EBANX | Curitiba or São Paulo

Senior Information Security Engineer, Cloud Vulnerability Research

@ Google | New York City, USA; New York, USA