all InfoSec news
ChatGPT Systems: Prompt Injection and How to avoid ?
Jan. 3, 2024, 11:52 a.m. | Shahwar Alam Naqvi
DEV Community dev.to
Prompt Injection (definition)
Prompt injection refers to a technique used in natural language processing (NLP) models, where an attacker manipulates the input prompt to trick the model into generating unintended or biased outputs.
Prompt Injection (example)
- A simple example is in the image bit below, User asks to forget the original instructions and tries to allot it a task of his own will.
Prompt Injection (impact)
Prompt injection can have serious consequences, such as spreading misinformation, promoting biased views, or …
ai attacker chatgpt deeplearning definition image injection input language machinelearning natural natural language natural language processing nlp prompt prompt injection simple systems
More from dev.to / DEV Community
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Security Engineer II- Full stack Java with React
@ JPMorgan Chase & Co. | Hyderabad, Telangana, India
Cybersecurity SecOps
@ GFT Technologies | Mexico City, MX, 11850
Senior Information Security Advisor
@ Sun Life | Sun Life Toronto One York
Contract Special Security Officer (CSSO) - Top Secret Clearance
@ SpaceX | Hawthorne, CA
Early Career Cyber Security Operations Center (SOC) Analyst
@ State Street | Quincy, Massachusetts