Sept. 20, 2023, 1:19 p.m. | Daniel Huynh

DEV Community dev.to

Key Takeaways:


LLMs can leak data through two mechanisms:


Input privacy: data is exposed when it is sent to a remote AI provider, e.g. Hugging Face or OpenAI, and is at risk if the provider's administrators are compromised or malicious.

Output privacy: a user or attacker can send prompts that make the LLM regurgitate parts of its training or fine-tuning set, which can leak confidential information. This is what happened to Samsung.
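
To make the output-privacy risk concrete, here is a minimal sketch of how such a regurgitation probe could look. It assumes a locally loadable fine-tuned model (the name my-org/fine-tuned-model and the canary strings are hypothetical placeholders); it illustrates the idea of prompting with known prefixes and checking the completions, not any specific attack from the article.

    # Minimal output-privacy probe sketch: prompt a fine-tuned model with
    # prefixes believed to appear in its fine-tuning data and inspect whether
    # it completes them with confidential text.
    from transformers import pipeline

    # Hypothetical fine-tuned model; swap in any causal LM you can load locally.
    generator = pipeline("text-generation", model="my-org/fine-tuned-model")

    # Strings suspected (or known) to be present in the fine-tuning set.
    canaries = ["Project Falcon launch date:", "Internal API key prefix:"]

    for prefix in canaries:
        # Ask the model to continue the sensitive prefix deterministically.
        completion = generator(prefix, max_new_tokens=30, do_sample=False)[0]["generated_text"]
        # If the continuation extends the prefix with plausible secrets,
        # the model is regurgitating fine-tuning data.
        print(f"{prefix!r} -> {completion[len(prefix):]!r}")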

Input privacy issues arise when relying on external SaaS AI solutions like GPT-4 …
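
For the input-privacy side, the sketch below shows how confidential text ends up inside the request sent to a SaaS provider. It uses the openai Python client; the file path and document are made-up examples, and the point is simply that everything placed in the prompt leaves your infrastructure in readable form.

    # Minimal input-privacy sketch: whatever is placed in the prompt is
    # transmitted to the remote provider and visible to its operators.
    # The file path below is a hypothetical example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Confidential material read from an internal document (made-up path).
    confidential_report = open("internal/q3_financials.txt").read()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                # The full report travels to the provider's servers in this request.
                "content": f"Summarize this report:\n{confidential_report}",
            },
        ],
    )
    print(response.choices[0].message.content)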


Social Engineer For Reverse Engineering Exploit Study

@ Independent study | Remote

Principal Business Value Consultant

@ Palo Alto Networks | Chicago, IL, United States

Cybersecurity Specialist, Sr. (Container Hardening)

@ Rackner | San Antonio, TX

Penetration Testing Engineer- Remote United States

@ Stanley Black & Decker | Towson MD USA - 701 E Joppa Rd Bg 700

Internal Audit- Compliance & Legal Audit-Dallas-Associate

@ Goldman Sachs | Dallas, Texas, United States

Threat Responder

@ Deepwatch | Remote