Deep dive: Privacy risks of fine-tuning
Source: DEV Community (dev.to)
Key Takeaways:
LLMs can leak data through two mechanisms:
Input privacy: data is exposed when it is sent to a remote AI provider such as Hugging Face or OpenAI, and it is at risk if those providers are compromised or malicious.
Output privacy: a user or attacker can craft prompts that make the LLM regurgitate parts of its training or fine-tuning set, which can leak confidential information. This is what happened to Samsung. A minimal probe for this kind of regurgitation is sketched below.
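To illustrate the output-privacy risk, here is a minimal sketch of a regurgitation probe. It assumes a generic `generate` callable standing in for any completion API, and the records shown are illustrative placeholders; real extraction audits use fuzzier matching than the exact substring check used here.

```python
from typing import Callable

def regurgitation_probe(
    generate: Callable[[str], str],
    records: list[tuple[str, str]],  # (public prefix, confidential suffix)
) -> list[str]:
    """Return the confidential suffixes the model completes verbatim."""
    leaked = []
    for prefix, secret in records:
        completion = generate(prefix)
        # Exact-match check; a real audit would use normalized or fuzzy matching.
        if secret in completion:
            leaked.append(secret)
    return leaked

if __name__ == "__main__":
    # Toy stand-in "model" that has memorized one fine-tuning record.
    memorized = {"Customer account 4417 belongs to": " Jane Doe, DOB 1984-03-12"}
    fake_model = lambda p: memorized.get(p, " [no completion]")

    records = [("Customer account 4417 belongs to", " Jane Doe, DOB 1984-03-12")]
    print(regurgitation_probe(fake_model, records))  # -> [' Jane Doe, DOB 1984-03-12']
```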
Input privacy issues arise when relying on external SaaS AI solutions like GPT-4 …
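On the input-privacy side, one common mitigation is to scrub obvious identifiers from prompts before they leave your infrastructure. The sketch below uses simple regex patterns purely for illustration; production systems rely on dedicated PII-detection tooling rather than hand-rolled patterns like these.

```python
import re

# Illustrative patterns only; real deployments use dedicated PII detectors.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like digit runs
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
]

def scrub(prompt: str) -> str:
    """Replace likely identifiers with placeholders before the remote API call."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Contact jane.doe@corp.com, card 4111 1111 1111 1111."))
# -> "Contact [EMAIL], card [CARD]."
```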