May 25, 2023, 11:05 a.m. | Bruce Schneier

Schneier on Security www.schneier.com

Interesting essay on the poisoning of LLMs—ChatGPT in particular:


Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know …

academic papers artificial intelligence bad bad actors chatgpt crowd don hacking incentives llms openai poisoning processes results secrecy seo snake oil

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Information Security Engineers

@ D. E. Shaw Research | New York City

Ford Pro Tech and FCSD Tech – Product Manager, Cyber Security

@ Ford Motor Company | Chennai, Tamil Nadu, India

Cloud Data Encryption and Cryptography Automation Expert

@ Ford Motor Company | Chennai, Tamil Nadu, India

SecOps Analyst

@ Atheneum | Berlin, Berlin, Germany

Consulting Director, Cloud Security, Proactive Services (Unit 42)

@ Palo Alto Networks | Santa Clara, CA, United States