Sept. 19, 2023, 11:08 a.m. | Bruce Schneier

Schneier on Security www.schneier.com

There are no reliable ways to distinguish text written by a human from text written by a large language model. OpenAI writes:


Do AI detectors work?



  • In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
  • Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this …

