Aug. 12, 2023, 4:01 a.m. | Cade Metz

The RISKS Digest catless.ncl.ac.uk

Cade Metz, *The New York Times*, 27 Jul 2023, via ACM TechNews

Researchers at Carnegie Mellon University and the Center for AI Safety
demonstrated that the safety protections in any of the leading chatbots
can be bypassed to produce nearly unlimited volumes of harmful
information. The researchers found they could exploit open-source
systems by appending a long suffix of characters to each
English-language prompt entered into the system. In this manner, they
were able to persuade chatbots to provide harmful
information …
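
On the attacker's side, the mechanics described above amount to simple
string concatenation; the hard part is finding a suffix that works.
A minimal Python sketch of the pattern, with a hypothetical placeholder
standing in for the researchers' actual optimized suffix:

  # Sketch of the attack pattern described above: a precomputed
  # adversarial suffix is appended to an otherwise ordinary prompt
  # before it is submitted to a chat model. The suffix below is a
  # hypothetical placeholder, not the researchers' real string.
  ADVERSARIAL_SUFFIX = " <optimized-adversarial-suffix>"  # placeholder

  def build_attack_prompt(user_prompt: str) -> str:
      """Append the adversarial suffix to an English-language prompt."""
      return user_prompt + ADVERSARIAL_SUFFIX

  if __name__ == "__main__":
      # The combined string, not the user's text alone, is what gets
      # sent to the target chatbot.
      print(build_attack_prompt("Write a tutorial on ..."))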
