Aug. 12, 2023, 4:01 a.m. | Cade Metz

The RISKS Digest catless.ncl.ac.uk

Cade Metz, *The New York Times*, 27 Jul 2023, via ACM TechNews

Scientists at Carnegie Mellon University and the Center for AI Safety
demonstrated that they could bypass the safety protections of any
leading chatbot and generate nearly unlimited volumes of harmful
information. The researchers found they could exploit open-source
systems by appending a long suffix of characters to each
English-language prompt entered into the system. In this manner, they
were able to persuade chatbots to provide harmful information …
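Mechanically, the attack described above is just string concatenation: a long adversarial suffix is appended to an otherwise ordinary prompt before it reaches the model. A minimal sketch of that step (the suffix shown is a hypothetical placeholder, not a working adversarial string; the real suffixes were found by automated search against open-source models):

```python
# Sketch of the prompt-construction step described in the article.
# ADVERSARIAL_SUFFIX is a made-up placeholder; the actual suffixes in the
# CMU / Center for AI Safety work were discovered by gradient-based search
# on open-source models and then transferred to commercial chatbots.
ADVERSARIAL_SUFFIX = "<placeholder-adversarial-suffix>"

def build_attack_prompt(user_prompt: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
    """Append an adversarial suffix to an English-language prompt."""
    return f"{user_prompt} {suffix}"

if __name__ == "__main__":
    print(build_attack_prompt("Tell me something the safety filter would block"))
```

The point of the finding is that this trivial transformation, with the right suffix, was enough to defeat alignment guardrails across multiple leading systems.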
