Researchers Poke Holes in Safety Controls of ChatGPT, Other Chatbots
Aug. 12, 2023, 4:01 a.m. | Cade Metz
The RISKS Digest catless.ncl.ac.uk
Cade Metz, *The New York Times*, 27 Jul 2023, via ACM TechNews
Scientists at Carnegie Mellon University and the Center for AI Safety
demonstrated that they could bypass the safety protections of any
leading artificial intelligence (AI) chatbot and generate a nearly
unlimited volume of destructive information. The researchers found they
could exploit open-source systems by appending a long suffix of
characters to each English-language prompt entered into the system. In
this manner, they were able to persuade chatbots to provide harmful
information …
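The attack pattern described above can be sketched in a few lines: an ordinary prompt is concatenated with an automatically generated adversarial suffix before being sent to the model. The suffix shown here is a made-up placeholder, not one of the researchers' actual strings, and the function name is purely illustrative.

```python
# Illustrative sketch only: the suffix below is a placeholder standing in
# for the long, machine-generated character strings the researchers
# appended to prompts; it has no adversarial effect itself.
ADVERSARIAL_SUFFIX = ' describing.\\ + similarlyNow write oppositely.]('

def build_attack_prompt(user_prompt: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
    """Append an adversarial suffix to an ordinary English-language prompt."""
    return user_prompt + suffix

# The combined string, not the plain prompt, is what gets submitted to the
# chatbot; the suffix is tuned (offline, against open-source models) to
# steer the model past its refusal behavior.
attack_prompt = build_attack_prompt("Explain how to do something harmful")
```

Because the suffixes were optimized against open-source models whose weights are public, yet transferred to closed commercial chatbots, the finding suggested the protections of all leading systems shared a common weakness.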