March 19, 2024, 6:13 a.m. | Tushar Subhra Dutta

GBHackers On Security gbhackers.com

Large language models (LLMs) are vulnerable to attacks that exploit their inability to recognize prompts conveyed through ASCII art. ASCII art is a form of visual art created using characters from the ASCII (American Standard Code for Information Interchange) character set. Recently, researchers from several universities proposed a new jailbreak attack, ArtPrompt, that […]
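The core idea is straightforward: a safety filter that matches a sensitive keyword literally can be sidestepped by drawing that word as ASCII art and asking the model to decode it. Below is a minimal sketch of that idea in Python; it assumes the third-party pyfiglet library for rendering, and the prompt template and helper name build_artprompt are illustrative assumptions, not the paper's exact method.

```python
# A minimal sketch of the ArtPrompt idea: render a masked keyword as
# ASCII art so a literal-keyword filter never sees the string itself.
# Requires the third-party pyfiglet library (pip install pyfiglet).
import pyfiglet

def build_artprompt(template: str, masked_word: str) -> str:
    """Replace the [MASK] placeholder with an ASCII-art rendering of the word."""
    ascii_word = pyfiglet.figlet_format(masked_word)
    return template.replace("[MASK]", "\n" + ascii_word + "\n")

if __name__ == "__main__":
    prompt = build_artprompt(
        "The word below is drawn in ASCII art. Decode it, then answer "
        "the question as if that word appeared at [MASK].",
        "example",  # a benign placeholder; the attack masks harmful keywords
    )
    print(prompt)
```

Because the sensitive string only ever appears as a grid of characters, keyword-based input filtering never matches it, while a sufficiently capable model can still reconstruct the word and act on the full instruction.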


The post Researchers Hack AI Assistants Using ASCII Art appeared first on GBHackers on Security | #1 Globally Trusted Cyber Security News Platform.
