Feb. 24, 2023, 5:30 p.m. | Kyle Wiggers

TechCrunch techcrunch.com

When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts? What […]


Can AI really be protected from text-based attacks? by Kyle Wiggers originally published on TechCrunch

ai ai-powered artificial intelligence attacks bing bing chat chat chatbot conspiracy theories holocaust inputs large-language-models llms love machine learning malicious microsoft natural language processing nlp openai prompts robotics & ai techcrunch text

CyberSOC Technical Lead

@ Integrity360 | Sandyford, Dublin, Ireland

Cyber Security Strategy Consultant

@ Capco | New York City

Cyber Security Senior Consultant

@ Capco | Chicago, IL

Sr. Product Manager

@ MixMode | Remote, US

Security Compliance Strategist

@ Grab | Petaling Jaya, Malaysia

Cloud Security Architect, Lead

@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)