Feb. 24, 2023, 5:30 p.m. | Kyle Wiggers

TechCrunch techcrunch.com

When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts? What […]


Can AI really be protected from text-based attacks? by Kyle Wiggers originally published on TechCrunch

ai ai-powered artificial intelligence attacks bing bing chat chat chatbot conspiracy theories holocaust inputs large-language-models llms love machine learning malicious microsoft natural language processing nlp openai prompts robotics & ai techcrunch text

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Lead Technical Product Manager - Threat Protection

@ Mastercard | Remote - United Kingdom

Data Privacy Officer

@ Banco Popular | San Juan, PR

GRC Security Program Manager

@ Meta | Bellevue, WA | Menlo Park, CA | Washington, DC | New York City

Cyber Security Engineer

@ ASSYSTEM | Warrington, United Kingdom

Privacy Engineer, Technical Audit

@ Meta | Menlo Park, CA