all InfoSec news
Can AI really be protected from text-based attacks?
TechCrunch techcrunch.com
When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts? What […]
Can AI really be protected from text-based attacks? by Kyle Wiggers originally published on TechCrunch
ai ai-powered artificial intelligence attacks bing bing chat chat chatbot conspiracy theories holocaust inputs large-language-models llms love machine learning malicious microsoft natural language processing nlp openai prompts robotics & ai techcrunch text