Cybercrime: be careful what you tell your chatbot helper…
Data and computer security | The Guardian www.theguardian.com
Alluring and useful they may be, but these AI interfaces’ potential as gateways for fraud and intrusive data gathering is huge – and it is only set to grow
Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard and Microsoft’s Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this …