April 9, 2023, 11 a.m. | Kate O'Flaherty

Data and computer security | The Guardian www.theguardian.com

Alluring and useful they may be, but the AI interfaces’ potential as gateways for fraud and intrusive data gathering is huge – and is only set to grow

Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard and Microsoft’s Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this …
