Why We’re Worried About the Wrong AI Security Risk; Gemini is Coming (Again)
The Information — www.theinformation.com
Put aside, for a moment, the scary talk about bad actors theoretically using large language models to build bombs or bioweapons. A more urgent threat, says investor Rama Sekhar, is AI models leaking sensitive corporate data, or hackers triggering ChatGPT service outages. Sekhar is a longtime cybersecurity investor who joined Menlo Ventures as a partner last month after many years at Norwest Venture Partners.
He isn’t the only one making this argument. Last week, for …