Feb. 6, 2024, 3:10 p.m. | Stephanie Palazzolo

The Information www.theinformation.com

Put aside for a moment all of the scary talk about bad actors theoretically using large language models to build bombs or bioweapons. A more urgent threat, says investor Rama Sekhar, is AI models that could leak sensitive corporate data, or hackers triggering ChatGPT service outages. Sekhar is a longtime cybersecurity investor who joined Menlo Ventures as a partner last month after many years at Norwest Venture Partners.

He isn’t the only one making this argument. Last week, for …

