Feb. 6, 2024, 3:10 p.m. | Stephanie Palazzolo

The Information (www.theinformation.com)

Put aside, for a moment, all of the scary talk about bad actors theoretically using large language models to build bombs or bioweapons. A more urgent threat, says investor Rama Sekhar, comes from AI models that could leak sensitive corporate data, or from hackers triggering ChatGPT service outages. Sekhar is a longtime cybersecurity investor who joined Menlo Ventures as a partner last month after many years at Norwest Venture Partners.

He isn’t the only one making this argument. Last week, for …

