OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats
TechCrunch (techcrunch.com)
OpenAI today announced that it has created a new team to assess, evaluate, and probe AI models to protect against what it describes as "catastrophic risks." The team, called Preparedness, will be led by Aleksander Madry, the director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as "head of Preparedness," according to […]
© 2023 TechCrunch. All rights reserved. For personal use only.