June 1, 2023, 11:17 a.m. | Bruce Schneier

Schneier on Security www.schneier.com

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:


Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” The BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.


I …
