April 11, 2023, 3:03 p.m. | Stewart Baker

The Cyberlaw Podcast www.steptoe.com

We do a long take on some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first walks us through the basics of attention-based AI, and then into reports from OpenAI and Stanford on AI safety. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed by Silicon Valley's "trust and safety" bureaucracies), but there's no doubt that a potential existential issue lurks below the surface …

