Aug. 5, 2022, 11:24 a.m. | Carol Smith, Dustin Updyke

Software Engineering Institute (SEI) Podcast Series www.sei.cmu.edu

To ensure trust, artificial intelligence systems need to be built with fairness, accountability, and transparency at each step of the development cycle. In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, a senior research scientist in human-machine interaction, and Dustin Updyke, a senior cybersecurity engineer in the SEI's CERT Division, discuss the construction of trustworthy AI systems and the factors that influence human trust in AI systems.

