Aug. 5, 2022, 11:24 a.m. | Carol Smith, Dustin Updyke

Software Engineering Institute (SEI) Podcast Series www.sei.cmu.edu

To ensure trust, artificial intelligence systems need to be built with fairness, accountability, and transparency at each step of the development cycle. In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, a senior research scientist in human-machine interaction, and Dustin Updyke, a senior cybersecurity engineer in the SEI's CERT Division, discuss the construction of trustworthy AI systems and the factors that influence human trust in AI systems.

