May 16, 2022, 6:02 p.m. | Violet Turri

Software Engineering Institute (SEI) Podcast Series www.sei.cmu.edu

As the field of artificial intelligence (AI) has matured, increasingly complex, opaque models have been developed and deployed to solve hard problems. Unlike many of their predecessors, these models are, by the nature of their architecture, harder to understand and oversee. When such models fail or do not behave as expected, it can be difficult for developers and end users to pinpoint why or to determine how to address the problem. Explainable AI (XAI) meets the emerging demands of AI engineering …

