The accuracy of AI models has grown dramatically over the past few years. However, the most accurate models are often "black boxes" that do not explain why they have made a particular decision or recommendation. This has created a significant challenge, especially with the surge of new regulations requiring AI explainability in specific verticals, including finance, insurance, and healthcare. To leverage these advancements in AI technology, organizations must adopt explainability, which also benefits data science teams, business stakeholders, and end users.
"Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead"
Professor Cynthia Rudin
Director of the Prediction Analysis Lab at Duke University