What we need is interpretable and not explainable machine learning

Session Date & Time:
Thu. January 28, 2021 @ 09:00 ET

Session Description

All models are wrong, and when they fail they can cause financial or non-financial harm. Understanding, testing, and managing potential model failures and their unintended consequences is the central focus of model risk management, particularly for mission-critical or regulated applications. This is a challenging task for complex machine learning models, and having an explainable model is a key enabler. Machine learning explainability has become an active area of academic research and an industry in its own right. Despite all the progress that has been made, machine learning explainers are still fraught with weaknesses and complexity. In this talk, I will argue that what we need is an interpretable machine learning model: one that is self-explanatory and inherently interpretable. I will discuss how to make sophisticated machine learning models, such as neural networks (deep learning), self-explanatory.
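The abstract does not name a specific architecture, but one common way to make a neural network inherently interpretable is an additive structure in the spirit of neural additive models: each input feature is routed through its own small subnetwork, and the prediction is the sum of the per-feature contributions, so each feature's learned effect can be read off directly rather than approximated by a post-hoc explainer. The sketch below is a minimal, hypothetical illustration of that idea (the names FeatureNet and NeuralAdditiveModel are ours), not the presenter's method:

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP mapping a single input feature to its additive contribution."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

class NeuralAdditiveModel(nn.Module):
    """f(x) = bias + sum_j f_j(x_j). Each f_j sees only feature j, so its
    learned shape is itself the explanation; no post-hoc explainer needed."""
    def __init__(self, n_features: int):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet() for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x has shape (batch, n_features); column j feeds subnetwork j only.
        contribs = [f(x[:, j:j + 1]) for j, f in enumerate(self.feature_nets)]
        return self.bias + torch.stack(contribs).sum(dim=0)

# Usage: train like any regressor, then plot each feature_nets[j] over a
# grid of x_j values to read off that feature's exact contribution.
model = NeuralAdditiveModel(n_features=4)
x = torch.randn(8, 4)
print(model(x).shape)  # torch.Size([8, 1])
```

Because every subnetwork depends on a single feature, the model's explanation is exact by construction, which is what distinguishes an inherently interpretable model from one that relies on approximate post-hoc explainers.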
