Modern AI approaches often behave like black boxes: even their developers cannot fully explain why a system produces the outputs it does. Offering an explanation of why an AI system reached a particular conclusion is therefore increasingly necessary. The article by Weld and Bansal [1] examined two promising approaches: using an inherently interpretable model, or adopting an inscrutably complex model and generating post hoc explanations by mapping it to a simpler, explanatory model.

[1] D. S. Weld and G. Bansal, "The Challenge of Crafting Intelligible Intelligence," Communications of the ACM, June 2019.
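To make the second, post hoc approach concrete, here is a minimal sketch (not the article's own method) of explaining a black-box model with a simpler surrogate: a shallow decision tree is trained to mimic the black box's predictions, and its rules serve as an approximate, human-readable explanation. The synthetic data, the choice of a random forest as the black box, and the feature names are all illustrative assumptions.

```python
# Minimal sketch of post hoc explanation via a global surrogate model.
# Assumptions: synthetic data, a random forest as the "black box",
# and a shallow decision tree as the explanatory surrogate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 1) The inscrutable model: accurate, but hard to interpret directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2) The explanatory surrogate: a shallow tree trained to mimic the
#    black box's *predictions*, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2f}")

# The surrogate's rules act as an approximate explanation of the black box.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(10)]))
```

The fidelity score makes the usual trade-off visible: the simpler the explanatory model, the easier its rules are to read, but the more approximate its account of the black box becomes.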