Original title: Concept-based Explainable Artificial Intelligence: A Survey
Authors: Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
In artificial intelligence, demand is growing for models that are transparent and reliable. However, recent studies have shown that explaining AI decisions in terms of raw input features may be ineffective, prompting calls for more user-friendly, human-understandable explanations. To address this need, many papers on Concept-based eXplainable Artificial Intelligence (C-XAI) have been published, yet the field still lacks a unified categorization and clear definitions. This article fills that gap with a comprehensive review of C-XAI approaches. The authors define the different kinds of concepts and concept-based explanations, propose a taxonomy of nine categories, and provide guidelines for selecting the most suitable category for a given development context. The article also discusses common evaluation strategies, including metrics, human evaluations, and datasets, to aid the development of future C-XAI methods. The authors expect this survey to be valuable for researchers, practitioners, and domain experts seeking to understand and advance this emerging field.
Original article: https://arxiv.org/abs/2312.12936