Speaker: Wojciech Samek, Fraunhofer Institute
Deep neural networks (DNNs) are reaching or even exceeding human-level performance on an increasing number of complex tasks. However, due to their complex non-linear structure, these models are usually applied in a black-box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. Since in many applications, e.g., in the medical domain, such a lack of transparency may not be acceptable, the development of methods for visualizing, explaining and interpreting deep learning models has recently attracted increasing attention. This talk will focus on a popular explanation technique, Layer-wise Relevance Propagation (LRP), and will show how to make it robust and scalable for complex DNN models. The effectiveness of LRP will be demonstrated on various tasks (images, text, audio, video, biomedical signals) and neural architectures (ConvNets, LSTMs), and its close relation to the theoretical concept of (deep) Taylor decomposition will be discussed. Finally, the talk will summarize recent developments in extending explainable AI beyond deep neural networks and classification problems.
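For readers unfamiliar with LRP: the method propagates the network's output score backwards layer by layer, redistributing "relevance" to the inputs in proportion to each neuron's contribution. Below is a minimal sketch of the commonly used LRP-epsilon rule for a single dense layer; the function name, shapes, and toy values are illustrative assumptions, not material from the talk.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs a
    through a linear layer z = a @ W + b (LRP-epsilon rule).
    Hypothetical helper for illustration only."""
    z = a @ W + b                              # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon term stabilizes the division
    s = R_out / z                              # relevance per unit of pre-activation
    return a * (W @ s)                         # each input gets its contribution share

# Toy layer: 3 inputs -> 2 outputs, explaining the first output only
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.5, -1.0],
              [1.0,  0.5],
              [-0.5, 1.0]])
b = np.zeros(2)
R_out = np.array([1.0, 0.0])

R_in = lrp_epsilon(a, W, b, R_out)
print(R_in, R_in.sum())  # with b = 0, total relevance is (approximately) conserved
```

Applying this rule layer by layer from the output back to the input yields a relevance map over the input features; the conservation of relevance across layers is the property that connects LRP to deep Taylor decomposition.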