Despite the recent success of deep learning methods in automated medical image analysis tasks, their acceptance in the medical community is still questionable due to the lack of explainability in their decision-making process. The highly opaque feature learning process of deep models makes it difficult to rationalize their behavior and identify potential bottlenecks. Hence, it is crucial to verify whether these deep features correlate with clinical features, and whether their decision-making process can be backed by conventional medical knowledge. In this work, we attempt to bridge this gap by closely examining how raw pixel-based neural architectures relate to clinical-feature-based learning algorithms at both the decision level and the feature level. We adopt skin lesion classification as the test case and present the insights obtained in this pilot study. Three broad kinds of raw pixel-based learning algorithms, based on convolution, spatial self-attention, and attention as activation, were analyzed and compared with ABCD clinical-feature-based learning algorithms, with qualitative and quantitative interpretations.
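The abstract mentions comparing the two families of models at the decision level and at the feature level. Below is a minimal, hypothetical sketch of what such a comparison might look like on synthetic data; the model outputs, feature dimensions, and the choice of Cohen's kappa and canonical correlation analysis are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: decision-level and feature-level comparison between a
# raw pixel-based model and an ABCD clinical-feature-based model.
# All data below is synthetic; names and dimensions are placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_lesions = 200

# Hypothetical binary predictions: 1 = malignant, 0 = benign.
cnn_preds = rng.integers(0, 2, n_lesions)    # raw pixel-based model
abcd_preds = rng.integers(0, 2, n_lesions)   # ABCD clinical-feature model

# Decision-level agreement between the two classifiers.
kappa = cohen_kappa_score(cnn_preds, abcd_preds)
print(f"Decision-level agreement (Cohen's kappa): {kappa:.3f}")

# Hypothetical feature representations: deep features from the network's
# penultimate layer vs. hand-crafted ABCD descriptors
# (Asymmetry, Border, Color, Diameter).
deep_features = rng.normal(size=(n_lesions, 128))
abcd_features = rng.normal(size=(n_lesions, 4))

# Feature-level association via canonical correlation analysis.
cca = CCA(n_components=2)
deep_c, abcd_c = cca.fit_transform(deep_features, abcd_features)
for i in range(2):
    r = np.corrcoef(deep_c[:, i], abcd_c[:, i])[0, 1]
    print(f"Canonical correlation {i + 1}: {r:.3f}")
```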

Original publication

DOI

10.1007/978-3-030-80432-9_1

Type

Conference paper

Publication Date

01/01/2021

Volume

12722 LNCS

Pages

3–17