Human-understandable and Machine-processable Explanations for Sub-symbolic Predictions
Abdus Salam | Rolf Schwitter | Mehmet Orgun
Proceedings of the Seventh International Workshop on Controlled Natural Language (CNL 2020/21)

HESIP is a hybrid explanation system for image predictions that combines sub-symbolic and symbolic machine learning techniques to explain the predictions of image classifiers. The sub-symbolic component makes a prediction for an image, and the symbolic component learns probabilistic symbolic rules that explain this prediction. HESIP generates its explanations in controlled natural language from the learned probabilistic rules using a bi-directional logic grammar. In this paper, we present an explanation modification method where a human-in-the-loop can modify an incorrect explanation generated by HESIP; the modified explanation is then used by HESIP to learn a better explanation.