Towards making NLG a voice for interpretable Machine Learning
James Forrest | Somayajulu Sripada | Wei Pang | George Coghill
Proceedings of the 11th International Conference on Natural Language Generation
This paper presents a study to understand the issues involved in using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that self-reported ratings of the NLG explanations were higher than those for the non-NLG explanations. However, when tested for comprehension, the results were not as clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
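The core idea the abstract describes, verbalising a LIME explanation in natural language rather than showing raw feature weights, can be sketched with a simple template-based renderer. The feature names and weights below are invented placeholders standing in for the `(feature, weight)` pairs that LIME's `Explanation.as_list()` returns; the templates are illustrative, not the paper's actual NLG system.

```python
# Minimal sketch: render LIME-style (feature, weight) pairs as an
# English sentence. The weights here are invented placeholders, not
# real LIME output.

def verbalise(feature_weights, prediction):
    """Turn signed feature weights into a short textual explanation."""
    pros = [f for f, w in feature_weights if w > 0]  # support the prediction
    cons = [f for f, w in feature_weights if w < 0]  # count against it
    parts = [f"The model predicted '{prediction}'"]
    if pros:
        parts.append("mainly because of " + ", ".join(pros))
    if cons:
        parts.append("despite " + ", ".join(cons))
    return "; ".join(parts) + "."

# Hypothetical weights for an iris-style classifier:
example = [("petal width", 0.42), ("petal length", 0.31), ("sepal width", -0.08)]
print(verbalise(example, "versicolor"))
```

A non-NLG baseline, by contrast, would present the same pairs as a bar chart or table of weights, which is the comparison the study evaluates.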