Investigating the interpretability of hidden layers in deep text mining
Abstract
In this short paper, we address the interpretability of hidden layer representations in deep text mining: deep neural networks applied to text mining tasks. Following earlier work predating deep learning methods, we exploit the internal neural network activation (latent) space as a source for k-nearest neighbor search, looking for representative, explanatory training examples whose hidden layer activations are similar to those of test inputs. We additionally deploy a semantic document similarity metric to compare the textual representations of these nearest neighbors with the test inputs. We argue that statistical analysis of the output of this measure provides insight to engineers training the networks, and that nearest neighbor search in latent space, combined with semantic document similarity measures, offers a mechanism for presenting explanatory, intelligible examples to users.
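A minimal sketch of the pipeline the abstract describes, assuming a Keras classifier and scikit-learn for the neighbor search; the toy corpus, the layer name "hidden", and the use of TF-IDF cosine as the semantic document similarity metric are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from tensorflow import keras
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus and labels standing in for a real text mining dataset.
train_texts = ["cheap loans apply now", "meeting agenda attached",
               "win a free prize today", "quarterly report enclosed"]
test_texts = ["free prize inside", "agenda for the meeting"]
y_train = np.array([1, 0, 1, 0])

# Bag-of-words features feeding a small dense classifier.
vectorizer = TfidfVectorizer().fit(train_texts)
x_train = vectorizer.transform(train_texts).toarray()
x_test = vectorizer.transform(test_texts).toarray()

# Tiny network; the layer named "hidden" is the latent space we probe.
model = keras.Sequential([
    keras.layers.Input(shape=(x_train.shape[1],)),
    keras.layers.Dense(8, activation="relu", name="hidden"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=50, verbose=0)

# Project documents into the hidden-layer activation (latent) space.
encoder = keras.Model(model.input, model.get_layer("hidden").output)
train_latent = encoder.predict(x_train, verbose=0)
test_latent = encoder.predict(x_test, verbose=0)

# k-nearest neighbor search in latent space for explanatory examples.
knn = NearestNeighbors(n_neighbors=2).fit(train_latent)
_, neighbor_idx = knn.kneighbors(test_latent)

# Semantic document similarity between each test text and its neighbors;
# TF-IDF cosine is a stand-in for whatever metric the paper deploys.
for i, text in enumerate(test_texts):
    neighbors = [train_texts[j] for j in neighbor_idx[i]]
    sims = cosine_similarity(vectorizer.transform([text]),
                             vectorizer.transform(neighbors))[0]
    print(f"test doc {i}: neighbors {neighbors}, similarity {np.round(sims, 3)}")
```

The per-test-input similarity scores produced at the end are what the paper proposes to analyze statistically: low similarity between a test document and its latent-space neighbors would flag cases where the network's internal representation diverges from surface-level semantics.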
Organisation | HAN University of Applied Sciences |
Departments | Academie IT en Mediadesign; Academie Engineering en Automotive |
Research group | Model-based Information Systems |
Year | 2017 |
Type | Conference contribution |
Language | English |