A method for assessing the degree of confidence in the self-explanations of GPT models
A.N. Lukyanov, A.M. Tramova
Abstract. With the rapid growth in the use of generative neural network models for practical tasks, the problem of explaining their decisions is becoming increasingly acute. As neural network-based solutions are introduced into medical practice, government administration, and defense, the demands on the interpretability of such systems will undoubtedly increase. In this study, we propose a method for post hoc verification of the reliability of self-explanations produced by models, based on comparing the model's attention distributions during generation of the response and of its explanation. The authors propose and develop methods for the numerical evaluation of the reliability of answers provided by generative pre-trained transformers. It is proposed to use the Kullback–Leibler divergence between the model's attention distributions during generation of the response and of the subsequent explanation. Additionally, it is proposed to compute the ratio of the model's attention between the original query and the generated explanation, in order to understand how strongly the self-explanation was influenced by the model's own response. An algorithm for recursively computing the model's attention across generation steps is proposed to obtain these values. The study demonstrated the effectiveness of the proposed methods, identifying metric values corresponding to correct and incorrect explanations and responses. We analyzed existing methods for determining the reliability of generative model responses, noting that the overwhelming majority of them are difficult for an ordinary user to interpret. We therefore proposed our own methods and tested them on the most widely used generative models available at the time of writing. As a result, we obtained typical values for the proposed metrics, an algorithm for their computation, and a visualization.
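Since the full text is not reproduced here, the following is a minimal sketch of how the two metrics named in the abstract could be formalized; the notation ($P_a$, $P_e$, $Q$, $A$, $\rho$) is our assumption, not taken from the paper:

$$
D_{\mathrm{KL}}\!\left(P_a \,\Vert\, P_e\right) = \sum_{i} P_a(i)\,\log\frac{P_a(i)}{P_e(i)},
\qquad
\rho = \frac{\sum_{i \in Q} P_e(i)}{\sum_{j \in A} P_e(j)},
$$

where $P_a$ and $P_e$ denote the model's attention distributions over input tokens during generation of the answer and of the explanation, respectively, $Q$ is the set of query tokens, and $A$ is the set of tokens of the model's own response. Under these assumptions, a low divergence would suggest the explanation attends to the same evidence as the answer, while $\rho$ would indicate how much the explanation relies on the original query versus the model's own response.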
Keywords: neural networks, metrics, language models, interpretability, GPT, LLM, XAI
For citation. Lukyanov A.N., Tramova A.M. A method for assessing the degree of confidence in the self-explanations of GPT models. News of the Kabardino-Balkarian Scientific Center of RAS. 2024. Vol. 26. No. 4. Pp. 54–61. DOI: 10.35330/1991-6639-2024-26-4-54-61