Explainable AI to reduce information asymmetry in consumer relations:
a comparative analysis of tools and educational implications
Abstract
This article explores the need to implement Explainable Artificial Intelligence (XAI) to mitigate information asymmetry in consumer relations. The opacity of AI algorithms often leaves consumers at a disadvantage, unable to understand the automated decisions that affect their choices. Using the Analytic Hierarchy Process (AHP), the study evaluates XAI tools such as SHAP, LIME, and InterpretML, identifying those most effective at promoting transparency and ethics. In addition to benefiting consumers, XAI assists regulatory bodies in ensuring fair market practices. The results indicate that adopting XAI is essential for developing a more balanced and conscious market, in which the explainability of algorithmic decisions strengthens trust and promotes more informed consumption.
Keywords: Explainable artificial intelligence; Information asymmetry; AI tools evaluation; AI education; AI ethics.
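For readers unfamiliar with the Analytic Hierarchy Process, the sketch below illustrates, under simplified assumptions, how priority weights for the three tools could be derived from a single pairwise comparison matrix on Saaty's 1-9 scale. The judgment values, the single criterion, and the resulting ordering are hypothetical placeholders for illustration only; they are not the judgments or results reported in this study.

```python
# Minimal AHP sketch: derive priority weights for three XAI tools from one
# hypothetical pairwise comparison matrix (illustrative values only).
import numpy as np

tools = ["SHAP", "LIME", "InterpretML"]

# Saaty-scale judgments: A[i, j] states how strongly tool i is preferred
# over tool j for a single criterion (e.g., clarity of explanations).
A = np.array([
    [1.0, 3.0, 5.0],   # hypothetical judgments, not the study's data
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector of A gives the priority weights (Saaty's method).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: CR < 0.10 indicates acceptably consistent judgments.
lambda_max = eigvals[k].real
n = A.shape[0]
CI = (lambda_max - n) / (n - 1)
RI = 0.58            # Saaty's random index for n = 3
CR = CI / RI

for tool, weight in zip(tools, w):
    print(f"{tool}: {weight:.3f}")
print(f"Consistency ratio: {CR:.3f}")
```

In the full method, one such matrix is built for each evaluation criterion, the criteria themselves are weighted the same way, and the final ranking aggregates the weighted scores, with the consistency ratio guarding against contradictory judgments.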
References
ALVES, M. A. S.; ANDRADE, O. M. de. Da “caixa-preta” à “caixa de vidro”: o uso da explainable artificial intelligence (XAI) para reduzir a opacidade e enfrentar o enviesamento em modelos algorítmicos. Dossiê – Inteligência Artificial, Ética e Epistemologia, v. 18, p. 373, 2021.
ALVEY, B. J.; ANDERSON, D. T.; KELLER, J. M. Linguistic Comparisons of Black Box Models. 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Yokohama, Japan, 2024. p. 1-9.
BAVARESCO, M. Z. Avaliação de Bibliotecas para a Explicabilidade da Inteligência Artificial. Undergraduate thesis. Caxias do Sul, 2023. 79 p.
DOMINGUES, J. O.; SILVA, A. A. B. da; SOUZA, H. M. A. de. Inteligência artificial nas relações de consumo: reflexões à luz do histórico recente. In: Inteligência artificial: sociedade, economia e Estado. São Paulo: Thomson Reuters Brasil, 2021.
LÓPEZ, F. SHAP: Shapley additive explanations. Towards Data Science, 2021.
LUGER, G. F. Inteligência Artificial. 6. ed. [S.l.]: Pearson, 2013. 632 p.
LUNDBERG, S. M.; LEE, S.-I. A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems (NIPS), 2017.
MISHRA, P. Explainable AI Recipes. 1. ed. [S.l.]: Apress Berkeley, CA, 2023. 254 p.
PHILLIPS, P. J. et al. Four principles of explainable artificial intelligence. NIST, 2021. Available at: https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8312.pdf. Accessed: Mar. 2024.
RIBEIRO, M. T.; SINGH, S.; GUESTRIN, C. Local interpretable model-agnostic explanations (LIME): An introduction. O'Reilly, 2016.
ROSSETTI, R.; ANGELUCI, A. Ética algorítmica: questões e desafios éticos do avanço tecnológico da sociedade da informação. Galáxia (São Paulo), Programa de Estudos Pós-graduados em Comunicação e Semiótica - PUC-SP, n. 46, p. e50301, 2021. ISSN 1982-2553.
SANTOS, V. O que é AHP ou Processo Hierárquico Analítico e seus usos? FM2S, 2017.
SETZU, M. et al. GlocalX - From local to global explanations of black box AI models. Artificial Intelligence, Elsevier, v. 294, p. 103457, 2021.
STIGLITZ, J. E. The Theory of ‘Screening,’ Education, and the Distribution of Income. The American Economic Review, v. 65, n. 3, p. 283-300, 1975.
TREVISAN, V. Using SHAP values to explain how your machine learning model works. Towards Data Science, 2022.
VELASCO, F. L. SHAP: Explicações aditivas de Shapley. Ichi Pro, 2020.
VILONE, G.; LONGO, L. Classification of explainable artificial intelligence methods through their output formats. Machine Learning and Knowledge Extraction, v. 3, n. 3, p. 615–661, 2021. ISSN 2504-4990.
VISANI, G. LIME: Explain machine learning predictions. Towards Data Science, 2020.
WEBER, L.; LAPUSCHKIN, S.; BINDER, A.; SAMEK, W. Beyond explaining: Opportunities and challenges of XAI-based model improvement. Information Fusion, v. 92, p. 154–176, 2023. ISSN 1566-2535.
WEBER, P.; CARL, K. V.; HINZ, O. Applications of Explainable Artificial Intelligence in Finance - a systematic review of Finance, Information Systems, and Computer Science literature. Management Review Quarterly, v. 74, p. 867–907, 2024.