Unpublished conference/Abstract (Scientific congresses and symposiums)
LLM explanations for interpretable recommender systems
Manderlier, Maxime
2025, 12th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2025), co-located with the 19th ACM Conference on Recommender Systems (RecSys 2025)
Peer reviewed
 

Files


Full Text
intrs25_presentation.pdf
Author postprint (3.16 MB)
Details



Keywords :
explainable recommendations; collaborative filtering; matrix factorization; large language models; user study
Abstract :
[en] We investigate whether large language models (LLMs) can generate effective, user-facing explanations from a mathematically interpretable recommendation model. The model is based on constrained matrix factorization, where user types are explicitly represented and predicted item scores share the same scale as observed ratings, making the model's internal representations and predicted scores directly interpretable. This structure is translated into natural language explanations using carefully designed LLM prompts. Many works in explainable AI rely on automatic evaluation metrics, which often fail to capture users' actual needs and perceptions. In contrast, we adopt a user-centered approach: we conduct a study with 326 participants who assessed the quality of the explanations across five key dimensions (transparency, effectiveness, persuasion, trust, and satisfaction) as well as the recommendations themselves. To evaluate how different explanation strategies are perceived, we generate multiple explanation types from the same underlying model, varying the input information provided to the LLM. Our analysis reveals that all explanation types are generally well received, with moderate statistical differences between strategies. User comments further underscore how participants react to each type of explanation, offering complementary insights beyond the quantitative results.
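The abstract's key structural claim is that constrained matrix factorization can keep predicted scores on the observed rating scale. A minimal sketch of one way this property can arise (the paper's exact constraints are not given here, so the simplex-row and bounded-profile construction below is an illustrative assumption, not the authors' model): if each user's factor row is a convex combination over explicit "user types", and each type's item profile is bounded to the rating range, then every prediction is a convex mixture of in-range values and therefore stays in range.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_types = 4, 6, 2

# U: each row softly assigns a user to explicit "user types"
# (nonnegative entries summing to 1, i.e. rows lie on the simplex).
U = rng.random((n_users, n_types))
U /= U.sum(axis=1, keepdims=True)

# V: each column is a user type's rating profile, constrained to the
# observed 1-5 rating scale (an assumed scale for illustration).
V = rng.uniform(1.0, 5.0, size=(n_items, n_types))

# Predicted scores are convex combinations of type profiles, so they
# remain on the 1-5 scale and can be read directly as predicted ratings.
scores = U @ V.T
print(scores.shape)                       # (n_users, n_items)
print(scores.min() >= 1.0, scores.max() <= 5.0)
```

Because both the type memberships in `U` and the type profiles in `V` are on human-readable scales, they can be passed verbatim into an LLM prompt, which is the translation step the abstract describes.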
Disciplines :
Computer science
Author, co-author :
Manderlier, Maxime  ;  Université de Mons - UMONS > Faculté Polytechnique > Service de Management de l'Innovation Technologique
Language :
English
Title :
LLM explanations for interpretable recommender systems
Publication date :
22 September 2025
Event name :
12th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2025) co-located with 19th ACM Conference on Recommender Systems (RecSys 2025)
Event place :
Prague, Czechia
Event date :
22/09/2025
Audience :
International
Peer review/Selection committee :
Peer reviewed
Research unit :
F113 - Management de l'Innovation Technologique
Research institute :
R300 - Institut de Recherche en Technologies de l'Information et Sciences de l'Informatique
Commentary :
Oral presentation of the paper “From latent factors to language: a user study on LLM-generated explanations for an inherently interpretable matrix-based recommender system” by Manderlier, M.; Lecron, F.; Vu Thanh, O. and Gillis, N.
Available on ORBi UMONS :
since 14 January 2026
