Project description:
Explainable AI (XAI) has recently emerged as a set of techniques that attempt to explain machine learning (ML) models. The intended recipients (explainees) are humans or other intelligent virtual entities. Transparency, trust, and debugging are the underlying needs motivating XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the "system" knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements; hence the need for personalized explainable artificial intelligence. We plan to develop models and mechanisms that reconcile sub-symbolic, symbolic, and semantic representations, leveraging the agent-based paradigm. In particular, the proposed approach combines inter-agent, intra-agent, and human-agent interactions to benefit both from the specialization of ML agents and from agent collaboration mechanisms that integrate heterogeneous knowledge and explanations extracted from efficient black-box AI agents. The project includes validating the personalization and heterogeneous knowledge integration approach through a prototype application in the domain of food and nutrition monitoring and recommendation, including an evaluation of agent-human explainability and of the performance of the employed techniques in a collaborative AI environment.
Research team within the HES-SO:
Academic partners: Michael Schumacher, University of Applied Sciences and Arts Western Switzerland (HES-SO); Andrea Omicini, University of Bologna; Giovanni Ciatto, University of Bologna; Leon van der Torre, University of Luxembourg; Amro Najjar, University of Luxembourg; Reyhan Aydogan, Ozyegin University
Project duration:
Status: Ongoing