CREED

Coherence and Explanation (CREED)

Funds: Internal Research Funds 2017 – Free University of Bolzano.

Principal Investigator: Daniele Porello

Co-PI: Oliver Kutz


Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have resulted in a remarkable increase in the number and capabilities of the intelligent systems employed in professional and private life, from credit-ranking systems to personal health applications and smartwatches. Given this growing involvement in (and influence over) people's everyday activities, the question of the comprehensibility, accountability, and explainability of intelligent systems, their decisions, and their behavior is becoming a compelling issue in assessing technology in society. From the perspective of AI and ML research, the corresponding questions are varied, ranging from theory (What is an explanation in a technological context? What does it mean to comprehend a system?) to applied research (How can interfaces be made more comprehensible? How does information about system states and processing have to be presented so as to serve as an explanation?). These issues are tied to several long-standing strands of research in AI and ML, including work in neural-symbolic integration and research into ontologies and knowledge representation, but also into human-computer interaction and actual system design. The aim of this project is to develop a philosophical understanding and a formal definition of the concept of explanation for AI- and ML-based technology, to enhance users' awareness of those technologies, and to assess their transparency and accountability. Specifically, we will study a formal theory of explanation by grounding it in a theory of coherence derived from the work of Thagard. We then investigate use-cases in ML and AI relating to the problem of learning coherence measures as well as to defining coherence based on similarity between concepts.
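In Thagard's account (developed with Verbeurgt), coherence is formalized as a constraint-satisfaction problem: a set of elements (propositions, hypotheses, pieces of evidence) must be partitioned into accepted and rejected sets so as to maximize the total weight of satisfied constraints, where a positive constraint is satisfied when both of its elements land in the same set and a negative constraint when they land in different sets. As a minimal sketch of this idea (the function names, the toy example, and the brute-force search are our own illustration, not the project's intended method):

```python
from itertools import product

def coherence(positive, negative, accepted):
    """Total weight of satisfied constraints for a given accepted set.

    positive/negative: dicts mapping frozenset({a, b}) -> weight.
    A positive constraint is satisfied when both elements are accepted
    or both rejected; a negative one when exactly one is accepted.
    """
    total = 0.0
    for pair, w in positive.items():
        a, b = tuple(pair)
        if (a in accepted) == (b in accepted):
            total += w
    for pair, w in negative.items():
        a, b = tuple(pair)
        if (a in accepted) != (b in accepted):
            total += w
    return total

def maximize_coherence(elements, positive, negative):
    """Brute-force search over all 2^n accept/reject partitions."""
    best_set, best_val = set(), float("-inf")
    for bits in product([False, True], repeat=len(elements)):
        accepted = {e for e, keep in zip(elements, bits) if keep}
        val = coherence(positive, negative, accepted)
        if val > best_val:
            best_set, best_val = accepted, val
    return best_set, best_val

# Toy instance: evidence e supports hypothesis h1 (positive constraint);
# h1 and h2 are competing hypotheses (negative constraint).
elements = ["h1", "h2", "e"]
positive = {frozenset({"h1", "e"}): 2.0}
negative = {frozenset({"h1", "h2"}): 1.0}
accepted, value = maximize_coherence(elements, positive, negative)
```

The exact version of this problem is NP-hard (it generalizes MAX-CUT), which is one reason approximate and learned coherence measures, as mentioned above, are of interest.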
The project will be developed in close collaboration with the University of Bremen, the Otto-von-Guericke-Universität Magdeburg, the Artificial Intelligence Research Institute of Barcelona, and the Laboratory for Applied Ontology, Institute for Cognitive Sciences and Technologies (ISTC) of the Italian National Research Council (CNR).