How do AI education and AI systems become explainable?

Moritz Kreinsen, research associate and PhD student in computer science education at the University of Hamburg, is a member of the international “Squad on Explainable AI in Education” of the European Commission's European Digital Education Hub.
23 October 2024
Photo: EDEH/EC
As AI systems increasingly shape instruction, educational outcomes and assessment, the demand for transparency and accountability in these systems has grown. Explainable AI (XAI for short) aims to bridge the gap between complex AI algorithms on the one hand and teachers, learners and administrative staff on the other by making transparent how AI systems arrive at their conclusions. This transparency is crucial: it fosters trust among users, who can see and understand the reasons behind AI-driven decisions, recommendations and actions, and it empowers educators to make informed decisions about integrating AI tools into their teaching strategies. In addition, it helps ensure that AI systems adhere to ethical standards, reduce potential biases and promote fairness in assessments (source: EDEH/EC).
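To make the idea of “interpretable feedback” concrete, here is a minimal, hypothetical sketch in Python: a simple additive scoring model whose prediction can be decomposed into per-feature contributions that a teacher or learner could inspect. The feature names, weights and learner profile are invented for illustration only and do not stem from any system discussed by the squad.

# A minimal, hypothetical sketch of explainability: a linear scoring
# model whose prediction can be broken down into per-feature
# contributions. All names and weights below are illustrative
# assumptions, not part of any real educational AI system.

FEATURES = {
    "quiz_average": 0.5,         # weight: past quiz performance
    "exercises_completed": 0.3,  # weight: share of exercises finished
    "forum_activity": 0.2,       # weight: participation in discussions
}

def predict_with_explanation(learner: dict) -> tuple[float, dict]:
    """Return a score prediction plus each feature's contribution."""
    contributions = {
        name: weight * learner[name] for name, weight in FEATURES.items()
    }
    return sum(contributions.values()), contributions

# Example: one fictional learner profile, all values scaled to 0..1.
learner = {"quiz_average": 0.9, "exercises_completed": 0.4, "forum_activity": 0.7}
score, why = predict_with_explanation(learner)

print(f"predicted score: {score:.2f}")
for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.2f}")  # shows which factors drove the prediction

Real classroom systems are far more complex than such a linear model, but the principle is the same: an explainable system exposes the reasons behind its output instead of only the output itself.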
The European Digital Education Hub (EDEH) is a European Commission initiative that promotes and supports digital teaching and learning practices within the EU. It was created to facilitate the exchange of information, best practices and resources in the field of digital education, and it forms part of the EU's efforts to drive the digital transformation of the education sector and to ensure that all learners have access to modern, digital learning environments.
To pursue these goals in the field of “Explainable AI in Education”, a digitally networked “Squad” of experts from different educational backgrounds, specialisations and nationalities was set up in August 2024 for the period until December 2024. On behalf of the European Commission, it is developing guidelines for action and resources for policymakers and educators. The squad's work was complemented by a workshop in Brussels on 17/18 October 2024, where initial results were compiled in an intensive exchange with members of the European Commission.
Moritz Kreinsen, research associate and PhD student in the Computer Science Education working group at the University of Hamburg, is part of this squad and works in a sub-unit on “Explainable AI for AI Literacy”. At the workshop in Brussels, he facilitated a working group on the research question “What are the key challenges and opportunities for ensuring explainability in generative AI models used in classroom settings, and how can these models be designed to provide interpretable feedback to educators and learners?”
The results of the EDEH Squad's work will be presented to the European Commission in December 2024.