Trustworthy AI

The Trustworthy AI Lab ties together the activities of L3S on explainability, bias and robustness of AI as part of our International Future Lab on Artificial Intelligence. It provides a path towards further synergies: an explicit joint label connecting L3S projects related to this topic, and an extension and complement of our activities through the assessment of AI systems based on the Z-Inspection® process developed by the Z-Inspection® initiative.

Why we need a Trustworthy AI Lab

Artificial Intelligence (AI) has the potential to move humanity forward, but only if it is developed and used responsibly. While AI offers the opportunity to optimise various areas of industry and society, it can also create discrimination, for example when an AI-based system is used in the allocation of jobs and loans. The scientific community is therefore faced not only with the task of optimising the predictive performance of AI algorithms, but also with incorporating ethical and legal principles into their design, training, and deployment. Responsibility in AI is more than a technical question; it requires interdisciplinary collaboration, ethical scrutiny, and a human-centred approach.

Several projects at L3S aim to understand – and avoid – the legal, societal, and technical challenges of such biases.

How we work – the Z-Inspection® Method

This inter-lab collaboration extends and complements L3S’s work with the assessment of AI systems based on the Z-Inspection® process developed by the Z-Inspection® initiative.

Among the questions this lab seeks to investigate are: What is trustworthiness? What is trustworthiness in AI applications? Does this differ according to application? What are technical solutions to enable, support, and sustain trustworthiness in AI?

Why we are different

The Trustworthy AI Lab at L3S facilitates dialogue among the L3S labs whose focus includes explainability, bias and robustness of AI.

Leibniz AI Lab

Personalised medicine puts the patient at the centre: prevention, diagnostics and therapy are tailored to the individual’s needs. To accomplish this, vast amounts of data must be processed and analysed, which requires intelligent, reliable and responsible systems. Since May 2020, excellent researchers from all over the world have been working at the Leibniz Future Laboratory for Artificial Intelligence (LeibnizAILab), together with colleagues from Leibniz Universität Hannover, Hannover Medical School (MHH) and European partner institutes, to develop such systems for personalised medicine. The interdisciplinary research team integrates a variety of approaches relevant to AI. It is important that the methods are both reproducible and robust and that data protection is always guaranteed. The results of intelligent systems must be explainable, fair and attributable.

BIAS

In the BIAS research group, experts from Leibniz Universität Hannover bring together epistemological as well as ethical, legal and technical perspectives. The Volkswagen Foundation is funding the inter-faculty research initiative as part of the call for proposals “Artificial Intelligence – Its Impact on Tomorrow’s Society”. The core idea: philosophers analyse the ethical dimension of concepts and principles in the context of AI (bias, discrimination, fairness). Lawyers examine whether the principles are adequately reflected in the relevant legal framework (data protection, consumer, competition, anti-discrimination law). And computer scientists develop concrete technical solutions to recognise discrimination and remedy it with debiasing strategies.
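
To make the technical side concrete, here is a minimal sketch of one widely used check for recognising discrimination: the statistical parity difference between a protected group and a reference group. The variable names, toy data and the 0.1 threshold are illustrative assumptions, not code from the BIAS project.

```python
# A minimal sketch of bias detection: statistical parity difference.
# Names (y_pred, group), toy data and the 0.1 threshold are illustrative only.
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)."""
    rate_protected = y_pred[group == 1].mean()
    rate_reference = y_pred[group == 0].mean()
    return rate_protected - rate_reference

# Toy example: predicted loan approvals for 8 applicants.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = approved
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 1 = protected group

spd = statistical_parity_difference(y_pred, group)
print(f"statistical parity difference: {spd:+.2f}")  # -0.50 for this toy data
if abs(spd) > 0.1:  # a common, but application-dependent, threshold
    print("warning: approval rates differ substantially between groups")
```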

NoBIAS

NoBIAS develops innovative methods for unbiased AI-based decision-making by taking ethical and legal considerations into account when developing technical solutions. The main goals of NoBIAS are to understand the legal, social and technical challenges of bias in AI decision-making, to counteract them by developing fairness-aware algorithms, to automatically explain AI results and to document the overall process in terms of data provenance and transparency.
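
As an illustration of what a fairness-aware algorithm can look like, the sketch below implements reweighing (Kamiran & Calders, 2012), a standard pre-processing strategy that assigns instance weights so that the protected attribute becomes statistically independent of the label before training. The toy data and variable names are assumptions for illustration, not NoBIAS code.

```python
# A hedged sketch of one fairness-aware pre-processing strategy: reweighing
# (Kamiran & Calders, 2012). Variable names and data are illustrative only.
import numpy as np

def reweighing_weights(group: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-instance weight w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(y)
    weights = np.empty(n)
    for g_val in np.unique(group):
        for y_val in np.unique(y):
            mask = (group == g_val) & (y == y_val)
            p_joint = mask.sum() / n
            p_expected = (group == g_val).mean() * (y == y_val).mean()
            weights[mask] = p_expected / p_joint if p_joint > 0 else 0.0
    return weights

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # favourable label = 1
w = reweighing_weights(group, y)
print(w)
# Pass w as sample_weight to any classifier that supports it, e.g.
# sklearn's LogisticRegression().fit(X, y, sample_weight=w).
```

Under-represented favourable outcomes receive weights above 1 and over-represented ones weights below 1, so a downstream classifier sees a balanced picture without any instance being dropped.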

NoBIAS is training a group of 15 ESRs (Early-Stage Researchers) to address problems of bias through multidisciplinary training and research in computer science, data science, machine learning, law and social science. ESRs will gain practical expertise in a variety of sectors including telecommunications, finance, marketing, media, software and legal services to promote regulatory compliance and innovation across the board. Technical, interdisciplinary and soft skills give ESRs a head start on future leadership positions in industry, academia or government.

xAIM

In the xAIM (eXplainable Artificial Intelligence in healthcare Management) project, we are working with partners from the Universities of Pavia, Keele and Ljubljana and the Goethe University Frankfurt to develop a Master’s programme dedicated to the application of explainable AI in healthcare, aimed particularly at people from this field. The online programme will be offered at the University of Pavia (Italy) and will create an international and interdisciplinary environment for students at the intersection of data science, artificial intelligence and healthcare. Students will learn the basics of machine learning and data science in order to handle the large and heterogeneous datasets typical of the medical environment. Alongside this, healthcare concepts are taught so that students can understand and interpret both the data and the results of their analyses. Furthermore, the degree programme places great emphasis on teaching students to recognise and discuss the ethical and social implications and risks of AI applications, and how to deal with them responsibly.
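
As a small taste of the kind of explanation technique such a curriculum covers, the sketch below computes permutation feature importance for a classifier on a public medical dataset using scikit-learn. The dataset and model choices are illustrative assumptions, not xAIM course material.

```python
# A minimal, hedged example of a model-agnostic explanation technique:
# permutation feature importance on a public medical dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop:
# features whose permutation hurts the score most matter most to the model.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```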

The Future Lab Society & Work

The Future Lab Society & Work researches the work-related consequences and effects of digitalisation; the possibilities, concepts and prerequisites for the design of digital working environments; the effects of artificial intelligence (AI) on organisational processes; and the economic policy and regulatory framework. The scientists involved, drawn from the social, economic, legal and technical fields, are as diverse as the lab’s research fields.

At L3S, we coordinate the project and specifically research the representativeness of data and models in AI algorithms.
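
One simple way to probe the representativeness of a dataset, sketched below, is a chi-squared goodness-of-fit test comparing group frequencies in the training data against reference population shares. The figures and the significance threshold are purely illustrative assumptions, not project data.

```python
# A sketch of a representativeness check: chi-squared goodness-of-fit test
# against a reference population. All numbers are invented for illustration.
from scipy.stats import chisquare

# Observed counts per demographic group in the training data.
observed = [620, 280, 100]
# Expected counts under the reference population shares (e.g. census data).
population_share = [0.50, 0.35, 0.15]
n = sum(observed)
expected = [share * n for share in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.2g}")
if p_value < 0.05:  # an illustrative significance level
    print("the sample deviates significantly from the reference population")
```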

PhD Programme: Responsible AI

The research and application of responsible AI is a very young discipline that requires bundling research activities from a wide range of disciplines in order to design and apply AI systems in a reliable, transparent, safe and legally acceptable manner.

The doctoral programme addresses these interdisciplinary research challenges through 14 transdisciplinary doctorates. Organised in four clusters, the fellows explore the most pressing research questions in the areas of quality, liability, interpretability, responsible use of information and the application of AI. An innovative, goal-oriented and internationally oriented supervision concept and an experienced PI team support the fellows in conducting excellent research.

Artificial Intelligence and Law

A recording of a talk given by Dr. Roberto V. Zicari for the course of Dr. Seongwook Heo at the Seoul National University Law School on March 30, 2022. The English part of the talk begins at minute 27 (the video will start automatically at that point). The talk introduces the EU Framework for Trustworthy AI, Zicari’s research on assessing trustworthy AI, and the EU AI Act.

Z-Inspection® is a registered trademark.

Z-Inspection® is distributed under the terms and conditions of the Creative Commons (Attribution-NonCommercial-ShareAlike CC BY-NC-SA) license.

In case of any questions, please contact:

Dr. rer. nat. Marco Fisichella

Lead Scientist for Trustworthy AI

mfisichella@l3s.de