KI-SIGS

AI Space for Intelligent Health Systems

Initial situation

This project addresses the critical issue of privacy in healthcare AI models: sensitive patient data must neither leak nor be reconstructable by other AI models. At the same time, the models should remain explainable. Since explainability requires transparency, and transparency can undermine privacy, this relationship needs to be modelled mathematically and its effects quantified.
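One way this relationship could be formalized, given here as an illustrative sketch rather than the project's actual framework, is via mutual information: let S denote the sensitive patient attributes, \hat{Y} a model's prediction, and E the explanation released alongside it. Explainability then favours a large dependence I(E; \hat{Y}), while privacy demands that the leakage I(E; S) stay small, for example

    \max_{E}\; I(E; \hat{Y}) \quad \text{subject to} \quad I(E; S) \le \varepsilon,

where the budget \varepsilon makes the transparency/privacy tension explicit and quantifiable.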

Aims of the project

The aim of the project is to develop a novel information-theoretic framework to investigate and optimize the trade-off between privacy and explainability of machine learning algorithms in the context of health data.

Innovation

The innovation lies in the development of an information-theoretic measure that quantifies both aspects: data protection in terms of privacy, and explainability. The project addresses questions relevant to the use of AI models in healthcare that go beyond the state of the art; an illustrative sketch follows the list:

  • How can informational privacy be optimized if the statistical distribution of data is unknown?
  • How can the explainability of a machine/deep learning model be quantified?
  • How can the interaction between privacy, explainability and utility be investigated?
  • How can the impact of privacy on the transferability of knowledge from one domain to a related domain be investigated?
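To make the first two questions concrete, the following minimal sketch estimates, on toy data, the mutual information between an explanation E and a sensitive attribute S (a privacy-leakage proxy) and between E and the model's prediction (an explainability proxy). The plug-in histogram estimator and all names (sensitive, prediction, explanation) are hypothetical illustrations, not the project's actual measure:

    import numpy as np

    def mutual_information(x, y, bins=10):
        """Plug-in estimate of I(X;Y) in nats from paired samples,
        after discretizing both variables into equal-width bins."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
        nz = pxy > 0                          # skip zero cells to avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Hypothetical toy data: an explanation score that mostly encodes
    # a sensitive attribute and only weakly encodes the prediction.
    rng = np.random.default_rng(0)
    sensitive = rng.integers(0, 2, size=5000)    # binary patient attribute
    prediction = rng.integers(0, 2, size=5000)   # model output
    explanation = 0.8 * sensitive + 0.2 * prediction + rng.normal(0, 0.3, 5000)

    leakage = mutual_information(explanation, sensitive)      # privacy cost
    usefulness = mutual_information(explanation, prediction)  # explainability proxy
    print(f"I(E;S) ~ {leakage:.3f} nats, I(E;Y) ~ {usefulness:.3f} nats")

In such a setup, a privacy mechanism would aim to shrink I(E;S) while keeping I(E;Y), and hence the utility of the explanation, high.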

Results

The result will be a software framework that supports the design and analysis of AI models with regard to privacy and explainability. This lays the foundation for new standards and best practices in handling healthcare data and the AI models built on it.

Consortium

Through its partners, the project combines all the required scientific and technological competencies.

Funding

This project is funded within the framework of the Austrian funding programme "IKT der Zukunft" (ICT of the Future).

Contact

Lukas Fischer

Research Manager Data Science
Phone: +43 50 343 828
