Development of Explainable AI Techniques for Complex Disease Diagnosis using Genomics Data

In this project we investigate approaches to make deep learning models transparent, understandable, and explainable in the context of complex disease diagnosis based on genomics data. While deep learning models have demonstrated unprecedented effectiveness in many tasks, they are limited by their inability to explain their decisions and results (i.e., they are “black box” models). The key problem is that deep learning models lack an explicit declarative knowledge representation and are limited in their ability to generate the underlying explanatory structures. In some cases, e.g. in low-risk environments, it is sufficient to know the ‘what’ of a problem; in other contexts, e.g. in healthcare (such as disease diagnosis), knowing the ‘why’ is essential to understand the problem, the data, and the reasons for a recommendation produced by the model. This explainability is essential for building trust in the model and for ensuring the safety of approaches that rely on deep learning; in healthcare, and specifically for disease diagnosis, the explainability of results obtained through automatic decision support systems is therefore essential. Complex diseases, unlike single-gene disorders, do not have a clear pattern of inheritance. In this project, we intend to acquire publicly available gene expression datasets generated from healthy and disease-affected individuals; these datasets will be analysed by means of explainable AI (XAI) methods based on deep learning algorithms. The complex patterns of molecular signatures deduced by our approach may help predict a person’s risk of inheriting or passing on these diseases.
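To illustrate the kind of analysis the project describes, the following is a minimal, hypothetical sketch of gradient-based feature attribution (saliency) for a classifier of gene-expression profiles. It is not the project's actual pipeline: the data are synthetic, the model is a simple logistic regression trained with gradient descent, and the "informative genes" are planted by construction. The idea carries over to deep models, where the gradient of the predicted disease probability with respect to each gene's expression value scores that gene's contribution to the prediction.

```python
import numpy as np

# Hypothetical sketch: gradient-based attribution on synthetic "gene expression" data.
rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))

# Plant a signal: genes 3 and 7 drive the (synthetic) disease label.
logits_true = 2.0 * X[:, 3] - 1.5 * X[:, 7]
y = (logits_true + rng.normal(scale=0.5, size=n_samples) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic-regression classifier with plain gradient descent.
w, b, lr = np.zeros(n_genes), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y) / n_samples)
    b -= lr * np.mean(p - y)

# Saliency for one sample: gradient of the predicted probability with respect
# to each gene's expression. For logistic regression this is p * (1 - p) * w,
# so genes are ranked by |w|.
x = X[0]
p = sigmoid(x @ w + b)
saliency = p * (1.0 - p) * w
top_genes = sorted(np.argsort(-np.abs(saliency))[:2].tolist())
print(top_genes)  # with this synthetic setup, the planted genes should rank highest
```

For deep networks the same quantity is obtained by backpropagating the output probability to the input layer; more elaborate XAI methods (e.g. integrated gradients or SHAP values) refine this basic gradient signal.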

Chief Investigator(s): Dr Rahee Amit Walambe (Symb.), Dr Ketan Kotecha (Symb.), Dr Satyajeet Khare (Symb.), Dr Guido Zuccon (UQ), Dr Sen Wang (UQ)

Administering Organisation: Symbiosis International

Value: ~$74,755 (AUD)

Funding round: 2019 (2019-2021)