Principal Investigator
Name
Dr. Chern Hong Lim
Degrees
Ph.D.
Institution
Monash University Malaysia Sdn Bhd [Co. Reg. No.: 199801002475 (458601-U)]
Position Title
Lecturer
Email
About this CDAS Project
Study
NLST
Project ID
NLST-656
Initial CDAS Request Approval
Apr 9, 2020
Title
Explainable Artificial Intelligence Model for Lung Cancer Detection
Summary
In the medical domain, explainability is ranked by physicians as the most desirable feature of a clinical decision support system, because it earns the trust of users and patients. However, this contradicts the state-of-the-art artificial intelligence (AI) inference models implemented and used in this domain, where the final decision is computed by "black box" algorithms such as artificial neural networks and deep learning. Although remarkable results have been published and validated by scientific experiments, these models lack the explicit declarative knowledge representation needed to construct an underlying explanatory model that can answer "Why" questions about the predicted outcome. Such circumstances limit the trustworthiness of a diagnosis system, as the algorithms have been shown to easily misclassify inputs that bear no resemblance to the true categories, and are therefore unreliable for supporting a crucial decision. In this research, our objective is to extend the study of generating explainable AI for lung cancer diagnosis by integrating causality theory and model induction in medical imaging (e.g., CT scans) to support decision making in lung cancer detection.
Aims

The project aims to investigate and implement techniques and new theories for generating explainable AI in cancer diagnosis. The expected outcomes of this project include:
(a) A technique to learn explainable features from the relevant medical images (CT scans).
(b) An interpretable causal model to represent the extracted knowledge for inference.
(c) A technique to infer an explainable model that generates explanations for the predicted outcome.
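As one concrete illustration of the kind of post-hoc explanation aim (a) points toward, the sketch below implements occlusion sensitivity: a patch is slid over the image, masked out, and the drop in the model's score is recorded, so high values mark the regions the prediction depends on. This is a minimal sketch, not the project's method; `toy_predict` is a hypothetical stand-in for a trained classifier, and the 16x16 array with a bright block is a stand-in for a CT slice containing a nodule.

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, patch=4, baseline=0.0):
    """Build a heatmap by occluding each patch of `image` with
    `baseline` and recording how much the model's score drops.
    Larger values = region mattered more to the prediction."""
    h, w = image.shape
    base_score = predict_fn(image)
    heatmap = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # attribute the score drop to the occluded region
            heatmap[y:y + patch, x:x + patch] = base_score - predict_fn(occluded)
    return heatmap

# Hypothetical stand-in for a trained classifier: the "score" is the
# mean intensity of a fixed region of interest.
def toy_predict(img):
    return img[8:12, 8:12].mean()

img = np.zeros((16, 16))
img[8:12, 8:12] = 1.0  # bright block standing in for a nodule
hmap = occlusion_sensitivity(img, toy_predict)
# The heatmap peaks over the bright block, not the background,
# i.e. the map "explains" which pixels drove the score.
```

Model-agnostic perturbation maps like this answer a limited "where" question; the project's stated goal goes further, pairing such feature attributions with a causal model that can answer "why".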

Collaborators

1) Monash University
2) National Cancer Institute Malaysia