Principal Investigator
Name
Xujiong Ye
Degrees
Ph.D.
Institution
University of Lincoln
Position Title
Professor in Medical Imaging & Computer Vision
Email
About this CDAS Project
Study
NLST (Learn more about this study)
Project ID
NLST-1033
Initial CDAS Request Approval
Mar 28, 2023
Title
Causal Counterfactual visualisation for human causal decision making – A case study in healthcare
Summary
This research project, funded by the UK EPSRC, will investigate novel causal counterfactual visualisation which, in contrast to the direct visualisation of real data, offers a new capability: rendering causal counterfactuals that did not occur in reality. The counterfactuals will be generated by a counterfactual simulation model trained on real data. This extends standard data visualisation by visualising hypothetical exemplars beyond the real data. It will support "explanation-with-examples" by enabling decision makers to interactively create synthetic data and examine "close possible worlds" (e.g. different outcomes resulting from a small causal change). Visualising concrete exemplars will allow people to view key evidence and contest their decisions against the counterfactuals to gain actionable insights. Causal counterfactual visualisation will be underpinned by the latest advances in both AI and psychology. The new causal counterfactual visualisation techniques developed in this project will offer a useful channel for assessing and furthering our understanding of human behaviour and performance in causal decision making.

In this project, we will use one clinical case study to probe causal decision making in healthcare, specifically lung cancer clinical decision support: providing clinical judgement based on causal risk factors for developing lung cancer, targeting unit-level causality about individual patients. We will focus on causal structure learning and the use of causal counterfactual visualisation in lung cancer risk prediction from clinical, demographic, family medical history, and CT image data, visualising the future occurrence of lung nodules together with their timing and locations. Potential risk factors include age, smoking status, medical history, and work history. Human decisions based on the prediction model will enable patients at high risk to be identified at an early stage for timely intervention. Image data will contribute to the risk calculation. Synthetic images will be generated for visualisation to display the hypothetical outcomes under alternative actions.
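The "close possible worlds" idea above can be made concrete with the standard three-step counterfactual procedure (abduction, intervention, prediction). The sketch below uses a toy structural causal model whose factors and coefficients are hypothetical illustrations, not estimates from NLST data: it recovers the patient-specific noise consistent with an observed risk, intervenes on smoking status, and re-computes the risk for the same patient.

```python
import math

# Toy structural causal model (SCM) for illustration only: the coefficients
# below are hypothetical, not learned from NLST data.
# risk = sigmoid(b0 + b_age*age + b_smoke*smoking + noise), where `noise` is
# the patient-specific exogenous term recovered in the abduction step.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_risk(age, smoking, noise=0.0):
    """Structural equation for lung cancer risk (hypothetical coefficients)."""
    return sigmoid(-6.0 + 0.05 * age + 2.0 * smoking + noise)

def counterfactual_risk(age, smoking, observed_risk, new_smoking):
    """Three-step counterfactual: abduction, intervention, prediction."""
    # 1. Abduction: recover the exogenous noise consistent with the observation.
    logit = math.log(observed_risk / (1.0 - observed_risk))
    noise = logit - (-6.0 + 0.05 * age + 2.0 * smoking)
    # 2. Intervention: set smoking to the counterfactual value.
    # 3. Prediction: re-run the structural equation with the same noise.
    return predict_risk(age, new_smoking, noise)

factual = predict_risk(age=62, smoking=1)
cf = counterfactual_risk(age=62, smoking=1, observed_risk=factual, new_smoking=0)
# For this toy model, the counterfactual risk with smoking removed is lower.
```

Because the same exogenous noise is reused, the counterfactual describes this individual patient under a small causal change, which is exactly the unit-level question the visualisation is meant to display.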

This work will demonstrate how the new technologies can help reduce variability and support robust decision making with actionable insights that clinicians can interpret. We will train and evaluate the model using the National Lung Screening Trial (NLST) database.
Aims

To develop machine learning and visualisation technologies that aid clinical judgement based on causal risk factors for developing lung cancer. The work involves three specific aims:

- CT lung cancer detection and segmentation

A deep learning-based method will be used to accurately detect and segment lung cancer in CT images acquired at different time points. Image features (such as cancer volume, location, and texture features) will be calculated and used in the subsequent steps.
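The feature-extraction step following segmentation can be sketched as below. The voxel spacing and the particular feature set (volume, centroid, first-order intensity statistics) are illustrative assumptions, standing in for whatever features the project's pipeline actually computes.

```python
import numpy as np

# Toy example of feature extraction from a segmented nodule: given a CT
# volume and a binary mask, compute volume, centroid, and simple intensity
# statistics. Spacing and feature choices are illustrative assumptions.

def nodule_features(ct, mask, spacing=(1.0, 1.0, 1.0)):
    """Volume (mm^3), centroid (voxel coords), and first-order texture stats."""
    voxels = np.argwhere(mask)                       # (N, 3) voxel coordinates
    voxel_volume = spacing[0] * spacing[1] * spacing[2]
    intensities = ct[mask]                           # HU values inside the nodule
    return {
        "volume_mm3": voxels.shape[0] * voxel_volume,
        "centroid": voxels.mean(axis=0),
        "mean_hu": float(intensities.mean()),
        "std_hu": float(intensities.std()),          # first-order texture proxy
    }

# Synthetic 10x10x10 volume with a 3x3x3 "nodule"
ct = np.full((10, 10, 10), -800.0)                   # lung background (HU)
mask = np.zeros(ct.shape, dtype=bool)
mask[4:7, 4:7, 4:7] = True
ct[mask] = 40.0                                      # soft-tissue nodule (HU)

feats = nodule_features(ct, mask, spacing=(0.7, 0.7, 1.25))
```

Features of this kind then feed into the causal structure learning stage alongside the clinical and demographic variables.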

- Causal structure learning and identification of causal risks of developing lung cancer

Causal structure will be learned from the available data, including clinical and demographic data, smoking status, medical history, work history, and the image features extracted in the first stage. A new generative machine learning model will serve as the basic architecture to establish the causal links among all the factors and to identify the causal risks of developing lung cancer and its progression. The key component, a directed acyclic graph (DAG), will be learned for causal inference.
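Whatever method learns the structure, the result must be a valid DAG before causal inference can proceed. The sketch below hand-writes a candidate graph over the factors named in this aim (it is not a structure learned from NLST data) and checks acyclicity with Kahn's algorithm, then reads off the direct causal parents of the outcome.

```python
from collections import defaultdict, deque

# Hand-written candidate causal graph over the factors named in the aim,
# for illustration only; a real structure would be learned from data.

def is_acyclic(nodes, edges):
    """Kahn's algorithm: True iff the directed graph has no cycles."""
    indeg = {n: 0 for n in nodes}
    children = defaultdict(list)
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in children[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)           # all nodes ordered => no cycle

nodes = ["age", "smoking", "work_history", "medical_history",
         "nodule_features", "lung_cancer"]
edges = [("age", "smoking"), ("age", "lung_cancer"),
         ("smoking", "lung_cancer"), ("work_history", "lung_cancer"),
         ("medical_history", "lung_cancer"), ("lung_cancer", "nodule_features")]

assert is_acyclic(nodes, edges)
# Direct causal parents of the outcome, per this candidate graph:
parents = sorted(u for u, v in edges if v == "lung_cancer")
```

In a DAG like this, the parents of the outcome node are exactly the "causal risks" the aim refers to; the acyclicity check is what distinguishes a usable causal graph from an arbitrary dependency network.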

- Synthetic image generation and visualisation

We will further develop a conditional generative adversarial network (GAN) in stage three to generate synthetic images from input clinical data; these will be used in the visualisation, along with the detected cancer regions at different time points.
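The conditioning idea can be illustrated with a minimal generator forward pass: a noise vector is concatenated with a clinical condition vector and mapped to an image patch. This is a toy NumPy sketch only; the layer sizes, the condition encoding, the discriminator, and the training loop are all omitted, and none of it reflects the project's actual architecture.

```python
import numpy as np

# Minimal sketch of conditional generation (generator forward pass only).
# Shapes, weights, and the clinical-condition encoding are toy assumptions.

rng = np.random.default_rng(0)

class ConditionalGenerator:
    def __init__(self, z_dim, cond_dim, hidden, out_shape):
        self.out_shape = out_shape
        out_dim = int(np.prod(out_shape))
        # Random untrained weights; a real GAN learns these adversarially.
        self.W1 = rng.normal(0.0, 0.1, (z_dim + cond_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))

    def __call__(self, z, cond):
        x = np.concatenate([z, cond])    # condition the generator on clinical data
        h = np.tanh(x @ self.W1)
        img = np.tanh(h @ self.W2)       # pixel values in (-1, 1)
        return img.reshape(self.out_shape)

gen = ConditionalGenerator(z_dim=16, cond_dim=4, hidden=32, out_shape=(8, 8))
cond = np.array([0.62, 1.0, 0.0, 1.0])   # e.g. scaled age, smoking status, ... (toy)
patch = gen(rng.normal(size=16), cond)   # an 8x8 synthetic patch
```

Varying `cond` while holding the noise vector fixed is what lets the visualisation show hypothetical outcomes under alternative actions for the same underlying patient.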

Collaborators

Dr Lei Zhang, University of Lincoln
Professor Feng Dong, University of Strathclyde