
Assessing the effect of different kinds of explanations on how much people trust a cancer diagnosis

Principal Investigator

Name
Ben Armstrong

Degrees
Ph.D.

Institution
Massachusetts Institute of Technology

Position Title
Executive director and a research scientist

Email
armst@mit.edu

About this CDAS Project

Study
PLCO

Project ID
PLCO-1080

Initial CDAS Request Approval
Oct 27, 2022

Title
Assessing the effect of different kinds of explanations on how much people trust a cancer diagnosis

Summary
We are interested in seeing the effect different kinds of explanations have on people's trust in a cancer diagnosis.

For example, participants will be shown an individual case of a person with features such as smoking status and BMI, along with a prediction representing that person's risk of receiving a positive cancer diagnosis within x years.

However, each group in our user study will be told the explanations come from a different source: a doctor, an AI, a doctor and an AI working together, etc.

Participants will then be assessed on how much they trust the explanations from each source.

Aims

* We want to see whether people place more trust in the predictions of an AI system, a doctor, or both combined.
* We are also interested in whether this trust is appropriate. For example, a participant should not trust the AI (or the doctor) when it is wrong, so we would like to see which group has its trust best calibrated to the system (or person) at hand.
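One way the second aim could be operationalized is with a simple calibration score: the fraction of trials on which a participant's trust matched the correctness of the prediction (trusting a correct prediction, or distrusting an incorrect one). This is a minimal sketch under our own assumptions; the function name and scoring rule are illustrative, not the project's actual analysis method:

```python
def trust_calibration(trusted, correct):
    """Hypothetical calibration score: fraction of trials where the
    participant's trust matched the prediction's correctness, i.e.
    trusting a correct prediction or distrusting an incorrect one."""
    assert len(trusted) == len(correct)
    matches = sum(t == c for t, c in zip(trusted, correct))
    return matches / len(trusted)

# Illustrative data (not from the study): five trials in which the
# participant trusted four predictions, one of which was wrong, and
# correctly distrusted one incorrect prediction.
trusted = [True, True, True, True, False]
correct = [True, True, True, False, False]
print(trust_calibration(trusted, correct))  # → 0.8
```

A score near 1.0 would indicate well-calibrated trust for that condition (doctor, AI, or both), while a score near 0.5 would suggest trust unrelated to prediction accuracy.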

Collaborators

Apart from me, Eoin Kenny, the other people working on the project are:

Ben Armstrong
Massachusetts Institute of Technology
Julie Shah
Massachusetts Institute of Technology
Abby Jaques
Massachusetts Institute of Technology