Assessing the effect of different kinds of explanations on how much people trust a cancer diagnosis
For example, participants will be shown an individual case described by features such as whether the person smokes, their BMI, and so on, together with a prediction of that person's risk of receiving a positive cancer diagnosis within x years.
However, each group in our user study will be told that the explanations come from a different source, such as a doctor, an AI, or a doctor and an AI working together.
Participants will then be assessed on how much they trust the explanations from these different sources.
* We want to see whether people place more trust in the predictions of an AI system, a doctor, or the two combined.
* We are also interested in whether this trust is appropriate: for example, a user should not trust the AI (or doctor) when it is wrong, so we would like to see which group's trust is best calibrated to the system (or person) at hand.
Apart from me, Eoin Kenny, the other people working on the project are:
* Ben Armstrong, Massachusetts Institute of Technology
* Julie Shah, Massachusetts Institute of Technology
* Abby Jaques, Massachusetts Institute of Technology