Exploration of Explainable AI Models using Machine Learning for Prostate Cancer Diagnosis
This project aims to explore Explainable AI (XAI) techniques to enhance the transparency of ML-driven prostate cancer diagnosis. Using datasets from prostate cancer screening and diagnostic procedures, the study will compare multiple ML models. Two XAI methods, SHAP and LIME, will be employed to interpret the models' predictions, and the clinical relevance and trustworthiness of those explanations will be assessed.
The objectives of this study include:
Evaluating ML models for their accuracy in prostate cancer detection.
Applying XAI techniques to interpret model decisions.
Comparing model performance and interpretability to determine the most clinically useful approach.
Assessing the feasibility of AI integration into clinical workflows for enhanced decision-making.
Through this research, I aim to bridge the gap between AI advancements and real-world medical applications, fostering trust in AI-assisted prostate cancer diagnosis. The findings will provide insights into balancing accuracy with interpretability, making AI-driven healthcare more accessible and reliable.
Develop a Machine Learning Pipeline for Prostate Cancer Diagnosis
Utilize supervised ML models such as Random Forest, Support Vector Machines (SVM), and Neural Networks.
Train the models on prostate cancer screening and diagnostic datasets (a minimal training sketch follows below).
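A minimal sketch of what this pipeline could look like in scikit-learn. The file name prostate_screening.csv, the feature columns, and the binary diagnosis label are placeholders for whichever screening dataset is ultimately used, not a fixed dataset.

```python
# Sketch of the training pipeline, assuming a hypothetical CSV
# ("prostate_screening.csv") with numeric screening features (e.g. PSA
# level, age) and a binary "diagnosis" column (1 = cancer, 0 = benign).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

df = pd.read_csv("prostate_screening.csv")  # hypothetical dataset path
X = df.drop(columns=["diagnosis"])
y = df["diagnosis"]

# Hold out a stratified test set so both classes stay represented.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scale features for the SVM and neural network; the random forest does
# not need scaling, but pipelines keep the comparison uniform.
models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True, random_state=42)),
    "neural_net": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=42)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```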
Apply Explainable AI (XAI) Techniques
Implement SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to interpret ML model predictions (see the sketch after this list).
Compare the two XAI methods on how clearly and faithfully they explain individual predictions.
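A sketch of how SHAP and LIME could be applied to one of the trained models. The names models, X, X_train, and X_test carry over from the previous snippet, and the class names ("benign", "cancer") are assumed labels; the shape of the SHAP output varies between library versions, which the code hedges against.

```python
# Apply SHAP and LIME to the random forest from the pipeline sketch.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

rf = models["random_forest"]  # fitted model from the previous snippet

# --- SHAP: feature attributions based on Shapley values ---
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
# Older shap versions return a list with one array per class; newer ones
# return a single 3-D array. Take the positive-class attributions either way.
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print("Mean |SHAP| per feature (global importance):")
for name, val in zip(X.columns, np.abs(pos).mean(axis=0)):
    print(f"  {name}: {val:.4f}")

# --- LIME: local surrogate explanation for a single patient ---
lime_explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=list(X.columns),
    class_names=["benign", "cancer"],  # assumed label names
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_test.values[0], rf.predict_proba, num_features=5
)
print("LIME explanation for the first test case:")
for feature, weight in exp.as_list():
    print(f"  {feature}: {weight:+.4f}")
```

SHAP gives a consistent global view of which features drive predictions, while LIME explains one case at a time, which maps more directly onto how a clinician reviews an individual patient.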
Evaluate Model Performance and Interpretability
Assess the accuracy, sensitivity, and specificity of each ML model (computed as sketched below).
Measure the clinical relevance and trustworthiness of the XAI explanations.
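A sketch of the evaluation step, reusing the fitted models and the held-out test split from the pipeline snippet. Sensitivity is the recall on the positive (cancer) class and specificity the recall on the negative class, both read off the confusion matrix.

```python
# Evaluate each fitted model on the held-out test set.
from sklearn.metrics import accuracy_score, confusion_matrix

for name, model in models.items():
    y_pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate: cancers caught
    specificity = tn / (tn + fp)  # true negative rate: benign cases cleared
    print(
        f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
        f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}"
    )
```

Reporting sensitivity and specificity alongside accuracy matters here because the clinical costs of a missed cancer and an unnecessary biopsy are very different.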
Integrate Findings into a Clinically Relevant Framework
Analyse how AI-driven explanations align with clinical decision-making.
Provide recommendations for improving AI adoption in prostate cancer diagnosis.
I will be working on this independently as my final-year project.