
Model Explainability:
Reading SHAP Stories
This work uses beeswarm, waterfall, and force plots to explain how features affect the model's predictions. Together they show which features matter most, how each one contributes to an individual prediction, and the strength and direction of that contribution.
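
Before reading any of these plots, SHAP values have to be computed. The sketch below is a minimal, hypothetical setup rather than the project's actual pipeline: it fits a scikit-learn model on a stand-in tabular dataset and builds a shap.Explainer whose output feeds the plots discussed next.

```python
# A minimal sketch, not the project's actual pipeline: fit a model on a
# stand-in tabular dataset and compute SHAP values for every observation.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to a TreeExplainer here
shap_values = explainer(X)             # Explanation object: one row per observation
```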

Beeswarm Plot
The beeswarm plot shows the SHAP value of every feature for every observation in the dataset, with features ranked by overall importance. You can see at a glance how features shape individual outcomes and whether a feature's influence is uniform across participants or varies widely.
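
A minimal way to produce this plot with the shap library, continuing from the setup sketch above (the shap_values name comes from that sketch):

```python
# One dot per observation per feature; features are ranked by overall
# importance and each dot's colour encodes the underlying feature value.
shap.plots.beeswarm(shap_values)
```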

Waterfall Plot
In contrast to the beeswarm plot, the waterfall plot focuses on a single observation in detail. It shows how the prediction is built up from the model's base value: each feature's SHAP value is added in turn, ordered by the size of its contribution, until the final prediction is reached.
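
Continuing from the same setup sketch, a single indexed row of the Explanation object is all the waterfall plot needs (row 0 is just an arbitrary example):

```python
# Decompose one prediction: start at the base (expected) value and add each
# feature's SHAP value, largest contributions first, down to the final output.
shap.plots.waterfall(shap_values[0])
```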

Force Plot
This figure also focuses on one person's result, showing how each feature pushes the prediction up or down from the base value. The colors indicate the direction of each push and the bar lengths its strength, giving a quick sense of which features mattered most.
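
Again continuing from the setup sketch, the force plot takes the same single-row Explanation as the waterfall plot:

```python
# Features pushing the prediction above the base value appear in red, those
# pushing it below appear in blue; bar length reflects magnitude.
# matplotlib=True renders a static figure (the default is an HTML widget).
shap.plots.force(shap_values[0], matplotlib=True)
```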
From our team project in the Machine Learning Software Foundations Certificate at the Data Sciences Institute, University of Toronto: https://github.com/Chun-YuanChen/heart_guard_dsi_group_ML11
