
Model Explainability:
Reading SHAP Stories

This work uses beeswarm, waterfall, and force plots to explain how features affect the model's predictions. Together they show which features matter most, how much each one contributes, and the strength and direction of its effect.

[Image: SHAP beeswarm plot]

Beeswarm Plot

The beeswarm plot shows the SHAP values of every observation for each feature. It reveals how features shape individual outcomes across the dataset and whether a feature's influence is fairly uniform across participants or varies widely.
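For anyone curious how a plot like this is generated, here is a minimal sketch using the shap library. The project's heart-disease data isn't reproduced here, so scikit-learn's breast-cancer dataset and a gradient-boosting classifier stand in as placeholders; the actual pipeline lives in the GitHub repo linked below.

# Sketch of a SHAP beeswarm plot (placeholder data and model; the
# project itself used a heart-disease dataset and its own pipeline).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Load a tabular dataset and fit a simple tree-based classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values (in log-odds units) for every
# observation and every feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# One dot per observation per feature: position = SHAP value,
# color = the feature's own value (high vs. low).
shap.plots.beeswarm(shap_values)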

[Image: SHAP waterfall plot for observation 12, heart-disease model]

Waterfall Plot

In contrast to the beeswarm plot, the waterfall plot focuses on a single observation in detail. It shows how that prediction is built up from the model's base value, adding each feature's SHAP contribution in order of magnitude, so you can see how much each feature pushed the result up or down.
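A similar sketch for the waterfall view, again with placeholder data and model. Indexing the SHAP Explanation object at a single row (observation 12 here, to echo the figure above) gives the per-observation breakdown.

# Sketch of a single-observation SHAP waterfall plot (placeholder
# data and model; the project itself used a heart-disease dataset).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Pick one observation (index 12, matching the figure) and show how its
# prediction is assembled from the base value, one feature at a time.
shap.plots.waterfall(shap_values[12])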

[Image: SHAP force plot for observation 12, heart-disease model]

Force Plot

This figure also focuses on one person's result, showing how each feature pushes the prediction away from the base value. The colors mark the direction of each push and the bar sizes its magnitude, giving a quick feel for which features mattered most.
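And a sketch for the force plot, under the same placeholder assumptions as above. Passing matplotlib=True renders a static image rather than the interactive JavaScript widget.

# Sketch of a single-observation SHAP force plot (placeholder data
# and model; the project itself used a heart-disease dataset).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Features pushing the prediction above the base value are drawn in red,
# those pushing it below in blue; bar length reflects the size of the push.
shap.plots.force(shap_values[12], matplotlib=True)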

From our team project in the Machine Learning Software Foundations Certificate at the Data Sciences Institute, University of Toronto: https://github.com/Chun-YuanChen/heart_guard_dsi_group_ML11

