Explainable AI for Industrial Automation
A predictive maintenance system using Google Cloud Vertex AI with Sampled Shapley explainability to detect operational faults in physical machinery. This page demonstrates local and global explainability, quantitative model performance, counterfactual what-if analysis, and an open-source simulation pipeline, bridging the gap between black-box ML and transparent, trustworthy AI for industrial systems.
Live Telemetry Feed (Simulated)
SHAP Output Samples (Vertex AI Endpoint)
Sample predictions from the deployed Vertex AI endpoint with Sampled Shapley attributions. Each row is one inference call; the parenthesized value beside each feature reading is that feature's Sampled Shapley attribution score (here, positive attributions push the prediction toward FAULT, negative toward NORMAL).
| ID | Joint Torque | Vibration RMS | Cycle Time | Encoder Err | Prediction | Confidence |
|---|---|---|---|---|---|---|
| #001 | 62.4 (+0.38) | 0.12 (+0.52) | 1180 (-0.03) | 0.08 (+0.28) | FAULT | 94.2% |
| #002 | 38.1 (-0.12) | 0.03 (-0.22) | 950 (-0.05) | 0.01 (-0.08) | NORMAL | 97.8% |
| #003 | 55.0 (+0.30) | 0.09 (+0.45) | 1600 (+0.14) | 0.01 (-0.04) | FAULT | 89.1% |
| #004 | 42.5 (-0.08) | 0.05 (-0.15) | 1100 (-0.02) | 0.07 (+0.22) | NORMAL | 62.3% |
| #005 | 71.8 (+0.55) | 0.15 (+0.61) | 1850 (+0.18) | 0.12 (+0.35) | FAULT | 99.4% |
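Rows like the ones above can be assembled from an explanation response. The dictionary shapes and field names below are assumptions modeled on the general form of an endpoint explanation result, not the exact Vertex AI response schema:

```python
# Illustrative sketch: flattening one explained prediction into a table row.
# The instance/attribution dict shapes are assumptions, not the literal
# Vertex AI Endpoint.explain() response schema.

def to_table_row(sample_id, instance, attribution, label, confidence):
    """Pair each feature value with its Shapley attribution, as in the table above."""
    cells = [
        f"{instance[f]} ({attribution[f]:+.2f})"
        for f in ("joint_torque", "vibration_rms", "cycle_time", "encoder_err")
    ]
    return [sample_id, *cells, label, f"{confidence:.1f}%"]

row = to_table_row(
    "#001",
    {"joint_torque": 62.4, "vibration_rms": 0.12, "cycle_time": 1180, "encoder_err": 0.08},
    {"joint_torque": 0.38, "vibration_rms": 0.52, "cycle_time": -0.03, "encoder_err": 0.28},
    "FAULT",
    94.2,
)
print(row)
# → ['#001', '62.4 (+0.38)', '0.12 (+0.52)', '1180 (-0.03)', '0.08 (+0.28)', 'FAULT', '94.2%']
```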
Model Performance Metrics
Quantitative evaluation on the held-out industrial telemetry test set (n = 2,480 samples). Model: Gradient Boosted Trees trained on Vertex AI AutoML.
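For reference, the standard metrics reported in such an evaluation derive directly from the fault/normal confusion matrix. The counts below are hypothetical placeholders chosen only to total the page's n = 2,480, not the reported results:

```python
# Deriving standard binary-classification metrics from a confusion matrix.
# tp/fp/fn/tn counts here are hypothetical, not the page's actual results.

def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of predicted FAULTs, how many were real
    recall = tp / (tp + fn)             # of real FAULTs, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

m = classification_metrics(tp=430, fp=22, fn=18, tn=2010)   # totals 2,480
print({k: round(v, 3) for k, v in m.items()})
```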
System Architecture
End-to-end data flow from physical sensor telemetry through the ML inference pipeline to the explainable prediction output.
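That flow can be sketched as three composable stages. All function names, the stand-in scoring rule, and the raw payload keys below are illustrative assumptions, not the deployed pipeline:

```python
# Sketch of the described data flow: raw telemetry -> feature vector ->
# inference -> explained prediction. Names and thresholds are illustrative.

def extract_features(raw):
    """Normalize a raw sensor payload into the model's feature vector."""
    return {
        "joint_torque": raw["torque_nm"],
        "vibration_rms": raw["vib_rms"],
        "cycle_time": raw["cycle_ms"],
    }

def infer(features, threshold=0.9):
    """Stand-in for the Vertex AI endpoint call: a fixed linear score."""
    score = 0.01 * features["joint_torque"] + 3.0 * features["vibration_rms"]
    return {"label": "FAULT" if score > threshold else "NORMAL", "score": score}

def explain_prediction(features, prediction):
    """Attach per-feature attribution slots to the prediction payload."""
    return {**prediction, "attributions": {f: None for f in features}}

raw = {"torque_nm": 62.4, "vib_rms": 0.12, "cycle_ms": 1180}
features = extract_features(raw)
out = explain_prediction(features, infer(features))
print(out["label"], round(out["score"], 3))
```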
SHAP Integration (Python)
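To make the attribution method concrete, here is a minimal pure-Python sketch of the Sampled Shapley estimator (Monte Carlo over feature orderings) applied to a toy scoring function. The deployed system obtains these values from the Vertex AI explanation service; the toy model and its coefficients are assumptions for illustration only:

```python
import random

# Monte Carlo ("sampled") Shapley attributions for a toy fault-scoring model.
# Illustrates the method the page names; not the production explainer.

def toy_score(x):
    # Illustrative score with a torque/vibration interaction term.
    return (0.01 * x["joint_torque"] + 3.0 * x["vibration_rms"]
            + 0.002 * x["joint_torque"] * x["vibration_rms"])

def sampled_shapley(fn, instance, baseline, n_samples=2000, seed=0):
    rng = random.Random(seed)
    features = list(instance)
    attrib = dict.fromkeys(features, 0.0)
    for _ in range(n_samples):
        order = features[:]
        rng.shuffle(order)             # one random feature ordering per path
        x = dict(baseline)             # start every path at the baseline input
        prev = fn(x)
        for f in order:                # switch features to the instance value
            x[f] = instance[f]
            cur = fn(x)
            attrib[f] += cur - prev    # marginal contribution of f on this path
            prev = cur
    return {f: total / n_samples for f, total in attrib.items()}

inst = {"joint_torque": 62.4, "vibration_rms": 0.12}
base = {"joint_torque": 40.0, "vibration_rms": 0.04}
attr = sampled_shapley(toy_score, inst, base)
# Efficiency property: attributions sum to toy_score(inst) - toy_score(base).
print({f: round(v, 4) for f, v in attr.items()})
```

Each sampled path telescopes to the full score difference, so the averaged attributions always satisfy the Shapley efficiency property regardless of how many orderings are sampled.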
Global Explainability
Aggregated feature importance across the entire training distribution (n = 12,400 samples). This reveals which physical parameters the model considers most critical for fault prediction system-wide.
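A common way to aggregate local attributions into a global ranking is the mean absolute attribution per feature. The sketch below applies that formula to just the attribution columns of the first three sample rows above, standing in for the full 12,400-sample training distribution:

```python
# Global feature importance as the mean absolute Shapley attribution across
# explained samples. Data: attribution columns of table rows #001-#003 above.

samples = [
    {"joint_torque": 0.38, "vibration_rms": 0.52, "cycle_time": -0.03, "encoder_err": 0.28},
    {"joint_torque": -0.12, "vibration_rms": -0.22, "cycle_time": -0.05, "encoder_err": -0.08},
    {"joint_torque": 0.30, "vibration_rms": 0.45, "cycle_time": 0.14, "encoder_err": -0.04},
]

def global_importance(attributions):
    """Mean |attribution| per feature across all explained samples."""
    feats = attributions[0]
    return {
        f: sum(abs(a[f]) for a in attributions) / len(attributions)
        for f in feats
    }

ranked = sorted(global_importance(samples).items(), key=lambda kv: -kv[1])
print(ranked)   # vibration_rms ranks first on these three rows
```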
Counterfactual What-If Scenarios
Explore how changing a single parameter shifts the AI's decision boundary. Click any scenario to auto-populate the telemetry inputs above.
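Behind such a what-if panel is a single-feature sweep: hold every other input fixed, vary one parameter over a grid, and record where the predicted label flips. The scoring function and decision threshold below are illustrative stand-ins for the deployed model:

```python
# What-if sweep: vary one feature, hold the rest fixed, and report the grid
# values at which the toy model's decision flips. Score rule and threshold
# are illustrative assumptions, not the deployed model.

def fault_score(x):
    return 0.01 * x["joint_torque"] + 3.0 * x["vibration_rms"]

def counterfactual_sweep(x, feature, values, threshold=0.9):
    base_label = fault_score(x) > threshold
    flips = []
    for v in values:
        trial = {**x, feature: v}       # counterfactual: change one input only
        if (fault_score(trial) > threshold) != base_label:
            flips.append(v)
    return flips

x = {"joint_torque": 62.4, "vibration_rms": 0.12}   # scores 0.984 -> FAULT
grid = [round(0.01 * i, 2) for i in range(16)]       # 0.00 .. 0.15
flipped = counterfactual_sweep(x, "vibration_rms", grid)
print(flipped)   # vibration values at which the prediction flips to NORMAL
```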
Open-Source Simulation Pipeline
The full simulation pipeline is open-sourced for community review, extension, and feedback. It enables researchers and engineers to replicate the XAI methodology for their own industrial use cases.
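In the spirit of that pipeline, the sketch below generates synthetic telemetry with injected fault drift. The distribution parameters and fault offsets are illustrative assumptions, not the repository's actual simulator:

```python
import random

# Minimal telemetry simulator: draws normal-operation readings and injects a
# torque/vibration/cycle-time drift for faulty cycles. All distribution
# parameters are illustrative assumptions.

def simulate_telemetry(n_cycles, fault_rate=0.2, seed=42):
    rng = random.Random(seed)           # seeded for reproducible runs
    rows = []
    for i in range(n_cycles):
        faulty = rng.random() < fault_rate
        rows.append({
            "cycle": i,
            "joint_torque": rng.gauss(40.0, 4.0) + (20.0 if faulty else 0.0),
            "vibration_rms": rng.gauss(0.04, 0.01) + (0.08 if faulty else 0.0),
            "cycle_time": rng.gauss(1000.0, 80.0) + (400.0 if faulty else 0.0),
            "label": "FAULT" if faulty else "NORMAL",
        })
    return rows

for row in simulate_telemetry(5):
    print(row["cycle"], row["label"], round(row["joint_torque"], 1))
```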