Recommendation for Machine Learning Interpretability options for a SeriesNetwork object?
Hello –
I have a trained network (an LSTM) for time-series regression that is a SeriesNetwork object:
SeriesNetwork with properties:
Layers: [6×1 nnet.cnn.layer.Layer]
InputNames: {'sequenceinput'}
OutputNames: {'regressionoutput'}
I have used some canned routines for machine learning interpretability (e.g., shapley, lime, plotPartialDependence) that work great with some object types (e.g., RegressionSVM) but not with SeriesNetwork objects. The relevant functions I have read about appear to be intended for image classification, for example, rather than time-series regression.
My question is thus: Can you recommend a machine learning interpretability function for use with a SeriesNetwork object built for regression? I am confident such a function exists, but I can’t seem to find it. Any and all help would be greatly appreciated.
Thank you in advance.
Answers (1)
Shivansh
8 Nov 2023
Edited: Shivansh on 8 Nov 2023
Hi Bart,
I understand that you want to find a machine learning interpretability function for use with a SeriesNetwork object built for regression.
You can use the "gradCAM" function for time-series models; the documentation includes an example of applying it to a time-series classification model.
Note, however, that Grad-CAM is designed for convolutional networks, so it may not give good results for LSTM-only architectures.
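As a rough illustration, here is a minimal sketch of calling gradCAM on a sequence regression network. It assumes `net` is your trained SeriesNetwork and `X` is a single observation as a numFeatures-by-sequenceLength array; for regression networks, gradCAM takes a reduction function in place of a class label, and an identity function works for a scalar response. Whether this produces a meaningful map for your LSTM depends on the caveat above.

```matlab
% Hedged sketch: Grad-CAM on a sequence regression network.
% Assumes `net` is the trained SeriesNetwork and `X` is one
% observation (numFeatures-by-sequenceLength numeric array).
reductionFcn = @(Y) Y;               % identity for a scalar regression output
map = gradCAM(net, X, reductionFcn); % importance score per time step

% Visualize which time steps drive the prediction.
figure
subplot(2,1,1)
plot(X')
title("Input sequence")
subplot(2,1,2)
plot(map)
title("Grad-CAM importance per time step")
```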
Hope it helps!