
Testing statistical significance of ANN hidden neuron outputs

Petar Zuvela on 28 Oct 2017
Edited: 5 Dec 2017
My recent project involves optimizing an MLP ANN. The best architecture turned out to have 8 hidden neurons, for 6 inputs and 1 output. For a reliable network, besides optimizing the architecture, it is important to select an appropriate model. One way to do so is to test the statistical significance of the weights and biases, and prune the ones that are not statistically significant.
I have been reading up on the Wald test. It seems simple for a linear model, but I have trouble coding it in MATLAB for an ANN. Could you please give me some pointers?

Answers (1)

Greg Heath on 30 Oct 2017
1. Some questions first:
i. Did you normalize inputs and output to [-1,1]?
ii. Did you use A LINEAR output node and TANH hidden nodes?
iii. What was your training goal?
iv. What non-default settings did you use?
2. For each original weight, compare the resulting training, validation, testing, and total data performance scores for the following configurations:
a. The original trained net with all weights present.
b. The original trained net with just that one trained weight replaced by a zero value.
c. Starting with the modified network in b, continue training until convergence or algorithm stopping.
d. Starting with the modified network in b, continue training with THE ORIGINAL WEIGHT FIXED AT ZERO.
3. Please let us know your results.
Hope this helps.
Greg
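A minimal sketch of steps 2a and 2b, assuming a trained fitnet `net` plus training data `x`, `t`; it uses the Neural Network Toolbox functions `getwb`/`setwb`/`perform` to zero one weight at a time and record the performance change:

```matlab
% Hedged sketch, not Greg's exact procedure: loop over every weight/bias,
% zero it out, and score the modified net (configurations 2a and 2b).
wb0  = getwb(net);                 % all weights and biases as one vector
base = perform(net, t, net(x));    % performance with all weights present (2a)
score = zeros(size(wb0));
for k = 1:numel(wb0)
    wb    = wb0;
    wb(k) = 0;                     % replace one trained weight with zero (2b)
    netk  = setwb(net, wb);
    score(k) = perform(netk, t, netk(x));
end
% Weights whose removal barely changes 'score' relative to 'base'
% are candidates for pruning.
```

Steps 2c and 2d would then retrain each modified net from this starting point.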
1 Comment
Petar Zuvela on 5 Dec 2017
Edited: 5 Dec 2017
Hi Greg,
Thank you very much for your thorough answer, and apologies for the late response.
i. The inputs and outputs are normalized to [-1,1].
ii. Linear output node and TANH hidden nodes are used.
iii. Training goal is mean squared error (MSE).
iv. Non-default settings: trainbfg (BFGS quasi-Newton) training algorithm, no regularization.
I am working on obtaining performance results for 2a through 2c, but could you please give me an idea of how to continue training while keeping the original weight fixed at zero?
All the best,
Petar
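One possible workaround for step 2d, sketched under the assumption that the toolbox's `learn` flags apply only to whole weight matrices (e.g. `net.inputWeights{1,1}.learn`) and not to a single element: train in one-epoch bursts and re-zero the chosen weight before each burst. The index `k` is a hypothetical placeholder for the weight being tested.

```matlab
% Hedged sketch: pin one weight at zero while the rest keep training.
k = 5;                              % index into getwb(net) of the pinned weight (assumption)
net.trainParam.epochs = 1;          % one epoch per train() call
for burst = 1:200
    wb    = getwb(net);
    wb(k) = 0;                      % re-zero the pinned weight before each epoch
    net   = setwb(net, wb);
    net   = train(net, x, t);
end
wb = getwb(net); wb(k) = 0; net = setwb(net, wb);   % final re-zero
```

Note that validation stopping interacts with such short bursts; setting `net.divideFcn = 'dividetrain'` during this phase and checking convergence yourself may be necessary.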
