Weights in Neural Networks

12 views (last 30 days)
Fred on 5 Oct 2012
Commented: fa abud on 27 Dec 2016
I am training a simple BP neural network with 8 inputs, 1 output, and 1 hidden layer with 10 nodes. My weight matrices are sets of numbers between -1 and 1, but I cannot find a physical meaning for these weights. Do the weights reflect the importance of the inputs in the model? Shouldn't I get higher weights for inputs that are more correlated with the output? How can I get a physical meaning from the resulting weights?
THANK YOU
  1 Comment
fa abud on 27 Dec 2016
thanks


Accepted Answer

Greg Heath on 5 Oct 2012
It tends to be difficult, if not impossible, to understand weight configurations when one or more of the following conditions exist:
a. The number of input variables, I, is large.
b. Some input variables are correlated.
c. The number of hidden nodes, H, is large.
d. The number of output variables, O, is large.
With an I-H-O = 8-10-1 node topology, there are Nw = net.numWeightElements = (I+1)*H + (H+1)*O = 90 + 11 = 101 unknown weights to be estimated from Ntrneq = prod(size(Ttrn)) = Ntrn*O training equations. With Nw this large, nonlinear optimization solutions tend to be non-robust unless Ntrneq >> Nw.
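For example, a rough sketch of counting these two quantities with the toolbox (the data sizes here are hypothetical):
I = 8;  H = 10;  O = 1;  Ntrn = 200;        % example sizes
x = rand(I, Ntrn);  t = rand(O, Ntrn);      % hypothetical inputs and targets
net = configure(fitnet(H), x, t);           % size the net without training it
Nw     = net.numWeightElements              % (I+1)*H + (H+1)*O = 101
Ntrneq = prod(size(t))                      % Ntrn*O = 200 training equations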
If odd activation functions like tansig are used, each local minimum is associated with 2^H * H! weight configurations related by changing the signs of the weights attached to each hidden node (2^H) and reordering the position of the hidden nodes (H!).
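A small sketch of that sign-flip symmetry with random weights (since tanh is odd, negating one hidden node's input weights, bias, and outgoing weight leaves the output unchanged):
IW = randn(10,8);  b1 = randn(10,1);  LW = randn(1,10);  b2 = randn;
x  = randn(8,1);
y1 = b2 + LW*tanh(b1 + IW*x);
k  = 3;                                     % pick any hidden node
IW(k,:) = -IW(k,:);  b1(k) = -b1(k);  LW(k) = -LW(k);
y2 = b2 + LW*tanh(b1 + IW*x);
max(abs(y1 - y2))                           % ~0: the network function is unchanged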
The best bet is to (not necessarily in order of effectiveness)
1. Reduce the input dimensionality I as much as possible. Each reduction by 1 reduces Nw by H. Simple approaches are
a. Use STEPWISEFIT or SEQUENTIALFS with polynomial models that are linear in the weights.
b. After training, rank the inputs by the increase in MSE when only that input's row of the input matrix is scrambled (i.e., randomly reordered). Remove the worst input, retrain, and repeat until only useful inputs remain (a sketch follows this list).
c. Transform to dominant orthogonal inputs using PCA for regression or PLS for classification.
2. Reduce the number of hidden nodes, H, as much as possible. Each reduction by 1 reduces Nw by I+O+1. My approach is to obtain numH*Ntrials separate designs, where numH is the number of candidate values for H and Ntrials is the number of different weight initializations for each candidate.
The resulting normalized MSEtrn, NMSEtrn = MSEtrn/var(Ttrn,1,2), or the biased Rsquare = 1-NMSEtrn, is tabulated in an Ntrials-by-numH matrix and examined (a sketch of this loop appears further below). I tend to use Ntrials = 10 and restrict H so that Nw <= Ntrneq.
Examples can be obtained by searching the NEWSGROUP and ANSWERS using the keywords heath Nw Ntrials
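A rough sketch of the input ranking in 1b, assuming a trained net and the original I-by-N input matrix x and matching target matrix t are in the workspace:
[I, N] = size(x);
e0     = t - net(x);
mse0   = mean(e0(:).^2);                    % baseline training MSE
dMSE   = zeros(I,1);
for i = 1:I
    xs      = x;
    xs(i,:) = xs(i, randperm(N));           % scramble only input row i
    es      = t - net(xs);
    dMSE(i) = mean(es(:).^2) - mse0;        % MSE increase when input i's information is destroyed
end
[~, worst] = min(dMSE)                      % least important input: remove, retrain, repeat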
Mitigating the bias of evaluating with training data can be achieved by either
a. Dividing SSE by the degree-of-freedom adjusted denominator Ntrneq-Nw (instead of Ntrneq), or
b. Using a separate holdout validation set (which is not necessarily used for validation stopping).
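A rough sketch of the design loop from point 2, with hypothetical data x and t and the default data division of fitnet/train:
Ntrials = 10;  Hvec = 1:10;                        % example candidate values for H
NMSEtrn = zeros(Ntrials, numel(Hvec));             % Ntrials-by-numH tabulation
for j = 1:numel(Hvec)
    for i = 1:Ntrials
        net = fitnet(Hvec(j));                     % fresh random weight initialization
        [net, tr] = train(net, x, t);
        ttrn = t(:, tr.trainInd);                  % training subset only
        etrn = ttrn - net(x(:, tr.trainInd));
        NMSEtrn(i,j) = mean(etrn(:).^2) / mean(var(ttrn, 1, 2));
        % DOF-adjusted alternative: sum(etrn(:).^2) / (numel(ttrn) - net.numWeightElements)
    end
end
Rsq = 1 - NMSEtrn                                  % biased R^2; choose the smallest adequate H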
Hope this helps.
Thank you for formally accepting my answer.
  2 Comments
Fred on 6 Oct 2012
Thank you very much. So we cannot get any coefficients for the inputs, or a fitted function (like a regression), as a result of our fitting?!
Star Strider on 6 Oct 2012
No.
Neural nets are nonparametric and do not need a model function to fit data. To fit a parametric model, such as a regression, you need to first define the model (as a function m-file or as an anonymous function) and then use functions such as nlinfit (Statistics Toolbox) or lsqcurvefit (Optimization Toolbox, which allows for constrained problems) to estimate the parameters.
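For example, a minimal sketch with lsqcurvefit and an anonymous model function (the model and data here are made up purely for illustration):
model = @(b,x) b(1) + b(2)*exp(-b(3)*x);                   % parametric model, coefficients b
xdata = linspace(0, 5, 50);
ydata = 2 + 3*exp(-1.5*xdata) + 0.05*randn(size(xdata));   % synthetic data for the example
b0    = [1 1 1];                                           % initial parameter guess
bhat  = lsqcurvefit(model, b0, xdata, ydata)               % interpretable coefficient estimates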


More Answers (1)

Greg Heath on 7 Oct 2012
The most common NN is the single-hidden-layer MLP (MultiLayer Perceptron). The I-H-O node topology is consistent with "I"nput matrices of I-dimensional column inputs, x, H "H"idden node activation functions, and "O"utput matrices of O-dimensional column outputs, y. With tanh and linear activation functions in the hidden and output layers, respectively, the matrix I/O relationship takes the form of a sum of tanh (via MATLAB's tansig) functions:
y = b2 + LW * tanh( b1 + IW * x );
IW - input weight matrix
b1 - input bias weight
LW - output layer weight matrix
b2 - output bias weight
This is a universal approximation model that can be made as accurate as desired for bounded continuous functions, regardless of the functional form of the actual physical or mathematical relationship y = f(x,parameters).
The approximation weights are most easily understood when y and x are both standardized (zero-mean/unit-variance) with uncorrelated components.
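A rough sketch of pulling those weights out of a trained net and reproducing its output by hand (the default mapminmax input/output processing is removed first so that the raw equation above applies directly; x and t are hypothetical data):
net = fitnet(10);
net.inputs{1}.processFcns  = {};            % drop default input normalization
net.outputs{2}.processFcns = {};            % drop default output normalization
net = train(net, x, t);
IW = net.IW{1,1};   b1 = net.b{1};          % input weight matrix and hidden biases
LW = net.LW{2,1};   b2 = net.b{2};          % layer weight matrix and output bias
yhand = b2 + LW*tanh(b1 + IW*x);            % y = b2 + LW*tanh(b1 + IW*x)
max(abs(yhand - net(x)))                    % ~0: hand-computed output matches the net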
Hope this helps.


