Neural Network Normalization process
Hello all,
I have a question regarding the NN normalization procedure. When a NN is trained using the train(net,x,y) command, the function normalizes x and y (by default via mapminmax) so that the network's inputs and targets are mapped into [-1,1].
Currently I am trying to apply a set of NN weights and biases analytically (instead of just calling net(xtest)) using the following equation:
ytest = Outputbias+Hiddenweight*tanh(Inputbias + Inputweight*xtest);
This produces an output; however, it does not match the output of ytest = net(xtest).
I'm assuming the difference is due to the fact that xtest is not normalized before it is fed into the above equation. I tried simply dividing xtest by its maximum before feeding it into the equation, but the results still differ.
Does anyone know how xtest should be manipulated in order to produce the same output as net(xtest)?
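For reference, here is a minimal sketch of what I believe the full manual pipeline should look like, assuming the network uses the default mapminmax processing on both input and output (mapminmax applies a per-row affine map 2*(x - xmin)./(xmax - xmin) - 1, which is why dividing by the maximum alone is not enough; the variable names ps_in and ps_out below are my own):

```matlab
% Pull weights, biases, and normalization settings from the trained net
IW = net.IW{1,1};   % input-to-hidden weights
LW = net.LW{2,1};   % hidden-to-output weights
b1 = net.b{1};      % hidden-layer biases
b2 = net.b{2};      % output-layer biases
ps_in  = net.inputs{1}.processSettings{1};    % mapminmax settings for x
ps_out = net.outputs{end}.processSettings{1}; % mapminmax settings for y

% Normalize the input exactly as train() did
xn = mapminmax('apply', xtest, ps_in);

% Manual forward pass (tansig hidden layer, linear output)
yn = b2 + LW*tanh(b1 + IW*xn);

% Undo the output normalization to return to the original units
ytest = mapminmax('reverse', yn, ps_out);
```

Note that the output side also needs the 'reverse' step, since the targets y were normalized during training as well.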
Thanks! Bryan