Back propagation neural network
I learnt that the activation functions logsig and tansig return outputs in the ranges [0, 1] and [-1, 1], respectively. What will happen if the target values are beyond these limits?
2 Comments
Mohammad Sami
8 Jun 2020
One reason is that large target values can cause exploding gradients when training the network.
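Beyond the gradient issue, a target outside the activation's range is simply unreachable: logsig saturates inside (0, 1) and tansig inside (-1, 1), so the error to such a target can never go to zero. A minimal Python sketch (using numpy rather than MATLAB; the example targets are made up) illustrates the saturation and the usual remedy of min-max rescaling the targets, in the spirit of the toolbox's mapminmax:

```python
import numpy as np

# Deep Learning Toolbox definitions:
#   logsig(x) = 1/(1 + exp(-x))        -> range (0, 1)
#   tansig(x) = 2/(1 + exp(-2x)) - 1   -> numerically tanh(x), range (-1, 1)
def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    return np.tanh(x)

x = np.linspace(-10.0, 10.0, 5)
# Even for large inputs the outputs stay strictly inside the open
# intervals, so a target such as 2.5 can never be matched exactly and
# the squared-error loss stays bounded away from zero.
print(logsig(x))   # values in (0, 1)
print(tansig(x))   # values in (-1, 1)

# Common fix: rescale targets into the activation's range before
# training (hypothetical target vector, mapminmax-style scaling):
t = np.array([0.0, 2.5, 5.0, 10.0])
t_scaled = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0  # now in [-1, 1]
print(t_scaled)
```

After training, the inverse mapping recovers predictions on the original target scale.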
Sivamani S
8 Jun 2020
Answers (0)