How to use deep learning for interpolation properly
9 views (last 30 days)
Alexey Kozhakin on 29 Nov 2021
Commented: Abolfazl Chaman Motlagh on 5 Dec 2021
Hi!
What am I doing wrong? I am trying to use deep learning to interpolate the sine function, but it does not learn well: instead of a sine it just produces a straight line. This is my code:
clear all;
clc;
close all;
N=100;              % number of sample points
T=2*pi;             % one full period
x=0:T/(N-1):T;      % sample grid on [0, 2*pi]
y=sin(x);
%----- normalization ---------------
y=y+abs(min(y));    % shift y so its minimum is 0
y=y./max(y);        % scale y to [0,1]
x=x./max(x);        % scale x to [0,1]
%---------- end normalization -----
plot(x,y,'o')
%------------------------
layers = [
featureInputLayer(1)
fullyConnectedLayer(10)
fullyConnectedLayer(1)
regressionLayer
];
options = trainingOptions('adam', ...
'MaxEpochs',20, ...
'MiniBatchSize',10, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'Verbose',false);
[net,info] = trainNetwork(x',y',layers,options);
y1 = predict(net,x');
figure(2); hold on;
plot(x,y,'ok','MarkerSize',20)   % training data
plot(x,y1,'x','MarkerSize',20)   % network prediction
0 Comments
Accepted Answer
Abolfazl Chaman Motlagh on 30 Nov 2021
Hi,
The sin(x) function is completely nonlinear, while your network is too simple to capture that nonlinearity. To overcome this you can:
- increase the number of layers
- increase the number of parameters (neurons per layer)
- add an activation layer to handle the nonlinearity
- give the network more iterations to learn, e.g. 'MaxEpochs',2000, since your data and network are small
Here is a simple solution for your model:
layers = [
featureInputLayer(1)
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
fullyConnectedLayer(1)
regressionLayer
];
The result (5 fully connected layers with 20 neurons each):
Here are some other examples:
(2 fully connected layers, each with 200 neurons; you can see it still does not converge correctly)
(3 fully connected layers, each with 5 neurons)
So you can change these parameters (or even each layer's size individually) to see what works best for your application.
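One quick way to try several of these settings, as a rough sketch of my own (the names numHidden and numNeurons and the choice of 2000 epochs are only illustrative, following the suggestions above), is to build the hidden stack in a loop:
numHidden  = 3;     % number of hidden fullyConnected + ReLU pairs
numNeurons = 20;    % neurons per hidden layer
layers = featureInputLayer(1);
for k = 1:numHidden
    layers = [layers; fullyConnectedLayer(numNeurons); reluLayer]; %#ok<AGROW>
end
layers = [layers; fullyConnectedLayer(1); regressionLayer];
options = trainingOptions('adam', ...
    'MaxEpochs',2000, ...            % more iterations, since data and network are small
    'MiniBatchSize',10, ...
    'Shuffle','every-epoch', ...
    'Verbose',false);
net = trainNetwork(x',y',layers,options);        % x and y as defined in the question
rmse = sqrt(mean((predict(net,x') - y').^2))     % quick measure of fit
Changing numHidden and numNeurons then only takes one edit each, which makes it easy to compare a few depth/width combinations.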
6 Comments
Abolfazl Chaman Motlagh on 5 Dec 2021
For the first question, here is an example:
N=100;
T=2*pi;
x=0:T/(N-1):T;
y=sin(x);
y=y+abs(min(y));
y=y./max(y);
x=x./max(x);
layers = [
featureInputLayer(1)
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
fullyConnectedLayer(1)
regressionLayer
];
options = trainingOptions('sgdm', ...
'MaxEpochs',600, ...
'MiniBatchSize',10, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'InitialLearnRate',0.01, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropFactor',0.1, ...
'LearnRateDropPeriod',200, ...
'Verbose',false);
[net,info] = trainNetwork(x',y',layers,options);
y1 = predict(net,x');
plot(x,y,'o','LineWidth',2)
hold on
plot(x,y1,'-','LineWidth',2)
legend('Sin(x)','Predicted')
Abolfazl Chaman Motlagh on 5 Dec 2021
For the second question:
By placing a ReLU after a layer, all of that layer's outputs pass through the ReLU function. So simply adding a reluLayer to the layers list applies the activation to every neuron of the preceding layer.
There are standard implementations of other common activation functions in MATLAB, such as:
- relu
- leakyRelu
- clippedRelu
- elu
- tanh
- swish
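For example, a smooth activation such as tanh is often a natural fit for a sine-shaped target; here is a minimal sketch (my own variant, not from the original comment) that swaps the reluLayer calls for the built-in tanhLayer:
layers = [
featureInputLayer(1)
fullyConnectedLayer(20)
tanhLayer
fullyConnectedLayer(20)
tanhLayer
fullyConnectedLayer(1)
regressionLayer
];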
But if you want, you can define your own activation function in MATLAB with functionLayer, or even write a complete custom deep learning layer with its own structure and operations.
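A rough sketch of the functionLayer route (this needs a recent release; the element-wise sine used here is only an arbitrary example of a custom activation):
% custom activation defined inline with functionLayer (sketch only)
sineAct = functionLayer(@(X) sin(X),'Name','sine_act','Description','element-wise sine activation');
layers = [
featureInputLayer(1)
fullyConnectedLayer(20)
sineAct
fullyConnectedLayer(1)
regressionLayer
];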