Equivalent of Neural ODE for discrete time state space models
Here is an example of how to train a neural ODE to identify a dynamical system:
https://mathworks.com/help/deeplearning/ug/dynamical-system-modeling-using-neural-ode.html
This example covers continuous-time models of the form dx/dt = f(x,u). Is there an equivalent tutorial for discrete-time models, i.e. x(k+1) = f(x(k),u(k))?
0 Comments
Accepted Answer
Arkadiy Turevskiy
31 Jan 2023
We added the idNeuralStateSpace object, which supports both continuous- and discrete-time models. Maybe this could be useful. It was created to simplify the code you have to write, so it does not allow you to write your own training loop, though.
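For example, here is a minimal sketch of setting up and estimating a discrete-time model this way (assuming System Identification Toolbox; the state/input sizes, the sample time, and the data variables U and Y are placeholders):
% discrete-time neural state-space model: x(k+1) = f(x(k),u(k))
% setting Ts makes the model discrete time
nss = idNeuralStateSpace(2,NumInputs=1,Ts=0.1);
% estimate from measured input/output data U and Y
opt = nssTrainingOptions("adam");
opt.MaxEpochs = 100;
nss = nlssest(U,Y,nss,opt);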
1 Comment
Ben
2 Feb 2023
Hi M.
I'm not sure if this is possible with the shallow network functions, but it can be done with dlnetwork and a custom training loop, since these allow you to write your own model function that reuses the same network on two different inputs. Here's some example code with dummy data; in practice you may need to tweak the training and network hyperparameters to get good performance.
% share a neural net across multiple calls
% create some fake data
% predict x(t+1) = F(x(1,t),u(1,t)) + F(x(2,t),u(2,t)) for some unknown F
numSteps = 100;
t = linspace(0,2*pi,numSteps);
F = @(x,u) sqrt(x+u+1);
x = [0;1];
u = [cos(t);sin(t)];
for i = 2:numSteps
    x(:,i) = F(x(1,i-1),u(1,i-1)) + F(x(2,i-1),u(2,i-1));
end
% create a network to model F
% it needs to have two inputs, for x and u.
hiddenSize = 5000;
inputSize = 1;
outputSize = 2;
layers = [
    featureInputLayer(inputSize,Name="x")
    concatenationLayer(1,2,Name="concat")
    fullyConnectedLayer(hiddenSize)
    reluLayer
    fullyConnectedLayer(outputSize)];
net = dlnetwork(layers,Initialize=false); % defer initialization until the "u" input is connected
net = addLayers(net,featureInputLayer(1,Name="u"));
net = connectLayers(net,"u","concat/in2");
net = initialize(net);
% train with custom training loop
numEpochs = 1000;
vel = []; % momentum state for sgdmupdate, initialized empty
x = dlarray(x,"CB"); % label dims: C = state channels, B = batch (time steps)
u = dlarray(u,"CB");
learnRate = 0.1;
for epoch = 1:numEpochs
    [loss,gradient] = dlfeval(@modelLoss,x,u,net);
    lossValue = extractdata(loss);
    fprintf("Epoch: %d, Loss %.4f\n", epoch, lossValue);
    [net,vel] = sgdmupdate(net,gradient,vel,learnRate);
end
function [loss,gradient] = modelLoss(x,u,net)
    % predict x(:,2:end) from x(:,1:end-1) and u(:,1:end-1)
    xtarget = x(:,2:end);
    xpred = model(x(:,1:end-1),u(:,1:end-1),net);
    loss = mse(xtarget,xpred);
    gradient = dlgradient(loss,net.Learnables);
end
function xpred = model(x,u,net)
    % model xpred = x(t+1) = f(x(1,t),u(1,t)) + f(x(2,t),u(2,t)) where f is a neural net
    xpred = forward(net,x(1,:),u(1,:)) + forward(net,x(2,:),u(2,:));
end
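For reference, here is a minimal sketch of how you could check the trained model in closed loop, feeding its own predictions back in instead of the measured states (this reuses the model function and the variables above; in an actual script it would go before the local function definitions):
% closed-loop rollout from the true initial state
xsim = dlarray(zeros(2,numSteps),"CB");
xsim(:,1) = x(:,1);
for i = 2:numSteps
    xsim(:,i) = model(xsim(:,i-1),u(:,i-1),net);
end
% compare simulated and measured trajectories
plot(t,extractdata(x)',"-",t,extractdata(xsim)',"--")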
Hope that helps.
More Answers (0)