Solving ODEs using Deep Learning
Hi all,
I am trying to understand how to solve ODEs using deep learning and how to code it in MATLAB, based on this tutorial:
When I modified the code to solve a Lotka-Volterra model:
I could not get the loss to converge. I think it is because the tutorial uses the sgdmupdate optimizer. If I want to change it to the adam optimizer, how should I change the code?
Answer (1)
Antoni Woss
14 September 2023
To use the adam optimizer in this custom training loop example, you can follow the example set out in the documentation page for the adamupdate function - https://uk.mathworks.com/help/deeplearning/ref/adamupdate.html.
Note that the adamupdate function has different required input and return arguments, so you will need to map those differences onto the ODE example you are trying to solve. For example, initialize empty averageGrad and averageSqGrad variables outside the custom training loop so that you can update them at each call to adamupdate. Here is a snippet showing where these quantities would be used.
averageGrad = [];      % Adam state: moving average of the gradients
averageSqGrad = [];    % Adam state: moving average of the squared gradients
...
% Inside the training loop, update the network and the optimizer state together.
[net,averageGrad,averageSqGrad] = adamupdate(net,gradients,averageGrad,averageSqGrad,iteration);
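For context, here is a minimal sketch of how those pieces fit into a custom training loop. The modelLoss function, the training data X, and the hyperparameter values are placeholders standing in for whatever the ODE example you are adapting defines; only the adamupdate-related lines differ from the sgdmupdate version.
averageGrad = [];
averageSqGrad = [];

numEpochs = 1500;     % placeholder value
learnRate = 0.01;     % placeholder value
iteration = 0;

for epoch = 1:numEpochs
    iteration = iteration + 1;

    % Evaluate the loss and gradients (modelLoss as defined in the
    % example you are adapting).
    [loss,gradients] = dlfeval(@modelLoss,net,X);

    % adamupdate replaces sgdmupdate here; it maintains and returns the
    % two moving-average state variables.
    [net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
        averageGrad,averageSqGrad,iteration,learnRate);
end
Compared with sgdmupdate, which carries a single velocity state, adamupdate carries both averageGrad and averageSqGrad and additionally takes the iteration counter, which it uses for bias correction of the moving averages.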