What is the best practice to train a NeuralODE model with a very small learnable parameter?

13 views (last 30 days)
Chuguang Pan on 1 December 2025 at 13:51
Answered: Ayush on 11 December 2025 at 5:52
I want to train a NeuralODE model that approximates a complicated physical system with complex governing ODEs. The physical system's ODEs have a parameter that represents the working condition of the system. The ODEs of the physical system can be expressed as:
function dY = odeModel(t,Y,theta)
% theta holds the learnable parameters of the NeuralODE model
% ...
dY = fcn(theta.StateParam,t,Y); % theta.StateParam is the parameter to be estimated
% ...
end
I use synthetic data obtained by simulating the ODEs of the physical system. However, when I train the NeuralODE model on this synthetic data, the loss fluctuates significantly. Since the parameter has a very narrow range, [0, 1e-10], every time the parameter is updated by the Adam optimizer I clip it manually with the clip function, e.g.,
[neuralOdeParameters,averageGrad,averageSqGrad] = adamupdate(neuralOdeParameters,gradients,averageGrad,averageSqGrad,...
    iteration,learnRate,gradDecay,sqGradDecay);
neuralOdeParameters = dlupdate(@(param) clip(param,0,1e-10),neuralOdeParameters);
I want to know whether there are better methods to train a NeuralODE model with a very small learnable parameter. Thanks in advance.

Answers (1)

Ayush on 11 December 2025 at 5:52
Hi Chuguang,
I understand that optimizers like Adam may struggle to make effective updates within such a narrow interval. Adam's step size is on the order of the learning rate, which is typically many orders of magnitude larger than 1e-10, so the clipped parameter tends to stick at the bounds and the loss fluctuates.
A more robust approach is to reparameterize the small parameter so that the optimizer works in a numerically stable range. For example, you can define your parameter as
>> theta.StateParam = 1e-10 * sigmoid(alpha)
where alpha is an unconstrained variable that the optimizer updates. This guarantees that the parameter stays within [0, 1e-10] without manual clipping and typically leads to smoother optimization.
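A minimal sketch of how this could look in a dlarray-based custom training loop (illustrative only: the field name alpha and its initial value are placeholders, and odeModel and fcn are the functions from the question):
% Initialize the unconstrained variable; sigmoid(0) = 0.5, so the
% physical parameter starts at 0.5e-10.
neuralOdeParameters.alpha = dlarray(0);

function dY = odeModel(t,Y,theta)
% Map the unconstrained variable into (0, 1e-10) inside the model, so
% the constraint holds by construction and no clipping is needed.
stateParam = 1e-10 * sigmoid(theta.alpha);
dY = fcn(stateParam,t,Y);
end

% The Adam update is unchanged, and no projection step is needed after it:
[neuralOdeParameters,averageGrad,averageSqGrad] = adamupdate(neuralOdeParameters,gradients,...
    averageGrad,averageSqGrad,iteration,learnRate,gradDecay,sqGradDecay);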
Alternatively, optimizing the logarithm of the parameter,
>> theta.StateParam = exp(beta)
can also help, since the optimizer then works with beta near log(1e-10) ≈ -23 instead of values of order 1e-10 (see the sketch below). Adjusting the learning rate or experimenting with different optimizers may further improve stability, but reparameterization is generally the most effective solution.
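For the logarithmic version, the corresponding sketch would be (again illustrative; beta is a placeholder name):
% beta varies near log(1e-10) ≈ -23, a well-scaled range for Adam.
neuralOdeParameters.beta = dlarray(log(0.5e-10));

% Inside odeModel, recover the physical parameter. Note that exp keeps
% the parameter positive but does not enforce the upper bound 1e-10;
% combine it with min(), or prefer the sigmoid mapping above if the
% bound must hold exactly.
stateParam = exp(theta.beta);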
Hope this helps!

Category: Solver Outputs and Iterative Display
Release: R2024a