Error in helperModClassTrainingOptions (line 29) 'CheckpointPath',checkpointPath,...

john karli on 16 Feb 2022
Commented: Joss Knight on 18 Feb 2022
I want to train the model using the following link.
I want to save the network at every epoch, but when I run the following section:
checkpointPath = pwd;
maxEpochs = 20;
miniBatchSize = 128;
options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
numel(rxTrainLabels),rxValidFrames,rxValidLabels);
trainedNettime = trainNetwork(rxTrainFrames,rxTrainLabels,lgraph_1,options);
save trainedNettime
I get the following error:
Unrecognized function or variable 'checkpointPath'.
Error in helperModClassTrainingOptions (line 29)
'CheckpointPath',checkpointPath,...
My helperModClassTrainingOptions function is:
function options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
trainingSize,rxValidFrames,rxValidLabels)
%helperModClassTrainingOptions Modulation classification training options
% OPT = helperModClassTrainingOptions(MAXE,MINIBATCH,NTRAIN,Y,YLABEL)
% returns the training options, OPT, for the modulation classification
% CNN, where MAXE is the maximum number of epochs, MINIBATCH is the mini
% batch size, NTRAIN is the number of training frames, Y is the
% validation frames and YLABEL is the labels.
%
% This function configures the training options to use an SGDM solver.
% By default, the 'ExecutionEnvironment' property is set to 'auto', where
% the trainNetwork function uses a GPU if one is available or uses the
% CPU, if not. To use the GPU, you must have a Parallel Computing Toolbox
% license. Set the initial learning rate to 2e-2. Reduce the learning
% rate by a factor of 10 every 9 epochs. Set 'Plots' to
% 'training-progress' to plot the training progress.
%
% See also ModulationClassificationWithDeepLearningExample.
% Copyright 2019 The MathWorks, Inc.
validationFrequency = floor(trainingSize/miniBatchSize);
options = trainingOptions('sgdm', ...
'InitialLearnRate',1e-3, ...
'MaxEpochs',maxEpochs, ...
'MiniBatchSize',miniBatchSize, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'CheckpointPath',checkpointPath,...
'ValidationData',{rxValidFrames,rxValidLabels}, ...
'ValidationFrequency',validationFrequency, ...
'Verbose',false, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropPeriod', 9, ...
'LearnRateDropFactor', 0.1);

Accepted Answer

Joss Knight on 16 Feb 2022
You need to pass the checkpointPath variable to your function.
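For example, one way to do that (a minimal sketch of the change, not code from the original thread) is to add checkpointPath as a sixth input argument to the helper and forward it to trainingOptions:
% In helperModClassTrainingOptions.m, accept the checkpoint folder as an extra input:
function options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
    trainingSize,rxValidFrames,rxValidLabels,checkpointPath)
% ... rest of the function body unchanged; 'CheckpointPath',checkpointPath now
% refers to this input argument instead of an undefined variable ...

% At the call site, define the folder and pass it through with the other arguments:
checkpointPath = pwd;
options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
    numel(rxTrainLabels),rxValidFrames,rxValidLabels,checkpointPath);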
5 Comments
john karli on 18 Feb 2022
Why does my validation curve show a sudden rise in the last epochs? I have attached the image.
Joss Knight on 18 Feb 2022
The final validation is computed after a final epoch that computes the batch normalization statistics. Some networks are particularly sensitive to the difference between the mini-batch statistics and those of the whole dataset. Make sure your dataset is shuffled and your mini-batch size is as large as possible. To avoid this (at a small additional performance cost), use moving averages (see the BatchNormalizationStatistics training option).
I can't explain why it's not checkpointing the network every epoch.
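Regarding the BatchNormalizationStatistics suggestion above, here is a short sketch of how that option can be set (assuming a Deep Learning Toolbox release recent enough to support it; the other name-value pairs are just the ones from the question):
% Sketch: track batch normalization statistics as moving averages during training,
% so no separate finalization pass over the data is needed at the end.
options = trainingOptions('sgdm', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'Shuffle','every-epoch', ...
    'BatchNormalizationStatistics','moving', ...
    'ValidationData',{rxValidFrames,rxValidLabels});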
