Regression using MATLAB. Need help!
Hi everyone,
My university is holding a competition where participants have to design a DNN that estimates the data rate of signals.
The provided data is an 800x64000 matrix, where each row is a signal and the columns are the samples (features) of that signal.
We are also provided with the corresponding response (data rate) for each signal, exactly one response per signal.
We are required to design this DNN using MATLAB.
Here is what I came up with so far:
And here are the provided datasets:
All I did was reshape the provided 800x64000 data into an 800x1 cell array, with each cell containing a 64x1000 complex double. This was suggested by one of the engineers responsible for the competition, so I did so.
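For reference, here is a minimal sketch of that reshape and the 80/20 split, assuming the raw data is loaded as an 800x64000 complex matrix X with a matching 800x1 response vector T (both variable names are illustrative, not from the provided files):

numSignals = size(X, 1);               % 800 signals
XCell = cell(numSignals, 1);
for k = 1:numSignals
    % each 1x64000 row becomes a 64x1000 sequence (64 channels, 1000 time steps);
    % note reshape fills column-major, so check this matches the intended layout
    XCell{k} = reshape(X(k, :), 64, 1000);
end

% 80/20 train/validation split
idx = randperm(numSignals);
nTrain = round(0.8 * numSignals);
XTrain      = XCell(idx(1:nTrain));
TTrain      = T(idx(1:nTrain));
XValidation = XCell(idx(nTrain+1:end));
TValidation = T(idx(nTrain+1:end));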
After that, I split the data 80/20 and constructed a five-layer network:
layers = [
    sequenceInputLayer(64,"Name","sequence","SplitComplexInputs",true,"Normalization","zerocenter")
    lstmLayer(numHiddenUnits,"Name","lstm","OutputMode","last")
    dropoutLayer(0.5,"Name","dropout")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
Then used sgdm solver:
options = trainingOptions("sgdm", ...
    MaxEpochs=50, ...
    MiniBatchSize=64, ...
    ValidationData={XValidation TValidation}, ...
    OutputNetwork="best-validation-loss", ...
    Shuffle="every-epoch", ...
    InitialLearnRate=0.005, ...
    Plots="training-progress", ...
    Verbose=false);
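For completeness, the training and evaluation step, under the same assumed variable names, looks roughly like this (a sketch rather than my exact code):

net = trainNetwork(XTrain, TTrain, layers, options);

% RMSE on the held-out validation set
YPred = predict(net, XValidation, MiniBatchSize=64);
rmse  = sqrt(mean((YPred - TValidation).^2));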
I get very high RMSE. Here's one of the trials:
I would love to get some guidance on this task.
Answers (1)
Shivansh on 23 Mar 2024
Hi Omar!
It looks like you are having trouble training your LSTM network, which is evident from the validation error. Model training in deep learning often requires a lot of experimentation and tuning. You are on a good path with the reshaping of the data and the network setup, and the choice of "sgdm" as the solver is reasonable.
The RMSE-vs-iteration curve in your training plot flattens out after about 20 epochs. You can experiment with the learning rate and other hyperparameters such as the optimizer and mini-batch size to rule out the model being stuck in a poor local minimum; one illustrative variation follows.
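For example, you could switch to the "adam" solver, lower the initial learning rate, and add a piecewise decay schedule. The specific values here are arbitrary starting points, not a recommendation:

options = trainingOptions("adam", ...
    MaxEpochs=100, ...
    MiniBatchSize=32, ...
    InitialLearnRate=1e-3, ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.5, ...
    LearnRateDropPeriod=20, ...
    ValidationData={XValidation TValidation}, ...
    OutputNetwork="best-validation-loss", ...
    Shuffle="every-epoch", ...
    Plots="training-progress", ...
    Verbose=false);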
If the performance still doesn't improve by much, try a model with more capacity. You can either tweak the current LSTM or adopt a different architecture, such as a bidirectional LSTM (sketched below) or ensemble techniques. You can also look at the data itself, experiment with different representations of the complex samples, and analyze the impact on the model.
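As a rough sketch of the bidirectional direction, keeping your 64-channel complex input and treating the layer sizes as arbitrary starting points:

layers = [
    sequenceInputLayer(64,"SplitComplexInputs",true,"Normalization","zerocenter")
    bilstmLayer(128,"OutputMode","sequence")
    dropoutLayer(0.3)
    bilstmLayer(64,"OutputMode","last")
    dropoutLayer(0.3)
    fullyConnectedLayer(1)
    regressionLayer];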
You can refer to the following link for more information about the Deep Learning Toolbox in MATLAB: https://www.mathworks.com/help/deeplearning.
I hope it helps!