L2 regularization in sparse stacked autoencoders not clear to me

2 views (last 30 days)
Lukas Vareka on 21 Mar 2018
Answered: Sara Sharifzadeh on 6 Jun 2020
Dear MATLAB users,
On https://www.mathworks.com/help/nnet/ref/trainautoencoder.html there is a bit of theory behind the L2 regularization used in stacked autoencoders. However, the definition of the L2 regularization term is not clear to me. First, why does the sum run over all hidden layers l = 1..L, but not over all neurons in each hidden layer? Second, I do not understand "k is the number of variables in the training data". Does it mean that k corresponds to the dimensionality of the feature vectors?
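For reference, the term I am asking about looks roughly like this (reproduced from the page as best I can, so the exact index limits may not be verbatim):

\Omega_{weights} = \frac{1}{2} \sum_{l=1}^{L} \sum_{j=1}^{n} \sum_{i=1}^{k} \left( w_{ji}^{(l)} \right)^{2}

where L is the number of hidden layers, k is (per the quoted text) the number of variables in the training data, and n is a third quantity defined on the page (the number of observations, if I read it correctly).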
Thanks for any clarification.
Regards, Lukas Vareka

Answers (2)

Sara Sharifzadeh on 6 Jun 2020
Since this L2 regularisation aims to constrain all the weights, the first sigma should sum over all hidden layers plus the final layer, and the two inner sigmas should count the neurons in the previous layer (l-1) and in the current hidden layer (l). In total, that covers all the weights of the network to be constrained.
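A minimal MATLAB sketch of this reading, assuming the weights are collected in a cell array W (a hypothetical layout for illustration, not taken from the toolbox):

% Hypothetical sketch of the penalty described above (illustration only).
% W is a cell array of weight matrices where W{l} has size s_l-by-s_(l-1):
% rows index neurons in layer l, columns index neurons in layer l-1.
% The last cell holds the weights of the final (output) layer.
W = {randn(3, 4), randn(2, 3)};       % toy 4-3-2 network

Omega = 0;
for l = 1:numel(W)                    % hidden layers plus the final layer
    Omega = Omega + sum(W{l}(:).^2);  % inner sums over both neuron indices
end
Omega = 0.5 * Omega;                  % the 1/2 factor in the definition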

BERGHOUT Tarek on 11 Apr 2019
I didn't understand your question; could you make it clearer? There are no k and L2 parameters in the link.
Could you give us an example?
Anyway, I have made many types of autoencoders; I uploaded some of them to my profile:
https://www.mathworks.com/matlabcentral/fileexchange/71115-denoising-autoencoders?s_tid=prof_contriblnk
