How to handle soft weight constraints in a neural network

massoud pourmandi on 4 Jul 2022
Edited: Matt J on 4 Jul 2022
Suppose there is a feedforward neural network with two layers, where the weights of each layer are constrained to be non-negative and to sum to a constant value per layer. Why such assumptions? I have an optimization problem whose unknown variables can be mapped onto a neural network, with the weights representing my variables. Can anyone suggest a way to handle these constraints? For now, I have integrated the constraints into the cost function as penalty terms, but this is not working very well. For example, to enforce A(x) < x, I add the penalty max(A(x)/x - 1, 0) to the main cost function.
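A minimal sketch of this penalty approach, applied to the two constraints in the question (the names W, C, mainLoss, and the penalty weight lambda are illustrative placeholders, not from the original post):

```matlab
% Soft-penalty sketch: W is one layer's weight matrix, C is the
% required sum of its entries, mainLoss is the task loss.
lambda = 10;                                % penalty strength; needs tuning
sumPenalty = (sum(W(:)) - C)^2;             % deviation from the sum constraint
negPenalty = sum(max(-W(:), 0).^2);         % squared hinge on negative weights
loss = mainLoss + lambda*(sumPenalty + negPenalty);
```

Both penalties vanish when the constraints hold, so the minimizer trades off constraint violation against the main objective depending on lambda.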

Accepted Answer

Matt J on 4 Jul 2022
Edited: Matt J on 4 Jul 2022
If you wish to train with standard unconstrained stochastic gradient descent algorithms, you will probably have to make a custom layer in which the score functions are calculated according to

$$z = C \cdot \frac{\sum_i e^{w_i}\, x_i}{\sum_j e^{w_j}},$$

where the $w_i$ are the learnable parameters. This is equivalent to weighting the inputs $x_i$ with positive weights that sum to C.
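One way to realize this in MATLAB is a custom layer that stores unconstrained parameters W and maps them through a softmax scaled by C, so the effective weights are always positive and sum to C by construction. The following is a minimal sketch under that reading; the class name SimplexWeightedLayer and its constructor arguments are illustrative, not from the original answer.

```matlab
classdef SimplexWeightedLayer < nnet.layer.Layer
    % Custom layer whose effective weights are positive and sum to C
    % along each row, obtained by a scaled softmax over unconstrained
    % learnable parameters.
    properties
        C % target sum of the effective weights per output unit
    end
    properties (Learnable)
        W % unconstrained learnable parameters, numOutputs-by-numInputs
    end
    methods
        function layer = SimplexWeightedLayer(numInputs, numOutputs, C, name)
            layer.Name = name;
            layer.C = C;
            layer.W = randn(numOutputs, numInputs);
        end
        function Z = predict(layer, X)
            % Row-wise softmax of W (stabilized by subtracting the row max)
            % gives positive weights summing to 1; scaling by C makes each
            % row sum to C. X is numInputs-by-batch.
            E = exp(layer.W - max(layer.W, [], 2));
            V = layer.C * E ./ sum(E, 2);   % effective constrained weights
            Z = V * X;
        end
    end
end
```

Because the constraint is built into the parameterization, any standard unconstrained optimizer (e.g., SGD or Adam) can update W freely while the effective weights V always satisfy the constraints exactly.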

More Answers (0)
