
Is it correct that the Regression Learner Toolbox uses the best parameters for each model?

3 views (last 30 days)
Is it correct that the Regression Learner Toolbox uses the best parameters for each model? For example, in the case of trees, the accuracy depends on how many branches there are. Does the Regression Learner choose these parameters so that accuracy is optimal? Do SVM, neural network regression, linear regression, etc. also use parameters tuned for optimal accuracy?
  1 Comment
Ive J on 26 Aug 2023
It only does so if you select the optimizable version of each method, e.g. Optimizable Tree.


Accepted Answer

Yatharth on 5 Sep 2023
Hi,
I understand that you want to know whether the results obtained with the parameters selected by the toolbox are the best possible, or whether there is scope for improvement via manual tuning of parameters. However, since you mention "accuracy" and "optimal accuracy", note that these depend on your data and your expectations of the model.
Let's assume you are using a Linear Regression Model:
Then the model contains an intercept and a linear term for each predictor. A least-squares fit determines the model parameters, i.e. the intercept and the coefficients of the linear terms: the predicted response is compared to the true response, an error is calculated, and the algorithm minimizes the overall error.
So for the chosen model form, the fitted parameters are "optimal".
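To make the least-squares idea concrete, here is a minimal sketch in Python/NumPy (used purely as an illustration; the Regression Learner itself is a MATLAB app, and the toy data here are invented for the example):

```python
import numpy as np

# Toy data: y = 2*x + 1 plus Gaussian noise (invented for illustration)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Design matrix with an intercept column; lstsq minimizes ||X @ beta - y||^2
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta

# For this fixed model form, no other (intercept, slope) pair gives a
# lower sum of squared errors on the training data.
print(intercept, slope)
```

In this sense the fitted coefficients are "optimal" by construction, but only within the chosen model form.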
But you can of course tune your model by changing certain hyperparameters, which might lead to better predictions. There is always a tradeoff between performance and accuracy, and you have to distinguish between accuracy on the training data and on validation/test data (keyword: "overfitting").
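A small sketch of that training-vs-validation distinction (again in Python/NumPy as a stand-in, with invented toy data): increasing model flexibility, here the polynomial degree standing in for a hyperparameter like tree depth, never increases training error, but validation error can get worse.

```python
import numpy as np

# Invented toy data: a nonlinear signal plus noise
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)

# Hold out part of the data for validation
x_tr, y_tr = x[:30], y[:30]
x_va, y_va = x[30:], y[30:]

def mse(deg):
    # Fit a polynomial of the given degree on the training split only
    coefs = np.polyfit(x_tr, y_tr, deg)
    err_tr = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coefs, x_va) - y_va) ** 2)
    return err_tr, err_va

# Training error can only go down as the degree grows;
# validation error is what reveals overfitting.
for deg in (1, 3, 9):
    print(deg, *mse(deg))
```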
I hope this helps.
  1 Comment
Ive J on 5 Sep 2023 (edited: 5 Sep 2023)
This doesn't address what the OP was asking. Moreover, it compares apples and oranges. In the case of a generalized linear model, we estimate the true parameters of a parameterized model. This is not the case for the other methods in the Regression Learner Toolbox (NNs, GPR, trees, etc.): those models have hyperparameters that must be optimized in a proper manner, for instance with nested CV on the training set. That does not apply to plain linear regression. Of course, you may be talking about penalized regression models (LASSO, ridge, or elastic net), which do contain hyperparameters.
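As a sketch of the tuning scheme described above, here is a minimal hyperparameter search for ridge regression in Python/NumPy (an illustration with invented data and an invented alpha grid, not the toolbox's actual procedure): an inner cross-validation on the training set selects the penalty, and a held-out test set estimates the error of the final model.

```python
import numpy as np

# Invented toy regression problem with a few irrelevant predictors
rng = np.random.default_rng(2)
n, p = 60, 5
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -2.0, 0.0, 0.0, 0.5])
y = X @ true_beta + rng.normal(scale=1.0, size=n)

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

def cv_mse(X, y, alpha, k=5):
    # k-fold CV estimate of prediction error for a given alpha
    idx = np.arange(len(y))
    errs = []
    for te in np.array_split(idx, k):
        tr = np.setdiff1d(idx, te)
        beta = ridge_fit(X[tr], y[tr], alpha)
        errs.append(np.mean((X[te] @ beta - y[te]) ** 2))
    return np.mean(errs)

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]

# Outer split: hold out a test set that the tuning never sees
X_tr, y_tr = X[:45], y[:45]
X_te, y_te = X[45:], y[45:]

# Inner CV on the training set picks the hyperparameter ...
best_alpha = min(alphas, key=lambda a: cv_mse(X_tr, y_tr, a))
# ... and the outer test set gives an honest error estimate
beta = ridge_fit(X_tr, y_tr, best_alpha)
test_mse = np.mean((X_te @ beta - y_te) ** 2)
print(best_alpha, test_mse)
```

The key point is that the hyperparameter is chosen using only the training data, which is what distinguishes it from the closed-form coefficient fit of an unpenalized linear model.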


More Answers (0)
