Hi @Quazi Hussain,
Please see my response to your comments. The script and the app give equivalent results, with minor differences due to randomness in the optimization. A near-perfect R² on training data is normal for GPR, since a Gaussian process regression model tends to interpolate the training data very closely. The key measure is cross-validated or test-set performance; if that is poor, it suggests overfitting. You may want to try constraining the hyperparameters, testing simpler kernels, or comparing against simpler models, as in the sketch below.
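As a quick way to check for overfitting, this minimal sketch compares the training R² against 5-fold cross-validated error. Here X and y are placeholders for your predictor matrix and response vector, and the squared-exponential kernel is just one example of a simpler kernel you could try:

% Fix the seed so repeated runs are reproducible
rng(0)
gprMdl = fitrgp(X, y, ...
    'KernelFunction', 'squaredexponential', ...  % a simpler kernel to start with
    'Standardize', true);

% Training R^2 (often near 1 for GPR, so not a reliable quality measure)
yhatTrain = resubPredict(gprMdl);
r2Train = 1 - sum((y - yhatTrain).^2) / sum((y - mean(y)).^2);

% 5-fold cross-validated mean squared error: the number that actually matters
cvMdl = crossval(gprMdl, 'KFold', 5);
cvMSE = kfoldLoss(cvMdl);
fprintf('Training R^2 = %.4f, 5-fold CV MSE = %.4f\n', r2Train, cvMSE)

If the cross-validated error is much worse than the training fit suggests, that points to overfitting, and constraining the kernel hyperparameters or switching to a simpler model is worth a try.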
Please note that @dpb is correct: the Regression Learner app and the script both rely on the same underlying fitrgp function, and the hyperparameter optimization involves randomness. Without fixing the random seed, each run can produce slightly different hyperparameters and results. Calling rng with a fixed seed before training ensures reproducibility and makes the outputs from the app and the script match.
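As a minimal illustration (again with X and y as placeholders for your data), fixing the seed with rng immediately before calling fitrgp with hyperparameter optimization makes repeated runs reproducible:

% Any fixed seed works; use the same one before every run you want to compare
rng(1)
gprMdl = fitrgp(X, y, ...
    'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', ...
        struct('AcquisitionFunctionName', 'expected-improvement-plus', ...
               'ShowPlots', false));

The same applies to code exported from the Regression Learner app: add the rng call at the top of the generated training function, and the script and app runs should then agree.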
Hope this helps.