loop causes enlargement of array

fima v on 13 April 2020
Commented: Star Strider on 14 April 2020
Hello, my theta vector (2x1, two values) needs to be updated on each iteration. Instead, theta ends up with size 1000 (the number of iterations).
Why does my theta update line cause theta to inflate?
Thanks
n=1000;
t=1.4;
sigma_R = t*0.001;
min_value_t = t-sigma_R;
max_value_t = t+sigma_R;
y_data = min_value_t + (max_value_t - min_value_t) * rand(n,1);
x_data=[1:1000];
L=0.0001; %learning rate
%plot(x_data,y_data);
itter=1000;
theta_0=0;
theta_1=0;
theta=[theta_0;theta_1];
itter=1000;
for i=1:itter
    onss=ones(1,1000);
    x_mat=[onss;x_data]';
    pred=x_mat*theta;
    residuals = (pred-y_data)';
    theta_0=theta_0-((x_data.*residuals)*(L/n));
    theta_1=theta_1-((x_data.*residuals)*(L/n));
    theta=[theta_0,theta_1];
end
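A quick size check on the first pass through the loop (a diagnostic sketch, reusing the setup above) shows where the growth comes from: x_data and residuals are both 1x1000 rows, so the update subtracts a 1x1000 row from the scalar theta_0:
theta     = [0; 0];
x_mat     = [ones(1,1000); x_data]';        % 1000-by-2
residuals = (x_mat*theta - y_data)';        % 1-by-1000 row
size(x_data .* residuals)                   % ans = 1  1000
size(theta_0 - (x_data.*residuals)*(L/n))   % ans = 1  1000, so theta_0 becomes a row vector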

Accepted Answer

Star Strider on 13 April 2020
The ‘x_data’ and ‘residuals’ variables are both (1x1000) vectors.
Subscripting them is likely the solution, although you will need to decide whether that is what you intend:
theta_0=theta_0-((x_data(i).*residuals(i))*(L/n));
theta_1=theta_1-((x_data(i).*residuals(i))*(L/n));
It may also be necessary to do this with other variables in the loop. You need to decide that as well.
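Alternatively, if the intent is batch gradient descent over all 1000 samples per iteration (an assumption, since the question does not say which variant is wanted), a minimal sketch that keeps theta as a 2x1 column vector throughout is:
n      = 1000;
L      = 0.0001;                        % learning rate
t      = 1.4;  sigma_R = t*0.001;
y_data = (t - sigma_R) + 2*sigma_R*rand(n,1);
x_mat  = [ones(n,1), (1:n)'];           % n-by-2 design matrix [1, x]
theta  = [0; 0];                        % 2-by-1 parameters
for i = 1:1000
    residuals = x_mat*theta - y_data;   % n-by-1
    grad      = (x_mat'*residuals)/n;   % 2-by-1 gradient
    theta     = theta - L*grad;         % stays 2-by-1
end
Every term in this update is 2x1, so theta never changes shape inside the loop.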
2 Comments
fima v on 14 April 2020
Hello Star Strider, in the big code below I have these two lines inside the iteration loop.
As you can see, temp0 is a scalar, not a vector. The right-hand side produces a scalar, then we subtract the two numbers and update theta. So why do I get a vector of size 1000 for theta when I put it in a loop?
Thanks.
temp0=theta(1) - Learning_step_a * (1/m)* sum((hypothesis-y).* x);
temp1=theta(2) - Learning_step_a * (1/m) *sum(hypothesis-y);
% Machine Learning : Linear Regression
clear all; close all; clc;
%% ======================= Plotting Training Data =======================
t=1.2
n=1000;
sigma_R = t*0.001;
min_value_t = t-sigma_R;
max_value_t = t+sigma_R;
y = min_value_t + (max_value_t - min_value_t) * rand(n,1);
x = [1:1000];
% Plot Data
plot(x,y,'rx');
xlabel('X -> Input') % x-axis label
ylabel('Y -> Output') % y-axis label
%% =================== Initialize Linear regression parameters ===================
m = length(y); % number of training examples
% initialize fitting parameters - all zeros
theta=zeros(2,1);%theta 0,1
% Some gradient descent settings
iterations = 1500;
Learning_step_a = 0.07; % step parameter
%% =================== Gradient descent ===================
fprintf('Running Gradient Descent ...\n')
%Compute Gradient descent
% Initialize Objective Function History
J_history = zeros(iterations, 1);
m = length(y); % number of training examples
% run gradient descent
for iter = 1:iterations
    % In every iteration calculate hypothesis
    hypothesis=theta(1).*x+theta(2);
    % Update theta variables
    temp0=theta(1) - Learning_step_a * (1/m)* sum((hypothesis-y).* x);
    temp1=theta(2) - Learning_step_a * (1/m) *sum(hypothesis-y);
    theta(1)=temp0;
    theta(2)=temp1;
    % Save objective function
    J_history(iter)=(1/2*m)*sum(( hypothesis-y ).^2);
end
% print theta to screen
fprintf('Theta found by gradient descent: %f %f\n',theta(1), theta(2));
fprintf('Minimum of objective function is %f \n',J_history(iterations));
% Plot the linear fit
hold on; % keep previous plot visible
plot(x, theta(1)*x+theta(2), '-')
% Validate with polyfit fnc
poly_theta = polyfit(x,y,1);
plot(x, poly_theta(1)*x+poly_theta(2), 'y--');
legend('Training data', 'Linear regression','Linear regression with polyfit')
hold off
figure
% Plot Data
plot(x,y,'rx');
xlabel('X -> Input') % x-axis label
ylabel('Y -> Output') % y-axis label
hold on; % keep previous plot visible
% Validate with polyfit fnc
poly_theta = polyfit(x,y,1);
plot(x, poly_theta(1)*x+poly_theta(2), 'y--');
% for theta values that you are saying
theta(1)=0.0745; theta(2)=0.3800;
plot(x, theta(1)*x+theta(2), 'g--')
legend('Training data', 'Linear regression with polyfit','Your thetas')
hold off
Star Strider on 14 April 2020
One problem is that ‘x’ is (1x1000). Another problem is that ‘(hypothesis-y)’ is a (1000x1000) matrix, so the sum is going to be a (1x1000) row vector:
temp0=theta(1) - Learning_step_a * (1/m)* sum((hypothesis-y).* x);
The same sort of problem occurs in the next line:
temp1=theta(2) - Learning_step_a * (1/m) *sum(hypothesis-y);
so when you try to assign them to a scalar:
theta(1)=temp0;
theta(2)=temp1;
your code throws the error.
If you want to assign them to a scalar, it would be best to calculate:
sum(hypothesis-y)
in a separate step, then refer to the individual element of it in the ‘theta’ assignments.
Either that, or make ‘theta’ row vectors:
theta(1,:)=temp0;
theta(2,:)=temp1;
It all depends on what you want your code to do.
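As a sketch of the first option (an assumption about the intent, not something the thread settles on): making x an n-by-1 column so that hypothesis has the same orientation as y keeps hypothesis-y an n-by-1 vector, the two sum(...) calls then return scalars, and theta stays 2x1. Reusing the names from the code above (whether the 0.07 step converges for unscaled x is a separate question, and the objective is written with 1/(2*m) rather than (1/2*m)):
t = 1.2;  n = 1000;  sigma_R = t*0.001;
y = (t - sigma_R) + 2*sigma_R*rand(n,1);     % n-by-1 targets
x = (1:n)';                                  % n-by-1 inputs, same orientation as y
m = length(y);
theta = zeros(2,1);
Learning_step_a = 0.07;
iterations = 1500;
J_history = zeros(iterations,1);
for iter = 1:iterations
    hypothesis = theta(1).*x + theta(2);     % n-by-1, not n-by-n
    temp0 = theta(1) - Learning_step_a*(1/m)*sum((hypothesis - y).*x);
    temp1 = theta(2) - Learning_step_a*(1/m)*sum(hypothesis - y);
    theta(1) = temp0;                        % scalars, so theta stays 2-by-1
    theta(2) = temp1;
    J_history(iter) = (1/(2*m))*sum((hypothesis - y).^2);
end
The only change from the posted code is the orientation of x (and the grouping of the 1/(2*m) factor); the scalar updates then work as written.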


