I am a novice trying to implement gradient descent with one variable, but I cannot figure out how to fix my code (below). I am not sure whether my for-loop is correct. This is the error message: "In an assignment A(:) = B, the number of elements in A and B must be the same." Please help?
data = load('data.txt');
X = data(:, 1); y = data(:, 2);
m = length(y);
X = [ones(m, 1), data(:,1)]; % Add a column of ones to x
theta = zeros(2, 1); % initialize fitting parameters
num_iters = 1500;
alpha = 0.01;
J = computeCost(X, y, theta)

% body of computeCost:
m = length(y);
J = sum((X * theta - y).^2)/(2 * m);

[theta J_history] = gradientDescent(X, y, theta, alpha, num_iters)

% body of gradientDescent:
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    h = (theta(1) + theta(2)*X)';
    theta(1) = theta(1) - alpha * (1/m) * h * X(:, 1);
    theta(2) = theta(2) - alpha * (1/m) * h * X(:, 2);
    % Save the cost J in every iteration
    J_history(num_iters) = computeCost(X, y, theta);
end

2 Comments

Walter Roberson on 30 Mar 2016
Please show the complete error message, everything in red.
Jackwhale on 30 Mar 2016
This is the complete error message: "In an assignment A(:) = B, the number of elements in A and B must be the same."


Accepted Answer

Torsten on 30 Mar 2016


theta(1) - alpha * (1/m) * h * X(:, 1)
and
theta(2) - alpha * (1/m) * h * X(:, 2)
are 2x1 vectors, which are assigned to the scalars theta(1) and theta(2) in the lines
theta(1) = theta(1) - alpha * (1/m) * h * X(:, 1);
theta(2) = theta(2) - alpha * (1/m) * h * X(:, 2);
This is not possible.
Best wishes
Torsten.
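[Editor's note] To make the shape mismatch concrete, here is a small pure-Python sketch, with nested lists standing in for MATLAB matrices. The data values are the small test set quoted later in this thread; the variable names are chosen for illustration. It shows that the asker's h is 2×m, so h * X(:,1) yields a 2×1 vector, whereas building h as the residual row vector (X*theta - y)' makes the product a scalar:

```python
# Pure-Python sketch of the shape mismatch, with nested lists standing in
# for MATLAB matrices. Data values are the small test set quoted later
# in this thread.
X = [[1.0, 5.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]]  # m x 2 design matrix
y = [1.0, 6.0, 4.0, 2.0]
theta = [0.0, 0.0]
m = len(y)

# The asker's h = (theta(1) + theta(2)*X)': theta(2)*X is m x 2, so the
# transpose h is 2 x m, and h * X(:,1) is a 2x1 vector -- not a scalar.
h_wrong = [[theta[0] + theta[1] * X[i][j] for i in range(m)] for j in range(2)]
bad_product = [sum(h_wrong[r][i] * X[i][0] for i in range(m)) for r in range(2)]
print(len(bad_product))  # 2 -- cannot be assigned to the scalar theta(1)

# Intended h: the 1 x m residual row vector (X*theta - y)', which makes
# h * X(:,j) a scalar, so theta(1) and theta(2) update cleanly.
h = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
grad0 = sum(h[i] * X[i][0] for i in range(m)) / m
grad1 = sum(h[i] * X[i][1] for i in range(m)) / m
print(grad0, grad1)  # -3.25 -10.75 at theta = [0, 0]
```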

3 Comments

Jackwhale on 30 Mar 2016
Thanks! But what is the best way to fix that?
Torsten on 30 Mar 2016
I must admit that I don't understand what your code does.
To answer your question, you would have to include comments and explain the underlying problem, and the algorithm used to solve it, in more detail.
Best wishes
Torsten.
Jackwhale on 30 Mar 2016
To clarify the goal: the objective is to predict the profitability of a food delivery truck when expanding to a new city, based on the city's population size. The first data column is the population; the second column is the profit of a food truck in that city. The chosen approach is the batch gradient descent algorithm, which changes the parameters to come closer to the optimal values that minimise the cost function J(). The idea, however, is to monitor J() so as to check the convergence of the gradient descent implementation. The loop structure has been provided; I only need to supply the updates to theta within each iteration to minimise J(). I have implemented computeCost correctly, but am struggling with implementing the gradient descent correctly.


More Answers (2)

Torsten on 30 Mar 2016 (edited)


I don't know why you use such a complicated approach.
Just execute
data = load('data.txt' );
A = [ones(length(data(:,1)),1), data(:,1)];
b = data(:,2);
theta = A \ b
to get your optimum theta values.
Best wishes
Torsten.

14 Comments

Jackwhale on 30 Mar 2016 (edited)
I guess that is because this is for a course assignment to understand how gradient descent works. The goal is to minimize the value of J by changing the values of the vector, and check each step to see that J is decreasing. The goal is not just to calculate the optimum theta values.
Torsten on 30 Mar 2016
Your objective function is
f(theta)=(X*theta-y)'*(X*theta-y)/(2*m)
with
X = [ones(m, 1), data(:,1)] and y = data(:, 2).
The gradient descent method reads
theta(n+1)=theta(n)-alpha*grad(f(theta(n)))
Now determine grad(f(theta(n))) and iterate.
Note that the update formula for theta in your code from above is incorrect.
Best wishes
Torsten.
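[Editor's note] For this objective function, the gradient works out to grad f(theta) = X'*(X*theta - y)/m. A pure-Python finite-difference sketch can check this; X and y below are the small test set that appears later in this thread, and theta = [0.5, -0.3] is an arbitrary test point chosen for illustration:

```python
# Finite-difference check of grad f(theta) = X'*(X*theta - y)/m for
# f(theta) = (X*theta - y)'*(X*theta - y)/(2*m), pure standard library.
X = [[1.0, 5.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]]
y = [1.0, 6.0, 4.0, 2.0]
m = len(y)

def f(theta):
    r = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
    return sum(ri * ri for ri in r) / (2 * m)

def analytic_grad(theta):
    r = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
    return [sum(r[i] * X[i][j] for i in range(m)) / m for j in range(2)]

theta, eps = [0.5, -0.3], 1e-6
numeric = []
for j in range(2):
    tp, tm = theta[:], theta[:]
    tp[j] += eps
    tm[j] -= eps
    numeric.append((f(tp) - f(tm)) / (2 * eps))  # central difference

print(analytic_grad(theta))  # agrees with numeric to ~1e-9
print(numeric)
```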
Jackwhale on 30 Mar 2016
So, do I understand you correctly like below?
In my code above:
1)
J = sum((X*theta-y).^2)/(2*m); becomes
J = sum((X*theta-y)'*(X*theta-y))/(2*m)
2)
h=(theta(1)+ theta(2)*X)'; becomes
theta(n+1)=theta(n)-alpha*grad(f(theta(n)))
3) Removed/ deleted:
theta(1) = theta(1) - alpha * (1/m) * h * X(:, 1);
theta(2) = theta(2) - alpha * (1/m) * h * X(:, 2);
When I implement this, I get the following error:
Undefined function or variable 'n'.
Torsten on 31 Mar 2016
Please check whether this is correct:
data = load('data.txt' );
y = data(:,2);
m = length(y);
X = [ones(m,1), data(:,1)]; % Add a column of ones to x
num_iters = 1500;
alpha = 0.01;
J_history = zeros(num_iters, 1);
theta = zeros(2, 1); % initialize fitting parameters
J_history(1) = (X*theta-y)'*(X*theta-y)/(2*m);
for iter = 2:num_iters
theta = theta-alpha*X'*(X*theta-y)/m;
J_history(iter)=(X*theta-y)'*(X*theta-y)/(2*m);
end
Best wishes
Torsten.
Jackwhale on 31 Mar 2016
Hi Torsten, thank you so much. Your code runs without errors but something is still wrong. I see that your code does not use J = computeCost(X, y, theta). Perhaps that is it. The following results are given for debugging, which I cannot replicate.
>>[theta J_hist] = gradientDescent([1 5; 1 2; 1 4; 1 5],[1 6 4 2]',[0 0]',0.01,1000);
% When typing these variable names, the final results should be as follows:
>> theta
theta =
    5.2148
   -0.5733
>> J_hist(1)
ans = 5.9794
>> J_hist(1000)
ans = 0.85426
These are the first few theta values computed in the gradientDescent() for-loop:
% first iteration:   theta = 0.032500  0.107500
% second iteration:  theta = 0.060375  0.194887
% third iteration:   theta = 0.084476  0.265867
% fourth iteration:  theta = 0.10550   0.32346
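[Editor's note] The expected values above can be reproduced with a direct pure-Python translation of the vectorized update theta ← theta - alpha*X'*(X*theta - y)/m; this is a sketch for cross-checking, not the assignment's MATLAB solution:

```python
# Pure-Python translation of the vectorized batch gradient descent
# update theta <- theta - alpha*X'*(X*theta - y)/m, run on the test
# data from the comment above.
X = [[1.0, 5.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]]
y = [1.0, 6.0, 4.0, 2.0]
m, alpha, num_iters = len(y), 0.01, 1000

theta = [0.0, 0.0]
J_hist = []
for it in range(num_iters):
    r = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
    grad = [sum(r[i] * X[i][j] for i in range(m)) / m for j in range(2)]
    theta = [theta[j] - alpha * grad[j] for j in range(2)]
    if it == 0:
        first_theta = theta[:]
    # cost after the update, matching J_hist(iter) in the expected output
    r = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
    J_hist.append(sum(ri * ri for ri in r) / (2 * m))

print(first_theta)           # ~[0.0325, 0.1075], the first debug value above
print(round(J_hist[0], 4))   # ~5.9794
print(theta)                 # ~[5.2148, -0.5733] after 1000 iterations
```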
Torsten on 31 Mar 2016
And what does the above code give as J_history(1000) and theta for the first four iterations ?
Are the values different ?
Best wishes
Torsten.
Jackwhale on 31 Mar 2016 (edited)
For theta, I get
ans =
     0
     0
For J_history(1000):
ans = 7.1250
Torsten on 31 Mar 2016
Can't be true.
For theta=0, the expression
(X*theta-y)'*(X*theta-y)/(2*m)
evaluates to
[1 6 4 2]*[1 6 4 2]'/(2*4) = 57/8 = 7.125,
but not 4.5161.
Best wishes
Torsten.
Jackwhale on 31 Mar 2016
Very sorry, you are right. I have changed my response. But the correct answer for J_hist(1000) is ans = 0.85426, and I just don't get it.
Also, when I enter [theta J_hist] = gradientDescent([1 5; 1 2; 1 4; 1 5],[1 6 4 2]',[0 0]',0.01,1000); and then "theta", I get 0 0 instead of theta = 5.2148 -0.5733.
Torsten on 31 Mar 2016 (edited)
Please run this code and output J_history(1000) and theta at the end.
y=[1; 6; 4; 2];
m = 4;
X = [1 5; 1 2; 1 4; 1 5];
num_iters = 1000;
alpha = 0.01;
J_history = zeros(num_iters, 1);
theta = zeros(2, 1); % initialize fitting parameters
J_history(1) = (X*theta-y)'*(X*theta-y)/(2*m);
for iter = 2:num_iters
theta = theta-alpha*X'*(X*theta-y)/m;
J_history(iter)=(X*theta-y)'*(X*theta-y)/(2*m);
end
Best wishes
Torsten.
Jackwhale on 31 Mar 2016
I get this:
theta =
    0.0584
    0.6533
J_history(1000)
ans =
     0
Jackwhale on 31 Mar 2016
The assignment is not focused on obtaining the answers, but on doing it a certain way. It requires me to first calculate the cost function, so I can check the convergence of the gradient descent implementation. J = computeCost(X, y, theta). Then run computeCost once using theta initialized to zeros. The cost then becomes 32.0727. I have done that correctly. Next, run gradient descent. The loop structure has been written for me:
[theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters);
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
I only need to supply the updates to theta within each iteration.
Torsten on 31 Mar 2016
You seem to have a strange MATLAB version.
If I set
num_iters=1001,
I get
theta =
    5.2147549
   -0.5733459
J_history(1001)
ans =
    0.8554026
thus the results expected.
Best wishes
Torsten.
Torsten on 31 Mar 2016
"I only need to supply the updates to theta within each iteration."
If you can't read from the code I supplied how theta is updated in every iteration, then you should really start with MATLAB basics.


Agbakoba Chukwunoso on 6 Dec 2020


Please help me out. I'm trying to run gradient descent with the code below (reposted, formatted, in the first comment), but when I run it, it returns the function name gradientdescent to me, not the value.

2 Comments

Agbakoba Chukwunoso on 6 Dec 2020
data = load('ex1data1.txt');
% text file contains 2 values in each row, separated by commas
X = [ones(m, 1), data(:,1)];
theta = zeros(2, 1);
iterations = 1500;
alpha = 0.01;

function [theta, J_history] = gradientdescent(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    k = 1:m;
    j1 = (1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
    j2 = (1/m)*sum(((theta(1)+theta(2).*X(k,2))-y(k)).*X(k,2))
    theta(1) = theta(1)-alpha*(j1);
    theta(2) = theta(2)-alpha*(j2);
end
end
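[Editor's note] The component-wise updates j1, j2 above are algebraically the same step as the vectorized update theta ← theta - alpha*X'*(X*theta - y)/m from the accepted answer. A pure-Python sketch on a small hypothetical data set checks the equivalence (note that in the posted script, y and m are never defined before use, which is a separate bug):

```python
# Check that the component-wise updates j1, j2 from the comment above are
# the same step as the vectorized update theta <- theta - alpha*X'*(X*theta - y)/m.
X = [[1.0, 5.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]]  # small hypothetical data
y = [1.0, 6.0, 4.0, 2.0]
m, alpha = len(y), 0.01

def step_componentwise(theta):
    j1 = sum((theta[0] + theta[1] * X[k][1]) - y[k] for k in range(m)) / m
    j2 = sum(((theta[0] + theta[1] * X[k][1]) - y[k]) * X[k][1] for k in range(m)) / m
    return [theta[0] - alpha * j1, theta[1] - alpha * j2]

def step_vectorized(theta):
    r = [theta[0] * X[i][0] + theta[1] * X[i][1] - y[i] for i in range(m)]
    g = [sum(r[i] * X[i][j] for i in range(m)) / m for j in range(2)]
    return [theta[j] - alpha * g[j] for j in range(2)]

t1, t2 = [0.0, 0.0], [0.0, 0.0]
for _ in range(10):
    t1, t2 = step_componentwise(t1), step_vectorized(t2)

print(t1)
print(t2)  # identical up to floating-point rounding
```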
sivarao K on 10 Nov 2021
Here 'y' is not defined, but it is executing; how?

