For the linear regression model, the cost function is based on the squared error between the predicted values and the actual values (the difference between each prediction and the corresponding target). The goal of training is to find the parameter values that minimize this cost.
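For reference, the squared-error cost used throughout this post (and implemented in the code below) is

J(θ) = (1/(2m)) * Σ (h_θ(x^(i)) - y^(i))^2, summed over i = 1..m

where m is the number of training examples, h_θ(x^(i)) is the prediction for the i-th example, and y^(i) is the actual value.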
In the line equation y = mx + c, m is the slope and c is the y-intercept of the line. In the hypothesis h(x) = θ0 + θ1*x, θ0 is the y-intercept and θ1 is the slope of the regression line.
The theta values are the parameters. A quick example of how to visualize the hypothesis: with θ0 = 1.5 and θ1 = 0, the hypothesis is h(x) = 1.5 + 0x. A slope of 0 means the line is flat, so y is always the constant 1.5.
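As a minimal sketch, that particular hypothesis could be evaluated in Octave/Matlab like this (the x values are made up for illustration):

theta = [1.5; 0];        % [theta0; theta1] from the example above
X = [1 0; 1 1; 1 2];     % each row is [1, x], so the first column multiplies theta0
h = X * theta;           % every prediction is 1.5, because the slope is 0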
You can vectorize operations in Octave/Matlab. Iterating over an entire vector element by element is a bad idea if your programming language lets you vectorize the operation; R, Octave, Matlab, and Python (NumPy) all support it. For example, if theta = (t0, t1, t2, t3) and X = (x0, x1, x2, x3), you can get the dot (scalar) product like this: theta * X' = (t0, t1, t2, t3) * (x0, x1, x2, x3)' = t0*x0 + t1*x1 + t2*x2 + t3*x3. The result is a scalar.
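A minimal numeric sketch of that dot product (the values are made up for illustration):

theta = [1 2 3 4];       % row vector (t0..t3)
x     = [5 6 7 8];       % row vector (x0..x3)
s = theta * x';          % 1*5 + 2*6 + 3*7 + 4*8 = 70, a scalar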
For example, you can vectorize h in your code like this:
H = (theta' * X')';      % m x 1 vector of predictions (the hypothesis for every example)
S = sum((H - y) .^ 2);   % sum of squared errors
J = S / (2*m);           % cost
The above answer is correct, but you can also do:
H = X * theta;           % same m x 1 vector of predictions, without the transposes
S = sum((H - y) .^ 2);
J = S / (2*m);
Rather than computing
theta' * X'
and then taking the transpose, you can directly calculate
X * theta
It gives the same m x 1 result.
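A quick sketch checking that both forms agree (the sizes and values are assumed for illustration):

X = [1 2; 1 3; 1 4];     % m = 3 examples, n = 2 columns (including the bias column of ones)
theta = [0.5; 2];        % n x 1 parameter vector
H1 = (theta' * X')';     % 3 x 1
H2 = X * theta;          % 3 x 1, identical to H1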
The line below returns the required cost value of 32.07 when computeCost is run once with θ initialized to zeros:
J = (1/(2*m)) * (sum(((X * theta) - y).^2));
and corresponds directly to the cost-function formula given above.
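As a sketch of how that one line fits into a complete function (the name computeCost comes from the usage above; the argument order and the assumption that X already includes a leading column of ones are mine):

function J = computeCost(X, y, theta)
  % X: m x n design matrix (first column all ones), y: m x 1 targets, theta: n x 1 parameters
  m = length(y);                               % number of training examples
  J = (1/(2*m)) * sum(((X * theta) - y).^2);   % squared-error cost
end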
It can also be done in one line, where m is the number of training examples:
J = (1/(2*m)) * (((X * theta) - y).^2)' * ones(m,1);
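A tiny sketch of why this works (values assumed for illustration): multiplying a row vector by a column of ones sums its entries, so it replaces the call to sum.

v = [1 2 3];
v * ones(3,1)            % returns 6, the same as sum(v)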