CS 229: Machine Learning - Notes on Lecture #2
by Amit
In the second lecture, Prof. Andrew Ng starts talking about supervised learning methods. He begins with Linear Regression, in which the relationship between the input and the output is assumed to be linear, such as $h_\theta(x) = \theta_0 + \theta_1 x$. The parameters $\theta_0$ and $\theta_1$ need to be found, for which a couple of approaches are discussed:
- The first one involves minimizing the cost function $J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$, where $m$ is the number of training examples, $x^{(i)}$ is the $i$-th sample input, and $y^{(i)}$ is the corresponding output. A couple of methods are discussed for this minimization task. The first is the Gradient Descent method, which roughly works as follows (a short sketch in code follows these steps):
- Start with some value of $\theta$ (say $\theta = \vec{0}$)
- Keep updating $\theta$ to reduce $J(\theta)$ as follows: $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$, simultaneously for all $j$
- Stop when a desired reduced value of $J(\theta)$ is reached.
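Here is a minimal sketch of batch gradient descent for this one-variable linear regression. The learning rate `alpha` and the fixed iteration count `num_iters` are assumptions on my part; the lecture leaves the stopping criterion open:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x.

    A sketch, not the lecture's exact code: alpha and num_iters
    are assumed hyperparameters.
    """
    theta0, theta1 = 0.0, 0.0  # start with theta = 0
    for _ in range(num_iters):
        h = theta0 + theta1 * x  # predictions on all m examples
        # Partial derivatives of J(theta) = 1/2 * sum((h - y)^2)
        grad0 = np.sum(h - y)
        grad1 = np.sum((h - y) * x)
        # Simultaneous update: theta_j := theta_j - alpha * dJ/dtheta_j
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Tiny usage example on synthetic data
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x  # true theta0 = 1, theta1 = 2
print(gradient_descent(x, y, alpha=0.05, num_iters=5000))  # ~(1.0, 2.0)
```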
- The second approach to estimate $\theta$ uses linear algebraic techniques to obtain a closed-form formula for the parameters, the normal equations: $\theta = (X^T X)^{-1} X^T \vec{y}$, where $X$ is the design matrix of inputs and $\vec{y}$ is the vector of outputs.
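A sketch of the closed-form solution, assuming the inputs are stacked into a design matrix with a leading column of ones for the intercept term:

```python
import numpy as np

def normal_equations(x, y):
    """Solve theta = (X^T X)^{-1} X^T y for h(x) = theta0 + theta1 * x.

    A sketch: uses np.linalg.solve rather than forming the explicit
    inverse, which is the numerically preferred way to apply the formula.
    """
    X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept column
    return np.linalg.solve(X.T @ X, X.T @ y)   # [theta0, theta1]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
print(normal_equations(x, y))  # ~[1.0, 2.0]
```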
A faster method for large data sets, the stochastic gradient descent method, is then described; it updates the parameters using one training example at a time instead of summing over all $m$ examples before each update.
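A minimal sketch of the stochastic variant, again assuming a fixed `alpha` and a fixed number of passes (epochs) over the data:

```python
import numpy as np

def stochastic_gradient_descent(x, y, alpha=0.01, num_epochs=50):
    """Stochastic gradient descent for h(x) = theta0 + theta1 * x.

    A sketch: theta is updated after each single example (x_i, y_i)
    rather than after a full pass summing the gradient over all m examples.
    """
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_epochs):
        for i in np.random.permutation(len(y)):  # visit examples in random order
            err = theta0 + theta1 * x[i] - y[i]  # h(x_i) - y_i for one example
            theta0 -= alpha * err
            theta1 -= alpha * err * x[i]
    return theta0, theta1
```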
This is the video lecture:
As I mentioned in my notes on the first lecture, this is a good time to review the section notes on Linear Algebra. Lecture Notes 1 covers the first two lectures.
Looks like we will do a lot of regression in the next lecture. See you then!