CS229 Lecture notes
Andrew Ng

Supervised learning
Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:
Living area (feet²)    Price (1000$s)
2104                   400
1600                   330
2400                   369
1416                   232
3000                   540
...                    ...

We can plot this data:
[Figure: scatter plot titled "housing prices"; x-axis: square feet (500 to 5000), y-axis: price (in $1000, 0 to 1000).]
Given data like this, how can we learn to predict the prices of other houses in Portland, as a function of the size of their living areas?


To establish notation for future use, we'll use x^(i) to denote the "input" variables (living area in this example), also called input features, and y^(i) to denote the "output" or target variable that we are trying to predict (price). A pair (x^(i), y^(i)) is called a training example, and the dataset that we'll be using to learn (a list of m training examples {(x^(i), y^(i)); i = 1, . . . , m}) is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y to denote the space of output values. In this example, X = Y = R.
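As a small illustration of this notation (a hypothetical sketch, not from the notes), a training set can be held as two parallel arrays, with index i picking out the training example (x^(i), y^(i)):

```python
import numpy as np

# The training set {(x^(i), y^(i)); i = 1, ..., m} stored as parallel arrays.
x = np.array([2104, 1600, 2400, 1416, 3000])  # inputs: living area in feet^2
y = np.array([400, 330, 369, 232, 540])       # targets: price in $1000s

m = len(x)                    # m = number of training examples (5 here, 47 in full)
first_example = (x[0], y[0])  # (x^(1), y^(1)); NumPy arrays index from 0
```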

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:

[Diagram: a training set is fed to a learning algorithm, which outputs a hypothesis h; a new input x (living area of a house) is fed to h, which outputs the predicted y (predicted price of the house).]
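To make the picture concrete, here is a deliberately trivial sketch (hypothetical, not the notes' code) of the interface the diagram describes: a learning algorithm takes a training set and returns a hypothesis h, which then maps a new input x to a predicted y:

```python
# A learning algorithm consumes a training set and outputs a hypothesis
# h : X -> Y. This toy "algorithm" just predicts the mean training price,
# standing in for a real method such as linear regression.
def learning_algorithm(x_train, y_train):
    mean_price = sum(y_train) / len(y_train)
    def h(x):
        return mean_price
    return h

h = learning_algorithm([2104, 1600, 2400], [400, 330, 369])
print(h(3000))  # predicted price (in $1000s) for a 3000 ft^2 house
```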

When the target variable that we're trying to predict is continuous, as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (say, predicting from the living area whether a dwelling is a house or an apartment), we call it a classification problem.
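The distinction is visible in the targets themselves; a quick sketch (with made-up class labels):

```python
import numpy as np

# Regression: the target variable is continuous (price in $1000s).
y_regression = np.array([400.0, 330.0, 369.0, 232.0, 540.0])

# Classification: the target takes a small number of discrete values
# (here a made-up labeling: 0 = apartment, 1 = house).
y_classification = np.array([1, 0, 1, 0, 1])
```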


Part I

Linear Regression
To make our housing example more interesting, let's consider a slightly richer dataset in which we also know the number of bedrooms in each house:

Living area (feet²)    #bedrooms    Price (1000$s)
2104                   3            400
1600                   3            330
2400                   3            369
1416                   2            232
3000                   4            540
...                    ...          ...
Here, the x's are two-dimensional vectors in R². For instance, x_1^(i) is the living area of the i-th house in the training set, and x_2^(i) is its number of bedrooms. (In general, when designing a learning problem, it will be up to you to decide what features to choose; if you are out in Portland gathering housing data, you might also decide to include other features, such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later, but for now let's take the features as given.)

To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_2
Here, the θ_i's are the parameters (also called weights) parameterizing the space of linear functions mapping from X to Y. When there is no risk of confusion, we will drop the θ subscript in h_θ(x) and write it more simply as h(x). To simplify our notation, we also introduce the convention of letting x_0 = 1 (this is the intercept term), so that

h(x) = Σ_{i=0}^{n} θ_i x_i = θ^T x,

where on the right-hand side above we are viewing θ and x both as vectors, and here n is the number of input variables (not counting x_0). Now, given a training set, how do we pick, or learn, the parameters θ? One reasonable method seems to be to make h(x) close to y, at least for the training examples we have.
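As a minimal sketch of this hypothesis in code (the θ values below are made up for illustration; nothing here is the notes' own implementation), with the x_0 = 1 convention baked into the first column:

```python
import numpy as np

# Design matrix for the two-feature dataset: each row is one training
# example, with columns x_0 = 1 (intercept), x_1 = living area, x_2 = #bedrooms.
X = np.array([
    [1.0, 2104.0, 3.0],
    [1.0, 1600.0, 3.0],
    [1.0, 2400.0, 3.0],
    [1.0, 1416.0, 2.0],
    [1.0, 3000.0, 4.0],
])

# Parameter values chosen purely for illustration; learning them is the
# subject of what follows.
theta = np.array([50.0, 0.1, 20.0])  # [theta_0, theta_1, theta_2]

def h(x_vec, theta):
    """Linear hypothesis: h_theta(x) = sum_{i=0}^{n} theta_i * x_i = theta^T x."""
    return x_vec @ theta

predictions = X @ theta  # h_theta(x^(i)) for all five examples at once
print(predictions)
```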