© Bryan S. Morse, Brigham Young University, 1998–2000. Last modified on February 12, 2000 at 10:00 AM

Contents

13.1 Introduction
13.2 First-Derivative Methods
  13.2.1 Roberts Kernels
  13.2.2 Kirsch Compass Kernels
  13.2.3 Prewitt Kernels
  13.2.4 Sobel Kernels
  13.2.5 Edge Extraction
13.3 Second-Derivative Methods
  13.3.1 Laplacian Operators
  13.3.2 Laplacian of Gaussian Operators
  13.3.3 Difference of Gaussians Operator
13.4 Laplacians and Gradients
13.5 Combined Detection
13.6 The Effect of Window Size

Reading

SH&B, 4.3.2–4.3.3; Castleman 18.4–18.5.1

13.1 Introduction

Remember back in Lecture 2 that we defined an edge as a place of local transition from one object to another. Edges aren't complete borders, just locally identifiable probable transitions. In this lecture and the next, we'll discuss ways of detecting edges locally. To do this, we will exploit local neighborhood operations as covered in CS 450. If you need to brush up on the basics of convolution, now would be a good time to do so.

13.2 First-Derivative Methods

Most edge detectors are based in some way on measuring the intensity gradient at a point in the image. Recall from our discussion of vector calculus and differential geometry that the gradient operator ∇ is

$$\nabla = \begin{bmatrix} \dfrac{\partial}{\partial x} \\[6pt] \dfrac{\partial}{\partial y} \end{bmatrix} \tag{13.1}$$

When we apply this vector operator to a function, we get

$$\nabla f = \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[6pt] \dfrac{\partial f}{\partial y} \end{bmatrix} \tag{13.2}$$

As with any vector, we can compute its magnitude |∇f| and orientation φ(∇f).

• The gradient magnitude gives the amount of the difference between pixels in the neighborhood (the strength of the edge).

• The gradient orientation gives the direction of the greatest change, which presumably is the direction across the edge (the edge normal).

Many algorithms use only the gradient magnitude, but keep in mind that the gradient orientation often carries just as much information.

Most edge-detecting operators can be thought of as gradient calculators. Because the gradient is a continuous-function concept and we have discrete functions (images), we have to approximate it. Since derivatives are linear and shift-invariant, gradient calculation is most often done using convolution. Numerous kernels have been proposed for finding edges, and we'll cover some of those here.
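To make the magnitude/orientation idea concrete, here is a minimal sketch (not any particular operator from this lecture) that approximates the gradient with NumPy's central-difference `np.gradient` and derives edge strength and edge normal direction from it:

```python
import numpy as np

def gradient_magnitude_orientation(image):
    """Approximate the image gradient with central differences,
    then return its magnitude (edge strength) and orientation
    (edge-normal direction, in radians)."""
    image = np.asarray(image, dtype=float)
    gy, gx = np.gradient(image)        # gy = d/dy (rows), gx = d/dx (cols)
    magnitude = np.hypot(gx, gy)       # |∇f|
    orientation = np.arctan2(gy, gx)   # φ(∇f), direction of greatest change
    return magnitude, orientation

# A vertical step edge: the response is strongest across the transition
step = np.zeros((5, 5))
step[:, 3:] = 1.0
mag, ang = gradient_magnitude_orientation(step)
```

For this step image the orientation at the edge is 0 radians (the gradient points in the +x direction, across the edge), which illustrates why the orientation is called the edge normal.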

13.2.1 Roberts Kernels

Since we're looking for differences between adjacent pixels, one way to find edges is to explicitly use a {+1, −1} operator that calculates I(x_i) − I(x_j) for two pixels i and j in a neighborhood. Mathematically, these are called forward differences:

$$\frac{\partial I}{\partial x} \approx I(x+1, y) - I(x, y)$$

The Roberts kernels attempt to implement this using the following pair of 2×2 kernels:

$$g_1 = \begin{bmatrix} +1 & 0 \\ 0 & -1 \end{bmatrix} \qquad g_2 = \begin{bmatrix} 0 & +1 \\ -1 & 0 \end{bmatrix}$$

While these aren't specifically derivatives with respect to x and y, they are derivatives with respect to the two diagonal directions. These can be thought of as components of the...
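The two diagonal responses are typically combined into a single edge strength. A minimal sketch, assuming NumPy and a direct loop-based correlation for clarity (a real implementation would use a library convolution routine such as `scipy.ndimage.convolve`):

```python
import numpy as np

# Roberts cross kernels: differences along the two diagonals
G1 = np.array([[+1, 0],
               [0, -1]], dtype=float)
G2 = np.array([[0, +1],
               [-1, 0]], dtype=float)

def roberts(image):
    """Correlate each 2x2 neighborhood with both Roberts kernels and
    combine the two diagonal responses into a gradient magnitude."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    out = np.zeros((h - 1, w - 1))
    for y in range(h - 1):
        for x in range(w - 1):
            patch = image[y:y + 2, x:x + 2]
            r1 = np.sum(patch * G1)    # difference along one diagonal
            r2 = np.sum(patch * G2)    # difference along the other
            out[y, x] = np.hypot(r1, r2)
    return out
```

Note that because each kernel measures a diagonal difference, a vertical or horizontal step edge excites both kernels equally, and the combined magnitude still localizes the edge.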