19-11-2012, 05:36 PM
Edge Detection Techniques: Evaluations and Comparisons
Abstract
Edge detection is one of the most commonly used operations in image analysis, and
there are probably more algorithms in the literature for enhancing and detecting edges
than any other single subject. The reason for this is that edges form the outline of an
object. An edge is the boundary between an object and the background, and indicates
the boundary between overlapping objects. This means that if the edges in an image can
be identified accurately, all of the objects can be located and basic properties such as
area, perimeter, and shape can be measured. Since computer vision involves the
identification and classification of objects in an image, edge detection is an essential
tool. In this paper, we have compared several techniques for edge detection in image
processing. We consider various well-known measuring metrics used in image
processing applied to standard images in this comparison.
INTRODUCTION
Edge detection is a very important area in the field of Computer Vision. Edges define
the boundaries between regions in an image, which helps with segmentation and object
recognition. They can show where shadows fall in an image or any other distinct
change in the intensity of an image. Edge detection is a fundamental step in low-level image
processing, and good edges are necessary for higher-level processing [1].
The problem is that edge detectors in general behave poorly. While their behavior
may fall within tolerances in specific situations, they have
difficulty adapting to new situations. The quality of edge detection is highly
dependent on lighting conditions, the presence of objects of similar intensities, density
of edges in the scene, and noise. While each of these problems can be handled by
adjusting certain values in the edge detector and changing the threshold value for what
is considered an edge, no good method has been determined for automatically setting
these values, so they must be manually changed by an operator each time the detector is
run with a different set of data.
REVIEW OF EDGE DETECTORS
The Marr-Hildreth Edge Detector
The Marr-Hildreth edge detector was a very popular edge operator before Canny
released his paper. It is a derivative-based operator which uses the Laplacian to take the
second derivative of an image. The idea is that if there is a step change in the
intensity of the image, it will be represented in the second derivative by a zero
crossing.
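The zero-crossing idea can be sketched directly: apply a Laplacian-of-Gaussian filter, then mark pixels where the response changes sign between neighbors. This is a minimal illustration, not the paper's implementation; the sigma and epsilon values are assumptions, and diagonal crossings are ignored for brevity.

```python
import numpy as np
from scipy import ndimage

def marr_hildreth(image, sigma=2.0, eps=1e-3):
    # Laplacian of Gaussian: smooth with a Gaussian, then take the
    # second derivative in one pass.
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros(log.shape, dtype=bool)
    # Zero crossings: adjacent pixels with strictly opposite signs whose
    # responses differ by more than a small epsilon (suppresses noise
    # in flat regions).
    horiz = (log[:, :-1] * log[:, 1:] < 0) & \
            (np.abs(log[:, :-1] - log[:, 1:]) > eps)
    vert = (log[:-1, :] * log[1:, :] < 0) & \
           (np.abs(log[:-1, :] - log[1:, :]) > eps)
    edges[:, :-1] |= horiz
    edges[:-1, :] |= vert
    return edges

# Synthetic step edge: dark left half, bright right half.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
edges = marr_hildreth(img)
```

On this synthetic step, the detector marks a thin line of zero crossings at the intensity boundary and nothing in the flat regions.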
The Canny Edge Detector
The Canny edge detector is widely considered to be the standard edge detection
algorithm in the industry. It was first created by John Canny for his Master's thesis at
MIT in 1983 [2], and still outperforms many of the newer algorithms that have been
developed. Canny saw the edge detection problem as a signal processing optimization
problem, so he developed an objective function to be optimized [2]. The solution to this
problem was a rather complex exponential function, but Canny found several ways to
approximate and optimize the edge-searching problem. The steps in the Canny edge
detector are as follows:
1. Smooth the image with a two-dimensional Gaussian. In most cases the computation
of a two-dimensional Gaussian is costly, so it is approximated by two one-dimensional
Gaussians, one in the x direction and the other in the y direction.
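The separability claim in step 1 can be checked directly: smoothing with a 2-D Gaussian gives the same result as two cheaper 1-D passes, one per axis. A minimal sketch with SciPy (the sigma value is illustrative):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Full 2-D Gaussian smoothing.
blur_2d = ndimage.gaussian_filter(img, sigma=1.5)

# The same smoothing as two 1-D passes, one per axis, exploiting the
# separability of the Gaussian kernel.
blur_sep = ndimage.gaussian_filter1d(img, sigma=1.5, axis=0)
blur_sep = ndimage.gaussian_filter1d(blur_sep, sigma=1.5, axis=1)
```

The two results agree to floating-point precision, while the separable form needs O(2k) multiplies per pixel instead of O(k^2) for a k-tap kernel.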
The Local Threshold and Boolean Function Based Edge Detection [1]
This edge detector is fundamentally different from many of the modern edge detectors
derived from Canny’s original. It does not rely on the gradient or Gaussian smoothing.
It takes advantage of both local and global thresholding to find edges. Unlike other
edge detectors, it converts a window of pixels into a binary pattern based on a local
threshold, and then applies masks to determine if an edge exists at a certain point or not.
By calculating the threshold on a per-pixel basis, the edge detector should be less
sensitive to variations in lighting throughout the picture. It does not rely on blurring to
reduce noise in the image. It instead looks at the variance on a local level. The
algorithm is as follows:
1. Apply a local threshold to a 3x3 window of the image. Because this is a local
threshold, it is recalculated each time the window is moved. The threshold value is
calculated as the mean of the 9 intensity values of the pixels in the window minus some
small tolerance value. If a pixel has an intensity value greater than this threshold, it is
set to a 1. If a pixel has an intensity value less than this threshold, it is set to a 0. This
gives a binary pattern of the 3x3 window.
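Step 1 above can be sketched as follows. The tolerance value is an assumption, since the text does not specify one:

```python
import numpy as np

def binary_pattern(window, tol=5.0):
    """Binarize a 3x3 window against a local threshold: the mean of its
    nine intensity values minus a small tolerance (tol is an assumed
    value)."""
    threshold = window.mean() - tol
    # Pixels above the local threshold become 1, the rest become 0.
    return (window > threshold).astype(np.uint8)

# A window straddling a dark/bright boundary.
win = np.array([[ 10,  12,  11],
                [200, 210, 205],
                [198, 202, 199]], dtype=float)
pattern = binary_pattern(win)
```

Here the mean is about 138.6, so after subtracting the tolerance the dark top row maps to 0 and the bright rows map to 1, yielding the binary pattern the masks are then applied to.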
Color Edge Detection using the Canny Operator
Another approach to edge detection using color information is simply to extend a
traditional intensity based edge detector into the color space. This method seeks to take
advantage of the known strengths of the traditional edge detector and tries to overcome
its weaknesses by providing more information in the form of three color channels rather
than a single intensity channel. As the Canny edge detector is the current standard for
intensity based edge detection, it seemed logical to use this operator as the basis for
color edge detection.
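A rough sketch of the per-channel idea, using a simplified gradient-magnitude detector as a stand-in for the full Canny pipeline (the sigma, threshold fraction, and OR-combination rule are all assumptions for illustration):

```python
import numpy as np
from scipy import ndimage

def color_edges(rgb, frac=0.2, sigma=1.0):
    """Run a gradient-based detector on each color channel independently
    and OR the per-channel edge maps together (a simplified stand-in for
    running full Canny per channel)."""
    edges = np.zeros(rgb.shape[:2], dtype=bool)
    for c in range(3):
        # Smooth the channel, then estimate the gradient with Sobel.
        chan = ndimage.gaussian_filter(rgb[..., c].astype(float), sigma=sigma)
        gx = ndimage.sobel(chan, axis=1)
        gy = ndimage.sobel(chan, axis=0)
        mag = np.hypot(gx, gy)
        # Keep strong responses relative to this channel's maximum.
        edges |= mag > frac * mag.max()
    return edges

# An edge visible only in the red channel; a grayscale detector would
# see it too, but with a weaker contrast.
img = np.zeros((32, 32, 3))
img[:, 16:, 0] = 1.0
edges = color_edges(img)
```

The point of the combination is that an edge present in any one channel survives, which is why the color variant tends to find more edges than its grayscale counterpart.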
EXPERIMENTAL RESULTS
We ran five separate test images through our edge detectors (except the Multi-Flash
edge detector, which only had sufficient data to run on two of the images). One
image was artificial and the rest were real world photographs. The results are shown in
the figures below. All color images were converted to grayscale using MATLAB's
RGB2GRAY function except when using an edge detector that required color
information. Various threshold, sigma, slope, etc. values were chosen by hand. The
images were blurred by a Gaussian filter (3x3 filter, sigma=1) before being fed into the
Euclidean Distance and Vector Angle color edge detector as we found this significantly
decreased the effect of noise. The other edge detectors do their own smoothing (where
applicable). The low threshold in the Canny edge detector was always defined to be
40% of the high threshold. The average of the four images from the Multi-Flash edge
detector was used as input to the other edge detectors.
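The preprocessing described above can be sketched as follows. The grayscale weights are the ITU-R BT.601 luma coefficients that MATLAB's RGB2GRAY uses; the high threshold value is purely illustrative, since the paper's hand-chosen values are not listed:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
rgb = rng.random((16, 16, 3))  # stand-in for a test photograph

# Grayscale conversion (ITU-R BT.601 weights, as in MATLAB's RGB2GRAY).
gray = rgb @ np.array([0.2989, 0.5870, 0.1140])

# 3x3 Gaussian blur with sigma = 1, applied before the Euclidean
# Distance / Vector Angle detector; truncate=1.0 limits the kernel
# radius to one pixel, giving a 3x3 kernel.
blurred = ndimage.gaussian_filter(gray, sigma=1.0, truncate=1.0)

# The Canny low threshold was always fixed at 40% of the high threshold.
high = 0.3        # hand-chosen value, illustrative only
low = 0.4 * high  # = 0.12
```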
CONCLUSION
The Boolean edge detector performs surprisingly similarly to the Canny edge
detector even though they both take drastically different approaches. Canny’s
method is still preferred since it produces single pixel thick, continuous edges.
The Boolean edge detector’s edges are often spotty. Color edge detection seems
like it should be able to outperform grayscale edge detectors since it has more
information about the image. In the case of the Canny color edge detector, it
usually finds more edges than the grayscale version. Finding the optimal way to
combine the three color channels may improve this method. The other color
edge detection scheme we implemented, the Euclidean Distance/Vector Angle
detector, did a decent job of identifying the borders between regions but missed
fine-grained detail. If the direction of the edge were known, applying non-maximal
suppression to the color edge detector's output might improve the granularity of its
output. Multi-flash edge detection shows some promise as it strives to produce
photographs that will be easy to edge detect, rather than running on an arbitrary
image. One problem inherent to the Multi-flash edge detector is that it will have
difficulty finding edges between objects that are at almost the same depth or are at
depths which are very far away. For example, the Multi-flash method would not
work at all on an outdoor scene such as the shoreline.