
Edge detection

Image edge information is mainly concentrated in the high-frequency band. Generally speaking, image sharpening or edge detection is essentially high-frequency filtering. Differentiation measures the rate of change of a signal, so it has the effect of strengthening high-frequency components.

In the spatial domain, sharpening an image amounts to computing derivatives.

Since a digital image is a discrete signal, differentiation becomes computing differences, i.e. the gradient.
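The idea can be sketched on a one-dimensional signal: on discrete data, the derivative is approximated by a forward difference, and a sharp brightness jump produces one large difference value. The signal below is an illustrative example, not from the original text.

```python
import numpy as np

# A hypothetical 1-D brightness profile with a sharp jump (an "edge").
signal = np.array([10.0, 10.0, 10.0, 50.0, 50.0, 50.0])

# First-order forward difference: diff[i] = signal[i+1] - signal[i].
# On a discrete signal this plays the role of the derivative.
diff = np.diff(signal)
print(diff)  # the jump shows up as one large value; flat regions give 0
```

The flat regions yield zero, and the edge location is marked by the single large difference, which is exactly the "strengthening of high-frequency components" described above.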

There are many edge detection (gradient) operators in image processing, including the ordinary first-order difference, the Roberts operator (cross difference), and the Sobel operator, all of which are based on measuring gradient magnitude. The Laplacian operator (second-order difference) is instead based on zero-crossing detection. Computing the gradient and applying a threshold yields the edge image.
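A minimal sketch of two of these first-order operators follows, assuming the standard kernel forms for Roberts and Sobel; the small helper and the test image are illustrative, not part of the original text.

```python
import numpy as np

# Classical first-order kernels (standard forms assumed):
roberts_x = np.array([[1, 0],
                      [0, -1]], dtype=float)   # Roberts cross, one diagonal
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # Sobel, horizontal gradient

def correlate2d(image, kernel):
    """Valid-mode 2-D correlation; enough to apply small kernels."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
gx = correlate2d(img, sobel_x)
# The response peaks along the columns where the brightness jumps;
# thresholding |gx| would keep exactly those columns as edge pixels.
```

In practice the same operator is applied in the y direction as well and the two responses are combined into a gradient magnitude before thresholding.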

Edge detection is a basic problem in image processing and computer vision.

The purpose of edge detection is to identify the points in a digital image where brightness changes sharply.

Significant changes in image brightness usually reflect important events and changes in scene properties.

Edge detection is a research area within image processing and computer vision, particularly within feature extraction.

Unless the objects in the scene are very simple and the lighting conditions are well controlled, it is hard to set a threshold for how large the brightness change between two adjacent points must be to count as an edge. This is one reason why edge detection is a non-trivial problem.

Image edge detection greatly reduces the amount of data, eliminates the information that can be considered irrelevant, and retains the important structural attributes of the image.

There are many methods of edge detection; most can be divided into two categories: search-based methods, which look for maxima of a first-derivative measure such as the gradient magnitude, and zero-crossing-based methods, which look for zero-crossings of a second-derivative expression.

Filtering is usually necessary as a preprocessing step for edge detection, and Gaussian filtering is typically used.
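A minimal sketch of this preprocessing step, assuming a separable Gaussian kernel applied along rows and then columns; the function names and default parameters are illustrative choices, not from the original text.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(image, sigma=1.0):
    """Gaussian smoothing: because the 2-D Gaussian is separable,
    filtering rows and then columns with the 1-D kernel suffices."""
    k = gaussian_kernel(sigma=sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

Because the kernel sums to 1, smoothing preserves the average brightness of flat regions while suppressing pixel-level noise before differentiation.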

Search-based edge detection methods first compute a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude. Many of these methods rely on the image gradient, using different kinds of filters to estimate the gradient in the x and y directions.

Other edge detection operations are based on the second derivative of brightness, which is essentially the rate of change of the brightness gradient.

In the ideal continuous case, detecting zero-crossings of the second derivative locates the local maxima of the gradient. Accordingly, as long as the image is represented at an appropriate scale, detecting zero-crossings of the second derivative serves as edge detection.
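The zero-crossing idea can be sketched on a one-dimensional ramp edge, using the standard discrete second-difference approximation; the signal values are an illustrative example, not from the original text.

```python
import numpy as np

# A 1-D ramp edge: the steepest rise is between samples 2 and 3.
signal = np.array([0.0, 0.0, 1.0, 4.0, 5.0, 5.0])

# Discrete second derivative: f[i-1] - 2*f[i] + f[i+1].
second = signal[:-2] - 2 * signal[1:-1] + signal[2:]

# A zero-crossing is a sign change between neighbouring samples of
# the second derivative; it marks the gradient's local maximum.
crossings = np.where(np.diff(np.sign(second)) != 0)[0]
```

The second derivative is positive at the foot of the ramp and negative at its shoulder, and the sign change between them pinpoints the steepest part of the edge.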

As mentioned above, a line can be regarded as two adjacent edges, so we see a brightness gradient on one side of the line and an opposite gradient on the other side. Thus, wherever the image contains edges, large changes in the brightness gradient can be seen.

① Filtering: Edge detection algorithms are mainly based on the first and second derivatives of image intensity, but derivative computation is very sensitive to noise, so filters must be used to improve the noise performance of edge detectors. Note that most filters reduce noise but also weaken edge strength; there is therefore a trade-off between enhancing edges and suppressing noise.

② Enhancement: Edge enhancement determines the change in neighborhood intensity at each point of the image, highlighting points whose neighborhood (local) intensity values change significantly. It is usually achieved by computing the gradient magnitude.

③ Detection: Many points in the image have large gradient magnitude, but in a given application not all of them are edges, so some criterion is needed to decide which points are edge points. The simplest criterion is a threshold on the gradient magnitude.

④ Localization: If the application requires it, the edge position can be estimated at sub-pixel resolution, and the edge orientation can also be estimated.

The first three steps are common to virtually all edge detection algorithms, because in most cases the detector only needs to indicate that an edge occurs near a pixel, without giving its exact position or orientation.
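Steps ① to ③ can be sketched as a single pipeline. This is a minimal illustration, assuming a 3x3 box blur as a stand-in for Gaussian filtering and a Sobel gradient estimate; the function name, kernels, and threshold are illustrative choices, not from the original text.

```python
import numpy as np

def sobel_edges(image, threshold=100.0):
    """Steps ① filtering, ② enhancement, ③ detection (a sketch)."""
    # ① Filtering: 3x3 box blur to suppress noise before differentiation.
    pad = np.pad(image, 1, mode='edge')
    sm = sum(pad[i:i+image.shape[0], j:j+image.shape[1]]
             for i in range(3) for j in range(3)) / 9.0

    # ② Enhancement: Sobel gradients in x and y, then gradient magnitude.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(sm, 1, mode='edge')
    gx = np.zeros_like(sm)
    gy = np.zeros_like(sm)
    for i in range(sm.shape[0]):
        for j in range(sm.shape[1]):
            win = p[i:i+3, j:j+3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)

    # ③ Detection: keep pixels whose gradient magnitude exceeds the threshold.
    return mag > threshold

img = np.zeros((8, 8))
img[:, 4:] = 255.0            # a vertical step edge
edges = sobel_edges(img)      # True only near the brightness jump
```

Step ④, sub-pixel localization, is omitted here; it would interpolate the gradient magnitude around each detected pixel to refine the edge position.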

The essence of edge detection is to extract, by some algorithm, the boundary between object and background in an image. We define an edge as the boundary of a region in which the gray level changes sharply.

Edge detection methods

The gradient of the image's gray-level distribution reflects changes in gray level, so edge detection operators can be obtained using local image differentiation techniques. The classical approach achieves edge detection by constructing such operators over small pixel neighborhoods of the original image.