
Remote sensing digital image processing method

1. Histogram method

A gray-level histogram can be made for any image, and the image quality can be roughly inferred from the histogram's shape. Because an image contains a large number of pixels, the distribution of pixel gray values should conform to the laws of probability and statistics; if the gray value of a pixel were randomly distributed, its histogram would be approximately normal. Since the gray value of an image is a discrete variable, the histogram represents a discrete probability distribution. If the histogram is drawn with the ratio of the number of pixels in each gray level to the total number of pixels as the ordinate, that ratio is the probability density of the gray level, and the outer contour line connecting the tops of the bars can be regarded as an approximation of the probability distribution curve of the continuous function corresponding to the image.

Generally speaking, the closer the histogram contour of an image is to a normal distribution, the closer the image brightness is to a random distribution and the more suitable the image is for statistical processing; such an image is generally moderate in contrast. If the peak of the histogram is shifted toward large gray values, the image is bright; if the peak is shifted toward small gray values, the image is dark; if the peak is too steep and narrow, the gray values are too concentrated. The latter three cases all suffer from low contrast and poor quality. Histogram analysis is a basic method of image analysis, and purposefully changing the shape of the histogram can improve image quality.
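As a minimal sketch of the idea (assuming NumPy and an 8-bit image), the normalized histogram below gives, for each gray level, the ratio of pixels at that level to the total pixel count, i.e. the discrete probability distribution described above:

```python
import numpy as np

def gray_histogram(image, levels=256):
    """Normalized gray-level histogram: fraction of pixels at each level."""
    counts = np.bincount(image.ravel(), minlength=levels)
    return counts / image.size

# A small synthetic 8-bit "image" for illustration.
img = np.array([[10, 200, 128],
                [128, 128, 60]], dtype=np.uint8)
hist = gray_histogram(img)
# hist sums to 1; the peak at level 128 holds 3 of the 6 pixels.
```

A peak of `hist` concentrated at high indices indicates a bright image, at low indices a dark one, exactly as the text describes.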

2. Neighborhood method

For any pixel (i, j) in an image, the set of pixels {(i+p, j+q)} (where p and q are arbitrary integers) is called a neighborhood of the pixel. The common neighborhoods are shown in the figure, representing the 4-neighborhood and the 8-neighborhood of the central pixel respectively.

In image processing, when the processed value g(i, j) of a pixel is determined by the pixel values in a small neighborhood N(i, j) of the original pixel f(i, j), the operation is called local processing or neighborhood processing. Different neighborhood analysis functions can be designed according to different computational purposes.
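For illustration, a sketch of one possible neighborhood operation (the function name and the choice of a mean are assumptions, not from the text): g(i, j) is computed as the mean of the 3×3 neighborhood of f(i, j); any other analysis function could be substituted.

```python
import numpy as np

def neighborhood_mean(f, i, j):
    """g(i, j) from the 3x3 neighborhood (8-neighborhood plus center)
    around f(i, j); clipped at the image border."""
    window = f[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    return window.mean()

f = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)
g = neighborhood_mean(f, 1, 1)  # mean of all nine pixels
```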

3. Convolution method

Convolution is an operation that examines the neighborhood of an image in the spatial domain. A convolution function, also called a "template", is chosen; it is actually a small M×N image, such as 3×3, 5×5, or 7×7, and the convolution is carried out by moving this template over the image. The template operation is shown in the figure. Select an operation template φ(m, n) of size M×N and, starting from the upper-left corner of the image, open an active window f(m, n) of the same size as the template; multiply the gray values of corresponding pixels in the window and the template and sum the products, then take the result g(m, n) as the new gray value of the window's center pixel. The template operation is (dividing by 1 when the sum of the template coefficients W is 0):

g(m, n) = (1/W) Σᵢ Σⱼ f(m+i, n+j) φ(i, j),  where W = Σᵢ Σⱼ φ(i, j)
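The sliding-window procedure above can be sketched directly (NumPy assumed; for brevity the loop is unoptimized and the output shrinks by the template size rather than padding the border):

```python
import numpy as np

def template_filter(image, template):
    """Slide an MxN template over the image; each output pixel is the
    weighted sum of the window divided by the template's coefficient sum
    (or by 1 when that sum is zero, as for edge-detection templates)."""
    M, N = template.shape
    H, W = image.shape
    w = template.sum()
    if w == 0:
        w = 1
    out = np.zeros((H - M + 1, W - N + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + M, j:j + N] * template).sum() / w
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
mean3 = np.ones((3, 3))            # smoothing template, coefficient sum 9
smoothed = template_filter(img, mean3)
```

With the all-ones template this reduces to neighborhood averaging; a template summing to zero (e.g. a Laplacian) detects edges instead.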

4. Frequency domain enhancement method

In an image, the rate at which pixel gray values change with position can be described by spatial frequency. Features such as edges, lines, and noise, for example the boundaries between rivers or lakes and roads or other strongly contrasting land cover, have a high spatial frequency: the gray value changes many times over a short pixel distance. Uniformly distributed ground objects and large stable structures, such as plains with consistent vegetation, large deserts, and sea surfaces, have a low spatial frequency: the gray value changes gradually over a long pixel distance. In frequency-domain enhancement, smoothing mainly keeps the low-frequency part of the image and suppresses the high-frequency part, while sharpening enhances the high-frequency part and weakens the low-frequency part.
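One common way to realize frequency-domain smoothing, sketched here with NumPy's FFT (the ideal low-pass filter and the `cutoff` parameter are illustrative choices, not specified by the text): transform to the frequency domain, zero out frequencies above a cutoff, and transform back.

```python
import numpy as np

def ideal_lowpass(image, cutoff):
    """Frequency-domain smoothing: keep spatial frequencies within
    `cutoff` of the center of the shifted spectrum, zero out the rest."""
    F = np.fft.fftshift(np.fft.fft2(image))      # DC component to center
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    F[dist > cutoff] = 0                          # suppress high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A constant image has only the zero-frequency component, so it passes through.
flat = np.full((8, 8), 7.0)
smoothed = ideal_lowpass(flat, 2)
```

Sharpening is the complementary operation: keep `dist > cutoff` and suppress the rest.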

5. Image operation method

For spatially registered multispectral remote sensing images, or for two or more single-band remote sensing images, a series of algebraic operations can be performed to achieve certain enhancement purposes. This is similar to traditional spatial overlay analysis; the specific operations include addition, difference, ratio, and compound exponential operations.
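As a sketch of two of these operations on co-registered bands (the function names are hypothetical): a simple ratio, and the normalized-difference form used by indices such as NDVI when the first band is near-infrared and the second is red.

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio operation on two co-registered bands; eps avoids division by zero."""
    return band_a / (band_b + eps)

def normalized_difference(band_a, band_b, eps=1e-6):
    """Normalized difference (a - b) / (a + b), the form used by
    vegetation indices such as NDVI (a = NIR, b = red)."""
    return (band_a - band_b) / (band_a + band_b + eps)

nir = np.array([[0.6, 0.5]])
red = np.array([[0.2, 0.5]])
nd = normalized_difference(nir, red)   # high for vegetation-like pixels
```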

6. Unsupervised classification method

Unsupervised classification means that no prior knowledge is brought to the classification process; ground objects are classified naturally according to the distribution of spectral features in the remote sensing image. The result of the classification only distinguishes different categories without determining their attributes. The category attributes are determined afterwards by analyzing the spectral curves of each category and comparing them with field investigation.

Under the same surface structure, vegetation coverage, illumination, and other conditions, similar ground objects in remote sensing images generally have the same or similar spectral characteristics, thus showing inherent similarity and belonging to the same region of spectral space; different ground objects have different spectral characteristics and belong to different regions of spectral space. This is the theoretical basis of unsupervised classification. In complex images, a training area sometimes cannot cover the spectral patterns of all ground objects, so some pixels cannot be assigned to any class. In practical work it is not easy to determine the categories for supervised classification or to select training areas, so using unsupervised classification to study the original structure of the data and the distribution of its natural clusters is very valuable when analyzing images.

Unsupervised classification mainly uses cluster analysis to make the distance between pixels of the same category as small as possible and the distance between pixels of different categories as large as possible. In cluster analysis the parameters of the reference categories should be determined first; however, unsupervised classification has no prior knowledge of these categories, so initial parameters can only be assumed through a pre-classification step to form clusters. The statistical parameters of the clusters are then used to adjust the preset parameters, after which clustering and adjustment are repeated, iterating until the parameters fall within the allowed range.
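The cluster-then-adjust loop described above is the pattern behind k-means, one common clustering algorithm for unsupervised classification (the text does not name a specific algorithm; this is a minimal sketch assuming NumPy):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means: pixels is an (n, bands) array of spectral vectors.
    Repeatedly assign each pixel to the nearest cluster mean, then update
    the means from the new assignments."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Distance from every pixel to every cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated spectral groups split cleanly into two clusters.
pix = np.array([[0.10, 0.10], [0.12, 0.10],
                [0.90, 0.90], [0.88, 0.92]])
labels, centers = kmeans(pix, k=2)
```

The resulting clusters still have to be labeled afterwards, by comparing their mean spectra with field knowledge, as the text notes.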

7. Supervised classification

In contrast to unsupervised classification, the defining feature of supervised classification is that, before classification, the analyst has prior knowledge of the class attributes of image features in some sampling areas of the remote sensing image; that is, samples of each class to be distinguished are selected from the image to train the classifier (i.e., to establish the discriminant function). This prior knowledge may come from field investigation, from related text materials or maps, or directly from the image analyst's experience. In the training areas, the gray values of each class of ground object in each band are determined in detail, from which the characteristic parameters are derived and the discriminant function is established. In general, supervised classification selects representative areas of the image as training areas, obtains statistical data for each category from them, and then classifies the whole image according to these statistics, using either a probability discriminant function or a distance discriminant function.
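A minimal sketch of the distance-discriminant variant mentioned above (a minimum-distance-to-means classifier; the class names and sample values are hypothetical): the training areas yield a mean spectrum per class, and each pixel is assigned to the class with the nearest mean.

```python
import numpy as np

def train_min_distance(samples):
    """samples: {class_name: (n, bands) array} from the training areas.
    The discriminant here is distance to each class's mean spectrum."""
    return {name: vecs.mean(axis=0) for name, vecs in samples.items()}

def classify(pixel, class_means):
    """Assign the pixel to the class whose mean spectrum is nearest."""
    return min(class_means,
               key=lambda name: np.linalg.norm(pixel - class_means[name]))

training = {
    "water":      np.array([[0.05, 0.02], [0.06, 0.03]]),
    "vegetation": np.array([[0.10, 0.60], [0.12, 0.55]]),
}
means = train_min_distance(training)
label = classify(np.array([0.07, 0.04]), means)
```

A probability discriminant (e.g. maximum likelihood) would replace the Euclidean distance with a class-conditional probability estimated from the same training statistics.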

8. Image segmentation method

Image segmentation is one of the key technologies in digital image processing. Its purpose is to extract meaningful features from an image, such as edges and regions, as the basis for further image recognition, analysis, and understanding. Although many methods of edge extraction and region segmentation have been developed, no single effective method applies universally to all types of images, so research on image segmentation needs to be deepened.
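As one concrete example of region segmentation (Otsu's thresholding; the text does not name a specific method, so this is only an illustrative choice): pick the gray threshold that maximizes the between-class variance of the histogram, then split the image into foreground and background regions.

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Otsu's method: choose the threshold t that maximizes the
    between-class variance of the gray-level histogram."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    total_mean = (np.arange(levels) * p).sum()
    best_t, best_var = 0, -1.0
    w0 = mu0 = 0.0
    for t in range(levels - 1):
        w0 += p[t]                 # class-0 weight (levels <= t)
        mu0 += t * p[t]            # class-0 unnormalized mean
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = mu0 / w0, (total_mean - mu0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

img = np.array([[10, 12, 11], [200, 205, 198]], dtype=np.uint8)
t = otsu_threshold(img)
regions = img > t          # binary segmentation mask
```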