Article

Adaptive Tolerance Dehazing Algorithm Based on Dark Channel Prior

Fan Yang and ShouLian Tang *
School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(2), 45; https://doi.org/10.3390/a13020045
Submission received: 20 January 2020 / Revised: 14 February 2020 / Accepted: 19 February 2020 / Published: 20 February 2020

Abstract

The tolerance mechanism used with the dark channel prior (DCP) in single-image dehazing is less effective when the hazy image contains large bright regions, because it cannot adapt the tolerance to the characteristics of the image. The transmission is then not corrected sufficiently, so the color distortion and block artifacts in the restored image are difficult to eliminate completely. Moreover, when a dense-haze area or a third-party direct light source (sunlight, headlights, or reflected glare) is misjudged as a sky area, applying the tolerance degrades the dehazing result, for example by losing detail. To address these issues, this paper proposes an adaptive tolerance estimation algorithm: the tolerance is obtained from the statistical characteristics of each image so that the transmission can be estimated more accurately. The experimental results show that the proposed algorithm not only maintains high computational efficiency but also effectively compensates for the failure of the dark channel prior in certain scenes, and it effectively resolves the color distortion produced by the DCP method in the bright regions of the image.

1. Introduction

Haze is a common natural phenomenon. Even on a clear summer day, distant targets are affected by haze caused by the evaporation of surface water vapor. In weather conditions such as haze, the horizontal visibility [1] is significantly reduced by the scattering of the large number of tiny droplets or aerosols suspended in the atmosphere. This creates considerable difficulties for outdoor monitoring, automatic navigation, target tracking, etc., and causes outdoor vision systems to work improperly. It is therefore very important to study how to effectively restore degraded images acquired in severe weather conditions.
The processing of hazy images generally falls into two categories: enhancement-based algorithms and physics-based algorithms. Histogram equalization [2], homomorphic filtering [3] and Retinex [4] are all image enhancement algorithms. These algorithms can effectively improve the contrast of hazy images and highlight certain information in the image, but they do not consider the formation principle of hazy images or the degradation mechanism of the images. The second category is based on the atmospheric scattering model [5]: a clear image is recovered by analyzing the inverse of the image degradation process and estimating its relevant parameters. The image restored in this way is realistic, and the image information is preserved more completely. Therefore, single-image dehazing techniques based on physical models have gained broad attention. Fattal [6] proposed removing haze from color images based on independent component analysis (ICA). However, ICA is ineffective for images with thick haze due to the lack of color information. Tarel [7] used a median filter to calculate the minimum color component of the atmospheric veil. The median filtering in this method does not preserve edges well, and the desired results cannot be obtained at edges with sharp depth discontinuities. He et al. [8] proposed the dark channel prior, first calculating a rough transmission from the atmospheric scattering model and the dark channel prior and then using the soft matting algorithm [9] to refine the transmission and obtain haze-free images. To solve the issues of edge loss and halo artifacts in haze-free images, Dilbag et al. proposed a series of DCP-based methods to enhance the estimation of atmospheric light, such as a modified joint trilateral filter (MJTF)-based DCP method [10], a modified gain intervention filter-based DCP method [11], and a fourth-order partial differential equation-based trilateral filter (FPDETF) dehazing method [12]. To reduce the color distortion, the DCP restoration model was also redefined. Recently, with the continuous development of deep learning, an increasing number of neural network algorithms have been used in the field of image processing and have achieved good results. Therefore, algorithms based on deep learning can be considered a third kind of dehazing algorithm. Existing learning-based dehazing algorithms mostly use deep learning to learn the haze features and output the medium transmission map, and then recover the haze-free image through the atmospheric scattering model. For example, Cai et al. [13] proposed a trainable end-to-end system, DehazeNet, for medium transmission estimation. Li et al. [14] proposed a dehazing algorithm based on a residual deep CNN.
Among the above algorithms, the DCP method proposed by He [8] has been extensively studied due to its simple principle and superior results. However, this method has two drawbacks. (1) The soft matting algorithm [9] needs to perform iterative calculations, which leads to high algorithmic complexity. To improve the computational efficiency, He [15] proposed a guided filter with edge-preserving characteristics to replace the soft matting algorithm when refining the transmission map. However, the edge smoothing of local filtering suffers from halo artifacts, so improved algorithms [16,17,18,19,20,21] have been proposed for this problem. (2) If the image scene contains a large area of sky or bright white objects, the dark channel prior is invalid, resulting in severe distortion of the restored image. To improve this situation, Wang et al. [22] estimated the transmission maps of the sky and non-sky areas separately and then combined them with the refined transmission map to remove the haze. Although the visibility of the sky area can be improved, this approach generally reduces the recovery performance near the boundaries between the regions. Liu et al. [23] proposed a large sky region detection algorithm based on SVM classification, which uses two different strategies to obtain more accurate atmospheric light according to the detection results. Finally, a multiscale opening dark channel model is used to adaptively calculate the dark channel for dehazing. Zhang et al. [24] proposed a saliency prior for hazy images, which can distinguish white objects from dense haze by saliency detection. On the basis of the saliency prior, both an accurate airlight and a correct transmission map can be obtained from images containing large white objects, and these images can then be restored successfully. In addition, some authors have proposed tolerance mechanisms to correct the falsely estimated transmission in such bright regions. The tolerance mechanism mainly uses the tolerance as a threshold to judge whether there is a bright area in the hazy image and then performs segmentation of the sky and the other areas of the image to correct the transmission map accurately [25]. However, the tolerance values in these algorithms are generally optimal or empirical values obtained by trial and error. In this paper, an adaptive tolerance mechanism is studied that determines the tolerance value adaptively according to the characteristics of the image to achieve the best dehazing effect. The simulation results show that the improved algorithm greatly improves the dehazing performance: not only is the restored image clear and natural, but the useful information in the image is not lost.
The remainder of this paper is organized as follows. Section 2 first introduces the dark channel prior dehazing algorithm and its defects. Section 2.4 then describes the tolerance mechanism used to remedy the defect of the dark channel prior, together with its own deficiencies. In Section 3, we describe the details of the improved algorithm of this paper. In Section 4, we present and analyze the experimental results. Finally, we summarize the paper in Section 5.

2. Related Work

2.1. Atmospheric Scattering Model

To describe the formation of hazy images, McCartney [5] proposed an atmospheric scattering model in 1976. Later, Narasimhan and Nayar [26,27] further derived the model, which describes the degradation process of the hazy images and is widely used in the dehazing of hazy images. Mathematically, the atmospheric scattering model can be described as:
$$ I(x) = J(x)\,t(x) + A\,(1 - t(x)), \qquad (1) $$
where I(x) is a hazy image, J(x) is a clear image, A is the atmospheric light, and t(x) is the transmission. The process of dehazing is actually that of calculating A and t(x) from hazy image I(x) to restore J(x) through the atmospheric scattering model. The transmission t(x) can be expressed as:
$$ t(x) = e^{-\beta d(x)}, \qquad (2) $$
where β is the atmospheric scattering coefficient, which is related to the wavelength of visible light, and d(x) is the depth of the scene, i.e., the distance between the imaging device and the scene.
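For illustration, the forward model of Equations (1) and (2) can be sketched in a few lines of Python. This is a minimal, hypothetical example (the array layout and the [0, 255] value range are assumptions of this sketch, not part of the paper):

```python
import numpy as np

def synthesize_haze(J, depth, A, beta=1.0):
    """Forward model of Eqs. (1)-(2): I = J*t + A*(1 - t) with t = exp(-beta*d).
    J: clear RGB image, float array of shape (H, W, 3) in [0, 255];
    depth: scene depth map of shape (H, W); A: atmospheric light, length-3 array."""
    t = np.exp(-beta * depth)[..., None]   # transmission, broadcast over the color channels
    return J * t + A * (1.0 - t)           # hazy image I(x)
```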

2.2. Dark Channel Prior Dehazing Algorithm

He [8] obtained a law by statistical analysis of a large number of outdoor haze-free images, that is, in most non-sky local regions, at least one color channel has some pixels whose intensities are very low and close to zero. This phenomenon is defined as the dark channel prior. These low pixel values are produced by shadows, colored objects, or the surfaces of darker objects. For the outdoor haze-free image J(x), the mathematical expression of the dark channel is:
$$ J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^{c}(y) \right) \to 0, \qquad (3) $$
where Jdark is the dark channel image, which is always low in theory and infinitely close to zero. Jc is a color channel of J, and Ω(x) is a local patch centered on x. It can be clearly seen from Equation (3) that calculating the dark channel of a pixel is actually the process of finding the minimum twice.
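A minimal sketch of the dark channel computation in Equation (3), i.e., two nested minima (first over the color channels, then over a local patch), is given below; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of Eq. (3): per-pixel minimum over r, g, b, followed by a
    local minimum over the patch Omega(x) (here a square window of side `patch`)."""
    min_rgb = img.min(axis=2)                   # minimum over the three color channels
    return minimum_filter(min_rgb, size=patch)  # minimum over the local patch
```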
It is known from the atmospheric scattering model that the transmission t(x) and the ambient atmospheric light value, A, of the image must be known before recovering the haze-free image. First, the dark channel map of the hazy image can be solved by the dark channel theory:
$$ \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} I^{c}(y) \right) = t(x) \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^{c}(y) \right) + A\,(1 - t(x)) \qquad (4) $$
Both sides of the above formula are simultaneously divided by A as follows:
$$ \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right) = t(x) \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{J^{c}(y)}{A} \right) + (1 - t(x)) \qquad (5) $$
Assuming that atmospheric light A is a known value, the image transmission t(x) can be estimated by Equations (3) and (5):
$$ t(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right) \qquad (6) $$
He [8] noted that even on sunny days there are always some particles in the air, so distant objects viewed with the naked eye still appear slightly blurred; it is this residual haze that conveys the depth of field in a scene. To be consistent with reality, a constant factor ω (0 < ω < 1) can be introduced into Equation (6) to retain some of the haze over distant objects:
$$ t(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right) \qquad (7) $$
The smaller the ω value is, the less obvious the dehazing effect will be. According to experience, ω generally takes a value of 0.95.
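Under the same assumptions as the sketch above (float images, A a length-3 vector), the coarse transmission of Equation (7) might be estimated as follows; dark_channel is the illustrative helper defined earlier:

```python
def estimate_transmission(img, A, omega=0.95, patch=15):
    """Coarse transmission of Eq. (7): t = 1 - omega * dark_channel(I / A)."""
    normalized = img / A                                  # divide each channel by A
    return 1.0 - omega * dark_channel(normalized, patch)  # reuse the dark_channel sketch
```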
The dark channel prior assumes that the transmission in a local patch is constant, but this assumption is often violated at edges where the scene depth is discontinuous. For a hazy image, when the depth of the objects in the local patch Ω(x) is the same, the dark channel image can be obtained accurately. However, when the depth difference at the edge of an object in the local patch Ω(x) is large, the transmission estimated in that patch is inaccurate. It is therefore necessary to refine the transmission map so that the haze-free image transitions smoothly at scene edges without white halos. At the same time, to improve the efficiency of the algorithm, this paper uses the guided filtering algorithm proposed by He [15], instead of the soft matting algorithm [9], to optimize the transmission map.
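As a rough sketch of this refinement step, a grayscale guided filter in the spirit of [15] can be written with box (mean) filters. The window radius, the regularization eps, and the choice of the grayscale hazy image as guidance are assumptions of this illustration, not parameters taken from the paper:

```python
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=40, eps=1e-3):
    """Grayscale guided filter used here to refine the coarse transmission map.
    guide: guidance image (e.g., the hazy image converted to gray, scaled to [0, 1]);
    src: the coarse transmission t(x), same shape as guide."""
    win = 2 * radius + 1
    mean_I = uniform_filter(guide, win)
    mean_p = uniform_filter(src, win)
    corr_I = uniform_filter(guide * guide, win)
    corr_Ip = uniform_filter(guide * src, win)
    var_I = corr_I - mean_I * mean_I          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p        # local covariance of guide and source
    a = cov_Ip / (var_I + eps)                # coefficients of the local linear model
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * guide + uniform_filter(b, win)
```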
Once A and t(x) have been estimated, the atmospheric scattering model can be used to obtain a haze-free image. Since the transmission may be very small in some areas, which would make the recovered J(x) exceed 255, a lower bound t0 is set for t. In the study of He [8], t0 was set to 0.1. When the estimated t is smaller than the threshold t0, t0 is used instead. The final haze-free image recovery formula is:
$$ J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A \qquad (8) $$
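The recovery of Equation (8) then reduces to an element-wise operation, again sketched under the assumptions above:

```python
import numpy as np

def recover(img, t, A, t0=0.1):
    """Scene radiance recovery of Eq. (8) with the lower bound t0 on t(x)."""
    t_clipped = np.maximum(t, t0)[..., None]   # broadcast t over the color channels
    return (img - A) / t_clipped + A
```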

2.3. Defect of DCP

A large number of experimental results show that for images containing large bright areas such as the sky or water surfaces, dehazing results based on the dark channel prior exhibit significant color distortion in those bright areas. In fact, for outdoor clear images, the bright areas have large pixel values, and no channel with pixel values close to zero can be found there, so the dark channel prior is invalid in these areas [28]. As shown in Figure 1, the sky area and some bright white areas of the scene have large pixel values in the dark channel image.
If the dark channel prior is not considered, the transmission derived from Equation (5) is
$$ t_{actual}(x) = \frac{1 - \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A} \right)}{1 - \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} \frac{J^{c}(y)}{A} \right)} \qquad (9) $$
For bright areas that do not satisfy the dark channel prior, the value of the dark channel cannot be approximated as 0, so the denominator of the above formula is less than 1. It follows that the actual transmission tactual(x) of the bright regions is greater than the transmission t(x) estimated with the dark channel prior. Therefore, the transmission estimated by the DCP algorithm is too small in the bright regions, and the pixel channels of the sky region are divided by this relatively small t(x) (as in Equation (8)), which significantly amplifies the small differences between the color channels of the sky area. This causes the color of the restored image to be distorted.

2.4. Tolerance Mechanism to Correct Transmission and Its Defects

As described in Section 2.3, the dark channel prior is ineffective in bright areas. This causes the transmission to be underestimated and results in color distortion in the restored image. To eliminate the color distortion, the transmission of the bright area must be adjusted so that the estimated t(x) more closely matches the actual transmission tactual(x). To solve this problem, a tolerance coefficient K is introduced and compared with the difference between the hazy image I(x) and the atmospheric light A. If the absolute value of the difference is less than K, the area is a bright area, and the corresponding transmission of this area needs to be recalculated. If the absolute value of the difference is greater than or equal to K, the area is a non-bright area, and its transmission is kept unchanged. This is formulated as follows:
$$ t'(x) = \min\left( \max\left( \frac{K}{|I(x) - A|},\ 1 \right) \cdot \max(t(x), t_0),\ 1 \right) \qquad (10) $$
The recovery formula for a haze-free image is:
$$ J(x) = \frac{I(x) - A}{t'(x)} + A \qquad (11) $$
Equation (10) is actually a supplement to and improvement of the dark channel prior. The tolerance mechanism is introduced to ensure that the calculated transmission of the bright region does not deviate greatly from the actual transmission. Simulations show that for most images with sky regions, the dehazing effect after introducing the tolerance mechanism is better than that of the algorithm using only the dark channel prior. Related experiments in the literature have found that most hazy images are dehazed well when the tolerance K is set to 50. However, analysis of Equation (10) reveals two defects of the tolerance mechanism.
a). The value of the tolerance K directly affects the correction of the transmission and the dehazing effect, and a fixed tolerance cannot effectively correct the transmission for images with different characteristics. As shown in Figure 2, the first image has the best dehazing effect when K = 80, and its sky region appears distorted when K = 20. In the second image, the branches above the center lose considerable detail when K = 80, and the image details are best preserved when K = 45. In the area at the end of the road in the third image, the dehazing effect is best when K = 20.
b). If only the ratio of the fixed tolerance K to |I(x)-A| is used to decide whether pixel I(x) belongs to a bright region, and hence whether the tolerance is applied when restoring that pixel, pixels are easily misjudged, and the resulting wrong adjustment of the transmission worsens the dehazing effect. Figure 3a shows an image with dense haze but no large sky area. Figure 3b shows the result of processing Figure 3a with He's algorithm without the tolerance. Figure 3c is the restored image obtained from Figure 3a when a fixed tolerance value of 50 is used to correct the transmission on top of He's algorithm. It can be seen from Figure 3c that the tolerance mechanism causes this region to lose a lot of image detail.
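A minimal sketch of the fixed-tolerance correction of Equations (10) and (11) is shown below. The paper does not specify how the per-pixel difference |I(x) − A| is computed for a color pixel; taking the maximum absolute channel difference is an assumption of this sketch, as are the default parameter values:

```python
import numpy as np

def tolerance_dehaze(img, t, A, K=50.0, t0=0.1, eps=1e-6):
    """Correct the transmission with a fixed tolerance K (Eq. (10)) and recover
    the scene radiance (Eq. (11)). img: float RGB in [0, 255]; A: length-3 vector."""
    diff = np.abs(img - A).max(axis=2)         # |I(x) - A|, assumed max over channels
    boost = np.maximum(K / (diff + eps), 1.0)  # > 1 only where |I - A| < K (bright regions)
    t_corr = np.minimum(boost * np.maximum(t, t0), 1.0)
    return (img - A) / t_corr[..., None] + A
```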

3. Our Improved Method

3.1. Calculation of Adaptive Tolerance

It can be seen from Figure 2 that selecting a suitable tolerance value keeps the bright area of the restored image undistorted and retains more useful information. Therefore, we need to calculate an adaptive tolerance value to recover hazy images with different characteristics. After many experiments, we find that the larger the proportion of bright areas in the whole image, the higher the K value required to recalculate the transmission. When the bright area in the image is small (for example, yellow clouds or branches and poles in the sky reduce the proportion of sky in the whole picture), a smaller K value retains more details. The problem then becomes how to calculate an adaptive K value from the proportion of the bright area so that all images can be recovered well. As when introducing the tolerance K, we use |I(x)-A| to record the difference between pixel I(x) of the hazy image and the atmospheric light A.
The average contrast of an image reflects the concentration of the haze to some extent. In addition, through experiments, we further found that images with different contrasts respond differently to different tolerance values. The average contrast of the image is positively correlated with the appropriate tolerance value: the larger the average contrast, the larger the tolerance should be. Therefore, we use the average contrast as the threshold parameter for calculating the tolerance and compare it with |I(x)-A| to determine whether a pixel belongs to a bright area. The formula is as follows:
$$ K_1 = \frac{\left( |I(x) - A| < \alpha C \right)_{num}}{I_{num}} \times 100 \qquad (12) $$
The experimental results show that setting α to 2 gives a better dehazing effect. In the above formula, Inum is the number of pixels in the entire image, and α·C is a threshold used to determine whether a pixel belongs to the bright region. If the value of |I(x)-A| is less than this threshold, the pixel belongs to a bright area, and the numerator is the number of pixels belonging to the bright area in the entire picture. C is the contrast of the image, calculated as follows [29]:
$$ C = \sqrt{\frac{1}{MN} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \left( I_{ij} - I_{mean} \right)^2}, \qquad (13) $$
where Iij is the intensity of the element in row i and column j of the two-dimensional image of size M by N, and Imean is the average intensity of all pixel values in the image.
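A sketch of Equations (12) and (13) follows; the use of a grayscale image for the contrast and the channel-wise handling of |I(x) − A| are assumptions of this illustration:

```python
import numpy as np

def candidate_tolerance_K1(gray, img, A, alpha=2.0):
    """K1 of Eq. (12): percentage of pixels whose |I - A| is below alpha * C,
    where C is the RMS contrast of Eq. (13). gray: 2-D intensity image in [0, 255]."""
    C = np.sqrt(np.mean((gray - gray.mean()) ** 2))   # RMS contrast of the image
    diff = np.abs(img - A).max(axis=2)                # |I(x) - A|, assumed max over channels
    bright = diff < alpha * C                         # pixels judged to be bright
    return 100.0 * bright.sum() / bright.size
```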
Figure 4 shows the transmission maps and restored images obtained using different algorithms. The first row is the dehazing result of the algorithm of He [8]; its transmission map is obtained directly from Equation (7) using the DCP algorithm, so the transmission of the distant sky area is close to zero, and the color distortion of the sky area is clearly severe. The second row is the dehazing result after introducing the empirical tolerance to correct the transmission, which is somewhat improved compared with the algorithm of He [8]. The third row shows the transmission map and the restored image obtained with the improved algorithm of this paper, in which the adaptive tolerance K = 62 is calculated from the characteristics of the image itself according to Equation (12). The transmission values of the distant sky region in the second and third rows of Figure 4, calculated by Equation (10), lie between 0 and 1. It can be seen from Figure 4 that the adaptive tolerance determined by the image features gives the most obvious improvement in the color distortion of the sky region.

3.2. Threshold of Tolerance Mechanism

The adaptive tolerance K only needs to be used to adjust the transmission where the dark channel prior is invalid, i.e., in the sky or in bright areas of the scene; in other regions, the transmission calculated with the original dark channel prior already yields a good dehazing effect. Unfortunately, when there is a dense haze area or a third-party direct light source (sunlight, headlights, reflected glare, etc.) in the image, it is misjudged as a sky area because the brightness of these areas is similar to that of the atmospheric light. As a result, the transmission of these regions is increased excessively, resulting in a serious loss of image detail there.
We find that the common feature of the misjudged bright areas is that their proportion of the image is small, basically no more than 5% of the pixels of the entire image. Therefore, before calculating the tolerance K, we must first determine whether the image contains a large bright area. If the bright area does not exceed 5% of the pixels of the entire image, there is no large bright area, and the tolerance should be as small as possible so that the original transmission is kept for the dehazing calculation. If the bright area exceeds 5% of the pixels, there is a large sky area in the image, and the adaptive tolerance K must be calculated using Equation (12) to adjust the transmission. The final formula for the tolerance K is as follows:
$$ K = \begin{cases} K_1, & thd > 5\% \\ 0.01, & thd \le 5\% \end{cases} \qquad (14) $$
$$ thd = \frac{\left( I(x) > I_{th} \right)_{num}}{I_{num}}, \qquad (15) $$
where the numerator is the number of pixels that satisfy the condition, and Inum is the number of pixels in the entire image. We set the threshold Ith to 190 because the pixel values of the bright regions are relatively uniform and high.
The 5% in Equation (14) is a threshold that decides whether the image introduces the adaptive tolerance to modify the transmission. This parameter was determined after extensive experimental observation. As shown in Figure 5, the proportions of the bright areas of the five images are 3.223%, 1.78%, 5.78%, 6.761%, and 27%, respectively. The first two images do not have a large sky area, and their proportions of bright areas are less than 5%; the details of these images can be preserved without using the tolerance mechanism. The last three images have sky areas, with bright areas accounting for more than 5%; using the tolerance mechanism reduces the distortion of the sky in these images. Figure 5 shows the restored images corresponding to the different judgment values of whether or not to use the tolerance mechanism of this paper. Setting the threshold to 5% to determine whether to introduce the adaptive tolerance enhances the effectiveness of the proposed algorithm for different pictures.
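Combining Equations (14) and (15), the final tolerance can be selected as in the sketch below. candidate_tolerance_K1 is the illustrative helper sketched above, and the use of a grayscale intensity image for the Ith test is an assumption:

```python
def adaptive_tolerance(gray, img, A, alpha=2.0, I_th=190, ratio=0.05):
    """Final tolerance K of Eq. (14): use K1 only if bright pixels (I > I_th)
    cover more than 5% of the image (Eq. (15)); otherwise keep K near zero."""
    thd = (gray > I_th).sum() / gray.size        # proportion of bright pixels, Eq. (15)
    return candidate_tolerance_K1(gray, img, A, alpha) if thd > ratio else 0.01
```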

4. Comparison and Analysis of Experimental Results

He [8] selected the brightest 0.1% of the pixels in the dark channel image of the hazy image and then estimated the atmospheric light A from the pixels at the corresponding positions in the original image. However, if the picture contains a third-party direct light source such as sunlight, headlights, or strong reflected light, the atmospheric light value obtained by He [8] is obviously too large, which affects the final dehazing effect.
To overcome these limitations, this paper adopts the quad-tree subdivision method [30] to estimate the atmospheric light A. First, the input image is divided into four rectangular regions, and the average pixel value of each block is calculated. Then, the block with the largest average is selected and further divided into four smaller rectangular regions. This process is repeated until the size of the selected area is less than a prespecified threshold (typically the threshold is set to 5%*w*h, where w and h represent the width and height of the input image). The dehazing algorithm of this paper combines the quad-tree subdivision estimate of the atmospheric light with the adaptive tolerance correction of the transmission. The transmission map and the restored image obtained by processing the hazy image of Figure 4 with the improved algorithm of this paper are shown in Figure 6.
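A sketch of the quad-tree estimation of A in the spirit of [30] is given below; the stopping rule and the final averaging step are simplified assumptions of this illustration rather than the exact procedure of the paper:

```python
import numpy as np

def estimate_A_quadtree(img, ratio=0.05):
    """Repeatedly split the current block into four quadrants and keep the brightest
    one until its area falls below ratio * w * h; A is the mean color of that block.
    img: float RGB array of shape (H, W, 3)."""
    h, w = img.shape[:2]
    region = img
    while region.shape[0] * region.shape[1] > ratio * h * w and min(region.shape[:2]) >= 2:
        rh, rw = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:rh, :rw], region[:rh, rw:], region[rh:, :rw], region[rh:, rw:]]
        region = max(quads, key=lambda q: q.mean())   # quadrant with the largest mean brightness
    return region.reshape(-1, img.shape[2]).mean(axis=0)  # per-channel atmospheric light A
```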
To verify the effectiveness of the proposed algorithm, we test various hazy images and compare the results with those of Tarel [7], Cai [13], He [8], and the algorithm that introduces a fixed tolerance value. The comparison includes a qualitative, subjective evaluation and a quantitative, objective evaluation.

4.1. Qualitative Comparison of Real-World Images

Figure 7 shows the recovery results of the improved algorithm and four other representative dehazing algorithms on outdoor hazy images. The transmission maps of the various dehazing algorithms are shown in the first row of Figure 7. Figure 7a depicts the hazy images to be dehazed. Figure 7b–e shows the results of Tarel [7], He [8], Cai [13], and the fixed-tolerance algorithm, respectively. As shown in Figure 7b, most of the haze is removed in Tarel's results, and the details of the scenes and objects are well restored. However, the results are clearly affected by over-enhancement, making the entire image much darker than it should be. In contrast, He's results are much better visually (see Figure 7c). However, in the first two pictures, the sky area still shows significant distortion. This is because He's transmission estimation is based on the dark channel prior, and the accuracy of the estimation largely depends on the validity of the prior. Unfortunately, the dark channel prior is ineffective when the scene brightness is similar to the atmospheric light. As shown in Figure 7d,e, the distortion of the sky region is improved. However, as shown in the sixth row and its enlarged image, the restored images of these two algorithms lose detail in the region of dense haze. Compared with the results of these four algorithms, our results are not oversaturated and better preserve the details of the image. As shown in Figure 7f, the sky area in the picture is clear, and the details of the mountains and leaves are well preserved.

4.2. Qualitative Comparison of Synthetic Images

In Figure 8, the five algorithms, including the proposed algorithm, are tested on stereo images for which the ground truth images are known. Figure 8a shows the hazy images, which are synthesized from haze-free images with known depth maps. The results of the five algorithms are shown in Figure 8b–f, and Figure 8g shows the ground truth images for comparison. These haze-free images and their corresponding ground truth depth maps are taken from the Middlebury stereo datasets [31,32]. It is obvious that Tarel's results are quite different from the ground truth images, as they are much darker (as shown in Figure 8b). Observing the images in Figure 8c, we find that He's results have a similar problem (for example, in the third image, the skin color of the child is obviously deepened, and in the second image, the color of the background behind the hat is darkened). Figure 8d shows the recovery results of Cai et al. [13], and Figure 8e,f shows the recovery results of the fixed-tolerance algorithm and the adaptive tolerance algorithm of this paper. These methods maintain the original color of the objects without oversaturation, so the corrected images are more similar to the ground truth images.

4.3. Quantitative Comparison

Subjective evaluation judges the dehazing effect by visually comparing the images before and after dehazing. This method is often susceptible to individual factors, such as the aesthetics and psychology of the observer, and cannot be incorporated into a computer vision system for more accurate and detailed follow-up analysis. Objective evaluation is more automatic, more efficient, and easier to integrate than subjective evaluation. Therefore, we use objective evaluation methods to further evaluate the algorithms described in this paper.

4.3.1. Blind Contrast Enhancement Assessment

For assessing the dehazing effect, Hautière's blind evaluation method based on visible edge contrast is well known [33,34]. This method evaluates the contrast enhancement of each image detail before and after dehazing. It uses three indicators to objectively describe the quality of the images: the ratio of newly visible edges e, the normalized gradient of visible edges r, and the percentage of saturated black or white pixels σ:
$$ e = \frac{n_r - n_0}{n_0}, \qquad (16) $$
$$ \bar{r} = \exp\left( \frac{1}{n_r} \sum_{P_i \in \Psi_r} \log r_i \right), \qquad (17) $$
$$ \sigma = \frac{n_s}{\dim_x \times \dim_y}, \qquad (18) $$
where n0 and nr are the numbers of visible edges of the image before and after dehazing, respectively, Ψr is the set of visible edges of the dehazed image, Pi denotes the pixels on the visible edges, ri is the ratio of the Sobel gradient at Pi in the dehazed image to that at the corresponding point of the original image, ns is the number of saturated black and white pixels, and dimx and dimy represent the width and height of the image, respectively. The comparative data are shown in Table 1.
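As a rough illustration (not Hautière's reference implementation), the indicators e and σ can be approximated by thresholding Sobel gradient magnitudes to detect visible edges; the threshold and the grayscale inputs are assumptions of this sketch:

```python
import numpy as np
from scipy import ndimage

def blind_assessment(before, after, edge_thresh=0.1):
    """Approximate e (Eq. (16)) and sigma (Eq. (18)) for grayscale images in [0, 255]."""
    def visible_edges(img):
        g = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        return g > edge_thresh * g.max()          # crude proxy for 'visible' edges
    n0 = visible_edges(before).sum()
    nr = visible_edges(after).sum()
    e = (nr - n0) / max(n0, 1)                    # ratio of newly visible edges
    sigma = ((after <= 0) | (after >= 255)).sum() / after.size
    return e, sigma
```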
Table 1 shows the results of the three objective indicators e, r and σ for the images in Figure 6. A lower σ value represents better performance, and Table 1 shows that the σ value of the algorithm in this paper is the smallest; even though the σ value of the fifth picture is higher than that of He's method, the difference is small. The two indicators e and r focus on edge recovery, and larger values usually indicate better performance. However, an increase in visible edges may also result from false edges caused by severe color distortion (as shown in Figure 6b). Figure 9 shows this more concretely: Tarel's result shows evident false edges in the locally enlarged regions. Our method suppresses the color distortion effectively, which further manifests as a decrease in false edges. Thus, our results often have lower values than Tarel's in those two indicators. Nevertheless, our method is actually better than most of the other methods on most images (as shown in Table 1).

4.3.2. Structural Similarity (SSIM) Image Quality Assessment

The structural similarity (SSIM) image quality assessment index [35] is introduced to evaluate how well the algorithms preserve structural information. This indicator was first proposed by the Laboratory for Image and Video Engineering at the University of Texas at Austin. It is often used in image processing, especially in image denoising, and comprehensively surpasses SNR and PSNR in image similarity evaluation. A high SSIM indicates that the haze-free image is highly similar to the ground truth image, while a low SSIM indicates the opposite.
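For reference, a global form of the SSIM index [35] can be sketched as below; note that SSIM is usually computed in local sliding windows and then averaged, so this whole-image version is a simplification:

```python
import numpy as np

def global_ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM between two grayscale images with dynamic range L."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()                         # covariance of the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```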
To directly compare the structural similarity between the restored images and the real images, we compare the dehazing results for the synthesized images in Figure 8. Table 2 shows the SSIM of the four restored images in Figure 8. The SSIM results of Tarel are all less than 0.8, indicating that a large amount of structural information is lost in the restored images. It is clear that Cai's SSIM values are higher than those of the other four algorithms. Cai's algorithm is a deep learning algorithm that is trained to obtain the best-restored image by learning the relevant features of the synthesized images, so it preserves structural information well. Our results reach the highest SSIM values except for Cai's, which demonstrates that the ability of our algorithm to preserve structural information is superior to that of the other conventional algorithms.
All the simulations of our proposed algorithm are carried out in the MATLAB R2016a environment running on a personal computer with an Intel Core i3-8100 central processing unit at 3.6 GHz and 4 GB of RAM. The average computing time of our algorithm for dehazing one image is less than 0.5 s. We therefore think it will be possible to apply it to video, which consists of many frames, in the future.

5. Conclusions

The dark channel prior is invalid when the scene brightness is similar to that of the atmospheric light and no shadows are cast. Therefore, this paper improves the dehazing algorithm based on the dark channel prior. First, we estimate the atmospheric light using the quad-tree subdivision method and then combine the dark channel prior with an adaptive tolerance K to obtain a corrected transmission. To prevent halo artifacts at the edges of the restored image, the guided filtering technique is used to optimize the transmission. Finally, these parameters are substituted into the atmospheric scattering model to complete the dehazing. The experimental results show that, compared with other methods, our dehazing algorithm not only recovers images with large bright areas better but also preserves image details better. In future work, we will extend our work to video dehazing.

Author Contributions

Methodology, F.Y.; Supervision, S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Narasimhan, S.G.; Nayar, S.K. Vision and the Atmosphere. Int. J. Comput. Vision 2002, 48, 233–254. [Google Scholar] [CrossRef]
  2. Kim, T.K.; Paik, J.K.; Kang, B.S. Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans. Consum. Electron. 1998, 44, 82–87. [Google Scholar]
  3. Seow, M.J.; Asari, V.K. Ratio rule and homomorphic filter for enhancement of digital colour image. Neurocomputing 2006, 69, 954–958. [Google Scholar]
  4. Joshi, K.R.; Kamathe, R.S. Quantification of retinex in enhancement of weather degraded images. In Proceedings of the International Conference on Audio, Language and Image Processing IEEE, Shanghai, China, 7–9 July 2008; pp. 1229–1233. [Google Scholar]
  5. Mccartney, E.J. Optics of the atmosphere: Scattering by molecules and particles. Optica Acta Int. J. Optics 1997, 24, 779. [Google Scholar] [CrossRef]
  6. Fattal, R. Single image dehazing. Acm Trans. Graphics 2008, 27, 1–9. [Google Scholar] [CrossRef]
  7. Tarel, J.P.; Hautière, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2201–2208. [Google Scholar]
  8. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
  9. Levin, A.; Lischinski, D.; Weiss, Y. A closed form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 228–242. [Google Scholar] [CrossRef] [Green Version]
  10. Singh, D.; Kumar, V. Dehazing of remote sensing images using improved restoration model based dark channel prior. Imaging Sci. J. 2017, 65, 282–292. [Google Scholar] [CrossRef]
  11. Singh, D.; Kumar, V. Modified gain intervention filter based dehazing technique. J. Modern Optics 2017, 64, 2165–2178. [Google Scholar] [CrossRef]
  12. Singh, D.; Kumar, V. Dehazing of remote sensing images using fourth-order partial differential equations based trilateral filter. IET Comp. Vision 2018, 12, 208–219. [Google Scholar] [CrossRef]
  13. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Li, J.; Li, G.; Fan, H. Image dehazing using residual-based Deep CNN. IEEE Access 2018, 6, 26831–26842. [Google Scholar] [CrossRef]
  15. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, B.H.; Huang, S.C.; Cheng, F.C. A high-efficiency and high-speed gain intervention refinement filter for haze removal. J. Disp. Technol. 2016, 12, 753–759. [Google Scholar] [CrossRef]
  17. Yu, T.; Riaz, I.; Piao, J.; Shin, H. Real-time single image dehazing using block-to-pixel interpolation and adaptive dark channel prior. IET Image Process. 2015, 9, 725–734. [Google Scholar] [CrossRef]
  18. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted guided image filtering. IEEE Trans. Image Process. 2014, 24, 120–129. [Google Scholar]
  19. Li, Z.; Zheng, J. Edge-preserving decomposition-based single image haze removal. IEEE Trans. Image Process. 2015, 24, 5432–5441. [Google Scholar] [CrossRef]
  20. Xiang, R.; Zhu, X.; Wu, F.; Jiang, X.; Xu, Q. Guided filter based on multikernel fusion. J. Electron. Imaging 2017, 26, 033027. [Google Scholar] [CrossRef]
  21. Cong-Hua, X.; Wei-Wei, Q.; Xiu-Xiang, Z.; Feng, Z. Single image dehazing algorithm using wavelet decomposition and fast kernel regression model. J. Electron. Imaging 2016, 25, 043003. [Google Scholar] [CrossRef]
  22. Wang, G.; Ren, G.; Jiang, L.; Quan, T. Single image dehazing algorithm based on sky region segmentation. Inf. Technol. J. 2013, 12, 1168–1175. [Google Scholar] [CrossRef] [Green Version]
  23. Liu, Y.; Li, H.; Wang, M. Single image dehazing via large sky region segmentation and multiscale opening dark channel model. IEEE Access 2017, 5, 8890–8903. [Google Scholar] [CrossRef]
  24. Zhang, L.; Wang, X.; She, C.; Wang, S.; Zhang, Z. Saliency-driven single image haze removal method based on reliable airlight and transmission. J. Electron. Imaging 2018, 27, 023038. [Google Scholar] [CrossRef]
  25. Gao, Y.; Yun, L.; Shi, J.; Li, C. Enhancement dark channel theory algorithm of fog image based on fourth-order PDE model. J. Nanjing Univ. Sci. Technol. 2015, 39, 6. [Google Scholar] [CrossRef]
  26. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef] [Green Version]
  27. Narasimhan, S.G.; Nayar, S.K. Interactive (de) weathering of an image using physical models. In Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision, Nice, France, 12 October 2003. [Google Scholar]
  28. Shi, L.; Yang, L.; Cui, X.; Gai, Z.; Chu, S.; Shi, J. Image dehazing using dark channel prior and the corrected transmission map. In Proceedings of the 2016 2nd International Conference on Control, Automation and Robotics (ICCAR), Hong Kong, 1 March 2016. [Google Scholar]
  29. Peli, E. Contrast in complex images. J. Opt. Soc. Am. Opt. Image Sci. Vision 1990, 7, 2032–2040. [Google Scholar] [CrossRef]
  30. Kim, J.H.; Sim, J.Y.; Kim, C.S. Single image dehazing based on contrast enhancement. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, 22–27 May 2011; pp. 1273–1276. [Google Scholar]
  31. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision 2002, 47, 7–42. [Google Scholar] [CrossRef]
  32. Scharstein, D.; Hirschmüller, H.; Kitajima, Y.; Krathwohl, G.; Nešić, N.; Wang, X.; Westling, P. High-resolution stereo datasets with subpixel-accurate ground truth. In Proceedings of the German Conference on Pattern Recognition, Münster, Germany, 2–5 September 2014; pp. 31–42. [Google Scholar]
  33. Hautière, N.; Tarel, J.P.; Aubert, D.; Dumont, E. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal. Stereol. 2008, 27, 87–95. [Google Scholar] [CrossRef]
  34. Yuan, H.; Liu, C.; Guo, Z.; Sun, Z. A region-wised medium transmission based image dehazing method. IEEE Access 2017, 5, 1735–1742. [Google Scholar] [CrossRef]
  35. Wang, Z. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Outdoor haze-free image and its dark channel image.
Figure 2. Dehazing effect diagram corresponding to different tolerance values.
Figure 3. Restored images before and after using the tolerance mechanism.
Figure 4. Comparison of the images before and after correction. (a) Hazy image. (b) Transmission map. (c) Restored image using (b). (d) Magnification of the rectangle in (c).
Figure 5. The restored image corresponding to the different judgment values.
Figure 6. (a) Hazy image. (b) Transmission map. (c) Restored image using (b). (d) Magnification of the rectangle in (c).
Figure 7. Qualitative comparison of different dehazing algorithms for outdoor images. (a) The hazy images. (b) Tarel's results. (c) He's results. (d) Cai's results. (e) Results of the fixed-tolerance algorithm. (f) Our results.
Figure 8. Qualitative comparison of different dehazing algorithms for synthetic depth images. (a) The hazy images. (b) Tarel's results. (c) He's results. (d) Cai's results. (e) Results of the fixed-tolerance algorithm. (f) Our results. (g) Ground truth.
Figure 9. False edges caused by serious color distortion in the dehazing results. (a) Tarel’s result. (b) Our result.
Table 1. Indicators e, r and σ of the images in Figure 6.

Image    Index  Tarel's Method  He's Method  Cai's Method  Introducing Fixed Tolerance  Our Method
Image 1  e      0.3163          0.1873       0.1116        0.1447                       0.1918
         σ      0               0            0.002         0                            0
         r      1.7532          0.9660       0.1023        1.2540                       1.3576
Image 2  e      2.2372          0.2567       0.3605        0.2518                       0.6276
         σ      0               0            0.0126        0                            0
         r      2.3742          0.7349       1.1263        1.0557                       1.2857
Image 3  e      0.3086          0.0723       0.1417        0.1174                       0.1654
         σ      0               0            0.0038        0                            0
         r      1.5427          1.0296       1.0957        1.1384                       1.4328
Image 4  e      0.1752          0.0894       0.0060        0.0920                       0.1103
         σ      0               0            0.0092        0                            0
         r      1.3786          1.0870       1.0894        1.2175                       1.7019
Image 5  e      1.3827          0.6887       0.5149        0.5483                       0.8144
         σ      0               0            0.0052        0.0052                       0.0018
         r      1.8337          1.4929       1.4040        1.5380                       1.7013
Table 2. Structural similarity (SSIM) of different algorithms in Figure 8.

Image    Tarel's Method  He's Method  Cai's Method  Introducing Fixed Tolerance  Our Method
Image 1  0.7867          0.9152       0.9463        0.9043                       0.9222
Image 2  0.7984          0.8570       0.9094        0.8617                       0.8902
Image 3  0.7322          0.8062       0.9115        0.7859                       0.8578
Image 4  0.7619          0.8691       0.9163        0.8589                       0.8755
