Article

Enhancement of Low-Light Images Using Illumination Estimate and Local Steering Kernel

1 Department of Intelligent Robot Engineering, Pukyong National University, Busan 48513, Republic of Korea
2 School of Electrical Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(20), 11394; https://doi.org/10.3390/app132011394
Submission received: 25 September 2023 / Revised: 10 October 2023 / Accepted: 16 October 2023 / Published: 17 October 2023
(This article belongs to the Special Issue Future Information & Communication Engineering 2023)

Abstract

Images acquired in low-light conditions often have poor visibility. These images considerably degrade the performance of computer vision and multimedia systems. Several low-light image enhancement methods have been proposed to address these issues, using various techniques to restore close-to-normal lighting or improve visibility. However, the enhanced images can suffer from problems such as saturation of local light sources, color distortion, and amplified noise. In this study, we propose a low-light image enhancement technique that uses illumination component estimation and a local steering kernel to address these problems. The proposed method estimates the illumination component of a low-light image and obtains an illumination-enhanced image based on Retinex theory. The resulting image is then color-corrected and denoised using a local steering kernel. To evaluate its performance, the proposed method is applied to low-light images captured under various conditions, where it demonstrates visual and quantitative superiority over existing methods.

1. Introduction

As artificial intelligence advances in modern society, automation and unmanned technologies are being introduced in various fields. In particular, intelligent CCTV systems, autonomous driving, and remote monitoring are built from various computer vision algorithms, and these systems require good visibility in the acquired images to achieve high accuracy. However, objects may not be clearly identifiable in images acquired at night or with insufficient contrast caused by environmental factors such as ambient light sources. Various low-light image enhancement algorithms have been proposed to address these issues.
A common approach for improving low-light images has been to enhance dark areas by expanding the dynamic range of the image histogram. However, in images with non-uniform lighting, this can over-enhance bright areas and thereby corrupt the information they contain [1,2]. Chiu et al. proposed an adaptive gamma correction method that uses a weight distribution to compensate for low-light images, but it also over-enhances bright areas of the image [3].
Low-light image enhancement using the Retinex model divides an image into illumination and reflection components and uses these components to estimate or generate an illuminated image through various techniques [4,5]. For low-light image enhancement to produce the best results, the illumination and reflection components must be in ideal conditions [6]. Wang et al. applied a joint edge-preserving filter to the illumination component to obtain improved images [7]. Guo et al. used a structure-aware smoothing technique to predict the illumination component of an image based on the Retinex model [8].
Traditional methods often fail to produce visually natural results. Some methods over-emphasize low-light areas, resulting in unnatural outcomes or even ruining the overall tonality of the input image. Moreover, incorrect estimates of illumination can lead to color mismatch issues in the resulting images, which can sometimes reduce visibility [9,10].
To address these issues, this paper proposes an algorithm for low-light image enhancement using illumination component estimation and a local steering kernel. To maintain the naturalness of the image even when bright areas are present in the low-light image, the proposed algorithm estimates the illumination component by calculating a gamma value from the mean and distribution constants obtained from the image histogram. An image with improved illumination is then obtained using Retinex theory, and color correction is performed by comparing the pixel values of the RGB channels at each pixel to emphasize the colors of the improved image. Finally, a local steering kernel is used for filtering to remove the noise components co-amplified during the low-light enhancement process.
The paper is structured as follows. Section 2 introduces related research on low-light image enhancement, and Section 3 explains the process by which our proposed algorithm improves low-light images. Section 4 discusses the performance of our proposed low-light image enhancement algorithm based on simulation results, and finally, Section 5 presents the conclusions.

2. Related Research

2.1. Low-Light Image Enhancement Based on Deep Learning Technology

Recently, research utilizing deep learning has been actively conducted across various fields, and methods employing CNN (convolutional neural network) training have gained attention for improving low-light images. MSR-net [11] is a CNN-based model that combines Retinex theory with deep learning to enhance low-light images. MSR-net performs well compared with other traditional methods but, owing to the limited receptive field of its network architecture, it can produce visually unnatural regions in the resulting images.
While these deep learning-based methods outperform classic hand-crafted low-light enhancement approaches, some issues remain. Since most of them rely on supervised learning, they require large-scale training datasets consisting of paired low-light images and images captured under normal light conditions. The performance of a deep learning network is closely related to the dataset used to train it, but building large-scale training datasets that reflect real-world conditions is challenging. For example, while obtaining a pair of images with different exposures and generating an HDR (high dynamic range) image from them is relatively straightforward [12], this is feasible only when capturing still scenes under well-lit daytime conditions. Acquiring a large number of clear normal-light images in extremely low-light environments is difficult.

2.2. Low-Illumination Color Image Enhancement Based on Retinex Theory

Retinex theory is a model used to explain how the human visual system perceives brightness. The fundamental principle of Retinex theory is that an image can be separated into its illumination and reflection components. The original image $I(x,y)$ acquired by a camera or sensor can be represented using the following formula [13].
$$I(x,y) = S(x,y)\,L(x,y) \qquad (1)$$
In the above formula, $S(x,y)$ and $L(x,y)$ represent the reflection and illumination components, respectively, and $(x,y)$ denotes the pixel coordinates of the image. In Retinex theory, $S(x,y)$ has a greater impact on $I(x,y)$ than $L(x,y)$ because it determines the unique characteristics of the image [14]. Reducing the effect of the illumination component on the reflection component therefore restores clear image information [4]. Unlike traditional linear and non-linear methods that can only enhance certain types of image features, Retinex achieves a balance between dynamic range compression, edge enhancement, and color constancy, making it suitable for enhancing various types of images.
The process of improving low-light images using Retinex is as follows:
1. The low-light image to be improved is represented by Equation (1).
2. To separate the illumination and reflection components, the logarithm is taken as in the following formula [15].
$$\log I(x,y) = \log S(x,y) + \log L(x,y) \qquad (2)$$
3. The illumination-enhanced image $\hat{S}(x,y)$ can be obtained by subtracting the illumination estimate $\log \hat{L}(x,y)$ from the original image in the log domain and taking the inverse logarithm, as shown below (and as illustrated in the sketch that follows this list).
$$\log \hat{S}(x,y) = \log I(x,y) - \log \hat{L}(x,y) \qquad (3)$$
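For illustration, the following Python/NumPy sketch implements this log-domain process for a single channel. The illumination estimate is taken here as a Gaussian-blurred copy of the image, in the spirit of the single-scale Retinex discussed below; the function name, the value of sigma, and this choice of estimator are assumptions of the sketch, and the paper's own estimator is developed in Section 3.1.

```python
# Minimal sketch of the log-domain Retinex process in Equations (1)-(3).
# The illumination estimate L_hat is simply a Gaussian-blurred copy of the
# image (an SSR-style assumption); the paper's estimator appears in Section 3.1.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_log_domain(image, sigma=80):
    """image: 2-D float array in [0, 1]; returns the reflectance estimate S_hat."""
    eps = 1e-6
    L_hat = gaussian_filter(image, sigma) + eps      # crude illumination estimate L_hat
    log_S = np.log(image + eps) - np.log(L_hat)      # Equation (3) in the log domain
    return np.clip(np.exp(log_S), 0.0, 1.0)          # invert the logarithm
```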
Methods based on Retinex theory, such as SSR (single-scale Retinex) [16] and MSRCR (multi-scale Retinex with color restoration) [17], estimate the illumination and reflectance of an image and adjust the dynamic range of the illuminated pixels to improve the image. Wang et al. [18] improved upon existing Retinex-based image enhancement algorithms by incorporating Gabor filters into the Retinex framework. Their technique extracts the luminance component I in the HSI color space and applies the MSRCR technique to enhance the luminance of low-light images. Additionally, they apply a Gabor-filter-based SSR algorithm in the RGB color space of low-light images to obtain images with improved texture and detail.
Ma et al. [19] propose an MSRCR image enhancement algorithm based on Gaussian filtering and guided filtering, using multiscale Gaussian filtering and guided filtering to estimate accurate illumination components in low-light images.

2.3. Low-Light Image Enhancement Based on RetinexNet

RetinexNet was proposed based on Retinex theory to perform image enhancement by decomposing low-light images into reflection and illumination components. RetinexNet uses Decom-Net to decompose the input image into reflectance and illumination components in the decomposition stage and is trained on pairs of low-light and normal images. However, while RetinexNet significantly enhances image brightness, it may introduce color distortion, and issues such as blurriness or noise amplification occur during the decomposition and recombination of images [20]. The improved network model proposed by Li et al. [21] uses the HSV color space to address the color distortion and noise problems of RetinexNet. Advanced RetinexNet [22] uses two subnets, DecomNet and EnhanceNet, to appropriately enhance contrast and suppress noise in the resulting image.
However, existing low-light image enhancement methods can result in insufficient illumination improvement or color imbalance, and the process of amplifying low-light areas can also amplify noise components. Due to these issues, enhanced images may exhibit distortion and blurriness.

3. Proposed Method

Our proposed method aims to address the issues that arise during the enhancement of low-light images through the following approaches:
- Estimating illumination components in low-light images using histogram smoothing and Retinex theory.
- Color correction by selecting a correction channel among the RGB channels of the illumination-enhanced image.
- Removal of noise from the resulting image with a non-local mean based on a local steering kernel.
The proposed method is specifically categorized into three parts based on the methods presented above. First, there is the illumination improvement part, which improves the illumination of low-light images. Secondly, there is the color correction part, which corrects distorted colors during the illumination improvement process. Finally, there is the noise removal part, which removes the noise amplified during the illumination improvement and color correction. Figure 1 shows a block diagram of the processing of the proposed algorithm.

3.1. Illumination Estimate

Low-light images may contain areas that are poorly lit or have an uneven brightness distribution. To accurately estimate the illumination components in such environments, information is needed to characterize the illumination environment of the low-light image. To this end, the proposed algorithm calculates mean and distribution constants from the histogram of each of the RGB channels of the low-light image. Here, the mean constant serves as the basis for judging the overall light level of the image, and the distribution constant reflects the frequency of locally bright areas. A histogram smoothing technique is used to obtain a more natural result than existing methods. The smoothed histogram $\dot{H}(p)$ is obtained as follows.
$$\dot{H}(p) = G * H_m(p) \qquad (4)$$
$$G = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(p-q)^2}{2\sigma_i^2}\right) \qquad (5)$$
$$H_m(p) = \{\, H(q) \mid p-n \le q \le p+n \,\} \qquad (6)$$
Here, $G$ is the Gaussian weight used for histogram smoothing, $H_m(p)$ denotes the bins of the histogram $H(p)$ of the low-light image, $p$ represents pixel values, and $q$ represents the pixel values belonging to a histogram bin, whose range is determined by a constant $n$ that sets the bin width. If $q$ exceeds the range of pixel values that the image can represent, the histogram $H(q)$ does not exist; in this case, histogram smoothing is performed using the minimum histogram value. The mean constant $m$ of the image is calculated from the smoothed histogram $\dot{H}(p)$ as follows.
$$m = \frac{1}{N}\,\frac{\sum_{p=0}^{N} p\,\dot{H}(p)}{\sum_{p=0}^{N} \dot{H}(p)} \qquad (7)$$
Here, $N$ denotes the maximum pixel value that can be represented in the image. The proposed algorithm uses the following conversion formula to convert $m$ into a gamma value.
$$\gamma_1 = \frac{1}{1+\exp(-2m)} \qquad (8)$$
$\gamma_1$ given by Equation (8) lies in the range $(0.5, 1)$, with darker images being closer to $\gamma_1 = 0.5$, indicating a lower illumination estimate.
The distribution constant is obtained from the smoothed histogram in the same way as the mean constant, but the calculation is performed only over the low-light range in order to emphasize information about the values in dark areas. The proposed algorithm regards pixel values below half of the maximum representable value as low-light and obtains the distribution constant $\gamma_2$ as follows.
$$\gamma_2 = \frac{1}{RC}\sum_{p=0}^{N/2} \dot{H}(p) \qquad (9)$$
Here, $R$ and $C$ represent the horizontal and vertical dimensions of the image, respectively. Higher values of the distribution constant indicate images with more dark areas. The gamma value $\gamma$ used for illumination estimation is calculated by weighting the mean and distribution constants as shown in the following formula.
$$\gamma = \frac{\tau}{1+\exp(-2m)} + \frac{1-\tau}{RC}\sum_{p=0}^{N/2} \dot{H}(p) = \tau\gamma_1 + (1-\tau)\gamma_2 \qquad (10)$$
Here, $\tau$ is the weighting constant used to obtain the gamma value and is set within the range $[0, 1]$; lower values brighten low-light images more strongly, whereas higher values of $\tau$ produce a more pronounced color contrast in the resulting image. The estimated illumination component $\hat{L}(x,y)$ is obtained by applying the gamma value to the input image as in Equation (11), and $\hat{L}(x,y)$ is then substituted into Equation (3) to obtain the enhanced image $\hat{S}(x,y)$ as in Equation (12).
$$\hat{L}(x,y) = I(x,y)^{\gamma} \qquad (11)$$
$$\hat{S}(x,y) = \exp\bigl(\log I(x,y) - \log \hat{L}(x,y)\bigr) \qquad (12)$$
Because pixel values in low-light areas are amplified by a larger amount, they become concentrated in the bright range, and the enhanced image may therefore appear to have reduced contrast. To address this, we further extend the dynamic range of pixel values originating from low-light areas so that the values concentrated at bright levels are dispersed naturally. The proposed algorithm extends the dynamic range by classifying pixels of the enhanced image whose values are smaller than half of the maximum representable value as low-light regions, as shown in Equation (13).
$$S_p(x,y) = \begin{cases} \hat{S}(x,y), & \text{if } \hat{S}(x,y) > 0.5 \\ 2\,\hat{S}(x,y)^{2}, & \text{otherwise} \end{cases} \qquad (13)$$
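The following Python/NumPy sketch summarizes this illumination enhancement stage for a single channel under stated assumptions: 256 intensity levels, SciPy's one-dimensional Gaussian filter as a stand-in for the histogram smoothing of Equations (4)-(6), and the default weight τ = 0.45 from Section 4.1. Function and variable names are illustrative rather than taken from the paper.

```python
# Minimal sketch of the illumination estimation and enhancement of Section 3.1
# (Equations (4)-(13)) for one channel of a normalized image.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def enhance_illumination(channel, tau=0.45, sigma=2.0):
    """channel: 2-D float array in [0, 1]; returns the range-extended result S_p."""
    eps = 1e-6
    N = 256                                                       # representable levels
    hist, _ = np.histogram(channel, bins=N, range=(0.0, 1.0))
    hist_s = gaussian_filter1d(hist.astype(np.float64), sigma)    # smoothed histogram, Eqs (4)-(6)
    p = np.arange(N)
    m = (p * hist_s).sum() / (hist_s.sum() * (N - 1))             # mean constant, Eq (7)
    gamma1 = 1.0 / (1.0 + np.exp(-2.0 * m))                       # Eq (8)
    gamma2 = hist_s[: N // 2].sum() / channel.size                # distribution constant, Eq (9)
    gamma = tau * gamma1 + (1.0 - tau) * gamma2                   # Eq (10)
    L_hat = np.power(channel + eps, gamma)                        # illumination estimate, Eq (11)
    S_hat = np.exp(np.log(channel + eps) - np.log(L_hat + eps))   # Retinex enhancement, Eq (12)
    S_hat = np.clip(S_hat, 0.0, 1.0)
    return np.where(S_hat > 0.5, S_hat, 2.0 * S_hat ** 2)         # dynamic range extension, Eq (13)
```

Applying this function to each RGB channel of a normalized low-light image would reproduce, under these assumptions, the first stage of the pipeline in Figure 1.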

3.2. Color Compensation

For an image with enhanced illumination, the rate at which pixel values are amplified may differ between the RGB channels owing to differences in their mean and distribution constants. This can lead to color distortion or reduced contrast, which lowers the visibility of the resulting image. The proposed algorithm therefore applies color correction to make the color contrast more pronounced after enhancement. The color correction step compares the pixel values of the RGB channels at each coordinate $(x,y)$ of the enhanced image $S_p(x,y)$, and the channel with the highest value is selected as the correction channel. Different correction values are then applied to the correction channel and the remaining channels based on the difference between the pixel value of the correction channel and the pixel values of the other two channels. If the pixel value of the RGB channel selected as the correction channel is $t(x,y)$ and the pixel values of the other two channels are $d_1(x,y)$ and $d_2(x,y)$, respectively, the color correction values $\hat{t}_1(x,y)$ and $\hat{t}_2(x,y)$ for the two channels are obtained as follows.
$$\hat{t}_1(x,y) = t(x,y) + \alpha\,\bigl(t(x,y) - d_1(x,y)\bigr)\bigl(1 - \bigl(t(x,y) - d_1(x,y)\bigr)\bigr) \qquad (14)$$
$$\hat{t}_2(x,y) = t(x,y) + \alpha\,\bigl(t(x,y) - d_2(x,y)\bigr)\bigl(1 - \bigl(t(x,y) - d_2(x,y)\bigr)\bigr) \qquad (15)$$
Here, $\alpha$ is the color correction weight. The correction value is proportional to the difference between the correction channel and each of $d_1(x,y)$ and $d_2(x,y)$. If only one of $d_1(x,y)$ and $d_2(x,y)$ differs greatly from the correction channel, the corresponding correction value can become large, so weights are applied to the color correction values $\hat{t}_1(x,y)$ and $\hat{t}_2(x,y)$ to minimize color distortion, maintain the naturalness of the image, and emphasize color contrast. Taking this into account, the combination weights $\beta_1(x,y)$ and $\beta_2(x,y)$ are calculated as follows.
$$\beta_1(x,y) = \frac{d_1(x,y)}{d_1(x,y) + d_2(x,y)} \qquad (16)$$
$$\beta_2(x,y) = \frac{d_2(x,y)}{d_1(x,y) + d_2(x,y)} \qquad (17)$$
Because the proposed algorithm sets $t(x,y)$ to the largest pixel value among the RGB channels, the smaller $d_1(x,y)$ and $d_2(x,y)$ are, the larger $\hat{t}_1(x,y)$ and $\hat{t}_2(x,y)$ become. The combined color correction value is obtained by applying the weights from Equations (16) and (17) to the respective corrections $\hat{t}_1(x,y)$ and $\hat{t}_2(x,y)$; the combined value $t_p(x,y)$ is calculated using the following formula.
$$t_p(x,y) = \beta_1(x,y)\,\hat{t}_1(x,y) + \beta_2(x,y)\,\hat{t}_2(x,y) \qquad (18)$$
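A per-pixel sketch of this color compensation step is given below, again in Python/NumPy. The ordering of $d_1$ and $d_2$, the handling of division by zero, and the fact that only the combined value $t_p$ is returned (rather than a recombined RGB image) are assumptions of this sketch.

```python
# Minimal sketch of the color compensation of Section 3.2 (Equations (14)-(18)).
import numpy as np

def color_compensation(S_p, alpha=1.0):
    """S_p: H x W x 3 float image in [0, 1]; returns the combined correction t_p."""
    eps = 1e-6
    t = S_p.max(axis=2)                                 # correction channel: largest RGB value
    s = np.sort(S_p, axis=2)                            # ascending sort of the three channels
    d1, d2 = s[..., 1], s[..., 0]                       # the two remaining channel values
    t1 = t + alpha * (t - d1) * (1.0 - (t - d1))        # Eq (14)
    t2 = t + alpha * (t - d2) * (1.0 - (t - d2))        # Eq (15)
    beta1 = d1 / (d1 + d2 + eps)                        # Eq (16)
    beta2 = d2 / (d1 + d2 + eps)                        # Eq (17)
    t_p = beta1 * t1 + beta2 * t2                       # Eq (18)
    return np.clip(t_p, 0.0, 1.0)
```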

3.3. Noise Removal Process

The proposed algorithm uses a non-local mean technique to remove the noise co-amplified during the enhancement of low-light images. Unlike a typical local mean filter, the non-local mean technique averages the pixels around the filter center with weights determined by how similar the surrounding pixels are to the center. This yields much better sharpness after filtering and less loss of detail than the local mean filter. However, the non-local mean technique relies only on similarity to set its weights, so the structural features of the filtering region are not taken into account. To address this, this paper proposes a modified non-local mean algorithm based on a local steering kernel. A steering kernel considers the gradients and local structure of pixels in a local region and typically takes the following form [13].
$$K_{x,y}(i,j) = \frac{\sqrt{\det(\mathbf{C}_{x,y})}}{2\pi h^{2}} \exp\!\left(-\frac{[i\ \ j]\,\mathbf{C}_{x,y}\,[i\ \ j]^{T}}{2h^{2}}\right), \qquad -k \le i,j \le k \qquad (19)$$
Here, $\mathbf{C}_{x,y}$ is the $2 \times 2$ covariance matrix computed over a square local region centered at pixel $(x,y)$ [17], $(i,j)$ are the coordinates relative to the center of the steering kernel, $[i\ \ j]$ is the vector representation of those coordinates, and $k$ is a constant that determines the size of the steering kernel. $h$ is a smoothing constant that controls the spread of the steering kernel [18]. The weights for denoising are calculated by comparing the similarity of the center and surrounding regions within the steering kernel. The similarity is measured with the Euclidean distance, and the following formulas convert this result into weights $u_{x,y}(i,j)$.
$$u_{x,y}(i,j) = \frac{1}{z_{x,y}} \exp\!\left(-\frac{\lVert w_{x,y} - w_{x+i,y+j} \rVert^{2}}{2h^{2}}\right) \qquad (20)$$
$$z_{x,y} = \sum_{(i,j)\in K_{x,y}} \exp\!\left(-\frac{\lVert w_{x,y} - w_{x+i,y+j} \rVert^{2}}{2h^{2}}\right) \qquad (21)$$
Here, $w_{x,y}$ denotes the center window centered at $(x,y)$, and $w_{x+i,y+j}$ denotes the comparison window offset by $(i,j)$ from the center window. $z_{x,y}$ is the normalization constant used to convert the similarity comparison results into weights. The denoising result $\mathrm{out}(x,y)$ is calculated by convolving the pixel values with the steering kernel $K_{x,y}(i,j)$ and the weight $u_{x,y}(i,j)$, as shown in the following equation.
$$\mathrm{out}(x,y) = M_{x,y}(i,j) \ast K_{x,y}(i,j) \ast u_{x,y}(i,j) \qquad (22)$$
Here, $\ast$ denotes the convolution operator, and $M_{x,y}(i,j)$ is the pixel value in the same region as the local steering kernel.
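One possible Python/NumPy realization of this steering-kernel-weighted non-local mean for a single-channel image is sketched below. The gradient-based covariance estimate, the regularization term, and the final normalization by the summed weights are assumptions made for this illustration; the defaults correspond to the parameters reported later in Table 1 (15 × 15 kernel, 3 × 3 comparison window, h = 1.5).

```python
# Minimal sketch of the steering-kernel-weighted non-local mean of Section 3.3
# (Equations (19)-(22)) for a single-channel image.
import numpy as np

def denoise_lsk_nlm(img, k=7, win=1, h=1.5, reg=1e-3):
    """img: 2-D float array in [0, 1]; (2k+1)x(2k+1) kernel, (2*win+1)^2 patches."""
    H, W = img.shape
    gy, gx = np.gradient(img)                          # image gradients for the covariance
    pad = k + win
    imgp = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    offs = np.arange(-k, k + 1)
    jj, ii = np.meshgrid(offs, offs)                   # kernel coordinates (i, j)

    for y in range(H):
        for x in range(W):
            # local gradient covariance C_{x,y} over a square region, Eq (19)
            ys, xs = slice(max(y - k, 0), y + k + 1), slice(max(x - k, 0), x + k + 1)
            g = np.stack([gx[ys, xs].ravel(), gy[ys, xs].ravel()])
            C = g @ g.T / g.shape[1] + reg * np.eye(2)
            quad = C[0, 0] * ii**2 + 2 * C[0, 1] * ii * jj + C[1, 1] * jj**2
            K = np.sqrt(np.linalg.det(C)) / (2 * np.pi * h**2) * np.exp(-quad / (2 * h**2))

            # similarity weights u_{x,y}(i,j) from patch distances, Eqs (20)-(21)
            yc, xc = y + pad, x + pad
            w0 = imgp[yc - win:yc + win + 1, xc - win:xc + win + 1]
            u = np.empty_like(K)
            for a in range(2 * k + 1):
                for b in range(2 * k + 1):
                    wi = imgp[yc + offs[a] - win:yc + offs[a] + win + 1,
                              xc + offs[b] - win:xc + offs[b] + win + 1]
                    u[a, b] = np.exp(-np.sum((w0 - wi) ** 2) / (2 * h**2))
            u /= u.sum()

            # weighted combination of the neighborhood M_{x,y}, Eq (22)
            M = imgp[yc - k:yc + k + 1, xc - k:xc + k + 1]
            out[y, x] = np.sum(M * K * u) / (np.sum(K * u) + 1e-12)
    return out
```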

4. Simulation and Results

4.1. Experimental Setting

To evaluate visual performance, the simulation compares the results of existing low-light image enhancement techniques with those of the proposed algorithm. The images used in the evaluation are 600 × 400 images from the LOL dataset, which is widely used in low-light image enhancement research [3]. To analyze the influence of the parameters of the proposed method on the resulting images, experiments are conducted under different conditions.
First, the weight $\tau$ used in the illumination enhancement process is set within the range $[0, 1]$; the closer the value is to 0, the brighter the illumination-enhanced image, whereas the closer $\tau$ is to 1, the more pronounced the color contrast in the resulting image. When $\tau$ is varied in 0.05 increments from 0 to 1 in the simulation, the proposed technique performs well at $\tau = 0.45$.
The color correction weight $\alpha$ controls the strength of the color correction: the higher the value, the stronger the effect. If the weight is 0, the image is output without correction, and if it exceeds $\alpha = 2$, the color distortion becomes severe. When the proposed algorithm is simulated with $\alpha$ varied in increments of 0.1, the most natural color correction is achieved at $\alpha = 1$.
Since the steering kernel in the denoising stage only needs to remove the noise amplified during low-light enhancement, strong denoising performance is not required, and a kernel size of 15 × 15 is sufficient. Also, because preserving details such as edges and text is important, the best performance is achieved using a small 3 × 3 window for the similarity comparison.
For the other parameters, we select the values that show the best overall performance when running experiments under various conditions, and the results are shown in Table 1.

4.2. Experimental Results and Visual Comparison

Figure 2, Figure 3, Figure 4 and Figure 5 show the low-light image and the enhanced and enlarged images, where (a) is the low-light image, (b–e) are the results of processing with existing techniques, namely, dark light [23], LIME [8], RBMP [24], and LIIEN [4], respectively, and (f) shows the result of processing with the proposed technique.
In Figure 2, we compare how well objects can be identified in a very dark image. The images processed with dark light and LIME do not make the objects sufficiently visible. The RBMP result greatly improves the illumination of the image, but a noticeable green cast appears when the dark areas are enlarged. The proposed technique sufficiently improves the illumination of the image, and the visibility of the dark areas is enhanced naturally.
In Figure 3, the dark light and LIME results do not improve the illumination of the background areas. The RBMP enlargement is less visible owing to the lack of contrast in the color tones, and the LIIEN enlargement shows relatively strong noise. The proposed algorithm improves the overall illumination well, and the color contrast in the enlarged image is clearly visible.
In Figure 4, the images processed with dark light and RBMP have low color contrast, resulting in blurry enlargements. The results of LIME and LIIEN show good overall visibility with sharp color contrast, but not enough illumination enhancement in dark areas. Overall, the proposed algorithm shows good visibility of objects in dark areas and clear color contrast.
In Figure 5, we zoom in on the areas where dark and light areas appear together, and the images processed with LIME and LIIEN appear darker in low-light areas and distorted in bright areas. The images processed with dark light and RBMP show a significant improvement in the dark areas, but the resulting images have a dull color contrast. The images processed by the proposed algorithm do not show any color distortion, and the overall result has vivid tones and good visibility.
Visual evaluation shows that the resulting images from existing methods have inadequate illumination enhancement or color distortion. On the other hand, the resulting image of the proposed algorithm exhibits sufficiently improved brightness, sharp contrast of colors, and natural texture of the image.

4.3. Objective Evaluation

To objectively evaluate the performance of the proposed method, a quantitative evaluation is conducted on the resulting images. The metrics used are LOE (lightness order error), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) [25]. LOE was proposed to objectively evaluate the naturalness of an image; it evaluates the change in illumination by assessing changes in the order of brightness in the image, and smaller LOE values indicate a more natural brightness transition in the resulting image. PSNR is the most commonly used image quality metric and is measured in decibels (dB); it evaluates the difference between two images, with higher values indicating greater similarity and less distortion. SSIM compares the brightness, contrast, and structural similarity between two images, with values ranging from 0 to 1; values closer to 1 signify greater structural similarity. Table 2, Table 3 and Table 4 show the LOE, PSNR, and SSIM results for the images processed in Section 4.2.
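For reference, the following Python sketch shows how these three scores could be computed with scikit-image and NumPy. The LOE implementation (comparing the order of the per-pixel maximum RGB value between the input and the enhanced image on a coarse sampling grid) follows the commonly used definition and is an assumption of this sketch rather than the exact code used for the tables below.

```python
# Sketch of the evaluation metrics used in Section 4.3: LOE, PSNR, and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def loe(original, enhanced, grid=50):
    """Lightness order error between the input and the enhanced image (lower is better)."""
    L_in = original.max(axis=2)                        # lightness as the max of the RGB channels
    L_out = enhanced.max(axis=2)
    ys = np.linspace(0, L_in.shape[0] - 1, grid, dtype=int)
    xs = np.linspace(0, L_in.shape[1] - 1, grid, dtype=int)
    r = L_in[np.ix_(ys, xs)].ravel()
    e = L_out[np.ix_(ys, xs)].ravel()
    order_in = r[:, None] >= r[None, :]
    order_out = e[:, None] >= e[None, :]
    # average number of sampled pixel pairs whose lightness order is flipped
    return float(np.mean(np.sum(order_in != order_out, axis=1)))

def evaluate(low_light, enhanced, reference):
    """All inputs: H x W x 3 float images in [0, 1]; reference is the normal-light image."""
    return {
        "LOE":  loe(low_light, enhanced),
        "PSNR": peak_signal_noise_ratio(reference, enhanced, data_range=1.0),
        "SSIM": structural_similarity(reference, enhanced, channel_axis=2, data_range=1.0),
    }
```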
In the quantitative evaluation, dark light shows poor performance overall, with mean values of 189.700 for LOE and 14.847 and 0.488 for PSNR and SSIM, respectively. The quantitative results of LIME are also inadequate; in particular, its PSNR average is the lowest at 14.167. RBMP has superior PSNR and SSIM, but its average LOE of 106.041 is slightly worse than that of the proposed algorithm. LIIEN has a superior average LOE, but its PSNR and SSIM averages are low. The proposed method achieves an average LOE of 90.474, an average PSNR of 18.338 dB, and an average SSIM of 0.576, which are generally superior to the existing methods. For a broader objective evaluation, Table 5, Table 6 and Table 7 report the average LOE, average PSNR, and average SSIM over 485 images from the LOL dataset, 100 images from the VE-LOL-L dataset, and 360 images from the SICE dataset.
When comparing the LOE values for each dataset in Table 5, the proposed algorithm achieves better results than dark light and LIME and shows natural luminance changes in the resulting images, although its values are similar to or slightly higher than those of RBMP and LIIEN.
The PSNR comparison in Table 6 shows that the proposed algorithm produces the best results. In particular, on the VE-LOL-L dataset it shows improvements of 2.333 dB, 2.518 dB, 0.417 dB, and 3.434 dB over the existing methods, respectively, and its results are the most similar to the reference images.
For SSIM, the proposed method in Table 7 shows the highest value of 0.529 on the SICE dataset, with improvements of 0.100, 0.070, 0.024, and 0.069 over the existing methods, respectively. The SSIM results of the proposed method are superior to those of the existing methods in most cases, producing results that are structurally most similar to the reference images.

4.4. Discussion

The proposed method uses histogram smoothing and Retinex theory for natural illumination enhancement, estimating the illumination component of low-light images from the mean and distribution constants. Color correction based on selecting a correction channel among the RGB channels reduces color distortion and sharpens contrast, and the non-local mean based on a local steering kernel minimizes noise in the resulting image. Unlike the existing methods, the proposed method produces clear and natural images in the visual evaluation and achieves the best results in the quantitative evaluations.

5. Conclusions

In this paper, we propose a technique based on Retinex theory and local steering kernel to improve low-light images. To effectively improve the low-light images and address the color mismatch problem, the proposed technique estimates the illumination component of the low-light images by using the mean and distribution constants, and the improved images are obtained by using the Retinex theory. Then, color correction is used to highlight the hue of the improved image, and a denoising algorithm based on the local steering kernel is used to obtain the final image.
To evaluate the performance of the proposed algorithm, low-light images from the LOL dataset are used to compare it with existing methods. In the visual assessment, the proposed method improves the overall illumination of the resulting image and amplifies the pixel values in dark areas, revealing hidden objects. Its results are superior to those of existing methods, exhibiting sharper color contrast and less distortion. Additionally, the proposed method effectively suppresses the noise amplified during the enhancement of low-light areas. In the quantitative evaluation, the LOE metric is used to compare the naturalness of the resulting images, and the proposed method shows improved results with lower LOE values than existing techniques.

Author Contributions

Conceptualization, B.-W.C. and N.-H.K.; software, B.-W.C.; validation, N.-H.K.; formal analysis, B.-W.C. and N.-H.K.; investigation, B.-W.C.; data curation, B.-W.C.; writing—original draft preparation, B.-W.C.; writing—review and editing, B.-W.C. and N.-H.K.; visualization, B.-W.C. and N.-H.K.; project administration, N.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lv, X.; Sun, Y.; Zhang, J.; Jiang, F.; Zhang, S. Low-light image enhancement via deep Retinex decomposition and bilateral learning. Signal Process. Image Commun. 2021, 99, 116466. [Google Scholar] [CrossRef]
  2. Cheon, B.W.; Kim, N.H. Modified Gaussian filter based on fuzzy membership function for AWGN removal in digital images. J. Inf. Commun. Converg. Eng. 2021, 19, 54–60. [Google Scholar]
  3. Dai, Q.; Pu, Y.F.; Rahman, Z.; Aamir, M. Fractional-order fusion model for low-light image enhancement. Symmetry 2019, 11, 574. [Google Scholar] [CrossRef]
  4. Mittal, A.; Moorthy, A.K.; Bovik, A.C. Improved Retinex for low illumination image enhancement of nighttime traffic. In Proceedings of the 2022 International Conference on Computer Engineering and Artificial Intelligence, Shijiazhuang, China, 22–24 July 2022; pp. 226–229. [Google Scholar]
  5. Mukaida, M.; Ueda, Y.; Suetake, N. Low-light image enhancement method by using a modified gamma transform for convex combination coefficients. In Proceedings of the 2022 IEEE International Conference on Image Processing, Bordeaux, France, 16–19 October 2022; pp. 2866–2870. [Google Scholar]
  6. Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6842–6850. [Google Scholar]
  7. Liu, S.; Long, W.; He, L.; Li, Y.; Ding, W. Retinex-based fast algorithm for low-light image enhancement. Entropy 2021, 23, 746. [Google Scholar] [CrossRef] [PubMed]
  8. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
  9. Li, M.D.; Liu, J.Y.; Yang, W.H.; Sun, X.Y.; Guo, Z.M. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, W.C.; Chen, Z.X.; Yuan, X.H.; Wu, X.J. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41. [Google Scholar] [CrossRef]
  11. Jha, R.R.; Nigam, A.; Bhavsar, A.; Pathak, S.K.; Schneider, W.; Rathish, K. Multi-shell D-MRI reconstruction via residual learning utilizing encoder-decoder network with attention (MSR-Net). In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 27 August 2020; pp. 1709–1713. [Google Scholar]
  12. Liang, S.; Zihan, Y.; Fan, F.; Quan, C.; Shihao, L.; Jie, M. LLNet: A deep autoencoder approach to natural low-light image enhancement. Comput. Vis. Pattern Recognit. 2017, 61, 650–662. [Google Scholar]
  13. Dhal, K.G.; Ray, S.; Das, S.; Biswas, A.; Ghosh, S. Hue-preserving and Gamut problem-free histopathology image enhancement. Int. J. Sci. Technol. Trans. Electr. Eng. 2019, 43, 645–672. [Google Scholar] [CrossRef]
  14. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
  15. Lai, R.; Mo, Y.; Liu, Z.; Guan, J. Local and nonlocal steering kernel weighted total variation model for image denoising. Symmetry 2019, 11, 329. [Google Scholar] [CrossRef]
  16. Ying, Z.L.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new image contrast enhancement algorithm using exposure fusion framework. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; pp. 36–46. [Google Scholar]
  17. Mohammad, A.A.H.; Zohair, A.A. Retinex-based multiphase algorithm for low-light image enhancement. IIETA 2020, 37, 733–743. [Google Scholar]
  18. Daniel, J.J.; Yutong, H.; Zia-ur, R.; Fang, L.; Glenn, A.W. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar]
  19. **xiang, M.; **nnan, F.; Jianjun, N.; **fang, Z.; Chao, X. Multi-scale retinex with color restoration image enhancement based on Gaussian filtering and guided filtering. Int. J. Mod. Phys. B 2017, 31, 1744077. [Google Scholar]
  20. Takeda, H.; Farsiu, S.; Milanfar, P. Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16, 349–366. [Google Scholar] [CrossRef] [PubMed]
  21. Qi, W.; Maoling, Q.; **gqi, S.; Li, L. An improved method of low light image enhancement based on retinex. In Proceedings of the 2021 6th International Conference on Image, Vision and Computing (ICIVC), Qingdao, China, 14 September 2021; pp. 6842–6850. [Google Scholar]
  22. **, W.; Zhiwen, W.; Dong, L.; Chanlong, Z.; Yuhang, W. Low illumination color image enhancement based on Gabor filtering and Retinex theory. Multimed. Tools Appl. 2021, 80, 17705–17719. [Google Scholar]
  23. Jiang, H.; Yutong, H.; Fengzhu, Z.; Fang, L.; Songchen, H. Advanced RetinexNet: A fully convolutional network for low-light image enhancement. Signal Process. Image Commun. 2023, 112, 116916. [Google Scholar]
  24. Cheon, B.W.; Kim, N.H. A modified steering kernel filter for AWGN removal based on kernel similarity. J. Inf. Commun. Converg. Eng. 2022, 20, 195–203. [Google Scholar] [CrossRef]
  25. Wang, S.; Zheng, Z.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Block diagram of proposed method.
Figure 2. Enhancement result and enlarged image of low-light image (LOL493). (a) Low-light image, (b) dark light, (c) LIME, (d) RBMP, (e) LIIEN, and (f) proposed method.
Figure 3. Enhancement result and enlarged image of low-light image (LOL547). (a) Low-light image, (b) dark light, (c) LIME, (d) RBMP, (e) LIIEN, and (f) proposed method.
Figure 4. Enhancement result and enlarged image of low-light image (LOL665). (a) Low-light image, (b) dark light, (c) LIME, (d) RBMP, (e) LIIEN, and (f) proposed method.
Figure 5. Enhancement result and enlarged image of low-light image (LOL780). (a) Low-light image, (b) dark light, (c) LIME, (d) RBMP, (e) LIIEN, and (f) proposed method.
Table 1. Parameter set of proposed image enhancement method.

Parameter | Variable | Value
Gamma weight parameter | $\tau$ | 0.45
Color compensation weight | $\alpha$ | 1
Smoothing parameter | $h$ | 1.5
Steering kernel size | $K_{x,y}$ | 15 × 15
Window size | $w$ | 3 × 3
Table 2. LOE score comparison of test image.

Image | Dark Light | LIME | RBMP | LIIEN | PM
LOL493 | 162.072 | 94.739 | 88.600 | 80.840 | 72.030
LOL665 | 150.877 | 111.682 | 77.145 | 69.061 | 49.115
LOL669 | 194.140 | 143.954 | 61.474 | 86.141 | 123.681
LOL778 | 192.890 | 123.611 | 112.848 | 80.802 | 83.162
LOL780 | 248.520 | 133.832 | 190.140 | 121.238 | 124.381
Average | 189.700 | 121.564 | 106.041 | 87.616 | 90.474
Table 3. PSNR comparison of test image.

Image | Dark Light | LIME | RBMP | LIIEN | PM
LOL493 | 17.019 | 16.434 | 16.366 | 15.744 | 20.030
LOL665 | 11.027 | 9.832 | 13.550 | 9.874 | 12.680
LOL669 | 11.606 | 11.437 | 24.817 | 11.980 | 18.750
LOL778 | 13.859 | 13.046 | 15.305 | 13.083 | 16.410
LOL780 | 20.725 | 20.086 | 24.305 | 20.227 | 23.820
Average | 14.847 | 14.167 | 18.869 | 14.182 | 18.338
Table 4. SSIM comparison of test image.

Image | Dark Light | LIME | RBMP | LIIEN | PM
LOL493 | 0.504 | 0.408 | 0.541 | 0.409 | 0.576
LOL665 | 0.321 | 0.265 | 0.393 | 0.255 | 0.378
LOL669 | 0.445 | 0.489 | 0.621 | 0.457 | 0.613
LOL778 | 0.464 | 0.406 | 0.513 | 0.400 | 0.570
LOL780 | 0.708 | 0.645 | 0.786 | 0.611 | 0.742
Average | 0.488 | 0.443 | 0.571 | 0.426 | 0.576
Table 5. Comparison of average LOE score.

Datasets | Dark Light | LIME | RBMP | LIIEN | PM
LOL | 226.942 | 138.152 | 75.430 | 91.237 | 110.961
VE-LOL-L | 235.419 | 145.717 | 93.062 | 106.850 | 121.055
SICE | 209.528 | 207.543 | 102.662 | 77.526 | 143.723
Table 6. Comparison of average PSNR score.

Datasets | Dark Light | LIME | RBMP | LIIEN | PM
LOL | 12.472 | 12.464 | 16.957 | 12.085 | 15.986
VE-LOL-L | 16.333 | 16.148 | 18.249 | 15.232 | 18.666
SICE | 11.044 | 12.821 | 15.664 | 12.589 | 16.030
Table 7. Comparison of average SSIM score.

Datasets | Dark Light | LIME | RBMP | LIIEN | PM
LOL | 0.437 | 0.393 | 0.487 | 0.318 | 0.472
VE-LOL-L | 0.553 | 0.500 | 0.547 | 0.394 | 0.542
SICE | 0.419 | 0.459 | 0.505 | 0.460 | 0.529