Article

A Single-Pixel High-Precision Imaging Technique Based on a Discrete Zernike Transform for High-Efficiency Image Reconstructions

1 School of Instrument Science and Opto Electronics Engineering, Beijing Information Science and Technology University, Beijing 100192, China
2 Fisheries Science Institute, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100068, China
3 School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
4 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 530; https://doi.org/10.3390/electronics12030530
Submission received: 26 December 2022 / Revised: 14 January 2023 / Accepted: 18 January 2023 / Published: 19 January 2023

Abstract
Single-pixel imaging (SPI) has attracted increasing attention in recent years because of its advantages in imaging systems. However, low reconstruction quality and long reconstruction times have hindered the development of SPI. Hence, in this study, we propose a Zernike SPI (ZSPI) technique to reduce the number of illumination patterns and the reconstruction time whilst retaining robustness. First, the ZSPI technique was theoretically demonstrated. Phase-shifting Zernike moment projections were used to illuminate the target and an inverse Zernike transform was used to reconstruct the desired image. To prove the feasibility, numerical simulations were carried out with different sample ratios (SRs) ranging from 0.1 to 0.3; an acceptable reconstruction appeared at approximately 0.1. This result indicated that ZSPI could obtain satisfactory reconstruction results at low SRs. Further simulation and physical experiments compared ZSPI with different reconstruction algorithms, including noniterative, linear iterative, and nonlinear iterative methods under speckle modulation patterns, at an SR of 0.1 for different targets. The results revealed that ZSPI had a higher imaging quality and required less imaging time, particularly for low-frequency targets. The method presented in this study has advantages for the high-efficiency imaging of low-frequency targets and can provide a new solution for SPI.

1. Introduction

Single-pixel imaging (SPI) [1,2] differs from traditional array-detector imaging in both the way image information is acquired and the acquisition efficiency. SPI has advantages in terms of hardware complexity and industrial cost. In addition, this technique has promising applications in the simplification and integration of future imaging systems, particularly in the infrared and terahertz bands, where array sensors remain immature and where three-dimensional (3D) imaging and spectral imaging require high resolution and sensitivity. The advent of SPI technology brings new solutions to these problems. Although detector array technology offers superior performance in the visible band, SPI is better suited to unconventional imaging modalities (such as polarimetric imaging [3,4,5], holographic imaging [6,7], multispectral imaging [8,9], X-ray imaging [10,11,12], and THz imaging [13,14,15,16,17,18]) and provides accurate time and depth resolution. In addition, SPI can meet the application requirements of different fields, including imaging through scattering media [19,20], remote sensing [21], compressive radar [22], optical encryption [23], bioluminescence microscopic imaging [24,25,26], gas detection [27], and 3D imaging [28,29,30].
SPI has certain advantages and characteristics compared with traditional imaging technology and has achieved fruitful research results. However, the problems of imaging quality and sampling efficiency have not been completely solved, thereby greatly restricting the practical application of SPI technology [31,32]. How to improve the imaging quality and reduce the sampling time has become a leading topic discussed by researchers. The spatial distribution of the projection pattern and the running time of the reconstruction algorithm are two main factors that restrict SPI systems, and determine the image quality and efficiency [33].
In SPI technology, random patterns and orthogonal basis patterns are generally selected as the projection patterns. Random speckles [34], random binary patterns [35], and other random patterns used as projection patterns generally lack orthogonality. Moreover, modulating them with a spatial light modulator leads to long sampling and reconstruction times, which remains a difficult problem for SPI. Compressed sensing reduces the measurement time to an extent in the case of under-sampling. However, it correspondingly increases the computation time of the reconstruction algorithm, and the quality of the image reconstruction depends on the robustness of the algorithm [36,37]. At present, an increasing number of researchers use orthogonal basis patterns rather than random patterns as the projection patterns to improve the quality and efficiency of SPI. Depending on the projection pattern, SPI techniques based on the wavelet transform [38,39], discrete cosine transform (DCT) [40,41], Fourier transform [42,43,44], and Hadamard transform [27,28,43,45,46,47,48] have been developed.
Zernike polynomials are widely used in the field of optical engineering owing to their orthogonal and complete characteristics in the unit circle [49]. The description method provided by Zernike polynomials has achieved great success in optical system designs and analyses [50], adaptive optics [51], atmospheric optics [52], optical testing [53], wavefront shaping [54], wavefront sensing [55], interferometry [56], aberration characteristics and the correction of human eyes [57], and other fields. Considering these orthogonal and complete characteristics, Zernike polynomials can be introduced into the SPI method.
In this work, a Zernike single-pixel imaging (ZSPI) method based on a discrete Zernike transform was proposed. The ZSPI method was theoretically demonstrated. Phase-shifting Zernike moment projections were used to illuminate the target, and an inverse Zernike transform was used to reconstruct the desired image. ZSPI could obtain satisfactory reconstruction results at low sample ratios. Further simulation and physical experiment results revealed that the Zernike single-pixel imaging had a higher imaging quality and required less imaging time.

2. Theory and Methods

The proposed ZSPI technique was based on the theorem of a Zernike transform. This technique employs Zernike polynomials as structured light patterns to illuminate the scene, and uses a detector that has no spatial resolution to collect the resulting light. A two-step phase-shifting Zernike illumination pattern based on a discrete Zernike transform was employed for the Zernike moment acquisition, which could obtain the Zernike coefficients and eliminate random noise. The final target was reconstructed by an inverse discrete Zernike transform.

2.1. Zernike Basis Pattern

The Zernike polynomial is a common mathematical description in optics, for example, for describing functions defined over the pupil of optical systems, wavefront phases, or wave aberrations [58]. In this study, we attempted to sample the image through the Zernike basis in SPI.
Zernike polynomials are a product of angular functions and radial polynomials [52] as shown in Equation (1). Zernike basis patterns are orthogonal in the unit circle. Therefore, they are defined with polar coordinates as follows:
$$
\begin{aligned}
Z_{\mathrm{even}\,j} &= \sqrt{n+1}\, R_n^m(r)\, \sqrt{2}\cos(m\theta) \\
Z_{\mathrm{odd}\,j} &= \sqrt{n+1}\, R_n^m(r)\, \sqrt{2}\sin(m\theta)
\end{aligned}
\quad (m \neq 0), \qquad
Z_j = \sqrt{n+1}\, R_n^0(r) \quad (m = 0)
\tag{1}
$$
where (r, θ) are the polar coordinates in the unit circle, n is the order of the Zernike polynomial, and m represents the angular frequency, which must meet m ≤ n, with n − m even. Z_j denotes the Zernike polynomial of mode order j [52].
R_n^m(r) represents the radial polynomial and is deduced from the Jacobi polynomials. It is defined as follows:
$$
R_n^m(r) = \sum_{s=0}^{(n-m)/2} \frac{(-1)^s (n-s)!}{s!\,\left[(n+m)/2 - s\right]!\,\left[(n-m)/2 - s\right]!}\, r^{\,n-2s}
\tag{2}
$$
Similar to Fourier patterns, which have been used in representative SPI techniques for illumination with a deterministic orthogonal model rather than random patterns, the Zernike basis pattern is a matrix of grayscale values. Figure 1 shows the Zernike basis patterns of different mode numbers. It is well known that Hadamard basis patterns only have horizontal and vertical features. Fourier basis patterns have horizontal, vertical, and oblique features, whilst Zernike basis patterns have features in all directions; the higher the order, the greater the directional resolution. In the experiment, the multi-grayscale Zernike basis patterns were prepared for projection by upsampling and dithering binarization to achieve high-speed and high-quality SPI.
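To make the pattern construction concrete, the following Python sketch (an illustration written for this description, not the authors' MATLAB implementation; the function names radial_poly and zernike_pattern are introduced here) evaluates the radial polynomial of Equation (2) and the normalized Zernike functions of Equation (1) on an N × N pixel grid restricted to the unit circle:

```python
import numpy as np
from math import factorial

def radial_poly(n, m, r):
    """Radial polynomial R_n^m(r) of Equation (2); requires m <= n and n - m even."""
    R = np.zeros_like(r, dtype=float)
    for s in range((n - m) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + m) // 2 - s)
                    * factorial((n - m) // 2 - s)))
        R += coeff * r ** (n - 2 * s)
    return R

def zernike_pattern(n, m, N=128, even=True):
    """Zernike basis pattern of Equation (1) sampled on an N x N grid over the unit circle."""
    # Pixel-centre coordinates x_i = (2i - N - 1)/N, matching the discretization of Section 2.2
    coords = (2.0 * np.arange(1, N + 1) - N - 1) / N
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    R = radial_poly(n, abs(m), r)
    if m == 0:
        Z = np.sqrt(n + 1) * R
    elif even:                                   # cosine (even) term
        Z = np.sqrt(2 * (n + 1)) * R * np.cos(abs(m) * theta)
    else:                                        # sine (odd) term
        Z = np.sqrt(2 * (n + 1)) * R * np.sin(abs(m) * theta)
    return np.where(r <= 1.0, Z, 0.0)            # zero outside the unit circle (domain D)

# Example: a defocus-like mode (n = 2, m = 0) as a 128 x 128 grayscale pattern
pattern = zernike_pattern(2, 0, N=128)
```

In a real system, such a grayscale pattern would then be upsampled and dithered to a binary DMD pattern, as noted above.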

2.2. Zernike Transform and Inverse Zernike Transform

The proposed method was based on direct and inverse discrete Zernike transforms. For a continuous signal, the Zernike moment of order (n, m) was defined as a double integral inside the unit circle, which is expressed as follows [59]:
$$
A_{nm} = \frac{n+1}{\pi} \iint_{D} I(x, y)\, Z_{nm}^{*}(x, y)\, \mathrm{d}x\, \mathrm{d}y
\tag{3}
$$
where I(x, y) represents a two-dimensional (2D) image function, D is the integration range of the unit circle x² + y² ≤ 1, Z_nm(x, y) is a Zernike function, and * stands for the complex conjugate.
For a discrete signal, we were required to discretize the digital image and replace integration by summation to calculate its Zernike moment. Assuming that I(x_i, y_j) was the image sampled on a discrete grid inside the unit circle, Equation (4) could be used as the approximation:
$$
\tilde{A}_{nm} = \frac{n+1}{\pi} \sum_{i=1}^{N} \sum_{j=1}^{N} I(x_i, y_j)\, w_{nm}(x_i, y_j)
\tag{4}
$$
where x_i = (2i − N − 1)/N and y_j = (2j − N − 1)/N, with i and j taken such that (x_i, y_j) ∈ D, and
$$
w_{nm}(x_i, y_j) = \int_{x_i - \Delta/2}^{x_i + \Delta/2} \int_{y_j - \Delta/2}^{y_j + \Delta/2} Z_{nm}(x, y)\, \mathrm{d}x\, \mathrm{d}y
\tag{5}
$$
where Δ = 2/N represents the pixel width. w_nm(x_i, y_j) could be numerically calculated, and the most commonly used approximation was as follows:
$$
w_{nm}(x_i, y_j) \approx \Delta^2\, Z_{nm}(x_i, y_j)
\tag{6}
$$
From the above derivation, the precision of the Zernike moment was affected by geometric and numerical integration errors. The geometric error was caused by the fact that the total area covered by all square pixels involved in the Zernike moment calculation of Equation (4) was not an exact unit circle [59]. The numerical integration error was caused by the approximate formula of Equation (6). Although these errors could be reduced by a few techniques, they could never be eliminated as long as the Zernike moments were computed in Cartesian coordinates [59].
Therefore, the image I(x_i, y_j) could be approximated as follows:
$$
I(x_i, y_j) \approx \sum_{n} \sum_{m} \tilde{A}_{nm}\, Z_{nm}(x_i, y_j)
\tag{7}
$$
In accordance with the theoretical analysis, a 2D image could be regarded as a linear superposition of a series of Zernike base patterns and the Zernike coefficient was the weight of the corresponding Zernike base patterns. For a 2D image, the Zernike forward transform could decompose the 2D image into different Zernike base patterns and corresponding weights. The inverse Zernike transform combined different Zernike base patterns into 2D images in accordance with the weight.
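The forward and inverse transforms can be summarized in a short sketch (again illustrative Python, reusing zernike_pattern from the previous block; the helper zernike_modes, which enumerates modes up to a chosen radial order, is introduced here for brevity). Because the sketch uses the normalized real patterns of Equation (1), the projection prefactor reduces to 1/π rather than the (n+1)/π of Equation (4), which applies to the unnormalized complex Zernike functions:

```python
import numpy as np

def zernike_modes(max_order):
    """Enumerate (n, m, even) triples with m <= n and n - m even, up to a radial order."""
    modes = []
    for n in range(max_order + 1):
        for m in range(n + 1):
            if (n - m) % 2:
                continue
            modes.append((n, m, True))       # m = 0 term or cosine (even) term
            if m:
                modes.append((n, m, False))  # sine (odd) term
    return modes

def zernike_decompose(image, max_order):
    """Forward discrete Zernike transform: project the image onto each basis pattern,
    using the pixel-area weights of Equation (6)."""
    N = image.shape[0]
    delta = 2.0 / N                          # pixel width
    moments = []
    for n, m, even in zernike_modes(max_order):
        Z = zernike_pattern(n, m, N, even)   # real-valued, so conjugation is trivial
        A = np.sum(image * Z) * delta**2 / np.pi
        moments.append((n, m, even, A))
    return moments

def zernike_reconstruct(moments, N):
    """Inverse discrete Zernike transform (Equation (7)): weighted sum of basis patterns."""
    image = np.zeros((N, N))
    for n, m, even, A in moments:
        image += A * zernike_pattern(n, m, N, even)
    return image
```

Truncating max_order plays the same role as lowering the sampling ratio: fewer retained modes mean fewer measurements and a smoother, lower-frequency reconstruction.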

2.3. Principle of ZSPI

ZSPI is based on a Zernike transform and an inverse Zernike transform. ZSPI treats an image as a weighted sum of Zernike basis patterns of different mode orders. Therefore, image reconstruction by the Zernike moments is the process of obtaining the weight of the image corresponding to each Zernike basis pattern. Zernike basis patterns were generated using the method of Section 2.1. These patterns were projected onto the target, and a bucket detector was used to detect the reflected light intensity of the target.
We assumed that the target was reflective and that the reflection intensity of the target in the direction of the bucket detector measurement was Ref(x, y). In this case, the image was proportional to the intensity reflection distribution; that is, I(x, y) ∝ Ref(x, y).
Therefore, the intensity of the reflected light of the target illuminated by the Zernike basis pattern lighting, E_φ(n, m), could be given as follows:
$$
E_{\phi}(n, m) = \iint_{D} \mathrm{Ref}(x, y)\, Z_j^{*}(x, y)\, \mathrm{d}x\, \mathrm{d}y
\tag{8}
$$
where D represents the projected region of the Zernike basis pattern. The response value of the bucket detector was as follows:
$$
T_{\phi}(x, y) = T_n + \beta\, E_{\phi}(n, m)
\tag{9}
$$
where T_n is the DC component caused by the background illumination and β is a factor related to the magnification of the bucket detector and the position relationship between the object and the bucket detector.
Two measurements were needed to obtain each Zernike moment of the target image. We illuminated the scene with two Zernike basis patterns of mode order j with a phase shift of π between them, denoted as Z1 and Z2:
$$
Z_1(x, y) = Z_j^{*}(x, y), \qquad Z_2(x, y) = -Z_j^{*}(x, y)
\tag{10}
$$
The bucket detector received the light signals from the target, and the signal acquisition and analog-to-digital conversion were performed by the signal acquisition device. The computer recorded the response values of the bucket detector, denoted as T1 and T2:
$$
T_1(x, y) = T_n + \beta\, E_{\phi 1}(n, m), \qquad T_2(x, y) = T_n + \beta\, E_{\phi 2}(n, m)
\tag{11}
$$
In accordance with the two-step phase-shift algorithm, the Zernike forward transformation integral formula of the reflection intensity Ref(x, y) could be obtained as follows:
$$
\begin{aligned}
T_1(x, y) - T_2(x, y) &= \beta\left[E_{\phi 1}(n, m) - E_{\phi 2}(n, m)\right] \\
&= \beta\left[\iint_{D} \mathrm{Ref}(x, y)\, Z_j^{*}(x, y)\, \mathrm{d}x\, \mathrm{d}y - \iint_{D} \mathrm{Ref}(x, y)\left(-Z_j^{*}(x, y)\right) \mathrm{d}x\, \mathrm{d}y\right] \\
&= 2\beta \iint_{D} \mathrm{Ref}(x, y)\, Z_j^{*}(x, y)\, \mathrm{d}x\, \mathrm{d}y \\
&= 2\beta\, \mathcal{Z}\!\left\{\mathrm{Ref}(x, y)\right\}
\end{aligned}
\tag{12}
$$
where 𝒵 represents the Zernike transformation. The object image and the reflection intensity of the object satisfied the proportional relation I(x, y) ∝ Ref(x, y). Hence, the Zernike moment in the process of SPI could be obtained from the following formula:
$$
A(x, y) = T_1(x, y) - T_2(x, y)
\tag{13}
$$
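The two-step phase-shift measurement of Equations (10)–(13) can be illustrated with a simple simulation (a hedged Python sketch reusing zernike_pattern from Section 2.1; the detector model with a constant background, a gain factor, and Gaussian read noise is an assumption made here, not the experimental calibration). The reading for the sign-inverted pattern is subtracted from that of the original pattern, which cancels the DC term T_n and leaves a value proportional to the Zernike moment:

```python
import numpy as np

rng = np.random.default_rng(1)

def bucket_value(reflectivity, pattern, background=0.05, gain=1.0, noise=1e-3):
    """Simulated bucket-detector response: T = T_n + beta * sum(Ref * pattern) + read noise."""
    return background + gain * np.sum(reflectivity * pattern) + rng.normal(scale=noise)

def measure_moment(reflectivity, basis_pattern):
    """Two-step phase shift: T1 - T2 cancels the DC background and is
    proportional to the Zernike moment of the reflectivity (Equations (12) and (13))."""
    T1 = bucket_value(reflectivity, +basis_pattern)   # illumination with Z1 =  Z_j
    T2 = bucket_value(reflectivity, -basis_pattern)   # illumination with Z2 = -Z_j (pi shift)
    return T1 - T2

# Example: the coefficient of one mode of a synthetic reflective target
target = np.random.default_rng(0).random((128, 128))
A_j = measure_moment(target, zernike_pattern(2, 0, N=128))
```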

3. Results and Discussion

3.1. Numerical Simulations

In this section, an image named “cameraman” (128 by 128 pixels) was utilized as the target scene and simulated by ZSPI with different sampling ratios (SRs) of the Zernike basis patterns. The SR was the ratio of the actual number of measurements to the total number of measurements, representing the capture efficiency. The peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) were used to quantitatively evaluate the quality of the target reconstructions:
$$
\mathrm{PSNR} = 10 \log_{10}\!\left(D^2 / \mathrm{MSE}\right)
\tag{14}
$$
$$
\mathrm{RMSE} = \sqrt{\mathrm{MSE}}
\tag{15}
$$
where MSE denotes the mean square error between the reconstruction and the reference image, and D denotes the maximum pixel value of the image.
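A direct implementation of these two metrics reads as follows (a minimal sketch; the parameter peak corresponds to the maximum pixel value D and is assumed to be 255 for 8-bit images):

```python
import numpy as np

def psnr_rmse(reference, reconstruction, peak=255.0):
    """PSNR (Equation (14)) and RMSE (Equation (15)) between a reference and a reconstruction."""
    diff = np.asarray(reference, dtype=float) - np.asarray(reconstruction, dtype=float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak**2 / mse), np.sqrt(mse)
```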
All the simulations were performed using MATLAB R2020a (MathWorks, Natick, MA, USA, 2020) on a personal computer (PC, Intel Core i7 CPU, 2.90 GHz, 16 GB RAM, 64 bit, Windows 10). The target scene was reconstructed by ZSPI at different SRs ranging from 0.1 to 0.3. Figure 2 shows the results. From Figure 2, the reconstructions at SRs below 0.1 exhibited a degree of ringing in the image; an acceptable reconstruction appeared at SRs of 0.1 and above, which could be set as the quality benchmark. Figure 3 shows the distribution of the light intensity sequence over the measurement numbers. From Figure 3, it can be seen that the large intensity values were concentrated at the beginning of the horizontal axis. This finding indicated that the Zernike reconstruction could obtain the desired result by using the first few projections of the Zernike patterns.
In order to quantitatively evaluate the quality of the reconstructions, the normalized RMSE and the PSNR were determined. Figure 4 shows the normalized RMSE and PSNR of the reconstructions at different SRs. The most obvious result was that an SR of 0.14 yielded a smaller RMSE and a larger PSNR under under-sampling, which could be considered the appropriate SR for ZSPI. This finding again indicated that the significant Zernike coefficients were concentrated in the first part of the measurement sequence.
In this study, we selected the noniterative differential ghost imaging (DGI) method, the linear iterative conjugate gradient descent (CGD) method, and the nonlinear iterative discrete cosine transform (DCT) and compressive sensing based total variation (TV) algorithms for comparison to further verify the imaging performance of the ZSPI system [60]. Figure 5 compares the performance of ZSPI with the DGI, CGD, DCT, and TV algorithms, which used speckle modulation patterns, at the same SR of 0.1 based on the simulated data. As shown in Figure 5, ZSPI had a higher reconstruction quality than the other algorithms at the same low sampling ratio. The PSNR and RMSE were used to quantitatively compare the imaging quality.
To quantitatively compare the quality of the target reconstructions, the RMSE and PSNR of the different reconstruction methods were determined. The results for ZSPI, DGI, CGD, DCT, and TV are shown in Figure 6a. From these results, it could be seen that the PSNR of ZSPI was remarkably higher, and its RMSE remarkably lower, than those of the other reconstruction methods, which proved that ZSPI had a better imaging quality at low SRs. Furthermore, to investigate the efficiency of the image reconstruction, the reconstruction times were measured for DGI, CGD, DCT, TV, and ZSPI. The results are shown in Figure 6b. The reconstruction times of DGI, CGD, DCT, TV, and ZSPI were 0.16868, 11.25305, 151.19757, 72.42121, and 3.29553 s, respectively. Although five reconstruction methods were compared, the data of DGI are not contained in Figure 6b because its reconstruction time was too small to be exhibited; however, the DGI algorithm based on the speckle modulation patterns had the lowest PSNR and the largest RMSE. Therefore, we observed that ZSPI required less reconstruction time for a small-scale reconstruction.

3.2. Experiments

Figure 7 shows the physical experiment setup of ZSPI, which included an LED light source (400–760 nm @ 20 W), a digital micromirror device (DMD, ViALUX V-7001), a bucket detector (Thorlabs PDA36A), a data acquisition card (Gage CSE22G8, Vitrek), and a computer. The computer was used to generate the illumination patterns, which were sent to the DMD. The DMD projected these patterns onto the object. The bucket detector was used to receive the light reflected by the object whilst converting the intensity into the corresponding voltage, which was recorded by the data acquisition card. The resolution of the DMD was 1024 × 768 and the refresh rate was 22 kHz. The “cameraman” and the emblem of Beijing Information Science and Technology University were selected as the imaging targets. The reconstruction size was 128 by 128 pixels.
Two groups of experiments based on the above experimental setup were performed to demonstrate the reconstruction performance of ZSPI. In these experiments, ZSPI was compared with the DGI, CGD, DCT, and TV algorithms, which used speckle modulation patterns, at the same sampling ratio of 0.1. Figure 8 shows the physical experimental results. Intuitively, ZSPI had a better imaging quality. The structural similarity index measure (SSIM) was calculated for these experimental results to quantitatively evaluate the experimental performance. Figure 9 shows the SSIM performance. The most evident result was that ZSPI had the maximum SSIM for both imaging targets compared with the other algorithms based on speckle modulation patterns. Therefore, it could be concluded that ZSPI had a better imaging quality at a low SR. Furthermore, from Figure 9, the SSIM values of ZSPI for the “cameraman” and the emblem were 0.2794 and 0.3169, respectively, indicating that ZSPI had an advantage in reconstructing low-frequency targets. Figure 9 also shows the reconstruction times of DGI, CGD, DCT, TV, and ZSPI. As shown in the figure, for the same SR, ZSPI had the shortest running time, which was the same as the numerical simulation results.

4. Conclusions

In this study, a ZSPI technique was proposed to reduce the number of illumination patterns and the reconstruction time whilst retaining robustness. In order to prove the feasibility, numerical simulations were carried out with different SRs ranging from 0.1 to 0.3; an acceptable reconstruction appeared at approximately 0.1. This result indicated that ZSPI could obtain satisfactory reconstruction results at low SRs. Further simulation and physical experiments compared ZSPI with different reconstruction algorithms, such as DGI, CGD, DCT, and TV under speckle modulation patterns, at a sampling ratio of 0.1 for different targets. The results proved that ZSPI had a higher imaging quality and required less imaging time, particularly for low-frequency targets. The proposed method had an evident advantage in reconstructing high-quality pictures with relatively few illumination patterns and a lower reconstruction time, which could serve as a reference for SPI methods such as fast SPI of low-frequency targets. In addition, the proposed method could be used for rotated-object classification in microscopic or remote sensing applications owing to the rotation invariance of Zernike moments.

Author Contributions

Conceptualization, S.Z., K.L. and H.L.; Methodology, S.Z., K.L. and H.L.; Software, L.L.; Resources, K.L.; Data curation, S.Z.; Writing—original draft, S.Z., K.L. and L.L.; Writing—review & editing, S.Z., K.L. and L.L.; Visualization, S.Z. and L.L.; Supervision, K.L.; Funding acquisition, K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 32202996.

Data Availability Statement

All implementation details, sources and data are available upon request from the corresponding author.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 32202996).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91. [Google Scholar] [CrossRef] [Green Version]
  2. Candes, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  3. Durán, V.; Clemente, P.; Fernández-Alonso, M.; Tajahuerce, E.; Lancis, J. Single-pixel polarimetric imaging. Opt. Lett. 2012, 37, 824–826. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Soldevila, F.; Irles, E.; Durán, V.; Clemente, P.; Fernández-Alonso, M.; Tajahuerce, E.; Lancis, J. Single-pixel polarimetric imaging spectrometer by compressive sensing. Appl. Phys. B 2013, 113, 551–558. [Google Scholar] [CrossRef]
  5. Seow, K.L.C.; Török, P.; Foreman, M.R. Single pixel polarimetric imaging through scattering media. Opt. Lett. 2020, 45, 5740–5743. [Google Scholar] [CrossRef]
  6. Ramachandran, P.; Alex, Z.C.; Nelleri, A. Compressive Fresnel digital holography using Fresnelet based sparse representation. Opt. Commun. 2015, 340, 110–115. [Google Scholar] [CrossRef]
  7. Li, J.; Li, H.; Li, J.; Pan, Y.; Li, R. Compressive optical image encryption with two-step-only quadrature phase-shifting digital holography. Opt. Commun. 2015, 344, 166–171. [Google Scholar] [CrossRef]
  8. Bian, L.; Suo, J.; Situ, G.; Li, Z.; Fan, J.; Chen, F.; Dai, Q. Multispectral imaging using a single bucket detector. Sci. Rep. 2016, 6, 24752. [Google Scholar] [CrossRef] [Green Version]
  9. Li, Z.; Suo, J.; Hu, X.; Deng, C.; Fan, J.; Dai, Q. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation. Sci. Rep. 2017, 7, 41435. [Google Scholar] [CrossRef] [Green Version]
  10. Greenberg, J.; Krishnamurthy, K.; Brady, D. Compressive single-pixel snapshot x-ray diffraction imaging. Opt. Lett. 2014, 39, 111–114. [Google Scholar] [CrossRef]
  11. Yu, H.; Lu, R.; Han, S.; Xie, H.; Du, G.; Xiao, T.; Zhu, D. Fourier-transform ghost imaging with hard X rays. Phys. Rev. Lett. 2016, 117, 113901. [Google Scholar] [CrossRef] [Green Version]
  12. Zhang, A.-X.; He, Y.-H.; Wu, L.-A.; Chen, L.-M.; Wang, B.-B. Tabletop x-ray ghost imaging with ultra-low radiation. Optica 2018, 5, 374–377. [Google Scholar] [CrossRef] [Green Version]
  13. Chen, S.-C.; Feng, Z.; Li, J.; Tan, W.; Du, L.-H.; Cai, J.; Ma, Y.; He, K.; Ding, H.; Zhai, Z.-H.; et al. Ghost spintronic THz-emitter-array microscope. Light Sci. Appl. 2020, 9, 99. [Google Scholar] [CrossRef]
  14. Stantchev, R.I.; Sun, B.; Hornett, S.M.; Hobson, P.A.; Gibson, G.M.; Padgett, M.J.; Hendry, E. Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector. Sci. Adv. 2016, 2, e1600190. [Google Scholar] [CrossRef] [Green Version]
  15. Hornett, S.M.; Stantchev, R.I.; Vardaki, M.Z.; Beckerleg, C.; Hendry, E. Subwavelength Terahertz imaging of graphene photoconductivity. Nano Lett. 2016, 16, 7019–7024. [Google Scholar] [CrossRef] [Green Version]
  16. Watts, C.M.; Shrekenhamer, D.; Montoya, J.; Lipworth, G.; Hunt, J.; Sleasman, T.; Krishna, S.; Smith, D.R.; Padilla, W.J. Terahertz compressive imaging with metamaterial spatial light modulators. Nat. Photonics 2014, 8, 605–609. [Google Scholar] [CrossRef]
  17. Shrekenhamer, D.; Watts, C.M.; Padilla, W.J. Terahertz single pixel imaging with an optically controlled dynamic spatial light modulator. Opt. Express 2013, 21, 12507–12518. [Google Scholar] [CrossRef]
  18. Chan, W.L.; Charan, K.; Takhar, D.; Kelly, K.F.; Baraniuk, R.G.; Mittleman, D.M. A single-pixel terahertz imaging system based on compressed sensing. Appl. Phys. Lett. 2008, 93, 121105. [Google Scholar] [CrossRef] [Green Version]
  19. Li, F.; Zhao, M.; Tian, Z.; Willomitzer, F.; Cossairt, O. Compressive ghost imaging through scattering media with deep learning. Opt. Express 2020, 28, 17395–17408. [Google Scholar] [CrossRef]
  20. Durán, V.; Soldevila, F.; Irles, E.; Clemente, P.; Tajahuerce, E.; Andrés, P.; Lancis, J. Compressive imaging in scattering media. Opt. Express 2015, 23, 14424–14433. [Google Scholar] [CrossRef]
  21. Zhao, C.; Gong, W.; Chen, M.; Li, E.; Wang, H.; Xu, W.; Han, S. Ghost imaging lidar via sparsity constraints. Appl. Phys. Lett. 2012, 101, 141123. [Google Scholar] [CrossRef] [Green Version]
  22. Baraniuk, R.; Steeghs, P. Compressive radar imaging. In Proceedings of the 2007 IEEE Radar Conference, Waltham, MA, USA, 17–20 April 2007; pp. 128–133. [Google Scholar]
  23. Clemente, P.; Durán, V.; Torres-Company, V.; Tajahuerce, E.; Lancis, J. Optical encryption based on computational ghost imaging. Opt. Lett. 2010, 35, 2391–2393. [Google Scholar] [CrossRef] [PubMed]
  24. Yu, W.-K.; Yao, X.-R.; Liu, X.-F.; Lan, R.-M.; Wu, L.-A.; Zhai, G.-J.; Zhao, Q. Compressive microscopic imaging with “positive–negative” light modulation. Opt. Commun. 2016, 371, 105–111. [Google Scholar] [CrossRef] [Green Version]
  25. Radwell, N.; Mitchell, K.J.; Gibson, G.M.; Edgar, M.P.; Bowman, R.; Padgett, M.J. Single-pixel infrared and visible microscope. Optica 2014, 1, 285–289. [Google Scholar] [CrossRef] [Green Version]
  26. Studer, V.; Bobin, J.; Chahid, M.; Mousavi, H.S.; Candes, E.; Dahan, M. Compressive fluorescence microscopy for biological and hyperspectral imaging. Proc. Natl. Acad. Sci. USA 2012, 109, E1679–E1687. [Google Scholar] [CrossRef] [Green Version]
  27. Gibson, G.M.; Sun, B.; Edgar, M.P.; Phillips, D.B.; Hempler, N.; Maker, G.T.; Malcolm, G.P.A.; Padgett, M.J. Real-time imaging of methane gas leaks using a single-pixel camera. Opt. Express 2017, 25, 2998–3005. [Google Scholar] [CrossRef]
  28. Sun, M.-J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010. [Google Scholar] [CrossRef]
  29. Sun, B.; Edgar, M.P.; Bowman, R.; Vittert, L.E.; Welsh, S.; Bowman, A.; Padgett, M.J. 3D computational imaging with single-pixel detectors. Science 2013, 340, 844–847. [Google Scholar] [CrossRef] [Green Version]
  30. Li, L.; Xiao, W.; Jian, W. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing. Appl. Opt. 2014, 53, 7992–7997. [Google Scholar] [CrossRef]
  31. Zhou, D.; Cao, J.; Cui, H.; Hao, Q.; Chen, B.-K.; Lin, K. Complementary Fourier single-pixel imaging. Sensors 2021, 21, 6544. [Google Scholar] [CrossRef]
  32. Rizvi, S.; Cao, J.; Zhang, K.; Hao, Q. Improving imaging quality of real-time Fourier single-pixel imaging via deep learning. Sensors 2019, 19, 4190. [Google Scholar] [CrossRef] [Green Version]
  33. Wenwen, M.; Dongfeng, S.; Jian, H.; Kee, Y.; Yingjian, W.; Chengyu, F. Sparse Fourier single-pixel imaging. Opt. Express 2019, 27, 31490–31503. [Google Scholar] [CrossRef]
  34. Valencia, A.; Scarcelli, G.; D’Angelo, M.; Shih, Y. Two-photon imaging with thermal light. Phys. Rev. Lett. 2005, 94, 63601. [Google Scholar] [CrossRef] [Green Version]
  35. Bromberg, Y.; Katz, O.; Silberberg, Y. Ghost imaging with a single detector. Phys. Rev. A 2009, 79, 053840. [Google Scholar] [CrossRef] [Green Version]
  36. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef] [Green Version]
  37. Tropp, J.; Gilbert, A.C. Signal recovery from partial information via orthogonal matching pursuit. IEEE Trans. Inform. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef] [Green Version]
  38. Phillips, D.B.; Sun, M.-J.; Taylor, J.M.; Edgar, M.P.; Barnett, S.M.; Gibson, G.M.; Padgett, M.J. Adaptive foveated single-pixel imaging with dynamic supersampling. Sci. Adv. 2017, 3, e1601782. [Google Scholar] [CrossRef] [Green Version]
  39. Rousset, F.; Ducros, N.; Farina, A.; Valentini, G.; D’Andrea, C.; Peyrin, F. Adaptive basis scan by wavelet prediction for single-pixel imaging. IEEE Trans. Comput. Imaging 2017, 3, 36–46. [Google Scholar] [CrossRef] [Green Version]
  40. Liu, B.-L.; Yang, Z.-H.; Liu, X.; Wu, L.-A. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform. J. Mod. Opt. 2017, 64, 259–264. [Google Scholar] [CrossRef] [Green Version]
  41. Chen, Y.; Liu, S.; Yao, X.-R.; Zhao, Q.; Liu, X.-F.; Liu, B.; Zhai, G.-J. Discrete cosine single-pixel microscopic compressive imaging via fast binary modulation. Opt. Commun. 2020, 454, 124512. [Google Scholar] [CrossRef]
  42. Zhang, Z.; Ma, X.; Zhong, J. Single-pixel imaging by means of Fourier spectrum acquisition. Nat. Commun. 2015, 6, 6225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Zhang, Z.; Wang, X.; Zheng, G.; Zhong, J. Hadamard single-pixel imaging versus fourier single-pixel imaging. Opt. Express 2017, 25, 19619–19639. [Google Scholar] [CrossRef]
  44. Bian, L.; Suo, J.; Hu, X.; Chen, F.; Dai, Q. Efficient single pixel imaging in Fourier space. J. Opt. 2016, 18, 085704. [Google Scholar] [CrossRef] [Green Version]
  45. Edgar, M.P.; Gibson, G.M.; Bowman, R.W.; Sun, B.; Radwell, N.; Mitchell, K.J.; Welsh, S.S.; Padgett, M.J. Simultaneous real-time visible and infrared video with single-pixel detectors. Sci. Rep. 2015, 5, 10669. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Vasile, T.; Damian, V.; Coltuc, D.; Petrovici, M. Single pixel sensing for THz laser beam profiler based on Hadamard Transform. Opt. Laser Technol. 2016, 79, 173–178. [Google Scholar] [CrossRef]
  47. Sun, M.-J.; Meng, L.-T.; Edgar, M.P.; Padgett, M.J.; Radwell, N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci. Rep. 2017, 7, 3464. [Google Scholar] [CrossRef] [Green Version]
  48. Martínez-León, L.; Clemente, P.; Mori, Y.; Climent, V.; Lancis, J.; Tajahuerce, E. Single-pixel digital holography with phase-encoded illumination. Opt. Express 2017, 25, 4975–4984. [Google Scholar] [CrossRef]
  49. Lakshminarayanan, V.; Fleck, A. Zernike polynomials: A guide. J. Mod. Opt. 2011, 58, 545–561. [Google Scholar] [CrossRef]
  50. Schwiegerling, J. Review of Zernike polynomials and their use in describing the impact of misalignment in optical systems. In Proceedings of the SPIE Optical System Alignment, Tolerancing, and Verification XI, San Diego, CA, USA, 22 August 2017; Volume 10377, p. 103770D. [Google Scholar]
  51. Alda, J.; Boreman, G.D. Zernike-based matrix model of deformable mirrors: Optimization of aperture size. Appl. Opt. 1993, 32, 2431–2438. [Google Scholar] [CrossRef] [Green Version]
  52. Noll, R.J. Zernike polynomials and atmospheric turbulence. J. Opt. Soc. Am. 1976, 66, 207–211. [Google Scholar] [CrossRef]
  53. Navarro, R.; Moreno-Barriuso, E. Laser ray-tracing method for optical testing. Opt. Lett. 1999, 24, 951–953. [Google Scholar] [CrossRef] [Green Version]
  54. Love, G.D. Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator. Appl. Opt. 1997, 36, 1517–1524. [Google Scholar] [CrossRef]
  55. Noll, R.J. Phase estimates from slope-type wave-front sensors. J. Opt. Soc. Am. 1978, 68, 139–140. [Google Scholar] [CrossRef]
  56. Van Brug, H. Zernike polynomials as a basis for wave-front fitting in lateral shearing interferometry. Appl. Opt. 1997, 36, 2788–2790. [Google Scholar] [CrossRef]
  57. McAlinden, C.; McCartney, M.; Moore, J. Mathematics of Zernike polynomials: A review. Clin. Experiment. Ophthalmol. 2011, 39, 820–827. [Google Scholar] [CrossRef]
  58. Navarro, R.; Arines, J.; Rivera, R. Direct and inverse discrete Zernike transform. Opt. Express 2009, 17, 24269–24281. [Google Scholar] [CrossRef] [Green Version]
  59. Xin, Y.; Pawlak, M.; Liao, S. Accurate computation of Zernike moments in polar coordinates. IEEE Trans. Image Process. 2007, 16, 581–587. [Google Scholar] [CrossRef]
  60. Bian, L.; Suo, J.; Dai, Q.; Chen, F. Experimental comparison of single-pixel imaging algorithms. J. Opt. Soc. Am. A 2018, 35, 78–87. [Google Scholar] [CrossRef]
Figure 1. Zernike basis patterns of different mode numbers.
Figure 2. Reconstructions of different sampling ratios by Zernike single-pixel imaging.
Figure 3. Intensity distributions by Zernike transform.
Figure 4. RMSE and PSNR of the reconstructions of different sampling ratios.
Figure 5. Numerical simulation results of the SPI of different algorithms.
Figure 6. The comparison of different reconstruction methods: (a) image quality of different reconstruction methods; (b) reconstruction times for different reconstruction methods.
Figure 7. Experimental setup of Zernike single-pixel imaging.
Figure 8. Physical experimental results of the SPI of different algorithms.
Figure 9. SSIM performance and reconstruction times of different reconstruction methods for (a) the “cameraman” and (b) the emblem of Beijing Information Science and Technology University.
Back to TopTop