Similarity Index Based Approach for Identifying Similar Grotto Statues to Support Virtual Restoration
Abstract
1. Introduction
- Structural Similarity Index Measurement (SSIM): SSIM is a full-reference image quality metric that measures similarity in terms of luminance, contrast, and structure. SSIM values range from 0 to 1; the larger the value, the smaller the distortion. It is usually used to measure the distortion of a compressed image relative to its original, rather than to calculate the similarity between two independent images [40].
- Cosine Similarity: Cosine similarity represents each image as a vector and uses the cosine distance between the vectors as the similarity of the two images. In our tests, this method took a long time to calculate the similarity between the small Buddhist statues on the Thousand-Buddha cassock [41].
- Histogram Method: Histograms describe the global distribution of colors in an image and are a basic method for image similarity calculation. However, a histogram captures only color information and discards spatial structure: as long as the color distributions are similar, two images are judged to be highly similar. Since the stone color of the Buddhist statues is almost identical, the histogram method is clearly unsuitable for judging whether such images are similar [42].
- Mutual Information Method: The mutual information of two images can represent their similarity to a certain extent, provided the two images are the same size. In most cases, however, the sizes differ, and resizing the images to match loses some of the original information, making this method less suitable for determining image similarity [43].
- Hash Method: Hash methods normalize an image to a fixed size, compute a bit sequence as the hash value of the image, and then compare the hash sequences of two images bit by bit [44,45]. The fewer the differing bits, the higher the similarity of the two images; against a pre-defined threshold, the two images can then be judged similar or not. Compared with the other similarity measures above, this method takes less time. Furthermore, the algorithm is hardly affected by image resolution or color and is therefore more robust.
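The bit-comparison idea above can be sketched in a few lines. The example below is an illustrative average-hash (aHash) style comparison with toy 8 × 8 grids standing in for resized grayscale photos; the function names and data are ours, not taken from the paper.

```python
def average_hash(pixels):
    """64-bit hash of an 8 x 8 grayscale grid: 1 where a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; fewer differences means higher similarity."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two toy 8 x 8 "images": the second is a uniformly brightened copy of the first.
img_a = [[(x * y) % 256 for x in range(8)] for y in range(8)]
img_b = [[v + 10 for v in row] for row in img_a]

dist = hamming(average_hash(img_a), average_hash(img_b))
print(dist)  # 0: uniform brightening preserves the above-the-mean bit pattern
```

A pre-defined threshold on the Hamming distance (for example, accepting a pair when fewer than 5 of the 64 bits differ) then turns the distance into a similar/not-similar decision.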
3. Materials and Methods
3.1. Study Area and Data Acquisition
3.2. Quantitative Analysis of Existing Algorithms
- Constructing local feature descriptors;
- Matching features.
3.3. pHash Algorithm and Similarity Index
- Reduce size: The fastest way to remove high frequencies and detail is to shrink the image, so pHash starts from a small image; 32 × 32 pixels is a suitable size. This step is really done to simplify the Discrete Cosine Transform (DCT) computation that follows, not because it is needed to reduce the high frequencies.
- Reduce color: The image is converted to grayscale to further reduce the number of computations.
- Compute the DCT: The DCT separates the image into a collection of frequencies and scalar coefficients; a 32 × 32 DCT is used in this process.
- Reduce the DCT: Although the DCT output is 32 × 32, only the top-left 8 × 8 block is kept, since it represents the lowest frequencies in the image.
- Compute the average value.
- Reduce the DCT further.
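The steps above can be sketched as follows. This is a minimal, illustrative pHash implementation in pure Python, assuming the input has already been reduced to a 32 × 32 grayscale grid (steps 1-2). Excluding the DC term from the average follows common pHash variants and is an assumption here, not necessarily the paper's exact implementation.

```python
import math

def dct_lowfreq(block, keep=8):
    """Top-left `keep` x `keep` coefficients of the orthonormal 2D DCT-II."""
    n = len(block)
    cos = [[math.cos((2 * x + 1) * u * math.pi / (2 * n)) for x in range(n)]
           for u in range(keep)]

    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    return [[alpha(u) * alpha(v) * sum(block[x][y] * cos[u][x] * cos[v][y]
                                       for x in range(n) for y in range(n))
             for v in range(keep)]
            for u in range(keep)]

def phash(gray32):
    """64-bit pHash of a 32 x 32 grayscale grid (steps 3-6 above)."""
    coeffs = dct_lowfreq(gray32)                 # compute the DCT, keep top-left 8 x 8
    flat = [c for row in coeffs for c in row]
    mean = sum(flat[1:]) / 63                    # average value, DC term excluded
    return [1 if c > mean else 0 for c in flat]  # reduce further: one bit per coefficient

# Toy 32 x 32 "photo": a smooth diagonal gradient (pixel values 0-248).
img = [[(x + y) * 4 for x in range(32)] for y in range(32)]
bright = [[v + 5 for v in row] for row in img]   # uniformly brightened copy

print(phash(img) == phash(bright))  # True: brightening shifts only the DC term
```

The final comparison illustrates why DCT-based hashing is robust to brightness changes, consistent with the high accuracy on the "light" pair in the results table.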
3.3.1. Compute the Average Value
3.3.2. Compute the Cultural Heritage Similarity Index
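A natural reading of a hash-based similarity index, consistent with the percentage values reported in the Results tables (e.g., 97 for two 64-bit hashes differing in 2 bits), is the share of matching bits expressed as a percentage. The formula below is our hedged reconstruction, not a quotation of the paper's exact definition.

```python
def similarity_index(hash1, hash2):
    """Similarity (%) of two equal-length bit sequences: share of matching bits."""
    assert len(hash1) == len(hash2)
    matching = sum(b1 == b2 for b1, b2 in zip(hash1, hash2))
    return round(100 * matching / len(hash1))

h1 = [1] * 64
h2 = [1] * 62 + [0, 0]            # differs from h1 in 2 of 64 bits
print(similarity_index(h1, h1))   # 100
print(similarity_index(h1, h2))   # 97  (62/64 = 96.875, rounded)
```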
3.4. SIFT Operator and Feature Point Matching
- Constructing scale space;
- Searching for an extremum in scale space;
- Determining the accurate location of extreme points;
- Constructing direction parameters of feature points.
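Once feature points and descriptors are available, the matching step is commonly implemented with Lowe's nearest/second-nearest ratio test. The sketch below uses toy 4-dimensional descriptors (real SIFT descriptors are 128-dimensional); the data and the 0.8 threshold are illustrative assumptions, not values from the paper.

```python
import math

def euclidean(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def ratio_test_match(descs_a, descs_b, ratio=0.8):
    """Match descriptors in A to B, keeping only unambiguous matches.

    A candidate survives only if its nearest neighbour in B is clearly
    closer than the second nearest (Lowe's ratio test).
    Returns (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, da in enumerate(descs_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(descs_b))
        (best, j), (second, _) = dists[0], dists[1]
        if best < ratio * second:
            matches.append((i, j))
    return matches

# Toy descriptors: A[0] has one clear partner in B; A[1] has two near-ties.
A = [[0.0, 1.0, 0.0, 1.0], [0.5, 0.5, 0.5, 0.5]]
B = [[0.0, 1.0, 0.1, 1.0], [0.5, 0.5, 0.5, 0.4], [0.5, 0.5, 0.5, 0.6]]

print(ratio_test_match(A, B))  # [(0, 0)]: the ambiguous point is rejected
```

Rejecting ambiguous matches in this way is what gives SIFT-style matching its high robustness at the cost of operating speed, as summarized in the operator comparison table.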
4. Results
5. Discussion
- The Yungang Grottoes are a large, complex, and immovable cultural heritage site of huge volume, and independent high-precision scans of individual Buddhist statues are lacking. Although high-precision 3D scanning can provide more detail, it produces huge amounts of point cloud data, which entails complex data processing and high costs. To reduce the difficulty of 3D data processing, we chose 2D photos instead of 3D data, as they are simpler and more economical to work with.
- The objective of this paper was to develop a quick, convenient, and objective way to judge the similarity of cultural relics. With 2D photos as the data source, anyone can collect data with a consumer digital camera, which lowers the barrier to data collection.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Su, B. Study on China’s Grottoes Temple; Cultural Relics Press: Beijing, China, 1996. [Google Scholar]
- Guo, F.; Jiang, G. Investigation into rock moisture and salinity regimes: Implications of sandstone weathering in Yungang Grottoes, China. Carbonates Evaporites 2015, 30, 1–11. [Google Scholar] [CrossRef]
- International Charter for the Conservation and Restoration of Monuments and Sites (The Venice Charter). In IInd International Congress of Architects and Technicians of Historic Monuments; Venice, Italy, 1964; pp. 25–31. [Google Scholar]
- Kashihara, K. Three-dimensional reconstruction of artifacts based on a hybrid genetic algorithm. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Korea, 14–17 October 2012; pp. 900–905. [Google Scholar]
- Bilde, P.G.; Handberg, S. Ancient Repairs on Pottery from Olbia Pontica. Am. J. Archaeol. 2012, 116, 461–481. [Google Scholar] [CrossRef]
- Fantini, M.; De Crescenzio, F.; Persiani, F.; Benazzi, S.; Gruppioni, G. 3D restitution, restoration and prototyping of a medieval damaged skull. Rapid Prototyp. J. 2008, 14, 318–324. [Google Scholar] [CrossRef]
- Hou, M.; Zhang, X.; Wu, Y.; Hu, Y. 3D Laser Scanning Modeling and Application on Dazu Thousand-hand Bodhisattva in China. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2014, XL-4, 81–85. [Google Scholar] [CrossRef] [Green Version]
- Hou, M.; Yang, S.; Hu, Y.; Wu, Y.; Shu, Z.; Zhang, X. A novel method for the virtual restoration of cultural relics based on a 3D fine model. DYNA-Ingeniería e Industria 2015, 90, 307–313. [Google Scholar]
- Hou, M.; Li, S.; Jiang, L.; Wu, Y.; Hu, Y.; Yang, S.; Zhang, X. A New Method of Gold Foil Damage Detection in Stone Carving Relics Based on Multi-Temporal 3D LiDAR Point Clouds. ISPRS Int. J. Geo Inf. 2016, 5, 60. [Google Scholar] [CrossRef] [Green Version]
- Hou, M.; Yang, S.; Hu, Y.; Wu, Y.; Jiang, L.; Zhao, S.; Wei, P. Novel Method for Virtual Restoration of Cultural Relics with Complex Geometric Structure Based on Multiscale Spatial Geometry. ISPRS Int. J. Geo Inf. 2018, 7, 353. [Google Scholar] [CrossRef] [Green Version]
- Zhou, M.; Geng, G.; Wu, Z.; Shui, W. A Virtual Restoration System for Broken Pottery. In Proceedings of the Conference on Pattern Recognition, Quebec, QC, Canada, 11–15 August 2002; Volume 11, p. 15. [Google Scholar]
- Lu, M.; Kamakura, M.; Zheng, B.; Takamatsu, J.; Nishino, K.; Ikeuchi, K. Clustering Bayon face towers using restored 3D shape models. In Proceedings of the 2011 Second International Conference on Culture and Computing IEEE, Kyoto, Japan, 20–22 October 2011; pp. 39–44. [Google Scholar]
- Lu, M.; Zheng, B.; Takamatsu, J.; Nishino, K.; Ikeuchi, K. Preserving the khmer smile: Classifying and restoring the faces of bayon. In Proceedings of the 12th International conference on Virtual Reality, Archaeology and Cultural Heritage, Prato, Italy, 18–21 October 2011; pp. 161–168. [Google Scholar]
- Wang, H.; He, Z.; He, Y.; Chen, D.; Huang, Y. Average-face-based virtual inpainting for severely damaged statues of Dazu Rock Carvings. J. Cult. Heritage 2019, 36, 40–50. [Google Scholar] [CrossRef]
- UNESCO, Yungang Grottoes, World Heritage List. Available online: https://whc.unesco.org/en/list/1039/ (accessed on 1 March 2021).
- Liu, X. An analysis of the development of Yungang Grottoes art in the Northern Wei Dynasty. J. Jinzhong Univ. 2020, 5, 63–69, 92. [Google Scholar]
- Zhang, Z. Focusing on the musical cultural value of Yungang Grottoes. Chin. Musicol. 2019, 2, 11. [Google Scholar]
- Liu, D. The cultural function of Yungang Grottoes in the Northern Wei Dynasty. Sci. Educ. Guide 2020, 9, 156–157. [Google Scholar]
- Peng, S. The model of Yungang Grottoes statues bred by multi culture. Art Obs. 2018, 1, 117–121. [Google Scholar]
- Liu, R.Z.; Zhang, B.J.; Zhang, H.; Shi, M.F. Deterioration of Yungang Grottoes: Diagnosis and research. J. Cult. Heritage 2011, 12, 494–499. [Google Scholar] [CrossRef]
- Fryskowska, A.; Stachelek, J. A no-reference method of geometric content quality analysis of 3D models generated from laser scanning point clouds for hBIM. J. Cult. Heritage 2018, 34, 95–108. [Google Scholar] [CrossRef]
- Bai, S. The discovery and research of the stele records of the great Cave Temple rebuilt in Xijing Wuzhou mountain of the Jin Dynasty: Discussing some problems about Yungang Grottoes with Professor Nagahiro Toshio of Japan. J. Peking Univ. 1982, 2, 30–50. [Google Scholar]
- Li, Y. The discovery of the stele records of the great Cave Temple rebuilt in Xijing Wuzhou mountain of the Jin Dynasty. Chin. Cult. Heritage 2007, 5, 40–43. [Google Scholar]
- Yan, W. Research on Yungang Grottoes; Guangxi Normal University Press: Guilin, China, 2003. [Google Scholar]
- Liang, S.; Lin, H.; Liu, D. Architecture of Northern Wei Dynasty in Yungang Grottoes. Buddh. Cult. 2014, 5, 108–115. [Google Scholar]
- Liang, S.; Liu, D. Investigation Report on Ancient Architecture in Datong: The Northern Wei Dynasty Architecture in Yungang Grottoes; Society for the Study of Chinese Architecture: Beijing, China, 1936. [Google Scholar]
- Mizuno, S.; Nagahiro, T. Yun-Kang: The Buddhist Cave Temples of the Fifth Century AD in North China; 16 vols.; Kyoto University: Kyoto, Japan, 1952–1956. (In Japanese) [Google Scholar]
- Nagahiro, T. Yungang Diary: A Survey of Buddhist Grottoes During the War; Cultural Relics Press: Beijing, China, 2009. [Google Scholar]
- Wu, Y. Application of digital technology in cave 18 of Yungang Grottoes. Urban Surv. 2015, 6, 89–93. [Google Scholar]
- Diao, C.; Li, Z.; Zhang, Z.; Ning, B.; He, Y. To achieve real immersion: The 3d virtual and physical reconstruction of cave 3 and cave 12 of Yungang Grottoes. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, XLII-2/W9, 297–303. [Google Scholar] [CrossRef] [Green Version]
- Li, L.; Wang, J. Application of 3D laser scanning technology and 3D printing technology in reproduction of Yungang Grottoes. Surv. World 2020, 5, 88–92. [Google Scholar]
- Yan, K. Quantitative Study on Weathering Disease of Yungang Grottoes by Multi View Image 3D Reconstruction Technology; Minzu University of China: Beijing, China, 2020. [Google Scholar]
- Brown, B.J.; Toler-Franklin, C.; Nehab, D.; Burns, M.; Dobkin, D.; Vlachopoulos, A.; Weyrich, T. A system for high-volume acquisition and matching of fresco fragments: Reassembling Theran wall paintings. ACM Trans. Graph. (TOG) 2008, 27, 1–9. [Google Scholar] [CrossRef]
- Di Paola, F.; Milazzo, G.; Spatafora, F. Computer Aided Restoration Tools to Assist the Conservation of an Ancient Sculpture: The Colossal Statue of Zeus Enthroned. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2017, XLII-2/W5, 177–184. [Google Scholar] [CrossRef] [Green Version]
- Arbace, L.; Sonnino, E.; Callieri, M.; Dellepiane, M.; Fabbri, M.; Idelson, A.I.; Scopigno, R. Innovative uses of 3D digital technologies to assist the restoration of a fragmented terracotta statue. J. Cult. Heritage 2013, 14, 332–345. [Google Scholar] [CrossRef]
- Kamakura, M.; Oishi, T.; Takamatsu, J.; Ikeuchi, K. Classification of Bayon faces using 3D models. Virtual Syst. Multimed. 2005, 751–760. [Google Scholar]
- Lu, M.; Zheng, B.; Takamatsu, J.; Nishino, K.; Ikeuchi, K. 3D shape restoration via matrix recovery. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 306–315. [Google Scholar]
- Lanitis, A.; Stylianou, G.; Voutounos, C. Virtual restoration of faces appearing in byzantine icons. J. Cult. Heritage 2012, 13, 404–412. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, K.; Chen, X.; Zhang, S.; Geng, G. A multi feature fusion method for reassembly of 3D cultural heritage artifacts. J. Cult. Heritage 2018, 33, 191–200. [Google Scholar] [CrossRef]
- Hassan, M.; Bhagvati, C. Structural Similarity Measure for Color Images. Int. J. Comput. Appl. 2012, 43, 7–12. [Google Scholar] [CrossRef]
- Xia, P.; Zhang, L.; Li, F. Learning similarity with cosine similarity ensemble. Inf. Sci. 2015, 307, 39–52. [Google Scholar] [CrossRef]
- Jeong, S.; Won, C.S.; Gray, R.M. Image retrieval using color histograms generated by Gauss mixture vector quantization. Comput. Vis. Image Underst. 2004, 94, 44–66. [Google Scholar] [CrossRef]
- Liu, X.; Wang, M.; Song, Z. Multi-Modal Image Registration Based on Multi-Feature Mutual Information. J. Med. Imaging Health Inform. 2019, 9, 153–158. [Google Scholar] [CrossRef]
- Chamoso, P.; Rivas, A.; Martín-Limorti, J.J.; Rodríguez, S. A hash based image matching algorithm for social networks. In International Conference on Practical Applications of Agents and Multi-Agent Systems; Springer: Cham, Switzerland, 2017; pp. 183–190. [Google Scholar]
- Yang, B.; Gu, F.; Niu, X. Block mean value based image perceptual hashing. In Proceedings of the 2006 International Conference on Intelligent Information Hiding and Multimedia, Pasadena, CA, USA, 18–20 December 2006; pp. 167–172. [Google Scholar]
- Howarth, P.; Rüger, S. Robust texture features for still-image retrieval. IEE Proc. Vision Image Signal Process. 2005, 152, 868. [Google Scholar] [CrossRef] [Green Version]
- Eitz, M.; Hildebrand, K.; Boubekeur, T.; Alexa, M. Sketch-Based Image Retrieval: Benchmark and Bag-of-Features Descriptors. IEEE Trans. Vis. Comput. Graph. 2011, 17, 1624–1636. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Gu, M.; Zhang, D.; Liu, J. Image Sensing Hash Algorithms Based on Cluster Analysis and SIFT. Data Commun. 2015, 3, 36–40. [Google Scholar]
- Hua, W.; Hu, Y.; Hou, M.; Zhang, X. Research on partition method of mural photo collection. Geogr. Inf. World 2017, 3, 107–112. [Google Scholar]
- Huang, J.; Li, X.; Chen, B.; Yang, D. A Comparative Study on Image Similarity Algorithms Based on Hash. J. Dali Univ. 2017, 12, 8. [Google Scholar]
- Zauner, C. Implementation and Benchmarking of Perceptual Image Hash Functions. 2010. Available online: http://phash.org/docs/pubs/thesis_zauner.pdf (accessed on 22 March 2021).
- Monga, V.; Evans, B.L. Robust perceptual image hashing using feature points. In Proceedings of the 2004 International Conference on Image Processing, 2004. ICIP’04, Singapore, 24–27 October 2004; pp. 677–680. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G.R. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Karami, E.; Prasad, S.; Shehata, M. Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images. arXiv 2017, arXiv:1710.02726. [Google Scholar]
- Chien, H.J.; Chuang, C.C.; Chen, C.Y.; Klette, R. When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6. [Google Scholar]
- Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–10. [Google Scholar]
- Dang, Q.B.; Le, V.P.; Luqman, M.M.; Coustaty, M.; Tran, C.D.; Ogier, J.M. Camera-based document image retrieval system using local features-comparing SRIF with LLAH, SIFT, SURF and ORB. In Proceedings of the 2015 13th International Conference on Document Analysis and Recognition (ICDAR), Tunis, Tunisia, 23–26 August 2015; pp. 1211–1215. [Google Scholar]
- Rong, G.; Xu, G.; Xing, G.; Jiang, J.; Sun, Y. Feature point matching algorithm based on SIFT, ORB, RANSAC. J. Xinyu Univ. 2019, 24, 33–37. [Google Scholar]
- Wang, T.; Yu, X.; Liu, Z. Fast Image Matching Algorithm Base on Hash. J. Chongqing Univ. Sci. Technol. 2017, 3, 75–78. [Google Scholar]
- Lerones, P.M.; Llamas, J.; Gómez-García-Bermejo, J.; Zalama, E.; Oli, J.C. Using 3D digital models for the virtual restoration of polychrome in interesting cultural sites. J. Cult. Heritage 2014, 15, 196–198. [Google Scholar] [CrossRef]
| Image1 | Image2 | aHash Time (s) | dHash Time (s) | pHash Time (s) | aHash Accuracy | dHash Accuracy | pHash Accuracy |
|---|---|---|---|---|---|---|---|
| original | original | 0.31 | 0.09 | 4.04 | 100.00% | 100.00% | 100.00% |
| original | light | 0.31 | 0.09 | 4.08 | 98.44% | 96.88% | 97.66% |
| original | resize | 0.28 | 0.07 | 3.96 | 93.75% | 95.31% | 96.88% |
| original | contrast | 0.28 | 0.07 | 4.21 | 98.44% | 98.44% | 99.61% |
| original | sharp | 0.32 | 0.07 | 4.15 | 82.81% | 85.94% | 83.20% |
| original | blur | 0.29 | 0.07 | 4.15 | 76.56% | 73.44% | 95.28% |
| original | color | 0.28 | 0.07 | 3.91 | 98.44% | 100.00% | 100.00% |
| original | rotate | 0.29 | 0.07 | 4.07 | 56.25% | 57.81% | 57.81% |
| Comparative Method | Operating Speed | Accuracy | Robustness |
|---|---|---|---|
| SIFT operator | Slow | Highest accuracy | Highest robustness |
| SURF operator | Fast | High accuracy | High robustness |
| BRISK operator | Fast | Low accuracy | Low robustness |
| ORB operator | Fastest | Lowest accuracy | Lowest robustness |
Values of the similarity index (%) between Buddhist statues:

| Statue | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | No. 6 | No. 7 | No. 8 | No. 9 | No. 10 | No. 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No. 1 | 100 | 94 | 94 | 84 | 84 | 78 | 88 | 78 | 84 | 88 | 59 |
| No. 2 | 94 | 100 | 97 | 88 | 84 | 75 | 91 | 81 | 88 | 91 | 63 |
| No. 3 | 94 | 97 | 100 | 91 | 88 | 72 | 94 | 81 | 88 | 94 | 66 |
| No. 4 | 84 | 88 | 91 | 100 | 81 | 72 | 91 | 88 | 97 | 97 | 66 |
| No. 5 | 84 | 84 | 88 | 81 | 100 | 66 | 84 | 72 | 78 | 81 | 69 |
| No. 6 | 78 | 75 | 72 | 72 | 66 | 100 | 66 | 66 | 75 | 69 | 36 |
| No. 7 | 88 | 91 | 94 | 91 | 84 | 66 | 100 | 81 | 88 | 94 | 72 |
| No. 8 | 78 | 81 | 81 | 88 | 72 | 66 | 81 | 100 | 88 | 88 | 63 |
| No. 9 | 84 | 88 | 88 | 97 | 78 | 75 | 88 | 88 | 100 | 94 | 63 |
| No. 10 | 88 | 91 | 94 | 97 | 81 | 69 | 94 | 88 | 94 | 100 | 66 |
| No. 11 | 59 | 63 | 66 | 66 | 69 | 36 | 72 | 63 | 63 | 66 | 100 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hua, W.; Hou, M.; Qiao, Y.; Zhao, X.; Xu, S.; Li, S. Similarity Index Based Approach for Identifying Similar Grotto Statues to Support Virtual Restoration. Remote Sens. 2021, 13, 1201. https://doi.org/10.3390/rs13061201