Article

Integration of Spatial and Co-Existence Relationships to Improve Administrative Region Target Detection in Map Images

1
School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China
2
Research Center of Geospatial Big Data Application, Chinese Academy of Surveying and Mapping, Beijing 100830, China
3
Key Laboratory of GIS, Ministry of Education, Wuhan University, Wuhan 430079, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2024, 13(6), 216; https://doi.org/10.3390/ijgi13060216
Submission received: 10 January 2024 / Revised: 20 April 2024 / Accepted: 17 June 2024 / Published: 20 June 2024
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)

Abstract

Administrative regions are fundamental geographic elements on maps, making their detection in map images crucial to enhancing intelligent map interpretation. However, existing methods in this field depend primarily on the texture features within the images and do not account for the influence of spatial and co-existence relationships among different targets. In this study, taking the administrative regions of the Chinese Mainland, Taiwan, Tibet and Henan as test targets, we employed the spatial and co-existence relationships of pairs of targets to improve target detection performance. First, the four regions were detected using a simple Single-Target Cascading detection model based on RetinaNet. Subsequently, the detection results were adjusted with the spatial and co-existence relationships of each pair of targets. The adjusted outcomes demonstrate a significant increase in target detection accuracy for the Chinese Mainland target, with precision rising from 0.62 to 0.96 and the F1 score from 0.76 to 0.88. This study contributes to the advancement of intelligent map interpretation.

1. Introduction

Maps, important geographic information carriers representing one of the three common international languages, play a significant role in the expression of geographic space [1] and are applied in various fields of research, including economic activities and trade, industrial production, environmental protection, disaster prevention and mitigation, and epidemic prevention [2,3,4,5,6,7,8,9,10]. Currently, a large number of raster maps are available on the internet because of advancements in computer science and internet technology; they are derived from scanned historical maps, contemporary atlases, vector-to-raster maps with original vector data missing, hand-drawn maps, etc. The geographic information locked in raster maps is crucial for researchers to understand long-term development in an area without manual on-site data acquisition [11,12,13].
Map interpretation is important for the development of geospatial intelligence [12]. Administrative regions represent fundamental geographic information for depicting locations where current research is taking place [4] and providing a framework for public administration [9]. The recognition of administrative regions plays a fundamental role in map interpretation. As administrative regions are objects in map images, the research complexities faced in administrative region detection can be transformed into object/target detection in computer vision. However, generally, within administrative region symbols, there are combined cartographic symbols, including points, lines and areas, which makes the detection of administrative regions in maps more complicated than that of an individual symbol. Most traditional target detection methods first develop hand-crafted features derived from Sobel Canny operator recognition [14,15], Haar-like feature detectors [16], HOG detectors [17], DPMs [18], etc., and then detect the target with shallow classifiers such as SVMs [19] or boosting regressors [20]. Among them, the highest recorded accuracy is only 33.7% (mAP), achieved with DPM-v5 on the VOC07 dataset [21].
With the development of artificial intelligence, deep learning networks have become powerful techniques for image classification [22,23], object detection [24] and object semantic segmentation [25]. In terms of raster map content recognition, most deep learning-based methods have been developed for map classification [26,27,28,29], map style classification [30], specific region target detection [31,32,33] and map textual annotation identification [34,35]. In general, there are two main paradigms in image object detection based on deep learning that have been widely used: two-stage detectors (i.e., R-CNN [36], SPP Net [37], Fast R-CNN [38] and Faster R-CNN [39]) and one-stage detectors (i.e., YOLO [40], SSD [41] and RetinaNet [42]). The former includes two steps, i.e., proposing regions and then classifying and regressing bounding boxes, while the latter directly predicts the bounding boxes and class labels in a single pass [24]. Compared with traditional methods that rely on manually designed features, object detection based on deep learning leverages its learning capability to automatically learn more discriminative feature representations, thereby enhancing the accuracy and robustness of object detection. However, map administrative region detection is rarely performed by using deep learning-based object detection [31,32,33]. In one study [32], the authors proposed a method based on Faster R-CNN [39] to predict specific region objects with multi-scale feature fusion by using fixed-aspect-ratio anchors. In another study [33], the authors introduced and compared two typical map-based geographic object detection methods based on YOLO and RetinaNet [42] and proved that the latter was superior to the former in accuracy and speed. In a previous study [43], a framework based on CNNs and GCNs for the extraction and recognition of geological map symbols was proposed. 
In another study [44], a three-stage framework based on deep learning to automatically recognize symbols in geological maps was proposed. Finally, in the study described in [45], the authors compared Faster R-CNN and RetinaNet in the detection of ship objects in maps.
In the region target detection studies discussed above, only the color and texture features of regions were extracted, while the spatial and co-existence relationships among regions were not accounted for. However, different targets in maps have very strong spatial correlations. For example, on maps, the Beijing target is located northeast of Henan, and the Chinese Mainland target tends to include Henan and Tibet. Therefore, in this study, we aimed to adjust the target detection probability to improve performance on targets with lower detection accuracy by integrating the spatial and co-existence relationships of map targets with higher detection accuracy. This paper is structured as follows: First, we introduce the map images of four targets, i.e., the Chinese Mainland, Tibet, Taiwan and Henan. Then, we detail the proposed methodology for detection probability adjustment with spatial and co-existence relationships, together with the test and evaluation results for the Chinese Mainland and Henan targets. We next report visual and quantitative evaluations of the proposed method. This is followed by a discussion and concluding remarks.

2. Data Preparation

In this study, we took four administrative regions in map images as the study targets, including the Chinese Mainland, Tibet, Taiwan and Henan, as shown in Figure 1. We manually annotated the targets with a web-based annotation tool by drawing bounding boxes to contain all the pixels of each target; the annotation included the directory of the map image file, the coordinates of the top left corner of the bounding box, the width and height of the bounding box and the label of the target (as shown in Table 1).
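The annotation record described above can be sketched as a simple data structure; the field names here are illustrative assumptions, not the exact schema of Table 1:

```python
from dataclasses import dataclass

@dataclass
class TargetAnnotation:
    image_path: str  # directory of the map image file
    x: int           # x coordinate of the bounding box's top-left corner (pixels)
    y: int           # y coordinate of the bounding box's top-left corner (pixels)
    width: int       # bounding box width (pixels)
    height: int      # bounding box height (pixels)
    label: str       # target label, e.g. "Taiwan" or "Henan"

# Hypothetical example record
ann = TargetAnnotation("maps/china_0001.png", 120, 85, 340, 260, "Henan")
```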
Using a focused web crawler, we retrieved 2725 map images from an internet image search engine with the keywords Chinese Mainland, Taiwan, Tibet and map; of these, 2109 and 616 images were randomly assigned to the training and test sets, respectively. In total, 1677 Taiwan targets, 449 Tibet targets, 370 Chinese Mainland targets and 947 Henan targets were manually annotated for detection model training, and 585, 452, 993 and 286 targets, respectively, for model testing. The specific distribution is shown in Table 2.

3. Methods

The method in this study consisted of four stages: RetinaNet-based detection model training and prediction (Section 3.1), spatial relationship construction (Section 3.2), co-existence relationship construction (Section 3.3) and adjustment of target detection probability with constructed relationships (Section 3.4). We first built four RetinaNet models for the target detection of the four different administrative regions, and each image was predicted by using Single-Target Cascading detection [31] to represent whether the four targets existed. The predicted results were first assessed by estimating whether the targets co-existed by using the spatial relationships between different targets according to the two criteria of area ratio and centroid orientation. Subsequently, the dependent and independent probabilities of co-existence relationships were used to adjust the detection probability (known as the prediction confidence score) to improve detection performance. The methodology framework is shown in Figure 2.

3.1. RetinaNet-Based Target Detection Models

RetinaNet is a simple, one-stage object detector composed of two main backbone networks for generating multi-scale convolutional feature maps of the input image, i.e., ResNet50 and Feature Pyramid Network (FPN), where FPN is built on top of ResNet50. After the FPN backbone, there are two subnetworks: one for classifying anchor boxes and the other for regressing from anchor boxes to ground-truth boxes. The RetinaNet network structure used in this study is shown in Figure 3.
We separately trained four basic RetinaNet models for the four map administrative region targets. The prediction strategy used in this study was Single-Target Cascading detection [31]. A full description of this detection model is beyond the scope of this study, but more information can be found in [31].
Each image was predicted by using Single-Target Cascading detection to obtain original results, indicating the presence or absence of the four targets; these represented the input for the following phase, in which they were adjusted with spatial and co-existence relationships.

3.2. Spatial Relationship Construction

The area ratio and centroid orientation were employed to measure the spatial relationships between different targets, and their valid ranges were estimated from the training samples as statistical criteria for determining whether targets co-existed. Different targets occupy relatively stable extents on a map, so the area ratio between two targets remains relatively stable despite varying map projections and distance distortion. The area ratio between the bounding boxes of two targets in each image in the training sample set was calculated with Equation (1), and its valid range was derived with the 3σ rule [46,47] as the criterion for determining whether the targets co-existed.
$$ratio_{ij} = \frac{area_{Bbox\_j}}{area_{Bbox\_i}} \qquad (1)$$
where $ratio_{ij}$ is the area ratio between target $i$ and target $j$, and $area_{Bbox\_i}$ and $area_{Bbox\_j}$ are the areas of the bounding boxes of targets $i$ and $j$, respectively.
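As a sketch, Equation (1) and the 3σ valid range can be computed as follows; the bounding-box tuple format and the toy training pairs are assumptions for illustration:

```python
from statistics import mean, stdev

def area(bbox):
    # bbox = (x, y, width, height) in pixels
    return bbox[2] * bbox[3]

def area_ratio(bbox_i, bbox_j):
    # Equation (1): ratio_ij = area(Bbox_j) / area(Bbox_i)
    return area(bbox_j) / area(bbox_i)

def three_sigma_range(values):
    # Valid range under the 3-sigma rule: mean +/- 3 * standard deviation
    mu, sigma = mean(values), stdev(values)
    return mu - 3 * sigma, mu + 3 * sigma

# Toy training pairs (Chinese Mainland bbox, Taiwan bbox), one per image
pairs = [((0, 0, 800, 600), (650, 400, 60, 40)),
         ((10, 10, 790, 590), (640, 390, 58, 42)),
         ((5, 5, 810, 610), (660, 410, 62, 38))]
ratios = [area_ratio(chi, tai) for chi, tai in pairs]
low, high = three_sigma_range(ratios)
```

A new image's ratio is then tested against `(low, high)` to judge whether the pair of detections is spatially plausible.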
In addition to the area ratio above, representing the spatial extent of the proportional relationship between two targets, we also introduced centroid orientation to measure the spatial location relationship between two targets, defined as the tangent of the angle formed by the two centroids of the two targets’ bounding boxes (Figure 4). Similarly to the area ratio, the valid range for the centroid orientation was obtained with the 3σ rule by calculating the tangent values for each image in the training sample, and it was used to determine whether the targets co-existed.
$$tan_{ij} = \frac{y_{Bbox\_i,Bbox\_j}}{x_{Bbox\_i,Bbox\_j}} \qquad (2)$$
where $tan_{ij}$ is the centroid orientation between target $i$ and target $j$, and $y_{Bbox\_i,Bbox\_j}$ and $x_{Bbox\_i,Bbox\_j}$ are the $y$- and $x$-differences, respectively, between the centroid points of the bounding boxes of targets $i$ and $j$.
In the context of predictions generated with the RetinaNet-based Single-Target Cascading model, two administrative region targets are considered to co-exist only when they concurrently satisfy the criteria of both the area ratio and centroid orientation. Otherwise, each target exists independently of the other.
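A minimal implementation of this co-existence test, assuming (x, y, width, height) bounding boxes in image coordinates (where y grows downward, so the tangent's sign follows the image axes):

```python
def centroid(bbox):
    # Centre point of an (x, y, width, height) bounding box
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def centroid_orientation(bbox_i, bbox_j):
    # Equation (2): tangent of the angle between the two centroids,
    # i.e. the y-difference over the x-difference of the centroid points
    # (assumes the centroids are not vertically aligned)
    (xi, yi), (xj, yj) = centroid(bbox_i), centroid(bbox_j)
    return (yj - yi) / (xj - xi)

def co_exist(bbox_i, bbox_j, ratio_range, tan_range):
    # Targets i and j are taken to co-exist only when BOTH the area ratio
    # and the centroid orientation fall inside their valid (3-sigma) ranges
    ratio = (bbox_j[2] * bbox_j[3]) / (bbox_i[2] * bbox_i[3])  # Equation (1)
    tan_ij = centroid_orientation(bbox_i, bbox_j)
    return (ratio_range[0] <= ratio <= ratio_range[1]
            and tan_range[0] <= tan_ij <= tan_range[1])
```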

3.3. Co-Existence Relationship Construction

Spatial relationships were used to automatically confirm whether two targets co-existed or not. The dependent probability of target co-existence and the independent probability of the target were also calculated in the training samples to further estimate the probability of co-existence, which, in turn, was used to improve the Single-Target Cascading model detection results. Specifically, based on all training samples with target labels, there were seven co-existence patterns (Table 3). The dependent and independent probabilities were calculated with Equations (3) and (4), where Equation (3) represents the probability of target j co-existing with target i in a map image.
The following dependent and independent probability formulas were derived from the co-existence patterns shown in Table 3.
$$S_{ij} = \frac{num_{ij}}{num_i} \qquad (3)$$
$$S_i^{alone} = \frac{num_{i0}}{num_{all}} \qquad (4)$$
where $S_{ij}$ represents the probability that target $j$ exists when target $i$ exists in a map image, i.e., the dependent probability of target $j$ with respect to target $i$; $num_i$ represents the number of images containing target $i$; $num_{ij}$ represents the number of images containing both target $i$ and target $j$; $S_i^{alone}$ represents the independent probability that an image solely contains target $i$ without including other targets; $num_{all}$ represents the total number of training images; and $num_{i0}$ represents the number of sample images that exclusively contain target $i$.
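Both probabilities reduce to simple frequency counts over the training images; the counts below are hypothetical (only the 2109-image training-set size comes from Section 2):

```python
def dependent_probability(num_ij, num_i):
    # Equation (3): S_ij = num_ij / num_i, the probability that target j
    # exists in an image given that target i exists
    return num_ij / num_i

def independent_probability(num_i0, num_all):
    # Equation (4): S_i_alone = num_i0 / num_all, the probability that an
    # image contains target i and no other target
    return num_i0 / num_all

# Hypothetical counts: of 300 training images containing the Chinese Mainland
# target, 285 also contain Taiwan; 15 of the 2109 images contain it alone
s_chi_tai = dependent_probability(285, 300)
s_chi_alone = independent_probability(15, 2109)
```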

3.4. Adjustment of Target Detection Probability with Constructed Relations

We aimed to reduce the commission error (see Section 4.4) in the detection results of one target (hereafter referred to as target i) by using the dependent probability of target co-existence and the independent probability of the target. When the co-existence of two or three targets is confirmed by a spatial relationship, the detection probability of target i can be adjusted with the detection results of the other targets in the same image. Specifically, if targets i and j co-exist in an image, the detection probability of target i is adjusted by multiplying target j's original detection probability (known as the prediction confidence score) by the dependent probability of target i with respect to target j (Equation (5)). If target i exists independently of other targets, its detection probability is adjusted by multiplying its original detection probability by its independent probability (Equation (6)). The final detection probability of target i is the maximum of all its adjusted detection probability values (Equation (7)).
$$P_{ij\_Sup} = P_j^{ori} \times S_{ji} \qquad (5)$$
$$P_i^{alone} = P_i^{ori} \times S_i^{alone} \qquad (6)$$
$$P_i^{final} = \max\left(P_{ij\_Sup},\; P_i^{alone}\right) \qquad (7)$$
where $P_j^{ori}$ is the prediction confidence score of target $j$ determined with the Single-Target Cascading model; $S_{ji}$ is the probability that target $i$ exists given that target $j$ exists; $P_{ij\_Sup}$ is the confidence score of target $i$ supported by the co-existing target $j$; $P_i^{ori}$ is the original prediction confidence of target $i$ determined with its detection model, with $P_i^{ori} = 0$ when target $i$ is not detected in the image; $S_i^{alone}$ is the probability of target $i$ existing independently of other targets; $P_i^{alone}$ is the confidence score of target $i$ existing alone; and $P_i^{final}$, the maximum over all adjusted scores $P_{ij\_Sup}$ and $P_i^{alone}$, is the final prediction confidence score of target $i$.
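The adjustment in Equations (5)-(7) can be sketched as follows; the list-based interface is an assumption for illustration, not the authors' implementation:

```python
def adjusted_confidence(p_i_ori, supports, s_i_alone):
    """Final confidence of target i, per Equations (5)-(7).

    p_i_ori   -- original confidence score of target i (0 if not detected)
    supports  -- list of (p_j_ori, s_ji) pairs, one per target j confirmed
                 by the spatial criteria to co-exist with target i
    s_i_alone -- probability of target i existing alone
    """
    # Equation (5): score supported by each co-existing target j
    supported = [p_j_ori * s_ji for p_j_ori, s_ji in supports]
    # Equation (6): score for target i existing independently
    alone = p_i_ori * s_i_alone
    # Equation (7): the final score is the maximum of all adjusted scores
    return max(supported + [alone])

# A detection (0.77) with no co-existing supporting targets collapses to its
# "alone" score; with a strong supporting target its confidence is preserved
rejected = adjusted_confidence(0.77, [], 0.01)
kept = adjusted_confidence(0.77, [(0.90, 0.95)], 0.01)
```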

4. Experiment and Results

The performance of the proposed methods was evaluated with precision, recall and the F1 score based on the test data. Precision measures the accuracy of the detected targets, and recall measures the integrity of detection. The F1 score is a combination of precision and recall and provides a more comprehensive basis to evaluate a model.
$$precision = \frac{TP}{TP + FP} \qquad (8)$$
$$recall = \frac{TP}{TP + FN} \qquad (9)$$
$$F1\,score = \frac{2 \times precision \times recall}{precision + recall} \qquad (10)$$
where TP represents the number of correctly predicted targets (true positives), FP represents the number of incorrectly predicted targets (false positives), and FN represents the number of targets missed during detection (false negatives).
The IOU (intersection over union) was used to evaluate how well the predicted bounding boxes matched the ground-truth data (Equation (11)). The predicted bounding boxes were considered to be correct when the IOU reached an empirical threshold of 0.5. We used the F1 score with an IOU threshold of 0.5 as the final evaluation indicator of the model, where B p is the predicted bounding box and B g t is the hand-labeled bounding box of a target.
$$IOU = \frac{area(B_{gt} \cap B_p)}{area(B_{gt} \cup B_p)} \qquad (11)$$
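Under the same assumed (x, y, width, height) box format, the IOU and the evaluation metrics can be sketched as:

```python
def iou(box_a, box_b):
    # Equation (11): intersection over union of two (x, y, w, h) boxes
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    iw = max(0, min(ax2, bx2) - max(box_a[0], box_b[0]))
    ih = max(0, min(ay2, by2) - max(box_a[1], box_b[1]))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    # Equations (8)-(10)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```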

4.1. Original Results of RetinaNet-Based Target Detection

Figure 5 shows the original results for the four targets obtained using Single-Target Cascading detection. The Taiwan target performed best, with the highest F1 score (0.94), while the Henan target had the lowest (0.75). The precision of the Henan and Chinese Mainland detection results was significantly lower (only 0.64 and 0.62, respectively) than that of the other two targets, whereas all the metrics for the Taiwan and Tibet targets were satisfactory. Therefore, in this study, we used the original detection results of the Taiwan and Tibet targets to adjust those of the Henan and Chinese Mainland targets with spatial and co-existence relationships.

4.2. Spatial Relationship between Targets

(1)
Area ratio
$ratio_{ChiTib} = area_{Bbox\_Tib}/area_{Bbox\_Chi}$ and $ratio_{ChiTai} = area_{Bbox\_Tai}/area_{Bbox\_Chi}$ were used to measure the area ratios of the Chinese Mainland–Tibet and Chinese Mainland–Taiwan targets according to Equation (1). In parallel, $ratio_{HenTib} = area_{Bbox\_Tib}/area_{Bbox\_Hen}$ and $ratio_{HenTai} = area_{Bbox\_Tai}/area_{Bbox\_Hen}$ were used to measure those of the Henan–Tibet and Henan–Taiwan targets.
Table 4 shows that the area ratio of the Chinese Mainland–Taiwan targets in the training dataset ranged from 0.002 to 0.020, with a significant interval (mean ± 3σ) from 0 to 0.0114, while that of the Chinese Mainland–Tibet targets ranged from 0.0706 to 0.169, with a significant interval (mean ± 3σ) from 0.0753 to 0.167. We also plotted the area ratio distributions for the Chinese Mainland and Taiwan targets, which were found to be normal, with a μ of 0.00528 and a σ of 0.00203; a similar distribution was observed for the Chinese Mainland and Tibet targets, with a μ of 0.121 and a σ of 0.0153. It was evident that the two normal distributions were separable (Figure 6). This can be used to determine whether the area ratio between two targets is reliable; that is to say, two targets probably co-exist if their area ratio is within the possible range, and vice versa. Table 5 shows the area ratio distribution for the Henan–Taiwan and Henan–Tibet targets in the training dataset, and Figure 7 shows a histogram of the area ratio distributions for the Henan–Taiwan and Henan–Tibet targets.
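As a quick check, the significant interval for the Chinese Mainland–Taiwan area ratio follows directly from the fitted normal parameters reported above (μ = 0.00528, σ = 0.00203):

```python
mu, sigma = 0.00528, 0.00203  # fitted normal parameters reported above
low, high = mu - 3 * sigma, mu + 3 * sigma
low = max(low, 0.0)  # an area ratio cannot be negative, so clip the lower bound at 0
# (low, high) is approximately (0, 0.0114), matching the interval in Table 4
```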
(2)
Centroid orientation
$tan_{ChiTib} = y_{Bbox\_Chi,Bbox\_Tib}/x_{Bbox\_Chi,Bbox\_Tib}$ and $tan_{ChiTai} = y_{Bbox\_Chi,Bbox\_Tai}/x_{Bbox\_Chi,Bbox\_Tai}$ were employed to measure the centroid orientations of the Chinese Mainland–Tibet and Chinese Mainland–Taiwan targets in the training dataset according to Equation (2). In parallel, $tan_{HenTib} = y_{Bbox\_Hen,Bbox\_Tib}/x_{Bbox\_Hen,Bbox\_Tib}$ and $tan_{HenTai} = y_{Bbox\_Hen,Bbox\_Tai}/x_{Bbox\_Hen,Bbox\_Tai}$ were employed to measure those of the Henan–Tibet and Henan–Taiwan targets.
Table 6 shows that the centroid orientation of the Chinese Mainland–Taiwan targets in the training maps ranged from 0.404 to 1.413, with a significant interval (mean ± 3σ) from 0.393 to 1.119, while that of the Chinese Mainland–Tibet targets ranged from −1 to 0.233, with a significant interval (mean ± 3σ) from −0.753 to 0.0988. We also plotted the centroid orientation distributions of the Chinese Mainland and Taiwan targets, which were found to be normal, with a μ of 0.756 and a σ of 0.121; a similar distribution was observed for the Chinese Mainland and Tibet targets, with a μ of −0.327 and a σ of 0.142. It was evident that the two normal distributions were separable (Figure 8). Similarly to the area ratio, this can be used to determine whether the centroid orientation between two targets is reliable; that is to say, two targets probably co-exist if the centroid orientation is within the possible range, and vice versa. Table 7 shows the centroid orientation distribution parameters and interval estimates for the Henan–Taiwan and Henan–Tibet targets in the training dataset, and Figure 9 shows a histogram of the centroid orientation distributions for the Henan–Taiwan and Henan–Tibet targets.
The valid ranges for the area ratio and centroid orientation were determined by using the significant intervals above. For target prediction based on the Single-Target Cascading model, only when two administrative region targets simultaneously fall within the valid ranges (i.e., significant intervals) of area ratio and centroid orientation are they considered co-existent; otherwise, each target exists independently of the other target.

4.3. Co-Existence Relationship Construction for Chinese Mainland Target

Table 8 shows the number of different co-existence patterns in the training dataset.
Table 9 lists the dependent probabilities for different co-existence patterns of the Chinese Mainland–Tibet, Chinese Mainland–Taiwan, Henan–Tibet and Henan–Taiwan targets, determined according to Equation (3), as well as the independent probability of the Chinese Mainland and Henan targets existing alone, determined according to Equation (4).

4.4. Adjusted Detection Results for Chinese Mainland Target

Table 10 shows the original prediction confidence score adjustment and final prediction confidence calculation according to Equations (5)–(7).
Figure 10 shows the detection results for the Chinese Mainland before (a) (incorrect results) and after (b) (correct results) their adjustment with spatial and co-existence relationships; their comparison indicates that the prediction confidence score of the Chinese Mainland target was decreased by applying spatial and co-existence relationships. For example, in the first row of Figure 10, the prediction confidence score values are 0.77 and 0.0011 before and after adjustment, respectively. This visually confirmed that the incorrect detection result could be successfully eliminated, which improved the precision score of the Chinese Mainland target detection results. Similarly, the incorrect detection results of the Henan target could be successfully eliminated, as shown in Figure 11.
Figure 12 shows that some test images with correct Chinese Mainland target detection results were mistakenly removed after adjustment.
Figure 13 shows that some test images with correct Henan target detection results were mistakenly removed after the adjustment.
Figure 14 shows a comparison of the map detection accuracy rates for the Chinese Mainland target before and after the adjustment of the prediction probability. Following prediction confidence score correction, the detection of the Chinese Mainland target in the same test dataset showed significant improvement, with an R, P and F1 score of 0.82, 0.96 and 0.88, respectively. The probability adjustment resulted in a 12% increment in the F1 score, even though the recall value decreased; precision increased by 34% to a value of 0.96, which indicates that the commission error of the Chinese Mainland target was significantly eliminated. Regarding the Henan target, the F1 score increased by 2% after the adjustment of the prediction probability, as shown in Figure 15.

5. Discussion

5.1. Improvement in Precision and F1 Score of Target Detection Results Following Adjustment

The Single-Target Cascading detection model based on RetinaNet identifies targets in images by extracting the color and texture features of objects; however, it is challenging to integrate the relationships between targets into the model. Previous studies have rarely taken the relationships between targets into account. However, maps, as carriers of geographic information, display very strong spatial correlations among different administrative region targets.
In this study, we integrated the spatial and co-existence relationships between the targets with higher accuracy (i.e., Taiwan and Tibet) to optimize the model detection results of the targets with lower accuracy (i.e., Chinese Mainland and Henan), based on the original target detection results. The area ratio and centroid orientation were used to measure the spatial relationships, which were used to determine whether two targets co-existed or not. Moreover, we obtained the dependent and independent probabilities of the Chinese Mainland and Henan targets by calculating the distribution of the co-existence patterns and used them to measure the co-existence relationships between different targets in a map; then, they were used to adjust the prediction confidence scores of the Chinese Mainland and Henan targets. Thus, the relationships between different targets act as a tool to improve the target detection model’s accuracy.
The results of this study clearly show that the spatial and co-existence relationships between the Chinese Mainland and Taiwan, as well as those between the Chinese Mainland and Tibet, significantly contributed to the improvement in detection accuracy for the Chinese Mainland target. The Chinese Mainland target detection results were adjusted by using the high-precision model (Taiwan and Tibet) with these relationships to obtain a 34% improvement in precision and a 12% improvement in F1 score, which indicate strong usability. Regarding the Henan target, the detection results showed an 8% improvement in precision and a 2% improvement in F1 score. We noticed that the area ratio distribution for the Henan–Tibet ( r a t i o H e n T i b ) targets had a high σ value (0.659), which means that it was not stable and directly led to poor precision and F1 score adjustment.

5.2. Loss of Recall in Target Detection Results Following Adjustment

A small portion of the correct Chinese Mainland and Henan target detection results were erroneously removed after the adjustment, as shown in Figure 12 and Figure 13, because the area ratio or centroid orientation fell outside the valid range. This erroneous removal of targets reduced the recall score (see Figure 14 and Figure 15) of the Chinese Mainland and Henan target detection results.
The Chinese Mainland and Henan targets’ prediction confidence scores were decreased by S C h i a l o n e and S H e n a l o n e , respectively, if the Taiwan and Tibet targets were not identified by the models in the same map, which means that the recall loss for other targets accumulated in that for the Chinese Mainland and Henan targets. However, the valid ranges for the area ratio and centroid orientation could also affect the recall score for the Chinese Mainland and Henan targets. In future research, we will calculate the significant intervals from a growing dataset or infer them from input images with a network rather than obtaining them by performing statistical analysis on a small, manually annotated, fixed training dataset.

5.3. Limitations of Spatial Relationship Measurement

There are also some limitations to the proposed adjustment method. The valid ranges for the area ratio and centroid orientation are essential to spatial relationship construction, and they can easily influence co-existence determination. In this study, we derived the valid ranges from the statistical analysis of the training samples with the simple 3σ rule [46,47]. The distribution of the training samples was biased relative to the overall image data. Therefore, comprehensive insights into the overall data distribution remain elusive due to the limitations of our sample dataset, and the generalization of the valid ranges acquired in this study appears to be limited. Moreover, the bias of the valid ranges decreased the recall score of the Chinese Mainland target's detection results in this study. Spatial and co-existence relationship patterns should be explored with more advanced methods, such as dot-product attention [48], to investigate their complex nature.

6. Conclusions

In contrast to existing research focused on optimizing the structure of object detection models, in this study, we established a method to improve target detection results with low accuracy by adjusting those with higher accuracy with spatial and co-existence relationships.
In this study, the Chinese Mainland, Taiwan, Tibet and Henan were taken as target examples, and the Single-Target Cascading model was applied to map images for original target detection. At the same time, spatial and co-existence relationships were used to improve target detection precision for the Chinese Mainland and Henan targets. We used the area ratio and centroid orientation to measure the spatial relationship between two targets and the dependent and independent probabilities of different targets to measure their co-existence relationships.
In this study, the relationship-based adjustment traded a small loss in recall for a substantial gain in precision in the detection of the Chinese Mainland and Henan targets, yielding a better balance between the two metrics.
The results demonstrate that the use of spatial and co-existence relationships between different administrative regions can significantly improve overall detection accuracy by making use of the high-accuracy advantage of one model to improve another. In this work, the Chinese Mainland and Henan targets, which had smaller training datasets, obtained higher F1 score values after the detection result adjustment. In conclusion, the relationships between different targets in a map are valuable, as they can reduce the labor-intensive nature of training data preparation. The results of this study provide a basis for improving the efficiency of map search and intelligent map interpretation.

Author Contributions

Conceptualization, Yong Wang, Fu Ren and Kaixuan Du; methodology, Kaixuan Du; validation, ** Liu; data curation, Yong Wang, Kaixuan Du, Jiaxin Hou and Zewei You; writing—original draft preparation, Kaixuan Du; writing—review and editing, Kaixuan Du, ** [grant number AR2205] and Special Business Expenses of the Ministry of Natural Resources [grant number 121136000000180004].

Data Availability Statement

The data used in this study are publicly available from multiple public websites. The map images used in this study can be downloaded from the following websites: bzdt.ch.mnr.gov.cn (accessed on 27 November 2021), www.photophoto.cn (accessed on 27 November 2021), www.ce.cn (accessed on 27 November 2021), www.gov.cn (accessed on 27 November 2021), www.cnr.cn (accessed on 27 November 2021), etc.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The four study targets. Target 1 in the red bounding box: Chinese Mainland; Target 2 in the blue bounding box: Tibet; Target 3 in the yellow bounding box: Taiwan; Target 4 in the green bounding box: Henan.
Figure 2. The methodology framework in this study.
Figure 3. Basic network structure of RetinaNet.
Figure 4. Illustration of centroid orientation relationship between different targets.
Figure 5. A comparison of the evaluation metrics of the four targets’ original detection results.
Figure 6. Histogram of area ratio distributions for Chinese Mainland–Taiwan (blue) and Chinese Mainland–Tibet (red) targets.
Figure 7. Histogram of area ratio distributions for Henan–Taiwan (blue) and Henan–Tibet (red) targets.
Figure 8. Histogram of centroid orientation distributions for Chinese Mainland–Taiwan (blue) and Chinese Mainland–Tibet (red) targets.
Figure 9. Histogram of centroid orientation distributions for Henan–Taiwan (blue) and Henan–Tibet (red) targets.
Figure 10. Incorrect target detection results for Chinese Mainland target before (a) and after (b) adjustment with spatial and co-existence relationships. Red box: Chinese Mainland target; green box: Tibet target; blue box: Taiwan target.
Figure 11. Incorrect target detection results for Henan target before (a) and after (b) adjustment with spatial and co-existence relationships. Yellow box: Henan target; green box: Tibet target; blue box: Taiwan target.
Figure 12. Correct detection results for the Chinese Mainland target (a) that were removed after adjustment (b) with spatial and co-existence relationships. Red box: Chinese Mainland target; green box: Tibet target; blue box: Taiwan target.
Figure 13. Correct detection results for the Henan target (a) that were removed after adjustment (b) with spatial and co-existence relationships. Yellow box: Henan target; green box: Tibet target; blue box: Taiwan target.
Figure 14. Changes in evaluation metrics for Chinese Mainland target after adjustment.
Figure 15. Changes in evaluation metrics for Henan target after adjustment.
Table 1. Target annotation format.
No. | File Path | bbox_x | bbox_y | Width | Height | Label
1dataset/img_078.jpg28290218Chinese mainland
2dataset/img_079.jpg3520566444Chinese mainland
3dataset/img_254.jpg645820660851Taiwan
4dataset/img_255.jpg1570476565Taiwan
5dataset/img_411.jpg44483520745Tibet
6dataset/img_412.jpg15220291365Tibet
7dataset/img_446.jpg609399736511Henan
8dataset/img_450.jpg259204309251Henan
Table 2. Sample distribution of different targets.
Administrative Region | Training Targets | Test Targets | Total
Chinese Mainland | 370 | 993 | 1363
Taiwan | 1677 | 585 | 2262
Tibet | 449 | 452 | 901
Henan | 947 | 286 | 1233
Total | 3443 | 2316 | 5759
Table 3. Co-existence patterns of different targets in the training samples (where 1 indicates the existence and 0 the non-existence of a target in an image).
Co-Existence Pattern | Chinese Mainland (or Henan) | Taiwan | Tibet
1 | 1 | 1 | 1
2 | 1 | 0 | 1
3 | 1 | 0 | 0
4 | 0 | 1 | 0
5 | 1 | 1 | 0
6 | 0 | 1 | 1
7 | 0 | 0 | 1
Table 4. Area ratio distributions for Chinese Mainland–Taiwan (ratio_Chi–Tai) and Chinese Mainland–Tibet (ratio_Chi–Tib) targets in training dataset.
Area Ratio (ratio_ij) | Min | Max | Mean | σ | Mean − 3σ | Mean + 3σ
ratio_Chi–Tai | 0.002 | 0.020 | 0.00528 | 0.00203 | −0.00081 (set 0) | 0.0114
ratio_Chi–Tib | 0.0706 | 0.169 | 0.121 | 0.0153 | 0.0753 | 0.167
Table 5. Area ratio distributions for Henan–Taiwan (ratio_Hen–Tai) and Henan–Tibet (ratio_Hen–Tib) targets in training dataset.
Area Ratio (ratio_ij) | Min | Max | Mean | σ | Mean − 3σ | Mean + 3σ
ratio_Hen–Tai | 0.0963 | 0.756 | 0.250 | 0.0638 | 0.0586 | 0.441
ratio_Hen–Tib | 4.723 | 8.867 | 6.195 | 0.659 | 4.218 | 8.172
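The Mean ± 3σ columns above implement the 3σ rule used in the paper: a candidate pair whose measured value falls outside [mean − 3σ, mean + 3σ] is treated as implausible. A minimal sketch (function names are ours; the clamp at zero mirrors the "(set 0)" entry in Table 4, since an area ratio cannot be negative):

```python
def interval_3sigma(mean, sigma, nonnegative=False):
    """Return the (mean - 3*sigma, mean + 3*sigma) plausibility interval."""
    lo, hi = mean - 3 * sigma, mean + 3 * sigma
    if nonnegative and lo < 0:
        lo = 0.0  # ratios cannot be negative, cf. "(set 0)" in Table 4
    return lo, hi

def is_plausible(value, mean, sigma, nonnegative=False):
    """True if the measured value lies inside the 3-sigma interval."""
    lo, hi = interval_3sigma(mean, sigma, nonnegative)
    return lo <= value <= hi

# Chinese Mainland-Tibet area ratio statistics from Table 4
assert is_plausible(0.12, 0.121, 0.0153, nonnegative=True)
assert not is_plausible(0.30, 0.121, 0.0153, nonnegative=True)
```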
Table 6. Centroid orientation distribution parameters and interval estimates for Chinese Mainland–Taiwan (tan_Chi–Tai) and Chinese Mainland–Tibet (tan_Chi–Tib) targets in training dataset.
Centroid Orientation (tan_ij) | Min | Max | Mean | σ | Mean − 3σ | Mean + 3σ
tan_Chi–Tai | 0.404 | 1.413 | 0.756 | 0.121 | 0.393 | 1.119
tan_Chi–Tib | −1 | 0.233 | −0.327 | 0.142 | −0.753 | 0.0988
Table 7. Centroid orientation distribution parameters and interval estimates for Henan–Taiwan (tan_Hen–Tai) and Henan–Tibet (tan_Hen–Tib) targets in training dataset.
Centroid Orientation (tan_ij) | Min | Max | Mean | σ | Mean − 3σ | Mean + 3σ
tan_Hen–Tai | 0.810 | 1.958 | 1.328 | 0.148 | 0.884 | 1.772
tan_Hen–Tib | −0.136 | 0.207 | −0.0613 | 0.0409 | −0.184 | 0.0614
Table 8. Distribution of co-existence patterns in training samples.
Co-Existence Pattern | 1 | 2 | 3 | 4 | 5 | 6 | 7
Number (Chinese Mainland) | 1305 | 28 | 3 | 585 | 3 | 5 | 180
Number (Henan) | 839 | 7 | 0 | 506 | 82 | 471 | 201
Table 9. Dependent and independent probabilities for different targets and corresponding co-existence patterns.
Co-Existence Pattern | Probability Calculation | Value
Chinese Mainland–Tibet | S_Tib^Chi = num_Tib^Chi / num_Tib = (1305 + 28) / (1305 + 28 + 5 + 180) | 0.88
Chinese Mainland–Taiwan | S_Tai^Chi = num_Tai^Chi / num_Tai = (1305 + 3) / (1305 + 585 + 3 + 5) | 0.69
Chinese Mainland alone | S_Chi^alone = num_Chi^0 / num_all = 3 / 2109 | 0.0014
Henan–Tibet | S_Tib^Hen = num_Tib^Hen / num_Tib = (839 + 7) / (839 + 7 + 471 + 210) | 0.554
Henan–Taiwan | S_Tai^Hen = num_Tai^Hen / num_Tai = (839 + 82) / (839 + 506 + 82 + 471) | 0.485
Henan alone | S_Hen^alone = num_Hen^0 / num_all = 0 / 2106 | 0
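The dependent probabilities in Table 9 can be reproduced from the pattern counts in Table 8 (Chinese Mainland row), with target-presence pattern sets read off Table 3; the set names and helper function below are ours:

```python
# Pattern counts for the Chinese Mainland row of Table 8 (patterns 1-7)
counts = {1: 1305, 2: 28, 3: 3, 4: 585, 5: 3, 6: 5, 7: 180}

# Patterns in which each target is present, per Table 3
CHI = {1, 2, 3, 5}  # Chinese Mainland
TAI = {1, 4, 5, 6}  # Taiwan
TIB = {1, 2, 6, 7}  # Tibet

def dependent_probability(counts, target, given):
    """P(target present | conditioning target present)."""
    joint = sum(counts[p] for p in target & given)
    marginal = sum(counts[p] for p in given)
    return joint / marginal

s_tib_chi = dependent_probability(counts, CHI, TIB)  # ~0.88, as in Table 9
s_tai_chi = dependent_probability(counts, CHI, TAI)  # ~0.69, as in Table 9
```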
Table 10. Prediction confidence adjustment for Chinese Mainland and Henan targets.
Co-Existence Pattern | Confidence Score Adjustment Formula
Chinese Mainland–Taiwan | P_Chi^Tai_Sup = P_Tai^ori × S_Tai^Chi
Chinese Mainland–Tibet | P_Chi^Tib_Sup = P_Tib^ori × S_Tib^Chi
Chinese Mainland alone | P_Chi^alone = P_Chi^ori × S_Chi^alone
Final detection probability | P_Chi^final = max(P_Chi^Tai_Sup, P_Chi^Tib_Sup, P_Chi^alone)
Henan–Taiwan | P_Hen^Tai_Sup = P_Tai^ori × S_Tai^Hen
Henan–Tibet | P_Hen^Tib_Sup = P_Tib^ori × S_Tib^Hen
Henan alone | P_Hen^alone = P_Hen^ori × S_Hen^alone
Final detection probability | P_Hen^final = max(P_Hen^Tai_Sup, P_Hen^Tib_Sup, P_Hen^alone)
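Under these formulas, the adjusted score is dominated by whichever supporting target is most confident, so a lone Chinese Mainland detection is heavily down-weighted. A hedged sketch for the Chinese Mainland case, plugging in the probabilities from Table 9 (the function name is ours):

```python
def adjust_confidence(p_chi_ori, p_tai_ori, p_tib_ori,
                      s_tai_chi=0.69, s_tib_chi=0.88, s_chi_alone=0.0014):
    """Adjusted Chinese Mainland confidence following Table 10.

    Each supporting target's original confidence is weighted by its
    dependent probability, and the final score is the maximum of the
    supported and stand-alone scores.
    """
    p_tai_sup = p_tai_ori * s_tai_chi   # Chinese Mainland-Taiwan support
    p_tib_sup = p_tib_ori * s_tib_chi   # Chinese Mainland-Tibet support
    p_alone = p_chi_ori * s_chi_alone   # Chinese Mainland alone
    return max(p_tai_sup, p_tib_sup, p_alone)
```

For example, a Chinese Mainland box detected alongside a confident Tibet box retains most of its support, while a detection with neither supporting target present keeps only a tiny stand-alone score.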

Share and Cite

MDPI and ACS Style

Du, K.; Ren, F.; Wang, Y.; Che, X.; Liu, J.; Hou, J.; You, Z. Integration of Spatial and Co-Existence Relationships to Improve Administrative Region Target Detection in Map Images. ISPRS Int. J. Geo-Inf. 2024, 13, 216. https://doi.org/10.3390/ijgi13060216


