Article

RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention

School of Mechanical Engineering, University of South China, Hengyang 421001, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2023, 23(24), 9651; https://doi.org/10.3390/s23249651
Submission received: 30 September 2023 / Revised: 2 December 2023 / Accepted: 4 December 2023 / Published: 6 December 2023

Abstract

The problem of registering point clouds in scenarios with low overlap is explored in this study. Previous methodologies depended on having a sufficient number of repeatable keypoints to extract correspondences, making them less effective in partially overlapping environments. In this paper, a novel learning network is proposed to optimize correspondences from sparse keypoints. First, a multi-layer channel sampling mechanism is proposed to enrich the information in point clouds: keypoints are filtered and fused at multiple resolutions through feature-weight filtering to form patches. Moreover, a template matching module is devised, comprising a self-attention graph convolutional neural network and a cross-attention network. This module matches contextual features and refines the correspondences in the overlapping areas of patches, ultimately improving correspondence accuracy. Experimental results demonstrate the robustness of our model across various datasets, including ModelNet40, 3DMatch, 3DLoMatch, and KITTI. Notably, our method excels in low-overlap scenarios, showcasing superior performance.

1. Introduction

Point cloud registration is a crucial task within computer vision and robotics, frequently applied in significant domains like autonomous driving [1], 3D reconstruction [2], and simultaneous localization and mapping (SLAM) [3]. In recent years, with the development of point cloud processing technology and deep learning technology, point cloud coding algorithms such as [4,5] have further optimized the processing of large-scale point cloud data and improved the accuracy of point cloud registration in large-scale scenes. However, registration in low-overlap environments remains a challenging task.
A classical point cloud registration method is the Iterative Closest Point (ICP) algorithm [6]. It starts with an initial transform guess and then iteratively updates the transformation matrix to minimize the distance between corresponding points until convergence or a set number of iterations is reached. Its disadvantage is that it is sensitive to the initial transformation and to local minima. The Fast Global Registration (FGR) [7] algorithm addresses the drawbacks of ICP through global alignment, but is still prone to failure in noisy environments.
Deep learning-based methods can be divided into three categories. The first category [8,9] is based on the global features of the point cloud, which is treated as a whole to regress the transformation parameters. Although this type of method is robust to noise, it is not effective for registering partially overlapping point clouds. The second class of methods [10,11], based on correspondence learning, forms correspondences from high-dimensional point features and iteratively minimizes the feature distances to optimize the pose. Because this type of approach extracts point correspondences, it is more robust in partially overlapping point cloud registration, but it is still susceptible to noise. The last class [12,13,14] uses a two-stage approach to point cloud correspondences: the local descriptors of downsampled keypoints are first learned for matching, and a pose estimator then recovers the relative transformation. This strategy achieves state-of-the-art performance on benchmark datasets, but the downsampling it relies on inherently introduces sparsity into the point cloud features, which reduces the repeatability of the correspondences and thus loses its advantage in low-overlap regions.
A recent approach [15] bypasses direct keypoint detection. It mitigates the limitations of keypoint detection by employing a coarse-to-fine strategy, similar to two-dimensional correspondence approaches [16,17], to address keypoint sparsity. It extracts correspondences along a superpoint patch-to-point path: superpoint features are extracted from the original point cloud, merged into superpoint patches for matching, and the matched correspondences are then extended to dense points. The advantage of this method is that it relaxes the strict point matching requirement into a looser patch overlap requirement, which effectively reduces the need for a large number of repeatable keypoints. However, it also emphasizes the importance of keypoint reliability, and while the need for many keypoints is reduced, their sparsity remains unchanged. Therefore, in this paper, more emphasis is placed on compensating for this inherent sparsity by capturing contextual features.
Taking inspiration from references [15,18], our initial focus lies in enhancing the contextual features of patches. With this objective in mind, a matching module based on a graph convolutional neural network was designed, the core of which consists of two modules: the self-attention graph convolutional neural network and the cross-attention module. Graph convolutional networks focus on different points when processing point cloud data and dynamically adjust weights based on the relationships between them. This adaptability enables the model to better capture local and global features in the point cloud, thereby improving the modeling of template points. Secondly, the cross-attention module enables the model to effectively capture the information interaction between different channels of the point cloud by introducing cross-channel correlation. This helps integrate multi-channel information to understand feature relationships in the point cloud more fully; through this cross-channel interaction, the model adapts better to complex structures in point cloud data. In addition, previous methods use grid-sampled superpoints as nodes and divide patches through a point-to-node grouping strategy. Since the grid-sampled superpoints are sparse and loose, the local neighborhoods between point cloud pairs are inconsistent, which adversely affects subsequent point matching. To this end, we extract points with a more uniform field-of-view distribution and high repeatability as nodes, use the feature pyramid to score nodes across various receptive fields, and incorporate multi-level point cloud sampling during patch fusion. When sampling the point cloud at different levels, the density of sampled points is regulated through non-maximum suppression. Multi-channel sampling and point matching comparison allow more comprehensive point cloud information to be acquired, resulting in improved correspondence with template points.

2. Related Work

2.1. Traditional Point Cloud Registration Methods

The Iterative Closest Point (ICP) [6] algorithm has retained its practical importance since its introduction. Its straightforward logic and ease of implementation have made it a staple method for estimating rigid-body transformations. It shows strong convergence when the initial deviation is relatively small, but its accuracy decreases when the initial bias is large, leading to local optima and sensitivity to noise. Consequently, an array of ICP-based variants [19,20,21] have emerged. These variants expand upon the original concept to address its limitations: they offer greater flexibility by accommodating multiple constraints such as distance, geometry, and normals, improving robustness against initial deviations, although this often comes at the cost of increased computational complexity.
Among alternative strategies, feature-based registration methods like Fast Point Feature Histograms (FPFH) [22] and Signature of Histograms of Orientations (SHOT) [23] have been developed to adapt to complex environments and noise by extracting local features. The Random Sample Consensus (RANSAC) [24] divides the point cloud into random subsets for local registration, effectively eliminating outliers, albeit at the expense of computational time. Conversely, the Fast Global Registration (FGR) [7] transforms the non-convex problem into a convex one through a smoothing mechanism, enabling rapid global registration. However, this method exhibits heightened sensitivity to errors.

2.2. Deep Learning-Based Methods

PointNet [25] is the first deep learning model that can directly process raw point cloud data without converting the point cloud into voxels or meshes. PointNetLK [8] therefore uses PointNet's ability to extract global features from a point cloud to achieve alignment via the LK optical flow method from classical image alignment. PCRNet [9], on the other hand, uses a multilayer perceptron (MLP) to solve for the transformation parameters, treating registration as a regression problem after extracting features with PointNet. However, although PointNet readily extracts global features of a point cloud, it loses local feature information, so these methods are not robust in noisy and partially overlapping environments.
In contrast to approaches relying on PointNet, DCP [10] uses a Dynamic Graph Convolutional Neural Network (DGCNN) [26] to extract local features from the original point cloud; the rotation matrix and translation parameters are then computed by singular value decomposition (SVD). RPMNet [27] introduces an auxiliary network that predicts optimal annealing parameters to derive soft point correspondences, integrating spatial coordinates and local geometric features. These approaches based on local correspondence learning solve part of the point cloud matching problem, but the networks fail to converge when the rotation angle is too large. Among feature point detection methods, D3Feat [14] utilizes a fully convolutional architecture for joint dense detection and description. Predator [12] predicts dense overlap scores while jointly estimating saliency scores and learning local descriptors, analysing the confidence that a point lies in an overlapping region. However, these methods show low robustness in low-overlap scenarios because they still inherently rely on repeatable keypoints.
In this work, a patch-to-point correspondence strategy is employed, and point cloud correspondences are established in a coarse-to-fine manner. The dependence on repeatable keypoints is reduced by composing patches. Meanwhile, the multi-channel sampling strategy yields a denser set of corresponding points, while the attention-based graph convolutional neural network enriches the correspondences of template points by interacting with contextual features. The performance of the various algorithms is compared in Section 4.

3. Methods

Point cloud registration involves aligning point cloud data of an object from distinct viewpoints or sensors onto the same coordinate system. The objective is to fit multi-frame images, enhance visual perception to facilitate understanding of the environment, and provide assistance in subsequent tasks.
The central challenge of point cloud registration is aligning two distinct point clouds, $P = \{ p_i \in \mathbb{R}^3 \mid i = 1, \ldots, N \}$ and $Q = \{ q_i \in \mathbb{R}^3 \mid i = 1, \ldots, M \}$, within a shared coordinate system. The task can be stated as follows: how can we determine a transformation matrix, denoted $T = \{ R, t \}$, that repositions point cloud $Q$ to attain optimal conformity with the spatial orientation of point cloud $P$? This can be characterized as an optimization problem:
$$\underset{R,\,t}{\operatorname{argmin}} \; \sum_{(p_i,\, q_i)} \left\| R\, p_i + t - q_i \right\|_2^2$$
where R denotes the rotation matrix and t denotes the translation parameter. We estimate the alignment transformation by finding point correspondences.
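Once correspondences are available, this objective admits a closed-form solution via the SVD of the (optionally weighted) cross-covariance matrix. The sketch below is our own illustration of that step, not the paper's released code; the function name and the optional per-pair weights are assumptions.

```python
import numpy as np

def solve_rigid(p, q, w=None):
    """Closed-form minimiser of sum_i w_i * ||R p_i + t - q_i||^2 for matched
    points p, q of shape (N, 3) and optional per-pair weights w of shape (N,)."""
    w = np.ones(len(p)) if w is None else np.asarray(w, dtype=float)
    w = w / w.sum()
    p_bar = (w[:, None] * p).sum(0)                      # weighted centroids
    q_bar = (w[:, None] * q).sum(0)
    H = (w[:, None] * (p - p_bar)).T @ (q - q_bar)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

With noisy correspondences, such a solver is typically wrapped in a robust scheme (e.g., RANSAC or the local-to-global strategy of Section 3.3) rather than applied once.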
The point cloud registration model is illustrated in Figure 1. KPConv and FPN are used together to downsample the input point cloud and extract features (Section 3.1). The points downsampled at three different resolution levels are selected as the reference points and the feature points of the correspondences to be matched. The graph convolutional neural matching module extracts the correspondences of the feature points (Section 3.2). Subsequently, the point matching module uses these correspondences to extend the alignment from the feature points to the entire dense point cloud (Section 3.3). Finally, the local-to-global alignment method estimates the transformation matrix.

3.1. Feature Extraction

The KPConv-FPN [28] backbone is used to downsample the initial point cloud and extract per-point features. Multiple resolution levels are sampled from the original point cloud $P$. These sampled levels have progressively lower resolution, which calls for a coarse-to-fine approach to learning correspondences. The points at the coarsest resolution, denoted $\hat{P}$, are regarded as the reference points to be aligned. The multi-level transition points, denoted $\bar{P}$, and the dense points, denoted $\tilde{P}$, are extracted independently, and their features are written as $\bar{F}^{P} \in \mathbb{R}^{|\bar{P}| \times \bar{d}}$ and $\tilde{F}^{P} \in \mathbb{R}^{|\tilde{P}| \times \tilde{d}}$, where $\bar{d}$ and $\tilde{d}$ are the corresponding feature dimensions, determined by the resolution level. For each node, a point-to-node grouping strategy is employed to construct a localized point patch around it. The features in $\bar{F}$ and $\tilde{F}$ are assigned to the nearest node, and the following equation determines the assignment weight:
$$w = \left\| \tilde{p} - \hat{p} \right\|_2 \big/ \left\| \bar{p} - \hat{p} \right\|_2$$
Based on this weight, at each level the points are assigned to their closest node, and the resulting patch is given by:
$$\mathcal{G}_i^{P} = \begin{cases} \left\{ \bar{p} \in \bar{P} \;\middle|\; i = \operatorname*{argmin}_{j} \|\bar{p} - \hat{p}_j\|_2,\ \hat{p}_j \in \hat{P} \right\}, & w > 1,\\[4pt] \left\{ \tilde{p} \in \tilde{P} \;\middle|\; i = \operatorname*{argmin}_{j} \|\tilde{p} - \hat{p}_j\|_2,\ \hat{p}_j \in \hat{P} \right\}, & w \leq 1. \end{cases}$$
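A minimal sketch of this point-to-node grouping, assuming `nodes` holds the coarsest-level points $\hat{P}$ and `points` one of the finer levels (the function and variable names are ours, for illustration only):

```python
import numpy as np

def group_points_to_nodes(points, nodes):
    """Assign each point (K, 3) to its nearest node (Nc, 3); returns one patch of
    point indices per node, i.e. the sets G_i described above."""
    d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=-1)  # (K, Nc) distances
    nearest = d.argmin(axis=1)                                           # closest node per point
    return [np.where(nearest == i)[0] for i in range(len(nodes))]

# In the multi-level scheme, the weight w above decides whether a patch is filled
# from the transition level (w > 1) or the dense level (w <= 1); the same grouping
# routine is applied to whichever level is selected.
```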

3.2. Graph Convolutional Neural Network Matching Module

Obtaining a global field of view is critical in a variety of computer vision tasks. We therefore employ an attention mechanism that uses broader contextual information to augment global properties. This yields greater geometric distinctiveness in the learned features, mitigating pronounced matching ambiguity and the surplus of aberrant matches, particularly in scenarios with limited overlap. Through the cross-attention mechanism, feature details from the two point clouds can be exchanged and fused, leading to the identification of pivotal points associated with overlapping regions. This effectively alleviates redundant point accumulation while simplifying the selection of point sets during alignment. In the cross-attention module, the initial embedding contains features from both the source and target point clouds.
Before connecting the feature codes of the two inputs, a graph neural network (GNN) is first used to further aggregate and strengthen their contextual relationships, respectively. The point sets P or Q are connected into a graph within the Euclidean space through the employment of the K-Nearest Neighbors (K-NN) algorithm. Subsequently, utilizing K-NN searches based on coordinates, the features are linked to centroid features.
$$f_i = \operatorname{cat}\!\left( x_i^{n},\; x_j^{n} - x_i^{n} \right)$$
In the above equation, $x^{n}$ denotes the feature encoding of point set $P$ at layer $n$, while "cat" denotes concatenation.
$$h_i = \operatorname{LeakyReLU}\!\left( \operatorname{norm}\!\left( \operatorname{conv}(f_i) \right) \right)$$
$$x_i^{\,n+1} = \max_{(i,j)\in\varepsilon} h_i(f_i)$$
where $h_i$ denotes the linear layer, norm denotes the normalization layer, max denotes element-wise channel max pooling, and $(i, j) \in \varepsilon$ denotes an edge of the graph.
$$x_i^{\,\mathrm{selfGNN}} = h_i\!\left( \operatorname{cat}\!\left( x_i^{0},\, x_i^{1},\, x_i^{2} \right) \right)$$
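A PyTorch-style sketch of one such aggregation step (an EdgeConv-like block) is given below; the neighbourhood size, layer widths, and normalization choice are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class EdgeConvBlock(nn.Module):
    """One aggregation step: f_i = cat(x_i, x_j - x_i), shared conv + norm + LeakyReLU,
    then channel-wise max over the k nearest neighbours."""
    def __init__(self, d_in, d_out, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv2d(2 * d_in, d_out, 1),
                                 nn.InstanceNorm2d(d_out),
                                 nn.LeakyReLU(0.2))

    def forward(self, x, xyz):
        # x: (B, d_in, N) point features; xyz: (B, N, 3) coordinates for the k-NN graph
        idx = torch.cdist(xyz, xyz).topk(self.k, dim=-1, largest=False).indices  # (B, N, k)
        B, d, N = x.shape
        nbr = torch.gather(x.unsqueeze(2).expand(B, d, N, N), 3,
                           idx.unsqueeze(1).expand(B, d, N, self.k))             # neighbour feats
        ctr = x.unsqueeze(-1).expand(B, d, N, self.k)                            # centre feats
        f = torch.cat([ctr, nbr - ctr], dim=1)                                   # (B, 2*d_in, N, k)
        return self.mlp(f).max(dim=-1).values                                    # (B, d_out, N)
```

Stacking three such blocks and fusing their outputs with a linear layer corresponds to the $x_i^{\,\mathrm{selfGNN}}$ feature in the last equation.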
Cross-attention stands as a prototypical module within point cloud registration tasks, fostering the exchange of features between two input point clouds. We apply self-attention processing to the three-channel point cloud data, combining the resultant data with convolved point cloud information, and subsequently activate the aggregated point cloud using a Multi-Layer Perceptron (MLP). The computation of the Cross-Attention (CA) module is detailed as follows:
$$m_i = \operatorname{att}\!\left( x_i,\, x_j,\, x_j \right)$$
Here, att denotes multi-head attention, with $x_i = x_i^{\,\mathrm{selfGNN}}$ and $x_j = x_j^{\,\mathrm{selfGNN}}$.
$$X_i^{CA} = x_i^{\,\mathrm{selfGNN}} + \operatorname{MLP}\!\left( \operatorname{cat}\!\left( x_i^{\,\mathrm{selfGNN}},\, m_i \right) \right)$$
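The cross-attention update in the last two equations can be written with standard multi-head attention; the sketch below is a hedged illustration (the feature width and head count are placeholders, not the paper's settings).

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Features of one cloud attend to the other: m_i = att(x_i, x_j, x_j),
    X^CA_i = x_i + MLP(cat(x_i, m_i))."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.att = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, x_i, x_j):
        # x_i: (B, N, d) self-GNN features of cloud P; x_j: (B, M, d) of cloud Q
        m_i, _ = self.att(x_i, x_j, x_j)
        return x_i + self.mlp(torch.cat([x_i, m_i], dim=-1))
```

The same block is applied symmetrically so that the features of Q also attend to P.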
When sampling correspondences, considering patch correspondences at different levels helps to obtain more robust point correspondences in the point matching stage. However, owing to the sparse and loose nature of block matching, many correct correspondences are overlooked during screening. Our proposed multi-channel convolutional network supplements these with additional effective point correspondences for accurate point cloud registration.

3.3. Point Matching Module

After obtaining the template point correspondence, the dense point correspondence will be derived based on the patch correspondence. Subsequently, the local-to-global registration (LGR) mechanism derives candidate matrices from the point correspondences engendered by each pairing of matching patches. From these candidates, the globally optimal transformation matrix is selected. Pertaining to the point-level, our approach exclusively employs localized point features gleaned from the backbone network. The underlying principle is that, upon resolving global ambiguity through template point matching, point-level matching is predominantly influenced by the proximity of the matched points. This strategic design enhances the overall robustness of the process.
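A hedged outline of local-to-global registration as hypothesize-and-verify over the per-patch correspondences follows; the names and the inlier threshold are our own, and the actual estimator may additionally iterate to refine the selected pose.

```python
import numpy as np

def kabsch(p, q):
    """Closed-form rigid fit (see the solve_rigid sketch in Section 3)."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    R = Vt.T @ np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, q.mean(0) - R @ p.mean(0)

def local_to_global(patch_corrs, tau=0.1):
    """patch_corrs: list of (P_i, Q_i) matched point arrays, one entry per matched patch pair.
    Fit one candidate transform per patch, then keep the one with the most global inliers."""
    all_p = np.concatenate([p for p, _ in patch_corrs])
    all_q = np.concatenate([q for _, q in patch_corrs])
    best, best_inliers = None, -1
    for p, q in patch_corrs:
        if len(p) < 3:                              # need at least 3 pairs for a rigid fit
            continue
        R, t = kabsch(p, q)
        inliers = (np.linalg.norm(all_p @ R.T + t - all_q, axis=1) < tau).sum()
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best
```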
For each template point correspondence, the optimal transport layer is employed to derive localized dense point correspondences between the point clouds. The process begins by calculating the cost matrix:
$$C_i = F_{x_i}^{P} \left( F_{y_i}^{Q} \right)^{T} \big/ \sqrt{\tilde{d}},$$
Following this, the cost matrix is expanded by appending a new row and column filled with learnable dustbin parameters. The Sinkhorn algorithm is then used to compute the soft assignment matrix, which is restored to its original size by discarding the last row and column. The resulting matrix serves as a confidence measure for candidate matches. Point correspondences are then selected through mutual top-k selection, whereby a point match is accepted if it falls within the k largest entries of both its row and its column. The point correspondences computed from each template point match are collected together to form the final global dense point correspondence set: $C = \bigcup_{i=1}^{N_c} C_i$.
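The optimal-transport step and the mutual top-k selection can be sketched as follows for a single patch pair; the iteration count, the learnable dustbin score, and k are assumptions, not the paper's settings.

```python
import torch

def sinkhorn_matching(score, bin_score, n_iters=50, k=3):
    """score: (n, m) matching scores for one patch pair (e.g. the cost matrix above);
    bin_score: learnable scalar appended as a dustbin row/column."""
    n, m = score.shape
    pad_col = bin_score * torch.ones(n, 1)
    pad_row = bin_score * torch.ones(1, m + 1)
    log_a = torch.cat([torch.cat([score, pad_col], dim=1), pad_row], dim=0)

    u, v = torch.zeros(n + 1), torch.zeros(m + 1)            # log-domain Sinkhorn iterations
    for _ in range(n_iters):
        u = -torch.logsumexp(log_a + v[None, :], dim=1)
        v = -torch.logsumexp(log_a + u[:, None], dim=0)
    z = torch.exp(log_a + u[:, None] + v[None, :])[:n, :m]   # drop the dustbin row/column

    # mutual top-k: keep (x, y) only if it is among the k largest of both its row and column
    row_mask = torch.zeros_like(z).scatter_(1, z.topk(min(k, m), dim=1).indices, 1.0).bool()
    col_mask = torch.zeros_like(z).scatter_(0, z.topk(min(k, n), dim=0).indices, 1.0).bool()
    return (row_mask & col_mask).nonzero(), z                 # correspondences and confidences
```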

3.4. Loss Functions

The loss functions of the graph convolutional neural matching module and the point matching module consist of the following two parts.
A metric learning approach is chosen to cultivate a feature space that facilitates assessing the similarity of samples. This approach is tailored to evaluate the matching relationships between patches more effectively, encouraging matches to converge and mismatches to diverge. We select the patches in $P$ that have at least one positive patch in $Q$, forming a set of anchor patches denoted $N$. Pairs of patches are categorized as positive if they share at least 10% overlap, and as negative if they do not overlap; all other pairs are omitted. For each anchor patch $\mathcal{G}_i \in N$, the corresponding loss takes the following form:
$$\mathcal{L}_c = \sum_{\mathcal{G}_i^{P} \in N} \Big[\, \left\| f(x_i^{a}) - f(x_i^{p}) \right\|_2^2 - \left\| f(x_i^{a}) - f(x_i^{n}) \right\|_2^2 + \alpha \,\Big]_{+}$$
where $x_i^{a}$ denotes the feature representation of the anchor, $x_i^{p}$ denotes the feature representation of the positive example that matches the anchor patch, and $x_i^{n}$ denotes the feature representation of the negative example that does not match the anchor patch. The parameter $\alpha$ is a constant margin that keeps the distance gap between positive and negative examples above a predefined threshold. The function $[z]_+ = \max(z, 0)$ corresponds to the Rectified Linear Unit (ReLU). By cultivating a suitable feature space and employing these strategies, we improve the discernment of matching relationships among patches, thereby enhancing the precision of point cloud registration.
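In code, this anchor-positive-negative loss is essentially a triplet margin loss on squared descriptor distances; a minimal sketch (the margin value is illustrative):

```python
import torch

def patch_triplet_loss(f_anchor, f_pos, f_neg, margin=0.5):
    """L_c = mean over anchors of [ ||f_a - f_p||^2 - ||f_a - f_n||^2 + margin ]_+
    f_anchor / f_pos / f_neg: (N, d) descriptors of anchor patches, their overlapping
    (>=10%) positive patches, and non-overlapping negative patches."""
    d_pos = (f_anchor - f_pos).pow(2).sum(dim=1)
    d_neg = (f_anchor - f_neg).pow(2).sum(dim=1)
    return torch.relu(d_pos - d_neg + margin).mean()
```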
The correspondence relationship of the real point set is sparser than that of the downsampled template points. The correspondence matrix $Z$ in point matching is therefore supervised with a negative log-likelihood loss.
During training, ground-truth template correspondences $\hat{C}_i$ are randomly sampled. For each $\hat{C}_i$, the set of ground-truth point correspondences $\mathcal{M}_i$ is extracted using the matching radius $r$. The sets of unmatched points in the two patches are denoted $\mathcal{I}_i$ and $\mathcal{J}_i$. The point matching loss for $\hat{C}_i$ is computed as:
$$\mathcal{L}_p = -\sum_{(x,y)\in\mathcal{M}_i} \log \bar{z}_{x,y}^{\,i} \;-\; \sum_{x\in\mathcal{I}_i} \log \bar{z}_{x,\,m_i+1}^{\,i} \;-\; \sum_{y\in\mathcal{J}_i} \log \bar{z}_{n_i+1,\,y}^{\,i}$$
The final loss consists of three terms: $L = L_c + L_m + L_p$, where $L_m$ differs from $L_p$ in that the intermediate layer $\bar{P}$ is used as the real point set. This approach establishes a link between multiple levels of point correspondences, and by exploiting them our method can compute point cloud feature parameters from a comprehensive perspective. This strategy not only improves the accuracy of point cloud alignment but also enriches the representation learning process by exploiting the hierarchical structure inherent in the data.
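For completeness, the point-level term $L_p$ above can be sketched as a negative log-likelihood over the dustbin-augmented assignment matrix produced by the Sinkhorn step; index names follow the equation, and the function signature is our own.

```python
import torch

def point_matching_loss(log_z, matches, unmatched_p, unmatched_q):
    """log_z: (n+1, m+1) log soft-assignment for one patch pair, incl. dustbin row/column.
    matches: (K, 2) ground-truth (x, y) index pairs; unmatched_p / unmatched_q: indices
    of points in the two patches that have no ground-truth match."""
    n, m = log_z.shape[0] - 1, log_z.shape[1] - 1
    loss = -log_z[matches[:, 0], matches[:, 1]].sum()    # matched pairs
    loss = loss - log_z[unmatched_p, m].sum()            # unmatched P points -> dustbin column
    loss = loss - log_z[n, unmatched_q].sum()            # unmatched Q points -> dustbin row
    return loss
```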

4. Results

This section is dedicated to the experimental validation and performance comparison of our proposed method. The efficacy of the model is meticulously assessed through comprehensive experimentation. To establish a robust basis for evaluation, comparisons are conducted against several established methods, namely ICP, FGR, PointNetLK, DCP, and RPMNet. For the evaluation process, the ModelNet40 dataset [29] is employed as a benchmark. Through testing and analysis, distinctive advantages offered by our model in contrast to these existing methodologies are elucidated. Furthermore, to ascertain the generalizability of our approach in real-world scenarios, the evaluation is extended to encompass actual data. Engagement with the 3DMatch [30], 3DLoMatch [31], and KITTI [32] datasets is carried out to test the adaptability and reliability of our model within practical contexts.

4.1. ModelNet40

Our algorithm undergoes a thorough evaluation process on the ModelNet40 dataset, which encompasses computer-aided design (CAD) models representing 40 diverse classes of human-made objects. The evaluation strategy involves training on a set of 9756 models and testing on a separate collection of 2555 models. In alignment with the experimental framework established by RPMNet, adherence to specific guidelines is maintained. For each given shape, 1024 points are selected to constitute a point cloud. Additionally, an element of randomness is introduced into the evaluation process. Specifically, three Euler angles per point cloud are generated, each within the range of [0, 90°]. Furthermore, translations are introduced within the range of [−0.5, 0.5]. The original and target point clouds are distinguished in red and green.
A consistent metric framework is adopted, aligning with the assessment criteria employed by RPMNet [27] to evaluate the performance of our algorithm. This approach ensures comparability with previous research and underscores the reliability of our results. In this framework, alignment is evaluated by calculating the average isotropic rotation and translation errors, along with the mean absolute errors of the Euler angles and translation vectors. If the overlapping regions of the two point clouds are identical, all error parameters should be close to zero.
The performance of the algorithm is thoroughly evaluated across various point cloud scenarios, including clean point cloud, environments with noise, and instances of partially visible point cloud. The experimental outcomes are graphically presented in Figure 2 and Table 1, Table 2 and Table 3. Since some algorithms do not have reliable open source implementations, some data come from their papers.
From Figure 2, it can be seen that traditional methods such as ICP are susceptible to initialization, which is particularly evident when the rotation angle is large. The efficacy of FGR, on the other hand, is weakened in noisy environments because FPFH is sensitive to noise under different point cloud conditions. In contrast, PointNetLK performs well in noisy environments but still struggles on partially visible data, because global feature methods emphasize the overall features of the point cloud rather than the local features of individual points. GeoTransformer works well on clean and noisy point clouds; even for partially visible noisy data, it achieves good registration results for point clouds with simple structures (d). However, for point cloud (e), our algorithm outperforms GeoTransformer. Injecting geometric information improves performance, but GeoTransformer's estimation does not rely on a robust estimator such as RANSAC, which makes the estimation of actual superpoints more difficult, whereas our GNN operates on the kNN graph, is invariant to its transformations, and performs better in transformation estimation. Our method therefore improves the registration result more markedly.
The tabular data in Table 1, Table 2 and Table 3 are analyzed further. Table 1 shows that FGR performs better than our method on clean data, while its performance on noisy data reflects the inference above. We then focus on the partially visible noisy point cloud data in Table 3: the PointNetLK algorithm, which performs well in Table 2, struggles here, and DCP likewise degrades when parts of the point cloud are missing. Our algorithm still maintains a comparatively excellent score and improves on the results of GeoTransformer. In terms of registration efficiency, ICP has the simplest algorithm structure, and given the small point cloud size in this experiment, this highlights the advantage of purely geometric algorithms. Neither FGR nor ICP uses the GPU, so FGR's efficiency is not significantly improved after adding normal vector calculations. DCP adopts an end-to-end design, eliminating the multi-stage iterative computation seen in other algorithms, and performs exceptionally well in computational efficiency thanks to GPU utilization; however, as the point cloud grows, the performance of the end-to-end algorithm declines nonlinearly. GeoTransformer uses geometric information to improve registration speed and achieves faster transformation estimation by omitting RANSAC. In contrast, our method uses multi-level feature extraction, which reduces registration speed somewhat but increases the accuracy of feature extraction. Furthermore, introducing template points through hybrid sampling enhances the effectiveness of plane segmentation and feature matching.

4.2. Indoor Benchmarks: 3DMatch and 3DLoMatch

Point cloud data from real environments are more complex than ModelNet40. The larger number of points and lower overlap cause many algorithms that are effective on ModelNet to lose their advantages. The robustness of our algorithm is therefore assessed in real environments with low overlap, specifically on the 3DMatch and 3DLoMatch datasets.
The 3DMatch dataset comprises 62 scenes, distributed for training (46 scenes), validation (8 scenes), and testing (8 scenes). The 3DLoMatch dataset is a more challenging dataset derived from 3DMatch: whereas the original 3DMatch test set only contains point cloud pairs with an overlap rate greater than 30%, the 3DLoMatch test set includes pairs with overlap rates between 10% and 30%. Preprocessed training data are utilized, and performance is evaluated using the established 3DMatch protocol.
In line with previous assessments, the performance of our algorithm is measured using three distinct metrics (a small computation sketch follows the list):
  • Inlier Ratio (IR): this metric quantifies the proportion of putative correspondences with residuals falling below a predetermined threshold (e.g., 0.1 m) under the ground-truth transformation;
  • Feature Matching Recall (FMR): FMR denotes the fraction of point cloud pairs wherein the interior point ratio surpasses a specified threshold (e.g., 5%);
  • Registration Recall (RR): RR evaluates the fraction of point cloud pairs exhibiting transformation errors below a given threshold (e.g., Root Mean Square Error < 0.2 m).
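A hedged sketch of how these metrics are typically computed (threshold values follow those quoted above; function names are ours):

```python
import numpy as np

def inlier_ratio(p_corr, q_corr, R_gt, t_gt, tau=0.1):
    """IR: fraction of putative correspondences within tau (m) under the ground-truth pose."""
    residual = np.linalg.norm(p_corr @ R_gt.T + t_gt - q_corr, axis=1)
    return (residual < tau).mean()

def feature_matching_recall(irs, tau_fmr=0.05):
    """FMR: fraction of point cloud pairs whose inlier ratio exceeds tau_fmr."""
    return (np.asarray(irs) > tau_fmr).mean()

def registration_recall_rmse(rmses, tau_rr=0.2):
    """RR: fraction of pairs whose RMSE of ground-truth correspondences, after applying
    the estimated transform, is below tau_rr (m)."""
    return (np.asarray(rmses) < tau_rr).mean()
```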
Experiments were performed identically on data from FCGF [33], D3Feat, Predator, CoFiNet and GeoTransformer (data obtained from their papers). As can be seen from Table 4, our model achieves the best performance on all three indicators. On 3DLoMatch (Table 5), the FMR indicator is slightly lower than GeoTransformer's. This is because, when the point cloud overlap rate is very low, multi-level sampling increases the number of point pairs, but the proportion of high-confidence registrations among all registration tasks decreases. Figure 3a,b shows the registration results on 3DMatch, and Figure 3c–e shows the registration results on 3DLoMatch. The algorithm achieves good registration results on both datasets.

4.3. Outdoor Benchmark: KITTI Odometry

The KITTI odometry dataset encompasses 11 sequences capturing diverse outdoor driving scenarios, all of which are captured using LiDAR technology. Our utilization of this dataset is distributed as follows: sequences 0 to 5 are designated for training purposes, sequences 6 and 7 serve as validation sets, and sequences 8 to 10 constitute the testing data. In alignment with established practices in prior research, we adhere to the stipulation that only point cloud pairs separated by a minimum distance of 10 m are considered for evaluation.
Consistent with established practices in earlier studies, our performance evaluation hinges on three critical metrics (a computation sketch follows the list):
  • Relative Rotation Error (RRE): this metric quantifies the geodesic distance between the rotation matrix estimated by our method and the ground-truth rotation matrix;
  • Relative Translation Error (RTE): RTE computes the Euclidean distance between the estimated translation vector and the ground-truth translation vector;
  • Registration Recall (RR): RR is a comprehensive metric reflecting the fraction of point cloud pairs for which both the RRE and RTE fall below specified thresholds (e.g., RRE < 5° and RTE < 2 m).
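These outdoor metrics can be computed as follows (a small illustrative sketch using the thresholds quoted above; function names are ours):

```python
import numpy as np

def relative_rotation_error(R_est, R_gt):
    """RRE in degrees: geodesic distance between estimated and ground-truth rotations."""
    cos = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def relative_translation_error(t_est, t_gt):
    """RTE: Euclidean distance between estimated and ground-truth translations."""
    return np.linalg.norm(t_est - t_gt)

def registration_recall_rre_rte(rre, rte, tau_r=5.0, tau_t=2.0):
    """RR: fraction of pairs with RRE < tau_r (deg) and RTE < tau_t (m)."""
    return np.mean((np.asarray(rre) < tau_r) & (np.asarray(rte) < tau_t))
```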
As shown in Table 6, our model is compared with [12,14,15,18,33]. Relative to the ground-truth poses in the real outdoor environment, our method attains comparably small translation and rotation errors. Compared with the other models, our metrics do not open a large gap, but they still demonstrate the good generalizability of our model in outdoor environments. The registration result is shown in Figure 4.

4.4. Ablation Experiment

To illustrate the influence of each component on network performance, an ablation study is conducted in this section. Various modules are systematically added to and removed from the network, allowing an evaluation of their respective contributions to the final matching performance. Experiments are conducted on partially visible point clouds with noise. For easier comparison, we selected the relative rotation error (RRE), relative translation error (RTE), and root mean square error (RMSE) as measurement standards. The experimental results (Table 7) show that a single graph convolution module already improves accuracy, but a gap with the Transformer remains. After adding the cross-attention mechanism, our module's indicators surpass the Transformer. Adding the multi-channel sampling further reduces the rotation error and translation error by 7.3% and 5.8%, respectively. In addition, the experimental results show that our improved method achieves a performance gain over the baseline model and has good versatility.

5. Conclusions

This paper proposes a novel network to solve the problem of point cloud registration in low-overlap environments. Compared with previous work, our model uses multi-layer patches to enrich correspondences and can still extract reliable correspondences from disordered point clouds when keypoints are sparse. In addition, the template point matching module enhances the contextual features of patches through graph convolutional neural networks and self- and cross-attention mechanisms, guiding the model to match nodes with nearby regions and narrowing the search space for subsequent refinement. Experiments on multiple datasets show that our proposed method is robust and retains strong general-purpose capability on outdoor datasets.

Author Contributions

Conceptualization, J.Q.; methodology, J.Q.; software, J.Q.; validation, J.Q. and D.T.; formal analysis, D.T.; investigation, J.Q. and D.T.; resources, D.T.; data curation, J.Q. and D.T.; writing—original draft preparation, J.Q.; writing—review and editing, J.Q. and D.T.; visualization, J.Q.; supervision, D.T.; project administration, D.T.; funding acquisition, D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Hunan Provincial Regional Joint Fund (2023JJ50130).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are not currently publicly available but are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Teng, S.; Hu, X.; Deng, P.; Li, B.; Li, Y.; Ai, Y.; Yang, D.; Li, L.; Xuanyuan, Z.; Zhu, F.; et al. Motion planning for autonomous driving: The state of the art and future perspectives. IEEE Trans. Intell. Veh. 2023, 9, 3692–3711. [Google Scholar] [CrossRef]
  2. Li, J.; Gao, W.; Wu, Y.; Liu, Y.; Shen, Y. High-quality indoor scene 3D reconstruction with RGB-D cameras: A brief review. Comput. Vis. Media 2022, 8, 369–393. [Google Scholar] [CrossRef]
  3. Kazerouni, I.A.; Fitzgerald, L.; Dooly, G.; Toal, D. A survey of state-of-the-art on visual SLAM. Expert Syst. Appl. 2022, 205, 117734. [Google Scholar] [CrossRef]
  4. Sun, X.; Wang, S.; Wang, M.; Cheng, S.S.; Liu, M. An advanced LiDAR point cloud sequence coding scheme for autonomous driving. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2793–2801. [Google Scholar]
  5. Sun, X.; Wang, M.; Du, J.; Sun, Y.; Cheng, S.S.; Xie, W. A Task-Driven Scene-Aware LiDAR Point Cloud Coding Framework for Autonomous Vehicles. IEEE Trans. Ind. Inform. 2022, 19, 8731–8742. [Google Scholar] [CrossRef]
  6. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1991; SPIE: Cergy-Pontoise, France, 1991; Volume 1611, pp. 586–606. [Google Scholar]
  7. Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II 14; Springer: Berlin/Heidelberg, Germany, 2016; pp. 766–782. [Google Scholar]
  8. Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7163–7172. [Google Scholar]
  9. Sarode, V.; Li, X.; Goforth, H.; Aoki, Y.; Srivatsan, R.A.; Lucey, S.; Choset, H. Pcrnet: Point cloud registration network using pointnet encoding. arXiv 2019, arXiv:1908.07906. [Google Scholar]
  10. Wang, Y.; Solomon, J.M. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Long Beach, CA, USA, 15–20 June 2019; pp. 3523–3532. [Google Scholar]
  11. Wang, Y.; Solomon, J.M. Prnet: Self-supervised learning for partial-to-partial registration. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
  12. Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4267–4276. [Google Scholar]
  13. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  14. Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.L. D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6359–6367. [Google Scholar]
  15. Yu, H.; Li, F.; Saleh, M.; Busam, B.; Ilic, S. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. Adv. Neural Inf. Process. Syst. 2021, 34, 23872–23884. [Google Scholar]
  16. Li, X.; Han, K.; Li, S.; Prisacariu, V. Dual-resolution correspondence networks. Adv. Neural Inf. Process. Syst. 2020, 33, 17346–17357. [Google Scholar]
  17. Zhou, Q.; Sattler, T.; Leal-Taixe, L. Patch2pix: Epipolar-guided pixel-level correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4669–4678. [Google Scholar]
  18. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Xu, K. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11143–11152. [Google Scholar]
  19. Yookwan, W.; Chinnasarn, K.; So-In, C.; Horkaew, P. Multimodal Fusion of Deeply Inferred Point Clouds for 3D Scene Reconstruction Using Cross-Entropy ICP. IEEE Access 2022, 10, 77123–77136. [Google Scholar] [CrossRef]
  20. Vizzo, I.; Guadagnino, T.; Mersch, B.; Wiesmann, L.; Behley, J.; Stachniss, C. Kiss-icp: In defense of point-to-point icp–simple, accurate, and robust registration if done the right way. IEEE Robot. Autom. Lett. 2023, 8, 1029–1036. [Google Scholar] [CrossRef]
  21. Liu, S.; Gao, D.; Wang, P.; Guo, X.; Xu, J.; Liu, D.X. A depth-based weighted point cloud registration for indoor scene. Sensors 2018, 18, 3608. [Google Scholar] [CrossRef] [PubMed]
  22. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  23. Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  24. Wei, T.; Patel, Y.; Shekhovtsov, A.; Matas, J.; Barath, D. Generalized differentiable RANSAC. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 17649–17660. [Google Scholar]
  25. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  26. Phan, A.V.; Le Nguyen, M.; Nguyen, Y.L.H.; Bui, L.T. Dgcnn: A convolutional neural network over large-scale labeled graphs. Neural Netw. 2018, 108, 533–543. [Google Scholar] [CrossRef] [PubMed]
  27. Yew, Z.J.; Lee, G.H. Rpm-net: Robust point matching using learned features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11824–11833. [Google Scholar]
  28. Zhao, H.; Wei, S.; Shi, D.; Tan, W.; Li, Z.; Ren, Y.; Wei, X.; Yang, Y.; Pu, S. Learning Symmetry-Aware Geometry Correspondences for 6D Object Pose Estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 14045–14054. [Google Scholar]
  29. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
  30. Zeng, A.; Song, S.; Nießner, M.; Fisher, M.; Xiao, J.; Funkhouser, T. 3dmatch: Learning local geometric descriptors from rgb-d reconstructions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1802–1811. [Google Scholar]
  31. Mei, G.; Huang, X.; Zhang, J.; Wu, Q. Overlap-guided coarse-to-fine correspondence prediction for point cloud registration. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 18–22 July 2022; pp. 1–6. [Google Scholar]
  32. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  33. Choy, C.; Park, J.; Koltun, V. Fully convolutional geometric features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8958–8966. [Google Scholar]
Figure 1. Given two partially overlapping point clouds $P$ and $Q$, we first adopt KPConv-FPN to learn point cloud features at different levels. $\bar{F}^{P}$ or $\bar{F}^{Q}$ are connected to the centroid features through the K-NN algorithm and then imported into the graph convolutional neural network matching module. Their contextual features are further aggregated and enhanced through the self-attention graph convolution network (Self-gnn) and cross-attention (CA) block. Subsequently, the patch correspondences are mapped to the real point set through the point matching module. Finally, a local-to-global registration method is used to calculate the transformation matrix.
Figure 2. Examples of qualitative registration: (a,b) clean data, (c) noisy data, (d,e) partially visible noisy data.
Figure 3. Results of 3DMatch and 3DLoMatch experiments. (a,b) are from 3DMatch, while (c–e) are from 3DLoMatch.
Figure 4. Results of KITTI experiments. (a,b) Point clouds collected using different radar positions and (c) is the result after registration.
Table 1. Performance on clean data.

Model | Isotropic R (°) | Isotropic t (m) | Anisotropic R (°) | Anisotropic t (m) | Time (s)
ICP | 5.478 | 0.0765 | 11.443 | 0.1625 | 0.013
FGR | 0.010 | 0.0001 | 0.022 | 0.0002 | 0.086
PointNetLK | 0.418 | 0.0241 | 0.847 | 0.0054 | 0.157
DCP | 2.074 | 0.0143 | 3.992 | 0.0292 | 0.009
PCR | 2.691 | 0.0346 | 5.682 | 0.0735 | 0.059
GeoTransformer | 0.072 | 0.0025 | 0.091 | 0.0023 | 0.023
Ours | 0.034 | 0.0003 | 0.074 | 0.0005 | 0.052
Table 2. Performance on data with Gaussian noise.

Model | Isotropic R (°) | Isotropic t (m) | Anisotropic R (°) | Anisotropic t (m) | Time (s)
ICP | 5.863 | 0.0823 | 12.145 | 0.1726 | 0.024
FGR | 2.483 | 0.0325 | 4.274 | 0.0631 | 0.118
PointNetLK | 1.528 | 0.0128 | 2.926 | 0.0262 | 0.214
DCP | 4.528 | 0.0345 | 8.922 | 0.0707 | 0.020
PCR | 2.943 | 0.0417 | 6.255 | 0.0804 | 0.122
GeoTransformer | 1.156 | 0.0097 | 1.437 | 0.0213 | 0.045
Ours | 0.525 | 0.0072 | 1.325 | 0.0127 | 0.083
Table 3. Performance on partially visible data with noise.

Model | Isotropic R (°) | Isotropic t (m) | Anisotropic R (°) | Anisotropic t (m) | Time (s)
ICP | 13.719 | 0.132 | 27.250 | 0.280 | 0.017
FGR | 19.266 | 0.090 | 30.834 | 0.192 | 0.124
PointNetLK | 15.931 | 0.142 | 29.725 | 0.297 | 0.176
DCP | 6.380 | 0.083 | 12.607 | 0.169 | 0.014
PCR | 4.437 | 0.065 | 9.218 | 0.135 | 0.146
GeoTransformer | 1.332 | 0.015 | 2.213 | 0.052 | 0.067
Ours | 0.917 | 0.012 | 1.577 | 0.018 | 0.101
Table 4. Registration results on 3DMatch.

Model | IR (%) | FMR (%) | RR (%)
FCGF [33] | 48.7 | 97.0 | 83.3
D3Feat [14] | 40.4 | 94.5 | 83.4
Predator [12] | 57.1 | 96.5 | 90.6
CoFiNet [15] | 51.9 | 98.1 | 88.4
GeoTransformer [18] | 70.3 | 97.7 | 91.5
Ours | 72.5 | 98.5 | 93.0
Table 5. Registration results on 3DLoMatch.

Model | IR (%) | FMR (%) | RR (%)
FCGF [33] | 17.2 | 74.2 | 38.2
D3Feat [14] | 14.0 | 67.0 | 46.9
Predator [12] | 28.3 | 76.3 | 61.2
CoFiNet [15] | 26.7 | 83.3 | 64.2
GeoTransformer [18] | 43.3 | 88.1 | 74.0
Ours | 45.6 | 87.2 | 75.3
Table 6. Registration results on KITTI odometry.

Model | RRE (°) | RTE (cm) | RR (%)
FCGF [33] | 0.30 | 9.5 | 96.6
D3Feat [14] | 0.30 | 7.2 | 99.8
Predator [12] | 0.27 | 6.8 | 99.8
CoFiNet [15] | 0.41 | 8.2 | 99.8
GeoTransformer [18] | 0.24 | 6.8 | 99.8
Ours | 0.218 | 5.4 | 99.8
Table 7. Ablation experiments.

Baseline | Transformer | Self-gnn | CA Blocks | Multi-Channel | RRE (°) | RTE (m) | RMSE
✓ |   |   |   |   | 2.154 | 0.033 | 0.026
✓ | ✓ |   |   |   | 1.577 | 0.018 | 0.017
✓ |   | ✓ |   |   | 1.723 | 0.029 | 0.021
✓ |   | ✓ | ✓ |   | 1.554 | 0.017 | 0.016
✓ |   | ✓ | ✓ | ✓ | 1.44 | 0.016 | 0.015
