Intelligent Point Cloud Processing, Sensing and Understanding (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 6510

Special Issue Editors


Dr. Miaohui Wang
Guest Editor
School of Information Engineering, Shenzhen University, Shenzhen 518052, China
Interests: computer vision and machine learning; image/video processing

Dr. Sukun Tian
Guest Editor
School of Mechanical Engineering, Shandong University, Jinan, China
Interests: medical image processing; deep learning; computer graphics and visualization

Special Issue Information

Dear Colleagues,

Following the success of the previous Special Issue “Intelligent Point Cloud Processing, Sensing and Understanding” (https://www.mdpi.com/journal/sensors/special_issues/IX18KRFUQ1), we are pleased to announce the next in the series, entitled “Intelligent Point Cloud Processing, Sensing and Understanding (Volume II)”.

Point clouds are one of the foundational pillars of representing the 3D digital world, despite the irregular topology of their discrete points. Recent advances in the sensor technologies that acquire point cloud data as a flexible and scalable geometric representation have paved the way for new ideas, methodologies, and solutions in countless remote sensing applications. State-of-the-art sensors can capture and describe objects in a scene using dense point clouds from various platforms (satellite, aerial, UAV, vehicle-borne, backpack, handheld, and static terrestrial), perspectives (nadir, oblique, and side view), spectra (multispectral), and granularities (point density and completeness). Meanwhile, the ever-expanding applications of point cloud processing now extend beyond conventional geospatial analysis to manufacturing, civil engineering, construction, transportation, ecology, forestry, mechanical engineering, and more.

This Special Issue aims to include contributions that focus on processing and utilizing point cloud data acquired from laser scanners and other 3D imaging systems. We are particularly interested in original papers that address innovative techniques for generating, handling, and analyzing point cloud data; challenges in handling point cloud data in emerging remote sensing applications; and the development of new applications for point cloud data.

Dr. Miaohui Wang
Dr. Sukun Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • point cloud acquisition from laser scanners, stereo vision, panoramas, camera-phone images, and oblique and satellite imagery
  • deep learning for point cloud processing
  • point cloud registration, segmentation, object detection, semantic labelling, compression and quality assessment
  • fusion of multimodal point clouds
  • modeling of LiDAR/image-based point cloud processing
  • industrial applications with large-scale point clouds
  • high-performance computing for large-scale point clouds

Published Papers (8 papers)


Research

13 pages, 2436 KiB  
Article
Automated Phenotypic Trait Extraction for Rice Plant Using Terrestrial Laser Scanning Data
by Kexiao Wang, Xiaojun Pu and Bo Li
Sensors 2024, 24(13), 4322; https://doi.org/10.3390/s24134322 - 3 Jul 2024
Viewed by 212
Abstract
To quickly obtain rice plant phenotypic traits, this study presents a computational process for six rice phenotype features (crown diameter, perimeter of stem, plant height, surface area, volume, and projected leaf area) using terrestrial laser scanning (TLS) data, and proposes an extraction method for the tiller number of rice plants. Specifically, for the first time, we designed and developed an automated phenotype extraction tool for rice plants with a three-layer architecture based on the PyQt5 framework and Open3D library. The results show that the linear coefficients of determination (R2) between the measured and extracted values indicate good reliability for the four selected verification features. The root mean square error (RMSE) of crown diameter, perimeter of stem, and plant height is stable at the centimeter level, and that of the tiller number is as low as 1.63. The relative root mean squared error (RRMSE) of crown diameter, plant height, and tiller number stays within 10%, while that of perimeter of stem is 18.29%. In addition, the user-friendly automatic extraction tool can efficiently extract the phenotypic features of rice plants, providing a convenient means of quickly obtaining phenotypic traits from rice plant point clouds. However, the comparison and verification of phenotype feature extraction results on more rice plant samples, as well as improvements to the accuracy of the algorithms, remain the focus of our future research. The study can serve as a reference for crop phenotype extraction using 3D point clouds. Full article
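As an illustration of this kind of trait computation, below is a minimal sketch, assuming a pre-segmented single-plant cloud in a hypothetical file rice_plant.ply, of how several of the listed features could be derived with the Open3D library the tool is built on; it is not the authors' tool, and the voxel size is an illustrative assumption.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

# Hypothetical input: a segmented single-plant TLS scan.
pcd = o3d.io.read_point_cloud("rice_plant.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.002)   # illustrative thinning step
pts = np.asarray(pcd.points)

# Plant height: vertical extent of the cloud (z assumed up).
height = pts[:, 2].max() - pts[:, 2].min()

# Crown diameter: larger horizontal extent of the axis-aligned bounds.
crown_diameter = (pts[:, :2].max(axis=0) - pts[:, :2].min(axis=0)).max()

# Surface area and volume: approximated by the 3D convex hull.
hull, _ = pcd.compute_convex_hull()
surface_area = hull.get_surface_area()
volume = hull.get_volume()

# Projected leaf area: area of the 2D hull of the ground-plane projection
# (scipy reports a 2D hull's area in its `volume` attribute).
projected_leaf_area = ConvexHull(pts[:, :2]).volume

print(height, crown_diameter, surface_area, volume, projected_leaf_area)
```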

18 pages, 2032 KiB  
Article
Receptive Field Space for Point Cloud Analysis
by Zhongbin Jiang, Hai Tao and Ye Liu
Sensors 2024, 24(13), 4274; https://doi.org/10.3390/s24134274 - 1 Jul 2024
Viewed by 222
Abstract
Similar to convolutional neural networks for image processing, existing analysis methods for 3D point clouds often require the designation of a local neighborhood to describe the local features of the point cloud. This local neighborhood is typically specified manually, which makes it impossible for the network to dynamically adjust the receptive field’s range. If the range is too large, it tends to overlook local details; if it is too small, it cannot establish global dependencies. To address this issue, we introduce in this paper a new concept: receptive field space (RFS). At a minor computational cost, we extract features from multiple consecutive receptive field ranges to form this new receptive field space. On this basis, we further propose a receptive field space attention mechanism, enabling the network to adaptively select the most effective receptive field range from RFS, thus equipping the network with the ability to adjust granularity adaptively. Our approach achieved state-of-the-art performance in both point cloud classification, with an overall accuracy (OA) of 94.2%, and part segmentation, achieving an mIoU of 86.0%, demonstrating the effectiveness of our method. Full article
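To make the RFS idea concrete, here is a hedged PyTorch sketch of attention over per-point features pooled at several neighborhood sizes; the k schedule, shapes, and max-pooling choice are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def multi_scale_features(xyz, ks=(8, 16, 32)):
    """Pool per-point neighbor offsets over several k-NN sizes (the 'RFS')."""
    d = torch.cdist(xyz, xyz)                          # (B, N, N) distances
    scales = []
    for k in ks:
        idx = d.topk(k, largest=False).indices         # k nearest neighbors
        nbrs = torch.gather(
            xyz.unsqueeze(1).expand(-1, xyz.size(1), -1, -1),
            2, idx.unsqueeze(-1).expand(-1, -1, -1, 3))
        scales.append((nbrs - xyz.unsqueeze(2)).max(dim=2).values)
    return torch.stack(scales, dim=2)                  # (B, N, S, 3)

class ReceptiveFieldSpaceAttention(nn.Module):
    """Softmax attention over the scale axis: the network picks its range."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)     # one attention logit per scale

    def forward(self, feats):              # feats: (B, N, S, C)
        weights = torch.softmax(self.score(feats), dim=2)
        return (weights * feats).sum(dim=2)            # (B, N, C)

xyz = torch.rand(2, 1024, 3)               # toy batch of point clouds
out = ReceptiveFieldSpaceAttention(dim=3)(multi_scale_features(xyz))
```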

18 pages, 12430 KiB  
Article
Comparison of Point Cloud Registration Techniques on Scanned Physical Objects
by Menthy Denayer, Joris De Winter, Evandro Bernardes, Bram Vanderborght and Tom Verstraten
Sensors 2024, 24(7), 2142; https://doi.org/10.3390/s24072142 - 27 Mar 2024
Viewed by 810
Abstract
This paper presents a comparative analysis of six prominent registration techniques for solving CAD model alignment problems. Unlike the typical approach of assessing registration algorithms with synthetic datasets, our study utilizes point clouds generated from the Cranfield benchmark. Point clouds are sampled from existing CAD models and 3D scans of physical objects, introducing real-world complexities such as noise and outliers. The acquired point cloud scans, including ground-truth transformations, are made publicly available. This dataset includes several cleaned-up scans of nine 3D-printed objects. Our main contribution lies in assessing the performance of three classical (GO-ICP, RANSAC, FGR) and three learning-based (PointNetLK, RPMNet, ROPNet) methods on real-world scans, using a wide range of metrics. These include recall, accuracy and computation time. Our comparison shows a high accuracy for GO-ICP, as well as PointNetLK, RANSAC and RPMNet combined with ICP refinement. However, apart from GO-ICP, all methods show a significant number of failure cases when applied to scans containing more noise or requiring larger transformations. FGR and RANSAC are among the quickest methods, while GO-ICP takes several seconds to solve. Finally, while learning-based methods demonstrate good performance and low computation times, they have difficulties in training and generalizing. Our results can aid novice researchers in the field in selecting a suitable registration method for their application, based on quantitative metrics. Furthermore, our code can be used by others to evaluate novel methods. Full article
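For orientation, the following sketch shows one such combination, FPFH-based RANSAC seeded into point-to-point ICP, using Open3D's stock registration pipelines; the voxel size and thresholds are illustrative assumptions, and this is not the paper's evaluation code.

```python
import open3d as o3d

def register(source, target, voxel=0.005):
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    s_down, s_fpfh = prep(source)
    t_down, t_fpfh = prep(target)

    # Coarse alignment: feature-based RANSAC over FPFH correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s_down, t_down, s_fpfh, t_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine alignment: point-to-point ICP seeded with the RANSAC result.
    fine = o3d.pipelines.registration.registration_icp(
        s_down, t_down, voxel * 1.5, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation, fine.inlier_rmse   # RMSE over inlier pairs
```

The returned inlier RMSE is one accuracy-style metric; recall-style metrics would additionally compare the estimated transformation against the published ground-truth transformations.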

28 pages, 15051 KiB  
Article
Point Cloud Registration Method Based on Geometric Constraint and Transformation Evaluation
by Chuanli Kang, Chongming Geng, Zitao Lin, Sai Zhang, Siyao Zhang and Shiwei Wang
Sensors 2024, 24(6), 1853; https://doi.org/10.3390/s24061853 - 14 Mar 2024
Viewed by 992
Abstract
Existing point-to-point registration methods often suffer from inaccuracies caused by erroneous matches and noisy correspondences, leading to significant decreases in registration accuracy and efficiency. To address these challenges, this paper presents a new coarse registration method based on a geometric constraint and a matrix evaluation. Compared to traditional registration methods, which require a minimum of three correspondences to complete the registration, the proposed method requires only two correspondences to generate a transformation matrix. Additionally, by using geometric constraints to select high-quality correspondences and evaluating the resulting matrices, we greatly increase the likelihood of finding the optimal result. In the proposed method, we first employ a combination of descriptors and keypoint detection techniques to generate initial correspondences. Next, we utilize the nearest neighbor similarity ratio (NNSR) to select high-quality correspondences. Subsequently, we evaluate the quality of these correspondences using rigidity constraints and salient points’ distance constraints, favoring higher-scoring correspondences. For each selected correspondence pair, we compute the rotation and translation matrix based on their centroids and local reference frames. With the transformation matrices of the source and target point clouds known, we deduce the transformation matrix of the source point cloud in reverse. To identify the best-transformed point cloud, we propose an evaluation method based on the overlap ratio and inlier points. Through parameter experiments, we investigate the performance of the proposed method under various parameter settings. Comparative experiments verify that the proposed method’s geometric constraints, evaluation methods, and transformation matrix computation consistently outperform other methods in terms of root mean square error (RMSE). We also validate that our chosen combination for generating initial correspondences outperforms other descriptor and keypoint detection combinations in terms of registration accuracy. Furthermore, we compared our method with several feature-matching registration methods, and the results demonstrate the superior accuracy of our approach. Finally, tests on various types of point cloud datasets establish the method’s effectiveness. Based on the evaluation and selection of correspondences and of the registration result’s quality, the proposed method offers a solution with fewer iterations and higher accuracy. Full article
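The rigidity constraint central to such filtering is simple to state: a rigid transform preserves pairwise distances, so a candidate pair of correspondences is kept only when the source-side and target-side distances agree. A minimal sketch with an illustrative tolerance follows; it is not the authors' full pipeline.

```python
import numpy as np

def passes_rigidity(p1, p2, q1, q2, tol=0.01):
    """(p1, q1) and (p2, q2) are candidate source/target correspondences."""
    d_src = np.linalg.norm(p1 - p2)    # distance in the source cloud
    d_tgt = np.linalg.norm(q1 - q2)    # distance in the target cloud
    return abs(d_src - d_tgt) < tol    # incompatible pairs are discarded

def filter_pairs(src_pts, tgt_pts, tol=0.01):
    """O(n^2) pairwise screening of a candidate correspondence set."""
    keep = []
    n = len(src_pts)
    for i in range(n):
        for j in range(i + 1, n):
            if passes_rigidity(src_pts[i], src_pts[j],
                               tgt_pts[i], tgt_pts[j], tol):
                keep.append((i, j))
    return keep   # surviving pairs feed the two-point transform estimation
```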

30 pages, 5973 KiB  
Article
LiDAR Dynamic Target Detection Based on Multidimensional Features
by Aigong Xu, Jiaxin Gao, Xin Sui, Changqiang Wang and Zhengxu Shi
Sensors 2024, 24(5), 1369; https://doi.org/10.3390/s24051369 - 20 Feb 2024
Viewed by 920
Abstract
To address the limitations of LiDAR dynamic target detection methods, which often require heuristic thresholding, indirect computational assistance, supplementary sensor data, or postdetection, we propose an innovative method based on multidimensional features. Using the differences between the positions and geometric structures of point cloud clusters scanned from the same target in adjacent frames, the motion states of the point cloud clusters are comprehensively evaluated. To enable the automatic, precise pairing of point cloud clusters from adjacent frames of the same target, a double registration algorithm for point cloud cluster centroids is proposed. The iterative closest point (ICP) algorithm is employed for approximate interframe pose estimation during coarse registration. The random sample consensus (RANSAC) and four-parameter transformation algorithms are employed to obtain precise interframe pose relations during fine registration. These processes standardize the coordinate systems of adjacent point clouds and facilitate the association of point cloud clusters from the same target. Based on the paired point cloud clusters, a classification feature system is used to construct the XGBoost decision tree. To enhance the XGBoost training efficiency, a dimensionality reduction algorithm based on Spearman’s rank correlation coefficient and a bidirectional search is proposed to expedite the construction of the optimal classification feature subset. After preliminary outcomes are generated by XGBoost, a double Boyer–Moore voting-sliding window algorithm is proposed to refine the final LiDAR dynamic target detection accuracy. To validate the efficacy and efficiency of our method, an experimental platform was established, real-world data were collected, and pertinent experiments were designed. The experimental results illustrate the soundness of our method: the LiDAR dynamic target correct detection rate is 92.41%, the static target error detection rate is 1.43%, and the detection time is 0.0299 s. Our method exhibits notable advantages over open-source comparative methods, achieving highly efficient and precise LiDAR dynamic target detection. Full article
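As a hedged sketch of the classification stage alone, the snippet below trains an XGBoost classifier on per-cluster-pair features and labels new pairs as dynamic or static; the feature set, hyperparameters, and placeholder data are assumptions for illustration only.

```python
import numpy as np
import xgboost as xgb

# Each row: features of one cluster paired across adjacent frames, e.g.
# [centroid_shift, bbox_volume_change, point_count_change, height_change].
X_train = np.random.rand(500, 4)          # placeholder training features
y_train = np.random.randint(0, 2, 500)    # 1 = dynamic, 0 = static

clf = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)

X_new = np.random.rand(10, 4)             # features from a new frame pair
labels = clf.predict(X_new)               # preliminary dynamic/static calls
```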

14 pages, 1951 KiB  
Article
SAE3D: Set Abstraction Enhancement Network for 3D Object Detection Based Distance Features
by Zheng Zhang, Zhiping Bao, Qing Tian and Zhuoyang Lyu
Sensors 2024, 24(1), 26; https://doi.org/10.3390/s24010026 - 20 Dec 2023
Viewed by 716
Abstract
With the increasing demand from unmanned driving and robotics, more attention has been paid to point-cloud-based accurate 3D object detection technology. However, due to the sparseness and irregularity of the point cloud, the most critical problem is how to utilize the relevant features more efficiently. In this paper, we propose a point-based object detection enhancement network that improves detection accuracy in 3D scene understanding based on distance features. Firstly, the distance features are extracted from the raw point sets and fused with the raw reflectivity features of the point cloud to maximize the use of the information in the point cloud. Secondly, we enhance the distance features and raw features, which we collectively refer to as the self-features of the key points, in the set abstraction (SA) layers with a self-attention mechanism, so that foreground points can be better distinguished from background points. Finally, we revise the group aggregation module in the SA layers to enhance the feature aggregation effect of the key points. We conducted experiments on the KITTI and nuScenes datasets, and the results show that the enhancement method proposed in this paper performs excellently. Full article
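The distance-feature fusion step can be pictured with a short sketch: each point's range from the sensor origin is concatenated with its coordinates and raw reflectivity before the SA layers. The shapes and single-origin assumption are illustrative, not the paper's exact preprocessing.

```python
import torch

def add_distance_feature(points):
    """points: (N, 4) tensor of [x, y, z, reflectivity]."""
    xyz, refl = points[:, :3], points[:, 3:4]
    dist = xyz.norm(dim=1, keepdim=True)        # (N, 1) range from origin
    return torch.cat([xyz, refl, dist], dim=1)  # (N, 5) fused input features

pts = torch.rand(2048, 4)                       # toy LiDAR sweep
fused = add_distance_feature(pts)
```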

14 pages, 3083 KiB  
Article
RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention
by Jian Qian and Dewen Tang
Sensors 2023, 23(24), 9651; https://doi.org/10.3390/s23249651 - 6 Dec 2023
Cited by 1 | Viewed by 1041
Abstract
The problem of registering point clouds in scenarios with low overlap is explored in this study. Previous methodologies depended on having a sufficient number of repeatable keypoints from which to extract correspondences, making them less effective in partially overlapping environments. In this paper, a novel learning network is proposed to optimize correspondences among sparse keypoints. Firstly, a multi-layer channel sampling mechanism is suggested to enhance the information in point clouds, and keypoints are filtered and fused at multi-layer resolutions to form patches through feature weight filtering. Moreover, a template matching module is devised, comprising a self-attention mapping convolutional neural network and a cross-attention network. This module matches contextual features and refines the correspondence in overlapping areas of patches, ultimately enhancing correspondence accuracy. Experimental results demonstrate the robustness of our model across various datasets, including ModelNet40, 3DMatch, 3DLoMatch, and KITTI. Notably, our method excels in low-overlap scenarios, showcasing superior performance. Full article
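A hedged PyTorch sketch of the template-matching idea follows: self-attention refines each patch set's own context, and cross-attention then matches source patches against target patches; the dimensions and the shared self-attention block are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemplateMatcher(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, src, tgt):                 # (B, P, C) patch features
        src, _ = self.self_attn(src, src, src)   # intra-cloud context
        tgt, _ = self.self_attn(tgt, tgt, tgt)
        # Each source patch attends to target patches to refine its match.
        matched, weights = self.cross_attn(src, tgt, tgt)
        return matched, weights                  # weights ~ soft correspondences

src = torch.rand(1, 32, 64)                      # toy source patch features
tgt = torch.rand(1, 32, 64)                      # toy target patch features
matched, corr = TemplateMatcher()(src, tgt)
```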

16 pages, 7096 KiB  
Article
Dimensioning Cuboid and Cylindrical Objects Using Only Noisy and Partially Observed Time-of-Flight Data
by Bryan Rodriguez, Prasanna Rangarajan, Xinxiang Zhang and Dinesh Rajan
Sensors 2023, 23(21), 8673; https://doi.org/10.3390/s23218673 - 24 Oct 2023
Viewed by 860
Abstract
One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.5 m from a ToF sensor. We also quantify the performance of dimensioning objects using various object orientations, ground plane surfaces, and model fitting methods. For cuboid objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 4% and 9% using the bounding technique and between 8% and 15% using the mirroring technique across all tested surfaces. For cylindrical objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 2.97% and 6.61% when the object is in a horizontal orientation and between 8.01% and 13.13% when the object is in a vertical orientation using the bounding technique across all tested surfaces. Full article
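The superquadric inside-outside function underlying such fitting equals 1 exactly on the surface, so its parameters can be recovered by least squares on (F - 1) over the observed points. The sketch below assumes a centered, axis-aligned cloud and omits the paper's bounding and mirroring steps.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residual(params, pts):
    """F(x, y, z) = 1 on the superquadric surface; residual is F - 1."""
    a1, a2, a3, e1, e2 = params
    x, y, z = np.abs(pts.T)               # symmetry: work with |x|, |y|, |z|
    f = ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)
    return f - 1.0

def fit_superquadric(pts):
    half = (pts.max(axis=0) - pts.min(axis=0)) / 2   # initial half-extents
    x0 = [half[0], half[1], half[2], 1.0, 1.0]       # start from an ellipsoid
    bounds = ([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0])
    res = least_squares(superquadric_residual, x0, args=(pts,), bounds=bounds)
    return res.x    # a1, a2, a3 are the recovered half-dimensions

# Toy usage: points pushed onto the surface of a unit cube, centered at origin.
pts = np.random.uniform(-1, 1, (500, 3))
pts /= np.abs(pts).max(axis=1, keepdims=True)
print(fit_superquadric(pts))
```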
