Article

The New Method of Active SLAM for Mapping Using LiDAR

Department of Control and Information Systems, Faculty of Electrical Engineering and Information Technology, University of Žilina, 01026 Žilina, Slovakia
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(7), 1082; https://doi.org/10.3390/electronics11071082
Submission received: 7 March 2022 / Revised: 23 March 2022 / Accepted: 26 March 2022 / Published: 30 March 2022
(This article belongs to the Section Systems & Control Engineering)

Abstract

In this article, we propose a new approach to the problem of active SLAM. Our design builds on an existing, functional SLAM algorithm, which we modified for our purposes. Matlab was used as the main software tool. All proposed methods were experimentally verified on a mobile robotic system, with LiDAR as the primary sensor. After mapping the environment, we created a grid map. The grid map may later be used for navigation of the mobile robotic system, but navigation and control are beyond the scope of this article. The result of the whole process is an autonomous mapping of the environment.

1. Introduction

Robotics currently comprises several closely related areas, each of which is advancing rapidly. The area addressed in this work is called Simultaneous Localization and Mapping (SLAM). In this task, a robotic system is placed in an unknown environment; its main task is to map the environment and at the same time identify its current location on this map. This does not seem too difficult when considering an ideal environment in which noise and measurement errors do not occur. Unfortunately, such an environment can only be achieved at a theoretical level. Various methods and algorithms have been proposed to overcome these defects. Different types of sensors can be used in the SLAM solution, which allows combinations of already known and new methods. Over time, computational technology has also evolved, and methods and theories that were neglected in the past have now begun to be used. As a result, the number of possible solutions in this area is growing. The most frequently used SLAM methods are surveyed in [1,2]. According to [3], the main goals in solving SLAM are to obtain the most accurate model of the mapped environment and to track the exact position of the robotic system in the environment. The sequence of steps is generally the same and does not depend on the type of algorithm used. These steps are initialization, data association, landmark extraction, mobile robotic system state estimation, landmark state estimation, mobile robotic system state update, and landmark state update. For each of these steps, different solution methods can be used, linked to the specific algorithm. SLAM can be used in a variety of environments, whether on land, in water, or in the air [3].
That development in the field of SLAM is still advancing is also evidenced by [4], where the author improved an already known and used method and applied it in an autonomous mobile robot. When performing classic SLAM, the robotic system is controlled by a human operator. Gradually, methods began to emerge that are capable of autonomous navigation during mapping, which is called active simultaneous localization and mapping (active SLAM). With such an approach, it is necessary to consider minimizing wheel drift and achieving full coverage of the mapped space. In our case, we tried to eliminate the error caused by wheel drift using a SLAM algorithm that corrects the position obtained from the odometer. Coverage of the largest possible area of the mapped space is ensured by the choice of a suitable target for the robot's next movement. The right choice of a target selection criterion is still an open research question: because each criterion captures a different metric, they do not lead to the same behavior and do not require the same computational resources. The main task of active SLAM is to calculate trajectories and explore the environment while creating a map with minimal error. Over time, the issue of active SLAM has been divided into three sub-issues: localization, environment mapping, and trajectory planning. These areas have evolved independently over time.
In Section 2, the article focuses on a specific area of SLAM, called active SLAM, in which the autonomous movement of the mobile robotic system is addressed in order to obtain a map. When obtaining a map, various areas of interest can be defined, such as unexplored areas, areas of irregular shape, areas with narrow alleys, etc. In our case, we focused on unexplored areas, to reveal as much space as possible on the map. As an industrial example of such an application, consider the deployment of Autonomous Ground Vehicles (AGVs) in an industrial hall: it is usually necessary to map the space before deployment, and active SLAM can be used for this. In our case, we decided to solve this problem with a holonomic robotic system and LiDAR, with the space represented by a grid map. The hardware and software equipment used is described in Section 3. Section 4 deals with the design of a suitable algorithm for solving active SLAM. The authors of [5,6] have tried alternative solutions. The success rate of those methods, interpreted as the percentage coverage of the mapped area, is in the range of 90–98.13%. Our work aims to increase this success rate.

2. Active SLAM

The ASLAM method takes into account the obstacles and limitations already addressed by the SLAM method. ASLAM can be considered an extension of classic SLAM that solves the autonomous exploration of space. It is often referred to in the scientific literature as automatic SLAM [7], autonomous SLAM [8], or SPLAM (Simultaneous Planning, Localization, and Mapping) [9]. In any case, all these terms share the same goal, namely active motion selection. In [10], the author divided this goal into two research sub-areas, mobile robotics and computer vision. In mobile robotics, active exploration means the autonomous creation of a map, in which the robot must control its movement and the associated tasks of computing the target and the path. At the same time, an active response to unexpected situations is expected. In computer vision, this issue is called active perception [11] and is defined as an intelligent and reactive process of gathering information from the environment. It is considered intelligent because the state of the sensor intentionally changes according to the sensing strategies. Active perception applied to mobile robotics describes the robot's ability to determine specific movements to better understand the environment. In the context of localization, the position or direction of the sensor shifts to find features that reduce position uncertainty.
The ASLAM problem can be divided, as stated by the author in [12], into the following phases:
  • Position identification.
  • Destination selection.
  • Navigation and motion control.
These phases correspond to the localization, route planning, and mapping tasks. The relationship between these tasks is shown in Figure 1. In the position identification phase, suitable destinations that need attention are identified. In the second phase, the optimal destination is selected. The third phase is about the robotic system successfully reaching the destination.
ASLAM methods are comprehensively summarized by the author in [10], including what type of robotic system was used, the type of sensor, the type of map, the optimization technique used, and whether the test was performed physically or only in simulation. That ASLAM is a topical issue is also reflected in its integration into the IoT environment, as described in [13], which deals with the processing of data from the sensors of autonomous vehicles.

3. Hardware and Software Equipment

A mobile robotic system developed at our workplace was used for the experimental testing of the proposed methods. This mobile robotic system (MRS) was supported by the projects APVV-17-0014 Smart tunnel: "Telematic support in case of emergencies in the road tunnel" and VEGA 1/0241/22 "Mobile robotic systems for support during crisis situations". From our point of view, the MRS is considered a black box that can be controlled using an API. Communication with the robot is performed via LoRa technology. LoRa (Long Range) is a spread-spectrum modulation technique derived from chirp spread spectrum technology. It is a long-range, low-power wireless platform that has become the de facto wireless platform of the Internet of Things. A LiDAR LD-OEM 1000 is mounted on the MRS, which performs the measurements. LoRa does not allow sending data in the required volume and at the required speed, so a Wi-Fi router was added, which communicates with the LiDAR and an external computing unit using the TCP/IP protocol. The concept is summarized in Figure 2.
The main element of this robotic system is its omnidirectional wheels. Because we are using omnidirectional wheels with a suitable control algorithm, we can call this robotic system holonomic. These wheels have rolling elements placed around the circumference, which rotate freely and are fixed at an angle of 45°. During rotation, each wheel generates a force at 45° (or −45°), with each wheel being controlled separately. Summing the forces acting on the robotic system yields the resultant speed and direction of the robot. Figure 3 shows this type of chassis.
The forward kinematics of this type of chassis can be expressed by Equation (1), and the inverse kinematics by Equation (2):
$$\begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix} = 2\pi r \begin{bmatrix} \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\ -\frac{1}{4} & \frac{1}{4} & \frac{1}{4} & -\frac{1}{4} \\ -\frac{1}{2(l+b)} & \frac{1}{2(l+b)} & -\frac{1}{2(l+b)} & \frac{1}{2(l+b)} \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \end{bmatrix}, \quad (1)$$

$$\begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \end{bmatrix} = \frac{1}{2\pi r} \begin{bmatrix} 1 & -1 & -\frac{l+b}{2} \\ 1 & 1 & \frac{l+b}{2} \\ 1 & 1 & -\frac{l+b}{2} \\ 1 & -1 & \frac{l+b}{2} \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix}, \quad (2)$$
where vx and vy are the forward and lateral speeds of the robot, θi is the rotational speed of wheel i (i = 1, 2, 3, 4), ω is the angular speed of the MRS around its z axis, and l and b are the dimensions of the MRS [14]. The LiDAR LD-OEM 1000 is attached to the MRS. The prototype of this MRS can be seen in Figure 4.
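To make the two kinematic relations concrete, they can be sketched numerically. The paper's experiments used Matlab, but the sketch below is plain Python; the sign pattern assumes one common mecanum-wheel numbering (the per-wheel signs are not fully recoverable from the typeset equations), so treat the matrices as an illustrative reconstruction rather than the authors' exact values:

```python
import numpy as np

def forward_kinematics(theta, r, l, b):
    """Body velocities (vx, vy, omega) from the four wheel rotational
    speeds, as in Equation (1); theta is in revolutions per second."""
    k = 1.0 / (2.0 * (l + b))
    A = np.array([[ 0.25,  0.25,  0.25,  0.25],
                  [-0.25,  0.25,  0.25, -0.25],
                  [-k,     k,    -k,     k   ]])
    return 2.0 * np.pi * r * (A @ np.asarray(theta, dtype=float))

def inverse_kinematics(v, r, l, b):
    """Wheel rotational speeds from desired (vx, vy, omega), Equation (2)."""
    h = (l + b) / 2.0
    B = np.array([[1.0, -1.0, -h],
                  [1.0,  1.0,  h],
                  [1.0,  1.0, -h],
                  [1.0, -1.0,  h]])
    return (B @ np.asarray(v, dtype=float)) / (2.0 * np.pi * r)
```

Equal wheel speeds then produce pure forward motion, and the two mappings are mutually consistent: applying Equation (2) and then Equation (1) recovers the commanded body velocities.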
One of the best-known platforms focused on robotics and closely related areas is ROS (Robot Operating System). The use of ROS brings several advantages in the form of several implemented SLAM algorithms. However, ROS also has disadvantages in terms of the complexity of the program architecture and node-based program creation. Because this can make development more complicated, we decided to use a platform that is, from our point of view, somewhat simpler: Matlab. Matlab solves SLAM through the collaboration of multiple libraries: the Robotics System Toolbox, Computer Vision Toolbox, Lidar Toolbox, and Navigation Toolbox. These toolboxes already include specific solutions, where it is possible to use the Normal Distributions Transform (NDT) method [15], the scan-matching method [16], or methods using the extended Kalman filter. It is possible to map the environment using a LiDAR sensor and an RGB camera, but also using an IMU. The use of Matlab software in robotic applications is discussed in [17]. A particle filter (PF) method can also be used to calculate the current position [18]. After mapping the environment, it is possible to generate a map for global navigation purposes. Local navigation can also be used; for local navigation, the Vector Field Histogram (VFH) methods are available. The choice of solution method depends on the requirements placed on the final output, such as computational complexity and time response. In our case, we decided to use the method implemented in Matlab's lidarSLAM function, which uses 2D data from a LiDAR scanner to create a map. First of all, it is necessary to handle the data collection, which will be further processed. The LiDAR SICK LD-OEM 1000 was used for data collection. This sensor communicates with the computing unit via a TCP/IP socket. This communication is addressed in [19]. The whole SLAM process is performed according to Figure 5.

4. ASLAM Method Design

In ASLAM, three basic tasks must be combined in an appropriate way: mapping, localization, and route planning. Our approach to this issue is summarized in Figure 5. In the first step, initialization measurements are performed, and then the whole process is executed in a loop. After the measurement, it is necessary to determine the target destination that the robotic system should explore in the next iteration. The selection of a suitable target area is one of the main points addressed in this article. When the start and finish positions of the robotic system are available, a path-planning algorithm can be applied in the next step; in our case, the A* algorithm is used. The generated path is sent to the mobile robotic system, while the ASLAM algorithm itself no longer interferes with the robot's movement; it only monitors the current position of the system. The mobile robotic system we used is described in the Hardware section. When moving to the target position, data are read from the robot's odometer so that the end of the movement can be detected. During the movement of the robot to the target, a time window is created that can be used to perform additional measurements. In order not to overload the whole process with redundant data, we decided to perform a measurement only after a certain distance has been covered; a distance of 0.5 m proved effective in the tests. At the end of the movement, the process is repeated from step 2 until the required number of iterations k is reached. The whole process is summarized in Figure 6.
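The loop just described can be sketched as follows. Here `scan`, `select_target`, `plan_path`, and `move_along` are hypothetical placeholders for the measurement, target-selection (Section 4.1), A*-planning, and motion interfaces; they are not the authors' actual API:

```python
def active_slam(scan, select_target, plan_path, move_along, k):
    """Sketch of the ASLAM loop: an initialization measurement, then k
    iterations of target selection, path planning, and motion, with an
    extra scan roughly every 0.5 m of travel (the threshold found
    effective in the tests)."""
    scans = [scan()]                 # initialization measurement
    pose = (0.0, 0.0)
    for _ in range(k):
        target = select_target(scans, pose)   # Section 4.1
        path = plan_path(pose, target)        # e.g. the A* algorithm
        travelled = 0.0
        for waypoint, step in move_along(path):
            pose = waypoint                   # position from odometry
            travelled += step
            if travelled >= 0.5:              # avoid redundant data
                scans.append(scan())
                travelled = 0.0
    return scans, pose
```

The ASLAM layer only monitors the odometry stream; motion control itself stays inside `move_along`, mirroring the separation described above.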

4.1. Selecting a Potential Destination for Path Planning

Choosing the right destination when planning a route is a key task in ASLAM. This decision must consider the purpose for which ASLAM is implemented and which areas are a priority. Examples include unexplored areas, areas of irregular shape, areas with narrow alleys, etc. In our case, we focus on unexplored areas with no occupied cells in their vicinity. The whole process of selecting a suitable target is captured in Algorithm 1. The necessary inputs to this algorithm are the grid map of the surroundings, the current position, and the bounded area of interest.
Algorithm 1: Target position selection
Input: Grid map, current position, area of interest
Output: Navigation destination
  1. Generate a state matrix based on the grid map.
  2. Search the state matrix and select suitable areas based on a predetermined condition.
  3. Cluster the suitable areas (K-means).
  4. Assign a weight to each area.
  5. Select the goal point with the highest weight.
A grid map is expected at the input, whose cell values are one of three numbers (0, free; 1, occupied; −1, unknown). A state matrix is created from this map. This step is followed by a search of the entire matrix for potential path-planning destinations. The following condition must be fulfilled when selecting a potential target:
if map3value(i,j) == 0 && map3value(i-1,j) == 0 && map3value(i+1,j) == 0 && map3value(i,j-1) == 0 && map3value(i,j+1) == 0
From this condition, it follows that the center of the region of interest must be a free cell (0), and at the same time all directly neighboring cells must also equal zero. If this assumption is fulfilled, a 5 × 5 matrix (newmatrix) is created, centered at coordinates (i, j). After this step, another condition is tested:
ismember(1, newmatrix) == 0 && sum(newmatrix(:)) < -numel(newmatrix)/3
This condition ensures that no value equal to one appears in the window, excluding areas containing an obstacle (a wall, an object in the room, etc.). At the same time, the sum over all elements of the matrix is taken; because unknown cells are stored as −1, this requires the window to contain at least 1/3 unknown cells. If this condition is also fulfilled, the coordinates of the center of the area are stored, and this point is considered a potential point of interest. Depending on the map, several dozen such points are obtained. A matrix that meets the mentioned conditions looks like this:
$$M = \begin{pmatrix} -1 & -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & -1 \\ -1 & -1 & -1 & -1 & -1 \end{pmatrix}.$$
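The candidate search described by the two conditions can be sketched as a direct scan over the state matrix. `find_candidates` is an illustrative name of ours, and the 1/3-unknown test is written by counting −1 cells directly rather than through the sign of the window sum:

```python
import numpy as np

def find_candidates(grid):
    """Scan a state matrix (0 free, 1 occupied, -1 unknown) and return the
    centers of 5x5 windows that qualify as potential targets: the center
    and its four direct neighbours are free, the window contains no
    occupied cell, and at least 1/3 of the window is unknown."""
    grid = np.asarray(grid)
    rows, cols = grid.shape
    candidates = []
    for i in range(2, rows - 2):
        for j in range(2, cols - 2):
            centre_free = (grid[i, j] == 0 and grid[i - 1, j] == 0
                           and grid[i + 1, j] == 0 and grid[i, j - 1] == 0
                           and grid[i, j + 1] == 0)
            if not centre_free:
                continue
            window = grid[i - 2:i + 3, j - 2:j + 3]
            if (window == 1).any():                # obstacle in the window
                continue
            if (window == -1).sum() >= window.size / 3:   # enough unknown
                candidates.append((i, j))
    return candidates
```

Applied to a 5 × 5 state matrix like the example above (unknown cells stored as −1), only the center cell can be tested, and it qualifies.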
For a suitable selection of one specific point, it is necessary to create a weight function that evaluates each point separately. When creating the weight function, we decided to consider two main factors: the distance of the point from the current position and the number of points in the area. To obtain the multiplicity of points in the area, it is necessary to perform clustering. The K-means++ algorithm was used for clustering.

4.2. K-means and K-means++ Algorithm

The K-means algorithm is one of the most widely used algorithms for classifying and clustering data [20]. Clusters are found by an iterative method. To express the K-means algorithm mathematically, we must first explain some concepts. Let X ⊂ R² be the set of input points and K the integer number of clusters into which we want to classify our data. Let ci ∈ R² denote the i-th representative point and C = {c1, c2, …, cK} the set of representative points with K elements. The set Zj = {x ∈ X | arg min c∈C ‖x − c‖² = cj} is called the cluster of the representative point cj for j = 1, 2, …, K. Data points X = {x1, x2, …, xn} are expected at the input of the algorithm, where n is their number; these data points are represented by (x, y) coordinates. The condition that K must not be greater than the number of input points must hold. Another assumption is unique assignment to clusters, which states that each set C determines a decomposition of the set X into mutually disjoint sets Zj. Based on these statements, we will try to minimize the cost function in Equation (3)
$$\phi(C) := \sum_{x \in X} \min_{c \in C} \| x - c \|^2 \quad (3)$$
by selecting the set of representative points C for the data points X, i.e., finding the ideal set of representative points C for a given set X by minimizing the squared distances of the individual points x from their representative points c. The aim is to find the optimal decomposition of the set X in terms of minimizing the squared distances. Under the assumption of unique assignment to clusters, the cost function from Equation (3) can be expressed equivalently by Equation (4):
$$\phi(C) := \sum_{c_j \in C} \sum_{x \in Z_j} \| x - c_j \|^2. \quad (4)$$
The K-means algorithm is performed in two iterative steps, where in the second step the new representative points are calculated using Equation (5) [20]:
$$c_j := \frac{1}{|Z_j|} \sum_{x \in Z_j} x. \quad (5)$$
The author explains this in more detail in [21,22]. The principle of K-means is shown in Figure 7.
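As a minimal sketch of the two iterative steps (nearest-representative assignment, then the mean update of Equation (5)); the function and variable names here are ours, not from the toolbox:

```python
import numpy as np

def kmeans(X, C, iterations=20):
    """Plain K-means (Lloyd's) iteration: assign every point in X to its
    nearest representative in C, then move each representative to the
    mean of its cluster, Equation (5)."""
    X = np.asarray(X, dtype=float)
    C = np.asarray(C, dtype=float).copy()
    for _ in range(iterations):
        # squared distances of every point to every representative
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(len(C)):
            members = X[labels == j]
            if len(members):                 # keep empty clusters in place
                C[j] = members.mean(axis=0)  # Equation (5)
    return C, labels
```

With well-separated data, the representatives converge to the cluster means within a few iterations.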
In the previous description of K-means, we assumed a uniformly random selection of the initial representative points cj. However, this choice is not optimal, as it carries a high probability of incorrect classification. As a result, some assignments may be illogical; an example of this type of classification can be seen in Figure 8.
Several methods have been developed to solve this problem. In our case, we decided to use the K-means++ algorithm, which can be considered an extension of the original K-means algorithm; it was introduced in [23]. This algorithm retains the original procedure from Figure 6, the change being the selection of the initial mean values. The points cj are randomly selected from the set X with the selection probability P(x) defined in Equation (6):
$$\forall x \in X: \quad P(x) = \frac{\min_{c \in C} \| x - c \|^2}{\sum_{\tilde{x} \in X} \min_{c \in C} \| \tilde{x} - c \|^2}. \quad (6)$$
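The seeding step can be sketched as follows; the first representative is drawn uniformly and each further one with the probability P(x) of Equation (6) (function and argument names are ours):

```python
import numpy as np

def kmeans_pp_seed(X, K, seed=None):
    """K-means++ seeding: draw each next representative from X with
    probability proportional to its squared distance to the nearest
    representative already chosen, Equation (6)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centers = [X[rng.integers(len(X))]]          # first center: uniform
    for _ in range(K - 1):
        d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2)
        d2 = d2.sum(axis=-1).min(axis=1)         # distance to nearest center
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(centers)
```

Because points far from every chosen representative get a proportionally higher probability, the seeds spread across the data instead of crowding into one region.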
To apply the described clustering method, it was necessary to specify the number of clusters, which we hereafter set to 7. The result of searching for suitable target points and their distribution into clusters can be seen in Figure 9.
After clustering, the number of points Ck in each cluster k was calculated, giving the first parameter needed to calculate the weight of the points; this number was attributed to each point. The other parameter needed to calculate the weight is the distance of the point from the current position, calculated as the Euclidean distance in Equation (7):
$$d(x, y) = \sqrt{\sum_i (x_i - y_i)^2}, \quad (7)$$
After obtaining the parameters of distance and multiplicity of points in the cluster, it was possible to proceed to the calculation of the weight of individual points. The relationship for the weight calculation is given in Equation (8):
$$w(x, y) = \frac{C_k}{\sqrt{d}}, \quad (8)$$
where w(x, y) is the weight of a point with coordinates x and y, Ck is the number of points located in the corresponding cluster k, and d is the distance of the point from the current position. By taking the square root of d, we reduce the effect of distance on the resulting weight. These parameters are written in a matrix with n rows and five columns, each row having the form (x, y, cluster number, number of points, weight). In the last step, the row with the largest weight is selected, and its x and y coordinates serve as the destination point for the navigation algorithm. If a bounded area of interest is used, then before the weight is written into the matrix, we test whether the x and y coordinates of the point lie within the given interval; if they are outside the interval, the point's weight is set to zero. A problem can occur if all points in the matrix have a weight of zero or no point meets the mentioned criteria; in this case, the point (0, 0) is set as the destination. The whole process of selecting the next position takes on average 0.36 s for a grid map of dimensions 847 × 1057 (rows × columns). When the search for the point of interest is over, we have all the necessary inputs for path planning.
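The weighting and fallback logic of this paragraph can be sketched as follows (the function name and argument layout are ours):

```python
import math

def select_goal(points, labels, position, x_range, y_range):
    """Pick the candidate with the largest weight w = C_k / sqrt(d),
    Equation (8): C_k is the size of the point's cluster and d its
    Euclidean distance (Equation (7)) from the current position.
    Points outside the area of interest get weight zero; (0, 0) is
    the fallback destination when nothing qualifies."""
    counts = {}
    for k in labels:                      # cluster sizes C_k
        counts[k] = counts.get(k, 0) + 1
    best, best_w = (0.0, 0.0), 0.0
    for (x, y), k in zip(points, labels):
        if not (x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]):
            continue                      # outside interval -> weight zero
        d = math.hypot(x - position[0], y - position[1])
        w = counts[k] / math.sqrt(d) if d > 0 else float("inf")
        if w > best_w:
            best, best_w = (x, y), w
    return best
```

The square root deliberately dampens the distance penalty, so a point backed by a large cluster can win even when a lone candidate sits slightly closer.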

4.3. Experimental Measurements

The measurements took place in the corridors of our workplace; photos of the environment can be seen in Figure 10. The environment contains objects that can cause noise in the measurement, such as glass doors and flowers, which we consider desirable, since the measurement then does not take place in an ideal environment. The result of one of the successful measurements using ASLAM can be seen in Figure 11. The algorithm performed 13 iterations, and the LiDAR made 47 measurements of the environment. Our area of interest was the range x ∈ ⟨0, 30⟩, y ∈ ⟨−0.5, 2⟩. The trajectory of the robot's movement is marked in blue.

4.4. Comparison and Evaluation of Results

We decided to use the size of the covered area as a suitable indicator of the success of the proposed algorithm. The author in [5] chose the approach of incrementally building topology–grid hybrid maps (TGHM). From the measurements of the LiDAR sensor, potential targets are chosen to which the MRS can be navigated. These targets are selected by a geometric rule, which the author strictly defines in the source; the rule is based on the map boundaries of the current measurement and the range of the LiDAR sensor, while the dimensions of the MRS are also taken into account to avoid collisions and to keep a distance from obstacles. After these points are selected, their surroundings are searched within a radius equal to the scanning range of the LiDAR. From this search, an information gain is obtained for each point, based on the number of unknown map cells $N_{\text{unknown}}$. In the last step, each potential point is assigned a topological distance L(T) from the current position T of the robotic system. The utility function is defined in Equation (9):
$$F(T) = N_{\text{unknown}} \times \exp(-\lambda L(T)), \quad (9)$$
where λ is a positive constant used to adjust the weight of the distance against the weight of unknown areas. In the experimental part, the author confirmed the success of the method, with environment coverage of 96.62% in the first measurement and 98.3% in the second measurement.
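For comparison, the TGHM utility of Equation (9) is simple to state in code; the default λ below is an illustrative value of ours, not taken from [5]:

```python
import math

def tghm_utility(n_unknown, path_length, lam=0.5):
    """Utility of a candidate target in TGHM exploration, Equation (9):
    information gain (number of unknown cells) discounted exponentially
    by the topological distance L(T)."""
    return n_unknown * math.exp(-lam * path_length)
```

A nearby target with the same information gain therefore always scores higher than a distant one, and λ controls how steeply distance is penalized.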
In [6], the author used a grid map from which targets were chosen based on the condition that a cell $a_{ij}$ can be reached from all directions, that the robot can be navigated to this cell, and that at least one cell in its close surroundings is unknown. This creates a large set of points, which is then clustered using the author's modification of K-means. From a predefined number of clusters, each centroid becomes a new potential navigation target. Such a point is then assigned a weight based on Equation (10):
$$f(a_{ij}) = g_{ij} + h_{ij} + F^{m,n}_{ij}, \quad (10)$$
where $a_{ij}$ is the cell with coordinates i and j, $f(a_{ij})$ is the weight of the assumed distance to cell $a_{ij}$, $g_{ij}$ is the current cost from the starting cell to cell $a_{ij}$, and $h_{ij}$ is the estimated cost of the distance from the current cell to the target cell located near cell $a_{ij}$. $F^{m,n}_{ij}$ is the throughput index from the start to cell $a_{ij}$. Each path cost is determined from the length calculated by the A* algorithm. Subsequently, the cell with the lowest weight is selected. The author performed an outdoor test on sloping terrain over an area of 9 × 16 m, traversing a track of 24.4 m and covering 90% of the territory in eight iterations.
Comparing our results with the previous two approaches, our success rate is slightly higher. In the first measurement, the area of interest covered 30 × 2.5 m, comprising 2892 cells of the grid map. The number of unmapped cells was 13, which represents a coverage of 99.55%. During the measurement, 13 iterations were performed, and the distance travelled by the MRS was approximately 30.3 m. These measurements are shown in Figure 12 and Figure 13, where the red border represents the area of interest and the blue color represents the unknown areas of the map. The area of interest did not cover the whole corridor due to the high occurrence of obstacles and glass walls, which caused noise and inaccuracies in that part; for an illustration of this issue, see Figure 10. However, if we take this area into account as well, the coverage is 98.54%, with an area of 8.8 × 30 m and 8742 cells. The results are summarized in Table 1.

5. Conclusions

Two main aspects need to be considered when designing an ASLAM method. The first is eliminating the effect of wheel drift on the result of the whole measurement; in our case, we tried to eliminate this error using a SLAM algorithm that corrects the position obtained from the odometer. The second main aspect is the coverage of the entire environment, which is addressed by the method of choosing a suitable goal for the robot's next movement. This method proposes an algorithm based on clustering potential targets and calculating their distance from the measuring system. Experiments have shown that the proposed algorithm provides an effective approach to solving ASLAM. In our case, the calculation of a new goal took on average 0.3 s, which is not an overly time-consuming process. If the robot cannot navigate to the target area or has covered the entire area, it returns to position (0, 0). Mobile robots can then explore the environment with enhanced stability and increased efficiency. Future work consists of two steps. In the first step, the goal will be to improve the efficiency of the SLAM algorithm using the tool described in [24]; with this tool, we can obtain data from an IMU and GPS and use them to improve the SLAM algorithm. The second step will be the deployment of the proposed methods on a spherical robotic system, whose design is given in [25]. This type of robot can be used to monitor safety-critical applications.

Author Contributions

Data curation, M.M.; formal analysis, P.P.; investigation, B.M.; methodology, M.M.; project administration, P.V.; resources, P.V.; supervision, P.P.; validation, B.M.; visualization, P.V.; writing—original draft, M.M.; writing—review and editing, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by KEGA grant number 008ŽU-4/2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All necessary data are given in this article.

Acknowledgments

The publication was funded by the Cultural and Educational Grant Agency MŠVVaŠ SR, grant number 008ŽU-4/2021: Integrated Teaching for Artificial Intelligence Methods at the University of Žilina.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khairuddin, A.R.; Talib, M.S.; Haron, H. Review on simultaneous localization and mapping (SLAM). In Proceedings of the 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 27–29 November 2015; pp. 85–90. [Google Scholar] [CrossRef]
  2. Singandhupe, A.; La, H.M. A Review of SLAM Techniques and Security in Autonomous Driving. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 602–607. [Google Scholar] [CrossRef]
  3. Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  4. Lei, X.; Feng, B.; Wang, G.; Liu, W.; Yang, Y. A Novel FastSLAM Framework Based on 2D Lidar for Autonomous Mobile Robot. Electronics 2020, 9, 695. [Google Scholar] [CrossRef]
  5. Liu, S.; Li, S.; Pang, L.; Hu, J.; Chen, H.; Zhang, X. Autonomous Exploration and Map Construction of a Mobile Robot Based on the TGHM Algorithm. Sensors 2020, 20, 490. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Tang, Y.; Cai, J.; Chen, M.; Yan, X.; ** and Exploration; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  7. Lluvia, I.; Lazkano, E.; Ansuategi, A. Active Mapping and Robot Exploration: A Survey. Sensors 2021, 21, 2445. [Google Scholar] [CrossRef] [PubMed]
  8. Bajcsy, R. Active perception. Proc. IEEE 1988, 76, 966–1005. [Google Scholar] [CrossRef]
  9. Arévalo, M.L.R. On the Uncertainty in Active Slam: Representation, Propagation and Monotonicity. Ph.D. Thesis, Universidadde Zaragoza, Zaragoza, Spain, 2018. [Google Scholar]
  10. Lee, J.; Lee, K.; Yoo, A.; Moon, C. Design and Implementation of Edge-Fog-Cloud System through HD Map Generation from LiDAR Data of Autonomous Vehicles. Electronics 2020, 9, 2084. [Google Scholar] [CrossRef]
  11. Qian, J.; Zi, B.; Wang, D.; Ma, Y.; Zhang, D. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System. Sensors 2017, 17, 2073. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 27–31 October 2003; pp. 2743–2748. [Google Scholar]
  13. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar] [CrossRef]
  14. Safeea, M.; Neto, P. KUKA Sunrise Toolbox: Interfacing Collaborative Robots With MATLAB. IEEE Robot. Autom. Mag. 2019, 26, 91–96. [Google Scholar] [CrossRef] [Green Version]
  15. Thrun, S. Wolfram Burgard, and Dieter Fox. In Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  16. Mihálik, M.; Hruboš, M.; Janota, A. Testing of SLAM methods in the Matlab environment. In Proceedings of the 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 21–23 January 2021; pp. 000055–000058. [Google Scholar] [CrossRef]
  17. Žigo, V. K-means Alogoritmus a Voronoiove Regiony. Bachelor’s Thesis, Fakulta Matematiky, Fyziky a Informatiky, Univerzita Komenského v Bratislave, Bratislava, Slovakia, 2020. [Google Scholar]
  18. Boyd, S.; Vandenberghe, L. Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  19. Jain, K.A.; Dubes, R.C. Algorithms for Clustering Data; Prentice Hall: Englewood Cliffs, NJ, USA, 1998. [Google Scholar]
  20. Arthur, D.; Vassilvitskii, S. K-means++: The advantages of careful seeding. In SODA ’07, Proceedings of the Eighteenth Annual ACMSIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 7–9 January 2007; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007; pp. 1027–1035. [Google Scholar]
  21. Andel, J.; Šimák, V.; Skultety, F.; Nemec, D. IoT-based Data Acquisition Unit for aircraft and road vehicle. Transp. Res. Procedia 2021, 55, 969–976. [Google Scholar] [CrossRef]
  22. Bujňák, M.; Pirník, R.; Rástočný, K.; Janota, A.; Nemec, D.; Kuchár, P.; Tichý, T.; Łukasik, Z. Spherical Robots for Special Purposes: A Review on Current Possibilities. Sensors 2022, 22, 1413. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Components that form the Active Simultaneous Localization and Mapping (ASLAM) problem, expressed using set theory.
Figure 2. A robotic system with an omnidirectional chassis.
Figure 3. Omnidirectional chassis.
Figure 4. Mobile robotic system.
Figure 5. Flowchart of the SLAM solution.
Figure 6. Flowchart of the active SLAM algorithm.
Figure 7. Principle of the K-means algorithm.
Figure 8. Example of logical (right graph) and illogical (left graph) clustering using K-means.
Figure 9. Points in a cluster on a grid map.
Figure 10. Mapped spaces in real photographs.
Figure 11. Map created using the active SLAM algorithm.
Figure 12. Generated grid map from the test.
Figure 13. Generated grid map from the test.
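Figures 7 and 8 illustrate K-means clustering of scanned points into candidate regions. As a minimal sketch of the iteration those figures depict (assign each point to its nearest centroid, then move each centroid to its cluster mean), here is plain Python; this is an illustrative implementation, not the authors' MATLAB code, and the point set is invented:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means: alternate nearest-centroid assignment and mean update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # update step: each non-empty cluster's centroid moves to its mean
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(x for x, _ in cl) / len(cl),
                                sum(y for _, y in cl) / len(cl))
    return centroids, clusters

# two well-separated groups of 2D points
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
cents, cls = kmeans(pts, 2)
print(cents)  # converges to the two group means, (0.1, 0.1) and (5.0, 5.1)
```

The "illogical" clustering of Figure 8 corresponds to an unlucky initialization; seeding strategies such as K-means++ [20] reduce that risk.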
Table 1. Comparison of results.

Author                         Area Size (m × m)    Coverage (%)    Iterations
Shuang Liu [5] (simulation)    20 × 20              96.62           26
Shuang Liu [5] (experiment)    6 × 12               98.13           7
Tang [6]                       9 × 16               90              8
Test (this work)               30 × 2.5             99.55           13
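Coverage in Table 1 expresses how much of the explorable area the final map resolves. A minimal sketch of one way such a percentage can be computed from an occupancy grid follows; the cell encoding and the metric details here are an assumption for illustration, not taken from the cited papers:

```python
# Occupancy-grid cell states: UNKNOWN is unexplored, FREE/OCCUPIED are mapped.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def coverage_percent(grid, explorable_cells):
    """Percentage of explorable cells the map has resolved (free or occupied)."""
    known = sum(1 for row in grid for cell in row if cell != UNKNOWN)
    return 100.0 * known / explorable_cells

# toy 4 x 5 grid, all 20 cells explorable in ground truth
g = [[UNKNOWN] * 5 for _ in range(4)]
for r in range(3):
    g[r] = [FREE] * 5            # 15 cells mapped as free
g[3][0] = g[3][1] = OCCUPIED     # 2 cells mapped as occupied
print(coverage_percent(g, 20))   # prints 85.0
```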
Mihálik, M.; Malobický, B.; Peniak, P.; Vestenický, P. The New Method of Active SLAM for Mapping Using LiDAR. Electronics 2022, 11, 1082. https://doi.org/10.3390/electronics11071082
