Visual Servoing of Mobile Robots

A special issue of Actuators (ISSN 2076-0825). This special issue belongs to the section "Actuators for Robotics".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 7391

Special Issue Editor


Dr. Paolo Di Giamberardino
Guest Editor
Department of Computer, Control and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Via Ariosto, 25, 00185 Rome, Italy
Interests: nonlinear discrete time and sampled dynamics; optimal control; nonlinear control; sensor networks; epidemics modeling and control; robotic vision; micro manipulator control

Special Issue Information

Dear colleagues,

Visual servoing of robots has a long history: there are now more than forty years’ worth of contributions on the topic, following the development of efficient robotic vision systems, improvements in the computational power of computing platforms, the birth and growth of disciplines such as machine learning and AI, and the many hardware and software tools that have contributed to the increased efficiency of image processing methods. Improvements in the speed, complexity, and precision of image processing, together with the evolution of ever more efficient big data storage and computing systems, are rapidly expanding the boundaries of the visual servoing field.

However, mobile robotic systems, with their autonomous motion capabilities, remain the key field in which visual servoing finds both theoretical and applicative developments.

The aim of the present Special Issue is to collect results on classical problems as well as examples of new, advanced visual servoing techniques for mobile robots. Original articles and reviews focused on, but not limited to, the following topics are welcome:

- Robotic vision systems;
- Computer vision;
- Image processing;
- Vision-based localization and motion;
- Vision-based human–robot and robot–environment interactions;
- Visual sensing;
- Machine learning and AI techniques for visual servoing;
- Visual servoing applications;
- New trends in visual servoing.

Dr. Paolo Di Giamberardino
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Actuators is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotic vision systems;
  • image acquisition;
  • computer vision;
  • image processing;
  • visual sensing;
  • machine learning in robotics

Published Papers (2 papers)


Research

32 pages, 22058 KiB  
Article
3D Object Recognition and Localization with a Dense LiDAR Scanner
by Hao Geng, Zhiyuan Gao, Guorun Fang and Yangmin Xie
Actuators 2022, 11(1), 13; https://doi.org/10.3390/act11010013 - 5 Jan 2022
Cited by 5 | Viewed by 3010
Abstract
Dense scanning is an effective solution for refined geometrical modeling applications. Previous studies in dense environment modeling mostly focused on data acquisition techniques without emphasizing autonomous target recognition and accurate 3D localization, and therefore lacked the capability to output semantic information about the scene. This article aims to fill that gap. We address two critical problems: (1) system calibration to ensure detail fidelity for 3D objects with fine structures, and (2) fast outlier exclusion to improve 3D bounding-box accuracy. A lightweight fuzzy neural network is proposed to remove most background outliers, which experiments show to be effective for various objects in different situations. With precise and clean data ensured by these two techniques, our system can extract target objects from the original point clouds and, more importantly, accurately estimate their center locations and orientations. Full article
(This article belongs to the Special Issue Visual Servoing of Mobile Robots)
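The outlier-exclusion and pose-estimation pipeline summarized in the abstract can be illustrated with a minimal sketch. This is not the paper's fuzzy neural network: here, background points are excluded by a simple radius test around a coarse target hypothesis, and the center and orientation are then estimated by PCA. The function names, the radius test, and all parameters are hypothetical stand-ins for illustration only.

```python
import numpy as np

def exclude_background(points, center, radius):
    """Keep only points within `radius` of a coarse target center.

    A crude stand-in for a learned outlier-exclusion stage: points far
    from the hypothesized target are treated as background and dropped.
    """
    d = np.linalg.norm(points - center, axis=1)
    return points[d <= radius]

def estimate_pose(points):
    """Estimate the target's center and principal orientation via PCA."""
    center = points.mean(axis=0)
    # Principal axes of the centered cloud give a coarse orientation frame.
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    return center, vt  # rows of vt are the principal directions

# Usage: a tight cluster near the origin plus one far background point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, (50, 3)), np.array([[10.0, 0.0, 0.0]])])
target = exclude_background(cloud, np.zeros(3), 1.0)
center, axes = estimate_pose(target)
```

The interesting design point the paper emphasizes is that clean input (calibration plus outlier exclusion) is what makes the downstream center/orientation estimate accurate; in this toy version, leaving the background point in would drag the estimated center far off the true one.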

18 pages, 5583 KiB  
Article
A Study on Vision-Based Backstepping Control for a Target Tracking System
by Thinh Huynh, Minh-Thien Tran, Dong-Hun Lee, Soumayya Chakir and Young-Bok Kim
Actuators 2021, 10(5), 105; https://doi.org/10.3390/act10050105 - 19 May 2021
Cited by 10 | Viewed by 3287
Abstract
This paper proposes a new method to control the pose of a camera mounted on a two-axis gimbal system for visual servoing applications. In these applications, the camera should remain stable while its line of sight points at a target located within the camera’s field of view. One of the most challenging aspects of these systems is the coupling in the gimbal kinematics as well as in the imaging geometry. Such factors must be considered in the control system design process to achieve better control performance. The novelty of this study is that the couplings in both the mechanism’s kinematics and the imaging geometry are decoupled simultaneously by a new technique, so popular control methods can be easily implemented and good tracking performance obtained. The proposed control configuration includes a calculation of the gimbal’s desired motion that takes the coupling influence into account, and a control law derived by the backstepping procedure. Simulation and experimental studies were conducted, and their results validate the efficiency of the proposed control system. Moreover, comparison studies were conducted between the proposed control scheme, image-based pointing control, and decoupled control; these demonstrate the superiority of the proposed approach, which requires fewer measurements and yields smoother transient responses. Full article
(This article belongs to the Special Issue Visual Servoing of Mobile Robots)
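As a rough illustration of the backstepping procedure the abstract mentions, here is a textbook single-axis sketch, not the paper's coupled gimbal controller: one decoupled axis is modeled as a double integrator, a virtual rate command stabilizes the outer (pointing-error) loop, and the actual input drives the rate toward that virtual command. The function names and gains `k1`, `k2` are assumptions for illustration.

```python
def backstepping_step(e, v, k1=2.0, k2=3.0):
    """One backstepping control update for a double-integrator axis.

    e: pointing error (e.g., an image-plane offset mapped to a gimbal angle),
    v: its rate. The virtual control alpha = -k1*e stabilizes the outer loop;
    the input then drives v toward alpha, with the -e term cancelling the
    cross term in the Lyapunov analysis (V = e**2/2 + (v - alpha)**2/2).
    """
    alpha = -k1 * e            # virtual rate command from the outer loop
    z2 = v - alpha             # deviation of the true rate from the command
    u = -e - k2 * z2 - k1 * v  # the -k1*v term accounts for alpha's derivative
    return u

def simulate(e0=0.5, v0=0.0, dt=0.001, steps=5000):
    """Forward-Euler rollout of the closed loop from an initial offset."""
    e, v = e0, v0
    for _ in range(steps):
        u = backstepping_step(e, v)
        e += v * dt
        v += u * dt
    return e, v
```

With these gains the closed loop reduces to a stable second-order system, so both the error and its rate decay to zero; the paper's contribution is precisely the decoupling that lets a per-axis design of this kind be applied despite the gimbal's kinematic and imaging-geometry coupling.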
