Flying Free: A Research Overview of Deep Learning in Drone Navigation Autonomy
Abstract
1. Introduction
2. Sources
2.3. Citations
2.4. Evaluation Criteria in the Literature
3. Results
3.1. Awareness
- Spatial Evaluation (SE): The drone can account for the basic spatial limitations of its surrounding environment, such as walls or ceilings, allowing it to safely operate within an enclosed space.
- Obstacle Detection (ODe): The drone can detect independent objects, such as obstacles beyond the bounds of the previously addressed Spatial Evaluation, but does not distinguish between those objects.
- Obstacle Distinction (ODi): The drone can identify distinct objects with independent properties or labels, e.g., identifying a target object and treating it differently from other objects or walls/floors in the environment.
3.2. Basic Navigation
- Autonomous Movement (AM): The drone has a navigation policy that allows it to fly without direct control from an operator; this policy can be expressed in forms as simple as discrete navigation commands (e.g., “go forward”) or as complex as a steering-angle and velocity vector in the two-dimensional x–z plane.
- Collision Avoidance (CA): The drone’s navigation policy includes learned or sensed logic to assist in avoiding collision with non-distinct obstacles.
- Auto Take-off/Landing (ATL): The drone can execute take-off and landing routines on its own, based on its awareness of the environment; this includes determining a safe spot to land and a safe thrust vector for take-off.
3.3. Expanded Navigation
- Path Generation (PG): The drone attempts to generate or optimize a pathway to a given location; the application of the generated pathway can vary depending on the goal of the project (e.g., pathways for safety or pathways for efficiency).
- Environment Distinction (ED): The drone can distinguish or take advantage of features of an uncommon use case environment, such as forests, rural areas or mountainous regions. Urban and indoor environments have been excluded from this criterion.
- Non-Planar Movement (NPM): The implemented navigational policy makes use of full three-dimensional movement strategies enabling the drone to navigate above or below obstacles as well as around them.
3.4. Engineering
- On-Board Processing (OBO): The drone does not rely on external computation for autonomous navigation, and performs navigation on-board with an efficiency comparable to an external system.
- Extra Sensory (ES): The drone employs sensors beyond a camera and rotor-state information such as RPM or thrust. The presence of this feature is not necessarily beneficial; however, additional on-board sensors that aid autonomous navigation may be worth the weight penalty and computational trade-off.
- Signal Independent (SI): The drone’s movement policies do not rely on streamed information, such as global position from a wireless/satellite network or other external subsystems. This independence is likely to be a limiting factor, as such streamed information may greatly improve the precision of an autonomous system.
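The twelve criteria above can be treated as a boolean feature vector per paper, which makes the cross-paper comparison in Section 3.5 mechanical. The sketch below is purely illustrative (the helper name `criteria_met` is ours, not from any surveyed system); the flag values shown are taken from the comparative tables in this review.

```python
# The twelve autonomy criteria, grouped by the sections that define them.
CRITERIA = [
    "SE", "ODe", "ODi",   # 3.1 Awareness
    "AM", "CA", "ATL",    # 3.2 Basic Navigation
    "PG", "ED", "NPM",    # 3.3 Expanded Navigation
    "OBO", "ES", "SI",    # 3.4 Engineering
]

def criteria_met(flags: dict) -> list:
    """Return, in taxonomy order, the criteria a paper satisfies."""
    return [c for c in CRITERIA if flags.get(c, False)]

# Flags for A. Loquercio et al. [9] (2020), per the comparative tables:
loquercio_2020 = {
    "ODi": True, "AM": True, "CA": True,
    "NPM": True, "OBO": True, "SI": True,
}

print(criteria_met(loquercio_2020))
# ['ODi', 'AM', 'CA', 'NPM', 'OBO', 'SI']
```

Counting the returned list gives a crude per-paper autonomy tally (six of twelve criteria here), though the review itself weighs the criteria qualitatively rather than summing them.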
3.5. Comparative Results
4. Discussion
4.1. Common Learning Models
4.2. Areas of Concentrated Research Effort
4.3. Areas of Opportunity
4.4. Issues
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
DNN | Deep Neural Network |
UAV | Unmanned Aerial Vehicle |
IoT | Internet of Things |
CNN | Convolutional Neural Network |
CPU | Central Processing Unit |
MDPI | Multidisciplinary Digital Publishing Institute |
IEEE | Institute of Electrical and Electronics Engineers |
SAE | Society of Automotive Engineers (SAE International) |
Appendix A. Research Pool—2020 Section
Paper | Year | Citations | F1 Score | Accuracy | Efficiency |
---|---|---|---|---|---|
A. Loquercio et al. [9] | 2020 | 34 | - | - | - |
M. K. Al-Sharman et al. [10] | 2020 | 11 | - | - | - |
S. Nezami et al. [11] | 2020 | 8 | - | 0.983 | - |
H. Shiri et al. [12] | 2020 | 6 | - | - | - |
K. Lee et al. [13] | 2020 | 6 | - | - | 80 ms |
A. Anwar et al. [14] | 2020 | 5 | - | - | - |
R. Chew et al. [15] | 2020 | 4 | 0.86 | 0.86 | - |
I. Roldan et al. [48] | 2020 | 4 | - | 0.9948 | - |
Y. Liao et al. [49] | 2020 | 3 | - | 0.978 | - |
Y. Wang et al. [50] | 2020 | 1 | - | - | - |
I. Bozcan et al. [51] | 2020 | 1 | 0.9907 | - | - |
L. Messina et al. [52] | 2020 | 1 | - | - | - |
B. Li et al. [53] | 2020 | 0 | - | 0.9 | - |
J. Tan et al. [54] | 2020 | 0 | 0.8886 | 0.9 | - |
M. Gao et al. [55] | 2020 | 0 | - | - | - |
R. Yang et al. [56] | 2020 | 0 | - | 0.96 | - |
K. Menfoukh et al. [57] | 2020 | 0 | 0.85 | 0.91 | - |
V. Sadhu et al. [58] | 2020 | 0 | - | - | - |
R. Raman et al. [59] | 2020 | 0 | - | - | - |
B. Hosseiny et al. [60] | 2020 | 0 | 0.855 | 0.909 | - |
R. I. Marasigan et al. [61] | 2020 | 0 | - | - | - |
M. Irfan et al. [47] | 2020 | 0 | - | - | - |
V. A. Bakale et al. [62] | 2020 | 0 | - | - | 92 ms |
L. O. Rojas-Perez et al. [63] | 2020 | 0 | - | - | 25.4 ms |
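The F1 scores reported throughout these appendix tables are the harmonic mean of precision and recall (see the F-measure reference by Sasaki [8]). For reference, a minimal computation; this is an illustrative helper, not code from any surveyed system:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A hypothetical detector with precision 0.9 and recall 0.8:
print(round(f1_score(0.9, 0.8), 4))  # 0.8471
```

Note that F1 and accuracy measure different things, which is why some papers above report one, the other, or both.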
Appendix B. Research Pool—2019 Section
Paper | Year | Citations | F1 Score | Accuracy | Efficiency |
---|---|---|---|---|---|
D. Wofk et al. [16] | 2019 | 55 | - | 0.771 | 37 ms |
E. Kaufmann et al. [17] | 2019 | 50 | - | - | 100 ms |
D. Palossi et al. [7] | 2019 | 43 | 0.821 | 0.891 | 55.5 ms |
Hossain et al. [18] | 2019 | 19 | - | - | - |
Y. Y. Munaye et al. [19] | 2019 | 11 | - | 0.98 | - |
S. Islam et al. [20] | 2019 | 9 | - | 0.8 | - |
A. Alshehri et al. [21] | 2019 | 8 | - | 0.8017 | - |
M. A. Akhloufi et al. [64] | 2019 | 8 | - | - | 33 ms |
A. G. Perera et al. [65] | 2019 | 6 | - | 0.7592 | - |
X. Han et al. [66] | 2019 | 4 | - | 0.88 | - |
D. R. Hartawan et al. [67] | 2019 | 4 | - | 1 | 330 ms |
G. Muñoz et al. [68] | 2019 | 4 | - | - | - |
Mohammadi et al. [69] | 2019 | 4 | - | - | - |
A. Garcia et al. [70] | 2019 | 3 | - | 0.98 | 45 ms |
S. Shin et al. [71] | 2019 | 3 | - | - | - |
S. Y. Shin et al. [71] | 2019 | 2 | - | - | - |
A. Garcia et al. [72] | 2019 | 1 | - | - | - |
L. Liu et al. [73] | 2019 | 1 | - | - | - |
J. A. Cocoma-Ortega et al. [74] | 2019 | 0 | - | 0.95 | - |
M. T. Matthews et al. [75] | 2019 | 0 | - | - | - |
J. Morais et al. [76] | 2019 | 0 | - | - | - |
A. Garrell et al. [77] | 2019 | 0 | - | 0.7581 | - |
E. Cetin et al. [78] | 2019 | 0 | - | - | - |
Appendix C. Research Pool—2018 Section
Paper | Year | Citations | F1 Score | Accuracy | Efficiency |
---|---|---|---|---|---|
A. Loquercio et al. [22] | 2018 | 158 | 0.901 | 0.954 | 50 ms |
E. Kaufmann et al. [23] | 2018 | 60 | - | - | 100 ms |
O. Csillik et al. [24] | 2018 | 58 | 0.9624 | 0.9624 | - |
S. Jung et al. [25] | 2018 | 57 | - | 0.755 | 34 ms |
A. A. Zhilenkov et al. [26] | 2018 | 23 | - | - | - |
S. Lee et al. [27] | 2018 | 14 | - | - | - |
S. Dionisio-Ortega et al. [28] | 2018 | 14 | - | - | - |
Y. Feng et al. [79] | 2018 | 13 | - | - | - |
N. Mohajerin et al. [80] | 2018 | 13 | - | - | - |
A. Carrio et al. [46] | 2018 | 13 | - | 0.98 | 50 ms |
A. Rodriguez-Ramos et al. [45] | 2018 | 12 | - | 0.7864 | - |
M. Jafari et al. [81] | 2018 | 11 | - | - | - |
M. A. Anwar et al. [14] | 2018 | 11 | - | - | - |
A. Khan et al. [82] | 2018 | 10 | - | 0.78 | - |
Y. Xu et al. [83] | 2018 | 7 | - | - | - |
I. A. Sulistijono et al. [84] | 2018 | 6 | - | 0.841 | 450 ms |
J. Shin et al. [71] | 2018 | 6 | - | - | - |
S. P. Yong et al. [85] | 2018 | 5 | 0.731 | 0.9732 | - |
C. Beleznai et al. [86] | 2018 | 3 | - | - | 50 ms |
H. U. Dike et al. [87] | 2018 | 3 | - | 0.865 | 86.6 ms |
X. Guan et al. [88] | 2018 | 3 | - | - | - |
Y. Liu et al. [73] | 2018 | 3 | - | - | - |
X. Dai et al. [89] | 2018 | 1 | - | - | - |
J. M. S Lagmay et al. [90] | 2018 | 1 | - | - | - |
X. Chen et al. [91] | 2018 | 0 | - | 0.95 | 50 ms |
Appendix D. Research Pool—2017 Section
Paper | Year | Citations | F1 Score | Accuracy | Efficiency |
---|---|---|---|---|---|
D. Gandhi et al. [29] | 2017 | 165 | - | - | - |
D. Falanga et al. [30] | 2017 | 98 | - | 0.8 | 0.24 ms |
K. McGuire et al. [31] | 2017 | 88 | - | - | - |
A. Zeggada et al. [32] | 2017 | 43 | - | 0.827 | 39 ms |
Y. Zhao et al. [33] | 2017 | 31 | - | - | - |
L. Von Stumberg et al. [34] | 2017 | 25 | - | - | - |
P. Moriarty et al. [35] | 2017 | 11 | - | 0.985 | - |
Y. F. Teng et al. [92] | 2017 | 11 | - | - | - |
Y. Zhou et al. [93] | 2017 | 3 | - | - | - |
A. Garcia et al. [94] | 2017 | 3 | - | 0.9 | - |
Y. Choi et al. [95] | 2017 | 1 | - | 0.989 | - |
Y. Zhang et al. [96] | 2017 | 1 | - | 0.83 | - |
S. Andropov et al. [97] | 2017 | 0 | - | - | - |
Appendix E. Research Pool—2016 Section
References
- Giones, F.; Brem, A. From toys to tools: The co-evolution of technological and entrepreneurial developments in the drone industry. Bus. Horiz. 2017, 60, 875–884. [Google Scholar] [CrossRef]
- The Drone Market Report 2020–2025; Technical Report; Drone Industry Insights, 2020.
- IEEE Website. 2021. Available online: https://www.ieee.org/content/ieee-org/en/about/ (accessed on 4 June 2021).
- Aragón, A.M. A measure for the impact of research. Sci. Rep. 2013, 3, 1649. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lehmann, S.; Jackson, A.D.; Lautrup, B.E. Measures for measures. Nature 2006, 444, 1003–1004. [Google Scholar] [CrossRef] [PubMed]
- Society of Automotive Engineers (SAE). J3016B Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE: Warrendale, PA, USA, 2018. [Google Scholar]
- Palossi, D.; Loquercio, A.; Conti, F.; Flamand, E.; Scaramuzza, D.; Benini, L. A 64-mW DNN-Based Visual Navigation Engine for Autonomous Nano-Drones. IEEE Internet Things J. 2019, 6, 8357–8371. [Google Scholar] [CrossRef] [Green Version]
- Sasaki, Y. The Truth of the F-Measure. 2007. Available online: https://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day3/F-measure-YS-26Oct07.pdf (accessed on 4 June 2021).
- Loquercio, A.; Kaufmann, E.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Deep Drone Racing: From Simulation to Reality with Domain Randomization. IEEE Trans. Robot. 2020, 36, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Al-Sharman, M.K.; Zweiri, Y.; Jaradat, M.A.K.; Al-Husari, R.; Gan, D.; Seneviratne, L.D. Deep-learning-based neural network training for state estimation enhancement: Application to attitude estimation. IEEE Trans. Instrum. Meas. 2020, 69, 24–34. [Google Scholar] [CrossRef] [Green Version]
- Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef] [Green Version]
- Shiri, H.; Park, J.; Bennis, M. Remote UAV Online Path Planning via Neural Network-Based Opportunistic Control. IEEE Wirel. Commun. Lett. 2020, 9, 861–865. [Google Scholar] [CrossRef] [Green Version]
- Lee, K.; Gibson, J.; Theodorou, E.A. Aggressive Perception-Aware Navigation Using Deep Optical Flow Dynamics and PixelMPC. IEEE Robot. Autom. Lett. 2020, 5, 1207–1214. [Google Scholar] [CrossRef] [Green Version]
- Anwar, A.; Raychowdhury, A. Autonomous Navigation via Deep Reinforcement Learning for Resource Constraint Edge Nodes Using Transfer Learning. IEEE Access 2020, 8, 26549–26560. [Google Scholar] [CrossRef]
- Chew, R.; Rineer, J.; Beach, R.; O’Neil, M.; Ujeneza, N.; Lapidus, D.; Miano, T.; Hegarty-Craver, M.; Polly, J.; Temple, D.S. Deep Neural Networks and Transfer Learning for Food Crop Identification in UAV Images. Drones 2020, 4, 7. [Google Scholar] [CrossRef] [Green Version]
- Wofk, D.; Ma, F.; Yang, T.J.; Karaman, S.; Sze, V. FastDepth: Fast Monocular Depth Estimation on Embedded Systems. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6101–6108. [Google Scholar] [CrossRef] [Green Version]
- Kaufmann, E.; Gehrig, M.; Foehn, P.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Beauty and the beast: Optimal methods meet learning for drone racing. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; Volume 2019, pp. 690–696. [Google Scholar] [CrossRef] [Green Version]
- Hossain, S.; Lee, D.-J. Deep Learning-Based Real-Time Multiple-Object Detection and Tracking from Aerial Imagery via a Flying Robot with GPU-Based Embedded Devices. Sensors 2019, 19, 3371. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Munaye, Y.Y.; Lin, H.P.; Adege, A.B.; Tarekegn, G.B. Uav positioning for throughput maximization using deep learning approaches. Sensors 2019, 19, 2775. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Islam, S.; Razi, A. A Path Planning Algorithm for Collective Monitoring Using Autonomous Drones. In Proceedings of the 2019 53rd Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 20–22 March 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Alshehri, A.; Member, S.; Bazi, Y.; Member, S. Deep Attention Neural Network for Multi-Label Classification in Unmanned Aerial Vehicle Imagery. IEEE Access 2019, 7, 119873–119880. [Google Scholar] [CrossRef]
- Loquercio, A.; Maqueda, A.I.; Del-Blanco, C.R.; Scaramuzza, D. DroNet: Learning to Fly by Driving. IEEE Robot. Autom. Lett. 2018, 3, 1088–1095. [Google Scholar] [CrossRef]
- Kaufmann, E.; Loquercio, A.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Deep Drone Racing: Learning Agile Flight in Dynamic Environments. In Proceedings of the Conference on Robotic Learning, Zürich, Switzerland, 29–31 October 2018; pp. 1–13. [Google Scholar]
- Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
- Jung, S.; Hwang, S.; Shin, H.; Shim, D.H. Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using Deep Learning. IEEE Robot. Autom. Lett. 2018, 3, 2539–2544. [Google Scholar] [CrossRef]
- Zhilenkov, A.A.; Epifantsev, I.R. System of autonomous navigation of the drone in difficult conditions of the forest trails. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering, ElConRus 2018, Moscow and St. Petersburg, Russia, 29 January–1 February 2018; Volume 2018, pp. 1036–1039. [Google Scholar] [CrossRef]
- Lee, S.; Shim, T.; Kim, S.; Park, J.; Hong, K.; Bang, H. Vision-Based Autonomous Landing of a Multi-Copter Unmanned Aerial Vehicle using Reinforcement Learning. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems, ICUAS 2018, Dallas, TX, USA, 12–15 June 2018; pp. 108–114. [Google Scholar] [CrossRef]
- Dionisio-Ortega, S.; Rojas-Perez, L.O.; Martinez-Carranza, J.; Cruz-Vega, I. A deep learning approach towards autonomous flight in forest environments. In Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 139–144. [Google Scholar] [CrossRef]
- Gandhi, D.; Pinto, L.; Gupta, A. Learning to fly by crashing. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; Volume 2017, pp. 3948–3955. [Google Scholar] [CrossRef]
- Falanga, D.; Mueggler, E.; Faessler, M.; Scaramuzza, D. Aggressive quadrotor flight through narrow gaps with onboard sensing and computing using active vision. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017. [Google Scholar] [CrossRef] [Green Version]
- McGuire, K.; de Croon, G.; De Wagter, C.; Tuyls, K.; Kappen, H. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone. IEEE Robot. Autom. Lett. 2017, 2, 1070–1076. [Google Scholar] [CrossRef] [Green Version]
- Zeggada, A.; Melgani, F.; Bazi, Y. A Deep Learning Approach to UAV Image Multilabeling. IEEE Geosci. Remote Sens. Lett. 2017, 14, 694–698. [Google Scholar] [CrossRef]
- Zhao, Y.; Zheng, Z.; Zhang, X.; Liu, Y. Q learning algorithm based UAV path learning and obstacle avoidence approach. In Proceedings of the Chinese Control Conference, CCC, Dalian, China, 26–28 July 2017; pp. 3397–3402. [Google Scholar] [CrossRef]
- Von Stumberg, L.; Usenko, V.; Engel, J.; Stuckler, J.; Cremers, D. From monocular SLAM to autonomous drone exploration. In Proceedings of the 2017 European Conference on Mobile Robots, ECMR 2017, Paris, France, 6–8 September 2017. [Google Scholar] [CrossRef] [Green Version]
- Moriarty, P.; Sheehy, R.; Doody, P. Neural networks to aid the autonomous landing of a UAV on a ship. In Proceedings of the 2017 28th Irish Signals and Systems Conference, ISSC 2017, Killarney, Ireland, 20–21 June 2017; pp. 6–9. [Google Scholar] [CrossRef]
- Giusti, A.; Guzzi, J.; Ciresan, D.C.; He, F.L.; Rodriguez, J.P.; Fontana, F.; Faessler, M.; Forster, C.; Schmidhuber, J.; Caro, G.D.; et al. A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots. IEEE Robot. Autom. Lett. 2016, 1, 661–667. [Google Scholar] [CrossRef] [Green Version]
- Zhang, T.; Kahn, G.; Levine, S.; Abbeel, P. Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; Volume 2016. [Google Scholar] [CrossRef] [Green Version]
- Daftry, S.; Zeng, S.; Khan, A.; Dey, D.; Melik-Barkhudarov, N.; Bagnell, J.A.; Hebert, M. Robust Monocular Flight in Cluttered Outdoor Environments. arXiv 2016. [Google Scholar]
- … for drone applications. In Proceedings of the 2019 IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2019, Turin, Italy, 19–21 June 2019; pp. 249–254. [Google Scholar] [CrossRef]
- Garrell, A.; Coll, C.; Alquezar, R.; Sanfeliu, A. Teaching a Drone to Accompany a Person from Demonstrations using Non-Linear ASFM. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macau, China, 3–8 November 2019; pp. 1985–1991. [Google Scholar] [CrossRef] [Green Version]
- Cetin, E.; Barrado, C.; Munoz, G.; MacIas, M.; Pastor, E. Drone Navigation and Avoidance of Obstacles Through Deep Reinforcement Learning. In Proceedings of the AIAA/IEEE Digital Avionics Systems Conference, San Diego, CA, USA, 8–12 September 2019; Volume 2019. [Google Scholar] [CrossRef]
- Feng, Y.; Zhang, C.; Baek, S.; Rawashdeh, S.; Mohammadi, A. Autonomous Landing of a UAV on a Moving Platform Using Model Predictive Control. Drones 2018, 2, 34. [Google Scholar] [CrossRef] [Green Version]
- Mohajerin, N.; Mozifian, M.; Waslander, S. Deep Learning a Quadrotor Dynamic Model for Multi-Step Prediction. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2454–2459. [Google Scholar] [CrossRef]
- Jafari, M.; Xu, H. Intelligent Control for Unmanned Aerial Systems with System Uncertainties and Disturbances Using Artificial Neural Network. Drones 2018, 2, 30. [Google Scholar] [CrossRef] [Green Version]
- Khan, A.; Hebert, M. Learning safe recovery trajectories with deep neural networks for unmanned aerial vehicles. In Proceedings of the IEEE Aerospace Conference Proceedings, Big Sky, MT, USA, 3–10 March 2018; Volume 2018, pp. 1–9. [Google Scholar] [CrossRef]
- Xu, Y.; Liu, Z.; Wang, X. Monocular vision based autonomous landing of quadrotor through deep reinforcement learning. In Proceedings of the Chinese Control Conference, CCC, Wuhan, China, 25–27 July 2018; Volume 2018, pp. 10014–10019. [Google Scholar] [CrossRef]
- Sulistijono, I.A.; Imansyah, T.; Muhajir, M.; Sutoyo, E.; Anwar, M.K.; Satriyanto, E.; Basuki, A.; Risnumawan, A. Implementation of Victims Detection Framework on Post Disaster Scenario. In Proceedings of the 2018 International Electronics Symposium on Engineering Technology and Applications, IES-ETA 2018, Bali, Indonesia, 29–30 October 2018; pp. 253–259. [Google Scholar] [CrossRef]
- Yong, S.P.; Yeong, Y.C. Human Object Detection in Forest with Deep Learning based on Drone’s Vision. In Proceedings of the 2018 4th International Conference on Computer and Information Sciences: Revolutionising Digital Landscape for Sustainable Smart Society, ICCOINS 2018, Kuala Lumpur, Malaysia, 13–14 August 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Beleznai, C.; Steininger, D.; Croonen, G.; Broneder, E. Multi-modal human detection from aerial views by fast shape-aware clustering and classification. In Proceedings of the 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing, PRRS 2018, Beijing, China, 19–20 August 2018. [Google Scholar] [CrossRef]
- Dike, H.U.; Wu, Q.; Zhou, Y.; Liang, G. Unmanned Aerial Vehicle (UAV) Based Running Person Detection from a Real-Time Moving Camera. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 2273–2278. [Google Scholar] [CrossRef]
- Guan, X.; Cai, C. A new integrated navigation system for the indoor unmanned aerial vehicles (UAVs) based on the neural network predictive compensation. In Proceedings of the 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation, YAC 2018, Nanjing, China, 18–20 May 2018; pp. 575–580. [Google Scholar] [CrossRef]
- Dai, X.; Zhou, Y.; Meng, S.; Wu, Q. Unsupervised Feature Fusion Combined with Neural Network Applied to UAV Attitude Estimation. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 874–879. [Google Scholar] [CrossRef]
- Lagmay, J.M.S.; Jed, C.; Leyba, L.; Santiago, A.T.; Tumabotabo, L.B.; Limjoco, W.J.R.; Michael, C.; Tiglao, N. Automated Indoor Drone Flight with Collision Prevention. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 1762–1767. [Google Scholar] [CrossRef]
- Chen, X.; Lin, F.; Abdul Hamid, M.R.; Teo, S.H.; Phang, S.K. Real-Time Landing Spot Detection and Pose Estimation on Thermal Images Using Convolutional Neural Networks. In Proceedings of the IEEE International Conference on Control and Automation, ICCA, Anchorage, AK, USA, 12–15 June 2018; Volume 2018, pp. 998–1003. [Google Scholar] [CrossRef]
- Teng, Y.F.; Hu, B.; Liu, Z.W.; Huang, J.; Guan, Z.H. Adaptive neural network control for quadrotor unmanned aerial vehicles. In Proceedings of the 2017 Asian Control Conference, ASCC 2017, Gold Coast, Australia, 17–20 December 2017; Volume 2018, pp. 988–992. [Google Scholar] [CrossRef]
- Zhou, Y.; Wan, J.; Li, Z.; Song, Z. GPS/INS integrated navigation with BP neural network and Kalman filter. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, Macao, 5–8 December 2017; pp. 2515–2520. [Google Scholar] [CrossRef]
- Garcia, A.; Ghose, K. Autonomous indoor navigation of a stock quadcopter with off-board control. In Proceedings of the 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems, RED-UAS 2017, Linköping, Sweden, 3–5 October 2017; pp. 132–137. [Google Scholar] [CrossRef]
- Choi, Y.; Hwang, I.; Oh, S. Wearable gesture control of agile micro quadrotors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Linköping, Sweden, 3–5 October 2017; Volume 2017, pp. 266–271. [Google Scholar] [CrossRef]
- Zhang, Y.; Xiao, X.; Yang, X. Real-Time object detection for 360-degree panoramic image using CNN. In Proceedings of the 2017 International Conference on Virtual Reality and Visualization, ICVRV 2017, Zhengzhou, China, 21–22 October 2017; pp. 18–23. [Google Scholar] [CrossRef]
- Andropov, S.; Guirik, A.; Budko, M.; Budko, M. Synthesis of neurocontroller for multirotor unmanned aerial vehicle based on neuroemulator. In Proceedings of the Conference of Open Innovation Association, FRUCT, St. Petersburg, Russia, 3–7 April 2017; Volume 2017, pp. 20–25. [Google Scholar] [CrossRef]
Paper | Year | Citations | SE | ODe | ODi |
---|---|---|---|---|---|
A. Loquercio et al. [9] | 2020 | 34 | No | No | Yes |
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No |
S. Nezami et al. [11] | 2020 | 8 | No | No | Yes |
H. Shiri et al. [12] | 2020 | 6 | No | No | No |
K. Lee et al. [13] | 2020 | 6 | No | No | No |
A. Anwar et al. [14] | 2020 | 5 | No | No | No |
R. Chew et al. [15] | 2020 | 4 | No | No | Yes |
D. Wofk et al. [16] | 2019 | 55 | Yes | No | No |
E. Kaufmann et al. [17] | 2019 | 50 | No | No | Yes |
D. Palossi et al. [7] | 2019 | 43 | Yes | Yes | No |
Hossain et al. [18] | 2019 | 19 | No | No | Yes |
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | Yes |
S. Islam et al. [20] | 2019 | 9 | No | No | No |
A. Alshehri et al. [21] | 2019 | 8 | No | No | Yes |
A. Loquercio et al. [22] | 2018 | 158 | Yes | Yes | No |
E. Kaufmann et al. [23] | 2018 | 60 | No | No | Yes |
O. Csillik et al. [24] | 2018 | 58 | No | No | Yes |
S. Jung et al. [25] | 2018 | 57 | No | No | Yes |
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | No | No |
S. Lee et al. [27] | 2018 | 14 | No | No | Yes |
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | Yes | No |
D. Gandhi et al. [29] | 2017 | 165 | No | Yes | No |
D. Falanga et al. [30] | 2017 | 98 | No | No | No |
K. McGuire et al. [31] | 2017 | 88 | Yes | No | No |
A. Zeggada et al. [32] | 2017 | 43 | No | No | Yes |
Y. Zhao et al. [33] | 2017 | 31 | No | No | No |
L. Von Stumberg et al. [34] | 2017 | 25 | Yes | Yes | No |
P. Moriarty et al. [35] | 2017 | 11 | No | No | Yes |
A. Giusti et al. [36] | 2016 | 424 | No | No | Yes |
T. Zhang et al. [37] | 2016 | 263 | No | No | No |
S. Daftry et al. [38] | 2016 | 26 | Yes | No | No |
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No |
Paper | Year | Citations | AM | CA | ATL |
---|---|---|---|---|---|
A. Loquercio et al. [9] | 2020 | 34 | Yes | Yes | No |
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | Yes | No |
S. Nezami et al. [11] | 2020 | 8 | No | No | No |
H. Shiri et al. [12] | 2020 | 6 | No | No | No |
K. Lee et al. [13] | 2020 | 6 | Yes | Yes | No |
A. Anwar et al. [14] | 2020 | 5 | Yes | Yes | No |
R. Chew et al. [15] | 2020 | 4 | No | No | No |
D. Wofk et al. [16] | 2019 | 55 | No | No | No |
E. Kaufmann et al. [17] | 2019 | 50 | Yes | Yes | No |
D. Palossi et al. [7] | 2019 | 43 | Yes | Yes | No |
Hossain et al. [18] | 2019 | 19 | No | No | No |
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No |
S. Islam et al. [20] | 2019 | 9 | No | Yes | No |
A. Alshehri et al. [21] | 2019 | 8 | No | No | No |
A. Loquercio et al. [22] | 2018 | 158 | Yes | Yes | No |
E. Kaufmann et al. [23] | 2018 | 60 | Yes | Yes | No |
O. Csillik et al. [24] | 2018 | 58 | No | No | No |
S. Jung et al. [25] | 2018 | 57 | Yes | No | No |
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | Yes | No |
S. Lee et al. [27] | 2018 | 14 | No | No | Yes |
S. Dionisio-Ortega et al. [28] | 2018 | 14 | Yes | Yes | No |
D. Gandhi et al. [29] | 2017 | 165 | Yes | Yes | No |
D. Falanga et al. [30] | 2017 | 98 | Yes | Yes | No |
K. McGuire et al. [31] | 2017 | 88 | Yes | Yes | No |
A. Zeggada et al. [32] | 2017 | 43 | No | No | No |
Y. Zhao et al. [33] | 2017 | 31 | No | No | No |
L. Von Stumberg et al. [34] | 2017 | 25 | No | No | No |
P. Moriarty et al. [35] | 2017 | 11 | No | No | Yes |
A. Giusti et al. [36] | 2016 | 424 | Yes | Yes | No |
T. Zhang et al. [37] | 2016 | 263 | Yes | Yes | No |
S. Daftry et al. [38] | 2016 | 26 | Yes | Yes | No |
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No |
Paper | Year | Citations | PG | ED | NPM |
---|---|---|---|---|---|
A. Loquercio et al. [9] | 2020 | 34 | No | No | Yes |
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No |
S. Nezami et al. [11] | 2020 | 8 | No | Yes | No |
H. Shiri et al. [12] | 2020 | 6 | Yes | No | No |
K. Lee et al. [13] | 2020 | 6 | Yes | No | Yes |
A. Anwar et al. [14] | 2020 | 5 | No | No | No |
R. Chew et al. [15] | 2020 | 4 | No | Yes | No |
D. Wofk et al. [16] | 2019 | 55 | No | No | No |
E. Kaufmann et al. [17] | 2019 | 50 | Yes | No | Yes |
D. Palossi et al. [7] | 2019 | 43 | No | No | No |
Hossain et al. [18] | 2019 | 19 | No | No | No |
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No |
S. Islam et al. [20] | 2019 | 9 | Yes | No | No |
A. Alshehri et al. [21] | 2019 | 8 | No | No | No |
A. Loquercio et al. [22] | 2018 | 158 | No | No | No |
E. Kaufmann et al. [23] | 2018 | 60 | No | No | No |
O. Csillik et al. [24] | 2018 | 58 | No | Yes | No |
S. Jung et al. [25] | 2018 | 57 | No | No | Yes |
A. A. Zhilenkov et al. [26] | 2018 | 23 | No | Yes | No |
S. Lee et al. [27] | 2018 | 14 | No | No | Yes |
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | Yes | No |
D. Gandhi et al. [29] | 2017 | 165 | No | No | No |
D. Falanga et al. [30] | 2017 | 98 | Yes | No | Yes |
K. McGuire et al. [31] | 2017 | 88 | No | No | No |
A. Zeggada et al. [32] | 2017 | 43 | No | No | No |
Y. Zhao et al. [33] | 2017 | 31 | Yes | No | No |
L. Von Stumberg et al. [34] | 2017 | 25 | No | No | No |
P. Moriarty et al. [35] | 2017 | 11 | No | Yes | Yes |
A. Giusti et al. [36] | 2016 | 424 | No | No | No |
T. Zhang et al. [37] | 2016 | 263 | No | No | No |
S. Daftry et al. [38] | 2016 | 26 | No | No | No |
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | Yes | No | Yes |
Paper | Year | Citations | OBO | ES | SI |
---|---|---|---|---|---|
A. Loquercio et al. [9] | 2020 | 34 | Yes | No | Yes |
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No |
S. Nezami et al. [11] | 2020 | 8 | No | Yes | No |
H. Shiri et al. [12] | 2020 | 6 | No | Yes | No |
K. Lee et al. [13] | 2020 | 6 | No | No | No |
A. Anwar et al. [14] | 2020 | 5 | No | No | No |
R. Chew et al. [15] | 2020 | 4 | No | No | No |
D. Wofk et al. [16] | 2019 | 55 | Yes | No | Yes |
E. Kaufmann et al. [17] | 2019 | 50 | Yes | No | Yes |
D. Palossi et al. [7] | 2019 | 43 | Yes | No | Yes |
Hossain et al. [18] | 2019 | 19 | Yes | No | Yes |
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No |
S. Islam et al. [20] | 2019 | 9 | No | Yes | No |
A. Alshehri et al. [21] | 2019 | 8 | No | No | No |
A. Loquercio et al. [22] | 2018 | 158 | No | No | No |
E. Kaufmann et al. [23] | 2018 | 60 | Yes | No | Yes |
O. Csillik et al. [24] | 2018 | 58 | No | No | No |
S. Jung et al. [25] | 2018 | 57 | Yes | No | Yes |
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | No | Yes |
S. Lee et al. [27] | 2018 | 14 | Yes | No | Yes |
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | No | No |
D. Gandhi et al. [29] | 2017 | 165 | No | No | No |
D. Falanga et al. [30] | 2017 | 98 | Yes | Yes | Yes |
K. McGuire et al. [31] | 2017 | 88 | Yes | Yes | Yes |
A. Zeggada et al. [32] | 2017 | 43 | No | No | No |
Y. Zhao et al. [33] | 2017 | 31 | No | Yes | No |
L. Von Stumberg et al. [34] | 2017 | 25 | No | Yes | No |
P. Moriarty et al. [35] | 2017 | 11 | No | No | No |
A. Giusti et al. [36] | 2016 | 424 | No | No | No |
T. Zhang et al. [37] | 2016 | 263 | Yes | No | No |
S. Daftry et al. [38] | 2016 | 26 | No | Yes | No |
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, T.; Mckeever, S.; Courtney, J. Flying Free: A Research Overview of Deep Learning in Drone Navigation Autonomy. Drones 2021, 5, 52. https://doi.org/10.3390/drones5020052