
Acquire and Perceive: Novel Approaches for Imaging-Based Plant Phenotyping

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 26903

Special Issue Editors


Dr. Mario Valerio Giuffrida
Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, UK
Interests: computer vision; deep learning; unsupervised domain adaptation; plant image analysis

Dr. Iftach Klapp
Guest Editor
Department of Sensing, Information and Mechanization Engineering, Institute of Agricultural Engineering, Agricultural Research Organization, the Volcani Center, P.O.B. 6, Bet Dagan 50250, Israel
Interests: environmental optical acquisition for agricultural tasks; computational optics; optical design; inverse problems; learning

Dr. Aharon Bar-Hillel
Guest Editor
Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel
Interests: computer vision; machine learning; plant phenotyping; deep learning aspects: explainability, efficient inference, modular networks

Special Issue Information

Dear Colleagues,

Plants are the fundamental source of food for people, livestock, and all living species on Earth. The growth of the human population, expected to reach 10 billion by 2050, requires a 50% increase in agricultural production. Crop optimization is approached by multiple means, including the automation of agricultural operations and improved plant breeding processes, creating an urgent need for plant trait analysis and phenotyping. However, manual plant analysis is tedious, often destructive for the plant, and does not scale. With improved sensors and recent advances in machine learning (especially deep learning), imaging-based plant analysis provides a promising alternative with a growing impact.

This Special Issue invites cutting-edge contributions on all aspects of imaging-based plant analysis. Image acquisition is a topic of particular interest: agricultural monitoring is performed under complex and changing illumination conditions, often with modalities beyond RGB, such as depth or hyperspectral data. Papers considering illumination conditions, illumination design, and the joint design of illumination, acquisition algorithms, sensor fusion, or image processing are encouraged. Another topic of interest is image perception: computer vision and machine learning techniques applied to plant analysis from images. Novel phenotyping tasks, as well as methods for improved accuracy and/or robustness in existing tasks, are welcome. Additional topics of interest include (but are not limited to) fine-grained phenotyping, flexibility and task transfer, and phenotype tracking in time series. Furthermore, authors wishing to discuss a topic of particular interest and to outline its next steps and challenges are welcome to submit review/survey papers.

Dr. Mario Valerio Giuffrida
Dr. Iftach Klapp
Dr. Aharon Bar-Hillel
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Plant phenotyping and acquisition
  • Computer vision
  • Machine learning/deep learning
  • Precision agriculture
  • Multi-modal imaging and sensor fusion
  • Joint illumination and image processing design
  • Acquisition and phenotype tracking in time series

Published Papers (6 papers)


Research

19 pages, 8412 KiB  
Article
Optimization of UAV-Based Imaging and Image Processing Orthomosaic and Point Cloud Approaches for Estimating Biomass in a Forage Crop
by Worasit Sangjan, Rebecca J. McGee and Sindhuja Sankaran
Remote Sens. 2022, 14(10), 2396; https://doi.org/10.3390/rs14102396 - 17 May 2022
Cited by 15 | Viewed by 4143
Abstract
Forage and field peas provide essential nutrients for livestock diets, and high-quality field peas can influence livestock health and reduce greenhouse gas emissions. Above-ground biomass (AGBM) is one of the vital traits and the primary component of yield in forage pea breeding programs. However, a standard method of AGBM measurement is a destructive and labor-intensive process. This study utilized an unmanned aerial vehicle (UAV) equipped with a true-color RGB and a five-band multispectral camera to estimate the AGBM of winter pea in three breeding trials (two seed yields and one cover crop). Three processing techniques—vegetation index (VI), digital surface model (DSM), and 3D reconstruction model from point clouds—were used to extract the digital traits (height and volume) associated with AGBM. The digital traits were compared with the ground reference data (measured plant height and harvested AGBM). The results showed that the canopy volume estimated from the 3D model (alpha shape, α = 1.5) developed from UAV-based RGB imagery’s point clouds provided consistent and high correlation with fresh AGBM (r = 0.78–0.81, p < 0.001) and dry AGBM (r = 0.70–0.81, p < 0.001), compared with other techniques across the three trials. The DSM-based approach (height at 95th percentile) had consistent and high correlation (r = 0.71–0.95, p < 0.001) with canopy height estimation. Using the UAV imagery, the proposed approaches demonstrated the potential for estimating the crop AGBM across winter pea breeding trials. Full article
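The plot-level agreement reported above (e.g., r = 0.78–0.81 between estimated canopy volume and fresh AGBM) is a plain Pearson correlation between a digital trait and the ground reference. A minimal sketch of that computation, using hypothetical plot values rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two trait vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical plot-level values: 3D-model canopy volume vs. harvested AGBM
volume = [0.8, 1.1, 1.6, 2.0, 2.4]   # m^3, from the point-cloud model
agbm   = [1.9, 2.6, 3.4, 4.4, 5.1]   # kg fresh weight per plot

r = pearson_r(volume, agbm)
```

The same correlation would then be reported per trial and per trait (fresh vs. dry AGBM) to compare the VI, DSM, and 3D-model pipelines.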

18 pages, 5497 KiB  
Article
A Multi-Source Data Fusion Decision-Making Method for Disease and Pest Detection of Grape Foliage Based on ShuffleNet V2
by Rui Yang, Xiangyu Lu, Jing Huang, Jun Zhou, Jie Jiao, Yufei Liu, Fei Liu, Baofeng Su and Peiwen Gu
Remote Sens. 2021, 13(24), 5102; https://doi.org/10.3390/rs13245102 - 15 Dec 2021
Cited by 25 | Viewed by 3585
Abstract
Disease and pest detection of grape foliage is essential for grape yield and quality. RGB images (RGBI), multispectral images (MSI), and thermal infrared images (TIRI) are widely used in plant health detection. In this study, we collected three types of grape foliage images covering six common classes (anthracnose, downy mildew, leafhopper, mites, viral disease, and healthy) in the field. ShuffleNet V2 was used to build the detection models. Based on the accuracy of the RGBI, MSI, TIRI, and multi-source data concatenation (MDC) models, a multi-source data fusion (MDF) decision-making method was proposed to improve the detection performance for grape foliage, aiming to enhance the decision-making for RGBI of grape foliage by fusing the MSI and TIRI. The results showed that 40% of the incorrect detection outputs were rectified using the MDF decision-making method. The overall accuracy of the MDF model was 96.05%, an improvement of 2.64%, 13.65%, and 27.79% over the RGBI, MSI, and TIRI models using label smoothing, respectively. In addition, the MDF model was based on a lightweight network with 3.785 M total parameters and 0.362 G multiply-accumulate operations, making it highly portable and easy to apply. Full article
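The fusion idea, letting the auxiliary modalities overrule a low-confidence RGB prediction, can be illustrated with a toy decision rule. This is a hypothetical sketch (function names and the confidence threshold are invented for illustration), not the paper's exact MDF method:

```python
import numpy as np

def softmax(z):
    """Convert raw logits to class probabilities."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse_decision(rgb_logits, msi_logits, tiri_logits, conf_thresh=0.8):
    """Keep the RGB prediction when it is confident; otherwise average
    the class probabilities of all three modalities and re-decide."""
    p_rgb = softmax(np.asarray(rgb_logits, float))
    if p_rgb.max() >= conf_thresh:
        return int(p_rgb.argmax())
    p_all = (p_rgb
             + softmax(np.asarray(msi_logits, float))
             + softmax(np.asarray(tiri_logits, float))) / 3.0
    return int(p_all.argmax())

# Confident RGB output stands; an uncertain one is corrected by MSI/TIRI
kept      = fuse_decision([5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 5.0, 0.0])
corrected = fuse_decision([0.1, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 5.0, 0.0])
```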

21 pages, 2493 KiB  
Article
Parts-per-Object Count in Agricultural Images: Solving Phenotyping Problems via a Single Deep Neural Network
by Faina Khoroshevsky, Stanislav Khoroshevsky and Aharon Bar-Hillel
Remote Sens. 2021, 13(13), 2496; https://doi.org/10.3390/rs13132496 - 26 Jun 2021
Cited by 16 | Viewed by 3274
Abstract
Solving many phenotyping problems involves not only automatic detection of objects in an image, but also counting the number of parts per object. We propose a solution in the form of a single deep network, tested for three agricultural datasets pertaining to bananas-per-bunch, spikelets-per-wheat-spike, and berries-per-grape-cluster. The suggested network incorporates object detection, object resizing, and part counting as modules in a single deep network, with several variants tested. The detection module is based on a Retina-Net architecture, whereas for the counting modules, two different architectures are examined: the first based on direct regression of the predicted count, and the other on explicit parts detection and counting. The results are promising, with the mean relative deviation between estimated and visible part count in the range of 9.2% to 11.5%. Further inference of count-based yield related statistics is considered. For banana bunches, the actual banana count (including occluded bananas) is inferred from the count of visible bananas. For spikelets-per-wheat-spike, robust estimation methods are employed to get the average spikelet count across the field, which is an effective yield estimator. Full article
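The evaluation metric quoted above, the mean relative deviation between estimated and visible part counts, is straightforward to compute. A minimal sketch with made-up counts (not the paper's data):

```python
def mean_relative_deviation(estimated, actual):
    """Mean of |estimate - truth| / truth over all objects, as a fraction."""
    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)

# Hypothetical visible-part counts for five grape clusters
truth    = [100, 120, 80, 150, 90]
estimate = [ 92, 130, 85, 140, 95]

mrd = mean_relative_deviation(estimate, truth)  # fraction; x100 for percent
```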

15 pages, 7107 KiB  
Article
Novel 3D Imaging Systems for High-Throughput Phenotyping of Plants
by Tian Gao, Feiyu Zhu, Puneet Paul, Jaspreet Sandhu, Henry Akrofi Doku, Jianxin Sun, Yu Pan, Paul Staswick, Harkamal Walia and Hongfeng Yu
Remote Sens. 2021, 13(11), 2113; https://doi.org/10.3390/rs13112113 - 27 May 2021
Cited by 18 | Viewed by 4123
Abstract
The use of 3D plant models for high-throughput phenotyping is increasingly becoming a preferred method for many plant science researchers. Numerous camera-based imaging systems and reconstruction algorithms have been developed for the 3D reconstruction of plants. However, it is still challenging to build an imaging system with high-quality results at a low cost. Useful comparative information on existing imaging systems and their improvements is also limited, making it challenging for researchers to make data-based selections. The objective of this study is to explore possible solutions to these issues. We introduce two novel systems for plants of various sizes, as well as a pipeline to generate high-quality 3D point clouds and meshes. The higher accuracy and efficiency of the proposed systems make them a potentially valuable tool for enhancing high-throughput phenotyping by integrating 3D traits for increased resolution and measuring traits that are not amenable to 2D imaging approaches. The study shows that the phenotypic traits derived from the 3D models are highly correlated with manually measured phenotypic traits (R2 > 0.91). Moreover, we present a systematic analysis of different settings of the imaging systems and a comparison with the traditional system, which provide recommendations for plant scientists to improve the accuracy of 3D reconstruction. In summary, our proposed imaging systems are suggested for the 3D reconstruction of plants. Moreover, the analysis of the different settings in this paper can be used for designing new customized imaging systems and improving their accuracy. Full article

22 pages, 3027 KiB  
Article
Visual Growth Tracking for Automated Leaf Stage Monitoring Based on Image Sequence Analysis
by Srinidhi Bashyam, Sruti Das Choudhury, Ashok Samal and Tala Awada
Remote Sens. 2021, 13(5), 961; https://doi.org/10.3390/rs13050961 - 4 Mar 2021
Cited by 6 | Viewed by 3611
Abstract
In this paper, we define a new problem domain, called visual growth tracking, to track different parts of an object that grow non-uniformly over space and time for application in image-based plant phenotyping. The paper introduces a novel method to reliably detect and track individual leaves of a maize plant based on a graph theoretic approach for automated leaf stage monitoring. The method has four phases: optimal view selection, plant architecture determination, leaf tracking, and generation of a leaf status report. The method accepts an image sequence of a plant as the input and automatically generates a leaf status report containing the phenotypes, which are crucial in the understanding of a plant’s growth, i.e., the emergence timing of each leaf, total number of leaves present at any time, the day on which a particular leaf ceased to grow, and the length and relative growth rate of individual leaves. Based on experimental study, three types of leaf intersections are identified, i.e., tip-contact, tangential-contact, and crossover, which pose challenges to accurate leaf tracking in the late vegetative stage. Thus, we introduce a novel curve tracing approach based on an angular consistency check to address the challenges due to intersecting leaves for improved performance. The proposed method shows high accuracy in detecting leaves and tracking them through the vegetative stages of maize plants based on experimental evaluation on a publicly available benchmark dataset. Full article
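The angular consistency check can be pictured as choosing, at a leaf intersection, the continuation whose direction deviates least from the incoming segment. A hypothetical sketch of that selection step (not the authors' implementation; point values are invented):

```python
import math

def direction(p, q):
    """Angle (radians) of the segment from point p to point q."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def most_consistent(prev_pt, cur_pt, candidates):
    """Pick the candidate next point whose turn angle relative to the
    incoming segment is smallest (the angular consistency check)."""
    incoming = direction(prev_pt, cur_pt)
    def turn(c):
        d = direction(cur_pt, c) - incoming
        return abs(math.atan2(math.sin(d), math.cos(d)))  # wrap to [-pi, pi]
    return min(candidates, key=turn)

# At a crossover point, the tracer continues along the straighter branch
nxt = most_consistent((0, 0), (1, 0), [(2, 0.1), (1, 1), (0.5, -1)])
```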

14 pages, 2164 KiB  
Article
Deep Learning in Hyperspectral Image Reconstruction from Single RGB Images—A Case Study on Tomato Quality Parameters
by Jiangsan Zhao, Dmitry Kechasov, Boris Rewald, Gernot Bodner, Michel Verheul, Nicholas Clarke and Jihong Liu Clarke
Remote Sens. 2020, 12(19), 3258; https://doi.org/10.3390/rs12193258 - 7 Oct 2020
Cited by 26 | Viewed by 6239
Abstract
Hyperspectral imaging has many applications. However, high device costs and low hyperspectral image resolution are major obstacles limiting its wider application in agriculture and other fields. Hyperspectral image reconstruction from a single RGB image addresses both of these problems. The robust HSCNN-R model, with a mean relative absolute error loss function and evaluated by the mean relative absolute error metric, was selected through permutation tests from models with combinations of loss functions and evaluation metrics, using tomato as a case study. Hyperspectral images were subsequently reconstructed from single tomato RGB images taken by a smartphone camera. The reconstructed images were used to predict tomato quality properties such as the ratio of soluble solid content to total titratable acidity and the normalized anthocyanin index. Both predicted parameters showed very good agreement with the corresponding “ground truth” values and high significance in an F test. This study showed the suitability of hyperspectral image reconstruction from single RGB images for fruit quality control purposes, underpinning the potential of the technology—recovering hyperspectral properties at high resolution—for real-world, real-time monitoring applications in agriculture and beyond. Full article
