
Deep Learning Techniques Applied in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 15 November 2024

Special Issue Editors


Dr. Mohamed Lamine Mekhalfi
Guest Editor
Technologies of Vision, Digital Industry Center, Fondazione Bruno Kessler, Trento, Italy
Interests: pattern recognition; computer vision; remote sensing

Dr. Mawloud Guermoui
Guest Editor
Unité de Recherche Appliquée en Energies Renouvelables (URAER), Centre de Développement des Energies Renouvelables (CDER), Ghardaia, Algeria
Interests: machine learning; pattern recognition; classification; computer vision; image processing

Special Issue Information

Dear Colleagues,

Remote sensing has become a highly active area of research over the past few years. Unlike in the past, large amounts of data are now accessible, and many applications derive from crossovers of imaging, sensing, machine vision, and artificial intelligence.

In particular, the advent of deep learning has marked the onset of a new era of research in machine vision in general and remote sensing in particular, owing mainly to its unprecedented performance in various applications. Nevertheless, there is always room for improvement. To this end, this Special Issue invites contributions across a wide range of applications that make use of deep learning methodologies applied to remote sensing data. For instance, potential submissions may cover (but are not limited to) the following deep learning-driven topics:

  • Image classification, segmentation, fusion, super-resolution and inpainting;
  • Adversarial techniques;
  • Area reconstruction or removal;
  • UAV image analysis;
  • Change detection;
  • Remote sensing of aquatic environments;
  • Remote sensing in forestry;
  • Object detection/tracking/re-identification;
  • Assessing natural hazards;
  • Image captioning;
  • Visual question answering;
  • Image retrieval (including text to image retrieval and vice versa);
  • Land use/cover;
  • Urban development assessment;
  • Soil analysis (e.g., mineral estimation);
  • Precision farming;
  • Remote sensing datasets;
  • Domain adaptation;
  • Internet of Things, big data;
  • Remote sensing data protection/security;
  • Forecasting (e.g., weather data);
  • Remote sensing for renewable energy assessment (e.g., PV power/solar radiation).

Dr. Mohamed Lamine Mekhalfi
Prof. Dr. Yakoub Bazi
Dr. Edoardo Pasolli
Dr. Mawloud Guermoui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • data analysis
  • modeling
  • machine vision
  • artificial intelligence

Published Papers (9 papers)


Research


19 pages, 43187 KiB  
Article
Large-Scale Land Cover Mapping Framework Based on Prior Product Label Generation: A Case Study of Cambodia
by Hongbo Zhu, Tao Yu, Xiaofei Mi, Jian Yang, Chuanzhao Tian, Peizhuo Liu, Jian Yan, Yuke Meng, Zhenzhao Jiang and Zhigao Ma
Remote Sens. 2024, 16(13), 2443; https://doi.org/10.3390/rs16132443 - 3 Jul 2024
Abstract
Large-scale land cover mapping (LLCM) based on deep learning models necessitates a substantial number of high-precision sample datasets. However, the limited availability of such datasets poses challenges in regularly updating land cover products. A commonly referenced method involves utilizing prior products (PPs) as labels to achieve up-to-date land cover mapping. Nonetheless, the accuracy of PPs at the regional level remains uncertain, and the remote sensing image (RSI) corresponding to the product is not publicly accessible. Consequently, a sample dataset constructed through geographic location matching may lack precision. Errors in such datasets are due not only to inherent product discrepancies but can also arise from temporal and scale disparities between the RSI and PPs. To solve these problems, this paper proposes an LLCM framework for generating labels for use with PPs. The framework consists of three main parts. First, for initial label generation, the collected PPs are integrated based on Dempster-Shafer (D-S) evidence theory and initial labels are obtained using the generated trust map. Second, for dynamic label correction, a two-stage training method based on the initial labels is adopted: the correction model is pretrained in the first stage, then a confidence probability (CP) correction module with a dynamic threshold and an NDVI correction module are introduced in the second stage. The initial labels are iteratively corrected while the model is trained using the joint correction loss, with the corrected labels obtained after training. Finally, the classification model is trained using the corrected labels. Using the proposed framework, this study used PPs to produce a 10 m spatial resolution land cover map of Cambodia in 2020. The overall accuracy of the land cover map was 91.68% and the Kappa value was 0.8808. Based on these results, the proposed mapping framework can effectively use PPs to update medium-resolution large-scale land cover datasets, and provides a powerful solution for label acquisition in LLCM projects.
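As context for the label-generation step, the sketch below illustrates Dempster's rule of combination for fusing per-pixel class evidence from two prior products. It is a minimal illustration only, assuming a simplified frame of singleton classes plus one ignorance set; the toy masses and function names are hypothetical and not taken from the paper.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions defined over k singleton classes plus
    the ignorance set Theta (last entry): Dempster's rule with
    normalization by the total non-conflicting mass."""
    k = len(m1) - 1
    combined = np.zeros_like(m1)
    for c in range(k):
        # {c} results from {c}&{c}, {c}&Theta, and Theta&{c}
        combined[c] = m1[c] * m2[c] + m1[c] * m2[-1] + m1[-1] * m2[c]
    combined[-1] = m1[-1] * m2[-1]      # Theta & Theta
    conflict = 1.0 - combined.sum()     # mass on empty intersections
    return combined / (1.0 - conflict)

# Toy example: two prior products voting on one 3-class pixel.
# Entries: [forest, cropland, water, ignorance].
m_a = np.array([0.7, 0.0, 0.0, 0.3])
m_b = np.array([0.5, 0.2, 0.0, 0.3])
fused = dempster_combine(m_a, m_b)
initial_label = int(np.argmax(fused[:-1]))  # pick the best-supported class
print(fused, initial_label)                 # forest wins here
```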

17 pages, 5065 KiB  
Article
Federated Learning Approach for Remote Sensing Scene Classification
by Belgacem Ben Youssef, Lamyaa Alhmidi, Yakoub Bazi and Mansour Zuair
Remote Sens. 2024, 16(12), 2194; https://doi.org/10.3390/rs16122194 - 17 Jun 2024
Abstract
In classical machine learning algorithms, used in many analysis tasks, the data are centralized for training; that is, both the model and the data are housed within one device. Federated learning (FL), on the other hand, breaks away from this traditional paradigm by allowing multiple devices to collaboratively train a model without any of them sharing their own data. In a typical FL setting, each device has a local dataset and trains a local model on that dataset. The local models are then aggregated at a central server to produce a global model, which is distributed back to the devices so they can update their local models accordingly. This process is repeated until the global model converges. In this article, an FL approach is applied to remote sensing scene classification for the first time. The adopted approach uses three different RS datasets while employing two CNN models and two Vision Transformer models, namely EfficientNet-B1, EfficientNet-B3, ViT-Tiny, and ViT-Base. We compare the performance of FL for each model in terms of overall accuracy and undertake additional experiments to assess their robustness when faced with scenarios of dropped clients. Our classification results on test data show that the two considered Transformer models outperform the two models from the CNN family. Furthermore, employing FL with ViT-Base yields the highest accuracy levels even when the number of dropped clients is significant, indicating its high robustness. These promising results suggest that FL can be successfully used with ViT models in the classification of RS scenes, whereas CNN models may suffer from overfitting problems.
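For readers unfamiliar with the aggregation step, here is a minimal FedAvg-style round in PyTorch, assuming clients are plain DataLoaders over private datasets. It is an illustrative sketch of the generic FL loop described above, not the authors' experimental code.

```python
import copy
import torch

def federated_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    """One FedAvg round: every client trains a private copy of the global
    model on its own data, then the server averages the weights,
    weighted by client dataset size. No raw data leaves a client."""
    states, sizes = [], []
    loss_fn = torch.nn.CrossEntropyLoss()
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        states.append(model.state_dict())
        sizes.append(len(loader.dataset))
    total = float(sum(sizes))
    avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}
    global_model.load_state_dict(avg)   # becomes the next global model
    return global_model
```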

21 pages, 6228 KiB  
Article
An Improved SAR Ship Classification Method Using Text-to-Image Generation-Based Data Augmentation and Squeeze and Excitation
by Lu Wang, Yuhang Qi, P. Takis Mathiopoulos, Chunhui Zhao and Suleman Mazhar
Remote Sens. 2024, 16(7), 1299; https://doi.org/10.3390/rs16071299 - 7 Apr 2024
Abstract
Synthetic aperture radar (SAR) plays a crucial role in maritime surveillance due to its capability for all-weather, all-day operation. However, SAR ship recognition faces challenges, primarily due to the imbalance and inadequacy of ship samples in publicly available datasets, along with the presence of numerous outliers. To address these issues, this paper proposes a SAR ship classification method based on text-generated images to tackle dataset imbalance. First, an image generation module is introduced to augment SAR ship data: it generates images from textual descriptions to overcome the problems of insufficient samples and imbalance between ship categories. Second, given the limited information content in the black background of SAR ship images, the Tokens-to-Token Vision Transformer (T2T-ViT) is employed as the backbone network; this approach effectively combines local information with global modeling, facilitating the extraction of features from SAR images. Finally, a Squeeze-and-Excitation (SE) module is incorporated into the backbone network to enhance the network's focus on essential features, thereby improving the model's generalization ability. To assess the model's effectiveness, extensive experiments were conducted on the OpenSARShip2.0 and FUSAR-Ship datasets. The evaluation results indicate that the proposed method achieves higher classification accuracy on imbalanced datasets than eight existing methods.
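The Squeeze-and-Excitation mechanism mentioned above is a standard channel-attention block; a minimal PyTorch version is sketched below, assuming feature maps in (N, C, H, W) layout. How it is wired into T2T-ViT is the paper's contribution and is not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling ("squeeze"), a small
    bottleneck MLP, and a sigmoid gate that reweights channels ("excite")."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: (N, C)
        w = self.gate(w)[:, :, None, None]      # per-channel weights
        return x * w                            # excite: reweight channels

feats = torch.randn(2, 64, 14, 14)
out = SEBlock(64)(feats)                        # same shape, rescaled channels
```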

17 pages, 6789 KiB  
Article
Multi-Year Time Series Transfer Learning: Application of Early Crop Classification
by Matej Račič, Krištof Oštir, Anže Zupanc and Luka Čehovin Zajc
Remote Sens. 2024, 16(2), 270; https://doi.org/10.3390/rs16020270 - 10 Jan 2024
Abstract
Crop classification is an important task in remote sensing with many applications, such as estimating yields, detecting crop diseases and pests, and ensuring food security. In this study, we combined knowledge from remote sensing, machine learning, and agriculture to investigate the application of transfer learning with a transformer model for variable-length satellite image time series (SITS). The objective was to produce a map of agricultural land, reduce the required interventions, and limit in-field visits. Specifically, we aimed to provide reliable agricultural land class predictions in a timely manner and to quantify the amount of reference parcels necessary to achieve these outcomes. Our dataset consisted of Sentinel-2 satellite imagery and reference crop labels for Slovenia spanning the years 2019, 2020, and 2021. We evaluated adaptability through fine-tuning in a real-world scenario of early crop classification with limited up-to-date reference data. The base model trained on a different year achieved an average F1 score of 82.5% for the target year without any reference data from that year. To increase accuracy with a new model trained from scratch, an average of 48,000 samples is required in the target year. Using transfer learning, the pre-trained models can be efficiently adapted to an unknown year, requiring fewer than 0.3% (1500) of these samples. Building on this, we show that transfer learning can outperform the baseline in the context of early classification with only 9% of the data after 210 days in the year.
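As an illustration of the fine-tuning setup described above, the following sketch adapts a pre-trained classifier to a new year with a small labeled subset. The `encoder` attribute and the freezing policy are assumptions made for illustration; the paper's transformer and training details differ.

```python
import torch

def fine_tune(model, target_loader, epochs=5, lr=1e-4, freeze_encoder=True):
    """Adapt a model pre-trained on earlier years using a small labeled
    subset from the target year. The `encoder` attribute and the freezing
    policy are illustrative assumptions, not the paper's configuration."""
    if freeze_encoder:
        for p in model.encoder.parameters():    # keep generic temporal features
            p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.AdamW(trainable, lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for sits, labels in target_loader:      # batches of image time series
            opt.zero_grad()
            loss_fn(model(sits), labels).backward()
            opt.step()
    return model
```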

20 pages, 10636 KiB  
Article
A Near Real-Time Mapping of Tropical Forest Disturbance Using SAR and Semantic Segmentation in Google Earth Engine
by John Burns Kilbride, Ate Poortinga, Biplov Bhandari, Nyein Soe Thwal, Nguyen Hanh Quyen, Jeff Silverman, Karis Tenneson, David Bell, Matthew Gregory, Robert Kennedy and David Saah
Remote Sens. 2023, 15(21), 5223; https://doi.org/10.3390/rs15215223 - 3 Nov 2023
Abstract
Satellite-based forest alert systems are an important tool for ecosystem monitoring, conservation planning, and increasing public awareness of forest cover change. Continuous monitoring in tropical regions, such as those experiencing pronounced monsoon seasons, can be complicated by spatially extensive and persistent cloud cover. One solution is to use Synthetic Aperture Radar (SAR) imagery acquired by the European Space Agency's Sentinel-1A and 1B satellites, which collect C-band radar data that penetrate cloud cover and can be acquired during the day or night. One challenge associated with the operational use of radar imagery is that speckle in the backscatter values can complicate traditional pixel-based analysis approaches. A potential solution is to use deep learning semantic segmentation models that can capture predictive features that are more robust to pixel-level noise. In this analysis, we present a prototype SAR-based forest alert system that utilizes deep learning classifiers, deployed using the Google Earth Engine cloud computing platform, to identify forest cover change in near real time over two Cambodian wildlife sanctuaries. By leveraging a pre-existing forest cover change dataset derived from multispectral Landsat imagery, we present a method for efficiently developing a SAR-based semantic segmentation dataset. In practice, the proposed framework achieved performance comparable to an existing forest alert system while offering more flexibility and ease of development from an operational standpoint.
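For orientation, the snippet below shows the kind of Sentinel-1 access that Earth Engine provides through its documented COPERNICUS/S1_GRD collection; the area of interest is hypothetical, and the deep learning deployment itself is not shown.

```python
import ee

ee.Initialize()

# Hypothetical area of interest; a real run would use sanctuary boundaries.
aoi = ee.Geometry.Rectangle([105.2, 12.4, 105.8, 13.0])

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2021-01-01', '2021-12-31')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select(['VV', 'VH']))

# A multi-temporal median composite is one simple way to suppress speckle
# before handing imagery to a segmentation model.
composite = s1.median().clip(aoi)
```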

17 pages, 1045 KiB  
Article
Ship Detection via Multi-Scale Deformation Modeling and Fine Region Highlight-Based Loss Function
by Chao Li, Jianming Hu, Dawei Wang, Hanfu Li and Zhile Wang
Remote Sens. 2023, 15(17), 4337; https://doi.org/10.3390/rs15174337 - 3 Sep 2023
Abstract
Ship detection in optical remote sensing images plays a vital role in numerous civil and military applications, encompassing maritime rescue, port management, and sea area surveillance. However, the multi-scale and deformation characteristics of ships in remote sensing images, as well as complex scene interferences such as varying degrees of cloud, obvious shadows, and complex port facilities, pose challenges for ship detection performance. To address these problems, we propose a novel ship detection method combining multi-scale deformation modeling with a fine region highlight-based loss function. First, a visual saliency extraction network based on multiple receptive fields and deformable convolution is proposed; it employs multiple receptive fields to mine the difference between the target and the background, and accurately extracts the complete features of the target through deformable convolution, thus improving the ability to distinguish the target from a complex background. Then, a customized loss function for fine target region highlighting is employed, which comprehensively considers the brightness, contrast, and structural characteristics of ship targets, thus improving classification performance in complex scenes with interferences. Experimental results on a high-quality ship dataset indicate that our method achieves state-of-the-art performance compared to eleven considered detection models.
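Deformable convolution, one ingredient named above, can be sketched with torchvision's DeformConv2d, where a small regular convolution predicts the sampling offsets. This is a generic sketch, not the paper's saliency network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """A 3x3 deformable convolution whose sampling offsets are predicted
    from the input itself, so the kernel can follow deformed ship shapes."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2 offsets (dy, dx) per kernel tap: 2 * 3 * 3 = 18 channels
        self.offset = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

x = torch.randn(1, 64, 32, 32)
y = DeformBlock(64, 128)(x)     # -> (1, 128, 32, 32)
```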

25 pages, 2163 KiB  
Article
LPMSNet: Location Pooling Multi-Scale Network for Cloud and Cloud Shadow Segmentation
by Xin Dai, Kai Chen, Min Xia, Liguo Weng and Haifeng Lin
Remote Sens. 2023, 15(16), 4005; https://doi.org/10.3390/rs15164005 - 12 Aug 2023
Abstract
Cloud and cloud shadow segmentation is among the most difficult problems in contemporary satellite image processing. Owing to substantial background noise interference, existing cloud and cloud shadow segmentation techniques suffer from false and missed detections. We propose a Location Pooling Multi-Scale Network (LPMSNet) in this study. A residual network is utilised as the backbone in this method to acquire semantic information at various levels. Simultaneously, the Location Attention Multi-Scale Aggregation Module (LAMA) is introduced to obtain the image's multi-scale information, and the Channel Spatial Attention Module (CSA) is introduced to boost the network's focus on segmentation targets. Finally, in view of the problem that the edge details of clouds and cloud shadows are easily lost, this work designs the Scale Fusion Restoration Module (SFR), which performs image upsampling and recovers edge detail information for clouds and cloud shadows. The mean intersection over union (MIoU) accuracy of this network reached 94.36% and 81.60% on the Cloud and Cloud Shadow Dataset and the five-category L8SPARCS dataset, respectively. On the two-category HRC-WHU dataset, the network's intersection over union (IoU) reached 90.51%. In addition, on the Cloud and Cloud Shadow Dataset, our network achieves 97.17%, 96.83%, and 97.00% in precision (P), recall (R), and F1 score (F1) on the cloud segmentation task, respectively; in the cloud shadow segmentation task, precision, recall, and F1 score reached 95.70%, 96.38%, and 96.04%, respectively. Therefore, this method has a significant advantage over current cloud and cloud shadow segmentation methods.
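The MIoU figures quoted above follow the usual definition; for reference, a compact NumPy implementation over integer label maps is given below (generic metric code, not the authors').

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """MIoU from a pixel-level confusion matrix:
    per-class IoU = TP / (TP + FP + FN), averaged over classes."""
    cm = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return (tp / np.maximum(union, 1)).mean()

gt = np.random.randint(0, 3, (256, 256))      # toy 3-class label maps
pred = np.random.randint(0, 3, (256, 256))
print(mean_iou(pred, gt, num_classes=3))
```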

22 pages, 3814 KiB  
Article
MBCNet: Multi-Branch Collaborative Change-Detection Network Based on Siamese Structure
by Dehao Wang, Liguo Weng, Min Xia and Haifeng Lin
Remote Sens. 2023, 15(9), 2237; https://doi.org/10.3390/rs15092237 - 23 Apr 2023
Abstract
The change-detection task is essentially a binary semantic segmentation task of changed and invariant regions. However, it is much more difficult than a simple binary task, as the changing areas typically include multiple terrains such as factories, farmland, roads, buildings, and mining areas; this places high demands on the network's feature extraction ability. To this end, we propose a multi-branch collaborative change-detection network based on a Siamese structure (MBCNet). In the model, three branches, namely the difference branch, the global branch, and the similarity branch, are constructed to refine and extract semantic information from remote-sensing images. Four modules, namely a cross-scale feature-attention module (CSAM), a global semantic filtering module (GSFM), a double-branch information-fusion module (DBIFM), and a similarity-enhancement module (SEM), are proposed to help the three branches extract semantic information better. The CSAM module extracts change-related semantic information from the difference branch, the GSFM module filters the rich semantic information in the remote-sensing image, and the DBIFM module fuses the semantic information extracted from the difference branch and the global branch. Finally, the SEM module uses the similarity information extracted by the similarity branch to correct the details of the feature map in the feature-recovery stage.
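To make the Siamese idea concrete, here is a minimal weight-shared encoder with a difference branch in PyTorch. It is a toy stand-in for orientation only; MBCNet's actual branches and modules are far richer.

```python
import torch
import torch.nn as nn

class SiameseDiff(nn.Module):
    """Weight-shared encoder applied to both acquisition dates; a difference
    branch then classifies |f(t1) - f(t2)| into change / no-change."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 2, 1)         # per-pixel change logits

    def forward(self, t1, t2):
        f1, f2 = self.encoder(t1), self.encoder(t2)   # same weights twice
        return self.head(torch.abs(f1 - f2))

t1 = torch.randn(1, 3, 128, 128)
t2 = torch.randn(1, 3, 128, 128)
logits = SiameseDiff()(t1, t2)                  # -> (1, 2, 128, 128)
```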

Other


15 pages, 4440 KiB  
Technical Note
Multi-Feature Dynamic Fusion Cross-Domain Scene Classification Model Based on Lie Group Space
by Chengjun Xu, Jingqian Shu and Guobin Zhu
Remote Sens. 2023, 15(19), 4790; https://doi.org/10.3390/rs15194790 - 30 Sep 2023
Abstract
To address the expensive and time-consuming annotation of high-resolution remote sensing images (HRRSIs), scholars have proposed cross-domain scene classification models, which can utilize learned knowledge to classify unlabeled data samples. Because of the significant distribution difference between a source domain (training sample set) and a target domain (test sample set), scholars have proposed domain adaptation models based on deep learning to reduce these differences. However, existing models have the following shortcomings: (1) insufficient learning of feature information, resulting in feature loss and restricting the spatial extent of domain-invariant features; (2) a tendency to focus on background feature information, resulting in negative transfer; (3) the relationship between the marginal distribution and the conditional distribution is not fully considered, and the weight parameters between them are set manually, which is time-consuming and may fall into a local optimum. To address these problems, this study proposes a novel remote sensing cross-domain scene classification model based on Lie group spatial attention and adaptive multi-feature distribution. Concretely, the model first introduces Lie group feature learning and maps the samples to the Lie group manifold space. By learning and fusing features at different levels and scales, richer features are obtained and the spatial scope of domain-invariant features is expanded. In addition, we design an attention mechanism based on dynamic feature fusion alignment, which effectively enhances the weight of key regions and dynamically balances the importance of the marginal and conditional distributions. Extensive experiments on three publicly available and challenging datasets show the advantages of our proposed method over other state-of-the-art deep domain adaptation methods.
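Marginal-distribution alignment of the kind discussed above is often penalized with maximum mean discrepancy (MMD); a compact RBF-kernel version is sketched below as a generic illustration, not the paper's loss.

```python
import torch

def mmd_rbf(x_src, x_tgt, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel between source
    and target feature batches (rows are feature vectors)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (k(x_src, x_src).mean() + k(x_tgt, x_tgt).mean()
            - 2 * k(x_src, x_tgt).mean())

src = torch.randn(32, 128)        # e.g., source-domain features
tgt = torch.randn(32, 128) + 0.5
print(mmd_rbf(src, tgt))          # grows as the two distributions diverge
```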
