1. Introduction
Precise crop mapping is vitally important in agriculture and agricultural management, for example in crop damage estimation [1], crop acreage and yield estimation [2], and precision agriculture [3]. Detailed crop maps provide basic data and materials for scientific study and governmental decision-making. Compared with conventional field investigation approaches, remote sensing is considered a cost-effective, labor-saving, and time-efficient method of vegetation mapping and has been widely applied to crop mapping [4].
It is challenging for multi-spectral remote sensing data to discriminate between different crop species. One reason is the spectral similarity between different crop types [5]. Hyperspectral remote sensing data, with up to hundreds of narrow spectral bands spanning the visible to infrared regions of the spectrum, are more powerful than multi-spectral images for identifying different crop species. To investigate the capability of hyperspectral data to distinguish different crops, studies have been carried out on selecting appropriate hyperspectral waveband locations [6,7]. However, due to the variability within the same crop caused by growth calendars, farmer decisions, and local weather [5], it remains challenging to choose hyperspectral remote sensing data with the proper bands and time phases to classify crops in detail. To improve the accuracy of crop species classification, incorporating plant canopy structure information into the classification of optical remote sensing data is promising. LiDAR systems, which can measure the vertical structure of vegetation, have been used in tree species inventory [8,9,10,11]. The combination of hyperspectral and LiDAR data has shown its potential for tree species classification [12,13]. As for crops, canopy height differences between crop species are more pronounced than those between tree species, so using three-dimensional information could be even more promising for differentiating crops with similar spectral characteristics.
Another reason why it is difficult for multi-spectral remote sensing data to discriminate between crop species is the limited spatial resolution of remote sensing images [5]. In regions with spatially fragmented landscapes and complicated planting structures, high-spatial-resolution remote sensing data are important for accurate crop species classification. Coarse or medium spatial resolution remote sensing images can produce "mixed" pixels containing multiple land cover types or crop species, rendering the data insufficient for detailed crop species classification [14,15]. Very-high-resolution (VHR) images, which provide more detailed observations down to fine or even single-plant scales, are more promising. VHR images have been widely used in urban land cover classification, forest inventory [9,16,17], and crop species mapping [18,19,20,21].
However, high spatial resolution imagery alone might not be effective for accurately mapping crop species, because the pixels of a VHR image can also capture soil background or shadows even though the crops are the only mapping targets. This background information increases the spectral variability and the proportion of mixed pixels within parcels, which decreases the statistical separability between classes in pixel-based classification [22]. This scenario is known as the H-resolution problem [14]. As a way of solving the H-resolution problem, object-based image analysis (OBIA) has been developed and applied to crop species classification [5,20,23].
In contrast to pixel-based classification, object-based classification treats image objects as the basic classification units [14,16]. One advantage of object-based over pixel-based classification is that it can achieve more reliable results by combining different types of object features [19], such as spectral, textural, and geometric features. Object-based image classification consists of two stages: image segmentation and image object classification. In the image segmentation stage, the remote sensing imagery is segmented into relatively homogeneous regions called "image objects" [24]. Previous studies have shown that image segmentation based on multi-sensor data achieves higher segmentation accuracy than segmentation based on multi-spectral data alone [16,25,26,27]. Combining three-dimensional features of the vegetation canopy derived from LiDAR data with high spatial resolution images can improve segmentation accuracy [16,17,28,29]. In the image object classification stage, each segmented object is labeled with a corresponding class using an appropriate classification algorithm.
While several studies have confirmed the effectiveness of combining hyperspectral data and LiDAR-derived vegetation height for tree species mapping [11,12,13,30,31], this combination has never been used for crop species classification, and its effectiveness in crop species mapping is unknown. Furthermore, most studies combining hyperspectral data and LiDAR-derived vegetation height for tree species mapping relied on pixel-based classification and thus ignored the geometric and textural features available in high spatial resolution remote sensing data.
The main objective of this study was to develop a framework for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm and to test its effectiveness in an irrigated agricultural region. The study area is located in the middle reaches of the Heihe River Basin, Gansu Province, China, where the landscape is fragmented and the crop planting structure is complicated. The specific aims of this paper are: (i) to understand the performance in image segmentation of different spectral dimension-reduced features from hyperspectral data and their combinations with LiDAR-derived height information; (ii) to understand what crop species classification accuracies can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm; and (iii) to understand the contributions of the LiDAR-derived crop height and of the textural and geometric features of the image objects to crop species separability.
The remainder of this paper is organized as follows. In Section 2, we describe the study area and the dataset used in the analysis. In Section 3, we present our methods for data pre-processing, image segmentation and segmentation accuracy assessment, and object-based classification. The results are presented and analyzed in Section 4. A summary of the entire study and the conclusions are presented in Section 5.
5. Conclusions
In this paper, we proposed a framework for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm. To test the effectiveness of this framework, a study was conducted in an irrigated agricultural region in the central Heihe River Basin, where the landscape is fragmented and the crop planting structure is quite complex. A pre-processing procedure was proposed for extracting the mean crop height of the segmented image objects. The performances in image segmentation of different spectral dimension-reduced features from hyperspectral data and their combinations with LiDAR-derived height information were evaluated and compared. The contributions of the LiDAR-derived crop height and of the geometric and textural features of the image objects to crop species separability were studied.
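The idea behind extracting a per-object mean crop height can be illustrated with a minimal numerical sketch. This is not the authors' exact pre-processing chain; it only assumes the common construction in which a canopy height model (CHM) is the difference between the LiDAR-derived digital surface model (DSM) and the digital terrain model (DTM), averaged over each segmented object.

```python
import numpy as np

# Toy 2 x 2 rasters (meters); real DSM/DTM grids come from the LiDAR point cloud.
dsm = np.array([[102.0, 102.5], [101.8, 103.0]])  # surface elevation
dtm = np.array([[100.0, 100.0], [100.0, 100.0]])  # bare-ground elevation
chm = dsm - dtm                                   # canopy height model

segments = np.array([[0, 0], [1, 1]])             # toy segmentation labels

# Mean crop height per image object = mean CHM value over the object's pixels.
mean_height = {int(s): float(chm[segments == s].mean())
               for s in np.unique(segments)}
print(mean_height)  # {0: 2.25, 1: 2.4}
```

The resulting per-object mean height is then appended to the object's spectral, textural, and geometric features before classification.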
We evaluated and compared the performances of different combinations of features extracted from hyperspectral and LiDAR data for image segmentation and image classification. The main indications and conclusions derived from our analysis are the following:
- (i)
The framework we presented in this study for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm is effective. This approach produced a good crop species classification result, with an overall accuracy of 90.33% and a kappa coefficient of 0.89 in our study area, which has a spatially fragmented agricultural landscape and a complicated planting structure.
- (ii)
The image segmentation accuracy depends heavily on the method used to reduce the dimensionality of the hyperspectral data. In this case, the VHR bands selected from the hyperspectral data yielded higher segmentation accuracy than the MNF-transformed data. Incorporating the CHM extracted from high-point-density LiDAR data significantly improved the segmentation accuracy of the VHR data.
- (iii)
The height information derived from LiDAR data provided a substantial increase in the crop species classification accuracy. The MNF/CHM combination produced higher accuracy of crop species classification than VHR/CHM.
- (iv)
Incorporating the textural and geometric features (i.e., the shape index, length-width ratio, and rectangular fit) of objects significantly increased the crop species classification accuracy, which indicates that, owing to its ability to provide diverse textural and geometric features, object-based image classification is effective for crop species mapping in regions with spatially fragmented landscapes and complicated planting structures.
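The three geometric features named in point (iv) can be computed directly from an object's pixel mask. The sketch below uses common OBIA-style definitions, which we assume here and which may differ from the exact formulas in the authors' software: shape index = border length / (4 · √area), length-width ratio from the bounding box, and rectangular fit = object area / bounding-box area.

```python
import numpy as np

# A toy binary mask containing a single 3 x 8 rectangular object.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 1:9] = True

area = mask.sum()
rows, cols = np.nonzero(mask)
height = rows.max() - rows.min() + 1   # bounding-box height
width = cols.max() - cols.min() + 1    # bounding-box width

perimeter = 2 * (height + width)       # exact only for this rectangular object
shape_index = perimeter / (4 * np.sqrt(area))
length_width_ratio = max(height, width) / min(height, width)
rectangular_fit = area / (height * width)

print(shape_index, length_width_ratio, rectangular_fit)
```

For irregular objects, the perimeter would be measured by tracing the object boundary; a compact rectangular object such as this one has a shape index close to 1 and a rectangular fit of exactly 1, whereas elongated or ragged parcels score higher on the shape index and lower on rectangular fit.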
The remote sensing data used in this paper were airborne hyperspectral data with high spatial resolution and LiDAR data with a high point cloud density. However, the crop species classification method we presented is also applicable to combining satellite hyperspectral data of moderate spatial resolution with LiDAR data of low point cloud density for crop mapping over large areas. For future development of this study, it would be interesting to investigate the performance of LiDAR data combined with additional features derived from hyperspectral data in both image segmentation and classification. Further testing of the method in different areas, with other kinds of crops and with LiDAR data of different quality, should also be attempted.