Performing remote sensing scene classification (RSSC) directly on satellites can alleviate data downlink burdens and reduce latency. Compared to convolutional neural networks (CNNs), the all-adder neural network (A²NN) is a novel basic neural network that is better suited to onboard RSSC, achieving lower computational overhead by eliminating the multiplication operations in convolutional layers. However, the extensive floating-point data and operations in A²NNs still incur significant storage overhead and power consumption during hardware deployment. In this article, a shared scaling factor-based de-biasing quantization (SSDQ) method tailored to the quantization of A²NNs is proposed to address this issue, comprising a powers-of-two (POT)-based shared scaling factor quantization scheme and a multi-dimensional de-biasing (MDD) quantization strategy. Specifically, the POT-based shared scaling factor quantization scheme converts the adder filters in A²NNs into quantized adder filters with hardware-friendly integer input activations, weights, and operations. Quantized A²NNs (Q-A²NNs) composed of such quantized adder filters therefore have lower computational and memory overheads than A²NNs, increasing their utility in hardware deployment. Although low-bit-width Q-A²NNs exhibit significantly reduced RSSC accuracy compared to A²NNs, this degradation can be alleviated by the proposed MDD quantization strategy, which combines a weight-debiasing (WD) strategy that reduces the performance loss caused by deviations in the quantized weights with a feature-debiasing (FD) strategy that improves the classification performance of Q-A²NNs by minimizing deviations among the output features of each layer. Extensive experiments and analyses demonstrate that the proposed SSDQ method efficiently quantizes A²NNs into Q-A²NNs with low computational and memory overheads while maintaining performance comparable to A²NNs, indicating high potential for onboard RSSC.
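To make the quantization scheme concrete: an adder filter computes a negated sum of absolute differences between activations and weights, so if both tensors are quantized with a single shared powers-of-two scaling factor s = 2^k, the filter can run entirely on integers and the result can be rescaled by one shift. The following is a minimal sketch under that assumption, not the authors' implementation; the function names and the simple max-based calibration are illustrative only.

```python
import numpy as np

def pot_shared_scale(x, w, bits=8):
    """Quantize activations x and weights w with one shared POT scale (sketch)."""
    qmax = 2 ** (bits - 1) - 1
    # Smallest power-of-two scale s = 2^k that maps both tensors into range.
    max_abs = max(np.abs(x).max(), np.abs(w).max())
    k = int(np.ceil(np.log2(max_abs / qmax)))
    s = 2.0 ** k
    xq = np.clip(np.round(x / s), -qmax - 1, qmax).astype(np.int32)
    wq = np.clip(np.round(w / s), -qmax - 1, qmax).astype(np.int32)
    return xq, wq, s

def adder_filter_int(xq, wq):
    """Integer-only adder filter: negated sum of absolute differences."""
    return -np.abs(xq - wq).sum()

# Because |x - w| = s * |x/s - w/s|, the float response is recovered as
# s * adder_filter_int(xq, wq), i.e. a single shift in hardware.
```

With a shared scale, the subtraction inside the absolute value stays in one integer domain, which is what removes per-operand rescaling from the datapath; the de-biasing (WD/FD) strategies described above would then correct the systematic deviations this rounding introduces.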