Abstract: Maize is one of the most important food crops in China. Maize leaf diseases can seriously reduce yield, so correct identification of disease is of great significance. However, traditional manual identification of leaf diseases is inefficient, and disease images collected from fixed agricultural monitoring points or drones are of low resolution with indistinct key features, so they cannot meet the image-resolution requirements of classification and recognition models; training on such images is ineffective, making accurate identification of leaf diseases difficult. To this end, a maize disease classification and recognition model based on an improved super-resolution generative adversarial network (SRGR) was designed. The maize leaf images were divided into four classes: large spot, rust, gray spot, and healthy leaves, and the data set was organized into one-to-one pairs of low-resolution (LR) and high-resolution (HR) images. To restore low-resolution maize lesion images to high resolution, the model adopted an improvement strategy for the enhanced super-resolution generative adversarial network (ESRGAN) based on a dual attention mechanism. LR images were input into a high-frequency feature reconstruction network, and a channel attention (CA) mechanism was added after each residual-in-residual dense block (RRDB) to extract deep detailed features of the image, making the model highly targeted when reconstructing high-frequency details and reducing the likelihood of pseudo-texture artifacts. The generator was divided into encoding and decoding parts, and a spatial attention mechanism was introduced into the U-shaped dense block with skip connections so as to retain as many effective disease features as possible from the middle and low levels of the LR lesion image.
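The channel-attention step appended after each RRDB can be sketched as a squeeze-and-excitation style reweighting of feature channels. The sketch below is framework-agnostic NumPy; the array shapes, weight names, and reduction ratio are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def channel_attention(x, w1, b1, w2, b2):
    """Squeeze-and-excitation style channel attention (CA) sketch.

    x: feature map of shape (C, H, W) coming out of an RRDB block.
    w1/b1: bottleneck weights of shape (C/r, C); w2/b2: expansion weights
    of shape (C, C/r). The reduction ratio r and all weights are illustrative.
    """
    s = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s + b1, 0.0)           # excitation bottleneck + ReLU -> (C/r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid gate -> per-channel weights (C,)
    return x * a[:, None, None]                # reweight the channels of the RRDB output

# Toy usage with C=8 channels and reduction r=4 (hypothetical sizes)
rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.normal(size=(C, 16, 16))
y = channel_attention(x,
                      rng.normal(size=(C // r, C)), np.zeros(C // r),
                      rng.normal(size=(C, C // r)), np.zeros(C))
```

Because the sigmoid gate values lie in (0, 1), attended channels are scaled but never sign-flipped, which is what lets the generator emphasize the channels carrying high-frequency lesion detail while suppressing the rest.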
The probability that each position of the input feature map contains high-frequency features was calculated to determine where lesion features should be reconstructed in the image. The WGAN-GP loss function was used to train the network, alleviating vanishing generator gradients and improving training stability. The regenerated lesion images were input into the discriminator network: images that met the HR standard were passed to a ResNet34 classification model for accurate classification and identification of maize leaf lesions, while images that did not meet the standard were returned to the generator for further training. The experimental results showed that adding the dual attention mechanism and changing the loss function improved the model's ability to recover high-frequency features as well as its robustness. Compared with other super-resolution image reconstruction algorithms, the high-resolution images reconstructed by the SRGR model achieved higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values, with average gains of 2.1 dB and 0.049, a significant improvement. Four classification networks were evaluated for image classification and recognition; the recognition accuracy on reconstructed images was on average 28.1 percentage points higher than on LR images, and ResNet34 achieved the highest accuracy among the AlexNet, VGGNet, GoogLeNet, and ResNet34 models. In the attention-module ablation experiment, SRGR exceeded the other three models in maize lesion recognition accuracy by an average of 1.3 percentage points, reaching 97.8%. In the visualization of the recognition results, the lesion heat maps produced by the SRGR model showed the strongest activation and the clearest localization.
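The WGAN-GP critic objective used for training penalizes the critic's input-gradient norm at random real/fake interpolates, pushing the critic toward the 1-Lipschitz constraint. A minimal numeric sketch with a toy *linear* critic (whose input gradient is available in closed form, so no autodiff framework is needed; the weight vector and batch shapes are illustrative) could look like:

```python
import numpy as np

def wgan_gp_critic_loss(real, fake, w, lam=10.0):
    """WGAN-GP critic loss sketch for a toy linear critic D(x) = x @ w.

    real, fake: batches of flattened images, shape (N, D).
    For a linear critic, grad_x D(x) = w at every interpolate x_hat,
    so the gradient penalty reduces to lam * (||w|| - 1)^2; a real
    (nonlinear) critic evaluates the gradient at the interpolates
    via automatic differentiation.
    """
    # Wasserstein term: the critic should score real high and fake low
    wass = fake.mean(axis=0) @ w - real.mean(axis=0) @ w
    # Gradient penalty: push the critic toward 1-Lipschitz
    gp = lam * (np.linalg.norm(w) - 1.0) ** 2
    return wass + gp

# Toy usage with hypothetical batch size 16 and 8-dimensional "images"
rng = np.random.default_rng(1)
real = rng.normal(size=(16, 8))
fake = rng.normal(size=(16, 8))
w = np.ones(8) / np.sqrt(8.0)   # unit-norm critic weights -> zero penalty
loss = wgan_gp_critic_loss(real, fake, w)
```

Unlike the original GAN loss, this objective gives the generator useful gradients even when the critic easily separates real from fake images, which is the stability property the abstract refers to.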
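The PSNR and SSIM figures reported above follow their standard definitions. A minimal sketch is shown below; for brevity it uses a single global SSIM window (an assumption; the standard SSIM slides a Gaussian window over the image and averages the local scores):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=255.0):
    """SSIM computed over one global window (a simplification of the
    usual sliding-Gaussian-window formulation)."""
    x, y = ref.astype(float), img.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy usage on a synthetic gradient "HR" image and a noisy copy
ref = np.tile(np.arange(64.0), (64, 1))
noisy = ref + np.random.default_rng(2).normal(scale=4.0, size=ref.shape)
```

PSNR grows as reconstruction error shrinks (each halving of RMS error adds about 6 dB), while SSIM is bounded above by 1 and reaches it only for a perfect reconstruction, which is why the two metrics are reported together.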
In summary, these results can serve as a reference for the accurate identification of low-resolution leaf disease images in fixed-point crop lesion monitoring or drone-based field monitoring.