
Method for Counting Wheat Ears in UAV Images Based on FE-P2Pnet
Fund projects: Anhui Provincial Natural Science Foundation (2208085MC60), University Scientific Research Program of the Anhui Provincial Department of Science and Technology (2023AH050084), and National Natural Science Foundation of China (62273001, 32372632)



    Abstract:

    Ear counting is a key step in wheat yield estimation. With the rapid development of unmanned aerial vehicle (UAV) and computer vision technology, wheat ears can be counted automatically, quickly, and efficiently. An automatic counting method for wheat ears in UAV images based on FE-P2Pnet (feature enhance-point to point) was proposed to address complex backgrounds, dense wheat, small ear targets, and varying ear sizes. Firstly, the brightness and contrast of the UAV images were enhanced to increase the difference between the ear targets and the background and to reduce the influence of complex background factors such as leaves and stems. Secondly, the point-annotation-based network P2Pnet was introduced as the baseline network to handle densely packed ears. To counter the limited feature information caused by the small size of ear targets, a Triplet module was added to P2Pnet's VGG16 backbone; it exchanges information across the C (channel), H (height), and W (width) dimensions, allowing the backbone to extract more target-related features. To handle varying ear sizes, a feature enhancement module (FEM) and squeeze-and-excitation (SE) modules were added to the feature pyramid network (FPN), enabling it to better process feature information and fuse multi-scale information. Finally, to classify targets more effectively, the Focal Loss function was used in place of the cross-entropy loss; it weights background and target features differently, further highlighting target features. Experimental results showed that on the UAV wheat image dataset constructed in this work (Wheat-ZWF), wheat ear counting reached a mean absolute error (MAE) of 3.77, a mean squared error (MSE) of 5.13, and an accuracy (ACC) of 90.87%, the best performance among compared counting regression methods such as MCNN (multi-column convolutional neural network), CSRnet (congested scene recognition network), and WHCNETs (wheat head counting networks). Compared with the baseline network P2Pnet, MAE and MSE were reduced by 23.2% and 16.6%, respectively, and ACC was improved by 2.67 percentage points. To further validate the effectiveness of the proposed algorithm, experiments were conducted on four additional wheat varieties (AK1009, AK1401, AK1706, and YKM222); the average MAE and MSE of ear counting were 5.10 and 6.17, with an ACC of 89.69%, indicating that the proposed model has good generalization performance. The research can provide support and assistance for related studies on wheat ear counting.
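The preprocessing step described in the abstract — raising brightness and contrast so that ears stand out from leaves and stems — can be sketched as a simple linear pixel transform. This is a minimal illustration only; the paper's actual enhancement procedure and parameters are not given in the abstract, so `alpha` and `beta` here are hypothetical values.

```python
def enhance(pixels, alpha=1.3, beta=20):
    """Linear brightness/contrast adjustment: out = alpha * p + beta, clamped to [0, 255].

    alpha > 1 stretches contrast; beta > 0 lifts brightness.
    The values are illustrative, not the paper's settings.
    """
    return [max(0, min(255, round(alpha * p + beta))) for p in pixels]

row = [10, 100, 200, 250]
print(enhance(row))  # [33, 150, 255, 255] -- bright pixels saturate at 255
```

Dark background pixels are lifted only modestly, while already-bright ear pixels are pushed toward saturation, widening the gap between target and background.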
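The SE (squeeze-and-excitation) module added to the FPN recalibrates channels in three steps: global-average-pool each channel to one descriptor (squeeze), pass the descriptors through a two-layer bottleneck ending in a sigmoid (excitation), and rescale the feature map channel-wise. A NumPy sketch, with random matrices standing in for the learned fully connected layers:

```python
import numpy as np

def se_block(x, reduction=4, rng=np.random.default_rng(0)):
    """x: feature map of shape (C, H, W). Returns the channel-reweighted map."""
    c = x.shape[0]
    # Squeeze: one descriptor per channel via global average pooling.
    z = x.mean(axis=(1, 2))                                    # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid (random stand-in weights).
    w1 = rng.standard_normal((c // reduction, c))
    w2 = rng.standard_normal((c, c // reduction))
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # gates in (0, 1)
    # Scale: broadcast each gate over its whole channel.
    return x * s[:, None, None]

feat = np.ones((8, 4, 4))
out = se_block(feat)
print(out.shape)  # (8, 4, 4)
```

In a trained network the gates learn to amplify channels that respond to ears and suppress channels dominated by background texture.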
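The Focal Loss used in place of cross entropy down-weights easy, confidently classified examples by the factor (1 − p_t)^γ, so the abundant easy background points stop dominating training. A minimal scalar sketch; `alpha` and `gamma` are the common defaults from the original Focal Loss paper, not necessarily the values used in this work.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive (ear) class
    y: ground-truth label, 1 = ear, 0 = background
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)**gamma shrinks the loss of well-classified examples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct background point contributes far less than a hard one:
easy = focal_loss(0.05, 0)   # p_t = 0.95, near-zero loss
hard = focal_loss(0.60, 0)   # p_t = 0.40, much larger loss
print(easy < hard)  # True
```

With γ = 0 the expression reduces to α-weighted cross entropy, which is why the abstract describes Focal Loss as a weighted replacement for it.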
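The reported metrics can be computed from per-image predicted and ground-truth ear counts. Note two assumptions in this sketch: counting papers conventionally report the *root* of the mean squared error under the name MSE, and ACC is taken here as one minus the mean relative counting error — the abstract does not spell out its exact definition.

```python
import math

def counting_metrics(pred, gt):
    """MAE, MSE (root form, per counting convention), and ACC for per-image counts.

    The ACC definition (1 - mean relative error) is an assumption;
    the paper's exact formula is not given in the abstract.
    """
    n = len(pred)
    mae = sum(abs(p - g) for p, g in zip(pred, gt)) / n
    mse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    acc = 1.0 - sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    return mae, mse, acc

# Hypothetical counts for three test images:
mae, mse, acc = counting_metrics([98, 105, 110], [100, 100, 120])
print(round(mae, 2), round(mse, 2), round(acc, 3))  # 5.67 6.56 0.949
```

Lower MAE/MSE and higher ACC are better; the paper's reported 3.77 / 5.13 / 90.87% on Wheat-ZWF correspond to these three quantities.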

Cite this article:

BAO Wenxia, SU Biaobiao, HU Gensheng, HUANG Chengpei, LIANG Dong. Method for Counting Wheat Ears in UAV Images Based on FE-P2Pnet[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(4): 155-164, 289.

History
  • Received: 2023-08-16
  • Published online: 2024-04-10