
Multi-task Visual Perception Method in Dragon Orchards Based on OrchardYOLOP
CSTR:
Author:
Affiliation:
Author biography:
Corresponding author:
CLC number:
Fund project: National Key Research and Development Program of China (2023YFD1400700)


    Abstract:

In the face of challenges such as complex terrain, variable lighting, and unstructured environments, modern orchard robots must efficiently process large amounts of environmental information, while traditional algorithms that execute multiple single tasks sequentially are limited by computational power and cannot meet these demands. To address the real-time and accuracy requirements of multi-task autonomous driving robots in dragon fruit orchards, the OrchardYOLOP model was built on YOLOP by introducing a focus attention convolution module, employing C2F and SPPF modules, and optimizing the loss function for the segmentation tasks. Experiments demonstrated that OrchardYOLOP achieved a precision of 84.1% on the target detection task, an mIoU of 89.7% on the drivable-area segmentation task, and an mIoU of 90.8% on the fruit-tree region segmentation task, with an inference speed of 33.33 frames per second and a parameter count of only 9.67×10⁶. Compared with YOLOP, it not only met the real-time speed requirement but also significantly improved accuracy, addressing key issues in multi-task visual perception in dragon fruit orchards and providing an effective solution for multi-task autonomous driving visual perception in unstructured environments.
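The architecture the abstract describes lends itself to a compact illustration: one shared encoder ending in an SPPF block, with a detection head and two segmentation heads (drivable area and fruit-tree region) branching from the same features. Below is a minimal PyTorch sketch of that arrangement. It is not the authors' released code: the C2F stages are stood in for by plain strided convolutions, and every layer width, stride, and head shape is an illustrative assumption. A small mIoU helper is included to make the segmentation metric quoted above concrete.

```python
# Minimal sketch of a YOLOP-style multi-task network as described in the
# abstract. NOT the authors' code: C2F stages are approximated by plain
# strided convolutions; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    """Conv -> BatchNorm -> SiLU, the usual YOLO building block."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPPF(nn.Module):
    """Spatial Pyramid Pooling - Fast: three chained max-pools, concatenated."""
    def __init__(self, c, k=5):
        super().__init__()
        self.cv1 = ConvBNAct(c, c // 2, k=1)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.cv2 = ConvBNAct((c // 2) * 4, c, k=1)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        return self.cv2(torch.cat([x, y1, y2, self.pool(y2)], dim=1))

class MultiTaskNet(nn.Module):
    """Shared encoder feeding one detection head and two segmentation heads."""
    def __init__(self, num_classes=1):
        super().__init__()
        self.backbone = nn.Sequential(          # stride-8 shared encoder
            ConvBNAct(3, 32, s=2),
            ConvBNAct(32, 64, s=2),
            ConvBNAct(64, 128, s=2),
            SPPF(128),
        )
        # Per-cell box (4) + objectness (1) + class scores.
        self.detect = nn.Conv2d(128, 5 + num_classes, 1)

        def seg_head():                         # upsample back to input size
            return nn.Sequential(
                ConvBNAct(128, 64, k=1),
                nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 2, 1),            # background / foreground logits
            )
        self.drivable = seg_head()              # drivable-area head
        self.tree = seg_head()                  # fruit-tree region head

    def forward(self, x):
        f = self.backbone(x)
        return self.detect(f), self.drivable(f), self.tree(f)

def miou(pred, target, num_classes=2):
    """Mean IoU over classes for integer label maps (the metric quoted above)."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union:
            ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)

det, da, tree = MultiTaskNet()(torch.randn(1, 3, 384, 640))
print(det.shape, da.shape, tree.shape)
# torch.Size([1, 6, 48, 80]) torch.Size([1, 2, 384, 640]) torch.Size([1, 2, 384, 640])
```

The shared encoder is the point the abstract makes: all three tasks reuse a single forward pass, which is where the speed and parameter savings over sequentially executed single-task networks come from.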

Cite this article:

ZHAO Wenfeng, HUANG Yuanjue, ZHONG Minyue, LI Zhenyuan, LUO Zitao, HUANG Jiajun. Multi-task Visual Perception Method in Dragon Orchards Based on OrchardYOLOP[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(11): 160-170.

History
  • Received: 2024-06-04
  • Published online: 2024-11-10