
Individual Identification Method of Cows Based on 3D CNN-BiLSTM-ATFA Network and Gait Feature

Author: SI Yongsheng, NING Zepu, WANG Kejian, MA Yabin, YUAN Ming

Fund project: Hebei Provincial Key Research and Development Program (22327404D, 22326609D)




    Abstract:

    To address the low accuracy of pattern-based individual identification for cows with solid-colored coats or sparse patterns, an individual identification method based on cow gait features was proposed. First, the backbone of the DeepLabv3+ semantic segmentation algorithm was replaced with the MobileNetv2 network, and the CBAM attention mechanism, which combines channel and spatial attention, was introduced into the segmentation algorithm; the improved model was used to segment cow silhouettes. Then, a 3D convolutional neural network (3D CNN) and a bidirectional long short-term memory network (BiLSTM) were combined into a 3D CNN-BiLSTM network, and an adaptive temporal feature aggregation (ATFA) module was further integrated into this network to produce the 3D CNN-BiLSTM-ATFA cow individual identification model. Finally, identification experiments were conducted on a dataset of 1,242 videos from 30 cows. The results showed that the mean pixel accuracy, mean intersection over union, and accuracy of the improved DeepLabv3+ algorithm were 99.02%, 97.18%, and 99.71%, respectively. Identification was best when r3d_18 was used as the backbone of the 3D CNN-BiLSTM-ATFA network: the average accuracy, sensitivity, and precision of gait-based individual identification were 94.58%, 93.47%, and 95.94%, respectively. Experiments with weighted feature fusion of different torso and leg regions showed that identification accuracy could be further improved. Lameness had a marked effect on gait-based identification: identification accuracy was 89.39% for cows that changed from healthy to lame during the experiment and 92.61% for cows that remained lame throughout. These results can provide a technical reference for intelligent individual identification of dairy cows.
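    The abstract does not specify the internal design of the ATFA module. As a rough illustration only (an assumption, not the authors' implementation), the sketch below shows one common way to aggregate per-frame gait features adaptively over time: a small learned scoring head assigns each frame an importance score, softmax turns the scores into attention weights, and the clip-level feature is the weighted sum of the frame features. The function name `adaptive_temporal_aggregation` and the linear scoring head `(w, b)` are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_temporal_aggregation(frame_features, w, b):
    """Aggregate per-frame features into one clip-level feature.

    frame_features : (T, D) array, one D-dim feature per frame
    w, b           : parameters of a tiny linear scoring head, shapes (D,) and scalar

    Each frame gets a learned importance score; the clip feature is the
    softmax-weighted sum of the frame features, so informative frames
    contribute more than uninformative ones.
    """
    scores = frame_features @ w + b      # (T,) one score per frame
    alpha = softmax(scores)              # attention weights, sum to 1
    return alpha @ frame_features        # (D,) weighted average over time

rng = np.random.default_rng(0)
T, D = 16, 128                           # 16 frames, 128-dim feature per frame
feats = rng.normal(size=(T, D))
w, b = rng.normal(size=D), 0.0
clip_feat = adaptive_temporal_aggregation(feats, w, b)
```

    Note that with a zero scoring head the weights become uniform and the module degenerates to plain temporal average pooling, which makes the "adaptive" part easy to sanity-check.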

Cite this article

司永勝,寧澤普,王克儉,馬亞賓,袁明.基于3D CNN-BiLSTM-ATFA網(wǎng)絡(luò)和步態(tài)特征的奶牛個(gè)體識(shí)別方法[J].農(nóng)業(yè)機(jī)械學(xué)報(bào),2024,55(7):315-324. SI Yongsheng, NING Zepu, WANG Kejian, MA Yabin, YUAN Ming. Individual Identification Method of Cows Based on 3D CNN-BiLSTM-ATFA Network and Gait Feature[J]. Transactions of the Chinese Society for Agricultural Machinery,2024,55(7):315-324.

復(fù)制
分享
文章指標(biāo)
  • 點(diǎn)擊次數(shù):
  • 下載次數(shù):
  • HTML閱讀次數(shù):
  • 引用次數(shù):
History
  • Received: 2024-02-20
  • Published online: 2024-07-10