
“Image-Text” Association Enhanced Multi-modal Swine Disease Knowledge Graph Fusion
CSTR:
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

General Program of the National Natural Science Foundation of China (32472007) and Key Project of the Anhui Provincial Scientific Research Program for Higher Education Institutions (Natural Science) (2023AH051020)



    Abstract:

    Traditional swine disease prevention relies primarily on human expertise, which risks missed diagnoses due to human error. To address this, a multi-modal swine disease knowledge graph was developed to help managers better understand the relationships among pigs, providing a solid data foundation for identifying potential disease transmission paths and anomalies. First, swine disease data were collected from various sources, and two preliminary multi-modal knowledge graphs were constructed through knowledge extraction and image matching. Second, a multi-modal knowledge graph fusion method based on “image-text” association was proposed; it uses a multi-head attention mechanism to learn the semantic associations between images and text, reducing the negative impact of visual ambiguity in swine disease images and enhancing the vector representations of swine disease entities. Finally, by computing the similarity of entity representations in vector space, entities from the two multi-modal datasets were merged into a single, more complete knowledge graph. Experiments showed that the proposed method achieved strong performance on swine disease entity alignment, improving alignment accuracy (Hits@1) by 0.033 over existing methods. On the general-purpose DBPZH-EN, DBPFR-EN, and DBPJA-EN datasets, accuracy improved by 0.152, 0.236, and 0.180 respectively, demonstrating the method's effectiveness for multi-modal knowledge graph fusion.
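The pipeline described above — multi-head attention grounding text entity embeddings in image features, followed by cosine-similarity alignment scored with Hits@1 — can be sketched as a toy in NumPy. This is an illustrative sketch, not the paper's implementation: the projection matrices are random stand-ins for learned weights, and the function names (`enhance`, `hits_at_1`), the residual weight `alpha`, and all dimensions are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_head_cross_attention(text, image, num_heads=4):
    """Text entity embeddings (n, d) attend to image features (m, d).

    Illustrative only: the projections below are random stand-ins for
    the learned W_Q, W_K, W_V matrices a trained model would provide.
    """
    n, d = text.shape
    d_h = d // num_heads
    w_q, w_k, w_v = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))
    q, k, v = text @ w_q, image @ w_k, image @ w_v
    out = np.empty_like(text)
    for h in range(num_heads):
        s = slice(h * d_h, (h + 1) * d_h)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_h)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)   # softmax over image regions
        out[:, s] = w @ v[:, s]             # image-grounded signal per head
    return out

def enhance(text, image, alpha=0.5):
    """Residual fusion: keep the text semantics, add the attended image
    signal, then L2-normalise so cosine similarity is a dot product."""
    fused = text + alpha * multi_head_cross_attention(text, image)
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

def hits_at_1(source, target):
    """Fraction of source entities whose nearest target row (by cosine
    similarity) is the gold match; rows are assumed pre-aligned."""
    sim = source @ target.T
    return float(np.mean(sim.argmax(axis=1) == np.arange(len(source))))

# Toy demo: 10 entities with 8-dim text embeddings and 5 image regions.
text_emb = rng.normal(size=(10, 8))
img_emb = rng.normal(size=(5, 8))
kg1 = enhance(text_emb, img_emb)
# Simulate the second graph's view of the same entities as a noisy copy.
noisy = kg1 + 0.01 * rng.normal(size=kg1.shape)
kg2 = noisy / np.linalg.norm(noisy, axis=1, keepdims=True)
score = hits_at_1(kg1, kg2)
```

With near-duplicate entity vectors, `score` comes out at or near 1.0; in the paper's setting the two graphs come from different sources, which is where the image-grounded enhancement is claimed to help.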

Cite this article

JIANG Tingting, XU Ao, WU Feifei, YANG Shuai, HE Jin, GU Lichuan. “Image-Text” Association Enhanced Multi-modal Swine Disease Knowledge Graph Fusion[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(1): 56-64.

History
  • Received: 2024-11-01
  • Published online: 2025-01-10