
Visual Positioning Technology of Assembly Robot Workpiece Based on Prediction of Key Points
CSTR:
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

Key R&D Program of Jilin Province (20200101130GX)




Abstract:

Manual feature detection for assembly robots is susceptible to interference factors such as illumination conditions, background, and occlusion, while point-cloud-based feature detection depends on the accuracy of model construction. To address these problems, a deep learning method was adopted to study visual positioning of the workpiece based on key point prediction. Firstly, an ArUco pose detection marker and ICP point cloud registration were used to construct a data set for training the pose estimation network: depth images of the workpiece were collected from various angles, the pose of the workpiece was calculated, and key points on the workpiece surface were selected to form the data set. Then the vector field of the surface key points was constructed and trained together with the data set, so that the network predicted, for each foreground point, the direction vector pointing to each key point. Next, the direction vectors of the pixels pointing to the same key point were grouped in pairs, the intersection point of each pair was taken as a key point hypothesis, and all hypotheses were evaluated by RANSAC-based voting. The EPnP solver was then used to calculate the pose of the workpiece, and an oriented bounding box of the workpiece was generated to display the pose estimation result. Finally, the accuracy and robustness of the estimation results were verified by experiments.
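The hypothesis-and-voting step described in the abstract (intersecting pairs of predicted direction vectors, then scoring each candidate key point by how many pixels' predictions agree with it) can be sketched as below. This is a minimal NumPy illustration on synthetic, noiseless data; the function names `intersect` and `vote_keypoint` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    """Intersection of two 2D rays p + t*d; returns None if near-parallel."""
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system in (t1, t2).
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    b = p2 - p1
    if abs(np.linalg.det(A)) < 1e-8:
        return None
    t = np.linalg.solve(A, b)
    return p1 + t[0] * d1

def vote_keypoint(pixels, dirs, n_pairs=100, thresh=0.99, rng=None):
    """RANSAC-style voting: generate key point hypotheses from random
    pixel pairs, score each by counting pixels whose predicted unit
    direction agrees with it, and return the best-voted hypothesis."""
    rng = np.random.default_rng(rng)
    best, best_votes = None, -1
    for _ in range(n_pairs):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        h = intersect(pixels[i], dirs[i], pixels[j], dirs[j])
        if h is None:
            continue
        to_h = h - pixels                      # vectors pixel -> hypothesis
        norms = np.linalg.norm(to_h, axis=1)
        ok = norms > 1e-6
        # Cosine between predicted direction and direction to hypothesis.
        agree = np.sum(to_h[ok] * dirs[ok], axis=1) / norms[ok]
        votes = int(np.sum(agree > thresh))    # inliers supporting h
        if votes > best_votes:
            best, best_votes = h, votes
    return best, best_votes

# Synthetic check: foreground pixels whose predicted unit vectors all
# point exactly at the true key point (50, 40).
kp = np.array([50.0, 40.0])
gen = np.random.default_rng(0)
pixels = gen.uniform(0.0, 100.0, size=(200, 2))
dirs = kp - pixels
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
est, votes = vote_keypoint(pixels, dirs, rng=0)
```

The voted 2D key points, paired with their known 3D coordinates on the workpiece model, would then be passed to an EPnP solver (e.g. OpenCV's `cv2.solvePnP` with the `cv2.SOLVEPNP_EPNP` flag) to recover the 6-DOF pose.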

Cite this article

NI Tao, ZHANG Panhong, LI Wenhang, ZHAO Yahui, ZHANG Hongyan, ZHAI Haiyang. Visual Positioning Technology of Assembly Robot Workpiece Based on Prediction of Key Points[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(6): 443-450.

History
  • Received: 2021-06-23
  • Published online: 2021-08-13