Abstract: Deploying a piglet object detection model on edge devices is an important basis for the fine-grained management of piglets during lactation. Recognizing suckling piglets in complex environments is a difficult task, and deep learning methods are usually used to solve this problem. However, deep-learning-based piglet object detection models often require substantial computing power, which makes them difficult to deploy in the field. To solve these problems, an object detection model for suckling piglets deployable on embedded terminals was proposed, making the deployment of the piglet object detection system more flexible. A database of 14 000 images of suckling piglets was established and divided into training, test, and validation sets in an 8:1:1 ratio. The YOLO v5s, YOLO v5m, YOLO v5l, and YOLO v5x deep learning networks were trained to extract the characteristics of suckling piglets, and the corresponding piglet detection models were established to perform object detection on suckling piglets. The Conv, BN, and activation function layers, together with parts of the network sharing the same tensors and operations, were fused, and the Concat layer was removed to quantize the network structure and reduce the model's computational demand at run time. The modified models were run on a Jetson Nano embedded device, realizing the deployment of the piglet object detection model on an embedded terminal. The experimental results showed that the average inference times of the optimized YOLO v5s, YOLO v5m, YOLO v5l, and YOLO v5x models were 65 ms, 170 ms, 315 ms, and 560 ms, respectively, while the detection accuracies dropped slightly to 96.8%, 97.0%, 97.0%, and 96.6%, respectively. The optimized YOLO v5s model can perform real-time detection of suckling piglets on embedded devices, which lays a foundation for an edge computing model of piglet detection and provides technical support for precision breeding.
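The Conv-BN fusion mentioned in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual deployment pipeline: the helper name `fuse_conv_bn` is hypothetical, PyTorch is assumed as the framework, and only the Conv+BN fold is shown (the activation can then be applied directly to the fused convolution's output).

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d layer into the preceding Conv2d for inference.

    BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta, which is
    affine, so it can be absorbed into the convolution's weight and bias.
    (Hypothetical helper; illustrative only.)
    """
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels,
        kernel_size=conv.kernel_size, stride=conv.stride,
        padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    # Per-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    # Scale the conv weights channel-wise.
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    # Fold the BN shift into the bias.
    conv_bias = (conv.bias.data if conv.bias is not None
                 else torch.zeros(conv.out_channels))
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

At inference time the fused layer produces the same output as `bn(conv(x))` in eval mode, but with one layer fewer to execute and one fewer intermediate tensor, which is what reduces the computational demand on a device such as the Jetson Nano.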