
OpenCV-Python: Calling a Trained Deep Learning Model for Common Object Detection

Date: 2020-09-10 07:58:22



There are plenty of tutorials online, but today I finally got this one working. It recognizes common objects such as people, cars, animals, and plants, and it is especially good at detecting people and cars.

A brief overview of the steps:

1. Paste the code below into a .py file in PyCharm (any editor will do).

2. Open a terminal (Win+R and cmd works, but I recommend PyCharm's built-in terminal) and enter the command following the argparse arguments defined in the Python code, like so:

Example: python deep_learning_object_detection.py -p MobileNetSSD_deploy.prototxt.txt -m MobileNetSSD_deploy.caffemodel -i "find (2).jpg"

最后一个"find (2).jpg"是文件路径,大家随意修改,有兴趣的朋友还可以把这儿改成摄像头读取。

3. Run it. Brute force works miracles!!!

The code is below; the trained model files will be made available in the downloads section later.

# USAGE
# python deep_learning_object_detection.py --image images/example_01.jpg \
#     --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
    (300, 300), 127.5)

# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
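As mentioned in step 2, the script can also be adapted to read from a camera instead of a single image. Here is a minimal sketch of that idea, assuming the same MobileNetSSD_deploy.prototxt.txt and MobileNetSSD_deploy.caffemodel files sit next to the script (adjust the file paths and the camera index for your setup); it reuses the same blobFromImage parameters and drawing logic as the listing above.

# Minimal webcam sketch (assumed paths; press q in the preview window to quit)
import numpy as np
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# these paths are assumptions -- point them at your own model files
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt.txt",
    "MobileNetSSD_deploy.caffemodel")

cap = cv2.VideoCapture(0)  # 0 = default camera; change the index if needed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    (h, w) = frame.shape[:2]
    # same preprocessing as the image script: 300x300 input, scale 1/127.5, mean 127.5
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843,
        (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    # draw every detection above a fixed 0.2 confidence threshold
    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.2:
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(frame, label, (startX, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

    cv2.imshow("Camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()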

An example of my detection results (all test images were downloaded from Baidu Images):
