
Python image processing with the cv2 module: object tracking with OpenCV

2022-02-02 01:20:05 Dai mubai

Time rewards effort; keep creating. This article is an entry in the 2021 year-end summary essay contest.

Preface

This article shows how to implement object tracking with OpenCV in Python. No more idle talk.

Let's get started happily~

Development environment

Python version: 3.6.4

Required modules:

the cv2 module;

as well as some built-in Python modules.

Environment setup

Install Python, add it to your environment variables, and use pip to install the required modules (the cv2 module is provided by the opencv-python package, installed with: pip install opencv-python).

Object tracking is the process of locating a moving target in a video.

It has many application scenarios in today's AI industry, such as surveillance and driver assistance.

Frame differencing

Object tracking can be achieved by computing the difference between video frames (that is, the difference between a background frame and each subsequent frame) and locating the regions that changed.

Code implementation

import cv2

#  Open the video file
video = cv2.VideoCapture('007.mp4')

#  Generate an elliptical structuring element for dilation
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
#  The background frame (captured from the first frame)
background = None

while True:
    #  Read the next frame of the video
    ret, frame = video.read()
    if not ret:
        #  Stop when the video ends or a frame cannot be read
        break

    #  Capture the background frame
    if background is None:
        #  Convert the first frame of the video to grayscale
        background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        #  Apply Gaussian blur to smooth the image
        background = cv2.GaussianBlur(background, (21, 21), 0)
        continue

    #  Convert the current frame to grayscale
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    #  Apply Gaussian blur to smooth the image
    gray_frame = cv2.GaussianBlur(gray_frame, (21, 21), 0)

    #  Absolute difference between the current frame and the background frame
    diff = cv2.absdiff(background, gray_frame)

    #  Threshold the difference image to get a black-and-white mask
    diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]

    #  Dilate the mask to fill small holes and reduce noise
    diff = cv2.dilate(diff, es, iterations=2)

    #  Find the target contours in the mask
    #  (findContours returns 3 values in OpenCV 3.x and 2 in OpenCV 4.x;
    #   taking the second-to-last value works for both)
    cnts = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    for c in cnts:
        #  Ignore small contours (noise)
        if cv2.contourArea(c) < 1500:
            continue
        #  Draw a bounding rectangle around the target
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x + 2, y + 2), (x + w, y + h), (0, 255, 0), 2)

    #  Show the detection video
    cv2.namedWindow('contours', 0)
    cv2.resizeWindow('contours', 600, 400)
    cv2.imshow('contours', frame)

    #  Show the difference video
    cv2.namedWindow('diff', 0)
    cv2.resizeWindow('diff', 600, 400)
    cv2.imshow('diff', diff)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break

#  Clean up
cv2.destroyAllWindows()
video.release()

Background subtractors

OpenCV provides BackgroundSubtractor classes that can segment a video into foreground and background.

Background detection can also be improved with machine learning.

Three background subtractors are commonly used, namely KNN, MOG2, and GMG, each computing the background segmentation with the corresponding algorithm (KNN and MOG2 ship with the main OpenCV module, while GMG requires the opencv-contrib build).

A BackgroundSubtractor compares frames with one another and stores previous frames, so its motion-analysis results improve over time.

It can also detect shadows, so shadow areas can be excluded from the detection results.

Code implementation

import cv2

#  Open the video file
video = cv2.VideoCapture('traffic.flv')
#  KNN background subtractor with shadow detection enabled
bs = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    #  Read the next frame of the video
    ret, frame = video.read()
    if not ret:
        #  Stop when the video ends or a frame cannot be read
        break
    #  Compute the foreground mask of the frame
    fgmask = bs.apply(frame)
    #  Threshold the mask: shadow pixels are marked 127, so thresholding
    #  at 244 keeps only true foreground pixels (255)
    th = cv2.threshold(fgmask.copy(), 244, 255, cv2.THRESH_BINARY)[1]
    #  Dilate the mask to reduce noise
    dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)), iterations=2)

    #  Find the target contours in the mask
    #  (findContours returns 3 values in OpenCV 3.x and 2 in OpenCV 4.x;
    #   taking the second-to-last value works for both)
    contours = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    for c in contours:
        if cv2.contourArea(c) > 1600:
            #  Draw a bounding rectangle around the target
            (x, y, w, h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)

    #  Show the foreground mask
    cv2.imshow('mog', fgmask)
    # cv2.imshow('thresh', th)
    #  Show the detection video
    cv2.imshow('detection', frame)
    if cv2.waitKey(30) & 0xff == ord('q'):
        break

video.release()
cv2.destroyAllWindows()


The result is as follows:

[Demo animation: image processing 1-1.gif]

Copyright notice
Author: Dai mubai. Please include a link to the original when reprinting, thank you.
https://en.pythonmana.com/2022/02/202202020120042945.html
