Background subtraction is a major preprocessing step in many vision-based applications. For example, consider a visitor counter where a static camera counts the number of visitors entering or leaving a room, or a traffic camera extracting information about vehicles. In all these cases, you first need to extract the person or vehicle alone; technically, you need to extract the moving foreground from the static background.
BackgroundSubtractorMOG: It is a Gaussian Mixture-based Background/Foreground Segmentation Algorithm, introduced in the paper "An improved adaptive background mixture model for real-time tracking with shadow detection" by P. KaewTraKulPong and R. Bowden in 2001. It models each background pixel by a mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the proportions of time that those colours stay in the scene: the probable background colours are the ones which stay longer and are more static.
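To make the per-pixel decision concrete, here is a toy NumPy sketch of the mixture test for a single grey-value pixel. The weights, means, and standard deviations below are made-up illustrative values, not parameters the algorithm has actually learned; the real algorithm estimates them online for every pixel.

```python
import numpy as np

# Illustrative (made-up) mixture parameters for one pixel: K = 3 Gaussians.
K = 3
weights = np.array([0.7, 0.2, 0.1])       # time proportion of each colour mode
means   = np.array([120.0, 200.0, 60.0])  # mean grey value of each Gaussian
stds    = np.array([5.0, 10.0, 15.0])     # spread of each Gaussian

def is_background(pixel, T=0.8, match_sigma=2.5):
    """A pixel value is background if it matches one of the high-weight
    Gaussians that together account for at least T of the total weight."""
    order = np.argsort(weights)[::-1]              # most probable modes first
    cum = np.cumsum(weights[order])
    bg_modes = order[:np.searchsorted(cum, T) + 1] # modes covering weight T
    return any(abs(pixel - means[m]) < match_sigma * stds[m] for m in bg_modes)

print(is_background(118))  # close to the dominant mode -> True (background)
print(is_background(250))  # matches no background mode -> False (foreground)
```

A value near a long-lived, stable mode is classified as background; anything else becomes foreground, and the mixture parameters are then updated to absorb gradual scene changes.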
While coding, we need to create a background subtractor object (in OpenCV 2.4, the constructor is cv2.BackgroundSubtractorMOG(), as used in the code below). It takes some optional parameters such as the length of the history, the number of Gaussian mixtures, and the noise threshold, all of which are set to sensible default values. Then, inside the video loop, call its apply() method (fgbg.apply() in my code) on each frame to get the foreground mask.
import cv2
import cv2.cv as cv
import numpy as np

capture = cv2.VideoCapture("class.avi")
size = (int(capture.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)),
        int(capture.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.cv.FOURCC(*"DIB ")
# isColor=False because apply() returns a single-channel mask
video = cv2.VideoWriter('output.avi', fourcc, 30, size, isColor=False)
fgbg = cv2.BackgroundSubtractorMOG()

while True:
    ret, img = capture.read()
    if ret:
        fgmask = fgbg.apply(img)          # foreground mask for this frame
        video.write(fgmask)
        cv2.imshow('foreground', fgmask)
        if cv2.waitKey(30) != -1:         # any key press stops the loop
            break
    else:
        break                             # end of video

capture.release()
video.release()
cv2.destroyAllWindows()
print('Done!')