While solving part of a problem I needed to find the transformation matrix between a rotated image and its original, so I told myself: why not write a blog post about it? In this post I will show how to transform a rotated image back to the original image. Let’s start:
Input images:
Detect features in both images and match the features:
Distance of circle from camera
Some days ago I was talking to my friends, and one of them asked whether I could write a program to measure the distance from an object to the camera, so I told myself: why not write a post about it on my blog? I got the idea for this code from Adrian’s blog. You can find the code for measuring the distance of an object to the camera at the end of this post.
In order to determine the distance from our camera to a known object or marker, I am going to utilize triangle similarity.
The triangle similarity goes something like this: Let’s say I have a marker or object with a known width W. Then I place this marker some distance D from my camera. I take a picture of my object using our camera and then measure the apparent width in pixels P. This allows me to derive the perceived focal length F of my camera:
F = (P x D) / W
For example, I place a 21 x 29 cm piece of paper (oriented vertically, so W = 21) at a distance D = 20 cm in front of my camera and take a photo. When I measure the width of the piece of paper in the image, I find that the perceived width of the paper is P = 133 pixels.
My focal length F is then:
F = (133px x 20cm) / 21cm = 126.67
As I continue to move my camera both closer and farther away from the object/marker, I can apply the triangle similarity to determine the distance of the object to the camera:
D’= (W x F) / P
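The two formulas above can be sketched as a pair of helper functions; the numbers reuse the paper example, and the function names are mine:

```python
def focal_length_px(perceived_width_px, distance_cm, real_width_cm):
    """Calibrate the perceived focal length: F = (P * D) / W."""
    return (perceived_width_px * distance_cm) / real_width_cm

def object_distance_cm(real_width_cm, focal_px, perceived_width_px):
    """Apply triangle similarity: D' = (W * F) / P."""
    return (real_width_cm * focal_px) / perceived_width_px

# Calibration: a 21 cm wide paper, placed 20 cm away, appears 133 px wide.
F = focal_length_px(133, 20, 21)       # ~126.67

# Later the same paper appears 66 px wide, so it is now ~40.3 cm away.
D = object_distance_cm(21, F, 66)
```

Note that the calibration and the measurement must use the same camera resolution, since F is expressed in pixels.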
Classical “shape from shading” theories assume that surfaces are perfectly Lambertian (so the radiance at the eye is directly proportional to the irradiance) and that each part of the surface is illuminated by the same source. The radiance received by the eye then depends only on the angle between the local surface normal and the net flux vector. Shape from shading is the problem of recovering the shape of a surface from this intensity variation. I prepared a presentation about “shape from shading” and decided to share it here as well 🙂 I love to learn new things and then teach (present) them to other people.
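Under these assumptions the shading model reduces to Lambert’s cosine law (my notation, not necessarily the presentation’s):

```latex
% Lambertian image brightness: it depends only on the angle \theta
% between the unit surface normal n and the light direction s.
I(x, y) = \rho \,\, \vec{n}(x, y) \cdot \vec{s} = \rho \cos\theta
```

where ρ is the surface albedo. Recovering the normal field n(x, y), and from it the surface, from the measured brightness I(x, y) is exactly the shape-from-shading problem.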
Recently I bought and assembled this car, and I wanted to install a wireless camera on top of it, but I didn’t have one. I read on Lifehacker that IP Webcam turns your Android phone into a wireless camera. So the general solution needs two parts: one to broadcast the data from the device, and another to read this data into MATLAB.
- Install the IP Webcam app from your phone’s Play Store.
- Open the app and tweak the settings (login/password, resolution, image quality); the resolution you choose will impact the speed!
- Scroll to the bottom and tap ‘Start Server’.
- In the camera preview window, the URL is shown at the bottom of the screen.
- Open MATLAB and use the code below to obtain a live preview window. Note that this fetches JPG files as discrete frames, which is probably not the fastest way; the app can also stream the video and/or audio in several other formats.
Background subtraction is a major preprocessing step in many vision-based applications. For example, consider a visitor counter where a static camera counts the number of visitors entering or leaving a room, or a traffic camera extracting information about vehicles. In all these cases, you first need to extract the person or vehicles alone; technically, you need to extract the moving foreground from the static background.
BackgroundSubtractorMOG: a Gaussian mixture-based background/foreground segmentation algorithm, introduced in the 2001 paper “An improved adaptive background mixture model for real-time tracking with shadow detection” by P. KadewTraKuPong and R. Bowden. It models each background pixel by a mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the proportions of time that those colours stay in the scene; the probable background colours are the ones that stay longer and are more static.
While coding, we need to create a background subtractor object using the function cv2.createBackgroundSubtractorMOG() (in newer OpenCV builds this lives in the opencv-contrib package as cv2.bgsegm.createBackgroundSubtractorMOG()). It has some optional parameters such as the length of the history, the number of Gaussian mixtures, and the threshold, all set to sensible defaults. Then, inside the video loop, use the backgroundsubtractor.apply() method (in my code, fgbg.apply()) to get the foreground mask. (Read more about this function here.)