The Google Static Maps API returns an image (either GIF, PNG or JPEG) in response to an HTTP request via a URL. For each request, you can specify the location of the map, the size of the image, the zoom level, the type of map, and the placement of optional markers at locations on the map. You can additionally label your markers using alphanumeric characters.
In this post I decided to write a function that makes generating a Google Static Maps image with a marker as easy as possible. Here is my version:
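A minimal sketch of such a helper, assuming the standard Static Maps endpoint and parameter names (`center`, `zoom`, `size`, `maptype`, `markers`, `key`); `YOUR_API_KEY` and the function name `static_map_url` are placeholders of mine, not from the original post:

```python
from urllib.parse import urlencode

BASE_URL = "https://maps.googleapis.com/maps/api/staticmap"

def static_map_url(lat, lon, zoom=15, size="600x400", maptype="roadmap",
                   marker_label="A", api_key="YOUR_API_KEY"):
    """Build a Google Static Maps URL with one labeled marker.

    The marker label should be a single uppercase alphanumeric character,
    as the Static Maps API requires.
    """
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,
        "size": size,
        "maptype": maptype,
        "markers": f"label:{marker_label}|{lat},{lon}",
        "key": api_key,
    }
    return f"{BASE_URL}?{urlencode(params)}"
```

Requesting the returned URL (for example with `urllib.request` or `requests`) yields the map image directly.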
Cropping result for the corner points [(444, 203), (623, 243), (691, 177), (581, 26)]
As you all know, cropping an image can sometimes be challenging. A few days ago I ran into issues cropping part of one of my training images: the region wasn't rectangular. Here is my solution to cropping a non-rectangular region. I hope it will be useful for you as well 🙂
In this post I am going to solve the problem of finding simple shapes such as triangles and squares (or rectangles) in an image. For simple shapes like these I normally use this procedure:
1. Find contours in the image (the image should be binary).
2. Approximate each contour using the approxPolyDP function.
3. Check the number of elements in each approximated contour to recognize the shape. For example, a triangle will have 3 vertices; for a square or rectangle, it has to meet the following conditions:
* It is convex.
* It has 4 vertices.
* All angles are ~90 degrees.
4. Assign a color to each shape class, run the code on your test image, check each contour's vertex count, and fill it with the corresponding color.
Assumptions: shapes do not overlap and all of them are solid (meaning there are no white pixels inside a shape; all shapes are black). There can be multiple shapes in the image, they can be rotated by any arbitrary angle, and they can be of any size. Important: triangles are non-obtuse!
Distance of circle from camera
A few days ago I was talking with my friends, and one of them asked me whether I could write a program to measure the distance of an object to the camera, so I told myself: why not write a post about it on my blog? I got the idea for this code from Adrian's blog. You can find the distance-to-camera code at the end of this post.
In order to determine the distance from our camera to a known object or marker, I am going to utilize triangle similarity.
The triangle similarity goes something like this: let's say I have a marker or object with a known width W. I place this marker some distance D from my camera, take a picture of it, and then measure its apparent width in pixels P. This allows me to derive the perceived focal length F of my camera:
F = (P x D) / W
For example, I place a 21 x 29cm piece of paper (vertically; W = 21) D = 20 cm in front of my camera and take a photo. When I measure the width of the piece of paper in the image, I notice that the perceived width of the paper is P = 133 pixels.
My focal length F is then:
F = (133px x 20cm) / 21cm ≈ 126.67
As I move my camera closer to or farther from the object/marker, I can apply the triangle similarity to determine the distance of the object to the camera:
D’= (W x F) / P
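The two formulas above are just arithmetic; here is a small sketch of them (the function names are mine), using the calibration numbers from the example:

```python
def focal_length_px(pixel_width, distance, real_width):
    """Perceived focal length from a calibration shot: F = (P * D) / W."""
    return (pixel_width * distance) / real_width

def distance_to_camera(real_width, focal_length, pixel_width):
    """Distance to the marker in a new frame: D' = (W * F) / P."""
    return (real_width * focal_length) / pixel_width

# Calibration from the example: a 21 cm wide sheet of paper, 20 cm from
# the camera, appears 133 px wide in the image.
F = focal_length_px(133, 20, 21)        # ~126.67
```

Once F is known, `distance_to_camera(21, F, p)` gives the distance for any new apparent width p of the same sheet of paper.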
Background subtraction is a major preprocessing step in many vision-based applications. For example, consider cases like a visitor counter, where a static camera counts the number of visitors entering or leaving a room, or a traffic camera extracting information about vehicles. In all these cases, you first need to extract the person or vehicles alone; technically, you need to extract the moving foreground from the static background.
BackgroundSubtractorMOG: a Gaussian Mixture-based Background/Foreground Segmentation Algorithm, introduced in the paper “An improved adaptive background mixture model for real-time tracking with shadow detection” by P. KadewTraKuPong and R. Bowden in 2001. It models each background pixel by a mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the proportions of time that those colours stay in the scene: the probable background colours are the ones that stay longer and are more static.
While coding, we need to create a background subtractor object using the function cv2.createBackgroundSubtractorMOG(). It has some optional parameters, such as the length of the history, the number of Gaussian mixtures, and the threshold, all of which are set to sensible default values. Then, inside the video loop, use the backgroundsubtractor.apply() method (fgbg.apply() in my code) to get the foreground mask. (Read more about this function here.)