How To Find Distance Of Object From Camera
Last updated on July 8, 2021.
A couple of days ago, Cameron, a PyImageSearch reader, emailed in and asked about methods to detect the distance from a camera to an object/marker in an image. He had spent some time researching, but hadn't found an implementation.
I knew exactly how Cameron felt. Years ago I was working on a small project to analyze the movement of a baseball as it left the pitcher's hand and headed for home plate.
Using motion analysis and trajectory-based tracking I was able to detect/estimate the ball location in the frame of the video. And since a baseball has a known size, I was also able to estimate the distance to home plate.
It was an interesting project to work on, although the system was not as accurate as I wanted it to be — the "motion blur" of the ball moving so fast made it difficult to obtain highly accurate estimates.
My project was definitely an "outlier" situation, but in general, determining the distance from a camera to a marker is actually a very well studied problem in the computer vision/image processing space. You can find techniques that are very straightforward and succinct like the triangle similarity. And you can find methods that are complex (albeit more accurate) using the intrinsic parameters of the camera model.
In this blog post I'll show you how Cameron and I came up with a solution to compute the distance from our camera to a known object or marker.
Definitely give this post a read — you won't want to miss it!
- Update July 2021: Added three new sections. The first section covers improving distance measurement with camera calibration. The second section discusses stereo vision and depth cameras to measure distance. And the final section briefly mentions how LiDAR can be used with camera data to provide highly accurate distance measurements.
Triangle Similarity for Object/Marker to Camera Distance
In order to determine the distance from our camera to a known object or marker, we are going to use triangle similarity.
The triangle similarity goes something like this: Let's say we have a marker or object with a known width W. We then place this marker some distance D from our camera. We take a picture of our object using our camera and then measure the apparent width in pixels P. This allows us to derive the perceived focal length F of our camera:
F = (P x D) / W
For example, let's say I place a standard piece of 8.5 x 11in paper (horizontally; W = 11) D = 24 inches in front of my camera and take a photo. When I measure the width of the piece of paper in the image, I find that the perceived width of the paper is P = 248 pixels.
My focal length F is then:
F = (248px x 24in) / 11in = 543.45
As I continue to move my camera both closer and farther away from the object/marker, I can apply the triangle similarity to determine the distance of the object to the camera:
D' = (W x F) / P
Again, to make this more concrete, let's say I move my camera 3 ft (or 36 inches) away from my marker and take a photo of the same piece of paper. Through automatic image processing I am able to determine that the perceived width of the piece of paper is now 170 pixels. Plugging this into the equation we now get:
D' = (11in x 543.45) / 170 = 35in
Or roughly 36 inches, which is three feet.
Note: When I captured the photos for this example my tape measure had a bit of slack in it and thus the results are off by roughly 1 inch. Furthermore, I also captured the photos hastily and not 100% on top of the foot markers on the tape measure, which added to the 1 inch error. That all said, the triangle similarity still holds and you can use this method to compute the distance from an object or marker to your camera quite easily.
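As a quick sanity check, here is the arithmetic from the example above expressed as a few lines of Python (the pixel widths of 248 and 170 are simply the measurements quoted earlier):

# calibrate the perceived focal length, then estimate a new distance
KNOWN_WIDTH = 11.0       # marker width in inches (8.5 x 11in paper, landscape)
KNOWN_DISTANCE = 24.0    # distance in inches at calibration time
P_CALIBRATION = 248.0    # perceived width in pixels at 24 inches

# F = (P x D) / W
focalLength = (P_CALIBRATION * KNOWN_DISTANCE) / KNOWN_WIDTH

# D' = (W x F) / P, using the new perceived width of 170 pixels
P_NEW = 170.0
distance = (KNOWN_WIDTH * focalLength) / P_NEW
print(distance)   # roughly 35 inches, i.e. about 3 feet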
Make sense now?
Awesome. Let's move into some code to see how finding the distance from your camera to an object or marker is done using Python, OpenCV, and image processing and computer vision techniques.
Finding the distance from your camera to an object/marker using Python and OpenCV
Let's go ahead and get this project started. Open up a new file, name it distance_to_camera.py, and we'll get to work:
# import the necessary packages
from imutils import paths
import numpy as np
import imutils
import cv2

def find_marker(image):
    # convert the image to grayscale, blur it, and detect edges
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 35, 125)

    # find the contours in the edged image and keep the largest one;
    # we'll assume that this is our piece of paper in the image
    cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)

    # compute the bounding box of the paper region and return it
    return cv2.minAreaRect(c)
The first thing we'll do is import our necessary packages (Lines 2-5). We'll use paths from imutils to load the available images in a directory. We'll use NumPy for numerical processing and cv2 for our OpenCV bindings.
From there we define our find_marker function. This function accepts a single argument, image, and is meant to be utilized to find the object we want to compute the distance to.
In this example we are using a standard 8.5 x 11 inch piece of paper as our marker.
Our first task is to now find this piece of paper in the image.
To do this, we'll convert the image to grayscale, blur it slightly to remove high frequency noise, and apply edge detection on Lines 9-11.
After applying these steps our image should look something like this:
As you can see, the edges of our marker (the piece of paper) have clearly been revealed. Now all we need to do is find the contour (i.e. outline) that represents the piece of paper.
We find our marker on Lines 15 and 16 by using the cv2.findContours function (taking care to handle different OpenCV versions) and then determining the contour with the largest area on Line 17.
We are making the assumption that the contour with the largest area is our piece of paper. This assumption works for this particular example, but in reality finding the marker in an image is highly application specific.
In our example, simple edge detection and finding the largest contour works well. We could also make this example more robust by applying contour approximation, discarding any contours that do not have 4 points (since a piece of paper is a rectangle and thus has 4 points), and then finding the largest four-point contour.
Note: More on this methodology can be found in this post on building a kick-ass mobile document scanner.
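If you want to experiment with that more robust approach, here is a minimal sketch of what a four-point contour filter could look like (the helper name find_paper_contour is just for illustration and is not part of the downloadable code for this post):

def find_paper_contour(edged):
    # find all contours in the edge map and sort them by area, largest first
    cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

    # keep the largest contour that approximates to four points,
    # i.e. one that looks like a rectangular piece of paper
    for c in cnts:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:
            return cv2.minAreaRect(approx)

    # no four-point contour was found
    return None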
Another alternative to finding markers in images is to utilize color, such that the color of the marker is substantially different from the rest of the scene in the image. You could also apply methods like keypoint detection, local invariant descriptors, and keypoint matching to find markers; however, these approaches are outside the scope of this article and are, again, highly application specific.
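To give a flavor of the color-based idea, a simple HSV threshold could replace the edge detection step. The sketch below assumes a marker with a distinctive color; the lower/upper HSV bounds are placeholder values you would tune for your own marker:

def find_marker_by_color(image, lowerHSV, upperHSV):
    # threshold in the HSV color space so (ideally) only the marker remains
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lowerHSV, upperHSV)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # grab the largest contour in the mask and return its rotated bounding box
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    if len(cnts) == 0:
        return None
    c = max(cnts, key=cv2.contourArea)
    return cv2.minAreaRect(c)

For example, find_marker_by_color(image, (100, 150, 50), (130, 255, 255)) would roughly target a blue marker.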
Anyway, now that we have the contour that corresponds to our marker, we return the bounding box, which contains the (x, y)-coordinates and width and height of the box (in pixels), to the calling function on Line 20.
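For reference, cv2.minAreaRect returns the rotated bounding box as a ((center_x, center_y), (width, height), angle) tuple, which is why the perceived width in pixels is accessed as marker[1][0] later in this post. A tiny illustration with made-up values:

marker = ((320.0, 240.0), (248.0, 192.0), -2.5)   # example minAreaRect output
((cX, cY), (w, h), angle) = marker
perceivedWidth = w   # this is the marker[1][0] value used below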
Let's also quickly define a function that computes the distance to an object using the triangle similarity detailed above:
def distance_to_camera(knownWidth, focalLength, perWidth):
    # compute and return the distance from the marker to the camera
    return (knownWidth * focalLength) / perWidth
This function takes the knownWidth of the marker, a computed focalLength, and the perceived width of an object in an image (measured in pixels), and applies the triangle similarity detailed above to compute the actual distance to the object.
To see how we utilize these functions, continue reading:
# initialize the known distance from the camera to the object, which
# in this case is 24 inches
KNOWN_DISTANCE = 24.0

# initialize the known object width, which in this case, the piece of
# paper is 11 inches wide
KNOWN_WIDTH = 11.0

# load the first image that contains an object that is KNOWN TO BE 2 feet
# from our camera, then find the paper marker in the image, and initialize
# the focal length
image = cv2.imread("images/2ft.png")
marker = find_marker(image)
focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH
The first step to finding the distance to an object or marker in an image is to calibrate and compute the focal length. To do this, we need to know:
- The distance of the camera from an object.
- The width (in units such as inches, meters, etc.) of this object. Note: The height could also be utilized, but this example simply uses the width.
Let's also take a second and mention that what we are doing is not true camera calibration. True camera calibration involves the intrinsic parameters of the camera, which you can read more on here.
On Line 28 we initialize our known KNOWN_DISTANCE from the camera to our object to be 24 inches. And on Line 32 we initialize the KNOWN_WIDTH of the object to be 11 inches (i.e. a standard 8.5 x 11 inch piece of paper laid out horizontally).
The next step is important: it's our simple calibration step.
We load the first image off disk on Line 37 — we'll be using this image as our calibration image.
Once the image is loaded, we find the piece of paper in the image on Line 38, and then compute our focalLength on Line 39 using the triangle similarity.
Now that we have "calibrated" our system and have the focalLength, we can compute the distance from our camera to our marker in subsequent images quite easily.
Let's see how this is done:
# loop over the images
for imagePath in sorted(paths.list_images("images")):
    # load the image, find the marker in the image, then compute the
    # distance to the marker from the camera
    image = cv2.imread(imagePath)
    marker = find_marker(image)
    inches = distance_to_camera(KNOWN_WIDTH, focalLength, marker[1][0])

    # draw a bounding box around the image and display it
    box = cv2.cv.BoxPoints(marker) if imutils.is_cv2() else cv2.boxPoints(marker)
    box = np.int0(box)
    cv2.drawContours(image, [box], -1, (0, 255, 0), 2)
    cv2.putText(image, "%.2fft" % (inches / 12),
        (image.shape[1] - 200, image.shape[0] - 20), cv2.FONT_HERSHEY_SIMPLEX,
        2.0, (0, 255, 0), 3)
    cv2.imshow("image", image)
    cv2.waitKey(0)
We start looping over our image paths on Line 42.
Then, for each image in the list, we load the image off disk on Line 45, find the marker in the image on Line 46, and then compute the distance of the object to the camera on Line 47.
From there, we simply draw the bounding box around our marker and display the distance on Lines 50-57 (the boxPoints are calculated on Line 50, taking care to handle different OpenCV versions).
Results
To see our script in action, open up a terminal, navigate to your code directory, and execute the following command:
$ python distance_to_camera.py
If all goes well you should first see the results of 2ft.png, which is the image we use to "calibrate" our system and compute our initial focalLength:
From the above image we can see that our focal length is properly determined and the distance to the piece of paper is 2 feet, per the KNOWN_DISTANCE and KNOWN_WIDTH variables in the code.
Now that we have our focal length, we can compute the distance to our marker in subsequent images:
In the above example our camera is now approximately 3 feet from the marker.
Let's try moving back another foot:
Again, it's important to note that when I captured the photos for this example I did so hastily and left too much slack in the tape measure. Furthermore, I also did not ensure my camera was 100% lined up on the foot markers, so again, there is roughly 1 inch of error in these examples.
That all said, the triangle similarity approach detailed in this article will still work and allow you to find the distance from an object or marker in an image to your camera.
Improving distance measurement with camera calibration
In order to perform distance measurement, we first need to calibrate our system. In this post, we used a simple "pixels per metric" technique.
However, better accuracy can be obtained by performing a proper camera calibration, computing the extrinsic and intrinsic parameters:
- Extrinsic parameters are rotation and translation matrices used to convert something from the world frame to the camera frame
- Intrinsic parameters are the internal camera parameters, such as the focal length, used to convert that information into a pixel
The most common way is to perform a checkerboard camera calibration using OpenCV. Doing so will remove radial distortion and tangential distortion, both of which impact the output image, and therefore the output measurement of objects in the image. A minimal code sketch follows the resource links below.
Here are some resources to help you get started with camera calibration:
- Understanding Lens Distortion
- Camera Calibration using OpenCV
- Camera Calibration (official OpenCV documentation)
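If you just want a rough idea of what that calibration looks like in code, here is a minimal sketch using OpenCV's checkerboard routines; the 9x6 pattern size and the calibration_images/ directory are assumptions for illustration, and the linked tutorials cover the full details:

import glob
import cv2
import numpy as np

# 3D coordinates of the inner corners of a 9x6 checkerboard (on the z = 0 plane)
patternSize = (9, 6)
objp = np.zeros((patternSize[0] * patternSize[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:patternSize[0], 0:patternSize[1]].T.reshape(-1, 2)

objPoints, imgPoints = [], []
for path in glob.glob("calibration_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, patternSize)
    if found:
        objPoints.append(objp)
        imgPoints.append(corners)

# recover the intrinsic matrix, distortion coefficients, and per-image extrinsics
ret, K, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
    objPoints, imgPoints, gray.shape[::-1], None, None)
print(K)
print(distCoeffs)

Once you have the camera matrix and distortion coefficients, cv2.undistort can be applied to images before measuring objects in them.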
Stereo vision and depth cameras for distance measurement
As humans, we take having two eyes for granted. Even if we lost an eye in an accident we could still see and perceive the world around us.
However, with only one eye we lose important information — depth.
Depth perception gives us the perceived distance from where we stand to the object in front of us. In order to perceive depth, we need two eyes.
Most cameras, on the other hand, only have one eye, which is why it's so challenging to obtain depth information using a standard 2D camera (although you can use deep learning models to attempt to "learn depth" from a 2D image).
You can create a stereo vision system capable of computing depth information using two cameras, such as USB webcams (a minimal disparity-map sketch appears after the links below):
- Introduction to Epipolar Geometry and Stereo Vision
- Making A Low-Cost Stereo Camera Using OpenCV
- Depth perception using stereo camera (Python/C++)
Or, you could use specific hardware that has depth cameras built in, such as OpenCV's AI Kit (OAK-D).
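To give a flavor of what stereo depth looks like in OpenCV, here is a minimal block-matching sketch that computes a disparity map from an already rectified left/right image pair; the file names are placeholders, and a real system needs calibration and rectification first, as the resources above explain:

import cv2

# load a rectified stereo pair as grayscale (placeholder file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# block matching: disparity is inversely proportional to depth
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# normalize for display; closer objects appear brighter
dispVis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imshow("disparity", dispVis)
cv2.waitKey(0)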
LiDAR for depth data
For more accurate depth information you should consider using LiDAR. LiDAR uses light sensors to measure the distance between the sensor and any object(s) in front of it.
LiDAR is especially popular in self-driving cars where a camera is simply not enough.
We cover how to use LiDAR for self-driving car applications inside PyImageSearch University.
What's next? I recommend PyImageSearch University.
Course information:
35+ full classes • 39h 44m video • Last updated: Apr 2022
★★★★★ 4.84 (128 Ratings) • 13,800+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That's not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 35+ courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 35+ Certificates of Completion
- ✓ 39+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 450+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Click here to join PyImageSearch University
Summary
In this blog post we learned how to determine the distance from a known object in an image to our camera.
To accomplish this task we utilized the triangle similarity, which requires us to know two important parameters prior to applying our algorithm:
- The width (or height) in some distance measure, such as inches or meters, of the object we are using as a marker.
- The distance (in inches or meters) of the camera to the marker in step 1.
Computer vision and image processing algorithms can then be used to automatically determine the perceived width/height of the object in pixels, complete the triangle similarity, and give us our focal length.
Then, in subsequent images we simply need to find our marker/object and utilize the computed focal length to determine the distance to the object from the camera.
Source: https://pyimagesearch.com/2015/01/19/find-distance-camera-objectmarker-using-python-opencv/