
How to capture sharper images of moving objects


So you’d like to count or inspect moving objects?  Here are a few tips that might help you out.

You can work with a few parameters: camera model (its sensor size and sensitivity), object speed, lighting, lens aperture, and exposure time.  The first step is to determine which parameters are fixed and which are variable. For example, you might want to consider object speed as fixed, because you can’t modify it without affecting a lot of other processes.  

Although your strategy will vary depending on which parameters are fixed, this article will teach you a bit about how they’re related, which will help you figure out how to create the perfect setup.  

 

The exposure

Exposure time is the amount of time during which your camera sensor gathers light to form an image.  If you want to take a picture of a moving object, this is the main parameter you'll need to work with.  However, changing the exposure time affects the other parameters, so if you adjust one, you'll want to adjust the others as well.

For instance, in order to get a sharp image of a moving object, you might need to decrease the exposure time. But this will reduce the amount of light the camera takes in. If there isn’t enough light, you will have to adjust the lens aperture to compensate.
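This trade-off can be sketched numerically. The sketch below assumes the standard photographic relation that gathered light is proportional to exposure time divided by the square of the f-number; the function name is mine, for illustration only.

```python
import math

def compensated_f_number(f_number: float, exposure_ratio: float) -> float:
    """Return the f-number needed to keep the same total light when the
    exposure time is multiplied by exposure_ratio.

    Light gathered is proportional to exposure_time / f_number**2, so a
    shorter exposure calls for a proportionally smaller f-number squared.
    """
    return f_number * math.sqrt(exposure_ratio)

# Halving the exposure time (ratio 0.5) means opening up one full stop,
# e.g. from f/4 to roughly f/2.8:
print(round(compensated_f_number(4.0, 0.5), 1))
```

In other words, every halving of the exposure time costs you one stop of aperture (or, as discussed below, more light or a more sensitive sensor).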

Be careful: opening up the aperture (using a lower f-number) reduces the depth of field, which is the zone in which objects appear in focus.

 

[Figure: Changing the exposure time will affect the other parameters]

Aperture 

Here’s a quick way to understand why this happens. Try looking at something far away and slightly blurry (such as numbers that you can barely read), then bend your index finger to form a small circle.  Now close one eye and look at the far object through your little circle (with your open eye, of course!). The object should appear sharper: the smaller aperture increases the depth of field, so more objects appear in focus. Now back to normal, looking again with both eyes open: with a bigger aperture, the depth of field is diminished, and objects appear less in focus.

Of course, you want all your objects to be in focus.  Opening up the aperture (and thus decreasing the depth of field) might not be an issue if your camera is on top of a conveyor and all objects are exactly the same height. However, if you are inspecting objects of varying height, opening up the aperture is not the best option. Instead, you could add more lighting: either turn up the brightness of your external light sources, or bring them closer to the objects. If you need even more brightness, use flash lighting: light can be a lot more powerful if it’s only on for a fraction of a second rather than all the time.
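To see how strongly the f-number drives the depth of field, here is a rough sketch using the common thin-lens approximation DOF ≈ 2·N·c·s²/f², which holds when the subject distance s is much larger than the focal length f. The circle-of-confusion value and the lens parameters below are illustrative assumptions, not values from this article.

```python
def depth_of_field_mm(f_number: float, focal_length_mm: float,
                      subject_distance_mm: float,
                      circle_of_confusion_mm: float = 0.01) -> float:
    """Approximate total depth of field in mm, assuming the subject
    distance is much larger than the focal length:
    DOF ~ 2 * N * c * s^2 / f^2
    """
    return (2 * f_number * circle_of_confusion_mm
            * subject_distance_mm ** 2) / focal_length_mm ** 2

# Hypothetical 16 mm lens at a 500 mm working distance:
print(depth_of_field_mm(2.8, 16, 500))  # wide open: shallow depth of field
print(depth_of_field_mm(8.0, 16, 500))  # stopped down: much deeper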

If you haven’t chosen your camera yet, then you also have another option: selecting a camera with better light sensitivity.

 

Calculating the exposure time

So you understand the principle: decrease the exposure time, adjust the other parameters to compensate for the decreased lighting, and you’re good to go.  Now… how much exactly should you decrease the exposure time? To find the answer, all you need is your object speed (typically the conveyor belt speed); your pixel tolerance level (the maximum acceptable amount of motion blur, such as half a pixel); and your image resolution (pixels per mm).

Simply plug those values into the following formula:

Ideal exposure time = pixel tolerance level / (image resolution × object speed).

 

Example

Pixel tolerance level: 0.5 pixels

Image resolution: 640 pixels / 150 mm ≈ 4.27 pixels/mm

Object speed: 200 mm/sec

Ideal exposure time = 0.5 px / (4.27 px/mm × 200 mm/sec) ≈ 0.00059 sec = 0.59 ms
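The worked example above can be reproduced in a few lines. The function name is mine; the numbers are the ones from the example.

```python
def ideal_exposure_time_s(pixel_tolerance_px: float,
                          resolution_px_per_mm: float,
                          speed_mm_per_s: float) -> float:
    """Longest exposure for which a moving object smears across at most
    pixel_tolerance_px pixels on the sensor."""
    blur_allowed_mm = pixel_tolerance_px / resolution_px_per_mm
    return blur_allowed_mm / speed_mm_per_s

resolution = 640 / 150                      # ~4.27 px/mm
t = ideal_exposure_time_s(0.5, resolution, 200)
print(f"{t * 1000:.2f} ms")                 # ~0.59 ms
```

Note how the answer scales: doubling the conveyor speed, or doubling the resolution, halves the exposure time you can afford.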

Of course, if you can change another parameter to accomplish the same goal (e.g. reduce the conveyor speed so that a longer exposure still yields a sharp image), then this might be the way to go. Now that you know all the ways to capture a sharper image of a moving object—by decreasing exposure time while maintaining sufficient lighting—it’s up to you to decide which methods to use.
