Many of our space programs rely on customized image processing to produce the intended product.  As such, we have developed expertise in applying advanced image processing methods to achieve, and often surpass, mission goals.  Here are a few examples:

 

Blind Deconvolution

During our support of a sparse-aperture imaging program, a method was needed to remove complicated blur functions from video sequences.  Calculated and/or measured blur functions (also called point spread functions, or PSFs) were insufficient to produce an end product without large artifacts, and the measurement process was very time consuming.  Existing blind deconvolution methods (which find the PSF without measurement or modeling) required large amounts of computation and constrained the types of blur that could be removed.  We found a solution in the Fourier domain.  Dr. Caron developed an algorithm, SeDDaRA (Caron, J. N., et al., Optics Letters 26.15 (2001): 1164-1166), that extracts the PSF by comparing the spatial frequencies of the blurred image with those of a similar, but unblurred, image.  A pseudo-inverse filter then removes the PSF from the image, producing a clear, artifact-free result in a few seconds of computation.

 

Image of Saturn taken by the Hubble Telescope before the optics were corrected.  This image is a standard test for blind deconvolution algorithms.

Image after SeDDaRA blind deconvolution application.  In addition to the improvement in the planet, several features in the background can now be seen.

 

An image of the Mars 'Happy Face' or Galle crater as taken by the ESA Mars Express.  A SeDDaRA deconvolution of the Mars 'Happy Face'.
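The frequency-domain idea can be sketched in a few lines.  This is a minimal illustration only, not the published SeDDaRA algorithm: the direct magnitude ratio, the exponent `alpha`, and the regularization `eps` are simplifying assumptions of ours.

```python
import numpy as np

def seddara_sketch(blurred, reference, alpha=0.5, eps=1e-3):
    """Rough sketch of frequency-domain blind deconvolution.

    blurred:   2-D array, the degraded image
    reference: 2-D array, a similar but sharp image (same shape)
    alpha and eps are illustrative tuning parameters, not published values.
    """
    G = np.fft.fft2(blurred)
    R = np.fft.fft2(reference)
    # Estimate the blur's transfer-function magnitude by comparing the
    # spatial-frequency content of the blurred and reference images.
    ratio = np.abs(G) / (np.abs(R) + eps)
    H = np.clip(ratio, eps, None) ** alpha
    H /= H.max()                        # normalize the peak gain to 1
    # A regularized pseudo-inverse filter removes the estimated blur.
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

In practice the reference spectrum would be smoothed and the regularization tuned; the point is that the entire estimate costs only a few FFTs.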

 

Image Registration

Information that can be extracted from an image is limited by the noise floor, bit depth, and resolution.  These limitations can be diminished by combining multiple images and taking the average, which reduces the noise floor and effectively increases the bit depth.  With video imaging platforms, the camera is often moving with respect to the scene, so the images must first be aligned.  Many programs offer iterative image alignment, essentially moving one image around another until the error is minimized, which requires considerable computation time.  We prefer to once again look at the frequency domain to derive the image differences, using a method called phase correlation.  This method has been around for decades but is often overlooked as a means of image registration.  We have used this method to great effect, achieving 1/17th of a pixel in accuracy, to find translational, rotational, scale, and perspective differences between images. The images below were taken from a sequence of aerial images.  The right image is a later frame that was adjusted for translation, rotation, scale, and perspective.

One frame of an aerial image sequence.  One frame of the sequence after registration.
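The translational core of phase correlation can be illustrated as follows.  This is a sketch only: recovering rotation, scale, and perspective requires additional steps (e.g., log-polar resampling), and the sub-pixel peak fitting that yields the accuracy quoted above is omitted.  The function name and wrap-around handling are ours.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return the integer (row, col) shift d such that b is a rolled by d.

    Sub-pixel refinement (fitting the correlation peak) is omitted.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    # Cross-power spectrum: normalizing away the magnitudes leaves a
    # pure phase ramp whose slope encodes the translation.
    cross = B * np.conj(A)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifft2(cross))
    # The correlation surface peaks at the shift.
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Shifts larger than half the image wrap around to negative values.
    size = np.array(a.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak
```

Unlike iterative alignment, the cost here is fixed at three FFTs regardless of the size of the displacement.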

 

Flat-Field Extraction

A flat-field correction is used to remove pixel-to-pixel gain variations in an image (Caron, James N., et al., Review of Scientific Instruments 87.6 (2016): 063710).  Typically, one takes a picture of a completely white scene, normalizes the image, and divides it out of the target image.  There are some situations, such as space-borne imaging, where this cannot be done effectively.  We developed a method to extract the same information from a video sequence of a non-uniform scene.  There is an extensive explanation in our paper, but in short, we derive an estimate of the scene from the sequence.  The scene is then removed from the sequence, and provided that nothing in the scene moves, the result is a series of non-registered flat-fields.  The sequence of flat-fields is then re-aligned and averaged.

 

 

The image above (left) simulates several different types of pixel-to-pixel gain variations (gradients, striping, fingerprint, etc.).  It was embedded in a series of images to demonstrate which artifacts can be removed using the flat-field extraction technique.  The image on the right shows a flat-field correction image extracted from the video sequence.  It demonstrates that small-scale artifacts can be accurately recovered; larger artifacts are apparent, but not as prominent.
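Under idealized assumptions, the idea can be sketched as follows.  This is our simplified illustration, not the published algorithm: it assumes purely translational scene motion that is already known (in practice it would come from registration), and a multiplicative fixed-pattern gain.  With the division done in the sensor frame, the per-frame flat-field estimates come out already aligned, so the re-alignment step collapses to a plain average.

```python
import numpy as np

def extract_flat_field(frames, shifts):
    """Sketch of flat-field extraction from a panning sequence.

    frames: 2-D arrays, each modeled as flat * (scene shifted by shifts[k])
    shifts: integer (row, col) scene shifts, assumed known here.
    """
    # Estimate the scene: undo each frame's motion and average, which
    # blurs out the (now misaligned) fixed-pattern gain variations.
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    scene = np.mean(aligned, axis=0)
    # Divide the moving scene back out of each frame; what remains is a
    # stack of flat-field estimates fixed to the sensor.
    flats = [f / (np.roll(scene, (dy, dx), axis=(0, 1)) + 1e-12)
             for f, (dy, dx) in zip(frames, shifts)]
    flat = np.mean(flats, axis=0)
    return flat / flat.mean()           # normalize the mean gain to 1
```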

 

Multi-Spectral Feature Extraction

We are developing methods to extract objects from images, based on object size and spectral content, for the purpose of object counting. Our goal is to apply techniques from biomedical imaging to aerial imaging.  This combines our expertise in imaging with our advanced image processing methods to improve upon the current state of the art.

 

An aerial image of a field of roses.  The roses have been isolated from the rest of the image (shown in grayscale) based on the size and color of the objects.
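A toy illustration of size- and color-based isolation is sketched below.  The thresholds, function name, and the simple flood-fill labeling are our own illustrative choices, not the method under development.

```python
import numpy as np

def isolate_by_color_and_size(rgb, target, tol=0.15, min_size=20):
    """Keep objects whose color is near `target` and whose pixel count
    exceeds `min_size`; render everything else in grayscale.

    rgb: H x W x 3 float array in [0, 1]; target: length-3 color.
    """
    mask = np.linalg.norm(rgb - np.asarray(target), axis=-1) < tol
    # Simple 4-connected component labeling via flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        labels[seed] = current
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    # Apply the size criterion, then composite: color where kept,
    # grayscale elsewhere.
    kept_ids = [i for i in range(1, current + 1)
                if (labels == i).sum() >= min_size]
    keep = np.isin(labels, kept_ids)
    gray = rgb.mean(axis=-1, keepdims=True).repeat(3, axis=-1)
    return np.where(keep[..., None], rgb, gray), keep
```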

 

Super-sampling

Super-sampling (sometimes referred to as super-resolution) is a process that achieves higher resolution of an image than can be obtained with the imaging system (Caron, James N., Applied Optics 59.23 (2020): 7066-7073). It is often achieved by aligning a sequence of images to a larger pixel scale at sub-pixel accuracy, and then averaging and sharpening the result.  We combined our techniques to achieve a superior result with less computation.  In our method, images are aligned to pixel accuracy and combined.  This produces a single larger image, but with an unknown motion blur.  SeDDaRA is used to identify and remove the motion blur, resulting in a super-sampled image.

A close-up of the sun, from a sequence of images captured by NASA's Solar Dynamics Observatory.  The super-sampled result of aligning, combining, and deblurring the SDO images.
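The combine step can be sketched as below.  This is our toy illustration under stated assumptions: pixel-replication upsampling, per-frame shifts assumed known, and nearest fine-grid placement; the blind-deconvolution step that removes the residual motion blur is omitted.

```python
import numpy as np

def combine_for_supersampling(frames, shifts, factor=2):
    """Place pixel-aligned frames onto a grid `factor` times finer.

    frames: equally sized 2-D arrays; shifts: per-frame (row, col) motion
    in original pixels.  The averaged result is larger but still carries
    a residual sub-pixel motion blur, removed elsewhere by deconvolution.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    for f, (dy, dx) in zip(frames, shifts):
        # Upsample by pixel replication, then undo the frame's motion
        # to the nearest fine-grid pixel.
        up = np.kron(f, np.ones((factor, factor)))
        acc += np.roll(up, (-int(round(dy * factor)),
                            -int(round(dx * factor))), axis=(0, 1))
    return acc / len(frames)
```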