A2-JeffDonahue

From CS294-69 Image Manipulation and Computational Photography Fa11


I implemented the algorithm from the paper "Domain Transform for Edge-Aware Image and Video Processing" [1].

The paper describes a method of transforming an image, viewed as a signal in R^5 (x, y, r, g, b), into a lower-dimensional domain that preserves both the spatial and the color distances between pixels, so that filtering in this domain respects edges. It then presents three filters that operate in this transformed domain: Normalized Convolution, Interpolated Convolution, and Recursive Filtering. I implemented the Normalized Convolution filter.
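In discrete form, the transform of a single row is just a cumulative sum of 1 + (sigma_s/sigma_r) times the summed absolute channel differences between adjacent pixels. A minimal sketch of that step (my actual implementation was in MATLAB; this Python/NumPy port and its function name are my own):

```python
import numpy as np

def domain_transform(row, sigma_s=30.0, sigma_r=0.57):
    """Discrete 1D domain transform ct(u) for one image row.

    row: (W, C) float array, channel values in [0, 1].
    Returns a monotonically increasing (W,) array of domain coordinates.
    """
    # Summed absolute channel derivatives sum_k |I'_k(x)| between
    # adjacent pixels (there is no derivative at the first pixel).
    dI = np.abs(np.diff(row, axis=0)).sum(axis=1)        # (W-1,)
    # Integrand: 1 + (sigma_s / sigma_r) * sum_k |I'_k(x)|.
    w = 1.0 + (sigma_s / sigma_r) * dI
    # Integrate from 0 to the pixel index: a cumulative sum, ct(0) = 0.
    return np.concatenate(([0.0], np.cumsum(w)))
```

On a flat region the domain grows by 1 per pixel; across a sharp edge it jumps by roughly sigma_s/sigma_r per unit of color change, which is why the transformed domain rises fastest at edges.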

While the main purpose of the paper is fast edge-aware image/video processing, my MATLAB implementation was unfortunately nowhere near as fast as the authors': for images of the size shown in my results below, 3 iterations of filtering took between 30 seconds and 1 minute, whereas the paper reports timings on the order of milliseconds even for large images.

Below are some sample results:


Austin skyline

The input was this photograph of the Austin skyline:

Austinoriginal.jpg

The output is this more artistic version (parameters: sigma_s = 30, sigma_r = 0.57):

Austinblurred.jpg

Note that while the image is blurrier, all the well-defined edges (such as the outlines of the buildings) are preserved nicely.

To show the domain transform, I took the 250th row of the original image (which, on the left, begins just above the power lines and passes through the sky and buildings), converted it to grayscale, and plotted it (green) next to the transformed domain (blue):

Domaintransform.png

Notice how the domain (blue) increases monotonically (since it is an integral from 0 to the pixel index of a positive integrand) and rises fastest at sharp edges, such as where the signal changes from sky to building or where many lit windows appear in the two centermost buildings. This is exactly what we want: the domain transform must maintain both the spatial and the color distances between pixels.
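Because the transformed domain is monotone, normalized convolution with a box kernel reduces to averaging each pixel with every neighbor whose domain coordinate lies within a fixed radius, and those window bounds can be found by binary search. A sketch of that filter step (this Python/NumPy port and its names are my own; my actual implementation was in MATLAB):

```python
import numpy as np

def normalized_convolution(row, ct, radius):
    """Normalized convolution of one image row over the transformed domain.

    row: (W, C) array; ct: monotone (W,) array of domain coordinates;
    radius: box-filter radius measured in the transformed domain.
    Each output pixel is the plain average of all pixels whose domain
    coordinate lies within `radius` of its own.
    """
    # Window bounds [lo, hi) per pixel, via binary search on the
    # monotone domain.
    lo = np.searchsorted(ct, ct - radius, side='left')
    hi = np.searchsorted(ct, ct + radius, side='right')
    # Prefix sums let each window average be computed in O(1).
    csum = np.vstack([np.zeros((1, row.shape[1])), np.cumsum(row, axis=0)])
    return (csum[hi] - csum[lo]) / (hi - lo)[:, None]
```

In the paper the box radius is derived from the filter's standard deviation sigma_H as sigma_H * sqrt(3); here I simply pass the radius in directly. Pixels across a strong edge sit far away in the transformed domain, so they fall outside the window and never blur together.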

Van Gogh's "Starry Night"

The input was this image of Van Gogh's "Starry Night":

Starrynightoriginal.jpg

The output is this more "impressionist" version (parameters: sigma_s = 40, sigma_r = 0.77):

Starrynightblurred.jpg

I also tried different parameters for "detail enhancement", though the effect in this case is subtle, most visible in the moon in the upper right (parameters: sigma_s = 15, sigma_r = 0.04):

Starrynightdetail.jpg

Fruit

The input was this image of some fruits:

Fruitoriginal.jpg

The output was this image, which looks somewhat paint-like and loses much of the "noise" of the original while keeping the sharp edges where one piece of fruit ends and another begins (parameters: sigma_s = 30, sigma_r = 0.57):

Fruitblurred.jpg

References

[1] Eduardo Gastal and Manuel Oliveira. Domain Transform for Edge-Aware Image and Video Processing. SIGGRAPH 2011.


