2015-04-28 EECS 556 abstracts / schedule, 4-6pm, 1005 EECS

Part I: Denoising

1  4:05  Image denoising using the higher order singular value decomposition
   Shweta Khushu, Tanya Verma

Image denoising is a widely researched topic in the history of image processing and continues to receive attention from researchers seeking to improve the current state of the art. In this project, we implement a patch-based method for denoising a grayscale image using the higher order singular value decomposition (HOSVD), which achieves near state-of-the-art performance. The technique collects patches similar to a reference patch, selected by a statistically motivated similarity criterion, into a 3D stack and performs HOSVD on the stack. The transform coefficients are manipulated using hard thresholding, and the HOSVD transform is inverted to reconstruct the 3D stack, thereby filtering all the individual patches. The procedure is repeated over all pixels in a sliding-window fashion, with averaging of hypotheses to produce the final filtered image. Additionally, we discuss the impact of the algorithm's various parameters on accuracy and computational efficiency. We also augment HOSVD denoising with a Wiener filter step, calling the result HOSVD2 denoising. The results from the HOSVD and HOSVD2 denoising algorithms are compared to other denoising algorithms in the literature, including NL-Means, BM3D, and LPG-PCA.

2  4:25  Texture conserving image denoising by gradient histogram preservation
   Ripudaman Singh Arora, David Hong, Alfredo Bravo Iniguez

Image denoising is the task of estimating an image from a noise-corrupted version. This is a fundamental challenge in image processing, with applications ranging from commercial photography to pre-processing for computer vision algorithms. Though many successful denoising algorithms have been developed, they tend to smooth fine textures and strong edges, deteriorating the image's visual quality.
To deal with this problem, some authors have recently proposed a novel denoising method that attempts to preserve textures by conserving the histogram of the image gradients. The proposed method consists of two steps. The first step estimates the gradient histogram of the original (noise-free) image from the gradient histogram of its noisy version. The second step denoises the image while forcing the gradient histogram of the denoised image to match the estimated gradient histogram. Two variants are also proposed to handle images that contain regions with distinct textures. Finally, the authors provide experimental results suggesting that the proposed method performs on par with various state-of-the-art algorithms in terms of PSNR and SSIM, while outperforming these methods in a qualitative comparison of texture. In this report, we reproduce a few of their experimental results with a more quantitative measurement of texture preservation and propose extensions to handle image restoration and non-Gaussian noise.

Part II: Deblurring

3  4:45  A general framework for regularized, similarity-based image restoration
   Yu Chen, Yumeng Shang

To deal with image restoration problems, A. Kheradmand and P. Milanfar developed an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. They proposed a new cost function, consisting of a data fidelity term and a regularization term derived from this definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. The algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed from the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations.
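The outer/inner iteration scheme just described can be sketched on a 1-D signal. This is a minimal illustration, not the authors' implementation: the similarity weights, the symmetric normalization standing in for the matrix-balancing step, and the parameter values are all simplifying assumptions.

```python
import numpy as np

def similarity_laplacian(x, sigma=0.1):
    """Normalized graph Laplacian from Gaussian similarity weights between samples.
    (Symmetric normalization is a simplified stand-in for matrix balancing.)"""
    W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    d = W.sum(axis=1)
    W_norm = W / np.sqrt(np.outer(d, d))
    return np.eye(len(x)) - W_norm

def conjugate_gradient(A, b, x0, n_iter=50, tol=1e-8):
    """Plain conjugate gradient for the symmetric positive definite system A x = b."""
    x, r = x0.copy(), b - A @ x0
    p, rs = r.copy(), r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def restore(y, H, eta=0.05, outer=3):
    """Outer iterations refresh the similarity weights from the current estimate;
    inner CG iterations minimize ||y - Hx||^2 + eta * x' L x."""
    x = y.copy()
    for _ in range(outer):
        L = similarity_laplacian(x)
        A = H.T @ H + eta * L
        x = conjugate_gradient(A, H.T @ y, x)
    return x
```

Because the weights shrink across large intensity differences, the Laplacian penalty smooths within flat regions while leaving edges largely untouched, which is what makes the regularizer edge-preserving.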
To test the general applicability of this algorithm to image restoration problems, we experiment with different kinds of blur, such as Gaussian blur, box-average blur, out-of-focus blur, and motion blur, and compute both the PSNR and SSIM values of the deblurred images. We also compare some of the results with other deblurring methods, and we implement an additional experiment on image sharpening. Most of the experimental results verify the effectiveness of the proposed algorithm on different image restoration problems.

4  5:05  Parametric blur estimation for blind restoration of natural images: linear motion and out-of-focus
   Soumyanil Banerjee, Ajay Vasudevan

The project studies a method to estimate the parameters of two types of image blur for blind restoration: linear motion blur, which arises when the camera moves uniformly in a straight line and is characterized by a line of length L at angle theta, and out-of-focus blur, which is modeled as a uniform disk whose radius specifies how far the camera is out of focus. We assume the images are natural images, i.e., their power spectra are approximately isotropic and decay with spatial frequency according to a power law. The project includes implementation of the Radon-d and Radon-c transforms, which are modified Radon transforms, and error calculation against modeled (fitting) functions to estimate the blur parameters.

Part III: Segmentation / classification

5  5:25  Decoupled active contour (DAC) for boundary detection
   Madan Ravi Ganesh, Adeline Hong, Leyou Zhang

This project proposes to replicate the work in the paper "Decoupled Active Contour (DAC) for Boundary Detection". This paper splits the energy minimization scheme into two independent steps, measurement update (external) and prior addition (internal), to improve both the accuracy and the speed of convergence to a desired contour.
This is accomplished by employing a measurement update step, built on a Hidden Markov Model (HMM) and the Viterbi search algorithm, followed by a prior addition step based on measurement uncertainty and a non-stationary prior. In our project, we replicate the work of the authors to boost the speed and accuracy of convergence to contours while verifying its performance under challenging conditions, including the presence of noise, high-curvature contours, and random initialization of the snake. Proposed extensions include evaluation of the algorithm on images with different types of noise, and modification of the algorithm to enable adaptive tuning of algorithm parameters based on the image gradient.

6  5:45  Sparse coding with a universal regularizer model for image classification
   Tianyu Jiang, Sydney Williams

This project replicates work from "Universal Regularizers for Sparse Coding and Modeling" in the IEEE Transactions on Image Processing. In the paper, a universal coding framework is derived and implemented to create a sparse coding regularization term. We employ this method for image classification, in which images are sorted based on their sparse representations. In one database, texture images are classified by texture type; in a separate database, scenery images are classified as object or background in a basic detection problem. Sparse coding classification with the universal regularizer model proposed in the original paper is compared to an l1 penalty function, a common sparsity-promoting regularizer. We do not make claims as strong as the original paper's about the advantages of the universal regularizers over l1 regularization; however, in some cases we find that the universal model performs much better than l1 in terms of classification accuracy. We additionally explore the optimality of both regularization methods for the application of image texture classification.
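The l1 baseline against which the universal regularizer is compared can be sketched as follows: solve min_a ||y - Da||^2/2 + lam ||a||_1 by iterative soft thresholding (ISTA), then assign the image to the class whose dictionary reconstructs it with the smallest residual. This is a minimal sketch on synthetic data; the dictionaries, lam value, and residual-based classification rule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Sparse-code y over dictionary D with an l1 penalty via ISTA iterations."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the data-fit gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)            # gradient of 0.5 * ||y - D a||^2
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def classify(y, dicts, lam=0.1):
    """Assign y to the class whose dictionary yields the smallest sparse-coding residual."""
    errs = [np.linalg.norm(y - D @ ista(D, y, lam)) for D in dicts]
    return int(np.argmin(errs))
```

A signal built from a few atoms of one class's dictionary codes sparsely and accurately under that dictionary but poorly under the other, which is the intuition behind residual-based sparse classification.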
We examine the regularization parameter space of these models in an attempt to tune the parameters for the highest classification accuracy. We also look at the relationship between the percent sparsity of the sparse representations and classification performance.

7  6:05  Object detection on satellite maps based on SIFT keypoints and graph theory
   Kising Lee, Zhixun Zhao

High-resolution satellite images can provide a wealth of valuable data, such as urban-suburban-rural boundaries, building locations, road locations, or the boundaries of bodies of water. Manual extraction of this type of data can be tedious, and there is no guarantee of consistency. An automated algorithm, however, can be implemented to extract the desired information effectively. Unfortunately, construction of such an algorithm is not straightforward if standard techniques of image processing and pattern recognition are used, because the aforementioned objects share similar patterns that standard techniques will have difficulty distinguishing. Thus, we intend to use the scale invariant feature transform (SIFT), which has the advantage of being invariant under various image transformations, in conjunction with graph theoretical tools to identify buildings present in a given image.

Presented previously:

8  Blur estimation methods for blind deconvolution
   Brian Gonzalez, David Hiskens, Arvind Prasadan
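The keypoint-matching step underlying the SIFT-based detection project (talk 7) can be sketched independently of the detector itself: descriptors from two images are matched by nearest neighbour, keeping a match only when the best distance clearly beats the second best (Lowe's ratio test). The descriptor arrays below are hypothetical stand-ins; real SIFT descriptors would come from a keypoint detector.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    Returns (index_in_a, index_in_b) pairs for matches that pass the test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        j, k = np.argsort(dists)[:2]                 # best and second-best match
        if dists[j] < ratio * dists[k]:              # keep only unambiguous matches
            matches.append((i, int(j)))
    return matches
```

The ratio test discards ambiguous correspondences, which matters on satellite imagery where many buildings and road segments produce near-identical local patterns.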