W18 EECS 556 project abstracts
Thu Apr 19, 4:00-6:40 PM, EECS 3427

# Denoising

4:00-4:18 Multi-channel weighted nuclear norm minimization for real color image denoising
Can Cui, Jiaqi Wang, Jiarui Xu, Xi Yang

Image denoising is a classical yet important topic in computer vision and image processing. In this project, we investigate the Multi-channel Weighted Nuclear Norm Minimization (MC-WNNM) color image denoising algorithm. This algorithm concatenates the RGB patches to exploit the redundancy across channels and introduces a weight matrix that balances the data fidelity of the three channels according to their different noise statistics. Since the MC-WNNM model has no closed-form solution, it is reformulated as a linear equality-constrained problem and solved with the alternating direction method of multipliers (ADMM). We reproduce the published experimental results and propose one major extension: a Multi-channel Weighted Schatten p-Norm Minimization (MC-WSNM) color image denoising algorithm, obtained by changing the regularization term to a weighted Schatten p-norm, together with two methods for solving it. MC-WSNM V1 uses ADMM, while MC-WSNM V2 uses the WSNM solver. The results show that MC-WSNM V2 outperforms both MC-WNNM and MC-WSNM V1, achieving around 1 dB higher PSNR than MC-WNNM while running more than 10 times faster.

4:18-4:36 Haze removal using dark channel prior algorithm with guided filter
Zhenyu Fei, Yuwen Luo, Peng Xue, Tianchen Zhao [schedule 4-6 PM due to GSI duties!]

Natural outdoor images are often degraded by scattering phenomena such as haze, which is undesirable in photography and computer vision applications. Previous image dehazing methods fall into two main categories: (i) image enhancement based on image processing, and (ii) image restoration based on a physical model. In this project, we implement the straightforward and effective dark channel prior proposed by Kaiming He: in most patches of natural haze-free images, the minimum intensity over the color channels is close to zero. Combining this prior with the haze imaging model, the thickness of the haze can be estimated directly and the original image recovered. Because the soft matting step in He's method is slow, we use a guided filter instead, which speeds up the algorithm significantly. We then evaluate the method with quantitative metrics including SSIM, PSNR, and NRMSE. Finally, we compare it with other haze removal methods both quantitatively and qualitatively.
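For intuition only, here is a minimal NumPy sketch of the dark channel prior pipeline described above (not the group's code). It uses common parameter choices (omega ≈ 0.95, t0 ≈ 0.1), estimates the atmospheric light as the mean color of the brightest dark-channel pixels (a simplification), and omits the guided-filter refinement of the transmission map.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum over a patch x patch window.
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """img: hazy RGB image as floats in [0, 1]; returns a dehazed estimate."""
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[rows, cols].mean(axis=0)
    # Transmission estimated from the dark channel of the normalized image;
    # a guided filter would refine this map before it is used.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the haze imaging model I = J * t + A * (1 - t).
    return np.clip((img - A) / t + A, 0.0, 1.0)
```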
# Deblurring

4:36-4:54 Blind deblurring of natural images
Howard Parker, Xiang Ming Benjamin Pong, Rajith Weerasinghe, Zhuen Joel Ye

Blind image deblurring involves restoring a blurred image without knowledge of the blurring operator. It is a severely ill-posed problem, with infinitely many solutions compatible with the degraded image. Because of this, it is more challenging than non-blind deblurring, but also more useful when knowledge of the filter is limited. The proposed method focuses on space-invariant blurring filters in the presence of additive noise. We assume that edges in natural images are sharp and sparse, and that the blurring filter has finite support. These are weak assumptions on both the blurring filter and the original image, making the method applicable to a wide range of images. The method poses deblurring as a minimization problem; using standard tools such as gradient descent, we find a minimizer of the cost function. This produces an image that is clearer than the blurred input, both visually and in terms of the improvement in signal-to-noise ratio (ISNR).

# Dictionaries and reconstruction

4:54-5:12 Sparse representation learning with sketching
Siying Li, Haowei Xiang, Alexander Zaitzeff, Xiyu Zhang

Convolutional Dictionary Learning (CDL) and Convolutional Analysis Operator Learning (CAOL) are two methods for obtaining sparse representations of large datasets. Obtaining these representations quickly and efficiently is a problem of great interest. Sketching projects the data onto a lower-dimensional subspace and solves the problem in that space; it has been used to speed up least squares problems. We present the basics of CDL, CAOL, and sketching and answer the question “can sketching be used to speed up CDL or CAOL?”

5:12-5:30 Low-dose X-ray CT reconstruction via dictionary learning
Mingjie Gao, Niral Shah, Kevin Xu

Concern about the effects of ionizing radiation on patients has motivated new low-dose image reconstruction techniques for computed tomography (CT). Because radiation detection is a quantum process, lowering the dose to the patient inherently reduces the quality of the reconstructed image. A dictionary-learning-based image reconstruction method is proposed to reduce the noise in low-dose CT images: a statistical image reconstruction process recovers the image from low-dose projection data using a sparsity constraint with respect to a redundant dictionary. A study using a numerical phantom is performed, and images generated with the proposed method are compared to traditional filtered backprojection (FBP) and a total-variation (TV) reconstruction method.

5:30-5:48 MRI reconstruction using sparse subspace clustering
Shouchang Guo, Michelle Karker, Cheng Ouyang, Steven Whitaker

Subspace clustering groups high-dimensional data into the low-dimensional subspaces in which the data reside (i.e., the data lie in a union of subspaces). Elhamifar & Vidal implement this in a Sparse Subspace Clustering (SSC) algorithm designed to handle data imperfections (e.g., noise, missing entries). This work proposes a union-of-subspaces model via modified patch-based SSC as a novel approach to magnetic resonance image (MRI) reconstruction. Since accelerated MRI acquisition typically entails undersampling k-space, this application benefits from the ability of SSC to work with missing data. The proposed method is solved with two alternative optimization algorithms, the proximal gradient method (PGM) and the alternating direction method of multipliers (ADMM), and evaluated on an undersampled cardiac perfusion MRI dataset.
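For reference, the sparse self-expressiveness program at the heart of Elhamifar & Vidal's SSC, written here in our own notation (Y is a matrix whose columns are the data points, C the coefficient matrix, λ a regularization weight), is roughly:

```latex
\min_{C} \; \|C\|_{1} + \frac{\lambda}{2} \, \|Y - YC\|_{F}^{2}
\quad \text{subject to} \quad \operatorname{diag}(C) = 0
```

Each data point is expressed as a sparse combination of the others, and spectral clustering on the affinity matrix |C| + |C|^T then yields the subspace labels.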
# Segmentation

5:48-6:06 Fast partitioning of vector-valued images
Qiao Huang, Xiechen Wang, Jianqiao Zheng, Tian Zhou

Multiple state-of-the-art algorithms have been developed for image segmentation, including those that use a discrete label space, such as graph cuts, and those that replace the l0-norm penalty in the Potts problem with an l1-norm penalty, such as total variation. Storath and Weinmann proposed a new method with efficiency comparable to the l1-norm approaches and segmentation quality comparable to the discrete-label-space methods. In our project, we implemented Storath and Weinmann's method for multi-channel image segmentation and compared its results to state-of-the-art solutions.

6:06-6:24 Automatic segmentation of tumorous liver CT scans
Sang Choi, Caroline Crockett, Alexander Ritchie, Rebecca Shen

Accurate segmentation of liver CT scans is required for treatment planning. However, there is a lack of automated or semi-automated algorithms for fast and repeatable measurement of the progression of liver cancer. We first reproduce the results of Moghbel et al., whose automated segmentation procedure is based on fuzzy c-means clustering and the random walker algorithm. We then propose two semi-automated alternatives based on (1) the Potts model, which is particularly suited to CT because the images fit the model's piecewise-constant assumption, and (2) the watershed algorithm, which segments based on the image gradient and prior knowledge about liver tumors. Finally, we compare the results of all three methods on a publicly available dataset that includes ground truth, using a measure that captures the overlap of the segmented tumor regions.
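As a rough illustration of alternative (2) only, a marker-based watershed on a single CT slice might look like the following scikit-image sketch. The intensity thresholds `low` and `high` are hypothetical stand-ins for the prior knowledge about tumor and liver intensities; this is not the group's implementation.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def tumor_mask(slice_2d, low, high):
    """Marker-based watershed on one 2-D CT slice (grayscale intensities).

    low/high are hand-picked thresholds: pixels below `low` seed the (assumed
    hypodense) tumor region and pixels above `high` seed the surrounding liver.
    """
    gradient = sobel(slice_2d)                 # flood the gradient-magnitude image
    markers = np.zeros(slice_2d.shape, dtype=np.int32)
    markers[slice_2d < low] = 1                # tumor seeds
    markers[slice_2d > high] = 2               # liver seeds
    labels = watershed(gradient, markers)      # grow regions from the seeds
    return labels == 1                         # binary tumor mask
```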