2013-04-29 EECS 556 abstracts / schedule, 1-4pm, 1303 EECS

All students need to be present for all presentations, unless prior arrangements were made with Prof. Fessler, to be able to earn full credit for the oral presentation score and to be eligible for any project prizes. Be sure to test your laptop in the room (1303 EECS) before Monday to iron out any technical issues; this is also required for full credit on the oral presentation.

01  1:10-1:30  (KLA-Tencor 2nd place project prize)
Optimal motion-compensated frame rate up-conversion
Taining Liu, Xiaolin Song, Jinze Yu:

Frame rate up-conversion (FRUC) methods that employ motion have been shown to provide better image quality than non-motion-based methods. While motion-based methods improve the quality of interpolation, artifacts are introduced in the presence of incorrect motion vectors. In this project, we study the design of an optimal temporal interpolation filter for motion-compensated FRUC (MC-FRUC). The optimal filter is obtained by minimizing the prediction error variance between the original frame and the interpolated frame. In FRUC applications the skipped original frame is not available at the decoder, so models for the power spectral density of the original signal and of the prediction error are used to formulate the problem. A closed-form solution for the filter is obtained via Lagrange multipliers and statistical modeling of motion vector errors. The effect of motion vector errors on the resulting optimal filters and on the prediction error is analyzed. The performance of the optimal filter is compared to nonadaptive temporal averaging filters using two different motion vector reliability measures. The results confirm that, to improve the quality of temporal interpolation in MC-FRUC, the interpolation filter should be designed based on the reliability of the motion vectors and the statistics of the MC prediction error.

02  1:30-1:50
A spatially adaptive statistical method for the binarization of degraded document images
Xi Han, Amrita Ray Chaudhury, Suhang Wang, Yang Zhua:

Document image binarization is an essential process for accurately storing and searching degraded documents. Traditional binarization algorithms often fail to cleanly binarize low-quality document images. In this project, the proposed spatially adaptive binarization method recovers the main text of a document image, including weak strokes and connections. The method builds on the spatial relationships in an initial binarization produced by Sauvola's grid-based algorithm. To estimate the text and background features, a grid-based model and inpainting techniques are adopted to make this process faster and more robust. Maximum-likelihood (ML) classification is then used to produce the final binarization of the document image based on the text and background features.
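The Sauvola initialization mentioned in abstract 02 computes a per-pixel threshold from local image statistics, T = m * (1 + k*(s/R - 1)), with local mean m and standard deviation s. A minimal illustrative sketch in Python/NumPy follows; it is not the team's implementation, and the window size and the parameters k and R are common defaults rather than their choices.

import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=25, k=0.2, R=128.0):
    """Sauvola threshold T = m * (1 + k * (s / R - 1)) from local mean m and std s."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=window)               # local mean in a window x window box
    mean_sq = uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))   # local standard deviation
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return img < threshold                                # True where pixels are dark (text)

The grid-based evaluation and inpainting refinements described in the abstract are not shown here.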
03  1:50-2:10
Hierarchical image segmentation of multicolor images
Zhongtang Tian, Seongjin Yoon:

Image segmentation is the process of partitioning an image into 'informative' segments, and it plays an important role in many automatic object recognition applications. Among the various approaches proposed so far, the watershed is an efficient region-growing method based on simulating flooding of the gradient map. However, one critical drawback of this method is over-segmentation, due to its sensitivity to even small gradient fluctuations and its lack of global information.

In this project, with the goal of overcoming over-segmentation and improving the segmentation quality of the watershed method, we closely followed existing research on image segmentation, Vanhamel et al. 2003, "Multiscale Gradient Watersheds of Color Images," and implemented a watershed-based hierarchical segmentation algorithm for color images. We applied 1) scale-space samples generated by anisotropic diffusion filtering (ADF) as pre-processing, 2) watershed segmentation with a color-invariant gradient, and 3) region merging based on the dynamics of contours in scale space (DCS) as post-processing. From the set of outputs produced at the different scale levels, the gLY metric was applied to find the optimal segmentation result. Additionally, we suggested ADF with a nonlinear time-integration scheme and compared its performance with existing schemes in terms of numerical error and computational time. The overall method is evaluated using the BSDS500 metric, an empirical evaluation measure based on human-labeled images, and the optimal score is compared with that of other segmentation methods.

04  2:10-2:30  (KLA-Tencor 1st place project prize)
Reconstruction of accelerated MRI acquisitions which use partial Fourier, partial parallel (PFPP) imaging techniques
Gopal Nataraj, Brandon Oselio, and Yash Shah:

In magnetic resonance (MR) imaging, acquiring frequency-encoded readouts along many phase-encoding steps in k-space is time-consuming, due to the lengthy intrinsic time constants of bodily matter. It is therefore desirable to reduce the number of phase-encoding steps acquired in k-space while preserving image contrast and detail as well as possible. Previous methods such as Partial Fourier (PF) and Partial Parallel (PP) imaging introduce phase artifacts and noise amplification, respectively. In this work, we investigate combining these methods in a Partial Fourier, Partially Parallel (PFPP) framework, originally introduced by Bydder et al. [1]. By varying a regularization parameter that penalizes complex-valued solutions, we are able to control the balance between the undesirable artifacts introduced by PF and PP imaging alone. Optimizing this tradeoff allows increased under-sampling in the phase-encoding direction while still preserving image quality. Using a robust iterative reconstruction scheme on 8-coil data, we achieve acceleration factors of up to nearly 8× with as little as 11.5% NRMSE relative to a fully sampled gold standard. Such large acceleration factors allow significantly faster acquisitions, thereby increasing patient comfort and reducing scan costs.
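To make the reconstruction in abstract 04 concrete, the following is a minimal sketch (Python/NumPy) of a quadratically regularized multi-coil reconstruction with a penalty on the imaginary part of the image, i.e., against complex-valued solutions, solved by plain gradient descent. The input names (kdata, sens, mask) and the simple solver are assumptions for illustration; the group's actual formulation follows Bydder et al. [1] and may differ.

import numpy as np

def A(x, sens, mask):
    # Forward model: image (ny, nx) -> undersampled multi-coil k-space (ncoils, ny, nx).
    return mask[None] * np.fft.fft2(sens * x[None], axes=(-2, -1), norm="ortho")

def At(y, sens, mask):
    # Adjoint of A: undersampled multi-coil k-space -> coil-combined image.
    return np.sum(np.conj(sens) *
                  np.fft.ifft2(mask[None] * y, axes=(-2, -1), norm="ortho"), axis=0)

def pfpp_recon(kdata, sens, mask, lam=1.0, step=0.5, n_iter=200):
    # Minimize ||A x - y||^2 + lam * ||Im(x)||^2 by gradient descent.
    # kdata, sens: (ncoils, ny, nx); mask: (ny, nx) with 1s at sampled locations.
    x = At(kdata, sens, mask)                  # zero-filled initial estimate
    for _ in range(n_iter):
        grad = At(A(x, sens, mask) - kdata, sens, mask) + lam * 1j * x.imag
        x = x - step * grad                    # step assumes sum over coils of |sens|^2 is about 1
    return x

A conjugate-gradient or otherwise preconditioned solver would converge faster; plain gradient descent keeps the sketch short.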
2:30-2:40  break

05  2:40-3:00
Defocus magnification with a single image
Chia-Ling Chang, Xiang Li, Jingxiang Yuan:

A sharp foreground with a blurry background, caused by a shallow depth of field, is sometimes aesthetically preferable. However, common point-and-shoot and smartphone cameras do not produce as much defocus as a single-lens reflex (SLR) camera because of their smaller lens diameters. In this project, we generate a shallow depth-of-field image from an all-in-focus one using the method proposed by Bae et al. This approach estimates a defocus map, magnifies the blurriness, and generates a more defocused image from the new defocus map. To acquire the defocus map, we first estimate the spatially varying amount of blur at edges and then propagate the defocus measure over the image. To magnify the defocus effect, we blur each pixel according to its estimated blurriness, and then generate the result images with the Adobe Photoshop lens blur function using the new defocus map. Unlike many studies of focus/defocus effects that rely on a depth map, whose reconstruction requires multiple images taken with different camera settings, this approach estimates a defocus map rather than a precise depth map and is therefore more practical for everyday use.

06  3:00-3:20
Nonlocally centralized sparse representation for image restoration
Arun Dutta, Allen Gu, Irene Zhu:

One approach to image restoration is sparse modeling. The idea behind sparse modeling is that an image patch can be coded with a small number of basis vectors from an over-complete dictionary by solving a minimization problem. However, this method alone may not produce sufficiently accurate reconstructions. To improve the outcome, the nonlocally centralized sparse representation (NCSR) method introduces the idea of sparse coding noise: the difference between the sparse coefficients of the observed image and those of the original image. The goal of NCSR is to minimize this noise. Since the sparse coefficients of the original image are unknown, we obtain estimates of them by exploiting nonlocal self-similarities in the degraded image. This is where the name of the method comes from: we centralize the sparse coding coefficients of the observed image toward the estimates obtained from nonlocal similarities. Our project focuses on applying this algorithm in two areas of image restoration: denoising and deblurring.

07  3:20-3:40
Image deblurring with blurred/noisy image pairs
Huichao Ma, Buping Wang, Jiabei Zheng, Menglian Zhou:

Photos taken under dim lighting with a handheld camera are usually either too noisy or too blurry. In our project, we implement an image deblurring method that takes advantage of a pair of blurred/noisy images. We first use the image pair to estimate the blur kernel with a regularized linear least-squares method. With this kernel, we apply Richardson-Lucy (RL) deconvolution to the residual image (the blurred image minus the denoised image), which reduces the ringing artifacts of the standard RL algorithm. To further reduce ringing, we use a gain-controlled RL deconvolution method: it first computes a gain map of the image from Gaussian pyramids and then uses the map to suppress contrast in smooth areas, which effectively reduces ringing but also causes some loss of detail. Finally, a detail layer is extracted from the residual RL result with an adaptive high-pass filter and added to the gain-controlled RL result to compensate for the lost detail. We also examine another deblurring method that takes only a single blurred image as input: we first use Fergus's algorithm for blind kernel estimation and then implement the joint bilateral RL method to obtain the deblurred image while progressively adding detail. The results are comparable to the first method, and both methods improve deblurring performance in the sense that ringing artifacts are significantly suppressed compared with the standard RL algorithm.
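Both pipelines in abstract 07 build on standard Richardson-Lucy deconvolution, so a minimal sketch of that baseline is included below (Python/NumPy). It is illustrative only and assumes a known blur kernel psf (normalized to sum to 1) and a nonnegative blurred image; the residual, gain-controlled, and joint bilateral variants described above modify this basic update.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    # Standard RL update: x <- x * ( (y / (x conv psf)) correlated with psf ).
    blurred = blurred.astype(np.float64)
    x = np.full_like(blurred, blurred.mean())      # flat initial estimate
    psf_flip = psf[::-1, ::-1]                     # correlation = convolution with flipped psf
    for _ in range(n_iter):
        estimate = fftconvolve(x, psf, mode="same")
        ratio = blurred / (estimate + eps)         # eps guards against division by zero
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x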