EECS 556 Project Abstracts, W05
Tuesday April 19, 5:30PM-?, 3437 EECS

---- PATTERN RECOGNITION

5:30-5:55  Fingerprint matching
Sonia Gupta; Sean Motsinger; Ajay Raghavan; Sumedha Sinha

This project examines the image-processing requirements inherent to fingerprint recognition. Specifically, minutiae-based algorithms will be studied and implemented. Minutiae, the ridge bifurcations and endings seen in fingerprints, will be used as the discerning features in this algorithm. Each feature is characterized by its location and the direction of the ridge on which it resides. In particular, the methodology presented in Jain et al. will be implemented. This methodology consists of four main steps: orientation field estimation using local gradients, ridge detection by convolving the image with filters oriented according to the estimated orientation field, ridge thinning and minutiae extraction, and minutiae correlation. The algorithms developed will be compared using performance criteria based on the False Acceptance Rate (FAR), the False Rejection Rate (FRR), and the receiver operating characteristic (ROC) curve (a plot of FAR versus FRR). They will also be compared to the commercial software VeriFinger 4.1.

5:55-6:20  Optical character recognition (OCR)
Harrington, Jason; Mattis, Paul; Meyers, Aaron; Novello, Peter

A text document can be scanned and then broken into sub-images representing each typed letter. Each sub-image can then be analyzed to extract a distinct set of features, which are used to classify the sub-image as a specific letter. This method of feature classification yields OCR accuracy rates of 90% to 95%.

6:20-6:45  Image processing for face recognition
Chuck Divin, Daniel Kurikesu, David Kurikesu, Elson Liu

Face recognition is a large and active area of research in image processing and pattern recognition, with applications in biometric identification and security. In its full scope, it is a very complex problem, as changes in illumination magnitude, illumination direction, facial pose, or facial expression can adversely affect the reliability of a face recognition system. A common strategy used by many face recognition systems is to empirically derive an image-representation basis from a set of training data. In our project, we use a common method for constructing an empirical basis called principal component analysis (also known as the Karhunen-Loève transform). When applied to the face-recognition problem, principal component analysis generates basis vectors called eigenfaces (named for their resemblance to faces). Faces are then represented by the weighting coefficients of the linear combination of eigenfaces that minimizes the mean-square error. Each individual is thus represented by a vector of representation coefficients, and a database of individuals can be represented by an array of these vectors. The face recognition procedure then consists of representing a given face image as a linear combination of eigenfaces and finding the database entry that is the minimum distance from the given image in the representation space. For our project, we examined the effect of noise and changes in illumination on the recognition rate of an eigenface face-recognition system and explored different image processing procedures to try to improve the recognition rate and to make the system robust to these variations in face images.
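
A minimal sketch of the eigenface pipeline described in this abstract, assuming a hypothetical array train_faces of vectorized training images (one row per image), a basis size K, and a probe image probe; these names and the use of NumPy are illustrative assumptions, not the group's implementation.

    import numpy as np

    def build_eigenfaces(train_faces, K):
        """train_faces: (N, P) array, one vectorized face image per row."""
        mean_face = train_faces.mean(axis=0)
        A = train_faces - mean_face                 # center the training data
        # SVD of the centered data gives the principal components (eigenfaces).
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        eigenfaces = Vt[:K]                         # (K, P) orthonormal basis vectors
        coeffs = A @ eigenfaces.T                   # (N, K) representation of each face
        return mean_face, eigenfaces, coeffs

    def recognize(probe, mean_face, eigenfaces, coeffs):
        """Project a probe image and return the index of the nearest database entry."""
        w = (probe - mean_face) @ eigenfaces.T      # K-dimensional representation
        dists = np.linalg.norm(coeffs - w, axis=1)  # distance in representation space
        return int(np.argmin(dists))

Because the eigenfaces are orthonormal, the projection above is the linear combination that minimizes the mean-square error, and recognition reduces to a nearest-neighbor search over the stored coefficient vectors.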
6:45-7:00  pizza

7:00-7:25  Face detection in color images
Damannagari Chandan, Kamdar Pratik, Misra Sidharth, Sarkar Saradwata

Human face detection plays an important role in applications such as video surveillance, human-computer interfaces, face recognition, and face image database management. We propose a face detection algorithm for color images in the presence of varying lighting conditions as well as partial occlusions and non-frontal faces. We implement a non-linear color transformation to detect skin regions over the entire image and then isolate the potential face candidates using morphological operations. We use an average face template to detect faces among the potential candidates: we find the dimensions, centroid, and orientation of each isolated skin region, resize and reorient the average face template accordingly, and compute the correlation between the template and the candidate region. A threshold, fixed based on the experiments performed, decides whether a given skin region corresponds to a face or not. Finally, we plot an ROC curve based on different threshold values. An optimal implementation of the above algorithms would lead to successful face detection.

---- MOTION

7:25-7:50  Motion estimation using 2nd-order trajectories and adaptive image segmentation
Bashan, Carter, Grikschat, Stepanian

Motion estimation is an important technique used in such applications as video coding or temporal interpolation to 'upsample' a sequence of frames. The standard method segments a target image uniformly and simply assumes a linear path for blocks between frames. In this project, a quadratic trajectory is fitted to each block's path across frames, presumably leading to more natural-looking interpolation. We will simulate block matching with linear motion and compare it to block matching with a quadratic path. In addition, we plan to study the possible gains of adaptively segmenting an image based on the presence of edges before block matching; smaller blocks in areas with edges may yield less blockiness in interpolated frames. The potential gains of adaptive segmentation and quadratic motion will be evaluated visually as well as with numerical metrics such as mean-squared error (MSE) and peak signal-to-noise ratio (PSNR), and will be gauged against any added computational complexity. Several sequences of images will be decimated, and the MSE of the interpolated frames will be calculated with respect to the actual missing frames.
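
As a point of reference for the block-matching baseline described above, here is a minimal full-search sketch; the block size, search range, and the frame arrays prev and curr are illustrative assumptions rather than the group's actual parameters.

    import numpy as np

    def block_match(prev, curr, block=16, search=8):
        """Full-search block matching: for each block of `curr`, find the integer
        displacement into `prev` that minimizes the sum of squared differences."""
        H, W = curr.shape
        vectors = np.zeros((H // block, W // block, 2), dtype=int)
        for bi in range(H // block):
            for bj in range(W // block):
                y, x = bi * block, bj * block
                target = curr[y:y + block, x:x + block].astype(float)
                best, best_ssd = (0, 0), np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                            continue  # candidate block falls outside the frame
                        cand = prev[yy:yy + block, xx:xx + block].astype(float)
                        ssd = np.sum((target - cand) ** 2)
                        if ssd < best_ssd:
                            best_ssd, best = ssd, (dy, dx)
                vectors[bi, bj] = best
        return vectors

The quadratic-path variant described in the abstract would then fit a second-order trajectory to each block's displacements estimated over at least two consecutive frame pairs, rather than assuming a straight-line path.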
7:50-8:15  Methods for obtaining fieldmaps and their effect on motion correction and iterative image reconstruction in MR imaging
Will Grissom, Kim Khalsa, Milosh Petrovich, and Bryan Donald

Magnetic field susceptibility artifacts can severely degrade the performance of motion correction and reconstruction in MR imaging of the brain. The fieldmaps that are used to correct for these distortions can vary drastically with patient motion. We investigated three methods for obtaining fieldmaps for iterative image reconstruction: a 'straw man' method in which a static fieldmap acquired at a single time point is applied to all images, a method in which the same static fieldmap is rotated according to estimates of subject movement, and a dynamic method in which a new fieldmap is acquired at each time point. Scans were collected using a 3.0 Tesla GE MRI scanner, motion correction was performed using MCFLIRT software, and iterative image reconstruction was performed using Dr. Fessler's Image Reconstruction MATLAB Toolbox.

---- RESTORATION

8:15-8:40  Survey of blind image restoration methods
Congxian, Chih, Mark, Alex

Many image restoration algorithms assume knowledge of the blur function that degrades an image. In many situations, however, the blur is not known, and blind image restoration techniques must be applied. Blind deconvolution is a technique for restoring degraded images without explicit knowledge of either the original image or the point spread function (PSF) of the blur. It holds more promise for the future than conventional image restoration because the PSF of the blur is typically not available in applications such as astronomy and medical imaging. The objective of this project is to conduct an extensive survey of current techniques in blind image deconvolution. First, the general imaging model is discussed in detail. Next, we compare and contrast several blind deconvolution methods, including the a priori parametric blur method with Wiener filtering, maximum likelihood (ML) estimation, iterative blind deconvolution (IBD), and nonnegativity and support constraints recursive inverse filtering (NAS-RIF). We will discuss the algorithmic assumptions, improvement in signal-to-noise ratio (ISNR), image artifacts, computational complexity, and blurred signal-to-noise ratio (BSNR) for these algorithms. We then present blind image restoration results on a binary image and the well-known Lena image.
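
As a small illustration of the first method in this list (a parametric blur model combined with Wiener filtering), here is a frequency-domain sketch in NumPy; the Gaussian PSF, its width sigma, and the constant noise-to-signal ratio nsr are assumptions chosen for illustration, not choices made by the survey.

    import numpy as np

    def gaussian_psf(shape, sigma):
        """Centered 2-D Gaussian PSF, normalized to unit sum."""
        H, W = shape
        y = np.arange(H) - H // 2
        x = np.arange(W) - W // 2
        psf = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()

    def wiener_deconvolve(blurred, sigma=2.0, nsr=0.01):
        """Wiener restoration assuming a Gaussian blur of known width `sigma`
        and a constant noise-to-signal power ratio `nsr`."""
        psf = gaussian_psf(blurred.shape, sigma)
        Hf = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the blur
        G = np.fft.fft2(blurred)
        Wf = np.conj(Hf) / (np.abs(Hf) ** 2 + nsr)  # Wiener filter
        return np.real(np.fft.ifft2(Wf * G))

The other methods surveyed are blind in a stricter sense: the PSF (or, in NAS-RIF, an inverse filter) is estimated from the degraded image itself rather than assumed in advance.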
8:40-9:05  Image completion: a comparative analysis of exemplar-based image inpainting and simultaneous cartoon and texture image inpainting using sparse representations
Kizilkale, Cagdas; Lo, Edmund; Louro, Hugo; Shaw, Cole

Cracks in old photographs, lost pixels in data transmission, and text removal from a digital image are typical applications of image restoration. In the digital world, background blur and edge discontinuities have been obvious indicators that an image has been modified. Today, smooth object removal is particularly useful in applications like special effects and digital photo touch-up. This project will explore two new digital techniques for filling in regions of an image. Previously, exemplar-based algorithms and image inpainting techniques were researched separately: image inpainting focuses on linear structures, i.e., creating continuous lines and edges, while exemplar-based algorithms focus on repeating textures in order to create a smooth-looking background. Both techniques have been combined into one, exemplar-based image inpainting, which we will explore further. A comparative analysis will be made between exemplar-based image inpainting and a new technique called simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). This is one of the most recent inpainting methods and combines the advantages of variational and local statistical analysis methods, i.e., it performs well on images that simultaneously contain piecewise-smooth regions (cartoons) and texture. It has several desirable properties: (1) the image may include additive noise; (2) the image may have missing pixels; (3) it allows the use of overcomplete representations; (4) it performs a global treatment of the image rather than a block-based analysis; and (5) it can treat overlapping texture and cartoon layers separately. The goal is to find a sparse representation for an arbitrary image containing both texture and smooth content. The overall problem is modeled as a minimization problem, which gives it substantial advantages over purely numerical methods. However, when the goal is to fill in a large missing region, this algorithm is outperformed by exemplar-based image inpainting methods. The analysis of this and other tradeoffs, together with implementation details, is of great interest because this method takes a novel approach and still lacks a complete theoretical formulation.

---- SEGMENTATION

9:05-9:30  Interactive segmentation of medical images using snakes
Joonki Noh, Jeffrey Pursell, Dinesh Thogulua, Philip Tsai

Accurate and smooth segmentation of noisy or blurred images is a problem of particular interest to the medical profession. One method to perform such segmentation is snakes. A snake is an evolving contour used to segment images based on the minimization of an energy function. Jacob et al. proposed new energy functions that improve the performance of the snake algorithm. We compare the performance of these new energy functions to simpler ones.
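
A minimal snake example using the classical internal/external energy terms (not the Jacob et al. energies compared in this project), relying on scikit-image's active_contour; the test image, the circular initialization, and the weights alpha, beta, w_edge, and gamma are illustrative assumptions.

    import numpy as np
    from skimage import data, filters, segmentation
    from skimage.color import rgb2gray

    # Example image; blurring widens the basin of attraction of the
    # edge-based (external) energy term.
    img = rgb2gray(data.astronaut())
    smooth = filters.gaussian(img, sigma=3)

    # Circular initial contour (row, column coordinates; values are illustrative).
    s = np.linspace(0, 2 * np.pi, 400)
    init = np.column_stack([100 + 100 * np.sin(s), 220 + 100 * np.cos(s)])

    # alpha and beta weight the contour's tension and rigidity (internal energy);
    # w_edge weights the image-gradient (external) energy; gamma is the time step.
    snake = segmentation.active_contour(smooth, init,
                                        alpha=0.015, beta=10, w_edge=1, gamma=0.001)

The contour evolves until the combined internal and external energies reach a minimum, at which point the snake traces the object boundary.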