---- emission

Compton image reconstruction in combined direction and isotope space
Wahl, Chris : List-mode maximum-likelihood expectation-maximization image reconstruction has previously been applied to reconstruct gamma-ray source distributions using position-sensitive gamma-ray detectors. Often, one assumes that the full energy of the photon has been recorded in the sensor, in which case reconstruction need only occur over direction pixels (the image). However, since a significant fraction of recorded photons do not deposit their full energy, more recent algorithms have reconstructed source intensity over a combined image and energy space. This has the advantage of reporting the gamma-ray energy spectrum along with the spatial distribution of activity. Yet the quantity that most operators ultimately care about is not the energy spectrum but which isotopes produced the emissions. Since each isotope has a unique set of emission energies, we reconstruct source intensity in a combined image and isotope space. We will report computation time and imaging and isotope-identification performance compared to reconstruction in a combined image and energy space.

Penalized-likelihood for Poisson model with L1 regularization
Lingenfelter, Dan : This work compares and contrasts algorithms for solving penalized-likelihood problems with Poisson data and l1 frame-based and roughness regularization. I present two algorithms, derived using augmented Lagrangian and optimization transfer techniques, that solve the l1 penalized-likelihood problem exactly. These algorithms are compared to competing methods that approximate the l1 norm with a differentiable function. The algorithms are applied to an emission tomography problem.

---- motion

Compton image reconstruction using energy and time-varying spatial basis functions
Jaworski, Jason : Previous work has shown great success using an MLEM-based algorithm for deconvolving Compton images in the combined energy-imaging domain. However, when sources of interest are moving in the field of view during the measurement time, this technique cannot correctly represent the source in a single voxel. By adding time-varying (moving) spatial basis functions, and provided the source position is known as a function of time, the algorithm can correctly deconvolve the source into a single time-varying spatial voxel while concurrently deconvolving in the energy domain.

Deblurring space-variant motion with PWLS estimation from a single image
Donghwan Kim : Motion deblurring enhances the quality of a motion-blurred image. The task is complicated by the diversity of motion that can exist in a single image, such as translation, rotation, and nonparametric motion. Since the motion blur itself retains information about the motion, a motion-blur constraint model allows space-variant motion to be estimated, which enables restoration of the blurred image by deconvolution. However, inaccuracy in the blind deconvolution causes unacceptable ringing artifacts near the edges. Therefore, this paper applies penalized weighted least-squares (PWLS) estimation to suppress the artifacts while preserving the edges. For improved PWLS estimation, the least-squares term is weighted by the certainty of the data, and the edge-preserving penalty function is also spatially weighted based on a local smoothness prior. Finally, the appropriate PWLS cost function for the roughly motion-deblurred image is selected by comparing the resolution and artifacts of the results.
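As a point of reference for the PWLS formulation above, here is a minimal sketch of weighted least-squares deblurring with an edge-preserving roughness penalty. It assumes a known, spatially invariant blur with circular boundary conditions and uses a plain fixed-step gradient descent; the operators, weights, and parameter values are illustrative placeholders, not the project's actual method.

```python
# Minimal PWLS deblurring sketch: minimize
#   1/2 ||A x - y||_W^2 + beta * sum_k psi([C x]_k)
# where A is circular convolution with a known psf, W is a diagonal data-certainty
# weight, C takes horizontal/vertical finite differences, and psi is a hyperbola
# (edge-preserving) potential. All parameter values here are illustrative only.
import numpy as np

def pwls_deblur(y, psf, W, beta=0.01, delta=0.1, niter=200, step=0.5):
    H = np.fft.fft2(psf)          # psf given as an image-sized kernel centered at pixel (0, 0)
    A = lambda x: np.real(np.fft.ifft2(H * np.fft.fft2(x)))
    At = lambda r: np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(r)))
    dpsi = lambda t: t / np.sqrt(1.0 + (t / delta) ** 2)   # psi'(t) for the hyperbola potential
    x = y.copy()
    for _ in range(niter):
        g = At(W * (A(x) - y))                     # gradient of the weighted data-fit term
        for ax in (0, 1):                          # gradient of the roughness penalty, C' psi'(C x)
            d = x - np.roll(x, 1, axis=ax)
            g += beta * (dpsi(d) - np.roll(dpsi(d), -1, axis=ax))
        x = x - step * g                           # fixed step for simplicity; a line search or
    return x                                       # Lipschitz-based step would be safer in practice
```

A spatially weighted penalty of the kind described in the abstract would simply multiply dpsi(d) elementwise by a per-pixel weight map before the adjoint difference is applied.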
Comparison of optimization transfer approach to conventional optimization algorithm in joint registration/reconstruction for motion compensated image reconstruction
Jang Hwan Cho : For image reconstruction problems where the object is in motion, such as cardiac CT imaging, reducing motion artifacts is an important issue. Joint estimation of both the motion and the image using a penalized-likelihood cost function is a suitable method for this task. However, when conventional gradient-based methods are used for motion estimation, updating the motion parameters is computationally expensive. In this project, an optimization transfer method is used to decrease the computational burden of the motion estimation step. The method is compared to a conventional optimization algorithm (conjugate gradients) applied to the PWLS cost function. The comparison will be performed using simulated cardiac CT image reconstructions.

---- CT

Wavelet Regularization for inverse-Hilbert reconstructed CT
Schmitt, Steve : A CT image can be reconstructed using DBP, in which each projection is differentiated and back-projected. This results in an image that is correct but Hilbert-transformed along some axis. In this paper, I explore various methods for reconstructing a CT image from DBP by simultaneously performing an inverse Hilbert transform and applying l_p (p < 2) wavelet-based regularization to the image to encourage sparsity in a wavelet basis. This has the intended effect of removing noise while preserving edges in the image, provided the assumption that the image is sparse in a wavelet basis is correct.

FPGA Implementation of Forward-Projection for X-Ray CT using Separable Footprints
Kim, Jungkuk : Forward projection for X-ray CT requires substantial computation despite the separability of the separable footprints algorithm, and this computational burden makes simulations slow. In this project, an FPGA implementation accelerates the separable footprints algorithm in two ways: parallelism and memory organization. Pipelining performs computations in parallel, reducing the simulation time by a factor on the order of n, the number of pipeline stages. Segmenting the data into m parts allows up to m operations to execute concurrently. Based on the proposed architecture, the FPGA implementation is expected to be tens to a hundred times faster than MATLAB simulation.

Two-Material Decomposition from Single-Energy CT Using Statistical Image Reconstruction
Long, Yong : An accurate image of attenuation coefficients at a higher treatment energy can be synthesized by combining component images separated at lower diagnostic energies. This accurate image ensures precise dose calculation, enhances visualization and thus segmentation of anatomy for radiotherapy treatment planning, and may lead to future improvements in reducing image artifacts from highly attenuating materials. Most separation methods use dual-energy CT measurements; however, the additional X-ray scan exposes patients to more radiation than a single scan. We propose to separate two basis materials from a single-energy CT scan by exploiting the differences in incident X-ray intensities across projection rays created by filtration, such as bow-tie filters, and possibly the sparsity of the material images in appropriate domains.
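To make the single-energy, two-material measurement model concrete, here is a small sketch of the expected polyenergetic transmission measurement when the incident spectrum varies from ray to ray because of filtration such as a bow-tie filter. The array names, spectra, and mass-attenuation tables are placeholder assumptions, not the project's data or code.

```python
# Illustrative polyenergetic forward model for two basis materials (e.g., water, bone).
# spectrum[i, e]: incident intensity of ray i in energy bin e (ray-dependent with a bow-tie filter)
# mac_water[e], mac_bone[e]: mass attenuation coefficients (cm^2/g) per energy bin
# Lw[i], Lb[i]: density-length products (g/cm^2) of each material along ray i
import numpy as np

def expected_measurements(Lw, Lb, spectrum, mac_water, mac_bone):
    """Mean transmission measurement per ray:
       ybar_i = sum_e spectrum[i, e] * exp(-(mac_water[e]*Lw_i + mac_bone[e]*Lb_i))"""
    line_integrals = np.outer(Lw, mac_water) + np.outer(Lb, mac_bone)   # [n_rays, n_energy]
    return np.sum(spectrum * np.exp(-line_integrals), axis=1)           # [n_rays]
```

In a statistical reconstruction, these means would enter a Poisson or weighted least-squares cost that is optimized jointly over the two material images, with the ray-to-ray spectral differences supplying information that a second scan would otherwise provide.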
Artifact reduction and field of view extension of truncated CT imaging based on model-based iterative reconstruction algorithms and sinogram completion
Liu, Langechuan : Under various conditions, a portion of the scanned object can extend beyond the field of view (FOV) of a CT scanner, resulting in undesirable artifacts within the limited FOV when images are reconstructed using conventional filtered back-projection (FBP). Previous research has proposed different sinogram completion (SC) methods to mitigate artifacts within the FOV or to extend the FOV. This study explores the possibility of using model-based iterative reconstruction (MBIR) algorithms for this problem. Results using MBIR and SC methods are compared with the traditional FBP method, showing that model-based approaches are effective and promising for artifact reduction and provide more flexibility in the presence of noise.

Regularized Image Reconstruction Algorithm for Dual-Energy X-ray Computed Tomography Imaging using a Cross-material prior
Huh, Won Seok : X-ray computed tomography (CT) images are routinely used for attenuation correction in PET/CT systems. Recently, dual-energy (DE) X-ray CT has shown potential to improve the accuracy of attenuation correction in PET by estimating material characteristics. However, conventional regularized image reconstruction algorithms ignore the prior knowledge that the images of the different materials are perfectly registered. In this paper, we propose a novel regularization method, the Cross-material prior, which uses the knowledge that if neighboring voxels in one material image are similar, those voxels likely belong to the same organ, and the corresponding voxels in the other material image belong to that organ as well. The goal of this work is to study statistical methods for image reconstruction from DECT data for PET using a Cross-material prior.

---- priors / regularization

Sparse Prior in Natural Images and Its Applications
Nien, Hung : In nature, most images, and more generally most signals, have particular structure and redundancy; therefore, we believe that most signals are sparse when represented in some domain. In this project, I evaluate this "sparse" prior in natural images under different norms, compare it with the "sparse gradient" prior, and apply it to problems in image restoration, e.g., the boundary value problem in image deconvolution: given a blurred image, how to extrapolate the border so that it is "circular-convolved-like" and satisfies the "sparse" prior at the same time. To do this, I also implement several algorithms that solve the minimization problem in compressed sensing and compare them. In doing so, I gain a thorough view of the field of compressed sensing.

Regularization Methods for Nonrigid Image Registration to Incorporate Tissue-Type Prior and Physical Constraints
Watanabe, Tak : Medical image registration is the process of finding the transformation that maps the homologous image's coordinate space to the reference image's coordinates, bringing the anatomical features of the images into alignment. Image registration can be classified according to the transformation model used. Rigid or affine registration is appropriate for modeling movements of individual bone structures and requires few degrees of freedom (DOF), whereas nonrigid registration can model more complicated deformation in soft-tissue regions at the expense of higher DOF. This high DOF can make nonrigid registration a grossly ill-conditioned problem with multiple optima, which suggests including regularization in the algorithm. This project focuses on designing a regularizer that incorporates the elasticity level of local image regions and encourages the final warping to be topology preserving. The regularizer should allow bending in regions of soft tissue but penalize warping in regions of osseous tissue to avoid unrealistic distortion such as bone warping. In addition, the regularizer should encourage local invertibility so that the transformation is diffeomorphic, helping avoid unrealistic results such as 'folding' in the anatomy. The results of the nonrigid registration algorithm should therefore agree with the tissue-type-dependent elasticity prior and conform to the physical constraints imposed by design.

---- MR

Regularization of magnitude and phase images in MRI
Zhao, Feng : In some MRI applications that utilize or introduce field inhomogeneity, such as T2*-weighted BOLD fMRI and PRF-shift thermometry [1], the imaginary component of the complex image is not negligible compared to the real part. In such applications, the accuracies of the phase and magnitude reconstructions affect each other, as the two are highly correlated in the data fit. This problem arises in conventional iterative MRI reconstructions of complex images, which fit and regularize (if at all) the real and imaginary components equally, so the phase in low-intensity areas suffers from large errors. To address this, Fessler et al. [2] proposed a method that regularizes the magnitude and phase separately, exploiting the spatial smoothness of phase images. However, the undersampling rate achievable with that reconstruction is relatively limited, and the optimization transfer approach [3] used in that paper decreases the cost function too slowly. To improve on it, this paper regularizes the magnitude component with a compressed sensing (CS) [6] regularizer to allow further undersampling of the data, and applies preconditioned conjugate gradients (PCG) [4] with monotonic line searches [5] to speed up the optimization. In addition, the alternating update process is modified to reduce the correlation between the phase and magnitude updates, which improves both results and increases the convergence rate.

Monotonic line search using optimization transfer principles for compressed sensing MRI reconstruction
Allison, Michael : Compressed sensing (CS) techniques can be used to accelerate MRI by reducing the amount of data required for a given reconstruction. The cost function associated with CS is composed of a data-fit term and a sparsity-promoting regularization term. The minimization of such cost functions is often complicated by the non-differentiable l1 norm in the regularizer. One way to avoid this complication is to approximate the l1 norm with a sum of hyperbola functions; the conjugate gradient algorithm can then be used to minimize the approximate cost function. However, one is then faced with the task of selecting an appropriate step size for each CG iteration. This work investigates using surrogate functions to develop monotonic line search algorithms for the case of complex data. The convergence properties of these new algorithms are compared with those of traditional techniques.
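To illustrate the idea, here is a minimal sketch of one way a surrogate-based monotone line search can be organized for a hyperbola-regularized cost with complex data. The operators A and C are placeholder callables (e.g., an undersampled Fourier encoding and a sparsifying transform), and the names and parameters are assumptions for illustration rather than the project's implementation.

```python
# Sketch of a monotone, surrogate-based line search for a cost of the form
#   f(x) = 1/2 ||A x - y||^2 + beta * sum_k psi(|[C x]_k|),
# where psi is the hyperbola approximation to |t| with corner parameter delta.
# The data term is exactly quadratic; the penalty is majorized along the search
# direction by a quadratic with curvature psi'(|t|)/|t|, giving monotone steps.
import numpy as np

def hyperbola_weight(t, delta):
    # omega(t) = psi'(|t|)/|t| for psi(t) = delta^2 * (sqrt(1 + |t|^2/delta^2) - 1)
    return 1.0 / np.sqrt(1.0 + np.abs(t) ** 2 / delta ** 2)

def monotone_step(x, d, y, A, C, beta, delta, ninner=5):
    """Return a step size alpha such that f(x + alpha * d) decreases monotonically."""
    Ad, Cd = A(d), C(d)
    res, Cx = A(x) - y, C(x)
    data_curv = np.real(np.vdot(Ad, Ad))            # exact curvature of the quadratic data term
    alpha = 0.0
    for _ in range(ninner):
        r = Cx + alpha * Cd
        w = hyperbola_weight(r, delta)              # surrogate curvatures for the penalty term
        grad = np.real(np.vdot(Ad, res + alpha * Ad)) + beta * np.real(np.vdot(Cd, w * r))
        curv = data_curv + beta * np.real(np.vdot(Cd, w * Cd))
        alpha -= grad / max(curv, np.finfo(float).tiny)   # Newton step on the 1D surrogate
    return alpha
```

Because each inner update minimizes a quadratic that majorizes the cost restricted to the line, the resulting step never increases the cost, unlike a heuristic fixed step size.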
---- optimization

Total Variation Denoising via Split Bregman iteration
Farmer, Brittan : Total variation (TV) regularization provides a strongly edge-preserving technique for denoising images. This technique can be implemented using the recently developed Split Bregman iteration. In this paper, we will discuss the performance of this algorithm. We will also compare regularization via an anisotropic TV norm and an isotropic TV norm. Finally, we will explore the local impulse response for this regularization technique.

Improving algorithm convergence by finding a tight lower bound for circulant majorizers of Hessians
Antonis Matakos : The purpose of this project is to find improved approximations to the Hessian matrix when creating a quadratic surrogate, in order to improve convergence speed. Since in many cases of interest the Hessian is (approximately) block Toeplitz, a natural approach is to find a block circulant approximation, which has the advantage of being easily invertible using FFTs. Many "optimal" circulant approximations have been proposed (e.g., T. Chan's optimal and Tyrtyshnikov's superoptimal circulant preconditioners), but none guarantees that it is a majorizer. In this work we will investigate approximations of the form C + alpha I, where C is an "optimal" circulant preconditioner and alpha is a constant that guarantees the majorization condition. The main goal of this work is to find a tight lower bound on the parameter alpha and an efficient method for calculating it. If the lower bound cannot be computed efficiently, we will find an efficiently computed approximation. We will consider both symmetric Toeplitz and more general non-Toeplitz Hessians. We will evaluate the proposed approximations by comparing the convergence speed of different choices of C when the lower bound for alpha is used, and by comparing the convergence speed of different choices of alpha when the choice of C is fixed.
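For concreteness, the following is a small, dense-matrix illustration of the C + alpha I idea described above. It forms T. Chan's optimal circulant approximation of a symmetric Toeplitz Hessian and then picks the smallest nonnegative alpha that makes C + alpha I a majorizer via an explicit eigenvalue computation on H - C; that eigenvalue computation is exactly the expensive step the project aims to bound or approximate cheaply, and the dense construction here is illustrative only.

```python
# Brute-force illustration of a circulant majorizer C + alpha*I for a symmetric
# Toeplitz Hessian H. Here alpha = max(0, lambda_max(H - C)) is found by an explicit
# (expensive) eigendecomposition; the project seeks a tight, cheap lower bound instead.
import numpy as np

def chan_optimal_circulant(t):
    """First column of T. Chan's optimal circulant approximation of the symmetric
    Toeplitz matrix whose first column is t."""
    n = len(t)
    c = np.empty(n)
    c[0] = t[0]
    for k in range(1, n):
        c[k] = ((n - k) * t[k] + k * t[n - k]) / n
    return c

def circulant_majorizer(t):
    t = np.asarray(t, dtype=float)
    n = len(t)
    diff = np.subtract.outer(np.arange(n), np.arange(n))
    H = t[np.abs(diff)]                                  # symmetric Toeplitz Hessian from its first column
    C = chan_optimal_circulant(t)[diff % n]              # circulant matrix from its first column
    alpha = max(0.0, np.linalg.eigvalsh(H - C).max())    # smallest alpha >= 0 with C + alpha*I >= H
    return C + alpha * np.eye(n), alpha
```

A cheaper route could, for example, estimate the extreme eigenvalue of H - C with a few Lanczos or power iterations, applying the Toeplitz and circulant factors by FFT rather than forming them densely.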