2014, pp 663-670

Performance Limits for Computational Photography


Introduction

Over the last decade, a number of Computational Imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring, and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing with appropriate reconstruction algorithms. However, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses signal priors. In this paper, we utilize a recently proposed framework incorporating all three components [13]. We model signal priors using a Gaussian Mixture Model (GMM), which allows us to analytically compute the Minimum Mean-Squared Error (MMSE). We analyze the specific problems of motion and defocus deblurring, showing how to find the optimal exposure time for motion deblurring cameras and the optimal aperture setting for defocus deblurring cameras. This framework gives us the machinery to answer an open question in computational imaging: "To deblur or denoise?"
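To illustrate why a GMM prior makes the MMSE tractable, consider the standard linear multiplexing model y = Hx + n with Gaussian noise n ~ N(0, σ²I). Under a GMM prior on x, the posterior is again a GMM, so the MMSE estimate is a closed-form weighted combination of per-component Wiener-style estimates. The sketch below is an illustration of this general fact, not the authors' implementation; the variable names and dimensions are hypothetical.

```python
import numpy as np

def gmm_mmse(y, H, sigma, weights, means, covs):
    """MMSE estimate of x from y = H x + n, n ~ N(0, sigma^2 I),
    under a GMM prior x ~ sum_k w_k N(mu_k, S_k).
    The posterior is again a GMM; the MMSE estimate is its mean."""
    m = len(y)
    log_resp = []    # unnormalized log posterior weight of each component
    cond_means = []  # E[x | y, component k]
    for w, mu, S in zip(weights, means, covs):
        C = H @ S @ H.T + sigma**2 * np.eye(m)  # marginal covariance of y under component k
        r = y - H @ mu
        sol = np.linalg.solve(C, r)
        # log of w_k * N(y; H mu_k, C_k), dropping the shared (2*pi)^(-m/2) factor
        log_resp.append(np.log(w) - 0.5 * (np.linalg.slogdet(C)[1] + r @ sol))
        cond_means.append(mu + S @ H.T @ sol)   # per-component Wiener estimate
    log_resp = np.array(log_resp)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                          # posterior component weights
    return sum(p * mk for p, mk in zip(resp, cond_means))
```

With a single mixture component this reduces to the classical Wiener estimate S Hᵀ(H S Hᵀ + σ²I)⁻¹y, which is one way to see how the sensor noise level σ and the multiplexing matrix H jointly determine reconstruction error.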