
10.1117/2.1201105.003747

Re-designing the camera for computational photography


Roarke Horstmeyer

Modified optical setups enable new optical imaging functionalities, including the ability to capture depth, varied angular perspectives, and multispectral content.

Although digital sensors have become almost ubiquitous, the majority of cameras that contain them have changed little in form over the past century. After capturing a focused 2D image, digitization leads to efficient storage for future editing and sharing, but little else. Computational cameras aim to break down the wall between the optics that captures a photo and the post-processing used to enhance it, with designs that jointly optimize for both. The initial images from these cameras appear blurry or even unrecognizable, but contain much useful information for the computer that immediately processes them. A post-processed image is clear and sharp, and can also provide measurements of an object's 3D location, spectral properties (color spectrum), or material composition, for example.

Many optical systems have been developed to extract these unseen clues from a scene of interest, some requiring little or no post-processing computation at all. For example, clever illumination can provide a direct indication of object depth, as demonstrated by Microsoft's Kinect camera. Multispectral imagery can even be created with an unmodified camera, such as a regular point-and-shoot device: combining images of the same scene taken with different filters over the lens captures more than the three standard color spectrum ranges (red, green, and blue). A minimal version of this stacking approach is sketched below.

The aim of many computational cameras is to add these useful functionalities to a conventional 2D image captured in a single snapshot. Such an ambitious goal inherently requires modification of the optical setup, which can often be realized by adding simple patterned elements to regular camera designs. The patterned optical elements used in current computational cameras fall into two general classes. The first includes elements placed at the camera aperture stop, typically referred to as pupil masks, that globally modify the entire image.
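The following sketch (an illustrative example, not taken from the article) shows how such a multi-filter capture sequence can be assembled into a multispectral cube with standard Python tools; the filenames, the number of filters, and the use of the imageio library are assumptions.

import numpy as np
import imageio.v3 as iio

# Hypothetical captures of one static scene, each taken through a different
# bandpass filter held over the lens of an ordinary point-and-shoot camera.
filter_files = [f"scene_filter_{i:02d}.png" for i in range(8)]

def to_gray(img):
    # Collapse an RGB capture to a single measured intensity per pixel.
    img = img.astype(np.float64)
    return img.mean(axis=2) if img.ndim == 3 else img

# Stack the filtered captures into an (H, W, num_filters) multispectral cube.
cube = np.stack([to_gray(iio.imread(f)) for f in filter_files], axis=-1)

# Each pixel now holds a coarse spectrum sampled at the filters' passbands,
# rather than only the three broad red, green, and blue channels.
print(cube.shape)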

Figure 1. Example of designing a camera's point-spread function (PSF) at three planes of defocus (z1, z2, z3). Three desired intensity distributions (I1, I2, I3) are input to an optimization procedure that finds an optimal pupil mask. Simulation and experiment (using a Nikon single-lens reflex camera and a Nikon AF NIKKOR 50mm f/1.8D lens with a printed binary pupil mask) show close agreement. This PSF, which begins as one point, then defocuses into four points, and then into nine points, offers a simple depth-detection scheme.
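A minimal scalar Fourier-optics sketch of the idea behind Figure 1 follows: a binary pupil mask, multiplied by a quadratic defocus phase, determines the PSF at each defocus plane. This is only a generic simulation under stated assumptions (unit magnification, an arbitrary mask pattern, and defocus expressed in waves), not the rank-constrained optimization used to produce the figure.

import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2  # squared radius, normalized to the aperture edge

# Hypothetical binary pupil mask: a clear circular aperture with an opaque bar.
pupil = (rho2 <= 1.0).astype(float)
pupil[np.abs(X) < 0.08] = 0.0

def psf_at_defocus(w20_waves):
    # Generalized pupil = mask times a quadratic defocus phase of W20 waves.
    defocus = np.exp(1j * 2.0 * np.pi * w20_waves * rho2)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil * defocus)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# PSFs at three defocus planes, analogous to z1, z2, z3 in Figure 1.
psfs = [psf_at_defocus(w) for w in (0.0, 1.0, 2.0)]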

The second class consists of elements placed very close to the image sensor, which locally modify regions of pixels, much like the Bayer filter pattern in most of today's color cameras. The post-processing of a computational image taken with cameras like these comes in a variety of forms, ranging from simple deconvolution (a basic Wiener-filter version is sketched below) and pixel re-binning (recombining data from adjacent sensor pixels to create one pixel of data) to more complex sparse-recovery procedures.

Pupil-mask design is a constantly evolving area of research, with various mask patterns proposed to extend a camera's depth of field,1 extract image depth,2 or offer super-resolution,3 among other enhanced functionalities. Each mask alters the camera's 3D point-spread function (PSF) to better present the information to be extracted during post-processing. We have demonstrated a method to optimally design any desired PSF intensity pattern in 3D (see Figure 1).4
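As a generic example of the simple deconvolution step mentioned above, the sketch below applies a Wiener filter to undo a known PSF; the Gaussian PSF, the noise-to-signal constant, and the synthetic test scene are illustrative assumptions, not the article's data.

import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    # Recover an estimate of the sharp image from a capture blurred by `psf`.
    H = np.fft.fft2(np.fft.ifftshift(psf))      # optical transfer function
    G = np.fft.fft2(blurred)                    # spectrum of the capture
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))

# Synthetic example: blur two point sources with a Gaussian PSF, then recover them.
N = 128
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
psf = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
psf /= psf.sum()
scene = np.zeros((N, N))
scene[40, 60] = 1.0
scene[90, 30] = 0.5
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)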


Figure 2. (a) A pixel-level high-pass optical transfer function (OTF) design using a surface-wave-enabled dark-field aperture, and a dark-field image obtained with this ring-like design. (b) A low-pass OTF created by a circular sub-aperture, and a conventional bright-field image obtained using this aperture geometry. a.u.: arbitrary units. (Figure courtesy of Guoan Zheng, Biophotonics Laboratory, Caltech.)

Sensor-based coding can help obtain different angular perspectives of an object (its light field), which is closely related to detecting the phase of an incoming wavefront. Once captured, interesting effects like digital refocusing can be achieved in post-processing (a simple shift-and-add version is sketched below). Periodic arrays of small lenses or pinholes provide a simple way to extract these varied perspectives from a single image. More complex periodic pattern designs can lead to phase detection5 or pixel-level optical-transfer-function design with background noise reduction6, 7 (see the example in Figure 2).

Finally, pupil- and sensor-based coding can be combined. For example, we can obtain a multispectral image in a single snapshot by inserting a variable filter at the pupil and a periodic array near the sensor (see Figure 3).8 In this way, 27 spectral channels are directly captured at the expense of the image's spatial resolution.

A large degree of flexibility is gained when dynamic optical elements are used to improve the computational image-capture process. Although research is still in its initial stages, we have developed a framework to optimally design the 3D PSF formation of a dynamic pupil mask, made with a small LCD screen in the camera lens.9 The screen's pattern changes during the exposure of one image to shape any desired 3D intensity pattern near the sensor.
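The shift-and-add refocusing referenced above can be sketched as follows: each sub-aperture view extracted from a lenslet- or pinhole-array capture is translated in proportion to its angular coordinate, and the shifted views are averaged. The view count, image size, and refocus parameter below are hypothetical, and integer-pixel shifts stand in for proper interpolation.

import numpy as np

def refocus(light_field, alpha):
    # light_field: (U, V, H, W) array of sub-aperture views from one snapshot.
    # alpha sets the synthetic focal plane (0 keeps the original focus).
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))  # shift proportional to view angle
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Usage with a hypothetical 5x5 grid of views parsed from a single capture.
views = np.random.rand(5, 5, 200, 300)
refocused_near = refocus(views, alpha=1.5)
refocused_far = refocus(views, alpha=-1.5)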

Figure 3. (a) Schematic of the snapshot multispectral camera layout. (b) Head-on image of the camera lens with a variable bandpass filter inserted at the aperture stop. (c) Example output after post-processing: a multispectral data cube of crayons (235 × 141 spatial resolution with 27 spectral channels), with measured spectra shown for four example pixels.

We have also demonstrated the extraction of mixed spatial, angular, and temporal scene content using a pupil element that changes over time.10 This design captures multiple frames of a scene's light field, allowing one to digitally refocus on a moving object, or to create an image with varying spatial resolution. Likewise, compressive sensing is possible with variable sensor-based elements11 (i.e., pixel-level optical control), which can also be used for object tracking and deblurring. These early results suggest how future cameras can greatly benefit from dynamic, adaptive optical elements tailored to their specific imaging tasks.

In general, since a computational camera captures and processes optical data to measure something beyond a simple 2D image, its light-capturing optics and post-processing procedures must be jointly optimized. Our future work will focus on applying the novel camera designs described here to image otherwise undetectable features, such as biomedical, microscopic, or ultrafast phenomena.

This research is supported in part by a National Defense Science and Engineering Graduate Fellowship.

Author Information

Roarke Horstmeyer
Media Lab
Massachusetts Institute of Technology
Cambridge, MA


References

1. E. R. Dowski and W. T. Cathey, Extended depth of field through wave-front coding, Appl. Opt. 34 (11), pp. 1859-1866, 1995. doi:10.1364/AO.34.001859
2. A. Greengard, Y. Schechner, and R. Piestun, Depth from diffracted rotation, Opt. Lett. 31 (2), pp. 181-183, 2006. doi:10.1364/OL.31.000181
3. A. Ashok and M. Neifeld, Pseudorandom phase masks for superresolution imaging from subpixel shifting, Appl. Opt. 46 (12), pp. 2256-2268, 2007. doi:10.1364/AO.46.002256
4. R. Horstmeyer, S. B. Oh, and R. Raskar, Iterative aperture mask design in phase space using a rank constraint, Opt. Express 18 (21), pp. 22545-22555, 2010. doi:10.1364/OE.18.022545
5. X. Cui, M. Lew, and C. Yang, Quantitative differential interference contrast microscopy based on structured-aperture interference, Appl. Phys. Lett. 93 (9), p. 091113, 2008. doi:10.1063/1.2977870
6. G. Zheng and C. Yang, Improving weak-signal identification via predetection background suppression by a pixel-level, surface-wave enabled dark-field aperture, Opt. Lett. 35 (15), pp. 2636-2638, 2010. doi:10.1364/OL.35.002636
7. G. Zheng, Y. Wang, and C. Yang, Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture, Opt. Express 18 (16), pp. 16499-16506, 2010. doi:10.1364/OE.18.016499
8. R. Horstmeyer, R. A. Athale, and G. Euliss, Modified light field architecture for reconfigurable multimode imaging, Proc. SPIE 7468 (1), p. 746804, 2009. doi:10.1117/12.828653
9. R. Horstmeyer, S. B. Oh, O. Gupta, and R. Raskar, Partially coherent ambiguity functions for depth-variant point spread function design, Prog. Electromagn. Res. Symp. Proc., pp. 267-272, 2011.
10. A. Agrawal, A. Veeraraghavan, and R. Raskar, Reinterpretable imager: towards variable post-capture space, angle, and time resolution in photography, Comput. Graph. Forum 29 (2), pp. 763-772, 2010. doi:10.1111/j.1467-8659.2009.01646.x
11. D. Reddy, A. Veeraraghavan, and R. Chellappa, P2C2: programmable pixel compressive camera for high speed imaging, Proc. IEEE Int'l Conf. Comput. Photogr., pp. 329-336, 2011.

© 2011 SPIE
