MAT 595M Seminar Series

Computing Realistic Imagery of the World Around Us

February 3, 2014
5:30 pm to 6:45 pm


Pradeep Sen, Associate Professor, Department of Electrical and Computer Engineering, UC Santa Barbara


Dr. Pradeep Sen is an Associate Professor in the Department of
Electrical and Computer Engineering at the University of California,
Santa Barbara. He received his B.S. from Purdue University and
his M.S. and Ph.D. from Stanford University. His core research
is in the areas of computer graphics, computational image processing,
and computer vision. He is the co-author of seven ACM SIGGRAPH papers
and has been awarded more than $1.7 million in research funding,
including an NSF CAREER award to study the application of sparse
reconstruction algorithms to computer graphics and imaging. He
received two best-paper awards at the Graphics Hardware conference
in 2002 and 2004.


Humans are visual creatures, so the ability to reproduce
accurate images of the world around us is an important problem.
In this talk, I will discuss some of our work to address this
problem in two very different areas.

First, we examine the problem of photorealistic image synthesis,
which involves the generation of an image from a scene description.
The most powerful methods for this are based on Monte Carlo (MC)
algorithms, but they are plagued by noise at low sampling rates,
which has made them impractical for many applications and limited
their adoption. We propose a new way to think about the source of Monte
Carlo noise and use this understanding to create an image-space
filter that removes MC noise but preserves important scene detail.
This enables the generation, in a few minutes, of photorealistic
images comparable to those that take hundreds of times longer
to render.
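To give a sense of why low sampling rates are noisy, the sketch below is a generic Monte Carlo estimator over a toy one-dimensional "radiance" function. It is an illustration of MC variance only, not the image-space filtering method described in the talk; the function `f` and sample counts are arbitrary choices.

```python
import random

def mc_estimate(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1]
    using n uniform random samples. A generic illustration of
    MC variance, not the talk's rendering method."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# Toy integrand whose true integral over [0, 1] is 0.5.
f = lambda x: x

low = mc_estimate(f, 8)        # few samples: a noisy estimate
high = mc_estimate(f, 100_000) # many samples: close to 0.5
```

Because the error of the estimate shrinks only as 1/sqrt(n), halving the noise requires four times as many samples, which is why denoising filters are attractive compared to brute-force sampling.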

Second, I will discuss our recent work on high dynamic range (HDR)
imaging. Although real-world illumination has a high dynamic range
(the darkest and brightest parts of a scene can differ in
intensity by many orders of magnitude), conventional digital cameras
capture only a narrow portion of this range. A common way to capture
HDR images with a conventional camera is to take a set of images at
different exposures and then merge them together. This works well for
static scenes, but produces visible ghosting artifacts for scenes with
motion. In the second half of my talk, I present a new way to think
about reconstructing HDR images from a set of inputs, based on an
optimization that minimizes what we call the HDR image
synthesis equation. Using this framework, we show that we can produce
results superior to previous techniques for HDR imaging and demonstrate
how this approach can be extended to video as well.
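The exposure-merging idea described above can be sketched for a single pixel as a weighted average of the per-exposure radiance estimates. This is a simple static-scene merge with an assumed triangle weighting function (a common choice, not necessarily the one used in the talk); handling motion without ghosting is exactly what the optimization framework addresses.

```python
def merge_hdr(pixels, times):
    """Merge one pixel's values (in [0, 1], assumed already
    linearized) captured at different exposure times into a
    single radiance estimate. A static-scene sketch; the talk's
    method additionally handles scene motion."""
    def weight(p):
        # Triangle weight peaking at mid-gray: distrusts pixels
        # that are nearly black (noisy) or nearly white (clipped).
        return max(1e-6, 1.0 - abs(2.0 * p - 1.0))
    num = sum(weight(p) * (p / t) for p, t in zip(pixels, times))
    den = sum(weight(p) for p in pixels)
    return num / den

# The same scene radiance (0.2) observed at three exposure times,
# with sensor values clipped at 1.0:
times = [0.5, 1.0, 4.0]
pixels = [min(1.0, 0.2 * t) for t in times]
radiance = merge_hdr(pixels, times)  # recovers approximately 0.2
```

For a static pixel all exposures agree on the radiance, so the weighted average recovers it; when the scene moves between exposures, the estimates disagree and a naive merge produces the ghosting artifacts mentioned above.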