MAT 265: Open Projects in Optical/Motion - Computational Processes Resources


Hardware and Software

What is a Computational Camera?



Related News

The Frankencamera is an experimental platform for computational photography.
The Frankencamera: An Experimental Platform for Computational Photography (PDF)
Experimental Platforms for Computational Photography (PDF)
'Frankencamera': A Giant Leap For Digital Photos?
Stanford “Frankencamera” project aims to create an open source imaging platform

Nokia N900


Related Research Group

Computational Photography on Cell Phones. Stanford researchers have built a software platform that applies the techniques from the "Frankencamera" to the Nokia N900.
Stanford 'Frankencamera' platform available on Nokia N900 ahead of unveiling at graphics conference
New Focus for Digital Photography
Computational Photography
Four Eyes Lab (University of California, Santa Barbara)

High-Rank 3D Display using Content-Adaptive Parallax Barriers


Camera Culture Group, MIT
To build our prototype HR3D display, we disassembled two Viewsonic VX2265wm 120Hz LCD panels. The front panel was completely disassembled, removing both the front diffusing polarizing layer and rear transparent polarizing layer. The images below show the sequence of steps to disassemble the display. The most difficult step is the final step of removing the polarizing films from the LCD glass. After carefully pulling up the films, we used a pencil eraser and acetone to remove the adhesive.

Looking Around Corners using Femto-Photography

Camera Culture Group, MIT
The device has been developed by the MIT Media Lab’s Camera Culture group in collaboration with Bawendi Lab in the Department of Chemistry at MIT. An earlier prototype was built in collaboration with Prof. Joe Paradiso at MIT Media Lab and Prof. Neil Gershenfeld at the Center for Bits and Atoms at MIT. A laser pulse that lasts less than one trillionth of a second is used as a flash, and the light returning from the scene is collected by a camera at the equivalent of close to 1 trillion frames per second. Because of this high speed, the camera is aware of the time it takes for the light to travel through the scene. This information is then used to reconstruct the shape of objects that are visible from the position of the wall, but not from the laser or camera.
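The core principle above is that arrival time fixes path length. A minimal sketch of that relation, with illustrative function names and geometry (not the Camera Culture group's actual code):

```python
# Time-of-flight principle behind femto-photography: each photon's arrival
# time fixes the total distance it travelled, and the delay between the
# direct wall return and a bounce off a hidden object encodes the hidden
# object's distance (the light covers that distance twice: out and back).

C = 299_792_458.0  # speed of light, m/s

def path_length(arrival_time_s: float) -> float:
    """Total distance travelled by light arriving after arrival_time_s."""
    return C * arrival_time_s

def hidden_depth(t_direct: float, t_bounce: float) -> float:
    """Extra one-way distance to a hidden point, from the delay between the
    direct wall return and the bounce off the hidden object."""
    return C * (t_bounce - t_direct) / 2.0

# A delay of one nanosecond corresponds to roughly 15 cm of one-way distance,
# which is why picosecond-scale timing is needed for useful resolution.
print(round(hidden_depth(0.0, 1e-9), 3))  # → 0.15
```

Intersecting the ellipsoids implied by many such path-length measurements is what lets the system localize points it cannot see directly.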

Fluttered Shutter Camera

Camera Culture Group, MIT
Rather than leaving the shutter open for the entire exposure duration, we "flutter" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image, and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal, including extremely large motions, textured backgrounds and partial occluders.
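The "broad-band filter" claim can be checked numerically: the spectrum of an all-open box exposure has exact zeros (frequencies lost forever), while a well-chosen on/off code keeps every frequency alive. This sketch searches random binary codes for a flat spectrum; the code length and search procedure are illustrative, not the paper's exact sequence:

```python
# Compare the frequency response of a conventional box-filter exposure with
# a pseudo-random coded exposure. A zero in the response means deconvolution
# is ill-posed at that frequency; the coded shutter avoids such zeros.
import numpy as np

def min_gain(kernel, N=512):
    """Smallest magnitude of the kernel's discrete frequency response."""
    return np.abs(np.fft.rfft(kernel, N)).min()

n = 52
box = np.ones(n)  # conventional shutter: open for the whole exposure

def best_code(n=52, trials=2000, seed=0):
    """Pick the random on/off code whose spectrum has the largest minimum."""
    rng = np.random.default_rng(seed)
    codes = rng.integers(0, 2, size=(trials, n)).astype(float)
    gains = [min_gain(c) for c in codes]
    return codes[int(np.argmax(gains))], max(gains)

code, gain = best_code()
# The box filter annihilates some frequencies; the coded exposure does not.
print(min_gain(box) < 1e-9, gain > min_gain(box))  # → True True
```

This is exactly why the deblurring problem becomes well-posed: dividing by the code's spectrum never divides by (near) zero.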

360 Degree Cameras

CAVE, Columbia University
This project involves the application of our work on catadioptric cameras with wide fields of view. We have developed compact 360 degree cameras that have been mounted on a variety of robots and used to control or even drive the robots from remote locations. We have also developed intelligent surveillance systems that use 360 degree video to simultaneously track multiple objects moving within the large field of view. A perspective video stream is computed from the omnidirectional video stream that seeks to keep the moving objects within its field of view.

In another project, we have combined a 360 degree camera (master) with a conventional pan/tilt/zoom (PTZ) camera (slave) and used the 360 degree camera to determine where the PTZ camera should look next.

We have also developed an imaging system called the Zoomnicam, which is an omnidirectional camera with a very wide range of optical zoom settings. In this system, the curved mirror is mounted on a controllable translational stage and the optics includes a controllable zoom lens. This enables the system to go from super-wide angle imaging to high-zoom imaging by simply translating the mirror and changing the optical zoom of the lens.
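Computing a perspective stream from an omnidirectional one amounts to resampling: each pixel of a virtual perspective camera defines a ray, and that ray is looked up in the panoramic image. A hedged sketch assuming an equirectangular panorama (the actual CAVE systems use catadioptric projections; resolutions and field-of-view values here are made up):

```python
# Map a pixel of a virtual perspective camera (with a chosen pan/tilt
# heading) to coordinates in an equirectangular panorama. Sampling the
# panorama at these coordinates for every (u, v) yields the perspective view.
import math

def perspective_to_pano(u, v, W, H, pan, tilt, fov, pw, ph):
    """Map pixel (u, v) of a pw x ph perspective view (heading `pan`,
    `tilt`, horizontal FOV `fov`, all radians) into a W x H panorama."""
    f = (pw / 2) / math.tan(fov / 2)            # focal length in pixels
    x, y, z = u - pw / 2, v - ph / 2, f         # viewing ray, camera frame
    # rotate the ray by tilt (about x), then pan (about y)
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    lon = math.atan2(x, z)                       # longitude in [-pi, pi]
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    return ((lon / math.pi + 1) * W / 2, (lat / (math.pi / 2) + 1) * H / 2)

# The centre pixel of an unrotated 90-degree view lands at panorama centre.
print(perspective_to_pano(320, 240, 2048, 1024, 0.0, 0.0,
                          math.radians(90), 640, 480))  # → (1024.0, 512.0)
```

A tracker then only has to update `pan` and `tilt` each frame to keep a moving object inside the synthesized view.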

Adaptive Dynamic Range Imaging

CAVE, Columbia University
This project is focused on the development of a new approach to imaging that significantly enhances the dynamic range of an imaging system. The key idea is to adapt the exposure of each pixel on the detector based on the radiance value of the corresponding scene point. This adaptation is done in optical domain, that is, during image formation. In practice, this is achieved using a two-dimensional spatial light modulator, whose transmittance function can be varied with high resolution over space and time.

A real-time control algorithm has been developed that uses a captured image to compute the optimal transmittance function for the spatial modulator. The captured image and the corresponding transmittance function are used to compute a very high dynamic range image that is linear in scene radiance.
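The control loop described above can be simulated with a simple multiplicative model: the modulator attenuates scene radiance L by a per-pixel transmittance T, the detector clips at full well, and radiance is recovered as captured/T. The update rule below is an illustrative stand-in for the project's real-time control algorithm, not its published form:

```python
# Per-pixel exposure adaptation: drive each pixel's measurement toward a
# mid-range target by adjusting the modulator's transmittance, then recover
# a high-dynamic-range image that is linear in scene radiance.
import numpy as np

def capture(L, T, full_well=1.0):
    """Optical attenuation by the modulator, then detector clipping."""
    return np.minimum(L * T, full_well)

def adapt(T, I, target=0.5, t_min=0.005):
    """One control step: rescale transmittance toward the target level."""
    return np.clip(T * target / np.maximum(I, 1e-6), t_min, 1.0)

L = np.array([0.02, 0.5, 40.0, 150.0])   # scene radiance spans ~4 decades
T = np.ones_like(L)                      # start fully transparent
for _ in range(10):                      # adaptation iterations
    I = capture(L, T)
    T = adapt(T, I)
I = capture(L, T)
print(np.allclose(I / T, L, rtol=1e-3))  # linear-in-radiance recovery
```

Dark pixels keep full transmittance while bright pixels are attenuated until they fall below the clipping level, which is what extends the effective dynamic range.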
Slides (PPT) from ICCV 2003

Catadioptric Cameras for 360 Degree Imaging

CAVE, Columbia University
This project is geared towards the development of catadioptric (lens + mirror) imaging systems with unusually large fields of view. One important design goal in catadioptric imaging is choosing the shapes of the mirrors in a way that ensures that the complete imaging system has a single effective viewpoint. The reason a single viewpoint is desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In the first part of the project, we have derived the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We have derived expressions for the spatial resolution of a catadioptric camera in terms of the resolution of the cameras used to construct it. In addition, we have analyzed the defocus blur caused by the use of a curved mirror in a catadioptric sensor. More

Catadioptric Stereo: Planar and Curved Mirrors

CAVE, Columbia University
Conventional stereo uses two or more cameras to compute three-dimensional scene structure. Catadioptric stereo enables the capture of multiple views of a scene using a single camera. In this project, we are exploring the use of planar as well as curved mirrors to develop catadioptric stereo systems.

By placing planar mirrors in front of the camera, multiple virtual cameras are created. We have studied the geometric properties and self-calibration of planar catadioptric systems in detail. A couple of prototypes have been developed. In addition, a real-time stereo algorithm has been developed that computes depth maps at frame rate. The use of a single video camera ensures that the radiometric properties of both views of the scene are identical. This leads to more robust correspondence and hence depth estimation.
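The "virtual cameras" created by a planar mirror have a closed form: the mirror plane with unit normal n and offset d reflects the real camera centre c to c' = c - 2(n·c + d)n, and the distance between the two centres is the stereo baseline. A small illustrative sketch (the mirror placement is made up):

```python
# Virtual-camera construction behind planar catadioptric stereo: reflecting
# the real camera centre across the mirror plane yields a second viewpoint,
# so a single physical camera captures a stereo pair.
import numpy as np

def reflect(point, n, d):
    """Reflect a 3D point across the plane n.x + d = 0 (n must be unit)."""
    n = np.asarray(n, dtype=float)
    point = np.asarray(point, dtype=float)
    return point - 2.0 * (n @ point + d) * n

camera = np.array([0.0, 0.0, 0.0])           # real camera at the origin
n, d = np.array([1.0, 0.0, 0.0]), -0.2       # mirror plane x = 0.2 m
virtual = reflect(camera, n, d)              # virtual camera at x = 0.4 m
baseline = np.linalg.norm(virtual - camera)  # stereo baseline: 0.4 m
print(virtual, baseline)
```

Because both views pass through the same lens and sensor, their radiometric responses match exactly, which is the robustness advantage noted above.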

Cata-Fisheye Camera for Panoramic Imaging

CAVE, Columbia University
The cata-fisheye camera is a panoramic imaging system which uses a curved mirror as a simple optical attachment to a fisheye lens. The cata-fisheye camera has a simple, compact and inexpensive design, and at the same time yields high optical performance. It is most suitable for capturing panoramic images with a reasonable vertical field of view that has more or less equal coverage above and beneath the horizon - ideal for applications like perimeter surveillance, video conferencing and navigation. The camera captures the desired panoramic field of view in two parts -- the upper part is obtained directly by the fisheye lens, and the lower part after reflection by the curved mirror. These two parts of the field of view have a small overlap that is used to stitch them into a single seamless panorama.

Depth from Defocus

CAVE, Columbia University
Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth from defocus offers a direct solution to fast and dense range estimation. It is computationally efficient as it circumvents the correspondence problem faced by stereo and feature tracking in structure from motion. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems including recovery of textureless surfaces, precise blur estimation and magnification variations caused by defocusing.

In the first part of this project, both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to ensure maximum accuracy and spatial resolution in computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing and computational elements of the depth from defocus system. Defocus invariant magnification is achieved by the use of an additional aperture in the imaging optics.

A prototype focus range sensor has been developed that produces up to 512x480 depth images at 30 Hz with an accuracy better than 0.3%. Several experiments have been conducted to verify the performance of the sensor.
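The geometric relation that depth from defocus inverts is the thin-lens blur model: for a lens of focal length f and aperture a with the sensor at distance s, a point at depth u produces a blur circle of diameter b(u) = a·s·|1/f − 1/u − 1/s|. Comparing blur across two images with different settings resolves the near/far ambiguity in the absolute value. A hedged sketch with illustrative parameters (not the CAVE sensor's actual optics):

```python
# Forward thin-lens blur model and its inversion: measure the blur-circle
# diameter, recover the scene depth. The `near` flag selects which side of
# the focal plane the point lies on (in practice disambiguated by a second
# image with different sensor settings).

def blur_diameter(u, f=0.05, a=0.025, s=0.0526):
    """Blur-circle diameter (m) for a scene point at depth u (m)."""
    return a * s * abs(1.0 / f - 1.0 / u - 1.0 / s)

def depth_from_blur(b, f=0.05, a=0.025, s=0.0526, near=False):
    """Invert blur_diameter; `near` picks the solution before the focal plane."""
    sign = 1.0 if near else -1.0
    return 1.0 / (1.0 / f - 1.0 / s + sign * b / (a * s))

u_true = 5.0                          # metres (beyond the focused depth)
b = blur_diameter(u_true)
print(round(depth_from_blur(b), 3))   # → 5.0
```

The projected illumination pattern mentioned above exists precisely so that `b` can be estimated reliably even on textureless surfaces.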

More Hardware related project from CAVE

CAVE, Columbia University
Eyes for Relighting
Generalized Mosaicing
Motion Deblurring Using Hybrid Imaging
Programmable Imaging: Micro-Mirror Arrays
Radial Imaging Systems
Spherical Mosaics: Regular and Stereoscopic
Super-Resolution: Jitter Camera
Temporal Modulation Imaging
True Spherical Camera
Wide Angle Lenses and Polycameras

iCinema, The University of New South Wales, Sydney, Australia
Research Focus


Dennis Del Favero
AVIE - Advanced Visualisation and Interaction Environment
Distributed Interface Systems
Theories of Interactive Digital Narrative Systems


FCamera is an open-source camera application for the N900. You can use it as an alternative to the built-in camera application. It provides significantly more manual control over capture parameters, and shoots raw (in the Adobe DNG format). More