Project P17:
Displaying high dynamic range video

Project Goal

High dynamic range (HDR) images store more than 8 bits per color channel, avoiding the over- and underexposure that plagues ordinary images of scenes with large brightness variation. Unfortunately, common displays cannot reproduce this range of brightness.

There have been many papers at SIGGRAPH and elsewhere on tone mapping, i.e. converting a high dynamic range image into one suitable for display, typically by compressing global intensity variations while preserving local edges in some way. In the Stanford Multi-Camera Array Project, we produce high dynamic range video. The goal of this project is to tone map such a video for display.
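
To make the idea of "compressing global intensity variations" concrete, here is a minimal sketch of a single-frame global operator in the style of Reinhard's photographic tone mapper, written in Python with NumPy. It assumes the HDR frame is a floating-point array of linear radiance values; the function name, the key value, and the Rec. 709 luminance weights are illustrative choices, not part of any specific paper discussed above, and the local edge-preserving operators in the literature replace the final compression step with something spatially varying.

import numpy as np

def tonemap_frame(hdr, key=0.18, eps=1e-6):
    """Simple global tone mapping of one HDR frame.

    hdr: float array of shape (H, W, 3) holding linear radiance.
    Returns an 8-bit displayable image.
    """
    # Per-pixel luminance (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]

    # Scale the frame so its log-average luminance maps to the chosen "key".
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = (key / log_avg) * lum

    # Global compression: maps [0, inf) into [0, 1) while keeping dark detail.
    compressed = scaled / (1.0 + scaled)

    # Re-apply color, gamma-encode, and quantize for an 8-bit display.
    ratio = compressed / (lum + eps)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return (255.0 * ldr ** (1.0 / 2.2)).astype(np.uint8)
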

Project Scope

Any of the published tone mappers will work fine on individual images, but applied frame by frame the results will likely look inconsistent across a video sequence, because local edges come and go. The right answer is probably to extend the published tone-mapping algorithms to space-time "video cubes" in some natural way, looking for gradients or features in 3D spacetime. For some algorithms this might be very straightforward; for others, less so.
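
As a hedged illustration of why per-frame processing flickers and what a first step toward temporal consistency might look like, the sketch below smooths only the global exposure statistic (the log-average luminance) over time with an exponential moving average. It reuses the hypothetical tonemap_frame logic above and assumes frames arrive as linear-radiance NumPy arrays; it is not the full space-time "video cube" approach the project calls for, which would also make the local, edge-aware compression consistent across frames.

import numpy as np

def tonemap_video(frames, key=0.18, alpha=0.1, eps=1e-6):
    """Tone map a video with a temporally smoothed exposure estimate.

    frames: iterable of float (H, W, 3) arrays in linear radiance.
    Yields 8-bit frames. Computing the exposure independently per frame
    causes flicker; smoothing it over time removes that one source of
    inconsistency, though not the frame-to-frame changes in local detail.
    """
    smoothed_log_avg = None
    for hdr in frames:
        lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        log_avg = np.mean(np.log(lum + eps))

        # Exponential moving average of the log-average luminance.
        if smoothed_log_avg is None:
            smoothed_log_avg = log_avg
        else:
            smoothed_log_avg = (1 - alpha) * smoothed_log_avg + alpha * log_avg

        scaled = (key / np.exp(smoothed_log_avg)) * lum
        compressed = scaled / (1.0 + scaled)
        ratio = compressed / (lum + eps)
        ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
        yield (255.0 * ldr ** (1.0 / 2.2)).astype(np.uint8)
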

Tasks

This project is very research-oriented and should be considered quite hard; it is probably not suited for everyone.

A detailed point-by-point plan should be discussed with Marc Levoy. You can, however, start by obtaining the relevant tone-mapping papers from the SIGGRAPH proceedings.

Project Status

Andrew Adams (abadams at stanford),
Eino-ville Talvala (talvala at stanford)
2 open spots

Point of Contact

Marc Levoy and Dan Morris

Midterm Report

not yet submitted

Final Report

not yet submitted
