
High Dynamic Range Photography

Prof. Wolf
“Dynamic range”
A ratio of “bright to dark”
For an image:
The numerical ratio between the lightest and the darkest pixel
(exclude outliers)
For a display (like a monitor):
The ratio of maximum to minimum luminance the screen is capable of
(luminance = perceived brightness: physics + biology)
For a camera:
The ratio of the luminance that saturates the (digital) sensor to the
luminance just above the noise floor.
Conventional images offer a dynamic range of about 100:1, consistent
with defining each pixel by red, green, and blue values of 0–255 in
digital imaging.
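To make the definition concrete, here is a minimal sketch in Python, assuming an image already loaded as a numpy array; the 1st/99th percentiles stand in for “exclude outliers” and are an arbitrary choice:

    import numpy as np

    def dynamic_range(img, lo_pct=1, hi_pct=99):
        # Collapse RGB to a single brightness channel if needed.
        gray = img.mean(axis=-1) if img.ndim == 3 else img
        lo = np.percentile(gray, lo_pct)   # "darkest" pixel, outliers excluded
        hi = np.percentile(gray, hi_pct)   # "lightest" pixel, outliers excluded
        return hi / max(lo, 1e-6)          # guard against a pure-black floor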
Not enough for many scenes because…

In the real world, the dynamic range can be 100,000,000…

(The sun at noon vs. starlight.)


Condition                                   Illumination
Starlight                                   0.001
Moonlight                                   0.1
Indoor lighting                             100
Maximum intensity of a CRT monitor [1][2]   100
Sunlight                                    100,000
Your eyes are often working with 5 orders of magnitude of illumination
simultaneously. The scene therefore has a “high dynamic range”
(HDR).

[1] No, LCDs aren’t much better.
[2] Nor printed paper – it can’t glow.
Conventional displays can only show “low dynamic range” (LDR)
images.
LDR images may not “satisfy”… Tones aren’t faithful, and there isn’t
much contrast (the dimmest things disappear, the brightest things
“flare”). A single photo can’t capture all we want to capture. Each
exposure in a series of “bracketed” exposures may do a better job, locally.
[Figure: three bracketed exposures. NORMAL: OK overall, but the sky is
blah and much detail is lost in shadow. OVEREXPOSED: nice detail on
the building. UNDEREXPOSED: pretty sky.]
HDR Imaging
We can imagine that a computer might be persuaded to process
multiple images, combine the best parts, and then store more
information (dynamic range) than was in a single photo.
We’d need more than the 0–255 values of R, G, and B at each pixel
in a file to hold a scene with increased dynamic range. Such a file is
called a “radiance map.”
BUT that file can’t be displayed on a monitor, a piece of (photo)
paper, or any other typical output device and yield something better
than an LDR result! It’s the fault of the output medium.
[Figure: a radiance map displayed directly on an LDR device looks
worse than the images it was made from: washed-out sky, no detail in
the buildings. But it isn’t intended for direct display.]
So we “tone map” (dynamic range reduction).

[Figure: LDRs → HDR → LDR (but better). Still an LDR image, but nicer
than the original 3 images.]
HDR images have an “Ansel Adams” look. But his “zone system”
(which used one exposure) only worked for black and white
photography. HDR exploits ‘all’ colors.

Pure black is
somewhere in each
photo.
So is pure white.
So are most/all of the
tones in between.
Wikipedia: “The Zone System gained an early reputation for being
complex, difficult to understand, and impractical to apply to
real-life shooting…”
[Photo: Ansel Adams, Grand Teton National Park, 1942]

Additional problems with faithful reproduction of scenes
* Sensitivity of the human eye varies with color (λ).
* Sensitivity of the digital camera sensor varies with color,
* and the sensor has a finite number of bits (e.g., 12) of resolution,
which sounds like a dynamic range of 2¹² = 4096:1 but is really about
100:1. The response would ideally be logarithmic, to capture widely
varying luminances, but it is linear, so most of those bits don’t hold
useful info (see the toy calculation after this list).

(Black and white film captures 10,000:1, but “flare” (around the sun, or a
bright light) hurts you at the upper end. Your B&W or color print is ultimately
about 100:1. Paper doesn’t glow.)
* A spectroradiometer ($5,000) pointed at your printer or monitor reveals poor
color consistency.
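Why most of those bits sit in the highlights: with a linear encoding, each stop down from saturation gets half as many of the 4096 codes. A toy calculation (Python):

    codes = 4096                                # 12-bit linear sensor
    for stop in range(12):
        hi = codes >> stop                      # top of this stop's code range
        lo = codes >> (stop + 1)                # bottom of this stop's code range
        print(f"stop {stop}: {hi - lo} codes")  # 2048, 1024, ..., 1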

So: multiple exposures, ranging from generally overexposed to generally
underexposed. Each exposure is hopefully perfectly “registered” (lined
up) with the others. (Tripod!) Exposures differ by 1 f-stop each.
(F-stop: the ratio of the focal length of the lens to the diameter of
its aperture. Going 1 f-stop from f/1.4 to f/2.0 cuts the area of the
aperture by a factor of 2, which means a factor of 2 increase in
exposure time is required to compensate.)
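A quick check of that f-stop arithmetic (a sketch; N is the f-number):

    # f-number N = focal_length / aperture_diameter; light gathered per
    # unit time scales with aperture area, i.e. proportional to 1/N**2.
    area_f14 = 1 / 1.4**2              # relative aperture area at f/1.4
    area_f20 = 1 / 2.0**2              # relative aperture area at f/2.0
    print(area_f14 / area_f20)         # ~2.04: one stop = half the light,
                                       # so double the exposure time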
Each pixel is properly exposed in one or more images in
the sequence.
First, to bring the images into the same “domain,” divide each pixel
by the exposure time for that image.
Second, average pixels across exposures.
Easy… An HDR image!
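As a minimal sketch of exactly that recipe (assuming a linear sensor response, perfect registration, and images already loaded as float numpy arrays):

    import numpy as np

    def naive_hdr(images, exposure_times):
        # Bring each image into the same domain by dividing by its
        # exposure time, then average the estimates across exposures.
        estimates = [img / t for img, t in zip(images, exposure_times)]
        return np.mean(estimates, axis=0)   # the "radiance map"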

Not so fast… Difficulties!


• Need to omit pixels that are over- or under-exposed. And maybe
“devalue” pixels that are even close to either state.
• Images, even taken on a good tripod, won’t line up perfectly. You
will have to distort them (shift them, rotate them, warp them) to
match pixels.
• Even if registration isn’t a problem, things in the scene (people,
water, clouds, leaves) move between exposures. “Ghosting” can
be reduced if you track motion.
• Pixels can be averaged with equal weights if the camera’s response
at different exposure times is linear. (Dividing by factors of 2
makes them equal.) But it isn’t linear, for lots of reasons,
including manufacturers’ tweaking of optics to get lively colors.
So ask the manufacturer for their response curves. Sorry: proprietary
information. So you will have to figure that out yourself, which you
can do from the same series of exposures (see the sketch after this list).
• Shortest exposures (great clouds!) involve few photons landing on
each “cell” of the digital sensor. Statistical noise may be
significant.
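A sketch of how those pieces fit together, using OpenCV’s photo module (cv2.createAlignMTB, cv2.createCalibrateDebevec, and cv2.createMergeDebevec are real OpenCV calls; the file names and exposure times here are made up). The Debevec & Malik-style calibration recovers the response curve from the exposure series itself, and the merge down-weights pixels near the over- and under-exposed extremes:

    import cv2
    import numpy as np

    files = ["under.jpg", "normal.jpg", "over.jpg"]          # hypothetical names
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # exposure times (s)
    imgs = [cv2.imread(f) for f in files]

    # Registration: median-threshold bitmap alignment fixes small shifts
    # between frames (it won't fix subjects moving within the scene).
    cv2.createAlignMTB().process(imgs, imgs)

    # Recover the camera's response curve from the exposures themselves,
    # then merge using it.
    response = cv2.createCalibrateDebevec().process(imgs, times)
    hdr = cv2.createMergeDebevec().process(imgs, times, response)
    cv2.imwrite("radiance.hdr", hdr)   # float radiance map (Radiance .hdr)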

The HDR image must now be tone mapped to produce an improved LDR image.
What if we just compress the 10,000:1 dynamic range of the HDR
image down to 100:1 to match the display’s dynamic range?
[Photo, page 236] Doesn’t work well – almost complete loss of
detail/contrast.

The large dynamic range of the HDR image must be compressed to fit
into the display range, while preserving the detail.
This requires study of the human visual system…
The pupil, the rod-cone system, photochemical reactions,
photoreceptor mechanisms.
The pupil: Can only vary the amount of light entering the eye by a
factor of 16. Often ignored…
For the rest: a number of biological processes, and a number of
models, are required to figure out the best way to compress.
[Photo, page 247] If photoreceptors are exposed continuously to high
intensities, the initial saturated response gradually returns toward
the dark-adapted response, and the photoreceptors’ sensitivity to
incremental stimuli is gradually restored.
Too hard!
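In practice, rather than modeling photoreceptor adaptation directly, many tone mappers settle for a simple compressive curve. A minimal sketch of one common global operator, Reinhard’s L/(1+L) (assuming an RGB float radiance map; the key value 0.18 is the conventional mid-gray choice, and this is just one model among many):

    import numpy as np

    def tonemap_reinhard(hdr, key=0.18):
        # Luminance of the radiance map (Rec. 709 weights).
        L = 0.2126*hdr[..., 0] + 0.7152*hdr[..., 1] + 0.0722*hdr[..., 2]
        L_avg = np.exp(np.mean(np.log(L + 1e-6)))     # log-average luminance
        Ls = key * L / L_avg                          # anchor the average at mid-gray
        Ld = Ls / (1.0 + Ls)                          # compress highlights, keep shadows
        scale = Ld / np.maximum(L, 1e-6)              # per-pixel luminance ratio
        return np.clip(hdr * scale[..., None], 0, 1)  # displayable LDR in [0, 1]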
