29-08-2014, 10:49 AM
Femto-Photography
Abstract
We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies
of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed
do not exist, we re-purpose modern imaging hardware to record an
ensemble average of repeatable events that are synchronized to a
streak sensor, in which the time of arrival of light from the scene is
coded in one of the sensor’s spatial dimensions.
1 Introduction
Forward and inverse analysis of light transport plays an important
role in diverse fields, such as computer graphics, computer vision,
and scientific imaging. Because conventional imaging hardware is
slow compared to the speed of light, traditional computer graphics
and computer vision algorithms typically analyze transport using
low time-resolution photos. Consequently, any information that is
encoded in the time delays of light propagation is lost. The joint design of novel optical hardware and smart computation, i.e., computational photography, has expanded the ways in which we capture and analyze visual information.
2 Related Work
Ultrafast Devices The fastest 2D continuous, real-time monochromatic camera operates at hundreds of nanoseconds per frame [Goda et al. 2009] (about 6·10^6 frames per second), with a spatial resolution of 200×200 pixels, less than one third of what we
achieve. Avalanche photodetector (APD) arrays can reach temporal
resolutions of several tens of picoseconds if they are used in a
photon starved regime where only a single photon hits a detector
within a time window of tens of nanoseconds [Charbon 2007].
Repetitive illumination techniques used in incoherent LiDAR [Tou
1995; Gelbart et al. 2002] use cameras with typical exposure times
on the order of hundreds of picoseconds [Busck and Heiselberg
2004; Colaço et al. 2012], two orders of magnitude slower than
our system. Liquid nonlinear shutters actuated with powerful laser
pulses have been used to capture single analog frames imaging
light pulses at picosecond time resolution [Duguay and Mattick
1971]. Other sensors that use a coherent phase relation between
the illumination and the detected light, such as optical coherence
tomography (OCT) [Huang et al. 1991], coherent LiDAR [Xia
and Zhang 2009], light-in-flight holography [Abramson 1978],
or white light interferometry [Wyant 2002], achieve femtosecond
resolutions; however, they require light to maintain coherence
(i.e., wave interference effects) during light transport, and are
therefore unsuitable for indirect illumination, in which diffuse reflections remove coherence from the light. Simple streak sensors
capture incoherent light at picosecond to nanosecond speeds, but
are limited to a line or low resolution (20 × 20) square field of
view [Campillo and Shapiro 1987; Itatani et al. 2002; Shiraga
et al. 1995; Gelbart et al. 2002; Kodama et al. 1999; Qu et al.
2006]. They have also been used as line scanning devices for
image transmission through highly scattering turbid media, by
recording the ballistic photons, which travel a straight path through
the scatterer and thus arrive first at the sensor [Hebden 1993].
3 Capturing Space-Time Planes
We capture time scales orders of magnitude faster than the exposure times of conventional cameras, in which photons reaching the
sensor at different times are integrated into a single value, making
it impossible to observe ultrafast optical phenomena. The system
described in this paper has an effective exposure time down to 1.85
ps; since light travels at 0.3 mm/ps, it advances approximately 0.5 mm between frames in our reconstructed movies.
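The distance-per-frame figure above is a one-line calculation; a minimal sketch (constant names are ours):

```python
# Back-of-envelope check, using the values stated in the text: how far
# light travels during one 1.85 ps effective exposure.
C_MM_PER_PS = 0.3          # speed of light, ~0.3 mm per picosecond
EXPOSURE_PS = 1.85         # effective exposure time per frame

distance_mm = C_MM_PER_PS * EXPOSURE_PS
print(f"light travels ~{distance_mm:.2f} mm between frames")
```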
System: An ultrafast setup must overcome several difficulties in
order to accurately measure a high-resolution (both in space and
time) image. First, for an unamplified laser pulse, a single exposure
time of less than 2 ps would not collect enough light, so the SNR
would be unworkably low. As an example, for a table-top scene
illuminated by a 100 W bulb, only about 1 photon on average would
reach the sensor during a 2 ps open-shutter period. Second, because
of the time scales involved, synchronization of the sensor and the
illumination must be executed within picosecond precision. Third,
standalone streak sensors sacrifice the vertical spatial dimension in
order to code the time dimension, thus producing x-t images. As
a consequence, their field of view is reduced to a single horizontal
line of view of the scene.
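The "about 1 photon per 2 ps" claim for a 100 W bulb can be sanity-checked with a rough photon budget. The sketch below uses assumed values for the luminous fraction and the scene-to-sensor collection efficiency; these constants are illustrative assumptions, not measured quantities from the paper.

```python
# Rough photon-budget sketch for the 100 W bulb example in the text.
# All constants below are assumptions chosen for illustration.
PHOTON_ENERGY_J = 3.6e-19      # ~2.2 eV, a typical visible photon
BULB_POWER_W = 100.0
LUMINOUS_FRACTION = 0.05       # assumed fraction of power emitted as visible light
COLLECTION_FRACTION = 1e-7     # assumed scene-to-sensor geometric/optical efficiency
EXPOSURE_S = 2e-12             # 2 ps open-shutter period

photons_per_s = BULB_POWER_W * LUMINOUS_FRACTION / PHOTON_ENERGY_J
photons_collected = photons_per_s * COLLECTION_FRACTION * EXPOSURE_S
print(f"~{photons_collected:.1f} photons per 2 ps exposure")
```

With these assumptions the count lands at order one photon per exposure, which is why an unamplified single exposure is hopeless and repeated, synchronized pulses are needed.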
Performance Validation To characterize the streak sensor, we
compare sensor measurements with known geometry and verify the
linearity, reproducibility, and calibration of the time measurements.
4 Capturing Space-Time Volumes
Although the synchronized, pulsed measurements overcome SNR issues, the streak sensor still provides only a one-dimensional movie. Extension to two dimensions requires unfeasible bandwidths: a typical dimension is roughly 10^3 pixels, so a three-dimensional data cube has 10^9 elements. Recording such a large quantity in a 10^-9 second (1 ns) time window requires a bandwidth of 10^18 byte/s, far beyond typical available bandwidths.
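The bandwidth estimate is straightforward arithmetic; a quick sketch (one byte per element is assumed):

```python
# Sanity check of the bandwidth estimate in the text: a 10^3 x 10^3 x 10^3
# data cube recorded in 1 ns, assuming one byte per element.
elements = 10**3 * 10**3 * 10**3          # x * y * t samples
window_s = 1e-9                           # 1 ns recording window
bandwidth = elements / window_s           # bytes per second
print(f"required bandwidth: {bandwidth:.0e} byte/s")
```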
5 Depicting Ultrafast Videos in 2D
We have explored several ways to visualize the information contained in the captured x-y-t data cube in an intuitive way. First,
contiguous Nij slices can be played as the frames of a movie. Figure 1 (bottom row) shows a captured scene (bottle) along with several representative Nij frames. (Effects are described for various
scenes in Section 7.) However, understanding all the phenomena directly from the movie frames is not straightforward.
Integral Photo Fusion By integrating all the frames in novel
ways, we can visualize and highlight different aspects of the light
flow in one photo. Our photo fusion results are calculated as
N_ij = Σ_k w_k M_ijk, {k = 1..512}, where w_k is a weighting factor determined by the particular fusion method. We have tested several different methods, of which two were found to yield the most intuitive results: the first one is full fusion, where w_k = 1 for all k.
Summing all frames of the movie provides something resembling
a black and white photograph of the scene illuminated by the laser,
while showing time-resolved light transport effects. An example
is shown in Figure 6 (left) for the alien scene. (More information
about the scene is given in Section 7.) A second technique, rainbow
fusion, takes the fusion result and assigns a different RGB color to
each frame, effectively color-coding the temporal dimension. An
example is shown in Figure 6 (middle).
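Both fusion methods are weighted sums over the time axis of the data cube. A minimal sketch, assuming the cube M has shape (nx, ny, 512) with time along the last axis; the array names, the random stand-in data, and the particular color ramp are ours:

```python
import numpy as np

# Stand-in for a captured x-y-t data cube (the real data is measured).
rng = np.random.default_rng(0)
M = rng.random((64, 64, 512))

# Full fusion: w_k = 1 for all k, i.e. a plain sum over time.
full_fusion = M.sum(axis=2)

# Rainbow fusion: give each frame k an RGB weight before summing,
# color-coding the temporal dimension. The linear hue ramp below is an
# assumption; any colormap would serve.
k = np.arange(M.shape[2]) / (M.shape[2] - 1)            # 0..1 along time
rgb_weights = np.stack([k, 1 - np.abs(2 * k - 1), 1 - k], axis=1)  # (512, 3)
rainbow_fusion = np.einsum('xyt,tc->xyc', M, rgb_weights)

print(full_fusion.shape, rainbow_fusion.shape)
```

Full fusion yields a grayscale-like image of the laser-lit scene; rainbow fusion yields an RGB image in which color encodes arrival time.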
Peak Time Images The inherent integration in fusion methods,
though often useful, can fail to reveal the most complex or subtle
behavior of light. As an alternative, we propose peak time images,
which illustrate the time evolution of the maximum intensity in each
frame. For each spatial position (i, j) in the x-y-t volume, we find
the peak intensity along the time dimension, and keep information
within two time units to each side of the peak. All other values in
the streak image are set to zero, yielding a more sparse space-time
volume. We then color-code time and sum up the x-y frames in
this new sparse volume, in the same manner as in the rainbow fusion case, but using only every 20th frame in the sum to create black
lines between the equi-time paths, or isochrones. This results in a
map of the propagation of maximum intensity contours, which we
term peak time image. These color-coded isochronous lines can be
thought of intuitively as propagating energy fronts. Figure 6 (right)
shows the peak time image for the alien scene, and Figure 1 (top,
middle) shows the captured data for the bottle scene depicted using this visualization method. As explained in the next section, this
visualization of the bottle scene reveals significant light transport
phenomena that could not be seen with the rainbow fusion visualization.
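The peak-time construction described above (keep a ±2-frame window around each pixel's temporal peak, zero everything else, then color-code and subsample) can be sketched as follows; the variable names, the random stand-in cube, and the linear time ramp are our assumptions:

```python
import numpy as np

# Stand-in x-y-t volume; the real input is a captured data cube.
rng = np.random.default_rng(1)
M = rng.random((32, 32, 512))

# For each pixel, find the frame of peak intensity and keep only samples
# within two frames of that peak.
peak = M.argmax(axis=2)                              # (i, j) -> peak frame
t = np.arange(M.shape[2])
keep = np.abs(t[None, None, :] - peak[:, :, None]) <= 2
sparse = np.where(keep, M, 0.0)                      # zero all off-peak samples

# Color-code time (simple linear ramp, an assumption) and sum only every
# 20th frame, leaving dark gaps between the isochrones.
hue = t / t[-1]
peak_time_image = (sparse[:, :, ::20] * hue[::20]).sum(axis=2)
print(peak_time_image.shape)
```

Each pixel contributes at most a five-frame window, so the summed image traces the propagating energy fronts rather than the integrated exposure.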
6 Time Unwarping
Visualization of the captured movies (Sections 5 and 7) reveals results that are counter-intuitive to theoretical and established knowledge of light transport. Figure 1 (top, middle) shows a peak time
visualization of the bottle scene, where several abnormal light transport effects can be observed: (1) the caustics on the floor, which
propagate towards the bottle, instead of away from it; (2) the curved
spherical energy fronts in the label area, which should be rectilinear as seen from the camera; and (3) the pulse itself being located
behind these energy fronts, when it would need to precede them.
These are due to the fact that usually light propagation is assumed
to be infinitely fast, so that events in world space are assumed to be
detected simultaneously in camera space. In our ultrafast photography setup, however, this assumption no longer holds, and the finite
speed of light becomes a factor: we must now take into account the
time delay between the occurrence of an event and its detection by
the camera sensor.
We therefore need to consider two different time frames, namely
world time (when events happen) and camera time (when events are
detected). This duality of time frames is explained in Figure 7: light
from a source hits a surface first at point P1 = (i1, j1) (with (i, j)
being the x-y pixel coordinates of a scene point in the x-y-t data
cube), then at the farther point P2 = (i2, j2), but the reflected light
is captured in the reverse order by the sensor, due to different total
path lengths (z1 + d1 > z2 + d2). Generally, this is due to the fact
that, for light to arrive at a given time instant t0, all the rays from
the source, to the wall, to the camera, must satisfy zi +di = ct0, so
that isochrones are elliptical. Therefore, although objects closer to
the source receive light earlier, they can still lie on a higher-valued
(later-time) isochrone than farther ones.
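The reversed ordering in Figure 7 follows directly from the path-length condition z_i + d_i = c·t0. A numeric illustration, with invented distances chosen so that P1 is closer to the source but farther from the camera:

```python
# World time (when a point is lit) vs camera time (when its light is
# detected). Distances in mm are invented for illustration; c = 0.3 mm/ps.
C = 0.3                     # mm per ps

z1, d1 = 100.0, 400.0       # source->P1, P1->camera (mm)
z2, d2 = 150.0, 300.0       # source->P2, P2->camera (mm)

world_t1, world_t2 = z1 / C, z2 / C                   # illumination times
camera_t1, camera_t2 = (z1 + d1) / C, (z2 + d2) / C   # detection times

assert world_t1 < world_t2      # P1 is illuminated first...
assert camera_t1 > camera_t2    # ...but its light reaches the sensor last
print("illumination order and detection order are reversed")
```

Because z1 + d1 > z2 + d2, P1 lies on a later isochrone even though it is lit first, which is exactly the warping the next section corrects.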
7 Captured Scenes
We have used our ultrafast photography setup to capture interesting
light transport effects in different scenes. Figure 10 summarizes
them, showing representative frames and peak time visualizations.
The exposure time for our scenes is between 1.85 ps for the crystal
scene, and 5.07 ps for the bottle and tank scenes, which required
imaging a longer time span for better visualization. Please refer to
the video in the supplementary material to watch the reconstructed
movies. Overall, observing light in such slow motion reveals both
subtle and key aspects of light transport. We provide here brief
descriptions of the light transport effects captured in the different
scenes.
Bottle
This scene is shown in Figure 1 (bottom row), and has
been used to introduce time-unwarping. A plastic bottle, filled with
water diluted with milk, is directly illuminated by the laser pulse,
entering through the bottom of the bottle along its longitudinal axis.
The pulse scatters inside the liquid; we can see the propagation of
the wavefronts.
Tomato-tape
This scene shows a tomato and a tape roll, with a
wall behind them. The propagation of the spherical wavefront, after
the laser pulse hits the diffuser, can be seen clearly as it intersects
the floor and the back wall (A, B). The inside of the tape roll is out
of the line of sight of the light source and is not directly illuminated.
It is illuminated later, as indirect light scattered from the first wave
reaches it (C). Shadows become visible only after the object has
been illuminated. The more opaque tape darkens quickly after the
light front has passed, while the tomato continues glowing for a
longer time, indicative of stronger subsurface scattering (D).
Alien
A toy alien is positioned in front of a mirror and wall. Light
interactions in this scene are extremely rich, due to the mirror, the
multiple interreflections, and the subsurface scattering in the toy.
The video shows how the reflection in the mirror is actually formed:
direct light first reaches the toy, but the mirror is still completely
dark (E); eventually light leaving the toy reaches the mirror, and
the reflection is dynamically formed (F). Subsurface scattering is
clearly present in the toy (G), while multiple direct and indirect
interactions between the wall and the mirror can also be seen (H).
Crystal
A group of sugar crystals is directly illuminated by the
laser from the left, acting as multiple lenses and creating caustics
on the table (I). Part of the light refracted onto the table is reflected back to the candy, creating secondary caustics on the table (J).
Tank
A reflective grating is placed at the right side of a tank filled
with milk diluted in water.
8 Conclusions and Future Work
Our research fosters new computational imaging and image-processing opportunities by providing incoherent time-resolved information at ultrafast temporal resolutions. We hope our work
will inspire new research in computer graphics and computational photography, by enabling forward and inverse analysis of light
transport, allowing for full scene capture of hidden geometry and
materials, or for relighting photographs. To this end, captured
movies and data of the scenes shown in this paper are available at
femtocamera.info. This line of work, in turn, may influence the rapidly emerging field of ultrafast imaging hardware.