Laplacian Kernel Splatting for Efficient Depth-of-field and Motion Blur Synthesis or Reconstruction

Thomas Leimkühler¹, Hans-Peter Seidel¹, Tobias Ritschel²
¹ MPI Informatik   ² University College London

To appear in: ACM Transactions on Graphics (Proc. SIGGRAPH 2018)

Teaser: result panels Synthesis 1-4 and Reconstruction 1-4.

Simulating combinations of depth of field and motion blur is an important factor in the cinematic quality of synthetic images, but can take a long time to compute. Splatting the point-spread function (PSF) of every pixel is general and provides high quality, but requires prohibitive compute time. We accelerate this in two steps: in a pre-process, we optimize for sparse representations of the Laplacians of all possible PSFs, which we call spreadlets. At runtime, spreadlets can be splatted efficiently into the Laplacian of an image; integrating this image produces the final result. Our approach scales faithfully to strong motion and large out-of-focus areas and compares favorably in speed and quality with off-line and interactive approaches. It is applicable both to synthesizing from pinhole images and to reconstructing from stochastically sampled images, with or without layering.
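The two-step pipeline described above has a simple one-dimensional analogue: for motion blur along a scanline, the second difference of a box PSF has only four non-zero taps, so splatting those sparse taps and then integrating twice with cumulative sums reproduces the brute-force splatting result. The NumPy sketch below illustrates only this principle under toy assumptions; the variable names are hypothetical, and it is not the paper's method, which optimizes sparse representations of 2D PSF Laplacians (the spreadlets) and integrates the resulting Laplacian image.

import numpy as np

# 1D toy analogue of the splat-then-integrate idea (hypothetical names/inputs):
# each pixel x spreads its color over a box PSF of length blur[x] along a scanline.
width, pad = 16, 8
rng = np.random.default_rng(0)
colors = rng.random(width)
blur = rng.integers(1, 5, size=width)            # per-pixel PSF support in pixels

# Brute force: splat the full PSF of every pixel (cost grows with PSF size).
reference = np.zeros(width + pad)
for x in range(width):
    reference[x:x + blur[x]] += colors[x] / blur[x]

# Sparse variant: splat only the second difference of each box PSF (4 taps per
# pixel), then integrate twice by cumulative summation. This mirrors, in 1D,
# splatting a sparse PSF Laplacian and integrating the image once at the end.
second_diff = np.zeros(width + pad)
for x in range(width):
    v = colors[x] / blur[x]
    n = blur[x]
    second_diff[x]         += v
    second_diff[x + 1]     -= v
    second_diff[x + n]     -= v
    second_diff[x + n + 1] += v

recovered = np.cumsum(np.cumsum(second_diff))
assert np.allclose(reference, recovered)         # identical blurred scanline

The point of the rearrangement is that the per-pixel splat cost no longer grows with the PSF size; in the paper's 2D setting this is what allows strong motion and large out-of-focus areas to remain tractable.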
@article{Leimkuhler2018,
author = {Thomas Leimk\"uhler and Hans-Peter Seidel and Tobias Ritschel},
title = {Laplacian Kernel Splatting for Efficient Depth-of-field and
Motion Blur Synthesis or Reconstruction},
journal = {ACM Transactions on Graphics (Proc. SIGGRAPH 2018)},
year = {2018},
volume = {37},
number = {4},
doi = {10.1145/3197517.3201379}
}
The authors would like to thank Gabriel Brostow, Bernhard Reinert, and Rhaleb Zayer for their contributions. This work was partly supported by the Fraunhofer and Max Planck cooperation program within the framework of the German pact for research and innovation (PFI).