DC2: Dual-Camera Defocus Control by Learning to Refocus

Hadi Alzayer🐢,     Abdullah Abuolaim     Leung Chun Chan     Yang Yang     Ying Chen Lou     Jia-Bin Huang🐢     Abhishek Kar
🐢University of Maryland        Google

CVPR 2023


Given a dual-camera capture, we can both deblur and add realistic blur, enabling complete defocus control including refocusing, aperture control, and more.
Here we showcase examples where we simulate changing the focal plane and aperture; the first frame of each video is the input view.
Please view in full screen for highest quality.

Defocus-Control Demo

Using a single dual-camera capture, we can perform post-capture refocus and DoF control by simulating different apertures. Below is the captured frame used as input. You can interactively change the focus and depth of field to see how our method can be used in the real world.


Click on a subject to refocus, and use the slider for DoF control.


The interactive demo above uses a SINGLE dual-camera capture: a photo from the main camera, also called wide (W), which has high resolution and a shallow depth of field (DoF), and a photo from the ultra-wide camera (UW), which has a deep DoF but lower resolution. Our method combines the two inputs to achieve a controllable DoF while maintaining high resolution!
Below we show the inputs for this example.

Video

Abstract

Smartphone cameras today are increasingly approaching the versatility and quality of professional cameras through a combination of hardware and software advancements. However, a fixed aperture remains a key limitation, preventing users from controlling the depth of field (DoF) of captured images. At the same time, many smartphones now have multiple cameras with different fixed apertures: specifically, an ultra-wide camera with a wider field of view and deeper DoF, and a higher-resolution primary camera with a shallower DoF.

In this work, we propose DC2, a system for defocus control for synthetically varying camera aperture, focus distance and arbitrary defocus effects by fusing information from such a dual-camera system. Our key insight is to leverage a real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus. Quantitative and qualitative evaluations on real-world data demonstrate our system's efficacy, where we outperform the state of the art on defocus deblurring, bokeh rendering, and image refocus. Finally, we demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.

How Did We Build It?

We can't collect a dataset of smartphone photos with variable depth of field (due to the fixed aperture), but we CAN refocus! Since image refocus requires deblurring and blurring different regions of the image at the same time, image refocus is at least as hard as DoF control. Based on this observation, we hypothesize that a model trained on image refocus can also perform DoF control, so we use image refocus as a proxy task for learning arbitrary defocus control. To train the model, we collected a dataset of 100 focus stacks.

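To make the proxy task concrete, below is a minimal sketch of how refocus training pairs could be drawn from a focus stack. The data layout and pairing strategy here are illustrative assumptions, not the exact training pipeline from the paper.

import random
import numpy as np

def sample_refocus_pair(focus_stack):
    """focus_stack: list of HxWx3 frames, each focused at a different plane.

    Returns an (input, target) pair: the model sees the input slice and must
    re-render the scene as if it were focused at the target slice's plane.
    """
    i, j = random.sample(range(len(focus_stack)), 2)
    return focus_stack[i], focus_stack[j], (i, j)

# Usage with a dummy 5-slice stack of 256x256 frames.
stack = [np.zeros((256, 256, 3), dtype=np.float32) for _ in range(5)]
inp, tgt, planes = sample_refocus_pair(stack)
print("refocus from plane", planes[0], "to plane", planes[1])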

We train our defocus control model (Detail Fusion Network -- DFNet) to refocus using dual-camera input by giving it input and target defocus maps. The value of each pixel in a defocus map is proportional to how blurry that pixel is.

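The page doesn't specify how a defocus map is computed; one standard choice is the thin-lens circle-of-confusion model, which gives per-pixel blur from depth, focus distance, and aperture. The sketch below illustrates that assumption (the helper name, units, and default focal length are ours, not values from the paper).

import numpy as np

def defocus_map(depth, focus_dist, aperture, focal_len=0.026):
    """Per-pixel circle-of-confusion diameter under a thin-lens model.

    depth:      HxW array of scene depths in meters
    focus_dist: distance of the in-focus plane in meters
    aperture:   lens aperture diameter in meters
    focal_len:  focal length in meters (26 mm chosen arbitrarily here)
    """
    coc = aperture * (focal_len / (focus_dist - focal_len)) \
          * np.abs(depth - focus_dist) / depth
    return coc  # 0 = in focus, larger values = blurrier

# Example: source map at 1 m focus, target map refocused to 2 m with a
# larger synthetic aperture.
depth = np.random.uniform(0.5, 10.0, size=(256, 256))
src_map = defocus_map(depth, focus_dist=1.0, aperture=0.002)
tgt_map = defocus_map(depth, focus_dist=2.0, aperture=0.008)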

After training the model to refocus, we can perform arbitrary defocus control by simply manipulating the target defocus map at test time!
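
As a rough illustration of what manipulating the target map looks like at test time, the sketch below feeds different target defocus maps to a stand-in for the trained network. The dfnet callable and its signature are placeholders, not the actual model interface.

import numpy as np

# Placeholder for the trained Detail Fusion Network; the call signature is
# an assumption for illustration only.
def dfnet(wide, ultrawide, src_map, tgt_map):
    return wide

wide = np.zeros((256, 256, 3), dtype=np.float32)       # main (W) frame
ultrawide = np.zeros((256, 256, 3), dtype=np.float32)  # aligned ultra-wide (UW) frame
src_map = np.random.uniform(0.0, 1.0, size=(256, 256)) # estimated input defocus

# All-in-focus deblurring: ask for zero blur everywhere.
deblurred = dfnet(wide, ultrawide, src_map, np.zeros_like(src_map))

# Shallower DoF (larger synthetic aperture): scale up the blur instead.
bokeh = dfnet(wide, ultrawide, src_map, 2.5 * src_map)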

Comparison with Baselines

Deblurring

Refocusing

Creative Applications

Our method allows for arbitrary defocus control by simply providing the desired defocus map to the model. The defocus control does not have to be physically realistic, so you can go wild with your imagination! Here we show some examples that we thought of, and include the source and target defocus maps used for each example.

Tilt shift effect

Tilt shift can be used to control the depth of field and produce a miniature-like effect. With hardware, it is done by tilting the lens with respect to the sensor, but here we simulate it by setting the target defocus map to correspond to a very narrow DoF. At the top of the image below, you can view the source defocus map (all-in-focus, so all black) and the target defocus map used for this example.
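
A hedged sketch of one way such a tilt-shift target map could be built: blur grows with vertical distance from a narrow in-focus band. The band position, width, and maximum blur are illustrative choices, not values from the paper.

import numpy as np

def tilt_shift_map(h, w, band_center=0.55, band_width=0.12, max_blur=1.0):
    """Blur grows with vertical distance from a narrow in-focus band."""
    rows = np.linspace(0.0, 1.0, h)
    dist = np.clip(np.abs(rows - band_center) - band_width / 2, 0.0, None)
    blur = max_blur * dist / dist.max()
    return np.tile(blur[:, None], (1, w))  # HxW target defocus map

tgt_map = tilt_shift_map(768, 1024)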

Split focus effect

Split focus is a cinematic effect used in movies to emphasize and focus on two subjects at different distances from the camera. It often requires a custom lens to achieve in hardware, but we can simply do it in software with our method. Here we set the left half of the photo to be focused on the woman, and the right half to be focused on the man with a reduced DoF for emphasis. You can compare the source defocus map and target defocus map used to generate this photo.
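
Below is a small sketch of how a split-focus target map could be assembled from two per-focus-distance defocus maps (for example, computed with the thin-lens sketch above); the maps here are random stand-ins and the blur boost is an illustrative choice.

import numpy as np

h, w = 768, 1024
map_left = np.random.uniform(0.0, 1.0, size=(h, w))   # focused on the near subject
map_right = np.random.uniform(0.0, 1.0, size=(h, w))  # focused on the far subject

split = w // 2
tgt_map = np.empty((h, w), dtype=np.float32)
tgt_map[:, :split] = map_left[:, :split]
tgt_map[:, split:] = 1.5 * map_right[:, split:]  # extra blur to narrow the DoF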

Content driven defocus control

Here we highlight the woman and the dog by setting them to be all-in-focus and blurring out everything else. We simply set the target defocus map to be the segmentation mask we created.
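
A minimal sketch of turning a segmentation mask into such a target map: zero blur on the masked subjects, constant blur everywhere else. The mask and blur level below are illustrative.

import numpy as np

def mask_to_defocus(mask, background_blur=1.0):
    """mask: HxW boolean array, True on pixels to keep in focus."""
    tgt = np.full(mask.shape, background_blur, dtype=np.float32)
    tgt[mask] = 0.0  # in-focus subjects
    return tgt

mask = np.zeros((768, 1024), dtype=bool)
mask[200:600, 300:700] = True  # stand-in for the woman+dog segmentation mask
tgt_map = mask_to_defocus(mask)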

BibTeX

@inproceedings{alzayer2023defocuscontrol,
      title={DC2: Dual-Camera Defocus Control by Learning to Refocus},
      author={Alzayer, Hadi and Abuolaim, Abdullah and Chun Chan, Leung and Yang, Yang and Chen Lou, Ying and Huang, Jia-Bin and Kar, Abhishek},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={--},
      year={2023}
    }