In photography, depth of field is used to draw attention to a subject and to create a dramatic effect. The practice is most common in portrait photography, but it can be applied to other scenes as well, letting a photographer’s subject stand out while the rest of the frame is blurred. This “bokeh” effect is typically achieved with SLR cameras (just open the aperture up to f/5.6 or wider), but thanks to advances in modern technology, it’s easy to replicate on a smartphone. We’ve seen Nokia and HTC do it with varying results, and now Google is joining the action.
The desire to shoot now and focus later really took shape thanks to the Lytro, but that was a dedicated camera with limited capabilities. As with every cool idea, the mobile market latched onto it, which is why the feature is rising in popularity so quickly. Following the release of its new Google Camera app, the search giant wrote a post explaining how it was able to achieve this effect, and why it doesn’t take a fancy Duo Camera to make it happen.
“Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture,” said Carlos Hernandez, software engineer at Google. “Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene.”
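The key insight is parallax: as the camera sweeps upward, nearby points shift across the frame more than distant ones, and that per-pixel shift can be converted into distance. As a toy illustration (our own, not Google's code, with made-up baseline and focal-length numbers), the standard stereo relation looks like this in Python:

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Toy illustration of why a small camera sweep yields depth:
    points closer to the camera shift more between frames (larger
    disparity) than distant ones, so depth falls out of the shift.
    The function name and numbers below are illustrative only."""
    disparity_px = np.maximum(disparity_px, 1e-6)  # avoid division by zero
    # Standard stereo relation: depth = baseline * focal length / disparity.
    return baseline_m * focal_px / disparity_px

# A point that shifts 20 px between frames is closer than one shifting 2 px.
print(depth_from_disparity(np.array([20.0, 2.0]), baseline_m=0.05, focal_px=3000))
# -> [ 7.5  75. ]  metres, under these assumed numbers
```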
Google provided an example of this data (below), showing a raw input photo alongside a “depth map,” where the foreground is dark and the background is light. That map is what allows Google’s algorithms to create the depth of field effect. The results are certainly convincing, though not quite as natural and beautiful as those from a traditional SLR. For a phone, however, it’s a testament to how quickly camera technology has evolved.
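To get a feel for how a depth map like that drives the effect, here is a minimal sketch (our own illustration, not Google's implementation) that blurs each pixel more the farther its depth sits from a chosen focal plane; the function and parameter names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lens_blur(image, depth_map, focal_depth, max_sigma=8.0, levels=6):
    """Hypothetical sketch: blur each pixel more the farther its depth
    is from the chosen focal depth, roughly mimicking a wide aperture.

    image:       HxWx3 float array in [0, 1]
    depth_map:   HxW float array, 0 = near (dark), 1 = far (light)
    focal_depth: depth value (0-1) that should stay sharp
    """
    # Blur strength grows with distance from the focal plane.
    blur_amount = np.abs(depth_map - focal_depth)  # 0 at the subject

    # Precompute a small stack of progressively blurred images and pick
    # from it per pixel (a cheap stand-in for true per-pixel kernels).
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = np.stack(
        [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas], axis=0
    )

    # Map each pixel's blur amount to the nearest level in the stack.
    idx = np.clip((blur_amount * (levels - 1)).round().astype(int), 0, levels - 1)
    rows, cols = np.indices(depth_map.shape)
    return stack[idx, rows, cols]
```

Because the depth map is stored alongside the photo, the focal depth can be changed after the fact, which is what makes refocusing possible.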
Google says the software starts by picking out distinctive visual cues, then tracks those features across the series of images. These algorithms, known as Structure-from-Motion (SfM), compute the camera’s 3D position and orientation for every image in the series. Google’s software then computes the depth of each pixel with Multi-View Stereo (MVS), optimizing the scene using Markov Random Field inference methods. The result, hopefully, is an image you can focus and re-focus even after it was taken.
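For the curious, the front end of that pipeline can be approximated with off-the-shelf computer vision tools. The sketch below uses OpenCV to track features between two frames of a sweep and recover the relative camera rotation and translation; it's a rough stand-in for the SfM step, not Google's actual code, and the camera intrinsics (focal length and principal point) are assumed inputs:

```python
import cv2
import numpy as np

def estimate_camera_motion(frame_a, frame_b, focal, principal_point):
    """Rough SfM front-end sketch: track features between two frames of
    the sweep and recover the relative camera pose. `focal` and
    `principal_point` are assumed, simplified camera intrinsics."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Pick out distinctive visual cues (corners) in the first frame.
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=500,
                                    qualityLevel=0.01, minDistance=7)

    # Track those features into the second frame with optical flow.
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    good_a = pts_a[status.ravel() == 1]
    good_b = pts_b[status.ravel() == 1]

    # Estimate the essential matrix and recover rotation and translation,
    # i.e. how the camera's 3D position and orientation changed.
    E, _ = cv2.findEssentialMat(good_a, good_b, focal, principal_point,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_a, good_b,
                                 focal=focal, pp=principal_point)
    return R, t
```

With poses like these for every frame, a Multi-View Stereo step can then triangulate a depth value for each pixel, producing a map like the one shown above.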
That’s quite a bit of technology behind a single feature, but just think of how neat your selfies will look. For a more in-depth explanation of Lens Blur, check out Google’s full blog post at the link below.
