In photography, shallow depth of field is used to draw attention to a subject and to create a dramatic effect. The technique is most common in portrait photography, but it can be applied to other scenes too, letting a photographer’s subject stand out while the rest of the frame is blurred. This “bokeh” effect is typically achieved with SLR cameras by opening the aperture to f/5.6 or wider, but thanks to advances in modern technology, it’s easy to replicate on a smartphone. We’ve seen Nokia and HTC do it with varying results, and now Google is joining the action.
The desire to shoot now and focus later really took shape thanks to the Lytro, but that was a dedicated camera with limited capabilities. As with every cool idea, the mobile market latched onto it, which is why the feature has risen to popularity so quickly. Following the release of its new Google Camera app, the search giant wrote a post explaining how it achieved the effect, and why it doesn’t take a fancy Duo camera to make it happen.
“Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture,” said Carlos Hernandez, software engineer at Google. “Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene.”
Google provided an example of this data (below), showing off a raw input photo alongside a “depth map,” where the foreground is dark and the background is light. This allows Google’s software algorithms to create the depth of field effect. The results are certainly convincing, but not quite as natural and beautiful as a traditional SLR camera. For a phone, however, it’s a testament to how quickly camera technology has evolved.
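To make the idea concrete, here is a minimal sketch of how a depth map can drive the blur: each pixel is blurred in proportion to its distance from a chosen focal plane. This is only an illustration under simple assumptions (grayscale image, a box blur, no occlusion handling), not Google’s actual renderer.

```python
import numpy as np

def box_blur(img, radius):
    """Average each pixel over a (2*radius+1)^2 neighborhood."""
    if radius == 0:
        return img
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def lens_blur(image, depth, focal_depth, max_radius=4):
    """Blur pixels in proportion to their distance from the focal plane."""
    # Blur radius grows with distance from the plane we want sharp
    radius = np.clip(np.abs(depth - focal_depth), 0.0, 1.0) * max_radius
    radius = np.rint(radius).astype(int)
    # Pre-blur one copy of the image per radius level, then pick per pixel
    copies = np.stack([box_blur(image, r) for r in range(max_radius + 1)])
    return np.take_along_axis(copies, radius[None], axis=0)[0]
```

Pixels whose depth matches the focal plane pick the unblurred copy, so the subject stays sharp while the background is averaged away. A real implementation would use a disc-shaped kernel (to mimic bokeh) and composite depth layers back-to-front to avoid halos at object edges.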
Google says the software first picks out distinctive visual features and tracks them across the series of images. An algorithm known as Structure-from-Motion (SfM) then computes the camera’s 3D position and orientation for every image in the series. From there, Google’s software computes the depth of each pixel with Multi-View Stereo (MVS), optimizing the scene using Markov Random Field inference methods. The result, hopefully, is an image you can focus and re-focus even after it’s taken.
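At its core, the depth step comes down to triangulation: once SfM has recovered a camera matrix for each frame, a point matched across two frames pins down a single 3D location. The sketch below shows the standard linear (DLT) triangulation for two views; the projection matrices and point values are hypothetical examples, not anything from Google’s implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated views.

    P1, P2: 3x4 camera projection matrices (one per frame).
    x1, x2: (x, y) image coordinates of the same point in each frame.
    Each view contributes two linear constraints on the homogeneous
    3D point X; the solution is the null vector of the stacked system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null space = the point, up to scale
    return X[:3] / X[3]        # dehomogenize
```

The larger the camera motion between frames (the “upward sweep”), the better conditioned this system is, which is why the app asks you to move the phone rather than hold it still. MVS effectively runs this kind of estimate densely, for every pixel, with an MRF smoothing out the noisy per-pixel answers.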
That’s quite a bit of technology behind a single feature, but just think of how neat your selfies will look. To get a more in-depth explanation of Google’s Lens Blur feature, check out Google’s full blog post at the link below.