
Google's New Camera App Simulates Shallow Depth of Field

New submitter katiewilliam (3621675) writes with a story at Hardware Zone about a new feature that Google's working on for Android phones' built-in cameras: the illusion of shallow depth of field in phone snapshots, which typically err on the side of too much in focus, rather than too little. Excerpting: "The Google Research Blog [note: here's a direct link] revealed that there's quite a fair bit of algorithms running to achieve this effect; to put it in a nutshell, computer vision algorithms create a 3D model of the world based on the shots you have taken, and estimate the depth to every point in the scene."
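The linked post describes estimating the depth to every point in the scene and then rendering synthetic defocus from that depth map. As a rough sketch only (this is not Google's code; the function name, parameters, and layered-Gaussian approach are illustrative assumptions), the core idea of blurring pixels in proportion to their distance from a chosen focal plane looks something like this in Python:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_lens_blur(image, depth, focus_depth, strength=3.0, layers=8):
    """image: HxWx3 float array; depth: HxW floats on the same scale as focus_depth."""
    out = np.empty_like(image)
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        if not mask.any():
            continue
        # Blur grows with distance from the focal plane; the slab containing
        # the focal plane stays (nearly) sharp.
        sigma = strength * abs(0.5 * (lo + hi) - focus_depth)
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
        out[mask] = blurred[mask]
    return out
```

A real implementation would derive a per-pixel blur radius from a thin-lens model and treat occlusion boundaries carefully; the per-layer Gaussian above is only there to make the idea concrete.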
  • There is no 3D modelling involved. And the results are, well, mixed.

    • computer vision algorithms create a 3D model of the world

      Sounds like 3D modelling to me, albeit guessed at from the content of a 2D photo.

      • Re:2 1/2 D (Score:5, Interesting)

        by BasilBrush ( 643681 ) on Saturday April 19, 2014 @03:34PM (#46796251)

        Depends what you mean by 3D modelling. Looking further at the article, it's a depth mapping technique for each pixel. Which is more analogous to DOOM than Quake. Remember those restrictions? No bridges in the map, no tables. Just a single height for the floor and a single height for the ceiling at any map position.

        As the OP says, it's 2.5D, not 3D.

        • If you have a depth channel you could displace a 3D plane in camera space and render that in 3D. So 2.5D/3D is a bit arbitrary.

          If you had a perfect 3D model and the one photo, though, you still wouldn't have enough information to render true depth of field. The real problem isn't 2.5D/3D, it's the fact that there is no parallax information for occluded regions. That can be interpolated well enough for simple situations, but ultimately you're trying to infer data, which will cause artifacts.
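A toy, entirely made-up one-dimensional sketch of the occlusion problem described above: when pixels are re-projected to a shifted viewpoint using only one depth per pixel, near pixels move more than far ones, and the background that was hidden behind the foreground simply has no data.

```python
# Toy 1-D example (made up): a foreground object at 2 m in front of a wall at 10 m.
color = ["bg", "bg", "fg", "fg", "bg", "bg"]
depth = [10.0, 10.0, 2.0, 2.0, 10.0, 10.0]

baseline = 2.0                         # hypothetical sideways shift of the viewpoint
new_color = ["?"] * len(color)         # "?" marks a hole: no data was ever captured there
new_depth = [float("inf")] * len(color)

for x, (c, z) in enumerate(zip(color, depth)):
    nx = x + round(baseline / z)       # nearer surfaces get more parallax
    if 0 <= nx < len(color) and z < new_depth[nx]:
        new_color[nx], new_depth[nx] = c, z   # nearest surface wins the pixel

print(new_color)   # ['bg', 'bg', '?', 'fg', 'fg', 'bg'] -- a hole opens behind the foreground
```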

          • If you have a depth channel you could displace a 3D plane in camera space and render that in 3D.

            No, you'd only have the surfaces that are first hit with raytracing from the eye. That's not 3D. That's why it's 2.5D.

            The real problem isn't 2.5D/3D, it's the fact that there is no parallax information for occluded regions.

            But that's exactly the problem that 2.5D brings. You don't know what's behind foreground objects.

            • But that's exactly the problem that 2.5D brings. You don't know what's behind foreground objects.

              That's not necessarily true. With a deep framebuffer you can have multiple ZSamples including occluded objects. For DOF that would be perfectly sufficient, and it's still a 2.5D point cloud. Conversely, you could have a perfectly detailed 3D scene but use a per-pixel camera projection of your plate to refocus and still have occlusion artifacts.

              2.5D/3D isn't terribly important for DOF calculation. Then again even with the Deep image you still would have problems with reflections on curved objects and refraction etc.
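As an illustration of the distinction being argued here (a toy sketch, not any particular renderer's API): a plain depth map keeps a single z per pixel, while a "deep" buffer keeps every sample along the ray, so surfaces hidden behind the foreground remain available when refocusing.

```python
from collections import defaultdict

depth_map = {}                    # (x, y) -> nearest z only (the plain "2.5D" case)
deep_buffer = defaultdict(list)   # (x, y) -> [(z, sample), ...], every surface on the ray

def add_sample(x, y, z, sample):
    deep_buffer[(x, y)].append((z, sample))
    deep_buffer[(x, y)].sort()                        # keep front-to-back order
    if (x, y) not in depth_map or z < depth_map[(x, y)]:
        depth_map[(x, y)] = z                         # 2.5D keeps only the winner

# Two surfaces along the same ray: a person at 2 m and a wall at 10 m behind them.
add_sample(100, 50, 2.0, "person")
add_sample(100, 50, 10.0, "wall")

print(depth_map[(100, 50)])    # 2.0 -- the wall behind is simply gone
print(deep_buffer[(100, 50)])  # both samples survive, so defocus can blend in the wall
```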

              • That's not necessarily true. With a deep framebuffer you can have multiple ZSamples including occluded objects.

                Rather like the 2 samples I already pointed out for Doom? Are you getting the idea yet that you aren't telling me anything I don't already know?

        • by mlyle ( 148697 )

          Note that a depth mapping technique for each pixel doesn't imply Doom-style restrictions unless the camera is in an unusual orientation.

          You can have tables, etc. Every pixel has a distance from the camera to the object estimated. Since the camera is probably in a horizontal orientation, this works. What you -can't- know about are objects behind other objects from the camera's standpoint, or stuff behind the camera. This is mostly OK for faking depth of field.

            Note that a depth mapping technique for each pixel doesn't imply Doom-style restrictions unless the camera is in an unusual orientation.

            It's just an analogy. One that illustrates that depth mapping doesn't give proper 3D.

            What you -can't- know about are objects behind other objects from the camera's standpoint, or stuff behind the camera. This is mostly OK for faking depth of field.

            Absolutely. But it's still not 3D, it's 2.5D. No one said 2.5D wouldn't work for this application.

  • comes with a new 'Lens Blur' feature that lets you add creamy bokeh to your pictures.

    Yeah, hi, I have a question. Does it have to be creamy?

  • by MindPrison ( 864299 ) on Saturday April 19, 2014 @02:47PM (#46796041) Journal
    Just take a look at the auto-blurring used in Street View, nothing beats it. My neighbor's dog's face was blurred instead of their kid. ;)
  • by Anonymous Coward

    Another 'new feature' that's been out for over a year. https://itunes.apple.com/gb/app/focustwist/id597654594?mt=8 Kinda like the 'awesome' photosphere which MSFT had out for 2 years (as photosynth) before they did it.

  • When Google finally reveals its true name, Skynet, this is the technology that will allow its T-1000s to exterminate most of humanity.

    But don't worry, they'll be sure to take an instagram of your death and post it to your Google+ livestream so your friends and family can mourn.

    (There will also be ads for bereavement-related products. Neither Google nor Skynet are monopolies, honest.)

  • by smchris ( 464899 ) on Saturday April 19, 2014 @03:05PM (#46796127)

    But I absolutely, totally LOVE depth of field. Screw the art school graduates. I bought a large screen digital tv for the illusion of a window upon the world.

    I would like to think -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.
     

    • by blueg3 ( 192743 ) on Saturday April 19, 2014 @03:30PM (#46796223)

      You know, your eyes have a substantial depth-of-field effect, too. You often don't notice, because your mental ability to pay attention to objects is tied pretty strongly to where your eyes are actually focusing, so anything you look at is in focus (because you focus on what you're looking at). However, you can really notice when you look at images that have deep DoF or, say, 3D movies (where they can't possibly get the DoF right).

      • IIRC, Gravity had 3d lens flare.

      • by drolli ( 522659 )

        For sure they can get the DoF right in 3D movies, why not?

        • by blueg3 ( 192743 )

          Because the depth-of-field effect generated by your eyes depends on the distance to the subject, which is largely flat in 3D movies. They can't add DoF blur because they don't know where your eye will focus. They can put the most-obvious object in focus and then the other objects will be blurred, but if you focus your eyes on them, they won't come into focus, which is not how your eyes normally work. (The same is true in 2D movies, naturally, but there isn't the illusion of the ability to focus in those.)

    • The human eye has its own depth of field characteristics plus a much greater dynamic range and resolution than any large flat screen.

      So your large screen is going to fall short of that illusion.

    • by tomhath ( 637240 )
      Depth of Field isn't an all-or-nothing thing. The goal is being able to control it so you can create the image you want. Love a picture that's sharp corner to corner? Great! Want a picture that emphasizes the subject and blurs the background or vignettes the corners? That's great too!
    • This was my initial reaction too; it's like glorifying the gramophone record in an age of practically unlimited bit depth and sampling frequency. However, that doesn't mean I can't enjoy the full precision of current tech. Lo-fi effects can be nice in the right place; finally we have the ability to choose, to get the occasional bug-like feature instead of the other way round.
      • You are seriously confused if you think depth of field control is a lo-fi effect.

        • Throwing away original information is kind of lo-fi IMHO. I don't mean it's bad, it serves an artistic purpose, for example by helping the viewer focus on an intended part of the picture. In the same way, overdriving a guitar amp is a lossy transformation that many people find pleasant.
    • by Animats ( 122034 )

      Some movie directors are still bitching over the disappearance of film grain. There are companies putting unnecessary film grain in digital images. [cinegrain.com]

      We need to get to 48FPS or better, so slow pans over detailed backgrounds look right. No more strobing!

      (Instead, we're getting 4K resolution, which is only useful if the screen is in front of your face and a meter wide.)

    • by bws111 ( 1216812 )

      Interesting that you use the phrase 'window upon the world.' Ever look through a real window with an insect screen on it? Now imagine that instead of clearly seeing the house across the street, what you see is the house with a neat grid in sharp focus upon it. That is what you are asking for.

      A photo where everything is in equally sharp focus is absolutely not what your eyes see, unless you are standing on a cliff and seeing only things that are far away.

      In real life your window upon the world would only

    • But I absolutely, totally LOVE depth of field. Screw the art school graduates. ..... -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.

      You are talking as if choosing a shallow depth of field is something new, and necessarily "artistic". It's neither. A shallow depth of field is a practical way of eg taking scientific natural history (think bugs) photos without the background distracting; also of taking people's portrait pictures ditto. It has been used that way since the early days of photography. Generally, until now, only the more expensive cameras have had this kind of control; snapshot cameras (of which phone cameras are a modern e

    • by xigxag ( 167441 )

      If anything, DOF will become more important as home screens get larger and sharper. It's an important tool in showing the audience where to look in a shot. Otherwise, staring at that gigantic screen would sometimes be like a live action "Where's Waldo."

    • by Teun ( 17872 )
      The stuff produced for large-screen HD is typically shot with lenses that by themselves have great bokeh, as it is called; no need to have processing done by Google.

      And as a consequence, you must have endured many instances where this 'isolating of the subject by minimal depth of field' was intended to be part of the scene.

    • But I absolutely, totally LOVE depth of field. Screw the art school graduates. I bought a large screen digital tv for the illusion of a window upon the world.

      I would like to think -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.

      I had a good run of several hundred years though...

    • Name your favorite movies and tv shows and I'm sure someone can point out all the long lens shots that prove you wrong.

  • there's quite a fair bit of algorithms

    I'll wait for version 2, with 50% more algorithms.

  • by reg ( 5428 ) <reg@freebsd.org> on Saturday April 19, 2014 @03:38PM (#46796265) Homepage

    The best feature of the new camera app is that if you try to take vertical video it puts up an overlay telling you to hold it right! Hopefully everyone will copy this!

    • So you're saying you can never take pictures of trees (which are vertical, not horizontal) because modern, digital technology is incapable of doing what analog technology had done for over 100 years.

      Just another example of the failings of the digital age.

      • by Zebedeu ( 739988 )

        It's only for video. Vertical pictures do not get the overlay.

        In fact, the new app now allows for vertical panorama shots, which is something I had found lacking ever since that feature first appeared on Android.

  • by koan ( 80826 )

    It's a pain in the ass to use on the tablet "too fast.... too fast"

  • God help us (Score:2, Interesting)

    by PeeAitchPee ( 712652 )
    Just what is needed . . . another Photoshop-esque filter for all the douchebag hipsters of the world to make their snaps look even more deep and brooding.
  • The summary makes it sound like this is an algorithm tuning problem - "err on the side of too much in focus" - which isn't the case. It's a byproduct of sensor size.

    Even with real cameras the rule of thumb is a full frame (35mm film equivalent size) camera, at a given focal length, has a stop "better" depth of field than a camera with an APS-C sensor taking the same picture - so a Nikon D7100 would need to shoot at f/2.0 to get the same blurring as a D800 shooting the same photo at f/2.8.

    Most camera phone sensors are rather tiny compared to real cameras.

    On a side note... pedants are going to have fun nitpicking all of this apart. :-)
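A back-of-the-envelope check of the "about a stop" rule of thumb above, assuming the usual ~1.5x crop factor for Nikon APS-C bodies (the numbers are illustrative, and "equivalent aperture" is a heuristic, not an exact optical law):

```python
import math

crop_factor = 1.5        # typical Nikon APS-C (e.g. D7100) vs full frame (e.g. D800)
ff_f_number = 2.8        # full-frame shot at f/2.8

# For roughly the same framing and background blur, divide the f-number by the crop factor.
equivalent = ff_f_number / crop_factor
stops = 2 * math.log2(ff_f_number / equivalent)   # one stop = a factor of sqrt(2) in f-number

print(f"APS-C needs about f/{equivalent:.1f}")    # ~f/1.9, i.e. roughly f/2.0
print(f"which is about {stops:.1f} stops")        # ~1.2 stops
```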

    • by Arkh89 ( 2870391 )

      DoF has no link to FoV. Hence, having an APS-C or Full-Frame sensor does not change the DoF. It just barely changes your "feeling" of it, because of the larger FoV.

        I hear comments like this all the time. The reality is you've changed your FoV, so you're now taking a completely different picture. If you want all other things to stay as equal as possible, then to take the same photo on an APS-C camera as on a Full Frame camera you'd need to switch to a narrower lens and step back from the subject. Oh, your subject-camera-background ratio has now changed, and so has your depth of field.

        Or are you going to tell me all cameras are equal because if you overexpose your image by 1

        • by Arkh89 ( 2870391 )

          This is not about photography. This is about optics.
          All cameras are different. And if you think that the camera types you would use, with their sensor size norms, aperture norms, and quality norms, are the only ones in the world, you are wrong.

           

          • It's about camera software. Your view that it isn't about photography is outright indefensible.

            From the first line in the fucking original source:

            One of the biggest advantages of SLR cameras over camera phones is the ability to achieve shallow depth of field and bokeh effects

            YOU are wrong. Now please take your pointless and irrelevant argument elsewhere.

            • by Arkh89 ( 2870391 )

              The previous claims are only about optics. I am not talking about the content of the article (yet)...
              I feel sad for you now...

    • by Anonymous Coward

      Exposure values, or "stops", have little to do with why smaller sensors have greater depth of field.

      For a given field of view, a smaller sensor requires a shorter focal length than a larger sensor. Given the same f-stop, longer focal lengths have shallower depths of field. So, to produce an image equivalent (other than for DOF) to that of a full-frame camera with a 50mm lens, an APS-C camera will use a 35mm lens, and will have a correspondingly deeper DOF.

      Some cameras deal with that limitation by using filt
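For the curious, here is a rough thin-lens calculation along the lines of the parent comment: same framing and f-stop, but the APS-C body uses a shorter focal length (and a smaller circle-of-confusion criterion) and ends up with a deeper zone of acceptable focus. The distances and circle-of-confusion values are conventional illustrative choices, not exact figures:

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """Near/far limits of acceptable focus, thin-lens approximation."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm          # hyperfocal distance
    near = h * subject_mm / (h + (subject_mm - focal_mm))
    far = (h * subject_mm / (h - (subject_mm - focal_mm))
           if h > subject_mm else float("inf"))
    return near, far

subject = 2000.0  # a 2 m portrait distance

ff_near, ff_far = dof_limits(50.0, 2.8, subject, coc_mm=0.030)  # full frame, 50 mm
cr_near, cr_far = dof_limits(35.0, 2.8, subject, coc_mm=0.020)  # APS-C, ~same field of view

print(f"full frame: {ff_far - ff_near:.0f} mm of DOF")  # roughly 260 mm
print(f"APS-C:      {cr_far - cr_near:.0f} mm of DOF")  # roughly 360 mm, i.e. deeper
```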

      • Exposure values, or "stops", have little to do with why smaller sensors have greater depth of field.

        Correct; however, opening the aperture is a workaround for creating as similar an image as possible, provided you haven't hit the limits.

  • by Snufu ( 1049644 ) on Saturday April 19, 2014 @04:06PM (#46796393)

    Digital TV with artificial interference.
    Digital audio player that simulates permanent scratches in vinyl records.
    Automobile interior that smells like horseshit.
    Digital camera that 'exposes' (erases) your photos if you open the battery compartment incorrectly.

  • by SeaFox ( 739806 ) on Saturday April 19, 2014 @04:30PM (#46796493)

    The reason cell phone cameras err on the side of too much in focus is that they originally all had fixed-focus lenses. If you didn't have a high depth of field, you'd have to make sure your subject was an exact distance from the camera to get them in focus. Even once we had focusing lenses, the auto-focus software wasn't the greatest at determining what the real subject of the photo was supposed to be.

    You know what would give a great shallow depth of field? A better lens in the camera. A lens with an aperture that could open up to lower f-stops would give a REAL depth of field effect, plus it would make the camera just plain better at taking pictures -- better low-light performance, less noise in high-ISO captures.

    • by Arkh89 ( 2870391 )

      And you know why it will never exist? Because the shallowness of the DoF is determined by the diameter of the aperture, AND you cannot simply put a diameter larger than a few mm on these devices. Compare this to the 30mm~70mm entrance apertures of the objectives on current DSLRs.

      That's why they are trying the computational way...
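Rough numbers behind that claim (typical values, not exact specs): the entrance pupil diameter is roughly the focal length divided by the f-number, which comes out to a couple of millimetres on a phone versus tens of millimetres on a DSLR lens.

```python
phone_pupil = 4.0 / 2.0    # ~4 mm lens at f/2.0 -> about 2 mm entrance pupil
dslr_pupil = 50.0 / 1.8    # 50 mm lens at f/1.8 -> about 28 mm entrance pupil

print(f"phone entrance pupil: {phone_pupil:.0f} mm")
print(f"DSLR entrance pupil:  {dslr_pupil:.0f} mm")   # over ten times larger
```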

    • by bws111 ( 1216812 )

      You need more than a good lens. You need a bigger sensor, and more distance between lens and sensor. And that ain't gonna happen in a phone whose goal is to be thin and lightweight.

    • The problem is the focal length of the lens, not the quality of the design. With such a small sensor (due to the size constraints of a cell phone package) you have to have extremely short lenses. Even if you had f/0.8, to get a reasonable portrait-equivalent field of view you're looking at single-digit focal lengths.

    • unless the sensor were much larger. even at fast focal ratios, a cell phone sensor still has close to infinite depth of field if you're focusing on any subject closer than a few inches away. The smaller the sensor size, the shallower the depth of field for a given focal ratio. That's why large and medium format lenses don't have to be as fast as 35mm.

      • by Arkh89 ( 2870391 )

        The smaller the sensor size, the shallower the depth of field for a given focal ratio.

        Hahahaha, but no.

        • um, yes. http://photo.net/learn/optics/dofdigital/
          for any given aperture and field of view a smaller sensor has a larger dof. and for a subject at the hyperfocal distance or beyond, it's much greater than for a larger sensor.

        • oh, oops, i meant to say deeper.

    • You know what would give a great shallow depth of field? A better lens in the camera. A lens with an aperture that could open up to lower f-stops would give a REAL depth of field effect, plus it would make the camera just plain better at taking pictures -- better low-light performance, less noise in high-ISO captures.

      A typical phone camera has an aperture of around f/1.8 to f/2.5. You care to tell me how you would get past the laws of physics to improve on this? I mean the lens is already nearly a ball to focus light on such a tiny dot. One could increase the sensor size but then the lens would need more physical separation to the sensor making the thickest component in a phone thicker still.

      Software AF is simple contrast detection. Every phone I've used has the ability to select the subject to focus on, so why would th

  • The problem with "artistic" blur: shrink the image a bit, and the blur is gone!
    (Try it and be amazed).

  • that would be really useful for extracting mattes and such in photoshop!

  • I tend to side with the pragmatic individuals here who are saying, it's bad enough that our modern historical record lacks the fine grain of Mathew Brady's silver emulsion plates and is generally USELESS for blow-ups of large groups of humans standing in groups --- "Mommy why does granny look like my LEGO people?"

    In order to preserve what vibrant detail can be captured and push focus tricks into post-production where they belong, how about this,

    A stereo multi-megapixel camera, where a second ccd+lens is o

  • Circles of confusion. This ridiculously clumsy term is what makes for an image's depth of field.

    In its simplest form, it is that at the plane of focus, the lens will be as sharp as the lens is capable of being. In that plane, the circles of confusion will be as small as that lens can make them.

    Moving closer to or further away from the lens, the circles of confusion become larger and larger, until they can no longer carry any worthwhile information, and are completely unsharp.

    The circles of confusion can
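For reference, one common thin-lens approximation for the size of those circles: the blur diameter of a point at distance s2, when the lens is focused at distance s1, is roughly A * |s2 - s1| / s2 * f / (s1 - f), where A is the aperture diameter. A small worked example with illustrative numbers:

```python
def blur_circle_mm(focal_mm, f_number, focus_mm, subject_mm):
    """Blur circle diameter on the sensor for a point at subject_mm
    when the lens is focused at focus_mm (thin-lens approximation)."""
    aperture = focal_mm / f_number                       # entrance pupil diameter
    return (aperture * abs(subject_mm - focus_mm) / subject_mm
            * focal_mm / (focus_mm - focal_mm))

# 50 mm f/1.8 lens focused on a subject at 2 m, with the background at 10 m:
print(f"{blur_circle_mm(50, 1.8, 2000, 10000):.2f} mm")   # ~0.57 mm on the sensor
```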
