Google's New Camera App Simulates Shallow Depth of Field
New submitter katiewilliam (3621675) writes with a story at Hardware Zone about a new feature that Google's working on for Android phones' built-in cameras: the illusion of shallow depth of field in phone snapshots, which typically err on the side of too much in focus, rather than too little. Excerpting: "The Google Research Blog [note: here's a direct link] revealed that there's quite a fair bit of algorithms running to achieve this effect; to put it in a nutshell, computer vision algorithms create a 3D model of the world based on the shots you have taken, and estimate the depth to every point in the scene."
2 1/2 D (Score:2)
There is no 3D modelling involved. And the results are, well, mixed.
Re: (Score:2)
computer vision algorithms create a 3D model of the world
Sounds like 3D modelling to me, albeit guessed at from the content of a 2D photo.
Re:2 1/2 D (Score:5, Interesting)
Depends on what you mean by 3D modelling. Looking further at the article, it's a depth-mapping technique for each pixel, which is more analogous to DOOM than Quake. Remember those restrictions? No bridges in the map, no tables. Just a single height for the floor and a single height for the ceiling at any map position.
As the OP says it's 2.5D not 3D.
Re: (Score:2)
If you have a depth channel you could displace a 3D plane in camera space and render that in 3D. So 2.5D/3D is a bit arbitrary.
If you had a perfect 3D model and the one photo though you still wouldn't have enough information to render true Depth of Field. The real problem isn't 2.5D/3D it's the fact that there is no parallax information for occluded information. That can be interpolated well enough for simple situations but ultimately you're trying to infer data which will cause artifacts.
Re: (Score:2)
If you have a depth channel you could displace a 3D plane in camera space and render that in 3D.
No, you'd only have the surfaces that are first hit with raytracing from the eye. That's not 3D. That's why it's 2.5D.
The real problem isn't 2.5D/3D it's the fact that there is no parallax information for occluded information.
But that's exactly the problem that 2.5D brings. You don't know what's behind foreground objects.
Re: (Score:2)
But that's exactly the problem that 2.5D brings. You don't know what's behind foreground objects.
That's not necessarily true. With a deep framebuffer you can have multiple ZSamples including occluded objects. For DOF that would be perfectly sufficient and a 2.5D point cloud. Conversely you could have a perfectly detailed 3D scene but use a per-pixel camera projection of your plate to refocus but have occlusion artifacts.
2.5D/3D isn't terribly important for DOF calculation. Then again even with the Deep image you still would have problems with reflections on curved objects and refraction etc.
Re: (Score:2)
That's not necessarily true. With a deep framebuffer you can have multiple ZSamples including occluded objects.
Rather like the 2 samples I already pointed out for Doom? Are you getting the idea yet that you aren't telling me anything I don't already know?
Re: (Score:2)
Note a depth mapping technique for each pixel isn't Doom-style restrictions unless the camera is in an unusual orientation.
You can have tables, etc. Every pixel has a distance from the camera to the object estimated. Since the camera is probably in a horizontal location this works. What you -can't- know about are objects behind other objects from the camera's standpoint, or stuff behind the camera. This is mostly OK for faking depth of field.
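A minimal sketch (toy data, not Google's actual pipeline) of what a per-pixel depth map buys you for fake DOF: blur each pixel by an amount that grows with its depth distance from the chosen focal plane.

```python
def lens_blur(image, depth, focus, strength=2.0):
    """Naive 'lens blur' from a per-pixel depth map: each pixel is
    replaced by the mean of a window whose radius grows with
    |depth - focus|.  O(n^2 * r^2), for illustration only."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = int(round(strength * abs(depth[y][x] - focus)))
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [image[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

# Toy scene: left half at depth 1 (the focal plane), right half at depth 5.
img = [[1.0 if x % 2 == 0 else 0.0 for x in range(8)] for _ in range(8)]
dep = [[1.0 if x < 4 else 5.0 for x in range(8)] for _ in range(8)]
res = lens_blur(img, dep, focus=1.0)
```

In-focus pixels survive untouched while the "deep" half averages out; note the window near the depth boundary also pulls in foreground pixels, which is exactly the occlusion artifact a single depth map can't avoid.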
Re: (Score:2)
Note a depth mapping technique for each pixel isn't Doom-style restrictions unless the camera is in an unusual orientation.
It's just an analogy. One that illustrates that depth mapping doesn't give proper 3D.
What you -can't- know about are objects behind other objects from the camera's standpoint, or stuff behind the camera. This is mostly OK for faking depth of field.
Absolutely. But it's still not 3D, it's 2.5D. No one said 2.5D wouldn't work for this application.
creamy bokeh (Score:2)
comes with a new 'Lens Blur' feature that lets you add creamy bokeh to your pictures.
Yeah, hi, I have a question. Does it have to be creamy?
Re: (Score:2)
creamy bokeh
Be careful not to stutter when you say it.
Re: (Score:2)
Yeah. What if you want catadioptric bokeh?
Re: (Score:2)
Then you are an idiot.
Google rules when it comes down to blurriness. (Score:5, Funny)
Re: (Score:1)
It blurred my penis but not my face.
Re: (Score:2)
My neighbor's dog's face was blurred instead of their kid. ;)
just like on "Guess Her Muff"
Not new (Score:1)
Another 'new feature' that's been out for over a year. https://itunes.apple.com/gb/app/focustwist/id597654594?mt=8 Kinda like the 'awesome' photosphere which MSFT had out for 2 years (as Photosynth) before Google did it.
Skynet target mode (Score:1)
When Google finally reveals its true name, Skynet, this is the technology that will allow its T-1000s to exterminate most of humanity.
But don't worry, they'll be sure to take an instagram of your death and post it to your Google+ livestream so your friends and family can mourn.
(There will also be ads for bereavement-related products. Neither Google nor Skynet are monopolies, honest.)
Call me a rock wielding barbarian (Score:3, Informative)
But I absolutely, totally LOVE depth of field. Screw the art school graduates. I bought a large screen digital tv for the illusion of a window upon the world.
I would like to think -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.
Re:Call me a rock wielding barbarian (Score:5, Insightful)
You know, your eyes have a substantial depth-of-field effect, too. You often don't notice, because your mental ability to pay attention to objects is tied pretty strongly to where your eyes are actually focusing, so anything you look at is in focus (because you focus on what you're looking at). However, you can really notice when you look at images that have deep DoF or, say, 3D movies (where they can't possibly get the DoF right).
Re: (Score:2)
IIRC, Gravity had 3d lens flare.
Re: (Score:2)
For sure they can get the DoF right in 3D movies, why not?
Re: (Score:2)
Because the depth-of-field effect generated by your eyes depends on the distance to the subject, which is largely flat in 3D movies. They can't add DoF blur because they don't know where your eye will focus. They can put the most-obvious object in focus and then the other objects will be blurred, but if you focus your eyes on them, they won't come into focus, which is not how your eyes normally work. (The same is true in 2D movies, naturally, but there isn't the illusion of the ability to focus in those.)
Re: (Score:2)
The human eye has its own depth of field characteristics plus a much greater dynamic range and resolution than any large flat screen.
So your large screen is going to fall short of that illusion.
Re: (Score:2)
You are seriously confused if you think depth of field control is a lo-fi effect.
Re: (Score:2)
Some movie directors are still bitching over the disappearance of film grain. There are companies putting unnecessary film grain in digital images. [cinegrain.com]
We need to get to 48FPS or better, so slow pans over detailed backgrounds look right. No more strobing!
(Instead, we're getting 4K resolution, which is only useful if the screen is in front of your face and a meter wide.)
Re: (Score:2)
About as unnecessary as room tone to a soundtrack.
Re: (Score:2)
It only looks right if you want everything to look like an 80s daytime soap.
Re: (Score:3)
Interesting that you use the phrase 'window upon the world.' Ever look through a real window with an insect screen on it? Now imagine that instead of clearly seeing the house across the street, what you see is the house with a neat grid in sharp focus upon it. That is what you are asking for.
A photo where everything is in equally sharp focus is absolutely not what your eyes see, unless you are standing on a cliff and seeing only things that are far away.
In real life your window upon the world would only
Re: (Score:2)
But I absolutely, totally LOVE depth of field. Screw the art school graduates. ..... -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.
You are talking as if choosing a shallow depth of field is something new, and necessarily "artistic". It's neither. A shallow depth of field is a practical way of, e.g., taking scientific natural history (think bugs) photos without the background distracting; also of taking people's portrait pictures ditto. It has been used that way since the early days of photography. Generally, until now, only the more expensive cameras have had this kind of control; snapshot cameras (of which phone cameras are a modern e
Re: (Score:3)
If anything, DOF will become more important as home screens get larger and sharper. It's an important tool in showing the audience where to look in a shot. Otherwise, staring at that gigantic screen would sometimes be like a live action "Where's Waldo."
Re: (Score:2)
And by consequence you must have endured many instances where this 'isolating of the subject by minimal depth of field' was intended to be part of the scene.
Re: (Score:2)
But I absolutely, totally LOVE depth of field. Screw the art school graduates. I bought a large screen digital tv for the illusion of a window upon the world.
I would like to think -- I sincerely HOPE -- that artificially inducing audience "focus" by depth of field will be as quaint as silent movie captions in 50 years.
I had a good run of several hundred years though...
Re: (Score:2)
Name your favorite movies and tv shows and I'm sure someone can point out all the long lens shots that prove you wrong.
Re:"subject" (Score:4, Informative)
Can boken be overdone? Sure. A 1mm thick depth of field is overdoing it, but so is shooting at f/16 everywhere. But even a thin DoF, used right, can result in some magical results.
Just because you know what you're talking about, and we're among friends:
It's bokeh, with an 'h'. And it refers to the character [wikipedia.org] of the blur, not the blur itself. If you've got an image, say f/3.4, a hipster might say "nice bokeh" to you, but he means that you have a good lens, not that you've selected a good aperture. And then he might also suggest you make a "glisse" print. ;)
And, of course, shallow depth of field is a huge fad, and there's an entire generation of kids who won't ever be able to tell where they were in any of their childhood pictures. *That* will seem very "early 21st century" in a couple decades.
Re: (Score:3)
Bullshit.
A well done wide-open portrait with tack-sharp eyes and everything else blurred connects you with the model like no other shot could. I know it sounds hipsterish, but it's an immutable reality.
Re:"subject" (Score:4, Insightful)
And, of course, shallow depth of field is a huge fad, and there's an entire generation of kids who won't ever be able to tell where they were in any of their childhood pictures.
Wow, let's step back a bit. Though I guess someone called the automobile a fad at some point.
The battle for wider apertures dates back to the post-war years. The 1950s were all about big lenses and wide apertures. I fondly recall using a Canon R mount 50mm f/1.2. Not a very sharp lens, but it provided an incredibly narrow depth of field. Mind you, it wasn't until the FE mount in the 80s that they managed to get a 50mm f/0.95, something which Leica managed quite a lot earlier on their M series cameras in the 1960s.
Now that the history lesson is over, how about an art lesson. Depth of field is used to direct attention. If you want someone looking at a subject rather than the image as a whole, you can isolate the subject by blurring the background. I did this on my holidays, and I'm going to look back and think about what I looked like at the time; who the hell cares where I was? If I wanted to take a photo of where I was, I would do so. Now on the flip side, why the hell would you want to ruin a perfectly good photo of the Pantheon or some other wonderful place by standing in front of it? Why would you want to give up artistic control to some passer-by by telling them to look through the viewfinder and push the button?
You seem to know the technical details of how something is done, but not have a clue of why someone would do it. Go to your grandpa and ask him if he used wide apertures when he took photos. You'll likely find him don his oversize framed glasses and say "Kid, I was the master of bokeh before it was cool."
Fad indeed.
Engrish (Score:2)
there's quite a fair bit of algorithms
I'll wait for version 2, with 50% more algorithms.
A more interesting feature is vertical video (Score:4, Interesting)
The best feature of the new camera app is that if you try to take vertical video it puts up an overlay telling you to hold it right! Hopefully everyone will copy this!
Re: (Score:2)
So you're saying you can never take pictures of trees (which are vertical, not horizontal) because modern, digital technology is incapable of doing what analog technology had done for over 100 years.
Just another example of the failings of the digital age.
Re: (Score:2)
It's only for video. Vertical pictures do not get the overlay.
In fact, the new app now allows for vertical panorama shots, which is something I had found lacking ever since that feature first appeared on Android.
Yep (Score:1)
It's a pain in the ass to use on the tablet "too fast.... too fast"
God help us (Score:2, Interesting)
Overcoming hardware limitations with software (Score:3)
The summary makes it sound like this is an algorithm tuning problem - "err on the side of too much in focus" - which isn't the case. It's a byproduct of sensor size.
Even with real cameras the rule of thumb is a full frame (35mm film equivalent size) camera, at a given focal length, has a stop "better" depth of field than a camera with an APS-C sensor taking the same picture - so a Nikon D7100 would need to shoot at f/2.0 to get the same blurring as a D800 shooting the same photo at f/2.8.
Most camera phone sensors are rather tiny compared to real cameras.
On a side note... pedants are going to have fun nitpicking all of this apart. :-)
Re: (Score:3)
DoF has no link to FoV. Hence, having an APS-C or Full-Frame sensor does not change the DoF. It just barely changes your "feeling" of it, because of the larger FoV.
Re: (Score:2)
I hear comments like this all the time. The reality is you've changed your FoV, so you're now taking a completely different picture. If you want all other things staying as equal as possible, then to take the same photo on an APS-C camera as on a Full Frame camera you'd need to switch to a narrower lens and step back from the subject. Oh, your subject - camera - background ratio has now changed, and so has your depth of field.
Or are you going to tell me all camera are equal because if you over expose your image by 1
Re: (Score:2)
This is not about photography. This is about Optics.
All cameras are different. And if you think that the camera types you would use, with their sensor size norms, aperture norms, and quality norms, are the only ones in the world: you are wrong.
Re: (Score:2)
It's about camera software. Your view that it isn't about photography is outright indefensible.
From the first line in the fucking original source:
One of the biggest advantages of SLR cameras over camera phones is the ability to achieve shallow depth of field and bokeh effects
YOU are wrong. Now please take your pointless and irrelevant argument elsewhere.
Re: (Score:2)
The claims. The previous claims are only about optics. I am not talking about the content of the article (yet)...
I feel sad for you now...
Re: (Score:1)
Exposure values, or "stops", have little to do with why smaller sensors have greater depth of field.
For a given field of view, a smaller sensor requires a shorter focal length than a larger sensor. Given the same f-stop, longer focal lengths have shallower depths of field. So, to produce an image equivalent (other than for DOF) to that of a full-frame camera with a 50mm lens, an APS-C camera will use a 35mm lens, and will have a correspondingly deeper DOF.
Some cameras deal with that limitation by using filt
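The focal-length effect described above can be put in numbers with the usual thin-lens approximation (the CoC values and the "33 mm equivalent" lens below are standard rules of thumb, not exact figures):

```python
def total_dof(f, N, c, d):
    """Approximate total depth of field for a subject at distance d,
    valid when d is well short of the hyperfocal distance:
    DOF ~= 2*N*c*d^2 / f^2.  All lengths in mm; N is the f-number."""
    return 2 * N * c * d**2 / f**2

# Same framing of a subject 2 m away, both lenses at f/2.8.
ff   = total_dof(f=50, N=2.8, c=0.030, d=2000)  # full frame, CoC ~0.030 mm
apsc = total_dof(f=33, N=2.8, c=0.020, d=2000)  # 1.5x crop, ~33 mm equivalent
```

The APS-C shot comes out with deeper DOF by a factor near the 1.5x crop factor, which is the roughly-one-stop difference mentioned upthread.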
Re: (Score:2)
Exposure values, or "stops", have little to do with why smaller sensors have greater depth of field.
Correct, however opening the aperture is a workaround for creating as similar an image as possible, provided you haven't hit the limits.
More Google innovations: (Score:3)
Digital TV with artificial interference.
Digital audio player that simulates permanent scratches in vinyl records.
Automobile interior that smells like horseshit.
Digital camera that 'exposes' (erases) your photos if you open the battery compartment incorrectly.
Re: (Score:3)
digital audio systems that minimize background noise.
Better idea: Improve cell phone camera lenses. (Score:3)
The reason cell phone cameras err on the side of too much in focus is that they originally all had fixed-focus lenses. If you didn't have a high depth of field, you'd have to make sure your subject was an exact distance from the camera to get them in focus. Even once we had focusing lenses, the auto-focus software wasn't the greatest at determining what the real subject of the photo was supposed to be.
You know what would give a great shallow depth of field? A better lens in the camera. A lens with an aperture that could open up to lower f-stops would give a REAL depth of field effect, plus it would make the camera just plain better at taking pictures -- better low-light performance, less noise in high-ISO captures.
Re: (Score:2)
And you know why it will never exist? Because the shallowness of the DoF is determined by the diameter of the aperture AND you cannot simply put a diameter larger than a few mm on these devices. Compare this to the 30mm~70mm entrance aperture of the objectives on current DSLRs.
That's why they are trying the computational way...
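To put rough numbers on the aperture-diameter point (the focal lengths below are typical values, not any specific device):

```python
def aperture_diameter_mm(focal_mm, f_number):
    """Entrance-pupil diameter = focal length / f-number."""
    return focal_mm / f_number

phone = aperture_diameter_mm(4.2, 2.0)   # typical phone module, ~4 mm lens
dslr  = aperture_diameter_mm(50.0, 1.8)  # 50 mm f/1.8 on a DSLR
```

Even at a comparable f-number the phone's entrance pupil is more than a dozen times smaller, and the blur discs it can produce shrink with it.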
Re: (Score:3)
You need more than a good lens. You need a bigger sensor, and more distance between lens and sensor. And that ain't gonna happen in a phone whose goal is to be thin and lightweight.
Re: (Score:2)
The problem is the focal length of the lens not the quality of the design. With such a small sensor (due to the size constraints of a cell phone package) you have to have extremely short lenses. Even if you had F0.8 in order to get a reasonable portrait focal length you're looking at single digit focal lengths.
That wouldn't work (Score:2)
unless the sensor were much larger. even at fast focal ratios, a cell phone sensor still has close to infinite depth of field if you're focusing on any subject closer than a few inches away. The smaller the sensor size, the shallower the depth of field for a given focal ratio. That's why large and medium format lenses don't have to be as fast as 35mm.
Re: (Score:1)
The smaller the sensor size, the shallower the depth of field for a given focal ratio.
Hahahaha, but no.
Re: (Score:2)
um, yes. http://photo.net/learn/optics/dofdigital/
for any given aperture and field of view a smaller sensor has a larger dof. and for a subject at the hyperfocal distance or beyond, it's much greater than for a larger sensor.
Re: (Score:2)
Re-read your original comment and find the missing "for a given [...] field".
Re: (Score:2)
oh, oops, i meant to say deeper.
Re: (Score:2)
You know what would give a great shallow depth of field? A better lens in the camera. A lens with an aperture that could open up to lower f-stops would give a REAL depth of field effect, plus it would make the camera just plain better at taking pictures -- better low-light performance, less noise in high-ISO captures.
A typical phone camera has an aperture of around f/1.8 to f/2.5. You care to tell me how you would get past the laws of physics to improve on this? I mean the lens is already nearly a ball to focus light on such a tiny dot. One could increase the sensor size but then the lens would need more physical separation to the sensor making the thickest component in a phone thicker still.
Software AF is simple contrast detection. Every phone I've used has the ability to select the subject to focus on, so why would th
The problem with blur (Score:2)
The problem with "artistic" blur: shrink the image a bit, and the blur is gone!
(Try it and be amazed).
hope they allow exporting of depth map (Score:1)
that would be really useful for extracting mattes and such in photoshop!
How about a TRUE 2.5D camera? (Score:2)
I tend to side with the pragmatic individuals here who are saying it's bad enough that our modern historical record lacks the fine grain of Mathew Brady's silver emulsion plates and is generally USELESS for blow-ups of large groups of humans standing in groups --- "Mommy why does granny look like my LEGO people?"
In order to preserve what vibrant detail can be captured and push focus tricks into post-production where they belong, how about this,
A stereo multi-megapixel camera, where a second ccd+lens is o
On the nature of depth of field (Score:2)
In its simplest form, it is this: at the plane of focus, the lens will be as sharp as it is capable of being. In that plane, the circles of confusion will be as small as that lens can make them.
Moving closer to or further away from the lens, the circles of confusion become larger and larger, until they can no longer carry any worthwhile information, and are completely unsharp.
The circles of confusion can
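The geometry just described reduces to one formula; here is a sketch with illustrative 50mm f/1.8 numbers (thin-lens approximation):

```python
def blur_circle_mm(f, N, s, D):
    """Diameter of the blur disc (circle of confusion) on the sensor
    for a point at distance D when the lens is focused at distance s.
    Thin-lens geometry; f = focal length, N = f-number, all in mm."""
    aperture = f / N
    return aperture * f * abs(D - s) / (D * (s - f))

# 50 mm f/1.8 focused at 2 m: how blurred is a background point at 10 m?
c_bg = blur_circle_mm(f=50, N=1.8, s=2000, D=10000)
c_in = blur_circle_mm(f=50, N=1.8, s=2000, D=2000)   # at the focal plane
```

At the plane of focus the disc collapses to zero (in this idealized model); the 10 m background point blurs to roughly half a millimetre on the sensor, far beyond the ~0.03 mm usually taken as "sharp".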
Re:Why? (Score:5, Insightful)
Because often, what you can't see is as important as what you can. Imagination is important. Composition is important, and emotion is important.
Re: (Score:3, Funny)
and emotion is important.
but I have aspergers you insensitive clod!
Re: (Score:2)
Those who claim aspergers have no right to call anyone insensitive.
That is just SO wrong.
Re:Why? (Score:4, Informative)
Right, the thing is, though, there's more going on with depth of field than just "This part of the image in focus, and that part out of focus". I mean, it's definitely a useful effect because it pretty much defines what part of the photo you're supposed to be looking at, but good shallow DOF really is quite an amazing effect done well and terrible when done badly.
On my 50mm lens (I recommend a 50mm to ANYONE who's playing with SLRs. It's a cheap lens, handles great in low light, and is very easy to take attractive photos with) the depth of field also interacts with light, so you see these great specks of light all through the background and other esoteric effects that really enhance the effect. If I just put the background out of focus with a blur, it'd be just... well, blurry.
Finally, it's not a linear blur either. Some parts are more in focus than others, and this adds to the effect because it's how your eye does it too.
The test photo in the article just makes it look like someone's put a lasso tool on the model, inverted it, then done some sort of blur on the background. It's just not the same as the DOF on a real wide-aperture camera.
Re: (Score:3)
You're describing Bokeh [wikipedia.org]. And yes, it is one of those techniques that, done well, can greatly enhance a picture. There are entire web sites and discussion groups devoted to the topic - which lens, camera, technique is best and who is a total poser. There have been numerous attempts to do this in software, all of which have yielded meh results. I suspect that Google's attempt will be another one of these, but who knows. Perhaps they will finally figure out how to let photographers match their $15,000 DSLR
Re: (Score:2)
I'm a little bummed about this. My first reaction was, "Oh, cool. This is just like the idea I had a few days ago." Then, I realized they're trying to do it from a single photo instead of taking advantage of the camera hardware to obtain actual depth info.
You have a lens that can focus. Take your shot, throw the focus off as far as you can (in whichever direction you can move the focus farther, by some definition of farther), then take a second shot. You can then compute some reasonable approximation
Re: (Score:1)
I'm a little bummed about this. My first reaction was, "Oh, cool. This is just like the idea I had a few days ago." Then, I realized they're trying to do it from a single photo instead of taking advantage of the camera hardware to obtain actual depth info.
Read the cited blog:
"Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames."
The 3-D effect comes from comparing the frames.
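A sketch of the triangulation that comparing frames makes possible (the numbers are illustrative, not Google's actual parameters): a point's apparent shift (parallax) between two frames taken a known distance apart gives its depth.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo triangulation: depth = f * B / d.
    focal_px: focal length in pixels; baseline_mm: camera travel
    between frames; disparity_px: how far the point appears to shift."""
    return focal_px * baseline_mm / disparity_px

near = depth_from_disparity(2800, 100, 40)   # big shift -> near point
far  = depth_from_disparity(2800, 100, 4)    # small shift -> far point
```

The app presumably does something far more robust than this, matched densely across the whole sweep, but the depth-from-parallax principle behind its per-pixel depth map is the same.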
Re:Why? (Score:5, Insightful)
Because it makes the intended subject stand out more.
Re: (Score:3)
Why would I want to ruin large parts of a good image with this effect? It seems just as stupid as adding a large lens flare.
lenses that can achieve a narrower field of focus are the more expensive ones, so there is established artistic value. Lens flare can also have value, and is really difficult to use effectively, so there is probably also a market for that.
Re: (Score:2)
lenses that can achieve a narrower field of focus are the more expensive ones, so there is established artistic value.
I'm not really taking issue with your conclusion, but a decent quality 50mm lens (widely known as a portrait lens because of its shallow depth of field) can be got new for about $200. And I got a beautiful 1984-vintage 105mm prime lens for $250 a few years back. It's an exception to the rule, yes, but sometimes the glass is less expensive than the camera body. That said, if you've got good lenses, they can make up for a lot of shortcomings in the camera body.
My own feeling about algorithms such as this is
Re: (Score:2)
Widely known where? Round these parts it's a 90 or a 105, assuming you're talking 35mm film.
Re: (Score:2)
Widely known where? Round these parts it's a 90 or a 105, assuming you're talking 35mm film.
I think the poster was talking crop sensor cameras, where a 50mm produces the same size image as a 75mm on a full frame (35mm) camera. On a side note, I never got the 50mm hysteria. I always found it either too long or too short when used with 35mm. It was a nice wide angle focal length on 4x5 though...
Re: (Score:2)
I never got the 50mm hysteria. I always found it either too long or too short when used with 35mm.
At 35mm it works well in a studio setting where the subject is always at a fixed distance and you want to minimize foreground distortion while still giving a sense of depth.
I never did many posed portraits, and in candid or on-site portraits I also tended longer or shorter. Generally an outdoor background provides a better sense of depth, so the intermediate length isn't useful.
It can also be a good length for macros.
Here in the western US, for 35mm film if you were to say "portrait lens" it would almost cer
Re: (Score:2)
There are other reasons they're more expensive than purely artistic ones. They cost more to make for a start, but they also give you benefits such as a brighter viewfinder and the ability to take pictures in lower light. The last is less relevant now, because high ISO performance in modern DSLRs is so good, but even so, more light getting in helps the camera focus in low light.
Re: (Score:2)
Why would I want to ruin large parts of a good image with this effect?
It's for camera phones: crappy, non-adjustable lens and cheap, noisy sensor. So it isn't a good image; deliberately blurring the picture can distract you from the fact that it is *not* a good image.
Re: (Score:3)
Quality bridge cameras ($300+ models) also have the ability to mimic a narrow depth of field. That can be very useful in wedding party photography, etc, where capturing candid portrait shots is critical to the photographer's success, and he will not have time to swap between lenses on his DSLR.
On my Fuji HS25EXR, the camera identifies the subject with its face recognition technology and takes 2 or 3 shots. The foreground is handled normally but the extra images are used to double or triple expose the back
Re: (Score:2)
Quality bridge cameras ($300+ models) also have the ability to mimic a narrow depth of field.
If you have a real camera and lens, just shoot with the lens wide open and fast shutter speed; you'll have a narrow depth of field with no computer wizardry needed.
Re: (Score:2)
I guess a lot depends on what you define a "real camera" to be.
For me, any camera body + lens combo that costs more than $750 is unrealistic. That's more than I can afford to replace if I lose it while kayaking. I'm happy with bridge cameras, and there are advantages in being able to go from wide angle to telephoto without swapping lenses. It does mean that you have to rely on the firmware for narrow DOF, etc-- but it is a reasonable trade-off.
Re: (Score:2)
Why would I want to ruin large parts of a good image with this effect? It seems just as stupid as adding a large lens flare.
For the same reason they use spotlight and shade in theatre shows, and floodlights on a sports field.
If you want a utilitarian document recording a place or event, such as a traffic cop taking a picture of an illegally parked car, then you can't do better than having every pixel in focus.
If you want something with artistic merit then you can use focus just as you can use light and shade to draw attention to one part of the image, and away from the background.
Re:Why? (Score:5, Insightful)
Consider this picture [flickr.com] of a spider dining on its prey--possibly a cricket.
What's important? the spider, the web, the meal.
What's not important? the storm drain, the foliage
It's not completely successful, but both the foliage and the storm drain are out of focus, while the spider, the meal, and the web are in focus. The aperture control on a large sensor camera lets the photographer select where the blurriness ends and where it begins. Generally, the longer the focal length of the lens, the more dramatic the effects of opening up the aperture. Since camera phones use short focal length lenses, the blurring effect is quite subtle, and is often insufficient to draw in the viewer's eye.
In this particular case, it's a macro shot, so even a very narrow aperture (f/16) involves some blurriness. Quite often, macro photographers use very narrow apertures, f/16-f/32, in an attempt to resolve all of the interesting aspects of their subjects.
Re: (Score:2)
That is dumb. Less information. I want to have everything crisp. I want to see everything I can. Not some artsy bs. Damn hipster old nostalgic tech bullshit.
You may need to replace your eyes then https://xkcd.com/1080/ [xkcd.com]
Re:Why? (Score:5, Insightful)
Why would I want to ruin large parts of a good image with this effect?
Portrait photography.
Or any time when the presence of crap in the background degrades the photo. That candid picture of your Mom sharing a moment with your aunt would look great if it were not for the Ronald McDonald billboard in the background.
Re: (Score:2)
That candid picture of your Mom sharing a moment with your aunt would look great if it were not for the Ronald McDonald billboard in the background.
Hmm, perhaps. But over the last few years I have increasingly seen so many brilliantly clear photos with fabulous colour etc. which are achingly dull because the photographer has no sense of the artistic and no experimental curiosity. Despite the fact that with a digital SLR camera it is cheaper and easier than ever to experiment: just try, and then throw out the failures.
Personally, I have started on taking deliberately imperfect pictures; I am particularly fond of under-exposure - it is surprising
Re: (Score:2)
He seems to be mixing some terms a little bit. Correcting distortion lowers sharpness -- though for any image displayed only at 1080p, it probably makes no difference. Correcting some other aberrations, like chromatic aberration (CA), lowers contrast (and sharpness). Higher sensitivity in a digital sensor lowers contrast a whole lot more, though. That, and poorly controlled lens flare, are usually the major drivers of low-contrast images out of smartphones. If you take a picture in daylight, don't point it right
Re: (Score:3)
Wow. You're so awesome. You own a big-boy camera and know all the fancy photography words!*
That's what you wanted to hear, right? Because I can't think of any other good reason for you to post this.
The "muggles" have all got cameras now. This is just a nice bit of software that'll make their shots a bit more fancy.
Get over it.
(*disclaimer: so do I, but I don't use it as an excuse for scoffing at those who don't)
You'll take my Haruo Sato designed lenses away from me when you pry them from my cold, dead fingers.
No-one's coming for your lenses, you self-aggrandizing lunatic.
Re: (Score:2)
Wow. You're so awesome. You own a big-boy camera and know all the fancy photography words!*
This being slashdot, I think it's appropriate to discuss the technical aspects of photographic lenses. You might even learn something by reading it with an open mind.
Re: (Score:2)
You'll take my Haruo Sato designed lenses away from me when you pry them from my cold, dead fingers.
But it's not an Apple board, so there's no need to be so smug about it.
Re: (Score:2)
This being slashdot, I think it's appropriate to discuss the technical aspects of photographic lenses.
The AC wasn't trying to start a discussion. He just wanted to sneer down his nose at people using "inferior" tech.
We all know real cameras take better photos than smart phones, and this software isn't going to suddenly close the gap, so I don't know why the AC was acting so threatened and insulted.