Microsoft Tech Can Deblur Images Automatically

An anonymous reader writes "At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors — the same sensors currently added to the iPhone 4. No more blurry low light photos!"
  • Frankencamera. (Score:5, Interesting)

    by Greger47 ( 516305 ) on Saturday July 31, 2010 @06:06PM (#33097858)

    Step back! This is a job for Frankencamera [stanford.edu]. Run it on your Nokia N900 [nokia.com] today.

    OTOH, having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing with a plain old mobile phone.


  • by supernova87a ( 532540 ) <kepler1@@@hotmail...com> on Saturday July 31, 2010 @06:10PM (#33097886)
    I recall that some other cameras, like a Casio I've seen a friend using, also do deblurring, but by stacking rapid subframes (I guess using bright reference points). If I understand correctly, this new method operates on a single frame. I wonder if anyone has a useful comparison of the hardware requirements, image quality, and usability differences between the two methods?
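For reference, the subframe-stacking approach described above can be sketched in a few lines. This is a toy version under assumed conditions: the per-frame shifts are already known (a real camera would estimate them from bright reference points or sensor data), and wrap-around rolling stands in for proper border cropping.

```python
import numpy as np

def stack_frames(frames, shifts):
    """Average frames after undoing each frame's integer (dy, dx) shift.

    Toy sketch: np.roll wraps around at the borders; a real implementation
    would crop the borders instead, and estimate the shifts itself.
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # Undo the measured shift so all frames line up, then accumulate.
        acc += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N aligned short exposures cuts the sensor noise by roughly the square root of N, which is why each subframe can afford a fast, blur-free shutter.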
  • Re:Enhance (Score:5, Interesting)

    by slasho81 ( 455509 ) on Saturday July 31, 2010 @06:27PM (#33097980)
    Don't forget the TVTropes page: http://tvtropes.org/pmwiki/pmwiki.php/Main/EnhanceButton [tvtropes.org] Sorry.
  • Re:Okay. (Score:3, Interesting)

    by sker ( 467551 ) on Saturday July 31, 2010 @06:35PM (#33098008) Homepage Journal

    Agreed. I feel the same way about auto-focus.

  • by peter303 ( 12292 ) on Saturday July 31, 2010 @06:48PM (#33098078)
    For the past 8 years or so, Microsoft has been a co-author on more papers than any other organization at SIGGRAPH. This is impressive because SIGGRAPH has the highest paper rejection rate of any conference I know of: they reject (or downgrade to a non-published session) 85% of the paper submissions. And you have to submit publication-ready papers nearly a year in advance, with a video summary.

    This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off.
  • Even so... (Score:5, Interesting)

    by fyngyrz ( 762201 ) on Saturday July 31, 2010 @08:19PM (#33098572) Homepage Journal

    Clearly (pun intended) the results have a ways to go yet. Look at the Coca-Cola image, at the 'a' on the end of 'Cola'... that thing is hosed by the blur, and they're unable to recover it because there's no intermediate contrasting color. Same thing for the spokes on the car rims.

    This problem can't be completely solved post-picture. Only large-scale elements with nothing else around them will yield pixel-sharp solutions.

    The optimum way to correct blur is to apply active or passive (e.g. tripod) stabilization to the lens prior to the shot; active technology is already pretty decent (photographers tend to measure things in stops; it's intuitive to them... when they say an active stabilizer "gives you" four stops, for instance with Canon, what they mean is that you can shoot four stops slower with the shutter and you won't get blur from camera movement.) Doesn't solve subject movement at all, but then, nothing really does other than cranking down the exposure time.
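The stops arithmetic above is easy to make concrete. A back-of-the-envelope sketch, with illustrative numbers rather than any manufacturer's spec: each stop of stabilization doubles the shutter time you can hand-hold.

```python
# Each stop of stabilization doubles the usable hand-held shutter time.
# The base shutter value below is illustrative, not a Canon specification.
def slowest_handheld_shutter(base_shutter_s, stops_of_stabilization):
    """Slowest usable shutter speed (seconds) given stops of stabilization."""
    return base_shutter_s * (2 ** stops_of_stabilization)

# A 1/125 s hand-holding limit plus four stops of IS allows 16x longer,
# i.e. roughly 1/8 s.
print(slowest_handheld_shutter(1 / 125, 4))
```

Note this only covers camera movement; as the comment says, a longer shutter still blurs anything in the scene that moves.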

    So... considering lens stabilization has been in-camera for years, and this requires more hardware, but gives you less... I'm going to go out on a limb and say it isn't of interest to camera folks. Maybe in some esoteric role... a spacecraft or something else with a tight power budget where stabilization can't be done for some reason (certainly measurement takes less power than actual stabilization)... but DSLRs and point-and-shoots... no.

  • Information theory (Score:3, Interesting)

    by Vapula ( 14703 ) on Saturday July 31, 2010 @08:36PM (#33098646)

    Information theory tells us that once some information has been lost, it can't be recovered. If the picture has been somehow "damaged" by motion blur, the original picture can't be reconstructed.

    On the image, we'll have much more than just the motion blur from the camera's movement:
    - noise added by the sensor's electronics
    - blur from subject movement
    - distortion from lens defects (mostly on low-end cameras)
    - distortion/blur from bad focus (autofocus is not perfect)
    - ...

    Any operation that reduces the camera-motion blur will probably increase the effect of all the other defects: you reduce one kind of image degradation and increase the impact of the others.

  • Re:Even so... (Score:3, Interesting)

    by EvanED ( 569694 ) <evaned@gmail. c o m> on Saturday July 31, 2010 @11:38PM (#33099260)

    I'm going to go out on a limb and say it isn't of interest to camera folks. Maybe in some esoteric role... a spacecraft or something else with a tight power budget where stabilization can't be done for some reason (certainly measurement takes less power than actual stabilization)... but DSLRs and point-and-shoots... no.

    Well, sort of; I disagree somewhat. For starters, take camera phones. What do they need to do this? I'm too lazy to read the paper, but it seems to be accelerometer data. How many phones come with accelerometers nowadays? Pretty much every smartphone. So no extra hardware there. Second, think of an actual camera. Sure, a lot of P&S cameras now have IS, and a lot of SLR lenses have IS. Maybe that gets you an extra stop or two. But what if you *also* had accelerometer data to apply? If you were in a really low-light scenario and a tripod was impractical (for any number of completely realistic reasons), could this give you yet another stop?

    I mean, you say that the results aren't great, and point out flaws. And the results definitely aren't great. Actually, if you look at the Coke image, the whole thing has a very substantial double image -- it looks like the image was translated a couple dozen pixels and added to itself. But in some sense, "does it look as good as a completely stable shot" is the wrong metric -- if you ask me, the processed images do look rather better than the versions before processing. Low-light photography to me is always a huge game of tradeoffs: use a slow shutter and get blurring, a fast aperture and get narrow DoF, or a high ISO setting and get high noise. And for that reason, I would welcome anything that gives more choices in that arena. (In my dream world, Canon stops pushing the pixel count for a couple of generations and just works on decreasing noise.)

  • Re:Enhance (Score:1, Interesting)

    by aliquis ( 678370 ) <dospam@gmail.com> on Sunday August 01, 2010 @12:40AM (#33099470) Homepage

    I doubt we are seeing anything new here. I assume they just use the accelerometers to determine how much they should crop away from the current sample, and then in the end stitch everything together.

    iMovie does software image stabilization that way (by cropping enough to keep the image steady), and a lot of cameras detect motion and move the sensor.
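A quick sketch of that crop-and-shift idea, under assumed conditions: the per-frame displacement (dy, dx) has already been derived from accelerometer/gyro data, and `margin` is the pixel border sacrificed to leave room for the moving crop window.

```python
import numpy as np

def stabilized_crop(frame, dy, dx, margin):
    """Crop with a `margin`-pixel border, offset to cancel camera motion.

    (dy, dx) is the measured image displacement for this frame (assumed to
    come from a sensor pipeline). Requires abs(dy) <= margin and
    abs(dx) <= margin, otherwise the window runs off the frame.
    """
    h, w = frame.shape[:2]
    # Shift the crop window by the measured motion so the scene stays put.
    return frame[margin + dy : h - margin + dy,
                 margin + dx : w - margin + dx]
```

Every frame loses 2*margin pixels on each axis, which is exactly the lost-resolution tradeoff of digital stabilization.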

    The question is whether people would call such a method cheating, or whine about the lost pixels. I for sure would rather lose pixels for a sharper image than get a full-res blurry one...

    One obvious advantage would be that it's not mechanical.

    If this is anything like what they actually do, then I hate that people most likely get things like this patented, even though it's based on ideas which are most likely patented too. Just think about how fucking stupid math, music, art and such would have been if the same approach were used there. "No, you can't use my idea in your solution!"

  • by AliasMarlowe ( 1042386 ) on Sunday August 01, 2010 @04:23AM (#33100006) Journal

    Do you mention FTs just for reference, or are you implying that they are typically used in deconvolutions? In my experience, signals with any amount of noise are much better handled with iterative algorithms.

    Yes. Fourier methods are unlikely to be used in practice on images. However, it's instructive to look at the process in a transform space to understand the extent to which information is irrecoverably lost in the optical path. The consequences can be explained using any suitable integral transform, but engineers are most familiar with Fourier and wavelet methods.

    Suppose the Fourier transform of the "perfect" image is J, and the Fourier transform of the exact blurring kernel is K; then the transform of the blurred image is L = JK. If the kernel is known exactly, and has adequate magnitude through the frequency range of interest, the image can be recovered simply by inverting the kernel: J = L/K. Due to the physics of photon detection, the measured image will also contain some amount of noise N, so even in this ideal case the recovered image will be corrupted by noise: J' = (L+N)/K.

    Realistically, the kernel is not known exactly, and may have small magnitude at some frequencies, so its inverse is unreliable there; a related consequence is that the blurred image in those bands will be primarily noise. Pseudo-inverses are also unreliable, since they are discontinuous near a spectral zero, with very high sensitivity to perturbations near that zero. So-called Wiener deconvolution attempts to circumvent this by diagonally biasing the transform of the estimated kernel before inverting, but with generally unsatisfactory results.
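A toy 1-D version of that transform-space argument, with all parameters assumed for illustration: blur a signal with a box kernel whose spectrum has (near-)zeros, then compare the naive inverse J = L/K against a Wiener-style regularized inverse.

```python
import numpy as np

# Toy 1-D illustration (assumed parameters). A length-8 box blur on 256
# samples has (near-)zeros in its spectrum, so the naive inverse L/K
# amplifies noise without bound in those bands, while the Wiener-style
# regularized inverse stays bounded.
rng = np.random.default_rng(0)
n = 256
j = np.zeros(n)
j[100:130] = 1.0                                   # "perfect" signal
k = np.zeros(n)
k[:8] = 1.0 / 8.0                                  # box motion-blur kernel

J, K = np.fft.fft(j), np.fft.fft(k)
L = J * K + np.fft.fft(0.01 * rng.standard_normal(n))  # blurred + noise

with np.errstate(divide="ignore", invalid="ignore"):
    naive = np.fft.ifft(L / K).real                # J' = (L+N)/K, unregularized

snr = 100.0                                        # assumed signal-to-noise ratio
wiener = np.fft.ifft(L * np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)).real

# naive is garbage wherever |K| is (near) zero; wiener stays close to j,
# at the cost of giving up on the bands where the information is gone.
```

The regularizer 1/snr in the denominator is exactly the "biasing before inverting" the comment describes: it caps the gain of the inverse filter where |K| is small, trading unbounded noise amplification for a controlled loss in those bands.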

    Information which has been destroyed cannot be recovered, by any method. Any attempt to do so would merely amplify the noise in the measured blurry image at those bands. Iterative methods (typically a variant of Richardson-Lucy for images) try to minimize the amplification of noise in various ways, all imperfect but preferable to a direct Fourier method. Most iterative methods will, left to themselves, converge on the same asymptote as the Fourier method. However, iterative deconvolution methods always employ a regularization step in each iteration, whose primary purpose is to attenuate adjustments in bands where the kernel is uncertain. They also generally use a small number of iterations, since the first iterations are less affected by noise than later ones. The end result is that information which was destroyed is not spuriously recreated from noise (in principle, at least).
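For the curious, a bare-bones Richardson-Lucy iteration looks like the sketch below (1-D, circular convolution, toy data, and deliberately without the regularization step described above; real implementations add it and stop early):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=25):
    """Bare-bones Richardson-Lucy deconvolution (1-D, circular).

    Unregularized sketch: production code damps the update in bands where
    the kernel is uncertain and uses few iterations to limit noise growth.
    """
    psf = psf / psf.sum()                       # kernel must sum to 1
    P = np.fft.fft(psf)
    estimate = blurred.astype(float).copy()     # common starting guess
    for _ in range(iterations):
        # Forward model: current estimate blurred by the PSF.
        conv = np.fft.ifft(np.fft.fft(estimate) * P).real
        ratio = blurred / np.maximum(conv, 1e-12)
        # Correlate the ratio with the PSF (conjugate in Fourier space)
        # and apply it as a multiplicative correction.
        estimate *= np.fft.ifft(np.fft.fft(ratio) * np.conj(P)).real
    return estimate
```

The multiplicative update keeps the estimate nonnegative and conserves total flux at each step, which is part of why this family behaves so much better than direct spectral division.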

    If you're interested, I recommend: P. Jansson (ed.), Deconvolution of Images and Spectra, 2nd ed., Academic Press, 1996. Alas, it appears to be out of print (and my copy is not for sale).
