Microsoft Tech Can Deblur Images Automatically

An anonymous reader writes "At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors — the same sensors currently added to the iPhone 4. No more blurry low light photos!"
  • Enhance (Score:5, Funny)

    by TheSwampDweller ( 1076321 ) on Saturday July 31, 2010 @04:53PM (#33097782)
    Enhance!
  • Windows 7 (Score:3, Funny)

    by nacturation ( 646836 ) * <nacturation@gmAUDENail.com minus poet> on Saturday July 31, 2010 @05:00PM (#33097822) Journal

    I bet it can remove the blur from the titlebar for screenshots of a Windows 7 app. Now we can all see what those developers are viewing behind that window!

    • Useful, but limited (Score:5, Informative)

      by AliasMarlowe ( 1042386 ) on Saturday July 31, 2010 @06:18PM (#33098262) Journal
      It won't help at all if the object is moving. In fact, this feature should be switched off if you're trying to photograph a moving object with the camera (common enough, and not just in sports). It would not be able to compensate for a mismatch between the object speed and your tracking movement, and would do entirely the wrong thing even if you tracked the moving object perfectly for the shot. In this case, there is no substitute for adequate light and/or a fast lens and/or a smooth accurate tracking movement.

      As another comment, deconvolution requires a very accurate approximation of the true convolution kernel, which may be provided by the motion sensors. However, to reconstruct the image without artifacts, the true kernel must not approach zero in the Fourier domain below the Nyquist frequency of the intended reconstruction (which is limited by the antialias filter in front of the Bayer mask). In fact, if the kernel's Fourier transform has too small a magnitude at some frequency, the reconstruction at that frequency will be essentially noise, or will be zero if adequate regularization is used. If the motion blur is more than a few pixels, this will generally mean that the reconstructed image will have an abridged spectrum in the direction of blur, compared to directions in which no blur occurred. Of course, if your hand is so shaky and the exposure so long that blur occurs in all directions, then the spectrum of the reconstructed image will be more uniform. It is likely to be truncated compared to the spectrum of an image taken without motion blur.

      The quality of the reconstructed image would also be limited by the effects of other convolutions in the optical pathway. For instance, if you're using a cheap superzoom lens, don't expect to get anywhere near the antialias filter's Nyquist frequency in the final image, as the lens will have buggered up the details nonlinearly across the image even before the motion blur is added. If you're using nice lenses (Canon "L" series or Pentax "*" series and suchlike), then this will not be an issue.

      The method would seem to be useful in low-ish light photography of stationary objects. A sober photographer would beat a drunk photographer at this, but the technique would help both to some extent. A photographer using a tripod would do best, of course.
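
      To make the Fourier-domain point concrete, here is a minimal NumPy sketch of regularized (Wiener-style) deconvolution. It is an illustration of the principle, not the paper's algorithm: the kernel is a hypothetical 9-pixel horizontal motion blur standing in for whatever the motion sensors would report, and all names are made up. Where |K| is nearly zero, the regularization term pushes that frequency toward zero instead of amplifying noise, which is exactly the truncated-spectrum behaviour described above.

      import numpy as np

      def wiener_deblur(blurred, kernel, reg=1e-2):
          # Transform the image and the zero-padded kernel.
          K = np.fft.fft2(kernel, s=blurred.shape)
          B = np.fft.fft2(blurred)
          # Wiener-style inverse: conj(K) / (|K|^2 + reg). Frequencies
          # where |K| ~ 0 are damped by reg rather than blown up as 1/|K|.
          X = B * np.conj(K) / (np.abs(K) ** 2 + reg)
          return np.real(np.fft.ifft2(X))

      # Hypothetical kernel: uniform horizontal motion over 9 pixels.
      kernel = np.full((1, 9), 1.0 / 9.0)
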
      • And how often do you have an object moving beneath a Windows 7 titlebar?

        Perhaps you should have started your own thread.

      • There are some full-size samples of the results of the technique [microsoft.com], where you can compare the original image with the result of their technique, and the results of two older techniques. Their technique shows some very obvious problems:

        1. Doubling of high-contrast edges that are "ghosted" in the original because of the motion blur. In the original, presumably, the motion was something like this: start at position A, hold for a relatively large fraction of the exposure, then quickly move to position B, and hold
  • by Manip ( 656104 ) on Saturday July 31, 2010 @05:02PM (#33097836)
    This is like one of those "Why didn't I think of that?" ideas that you wonder why your camera doesn't already have. The nice part is that it can be done very cheaply (relative to the cost of a camera) and would improve images in many cases. My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement. Like, for example, you're standing on a boat rocking in the waves, you take a photo of the deck, and this technology compensates for the rocking, which results in a ton of blur.
    • by Hast ( 24833 )

      This method (like all motion compensating algorithms) can correct for some motion blur but it will add other defects to the image. So while it might "save" a picture already captured it's better to take a new photo.

    • The concept is in fact so simple it has already been done. This is probably just a new enhanced(!) algorithm. I have a digital camera that is several years old and can compensate for moving or shaking the camera; this basic feature is just off by default on my camera, though some more idiot-proof cameras have it on by default.

      • Re: (Score:2, Informative)

        by Threni ( 635302 )

        This isn't IS (image stabilization) like Canon, for example, has on its lenses. This is making a note of the movement and removing it later (where later could mean just after the pic is taken) rather than using gyros or whatever to prevent the shaking from affecting the picture in the first place. Perhaps both systems could be used, but I'm not sure, given that I'm not sure if it makes sense to use a note of how a camera was moved when the picture was taken at the same time that some of the movement has been compensated for - y

      • I usually like that feature. I had to turn it off on my video camera when I was doing a shoot a couple weeks ago. My hands aren't always steady, so it's nice having it fix that automagically. But I set up for a tripod shot (filming a stage), and it treated the motion on the stage as camera shake, as if the performers were the still part and I were the one moving, so the footage looked unsteady. With the steady shot turned off, it came out perfectly. Well, until someone bumped my tripod, but there isn't much we can

    • by Firehed ( 942385 )

      I imagine that if this tech does make it into higher-end cameras (namely SLRs), the accelerometer data will in fact be saved as extra data in the RAW file. In fact due to the nature of RAW files, I think it would have to be done that way. Naturally if you're shooting jpegs (phones, P&S, foolish SLR users), then you just take what you get and that's it. It will probably just become another part of RAW "development" for higher-end shooters.

      Ultimately, the concept isn't very different than the image stabil

    • non-inertial frame (Score:3, Insightful)

      by martyb ( 196687 )

      My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement.

      I suspect in the majority of cases, this would improve photos. As to your query, my first thought of a problematic environment would be trying to take a photo of a friend sitting next to you--in a moving roller coaster as it hurtles around a bend. You and your friend are [mostly] stationary WRT each other, but you (and the camera) are all undergoing acceleration, which the camera dutifully attempts to remove from the photo. Certainly a rare event compared to the majority of photo ops.

      • by Nemyst ( 1383049 )
        Further, I assume it could be done so that instead of immediately applying the post-processing, the motion sensors' data is stored alongside every picture for later usage. It would be more efficient (in terms of photo quality, power savings and speed) to let a computer with user-selectable settings do the job instead of embedding the entire algorithm in the camera.
  • There is a lot of poor porn out there from people who can't hold a camera still. Microsoft should redeem itself and sort that out asap.
  • Frankencamera. (Score:5, Interesting)

    by Greger47 ( 516305 ) on Saturday July 31, 2010 @05:06PM (#33097858)

    Step back! This is a job for Frankencamera [stanford.edu]. Run it on your Nokia N900 [nokia.com] today.

    OTOH having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing using a plain old mobile phone.

    /greger

    • Sounds like a great way to land a spot on a terrorist watch list, to me...
    • by gmuslera ( 3436 )
      It's perfect for it. You already have most of the needed hardware included, and you can install any software you need to play with the photo or the process of taking it. But the words "Microsoft Research" sound a bit ominous in the article. The research probably comes with a big fat patent that says somewhere "and it is forbidden to try this on open source operating systems".
    • Re:Frankencamera. (Score:5, Informative)

      by slashqwerty ( 1099091 ) on Saturday July 31, 2010 @07:57PM (#33098736)
      It's worth noting that page nine of the Frankencamera team's paper [stanford.edu] mentions the work of Joshi et al when it discusses deblurring pictures. Neel Joshi [microsoft.com] was the lead researcher from the article we are discussing.
    • OTOH having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing using a plain old mobile phone.

      Cred, yes. Easy passage through an airport checkpoint, not so much.

  • by Average_Joe_Sixpack ( 534373 ) on Saturday July 31, 2010 @05:09PM (#33097872)

    Social networking sites are about to get a whole lot more ugly

  • by supernova87a ( 532540 ) <kepler1.hotmail@com> on Saturday July 31, 2010 @05:10PM (#33097886)
    I recall that some other cameras, like a Casio I've seen a friend using, also do deblurring, but rather by stacking rapid subframes (I guess using bright reference points). If I understand correctly, this new method operates on a single frame. I wonder if anyone has a useful comparison of the hardware requirement/image quality/usability differences between the two methods?
  • Okay. (Score:3, Insightful)

    by kurokame ( 1764228 ) on Saturday July 31, 2010 @05:11PM (#33097890)

    Great, you can improve your motion blur removing algorithm by recording the motion which created the blur.

    Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues. So this is more a source of supplementary data. The before and after images leave out the whole "you can already do this without the extra sensor data" aspect.

    And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.

    Yeah, this is nifty, especially for smartphone based cameras which may already have built-in sensors to do this. But it isn't exactly revolutionary. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

    • by mark-t ( 151149 )
      I think that the idea is that this would be intended for everyday point-and-shoot cameras that are usually hand-held.
    • Re:Okay. (Score:5, Insightful)

      by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Saturday July 31, 2010 @05:26PM (#33097968)

      This isn't for people who want to learn photography and take good pictures, it's for people who are shooting their friends in a bar at night to post on Your Face in a Tube and laugh about for a week before being forgotten -- it's merely intended to allow point-and-click shooting to work more reliably in poor conditions on cheap equipment with inattentive and untrained operators.

      • Re: (Score:3, Interesting)

        by sker ( 467551 )

        Agreed. I feel the same way about auto-focus.

        • I disagree; autofocus is usually better than manual even if you have both - especially if your only image preview is on a relatively low-res LCD, but also if the subject is moving (in macro shots a little subject movement can *completely* de-focus the shot). And face recognition is one of those "blingy"-seeming features that actually makes sense, since in an image with objects at various focal depths, usually you want the face. In cases where that's wrong, a focus lock button allows you to autofocus at
          • by babyrat ( 314371 )

            If you are trying to focus with a low-res LCD (or any LCD, for that matter), I can see why you wouldn't see the need for manual focus...

      • Re:Okay. (Score:4, Insightful)

        by kurokame ( 1764228 ) on Saturday July 31, 2010 @05:46PM (#33098066)
        That would be a great point if it involved learning something more complicated than bracing your hand.
      • There are real limits to the human body. Anyone who says "I can hold a camera perfectly steady" is lying. We are not perfect platforms. So image stabilization can help a lot. Long range photography, in particular of fast moving objects like in sports, got a big boost when optical image stabilization came out. The length that you could zoom and still get a good shot increased. It wasn't that the photographers were bad, it was that they were at the human limits. The optical stabilizers enhanced that are upped t

        • There are a lot of limits, including the human factor. Photography is one, but try target shooting (like, with a gun). You'll never see someone who can put 10 shots at 100 feet into the same hole. If they get two, it's dumb luck.

          For cameras, sometimes there are extreme examples. I put my Nikon D90 onto my telescope (Newtonian). I was shooting using a USB cable to my laptop, so I could use the laptop as a remote trigger, and set the camera to lift the mirror, so it wouldn't shake. When l

    • And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.

      Yeah, this is nifty, especially for smartphone based cameras which may already have built-in sensors to do this. But it isn't exactly revolutionary. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

      Maybe you should learn m

    • So you just hate technology being used for a new application. Guess what: people can't be good at everything. That is why technology exists. It doesn't need to replace the expert, but it allows the novice to get the job done more easily.

    • Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues.

      In order to have any hope of getting that motion information from the blurred image, wouldn't you also need to know what the image is supposed to look like without the blur?

      And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization.

      Well, that's the whole problem, right? Short exposure times mean dark images, long exposure times mean blur. Sure, you can set up a professional camera with a tripod and do it the right way, but what about the rest of us who just want to take the occasional picture on a cheap camera without thinking about it? That's most o

    • by Polo ( 30659 ) *

      Actually, I kind of wonder if an IS lens might actually work AGAINST you with this algorithm. You'd have camera motion that the IS lens system is already cancelling, and you'd have to subtract that from the measured camera motion before using it in this algorithm.

      But I can still see this being used in professional settings. Heck, there are applications that contain databases of per-lens data, and you can correct for distortion and light-falloff along with sensor corrections.

      look at http://www.dxo.com/ [dxo.com] or maybe Canon D

  • by peter303 ( 12292 ) on Saturday July 31, 2010 @05:48PM (#33098078)
    For the past 8 years or so, Microsoft has been a co-author on more papers than any other organization at SIGGRAPH. This is impressive because SIGGRAPH has the highest paper rejection rate of any conference I know of - they reject (or downgrade to a non-published session) 85% of the paper submissions. And you have to submit publication-ready papers nearly a year in advance, with a video summary.

    This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off.
    • "This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off."

      I suspect the idea is mainly to keep the people from going elsewhere.

      • by dbIII ( 701233 )
        Microsoft used to get mercilessly flamed by the rest of the industry for doing no research at all: just acquiring companies with technology, ripping off the ideas of others, or entering into dodgy contracts to licence technology (e.g. Spyglass: if you give us that web browser we'll give you a percentage of every copy of IE sold - only we're giving it away for free, suckers!). They couldn't keep plundering startups and copying Apple forever and still be viable, so they started Microsoft Research. Some good ideas
    • by yyxx ( 1812612 )

      High reject rates are not necessarily an indicator of scientific quality; they may simply mean that the conference gets a lot of crap submitted. They are often more an indication of the perception of a conference as being important, not an actual indicator of quality. And for SIGGRAPH, you know what counts: nice pictures and videos. It's not really a surprise that Microsoft is good at producing those.

      If you want to know about the quality of Microsoft Research, you need to look at how much they're spending

  • The whole premise seems kinda ridiculous. You might have some idea how the camera swung, but that only helps you if you're pointing at some 2D surface that's perpendicular to the camera.

    If there is any depth to the scene, points closer will move more than points farther away. You might have an estimate of the distance from the auto-focus feature, but that's only going to help you fix up points near the focus sweet-spot. Points closer and farther away are going to be made worse, not better.
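
    To put a rough number on that, a simple pinhole model (an assumption for illustration, not from the paper) says the blur from camera translation scales inversely with depth:

    # Pixel shift from sideways camera translation t at scene depth z,
    # for a focal length f expressed in pixels (all values hypothetical).
    def translation_blur_px(f_px, t_m, z_m):
        return f_px * t_m / z_m

    # 1 mm of sway with f ~ 2300 px: a subject at 0.5 m smears ~4.6 px,
    # while one at 5 m smears only ~0.46 px.
    print(translation_blur_px(2300, 0.001, 0.5))  # ~4.6
    print(translation_blur_px(2300, 0.001, 5.0))  # ~0.46

    Rotational shake, by contrast, is nearly depth-independent, which is presumably why gyro-style sensors still help for the common hand-shake case.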

  • by melted ( 227442 ) on Saturday July 31, 2010 @07:28PM (#33098608) Homepage

    Now they just need to attach this to Ballmer's head to deblur the company vision a little.

    • Now they just need to attach this to Ballmer's head to deblur the company vision a little.

      This is the first time in five years I've seen a +5 Ballmer joke that did not contain the word 'chair'.

  • Information theory (Score:3, Interesting)

    by Vapula ( 14703 ) on Saturday July 31, 2010 @07:36PM (#33098646)

    Information theory tells us that once some info has been lost, it can't be recovered. If the picture has been somehow "damaged" by some motion blur, the original picture can't be reconstructed.

    On the image, we'll have much more than the motion blur from the camera's movement:
    - noise from the sensor electronics
    - blur from target movement
    - distortion from lens defects (mostly for low-end cameras)
    - distortion/blur from bad focus (autofocus is not perfect) ...

    The operation that reduces the camera's motion blur will probably increase the effect of all the other defects. You reduce one kind of image degradation and increase the impact of the others.

    • by beej ( 82035 ) on Sunday August 01, 2010 @12:32AM (#33099590) Homepage Journal

      But they are adding information to the system with the additional hardware attachment, with all the gyroscopes and so on. This information can be used to improve the photo, correcting some of the damage. So information wasn't "lost"; it was just reacquired from a different source, as it were.

      It looks like camera shake blur would be reduced, but target motion blur would remain intact.

      Of course, if you do a 90-second exposure of the sun, it's likely going to be all-white no matter how much shake-correction occurs. But this solution wasn't meant to fix that problem.
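
      For a rough picture of how that reacquired information might be used, here is a sketch that integrates gyro angular-rate samples recorded during the exposure into a blur kernel. The small-angle projection, the sampling scheme, and every name here are assumptions for illustration; the paper's actual model is surely more careful.

      import numpy as np

      def kernel_from_gyro(rates, dt, focal_px, size=31):
          # rates: (N, 2) angular velocities (rad/s) about the camera's
          # x/y axes, sampled every dt seconds during the exposure.
          # Small-angle model: pixel shift ~ focal length (px) * angle.
          angles = np.cumsum(rates * dt, axis=0)
          path = focal_px * angles
          kernel = np.zeros((size, size))
          c = size // 2
          for dx, dy in path:
              ix, iy = int(round(c + dx)), int(round(c + dy))
              if 0 <= ix < size and 0 <= iy < size:
                  kernel[iy, ix] += 1.0   # time spent at this offset
          return kernel / kernel.sum()    # conserve total light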

      • by yyxx ( 1812612 )

        Sorry, your analysis is wrong. The information added by the gyroscope is tiny compared to the information that was "lost" (and it wasn't really "lost" in the sense of the GP).

    • by yyxx ( 1812612 )

      Information theory tells us that once some info has been lost, it can't be recovered. If the picture has been somehow "damaged" by some motion blur, the original picture can't be reconstructed.

      You're making a lot of implicit assumptions. If you know ahead of time that an image is a black-and-white image of a square, you can recover it quite well even in the presence of lots of noise and motion blur. You lose a lot of information about the individual pixel values, but you can reconstruct them with prior kno

  • Here's a dumb question...

    If you just need some shaking data to unblur very nicely, why can't one just (offline, with an hour or two to crank on it) figure out what the motion was by unblurring as hypothesis testing, perhaps on a small section of the picture? Then you unblur the whole thing on the most likely candidates.

    • If you just need some shaking data to unblur very nicely, why can't one just (offline, with an hour or two to crank on it) figure out what the motion was by unblurring as hypothesis testing, perhaps on a small section of the picture? Then you unblur the whole thing on the most likely candidates.

      There may not be a reason. It could simply be they needed the actual data first to perfect the technique before trying an algorithm like that.

    • Re: (Score:3, Informative)

      by ceoyoyo ( 59147 )

      You've described blind deconvolution. It does work, but guided deconvolution, a version of which they're doing here, usually works better because you're providing more information. The search space is very large and you have to make assumptions anyway (just how does the computer assess the "sharpness" of an image?) so anything you can do to narrow it down usually improves your results.
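
      For the curious, a toy version of that hypothesis search might look like the sketch below. The gradient-energy "sharpness" score and the horizontal-line candidate set are assumptions for illustration; real blind deconvolution needs much stronger priors, partly because naive sharpness measures happily reward ringing artifacts.

      import numpy as np

      def wiener_deblur(blurred, kernel, reg=1e-2):
          # Regularized inverse filter (same idea as the sketch further up).
          K = np.fft.fft2(kernel, s=blurred.shape)
          B = np.fft.fft2(blurred)
          return np.real(np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + reg)))

      def sharpness(img):
          # Mean squared gradient magnitude: one crude answer to "how
          # does the computer assess sharpness?"
          gy, gx = np.gradient(img)
          return float(np.mean(gx ** 2 + gy ** 2))

      def guess_kernel(blurred, max_len=15):
          # Hypothesis test over horizontal straight-line blurs only; a
          # real search would also cover angle, curvature, and speed.
          best_score, best_k = -np.inf, None
          for n in range(1, max_len + 1, 2):
              k = np.full((1, n), 1.0 / n)
              score = sharpness(wiener_deblur(blurred, k))
              if score > best_score:
                  best_score, best_k = score, k
          return best_k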

  • At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors -- the same sensors currently added to the iPhone 4. No more blurry low light photos!

    Uh, what? No more blurry low light photos... if you can get your Apple phone to work with Microsoft technology!
  • Blind deconvolution and computational photography have been around for a long time. They are being used, for example, to enhance astronomical images.

    Microsoft is making an incremental improvement to this field. That's nice, but why is it worth reporting any more than any of the other papers in this field?

  • What they have actually used is the fact that motion blur is not normal blur, since normal blur would most definitely result in information loss. Apparently motion blur can be counteracted with the extra "motion" information. Now, all I wonder is how hard it would be to brute-force the information that the extra camera sensors record. I bet that in many cases the direction won't have time to change during the short exposure, which could limit the number of directions to one. Now all you need is
