Microsoft Tech Can Deblur Images Automatically 204

An anonymous reader writes "At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors — the same sensors currently added to the iPhone 4. No more blurry low light photos!"
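For context, the usual shape of such a system is non-blind deconvolution: the inertial sensors supply an estimate of the blur kernel (point spread function, PSF), and a classic algorithm such as Richardson-Lucy then inverts the blur. A minimal sketch in Python/NumPy, assuming the PSF has already been recovered from the sensor data (the summary doesn't say how Microsoft actually performs the inversion, so this is illustrative only):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Non-blind Richardson-Lucy deconvolution.

    blurred: 2-D float array, the observed blurry image.
    psf:     2-D float array, blur kernel estimated elsewhere
             (here, hypothetically, from gyro/accelerometer traces).
    """
    psf = psf / psf.sum()                    # kernel must integrate to 1
    psf_flip = psf[::-1, ::-1]               # mirrored kernel ("transpose" step)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)   # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```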

Comments Filter:
  • by Manip ( 656104 ) on Saturday July 31, 2010 @06:02PM (#33097836)
    This is like one of those "Why didn't I think of that?" ideas that make you wonder why your camera doesn't already have it. The nice part is that it can be done very cheaply (relative to the cost of a camera) and would improve images in many cases. My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement. For example: you're standing on a boat rocking in the waves, you take a photo of the deck, and this technology compensates for the rocking - a correction that itself introduces a ton of blur.
  • Okay. (Score:3, Insightful)

    by kurokame ( 1764228 ) on Saturday July 31, 2010 @06:11PM (#33097890)

    Great, you can improve your motion blur removing algorithm by recording the motion which created the blur.

    Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues. So this is more of a supplementary data source. The before-and-after images gloss over the fact that you can already do this without the extra sensor data.

    And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.

    Yeah, this is nifty, especially for smartphone-based cameras, which may already have the built-in sensors to do this. But it isn't exactly revolutionary either. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

  • Re:lol yea sure (Score:4, Insightful)

    by binarylarry ( 1338699 ) on Saturday July 31, 2010 @06:15PM (#33097906)

    Have you used GIMP in the past 5 years?

  • by offrdbandit ( 1331649 ) on Saturday July 31, 2010 @06:20PM (#33097928)
    Sounds like a great way to land a spot on a terrorist watch list, to me...
  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Saturday July 31, 2010 @06:24PM (#33097958) Journal
    Does the camera need to be doing something else at the time, such that "sucking processing power" is some sort of issue?
  • Re:Okay. (Score:5, Insightful)

    by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Saturday July 31, 2010 @06:26PM (#33097968)

    This isn't for people who want to learn photography and take good pictures. It's for people who are shooting their friends in a bar at night to post on Your Face in a Tube and laugh about for a week before it's forgotten -- it's merely intended to make point-and-click shooting work more reliably in poor conditions, on cheap equipment, with inattentive and untrained operators.

  • Poor man's IS (Score:1, Insightful)

    by Anonymous Coward on Saturday July 31, 2010 @06:42PM (#33098048)

    While it's a nice idea, isn't this just a poor man's image stabilization? Even cheap compacts come with some form of IS these days, and high end SLR lenses certainly do.

  • Re:Okay. (Score:4, Insightful)

    by kurokame ( 1764228 ) on Saturday July 31, 2010 @06:46PM (#33098066)
    That would be a great point if it involved learning something more complicated than bracing your hand.
  • non-inertial frame (Score:3, Insightful)

    by martyb ( 196687 ) on Saturday July 31, 2010 @06:50PM (#33098088)

    My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement.

    I suspect that in the majority of cases this would improve photos. As to your query, my first thought of a problematic environment would be trying to take a photo of a friend sitting next to you--in a moving roller coaster as it hurtles around a bend. You and your friend are [mostly] stationary WRT each other, but you (and the camera) are all undergoing acceleration, which the camera dutifully attempts to remove from the photo. Certainly a comparatively rare event compared to the majority of photo-ops.

  • Re:lol yea sure (Score:3, Insightful)

    by bill_mcgonigle ( 4333 ) * on Saturday July 31, 2010 @06:56PM (#33098128) Homepage Journal

    Those people should use a better setup.

    Surprisingly enough, different people have different needs.

    The lack of auto mouse focus default really makes windows desktop suck, plus the lack of workspaces.

    I'm too much of a spaz to use focus-follows-mouse. Every time I try it I wind up bumping the mouse and typing into the wrong window. If I were a hardcore pre-trunk GIMP user I'd definitely have a session set up that way, though. Fortunately, the GIMP developers have come around to an option that works with most people's desktops.

  • Re:CSI Miami (Score:3, Insightful)

    by Dragoniz3r ( 992309 ) on Saturday July 31, 2010 @07:32PM (#33098328)
    Sorry, no. The blur in CSI Miami is not caused by motion, thus motion compensation won't help. That blur is just a sheer lack of pixels, and this algorithm does nothing to help that situation. CSI-mocking is safe.
  • by Anonymous Coward on Saturday July 31, 2010 @07:34PM (#33098344)

    Not surprisingly, the title is somewhat inaccurate. Blur can be caused by several things. One is movement of the camera while the "shutter" is open. That is the one MS has a solution for. I probably would have called it digital image stabilization or something.

    An alternative is optical (more accurately, mechanical) image stabilization.

    Pretty neat, but it won't remove every kind of blur, unfortunately...

  • Re:lol yea sure (Score:3, Insightful)

    by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Saturday July 31, 2010 @08:05PM (#33098502) Homepage Journal

        I only run into the occasional problem with GIMP. They really have come a long way.

        I switched from Photoshop to GIMP years ago. Photoshop kept crashing on my machine, and GIMP didn't. Then I found there were more things I could do with GIMP, so I stayed. Once in a while I try out Photoshop again, but I stay with GIMP. A few times, Photoshop folks have run into problems, so I tell them to just send me their file, and I fix it in GIMP and send it back. :)

        But hey, it's a holy war. Sides have been drawn, and there are zealots on both sides who trash talk each other. Don't ever try to convince someone that the other is better, because it'll just be an argument. I don't play holy wars. I try both sides, and use what works best.

        On the computer I'm using right now, it's a dual-boot Windows 7 and Slackware64 machine. Windows 7 crashed yet again, with the only solution being "format and reinstall". Bah, I had just done that a month before. Instead, I'm staying booted into Slack64, and am very happy. My other copy of Windows is sitting in a VirtualBox window, which I only bring up on the odd occasions that I need to run a Windows-only app. Will I convince a Windows user to switch to Linux? Probably not. Am I perfectly content? Yes.

  • by fyngyrz ( 762201 ) on Saturday July 31, 2010 @08:33PM (#33098630) Homepage Journal

    You know, you -- and 99% of the others bitching about the Gimp -- are utterly full of shit. I write commercial image processing / editing / animation / generation software for a living. I'm an expert -- you can read that as "terrifyingly expert" -- with Photoshop, Gimp, and a whole raft of others... and Gimp is an easy-to-use powerhouse.

    Now I will grant you exactly ONE thing, and that is, you need to sit down and learn to use it. That should take a few hours if you're familiar with something (anything) else; maybe a week hunting down tutorials, or a day hanging with a qualified mentor, if editing bitmaps is all new to you.

    If it takes you longer than that, you're either stupid or lazy.

    There's *nothing* significantly wrong with the Gimp. It has its limits, like everything does (Photoshop has some really annoying limits too), but for the vast majority of image processing and touch-up needs, it's very nice.

    Oh, mommie, my crop function is in a different menu... Some people just need a good smack in the head.

    If you really knew what you were doing, you'd have, and use, a whole suite of these programs, because for the big ones, there are areas where they excel, and that's the time to put them into play. If you can't learn to use them because the keystrokes are different, or there is a different paradigm... it isn't the program that sucks. It's you.

    Also, if you actually knew how to use them, you wouldn't be bitching about them.

  • by Estanislao Martínez ( 203477 ) on Saturday July 31, 2010 @08:40PM (#33098662) Homepage

    There are some full-size samples of the results of the technique [microsoft.com], where you can compare the original image with the result of their technique and the results of two older techniques. Their technique shows some very obvious problems:

    1. Doubling of high-contrast edges that are "ghosted" in the original because of the motion blur. In the original, presumably, the motion was something like this: start at position A, hold for a relatively large fraction of the exposure, then quickly move to position B, and hold for another large fraction of the exposure. This means that the photo records two copies of any high-contrast edges, one corresponding to A and the other to B.

      There are several examples in the link that seem to be like that. The technique doesn't seem to figure this out in all cases, and renders the two ghost lines as separate, sharp lines. The most obvious example: the edge of the front rim of the red car in the second photo. Compare with the result they got in the photo of the Coca-Cola cans, where it did figure it out for the rack, but not for the text on the cans, and where it introduced some artifact lines perpendicular to the rack. (This two-impulse kernel is easy to reproduce synthetically; see the sketch after this comment.)

    2. Severe white sharpening halos around edges.

    The more instructive comparison is between these guys' results and those of the older techniques. Clearly, they're doing a lot better than the older techniques. Still, this is very far from prime time, IMO.
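    The "hold at A, jump to B" motion described in point 1 corresponds to a blur kernel that is essentially two impulses, and the doubled-edge behavior is easy to reproduce. A toy sketch (the hold fractions and the 8-pixel offset are invented):

```python
import numpy as np
from scipy.signal import convolve2d

# Two-impulse kernel: camera holds at position A for 60% of the
# exposure, then at B (8 px to the right) for the remaining 40%.
psf = np.zeros((1, 9))
psf[0, 0], psf[0, 8] = 0.6, 0.4

# A sharp vertical edge...
edge = np.zeros((1, 32))
edge[0, 16:] = 1.0

# ...blurred by that motion records *two* copies of the edge, one per
# hold position: the "ghosting" that a deconvolver can mistake for
# two real, separate lines.
blurred = convolve2d(edge, psf, mode="same")
print(np.round(blurred, 2))
```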

  • Re:lol yea sure (Score:5, Insightful)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday July 31, 2010 @09:13PM (#33098810) Homepage
    Yeah, Microsoft does some decent research and develops some interesting technologies. It's turning things into products that they seem to have trouble with.
  • Interesting idea. (Score:2, Insightful)

    by Estanislao Martínez ( 203477 ) on Sunday August 01, 2010 @12:47AM (#33099486) Homepage

    What about combining the accelerometer data with a setting that records low-light images as a series of high-speed, underexposed images, then just using the accelerometer data to merge them?

    I think the problem with any method that doesn't change the optical path or move the sensor is that it just can't deal with parallax [wikipedia.org].

    So, your accelerometer records that between the first and the second microexposure, the camera shifted by x amount to the left. What relative shift do you apply to the frames? Well, the problem is that the correct shift is different for objects at different distances--so as soon as you have an image with large depth of field, there is no solution that corrects the blur for all objects in the frame. It might still be useful, though, because you'd be able to reduce camera blur at one distance--e.g., the camera could assume that the correct distance is the focus distance, or if you used RAW processing, you might be able to choose the correction distance at processing time.
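    To put rough numbers on the parallax problem: under a pinhole model, a sideways camera translation t shifts the image of an object at depth Z by about f*t/Z pixels, so no single global shift can register both near and far objects. A toy calculation (the focal length and depths are made-up values):

```python
# Pixel shift caused by a sideways camera translation, pinhole model:
#   shift_px ~= f_px * t / Z
f_px = 3000.0   # focal length in pixels (hypothetical phone camera)
t = 0.002       # 2 mm of camera translation between microexposures

for Z in (0.5, 2.0, 10.0):   # object depths in metres
    print(f"depth {Z:4.1f} m -> shift {f_px * t / Z:5.2f} px")

# depth  0.5 m -> shift 12.00 px
# depth  2.0 m -> shift  3.00 px
# depth 10.0 m -> shift  0.60 px
# No single per-frame shift can align all three depths at once.
```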

    Note that optical stabilization systems don't have this problem to the same degree, because they're designed to keep the same ray of light hitting the same pixel during the whole exposure.

    There are other complications, though, because each of the microexposures will have more noise and reduced dynamic range compared to the full conventional exposure. I.e., by spending less time recording the value of a pixel, a microexposure is correspondingly less able to finely discriminate its level, and more so when the pixel is dark. Combining the microexposures has the potential to average out the noise, thus gaining you more shadow detail and dynamic range; theoretically you can get the same dynamic range and noise floor as the conventional exposure, but in practice it might well be different. There's a problem, however: if the sensor noise is not random, the accelerometric shifts you apply to the microexposures as you combine them run the risk of producing noise artifacts, as the pattern of the noise might produce interference patterns when superimposed on shifted copies of itself (see moiré [wikipedia.org], or more generally, interference [wikipedia.org]). That's because, to put it briefly, camera motion moves the apparent position of the objects in the frame, but doesn't move the noise patterns.
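    Both noise claims are easy to sanity-check in simulation: random read noise in the merged stack drops roughly as the square root of the frame count, while the fixed sensor pattern gets averaged with shifted copies of itself and turns into spatially correlated streaks instead of vanishing cleanly. A toy simulation (all noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # number of microexposures (assumed)
scene = np.ones((64, 64))                 # flat grey test scene
fpn = rng.normal(0, 0.05, scene.shape)    # fixed-pattern noise: identical every frame

frames = []
for i in range(N):
    read = rng.normal(0, 0.20, scene.shape)    # random read noise: fresh each frame
    # stand-in for aligning each frame by its accelerometer-derived shift
    frames.append(np.roll(scene + fpn + read, i, axis=1))
merged = np.mean(frames, axis=0)

print("single-frame noise std:", round(float(np.hypot(0.05, 0.20)), 3))
print("merged noise std      :", round(float((merged - scene).std()), 3))
# Amplitude drops ~sqrt(N), but the fixed pattern is now averaged over
# shifted copies of itself, leaving horizontally correlated streaks:
# the kind of structured artifact described above.
```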

    Yeah, this stuff is complicated.

  • by beej ( 82035 ) on Sunday August 01, 2010 @01:32AM (#33099590) Homepage Journal

    But they are adding information to the system with the additional hardware attachment, with all the gyroscopes and so on. This information can be used to improve the photo, correcting some of the damage. So the information wasn't "lost"; it was just reacquired from a different source, as it were.

    It looks like camera shake blur would be reduced, but target motion blur would remain intact.

    Of course, if you do a 90-second exposure of the sun, it's likely going to be all-white no matter how much shake-correction occurs. But this solution wasn't meant to fix that problem.
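    Concretely, the "reacquired information" is the camera's own motion trace, which can be integrated into a blur kernel; subject motion never shows up in the sensors, which is why that blur survives. A hypothetical sketch of turning gyro samples into a PSF (small-angle approximation; all sample values are invented, and real code would interpolate along the path rather than stamping discrete points):

```python
import numpy as np

# Hypothetical gyro angular-velocity samples (rad/s) during the exposure;
# under small angles, yaw/pitch map to image-plane x/y displacement.
omega = np.array([[0.8, 0.1], [0.9, 0.0], [0.2, -0.1], [0.0, 0.0]])
dt = 0.002       # seconds between samples (assumed)
f_px = 3000.0    # focal length in pixels (assumed)

angles = np.cumsum(omega * dt, axis=0)   # integrate rate -> angle
traj = f_px * angles                     # (x, y) pixel path traced by the blur

# Rasterize the trajectory into a normalized blur kernel (PSF).
size = 65
psf = np.zeros((size, size))
for x, y in traj:
    psf[int(round(y)) + size // 2, int(round(x)) + size // 2] += 1.0
psf /= psf.sum()

# This kernel models *camera* shake only; a subject moving on its own
# leaves no trace in the gyro, so its motion blur stays in the photo.
```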

  • by leuk_he ( 194174 ) on Sunday August 01, 2010 @07:23AM (#33100394) Homepage Journal

    That should take a few hours if you're familiar with something (anything) else; maybe a week hunting down tutorials, or a day hanging with a qualified mentor, if editing bitmaps is all new to you.

    If it takes you longer than that, you're either stupid or lazy.

    I think I am stupid, then. I am an occasional user of editing software for my home needs. I do manage to do some things with (a downloaded) Photoshop, but I never stop having to relearn HOW to do things. With GIMP I often fail to do it in a reasonable timeframe. Recent versions have become better, but drawing a simple line, like in Paint, can be a nightmare.
