Microsoft Tech Can Deblur Images Automatically
An anonymous reader writes "At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors — the same sensors currently added to the iPhone 4. No more blurry low light photos!"
Enhance (Score:5, Funny)
Re:Enhance (Score:5, Funny)
Re:Enhance (Score:5, Interesting)
Re: (Score:2, Funny)
[link to tvtropes.org]
You bastard! Help! [xkcd.com]
Re:Enhance (Score:5, Funny)
Actually, that's an alpha version. The production version is demonstrated here:
http://www.youtube.com/watch?v=KUFkb0d1kbU [youtube.com]
Re: (Score:2)
Now in use by Crime Scene Investigators the world over!
http://www.youtube.com/watch?v=3uoM5kfZIQ0 [youtube.com]
Re: (Score:2, Funny)
Ugh (Score:2)
That's the first thing that came to mind. Luckily, it's a hardware attachment so we can still tell people to fuck off when they come to us with blurry photos.
Unless they have the attachment.
Re: (Score:2)
Enhance!
There's an app for that!
Re: (Score:2)
http://www.youtube.com/watch?v=v8w3fhYy6w4 [youtube.com]
Windows 7 (Score:3, Funny)
I bet it can remove the blur from the titlebar for screenshots of a Windows 7 app. Now we can all see what those developers are viewing behind that window!
Useful, but limited (Score:5, Informative)
As another commenter noted, deconvolution requires a very accurate approximation of the true convolution kernel, which may be provided by the motion sensors. However, to reconstruct the image without artifacts, the true kernel must not approach zero in the Fourier domain below the Nyquist frequency of the intended reconstruction (which is limited by the antialias filter in front of the Bayer mask). If the kernel's Fourier transform has too small a magnitude at some frequency, the reconstruction at that frequency will be essentially noise, or will be zero if adequate regularization is used. If the motion blur is more than a few pixels, the reconstructed image will generally have an abridged spectrum in the direction of blur, compared to directions in which no blur occurred. Of course, if your hand is so shaky and the exposure so long that blur occurs in all directions, the spectrum of the reconstructed image will be more uniform, though still truncated compared to that of an image taken without motion blur.
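To make the spectral-zero point concrete, here is a minimal numpy sketch (my own illustration, nothing from the actual paper) showing that a straight horizontal blur of n pixels has exact nulls at multiples of 1/n cycles per pixel; whatever the scene contained at those frequencies is simply gone:

    import numpy as np

    n = 9                    # horizontal motion blur of 9 pixels (box kernel)
    size = 252               # multiple of n, so the nulls land exactly on FFT bins
    kernel = np.zeros(size)
    kernel[:n] = 1.0 / n     # normalized box (straight-line motion) kernel

    mtf = np.abs(np.fft.rfft(kernel))   # magnitude response up to Nyquist
    freqs = np.fft.rfftfreq(size)       # in cycles per pixel

    # A box kernel's transform is a sinc: it is zero at multiples of 1/n
    # cycles/pixel, and image content at those frequencies is gone for good.
    for f, m in zip(freqs, mtf):
        if m < 1e-9:
            print(f"null at {f:.4f} cycles/pixel")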
The quality of the reconstructed image would also be limited by the effects of other convolutions in the optical pathway. For instance, if you're using a cheap superzoom lens, don't expect to get anywhere near the antialias filter's Nyquist frequency in the final image, as the lens will have buggered up the details nonlinearly across the image even before the motion blur is added. If you're using nice lenses (Canon "L" series or Pentax "*" series and suchlike), then this will not be an issue.
The method would seem to be useful in low-ish light photography of stationary objects. A sober photographer would beat a drunk photographer at this, but the technique would help both to some extent. A photographer using a tripod would do best, of course.
Re: (Score:2)
And how often do you have an object moving beneath a Windows 7 titlebar?
Perhaps you should have started your own thread.
I think you oughta look at the examples. (Score:3, Insightful)
There are some full-size samples of the results of the technique [microsoft.com], where you can compare the original image with the result of their technique, and the results of two older techniques. Their technique shows some very obvious problems:
Re:Useful, but limited (Score:4, Interesting)
Do you mention FTs just for reference, or are you implying that they are typically used in deconvolutions? In my experience, signals with any amount of noise are much better handled with iterative algorithms.
Yes. Fourier methods are unlikely to be used in practice on images. However, it's instructive to look at the process in a transform space to understand the extent to which information is irrecoverably lost in the optical path. The consequences can be explained using any suitable integral transform, but engineers are most familiar with Fourier and wavelet methods.
Suppose the Fourier transform of the "perfect" image is J and that of the exact blurring kernel is K; then the transform of the blurred image is L=JK. If the kernel is known exactly and has adequate magnitude throughout the frequency range of interest, the image can be recovered simply by inverting the kernel: J=L/K. Due to the physics of photon detection, the measured image will also contain some amount of noise N, so even in this ideal case the recovered estimate J'=(L+N)/K is corrupted by noise. Realistically, the kernel is not known exactly, and may have small magnitude at some frequencies, so its inverse is unreliable there; a related consequence is that the blurred image in those bands is primarily noise. Pseudo-inverses are also unreliable, since they are discontinuous near a spectral zero, with very high sensitivity to perturbations near that zero. So-called Wiener deconvolution attempts to circumvent this by diagonally biasing the transform of the estimated kernel before inverting, but with generally unsatisfactory results.
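The ideal-case algebra above is only a few lines of numpy. Here is a generic sketch of that Wiener-style biased inversion using the same J/K/L names (the constant lam is my placeholder for the noise-to-signal bias, not a value from any real implementation):

    import numpy as np

    def wiener_deconvolve(blurred, kernel, lam=1e-2):
        """Estimate J from L = JK + N by regularized inversion.

        blurred : 2-D array, the observed image (spatial domain)
        kernel  : 2-D array, the blur kernel, zero-padded to the same
                  shape as `blurred` and centered
        lam     : bias that keeps the denominator away from zero at the
                  kernel's spectral nulls, where a naive J = L/K blows up
        """
        L = np.fft.fft2(blurred)
        K = np.fft.fft2(np.fft.ifftshift(kernel))
        J = L * np.conj(K) / (np.abs(K) ** 2 + lam)
        return np.real(np.fft.ifft2(J))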
Information which has been destroyed cannot be recovered, by any method. Any attempt to do so would merely amplify the noise in the measured blurry image at those bands. Iterative methods (typically a variant of Richardson-Lucy for images) try to minimize the amplification of noise in various ways, all imperfect but preferable to a direct Fourier method. Most iterative methods will, left to themselves, converge on the same asymptote as the Fourier method. However, iterative deconvolution methods always employ a regularization step in each iteration, whose primary purpose is to attenuate adjustments in bands where the kernel is uncertain. They also generally use a small number of iterations, since the first iterations are less affected by noise than later ones. The end result is that information which was destroyed is not spuriously recreated from noise (in principle, at least).
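If you want to try the iterative route, scikit-image ships a Richardson-Lucy implementation. A minimal sketch, assuming a recent scikit-image (the keyword is num_iter in current releases; older ones called it iterations) and a stand-in uniform PSF rather than a measured kernel:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, img_as_float, restoration

    image = img_as_float(data.camera())
    psf = np.ones((5, 5)) / 25.0                   # stand-in uniform blur kernel
    blurred = convolve2d(image, psf, mode='same')  # simulate the degradation
    blurred += 0.01 * np.random.standard_normal(blurred.shape)
    blurred = np.clip(blurred, 0, 1)               # RL assumes nonnegative data

    # Few iterations: stopping early is itself a regularizer, since later
    # iterations mostly amplify noise, as described above.
    estimate = restoration.richardson_lucy(blurred, psf, num_iter=30)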
If you're interested, I recommend: P. Jansson (ed.), Deconvolution of Images and Spectra, 2nd ed., Academic Press, 1996. Alas, it appears to be out of print (and my copy is not for sale).
Interestingly simple concept (Score:4, Insightful)
Re: (Score:2)
This method (like all motion-compensating algorithms) can correct for some motion blur, but it will add other defects to the image. So while it might "save" a picture already captured, it's better to take a new photo.
Re: (Score:2)
The concept is in fact so simple it has already been done. This is probably just a new, enhanced(!) algorithm. I have a digital camera that's several years old which can compensate for moving or shaking the camera; this basic feature is just off by default on mine, though some more idiot-proof cameras have it on by default.
Re: (Score:2, Informative)
This isn't IS like Canon (for example) has on its lenses. This is making a note of the movement and removing it later (where later could mean just after the pic is taken), rather than using gyros or whatever to prevent the shaking from affecting the picture in the first place. Perhaps both systems could be combined, but I'm not sure it makes sense to use a record of how the camera moved when some of that movement has already been compensated for - y
Re: (Score:2)
I usually like that feature. I had to turn it off on my video camera when I was doing a shoot a couple weeks ago. My hands aren't always steady, so it's nice having it fix that automagically. I set up for a tripod shot (filming a stage). It treated the motion on the stage as camera shake, so the compensation made it look like I was unsteady. With the steady shot turned off, it came out perfectly. Well, until someone bumped my tripod, but there isn't much we can
Re: (Score:2)
I imagine that if this tech does make it into higher-end cameras (namely SLRs), the accelerometer data will in fact be saved as extra data in the RAW file. In fact due to the nature of RAW files, I think it would have to be done that way. Naturally if you're shooting jpegs (phones, P&S, foolish SLR users), then you just take what you get and that's it. It will probably just become another part of RAW "development" for higher-end shooters.
Ultimately, the concept isn't very different than the image stabil
non-inertial frame (Score:3, Insightful)
My only tiny little concern is that you might introduce artifacts into your photos, which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently. I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement.
I suspect in the majority of cases, this would improve photos. As to your query, my first thought of a problematic environment would be trying to take a photo of a friend sitting next to you--in a moving roller coaster as it hurtles around a bend. You and your friend are [mostly] stationary WRT each other, but you (and the camera) are all undergoing acceleration, which the camera dutifully attempts to remove from the photo. Certainly a comparatively rare event compared to the majority of photo-ops.
Re: (Score:2)
Re: (Score:2)
Interesting idea. (Score:2, Insightful)
I think the problem with any method that doesn't change the optical path or move the sensor is that it just can't deal with parallax [wikipedia.org].
So, your accelerometer records that between the first and the second microexposure, the camera shifted by x amount to the left. What relative shift do you apply to the frames? Well, the p
Put it to good use (Score:2)
Frankencamera. (Score:5, Interesting)
Step back! This is a job for Frankencamera [stanford.edu]. Run it on your Nokia N900 [nokia.com] today.
OTOH having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing using a plain old mobile phone.
/greger
Arduino board and a mess of wires... (Score:2, Insightful)
Re: (Score:2)
Re:Frankencamera. (Score:5, Informative)
Re: (Score:2)
OTOH having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing using a plain old mobile phone.
Cred, yes. Easy passage through an airport checkpoint, not so much.
Re: (Score:2)
Or on the other other hand you could just learn how to use your camera and not get such a shit load of blur in the first place.
That's what I always tell people. The raw light they spontaneously emit from knowing what they're doing will go and light up that dim scene, thus letting them use a faster shutter. It's really quite simple.
I thought this only worked when taking pictures over your shoulder and your buttocks were pointed at the subject?
Re: (Score:2)
"No more blurry low light photos!" (Score:4, Informative)
Social networking sites are about to get a whole lot more ugly
comparison to other methods? (Score:4, Interesting)
Re: (Score:2)
What if you just take a quarter of those pixels, four times, and then march them out, as in the high-speed-photography trick mentioned not too long ago that was later incorporated into CHDK?
Okay. (Score:3, Insightful)
Great, you can improve your motion blur removing algorithm by recording the motion which created the blur.
Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues. So this is more of a supplement. The before and after images leave out the whole "you can already do this without the extra sensor data" aspect.
And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.
Yeah, this is nifty, especially for smartphone-based cameras, which may already have the built-in sensors to do this. But it isn't exactly revolutionary, either. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.
Re: (Score:2)
Re:Okay. (Score:5, Insightful)
This isn't for people who want to learn photography and take good pictures, it's for people who are shooting their friends in a bar at night to post on Your Face in a Tube and laugh about for a week before being forgotten -- it's merely intended to make point-and-click shooting work more reliably in poor conditions on cheap equipment with inattentive and untrained operators.
Re: (Score:3, Interesting)
Agreed. I feel the same way about auto-focus.
Re: (Score:2)
Re: (Score:2)
If you are trying to focus with a low res LCD (or any lcd for that matter) I can see why you wouldn't see the need for manual focus...
Re:Okay. (Score:4, Insightful)
Also could well help pros (Score:2)
There are real limits to the human body. Anyone who says "I can hold a camera perfectly steady" is lying. We are not perfect platforms. So image stabilization can help a lot. Long-range photography, in particular of fast-moving objects like in sports, got a big boost when optical image stabilization came out. The length that you could zoom and still get a good shot increased. It wasn't that the photographers were bad; it was that they were at the human limits. The optical stabilizers enhanced that and upped t
Re: (Score:2)
There are a lot of limits, including the human factor. Photography is one, but try target shooting (like, with a gun). You'll never see someone who can put 10 shots at 100 feet into the same hole. If they get two, it's dumb luck.
For cameras, sometimes there are extreme examples. I put my Nikon D90 onto my telescope (Newtonian). I was shooting using a USB cable to my laptop, so I could use the laptop as a remote trigger, and set the camera to lift the mirror, so it wouldn't shake. When l
Re: (Score:2)
Maybe you should learn m
Re: (Score:2)
So you just hate technology used for a new application. Guess what: people can't be good at everything. That is why technology exists. It doesn't need to replace the expert, but it allows the novice to get the job done more easily.
Re: (Score:2)
Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues.
In order to have any hope of getting that motion information from the blurred image, wouldn't you have to also have the image of what the image is supposed to look like without the blur?
And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization.
Well that's the whole problem, right? Short exposure times mean dark images, long exposure times mean blur. Sure, you can set up a professional camera with a tripod and do it the right way, but what about the rest of us who just want to take the occasional picture on a cheap camera without thinking about it? That's most o
Re: (Score:2)
Actually, I kind of wonder if an IS lens might actually work AGAINST you with this algorithm. You'd have camera motion that the IS lens system is cancelling and you'd have to subtract that vector from the camera motion vector to use in this algorithm.
But I can still see this being used in professional settings. Heck, there are applications that contain databases of per-lens data, and you can correct for distortion and light-falloff along with sensor corrections.
look at http://www.dxo.com/ [dxo.com] or maybe canon D
Microsoft is impressive at SIGGRAPH (Score:5, Interesting)
This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off.
Re: (Score:2)
"This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off."
I suspect the idea is mainly to keep the people from going elsewhere.
Re: (Score:2)
Re: (Score:2)
High reject rates are not necessarily an indicator of scientific quality; they may simply mean that the conference gets a lot of crap submitted. They are often more an indication of the perception of a conference as being important than an actual indicator of quality. And for SIGGRAPH, you know what counts: nice pictures and videos. It's not really a surprise that Microsoft is good at producing those.
If you want to know about the quality of Microsoft Research, you need to look at how much they're spending
Kinda ridiculous (Score:2)
The whole premise seems kinda ridiculous. You might have some idea how the camera swung, but that only helps you if you're pointing at some 2D surface that's perpendicular to the camera.
If there is any depth to the scene, points closer will move more than points farther away. You might have an estimate of the distance from the auto-focus feature, but that's only going to help you fix up points near the focus sweet-spot. Points closer and farther away are going to be made worse, not better.
Now they just need to attach this to Ballmer's (Score:5, Funny)
Now they just need to attach this to Ballmer's head to deblur the company vision a little.
Re: (Score:2)
Now they just need to attach this to Ballmer's head to deblur the company vision a little.
This is the first time in five years I've seen a +5 Ballmer joke that did not contain the word 'chair'.
Information theory (Score:3, Interesting)
Information theory tells us that once some info has been lost, it can't be recovered. If the picture has been somehow "damaged" by some motion blur, the original picture can't be reconstructed.
On the image, we'll have much more than the motion blur from the camera's movement:
- noise from the sensor electronics
- blur from target movement
- distortion from lens defects (mostly on low-end cameras)
- distortion/blur from bad focus (autofocus is not perfect)
The operation that reduces the camera's motion blur will probably amplify all the other defects. You reduce one kind of image destruction and increase the impact of the others.
Re:Information theory (Score:5, Insightful)
But they are adding information to the system with the additional hardware attachment with all the gyroscopes and so on. This information can be used to improve the photo, correcting some of the damage. So information wasn't "lost"; it was just reacquired from a different source, as it were.
It looks like camera shake blur would be reduced, but target motion blur would remain intact.
Of course, if you do a 90-second exposure of the sun, it's likely going to be all-white no matter how much shake-correction occurs. But this solution wasn't meant to fix that problem.
Re: (Score:2)
Sorry, your analysis is wrong. The information added by the gyroscope is tiny compared to the information that was "lost" (and it wasn't really "lost" in the sense of the GP).
Re: (Score:2)
Information theory tells us that once some info has been lost, it can't be recovered. If the picture has been somehow "damaged" by some motion blur, the original picture can't be reconstructed.
You're making a lot of implicit assumptions. If you know ahead of time that an image is a black-and-white image of a square, you can recover it quite well even in the presence of lots of noise and motion blur. You lose a lot of information about the individual pixel values, but you can reconstruct them with prior kno
Here's a dumb question... (Score:2)
Here's a dumb question...
If you just need some shaking data to unblur very nicely, why can't one just (offline, with an hour or two to crank on it) figure out what the motion was by treating unblurring as hypothesis testing, perhaps on a small section of the picture? Then you unblur the whole thing with the most likely candidates.
Re: (Score:2)
If you just need some shaking data to unblur very nicely, why can't one just (offline, with an hour or two to crank on it) figure out what the motion was by treating unblurring as hypothesis testing, perhaps on a small section of the picture? Then you unblur the whole thing with the most likely candidates.
There may not be a reason. It could simply be they needed the actual data first to perfect the technique before trying an algorithm like that.
Re: (Score:3, Informative)
You've described blind deconvolution. It does work, but guided deconvolution, a version of which they're doing here, usually works better because you're providing more information. The search space is very large and you have to make assumptions anyway (just how does the computer assess the "sharpness" of an image?) so anything you can do to narrow it down usually improves your results.
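As a toy version of that search (entirely my own sketch, with a deliberately crude sharpness score, not anyone's published method): enumerate candidate straight-line motion kernels, deconvolve a small crop with each, and keep the kernel whose output scores sharpest. Note that naively maximizing sharpness rewards amplified noise, which is exactly why real blind deconvolution needs stronger assumptions:

    import numpy as np

    def motion_psf(length, angle_deg, size=15):
        """Rasterize a normalized straight-line motion kernel."""
        psf = np.zeros((size, size))
        c = size // 2
        theta = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, 4 * length):
            row = int(round(c + t * np.sin(theta)))
            col = int(round(c + t * np.cos(theta)))
            if 0 <= row < size and 0 <= col < size:
                psf[row, col] = 1.0
        return psf / psf.sum()

    def sharpness(img):
        """Crude score: variance of the gradient magnitude."""
        gy, gx = np.gradient(img)
        return np.var(np.hypot(gx, gy))

    def best_kernel(crop, deconvolve):
        """Grid-search blur lengths and angles on a small crop.
        `deconvolve(crop, psf)` can be any deconvolver, e.g. a Wiener
        filter; the winning (length, angle) pair is then used to
        deblur the full image."""
        candidates = [(l, a) for l in range(3, 20, 2)
                             for a in range(0, 180, 15)]
        scored = [(sharpness(deconvolve(crop, motion_psf(l, a))), l, a)
                  for l, a in candidates]
        return max(scored)    # (score, length, angle) of the best hypothesis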
Unholy alliance? (Score:2)
Uh, what? No more blurry low light photos... if you can get your Apple phone to work with Microsoft technology!
why this focus on Microsoft and Apple? (Score:2)
Blind deconvolution and computational photography have been around for a long time. They are being used, for example, to enhance astronomical images.
Microsoft is making an incremental improvement to this field. That's nice, but why is it worth reporting any more than any of the other papers on this field?
Brute Force? (Score:2)
Re:lol yea sure (Score:5, Informative)
Microsoft Research puts out a lot of really interesting and successful research. They aren't the people programming the OS or office applications.
Re: (Score:2)
Microsoft Research puts out a lot of really interesting and successful research. They aren't the people programming the OS or office applications.
Yes, but I just took two of the images, for research purposes, and applied a simple sharpening mask to them (at two different levels), and the results seem pretty comparable. If I actually spent more than 2 mouse clicks trying to properly sharpen them, I betcha the results would be even better, and not require additional hardware. As a matter of fact, the results they get can easily be duplicated with IN CAMERA filters, saving a boatload of dev costs and money.
These (SHARP 1 and SHARP 2)
Re: (Score:2)
To be honest, the MS UNBLUR images are clearly better than SHARP 1 and SHARP 2, at least to me. Their method allows the software to use more data than any post-processing filter: data that is not preserved in the image itself. For some people, unless it adds a substantial fee to a phone or small cam, it is a benefit that cannot be replaced by a simple filter.
While their method allows it, it doesn't yet fully utilize it. There are things about both the filtered images and theirs that are not desirable. Theirs does a little better with contrasting blur (look at the bright spot on the car door, doubled in the sharpened image and the original). But then again, their method currently adds ghosting (in some cases serious ghosting) to the image. Look at the cars in the parking lot and you will notice an "aura" around them.
Both methods equally have issues. But then
Re:lol yea sure (Score:5, Insightful)
Re: (Score:2)
Actually, the product teams and research teams often work together - regularly and very deliberately. Many developers have moved between the research and product groups. There are many features in Office, Windows, Bing and other products that came right out of MS Research. In my experience, we're really good at this.
-Foredecker
Even so... (Score:5, Interesting)
Clearly (pun intended) the results have a ways to go yet. Look at the Coca-Cola image, at the 'a' on the end of "cola"... that thing is hosed by the blur, and they're unable to recover it because there's no intermediate contrasting color. Same thing for the spokes on the car rims.
This problem can't be completely solved post-picture. Only large-scale elements with nothing else around them will yield pixel-sharp solutions.
The optimum way to correct blur is to apply active or passive (e.g. tripod) stabilization to the lens prior to the shot; active technology is already pretty decent (photographers tend to measure things in stops; it's intuitive to them... when they say an active stabilizer "gives you" four stops, for instance with Canon, what they mean is that you can shoot four stops slower with the shutter and you won't get blur from camera movement.) Doesn't solve subject movement at all, but then, nothing really does other than cranking down the exposure time.
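For anyone not fluent in stops: each stop doubles the usable exposure time, so a claimed four-stop stabilizer stretches, say, a 1/125 s handheld limit to roughly 1/8 s (the numbers here are just an illustration):

    base = 1.0 / 125          # slowest shutter you can handhold unstabilized, seconds
    stops = 4                 # stabilizer benefit claimed by the manufacturer
    print(base * 2 ** stops)  # 0.128 -> roughly a 1/8 s exposure, same sharpness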
So... considering lens stabilization has been in-camera for years, and this requires more hardware, but gives you less... I'm going to go out on a limb and say it isn't of interest to camera folks. Maybe in some esoteric role... a spacecraft or something else with a tight power budget where stabilization can't be done for some reason (certainly measurement takes less power than actual stabilization)... but DSLRs and point-and-shoots... no.
Re: (Score:3, Interesting)
I'm going to go out on a limb and say it isn't of interest to camera folks. Maybe in some esoteric role... a spacecraft or something else with a tight power budget where stabilization can't be done for some reason (certainly measurement takes less power than actual stabilization)... but DSLRs and point-and-shoots... no.
Well, sort of, I disagree somewhat. For starters, take camera phones. What do they need to do this? I'm too lazy to read the paper, but seems like accelerometer data. How many phones come wit
Re: (Score:2)
> Doesn't solve subject movement at all, but then, nothing really does other than cranking down the exposure time.
I suspect that if it's possible to get very many images of the subject then you can gather enough data to rebuild what would be a more accurate image of the subject. Even if the individual images are blurry...
Re: (Score:3, Informative)
Yes. We call that technique "stacking." And it can result in profound improvements. Here is [flickr.com] a before and after of stacking; at left, one normal shot from the camera at pushed ISO 12800 (ISO 3200 with an additional 2-stop digital push, in-camera), at right, the result of combining 36 of those shots and recovering the data through the noise.
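Mechanically, stacking static frames is just averaging. A bare-bones numpy sketch (frame alignment, which real stackers must do first, is omitted here):

    import numpy as np

    def stack(frames):
        """Average N pre-aligned exposures of a static scene.

        Shot noise is independent from frame to frame, so averaging N
        frames cuts the noise standard deviation by about sqrt(N)
        while the signal stays put.
        """
        frames = np.asarray(list(frames), dtype=np.float64)
        return frames.mean(axis=0)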
Re:lol yea sure (Score:5, Funny)
Probably only half-working coming from Microsoft
It could be worse... the GIMP developers could have built it, in which case it would be a mostly working implementation of half the features of some existing software. However, nobody would realize this since only the developers would be able to comprehend the UI.
Re:lol yea sure (Score:4, Insightful)
Have you used GIMP in past 5 years?
Re: (Score:3, Informative)
Single-window mode hasn't been released yet, but it's coming. This will make it usable for folks who aren't using fvwm with focus-follows-cursor.
Re: (Score:2)
Re: (Score:3, Insightful)
Those people should use a better setup.
Surprisingly enough, different people have different needs.
The lack of a focus-follows-mouse default really makes the Windows desktop suck, plus the lack of workspaces.
I'm too much of a spaz to use focus-follows-mouse. Every time I try it I wind up bumping the mouse and typing into the wrong window. If I were a hardcore pre-trunk GIMP user I'd definitely have a session set up that way, though. Fortunately, the GIMP developers have come around to an option that works with mos
Re: (Score:2)
That's because that's not how "focus follows mouse" should work. You don't move *all* the ui elements over to what the mouse is over. You move the *mouse* ui events over there, and maybe some others, depending on context. Certainly not typing, though.
The biggest, most useful one, IMO, is scroll. I can't tell you how many times I've wanted to scroll a window to view some stuff, but not have that window cover up the one for the app I'm actually working in.
It's doubly frustrating, because when I get home,
Re: (Score:2)
Probably someone should point out that on many unixy systems, the mouse pointer disappears after a few moments of not moving it, and/or typing, which also would solve the original poster's problem of wanting to move the mouse off-window to get rid of the pointer.
Re: (Score:2)
Re: (Score:2)
You don't use a mouse in a text editor.
Re: (Score:3, Insightful)
I only run into the occasional problem with GIMP. They really have come a long way.
I switched from Photoshop to GIMP years ago. Photoshop kept crashing on my machine, and GIMP didn't. Then I found there were more things I could do with GIMP, so I stayed. Once in a while I try out Photoshop again, but I stay with GIMP. A few times, Photoshop folks have run into problems, so I tell them to just send me their file, and I fix it in GIMP and send it back. :)
But he
Re: (Score:2)
I'm content with using 7 and dual-booting to Ubuntu or Fedora when needed.
Re: (Score:2)
I'm not quite sure what's happened. I do a lot with this machine, as it's my primary machine. I have Windows 7 Home Premium on my laptop. In about 6 months of owning it, it had one similar problem that the repair took care of. For some reason this one took me into checkdisk, found errors, fixed them, and rebooted. It was an endless cycle. S.M.A.R.T. doesn't report any drive errors, and Linux is on another partition on the same drive and doesn't have any problems. The machine is nice an
Bitching about gimp (Score:5, Insightful)
You know, you -- and 99% of the others bitching about the Gimp -- you're utterly full of shit. I write commercial image processing / editing / animation / generation software for a living, I'm expert - you can read that as "terrifyingly expert" - with Photoshop, Gimp and a whole raft of others... and Gimp is an easy to use powerhouse.
Now I will grant you exactly ONE thing, and that is, you need to sit down and learn to use it. That should take a few hours if you're familiar with something (anything) else; maybe a week hunting down tutorials, or a day hanging with a qualified mentor, if editing bitmaps is all new to you.
If it takes you longer than that, you're either stupid or lazy.
There's *nothing* significantly wrong with the Gimp. It has its limits, like everything does (Photoshop has some really annoying limits too), but for the vast majority of image processing and touch-up needs, it's very nice.
Oh, mommie, my crop function is in a different menu... Some people just need a good smack in the head.
If you really knew what you were doing, you'd have, and use, a whole suite of these programs, because for the big ones, there are areas where they excel, and that's the time to put them into play. If you can't learn to use them because the keystrokes are different, or there is a different paradigm... it isn't the program that sucks. It's you.
Also, if you actually knew how to use them, you wouldn't be bitching about them.
Re: (Score:3, Insightful)
That should take a few hours if you're familiar with something (anything) else; maybe a week hunting down tutorials, or a day hanging with a qualified mentor, if editing bitmaps is all new to you.
If it takes you longer than that, you're either stupid or lazy.
I think I am stupid. I am an occasional user of editing software for my home needs. I do manage to do some things with (a downloaded) Photoshop, but I am still learning HOW to do things with it. With GIMP I often fail to do it in a reasonable timeframe. Recent
Re: (Score:3, Funny)
It could be worse... the GIMP developers could have built it, in which case it would be a mostly working implementation of half the features of some existing software. However, nobody would realize this since only the developers would be able to comprehend the UI.
If you don't like the GUI, there's always the Lisp interface. If another OSS project gets named after a disability, I'm sure the gimp devs will incorporate it somehow.
Re: (Score:2)
. If another OSS project gets named after a disability, I'm sure the gimp devs will incorporate it somehow.
I guess they haven't heard about my OSS project, TARD, yet.
Re: (Score:2, Troll)
Probably only half-working coming from Microsoft, plus if you use black light in the room you can brick your phone/camera.
And then, four years later, we get an Open Source carbon-copy of it that works a little better but is much harder to use.
Tee hee giggle snort. Ignorant stereotypes about an organization are funny.
Re: (Score:3, Insightful)
Re: (Score:2)
Taking the next shot maybe?
Re: (Score:2)
Yes and no. There are limitations to how quickly and accurately the physical IS systems can work. Overall they're fantastic and well worth the premium if you're a serious shooter, but this could provide a much cheaper alternative that could be nearly as effective. Also, provided you have sufficiently accurate accelerometer data, you could reprocess the RAWs as deblurring algorithms improve for better results (check out the difference in noise reduction in the latest version of Adobe Camera RAW). This could
Re: (Score:2)
There's an old rule: Never do something in hardware that can be done in software. Just imagine that you could put this stuff easily into cellphones, which would never include image stabilization as used in DSLRs right now, because it's too bulky and too expensive. All just with a software update.
Re: (Score:2)
In Canon parlance, ultrasonic or USM has nothing to do with image stabilization, but refers to the motor function that drives autofocus.
Image stabilization is marked by a logo that says "Image Stabilization", or "IS".
Re: (Score:2)
While it's a nice idea, isn't this just a poor man's image stabilization? Even cheap compacts come with some form of IS these days, and high end SLR lenses certainly do.
I think that the key point here is the 6-DOF measurement of the camera movement. I will admit to not knowing much about IS, but I would guess that it doesn't handle all 6 DOF.
Re: (Score:3, Insightful)
Re: (Score:2)
You gotta love how they can take a single pixel, and come out with whatever they need. "If we [tap][tap][tap] zoom in on the reflection in the eye of the victim in the photo, we'll notice [tap][tap][tap] there is a mirror. In the reflection in the mirror is [tap][tap][tap] Oh, its a clear face which [tap][tap][tap] matches the DMV database in Austria for [tap][tap][tap] this bad guy!" Not bad for a shot accidentally taken from a camera phone as the victim was being murdered.