Robotics Technology

Robotic Camera Extension Takes Gigapixel Photos

schliz writes "Scientists at Carnegie Mellon University have developed a device that lets a standard digital camera take pictures with a resolution of one gigapixel (1,000 megapixels). The Gigapan is a robotic arm that takes multiple pictures of the same scene and blends them into a single image. The resulting picture can be expanded to show incredible detail."
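The capture side of this is straightforward to sketch: a motorized mount steps a telephoto lens through a pan/tilt grid, keeping enough overlap between neighboring frames for stitching software to match them. The snippet below illustrates only that geometry; it is not GigaPan's own software, and the field-of-view numbers and the grid_positions helper are invented for the example.

```python
# Sketch of the pan/tilt grid a GigaPan-style mount would step through.
# Assumptions: rectilinear frames, uniform overlap, angles in degrees.
import math

def grid_positions(pan_range_deg, tilt_range_deg, hfov_deg, vfov_deg, overlap=0.3):
    """Yield (pan, tilt) angles covering the scene with the given frame overlap."""
    pan_step = hfov_deg * (1.0 - overlap)    # angular spacing between columns
    tilt_step = vfov_deg * (1.0 - overlap)   # angular spacing between rows
    cols = math.ceil(pan_range_deg / pan_step) + 1
    rows = math.ceil(tilt_range_deg / tilt_step) + 1
    for r in range(rows):
        for c in range(cols):
            yield (c * pan_step, r * tilt_step)

# Example: a 90 x 45 degree scene through a lens whose frame covers about
# 6 x 4 degrees -- already hundreds of exposures, hence the robot.
positions = list(grid_positions(90, 45, 6.0, 4.0))
print(len(positions), "frames, first few:", positions[:3])
```

With a long lens each frame covers only a few degrees, so even a modest scene takes hundreds of exposures, which is why a robot does the pointing and software does the blending.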
  • Not so novel (Score:5, Interesting)

    by Anonymous Coward on Saturday May 17, 2008 @02:34PM (#23446944)
    Seth Teller at MIT EE was doing this 8 years ago. Check out his City Scanning Project.
  • ALE (Score:5, Interesting)

    by pipatron ( 966506 ) <pipatron@gmail.com> on Saturday May 17, 2008 @02:40PM (#23446988) Homepage
    Also check out ALE, the Anti-Lamenessing Engine, http://auricle.dyndns.org/ALE/ [dyndns.org], which does exactly the same thing, but you have to provide your own arm.
  • by grimJester ( 890090 ) on Saturday May 17, 2008 @02:47PM (#23447024)
    Is there any superresolution software good enough that I could, for example, take twenty blurry pics with my phone and merge them into a single sharp one?
  • Re:Not so novel (Score:3, Interesting)

    by GiMP ( 10923 ) on Saturday May 17, 2008 @02:58PM (#23447120)
    I believe that Steve Mann [wikipedia.org] of wearable computing fame was the first to create an algorithm for photo stitching [wearcam.org].
  • We Did It in 1990 (Score:5, Interesting)

    by Doc Ruby ( 173196 ) on Saturday May 17, 2008 @03:44PM (#23447368) Homepage Journal
    I worked for an SF-area startup in 1990 that produced and sold cameras for "digital prepress" [accessmylibrary.com] (later called "desktop publishing", and now just "publishing" ;) that had the highest resolution around, to compete with the drum scanners [wikipedia.org] that were then the expensive industry-standard equipment.

    We took a 512x512 Hitachi video sensor with a 2x2 C-M/Y-K mask repeated over it, for initial 1Kx1Kx40bit images that we derived via DSP from the intensities of the color-masked pixels. Then we physically stepped the sensor through 8x8 subpixel shifts, subsampling each pixel 64x (a toy sketch of this stepping appears after this comment). We ran the resulting 320MB raw composite files through a bank of multiple 25MFLOPS DSPs (interconnected and logic-accelerated by a fat FPGA) to produce 4Kx4Kx36bit 72MB files. In 1990 that was an awesome achievement.

    We poured dramatic engineering work into that platform, which replaced a $150K drum scanner with a $30K PC (on DOS or Win3.0, plus an optional $5K Mac with its GUI, including Photoshop 1.0). We had to deal with DSP for micropositioning the video sensor quickly (using feedback data from a laser interferometer), with new color spaces (I was part of the JPEG org that produced the image format), with custom interconnects at blazing bandwidth, with parallel multiprocessing at then-supercomputer speeds written in C on DOS, and even with the physics of light variably distorted by turbulence in the air between the camera and the scanned slides, heated by the hot lights needed for exposures fast enough to get all 64 frames and a rescan in before the sensor wiggled.

    All for a 16Mpxl camera that's now beaten by big sensors on handheld consumer devices for under $2K (in 2008, not 1990, dollars). But I can proudly say that we beat them by almost 20 years.
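Below is a toy sketch of the sub-pixel stepping described in the comment above, written from scratch for illustration. It is not the 1990 DSP pipeline: it ignores the C-M/Y-K mask, treats each sensor pixel as a point sample rather than an integrating area, and uses a small synthetic scene so it runs quickly.

```python
# Toy pixel-shift super-resolution: 8x8 sub-pixel offsets -> 64 captures
# that interleave onto a grid 8x denser in each direction.
import numpy as np

STEPS = 8      # 8x8 sub-pixel positions, i.e. 64 captures, as in the comment
SENSOR = 64    # the real sensor was 512x512; kept small here for speed

def capture(scene, dy, dx):
    """Simulate one exposure with the sensor offset by (dy, dx) fine steps."""
    return scene[dy::STEPS, dx::STEPS]

def reconstruct(captures):
    """Interleave the shifted captures back onto the fine (8x denser) grid."""
    hi = np.zeros((SENSOR * STEPS, SENSOR * STEPS), dtype=captures[0][0].dtype)
    for dy in range(STEPS):
        for dx in range(STEPS):
            hi[dy::STEPS, dx::STEPS] = captures[dy][dx]
    return hi

# Round trip on a synthetic fine-resolution scene: the 64 coarse captures
# reassemble it exactly because each one samples a distinct sub-grid.
scene = np.random.rand(SENSOR * STEPS, SENSOR * STEPS)
caps = [[capture(scene, dy, dx) for dx in range(STEPS)] for dy in range(STEPS)]
assert np.array_equal(reconstruct(caps), scene)
```

A real pipeline would also have to demosaic the color mask and account for each pixel integrating light over an area rather than a point, which is presumably where much of the DSP work went.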
  • by Anonymous Coward on Saturday May 17, 2008 @04:29PM (#23447602)
    Combine this with the tourist remover http://www.snapmania.com/info/en/trm/ (take several pictures of the same scene, and only use the bits that are stationary).

    That would make large pics without motion distortion.
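The "only use the bits which are stationary" step is essentially a per-pixel median over aligned exposures. Here is a minimal sketch of that idea; it is my own illustration, not the snapmania tool, the filenames are hypothetical, and the shots are assumed to be tripod-aligned.

```python
# Median-stack "tourist removal": anything that only appears in a few of the
# aligned frames is voted out by the per-pixel median.
import numpy as np
from PIL import Image

def remove_transients(paths):
    """Median-stack pre-aligned photos; returns a uint8 RGB array."""
    stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
                      for p in paths])
    return np.median(stack, axis=0).astype(np.uint8)

# Hypothetical filenames -- shoot from a tripod (or register the frames first)
# so the static background lines up pixel-for-pixel.
clean = remove_transients(["shot1.jpg", "shot2.jpg", "shot3.jpg",
                           "shot4.jpg", "shot5.jpg"])
Image.fromarray(clean).save("no_tourists.jpg")
```

The median works because any given pixel shows the static background in most frames; only something that lingers in the same spot for most of the shots survives it.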
  • by electrostatic ( 1185487 ) on Saturday May 17, 2008 @08:24PM (#23449214)
    Autostitch [cs.ubc.ca] "is the world's first fully automatic 2D image stitcher." The order in which you take the photos is not important, just that you cover everything and that there is plenty of overlap. You don't have to worry about keeping the camera horizontal -- it will rotate individual shots as needed. And you can ZOOM in on certain shots for more detail. I've used it to merge 154 shots into one panorama. Free.
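For anyone who wants to try the same unordered, fully automatic workflow today, OpenCV ships a high-level stitcher that behaves similarly. The sketch below is just that, not Autostitch itself, and the input directory is hypothetical.

```python
# Automatic panorama stitching with OpenCV's bundled Stitcher
# (requires the opencv-python package; "pano_shots/" is a made-up directory).
import glob
import cv2

images = [cv2.imread(p) for p in sorted(glob.glob("pano_shots/*.jpg"))]

stitcher = cv2.Stitcher_create()        # handles ordering, rotation, blending
status, panorama = stitcher.stitch(images)

if status == 0:                         # 0 == Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)  # e.g. not enough overlap
```

Like Autostitch, it wants generous overlap between neighboring shots; with too little it typically returns a failure status rather than forcing a bad seam.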
  • Re:We Did It in 1990 (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Saturday May 17, 2008 @10:09PM (#23449770) Homepage Journal
    Not really, though we did have a Lenna slide kicking around. We primarily used a Kodak test slide of color bars/wheels and greyscale gradients, plus one image of a European/American-looking blonde on a Kodak slide and one of a young Japanese-looking woman on a Fuji slide. We had different colorspaces for the US/Europe and Japan, because Fuji film had a larger green dynamic range, supposedly because Japanese people have more acute green-band vision (though I've never independently verified that).

    Once the camera was initially calibrated we'd use a test target to test how well I'd calibrated the film recorder. We printed a slide of Lenna, then scanned and reprinted it a few times through our DSP convergence algorithm, adjusting the film recorder's colorspace instead of the camera. Then we doubled a Lenna scan/print over a purely photographic repro of Lenna and scanned that subtractive image, calibrating the camera until it converged.

"Ignorance is the soil in which belief in miracles grows." -- Robert G. Ingersoll

Working...