Hardware Technology

Researchers Developing Single-Pixel Camera

Assassin bug writes "According to the BBC, researchers in the US are developing a single-pixel camera to capture high-quality images without the 'expense' of traditional digital photography. The idea behind such a device is that traditional digital photography is wasteful: most of the information taken in by the camera is thrown away in the compression process. From the article: 'The digital micromirror device, as it is known, consists of a million or more tiny mirrors each the size of a bacterium. "From that mirror array, we then focus the light through a second lens on to one single photo-detector - a single pixel." As the light passes through the device, the millions of tiny mirrors are turned on and off at random in rapid succession. Complex mathematics then interprets the signals, assembling a high-resolution image from the thousands of sequential single-pixel snapshots.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • complex mathematics? (Score:4, Informative)

    by superwiz ( 655733 ) on Thursday January 18, 2007 @05:48PM (#17671338) Journal
    Surely, you mean "complicated". Mathematics already has a use for the word "complex".
  • Dupe (Score:2, Informative)

    by rumith ( 983060 ) on Thursday January 18, 2007 @05:54PM (#17671468)
    From the mysterious past: http://science.slashdot.org/science/06/10/19/2255239.shtml [slashdot.org].
  • Still patented too (Score:4, Informative)

    by goombah99 ( 560566 ) on Thursday January 18, 2007 @05:56PM (#17671508)
    The coherent detection version of this was patented 11 years ago.
    Apparatus and method for heterodyne-generated, two-dimensional detector array using a single detector [uspto.gov]
  • by Jeff DeMaagd ( 2015 ) on Thursday January 18, 2007 @06:01PM (#17671632) Homepage Journal
    Micromirrors are actually very reliable and now even exceed the lifetime of a typical LED, surviving hundreds of thousands of hours of constant flexing. It turns out that nano-scale objects have different properties: a piece of metal at the nanoscale is likely to be a single crystal, and that usually eliminates the fatigue issue. I think this has more uses in the sciences, though.
  • The basis (Score:3, Informative)

    by Jerf ( 17166 ) on Thursday January 18, 2007 @06:19PM (#17671986) Journal
    Here's some of the basic math behind the idea:

    When you lossily compress an image, you are literally throwing away data. If you compress a 1MB image down to 100KB, which with JPG is still very good quality, you are mapping many, many, many slightly different but ultimately very similar source images all onto the same compressed image.

    Consumer cameras "waste" time starting from a full lossless image, and compressing it with JPG; the waste comes from collecting all of this data that has no bearing on the final result. (Anything that stores the .RAW of the image isn't doing this compression, it's storing the entire data set.)

    The idea of this system is that by mixing the pixels together in a certain way, we can collect less information in the first place. For what would be a 1MB picture in a standard camera, you'd start off by collecting only 100KB of information, and then reconstruct the image from your sequential single-pixel measurements.

    Two problems leap to mind:
    • I find it very, very hard to believe that "random" is the optimal approach. I would have thought there would be something much better than that for the bases, but I could be wrong. (There almost certainly is something better than "random" but it may not be better enough to justify the computational expense.)
    • JPG bases were carefully designed to match the human visual perception system and make it difficult for us to perceive the compression artifacts. The compression bases in this situation will have to be optimized for information gathering, which won't match the human eye, and that will result in somewhat inferior pictures, bit for bit. If you know what you're looking for, you can see it in their sample pictures; it's going to take a lot more bits to make that mosaic effect "go away" than it will to make JPG artifacts "go away".
    A clever PhD may be able to solve both problems in one swipe, by using a clever mirror progression that happens to map better to the JPG standard. (You can't get it perfect, though, because you can't predict in advance how many bits go to one JPG block; that's computed dynamically.)

    It works, and it's a clever algorithm, but I would definitely still question its practical usefulness over a conventional imaging system. I think the current trend of compression is temporary; the megapixel race should start to slow down (who needs 100megapixel pictures of their baby?) and then as cameras and storage continue to advance, we'll start getting uncompressed or losslessly compressed images instead. I could see this technology winning the race to be the first to produce a single camera that matches the image capturing power of the human eye, though; by manipulating the incoming light you may better be able to manage widely varying light levels.

    (Finally, bear in mind before posting criticisms of how impossible this all is that they appear to have actually built a device that does this, which trumps skepticism.)
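    The measurement scheme Jerf describes - random mirror patterns summed onto one detector, then a linear reconstruction - can be sketched as a toy simulation. This is hypothetical illustration code, not the researchers' actual pipeline; for simplicity it takes slightly more measurements than pixels and solves an ordinary linear system, where a real compressive-sensing camera takes far fewer measurements and relies on a sparsity prior (e.g. L1 minimization) to fill the gap:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "scene": an 8x8 image flattened into a 64-element vector.
    n = 64
    scene = rng.random(n)

    # Each measurement: a random on/off mirror pattern steers some subset
    # of the scene's light onto the single photodetector, which records
    # one number - the sum of the selected pixels.
    m = 80  # a few more measurements than pixels, for this noiseless demo
    patterns = rng.integers(0, 2, size=(m, n)).astype(float)
    measurements = patterns @ scene  # one scalar per mirror pattern

    # With enough linearly independent patterns, the scene is recovered by
    # solving the linear system. Real compressive sensing instead uses
    # m < n and exploits image sparsity to make the problem well-posed.
    recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)

    print(np.allclose(recovered, scene, atol=1e-6))
    ```

    The random 0/1 patterns here play the role of the DMD mirror states; the "complex mathematics" in the article is the reconstruction step, which in the noiseless, fully determined case collapses to least squares.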
  • by AK Marc ( 707885 ) on Thursday January 18, 2007 @06:42PM (#17672442)
    Is http://en.wikipedia.org/wiki/Pointillism [wikipedia.org] what you were thinking of? I love a good Seurat.
  • by treeves ( 963993 ) on Thursday January 18, 2007 @06:59PM (#17672722) Homepage Journal
    You're probably thinking of Seurat, a pointillist [wikipedia.org], who built up his painted images from lots of little dots he made with his brush. Of course, he wasn't scanning across the canvas with three or four colored brushes dotting as he went; he used some less deterministic approach.
  • Re:Reverse DLP? (Score:3, Informative)

    by Marxist Hacker 42 ( 638312 ) * <seebert42@gmail.com> on Thursday January 18, 2007 @07:23PM (#17673118) Homepage Journal
    How many bits at a time do you think a hard drive head can read?

    One per head, buffered. But unlike bits on a hard drive, subjects in real life MOVE. Just because you read a pixel on one side of the picture one nanosecond doesn't mean that pixel will be the same the next nanosecond. By using the mirrors instead of a massively parallel sensor, you're moving the serial bottleneck from the connection to the hard drive or long-term memory storage to the act of actually taking the photo. Which will, at best, cause some pretty blurry photos when taking pictures of moving subjects. Look at the website referenced in the story- you'll see what I mean in their sample photos of even still items. The lossy compression is rotten.
  • Re:Reverse DLP? (Score:3, Informative)

    by x2A ( 858210 ) on Thursday January 18, 2007 @07:48PM (#17673526)
    "Look at the website referenced in the story- you'll see what I mean in their sample photos of even still items"

    *pmsl* what way exactly do you think that photos of a STILL SCENE in any way reflect (hehe, reflect) image loss that WOULD be caused by taking photos of a moving scene?!!

    Anyway, this isn't a simple case of turn-by-turn turning on each mirror then off again, at any one sample time multiple mirrors will be reflecting to the sensor, and for each photograph taken, each mirror will have been read from multiple times, in random (enough) order. The amount of blur you get from photographing a moving scene will be proportional to the total exposure time, as it is with any type of photography.

  • Re:Reverse DLP? (Score:1, Informative)

    by Anonymous Coward on Thursday January 18, 2007 @08:50PM (#17674364)
    The main reason that telescopes use mirrors instead of lenses is that it's cheaper.

    Common glass will cause different wavelengths of light to refract at slightly different angles. This means that red light and violet light will not focus at the same spot, and you see color fringing on bright objects. It is possible to design a refractor telescope with multiple-element, exotic-glass lenses that works around this problem, but the exotic glass is spectacularly expensive - typically around $1000 per inch of aperture for small (less than 3" or 4") telescopes, and going up rapidly in price-per-inch for larger scopes.

    Also, for larger scopes, the weight of the lenses and the length of the light path start to become problems.

    Some well-known companies that make apochromatic refractors (which work around the color aberrations) include Tele Vue, Astro-Physics, and Takahashi. Compare the cost of their refractors with similar-sized reflector or catadioptric scopes (the latter use both mirrors and lenses). The reason people usually pay for the expensive refractors is that they make excellent optical systems for astrophotography. They also make great planetary scopes due to better contrast, since they lack a central obstruction. But I don't personally know many people who use them strictly for visual use.

    I don't see how any of this gives an advantage to the imaging detector described in the article.
