Researchers Developing Single-Pixel Camera

Assassin bug writes "According to the BBC, researchers in the US are developing a single-pixel camera to capture high-quality images without the 'expense' of traditional digital photography. The idea behind such a device is that traditional digital photography is wasteful. Most of the information taken in by the camera is thrown away in the compression process. From the article: 'The digital micromirror device, as it is known, consists of a million or more tiny mirrors each the size of a bacterium. "From that mirror array, we then focus the light through a second lens on to one single photo-detector - a single pixel." As the light passes through the device, the millions of tiny mirrors are turned on and off at random in rapid succession. Complex mathematics then interprets the signals, assembling a high-resolution image from the thousands of sequential single-pixel snapshots.'"
This discussion has been archived. No new comments can be posted.

  • Yes, it's a dupe. (Score:5, Insightful)

    by Anonymous Coward on Thursday January 18, 2007 @04:44PM (#17671236)
    A Single Pixel Camera [slashdot.org]
    Posted by CowboyNeal on 10-20-06 12:44 AM
    from the high-tech-pointilism dept.

    From the FAQ:

    Sometimes I see duplicate stories on Slashdot. What's up with that?

    These are just mistakes on the part of the staff. They happen. We have posted over ten thousand stories in our history. The occasional duplicate is inevitable.

    If you see a duplicate, you can mail the story's author. If the story is still quiet, we may pull it down. However, once the comments are rolling in, we often leave the story up so that the discussion can continue.

    Some people have suggested that there might be a software solution to this problem. If you think you've got one, visit the Slashcode site and submit a diff. As long as it isn't a performance hit, I'd consider using it. (Be aware however that the trick of searching for duplicate URLs isn't as helpful as you might think, since the same story can appear in multiple locations.)


    So if you really want to complain about it, consider contributing a Slashcode [slashcode.com] patch to fix it.
  • If the one-pixel camera behaves like a traditional digital camera, I will need to take 100 pixels to get 20 decent pixels that I can use.
  • by k4_pacific ( 736911 ) <k4_pacific@yahoo . c om> on Thursday January 18, 2007 @04:47PM (#17671306) Homepage Journal
    In related news, a major roofing manufacturer has announced the "single shingle" roof. It consists of a small plate that is quickly moved about above a building during a rainstorm to block each individual raindrop. This eliminates the "complexity" of asphalt shingles.
    • by Chandon Seldon ( 43083 ) on Thursday January 18, 2007 @05:10PM (#17671804) Homepage

      That would work... if shingles were really expensive and the mechanism to move the one shingle around at the necessary speed were comparatively cheap. Oh... and you knew that you never needed to block raindrops in two places at the same time.

      There are tons of ideas that work great in computerized systems that sound *really stupid* when you think of doing something that seems similar but uses other materials / technology. I mean - consider the mechanism of an ink jet printer from the perspective of a portrait artist who works with pencils...

      • by blugu64 ( 633729 )
        "There are tons of ideas that work great in computerized systems that sound *really stupid* when you think of doing something that seems similar but uses other materials / technology. I mean - consider the mechanism of an ink jet printer from the perspective of a portrait artist who works with pencils..."

        Wasn't there a painter around the turn of the century who did something similar, though? I can't remember the artist's name, sadly.
    • by Black Art ( 3335 )
      Sounds like a "Breakout" product to me.
  • by Anonymous Coward on Thursday January 18, 2007 @04:47PM (#17671308)
    Bet it'd suck to have a bad pixel with that camera, huh? :-)
  • Finally! (Score:2, Funny)

    by Anonymous Coward
    Now we can get pr0n at the level of quality in Duke Nukem! One fleshy-pink-colored pixel is enough to get most of us off...
  • by account_deleted ( 4530225 ) on Thursday January 18, 2007 @04:47PM (#17671312)
    Comment removed based on user account deletion
  • RAW format anyone? (Score:5, Interesting)

    by Anonymous Coward on Thursday January 18, 2007 @04:48PM (#17671326)
    > Most of the information taken in by the camera is thrown away in the compression process.

    Doesn't the RAW format take care of this?
    • by John Meacham ( 1112 ) on Thursday January 18, 2007 @05:19PM (#17671996) Homepage
      The problem is not getting at that extra information; like you say, we can already do that with RAW. The problem is that a lot of resources (such as CCD area) go into capturing this extra information, which is then simply discarded. By taking a random sampling of pixels, one gets exactly as much information as is needed to construct the compressed version of the image, without waste. Plus, with only a single CCD, you can make it incredibly sensitive, to the point where it can count single photons. Heck, you could probably have some fun with wavelengths. Different wavelengths get diffracted slightly differently; if you could take advantage of that to redirect photons of different wavelengths at the sensor, you could have a camera that takes _full spectrum_ pictures, not just at the single pretty but not very informative red, green, and blue lines. (Tetrachromats rejoice!) Full-spectrum sampling in a small package would be really cool; I mean, that is tricorder technology. This is very neat research.
      • by Pieroxy ( 222434 ) on Thursday January 18, 2007 @05:34PM (#17672290) Homepage
        There is always a catch, however. Let's take an example of a 1MP camera, taking a picture at 1/100th of a second. Each CCD can acquire light for a full 1/100th of a second. But each one is small and as such, not very sensitive.

        Let's say this new 1-pixel camera is set up to take a picture of 1MP at 1/100th of a second. Each one of the 1M mirrors will reflect its light on the CCD for (1/100)/1,000,000th of a second, because only one pixel (of the final image) can be recorded at a time. So yes, the new sensor will be more sensitive. And it had better be: 1,000,000 times more sensitive, to be exact (for 1MP pictures).

        • by Goaway ( 82658 )
          Incorrect. Even the summary implies that it lets a lot of different pixels shine on the sensor at any moment, and untangles them with maths afterwards. It still needs to be faster, but there's also just one sensor, and it can be big.
      • the problem is that a lot of resources (such as CCD area) go into capturing this extra information which is then simply discarded.

        What do you mean discarded? I use every pixel of my digital SLR in RAW mode, and I often wish there were more pixels. A lot more. So, where am I discarding these pixels?

      • by Thuktun ( 221615 )

        Heck, you could probably have some fun with wavelengths. different wavelengths get diffracted slightly differently, if you could take advantage of that to redirect photons of different wavelengths at the sensor. you could have a camera that takes _full spectrum_ pictures.

        The limiting factor there would be finding a material that allowed you to reflect or diffract photons all the way from infrared light to x-rays. Since we're talking about a single-pixel sensor, constructing separate sensors for the various areas of the EM spectrum would probably be more feasible.

  • complex mathematics? (Score:4, Informative)

    by superwiz ( 655733 ) on Thursday January 18, 2007 @04:48PM (#17671338) Journal
    Surely, you mean "complicated". Mathematics already has a use for the word "complex".
  • Throwing away data? (Score:2, Interesting)

    by kerohazel ( 913211 )
    Well, there's no reason a digital camera *has* to throw away any data at all. It's likely the case that all digital cameras do perform on-the-fly JPEG compression, but it's not a limitation of the hardware, so why bother reinventing the wheel if you really care about losing data that much? Just make a digital camera that saves pictures as some lossless format.

    And at any rate, how are the single-pixel cameras throwing away any *less* data than their plain digicam counterparts? Doesn't it all just depend on t
    • Scanning back? (Score:3, Interesting)

      by MoxFulder ( 159829 )
      I think this design is sort of like an ultra-fast scanning back [wikipedia.org]. A scanning back is a high-end type of digital camera sensor where the sensor has only a very small resolution, but it physically moves and takes a frame at each step. The many resulting frames are then interpolated together appropriately. This can produce EXTREMELY high-resolution images (we're talking 100s of megapixels) but it is sloooow (minutes or hours per exposure). Good for art reproductions and such.

      As I understand it, this camera
    • by x2A ( 858210 )
      Tiny pixels have a chance of being triggered by a single stray photon (or something), causing speckles on the image. You can figure out the anomalies by taking multiple readings from each pixel, and discarding any that are very different from the rest, but that can be slow. Or, you can discard based on pixels very different from their surrounding pixels, and replace with the average of surrounding pixels, and then you start to lose picture quality.

      Using lenses, each mirror can capture light from a larger ar
  • FTA: Compressive Sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal contains enough information for reconstruction and processing. We have developed algorithms and hardware to support a new theory of Compressive Imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels. Our camera architecture employs a digital micromirro
    • It's more like linear algebra... you have a series of equations based on the random-ish mirror patterns, so for a simple 2x2 grid of pixels (mirrors):

      1*x0y0 + 0*x1y0 + 1*x0y1 + 0*x1y1 = sample1
      1*x0y0 + 1*x1y0 + 0*x0y1 + 1*x1y1 = sample2
      0*x0y0 + 1*x1y0 + 0*x0y1 + 0*x1y1 = sample3
      ...

      Then you basically solve it for the pixels. So think of it as interpolating the entire image at once as a single value.
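The 2x2 example above can be checked numerically. A toy sketch in Python/NumPy, with made-up pixel values and mirror patterns (not the actual patterns a DMD would use; the real system uses random patterns and a sparsity-aware solver rather than a plain square solve):

```python
import numpy as np

# The true 2x2 "scene" we want to recover: [x0y0, x1y0, x0y1, x1y1].
pixels = np.array([0.2, 0.9, 0.5, 0.1])

# One binary mirror pattern per single-pixel sample (rows chosen here so
# the system is invertible).
masks = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1]], dtype=float)

# Each sample is the total light hitting the one detector for that pattern.
samples = masks @ pixels

# Recover all four pixels at once by solving the linear system.
recovered = np.linalg.solve(masks, samples)
print(recovered)  # ≈ [0.2, 0.9, 0.5, 0.1]
```

The compressive-sensing trick is that with a sparse image you can get away with fewer samples than pixels, at the price of a fancier solver than `np.linalg.solve`.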
  • by MuChild ( 656741 ) on Thursday January 18, 2007 @04:49PM (#17671364)
    1) Create a million bacteria-sized mirrors. 2) ???? 3) Profit!
  • by heroine ( 1220 ) on Thursday January 18, 2007 @04:50PM (#17671390) Homepage
    Always thought the single pixel idea would be more practical in a reflector telescope. Such a telescope could have a much higher dynamic range than any other telescope due to the extra money available for the pixel. The telescope would use the Earth's rotation to scan one axis and servos to scan the other axis.
    • by Cecil ( 37810 )
      That would require *extremely long* exposures. You can't really up the sensitivity of the detector no matter how much money you throw at it. Good CCDs are already at the level of being able to record single photons, and it still takes hours or even days for some exposures. So what's a several hour exposure multiplied by a million pixels? Ouch is what it is.
      • You can't really up the sensitivity of the detector, but you can change what you do with the signal it sends once you get it.

        With current pixel densities (say, about 16MP on a full-frame 35mm sensor), the signal can be boosted to give the ISO equivalent of 3200-speed film before the noise becomes objectionable. Noise is a problem in part because signals from adjacent pixels affect one another, which is magnified when the signal is boosted to ISO 3200.

        I would suppose that a single pixel would be less prone to
  • by jo7hs2 ( 884069 ) on Thursday January 18, 2007 @04:50PM (#17671394) Homepage
    Oh great, now I'll end up with a camera with a stuck or hot pixel and be totally screwed. Thanks, progress.
    • Well, you could always use the dark frame subtraction method to fix the problem.
      • Re: (Score:3, Insightful)

        by cfulmer ( 3166 )
        Actually, I don't think you could. If you had a mirror that got stuck into the 'on' position (i.e. it's pointing at the single sensor), it would partially blind the sensor whenever any other mirror was also pointing at the sensor. If that one mirror happened to be seeing pink, the entire photo would have a pinkish hue. If it happened to be seeing white, the picture would be washed out. If it happened to be seeing pitch black, well, then you're in business.
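The stuck-mirror scenario is easy to simulate with a small linear-algebra model of the camera: reconstruct while (wrongly) assuming the intended mirror patterns, and the light from one stuck-on mirror corrupts several reconstructed pixels, not just its own. All values here are invented for illustration:

```python
import numpy as np

pixels = np.array([0.2, 0.9, 0.5, 0.1])          # true 2x2 scene
masks = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1]], dtype=float)     # intended mirror patterns

# Healthy camera: reconstruct from the intended patterns.
good = np.linalg.solve(masks, masks @ pixels)

# Mirror 1 (pixel x1y0) stuck "on": it reflects into the detector during
# every pattern, but reconstruction still assumes the intended patterns.
stuck = masks.copy()
stuck[:, 1] = 1.0
bad = np.linalg.solve(masks, stuck @ pixels)

print(good)  # ≈ [0.2, 0.9, 0.5, 0.1]
print(bad)   # ≈ [0.2, 0.0, 1.4, 1.0]: the error smears across pixels
```

Which pixels pick up the cast depends on the pattern set, but the bias spreads beyond the stuck mirror, matching the parent's point about the whole photo taking on a hue.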
  • by toby ( 759 ) * on Thursday January 18, 2007 @04:51PM (#17671416) Homepage Journal
    And this story hit the UK Guardian on 9 Nov 2006. [guardian.co.uk] (via CS maven my slice of pizza [blogspot.com].)
  • by Timesprout ( 579035 ) on Thursday January 18, 2007 @04:52PM (#17671422)
    This is me skydiving
    .

    This is me swimming with dolphins
    .

    This is me at the grand canyon
    .

  • ...with only a single CCD pixel, they can spend all their resources making it exquisitely sensitive, so as to outperform normal array CCDs.

    Of course, they'd have to do that anyway, because to get a decent shutter speed they're already going to have to 'scan' the viewed area extremely quickly. It's the old tradeoff of serial versus parallel processing.

    • I'm gonna sit and wait until they perfect this, and just before it gets popular (because it's so cheap) I'm going to patent the 2-pixel camera with twice the resolution at only a slightly higher cost, and beat them at their own game!

      Mouahahahah...
      • by Jeremi ( 14640 )
        I'm gonna sit and wait until they perfect this, and just before it gets popular (because it's so cheap) I'm going to patent the 2-pixel camera with twice the resolution at only a slightly higher cost, and beat them at their own game!

        You joke, but it might be a good idea... there's no need to stick to either the one-CCD-pixel-per-camera or the one-CCD-pixel-per-image-pixel extremes. Perhaps there is a happy medium somewhere, like having 256 scanning-CCD-pixels operating in parallel to build up a (simulated) 16-

  • by ScentCone ( 795499 ) on Thursday January 18, 2007 @04:52PM (#17671434)
    Is it just me, or does the concept seem inherently more complex and fragile than a multi-pixel sensor with light cast on it?

    And how can this possibly deal with the equivalent of a range of shutter speeds in front of a standard sensor? Perhaps it's a matter of how many times the pixel is exposed to the same part of the lens' projection in repeated scans... but that just seems clunky, and that much harder/slower to re-assemble into a stored image.

    And it doesn't stop the megapixel chest thumping - it just starts up megamirror arguments, instead.
    • by Jeff DeMaagd ( 2015 ) on Thursday January 18, 2007 @05:01PM (#17671632) Homepage Journal
      Micromirrors are actually very reliable, and now even exceed the lifetime of a typical LED: hundreds of thousands of hours of constant flexing. It turns out that nano-scale objects have different properties. A piece of metal on the nanoscale is likely to be a single crystal, and that usually eliminates the fatigue issue. I think this has more uses in the sciences, though.
      • by jandrese ( 485 )
        It still sounds to me like a camera with literally millions of moving parts though. Also, the sheer physics of getting the photons to the sensor (especially in low light conditions) when each pixel only has the most absolutely tiny slice of time to collect light seems rather difficult to surmount. In low light conditions it's easy to see where this could amount to only a handful of photons per pixel. It's hard to see how you wouldn't get lots of noise in low light conditions just due to the magnified eff
    • by pla ( 258480 )
      And how can this possibly deal with the equivalent of a range of shutter speeds in front of a standard sensor?

      Not to mention, you get the previously-not-an-issue joy of temporal aliasing.

      And you thought the flicker of fluorescents annoyed you now? Wait until any exposure longer than 1/120th of a second includes both the "lit" and "unlit" version in one picture. Good luck figuring out the meaning of white and black levels on that monstrosity...



      And it doesn't stop the megapixel chest thumping - it
  • by JohnnyGTO ( 102952 ) on Thursday January 18, 2007 @04:52PM (#17671440) Homepage
    Oops, crash! Seven million years bad luck!?!
  • Urgh! (Score:2, Funny)

    by HerrEkberg ( 971000 )
    Look how many MegaMirrors my new camera has!
  • Excuse me (Score:3, Interesting)

    by markov_chain ( 202465 ) on Thursday January 18, 2007 @04:53PM (#17671460)
    Please don't move until I sequentially activate a few hundred thousand micromirrors!

    'nuff said.
  • Dupe (Score:2, Informative)

    by rumith ( 983060 )
    From the mysterious past: http://science.slashdot.org/science/06/10/19/2255239.shtml [slashdot.org].
  • It sounds very much like Sigma-Delta Modulation (http://en.wikipedia.org/wiki/Sigma-delta_modulation [wikipedia.org]). Lots of samples in time, fewer bits.
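The analogy is easy to see in code: a first-order sigma-delta modulator is only a few lines. This sketch (error-feedback form, inputs assumed in [0, 1]) shows the many-samples-few-bits idea, where a dense stream of 1-bit samples averages out to the analog value:

```python
def sigma_delta(signal):
    """First-order sigma-delta modulator (error-feedback form): turn an
    input stream of values in [0, 1] into a 1-bit stream whose running
    average tracks the input's average."""
    integrator = 0.0
    bits = []
    for x in signal:
        integrator += x          # accumulate the input
        if integrator >= 1.0:    # 1-bit quantizer with feedback
            bits.append(1)
            integrator -= 1.0
        else:
            bits.append(0)
    return bits

# A constant 30% grey: roughly 30% of the output bits come out as 1.
bits = sigma_delta([0.3] * 1000)
print(sum(bits) / len(bits))  # ≈ 0.3
```

The single-pixel camera is similar in spirit: one coarse sensor sampled many times in sequence, with math recovering the fine-grained result.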
  • by Xoltri ( 1052470 ) on Thursday January 18, 2007 @04:55PM (#17671502)
    The article says that this new camera will have to do "complex mathematics to interpret the signals" but at the same time will "do away with the need to process and compress each image". So which is it? I just don't see how this will save anything if you have 1 pixel doing something 5 million times or 5 million pixels doing something one time.
  • So essentially: we go from an array of pixels on the photodetector, from which the collected data is filtered and redundant information is deleted during compression into a JPEG, to a new "less wasteful" method that uses an array of tiny mirrors that must turn on and off and focus the reflected light efficiently onto a single photodetector, which then just filters the information using some complex maths? I am not quite sure how this is better, or is it just different?
  • best of both worlds (Score:2, Interesting)

    by cpearson ( 809811 )
    Why couldn't this idea be combined with current technology? Millions of mirrors AND thousands, if not millions, of photo detectors would allow faster exposure times without as much waste as current CCD digital cameras.

    Windows Vista Help Forum [vistahelpforum.com]
    • by jo7hs2 ( 884069 )
      Not everyone wants faster exposure times. Default faster exposure times would ruin many shots. Forget motion blur, water blur, etc... Everything would be a perfect freeze-frame. Of course, the consheepers would get razor sharp pictures of their sticky, chocolaty offspring.
  • We'd still be using a pinhole camera... lenses: wasteful; shutters: just extra parts; zoom: why would you need it...

    -S
  • Doh! So what if data is lost in compression? That's why you shoot in RAW format, dumbass.

    With a 4GB CF card and average RAW image size of about 20MB I don't see any need for JPEG if you have the time to work on the RAW files.
    • by Bandman ( 86149 )
      I'm torn about this.

      There are some shots that I'd like to get that just aren't worth the post-processing that RAW entails, and they take away from the "good" shots I'm interested in. That being said, I agree with you: RAW is the way to go for any serious photography that doesn't involve film.
      • by sahonen ( 680948 )
        My camera lets you choose whether to shoot RAW or JPG. If you don't want to post-process, shoot JPG; if you do, shoot RAW.
  • by sharkb8 ( 723587 ) on Thursday January 18, 2007 @04:58PM (#17671574)
    You can have a million little moving parts in your camera!

    The microelectromechanical fabrication techniques used to make the DLP scanning mirrors are taken from tech used to etch transistors. Instead of a circuit being etched, a movable mirror is etched into silicon or other substrates. And you end up with a bunch of little tiny mirrors moving around on a portable device. Moving parts tend to wear out more rapidly than solid-state parts, and are more easily broken. I'd be interested to see how durable this tech is. DLP doesn't have this issue because no one carries a DLP projector or TV around.
  • by Solandri ( 704621 ) on Thursday January 18, 2007 @04:59PM (#17671604)
    Low light sensitivity. Digital cameras gain light sensitivity by acting as light buckets. Moreso for CMOS sensors than CCD, but the important thing is that all those sensor pixels are collecting light for their individual pixel simultaneously - in parallel. With a single pixel sensor, this light collection would have to happen in series to achieve the same light sensitivity. If your shutter speed in low light is 1/25 sec with a 5MP traditional digital camera, in order for a single-pixel camera to take the same picture it would need an exposure time of (1/25 sec/pixel)*(5M pixels)*(10% assumed algorithmic efficiency) = 20,000 sec = 5 hours 33 min 20 sec.

    Of course since you're doing all this with mirrors, you could set up a megapixel array and have different mirrors shine at different pixels simultaneously (just like a DLP). But that seems to defeat the purpose of the whole rig.
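The arithmetic above is easy to sanity-check; all of the inputs are the commenter's assumptions (shutter speed, pixel count, 10% algorithmic efficiency), not measured values:

```python
# Back-of-envelope check of the serial-exposure penalty described above.
per_pixel_exposure = 1 / 25     # seconds a parallel sensor integrates per pixel
pixels = 5_000_000              # 5 megapixels
efficiency = 0.10               # assumed: compressive sampling needs 10% as many samples

total = per_pixel_exposure * pixels * efficiency   # total serial exposure time
hours, rem = divmod(total, 3600)
minutes, seconds = divmod(rem, 60)
print(total)                                   # ≈ 20000 seconds
print(int(hours), int(minutes), int(seconds))  # 5 33 20
```

So the stated 5 h 33 min 20 s figure checks out, given those assumptions.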

  • I have sometimes thought of nano-sized cameras like this that, instead of having a million mirrors to allow a single pixel to take a full picture, instead, only took a pixel's worth of a picture. But each device is like a grain of sand. You could sprinkle the devices where someone is known to be passing through, or sprinkle them on the person, and thousands of these one-pixel devices, working in concert, could generate images.

    It would be like "dusting" someone with micro-bugs.
    • Right, and in the future, where manufacturing even ordinary objects atom by atom is practical, society would have ubiquitous surveillance. EVERY object everywhere would generate ultra-high-res images of everything around it, and nano-scale processors embedded in the objects would analyze the images. Crime would be impossible: even attempting a major crime would automatically trigger an intervention by the AIs watching over everything.
  • Why don't they just scan over every mirror on the chip in a specific order? Then there would be less complicated mathematics required, right?
  • 'The digital micromirror device, as it is known, consists of a million or more tiny mirrors



    Drop this device just one time and you've got bad luck for the rest of your life... or next
    million lives if you believe in reincarnation.
    I urge all eisoptrophobics to avoid this at all costs!
  • by phliar ( 87116 )

    If you replace a million sensors with one sensor, for the same sort of exposure you'll need a million times the time. (Or, since the claim for the device is that you don't need to sample everything since you're compressing with JPEG, let's say half a million times.)

    But we want the entire frame to be captured in "the same instant" (or you'll see strange artifacts from moving objects).

    Let's say we want an exposure of about 1/100s. So, can these micro-mirrors switch at a 5x10^7/s rate (20 nanoseconds)? Sin

    • If you replace a million sensors with one sensor, for the same sort of exposure you'll need a million times the time.

      Bingo. And to "compensate", you'd have to tweak up the CCD sensitivity to the point of unacceptable noise. I'd also suspect that the power requirements for mechanically moving mirrors would be prohibitive.

      Grainy pics, blurry action shots, and a five minute battery life? heh..
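Working the same kind of numbers for the mirror-switching rate, using the half-million-sample assumption from the comment above (half the pixel count, on the theory that compressive sampling skips redundant measurements):

```python
# Required mirror-pattern rate for a 1/100 s exposure of a 1MP frame,
# assuming only half a million samples are needed.
exposure = 1 / 100        # seconds for the whole frame
samples = 500_000         # assumed sample count (half the pixel count)

dwell = exposure / samples   # time the detector integrates each pattern
rate = 1 / dwell             # patterns per second
print(dwell)  # ≈ 2e-08 s, i.e. 20 nanoseconds per pattern
print(rate)   # ≈ 5e7 patterns per second
```

That matches the 5x10^7/s (20 ns) figure in the parent comment; whether real micromirrors can switch that fast is exactly the question it raises.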
  • The basis (Score:3, Informative)

    by Jerf ( 17166 ) on Thursday January 18, 2007 @05:19PM (#17671986) Journal
    Here's some of the basic math behind the idea:

    When you lossily compress an image, you are literally throwing away data. If you compress a 1MB image down to 100 KB, which with JPG is still very good quality, you are mapping many, many, many slightly different but ultimately very similar source images all onto the same compressed image.

    Consumer cameras "waste" time starting from a full lossless image, and compressing it with JPG; the waste comes from collecting all of this data that has no bearing on the final result. (Anything that stores the .RAW of the image isn't doing this compression, it's storing the entire data set.)

    The idea of this system is that by mixing the pixels together in a certain way, we can collect less information in the first place. For what would be a 1MB picture in a standard camera, you'd start off by only collecting 100KB of information, and then computing the image from your sequential numbers.

    Two problems leap to mind:
    • I find it very, very hard to believe that "random" is the optimal approach. I would have thought there would be something much better than that for the bases, but I could be wrong. (There almost certainly is something better than "random" but it may not be better enough to justify the computational expense.)
    • JPG bases were carefully designed to match the human visual perception system and make it difficult for us to perceive the compression artifacts. The compression bases in this situation will have to be optimized for information gathering, which won't be the same as the human eye, which will result in somewhat inferior pictures, bit for bit. If you know what you're looking for, you can see it in their sample pictures; it's going to take a lot more bits to make that mosaic effect "go away" than it will to make JPG artifacts "go away".
    A clever PhD may be able to solve both problems in one swipe, by using a clever mirror progression that happens to map better to the JPG standard. (You can't get it perfect though because you can't predict in advance how many bits go to one JPG block, that's computed dynamically.)

    It works, and it's a clever algorithm, but I would definitely still question its practical usefulness over a conventional imaging system. I think the current trend of compression is temporary; the megapixel race should start to slow down (who needs 100megapixel pictures of their baby?) and then as cameras and storage continue to advance, we'll start getting uncompressed or losslessly compressed images instead. I could see this technology winning the race to be the first to produce a single camera that matches the image capturing power of the human eye, though; by manipulating the incoming light you may better be able to manage widely varying light levels.

    (Finally, bear in mind before posting criticisms of how impossible this all is that they appear to have actually built a device that does this, which trumps skepticism.)
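The many-to-one point in the first paragraph can be illustrated with coarse quantization standing in for JPG's lossy step (the sample values and step size here are invented for the example):

```python
import numpy as np

# Two slightly different 8-sample "images": the original and a copy with a
# small uniform brightness shift.
a = np.array([10.0, 52.1, 33.4, 7.9, 91.2, 44.5, 18.8, 60.0])
b = a + 0.2

step = 8.0                 # coarse quantization step: the lossy part
ca = np.round(a / step)    # "compressed" codes for each image
cb = np.round(b / step)

# Both sources collapse onto the same compressed representation.
print(np.array_equal(ca, cb))  # True
```

Once two sources quantize to the same codes, the discarded difference is unrecoverable; the camera described here simply avoids measuring that difference in the first place.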
  • Are micro-mirror arrays truly cheaper to produce than a CCD or CMOS photosensitive array?
  • The QuickTake is back.
  • Modern cameras have sophisticated algorithms to determine automatic exposure based on the composition of light and dark within the frame. To get the proper exposure with this system, it seems to mean you'd have to pre-scan the scene to get the exposure value. Or go back to old-school external light sensors rather than TTL (Through The Lens, not Transistor-Transistor Logic) metering.
  • by meanfriend ( 704312 ) on Thursday January 18, 2007 @06:37PM (#17673332)
    Single pixel images will revolutionize the efficiency of porn sharing.

    Are you into hentai? Here you go! .....................

    Barely legal teens? Coming right up .......................

    Even goatse freaks dont need to be left out:
    .

    Though I'll probably get modded down for that last one :(
  • by BenJeremy ( 181303 ) on Thursday January 18, 2007 @08:44PM (#17675020)
    Just my luck, and the warranty says I can't return it unless I find at least 4 dead pixels!!!
