Hardware

Using Commodity Hardware in Laboratories?

PhysicsTom asks: "I am a Senior Physics student whose final-year project is based upon using common, easily available technology to replace parts of the apparatus used in various departmental labs. Currently, my main area of interest is trying to integrate certain computer peripherals (such as scanners and digital cameras) into experiments at an earlier stage, so that images gained from the experiments (such as diffraction patterns, etc.) can be analysed in a program such as MathCAD straight off, rather than by the much less efficient methods we're using at the moment. The problem is that I am having trouble finding out about the way in which scanners and digital cameras work, and how this would affect their accuracy with respect to what I am aiming to do." Basically, how do the various hardware aspects of such devices affect their ability to accurately measure or scan the subject of the experiment?

"The information I am looking for includes things like: the resolution of their grey-scales, what degree of accuracy the motor steps at, how uniformly distributed the CCDs are in the arrays, and other issues that might affect accuracy. Just so that I can know how close to the 'real' picture what I get out of the scanner/camera is. If anyone can tell me all these boring facts for any suchequipment (preferably solutions currently available in the UK) then I would be very appreciative."

  • by wiredog ( 43288 ) on Tuesday October 30, 2001 @11:56AM (#2497710) Journal
    The grey-scale depth should be in the documentation. The CCD spacing can be estimated by dividing the width of the scanner bar by the number of CCD elements. As for the rest, you may have to track down the manufacturer's engineers. An alternative is to take apart a scanner, find out who manufactures the components, and contact them. Good luck.
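    A quick back-of-the-envelope version of that spacing calculation in Python; the bar width and element count below are made-up illustrative values, not the specs of any particular scanner:

        # Rough estimate of sensor element spacing (pitch) for a flatbed scanner.
        bar_width_mm = 216.0     # assumed active width of the scan bar
        num_elements = 5100      # assumed number of CCD elements across the bar

        pitch_mm = bar_width_mm / num_elements              # spacing between elements
        native_dpi = num_elements / (bar_width_mm / 25.4)   # elements per inch

        print(f"pitch ~ {pitch_mm * 1000:.1f} um, native resolution ~ {native_dpi:.0f} dpi")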
  • No guarantees (Score:5, Insightful)

    by TheSHAD0W ( 258774 ) on Tuesday October 30, 2001 @11:56AM (#2497715) Homepage
    And the answer is... You can't depend on it. You can't even depend on one camera being identical in specs to another. These devices are made for the consumer market and aren't meant for scientific use.

    This doesn't mean you can't use them, though. What it does mean is that you'll need to select something you're pretty sure can handle what you want, and then devise procedures for calibrating the devices' output.
    • You can certainly calibrate the image capture device using test images, as has been noted. You can analyze your problem to decide how accurate you need your tools to be.

      Another point to be wary of is lossy compression; many folks who use cams and scanners are used to grabbing (lossy) JPEGs. Depending on what information you are trying to capture, JPEG might add artifacts (noise/errors) to your data that you don't want, and it may add more artifacts each time you transform the data, like making a copy of a copy of a cassette tape (see the sketch below). Make sure you can operate on lossless data, and only use lossy compression formats for archival storage after you have manipulated your data (assuming you need the compression rates that lossy algorithms provide).
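      A minimal sketch of that 'copy of a copy' effect in Python (Pillow and NumPy assumed to be installed; the synthetic test image and quality setting are illustrative only):

          from io import BytesIO
          import numpy as np
          from PIL import Image

          # Synthetic test image: a smooth gradient plus a sharp grid, the kind of
          # structure that JPEG's block transform tends to smear.
          x = np.linspace(0, 255, 512)
          img = np.tile(x, (512, 1))
          img[::16, :] = 0
          img[:, ::16] = 0
          original = Image.fromarray(img.astype(np.uint8), mode="L")

          current = original
          for generation in range(1, 6):
              buf = BytesIO()
              current.save(buf, format="JPEG", quality=75)   # lossy re-encode
              buf.seek(0)
              current = Image.open(buf).convert("L")
              err = np.abs(np.asarray(current, dtype=np.int16)
                           - np.asarray(original, dtype=np.int16))
              print(f"generation {generation}: mean error {err.mean():.2f} grey levels, max {err.max()}")

      Each pass through the loop adds a little more error, which is exactly why you want to keep the working copies lossless.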

      • We recently bought a Sony DSC-505V digital camera for use in my lab (Biomedical Engineering research). In addition to making sure the camera could store in TIFF format (no compression), we also had to make sure that there was an optical zoom (i.e., in the lens), as opposed to just a digital zoom. We got both in the camera we bought, but we keep the digital zoom turned off. This particular camera is also nice because it has a (very nice) Zeiss lens.
    • You can't depend on it. You can't even depend on one camera being identical in specs to another.


      Very true. Both my father and I purchased identical Umax scanners a few years back. Same model, same interface. Yet, the color quality on mine was far superior - his pictures always scanned with a bluish tint. He was able to color correct them using the software provided, but the tolerances were obviously very different despite being the same equipment.


      I would wager that a $1000 HP scanner would have much tighter tolerances than a $50 Umax. Just a thought.

    • Re:No guarantees (Score:3, Informative)

      by darkonc ( 47285 )
      This doesn't mean you can't use them, though. What it does mean is that you'll need to select something you're pretty sure can handle what you want, and then devise procedures for calibrating the devices' output.

      Note that even NASA continually re-calibrates its probes -- and I presume that NASA doesn't use off-the-shelf components for most of their deep-space boxes. Individual units change over time -- to say nothing of the variation between units of the same model. Whatever you choose, you'll have to do calibration on an ongoing basis.

      Once you figure out how to do calibration, you can compare various units (and report back! :-). If you do a good job, you might even be able to scam yourself some free units from manufacturers who are interested in the results and in seeing them published.

  • Have you tried contacting the manufacturer of the products you are using? I don't mean as Joe Student, I mean as a representative of your university. You might be able to get more than just the specs for the equipment. You might be able to get donations.
  • by Wakko Warner ( 324 ) on Tuesday October 30, 2001 @11:57AM (#2497719) Homepage Journal
    ...I learned how to make certain things out of easily-available common household items, but I doubt you could put any of them in a thesis.

    - A.P.
    • There was a guy in my honours year who, as part of his project, built a contraption consisting of a large metal drum with a hose at the bottom through which air could be drawn; this air was passed through a 20-litre drum of water, with a standard vacuum cleaner at the other end to suck all the air through.

      The project was looking at chemical compounds in wood smoke, and so used the water to trap some of these components for study.

      He showed pictures of this thing in his seminar and described it in his thesis. It was about 15 minutes after the seminar that everybody started to realise he had built a really, really, really big bong.
  • by jmichaelg ( 148257 ) on Tuesday October 30, 2001 @11:58AM (#2497723) Journal
    If you're relying on your equipment to give you reproducible results, you're going to have to measure what it's actually capable of and not rely on published specifications.

    I stopped relying on spec sheets when I discovered they weren't very accurate. I've seen variances as high as 50% off spec.

  • Sorry (Score:4, Funny)

    by Red Weasel ( 166333 ) on Tuesday October 30, 2001 @11:58AM (#2497725) Homepage
    I'm sorry to inform you but this information is illegal to discuss as it would enable you to use this device as you see fit.

    Please continue to use the equipment as the manufacturer intended but please refrain from learning anything about it or using it for actual work.

    Your friends in peace
    USA
  • Happy Medium (Score:3, Informative)

    by Root Down ( 208740 ) on Tuesday October 30, 2001 @12:01PM (#2497746) Homepage
    Basically, how do the various hardware aspects of such devices [scanners and digital cameras] affect their ability to accurately measure or scan the subject of the experiment?

    Well, it would all depend on the time scales that you are measuring. Scanners are right out for anything in motion, since it takes a noticeable amount of time to capture the image. Digital cameras are somewhat faster, but whether you can track moving objects at higher 'shutter speeds' and resolutions depends on the quality of the camera.
    If a regular camera could capture the data you are collecting - and it seems that this is the case - then a digital camera should be fine. The important issue is that higher resolutions take longer to fix the image. Finding a happy medium between image resolution and capture time is what you're looking for. You might be able to get those specs from the manufacturer(?).
  • Everything that you are requesting is component-dependent and is most likely available from the manufacturers. This is the best (most dependable) source of the info you seek.
  • Calibration (Score:5, Insightful)

    by Ami Ganguli ( 921 ) on Tuesday October 30, 2001 @12:03PM (#2497758) Homepage

    Have you considered just calibrating the equipment? You'll probably need this anyway since, even if you can get the specs, they'll be expressed as ranges and individual components can fall anywhere within the range (as well as changing physically over the life of the equipment). This is true of your custom hardware as well.

    If you want to get an idea of how the equipment performs before you buy, just bring your test images and a laptop into the store and ask to try the demo model.

    Talk to some of the researchers in your lab. They probably already have tests as well as software that will compensate for irregularities in a CCD based on the results of the calibration.

    • by coyote-san ( 38515 ) on Tuesday October 30, 2001 @12:50PM (#2498081)
      Not only do I agree 100%, I would put it even more strongly.

      Stop thinking like a freshman who expects to find the answers in the back of the book. Even if you find this information someplace, the nature of commodity (vs. scientific) gear is that the manufacturer can change it at any time to meet market needs.

      You're a senior and need to start thinking like one. If you need calibration data, and you do, you should be thinking about how to get it for yourself using other commodity equipment. This is important today, critical with the improved hardware a decade or two from now.

      A trivial example I would have killed for 20 years ago? A 600 DPI laser printer. With it you can easily produce high-quality optical test patterns, including some basic grey scales. (A standard-sized sheet of paper will have far more 'pixels' than the CCD element in the camera - a quick sanity check of the numbers appears at the end of this comment.)

      A slightly more advanced example is what you can do with a cheap A/D card. 10-bits of accuracy doesn't sound like much, but if you're clever you can leverage it.

      Finally, I would strongly recommend that you review the "Amateur Scientist" columns in Scientific American over the past four or five years. If you can construct a simple closed feedback loop (cheap op-amp chip) and monitor it with an A/D converter ($100), you can do some incredible experiments.
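      A quick sanity check of that claim in Python; the 2-megapixel CCD used for comparison is an assumed example, not a measured figure:

          # Addressable dots on a Letter-size page at 600 DPI versus a typical CCD.
          page_w_in, page_h_in = 8.5, 11.0
          printer_dpi = 600

          printer_dots = (page_w_in * printer_dpi) * (page_h_in * printer_dpi)
          ccd_pixels = 1600 * 1200     # assumed ~2-megapixel sensor

          print(f"printed page: {printer_dots / 1e6:.1f} million dots")
          print(f"camera CCD:   {ccd_pixels / 1e6:.1f} million pixels")
          print(f"ratio: ~{printer_dots / ccd_pixels:.0f}x more dots on the page")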
      • An easy way to calibrate the color and/or gray-scale response of a scanner ( or digicam ) is to go to your local high-end photo store ( one that deals in darkroom equipment ) and buy a color standard card. Kodak publishes them for use in calibrating color printing equipment and evaluating filter packs for color enlargers, and they are QUITE accurate. They also have a gray-scale reference card available. While this won't be as accurate as calibration against NIST primary standards, they don't cost as much either. These cards are NOT expensive, but the old, immutable rule is true. Cheap, fast, good ... pick two. Do NOT use a laser printer for grey scale standards. Even inexpensive scanners can be set to use interpolated scanning at high enough resolution to resolve the toner particles.

        As for x/y positional calibration, I made up a template for fret placement on a guitar fingerboard, once upon a time, by computing and plotting the fret placement in AutoCAD and printing it out on a laser printer. The finished home-built instrument played scales more accurately in tune than my commercially-built acoustic guitar did. Or, if your school has a machinist on campus, see if you can obtain a set of Jorgensen blocks and scan them. They are sized accurately to, IIRC, 0.0001 inch, or so. If you decide to use a laser-printed calibration chart, be SURE you use a grid, rather than, say, a rectangle of a certain size. This way you will be able to determine whether there are any non-linearities in the motion of the scan head.

        Accurate calibration standards just aren't THAT hard to find.

      • I agree with this completely, except that I read the original post as a question of reliability or accuracy of the commodity hardware.

        Calibrating any device is important, and I love the laser printer idea for positional calibration. I think other posters responded to the intensity aspect of this.

        One problem is that, for digital cameras, most are automatic and set F-stop and shutter speed, and do color correction and sharpening afterwards. You don't want any of this post-processing, and you need to know the shutter speed and F-stop (or their equivalents). I don't know enough about the market to say where these are available or not. I guess if you can't control this directly, then you'll have to include calibration data with every photo as part of the "scenery", as if each photo were taken with a completely different camera with different properties. I have a great book entitled "CCD Astronomy" which, while it isn't exactly on topic, covers the basics of image processing and calibration very nicely. I don't have the book with me now, so I can't tell you who the author is. I used this book as an undergrad in physics while working on a CCD camera as a research project. It was very accessible to me.

        But the main thing you're after, I think, is the accuracy of the device after it's been calibrated. I think this is based on two factors: the actual pixel size/bit depth, and the accuracy of your calibration factors. Something like pixel size/sqrt(12), added in quadrature to the calibration errors (see the sketch at the end of this comment). Estimating the accuracy of the calibration is an interesting business, and there are a variety of ways you could do it. Taking multiple calibrations and finding the statistical spread (RMS) is an easy way, but it only gives you a lower bound because there will be systematic errors in your calibration procedure. I'll leave it at that. Probably any intelligent, hand-waving approach to estimating these systematic errors is enough for a senior project. A really thorough approach could take much more time than you have, generally speaking.

        Hope this helps!
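        A minimal sketch of the quadrature estimate above; the pixel pitch and calibration spread are illustrative assumptions, not measurements:

            import math

            # Quantisation error of a single pixel: pixel_size / sqrt(12) (uniform distribution).
            pixel_size_um = 10.0            # assumed pixel pitch in micrometres
            quantisation_um = pixel_size_um / math.sqrt(12)

            # Calibration uncertainty taken as the RMS spread of repeated calibrations.
            calibration_rms_um = 4.0        # assumed spread from repeat calibrations

            # Combine independent error sources in quadrature.
            total_um = math.sqrt(quantisation_um ** 2 + calibration_rms_um ** 2)
            print(f"quantisation ~ {quantisation_um:.1f} um, total ~ {total_um:.1f} um")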
        • One problem is that, for digital cameras, most are automatic and set F-stop and shutter speed, and do color correction and sharpening afterwards. You don't want any of this post-processing, and you need to know the shutter speed and F-stop (or their equivalents).

          My brother, who is a photographer of the strictly analog variety, tells me that you can get digital backs for most modern medium and large format cameras. However they do cost $10,000 and up and must be plugged into a standard computer or laptop for storage. The advantage here is that you can set F-stops, focus, and focal plane adjustments manually. Like you, however, I don't know a lot about the market so I can't really provide much more info or links.
    • My remote sensing professor had a setup using off-the-shelf video cameras for an airborne scanning system. Three "identical" cameras with filters to record three bands.

      In the end, he discovered that there were substantial geometry differences between the cameras that made registering the three bands quite a challenge. That's the difference between a 12" format aerial photo camera and a 35mm SLR: the former costs a small fortune because a lot of effort has gone into ensuring that the film is held as flat as possible (on a plate rather than stretched between two spindles).

      Then there's colour response. Same deal. I'd guess "scientific" quality sensors would be more consistent in their response to being hit by light.

      Basically, if you can account for the effects of using more variable components (by calibration, experimental design or whatever), where's the problem? Regardless of which way you go, you will need to *know* how your instruments take their measurements so you can know how they influence your results.

      Xix.
  • Checksumming a binary image may not be possible, although this works for other forms of expected output (like from printer drivers). But there must be some loss-analysis tools for images which could help you get a feel for the accuracy and deviation of any given device.
  • You can probably get some specifications from manufacturers, but I wouldn't really rely on them (you may, for example, get 8-bit grey scale, but with a non-linear response). Why not calibrate it yourself: you have got the equipment. Also, different models may vary slightly (some manufacturers will change components without telling you). Pay a bit more for good-quality equipment, and bear in mind that, for example, the quality of a CCD camera depends a lot on the lens you use, and it may be better to buy greyscale equipment rather than use colour equipment in grey modes.

    Also, for image capture, avoid anything that adds software artefacts (especially compression). FireWire uncompressed cameras (we get ours from www.unibrain.gr, very good) are good for high frame rate, high resolution capture, with good Linux support.
    • If you plan to use a consumer-grade digital camera and calibrate it with known test patterns, you will have to be a little careful: (1) Many cameras output in jpeg or other compressed formats. All lossy compression algorithms have artifacts, which are complex and counter-intuitive. All pixels interact with their neighbors to some extent. For example, two green dots far apart might come through correctly, but if they are too close together the color might change. (2) The better cameras have electronic pixel shifting to give a higher apparent resolution. I'm sure this is a non-linear process which has been deliberately tweaked to appeal to the eye. (3) The lenses will have non-linear aberrations which might need measurement.
  • Seems to me... (Score:2, Insightful)

    by sysadmn ( 29788 )
    that the first lab, or first part of each lab, should be spent analyzing errors introduced by the equipment! That would be no different than running a control sample through a chromatograph before running unknowns, or verifying a signal generator's output before testing a circuit.
    It would be a good lesson in the real world - like the old aphorism
    In theory, there is no difference between theory and practice. In practice, there is.

    • It seems to me that this is what I remember getting a physics degree was all about ;-)

      What's more important in an experiment is understanding where your errors come from. All of my undergraduate labs (nuclear and optics) were based around using whatever ridiculously ancient and decrepit pieces of equipment we had lying around, and learning all the clever little tricks we could glean from our professors about how we could get accurate results from them.

      It seems to me like there is no answer to your question unless we know exactly what it is that you're trying to measure. I accurately measured diffraction patterns in optics labs with a cheap CCD video camera and a framegrabber card. Sure, we had to program a filter to convert the frame into raw data for analysis, but I remember that just using our eyes, we were able to determine correct contrast settings.

      It also seems to me that if you're working on a senior project, what your professors are more concerned about is not your results, but how well you statistically analyze the nonlinearities that are actually there. Trying to find a more accurate measurement tool usually just means that you're going to have to use more sensitive calibration tools to determine the nonlinearities.

      Now if you could post what your experiment actually is (although it sounds like you're trying to revamp many experiments), someone here may be able to propose a solution to you that allows you to ignore the nonlinearities in a device.

      ~Loren
  • This is really weird. I found this site last night and was reading it, now this question...

    Anyway: This site on film scanners [cix.co.uk] talks specifically about film scanners, but also about the technology associated with them. I also really liked the discussion on ink jet printers (which I knew nothing about). Good luck!

  • I think it's a great idea to use cheap commercial hardware for shoestring-budget experiments. But be aware that scientific-grade measurement hardware is expensive for a reason, and you get what you pay for. For instance, I bought a cheap $200-$300 CCD camera for recording images from an optical microscope. It works fine if I just want pictures of what I'm seeing with my eyes, but as a measurement device, it's terrible. The camera is color, but uses an array of filters such that every group of four pixels has two that record green, one red and one blue. An algorithm is then used to generate resolution and color for the image that is captured. This means that knowing the CCD pixel size doesn't mean you can take measurements in your image by simply counting pixels. You have to calibrate the system.

    The same goes for your scanner. There are a ton of problems you will run across when you try it, so just make sure that you compare the results you get with your traditional measurement approach against what you get using the scanner.

    Good luck,
    JD
  • Hmm, sounds to me like you've got a very noble project on your hands: use technology to assist the scientific process. However, I think you're going to have a bit of trouble finding those kinds of deep technical specs for hardware devices that you may be re-purposing beyond their intended usage... I remember many an HP chromatograph system that had to be completely replaced (or radically upgraded) at great expense when a simple hardware hack could have provided a strap-on solution to the problem. In particular I'm recalling a heating/ventilation issue that some ingenious soul had resolved by building a simple hot-box inside the machine with a Peltier, a thermistor and some PICs to maintain an even sample temperature for some device. HP's solution was a whole new piece of kit that cost more than a car, but someone reverse-engineered it and saved some serious beer-money for the lab. HP, I'm sure, doesn't approve of these things (I'm not knocking HP; it's a corporate survival instinct). Now you may have the best intentions in the world, but I find it pretty hard to believe that the manufacturers are going to go out of their way to make this info available to you, so a little research is probably in order.

    a) See if anyone in the open-source community is working on projects utilizing this hardware. Heck, see if you can find some uber-geek who's been involved in creating linux drivers for a bunch of scanners; maybe you'll find someone who has a great storehouse of eclectic ccd-centric knowledge they would be glad to dump on you.
    b) Never underestimate the power of documentation. As I always was told, "documentation is like sex, even when it's bad, it's better than nothing..." Maybe they've listed component manufacturers in the crufty stuff they pack in the back of some of the user manuals.
    c) Love your service technician. If you're working with equipment that requires outside support, get friendly with him/her and see if you can't wheedle yourself a set of old support/repair documentation. Most of these people are wage-slaves like us and may well be interested in your little projects.

    Beyond that, keep your nose to the grindstone and good luck. Let us know how it goes.
  • by Mochatsubo ( 201289 ) on Tuesday October 30, 2001 @12:13PM (#2497839)

    I am looking for includes things like: the resolution of their grey-scales, what degree of accuracy the motor steps at, how uniformly distributed the CCDs are in the arrays


    The best thing to do, when possible, is to do the measurements yourself. That way you know exactly what *your* device is capable of doing, and not the *average* device from the manufacturer. You shouldn't rely on manufacturer's spec sheets for this type of information.

    For example, you can get a quick idea of the usable bit depth of a CCD by measuring the noise floor of the output for a null signal and comparing it to the output for a saturated signal (a small sketch of this test appears at the end of this comment). You will find that most *consumer* or *security* CCD cameras will not give you a full 8 bits. Even scientific CCDs which state that they give a full 8 bits only do so under certain conditions, with a specific type of average or weighted measurement. Don't trust the spec sheet. Measure to make sure!

    Then of course you could also use your head. How uniform are CCD arrays (spatially)? Think about how they are made. They are very uniform.

    Finally, you should talk to your final project advisor. What you are doing isn't Physics, it's Engineering. Sure engineering is part of experimental science, but shouldn't be the prime focus of a "Physics" project, IMO (was a Physics undergrad myself).
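    A minimal sketch of that dark-frame/bright-frame test, assuming the two frames are already in hand as NumPy arrays (the synthetic data at the bottom only stands in for real captures):

        import numpy as np

        def effective_bits(dark_frame, bright_frame):
            """Estimate usable bit depth as log2(dynamic range / dark noise).

            dark_frame:   image taken with the lens capped (null signal)
            bright_frame: image of a uniformly lit, nearly saturated target
            """
            dark = np.asarray(dark_frame, dtype=np.float64)
            bright = np.asarray(bright_frame, dtype=np.float64)
            noise_floor = dark.std()                   # RMS read/thermal noise
            signal_range = bright.mean() - dark.mean()
            return np.log2(signal_range / noise_floor)

        # Illustrative synthetic frames for an "8-bit" sensor with ~2 counts of noise:
        rng = np.random.default_rng(0)
        dark = rng.normal(5, 2, size=(480, 640))
        bright = rng.normal(245, 2, size=(480, 640))
        print(f"~{effective_bits(dark, bright):.1f} usable bits")   # comes out well under 8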
  • There was an article on a very similar subject in New Scientist a few weeks back.. Lemme see if I can get a URL....{time passes} Oh dear, it's in the archive and you'll need to register to see it. And registration requires a subscription to the magazine.. how very lame.

    Anyway, the upshot was that a research group was using consumer-type digital cameras to help automate surveys of rain forest flora... turns out that their estimates were VERY BADLY off, because the prime discriminator used was colour (think: "shades of green"), and the cameras they used (specific makes/models not mentioned) basically couldn't capture the range of greens required, or distorted them. Spherical aberrations from the el-cheapo lenses on the cameras just made things worse. Bottom line: years of work needs to be re-done with more expensive, calibrated equipment.

    • I'll chime in here just because I've seen *so* many computer-types foul this up. (No slam, just fact.)

      The bottom line is this: Everything depends on how well you collect that initial data. This is TRUTH: No amount of signal conditioning, DSPs, FFTs, DCTs, quantum neural framulators, or anything else can make up for crappy sensing elements.

      This is true whether you're talking about image sensors, temperature sensors, EM sensors, mechanical sensors (force, pressure, torque, etc.), or anything else. Remember that silly saying you learned in your first computer class: Garbage In, Garbage Out - It's still true, whether you like it or not!

      Sadly, I can't tell you how many times I've seen really bright people ignore this simple fact of life (wasting countless millions of dollars in the process), confident that the rules don't apply to them, and that their computer can somehow create something from nothing. (My experience tells me these people are more likely to be in academia or very large companies where "scientists" are more highly regarded than mere "engineers".)

      BTW: I know a thing or two about this because my father's company [i-s-i.com] (and no I did *not* do the web site) specializes in building high-quality mechanical sensors that provide laboratory precision in hellish environments. (Literally hellish: like downhole in oil wells.) Some customers are willing to pay for really good data, realizing that there's no alternative if you really need to know what's going on.

      Get the sensing elements right, and the rest of your job will be much easier (and cheaper, too...)
  • by tshoppa ( 513863 ) on Tuesday October 30, 2001 @12:23PM (#2497887)
    You explicitly ask about directly interfacing to scanners and digital cameras - my preferred open-source way of dealing with these peripherals is SANE [mostang.com].

    The SANE folks have gone to great efforts to get various scanner/camera devices to work in an open source environment. In some cases the manufacturer provided all the information needed to interface to the device; in other cases the interface has been found exclusively through reverse-engineering.

    I highly recommend that you look closely at the list of supported SANE devices [mostang.com] and choose a device known to work from the list. If you go into your local computer store and buy something off the shelf without looking at the SANE list, you are *very* likely to end up with a product that is completely unsupported in any useful environment.

  • Calibration is the ultimate tool of the scientist, in that it allows you to measure your measuring instruments! Generate test articles, measure them, and determine the accuracy of the instrument. Bing, bang, boom, done! This is a no-brainer; you would need to do it with any scientific instrument, regardless of its origin (whether it came from Best Buy or from Perkin-Elmer).
  • You're rediscovering the science of photogrammetry.

    Here's a potted Google search. [google.com]

    BugBear

  • Test It!! (Score:3, Redundant)

    by dragons_flight ( 515217 ) on Tuesday October 30, 2001 @12:25PM (#2497909) Homepage
    I got my degree in physics, and you're right that off-the-shelf hardware can be a great cost-cutting measure. It's honestly disturbing how many times I've seen data collection run on something like a 386 using QBASIC.

    The thing to learn though is that consumer hardware is not scientific hardware. There is rarely much quality control with regards to specs, even when they are available. If this hardware is going to be the dominant error source you probably shouldn't be using it in the first place. As tedious as it can be, it's a good idea to test the specification of ANY piece of hardware that you are adding to a research lab, whenever reasonable to do so. I still remember wasting two days of my life because the magnetometer was disturbingly off spec, and that was a serious research tool.

    How do you test scanners and cameras? Clearly by scanning and photographing known objects. If you're just scanning diffraction patterns and stuff like that, then find a couple well known, well understood such effects and use them as your benchmark. It's also possible to buy high quality gray scales and precisely known grids to use as references.

    The lesson here is: don't use cheap equipment when it will be the dominant error source (preferably use it in parts of the experiment that contribute negligibly to your overall error), and TEST all your equipment and quit relying on spec sheets for anything important. Publication retractions that read the equivalent of "Oops! There really isn't any effect here, but we were too lazy to get it right." are very funny, but won't do anything good for your career.
  • by Captain WrapAround ( 472913 ) on Tuesday October 30, 2001 @12:26PM (#2497916)
    First you can check out How Things Work for the basics.

    Second, off the shelf imaging devices are challenging to use for scientific data collection for a number of reasons. The main one being their response is usually designed to replicate the human eye rather than a true spectral response--the difference between photometry and radiometry.

    For resolution tests, go to www.edmundoptics.com and check out the various testing targets available. The cheapest mylar USAF targets are pretty good for testing spatial resolution. Remember that when you get close to the resolution limit of the CCD, aliasing due to misalignment is going to be a factor. Your resolution could be up to a factor of 2X (per axis) better than you can test for, unless you're able to align the target with the pixels.

    You should also try to figure out which CCD the device uses. Yahoo!'s Electronics Marketplace is a good place to search for components, and there is usually a link to the manufacturer's spec sheet. Some spec sheets are quite detailed and will give you plenty of information regarding sensitivity, dark current, spectral response, etc.

    Be skeptical of resolution claims. A flatbed scanner I have claims 9600 dpi, or about 2.6e-6 m resolution. In reality, it's no better than about 5e-5 m (a quick unit check appears at the end of this comment).

    Also, the picture you get out vs the "real" picture is highly dependent on the imager's software & firmware. Autoexposure and color correction functions are usually present and can play havoc with an attempt to figure out what the "real" image is. Again, test targets may help here--if you can control all the other variables in the system, you can do some calibration experiments to figure out what the imager is doing to your image.

    Well, I hope this points you in the right direction.
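    A quick unit check of those figures in Python (the 9600 dpi claim and the ~5e-5 m observation are the numbers from this comment):

        # Convert a claimed scanner resolution to metres per sample and compare it
        # with the resolution actually observed on a test target.
        claimed_dpi = 9600
        observed_m = 5e-5                 # finest detail resolved on a test target

        claimed_m = 0.0254 / claimed_dpi  # metres per sample at the claimed dpi
        print(f"claimed:  {claimed_m:.2e} m/sample")   # ~2.6e-6 m
        print(f"observed: {observed_m:.1e} m")
        print(f"the claim is optimistic by roughly {observed_m / claimed_m:.0f}x")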
  • I bet you wish someone would respond with useful information. Here is what you should do. Make a "sample" to be measured. I don't know what you are trying to measure, so making up a perfect example is difficult, but you will have accurate measurements of the "sample" because you have measured it by standard means. Scan the sample with the equipment you need to calibrate (scanner, camera, etc.). Does the data from the flatbed scanner match what you know to be true about the sample? If it does so after a few tries, it is suitable for your purposes. It probably won't. Experiment with different scanner settings and other software suites. You are probably interested in the raw data from the scan. Many scanning programs will perform touch-ups on the image for photographic purposes. This will throw your data off. It's possible that the equipment won't work for your purposes. But this procedure will help you find out.
  • by Anonymous Coward
    Astronomers use CCD cameras all the time to gather images from the sky; hence, they also have powerful software to analyze this (pixelized) data. Common tools would be DS9 (http://hea-www.harvard.edu/RD/ds9/) and IRAF (http://iraf.noao.edu/iraf-homepage.html), which work with FITS files (use convert to generate them from common formats). You could also do it simply: get a digital camera which runs under Linux (e.g. Nikon Coolpix 900), download the images with photopc and then work from there. It depends on what exactly you want to do.
  • There was nothing terribly expensive about the physics laboratory equipment I worked with.

    There are exceptions, such as specialty devices like the Michelson-Morley apparatus, lasers with particular wavelengths, oscilloscopes, frequency analyzers... but none of that is going to be replaced by a general-purpose computer.

    You may be looking for entirely different kinds of experiments which can be done using computers and digital cameras or scanners... like "take this camera and use it to measure distance, speed and direction of motion", "determine the rate at which accuracy deteriorates", "move the camera or use two cameras to calculate the distance of unknown objects, applying what was learned about the camera's accuracy and resolution to determine your confidence in the object's position" or "measure the colour response and accuracy of this scanner".

    Other fun first year exercises might be to demonstrate the effect of various binary representations of numbers on the accuracy of data... all physics students need to know that stuff.

    Forget about push-button dumps of information into Matlab or whatever. I hated when lab instructors would set everything up for you; you don't learn anything. It would be worse if I walked in and didn't even have to measure anything... just hit a button (god forbid touching the apparatus!), push the data into Matlab, follow the instructions, hope the OS doesn't crash, then hand in my results.

  • OK, just so you know, the company I work for is a solutions provider for the medical/atomic/chemistry fields, so we sell the kind of thing you desire (a scanner that's good enough for X-ray film). There are a couple of places where you can get this stuff. In the US there is a company called "Cobra Scan" (I think - it's close enough to find them), and while their scanners look pretty shitty, they will fly someone to your lab to calibrate the scanner to your acceptable margin of error. Another company called Vidar [vidar.com] sells scanners that are very nicely built. These scanners are much more pricey than most (~7k USD), but they have the accuracy you need, still use standard TWAIN drivers, and you can most likely get them to work in whatever environment you have.


    Something you should go and check out is a trade show, RSNA, which shows off medical scanners and other imaging hardware that could be useful to you.
    RSNA [rsna.org] is held in Chicago.

    -rev

  • by OpenMind(tm) ( 129095 ) on Tuesday October 30, 2001 @12:37PM (#2497988)
    • Don't use consumer low end devices where color is an important factor. Scanners in particular tend to change their color characteristics after calibrations, with changes in lamp temperature and other environmental factors.
    • Don't take pictures with low end cameras and use them for later analysis, particularly when looking for positional data. Compression artifacts stand to introduce substantial errors even on low compression settings. If you can turn off compression entirely, this may be an option. Live capture from a video stream would probably give you better images not limited by the device's intended use.
    • Keep in mind the low quality of lenses on most digital cameras and camcorders and the possibility of geometrical aberrations near the edges.
    • Get some good visual benchmarks.
    • Consider that the cost savings may not bear out the work needed to make sense of the data. Commodity products are not made for precision.
  • by dgarant ( 515961 ) on Tuesday October 30, 2001 @12:40PM (#2498010)
    Consumer imaging devices are great if what you want to record are spatial patterns of light in an image or over time. But if you need to record absolute light values, or measure differences within a still image, or compare values over time, you will need a scientific measuring device capable of maintaining a calibrated black level as well as a known response to light. Consumer scanners typically only maintain a calibration within a single scan cycle, then recalibrate for the next scan, in order to give consumers the best digitization for each individual target. Consumer photo and video CCDs let the black level 'float' over time so as to give the best overall exposure at any given instant. These design features make consumer equipment useless as measurement devices, but ideal as pattern recorders.
  • Keep in mind that good postprocessing can factor out all sorts of predictable equipment shortcomings. When the Hubble Telescope went up with a seriously flawed mirror, good software made it possible to get scientifically valid results without replacing the flawed optics. A similar approach might be useful here, if you're interested in this aspect of the problem.


    Also, keeping benchmarking data such as a color test image in the field of each of your data images could allow for per-image calibration and factor out some of the unpredictability of consumer imaging. This could easily be automated in software (a rough sketch follows).
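    A minimal sketch of that per-image normalisation, assuming you know roughly where the reference patch sits in each frame and what value it should read (both are illustrative here):

        import numpy as np

        def normalise_to_reference(image, patch, known_value):
            """Scale an image so an in-frame grey reference patch reads its known value.

            image:       2-D greyscale array
            patch:       (row_slice, col_slice) locating the reference patch
            known_value: the value the patch should have after correction
            """
            img = np.asarray(image, dtype=np.float64)
            return img * (known_value / img[patch].mean())

        # Illustrative use: a patch in the top-left corner that should read 128.
        frame = np.random.default_rng(1).uniform(90, 110, size=(480, 640))
        patch = (slice(0, 20), slice(0, 20))
        corrected = normalise_to_reference(frame, patch, 128.0)
        print(corrected[patch].mean())   # ~128 after correction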

    • When the Hubble Telescope went up with a seriously flawed mirror, good software made it possible to get scientifically valid results without replacing the flawed optics.

      Actually, good optics were used to correct flawed optics. It wasn't a software solution, but rather a corrective lens that was added to get the good results.
      • Actually, good optics were used to correct flawed optics. It wasn't a software solution, but rather a corrective lens that was added to get the good results.


        Truthfully, both were done, the corrective lens at a later date, as indicated in the mission press info. [nasa.gov]




        While the launch on the Space Shuttle Discovery more than 3 years ago was flawless, Hubble was not. Two months after HST was deployed into orbit 370 miles (595.5 km) high, Hubble produced a disquieting discovery not about space, but about itself. The curvature of its primary mirror was slightly, but significantly, incorrect. Near the edge, the mirror is too flat by an amount equal to 1/50th the width of a human hair.

        A NASA investigative board later determined that the flaw was caused by the incorrect adjustment of a testing device used in building the mirror. The device, called a "null corrector," was used to check the mirror curvature during manufacture.

        The result is a focusing defect or spherical aberration. Instead of being focused into a sharp point, light collected by the mirror is spread over a larger area in a fuzzy halo. Images of extended objects, such as stars, planets and galaxies, are blurred.

        NASA has been coping with Hubble's fuzzy vision with computer processing to sharpen images. For bright objects, this technique has yielded breathtaking detail never seen from the ground. NASA also has been concentrating on the analysis of ultraviolet light, which ground-based telescopes cannot see because of the Earth's intervening atmosphere.
  • by an_art ( 521552 ) on Tuesday October 30, 2001 @12:53PM (#2498102)
    If your goal is to reduce the cost of automating experiments that require an optical sensor, then consider the imaging equipment being used by amateur astronomers. These imagers are less expensive than the "professional grade" units, and are much more adaptable to being attached to equipment than are consumer units. Most of the amateur astronomy magazines have an assortment of ads for these units. As indicated by other folks, you'll need to develop or acquire physical calibration standards for noise, linearity, sensitivity versus exposure time, resolution, dark response, pattern sensitivity, repeatability and temperature stability, to name a few. It sounds like fun. Good Luck, Art
    • Amateur astronomy uses CCD cameras in a very controlled and scientific manner. A great reference is

      http://www.willbell.com/aip/index.htm

      Also, Edmund Optics sells standard optical test target cards to calibrate optical equipment just like the pros do.

      http://www.edmundoptics.com/IOD/Browse.cfm?catid=289&FromCatID=36

  • by JabberWokky ( 19442 ) <slashdot.com@timewarp.org> on Tuesday October 30, 2001 @12:56PM (#2498115) Homepage Journal
    Whatever you use, open source drivers are a very very good idea - the Windows drivers for many scanners (and I would guess for the digital cameras as well) process the images to "clean" them - remove artifacts like the glimmer in tight lines, etc.

    Artifacts from CCDs are bad enough - you don't need more caused by "corrective" software designed for human perception.

    --
    Evan

  • or does this guy seem way too picky about his PORN SCANS?
  • Having some experience in dealing with high resolution digitized images and scanners, I can say this: you have essentially no chance getting the specs.


    "The problem is that I am having trouble finding out about the way in which scanners and digital cameras work, and how this would affect their accuracy with respect to what I am aiming to do."


    This can be learned by studying optics. It is not very straightforward, but any discussion relating to it would be far beyond what is possible on Slashdot. Get a book and start reading.


    "The information I am looking for includes things like: the resolution of their grey-scales, what degree of accuracy the motor steps at, how uniformly distributed the CCDs are in the arrays, and other issues that might affect accuracy. Just so that I can know how close to the 'real' picture what I get out of the scanner/camera is"


    There is a LOT more that affects what "the real picture" is than these factors. Again, perhaps you need to go do some reading on optics.

  • light intensity (Score:3, Informative)

    by emg178 ( 304822 ) on Tuesday October 30, 2001 @01:04PM (#2498175)
    I worked in a lab where the intensity reading of each pixel was important. We used scientific grade equipment, so that we could set the sensitivity, offset, and correction.
    If you are interested in measuring the intensity of each pixel, read on:

    First of all, I would think that a consumer camera with automatic exposure control would automatically set the gain (or sensitivity to light). You would need to be able to turn this off.

    Secondly, the offset has to do with black noise (error due to thermal energy within the CCD). On the cameras that I used, it was around 5 intensity levels out of 256 on an 8-bit camera. There is no need to refine this on consumer equipment, so it probably doesn't get much better than that. You can buy cooled cameras to get rid of this, but you want it cheap, so this is not a great option. You could try cooling the camera with liquid nitrogen; I wondered about doing this myself. Alternatively, if you are taking images of something that doesn't change in time, you can take multiple images and average them. The black noise of the averaged image will decrease as the square root of the number of images (see the sketch at the end of this comment).

    Thirdly, the image correction -
    Most consumer equipment uses a gamma correction curve because of similarities with film and video. Look it up if you don't know about it; it is interesting, and useful for taking pleasing photographs. For scientific purposes, though, you probably want a linear response. This will give you a constant sensitivity to light changes.

    The last thing you should be concerned with is changing conditions / response over time. Some others have noted this. You will need to calibrate the device many times at different times of the day to make sure that changes over long periods are reasonable. We utilized a calibration during the experiment to reduce this problem.

    As for image formats readable by MathCAD / Matlab, etc.: that should be fairly easy once you get the device driver settled.
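    A minimal sketch of two of the points above, frame averaging to beat down the black noise and undoing a gamma curve to approximate a linear response; the gamma of 2.2 is an assumption, and the real curve would have to be measured:

        import numpy as np

        def average_frames(frames):
            """Average N frames of a static scene; uncorrelated noise drops as sqrt(N)."""
            stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
            return stack.mean(axis=0)

        def undo_gamma(image, gamma=2.2, max_value=255.0):
            """Invert a simple power-law gamma curve to approximate linear intensity."""
            img = np.asarray(image, dtype=np.float64) / max_value
            return (img ** gamma) * max_value

        # Illustrative check of the sqrt(N) behaviour with synthetic noisy frames:
        rng = np.random.default_rng(2)
        frames = [rng.normal(100, 5, size=(100, 100)) for _ in range(16)]
        print(f"single frame noise: {np.std(frames[0]):.2f}")             # ~5
        print(f"16-frame average:   {average_frames(frames).std():.2f}")  # ~5/4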
  • images gained from the experiments (such as difraction patterns, etc) can be analysed in a program such as MathCAD straight off

    For these kinds of things, just use normal 35mm camera film to record the diffraction pattern (or whatever) and then run it through a film scanner; flatbed scanners would be very arcane. A good film scanner will run for less than $500 and give you probably over 3000 dpi of resolution (mine is about 2 years old and does 2720).

    This has two major advantages. First, you get something non-digital to archive (the film), which you may use later on for studying something completely unrelated that you never thought of. Secondly, you'll probably get much better quality. Film scanners are made for professional/semi-professional use and are probably a lot better built.

    To test that the thing isn't distorting your images, produce a well-known pattern and measure it digitally; see if it checks out.

  • Scanner References.. (Score:3, Informative)

    by jdrogers ( 93806 ) on Tuesday October 30, 2001 @01:14PM (#2498253) Homepage
    I just asked a Professor here at the optical sciences center if he knew of any good publications on scanner technology.. He said to check out books/papers by Leo Beiser. He has apparently written some books on Optical Scanning technology and various SPIE papers as well. This may give you a starting point for a more rigorous look at the tech behind scanning.
  • First off, most commodity imaging hardware does not have specified shutter rates, so anything that's timing-dependent may be a problem. On the other hand, at the slightly-more-than-commodity but still-less-than-lab-equipment price point are industrial CCDs... usually they can be had for under $500, and they have a lot more options and better specs.

    Assuming that you are going with a commodity imaging device, calibration is going to be important. Do it yourself with test patterns. Also, try to work in greyscale if possible. Many CCDs, when in color mode, have some built in color / light level compensation that will kill all your chromatic accuracy.
  • consumer != science!!!

    I've seen it multiple times; adhere to it! I graduated in physics this summer and my CCDs cost approx. $30k - $50k EACH! And there's a reason for it! (Well, you might not exactly need _that_ costly equipment, but still...)
  • Use something of known dimensions, scan it, and try to measure the accuracy; you're a scientist, so create a set of tests to measure color and distortion.
  • Slightly off-topic -- In general, I don't recommend PC-based measurement equipment. A company I used to work for bought a PC-based oscilloscope trying to save some money. It was a cheap data acquisition card you plugged into an ISA slot, with software to look at waveforms. But in actuality the capture and rendering were ridiculously slow, and the controls hard to operate, so nobody was able to perform any real work with it. Also, one must take into consideration the additional trouble of having to set up the PC near the lab bench, waiting for the PC to boot up, and the lack of isolation between the probe and the PC.

    In the end, the purchase was a short-lived expensive toy.
  • They have an interesting battery of tests for digital cameras. Take a look at some of their tests and that will give you an idea as of what you should be looking for.

    BTW, the guys are right. Don't expect consumer hardware to be consistent. You are just not paying for that consistency. Also keep in mind these things are not built to take the abuse of continuous usage. Devices that are a bit more expensive are usually built better. For example, look at a $50 scanner and a $200 scanner. Now guess which one will still have a functional lid after a month of intense use.
  • I have some experience in this, so i will interject.

    Home scanners and digital cameras are definitely not suited for the task if you need very near digital reproduction of an object. The reasons for this are many, but mainly, they all interpolate colors between what imaging elements they have, and not that accurately.

    When you move up to the midrange of scanners/digital cameras (~$2000), the problem can still be there, but it's less pronounced. I worked on a project requiring digital photos of a very hard-to-photograph subject, and this range of cameras produced sub-par results for the task (the shots look incredible, but zoom in and you'll see fuzziness and interpolated color).

    Then there are the ~$20,000+ cameras and scanners. This was eventually what we had to go with. One camera delivered particularly good results, and achieved them by actually moving the CCD so that there were no interpolated pixels. It was accurate enough that if you shot a GretagMacbeth chart, right from the camera, the greys would be the same value for every pixel.

    As with all these [camera] setups, you need a very controlled lighting situation (i.e. a photo studio), but you can shoot just about anything.

    As far as scanners go, the same applies. You will need to get into the pricey professional line to get accurate pixels, and from that, better analysis.

    Your test for any product should be: if you scan a greyscale and go into Photoshop to look at the pixel color values, are they all the same value (like 125,125,125), and is it consistent across the swatch (if you move your mouse a few pixels over, does the value change)?

    The other aspect you have to contend with is your computer and monitor and their interpretation of what you're seeing. Again, if any amount of accuracy is needed, you will need a controlled lighting setup. No direct sunlight, try not to wear clothing that will project a color cast onto the monitor, a lightbox to properly illuminate the scanned subject for proper color editing, etc.

    This is where you buy a Macintosh. You don't need to do all the ColorSync stuff, just keep your monitor and scanner/camera in line.

    So based on the three levels of imaging equipment (home, semi-pro, pro), you can determine what level of final output you need and judge your costs from there.

    For a full setup, I'd guess:

    Home: 5-8k
    Semi-Pro: 10-15k
    Pro: 20-40k

    Some useful links:

    GretagMacbeth [gretagmacbeth.com]
    Apple:ColorSync [apple.com]
    Imacon [imacon.dk] - the 3020 is the camera I mentioned above
    megavision [mega-vision.com]
    leaf [leafamerica.com]
    Sinar [sinarbron.com]
    Phase One [phaseone.com]
    Betterlight [betterlight.com]

    This is mostly high-end stuff, but it should be a good starting point for finding the mix of price/performance you are looking for for the overall project.

  • For work like this, I like my QX3 [fsu.edu]. Cheap and powerful.

    There's a short review of its capabilities here [microscopy-uk.org.uk], but this site [fsu.edu] has some amazing hacks [fsu.edu] that enable it to do darkfield, polarized, Rheinberg, or even simulated Hoffman modulation contrast viewing.

  • First off: you should be able to get info such as pixel sizes from the manufacturer (typical pixel sizes are in the 10^-12 range for scientific-grade CCDs). Within arrays, CCDs are fairly uniform... all CCDs have problems with charge transfer efficiency, of course, but if you're willing to gut your digicam you can find out about that. You will also start to get quantum efficiency problems if it's not a good chip, so your detection of photons as a function of wavelength will be very poor. And then the biggest problem with digicams is the read noise, as CCDs don't do too well without being cooled.

    Software-wise, I'd recommend you go track down your local astronomer and ask them about IRAF or IDL. IRAF is free from the National Optical Astronomy Observatories and is made to work well with CCD images. The same goes for IDL, but it costs arms and legs (say roughly $1000 for a license!). I think the best advice you can get is 1) contact the manufacturers about the technical info they have, and then 2) track down your local observational astronomer. They can tell you about gain, read noise and shot noise, and about fun effects like diffraction fringes at redder wavelengths that you can get with CCDs.
  • Pixel positioning may be a bigger problem for you than light response. At least you can calibrate for light response, but with scanners and printers you can't rely on where your pixels are. 1200 pixels per inch does not mean that pixel no. 3447 is 3447/1200" away from the origin. You'll see fixed variations in the X-axis (along the sensor array) but unpredictable ones in the Y-axis (along the direction of motion), and because these vary from one pass to the next, you can't calibrate the errors away.

    Simple experiment: use a laser printer to print 0.01" squares 0.02" apart in both directions (i.e. 25% grey) on a sheet of acetate. Do it twice, superimpose the sheets, and try to get an even tone of grey. It's impossible, and there are no published specs on the accuracy of positioning.

    At the very least, include a graticule in every scan if accurate measurement is important to you (a rough sketch of checking grid-line spacing follows).
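    A minimal sketch of checking the scan-head motion against a printed grid, assuming the grid lines run across the scan direction and the scan is already loaded as a greyscale NumPy array:

        import numpy as np

        def line_spacings(scan, threshold=128):
            """Return the pixel spacing between successive dark grid lines.

            scan: 2-D greyscale array of a printed grid whose lines run
                  perpendicular to the scan-head motion (the scanner's Y axis).
            """
            profile = np.asarray(scan, dtype=np.float64).mean(axis=1)  # average each row
            dark_rows = np.where(profile < threshold)[0]
            # Group consecutive dark rows into individual lines and take their centres.
            breaks = np.where(np.diff(dark_rows) > 1)[0]
            centres = np.array([g.mean() for g in np.split(dark_rows, breaks + 1)])
            return np.diff(centres)

        # Perfectly uniform motion would give spacings with zero spread; in practice,
        # look at spacings.std() / spacings.mean() as a relative positioning error.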

  • When I was in college we did this same sort of thing (I think). I was once a physics guy, doing research for NASA, trying to build a temperature-sensing device with no electrical components within the sensor. Ultimately we found that using a sapphire fiber optic we could measure the temperature variance by relating the change in optic signal frequency to the diameter of the fiber, and then calculating the temperature. Anyway, long story short, we needed precision devices for our testing. My main part of this project was the electronics and software to get data from them, so we bought up some CD players (specific ones which had lasers with wavelengths of something like 0.5 micrometers), got the schematics from the companies and started ripping them up to grab the laser and focusing electronics. CD lasers at the time used some interesting stuff which allowed the electronics to move the lens of the laser to maintain focus within a certain range of distance (read: it can focus on a wobbling CD). This made the CD laser perfect for measuring the precision of our systems: we knew that the laser's wavelength was half a micron, so that was a rather exact margin of error. Once the laser was focused on a target, we measured the signals on the electronics side and could develop our margin of error based on the movements of the lens, with an accuracy of half a micron. KEWL! It was a fun project, and the use of CDs kept us on budget, as opposed to buying research lasers for the same thing, which could have cost thousands of dollars.

    Later that year, I used my project for that research to develop a rudimentary surface topology scanner, which made really kewl 3D computer images of pennies, dimes and quarters at microscopic resolution. Not bad for 1993...
  • You're a Senior. You're about to go out into the world and you still haven't learned the basic skills of research. Experience is no free handout. Go figure it out yourself. I'm sorry, but it's your project. Some universities might consider this collaboration on your project, and I don't know if you're allowed to solicit this kind of help. We're like your fellow students, so do your own work.
    • If you actually mean this, then you've obviously never done any real research. Informal idea exchange is the basis of how loads of stuff gets done in academia.
      • This person is a student and should learn to hone his research and problem-solving skills. Asking for help is good, but doing some fundamental searching on one's own is better. His questions seem too broad.

        Research?: 16 years circuit design and manufacture, software development in most languages, international medical research, equipment customization, statistical data analysis. You do what you have to do to get the job done.
  • One major aspect of using commercial digital cameras that will get you when trying to make quantitative measurements is what's known as anti-blooming. One of the problems with CCDs is that when you start saturating a particular pixel, the charge from that pixel can "leak out" onto other pixels, causing "blooming", where you get extra electrons in pixels around pixels that absorbed a lot of photons. The solution to this is to have hardware anti-blooming, which automagically compensates for this (I'm not sure how). However, the end effect of this is that your image is no longer linear: the number of electrons is not strictly proportional to the number of photons, because some of the pixels with larger numbers of photons have been siphoned off to prevent blooming. In scientific applications, blooming is prevented by using low light levels to keep the camera well below the saturation limit, but anti-blooming is used in commercial applications where the light level isn't well defined, so that you always get pretty (if not accurate) pictures. This will kill you if you're trying to use a commercial camera to take spectroscopic data. One good mid-range solution is to use amateur astronomy cameras, like those produced by Starlight Xpress [starlight-xpress.co.uk]. These cameras have no anti-blooming, are internally cooled, and have very small pixels and large arrays. The down side is that they're slow to read out, but in many scientific applications (like spectroscopy) this isn't necessarily a huge problem.
  • Intel has an Open Source Computer Vision Library [intel.com].

    I played with it for a while many months ago, after reading this O'Reilly Net article [oreillynet.com] about it. The link is to page 2, because that's where the calibration stuff is.

    This is how you can find out where all the pixels are pointing. I suspect there is code for calibrating intensities, but I didn't use it.
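    A minimal sketch of the geometric calibration the library supports, written against the modern Python bindings (cv2) rather than the original C API, and assuming a printed 9x6 checkerboard photographed from several angles (the image directory is hypothetical):

        import glob
        import numpy as np
        import cv2

        PATTERN = (9, 6)   # inner corners of the printed checkerboard

        # Checkerboard corner positions in the board's own plane (units of one square).
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

        obj_points, img_points, image_size = [], [], None
        for path in glob.glob("calibration/*.png"):        # assumed image location
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                image_size = gray.shape[::-1]

        # Recover the camera matrix and lens distortion coefficients.
        rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        print("RMS reprojection error:", rms)
        print("distortion coefficients:", dist_coeffs.ravel())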
