PhysicsTom asks:
"I am a Senior Physics student who's final year project is based upon using common, easily available technology to replace parts of the aparatus used in various departmental labs. Currently, my main area of interest is trying to integrate certain computer peripherals (such as scanners and digital cameras) into experiments at an earlier stage, so that images gained from the experiments (such as difraction patterns, etc) can be analysed in a program such as MathCAD straight off, rather than the much less efficient methods we're using at the moment. The problem is that I am having trouble finding out about the way in which scanners and digital cameras work, and how this would affect their accuracy with respect to what I am aiming to do." Basically, how do the various hardware aspects of such devices affect their ability to accurately measure or scan the subject of the experiment?
"The information I am looking for includes things like: the resolution of their grey-scales, what degree of accuracy the motor steps at, how uniformly distributed the CCDs are in the arrays, and other issues that might affect accuracy. Just so that I can know how close to the 'real' picture what I get out of the scanner/camera is. If anyone can tell me all these boring facts for any suchequipment (preferably solutions currently available in the UK) then I would be very appreciative."
Contact the manufacturer (Score:3, Informative)
No guarantees (Score:5, Insightful)
This doesn't mean you can't use them, though. What it does mean is that you'll need to select something you're pretty sure can handle what you want, and then devise procedures for calibrating the devices' output.
Re:No guarantees (Score:2)
Another point to be wary of is lossy compression: many folks who use cams and scanners are used to grabbing (lossy) JPEGs. Depending on what information you are trying to capture, JPEG might add artifacts (noise/errors) to your data that you don't want, and it may add more artifacts each time you transform the data, like making a copy of a copy of a cassette tape. Make sure you can operate on lossless data, and only use lossy compression formats for archival storage after you have manipulated your data (assuming you need the compression rates that lossy algorithms provide).
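To make the generational-loss point concrete, here is a minimal Python sketch (using Pillow and NumPy; the file name and quality setting are only illustrative, not from the original post) that re-encodes the same frame repeatedly and tracks how far it drifts from the lossless original:

```python
# Minimal sketch: measure generational loss from repeated JPEG re-encoding.
# Assumes Pillow and NumPy are installed; the file name is illustrative.
import io
import numpy as np
from PIL import Image

original = Image.open("diffraction.png").convert("L")   # lossless source frame
reference = np.asarray(original, dtype=float)

img = original
for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)             # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("L")
    rms = np.sqrt(np.mean((np.asarray(img, dtype=float) - reference) ** 2))
    print(f"generation {generation}: RMS error vs original = {rms:.2f} grey levels")
```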
Re:No guarantees (Score:1)
Re:No guarantees (Score:1)
Very true. Both my father and I purchased identical Umax scanners a few years back. Same model, same interface. Yet, the color quality on mine was far superior - his pictures always scanned with a bluish tint. He was able to color correct them using the software provided, but the tolerances were obviously very different despite being the same equipment.
I would wager that a $1000 HP scanner would have much tighter tolerances than a $50 Umax. Just a thought.
Re:No guarantees (Score:3, Informative)
Note that even NASA continually re-calibrates their probes -- and I presume that NASA doesn't use off-the-shelf components for most of their deep-space boxes. Units change over time, never mind the variation between units of the same model. Whatever you choose, you'll have to do calibration on an ongoing basis.
Once you figure out how to do calibration, you can compare various units (and report back! :-). If you do a good job, you might even be able to scam yourself some free units from manufacturers who are interested in the results, and having them published.
Contact the Product's Maker (Score:1, Informative)
Hm, well, when I was in college... (Score:3, Funny)
- A.P.
Re:Hm, well, when I was in college... (Score:1)
The project was looking at chemical compounds in wood smoke, and so used the water to trap some of these components for study.
He showed pictures of this thing in his seminar and described it in his thesis. It was about 15 minutes after the seminar everybody started to realise he had built a really really really big bong.
You're going to have to measure it yourself (Score:3, Informative)
I stopped relying on spec sheets when I discovered they weren't very accurate. I've seen variances as high as 50% off spec.
Sorry (Score:4, Funny)
Please continue to use the equipment as the manufacturer intended but please refrain from learning anything about it or using it for actual work.
Your friends in peace
USA
Happy Medium (Score:3, Informative)
Well, it would all depend on the time quanta that you are measuring. Digital scanners are right out for anything with velocity, since it takes a noticeable amount of time to capture the image. Digital cameras are somewhat faster, but it would depend on the quality of the camera if you wanted to track moving objects at higher 'shutter speeds' and resolutions.
If a regular camera could capture the data you are collecting - and it seems that this is the case - the digital cameras should be fine. The important issue is that higher resolutions take longer to fix the image. Finding a happy medium between image resolution and image capture is what you're looking for. You might be able to get those specs from the manufacturer(?).
Scanners/ cameras (Score:1)
Calibration (Score:5, Insightful)
Have you considered just calibrating the equipment? You'll probably need this anyway since, even if you can get the specs, they'll be expressed as ranges and individual components can fall anywhere within the range (as well as changing physically over the life of the equipment). This is true of your custom hardware as well.
If you want to get an idea of how the equipment performs before you buy, just bring your test images and a laptop into the store and ask to try the demo model.
Talk to some of the researchers in your lab. They probably already have tests as well as software that will compensate for irregularities in a CCD based on the results of the calibration.
Hear hear!! (Too late to change your topic?) (Score:5, Insightful)
Stop thinking like a freshman who expects to find the answers in the back of the book. Even if you find this information someplace, the nature of commodity (vs. scientific) gear is that the manufacturer can change it at any time to meet market needs.
You're a senior and need to start thinking like one. If you need calibration data, and you do, you should be thinking about how to get it for yourself using other commodity equipment. This is important today, critical with the improved hardware a decade or two from now.
A trivial example I would have killed for 20 years ago? A 600 DPI laser printer. With it you can easily produce high quality optical test patterns, including some basic grey scales. (A standard sized sheet of paper will have far more 'pixels' than the CCD element in the camera.)
A slightly more advanced example is what you can do with a cheap A/D card. 10-bits of accuracy doesn't sound like much, but if you're clever you can leverage it.
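One way to read "leverage it" is oversampling: if the signal carries roughly a least-significant-bit of noise, sampling many times and averaging buys extra effective resolution. A simulated sketch of that idea (every number below is invented purely for illustration):

```python
# Sketch of one way to "leverage" a 10-bit A/D: oversample a noisy signal and
# average, gaining roughly half a bit of effective resolution per doubling of samples.
# Purely simulated; the signal level and noise amplitude are illustrative.
import numpy as np

true_value = 2.34567            # volts, arbitrary
full_scale = 5.0                # assume a 10-bit converter spanning 0-5 V
lsb = full_scale / 2**10        # one quantisation step

rng = np.random.default_rng(0)
for n_samples in (1, 16, 256):
    # Noise of about 1 LSB acts as natural dither, which the averaging needs.
    noisy = true_value + rng.normal(0.0, lsb, n_samples)
    codes = np.clip(np.round(noisy / lsb), 0, 2**10 - 1)
    estimate = codes.mean() * lsb
    print(f"{n_samples:4d} samples: estimate = {estimate:.5f} V "
          f"(error = {abs(estimate - true_value)*1e3:.3f} mV)")
```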
Finally, I would strongly recommend that you review the "Amateur Scientist" columns in Scientific American over the past four or five years. If you can construct a simple closed feedback loop (cheap op-amp chip) and monitor it with an A/D converter ($100), you can do some incredible experiments.
Re:Hear hear!! (Too late to change your topic?) (Score:3, Informative)
As for x/y positional calibration, I made up a template for fret placement on a guitar fingerboard, once upon a time, by computing and plotting the fret placement in AutoCAD and printing it out on a laser printer. The finished home-built instrument played scales more accurately in tune than my commercially-built acoustic guitar did. Or, if your school has a machinist on campus, see if you can obtain a set of Johansson gauge blocks and scan them. They are sized accurately to, IIRC, 0.0001 inch, or so. If you decide to use a laser printed calibration chart, be SURE you use a grid, rather than, say, a rectangle of a certain size. This way you will be able to determine whether there are any non-linearities in the motion of the scan head.
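If you go the laser-printer route, generating the grid is trivial. A minimal sketch with Pillow (page size, DPI and line pitch are just example values, not anything the poster specified):

```python
# Minimal sketch: generate a calibration grid to print on a 600 dpi laser printer.
# Assumes Pillow; line pitch and page size are illustrative.
from PIL import Image, ImageDraw

dpi = 600
page_w, page_h = int(8.27 * dpi), int(11.69 * dpi)   # A4 at 600 dpi
pitch = dpi // 4                                      # grid line every 0.25 inch

img = Image.new("L", (page_w, page_h), 255)           # white page
draw = ImageDraw.Draw(img)
for x in range(0, page_w, pitch):
    draw.line([(x, 0), (x, page_h - 1)], fill=0)      # vertical lines
for y in range(0, page_h, pitch):
    draw.line([(0, y), (page_w - 1, y)], fill=0)      # horizontal lines

img.save("calibration_grid.png", dpi=(dpi, dpi))
# Scan the printed grid and check that the line crossings land at equal pixel
# spacing; any drift reveals non-linearity in the scan head motion.
```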
Accurate calibration standards just aren't THAT hard to find.
Agreed, but calibration is only the first step (Score:1)
Calibrating any device is important, and I love the laser printer idea for positional calibration. I think other posters responded to the intensity aspect of this.
One problem is that, for digital cameras, most are automatic and set F-stop, shutter speed, and do color correction and sharpening afterwards. You don't want any of this post-processing, and you need to know the shutter speed and F-stop (or their equivalents). I don't know enough about the market to say where these are available or not. I guess if you can't control this directly, then you'll have to include calibration data with every photo as part of the "scenery", as if each photo were taken with a completely different camera with different properties. I have a great book entitled "CCD Astronomy" which, while it isn't exactly on topic, covers the basics of image processing and calibration very nicely. I don't have the book with me now, so I can't tell you who the author is. I used this book as an undergrad in physics while working on a CCD camera as a research project. It was very accessible to me.
But the main thing you're after, I think, is the accuracy of the device, after it's been calibrated. I think this is based on two factors, the actual pixel sizes/bit depth, and the accuracy of your calibration factors. Something like pixelsize/sqrt(12), added in quadrature to the calibration errors. Estimating the accuracy of the calibration is an interesting business, and there are a variety of ways you could do it. Taking multiple calibrations and finding the statistical spread (RMS) is an easy way, but only gives you a lower bound because there will be systematic errors in your calibration procedure. I'll leave it at that. Probably any intelligent, hand-waving approach to estimating these systematic errors is enough for a senior project. A really thorough approach could take much more time than you have, generally speaking.
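As a rough numerical illustration of that quadrature estimate (every value below is made up, not from any real device):

```python
# Rough sketch of the error estimate described above: the quantisation term of
# pixel size / sqrt(12), combined in quadrature with the calibration error.
# All numbers are illustrative.
import math

pixel_size = 0.05          # mm per pixel, from the geometry of the setup
quantisation = pixel_size / math.sqrt(12)
calibration_rms = 0.02     # mm, spread of repeated calibrations (a lower bound)

total = math.sqrt(quantisation**2 + calibration_rms**2)
print(f"quantisation term: {quantisation:.4f} mm")
print(f"combined 1-sigma estimate: {total:.4f} mm")
```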
Hope this helps!
Re:Agreed, but calibration is only the first step (Score:1)
My brother, who is a photographer of the strictly analog variety, tells me that you can get digital backs for most modern medium and large format cameras. However they do cost $10,000 and up and must be plugged into a standard computer or laptop for storage. The advantage here is that you can set F-stops, focus, and focal plane adjustments manually. Like you, however, I don't know a lot about the market so I can't really provide much more info or links.
One use of off-the-shelf CCDs (Score:2)
In the end, he discovered that there were substantial geometry differences between the cameras that made registering the three bands quite a challenge. That's the difference between a 12" format aerial photo camera and a 35mm SLR: the former costs a small fortune because a lot of effort has gone into ensuring that the film is held as flat as possible (on a plate rather than stretched between two spindles).
Then there's colour response. Same deal. I'd guess "scientific" quality sensors would be more consistent in their response to being hit by light.
Basically, if you can account for the effects of using more variable components (by calibration, experimental design or whatever), where's the problem? Regardless of which way you go, you will need to *know* how your instruments take their measurements so you can know how they influence your results.
Xix.
Scan known image patterns and compare for accuracy (Score:1)
calibrate yourself (but get decent stuff) (Score:2, Interesting)
Also, for image capture, avoid anything that adds software artefacts (especially compression). FireWire uncompressed cameras (we get ours from www.unibrain.gr, very good) are good for high frame rates at high resolution, with good Linux support.
Re:calibrate yourself (but get decent stuff) (Score:1)
Seems to me... (Score:2, Insightful)
It would be a good lesson in the real world - like the old aphorism
Re:Seems to me... (Score:1)
What's more important in an experiment is understanding where your errors come from. All of my undergraduate labs (nuclear and optics) were based around using whatever ridiculously ancient and decrepit pieces of equipment we had lying around, and learning all the clever little tricks we could glean from our professors about how we could get accurate results from them.
It seems to me like there is no answer to your question unless we know exactly what it is that you're trying to measure. I accurately measured diffraction patterns in optics labs with a cheap CCD video camera and a framegrabber card. Sure, we had to program a filter to convert the frame into raw data for analysis, but I remember that just using our eyes, we were able to determine correct contrast settings.
It also seems to me that if you're working on a senior project, what your professors are more concerned about is not your results, but how well you statistically analyze the nonlinearities that are actually there. Trying to find a more accurate measurement tool usually just means that you're going to have to use more sensitive calibration tools to determine nonlinearities.
Now if you could post what your experiment actually is (although it sounds like you're trying to revamp many experiments), someone here may be able to propose a solution to you that allows you to ignore the nonlinearities in a device.
~Loren
Film scanner website (Score:1)
Anyway: This site on film scanners [cix.co.uk] talks specifically about film scanners, but also about the technology associated with them. I also really liked the discussion on ink jet printers (which I knew nothing about). Good luck!
Calibration is essential.. (Score:1)
The same goes for your scanner. There are a ton of problems you will run across when you try it, so just make sure that you compare the results you get with your traditional measurement approach against what you get using the scanner.
Good luck,
JD
the spectre of proprietary specs... (Score:1)
a) See if anyone in the open-source community is working on projects utilizing this hardware. Heck, see if you can find some uber-geek who's been involved in creating linux drivers for a bunch of scanners; maybe you'll find someone who has a great storehouse of eclectic ccd-centric knowledge they would be glad to dump on you.
b) Never underestimate the power of documentation. As I always was told, "documentation is like sex, even when it's bad, it's better than nothing..." Maybe they've listed component manufacturers in the crufty stuff they pack in the back of some of the user manuals.
c) Love your service technician. If you're working with equipment that requires outside support, get friendly with him/her and see if you can't wheedle yourself a set of old support/repair documentation. Most of these people are wage-slaves like us and may well be interested in your little projects.
Beyond that, keep your nose to the grindstone and good luck. Let us know how it goes.
Do your own measurements. (Score:3, Informative)
The best thing to do, when possible, is to do the measurements yourself. That way you know exactly what *your* device is capable of doing, and not the *average* device from the manufacturer. You shouldn't rely on manufacturer's spec sheets for this type of information.
For example, you can get a quick idea of the bit depth of a CCD by measuring the noise floor of the output for a null signal and comparing it to the output for a saturated signal. You will find that most *consumer* or *security* CCD cameras will not give you a full 8 bits. Even scientific CCDs that claim a full 8 bits only deliver it under certain conditions, with a specific type of average or weighted measurement. Don't trust the spec sheet. Measure to make sure!
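A sketch of that noise-floor test, assuming you can save a dark (null) frame and a saturated frame as 8-bit greyscale images (the file names are illustrative):

```python
# Sketch of the noise-floor test described above: compare a dark (null) frame
# with a saturated frame to estimate the effective dynamic range in bits.
import numpy as np
from PIL import Image

dark = np.asarray(Image.open("dark_frame.png").convert("L"), dtype=float)
bright = np.asarray(Image.open("saturated_frame.png").convert("L"), dtype=float)

noise_floor = dark.std()                  # RMS read-out + thermal noise
signal_range = bright.mean() - dark.mean()
effective_bits = np.log2(signal_range / noise_floor)
print(f"noise floor: {noise_floor:.2f} counts")
print(f"effective dynamic range: {effective_bits:.1f} bits")
```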
Then of course you could also use your head. How uniform are CCD arrays (spatially)? Think about how they are made. They are very uniform.
Finally, you should talk to your final project advisor. What you are doing isn't Physics, it's Engineering. Sure engineering is part of experimental science, but shouldn't be the prime focus of a "Physics" project, IMO (was a Physics undergrad myself).
Be Careful: Consumer CCD != Scientific Equipment (Score:2)
There was an article on a very similar subject in New Scientist a few weeks back.. Lemme see if I can get a URL....{time passes} Oh dear, it's in the archive and you'll need to register to see it. And registration requires a subscription to the magazine.. how very lame.
Anyway, the upshot was that a research group was using consumer-type digital cameras to help automate surveys of rain forest flora.. turns out that their estimates were VERY BADLY off because the prime discriminator used was colour (think: "shades of green"). And the cameras they used (specific makes/models not mentioned) basically couldn't capture the range of greens required, or distorted them. Spherical aberrations from the el-cheapo lenses on the cameras just made things worse. Bottom line: years of work needs to be re-done with more expensive, calibrated equipment.
Re:Be Careful: Consumer CCD != Scientific Equipme (Score:2)
The bottom line is this: Everything depends on how well you collect that initial data. This is TRUTH: No amount of signal conditioning, DSPs, FFTs, DCTs, quantum neural framulators, or anything else can make up for crappy sensing elements.
This is true whether you're talking about image sensors, temperature sensors, EM sensors, mechanical sensors (force, pressure, torque, etc.), or anything else. Remember that silly saying you learned in your first computer class: Garbage In, Garbage Out - It's still true, whether you like it or not!
Sadly, I can't tell you how many times I've seen really bright people ignore this simple fact of life (wasting countless millions of dollars in the process), confident that the rules don't apply to them, and that their computer can somehow create something from nothing. (My experience tells me these people are more likely to be in academia or very large companies where "scientists" are more highly regarded than mere "engineers".)
BTW: I know a thing or two about this because my father's company [i-s-i.com] (and no I did *not* do the web site) specializes in building high-quality mechanical sensors that provide laboratory precision in hellish environments. (Literally hellish: like downhole in oil wells.) Some customers are willing to pay for really good data, realizing that there's no alternative if you really need to know what's going on.
Get the sensing elements right, and the rest of your job will be much easier (and cheaper, too...)
SANE is a key resource (Score:3, Informative)
The SANE folks have gone to great efforts to get various scanner/camera devices to work in an open source environment. In some cases the manufacturer provided all the information needed to interface to the device; in other cases the interface has been found exclusively through reverse-engineering.
I highly recommend that you look closely at the list of supported SANE devices [mostang.com] and choose a device known to work from the list. If you go into your local computer store and buy something off the shelf without looking at the SANE list, you are *very* likely to end up with a product that is completely unsupported in any useful environment.
When in doubt, calibrate! (Score:2)
Keyword: photogrammetry (Score:2, Informative)
Here a potted google search. [google.com]
BugBear
Test It!! (Score:3, Redundant)
The thing to learn though is that consumer hardware is not scientific hardware. There is rarely much quality control with regards to specs, even when they are available. If this hardware is going to be the dominant error source you probably shouldn't be using it in the first place. As tedious as it can be, it's a good idea to test the specification of ANY piece of hardware that you are adding to a research lab, whenever reasonable to do so. I still remember wasting two days of my life because the magnetometer was disturbingly off spec, and that was a serious research tool.
How do you test scanners and cameras? Clearly by scanning and photographing known objects. If you're just scanning diffraction patterns and stuff like that, then find a couple well known, well understood such effects and use them as your benchmark. It's also possible to buy high quality gray scales and precisely known grids to use as references.
The lesson here is: don't use cheap equipment when it will be the dominant error source (preferably use it in parts of the experiment that contribute negligibly to your overall error), and TEST all your equipment and quit relying on spec sheets for anything important. Publication retractions that read the equivalent of "Oops! There really isn't any effect here, but we were too lazy to get it right." are very funny, but won't do anything good for your career.
Re:Test It!! (Score:2)
I didn't mean to imply anything to the contrary. Sure, it's the right tool for the right job, and old computers can be very useful in the laboratory environment. When I said it was "disturbing", I just meant the scale of the thing. Computers were invented in research labs and it seems like many of them go back there to die. And lots of those old computers serve as general purpose controllers via QBasic, because of its ease and excellent serial port tools.
Re:Test It!! (Score:2)
A few things to consider (Score:4, Interesting)
Second, off the shelf imaging devices are challenging to use for scientific data collection for a number of reasons. The main one being their response is usually designed to replicate the human eye rather than a true spectral response--the difference between photometry and radiometry.
For resolution tests, go to www.edmundoptics.com and check out the various testing targets available. The cheapest mylar USAF targets are pretty good for testing spatial resolution. Remember that when you get close to the resolution limit of the CCD, aliasing due to misalignment is going to be a factor. Your resolution could be up to a factor of 2X (per axis) better than you can test for, unless you're able to align the target with the pixels.
You should also try to figure out which CCD the device uses. Yahoo!'s Electronics Marketplace is a good place to search for components and there is usually a link to the manufacturer's spec sheet. Some spec sheets are quite detailed and will give you plenty of information regarding sensitivity, dark current, spectral response, etc.
Be skeptical of resolution claims. A flatbed scanner I have claims 9600 dpi or about 2.6e-6 m resolution. In reality, it's no better than about 5e-5 m.
Also, the picture you get out vs the "real" picture is highly dependent on the imager's software & firmware. Autoexposure and color correction functions are usually present and can play havoc with an attempt to figure out what the "real" image is. Again, test targets may help here--if you can control all the other variables in the system, you can do some calibration experiments to figure out what the imager is doing to your image.
Well, I hope this points you in the right direction.
calibration (Score:1)
ask an astronomer (Score:1)
A Solution in Search of a Problem? (Score:2)
There was nothing terribly expensive about the physics laboratory equipment I worked with.
There are exceptions, such as specialty devices like the Michelson-Morley apparatus, lasers with particular wavelengths, oscilloscopes, frequency analyzers... but none of that is going to be replaced by a general purpose computer.
You may be looking for entirely different kinds of experiments which can be done using computers and digital cameras or scanners... like "take this camera and use it to measure distance, speed and direction of motion", "determine the rate at which accuracy deteriorates", "move the camera or use two cameras to calculate the distance of unknown objects, applying what was learned about the camera's accuracy and resolution to determine your confidence in the object's position" or "measure the colour response and accuracy of this scanner".
Other fun first year exercises might be to demonstrate the effect of various binary representations of numbers on the accuracy of data... all physics students need to know that stuff.
Forget about push-button dumps of information into MATLAB or whatever. I hated it when lab instructors would set everything up for you; you don't learn anything. It would be worse if I walked in and didn't even have to measure anything... just hit a button (god forbid touching the apparatus!), push the data into MATLAB, follow the instructions, hope the OS doesn't crash, then hand in my results.
this hardware exists. (Score:2, Informative)
Something you should go and check out is a trade show, RSNA, which shows off medical scanners and other imaging hardware that could be useful to you.
RSNA [rsna.org] is held in chicago.
-rev
Some random recommendations (Score:3, Informative)
Good for imaging, unsuitable for measurement (Score:3, Insightful)
Remember the Hubble Space Telescope (Score:2, Informative)
Keep in mind that good postprocessing can factor out all sorts of predictable equipment shortcomings. When the Hubble Telescope went up with a seriously flawed mirror, good software made it possible to get scientifically valid results without replacing the flawed optics. A similar approach might be useful here, if you're interested in this aspect of the problem.
Also, keeping benchmarking data, such as a color test image in the field of view of each of your data images, could allow for per-image calibration and factor out some of the unpredictability of consumer imaging. This could be easily automated in software.
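A sketch of how that per-image calibration might be automated, assuming a grey reference patch of known value sits in a fixed corner of every frame (the patch location and nominal value below are illustrative assumptions):

```python
# Sketch of the per-image calibration idea: a grey reference patch of known
# reflectance is kept in the corner of every frame, and each image is rescaled
# so that the patch reads its nominal value.
import numpy as np
from PIL import Image

PATCH = (slice(0, 50), slice(0, 50))   # pixel region covering the reference patch
NOMINAL = 128.0                        # value the patch should read after calibration

def calibrate(path):
    frame = np.asarray(Image.open(path).convert("L"), dtype=float)
    measured = frame[PATCH].mean()
    gain = NOMINAL / measured          # single multiplicative correction per frame
    return np.clip(frame * gain, 0, 255)

corrected = calibrate("data_frame_001.png")   # illustrative file name
```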
Re:Remember the Hubble Space Telescope (Score:1)
Actually, good optics were used to correct flawed optics. It wasn't a software solution, but rather a corrective lens that was added to get the good results.
Re:Remember the Hubble Space Telescope (Score:1)
Truthfully, both were done, the corrective lens at a later date, as indicated in the mission press info. [nasa.gov]
While the launch on the Space Shuttle Discovery more than 3 years ago was flawless, Hubble was not. Two months after HST was deployed into orbit 370 miles (595.5 km) high, Hubble produced a disquieting discovery, not about space, but about itself. The curvature of its primary mirror was slightly -- but significantly -- incorrect. Near the edge, the mirror is too flat by an amount equal to 1/50th the width of a human hair.

A NASA investigative board later determined that the flaw was caused by the incorrect adjustment of a testing device used in building the mirror. The device, called a "null corrector," was used to check the mirror curvature during manufacture.

The result is a focusing defect or spherical aberration. Instead of being focused into a sharp point, light collected by the mirror is spread over a larger area in a fuzzy halo. Images of extended objects, such as stars, planets and galaxies, are blurred.

NASA has been coping with Hubble's fuzzy vision with computer processing to sharpen images. For bright objects, this technique has yielded breathtaking detail never seen from the ground. NASA also has been concentrating on the analysis of ultraviolet light, which ground-based telescopes cannot see because of the Earth's intervening atmosphere.
Amateur Astronomy May Provide Some Hints (Score:5, Interesting)
Re:Amateur Astronomy May Provide Some Hints (Score:1)
http://www.willbell.com/aip/index.htm
Also, Edmund Optics sells standard optical test target cards to calibrate optical equipment just like the pros do.
http://www.edmundoptics.com/IOD/Browse.cfm?catid=289&FromCatID=36
Drivers are highly suspect. (Score:3, Insightful)
Artifacts from CCDs are bad enough - you don't need more caused by "corrective" software designed for human perception.
--
Evan
Did I miss something (Score:1, Funny)
Well.. (Score:2)
"The problem is that I am having trouble finding out about the way in which scanners and digital cameras work, and how this would affect their accuracy with respect to what I am aiming to do."
This can be learned by studying optics. It is not very straightforward, and any discussion of it would be far beyond what is possible on Slashdot. Get a book and start reading.
"The information I am looking for includes things like: the resolution of their grey-scales, what degree of accuracy the motor steps at, how uniformly distributed the CCDs are in the arrays, and other issues that might affect accuracy. Just so that I can know how close to the 'real' picture what I get out of the scanner/camera is"
There is a LOT more that affects what "the real picture" is than these factors. Again, perhaps you need to go do some reading on optics.
light intensity (Score:3, Informative)
If you are interested in measuring the intensity of each pixel, read on:
First of all, I would think that a consumer camera with automatic exposure control would automatically set the gain (or sensitivity to light). You would need to be able to turn this off.
Secondly, the offset has to do with dark noise (error due to thermal energy within the CCD). On the cameras that I used, it was around 5 intensity levels out of 256 on an 8-bit camera. There is no need to refine this on consumer equipment, so it probably doesn't get much better than that. You can buy cooled cameras to get rid of this, but you want it cheap so this is not a great option. You could try cooling the camera with liquid nitrogen; I wondered about doing this myself. Alternatively, if you are taking images of something that doesn't change in time, you can take multiple images and average them. The dark noise of the averaged image will decrease as the square root of the number of images.
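A sketch of that frame-averaging idea for a static scene, assuming the frames are saved as greyscale files (the names are illustrative); the random noise in the average should fall roughly as the square root of the number of frames:

```python
# Sketch: average repeated exposures of a static scene to beat down dark noise.
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(f"frame_{i:03d}.png").convert("L"), dtype=float)
          for i in range(16)]
stack = np.stack(frames)

# Crude noise estimate; meaningful for a dark or uniformly lit (flat) frame.
single_noise = stack[0].std()
averaged = stack.mean(axis=0)
print(f"single-frame std: {single_noise:.2f} counts")
print(f"expected noise after averaging 16 frames: ~{single_noise/np.sqrt(16):.2f} counts")
```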
Thirdly, the image correction -
Most consumer equipment uses a gamma correction curve because of similarities with film and video. Look it up if you don't know about it; it is interesting, and useful for taking pleasing photographs. For scientific purposes, though, you probably want a linear response. This will give you a constant sensitivity to light changes.
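A minimal sketch of undoing a gamma curve to get back to an approximately linear response; the 2.2 exponent is only an assumption, since the camera's real transfer curve has to be measured:

```python
# Sketch: invert an assumed gamma encoding to recover approximately linear intensity.
import numpy as np

def linearise(pixels_8bit, gamma=2.2):
    normalised = np.asarray(pixels_8bit, dtype=float) / 255.0
    return normalised ** gamma          # undo the encoding curve

# Example: a mid-grey of 128 counts corresponds to only ~22% of full-scale light.
print(linearise(128))
```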
The last thing you should be concerned with is changing conditions / response over time. Some others have noted this. You will need to calibrate the device many times, at different times of the day, to make sure that changes over long time scales are reasonable. We utilized a calibration during the experiment to reduce this problem.
As for image formats readable by MathCAD / MATLAB, etc., that should be fairly easy once you get the device driver settled.
Another way to do it (Score:1)
For these kinds of things, just use normal 35mm camera film to record the diffraction pattern (or whatever) and then run it through a film scanner; flatbed scanners would be very awkward for this. A good film scanner will run for less than $500 and give you probably over 3000 dpi of resolution (mine is about 2 years old and does 2720).
This has two major advantages. First, you get something non-digital to archive (the film), which you may use later on for studying something completely unrelated that you never thought of. Secondly, you'll probably get much better quality. Film scanners are made for professional/semi-professional uses and are probably a lot better built.
To test whether the thing is distorting your images, produce a well known pattern and measure it digitally; see if it checks out.
Scanner References.. (Score:3, Informative)
Timing, Calibration may be crucial (Score:2)
Assuming that you are going with a commodity imaging device, calibration is going to be important. Do it yourself with test patterns. Also, try to work in greyscale if possible. Many CCDs, when in color mode, have some built in color / light level compensation that will kill all your chromatic accuracy.
Can't stress enough: (Score:1)
I've seen it multiple times; adhere to it! I graduated physics this summer and my CCDs cost approx. $30k - $50k EACH! And there's a reason for it! (Well, you might not exactly need _that_ costly equipment, but still...)
measure it. (Score:2)
OT: Stand-alone v.s. PC-based (Score:1)
In the end, the purchase was a short-lived expensive toy.
Try http://www.dpreview.com (Score:1)
BTW, the guys are right. Don't expect consumer hardware to be consistent. You are just not paying for that consistency. Also keep in mind these things are not built to take the abuse of continuous usage. Devices that are a bit more expensive are usually built better. For example, look at a $50 scanner and a $200 scanner. Now guess which one will still have a functional lid after a month of intense use.
Scanners + Cameras (Score:1)
Home scanners and digital cameras are definitely not suited for the task if you need a very faithful digital reproduction of an object. The reasons for this are many, but mainly, they all interpolate colors between what imaging elements they have. And not that accurately.
When you move up to the midrange of scanners/digital cameras (~$2000), the problem can still be there, but it's less pronounced. I worked on a project requiring digital photos of a very hard to photograph subject, and this range of cameras produced sub-par results for the task (the shots look incredible, but zoom in and you'll see fuzziness and interpolated color).
Then, you have the ~$20,000+ cameras and scanners. This was eventually what we had to go with. One camera delivered particularly good results, and achieved it by actually moving the CCD so that there were no interpolated pixels. It was accurate enough that if you shot a GretagMacbeth chart, right from the camera, the greys would be the same value for every pixel.
As with all these [camera] setups, you need a very controlled lighting situation (i.e. a photo studio), but you can shoot just about anything.
As far as scanners go, the same applies. You will need to get into the pricey professional line to get accurate pixels, and from that, better analysis.
Your test for any product should be: if you scan a greyscale and go into Photoshop and look at the pixel color values, are they all the same value (like 125,125,125), and is it consistent across the swatch (if you move your mouse a few pixels over, does the value change)?
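That swatch test is easy to automate rather than mousing around in Photoshop. A sketch, assuming the scan is saved as an RGB file and the crop coordinates cover one grey patch (both the file name and the coordinates are illustrative):

```python
# Sketch of the grey-swatch test described above: within a scanned grey patch,
# check that R, G and B stay equal and that the value barely varies spatially.
import numpy as np
from PIL import Image

scan = np.asarray(Image.open("greyscale_scan.png").convert("RGB"), dtype=float)
swatch = scan[100:200, 100:200]                     # crop of one grey patch

channel_means = swatch.reshape(-1, 3).mean(axis=0)
colour_cast = channel_means.max() - channel_means.min()
uniformity = swatch.mean(axis=2).std()

print(f"R/G/B means: {channel_means.round(1)}  (cast: {colour_cast:.1f} counts)")
print(f"spatial std within swatch: {uniformity:.2f} counts")
```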
The other aspect you have to contend with is your computer and monitor and their interpretation of what you're seeing. Again, if any amount of accuracy is needed, you will need a controlled lighting setup: no direct sunlight, try not to wear clothing that will project a color cast onto the monitor, a lightbox to properly illuminate the scanned subject for proper color editing, etc.
This is where you buy a Macintosh. You don't need to do all the ColorSync stuff; just keep your monitor and scanner/camera in line.
So, based on the three levels of imaging equipment (home, semi-pro, pro), you can determine what level of final output you need and judge your costs from there.
For a full setup, I'd guess:
Home: 5-8k; Semi-Pro: 10-15k; Pro: 20-40k
Some useful links:
GretagMacbeth [gretagmacbeth.com]
Apple:ColorSync [apple.com]
Imacon [imacon.dk] - the 3020 is the camera I mentioned above
megavision [mega-vision.com]
leaf [leafamerica.com]
Sinar [sinarbron.com]
Phase One [phaseone.com]
Betterlight [betterlight.com]
This is mostly high end stuff, but it should be a good starting point in finding the mix of price/performance you are looking for for the overall project.
Intel QX3 USB Microscope is Super (Score:1)
For work like this, I like my QX3 [fsu.edu]. Cheap and powerful.
There's a short review of its capabilities here [microscopy-uk.org.uk], but this site [fsu.edu] has some amazing hacks [fsu.edu] that enable it to do darkfield, polarized, Rheinberg, or even simulated Hoffman modulation contrast viewing.
CCD info (Score:1)
Beware of resolution! (Score:1)
Simple experiment: use a laser printer to print 0.01" squares 0.02" apart in both directions (i.e. 25% grey) on a sheet of acetate. Do it twice, superimpose the sheets, and try to get an even tone of grey. It's impossible, and there are no published specs on the accuracy of positioning.
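If you want to reproduce that test, generating the artwork is straightforward. A sketch with Pillow, assuming a 600 dpi printer (the dimensions follow the description above; the page size is illustrative):

```python
# Sketch: generate the 25% grey test pattern described above (0.01 inch black
# squares on a 0.02 inch pitch) for printing on acetate.
from PIL import Image, ImageDraw

dpi = 600
square = int(0.01 * dpi)            # 6 pixels at 600 dpi
pitch = int(0.02 * dpi)             # 12 pixels
w, h = 8 * dpi, 10 * dpi            # 8 x 10 inch printable area

img = Image.new("1", (w, h), 1)     # 1-bit image, white background
draw = ImageDraw.Draw(img)
for y in range(0, h, pitch):
    for x in range(0, w, pitch):
        draw.rectangle([x, y, x + square - 1, y + square - 1], fill=0)

img.save("acetate_pattern.png", dpi=(dpi, dpi))
```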
At the very least, include a graticule in every scan if accurate measurement is important to you.
CD lasers and fiber optic interferometers... (Score:1)
Later that year, I used my project for that research to develop a rudimentary surface topology scanner, which made really kewl 3D computer images of pennies, dimes and quarters at microscopic resolution. Not bad for 1993...
I'm Sorry but... (Score:1)
Re:I'm Sorry but... (Score:1)
Re:I'm Sorry but... (Score:1)
Research?: 16 years circuit design and manufacture, software development in most languages, international medical research, equipment customization, statistical data analysis. You do what you have to do to get the job done.
Digital cameras and anti-blooming (Score:1)
Intel has a Vision library with calibration tools. (Score:2)
I played with it for a while many months ago, after reading this O'Reilly Net article [oreillynet.com] about it. The link is to page 2, because that's where the calibration stuff is.
This is how you can find out where all the pixels are pointing. I suspect there is code for calibrating intensities, but I didn't use it.
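For reference, here is roughly what that geometric calibration looks like, written against the modern OpenCV Python bindings rather than the original Intel library's API (the checkerboard size and file names are illustrative): it recovers the camera matrix and lens distortion from photographs of a printed checkerboard.

```python
# Hedged sketch of checkerboard camera calibration with OpenCV's Python bindings.
# Photograph a printed checkerboard from several angles, then run this.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                     # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):                # illustrative file names
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(grey, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, grey.shape[::-1], None, None)
print("re-projection RMS error:", rms)
print("distortion coefficients:", dist_coeffs.ravel())
```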