
 



Input Devices

Computer Vision Tech Grabs Humans In Real-Time 3D

Tinkle writes "Toshiba's R&D Labs in Cambridge, UK, have developed a system capable of real-time 3D modeling of the human face and body — using a simple set of three different colored lights. Simple it may be, but the results are impressive. Commercial applications for computer vision technology look set to be huge, according to Professor Roberto Cipolla. On the horizon: cheap and easy digitizing of everyday objects for ecommerce, plus gesture-based interfaces — a la Natal — and in-car safety systems. Ultimately, even driverless cars. 'This is going to be the decade of computer vision,' predicts Cipolla."
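The setup the summary describes (one camera plus three colored lights in known positions) is essentially color photometric stereo: each RGB channel records the shading produced by one light, giving three linear constraints per pixel on the surface normal. A minimal NumPy sketch under idealised assumptions (a Lambertian surface and invented light directions; Toshiba's actual calibration is not described in TFA):

```python
import numpy as np

# Hypothetical unit direction vectors of the red, green, and blue lights.
LIGHTS = np.array([
    [ 0.5,  0.5, 0.707],   # red light
    [-0.5,  0.5, 0.707],   # green light
    [ 0.0, -0.7, 0.714],   # blue light
])

def normals_from_rgb(frame):
    """Recover per-pixel surface normals from one RGB frame.

    Under Lambertian shading each channel's intensity is
    albedo * dot(light_dir, normal), so the three channels form
    a 3x3 linear system per pixel.
    """
    h, w, _ = frame.shape
    intensities = frame.reshape(-1, 3).T            # 3 x N
    g = np.linalg.solve(LIGHTS, intensities)        # albedo-scaled normals
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)          # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic check: a flat surface facing the camera (normal = [0, 0, 1]),
# so each channel's intensity equals the z-component of its light direction.
flat = np.tile(LIGHTS[:, 2], (4, 4, 1)).astype(float)
normals, albedo = normals_from_rgb(flat)
```

A real system would also calibrate the light directions, mask shadowed and saturated pixels, and integrate the normal field into a depth map.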
This discussion has been archived. No new comments can be posted.


  • oic (Score:1, Insightful)

    by Anonymous Coward

    Driverless cars huh? Not sure how safe I feel about that ;>

    • Re: (Score:3, Insightful)

      by amRadioHed ( 463061 )

      I would feel much safer. Drivers are the cause of most crashes. If they can be replaced with something more reliable it would be a huge improvement.

      • Re:oic (Score:5, Insightful)

        by WrongSizeGlass ( 838941 ) on Tuesday March 30, 2010 @01:46PM (#31675096)

        I would feel much safer. Drivers are the cause of most crashes. If they can be replaced with something more reliable it would be a huge improvement.

        Let's ask Toyota owners how they feel about 'driverless cars'. All it takes is one small problem, or even an incompatible system amongst the many manufacturers (keep in mind that odds are they all won't be running Linux).

        This reminds me of Itchy & Scratchy Land [wikipedia.org] and its inspiration, Westworld [wikipedia.org]. What could possiblye go wrong?

        • Sure, ask anyone involved in any accident and what caused their accident will be most important to them. But what percentage of accidents do the recent problems with Toyotas comprise? I didn't say there were no accidents due to car failures, but the fact is that even with Toyota's problems, drivers are still responsible for more accidents and deaths than anything else.

        • by MobyDisk ( 75490 )

          I agree with your suggestion: ask Toyota owners. Go even further than that. Take a survey.

          Compare the number of Toyotas that have failed because of the mysterious acceleration problem, to the number of cars that have failed because of problems with the human driver.

        • Let's ask Toyota owners how they feel about 'driverless cars'. All it takes is one small problem, or even an incompatible system amongst the many manufacturers (keep in mind that odds are they all won't be running Linux).

          Drivers confusing the gas-pedal with the brake isn't a small problem. It's quite a large one.

          Fixing the troublesome component (i.e. eliminating the human driver) would likely reduce accidents quite a lot.

          Of course, the accidents that did occur would be sensationalised, but hopefully people would realise the increase in safety is worth it.

        • by geekoid ( 135745 )

          Properly done, not much.

          Did you know most new jets take off, fly to a destination, and then land on their own?
          Automated car systems are coming, and they will be fine. The current position of the market will dictate 2 things:
          1) Slow adoption - meaning a piece at a time, then the coupling of a few pieces, and so on.

          2) The public won't tolerate unsafe vehicles.

        • You should go ahead and do that. Just find a Toyota owner and ask if they're worried, or if they're having any problem with their car. Because, as I said before [slashdot.org], it's the Non-Toyota owners who are offering the majority of the FUD.
          • by Restil ( 31903 )

            It's like any other obscure car problem. Assuming there is an ACTUAL problem, despite the lack of efforts to find it, it will probably not affect more than 100-150 vehicles over the lifetime of all of the products that have the potentially faulty system. It's enough to justify a recall, but on the other hand, it's probably less of an occurrence than random chance would otherwise provide. Even if there IS a glaring problem, most Toyota owners will never experience it.

            The problem now is that Toyota has a P

      • I would venture to say that drivers cause 100% of car driving accidents.
        • by hufman ( 1670590 )
          There are other causes of car driving accidents, such as wildlife forgetting to look both ways before crossing in front of heavy traffic. Also, icy roads occasionally cause problems. But yes, most problems are because the drivers are allowed out of their houses.
        • Including the ones where, say, something falls off a truck and onto the road ? The ones where kids drop blocks of cement off bridges ? Yes, the latter happened, too.

          Granted, they're a very small subset compared to the driver-caused ones, but claiming 100% is just silly.
      • Drivers cause the most crashes now. That might not be the same when the proportion of driverless to driven cars is a little higher.
      • Because all us gamers know how good AI pathfinding is....

    • It's totally safe, as long as they only drive in dark rooms illuminated by red, green, and blue lights at fixed positions.
    • Well, in the different areas of the U.S. I have driven (San Francisco, CA; Denver, CO; Chicago, IL; Los Angeles, CA; Phoenix, AZ; Tampa, FL; and others), I would say that driver-less cars would be as safe (if not safer) than cars with drivers in some of those areas.

      Of course, it kind of depends on the driving conditions (rush hour, driving rain storms, blizzards, thick fog, etc.).

  • by Locke2005 ( 849178 ) on Tuesday March 30, 2010 @01:18PM (#31674726)
    What implications does this development have for the pornography industry?
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Until they can construct 3D models AND animate them convincingly I don't think there are any implications. Having viewers download a 3D model to admire on their 2D display doesn't seem to offer much advantage over photos or video. Loading the model in an editor and applying different clothing or performing a virtual boob-job maybe?

      • I take it you haven't seen Avatar. Using motion capture and performance capture, we should now have no problem capturing the "acting" in real time and transferring it to a 3D model that doesn't have huge pimples on its ass, bad teeth, and breast augmentation scars. And of course, the assertion that "size doesn't matter" will now be universally true, and Ron Jeremy can finally retire!
        • I take it neither of you read about what all went into making the big blue aliens in Avatar believable.

          After motion and performance capture, after running very refined automated scripts to tweak the movements and expressions, the Na'vi were squarely in the middle of the "uncanny valley": the closer you get to human-like expression without getting it exactly right, the creepier and less realistic a model feels.

          It took thousands of hours of hand-tweaking the expressions and body movements to suff

          • Why aren't extensively augmented starlets also perceived as being in the "uncanny valley"? And why do people think Nancy Pelosi looks so lifelike, when she's obviously a simulation of a human being?
  • by internic ( 453511 ) on Tuesday March 30, 2010 @01:22PM (#31674788)
    There are four lights!
    • Re: (Score:3, Interesting)

      by HTH NE1 ( 675604 )

      This is how the Martians see us.

      • by HTH NE1 ( 675604 ) on Tuesday March 30, 2010 @02:35PM (#31675856)

        This is how the Martians see us.

        Overrated? You're making me feel old, and I wasn't even born yet.

        It's a reference to the RGB eyes of the Martians in the 1953 movie version of The War of the Worlds. The tri-segmented eyes in the movie emitted red, green, and blue light, illuminating the subject and allowing the cyclopean Martians to see in 3D, just as a cyclopean camera can derive 3D information using this method now. Otherwise, as depicted with Futurama's Leela, a cyclops would have no depth perception.

        Of course, the amount of depth perception would depend on the spread of the lights, so even the Martians' sense of depth would be limited, but not non-existent.

    • Err, it's only obligatory to Star Trek (specifically, TNG) fans.

      (damn - I'm not really a Trek fan but I actually know that. double-damn!)

      • by Bugamn ( 1769722 )
        Well, the plot of that episode references 1984. Anyway, weren't there five lights?
        • In the episode there were four lights, but the interrogator claimed there were five. I figured claiming there were three is just as good. :-)
  • by Anonymous Coward on Tuesday March 30, 2010 @01:23PM (#31674796)

    Right now, 3D camera technology to scan a hand-made prototype into commercial CAD software revolves around a scanning laser, special cameras, and a turntable.

    Combining this technology with other image mapping software would allow you to use 3 or 4 fixed cameras with overlapping FOVs: you would simply set your source model on a table, turn on the lights, take a picture, and be done.

    I would SOO love to have a FOSS implementation of this modeling software.

    (I sculpt, and being able to make a large physical object, scan it, then send the digital model to a rapid prototype house and get a miniature size made from the digital version would be VERY handy.)

    • Re: (Score:3, Insightful)

      by ircmaxell ( 1117387 )
      What I would find interesting is if they could make RGB lights that flash each color for only a tiny fraction of a second. So to the average person the light looks white, but to the camera (which would need to be fast to read that much change) it appears the color for that frame. That way, you could have a system like this in a normal room and record a 3D model of the room at all times (think of a security camera, but one that could take a 3D image instead of a 2D one)... It seems cool so far, let'
      • Re: (Score:3, Insightful)

        by amRadioHed ( 463061 )

        Or even better would be a system that uses infrared or some other wavelength that we can't see.

      • Which is easier: separating and stitching together 3 different-colored frames, each taken at a different time by a high-speed camera, or 3 synchronized streams of video of the same subject taken from 3 regular-speed cameras with different color lens filters on them?

        • by HTH NE1 ( 675604 )

          But you don't need to flash the subject three times or use a 3x rate camera. You can continuously light the subject with the three colors and separate the colors in each frame in the computer, much like how Photoshop lets you manipulate the red, green, and blue channels of a digital photo. The 3D effect comes from the three lights being in different, predetermined positions (three axes of a cube converging on the subject). You get the full 3D effect at a normal framerate without increasing the amount of dat
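The channel separation described in the parent comment is, in code, just an array slice per channel. A sketch, assuming an ordinary 8-bit RGB frame (the pixel values here are invented for illustration):

```python
import numpy as np

# Stand-in for one captured RGB frame; values are invented for illustration.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 0] = 200  # red channel: the subject as lit by the red light
frame[..., 1] = 120  # green channel: the subject as lit by the green light
frame[..., 2] = 60   # blue channel: the subject as lit by the blue light

# Each channel is effectively a separate grayscale image of the subject
# under one of the three fixed lights -- no strobing or high-speed camera.
red_view, green_view, blue_view = (frame[..., i] for i in range(3))
```

Three views per frame at the normal framerate, which is exactly the point being made above.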

            • Well, the frame being analyzed would need to be composed of one color from each light source. Considering that we'd notice a flicker in the light if it switched at anything less than 30 Hz, you'd need a camera that could record at more than that frame rate (say 60 Hz or 120 Hz). So the easiest way (in my mind) would be to either flicker each light source between one color and white at something like 120 Hz (synchronized, of course), or simply "rotate" the colors between the 3 lights (so every 1/120 of a second,
            • Re: (Score:3, Informative)

              by HTH NE1 ( 675604 )

              The whole point of this would be so that the lights could appear white to the human eye (and hence can double for normal lighting in a well-designed room), while still providing the segmented colors necessary for this technique to work.

              The positions you need to put the colored lights in for the math to work properly are not the same positions one uses to properly light a subject being recorded. You'll produce an environment where the subject is overly lit and you'll have to resort to virtual lighting to properly illuminate the 3D model in post. And if you're going to have to do it in post, why bother with the expensive strobing and high-speed videography?

              This will be used in a controlled mocap-like environment, but without the ping-pong

      • That's a great idea if you want to mocap a tonic-clonic seizure.
      • Security cameras with this capability would help the accuracy of automatic facial recognition.
    • I suggest a Faro Laser ScanArm [faro.com] or possibly a Faro Laser Scanner. Both can turn a hand-made model into a 3d drawing. The equipment is fairly expensive (~$100k), but you can hire firms that have the equipment to scan your stuff. The company we use charges about $200 an hour, but depending on what you're scanning, this might be really cheap. The Faro technology is fantastic, but since the market is not so large, the prices are high. The equipment is also precision-made and durable enough to survive industr
    • You'd need different sets of colors for each camera or you'll get cross-contamination between cameras. It would be better to just spin the object. The other option is to use specific wavelengths and filter out the light profiles for each camera.

  • I hate to be a downer, as I'm often fascinated by computer vision technology, but aren't there some very negative potential applications here? The UK is basically coated in CCTV cameras at this point, and our phones can broadcast GPS data to telcos (whom we KNOW are happy to hand over data to the NSA if they ask kindly). Isn't fully-automated human tracking the third element of the surveillance state trifecta?
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      And with face-recognition and 3D-mapping, soon you'll get a ticket via snail-mail.

      "Dead Keith Jr.,

      in the last three months we have noticed that you have gained 15% in body mass. Please report to the gym immediately or your health care benefits will be suspended."

      • by oldspewey ( 1303305 ) on Tuesday March 30, 2010 @01:44PM (#31675078)
        If the letter is in fact addressed "Dead Keith Jr.," then shouldn't it say something more along the lines of "in the last three months we have noticed that you have turned ghoulish grey and started to stink like hell. Please stay the fuck in the ground and stop disturbing your former friends and neighbours."
      • Re: (Score:3, Funny)

        by Tekfactory ( 937086 )

        "Dead Keith Jr.,

        in the last three months we have noticed that you have gained 15% in body mass. Please report to the gym immediately or your health care benefits will be suspended."

        From the Greeting I'd think his health benefit was already suspended.

        I guess that bodies really DO bloat a little after death.

      • Could you just use a scale?

    • The key is to not let them stop at a trifecta. We the people need to add a 4th element: renegade tracking and recording of "the watchers."

      Once everybody is subjected to the same rules and consequences, the idea of a surveillance society seems a lot less scary.

    • Re: (Score:3, Funny)

      I hate to be a downer, as I'm often fascinated by computer vision technology, but aren't there some very negative potential applications here?

      You mean like how this will affect a bunch of epileptic kids walking down the street on a school field trip?

  • Colors (Score:2, Interesting)

    Instead of red, green, and blue, could they use three different frequencies in the infrared range? Then they could also take photographs in normal visible light and wrap them around the model.

    • Re:Colors (Score:4, Informative)

      by HTH NE1 ( 675604 ) on Tuesday March 30, 2010 @02:13PM (#31675496)

      You'd need a custom CCD that's sensitive to each of those frequencies, as well as a method of storing the image that preserves the intensities of each component. And if you want a color full-motion 3D model, that CCD would need to be sensitive to six frequencies--the 3D sampling set and RGB--all at once. Fitting all those different sensors will enlarge your CCD, or else you'll lose resolution.

      • Soo... expensive but not impossible.

        • by HTH NE1 ( 675604 )

          Easier for a still-life photo. Harder and more expensive for full motion, to the point that you'd tend to just skin the model with a known texture instead. And still not usable in an uncontrolled (particularly outdoor) environment or in overlapping environments.

          There comes a point where the expense of the R&D outweighs the usefulness of the end product. The ability to profit from the result is one. TFA's solution is lucky in that it can be done inexpensively with consumer hardware, a rigid light rigging, and a solved applica

  • by Anonymous Coward

    ( 'This is going to be the decade of computer vision,' predicts Cipolla. )

    where Twitter creates democracy and freedom around the world [youtube.com].

    Yours In Perm,
    K. Trout

  • Interference (Score:2, Insightful)

    by acheron12 ( 1268924 )
    Since this requires shining lights on the object to be digitized from particular angles, two or more independent vision systems (e.g. in driverless cars) would probably interfere with each other.
    • The article, the summary, and the links to the article in the summary are all a bit confusing. There are two different 3D modeling processes being demonstrated in the article. One uses a camera and a turntable to model objects, and the other uses one camera and an RGB lighting system. The second is what they propose to use for visualizing people:

      When it comes to capturing the raw shape of the human body and face in real-time the multiview stereo system is no good - humans move and expressions are, by nature, mobile. However, pictured above is another 3D modelling technology developed at Toshiba's labs that has been designed to capture the human body and face moving in real-time - yet is still faithful to every individual lump and bump.

      and the first seems to be what they're proposing to use for driverless cars (though they give no details about how a setup that uses a turntable would be transfe

  • It looks like the Toshiba group accomplishes with one camera what these guys [ted.com] did with dozens.
  • by Anonymous Coward
    Just improve skynet's target acquisition algorithms, why don't you?!!!
  • by moteyalpha ( 1228680 ) on Tuesday March 30, 2010 @01:49PM (#31675124) Homepage Journal
    That is a fantastic leap in thinking!
    I am wondering if this technique could be used with the spectrum of stars to identify the 3 dimensional structure of distant galaxies and clouds of gas?
    • Well, doubtful. The way this works is that different colors of light are positioned at different angles. So the camera captures the resulting colors based on color mixing. The computer can deduce the angle of any one point by looking at the color reflected by it. Then, once you have all the angles, you can join the neighboring pixels into a "map", and use the angle changes to predict depth (hence how it's able to deduce depth from a 2D image). So for it to work, you'd need to know the exact position o
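The last step the parent describes (joining neighboring pixels into a map and turning angle changes into depth) amounts to integrating a normal map. A naive cumulative-sum sketch; a production pipeline would use a least-squares (Poisson) integration instead:

```python
import numpy as np

def depth_from_normals(normals):
    """Integrate an (H, W, 3) unit-normal map into a rough depth map.

    The surface gradients are dz/dx = -nx/nz and dz/dy = -ny/nz;
    a cumulative sum along each axis, averaged, stands in for a
    proper Poisson solve.
    """
    nz = np.clip(normals[..., 2], 1e-3, None)
    p = -normals[..., 0] / nz       # dz/dx
    q = -normals[..., 1] / nz       # dz/dy
    z_x = np.cumsum(p, axis=1)      # integrate along rows
    z_y = np.cumsum(q, axis=0)      # integrate along columns
    return (z_x + z_y) / 2.0

# A flat surface facing the camera should integrate to constant depth.
flat = np.zeros((4, 4, 3))
flat[..., 2] = 1.0
depth = depth_from_normals(flat)
```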
      • I will explain more. A sun has a spectrum based on its position in the sequence [wikipedia.org] and each sun then is like a different light source and the data spectrum of the different reflections could be combined to produce a 3D of the galaxy. I think that I could devise the python script for that from 2D images which have spectral data.
    • by HTH NE1 ( 675604 )

      I am wondering if this technique could be used with the spectrum of stars to identify the 3 dimensional structure of distant galaxies and clouds of gas?

      Only with crude beings does this work, not luminous matter.

      And you'll have too many stars with overlapping spectra to have effective chroma isolation for mapping non-stellar matter, let alone the problem of first mapping out all the light sources contributing to its illumination.

      • You are right, that would be like taking an encrypted signal from two satellites and merging them using the relativistic velocity of the satellites and the frequency shift as it passed into the gravity well, combining them and determining my position on the surface of a sphere.
        I know I shouldn't dream of new things, but if I could do that, I would call it GPS.
        • by HTH NE1 ( 675604 )

          At least your satellites are in known and regular positions and produce signals readily separable from each other. The natural distribution of stars in the Universe is not so conveniently arranged, and their photons are not nearly so distinguishable after they've been reflected off an object of unknown topology.

          Consider that the method described in TFA only works for a single photographer in a controlled environment. Don't let the blackness of space fool you: there's a lot of light pollution out there emitted b

          • I wish I had known that complicated things can't be achieved when I was younger and it would have saved me and my friends a lot of time. What you are describing is like ray tracing and that is quite impossible I know, now that you have informed me.
            Blender [blender.org]
            As far as finding the hand of the ceiling cat, that is obvious in the wonderful lulz that illuminate us.
            I know what you mean about the stars, every night I look up and they wander about like fireflies with no obvious pattern.
            If these techniques were al
            • by HTH NE1 ( 675604 )

              Well, ray tracing is easy once you know the position and direction of every photon. In natural practice, there's a bit of uncertainty regarding that. But you might be able to fudge that a bit for astronomical scale. Ray-traced images of a terrestrial nature always seemed artificial to me, like the environment depicted was always in a perfect vacuum.

              But what if it could be applied instead at an atomic scale, using charged particles to control the simultaneous emissions of photons of certain wavelengths from

  • I wonder (Score:5, Funny)

    by spleen_blender ( 949762 ) on Tuesday March 30, 2010 @01:50PM (#31675146)
    if it will be able to perceive black people.

    http://www.youtube.com/watch?v=t4DT3tQqgRM [youtube.com]
  • This has such incredible promise for the low-cost development of modern day games. Animation still presents a problem of course.
    • by snooo53 ( 663796 ) *
      The thing is, I'm not sure a glorified 3d scanner is going to help all that much. It would be cool for say digitizing the layout of your home or a model object, and being able to do so cheaply, but what I think would have a bigger impact is software intelligent enough to separate those objects out from the environment. Being able to recognize that say a flower or a person's arm is bendable, but a chair isn't. Or being able to recognize that the bottle of soda under a bright refrigerator light is the same
  • by NonSenseAgency ( 1759800 ) on Tuesday March 30, 2010 @02:06PM (#31675366)
    One of the uses mentioned in the article was that this would enable gamers to upload realistic portrayals of themselves into computer games as their avatars. Unfortunately (or perhaps fortunately for some of us), real virtual life isn't anything like Neal Stephenson's novel "Snow Crash". Most gamers, unlike the hero Hiro Protagonist (pun intended), do not want to look like themselves at all. They are bigger, or meaner, or better looking, or, in the case of all too many, not even the same gender. What seems far more likely is a market springing up in avatars made from recordings of real people. So this raises a whole new question: who owns your avatar? Intellectual property rights just took a huge twist.
  • In further news 20 million CAPTCHA drones in 3rd world countries rioted at the prospect of being replaced by advances in computer vision which will render captcha technology useless...

  • Did anyone else, having read just the headline, think this was about Mick Jagger?
  • Turning the real world into 3D models is a core component of AI. The AI needs to see its world before it can make decisions inside it. Imagine Quake: you can make a bot to play inside it because you have all the data in the game. Now if you wanted to make a "fetch me a beer" bot, the thing would have to know what your house looked like to navigate the instruction path.

    Obviously you'll need to write software that also "identifies" the 3d objects you're looking at, and that will take some work, but isn't impossible us
  • I would have thought gesture recognition would be relatively easy: just monitor the position of the hands, which should be the nearest object. But it's taking some time. Certainly, having a computer recognise basic hand movements and run scripts accordingly would be a great timesaver. On the subject, when will Windows get a proper scripting language, like Rexx was on OS/2 and Amiga?

    ---

    Computer Vision [feeddistiller.com] Feed @ Feed Distiller [feeddistiller.com]

    • On the subject, when will Windows get a proper scripting language, like Rexx was on OS/2 and Amiga?

      OMG, off topic but I SO miss ARexx...

      The closest I've found is AutoHotkey, which has a whole scripting language and can interact with the UI of different software. It's not as useful as having Rexx ports in applications, but it opens up many capabilities (the typo auto-correcter alone is worth the download).

  • The Running Man!
    • by HTH NE1 ( 675604 )

      Or Looker.

      "Hi, I'm Cindy. I'm the perfect female type, 18 to 25. I'm here to sell for you. Hi, I'm Cindy. I'm the perfect female type, 18 to 25. I'm here to sell for you. Hi, I'm Cindy...."

  • Looks around...
    Red light source - check
    Yellow light source - check
    Green light source - check
    Other colors from monitor ......

    I think I'll go polish my tinfoil hat.

    /// Oooo, Shiny

  • I quote: "One potential usage for the tech is to create avatars that are not just cartoonesque versions of the computer user but an exact copy. Gamers would then be able to upload their digital double into their favourite games."

    Sooooo.... instead of your gaming appearance in the form of a muscular avatar with a shock of hair, you'll show people that you're in reality a balding fatso?

  • Until Hollywood uses it to make actors obsolete?
