
Cheap 3D Motion Sensing System Developed At MIT 60

Al writes "Researchers at the MIT Media Lab have created a cheaper way to track physical motion that could prove useful for movie special effects. Normally an actor needs to wear special markers that reflect light to numerous high-speed cameras placed around a specially lit set. The new system, called Second Skin, instead relies on tiny photosensors embedded in clothes that record movement by picking up patterns of infrared light emitted by inexpensive projectors that can be mounted in ceilings or even outdoors. The whole system costs less than $1,000 to build, and the researchers have developed a version that vibrates to guide a person's arm movements. Watch a video of Second Skin in action."
This discussion has been archived. No new comments can be posted.

  • Tracking fidelity (Score:4, Interesting)

    by Anenome ( 1250374 ) on Thursday April 30, 2009 @03:12PM (#27777161)

    The tracking fidelity from the video seems low. For movie work you need a very smooth input, otherwise you end up spending a lot of money to smooth out the positional data which has the side-effect of making it look more artificial and robot-like.

    What I do like is the use of projected patterns to track individual dots, that's pretty clever. But it seems like this won't be the final solution. Ultimately we're going to need to perfect a micro-GPS system, and that has many more applications than just use as movement-capture for movie production.

    • Re:Tracking fidelity (Score:4, Informative)

      by Anonymous Coward on Thursday April 30, 2009 @03:41PM (#27777539)

      The video on the SecondSkin web site says it captures 5000 frames per second. I think the slowness you perceived in the feedback video was due to the feedback software, not the capture system.

      • The video on the SecondSkin web site says it captures 5000 frames per second. I think the slowness you perceived in the feedback video was due to the feedback software, not the capture system.

        How can it capture at 5000 fps when the projectors that give it a point of reference run at only 1000 fps? Besides, it's jerky.

        Mmm, frames per second and fidelity are two different things, much like performing 5,000 calculations per second says nothing about whether those are floating-point or integer calculations. It's like you're saying it's a video camera that takes 5,000 FPS and it's shiny, and I'm looking at the features tag that says it only takes 1 megapixel resolution. We want, we need, more resolution. Especially when it comes to mo-cap for feature films, where the slightest jitter is extremely noticeable.

        • Re: (Score:3, Insightful)

          by Jay L ( 74152 ) *

          If the system's granularity is too coarse, then it can never produce smooth motion no matter how many times a second you capture a frame.

          Is video too complex to allow the sort of math we do on audio? In the audio realm, most ADCs are natively 1-bit converters with a ridiculously high sampling rate (MHz). That turns out to be mathematically equivalent to, say, 24-bit audio at 192 kHz.

          But audio's a single waveform, and video's a collection of pixels, so I guess it's all different.

          • Audio "1 bit" converters are delta-sigma converters, and internally run at much higher clock rates than the configured sample rate would imply (a 16-bit-accuracy converter clocks in the tens of MHz for 32–48 kHz sample rates). Video needs to be sampled at much higher rates: good old PAL/NTSC is generally sampled at roughly 12.5 to 13.5 MHz minimum, and often much faster. The sigma-delta converter for this would need to run in the GHz range to provide 8–10 bits of accuracy per pixel. This would cons…
            • by Jay L ( 74152 ) *

              The Sigma-delta converter for this would need to run in the GHz range to provide 8..10 bits of accuracy per pixel

              Oh... duh. This is why I'm no good at math. Thanks for the explanation.
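The 1-bit-at-MHz-rates idea in this sub-thread can be sketched in a few lines. Below is a toy first-order delta-sigma modulator (an illustration of the concept, not production DSP): it carries the quantization error forward in time, so averaging the fast 1-bit stream back down recovers a multi-bit signal.

```python
import math

# Toy first-order delta-sigma modulator: encode a signal as a 1-bit
# stream at a high rate, then recover resolution by averaging.

def delta_sigma_1bit(samples):
    """Encode samples in [-1, 1] as a +1/-1 bit stream."""
    out, integrator = [], 0.0
    for x in samples:
        bit = 1.0 if integrator >= 0 else -1.0
        integrator += x - bit          # carry the quantization error forward
        out.append(bit)
    return out

def decimate(stream, factor):
    """Crude decimation filter: average each block of `factor` samples."""
    return [sum(stream[i:i + factor]) / factor
            for i in range(0, len(stream) - factor + 1, factor)]

oversample = 64                        # 1-bit stream runs 64x the output rate
n = 1024 * oversample
signal = [0.5 * math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
bits = delta_sigma_1bit(signal)
recovered = decimate(bits, oversample)
# `recovered` tracks the sine closely even though each raw sample is 1 bit.
```

This is why a MHz-rate 1-bit stream can stand in for a slower multi-bit one; the reply above points out that video's far higher pixel rate makes the required 1-bit clock impractically fast.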

  • by rts008 ( 812749 ) on Thursday April 30, 2009 @03:24PM (#27777319) Journal

    When I saw the name of this, I immediately thought of Second Life.

    Second Skin takes over Second Life!

    Oh, the humanity! [or lack of...]

    I bet the pr0n industry could have fun with this...

  • Wii HD suit?
    • ...relies on tiny photosensors embedded in clothes... ...and the researchers have developed a version that vibrates...

      Someone will work the system into porn and THEN we'll have a video game that is REALLY addictive!

  • by Drakkenmensch ( 1255800 ) on Thursday April 30, 2009 @03:28PM (#27777397)
    If the suit used to capture motion is not the standard black suit covered in little ping pong balls anymore, it's gonna make DVD "making of" extra features a lot less entertaining to watch.
  • I actually presented a poster next to Ramesh Raskar at CHI earlier this month. While a very interesting project, he seemed to indicate that it was still very much a work in progress.
  • "researchers have developed a version that vibrates to guide a person's arm movements."

    One word: autopilot.

    (Ironically, my captcha was "females")
  • Since most of the cost resides in doing something useful with the data (actually producing the images), the time and talent of the people who are _in_ the suits, etc., the producers really don't give a frak whether their motion capture system costs $1,000, $15,000, or even $100,000. What they want is something that is proven to work, that technicians are familiar with, and that you can readily rent by the hour along with the facility it's located in. So thank you, Media Lab, for another useless gadget.

    • Parent's attention is fixed on the existing moviemaking structure and is not directed to alternative distribution and creation channels. Those alternative channels are the wave of the future. The cheaper production gets, the more opportunity we'll all have for a greater array of diverse movies.

      Someday a truly independent movie is going to hit it big via reasonably independent internet distribution. That will change everything. Technology like this only makes that day closer to reality!

      I say hurrah!

    • Re: (Score:3, Informative)

      Actually, if you RTFA, you'll see that they already address this. One of the difficulties with current systems is that you have to go to the system to do the motion capture. This new system could potentially be used on set - which would be very attractive in situations where live-action and CG are mixed.
    • by Dutch Gun ( 899105 ) on Thursday April 30, 2009 @04:44PM (#27778527)

      There are many small and medium-sized game development houses who would love an inexpensive motion capture system in order to capture data for things like in-game cut-scenes. And to them, yes, it makes a pretty big difference whether a system costs $1,000 vs. $100,000. Having to rent a studio by the hour is also pretty damned expensive.

      Besides which, it seems foolish to offhandedly dismiss new technology such as this before it's even had a chance to develop into a useful product.

    • Perhaps, but you're thinking small time, here. If the price of a good-enough Mo-Cap system got down to $1,000, do you know what that means??? That means that there would certainly be some hobbyists taking this home and experimenting with it. When that happens lots of fun things can result.

  • Like a WiiMote! (Score:5, Interesting)

    by DdJ ( 10790 ) on Thursday April 30, 2009 @03:50PM (#27777675) Homepage Journal

    What's interesting to me is, this is almost exactly how the WiiMote works so cheaply!

    A lot of people assume that the Wii's sensor bar actually senses, and that it can tell where the WiiMote is. But that ain't so. The sensor bar is just a pair of IR emitters. The front of the WiiMote is an IR camera. The thing you hold in your hand is looking at the external IR sources and using those to try and figure out where it is, and then telling that to the base system, almost exactly as is described in this article.

    It's like someone said "hey, let's do motion capture by gluing WiiMotes all over a person's body!".

    • The big difference is that the WiiMote needs to have an IR camera. In the presented method the receptors are cheap infrared sensors; the position is calculated by decoding the patterns the projectors send.
      A similar technique has been used to calibrate the image of a projector to a surface. Here is a video: []
    • by redJag ( 662818 )
      Well the sensor bar doesn't "sense", but the Wiimote definitely does. I realize your post doesn't contradict this, but it also implies that all Wii motion-sensing is done with IR and that isn't the case. The Wiimote has an accelerometer that can detect movement on 3 axes. The IR camera is used for detecting where on the screen it is pointing. []
        the Wiimote definitely [senses]. [...] [parent's post] also implies that all Wii motion-sensing is done with IR and that isn't the case. The Wiimote has an accelerometer that can detect movement on 3 axes.

        Keeping up with "your post doesn't contradict this", I want to add:

        The accelerometers sense differential data (motion), whereas the IR camera senses static data (direction towards IR light).

        If you assume that there are only two infrared sources out in the world (in either end of the sensor bar) and they don't move, you can use your camera reading to infer your angle in the horizontal plane as long as you can see the infrared sources. Using that, plus the strength of gravity at different points on the wiimo…
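To make the sensor-bar discussion above concrete, here is a minimal sketch of how a remote-side IR camera could turn the two emitter blobs it sees into a pointing and range estimate. The field of view and emitter spacing below are ballpark assumptions for illustration, not Nintendo's published specs.

```python
import math

CAM_W, CAM_H = 1024, 768      # Wiimote IR camera resolution
FOV_X = math.radians(33.0)    # assumed horizontal field of view
BAR_WIDTH_M = 0.20            # assumed emitter spacing on the sensor bar

def pointer_estimate(blob_a, blob_b):
    """From two blob pixel positions, estimate pointing offset and distance."""
    mid_x = (blob_a[0] + blob_b[0]) / 2
    mid_y = (blob_a[1] + blob_b[1]) / 2
    # Angular offset of the bar's midpoint from the camera's optical axis:
    rad_per_px = FOV_X / CAM_W
    yaw = (mid_x - CAM_W / 2) * rad_per_px
    pitch = (mid_y - CAM_H / 2) * rad_per_px
    # Apparent angular separation of the blobs gives a rough range estimate:
    sep_px = math.dist(blob_a, blob_b)
    distance = BAR_WIDTH_M / (2 * math.tan(sep_px * rad_per_px / 2))
    return yaw, pitch, distance

# Bar centered in view, blobs 100 px apart -> roughly 3.5 m away:
yaw, pitch, dist = pointer_estimate((462, 384), (562, 384))
```

The remote reports blob coordinates to the console, which runs exactly this sort of geometry; the accelerometer supplies the differential motion data the IR camera can't.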

  • I thought the most valuable part of motion capture data was the actor's face, as it's the most difficult to simulate in CG. This is a neat system, especially for the price, but it doesn't provide the best feature of the original.
  • This system will probably be used on athletes, ninjas and commandos. From the video, it obviously only works on an arm without any muscle tone.
  • $1000 of blister-healing goodness! And at 5000 fps! []

  • It relies on cycling a repeating pattern from every projector 500 times/sec. Every pixel in the pattern encodes a unique symbol by the colors & the changes in the colors over time. By sensing what symbol hits each sensor, you know what pixel from the projector is hitting the sensor & what position on the projector's XY plane the sensor is in. If you know the XY plane position from 2 projectors, you can triangulate the sensor's 3D position, but projectors with enough resolution & bandwidth to…
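A toy illustration of the decode-and-triangulate scheme just described (the identifiers and the simplified ceiling/wall "triangulation" are my own sketch, not the Second Skin implementation): each sensor turns the bit sequence it observed over time into a Gray-coded pixel address, and readings from two projectors are fused into a 3D point.

```python
def gray_to_binary(g):
    """Decode a reflected-binary Gray-code integer to a plain binary integer."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def decode_pixel(bits_x, bits_y):
    """Bits observed over time -> (column, row) in the projector's image."""
    gx = int("".join(map(str, bits_x)), 2)
    gy = int("".join(map(str, bits_y)), 2)
    return gray_to_binary(gx), gray_to_binary(gy)

def triangulate(ceiling_xy, wall_xz):
    """Toy fusion: a ceiling projector fixes (x, y), a wall projector
    fixes (x, z); averaging the shared axis gives a 3D estimate.
    (A real system would intersect calibrated projector rays.)"""
    (x1, y), (x2, z) = ceiling_xy, wall_xz
    return ((x1 + x2) / 2, y, z)

# A sensor that saw column bits 1,1,1 and row bits 0,1,0 sits at pixel (5, 3):
pixel = decode_pixel([1, 1, 1], [0, 1, 0])
```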

  • So how do they keep the projected patterns in focus as the actor moves towards & away from the projectors? What if you want to track a close actor & a distant actor simultaneously? Those projected patterns aren't going to be in focus & the sensors won't know where they are.

  • They seem to use Gray code [] sequences (only one bit differs between two neighbouring codes). Johnny Chung Lee (the Wiimote Whiteboard guy) already demonstrated the use of structured light and optical fibers [] in his thesis. He used it to rapidly locate projection surfaces.
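A quick sketch of the reflected-binary Gray code property the comment mentions: consecutive codes differ in exactly one bit, so a sensor that reads the pattern during a transition between neighbouring positions is off by at most one position.

```python
def gray(n):
    """n-th reflected-binary Gray code."""
    return n ^ (n >> 1)

codes = [gray(i) for i in range(8)]   # [0, 1, 3, 2, 6, 7, 5, 4]

# Verify the single-bit-change property between neighbouring codes:
one_bit_apart = all(bin(a ^ b).count("1") == 1
                    for a, b in zip(codes, codes[1:]))
```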

  • There are commercial products (MVN from Xsens [] (former Moven)) that use inertial sensors and gyros to derive the motion. One of the advertised uses is the movie/digital effects industry.

    Don't know about the real performance of the technology, but the idea in itself seems to offer some freedom (no need for indoor studios, less expensive).
