Create Your Own Bullet Time Camera Rig With Raspberry Pi

sfcrazy writes "A team of extremely creative people has built a really inexpensive bullet time set-up using Raspberry Pis, and the whole rig costs less than a single professional DSLR camera. With nearly half a kilometre of network cable, 48 Raspberry Pis fitted with camera modules, and PiFace Control boards, the rig looks more like the LHC at CERN than a film set. It worked perfectly, in the sense of doing exactly what a bullet time set-up should do: the Raspberry Pis achieved Hollywood's 'frozen time' effect at a fraction of the usual cost."
  • by ebenupton ( 2424660 ) on Saturday December 07, 2013 @06:53AM (#45625637)

    If I were building this rig, I would have used the $40 Model A + camera bundle, for a cost per node of roughly $50 including a USB Ethernet adapter and an SD card, with a decent PSU shared between four nodes.
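    (As a quick sanity check on that arithmetic, here's a tiny Python sketch; the $40 bundle figure is from the comment above, while the adapter, card, and PSU prices are assumptions for illustration.)

        BUNDLE = 40.0        # Model A + camera bundle, quoted above
        USB_ETHERNET = 5.0   # assumed price of a cheap adapter
        SD_CARD = 5.0        # assumed
        SHARED_PSU = 8.0     # assumed, split between four nodes

        per_node = BUNDLE + USB_ETHERNET + SD_CARD + SHARED_PSU / 4
        print(f"per node: ${per_node:.2f}")          # ~$52
        print(f"48-node rig: ${48 * per_node:.2f}")  # ~$2,496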

    A bigger issue, looking at the videos, is the need to equalize the AGC settings (easy) and the color temperature correction (harder) across the modules. Perhaps shoot RAW and then fix it in post-processing? This is where the CHDK alternative, with its better optics and lower sensor variability, really wins out. Plus you'll have Christmas gifts for all your friends and family once you take the rig apart :)
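    (One simple way to attack that color matching in post, sketched below in Python/NumPy: a per-camera gray-world white balance that pulls every module toward the same neutral. This is only an illustrative approach under a gray-world assumption, not anyone's actual pipeline; the frames are assumed to be demosaicked RAW data as HxWx3 float arrays.)

        import numpy as np

        def gray_world_gains(img):
            # Per-channel gains mapping this camera's average color to
            # neutral; img is an HxWx3 float array (assumed demosaicked RAW).
            means = img.reshape(-1, 3).mean(axis=0)
            return means.mean() / means

        def equalize_cameras(frames):
            # Apply each camera's own correction so all modules agree on
            # color balance before the frames are strung together.
            return [np.clip(f * gray_world_gains(f), 0.0, 1.0) for f in frames]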

  • by Anonymous Coward on Saturday December 07, 2013 @09:31AM (#45625991)

    OK, I was one of the people who built one of the first computer-controlled multi-camera rigs (http://www.reelefx.com/index.php?c=multicam.list), so here's some background info to put this into context:

    Multiple cameras triggered to capture motion is older than the movies; Eadweard Muybridge did it.
    Several people built "one long strip of film with multiple simultaneous lenses and shutters" rigs in the '90s (one was a bunch of inexpensive cameras with the backs removed, sharing a single long piece of film).

    The famous GAP "swing dance" commercial was done by conventional rotoscoping/animation: they filmed from various angles, then built 3D models with texture mapping and morphing.

    We did our camera rig for a commercial directed by Tony Kaye featuring Andre Agassi, where he wanted the POV of the camera to track the tennis ball as it went down the court, and Tony Kaye didn't want to do it with visual effects (film the ball separately from the crowd and composite it). So we built a rig with 100 cameras, carefully timed to fire as the ball went downcourt. Andre can hit the ball at the same speed every time without any problem, so it's just a matter of triggering the sequence by hand at the right moment. Since they're standard 35mm film cameras with standard 36-exposure loads, you get 36 takes before you have to reload 100 cameras. Then, in post production, you take frame 1 from camera A, frame 1 from camera B, etc., and string them together.
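    (To make that timing concrete, here's a back-of-envelope Python sketch of the constant-speed trigger math; the court length, camera count, and ball speed are illustrative assumptions, not the production's actual numbers.)

        COURT_LENGTH_M = 23.77   # baseline to baseline, assumed layout
        NUM_CAMERAS = 100
        BALL_SPEED_MS = 45.0     # ~160 km/h, an assumed serve speed

        spacing = COURT_LENGTH_M / (NUM_CAMERAS - 1)
        # Camera i fires the moment the ball reaches its position:
        fire_times = [i * spacing / BALL_SPEED_MS for i in range(NUM_CAMERAS)]
        print(f"inter-camera delay: {fire_times[1] * 1e3:.2f} ms")  # ~5.3 ms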

    The cameras were fired by a bunch of digital I/O cards in a rack-mounted PC, and frankly, that was a nightmare: miles (literally) of cables and connectors. Those RPi folks learned that lesson too.
    There have been tons of commercials and movies that have used that rig and subsequent versions.
    There's really cool stuff you can do: fire the cameras at a varying rate to create essentially any motion profile you want; use cine cameras to allow intercutting motion footage with the still frames; etc.
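    (The varying-rate trick is easy to sketch in Python: pick a motion profile, here an assumed smoothstep ease-in/ease-out, and derive each camera's trigger time from it, so the played-back motion slows down and speeds up at will.)

        NUM_CAMERAS = 100
        EVENT_DURATION_S = 0.5   # assumed real-world duration of the action

        def ease_in_out(u):
            # Smoothstep: slow at both ends, fast in the middle.
            return u * u * (3.0 - 2.0 * u)

        # Camera i samples the subject at a profile-warped moment, so playback
        # follows the chosen profile instead of real time.
        trigger_times = [EVENT_DURATION_S * ease_in_out(i / (NUM_CAMERAS - 1))
                         for i in range(NUM_CAMERAS)]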

    So here's what the RPi folks (or followers) will find:
    1) Cameras are not identical, particularly in the color rendition of the lens. Your eye automatically compensates for an overall color cast, particularly across multiple pictures from the same camera, and when you get film turned into prints the lab adjusts to grey or skin tones. But modern plastic camera lenses (which are of high optical quality and can be made aspheric, which helps) have slight color casts that vary from camera to camera, and when you start making composites of frames from multiple cameras it's really obvious. So you have some post-processing to do.
    2) Camera shutters have a lot of timing uncertainty. Back in the mechanical-shutter days, we found that the microprocessor inside the camera (Canon EOS) had a polling loop watching the shutter release button. A polling loop with a 50 ms cycle time probably isn't noticeable to the casual user, but it's very noticeable when you're taking multiple shots of an object moving at constant speed (see the back-of-envelope sketch after this list). We wound up modifying the cameras.
    3) The optical geometry of the cameras is not consistent, so that adds another step of post-production work, plus calibration of all the cameras ahead of time.
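    (Back-of-envelope for point 2, with an assumed ball speed: even modest shutter latency becomes a large position error between adjacent frames.)

        BALL_SPEED_MS = 45.0   # ~160 km/h, assumed
        JITTER_S = 0.050       # worst-case 50 ms polling delay quoted above

        error_m = BALL_SPEED_MS * JITTER_S
        print(f"worst-case position error: {error_m:.2f} m")  # 2.25 m
        # The ball can appear to jump by metres between neighboring cameras,
        # which is why the stock shutter-release path had to be bypassed.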

    The big one is interconnections. One hundred parallel cables are a deployment nightmare: keeping track that camera 1 is plugged into cable 1, and so on. For the second iteration, we built a microcontroller into each camera to replace the original camera controller and set up a daisy-chain approach with individual ID numbers. Then the master just sends messages saying "camera N, you fire at times T1, T3, T4," and a master sync signal goes to all cameras. Lots less cable. Subsequently (I don't work there any more) they've gone to digital cameras, which is a post-production godsend. Pulling 100 rolls of film and keeping them straight (you have to slate each camera individually so the strips are identified), then scanning them for post, then doing all the corrections, while allowing for the inevitable "skipped" or "extra" frames, was a nightmare.
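    (A minimal sketch of that daisy-chain scheme in Python: an addressed "fire at these times" message plus a shared sync pulse. The framing and the arm_shutter() hook are invented for illustration; the real rig's protocol isn't documented here.)

        import struct

        def make_fire_message(camera_id, fire_times_ms):
            # Pack "camera N, fire at T1, T2, ..." as an ID byte, a count
            # byte, then uint32 offsets in ms from the master sync pulse.
            header = struct.pack("<BB", camera_id, len(fire_times_ms))
            return header + b"".join(struct.pack("<I", t) for t in fire_times_ms)

        def handle_message(my_id, msg):
            # Each node consumes messages addressed to it and forwards the
            # rest down the chain; firing is scheduled off the sync edge.
            camera_id, count = struct.unpack_from("<BB", msg)
            if camera_id != my_id:
                return msg  # pass through to the next camera in the chain
            times = struct.unpack_from(f"<{count}I", msg, 2)
            arm_shutter(times)  # hypothetical hook: schedule the exposures
            return None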
