
Mars Rover Curiosity: Less Brainpower Than Apple's iPhone 5

Posted by timothy
from the when-I-was-a-boy-we-didn't-have-mars dept.
Nerval's Lobster writes "To give the Mars Rover Curiosity the brains she needs to operate took 5 million lines of code. And while the Mars Science Laboratory team froze the code a year before the roaming laboratory landed on August 5, they kept sending software updates to the spacecraft during its 253-day, 352 million-mile flight. In its belly, Curiosity has two computers, a primary and a backup. Fun fact: Apple's iPhone 5 has more processing power than this one-eyed explorer. 'You're carrying more processing power in your pocket than Curiosity,' Ben Cichy, chief flight software engineer, told an audience at this year's MacWorld."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Misleading (Score:3, Informative)

    by Tourney3p0 (772619) on Friday February 01, 2013 @09:03PM (#42768175)
    This is misleading. The rover has dozens of LRUs, all individually computing sensory input, crunching it, and sending it across the bus for the main computer to process. Yet it's only taking into account the main computer's processing power.
  • by sighted (851500) on Friday February 01, 2013 @09:08PM (#42768217) Homepage
    Nitpick with the summary: the rover is not 'one eyed'. It uses a bunch: http://mars.jpl.nasa.gov/msl/mission/rover/eyesandother/ [nasa.gov] That said, it does have that one big laser on its head: http://mars.jpl.nasa.gov/msl/mission/instruments/spectrometers/chemcam/ [nasa.gov] Robots on Mars with lasers. It doesn't get much better.
  • Re:Misleading (Score:5, Informative)

    by Orion (3967) on Friday February 01, 2013 @09:15PM (#42768257)

    Line Replaceable Unit, meaning it's a unit that can be swapped out quickly.

    Somehow I don't think that term really applies here...

  • by Anonymous Coward on Friday February 01, 2013 @09:30PM (#42768363)

    Off the shelf junk works in space all the time. Processing power is unrelated to radiation shielding.

    Lack of processor power has to do with qualification processes and lead times. Your pitiful opinion is misdirected and uninformed.

    OTS works in the Earth's magnetosphere. Get outside of that and you have to start making major design compromises, particularly with RAM.

  • by Anonymous Coward on Friday February 01, 2013 @09:33PM (#42768385)

    Cosmic rays [wikipedia.org] go straight through the Earth's atmosphere.

    They absolutely do not, as the article you cite makes clear in its second sentence. Sheesh.

  • by Anonymous Coward on Friday February 01, 2013 @09:38PM (#42768401)

    It's not outdated by 20 years. The CPU is a RAD750, based on the PPC750, which means it's roughly equivalent to a PowerPC G3, so the thing has a similar amount of power to the original iMac. That's leaps and bounds over the 386s that NASA used to use.

  • by JasoninKS (1783390) on Friday February 01, 2013 @09:44PM (#42768417)
    Better double check your figures. Curiosity launched November of 2011. Just landed August of 2012.
  • by WWJohnBrowningDo (2792397) on Friday February 01, 2013 @09:46PM (#42768435)
    Did you even read the article you linked?

    Cosmic rays go straight through the Earth's atmosphere.

    No, they don't. If that were true we'd all be dead. Cosmic radiation in interplanetary space is 400 to 900 mSv annually, which is 1,000 to 2,250 times the dose at sea level on Earth (0.4 mSv). Earth's atmosphere blocks most radiation below 1 GeV.

    Off the shelf computer hardware does indeed work just fine in space. You can watch people on the ISS using normal laptops and cameras all the time.

    That's because ISS is in LEO and thus is still protected by the thermosphere and Earth's magnetic field. On a trip to Mars neither of those protections would be available.

  • by retroworks (652802) on Friday February 01, 2013 @10:05PM (#42768535) Homepage Journal
    I could compare the software used in the Statue of Liberty to any phone on the market, and create the same headline.
  • by dgatwood (11270) on Friday February 01, 2013 @10:13PM (#42768585) Journal
    So only 16 years, then.... The PPC750 was introduced in 1997. Not quite 20, but closer to 20 than to "recent"....
  • by Anonymous Coward on Friday February 01, 2013 @11:00PM (#42768817)

    You're conflating several things.
    Space qualification doesn't have a lot to do with rad hardening. It's more about manufacturing processes, reliability, and testing to work over wide temperature ranges. That off-the-shelf computer probably won't work at -40C or +75C, while the processors in most spacecraft do. ISS or shuttle isn't a good example; it's basically an office environment: it even has *air*.

    Rad hardening is something else. And the space processors *ARE* more successful at hardening than garden variety CPUs. Take a look at the LEON3FT SPARC core, for instance (Available commercially as the Atmel AT697 or the Aeroflex UT699, or you can burn it into an Actel RTAX2000, if you like). It has register paths that have error correction, etc. The demonstrated performance in a radiation environment *is* better than the non FT version.

    There are single event upsets (SEU), aka "bit flips", which EDAC or parity handles nicely. Your laptop flipping a bit might not be a big deal; most consumer software has enough bugs that you just restart and move on. If the processor controlling the rocket motors during entry, descent, and landing screws up, it's a $2.5B hole in the ground. So internal registers in the space CPUs tend to be triple redundant or have other upset mitigations.
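    The EDAC/parity idea above can be sketched briefly. Below is an illustrative Python model (nothing like flight code) of the classic SECDED scheme: Hamming(7,4) plus an overall parity bit, which corrects any single flipped bit in a codeword and detects, but cannot correct, a double flip.

```python
# Illustrative SECDED: Hamming(7,4) with an overall parity bit.
# encode() protects a 4-bit nibble; decode() corrects any single
# flipped bit and flags (but cannot fix) a double flip.

def encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]    # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                      # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                      # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                      # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]  # codeword positions 1..7
    overall = 0
    for b in bits:
        overall ^= b                             # 8th bit: overall parity
    return bits + [overall]

def decode(bits):
    """Return (nibble, status): status is 'ok', 'corrected', or 'double'."""
    b = bits[:7]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]               # check positions 1,3,5,7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]               # check positions 2,3,6,7
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6]               # check positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4              # position of flipped bit, 0 = none
    overall = 0
    for x in bits:
        overall ^= x                             # recompute overall parity
    if syndrome and overall == 0:
        return None, 'double'                    # two flips: detect, can't fix
    if syndrome:
        b[syndrome - 1] ^= 1                     # one flip: correct it
    nibble = b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
    return nibble, ('corrected' if syndrome else 'ok')
```

    Real flight EDAC does the same thing in hardware across much wider words, with a scrub task sweeping memory so single-bit errors get fixed before a second one lands in the same word.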

    But those really aren't the big issues. There are things like latch-up: that particle going through causes a latch-up, and the resulting high current at a small location melts the chip. Oops, dead. There are latch-up immune designs and processes, and there are latch-up monitor/reset circuits, but it's not universal.
    There's single event gate rupture (SEGR), where a MOSFET gate gets punctured because the normal charge on it is close to the failure level in normal operation, and the particle deposits just enough more to push it over the edge. Would you notice this on a modern CPU? Maybe it's in the microcode for calculating square root or something and you wouldn't notice for a long, long time.
    We use a lot of FPGAs in spacecraft these days. If it's a Xilinx, that particle can flip a configuration bit, and now you've just programmed your FPGA to have two outputs connected to the same "wire" with opposite values. Oops, some dead gates now, or if it's bad enough, a dead chip.

    ISS is a benign radiation environment, about a Rad(Si) per year or so. There are *humans* on ISS, after all: 600 Rad will kill someone in days, and 100 Rad will make them pretty sick. A typical design dose for a Mars mission might be 20 kRad. For going to Jupiter, maybe a Megarad?

    But even in that benign radiation environment, a lot of COTS equipment will fail, and there's no way to predict which, short of test. So they take all those COTS widgets, run them in a proton beam, and figure out what the mean time to failure is. If it's long enough, you send it up to ISS and have at it. There's an awful lot of stuff that has an "expected life on ISS" of something like 90-180 days. Google for the papers or look at the website http://www.klabs.org where a lot of this stuff is collected. 180 days on ISS is plenty if you're sending new stuff up on a regular basis. Even at $100k/kilo, it's pretty inexpensive to just send a new iPad up every few months if one dies.

    If you're sending a billion bucks to Mars for 10 years, I think you might want something a bit better.

  • by smash (1351) on Friday February 01, 2013 @11:01PM (#42768831) Homepage Journal

    Nah, it just goes to show how far behind the performance curve the radiation hardened, military/aerospace grade equipment is.

    Plus, you really don't want to be bleeding edge on this sort of stuff. Discovering a mission-ending critical CPU bug when you're an astronomical-scale distance away = bad.

  • by localroger (258128) on Friday February 01, 2013 @11:02PM (#42768843) Homepage
    Cosmic rays actually interact very little with the Earth's magnetosphere, atmosphere, or silicon chips. They're going so fast that they don't hang around long enough to interact with atomic nuclei unless they score a direct hit. There really isn't much difference in cosmic ray exposure between the ground and, say, the surface of the Moon. The real problem is solar weather. The Sun regularly spits out particle blasts that would fry anything made of semiconductors. Those blasts are what power the aurorae. But those charged particles aren't going as fast, so they're deflected by the magnetosphere (which is what protects the ISS) and more readily absorbed by the atmosphere, which is why radiation levels at sea level are lower than they are in Denver. If you could get your iPhone and tablet safely out of the solar system, they would probably work fine on a generation starship.
  • by TubeSteak (669689) on Friday February 01, 2013 @11:22PM (#42768925) Journal

    This is the computer chip in the Mars Rover: https://en.wikipedia.org/wiki/RAD750 [wikipedia.org]
    Specifically, they're using two *133 MHz chips rated for 1 Megarad.
    1 Megarad is about double the hardening they actually required,
    but I'm guessing they overspecced so that the Mars Science Laboratory will outlast its planned mission length.

    Anyways, if you're in low Earth orbit (like the space station) you can get away with radiation-tolerant electronics.
    But out in cold hard space, without the Earth's atmosphere, you need radiation-hardened electronics.
    *Not 200 MHz as so many articles are quoting

    Most satellites and space based processors are no more successful at
    hardening than your garden variety laptops. They just program them better and watch for memory errors.

    What? If it were that simple, we'd be using modern processes instead of technology that debuted in 1997.
    Instead, it's quite the opposite: a modern 24nm process is impossible to harden to the same strength as an old 150nm process.

  • by Anonymous Coward on Saturday February 02, 2013 @12:02AM (#42769131)

    Actually it's not a myth. I work for an aerospace company, specifically in electronics HW design, and specifically for satellites (we built some of the instruments on Hubble as well as the spacecraft bus for several DigitalGlobe birds).

    It's not immediate "cosmic rays" that get you; it's the long-term rad exposure that a satellite gets over its intended life. The problem is aging effects and end-of-life performance. We do rad testing on all our flight-level parts, derate the performance, and then design so that at the end-of-life point you still get intended operation.

    For example, we use Actel one-time-programmable FPGAs; SRAM-based parts like Xilinx's can get random bit flips from radiation. And even in the Actel parts, we "scrub" memory in use frequently. We can fix single-bit errors, detect double-bit errors, etc.
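    For readers unfamiliar with scrubbing, here is a toy Python model of the idea. It assumes simple triple-redundant storage with majority voting (real flight EDAC typically uses Hamming-style codes instead): a periodic scrub pass re-votes every word and rewrites the copies, so one upset is repaired before a second can pile up in the same word.

```python
# Toy memory scrubbing via triple modular redundancy (TMR).
# Each word is stored three times; reads majority-vote the copies,
# and scrub() repairs any word where one copy has drifted.

def majority(a, b, c):
    # Bitwise 2-of-3 vote: a bit survives if at least two copies agree.
    return (a & b) | (a & c) | (b & c)

class TmrMemory:
    def __init__(self, words):
        self.store = [[w, w, w] for w in words]  # three copies per word

    def read(self, i):
        return majority(*self.store[i])

    def scrub(self):
        """Vote every word and rewrite all copies; return count repaired."""
        repaired = 0
        for copies in self.store:
            v = majority(*copies)
            if copies != [v, v, v]:
                repaired += 1
            copies[0] = copies[1] = copies[2] = v
        return repaired
```

    The scheme only holds as long as scrubs happen faster than upsets accumulate: two flips in the same bit position of two copies will out-vote the good one.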

    On many programs we use low-performance CPUs... something like 66 MHz is common, and 16MB RAM. But that's all you need for most applications. Remember, no GUI, no touch screen. Data processing is typically done on the ground, so you just have to get the data out the downlink... not hard to stream data fast.

    Anyway, there are rad-hard or rad-tolerant higher-performance solutions in development. Tilera (an MIT spin-off) has 64- and 49-core chips that will have a rad-hard variant for space. Xilinx is starting to ship a rad-tolerant Virtex-6. So that's great... the problem is, if you're someone buying a $1bn bird and you have basically one shot at getting the HW right, there's much less appetite for super cutting edge in the electronics. Better to use tried and proven hardware that will get the job done and squirt data out of the downlink for later use...

  • by VortexCortex (1117377) <VortexCortex@Nos ... t-retrograde.com> on Saturday February 02, 2013 @12:43AM (#42769335)

    Case in point: which is harder to code against: a command line interface, or a full-on GUI?

    Do I get to use GNU readline and ncurses? If not then I'd rather code to the GUI. Seriously, you're kidding yourself if you think terminal discovery, terminal emulation, META-DATA for signaling & control within the char stream (escapes), even dynamic resizing, and KEYBOARD SCANCODE TRANSLATIONS are a walk in the park. Seriously, write your own OS from scratch; all that UI stuff (even for a console-only OS) is every bit as complex as the GUI stuff. In fact, ncurses keeps multiple off-screen buffers ('character windows') and performs delta compression to translate screen updates into efficient runs of escapes, especially for sub-screen-size scrolling. Ugh. It's actually less complex to make a client/server for a graphical VNC. I'd much rather just write pixels directly to video memory -- Have you even LOOKED at what you have to do to create and load new textmode terminal fonts?!

    Don't get me wrong, I agree with your overall point: UI eats tons of CPU & memory. However, the difficulty of coding against either command line or GUI depends on the API in use, not the complexity underneath.
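    To make the delta-compression point concrete, here is a toy Python version of what a curses-style library does on refresh: keep the frame you last drew, diff the new frame against it cell by cell, and emit only the runs that changed. (Illustrative only; it assumes equal-sized frames, and real ncurses also handles scrolling, attributes, cursor-motion costs, and terminal capabilities.)

```python
# Toy screen delta: compare the previously drawn frame against the new
# one and yield only the changed runs, instead of repainting everything.

def delta_updates(old, new):
    """Yield (row, col, text) for each changed run between two frames.
    Assumes both frames are lists of equal-length strings."""
    for row, (o, n) in enumerate(zip(old, new)):
        col = 0
        while col < len(n):
            if o[col] == n[col]:
                col += 1
                continue
            start = col
            while col < len(n) and o[col] != n[col]:
                col += 1
            yield (row, start, n[start:col])
```

    A terminal backend would turn each (row, col, text) tuple into a cursor-move escape plus the replacement characters, which is far cheaper than resending the whole screen over a slow link.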

    There's another reason why the CPUs on Curiosity are slow: cosmic rays. Smaller & faster chips are more prone to cosmic rays flipping bits. Bigger silicon is more reliable but also slower, and it takes more juice to power, but that's OK, because the alternative is more frequent spurious code and data errors. I actually do use my own stateful malloc(), free(), etc. replacements that emulate (pseudo)random bit flipping in memory in my hobby OS's DEBUG_COSMIC_UNCERTAINTY mode -- a user-defined compile-time constant, to adjust bit flipping frequency for size vs density of the chips, wall clock time vs cycle speed (it's a cycle-bound counter), and cosmic ray frequency (we're in between two spiral arms of our galaxy; in the arms there are more cosmic rays).
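    The bit-flip injection idea can be illustrated without replacing malloc(). This hypothetical Python sketch (not the parent's actual code) flips random bits in a plain bytearray at a configurable rate, which is enough to test whether recovery logic survives simulated upsets; the flip_rate knob stands in for the chip-geometry and cosmic-ray-flux constants the parent tunes at compile time.

```python
import random

# Hypothetical single-event-upset injector: each bit of a buffer is
# flipped independently with probability flip_rate, simulating random
# radiation-induced bit flips for fault-injection testing.

def inject_seus(memory, flip_rate, rng=None):
    """Flip each bit of `memory` (a bytearray) with probability flip_rate.
    Returns the number of bits flipped."""
    rng = rng or random.Random()
    flips = 0
    for i in range(len(memory)):
        for bit in range(8):
            if rng.random() < flip_rate:
                memory[i] ^= 1 << bit
                flips += 1
    return flips
```

    Running EDAC or checkpoint/restart code against a buffer corrupted this way is a cheap ground-side stand-in for the proton-beam testing the aerospace posters above describe.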

    Look, it's not that I'm ever going to be running my OS in deep space; I'm just a cyberneticist who likes to think ahead a bit. Maybe my Machine Intelligences will thank me one day for the foresight, maybe not, but we've got it easy down here under Earth's thick protective bubble of magnetic fields. Interestingly, Curiosity uses AI to drive itself: we can tell it to go over there and let it find its own way, stopping if it runs into trouble. That software was uploaded after it landed, occupying the memory that the flight and landing programs took up.

  • by martin-boundary (547041) on Saturday February 02, 2013 @03:20AM (#42769867)
    That's still graphical/visual programming in text mode, not command line programming in the conventional sense.

    A typical command line program simply reads data from STDIN, parameter values from argv[], and writes some values to STDOUT, maybe some error messages to STDERR. Command line programs don't care if the user is a human being or a script, unlike an ncurses program, whose fancy display formatting is all about human interactivity but is often difficult to script.
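    As a minimal illustration of that contract (a hypothetical example, not from the thread): a filter that takes its one parameter from argv, reads lines from STDIN, and writes matches to STDOUT. Nothing in it cares whether the caller is a human or a shell pipeline.

```python
import sys

# Minimal STDIN/argv/STDOUT filter: keep only lines starting with the
# prefix given as the first command-line argument.

def grep_prefix(prefix, lines):
    """The whole program logic: filter lines by prefix."""
    return [line for line in lines if line.startswith(prefix)]

if __name__ == "__main__":
    prefix = sys.argv[1] if len(sys.argv) > 1 else ""
    for line in grep_prefix(prefix, sys.stdin.read().splitlines()):
        print(line)
```

    Invoked as, say, `python grep_prefix.py err < app.log`, it behaves identically whether driven interactively or from a script, which is exactly the scriptability being contrasted with ncurses-style interfaces.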

  • by AmiMoJo (196126) * <{ten.3dlrow} {ta} {ojom}> on Saturday February 02, 2013 @05:58AM (#42770253) Homepage

    The reason is that integer performance isn't worth wasting the silicon on in a mobile processor. It is already well beyond "good enough". What does count is power consumption, where ARM is still in another league from x86, and floating point performance. ARM has NEON SIMD instructions for that, and they are pretty good for audio/video processing and games. In addition, a lot of stuff is handed off to the GPU now anyway (transform and lighting, video decoding), which is always going to be far more efficient.

    There is a reason there are not many x86 mobile devices. Atom is more expensive and harder to get good battery life from. Raw performance is good, but having four low-power cores and a good GPU is better for providing a smooth user experience and mobile games.

  • by Sulphur (1548251) on Saturday February 02, 2013 @09:44AM (#42770903)

    and 12 lines of machine code that are actually running.

    I've never seen a processor whose machine code had lines.

    Segments.
