
New Way to Patch Defective Hardware 238

Posted by Zonk
from the small-size-different-angle dept.
brunascle writes "Researchers have devised a new way to patch hardware. By treating a computer chip more like software than hardware, Josep Torrellas, a computer science professor from the University of Illinois at Urbana-Champaign, believes we will be able to fix defective hardware by applying a patch, similar to the way defective software is handled. His system, dubbed Phoenix, consists of a standard semiconductor device called a field programmable gate array (FPGA). Although generally slower than their application-specific integrated circuit counterparts, FPGAs have the advantage of being able to be modified post-production. Defects found on a Phoenix-enabled chip could be resolved by downloading a patch and applying it to the hardware. Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market.""
  • what? (Score:5, Informative)

    by Anonymous Coward on Tuesday April 10, 2007 @07:29PM (#18683559)
    I'm not sure I see what this guy is doing that is novel. I can't tell if it's a stupid writeup or if this guy really thinks sending out a new bitstream to an FPGA is a breakthrough. FPGAs are remarkable pieces of hardware, and depending on how much you're willing to spend they can run up to a few hundred megahertz- though timing problems can be difficult to resolve at that kind of speed. Many ASIC designers use FPGAs in house to prototype and can afford to spend up to $25,000 for a single chip (only the largest gate counts cost that much), which reduces the number of million dollar ASIC production runs. The other reason you don't see a whole lot of FPGAs in closed source hardware is because an end user/hacker could make the hardware go out of spec or do something unintended and then expect warranty support. An increasing number of open source hardware projects (Universal Software Radio Peripheral, or USRP, for one) include FPGAs however. Anyway, bottom line is I just don't see from the article at least what this guy is doing that is so special. The article makes it sound like the chip can detect the errors itself but then requires a patch to be uploaded. It sounds to me like he's adding logic that works around certain hardware states in the fixed portions of the circuit- but that's just updating the VHDL/Verilog and creating a new bitstream. So again, I don't know if it's a dumb article or a dumb researcher. Anyone have more information?
    • Re:what? (Score:4, Informative)

      by Akaihiryuu (786040) on Tuesday April 10, 2007 @07:44PM (#18683699)
      I can't tell if the stupidity is in the article writer (most likely) or the researcher. But yeah, FPGAs are NOT new technology, they've been around for a long time. They are definitely useful in development, but an FPGA used to design a part is orders of magnitude slower than the ASIC that they produce from the design. I can see them being useful in very limited applications, but if the article writer or researcher thinks that we'll be replacing our CPUs or GPUs with FPGAs anytime soon they're pretty dumb.
      • by nurb432 (527695) on Tuesday April 10, 2007 @08:30PM (#18684089) Homepage Journal
        Predates FPGAs by decades.. Sure they have advanced things greatly, but where the hell has this guy been the last 30 or so years? Under a rock?

        Personally I was using PROMs as rudimentary programmable logic 20 years ago.
      • Re: (Score:3, Informative)

        by Anonymous Coward
        "I can't tell if the stupidity is in the article writer (most likely) or the researcher."

        Wrong. I believe that the stupidity is in the Slashdot readers. Dr. Torrellas published this in Micro-39, which means that the paper has been floating around the internet for around 4-6 months. You should assume that article writers are going to screw up the details. Go read the paper yourself. Here's a link:

        http://iacoma.cs.uiuc.edu/iacoma-papers/micro06_phoenix.pdf [uiuc.edu]

        Then, if you feel so inclined, go read other mod
    • Re: (Score:3, Insightful)

      by CuriHP (741480)
      That was basically my read as well. It sounds like there may be something interesting in the automatic error detection, but the writeup is much too vague to be useful.

      I really don't see this going anywhere in the near future simply because of cost. You've just taken a $10 ASIC and replaced it with a $600 FPGA. ASICs may cost more than FPGAs in upfront design costs, but if you're going to use more than a thousand and can wait the extra few months, it's always going to be cheaper. Big FPGAs are expensive.
    • Re:what? (Score:5, Informative)

      by AaronW (33736) on Tuesday April 10, 2007 @08:04PM (#18683887) Homepage
      You would be surprised how widespread FPGAs are. They are used in consumer devices to a limited extent. In high-end hardware where cost is not as much of an issue and the volume is lower FPGAs are very common. I know networking hardware has been using FPGAs for at least a decade, and most enterprise networking equipment I see has them. They are common in higher-end routers and other devices.

      FPGAs also have come down significantly in cost while increasing their gate counts. A number of FPGA vendors also offer services where you can go straight from an FPGA to an ASIC at a much lower cost than a full custom ASIC design. Start looking inside consumer devices... look for chips that say Xilinx, Altera, Lattice, Actel and more. Some of these companies also make regular ASICs, but many of the parts you see are FPGAs.

      FPGAs are nothing new, though it is not so common for consumer devices to be upgraded in the field as it is for higher-end devices.
      • Re: (Score:3, Insightful)

        by SnowZero (92219)
        It's all about volume. If you're only making 1-1000 of something, then an FPGA is way cheaper than an ASIC. High end devices often have low volumes (per revision), but even a low end device makes sense with an FPGA if you aren't selling that many of them. For the in-house robotics projects that are being done in my lab, they are indispensable since they can be used for replacing small logic chips and most of the glue logic; it's hard to beat an ARM chip with an FPGA next to it :)
      • I work with embedded designs for a living. Almost every single design that's ever crossed my desk has either a Xilinx or an Altera FPGA between the CPU and the PCMCIA interface. It's pretty standard.

    • Re:what? (Score:5, Informative)

      by timster (32400) on Tuesday April 10, 2007 @09:34PM (#18684507)
      I think you've seriously misunderstood TFA. The idea isn't to replace the chip with an FPGA. The idea is to include a small FPGA through which various important signals are routed.

      As shipped, the FPGA is just a pass-through, which does nothing. When you find out that a bug presents in a certain situation, you modify the FPGA to intercept the problem and handle it somehow.
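      That pass-through-until-patched scheme is easy to model in software. A minimal sketch in Python; the class name, signal values, and patch shape are all invented for illustration, not taken from the Phoenix paper:

```python
# Toy model of the pass-through idea: an interposer sits on a signal
# path. As shipped it forwards values unchanged; a "patch" later
# installs a predicate that recognizes the buggy condition plus a
# workaround for it. (Illustrative names, not the paper's design.)

class Interposer:
    def __init__(self):
        self.patches = []  # list of (detect, workaround) pairs

    def apply_patch(self, detect, workaround):
        self.patches.append((detect, workaround))

    def forward(self, signal):
        for detect, workaround in self.patches:
            if detect(signal):
                return workaround(signal)
        return signal  # default: transparent pass-through

# Unpatched chip: everything passes straight through.
chip = Interposer()
assert chip.forward(0x1F) == 0x1F

# A bug is found: signal value 0x1F triggers faulty behaviour.
# Ship a patch that rewrites it to a safe equivalent.
chip.apply_patch(lambda s: s == 0x1F, lambda s: 0x20)
assert chip.forward(0x1F) == 0x20
assert chip.forward(0x05) == 0x05  # other signals unaffected
```

      The real mechanism would operate on wires and clock cycles, not Python objects; the point is only the shape of the idea: transparent by default, intercepting only known-bad conditions once a patch is loaded.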
    • Re:what? (Score:5, Interesting)

      by jd (1658) <[moc.oohay] [ta] [kapimi]> on Tuesday April 10, 2007 @10:25PM (#18684811) Homepage Journal
      Both a dumb writeup and a dumb researcher. FPGAs are commonplace; patching them in-situ in the field is a little unusual but not particularly exceptional. They're cheaper to produce than ASICs (though much slower), but for a lot of stuff performance is far more I/O-bound than compute-bound, and so manufacturers can get away with using FPGAs. Plenty of companies never bother moving off of FPGAs at all.

      Patches? Well, you think anyone on OpenCores is going to send patches via a soldering iron? No, they're going to reprogram the FPGA, the same as everyone else does. So even Open Source hardware has this guy beaten by many, many years.

      Are FPGAs the only way to do this? Depends on what you mean. Processor-In-Memory devices pre-date FPGAs by at least a decade. PIM architectures are fun, as you get raw CPU performance without any memory access bottlenecks. Want to reprogram it? Well, it's just RAM. You can program it however you like! PIM is vastly superior to FPGA, if (and only if) you know the fundamental logic you are going to use and the fundamental logic isn't going to change. For example, you could build a PIM that had the whole of the MPI protocol built into it. Cray did exactly that. Your program on top of that will change FAR more often than the protocol itself, so, as long as you code the protocol correctly in the first place, this will not only run faster but be far easier to change. No rewriting the VHDL or Verilog, because there isn't any.

      But programming isn't the only time you'd want to patch defective hardware. Sometimes, hardware goes bad. You can't avoid it. A patch on an FPGA isn't necessarily going to fix that, because there's no way for the engineer to know what went bad and it wouldn't be cost-effective to re-engineer the code to put on it. Well, that's been thought of, too. Sir Clive Sinclair - possibly the most reviled figure in British computing - actually came up with a really neat solution. Simply make the system wafer-scale and format the compute elements as you would a disk. When something goes bad, mark the sector as bad. With massive redundancy and a near-zero failover time to a different sector, you could handle sizable chunks of the chip going up in smoke - something no FPGA patch would even remotely come close to.
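      The bad-sector analogy maps directly onto code. A hypothetical sketch of the bookkeeping (none of this reflects Sinclair's actual wafer-scale design):

```python
# Sketch of the wafer-scale idea: treat compute elements like disk
# sectors. When one fails, mark it bad and fail over to a spare,
# the way a filesystem skips blocks in its bad-block list.
# Invented structure, for illustration only.

class Wafer:
    def __init__(self, n_elements):
        self.n = n_elements
        self.bad = set()  # indices of elements that have failed

    def mark_bad(self, idx):
        self.bad.add(idx)

    def allocate(self):
        # Return the first known-good element.
        for i in range(self.n):
            if i not in self.bad:
                return i
        raise RuntimeError("no good elements left")

wafer = Wafer(8)
assert wafer.allocate() == 0
wafer.mark_bad(0)  # element 0 goes up in smoke
wafer.mark_bad(1)
assert wafer.allocate() == 2  # near-zero failover to the next good one
```

      With enough redundancy on the wafer, losing whole regions costs you capacity, not correctness, which is the property no FPGA patch can give you.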

      Ok, what if you want something that looks and feels like an FPGA - then is this your only answer? No. SOGs (Seas of Gates) have been around for a while.

      Finally, CPUs have long supported the notion of microcode - I believe one such system was hacked to run Pascal as the opcode not long after the language was first developed. Yes, that was some time ago. Hell, the Crusoe (if Transmeta had ever published how) could be programmed to look like whatever you felt like making it look like. Talk about patchability!

      The sheer number of solutions people have come up with to this problem probably outnumbers the gates on the FPGA the researcher was using. I can see nothing credible or interesting in this, and certainly nothing new.

      Ultimately, of course, this has nothing to do with when someone invented whatever method. It has to do with when someone actually makes it ubiquitous. Alexander Graham Bell wasn't close to being the inventor of the telephone, but he marketed it like no-one else. That's what people react to and remember. Will this researcher turn what is frankly a pedestrian piece of work into a major slice of the market? I doubt it, but they might. If they do, then all the prior examples in the world will convince no-one. If they don't, then it's one more piece of research that's destined for the rubbish heap.

      • I think that there's some potential in the idea of placing a thousand 80386 cores [geek.com] in parallel on the same chip.
        • Re: (Score:3, Interesting)

          by jd (1658)
          Been done, mind you that was with a thousand 6502s on the same silicon. Actually got decent performance, well according to BBC's Micro Live. (How much would you trust a geek show whose PRESTEL session got hacked live on air?)

          Seriously, a massive set of relatively low-power cores on a very tight connection - provided there was a bloody good way of scheduling stuff - would likely work extremely well in both the high-performance world and the high-reliability world. Who gives a damn if one core fails every m

      • Re:what? (Score:4, Insightful)

        by Lorkki (863577) on Wednesday April 11, 2007 @03:00AM (#18685995)

        Patches?

        What bothers me personally is that "it's easier to upgrade" is one of the excuses used when producing those slimmed-down Windows devices. You can guess twice if it's ever improved the quality of the products, or if even half of the bugs they ship with ever get any attention from the vendors.

        So yeah, please give them one more reason to ship too early, more often and cheap enough to sell by the bucketload.

        • Re: (Score:3, Informative)

          by cowbutt (21077)
          Yup, you see this all the time with devices that have the capability to have their firmware updated in the field; often they don't work properly until one or two firmware releases have been applied.

          In fact, given how field-updateable firmware is often implemented, this also adds a new failure condition - trashed firmware. I lost a DVD ROM drive when a rogue piece of software accidentally knocked out 1 bit in 16 of the firmware (judging by the new name it had for itself during the BIOS POST, anyway!). If d

        • Re: (Score:3, Informative)

          by jd (1658)
          Can't say I disagree with you. If enough effort went in to make the whole thing easily patched in the field, then there's an excellent chance insufficient effort went in to making the thing right to begin with. I hope no one read my post as excusing those who produce inferior goods because they can fix them later - yeesh. I'm disliked intensely by some where I work now precisely because I don't take any crap from those who prefer the I'm-lazy-fix-it-next-year route.
  • by seanadams.com (463190) * on Tuesday April 10, 2007 @07:29PM (#18683561) Homepage
    Don't bother reading TFA, there is no more information there than what's in the summary. Just some additional hand waving about how this enabling technology will magically detect and fix hardware bugs.

    I'm sure the professor has developed _something_, but the article sure doesn't give any clue what it might be. This story is nothing more than an exceptionally poor description of what any FPGA can do.
    • by alx5000 (896642) <alx5000@alx5[ ].net ['000' in gap]> on Tuesday April 10, 2007 @07:48PM (#18683755) Homepage
      Imagine for a moment that this guy has invented something new. Imagine, as the last line of the summary suggests, that "If they know that they could fix the problems later on, they could beat the competition to market."

      Sounds like the hardware version of Windows. Every user would be a beta tester. Your phone calls your friends in the middle of the night and makes strange noises? It's ok, we'll fix it soon. Meanwhile remember we were the first to offer scheduled calls for cell phones!
    • by mbessey (304651) on Tuesday April 10, 2007 @07:54PM (#18683819) Homepage Journal
      The article IS light on details, but the last paragraph does explain how the system would work. Basically, manufacturers of mass-market chips would provide a small amount of FPGA-like programmable logic in every chip they make. This programmable logic would sit idle until some defect was discovered in the chip.

      At that point, you can send a "patch" to the chip that uses the programmable logic to detect the error condition (or conditions that trigger the error), and work around the problem.

      It's fairly clever, and is similar in spirit to the microcode patches that various x86 CPU manufacturers use to correct for errors in their chips after they're taped out. It would be interesting to read about what the actual design is. It seems like coming up with a generic logic patching mechanism that can deal with previously-unknown errors would be a pretty interesting task.
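      A toy model of such a patch: the spare logic watches a few internal state bits and raises a flag when they match a signature known to trigger the bug. The masks and bit values below are invented for illustration:

```python
# Rough sketch of a "signature" patch: the otherwise-idle programmable
# logic compares selected state bits against a known-bad combination
# and tells the chip to intervene (stall, replay, reroute) on a match.
# The mask and value are made up for this example.

BUGGY_MASK  = 0b1100  # which state bits the signature examines
BUGGY_VALUE = 0b1000  # the combination that provokes the erratum

def triggers_erratum(state_bits):
    """True when the patched-in logic should intervene."""
    return (state_bits & BUGGY_MASK) == BUGGY_VALUE

assert triggers_erratum(0b1011)      # matches the bad signature
assert not triggers_erratum(0b0111)  # safe state; the logic stays idle
```

      Until a defect is found, no patch is loaded and the comparator never fires, which matches the article's claim that the programmable logic sits idle as shipped.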
      • by dhasenan (758719)
        And it would be difficult and expensive enough that the manufacturers would still subject their products to thorough testing. And probably expensive enough that it'll only be used in high-confidence operations, such as NASA hardware.
        • by whoever57 (658626)

          And it would be difficult and expensive enough that the manufacturers would still subject their products to thorough testing. And probably expensive enough that it'll only be used in high-confidence operations, such as NASA hardware.

          You are confusing design faults with manufacturing faults. The proposed idea affects only design faults and, if I read it correctly, has some kind of signature recognition logic that recognises the situation in which the chip will produce a faulty result and then must somehow

        • And it would be difficult and expensive enough that the manufacturers would still subject their products to thorough testing. And probably expensive enough that it'll only be used in high-confidence operations, such as NASA hardware.

          That was my thought, too. I could definitely see this being used in satellites or other applications where getting the hardware back is impossible or very expensive, but I can't see it being used on a microwave or DVD player.

          It's my understanding that a lot of NASA's stuff has
          • by 2short (466733)

            I have a friend who used to do chip design at NASA.

            Based on my half remembered conversations from 10 years ago, FPGAs are great for prototyping, but not for flight systems, because they are power hogs.

            When you measure your power consumption in surface area of solar panel and weight of battery that need to be put on orbit...
  • by feepness (543479) on Tuesday April 10, 2007 @07:31PM (#18683567) Homepage
    From the wiki link:

    The historical roots of FPGAs are in complex programmable logic devices (CPLDs) of the early to mid 1980s. Ross Freeman, Xilinx co-founder, invented the field programmable gate array in 1984.

    Umm, ok. Did you mean old way to patch defective hardware?
  • WTF? (Score:5, Insightful)

    by PhxBlue (562201) on Tuesday April 10, 2007 @07:31PM (#18683569) Homepage Journal
    So we'd get to have these chips in PCs sooner, and in return, they'd be less reliable? No thanks. One Pentium floating-point problem was bad enough.
    • That's a bit like saying that software is crap because we can update it and to get good software we should ban software patching.

      Actually, FPGA patches are sometimes done to fix bugs, but more often they're done to change functionality. E.g. a new firmware download uses different DSP algorithms or whatever and thus needs different FPGA algorithms to work properly. Thus both get updated.

    • Re: (Score:3, Insightful)

      that's not the only problem... imagine what the virus writers will do with this one!
      • by ThosLives (686517)

        This is exactly the reason why I will never, ever, ever want any hardware that is more "soft" than my own flesh and blood. That has enough of its own susceptibility to viruses, bacteria, getting smashed, getting clobbered by radiation, etc. for me!

      • Not much. Virus writers have not been as nasty as they used to be in terms of payloads. No-one flashes BIOS anymore.
    • Re: (Score:2, Funny)

      by noidentity (188756)
      Yeah really, we all know how much more reliable software is compared to currently hard-to-patch hardware. I just can't wait until we have patchable atoms. "Sorry, we've just found that the new-fangled carbon atoms making up all 2032 cars will self-destruct in one week. Please install this new patch, which will take a day to complete transmutation."
    • One Pentium floating-point problem was bad enough.

      You just don't get it, do you? See now you can buy your new fangled octocore 10.3GHz 128bit processor with Ultra-Uber-Hyper-Mega Transport(tm) onboard, patch it and get the performance of a Pentium without the bug!!!!

      Brilliant!!!

  • by JanneM (7445) on Tuesday April 10, 2007 @07:33PM (#18683591) Homepage
    So, from a customer viewpoint, what this offers is slower, more expensive hardware that is less tested and buggier than the competitors coming down the pipeline in a month or two?

    I suspect I can do without.
    • Re: (Score:3, Informative)

      by horatio (127595)
      I agree. I'm already (as I suspect most of /. is as well) almost constantly dealing with hardware and software that isn't production ready but "beats the competition to market". The Nvidia 680i boards ship with software that conflicts with itself, causing BSODs in XP - as confirmed by my emails with eVGA. It is one thing to ship patches for things discovered after shipping -- but I think most large corps today figure it in as a calculated risk. Don't even get me started on the steaming pile that is Vist
    • Re: (Score:2, Funny)

      by 26199 (577806) *

      Well, it's a marketing strategy that's worked well for Microsoft.

  • by Archeopteryx (4648) <benburchNO@SPAMpobox.com> on Tuesday April 10, 2007 @07:34PM (#18683603) Homepage
    It was called a ROM Patch.

    And isn't this the WHOLE reason for Altera and Xilinx???
    • And before that, system software designers were commonly working around late-discovered hardware bugs (since the boards were commonly produced before the software was final).
    • by Dahamma (304068)
      And isn't this the WHOLE reason for Altera and Xilinx???

      Yes, it is. I used to work at Altera way back, and still have stock. Make more hardware out of expensive FPGAs with their huge profit margins? I'm all for it!
  • by Kryptonian Jor-El (970056) on Tuesday April 10, 2007 @07:34PM (#18683607)
    "If they know that they could fix the problems later on, they could beat the competition to market."

    That sounds like vista to me...except for the fixing problems later on part...and the beating competition to market...
    What was my point again?
    • Re: (Score:3, Funny)

      What competition is there to Vista?

      Linux doesn't even come close in consuming memory and adding vulnerabilities, but it is catching up! :)
  • In the university lecture I was in this year on FPGAs, the big selling point was the fact you could do exactly this and how it's used in industry. I'm not seeing any 'news'
    • by HTH NE1 (675604)

      In the university lecture I was in this year on FPGAs, the big selling point was the fact you could do exactly this and how it's used in industry. I'm not seeing any 'news'
      Everything old is new(s) again.
  • by GoLGY (9296) on Tuesday April 10, 2007 @07:35PM (#18683611)

    "If they know that they could fix the problems later on, they could beat the competition to market.""
    Great, just what we need - hardware suppliers being encouraged to release buggy versions in the guise of fully working products.

    Haven't the lessons learnt by the software industry had *any* impact?
    • by evought (709897) <evought@pob[ ]com ['ox.' in gap]> on Tuesday April 10, 2007 @08:35PM (#18684135) Homepage Journal

      "If they know that they could fix the problems later on, they could beat the competition to market.""
      Great, just what we need - hardware suppliers being encouraged to release buggy versions in the guise of fully working products. Haven't the lessons learnt by the software industry had *any* impact?

      Sure, and those lessons are being fastidiously applied here. Customers buy that buggy, undercooked software and wait for the patches. Problem is, in increasing numbers of cases, the vendors are learning that they don't even have to ship patches (e.g. game industry, commodity hardware drivers and apps), or only for a very short lifetime.

      Fast-followers usually have much better products than first-to-market vendors, and it used to be that they had better success as well. I am not sure that is always the case these days. Look also at the release of Vista and the fact that a new XP system simply cannot be purchased, locking customers into being beta testers (or getting off the platform entirely).

      In some sense, this has already extended to hardware as more and more depends on firmware and flashable updates. A good portion of the drivers for some hardware consists of software to offload to firmware, one of the things that makes open-source drivers a pain.

      • You can buy XP-based computers from the small business arm of some computer sellers.
      • Problem is, in increasing numbers of cases, the vendors are learning that they don't have to even ship patches (e.g. game industry, commodity hardware drivers and apps) or only for a very short lifetimes.


        Why didn't you just say "Creative Labs" and be done with it? If you're lucky, they might be nice enough to release an updated Sound Blaster driver every OTHER year! It sucks, because they otherwise have great hardware :(
    • by QuasiEvil (74356) on Tuesday April 10, 2007 @09:35PM (#18684517)
      Umm... most complex hardware *is* buggy. That's why datasheets often have errata issued with them, listing the different revs of silicon and what doesn't work...

      For example, here's the summary one for the Athlon 64 family (warning - pdf link):
      http://www.amd.com/us-en/assets/content_type/white _papers_and_tech_docs/25759.pdf [amd.com]

      It's also why modern BIOSes and OSes apply microcode updates to the processor - to fix "hardware" while it's in the machine. They literally rewrite the microcode that runs the processor on boot to correct certain types of issues.
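      The microcode-update pattern looks much like errata handling in drivers: read the part's revision, look it up in a table, and enable workarounds only for the affected revs. A hypothetical sketch (the revision names and errata IDs are made up):

```python
# Errata handling as drivers/BIOSes commonly do it: map each silicon
# revision to the workarounds it needs, then enable only those.
# Table contents are invented for illustration.

ERRATA = {
    "rev_b": ["e101_spurious_irq", "e107_dma_stall"],
    "rev_c": ["e107_dma_stall"],
    "rev_d": [],  # fixed in this stepping
}

def workarounds_for(rev):
    # Unknown revision: assume no known errata (a real driver might
    # instead refuse to run, or enable everything to be safe).
    return ERRATA.get(rev, [])

assert "e101_spurious_irq" in workarounds_for("rev_b")
assert workarounds_for("rev_d") == []
```

      The rev B story below is exactly the failure mode this pattern exists to catch: the code was fine on every stepping except the one whose erratum it tripped.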

      I spent days in a previous job chasing around problems with one particular batch of small microcontrollers. I eventually noticed that all of the misbehaving ones were rev B silicon, which led me down the path to the errata sheet. Turns out, our code, which worked perfectly on every other rev, had fallen into one of the rare pitfalls of that revision.

      FPGAs are a horrid idea for mass production. They're usually either slower or utter power hogs. If it's a low-production device, or something that needs regular field updates, then great, but for mass-produced bits, it just won't work out well. I just can't see putting an "FPGA area" into regular ASICs due to the massive amounts of stuff you'd need to wire around in order to divert lines away from the usual areas of silicon over to the FPGA area. Plus there's all that wasted silicon if the FPGA area was never used, which would decrease yields and raise costs.

    • Great, just what we need - hardware suppliers being encouraged to release buggy versions in the guise of fully working products.

      Let's face it. Since the invention of the winmodem, hardware quality has been getting worse and worse. What you say is happening right now. Maybe not in CPUs, but lots of types of hardware already follow that idea. At least with Phoenix I'd have a chance of getting that junk patched.

      FYI, I have a Tyan Tiger, so I'm a victim of a .0 release of AMD's new 760MPX chipset, complete withou

  • Limited useability (Score:5, Informative)

    by Dynedain (141758) <slashdot2NO@SPAManthonymclin.com> on Tuesday April 10, 2007 @07:35PM (#18683621) Homepage
    Great... so I assemble a new system with "patchable" hardware... only to find that the hardware is defective.

    Now I'm left in a situation where I need software to patch the hardware. But I can't run the software because the hardware is defective...

    This is just an excuse for being lazy. Do we really need more untested products flooding the market? Nothing like shifting the burden of quality control onto the end user to push up your profits...

    On the other hand, this could be very useful in systems where physical access to the hardware is nigh impossible... satellites, for example. But this should not be used in consumer devices, and shouldn't be a crutch for faster development.
  • Oh great... (Score:5, Insightful)

    by Brandybuck (704397) on Tuesday April 10, 2007 @07:37PM (#18683629) Homepage Journal
    "If they know that they could fix the problems later on, they could beat the competition to market."

    Oh great, now we'll have hardware as crappy as software. I guess we'll have to get used to the new QA mantra: "If it solders, ship it!" Sigh.
  • by R.Mo_Robert (737913) on Tuesday April 10, 2007 @07:39PM (#18683651)

    Hmm, he might want to work on changing the name from Phoenix. Good thing the summary says it's only "dubbed Phoenix," not that it's the final name.

    What's that you say? No, "Firebird" won't work, either...

  • by Sciros (986030) on Tuesday April 10, 2007 @07:39PM (#18683657) Journal
    Supposing a defect in this post-release-modifiable hardware makes it impossible to connect to the internet? Good luck downloading a fix! :-P

    This could make hardware manufacturers cut QA costs at our expense. Yay!
  • by user24 (854467) on Tuesday April 10, 2007 @07:40PM (#18683667) Homepage
    If I'm missing something, then I'm sure a lot of other people are too, so please explain:

    exactly what is stopping malware2.0 from killing my processor?
    • by dissy (172727)
      If I'm missing something, then I'm sure a lot of other people are too, so please explain:
      exactly what is stopping malware2.0 from killing my processor?


      A hardware switch or button that needs to be set in program mode.
    • by nuzak (959558)
      > exactly what is stopping malware2.0 from killing my processor?

      Most PCs don't have recoverable BIOS backups, so most PCs can be all-but-bricked by malware that corrupts the BIOS. For most people who aren't into pulling chips, that's completely bricked.

      It's an unsuccessful virus that instantly kills its host. Malware these days goes to quite some lengths to avoid notice so they can actually execute their intended purpose.

      • by user24 (854467)
        "so they can actually execute their intended purpose."
        what, like, killing their hosts on April the first, or something?
    • by Frank T. Lofaro Jr. (142215) on Tuesday April 10, 2007 @07:53PM (#18683809) Homepage
      Nothing.

      And this is the reality NOW.

      Erasing the BIOS, stopping fans, overclocking and overvolting chips can be done TODAY.

      Also, changing the region of a DVD drive until it locks out changes and leaving it on an unwanted region is also doable; another "advantage" of this attack is that it is a felony to repair the hardware thanks to the DMCA giving DRM the force of law.

      Killer POKEs didn't die with the Commodore PET and C64, they just aren't literal POKEs anymore.
    • by freeze128 (544774)

      exactly what is stopping malware2.0 from killing my processor?
      DRM 2.0
  • bleh (Score:4, Interesting)

    by dave1g (680091) on Tuesday April 10, 2007 @07:42PM (#18683681) Journal
    I was hoping for some idea like slapping an X-gate FPGA onto the package of a regular processor, and then if in later testing it is deemed to have a bad cache line or floating point unit, it could be reimplemented in the FPGA section and wired in, possibly increasing yields. Though these would certainly be lower quality parts, they would at least be functionally correct, if a bit slower.

    But I don't know. Something tells me that if there is a hardware problem (not a hardware design problem) then it is likely that there will be others on the same chip, due to some non-uniform distribution of impure silicon, and it wouldn't be long before there are too many corrections to fit in the FPGA.
    • But I don't know. Something tells me that if there is a hardware problem (not a hardware design problem) then it is likely that there will be others on the same chip, due to some non-uniform distribution of impure silicon, and it wouldn't be long before there are too many corrections to fit in the FPGA.

      I think this would only be used to fix a design problem. Defects in silicon are spread randomly across the wafer, meaning that the fault(s) in each faulty chip are not the same. It would not be worth the effort to track down where the defect was and create new logic to avoid it just to fix one chip on the wafer.

    • I think processor companies already do something similar w/o an FPGA.

      The difference between the Pentiums and the Celeron (or whatever they're called now) used to be mainly cache size -- this might have been motivated by yield considerations (in terms of the cache, since that is a large portion of the chip area). I remember reading something along the lines that they might have a few extra cache lines that can be used to replace a bad one (at the time of manufacture), by blowing a tiny fuse, etc. And I
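The spare-cache-line repair described here can be sketched in software. The class, sizes, and fuse mechanism are illustrative assumptions, not any manufacturer's actual scheme:

```python
# Sketch of fuse-based spare-line remapping: a small map (the "fuses")
# permanently redirects accesses to bad lines toward spare lines.
class RepairableCache:
    def __init__(self, n_lines=8, n_spares=2):
        self.lines = [None] * (n_lines + n_spares)
        self.fuse_map = {}           # bad line index -> spare line index
        self.next_spare = n_lines    # spares live past the normal lines

    def blow_fuse(self, bad_index):
        """At manufacture time, permanently route a bad line to a spare."""
        if self.next_spare >= len(self.lines):
            raise RuntimeError("no spares left; chip is scrap")
        self.fuse_map[bad_index] = self.next_spare
        self.next_spare += 1

    def _resolve(self, index):
        return self.fuse_map.get(index, index)

    def write(self, index, value):
        self.lines[self._resolve(index)] = value

    def read(self, index):
        return self.lines[self._resolve(index)]

cache = RepairableCache()
cache.blow_fuse(3)        # line 3 failed test; route it to a spare
cache.write(3, "data")
print(cache.read(3))      # "data", transparently served from the spare
```

The point of the real mechanism is the same as here: software (and the rest of the chip) never sees the remapping, so a die with a few bad lines ships instead of being scrapped.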
  • by Abuzar (732558)
    Look, I'm not even a geek. I'm just some luser, and even I in my eternal stooooopidity could tell you:

    a) Duh, duh, duh. This ain't no news, FPGAs have been around for quite a while, and being able to soft-repair hardware is an old idea that is being used where practical. Sheesh, slashdot is really headed for slushdot these days.

    b) Great, so now we can get more defective products quicker, be on hold with tech support longer, and spend our own time/money fixing the products under warranty that we've alre
  • by megla (859600) on Tuesday April 10, 2007 @07:56PM (#18683839)
    I can't believe I'm even reading this.
    The entire selling point of this system is that it allows hardware developers to do sloppy work? Great! The build-and-fix approach has worked wonders for software, what with constant security alerts and all, so why not use it for hardware? Inspired!

    Have they put any thought into this at all?
    That other people might make malicious "patches"?
    That they'd be opening up hardware to all the vulnerabilities that software has?

    Jesus christ people, use some common sense.
    • by Shados (741919)
      Indeed. Hell, we see it all the time: being able to PATCH a piece of software just means that it will be buggier on release, and never really get better than the unpatchable version... That's what's semi-killed off PC gaming, since console games have always been released with fewer bugs than PC games have after the first 5 patches... though with consoles having hard drives and such, that might be history in a few years...
  • Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market.""

    So the P.O.S. unfinished game I just bought to run on the P.O.S. unfinished operating system I just bought is now expected to run on the P.O.S. unfinished PC I just bought...

    If you've half a mind to go into marketing, that's all you need...

  • This is bad, it means production design testing will be foisted onto the user.
    We already have to worry that software products don't work and have to apply endless patches to them; now hardware too?

    What about malicious attacks?

    No, I was too busy being alarmed to read the article.
  • by Chris Snook (872473) on Tuesday April 10, 2007 @08:28PM (#18684075)
    ...except slower and more expensive.
  • In other news:
    A professor in optical systems discovers that a light bulb screwed into a socket starts to emit photons.
  • by daitengu (172781) * on Tuesday April 10, 2007 @08:33PM (#18684117) Homepage Journal
    Half of Google is in "Beta", 90% of the video games I buy are beta-quality, and more and more software nowadays is labeled as "beta release 3.1415". I don't need to beta-test a processor or GPU as well! While it would be nice to be able to _add_ things to your CPU, like support for SSE42, I think something like this in a CPU would cause more harm than good.

    It'd also make debugging software that much harder, as you won't be sure where the problem lies: with the CPU or with the software itself.
  • by JustNiz (692889)
    This sucks, as companies will presume it's more OK to release buggy hardware now.
  • Oh wow, really? (Score:3, Interesting)

    by Junta (36770) on Tuesday April 10, 2007 @08:52PM (#18684249)
    I'm surprised anyone would think this is news. Also, as it stands today, some companies have *entirely* too much faith in FPGAs to get them through. We had two companies come give us product to try, both implementing the exact same technology, one with an FPGA design. They talked about how wonderful FPGAs are (as if they were new to them), that they were perfect for large-scale deployment, and that they could fix *anything* in firmware. During our evaluation, despite their claims of how well it performed, we had to contort the tests all over the place just to reach 70% of the 'theoretical' performance, with *huge* latency penalties on any given operation no matter how we sliced it. All this came with the bonus of an abnormally large TDP for the part.

    The other solution was a traditional ASIC. Under 1/4th the TDP of the competitor, a 50-fold decrease in latency per-operation, and on the first default run, got 90% of the theoretical performance, 96% after tuning. All this at a lower cost-per-part in production by about 200 dollars.

    We were skeptical of both vendors for different reasons; neither vendor was allowed to give us extra hand-holding until the first vendor was so embarrassingly bad that we let them go hands-on with us, because we were *certain* we had to be missing something if it was that embarrassing. Even after giving them that unprecedented advantage to offset the initial results, they couldn't come close to touching the other offering.

    I know, a better company could probably have done better, but the cost delta of FPGA versus ASIC was not their fault, and the TDP of their parts, while likely worse than it could have been, probably would have been higher regardless. As another poster pointed out, it's more difficult in general to clock up FPGAs than ASICs, and so performance will suffer.

    FPGAs have their place, and a huge benefit is prototyping. I've seen a number of companies do a proof-of-concept with an FPGA and go forward with a demo, but when the time comes for mass production, it's most often implemented as an ASIC. After decades of dealing with hardware bugs, the industry at large has gotten very good at glossing over the rough spots in firmware. Sure, some hardware bugs can never be addressed in such a way, and as a consequence your testing has to be better up front and will inevitably slow down a release process due to a fear of post-release returns, but that is a *very* healthy fear to have, and it ensures the quality will be better at release time than your FPGA-reliant competitor's. First to market is generally an advantage, but it is also a *huge* opportunity for embarrassment, sending your early adopters begging for your competitor's competent ASIC implementation, with a low bar to beat as well.
  • in 1980!
  • by SeaFox (739806)

    Defects found on a Phoenix-enabled chip could be resolved by downloading a patch and applying it to the hardware. Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market."

    Oh, boy! Defective by (lack of) design... sooner!
    If there's anything wrong with hardware and software development, it's that there isn't enough quality testing done prior to shipping. How does this do anything but encourage p

  • You think it's tough getting tech support now? Wait until field-patchable hardware hits the market.

    Can't read the screen? First you call the O/S manufacturer, then you call the video card guys, then you call the RISC chips guys, and so on.

    That'll be loads of fun.
  • So we can get things to market faster knowing we can fix the bugs in the chips later -- after the user fails to get the promised benefit.

    UGH. I'm so sick of this attitude that it is beyond description. ASUS makes some great gear, for example, and the worst software for that gear I've ever seen. Netgear is the same way. The crap both companies turn out is low quality, and it's clear that their focus is so hardware-centric that they see the software as a necessary evil that the users need in order to use the
  • http://www.freepatentsonline.com/4441170.html [freepatentsonline.com]

    This patent seems to cover redundant circuits, one-time fuses to patch them in, and a disable fuse to prevent end-user changes. The repairs can be made after packaging. This is not to change code, but to increase manufacturing yield. A few bad bits can be replaced by patching in redundant spare memory instead of trashing the die.
  • by kabdib (81955)
    "We know how to fix software really easily."

    Hahahahahahahaha. Anyone who thinks that it's easy to patch several tens of millions of machines in customer homes or offices is an idiot.
  • They'd release rushed, shitty hardware that wouldn't work properly until you connect to the internet to patch it, and which would be susceptible to viruses. Viruses that could reconfigure your processor to short the +5V to ground. How fun.
  • Is this guy for real (Score:3, Interesting)

    by RobinH (124750) on Tuesday April 10, 2007 @11:09PM (#18685041) Homepage
    First of all, this guy is way behind the times. Secondly, FPGAs are significantly more expensive than ASICs. Third, their performance is slower. What he's suggesting is akin to building all new homes out of lego in case we change our minds about the design after it's delivered. Nobody would go for it because we live in the Walmart society now, where price is the only motivating factor in purchases. Nobody wants to have to download a patch to their brand new widget either, just because the vendor couldn't take the time to debug it before shipping (not that we don't have the same situation with firmware revisions even today).
    • Re: (Score:3, Informative)

      Secondly, FPGAs are significantly more expensive than ASICs

      Assuming you are making many thousands (if not millions) of them, and you ignore development costs. Gate-level design and simulation/evaluation of a complex ASIC is a labor-consuming task. Getting to a releasable mask set is a long road.

      Where I work they develop ASICs for spaceflight so they only build a relative handful, and it doesn't amortize out as well. There's a LOT of interest in space qualified FPGAs as a result. On-orbit reprogramming is

  • This reminds me of something I read about a few years back: "wafer scale integration", an idea for making much cheaper RAM at a performance penalty.

    The idea was that you would design a wafer with a bunch of RAM units on it, and a control path with a bunch of fusible links. You would test the wafer and find out which RAM units were good and which were defective, and then you would blow the fuses to take out the defective ones and connect in the good ones. In theory, with decent yields, you would get
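The yield math behind this scheme is easy to sketch. The per-unit yield and unit counts below are invented for illustration, and defects are assumed independent:

```python
from math import comb

def p_usable(n_units, n_needed, unit_yield):
    """Probability that at least n_needed of n_units test good, so the
    fusible links can wire up a working device (independent defects)."""
    return sum(
        comb(n_units, k) * unit_yield**k * (1 - unit_yield)**(n_units - k)
        for k in range(n_needed, n_units + 1)
    )

# With 80% per-unit yield, needing only 48 of 64 RAM units makes most
# wafers usable, while demanding all 64 good makes almost none usable.
print(p_usable(64, 48, 0.8) > 0.8)   # True
print(p_usable(64, 64, 0.8) < 1e-5)  # True
```

That gap between "all units must work" and "enough units must work" is the whole appeal of fuse-selected redundancy.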
  • by buss_error (142273) on Wednesday April 11, 2007 @12:10AM (#18685313) Homepage Journal
    FPGAs have the advantage of being able to be modified post-production. Defects found on a Phoenix-enabled chip could be resolved by downloading a patch and applying it to the hardware.

    We already have Microsoft paying hardware device OEMs to use PICs and not release the driver code to any but closed-source OSes, so this just makes it easier to take the MS dollar and shut out OSS. A few may resist, but the surest way to kill OSS is to make it impossible to have drivers that work. In a perfect world, this wouldn't matter. We live, however, in a world far from perfection.

    It worries me, if for no other reason than it's likely to be abused. How long until we see a motherboard that requires the use of closed source drivers to enable the keyboard port, or be able to use block or memory devices?

  • The movement to FPGA-style CPUs means...

    1. Designs can be released before thorough testing is done, meaning that potentially life-threatening (or at least data-threatening) errors can be introduced, requiring months of waiting for a fix.

    2. Now that your CPU can be reprogrammed on the fly, it opens up a whole new world of viruses and worms.

    I envision conversations with your auto dealer now. "Yes sir, we understand that the car's brake system locks up randomly, but the CPU manufacturer has assure
