Hardware Technology

Stretch Announces Chip That Rewires Itself On The Fly

tigre writes "CNET News reports on a chip startup called Stretch which produces the S5000, a RISC processor with electronically programmable hardware that lets it add to its instruction set as it deems necessary. Thus it can reconfigure itself to behave like a DSP or a (digital) ASIC, and perform the equivalent of hundreds of instructions in one cycle. Great way to bridge the gap between general-purpose computing and ASICs."
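
To give a flavor of what this looks like from the programmer's side, here is a minimal C sketch. The SAD16() intrinsic and the HAVE_SAD16_INTRINSIC guard are invented for illustration; the general pattern of exposing a fabric-mapped operation as a C intrinsic is how extensible processors are typically programmed.

    #include <stdint.h>
    #include <stdlib.h>

    /* Plain C: a 16-wide sum of absolute differences, a common video
     * kernel. On a stock RISC core this loop costs dozens of
     * instructions per call. */
    uint32_t sad16_sw(const uint8_t *a, const uint8_t *b)
    {
        uint32_t sum = 0;
        for (int i = 0; i < 16; i++)
            sum += (uint32_t)abs(a[i] - b[i]);
        return sum;
    }

    #ifdef HAVE_SAD16_INTRINSIC
    /* On an extensible processor, the same kernel could be configured
     * into the programmable fabric and issued as one instruction.
     * SAD16() is a made-up name standing in for a toolchain-generated
     * intrinsic. */
    uint32_t sad16_hw(const uint8_t *a, const uint8_t *b)
    {
        return SAD16(a, b);
    }
    #endif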
  • by KDN ( 3283 ) on Monday April 26, 2004 @03:09PM (#8975111)
    Can you imagine the virus you could write if you could change the instruction set of the CPU?
  • by LostCluster ( 625375 ) * on Monday April 26, 2004 @03:10PM (#8975114)
    If this doesn't represent the death of the megahertz as a processor-benchmark standard, I don't know what will...

    Effective application speed was never based on cycle count alone, because different processors can have better instruction sets for a given application. The real breakthrough here is that this chip leaves "user-definable" space in its instruction set, so the instruction set can be re-optimized on the fly. Whatever you're running, its most commonly used functions can effectively slide from being code to being "on the chip," and that's sure to improve the speed you actually experience.

    Yeah, I know it's a /. cliche, but... imagine a cluster of these!
  • by hatrisc ( 555862 ) on Monday April 26, 2004 @03:11PM (#8975124) Homepage
    we can have only one standard assembly language? the hell with java if that's the case.
  • yawn ... (Score:4, Insightful)

    by torpor ( 458 ) <ibisum AT gmail DOT com> on Monday April 26, 2004 @03:12PM (#8975134) Homepage Journal
    ... wake me up when i can buy a thousand of them for $10 a piece ...

    [okay, okay, so it'll be -hell- fun to design codecs and other protocols that can switch their chipset dynamically, yeah, but i'd need 1000's of them deployed to have a real reason to do it...]
  • by LostCluster ( 625375 ) * on Monday April 26, 2004 @03:12PM (#8975144)
    I think we're going to have to move the crypto benchmarks back a step when this tech comes out. Not very many of us have RISC chips that are optimized for MD5 or any of the other popular crypto formulas, but if the typical consumer PC had this technology, we could all effectively have an on-demand RISC for whatever we need at the moment sitting in our PCs.

    In short, the time-to-crack almost any form of crypto using consumer technology is about to drop. It won't "break" anything, but brute-force combinations will be searchable in less time, meaning higher standards will be needed for the same level of protection you have today.

    Not surprisingly, these breakthroughs will always keep coming...
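
    Rough numbers, with every figure invented for illustration - a k-fold speedup from custom instructions only shaves log2(k) bits off the effective key length:

        #include <math.h>
        #include <stdio.h>

        /* Back-of-the-envelope: what a hardware speedup buys a
         * brute-force attacker. All figures below are made up. */
        int main(void)
        {
            double rate    = 1e9;   /* keys/sec on a stock CPU (assumed) */
            double speedup = 100.0; /* hypothetical custom-instruction gain */
            double bits    = 64.0;  /* effective key length under attack */

            double secs = pow(2.0, bits - 1.0) / (rate * speedup);
            printf("expected crack time: %.1f years\n",
                   secs / (365.25 * 86400.0));
            printf("equivalent key-length loss: %.1f bits\n", log2(speedup));
            return 0;
        }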
  • by dhasenan ( 758719 ) on Monday April 26, 2004 @03:14PM (#8975163)
    How can something that normally takes "hundreds of thousands of instructions" be handled in a single instruction? Surely all the same mathematical operations must take place, except for some optimization. Or is it a matter of a certain structure for computation being created in a more permanent fashion rather than being dynamically formed upon demand? Then the operations could be performed in a single cycle. On the other hand, that portion of the processor would become useless to other tasks. Or am I misunderstanding this entirely?
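
    To make that concrete, here's the kind of loop at issue (a generic 32-tap FIR filter, nothing Stretch-specific). A sequential CPU issues the multiply-accumulates one after another; a configured fabric can lay out all 32 multipliers and an adder tree side by side, so the whole loop finishes as one operation per cycle - and, as guessed above, that patch of silicon is dedicated to the task until it's reconfigured:

        #include <stdint.h>

        #define TAPS 32

        /* A 32-tap FIR inner product. A sequential CPU issues ~100+
         * instructions per output sample; a configured fabric can
         * instantiate all 32 multipliers plus an adder tree and produce
         * the same result in a single issue. */
        int32_t fir_step(const int16_t coeff[TAPS], const int16_t window[TAPS])
        {
            int32_t acc = 0;
            for (int i = 0; i < TAPS; i++)     /* sequential on a CPU...     */
                acc += (int32_t)coeff[i] * window[i]; /* ...parallel wires in fabric */
            return acc;
        }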
  • by SlipJig ( 184130 ) on Monday April 26, 2004 @03:15PM (#8975172) Homepage
    IANAEE, but I was just wondering if this technology provides greater advantages to unique monolithic apps as opposed to apps targeted for virtual machines such as the JVM or CLR. Those VMs are general-purpose, and maybe apps that run on them would be "invisible" to the hardware reprogrammability... however I don't know how just-in-time native compilation might change that picture. Anyone with knowledge of this stuff care to enlighten?
  • by Anonymous Coward on Monday April 26, 2004 @03:18PM (#8975213)
    ...I sense another Transmeta coming on...

    Yes sure, rewirable chips would be cool for certain applications, but how does one go about making them deal with multiple applications with multiple needs? You'd overload the CPU with a truckload of specialized instructions - which would probably slow it down. Granted, I see uses in things like mobile phones, but for multitasking machines, a 'jack of all trades' chip is the way to go.
  • by jsac ( 71558 ) on Monday April 26, 2004 @03:22PM (#8975248) Journal
    Luckily it will also immensely speed up encryption times. So, on the whole, probably a gain for the white hats rather than the black hats.
  • by wed128 ( 722152 ) on Monday April 26, 2004 @03:27PM (#8975310)
    yeah, but a working implementation is a long way from a concept paper...
  • Re:Ummm... (Score:2, Insightful)

    by narcc ( 412956 ) on Monday April 26, 2004 @03:31PM (#8975358) Journal
    Nope
    See the script [sfy.iv.ru]
  • by Gyorg_Lavode ( 520114 ) on Monday April 26, 2004 @03:40PM (#8975452)
    The idea of programmable chips is nothing new; Xilinx et al. have been doing it forever. The idea of putting both a standard core with a generic instruction set AND a programmable core on the same chip is very interesting. It will, however, be a niche product. You aren't going to use it in your home computer, because your home computer does a broad range of things.

    This will be useful in the places they mentioned: places where you do a lot of processing that takes many generic instructions but can be translated into a single string of discrete instructions.

    The more I think about it, this is the direction processors are going. We keep moving processors towards RISC-based cores. We keep adding specialized paths for things such as multimedia. Eventually we WILL have half the processor being a purely RISC core and half being programmable hardware for specialized, computationally intensive instructions. I retract my initial view.

    I do wonder, though, what the lifetime is on the hardware side. How many times can you reprogram the hardware before it starts to die? What is the error rate in reprogramming it? What happens when a few programmable transistors die?

  • by Jerf ( 17166 ) on Monday April 26, 2004 @03:42PM (#8975475) Journal
    Along with jsac's comment (more processor power benefits encryptors exponentially but crackers only linearly, so on the whole more power is a win for encryptors), I'd like to point out that this is only a setback for encryption inasmuch as encryptors claim their encryption will keep your data safe for all time. Which is to say, at least for the reputable encryptors, this isn't a setback at all.

    If you insist on putting words in their mouths, then yeah, you might consider it a setback. But that's your misunderstanding, not theirs. All reputable encryptors have accounted for Moore's Law in their cost/benefit tradeoffs. Since it doesn't take much key length before brute force requires computers larger than the Universe (and since "cracks" on good encryption are typically just ways of collapsing the search space, not procedures that give immediate answers, adding more bits will often demand Universe-sized machines there too), this isn't that big a deal for encryption. Push your key size up and be done with it. Even conventional machines can handle that today; it just takes longer.
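
    The "push your key size up" arithmetic is easy to check; a sketch with a deliberately absurd attack rate:

        #include <math.h>
        #include <stdio.h>

        /* Each added key bit doubles the brute-force work, so any
         * constant hardware speedup is absorbed by a slightly larger
         * key. The attack rate here is pure fantasy, on purpose. */
        int main(void)
        {
            const double rate = 1e18;  /* keys/sec (fantasy) */
            const int bits[] = { 64, 80, 128, 256 };

            for (int i = 0; i < 4; i++) {
                double years = pow(2.0, bits[i] - 1) / rate
                               / (365.25 * 86400.0);
                printf("%3d-bit key: %.3g years on average\n",
                       bits[i], years);
            }
            return 0;
        }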
  • by AhBeeDoi ( 686955 ) on Monday April 26, 2004 @03:44PM (#8975491)
    Stretch claims that their CPU running at 300MHz has shown superior performance to a 2GHz box. We have no details of their testing, and I wonder about real-world performance.

    Natural questions come to mind: how quickly does the chip configure itself to optimize for an application? Does the configuration occur only at application start? How many chip-configuring applications can it run concurrently? Will it optimize for interpreted languages? Can some configurations be made "permanent" to accommodate the OS? I can see how this chip would optimize some specialized tasks, but I don't know if it will run well in an environment where many different types of tasks are expected to run at the same time.

    Another issue related to gaining acceptance is whether Stretch releases specs so that others can write their own compilers. Is Stretch pursuing a pure hardware strategy (not trying to sell compilers, create their own OS, etc.)?

  • by Short Circuit ( 52384 ) <mikemol@gmail.com> on Monday April 26, 2004 @03:45PM (#8975506) Homepage Journal
    Interesting point.

    People developing along similar lines must have a means of controlling the new circuitry so that hot spots don't form on the die, especially if they provide analog capability. It could be too easy to set up a feedback loop that could really trash that part of the die.

    Which brings up another thought: do they have an on-board controller that tracks which parts of the die are usable and which aren't? If they do, they could have seriously high production yields.

    In fact, I wouldn't be surprised if such a self-diagnostic utility made its way into modular dies with specialized circuitry. So a processor could run on two AMUs instead of three, and so forth.
  • by Ars-Fartsica ( 166957 ) on Monday April 26, 2004 @03:46PM (#8975507)
    General-purpose CPUs are fast, ubiquitous, and cheap. While compelling, this new approach is in no sense a slam dunk in the market. Stretch will have to make a compelling case for why this is a faster and cheaper alternative to the x86 (compatible) hegemony.
  • stop the madness. (Score:3, Insightful)

    by twitter ( 104583 ) on Monday April 26, 2004 @03:58PM (#8975653) Homepage Journal
    How do you detect a virus that has control of the underlying hardware though...

    The same way you detect a virus on any machine that has been compromised: with another machine and/or a thorough understanding of normal operation and running processes. Nothing new here. Evaluate the harm done by a potential compromise and take steps accordingly.

    There is no practical difference between a hardware and a software compromise, and the remedy is the same. Indeed, for critical purposes, there's little difference between a hardware compromise and a simple failure. You should anticipate it and not get burnt. The bottom line: know your shit and be in control when strange things happen.

    Security is a process and must be applied system-wide. If you don't have reasonable configuration control, you are already lost. If you run junky closed software that's full of bugs and doesn't keep track of UIDs, PIDs or the processes themselves, you are always in for a rough ride. The trouble that causes will distract your operators, as it did during the last big blackout. Every piece has to be considered in context. It's not hard; it just takes time, organization and judgment.

    I hate how Luddites always look at any new tool and cry out, "look how awful [insert wonderful new power] is!"

  • by Anonymous Coward on Monday April 26, 2004 @03:59PM (#8975665)
    Looking at their brochure, it is based on Tensilica Xtensa technology (www.tensilica.com), which I know has been around for at least 3 years. Nothing remarkable. Many companies have developed similar products.
  • by AmericanInKiev ( 453362 ) on Monday April 26, 2004 @04:59PM (#8976420) Homepage
    I wouldn't bet on that.

    A minor change in the instruction set would likely render the OS dysfunctional - and while that would certainly get attention - it would not propagate very well.

    There is a calculus to viruses which requires them not to kill their hosts, and to do as little damage as they can get away with. Damaging viruses get high priority on fix lists and get shut down more quickly than less harmful viruses.

    I think a CPU change virus would be a rather self-defeating proposition.
  • Re:Insightful?! (Score:3, Insightful)

    by Jennifer E. Elaan ( 463827 ) on Monday April 26, 2004 @06:29PM (#8977484) Homepage
    Actually, it's almost certainly based on standard SRAM FPGA technology. That's quite cheap in terms of power, and not especially expensive in terms of time, to reprogram, and there is no degradation over time from doing it too often. The only real disadvantage is that it might be entirely possible to create on-die shorts with bad programming data, as it currently is in FPGAs.
