IBM Hardware

Four Core Processor to Bring Tera Ops

panhandler writes "As reported at CNet and the Austin American Statesman, researchers at UT are working with IBM on a new CPU architecture called TRIPS (Tera-op Reliable Intelligently adaptive Processing System). According to IBM, 'at the heart of the TRIPS architecture is a new concept called "block-oriented execution,"' which will result in a processor capable of executing more than 1 trillion operations per second."
  • by makapuf ( 412290 ) on Friday August 29, 2003 @02:43AM (#6821897)
    at Unreal Tournament? Why, some have cool jobs.
  • by Adolf Hitroll ( 562418 ) on Friday August 29, 2003 @02:44AM (#6821901) Homepage Journal
    This way you Yankees can count every dollar of your actual external debt in a little more than a second!
  • Great.... (Score:5, Insightful)

    by innosent ( 618233 ) <jmdority.gmail@com> on Friday August 29, 2003 @02:45AM (#6821902)
    Great... Just what we need, processors that can perform an instruction, then wait 40000 cycles for the next instruction to be read from memory. I wish we could see some memory improvements to go along with these.

    Seriously, though, this will help break all the clustering records, provided we can come up with faster interconnects by then.
    • Re:Great.... (Score:4, Informative)

      by n3rd ( 111397 ) on Friday August 29, 2003 @03:10AM (#6821984)
      Great... Just what we need, processors that can perform an instruction, then wait 40000 cycles for the next instruction to be read from memory. I wish we could see some memory improvements to go along with these.

      Sun is working on something along those lines, check it out [sun.com]
      • Re:Great.... (Score:5, Interesting)

        by innosent ( 618233 ) <jmdority.gmail@com> on Friday August 29, 2003 @03:26AM (#6822018)
        That's throughput they're working on, which is great, but not the problem. Latency is the problem, not throughput. Try having large programs with lots of branches and/or syscalls: If the code is large enough, you'll spend more time bringing pages in from memory than actually executing your code, especially since you can forget about pipelining benefits...

        Personally, I wish a company would throw out every idea from current memory, put a GB of cache on a chip, and get memory access times down to about 3 picoseconds. But memory doesn't have the marketing appeal that processors do, so we're screwed.
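
        A rough back-of-the-envelope sketch in Python of why latency dominates (every number below is a made-up assumption, not a measurement):

            # Rough model of how cache misses inflate the average cost per instruction.
            base_cpi = 1.0        # cycles per instruction if everything hits in cache
            miss_rate = 0.02      # assumed fraction of instructions that miss all the way to DRAM
            miss_penalty = 400    # assumed cycles to fetch a line from main memory

            effective_cpi = base_cpi + miss_rate * miss_penalty
            print(f"effective CPI: {effective_cpi:.1f}")   # 9.0, i.e. roughly 9x slower than ideal
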
        • Re:Great.... (Score:3, Interesting)

          by asavage ( 548758 )
          Personally, I wish a company would throw out every idea from current memory, put a GB of cache on a chip, and get memory access times down to about 3 picoseconds. But memory doesn't have the marketing appeal that processors do, so we're screwed.

          The problem is the larger the cache size, the slower the access time. It is a trade off.

        • Re:Great.... (Score:3, Insightful)

          by AlecC ( 512609 )
          True, for general purpose computing, which is probably what most /.ers do. But this sort of massive processing power is really only needed by the simulation people, who do large amounts of contiguous number crunching, such as matrix multiplies. That sort of thing will be sped up enormously by this sort of architecture.

          As a concept, this is hardly new. There have been all sorts of different parallel processing architectures over the years - SIMD, MIMD, strings, arrays, etc. Each has performed we
        • Do you want it cheap, or do you want it fast?

          We know how to design faster memory, we've done it. Other than a few niches, the marketplace hasn't been willing to pay for it. So we're back to dirt cheap DRAM, because "It's what the customers want," or at least will pay for.

          That said, there are inherent limits to reducing latency, mostly having to do with size. That's why L1 is the smallest cache, and L2 is a bigger cache. L1 is typically the biggest cache that can meet the fastest performance requirements.
        • Re:Great.... (Score:3, Interesting)

          by drinkypoo ( 153816 )
          Maybe someone should just get SRAM working at higher speeds and densities without making it bloody expensive as it has been, thus eliminating the need to do refreshes, and enabling asynchronous reads from memory at synchronous speeds but without the need to be synchronous.

          Or maybe IBM's MRAM will do this for us.

        • Latency is the problem, not throughput.

          And, this is also exactly what Sun's program is aiming for. Highly-threaded processors can use latency to their advantage, by scheduling additional threads during the waiting period.
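
          A minimal sketch of that idea in Python (the cycle counts are invented): with enough hardware threads, one thread's stall is covered by useful work from the others.

              # Toy model: each thread computes for C cycles, then stalls S cycles on memory.
              # With T threads interleaved, the core stays busy as long as T*C covers C+S.
              C, S = 20, 400                      # assumed compute and stall cycles per burst
              for T in (1, 4, 8, 16, 32):
                  utilization = min(1.0, T * C / (C + S))
                  print(f"{T:2d} threads -> core utilization {utilization:.0%}")
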
    • Or some hard drive speed increases. I think normal everyday IDE drives have gone from 10-11 ms down to maybe 8-9 in the past 10 years. Wow, way to make Murphy proud, fellas.
      <plug>Western Digital hard drives are the best.</plug>
      • Western Digital hard drives are the best.
        Assuming they aren't DOA when you get them ;) Note: I'm not just trolling. DOA drives are a serious problem with WD HDs.
    • Wouldn't wide enough memory words fix this? Instead of puny 128 bit memory, go to bigger words, like 2048 bit words. Now in each memory access, you're reading in enough 64-bit instructions to keep the processor busy.

      The width of the new 2500-pin DIMM's could have an adverse effect on case design however.
      • > The width of the new 2500-pin DIMM's could have an adverse effect on case design however.

        That's not necessarily true. There's no reason that I know of that requires RAM to be a long stick. Make it look like a CPU and you can have a square-shaped socket that has more pins/square inch.
        • Well, I was trying to be somewhat of a smart ass, and somewhat serious. (Hence my nick.)

          While wide memory words are a real possibility for fixing memory bandwidth and keeping processors' appetites satisfied, nobody would seriously consider a 2500-pin DIMM.

          Other possibilities suggest themselves. Your square 2500 pin arrangement might be one. A very high speed serial interface might be another.

          Yet another possibility is that socketed memory might just go away completely. No really. As computers push th
    • That's what prefetchers are for. If the memory pattern is predictable (many are), then you don't have to wait, the hardware will prefetch the data for you and have it ready to go when the core needs it.
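
      As a rough sketch of the idea in Python (this is not how any particular prefetcher is built, just the simplest possible stride detector), the hardware only needs the last two addresses to guess the next one:

          # Minimal stride prefetcher sketch: if the stride between recent accesses is
          # stable, the next address can be fetched before the core asks for it.
          def predict_next(addresses):
              predictions = []
              for i in range(2, len(addresses)):
                  stride = addresses[i - 1] - addresses[i - 2]
                  predictions.append(addresses[i - 1] + stride)
              return predictions

          stream = [0x1000 + 64 * i for i in range(8)]    # sequential 64-byte cache lines
          print([hex(a) for a in predict_next(stream)])   # every guess matches the real next access
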
  • by Anonymous Coward on Friday August 29, 2003 @02:45AM (#6821907)
    Will still take five minutes to boot into a login prompt. Some things never change.
    • And you would need a login prompt why? By 2005 Microsoft will have everyone's user/pass and will log in for you, to reduce security risk...
    • by slittle ( 4150 )
      ??

      Things change plenty. Windows' boot times have been improving in recent years, esp. compared to the Win9x days.

      I think you meant to make a crack about Doom III or something...
    • by julesh ( 229690 ) on Friday August 29, 2003 @04:47AM (#6822208)
      Come on, funny as the line might be, timing from power-on to having a working desktop, my systems come in like this:

      Windows 2000 - 45 seconds
      Windows 98 - 1 minute
      Linux + KDE 2 - 1 minute 10 seconds

      (Admittedly linux + console is about 20 seconds, but that's not really a fair comparison - Windows 98 'text mode only', i.e. DOS, is only about 2 seconds).

      Also, boot up time is largely IO bound. Improving your processor speed will make comparatively little difference (I think doubling speed might shave 10% off these figures, possibly more for the Linux one because the KDE issue is dynamic linking related which is a CPU problem).
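
      That 10% figure is roughly what Amdahl's law predicts if you assume boot is about 80% I/O bound (an assumed split, not a measurement); a quick Python check:

          # Amdahl's law: only the CPU-bound fraction of boot benefits from a faster CPU.
          io_fraction = 0.8     # assumed share of boot time spent waiting on disk
          cpu_speedup = 2.0     # doubling processor speed

          new_time = io_fraction + (1 - io_fraction) / cpu_speedup
          print(f"boot time drops by {1 - new_time:.0%}")   # ~10%
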
      • Unfair comparison (Score:4, Informative)

        by axxackall ( 579006 ) on Friday August 29, 2003 @06:23AM (#6822534) Homepage Journal
        W2K keeps loading its services even AFTER I log in. By changing the boot sequence order, I can cut the Linux time to an X11 login prompt at least in half.

        Well, I don't need Postfix, Apache, Zope, MySQL, PostgreSQL and many other services at the moment of login. The Win2K designers recognized this and optimized the boot sequence for a desktop user. In Linux we still keep a server-oriented mentality; that's why XDM/GDM/KDM is always the last thing to start.

        Besides, Win2K boots some services in parallel, while in Linux we still boot all of them sequentially, waiting for the [OK] string before starting the next one. The only way to parallelize the sequence is to track dependencies between services. In Gentoo there are some efforts toward a parallel boot.

        But as of now, Linux is (by default) oriented toward servers, and GUI login is the last (literally last) thing you need on your server.

        • Re:Unfair comparison (Score:3, Interesting)

          by julesh ( 229690 )
          Actually, I have substantially optimised my Linux startup times to get it down to that. I've removed a load of non-essential services (I'm not running a mail server or web server at all now, I only really have stuff that runs from inetd and mysql running other than the absolute essentials) and moved the X startup so that it happens before a lot of other stuff has loaded.

          OK, I'll admit that I haven't parallelised it beyond this, but I wouldn't expect to see a huge amount of improvement from that. Besides,
        • by roystgnr ( 4015 ) <roy AT stogners DOT org> on Friday August 29, 2003 @08:26AM (#6823383) Homepage
          Besides, Win2K boots some services in parallel, while in Linux we still boot all of them sequentially, waiting for the [OK] string before starting the next one. The only way to parallelize the sequence is to track dependencies between services. In Gentoo there are some efforts toward a parallel boot.

          How are they doing it?

          I've often thought that we should be booting up our computers with a parallel invocation of "make". Then when adding a new service you would have none of this "what number between 0 and 100 should I assign?" foolishness: just write a three line makefile that includes all the dependencies that your service has on others.
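
          A minimal sketch of that in Python (the service names and dependencies are invented for illustration): a topological sort hands you, at each step, the set of services whose prerequisites are done, and that whole set can be launched in parallel.

              # Dependency-driven startup, make-style: start everything whose
              # prerequisites have finished, instead of walking a fixed numeric order.
              from graphlib import TopologicalSorter

              deps = {                                  # invented example services
                  "network": [],
                  "syslog": [],
                  "sshd": ["network", "syslog"],
                  "httpd": ["network", "syslog"],
                  "xdm": ["syslog"],
              }

              ts = TopologicalSorter(deps)
              ts.prepare()
              while ts.is_active():
                  ready = ts.get_ready()                # all of these could start in parallel
                  print("start in parallel:", ", ".join(sorted(ready)))
                  for svc in ready:
                      ts.done(svc)
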
          • The only way to parallelize the sequence is to track dependencies between services.

            Then why are we bothering with this System V rc directory structure that encodes dependency orders in the service start-up symbolic link? All that is required is to launch services with the same number simultaneously, and, bam, parallel booting.
          • by mkldev ( 219128 ) on Friday August 29, 2003 @01:45PM (#6826775) Homepage
            You mean kind of like Mac OS X does? From the docs on OpenDarwin:

            The Property List

            Each startup item bundle contains a property list file at the root level named StartupParameters.plist. The property list is an XML or NeXT-style text file that describes the contents of the bundle. It enumerates the services the bundle provides, the services the bundle requires, and other information useful for determining the proper order of execution of the bundles.

            The property list contains the following attributes:

            {
                Description = "My Startup Item";
                Provides = "MyService";
                Requires = ("AnotherService", "Network", ...);
                Uses = ("YetAnotherService", ...);
                OrderPreference = "time";
                Messages = {
                    start = "Starting My Item.";
                    stop = "Stopping My Item.";
                    restart = "Restarting My Item.";
                };
            }
            Apologies for the EQUALS, OPEN CURLY BRACE, and CLOSE CURLY BRACE substitutions in the original post; Slashdot considers those characters to be 'junk'. Oddly enough, it also thinks double quotes are junk. Talk about encouraging plagiarism.

            Here's a modest proposal: if somebody has a Karma bonus, it should be clear that the person doesn't post intentional trolls or other useless crap. Don't subject those of us who actually try to consistently post useful information to these sorts of stupid filters. It only ends up preventing us from being helpful and informative and leads to the decline of the signal-to-noise ratio that it was designed to improve.

      • Windows 2000 - 45 seconds

        So what? Solaris 9 booting on a six year old workstation goes this fast after optimizing the rc directories. Also, most people wait for the hourglass cursor to go away in Win2K after logging in, anyway (I don't trust Windows enough to attempt work while it is still busy--that's just asking for trouble).
        • Windows 2000 - 45 seconds

          So what? Solaris 9 booting on a six year old workstation goes this fast after optimizing the rc directories.


          The original AC I was replying to suggested that 5 minutes was common. I was pointing out the error of magnitude. Also, the fact is that Linux with a modern desktop environment isn't much better.

          I wasn't saying 'wow, isn't windows fast'. I was saying 'look, there isn't a lot of difference between windows and a system of the kind that I guess you prefer'.

          Also, the timin
  • Better link (Score:5, Informative)

    by Textbook Error ( 590676 ) on Friday August 29, 2003 @02:46AM (#6821917)
    A somewhat more informative link [utexas.edu]. Would it really kill submitters to put a link to the actual project in their submission...
  • Thank God (Score:4, Insightful)

    by (outer-limits) ( 309835 ) on Friday August 29, 2003 @02:47AM (#6821919)
    EPIC is clearly dead in the water. Intel didn't learn from the 432.
    • EPIC is clearly dead in the water.

      Only in the context of general-purpose processors. VLIW CPUs are common in signal processing and video cards. Perhaps the Itanic can become a $5,000 behemoth DSP chip for the next generation of 260-watt graphics cards. Perhaps they can put those ridiculous SPEC scores to some use?
  • by robbyjo ( 315601 ) on Friday August 29, 2003 @02:48AM (#6821920) Homepage

    Here [utexas.edu]

  • by 10Ghz ( 453478 ) on Friday August 29, 2003 @02:53AM (#6821934)
    "This is yet another breach of our IP! Our fine researcher came up with this technology over 10 years ago, we have just ket it hidden for all this time. Unfortunately we wrote the patent-applications with invisible ninja-ink and they are being kept in a vault in our Fortress of Doom (tm), so we can't show them to anyone.

    We expect IBM to pay us 5 billion dollars plus 4 x $699 for each CPU sold"
  • Fabrication (Score:4, Insightful)

    by Anonymous Coward on Friday August 29, 2003 @02:54AM (#6821936)
    Does anyone remember the Pentium Pro? It was an extremely expensive processor. This was because of its strange system of connecting the CPU core with a massive amount of cache ram; production yields were very low, so fabrication costs were very high.

    Imagine how high the failure rate would be with fabricating a CPU with four cores... I don't see how it would be practical unless it was with an extremely-high yield design such as the StrongARM.
    • Re:Fabrication (Score:3, Interesting)

      by Nazmun ( 590998 )
      What if they could fabricate each core separately and then somehow connect the CPUs? Shouldn't be too hard to do in the factory. It wouldn't be as fast as a single-core CPU's internal bus, but it would be a heck of a lot better than multiple CPUs in standard mobos now (like Xeons, etc.).
    • Re:Fabrication (Score:2, Interesting)

      by ottawanker ( 597020 )
      Imagine how high the failure rate would be with fabricating a CPU with four cores... I don't see how it would be practical unless it was with an extremely-high yield design such as the StrongARM.

      Naw, that doesn't seem like too big a problem. All they have to do is check to see how many cores are working, and then sell the chips like that. Something like this (assuming you pay a premium for more cores, relative to the lower yields):

      $500 for 1 core
      $1200 for 2 cores
      $1800 for 3 cores
      $2500 for 4 cores
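
      Whether that price ladder makes sense depends on how the dies actually bin. A rough Python sketch (the 80% per-core yield is a made-up number):

          # If each core works independently with probability p, the share of dies
          # with exactly k good cores follows a binomial distribution.
          from math import comb

          p, n = 0.8, 4          # assumed per-core yield, cores per die
          for k in range(n, 0, -1):
              share = comb(n, k) * p**k * (1 - p)**(n - k)
              print(f"{k}-core parts: {share:5.1%} of dies")
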
      • This is actually a great idea.
        • Re:Fabrication (Score:1, Insightful)

          by Phishpin ( 640483 )
          Marketing a chip that has a product defect doesn't sound so great to me, even if the chip performs flawlessly with a broken core or two.

          IBM: "Well, all our chips are made with 4 cores, but some of them get made broken, so we sell those for less as if they were only made with the number of cores that work"

          Customer: "I wonder what Sun and Intel are up to these days."
          • But it wouldn't be marketed like that of course. Just like early 486SX wasn't marketed as 486DX with a broken/disabled FPU.
          • Erm, what do you think the difference is between different clock speeds of the same CPU? They print them up, and the ones with fewer defects get a higher clock speed.
          • That's why they're sold at different speeds: all a P4 2GHz is is a P4 3GHz with enough flaws to keep its speed down. Nothing's actually *broken*, but the defects are still there. IIRC, IBM does sell the Power4 with failed cores on the cheap (I could be wrong though, too lazy to go checking), as do most other manufacturers (the on-die caches are often sold like that: 32K is just 64K with a broken half).
          • Check out the Power4 HPC sometime. Or the Celeron.
    • The multi-die module wasn't a "strange system"; it was simply an expensive way to do it. Both the CPU and cache were fabbed separately and individually bonded into the package and to each other. As I remember, others had done similar things before, but I think it was the largest-scale use of its kind. Now a more complex CPU with as big a cache is routinely put on the same die, and I expect this project will do the same.
    • The Pentium Pro was a two-chip solution in an expensive package. That, combined with state-of-the-art process technology at the time, made it very expensive. The article doesn't mention the device count or die size for this chip, so your yield analysis is pure speculation without any real data.
  • Only 32 Billion Now (Score:5, Informative)

    by Nazmun ( 590998 ) on Friday August 29, 2003 @02:55AM (#6821938) Homepage
    The four cores add up to only 32 billion operations per second right now, according to the CNet article. They project that they won't reach 1 trillion until 2010.
    • So I can only replace ten Pentium 3000s with this one chip, such a pity...
    • Only on Slashdot would someone be complaining about a processor (or processors) that only gets 32 billion OPS.
      • Only on Slashdot would someone be complaining about a processor (or processors) that only gets 32 billion OPS.

        Except, they only get that on a fairly narrow range of problems (easily divided into four independent chunks, and heavily CPU bound rather than memory-intensive).

        For some perspective, the newest P4s get 6.4 billion instructions per second, more if you consider the total possible SSE2 throughput. With standard number-inflation, it wouldn't surprise me to find out that this thing does nothing real
  • by Capt'n Hector ( 650760 ) on Friday August 29, 2003 @02:56AM (#6821939)
    But of course, these processors will require entire software rewrites.

    But this reminds me of a growing trend, and that is that as soon as large infrastructures are finally completed (be it the transition to OS X or 802.11b) the technology becomes obsolete. However, the entire infrastructure must be replaced. I don't care how many gazillion flops this or any other processor can pull. They need to easily scale so that the entire infrastructure does not need replacing.

    • If we wrote portable code, we could just recompile (to an extent).
    • With IBM behind it, it shouldn't matter. IBM has the grand plan of making Linux the OS that will run across all its server lines. Look at this [slashdot.org] article for example.
      Anyway, competition is always a good thing, and you don't have to move the infrastructure unless you have to. If Intel chips remain good enough, stick with them; you will probably still be able to find support for them.
    • Why? Because it means people will spend money on infrastructure, which increases cash flow in the IT industry, which will help keep the bubble afloat a while longer.

      A lot of the growth of the computing industry comes from making smarter, backwards-compatible products. But look what it got Microsoft when they made an Office version that is backwards compatible -- they had to use various other means to ram it down people's throats, because people didn't feel they needed the new version.

      A large part of the turnov
    • You don't think that some IT people are realising that building a system that doesn't require a new infrastructure is a dumb idea? I mean, look how many are currently unemployed.

      It would make a lot of sense for those developing such systems.

      - traskjd
    • But this reminds me of a growing trend, and that is that as soon as large infrastructures are finally completed (be it the transition to OS X or 802.11b) the technology becomes obsolete.

      Obviously, you are new to the computer industry, aren't you?

      • No actually. It has only been in recent times that we have finally built large infrastructures based on rapidly progressing technology. I'm not only talking about computers, I'm talking about everything from roadways (which need to be repaved all the time anyway) to electricity.

        Backwards compatibility is nice, but let's not forget forwards compatibility as well.

  • by kramer2718 ( 598033 ) on Friday August 29, 2003 @02:57AM (#6821945) Homepage
    If each chip is basically four processors each of which can execute 16 operations simultaneously, it will be difficult for compilers to find 64 independent instructions to execute each cycle.

    I guess one possibility could be to execute instructions from four different processes simultaneously, thus reducing the probability that the instructions will interfere.
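
    A rough way to see the limit in Python (the toy "program" below is invented): the longest chain of dependent instructions caps how many can ever issue per cycle, no matter how wide the machine is.

        # Toy ILP bound: available parallelism = instruction count / critical-path length.
        # Each entry maps an instruction to the instructions it depends on.
        from functools import lru_cache

        program = {
            "i1": [], "i2": [], "i3": ["i1"], "i4": ["i2"],
            "i5": ["i3", "i4"], "i6": ["i5"], "i7": [], "i8": ["i7"],
        }

        @lru_cache(maxsize=None)
        def depth(instr):
            return 1 + max((depth(d) for d in program[instr]), default=0)

        critical_path = max(depth(i) for i in program)
        print(f"best possible IPC: {len(program) / critical_path:.2f}")   # 8 instrs / depth 4 = 2.00
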
    • by boopus ( 100890 ) on Friday August 29, 2003 @03:10AM (#6821981) Journal
      Exactly. The IA64/Itanic/Itanium instruction set provides for executing multiple instructions "simultaneously" (aka pipelined with no interference), but the Intel guy I heard from said it so far doesn't provide anything close to the improvements they hoped the feature might. Scaling it up to 64 instructions per clock is only going to help tasks which IBM supercomputers have already lost to Beowulf clusters.
      • it will be difficult for compilers to find 64 independent instructions to execute each cycle

      The problem is that the word independent is the wrong one.

      It depends on what sort of work you choose to do on this sort of beast; finite element work (simulations, etc.) involves the same operation on lots of values over and over. This is how Cray made his money years ago.

      This is not a desktop machine for you to do office automation on; Quake maybe, but not wordsmithing.
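
      A small Python illustration of the contrast (invented data, just to show the shape of the work): an elementwise update has no cross-iteration dependences and splits cleanly across however many units you have, while a recurrence does not.

          # Simulation-style kernels apply the same operation to every element independently.
          values = list(range(1_000_000))

          scaled = [3.0 * v + 1.0 for v in values]   # every element independent: parallelizes trivially

          running, prefix = 0.0, []
          for v in values:                           # each step needs the previous one:
              running += v                           # a serial dependence chain, hard to spread out
              prefix.append(running)
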

    • For many applications, it seems unlikely to me that compilers can do the job without help from programmers. Often, the programmer can identify sections of code that do not interfere with each other when no reasonable compiler could know this. I suspect we need new methods of software design that highlight these situations combined with compiler features that make it easy for the programmer to identify them. Specifics? Hey, on this one I am in marketing: the implementation is the job of the engineers!
  • Expensive (Score:1, Insightful)

    It seems cheaper to me to simply make larger clusters of computers with more processors than to redesign processors. For example, why don't IBM and UT team up to design an 8-processor Itanium motherboard or something?

    First, they don't spend money reinventing the wheel. Second, hardware production failure rates are reduced because if an eighth of all cores fail, you don't average zero production. Third, most of the code is already written for multithreading with multiple processors. It would probably

    • Memory (Score:3, Informative)

      by Detritus ( 11846 )
      It's easy to throw 8 processors on a motherboard. The hard part is designing a memory subsystem that can supply the bandwidth for 8 processors and any other bus masters. Plus, you have to provide cache coherency for all of those processors.
      • Re:Memory (Score:2, Informative)

        Wasn't one of the main premises of the TRIPS system that each processor is more or less independent of its neighbors? They used the term "network" to describe the processors' interrelationship. With operating systems that commonly are processing 20 threads when no apps are running (*cough* XP), what would be the advantage of increased "networking" of the processors?

        It appears that the only issue that would be solved is there would be less lag between processors--but at the speeds they're talking about, th

  • by Kizzle ( 555439 ) on Friday August 29, 2003 @03:10AM (#6821983)
    If anyone in any way, shape or form mentions the word Beowulf, expect a swift kick in the nuts from yours truly.

    That is all
  • by JanusFury ( 452699 ) <.moc.liamg. .ta. .ddag.nivek.> on Friday August 29, 2003 @03:16AM (#6822002) Homepage Journal
    capable of executing more than 1 trillion operations per second."
    Let me guess...
    NOP
    NOP
    NOP
    NOP
    NOP
    NOP
    NOP
    ...
  • Sure, the machine will work.

    But it's going to take more than a faster CPU to kick-start the IT industry in the West.

    Right now, IT is a sunset industry, serving a market that is itself rapidly becoming extinct as entire business chains get automated in foreign countries. Within five years the famous Western IT industry will become a thin service layer reselling products (hard and soft) developed and produced elsewhere.

    Building yet faster CPUs does not alter this. There is no way new generations of faste
    • See, the big cost in CPUs is engineering development, not chip manufacture. These foreign governments with socialized computing are going to negotiate low-cost purchases from the American CPU companies, who in turn will have to charge high prices in the U.S., especially to individuals without CPU coverage in their IT insurance.

      Of course Americans will try to sneak over to Taiwan or use the Internet to buy cheap CPUs using purchase orders signed by unethical sysadmins. At that time, the director of the F

  • Cell (Score:4, Interesting)

    by Jagunco ( 547686 ) on Friday August 29, 2003 @03:22AM (#6822011)
    Wasn't the PS3 "Cell" chip made by IBM and Sony supposed to deliver 1 teraflop [zdnet.co.uk] too?
    • According to a story [linuxdevices.com] at the embedded Linux portal, this project is still on track. It is amazing how little hard data there is available. On the face of it, this should be a pretty major product, at least in the entertainment market. Imagine what the film editing and production companies could do with this.
  • It's actually called TORIAPS, but whatever.
  • by SharpFang ( 651121 ) on Friday August 29, 2003 @03:35AM (#6822041) Homepage Journal
    TRIPS. Lemme guess. The name says all about reliability of the system.
    • I think it says more about their PR department. Remember the golden rule, acronym first, meaning later...
    • Eh. Reliability can't really fully be determined until a chip comes off of the line. And this is really theory/sim work. They hope to get someone to buddy up with them and help with an actual proto. And remember, if the same reliability standards were in place for trains as were for computers, they would derail every two hours :) As far as the acronym, I'm gonna have to ask someone about that... (some people blamed it on PR or marketing people, but the EE dept. doesn't really have any)
    • I was thinking more along the lines of a certain novel by Stephen King (amongst his more inspired works, but with a typically poor ending). That sort of implies that the system will crash out due to a massive virus and only one in a thousand processes will remain.
  • TRIPS?!? (Score:1, Funny)

    by Anonymous Coward
    SCO smokes crack, IBM goes for trips... what will be next, sysadmins sniffing exploded capacitors instead of ethernet packets?
  • But... (Score:2, Funny)

    by Anonymous Coward
    A) Will it run linux
    B) Run Quake 3 at an acceptable FPS
    C) Take a slashdoting
    D) Make my Coffee
    E) Run Linux
    F) Where is my flying car!?
  • by boola-boola ( 586978 ) on Friday August 29, 2003 @04:02AM (#6822109)
    ....that I've had Doug Burger and Steve Keckler as professors here at UT, and not only do they know their stuff, but they're great professors as well, and they really seem to intimately care about the technology. They have a great sense of humor too (such as Dr. Burger complaining that he doesn't even have root access to his own machines :-P)
  • Now this is cool, and if they can make it show itself as 4 CPUs instead of one, it might even mean that porting existing software is easier. Of course, I'm not sure what the performance overhead would be

    Rus
  • So this processor will get on and do "inquire 'delete *.* are you sure (y/n)'", delete *.* ...pause for user entry to catch up ...oops.
  • "block-oriented execution" - I don't know that position, sounds kinky... "which will result in a processor capable of executing more than 1 trillion operations per second." - when she said 'size doesn't count', she lied. ~Marcus
  • I wonder if those professors working on this are the same ones who let their TAs teach the first day of class, and in some cases the first week of class, instead of showing up themselves.

    My fiancée was pretty disgusted this year: she's a grad student, and for the money she's paying she does not expect a student to be teaching the class on day one.
  • Today IBM announced the release of two new landmark acronyms: TRIPS and PERC. According to a spokesman: "We felt that RISC had run its course. It's far too familiar to people outside the industry now and besides, it has somewhat negative connotations. With these new acronyms we can once more confuse millions of people over the acronyms' expected 10-year lifecycle, and it also gives us plenty of bad in-joke opportunities for our technical authors."
  • by Mostly a lurker ( 634878 ) on Friday August 29, 2003 @06:38AM (#6822589)
    Have you noticed that big Japanese companies seem comfortable working with IBM? I find it difficult to think of any other large US corporation about which we can say the same. IMHO, it is because (while a hard nosed competitor) they deal in a straight fashion with partners. They are seen as trustworthy.
  • That's pretty much what an FPGA is, like a Xilinx, which will execute many commands in parallel: not just a few pipelines, but entire large blocks. Imagine executing 500 if statements in parallel rather than sequentially.

    I'd love to learn to program one... I just don't want to have to learn Verilog.
  • We're not still in the stone age; this comment is being posted from a real University of Texas tech desk. A lot of the features in the Pentiums and up were based on papers written by professors in my department. AMD has a fab right here in town (well, OK, a little bit outside of town).
  • By the way, here's another link: News. [utexas.edu] This is from the general public friendly news thing on the UT home page...
  • The prototype is going to have four cores; a final version could have as many as 16.

"Hello again, Peabody here..." -- Mister Peabody

Working...