Intel Hardware

Speculations Intel's Next Generation

An anonymous reader writes "The Inquirer speculates about the next generation Intel chip. It's low power, 64 bit, multi core (up to 16?) and the real reason for the Apple switch."
This discussion has been archived. No new comments can be posted.

  • Intel (Score:4, Funny)

    by Anonymous Coward on Thursday August 18, 2005 @05:57PM (#13351282)
    Correction: 65 bits. Twice as fast as 64 bits.
    • Re:Intel (Score:5, Funny)

      by Anonymous Coward on Thursday August 18, 2005 @06:37PM (#13351514)
      To be pedantic, it would actually be just a bit faster, not twice as fast.
    • Re:Intel (Score:3, Funny)

      by ackthpt ( 218170 ) *
      Correction: 65 bits. Twice as fast as 64 bits.

      What with dumping all the old technology for a brave new approach, they'll undoubtedly revisit old mistakes.

      it'll be a 63.999999999999976581 bit processor

    • Re:Intel (Score:3, Insightful)

      by demachina ( 71715 )
      "It's low power, 64 bit, multi core (up to 16?)"

      Wow!! This could mean they might catch up to AMD's current generation :) Except that they don't have 16 cores yet.
      • by js7a ( 579872 ) <`gro.kivob' `ta' `semaj'> on Thursday August 18, 2005 @10:49PM (#13352781) Homepage Journal
        The hidden Markov model Viterbi beam search algorithms that I depend on for my work run less than 50% as fast on 64-bit architectures as on 32-bit processors. Primarily, that is because of the fine-grained memory access patterns, complicated locality issues, and probably other things that I am not really aware of, such as less mature compiler technology.

        In any case, the fact that everyone wants to jump to 64 without testing the waters very carefully first is seriously foolish. I know I'm not the only one who feels this way -- Microsoft's Windows speech recognition subsystem refuses to run on any 64 bit architecture unless all of the OS and applications are strapped to 32 bit mode.

        This is possibly worse than five years ago, when people were paying absurd premiums to go from 800 MHz to 1.3 GHz with RAM speeds stagnant. At least then you got something more from algorithms that weren't memory-access-bound. Going from 32 to 64 bits is a significant step backwards in many cases.
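For context on the workload the parent describes, here is a toy sketch of Viterbi decoding with beam pruning (Python, purely illustrative, not the poster's code): the inner loop is dominated by scattered reads into the transition and emission tables, which is why cache behavior, more than word size, governs its speed.

```python
import math

def viterbi_beam(obs, states, log_trans, log_emit, log_init, beam_width=3):
    """Toy Viterbi decoder with beam pruning over an HMM.

    log_trans[s1][s2], log_emit[s][o] and log_init[s] are log-probabilities.
    Only the beam_width best states survive each time step; the scattered
    lookups into log_trans are what make real decoders memory-bound.
    """
    scores = {s: log_init[s] + log_emit[s][obs[0]] for s in states}
    backptrs = []
    for o in obs[1:]:
        # Beam pruning: keep only the best-scoring predecessor states.
        beam = sorted(scores, key=scores.get, reverse=True)[:beam_width]
        new_scores, ptr = {}, {}
        for s2 in states:
            best = max(beam, key=lambda s1: scores[s1] + log_trans[s1][s2])
            new_scores[s2] = scores[best] + log_trans[best][s2] + log_emit[s2][o]
            ptr[s2] = best
        scores = new_scores
        backptrs.append(ptr)
    # Backtrack from the best final state.
    path = [max(scores, key=scores.get)]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

A real speech decoder does this over thousands of states per frame, so the transition-table reads dwarf everything else.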

        • by demachina ( 71715 ) on Thursday August 18, 2005 @11:06PM (#13352838)
          I'd agree the 64-bit part is a bit overrated and bleeding edge for most applications, unless you are handling massive data sets. Video editing, simulation, circuit design, and seismic processing can all use it. Of course all the supercomputing fields need it. I imagine some big databases probably can too. Some games will probably need it too in a few years. Film animators are almost to the point where they will need a 64-bit address space, if the software developers will take the plunge.

          The best thing in the x86-64 ISA is that they just added a lot more registers [hardwaresecrets.com], which are sorely lacking in IA32. 8 new general-purpose registers and 8 new SIMD registers can help performance a lot if you compile for them.

          Are you compiling for and taking advantage of all the new registers?

          They might have an even better chip if they had just tacked the new registers onto IA32, but since they were breaking the ABI anyway, you can understand why they would go 64-bit, since it has longer legs for the future. There are going to be more and more applications that need 64 bits as RAM and disk capacities grow and people start working with bigger data sets.

          Running Gentoo on amd64 is a bit bleeding edge. There are still a lot of apps that are masked out for it, partially just because no one tests and owns them since the user community is still pretty small. I find most things work fine when you unmask them. I need to start volunteering to support the packages I use that no one has blessed for amd64.
        • The hidden Markov model Viterbi beam search algorithms that I depend on for my work

          You just made that up to see if we're paying attention, didn't you?

  • by burtdub ( 903121 ) on Thursday August 18, 2005 @05:59PM (#13351299)
    Probably will feature an android, a Klingon, and a balding captain.
  • by TelJanin ( 784836 ) on Thursday August 18, 2005 @06:00PM (#13351309)
    I'll speculate that Intel is going to create a new 128-bit proc composed entirely of turtles. Does that make me slashdot-worthy?
  • by Nom du Keyboard ( 633989 ) on Thursday August 18, 2005 @06:01PM (#13351314)
    The article speculates that this is going to be the reason for the Apple switch, but...

    If they're announcing an architecture this radical at next week's IDF, what are the chances that it will be available and running well in time for Apple's announced timeline for desktops?

    Or is Apple going to sell a lesser version first? In which case, why haven't they already switched over to selling it to early adopters? Yes, there really are people who buy systems and wait for the applications to arrive later.

    • Or is Apple going to sell a lesser version first, in which case why haven't they already switched over to selling it to early adopters already. Yes there really are people who buy systems and wait for the applications to arrive later.

      That isn't even necessary sometimes. I've found that of my application use on my Mac, 95+% is Apple-supplied (Safari, Mail, Terminal, iLife, etc.). After that, MS Office (which I expect would be ready, but would run well enough in Apple's binary translator), and BBEdit (which

    • All of the hardcore Apple users I know hate Intel. I wonder how this will ultimately fly. I'm excited about this new venture.
      • by Nasarius ( 593729 ) on Thursday August 18, 2005 @06:46PM (#13351584)
        I think the people who are most disappointed are the Linux geeks who like playing with exotic hardware. No more cheap PPC hardware for us.
      • "All of the hardcore Apple users I know hate Intel. I wonder how this will ultimately fly. I'm excited about this new venture."
        They hated Intel for a variety of factors: #1 They were/are loyal Mac users. The Mac marketing department wanted them to hate Intel, for obvious reasons. They (marketing) made sure that they (users) did, and they (marketing) were successful. #2 For a while, PowerPC had a processor advantage. Not anymore.
    • Servers for all! (Score:2, Insightful)

      by linzeal ( 197905 )
      Not a lot of people have thought about this but what if Apple is going for the server market and that is why they severed ties with IBM?
    • Apple's compiler doesn't even support AMD64 yet; it's just IA-32. Kind of weird when they've been selling 64-bit G5s for years to go back to 32-bit, but maybe not too surprising, since Intel doesn't have a mature line of AMD64/EM64T products just yet.
      • "Intel doesn't have a mature line of AMD64/EM64T products just yet."

        I call BS.
        There is Xeon, and Itanium.
        -nB
        • "Intel doesn't have a mature line of AMD64/EM64T products just yet."

          I call BS. There is Xeon, and Itanium.

          Itanium uses the AMD64/EM64T instruction set? What's IA-64 then?

        • by Nasarius ( 593729 ) on Thursday August 18, 2005 @07:16PM (#13351772)
          As already mentioned, Itanium is not EM64T.

          The few Xeon and Pentium 4 processors that do use EM64T have not been around for very long. The vast majority of Intel's processors are still 32-bit. They don't have anything that Apple could offer in a reasonably-priced desktop. Compare with AMD, which is almost entirely focused on AMD64 now, from the cheaper Athlon64s to the gamer-oriented FX series to the dual-core X2s.

      • by Tumbleweed ( 3706 ) * on Thursday August 18, 2005 @07:22PM (#13351803)
        The Apple move to Intel processors is supposed to be in two waves: the first will be the laptops and Mac Mini, which are currently 32-bit G4s, so there's no need to make something 32-bit that is currently 64. The second wave, perhaps a year later or so, will be the PowerMacs. Plenty of time for the 64-bit Yonah or whatever between those two waves.
    • I think it will be. If this is the case, then Intel has been working on this for AT LEAST 2 years now.

      AMD has been doing way too well for Intel not to notice. They learned lessons with the p4 (don't listen to the marketing department as much) and I don't think that the best answer they have is lackluster additions to the p4.

      Things like process shrinks, more cache and slapping 2 cores together without much regard for on die communications are not revolutionary. These things can be interpreted as trying to
    • by CaptDeuce ( 84529 ) on Thursday August 18, 2005 @07:13PM (#13351760) Journal
      ... what are the chances that [Intel's new processor] will be available and running well in time for Apple's announced timeline for desktops?

      I'd say slim to none, leaning heavily towards none. But I think that's a lot less important than your next question ...

      Or is Apple going to sell a lesser version first, in which case why haven't they already switched over to selling it to early adopters already. Yes there really are people who buy systems and wait for the applications to arrive later.

      Apple hasn't switched over because consumers won't buy any box that doesn't run OS X apps, Macintel or not. Developers need the head start.

      However, Apple and Mac developers don't have backward compatibility issues; whatever processor Intel serves up can't break code that doesn't exist. All Apple needs to do is make sure that the Xcode compilers are ready for the neXt86 processor such that what developers are compiling now will run on the new processor.

      It's highly unlikely that the neXt86 will be that different, but the fact that the Mac is a clean slate means it's impossible to rule out. This is wild speculation, but Apple may be able to use this advantage to exploit the new processor's features in a way that Windows developers can't. Think of the marketing coup for Apple and Intel.

      Intel may even use Apple to compel Windows developers to adopt new processor features much the way Apple spurred the USB device market.

      On the other hand, the neXt86 may only sport fins and a racing stripe. :-j

    • Continuing the theme of rampant speculation established by TFA...

      All the rumors I have heard seem to suggest that the high-end desktop hardware (PowerMac, XServe, high-end iMac configs) will be the last to switch to Intel.

      If Apple uses Pentium M and its successors to solve its laptop/Mac Mini problem, it can probably afford to wait on the high-end hardware. IBM has already announced [appleinsider.com] dual-core G5s which should be good for another PowerMac revision or two.

      By that time, if there is a mythical Intel 64-bi

  • by ajiva ( 156759 ) on Thursday August 18, 2005 @06:02PM (#13351325)
    To me that sounds a lot like Sun's Niagara. Huge CMT chip (8 cores, 4 threads each, a 32-way box), with power consumption around 65 watts, but faster than 4-way Xeon boxes and probably more like an 8-way depending on the application. Intel probably is moving to something similar, maybe not with quite that many cores and threads.
      Yeah, it does, very much so. This doesn't sound as extreme as Niagara, though, and so should keep reasonable performance, unlike Niagara with its single-issue, no-speculation cores.
    • faster on what ? (Score:3, Interesting)

      by vlad_petric ( 94134 )
      Niagara is a server chip. It works well on OLTP and web-serving style workloads, because those have inherent thread-level scalability and also miss to memory a lot. Instead of having a wide, out-of-order core that is underutilized most of the time, it's more efficient to have a bunch of simple, in-order cores that each execute multiple threads.

      That's good for Sun, because they sell server gear, but for other kinds of workloads this approach is very inefficient. See the Piranha [mit.edu] research paper by Barroso et al.

      • I think we are facing the prospect of having to change our workloads (by re-writing software) in order to see additional speed improvements. That great 50-year ride of ever faster single core execution seems to be petering out.

        Maybe we will wind up with a bunch of Niagara-like simple cores for parallel code, plus a small number of big, complex out-of-order cores for whatever hasn't been (or can't be) implemented that way. In fact, isn't that what the Cell processor is?
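The tradeoff between many simple cores and a few complex ones comes down to Amdahl's law: extra cores only speed up the parallelizable fraction of a workload. A back-of-the-envelope sketch (Python, purely illustrative):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workload that is 80% parallel tops out well short of the core count:
for n in (1, 2, 8, 32):
    print(n, round(amdahl_speedup(0.8, n), 2))
```

With a workload that is 80% parallel, 32 cores buy less than a 4.5x speedup, which is why the serial leftovers still want a big out-of-order core.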

    • There used to be a very very good pdf over at Sun's site at this address [sun.com] but unfortunately it is now defunct. If anyone has saved that pdf, please make it available somewhere as it is/was very informative.

    • by lupine ( 100665 ) * on Thursday August 18, 2005 @09:15PM (#13352349) Journal
      The interconnect for Intel's Xeon servers is really poor, and at high loads all the processors compete for access to the shared bus and memory. This means it doesn't scale worth a darn. You get diminishing returns for each processor, something along the lines of:
      1 xeon = 100%
      2 xeon = 140%
      3 xeon = 160%
      4 xeon = 170%
      Whereas with the AMD Opteron and its HyperTransport interconnect, the processors don't have to fight for resources, and performance scales much better, along the lines of:
      1 opteron = 100%
      2 opteron = 180%
      3 opteron = 250%
      4 opteron = 310%
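Those (admittedly rough) figures can be boiled down to parallel efficiency, i.e. measured speedup divided by processor count. A quick sketch in Python, using the numbers quoted above:

```python
def efficiency(scaling):
    """Parallel efficiency: measured speedup divided by the ideal speedup."""
    return {n: round(speedup / n, 2) for n, speedup in scaling.items()}

# Rough speedup figures quoted above (1.0 == one processor's throughput).
xeon = {1: 1.0, 2: 1.4, 3: 1.6, 4: 1.7}
opteron = {1: 1.0, 2: 1.8, 3: 2.5, 4: 3.1}

print(efficiency(xeon))     # the shared-bus Xeon falls off quickly
print(efficiency(opteron))  # HyperTransport keeps Opteron closer to 1.0
```

On these numbers the fourth Xeon is running at well under half its nominal throughput, while the fourth Opteron is still contributing most of it.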

  • by team99parody ( 880782 ) on Thursday August 18, 2005 @06:04PM (#13351334) Homepage
    Based on Itanium, I'd say it's a bluff to move Apple away from IBM.

    This is the same thing Intel did to HP, who walked away from PA-RISC; to SGI, who walked away from MIPS; and to Compaq/DEC, who walked away from Alpha --- they turned from leaders in 64-bit computing into resellers of Wintel.

    Hey, if it worked last time, let's try it again; and maybe the rest of the 64-bit competitors'll give up.

      The article switches over midway to saying that Intel will pretty much just copy Transmeta, but with multiple cores and an Itanium-style VLIW main processor. The argument is that software optimization, as done in the Transmeta processor, saves on branch prediction and x86 decoding hardware, while extra cache and multiple cores get rid of Transmeta's performance issues.

      But it is all pure speculation.
      Huh? HP approached Intel with the EPIC architecture, as it was based on their next-generation PA-RISC research. The HP/Intel alliance is a refinement of the Super-Parallel Processor Architecture (SP-PA). There was no swindling or bluffing HP. If anything, one could say Intel was the one tricked, since they dropped their x86-64 designs, lost focus on x86 in general, and invested billions in Itanium. Try actually reading some of the history [hp.com] next time.

      The failure of the other architecture is not just Intel's successful
    • The difference being that Intel is already selling chips good enough to justify the switch. Whether or not this new thing is a big change, and whether or not it works out, Intel has proved they can make chips good enough by selling them in the millions.
  • by doormat ( 63648 ) on Thursday August 18, 2005 @06:04PM (#13351335) Homepage Journal
    We'll know more when IDF arrives. Until then it's just stuff written to try and hit a bullseye in the dark. Which seems to be everywhere nowadays: Dvorak, The Inq, even my faithful Ars is getting bit by the bug that says every action by anyone in the tech industry must be expounded on in a multi-page article worthy of /. and the ad revenue it brings.
    • Quite true. We can predict some things, but it's just a logic exercise at this point. So I'll throw in my 2 cents.

      Will happen:

      • Multi-core
      • x86 related
      • 64 Bits
      • Fastest available (according to Intel, on some benchmarks)
      • "Processor of the Future" (according to Intel)
      • Cooler running (at least per MIP)

      Will not happen:

      • 64+ Core
      • Runs PPC code natively
      • Tastes like Chicken
      • "Designed with help from AMD"
      • 3x Hotter than a P4!

      Possible:

      • Integrated memory controller - wouldn't be surprised, that has REALLY helped AMD
      • by TeknoHog ( 164938 ) on Thursday August 18, 2005 @08:52PM (#13352247) Homepage Journal
        Code translation (ala Transmeta) - Possible, skeptacle of this, but could be quite interesting

        With this tiny font, I couldn't make out what the word there was, but after reaching for my skeptacles it was all clear. Truly the wealth of alternative spellings on Slashdot never ceases to surprise. I'm not even a native English speaker.

    • We'll know more when IDF arrives.

      Will we? Will they do a demo of a 4 GHz P4 [geek.com] too? Will they tell us it will use just as little power as the new Transmeta processors again?

      And will anyone believe them?

  • core speed (Score:2, Funny)

    by astellar ( 675749 )
    its 16 cores will execute an infinite loop longer than AMD's anyway.
  • Rosetta (Score:5, Interesting)

    by shmlco ( 594907 ) on Thursday August 18, 2005 @06:11PM (#13351377) Homepage
    If a VLIW X86 processor had a "native" mode, one would have to wonder if Apple's Rosetta technology could compile directly to it instead of X86. I mean, it would seem dumb to JIT-compile to X86, which in turn is translated to VLIW.
    • it would seem dumb to JIT-compile to X86, which in turn is translated to VLIW.

      Yes... I also thought of this for a second, but there's a counterpoint; the native ISA will have the freedom to be changed radically, while x86 is the stable ISA that is visible outside. I think this is a good thing (just like the fluctuations in the Linux module API) because it allows for faster development.

    • Re:Rosetta (Score:5, Insightful)

      by interiot ( 50685 ) on Thursday August 18, 2005 @06:24PM (#13351448) Homepage
      Or, the alternative you're missing...

      At one point, Transmeta was promising to be able to change the CPU on the fly from x86 to other things (e.g. ARM, MIPS), which is no problem, since it was doing the x86=>native translation anyway; all it has to do is switch to a different translation.

      So, all Intel needs to do is make the CPU be able to be switched from x86 to PPC at runtime. That's why Apple claims they can run old apps so quickly.
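A code-morphing design of the sort described above is, at heart, a software translator sitting behind a translation cache; switching guest ISAs means swapping the translator and flushing the cache. A deliberately oversimplified Python sketch (the instruction names and ISA tables are invented for illustration):

```python
# Hypothetical guest "instructions" mapped to host operations; a real
# code-morphing CPU would emit native VLIW bundles instead of lambdas.
GUEST_ISAS = {
    "x86-ish": {"add": lambda a, b: a + b, "mul": lambda a, b: a * b},
    "ppc-ish": {"addi": lambda a, b: a + b, "mulli": lambda a, b: a * b},
}

class CodeMorpher:
    def __init__(self, isa):
        self.translations = {}   # translation cache: opcode -> host op
        self.switch_isa(isa)

    def switch_isa(self, isa):
        # Switching guest ISA just swaps the translator and flushes the cache.
        self.decoder = GUEST_ISAS[isa]
        self.translations.clear()

    def execute(self, opcode, a, b):
        op = self.translations.get(opcode)
        if op is None:           # cache miss: translate once, then reuse
            op = self.translations[opcode] = self.decoder[opcode]
        return op(a, b)

cpu = CodeMorpher("x86-ish")
print(cpu.execute("add", 2, 3))    # 5
cpu.switch_isa("ppc-ish")
print(cpu.execute("mulli", 4, 5))  # 20
```

The expensive part in practice is refilling the translation cache after a switch, which is one reason running two guest ISAs side by side is harder than it sounds.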

      • Re:Rosetta (Score:2, Interesting)

        by NovaX ( 37364 )
        Transmeta never promised that. They were careful to let others make that hype, but never stated such themselves. If they had, it could have gotten them in serious trouble during their IPO. After the IPO, it became 'common knowledge' stated in any Transmeta article. It would have been a problem, since Transmeta even admitted that their ISA was heavily tuned to x86 and would have been difficult to support other ISAs on.

        Theoretically, it would have been possible and was good marketing buzz so Transmeta never squashed the
      • So you're saying that OS-X would basically stay the same and the chip would convert PPC instructions to micro ops instead of X86 instructions?

        This would be cool. But what would be the point of sending out developer machines that are X86 based? Why would they bother with the OS-X x86 port then?

        -ft
  • I speculate... (Score:2, Interesting)

    by suitepotato ( 863945 )
    ...that we will see, eventually...

    1. Four cores standard
    2. Chips pluggable into the mobo like Atari cartridges, up to eight CPUs
    3. Mobos as blades to passive backplanes
    4. Home blade servers and thin clients.

    I think in the end we'll see low-end, mid-range, and high-end blade everything in the future with modularity being the way of everything.

    But that's just my speculation.
  • He's been smoking some seriously strong weed to come up with the crazy ass ideas in that article.
  • by rolfwind ( 528248 ) on Thursday August 18, 2005 @06:22PM (#13351439)
    I suspect Apple's switch wasn't because of any cool chip (it'd be ridiculous to think they are getting Intel chips that no PC maker will have access to) but simply because it's one less defensive front - they don't have to worry about getting competitive chips anymore, which was becoming a problem with PPC, especially for the all-important notebook chips - IBM simply wasn't offering competitive PPC solutions anymore.

    It's one less thing to defend.

    Back when Apple first introduced PPC (1994?), they hyped it throughout because it was one of the few real tangible differences they could tout - the pre-OSX Mac OS was a buggy and unstable single-threaded OS while Microsoft at least had NT technology.

    Now OS X pretty much rocks and they still have their excellent hardware integration - they don't need a different chip to differentiate them - OSX is their added value.
    I still suspect the real reason for the switch was being able to get chips cheaper from Intel than from IBM, due to Intel's economies of scale and the fact that IBM wasn't producing enough chips on a consistent basis. With IBM soon producing Cell chips, that problem would likely have gotten worse.
    • NT technology is a tautology.
    Exactly. Plus, since IBM was not able to deliver the volume Apple needed, would not cut the prices Apple needed, and would not produce the laptop G5 that Apple needed, Apple took the added value of OS X and simply moved to a new chip. Now that the OS is the added value of the Apple product line, the processor is inherently less important than the OS to the user experience.
    • I waste my moderation point (-1 troll) to bite...

      pre-OSX Mac was buggy and unstable single-threaded OS

      Wow. Three false statements in a single sentence.

      Tell us in what way Mac OS {10-n} was a) buggy, b) unstable, and c) single-threaded?

      I'd really like you to tell me where Mac OS failed on you. Anything that was OS-related?

      I've had better uptimes in Mac OS 8 and 9 than any version of Windows you can throw at it. Right up to XP.

      Mac OS was threaded. In various ways. There was the Task Manager [apple.com], the Vertical retrace ma [apple.com]
      As far as I can tell, Mac OS 7.5 (that's on a PowerPC, a PowerBook 5300) is NOT fully multi-threaded. Apps in the background "stop" running until they get to the foreground again.
        • Wrong.

          Misbehaving applications could bog down other applications because Mac OS used "cooperative" multitasking. An application, through its normal course of operation, would relinquish CPU time through its event loop(s).

          It's a transparent process, done simply by polling the system for your next user, system, or idle event (aka "WaitNextEvent()"). This is where Mac OS's multitasking differs from the preemption modern OSes offer (Linux, Unices including OS X, and Windows). In those OSes, the kernel is the one
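The cooperative scheme described above can be sketched in a few lines of Python (the app names are invented, and a generator's yield stands in for the real WaitNextEvent() call): each app runs only until it voluntarily yields, so a single app that never returns to its event loop stalls everyone.

```python
from collections import deque

def app(name, steps):
    """A cooperative 'application': does a bit of work, then yields,
    mimicking a classic Mac OS event loop built around WaitNextEvent()."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # the WaitNextEvent() moment: hand the CPU back to the system

def run(apps):
    ready = deque(apps)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # let the app run until it yields...
            ready.append(current)  # ...then put it at the back of the queue
        except StopIteration:
            pass                   # app quit; drop it from the queue

run([app("Finder", 2), app("SimpleText", 3)])
```

Delete the yield from one app and the loop never gets back to the others, which is exactly the failure mode cooperative multitasking is infamous for.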
        It didn't have (full) memory protection, but this somehow made Mac apps more stable in the first place. When developers get called because their app takes down the whole system, they don't make the same mistake twice.

        My house doesn't have circuit breakers and somehow this makes me more careful with electrical appliances. When a shorted toaster burns down the house, I don't make the same mistake twice.

        I mean, I'm as much of an Apple fan as anyone, but really...

        Yes, classic MacOS had advantages, especiall

      • by bnenning ( 58349 ) on Thursday August 18, 2005 @07:35PM (#13351872)
        Tell us in what way Mac OS {10-n } was a) buggy b) unstable and c) single-threaded?

        (a) is a matter of opinion. (b) isn't; an OS where a single application failure can easily bring down the whole system is unstable by definition. (c) is technically false, but effectively true. The Thread Manager only supported cooperative threads, which doesn't really count. You could create preemptive threads with the multiprocessing API, but they were very limited as to what they could do (no memory allocation IIRC).

        I'm a Mac fan too, but there's no denying that the internals of Mac OS pre-X sucked. I still preferred it to Windows because of the UI, but I'm very pleased that with OS X I no longer have to make that tradeoff.
    That is a good point. I have actually heard people from Intel say that the switch was more about coupling Apple's PC market with those of the other PC manufacturers. You would think that the separation of markets would be an advantage, because Apple could take advantage of the price variation in x86 processors and maybe sell more computers when x86 prices were high. Because Apple computers tend to cost so much more, though, it didn't really move more people over when x86 PC prices were high, but it did move peo
    • I think you hit on it. I was talking to some guys from Freescale recently about processor offerings for one of our new board designs and the topic somehow got sidetracked on Apple's switch to Intel. They told me that nobody was really making much money selling processors to Apple. They had to invest a lot of $$$ into R&D to continue cranking out new chips for Apple, and Apple wasn't willing to pay much for the chips. As a result, a business decision was made to focus R&D in the area that had the
  • by RM6f9 ( 825298 ) <rwmurker@yahoo.com> on Thursday August 18, 2005 @06:25PM (#13351453) Homepage Journal
    ...The Farmer's Almanac speculates on the next generation "Beefalo" chip: Running from Longhorns daily into a pasture near you, the new "Beefalo" chip (tm) will multi-thread faster spreading odor and increased fertilization rate. Cores have been increased to 8 semi-solid, virtually discrete units that may be tracked onto the North bridge (If you don't wipe your boot sectors before then). Video processing speed will see a marked increase, although cooling remains a concern for these new chips...
  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Thursday August 18, 2005 @06:25PM (#13351455) Homepage
    It will be a 64 bit, multi-core, i860 [wikipedia.org] or i960 [wikipedia.org] based chip!

    Who told me? The mold that lives in the back of the fridge in the second snack room on the 7th floor of the 4th building at their 2nd site.

    Bwhahahahahahahaha.

  • Since the chips would be VLIW internally, they could have software x86-to-VLIW AND PowerPC-to-VLIW translators! So that makes the transition easier as well!
  • by thanasakis ( 225405 ) on Thursday August 18, 2005 @06:48PM (#13351593)
    Assuming that the article is generally correct, this upcoming processor will be able to morph into other architectures. Could this mean that we could have some sort of native (or at least semi-native) JVM or .NET processor? I am not certain whether implementing a Java virtual machine in hardware is feasible, but it would be an interesting possibility.

    Or it could be that the software JVMs of today produce good enough native code for any architecture (x86, UltraSPARC, PPC) that it is pointless to try to build a machine that interprets the classes directly?
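The hardware-versus-software question is easier to see with a toy stack-machine interpreter (Python; the bytecode is invented for illustration). A "Java chip" would implement this dispatch loop in silicon, while a software JIT compiles hot bytecode sequences to native code and skips the loop entirely:

```python
def interpret(bytecode):
    """Toy stack-machine interpreter for an invented bytecode.
    A hardware JVM would do this opcode dispatch in silicon; a JIT
    instead translates hot sequences to native code up front."""
    stack = []
    for op, *args in bytecode:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(interpret(program))  # 20
```

The per-opcode dispatch overhead is exactly what a JIT amortizes away, which is the usual argument against dedicating hardware to interpretation.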
  • Wow. (Score:5, Insightful)

    by pantherace ( 165052 ) on Thursday August 18, 2005 @06:49PM (#13351601)
    Can one say: Pure speculation?

    Apple has not been that spectacular at choosing chips for performance, judging from its history. M68k: good chip, but it was suffering from old age when they moved to PowerPC. (They could have moved to x86, ARM, or another processor at that time.) Now they announce they are moving to Intel, and suddenly Intel has some super-duper chip up its sleeve? I don't think so.

    The article starts from that basis and works up to Intel having some super-killer CPU.

    Despite the amount of hype surrounding dual-core, unless you massively change software (likely to happen eventually) to support SMP, things go slower on dual-cores than on single-core processors if the dual cores are clocked lower (as Intel's current chips are). What the article proposes is to duplicate the mistakes Intel made with Itanium. (It was announced a decade ago, or near enough to count.)

    Itanium 1 stripped out all the branch prediction, and similar things, relying on the compilers to do it. The result was that it got soundly thrashed by other 64-bit archs.

    So why does Itanium 2 not suck nearly as badly? HP's engineers mostly went back and put all that stuff back IN, because compilers and code translators are still (with a very few exceptions; I can think of two, one of them, FX!32, mentioned in the article) very slow. Even FX!32's speed wasn't due to the speed of translation; it was due to the huge (at the time) performance of the underlying Alphas. Sure, it may have been faster than the fastest x86 hardware implementation, but it was still quite slow compared to the native speed of the chip it was on.

    So the article speculates that Intel is indeed going to repeat the mistakes of the past, mistakes that *only* came to market because a) Intel has money, b) Intel has pride (oh, and c) got others to wipe themselves out... except IBM). I would think Intel would learn from its mistakes. Right now they should notice that a) processors can't currently be fabbed to run reliably at ~4 GHz, and they run really hot, and b) going the opposite route of improving IPC almost exclusively hasn't worked either (IA-64s are neither low-powered nor cheap). Instead they should work on the in-between, which (again thanks to Intel's tons of money) they already have in the form of the Pentium M.

      Intel traditionally is pretty open about their future product lines. They don't tell you everything, but developers are told what direction things are going. It wouldn't be in their interest to keep people in the dark and dump sudden changes on them. Hell, look at how long they spent talking up Itanium before it finally hit the market.

      It would also be a moronic move business-wise. Apple will be a major account for Intel, but not even close to the biggest. Hell, I'd be surprised if they even approach 10%
    • Re:Wow. (Score:3, Informative)

      Despite the amount of hype surrounding dual-core, unless you massively change software (likely to happen eventually) to support SMP, things go slower on dual-cores than single core processors, if the dual-cores are clocked lower (Intel's current chips).

      I agree that this is common wisdom, but this could also be why Apple would be a nice customer. Apple's desktops have been SMP for years now, and a lot of software has been engineered to take advantage of it. Most of the high-level libraries built into OS X li

  • Speculation (Score:4, Funny)

    by starrsoft ( 745524 ) * on Thursday August 18, 2005 @06:56PM (#13351653) Homepage
    Q: What's worse than listening to an experienced writer who knows his tech speculate on what Intel's next chip will look like?

    A: A bunch of slashdotters doing the same thing.

  • by swissmonkey ( 535779 ) on Thursday August 18, 2005 @07:04PM (#13351706) Homepage
    This article was written by Nicholas Blachford, the same fool who tried to analyze the Cell processor of the PS3 and described it as a supercomputer on a desk while not understanding a single thing about it.

    Seriously, it's worth a read for the laugh, but there's nothing worth believing in it, this guy doesn't know what he's talking about.
  • by Anonymous Coward on Thursday August 18, 2005 @07:04PM (#13351714)
    There's a better explanation of why the Inq article's speculation is bogus here:

    http://www.realworldtech.com/forums/index.cfm?action=detail&PostNum=3655&Thread=3&entryID=55310&roomID=11 [realworldtech.com]
