Intel Hardware

Intel Drops Tejas, Xeon To Focus On Dual-Core Chips 329

Posted by timothy
from the gimme-the-core-tom dept.
PunkerTFC writes "Reuters has an article about Intel dropping the fourth-generation P4 chip (codenamed "Tejas") and its Xeon counterpart (codenamed "Jayhawk"). Intel says they want to concentrate on their new 'dual-core' technology for desktop and notebook systems. This is essentially putting two processors on one chip, allowing for a doubling of performance with less energy use. The introduction of this technology was not expected for another year and a half. Rival chip maker AMD says they have the capability to produce dual-core chips and will introduce the technology when they "feel there is a market need.""
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Saturday May 08, 2004 @05:41PM (#9095825)
    but might this have something to do with the recently-announced Longhorn specs?
    • by Anonymous Coward
      I wonder if it is entirely right to think that MS's announcement about Longhorn requirements stems solely from Intel's change in roadmaps.

      AMD's Opteron, with its onboard memory controller, has been a perfect candidate for a dual-core setup since it was released (and will be getting one later this year). The Athlon64 is very similar to the Opteron, and thus it will be very easy to transition it to dual-core. The P4, on the other hand, already has its dual-core in the form of hyperthreading.

      I'd think A
    • by mikis (53466) on Saturday May 08, 2004 @07:09PM (#9096360) Homepage
      No. I bet the "recently-announced Longhorn specs" were a very clever troll, and I can't believe how many people HBT. All CPU & RAM requirements aside, why would an OS *require* Gigabit ethernet and wireless networking? This guy confirms it [weblogs.com], but hey, he works for Microsoft, so he must be lying.

      No, it has everything to do with Pentium M and AMD64 architectures kicking PIV's a$$.
    • I'm sure MS already had inside information on this switch, long before we knew. Intel plans to have these out one or two years before Longhorn, so MS guessed that the optimal Longhorn PC would be built with them. Considering it's been a few years and AMD64 is just now starting to be put in mainstream computers, it sounds like a good estimate.

      The one thing that irks me about this- AMD saying they would have dual-core cpus out when they feel the market is there. Intel said the same thing about 64bit and n
    • Those 'specs' were predicted 'average computer' that was going to ship with Longhorn. It was in no way related to the 'minimum specs'. My guess is Intel is not liking the fact that AMD's growth continues, and AMD seems to continue developing chips faster than Intel, so Intel's taking a drastic move to try and take the lead again... or something to that effect.
  • by Anonymous Coward
    FIRST POST

    Do you think this has anything to do with the fact that MS is shipping a database (traditionally considered able to leverage hyperthreading very well) on their desktop? HMMMM!
  • by Anonymous Coward on Saturday May 08, 2004 @05:42PM (#9095831)
    But multi-cpu system sales figures do not justify abandoning the single-cpu market in any way. This is a serious mistake or an admission that they just can't keep up with AMD anymore.
    • It's dual core, not dual processors.
      • yeah, the dual-cores probably share a memory interface. It's like the P3 days when each processor sat on a shared bus to the north bridge.
    • by Anonymous Coward
      You can't keep adding transistors to the same core, extending the pipeline, and adding more cache forever. That is what isn't justifiable. Also, reinventing design is a huge cost and some risk (x86 still sticks).

      The reason multi-CPU system sales are not high is because multi-CPU systems are high in price and much lower in supply than single-CPU alternatives. You don't see a lot of older chips in multi-CPU configurations for sale do you? Among other reasons, it's because chip makers would prefer you buy
  • Interesting. (Score:4, Interesting)

    by Anonymous Coward on Saturday May 08, 2004 @05:42PM (#9095834)
    But, does this suffer the same problems as current chips do wrt dual processors? Or quad processors?

    What are the penalties of this technology? Does anyone know?

    Sounds too good to be true for a dual core cpu to act as a single core proc.
    • " But, does this suffer the same problems as current chips do wrt dual processors? Or quad processors?

      What are the penalties of this technology? Does anyone know?

      Sounds too good to be true for a dual core cpu to act as a single core proc."

      Single-core CPUs already have multiple pipelines to support parallelism within single threads. Also, the P4's hyperthreading allows a single core to run multiple OS threads simultaneously. A dual core seems like an expansion of the hyperthreading concept, allowi
      • Re:Interesting. (Score:5, Informative)

        by AlecC (512609) <aleccawley@gmail.com> on Saturday May 08, 2004 @07:42PM (#9096538)
        Sooner or later you are going to bottleneck on the memory interface. Dual CPUs are going to give more capability than hyperthreading, at more cost. If they are strangled by the memory interface, there is no advantage to it. But if it gets more throughput - and Intel have probably simulated it to death - it could be the way to go.
  • Whoa, deja vu (Score:5, Informative)

    by gooberguy (453295) <gooberguy@gmail.com> on Saturday May 08, 2004 @05:43PM (#9095837)
    This [slashdot.org] has been discussed before.
  • FP (Score:5, Funny)

    by JPM NICK (660664) on Saturday May 08, 2004 @05:43PM (#9095839)
    FP FP Written Written from from my my new new dual dual core core chip chip from from Intel Intel. Still Still some some bugs bugs to to work work out out.
    • Re:FP (Score:2, Funny)

      by bhtooefr (649901)
      Funny Funny Funny Funny, but but but but if if if if this this this this were were were were the the the the case case case case, someone's someone's someone's someone's new new new new Tyan Tyan Tyan Tyan S4880 S4880 S4880 S4880 Opteron Opteron Opteron Opteron 848 848 848 848 rig rig rig rig would would would would do do do do this this this this too too too too, only only only only worse worse worse worse. Oh Oh Oh Oh, wait wait wait wait............
  • by kidventus (649548) * on Saturday May 08, 2004 @05:43PM (#9095846) Homepage Journal
    sold more chips than Intel during a two week period (52% to 47%). I wonder if Intel is finally feeling the heat from AMD? Maybe Dell (who only sells Intel) is pushing on them too.
    • sold more chips than Intel during a two week period (52% to 47%)

      Interesting statistic, but you didn't cite the source. Where did this figure come from?
    • Yes, of course, Intel got those sales figures and started shaking in their boots, leading them to drastically shift their business plans on a moment's notice.

      Realistically, long-term strategies are in the pipeline for months before they're ever announced to the public. Intel surely had several different plans, and decided that this one was more future-proof than the previous one. I doubt that a one-week trend had anything to do with their decision.
    • Thank You AMD (Score:3, Interesting)

      by w42w42 (538630)

      Thanks to AMD and their recent successes in the market, it seems Intel is finally focusing on its core business - manufacturing successively faster processors, not inventing new marketing schemes. Before this announcement I could only imagine chips like these being reserved for high-end Xeons.

      Competition is always a good thing.

  • Dupe Scoop... (Score:3, Insightful)

    by Kr3m3Puff (413047) * <me@@@kitsonkelly...com> on Saturday May 08, 2004 @05:43PM (#9095849) Homepage Journal
    Well weekends are for dupes it seems [slashdot.org]

    I mean this was interesting a couple days ago, but now it is old news...
  • Interesting (Score:2, Interesting)

    by JoeShmoe950 (605274)
    AMD seems very calm about this. If I were in AMD's position, I would be pretty scared. I mean, Intel is a year ahead of schedule.

    Personally, I'm just happy that soon enough I'll be able to buy a dual core chip.
    • Re:Interesting (Score:3, Insightful)

      by DrEldarion (114072)
      You'd think they wouldn't be so calm. If AMD lags behind Intel on this, they'll miss the whole wave of early-adopting nerds. These nerds will more than likely be very pleased with their purchase, and turn into fanboys. Fanboys, as we all (unfortunately) know, like to evangelize about their manufacturer of choice to other nerds (which creates more fanboys), and anyone who asks them for advice on a computer. Even if AMD comes out with something better afterwards, the damage will have already been done.

      Se
    • Not so interesting (Score:3, Informative)

      by eRacer1 (762024)
      I mean, Intel is a year ahead of schedule.

      The author incorrectly states that Intel's dual core CPU is "more than a year ahead of schedule". Six months ago, during the Intel fall analyst meeting, Intel claimed (slide #40) [investorshub.com] that dual core for home computers would arrive in 2005.

      This is a rather interesting bit of information from the article: "This strategy was not expected for at least a year-and-a-half, said Dean McCarron, the head of Mercury Research."

      Well, how is this news? Intel is claiming that they
    • Re:Interesting (Score:5, Informative)

      by mikis (53466) on Saturday May 08, 2004 @07:38PM (#9096523) Homepage
      Um, AMD announced this [xbitlabs.com] in September last year.

      "With coherent HyperTransport, it is inevitable that we will have multiple cores on a single chip. This is a tremendous opportunity because with our architecture the scaling is far superior to anything else that's out there," The Register quoted Mr. Sanders as saying.

      Also, see this: AMD CEO: "Dual-Core Opteron Will Shock the Hell Out of Everyone" [xbitlabs.com]. Ruiz confirms dual core Opteron in 2005.

      They say that Intel Tulsa (dual core Xeon) will arrive in about a year [xbitlabs.com] and Jonah (dual core Pentium M) is planned for 2005/2006 [xbitlabs.com].

      So, nothing new here for AMD.
  • when they "feel there is a market need."

    Um, the market would be me. The time would be now.

    Bring it on!

    I see that the new dual core Opterons are supposed to be pin compatible with existing boards. So that makes it possible to get an AMD server today, and in xx months' time pop in a new chip and turn it from a single-proc to a dual-proc (dual -> quad?) server. Nice. Now if only memory prices would come down some more, so I can enjoy a 16GB quad-proc server for under $3K.


  • As being the recommended chip for running Microsoft's Longhorn version of Windows. Wonder if this has anything to do with Intel's decision.
  • Parallel? (Score:3, Informative)

    by Uber Banker (655221) on Saturday May 08, 2004 @05:45PM (#9095866)
    Intel says they want to concentrate on their new 'dual-core' technology for desktop and notebook systems. This is essentially putting two processors on one chip, allowing for a doubling of performance with less energy use.

    Is this a parallel implementation then? In that case performance is only doubled for processes that can be performed in parallel.

    I think this is more related to moving from the P4 architecture to the Pentium M, as the M series is more scalable - taking the P4 any further requires a lot more power and generates a lot more heat.
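The parent's caveat about parallel-only gains is Amdahl's law. A minimal sketch (the parallel fractions below are illustrative, not measured figures for any real workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only part of a workload can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A perfectly parallel task doubles on two cores...
print(amdahl_speedup(1.0, 2))            # 2.0
# ...but a task that is half serial gains only about a third.
print(round(amdahl_speedup(0.5, 2), 2))  # 1.33
```

So "doubling of performance" is the best case; a second core buys little for a mostly serial desktop task.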
    • The other aspect that few people are discussing is the cost to build the chip and the profitability.

      The overall trend for desktop computers is "fast enough" and "cheaper" -- In a year or two, you could be looking at $250 Dell machines. Obviously in such a situation, the volume CPU has got to be cheap to build and not require a huge power supply and tons of cooling.

      It's ironic that just as AMD has gone for the high-end with their big, complex, and presumably expensive Athlon-64 chips, Intel has jumped on t
    • Re:Parallel? (Score:2, Insightful)

      by Xoro (201854)

      Is this a parallel implementation then? In that case performance is only doubled for processes that can be performed in parallel.

      This is only accurate if you're describing single-task performance. System-wide performance may be *more* than doubled, if you're dealing with loads that are causing a lot of switching overhead.

      And I don't think it's just a server thing. When my old dual cpu system finally died, I replaced it with a single cpu setup that ran nearly twice as fast (by MHz) as the two chips in t

    • Re:Parallel? (Score:3, Interesting)

      by ciroknight (601098)
      It's not only about scalability. Processors are fast enough; this is evident in companies simply masking the processor's actual speed spec (note: not performance spec). No, the next war will be one of innovation, simply because the market's really tired of the same old thing, just faster. AMD's extension of x86 is the perfect example of this. Even though its performance is on par with (and maybe a little faster, but not enough to matter, a couple percent, 10 at most) the other chips in its class (r
    • Most consumer software sold today is multi-threaded and would definitely benefit from a parallel-operating multicore system. Most modern OSes also support SMP already, so switching to multicore is no big deal in that area, nor is it difficult to take immediate advantage of.

      And you're right. The cores will be derived from the Pentium M, not the 4.
  • Dual core opterons (Score:5, Informative)

    by Mdalek (702460) on Saturday May 08, 2004 @05:45PM (#9095867)

    This seems to be the new trend;
    AMD will have dual-core Opterons next year: [arstechnica.com]
    • by Saville (734690) on Saturday May 08, 2004 @09:27PM (#9097050)

      How many people remember this AMD Dual Core K8 Architecture [vr-zone.com] slide? AMD has been planning this for a long time.

      They introduced the K8 on a .13 micron process and it was 192mm² with 1024K of L2 cache. Moving to .09 micron it will shrink to 114mm², and a dual-core version, with 1024K of L2 per core, may come in at ~215mm², not much bigger than the current Athlon64!

      AMD will claim the market is ready for dual core processors when they move to .09microns sometime next year. We've all read this quote [eweek.com] from AMD chairman and CEO (Hector Ruiz), right: "One of the most powerful things next year is going to be our dual-core product. To me, that's going to really shock the hell out of everyone, because it's going to be hardware-compatible, infrastructure-compatible, pin-compatible. I mean, people that have a 2-P system can slap in a dual-core product and end up with a 4-P system for the price of a 2-P. That's been the biggest drawback, everyone tells me. What keeps them from going from a 2-P to a 4-P system? It's price."

      Paul DeMone had a great article [realworldtech.com] about the 64bit processors we'll see in 2005 and the k8 is looking pretty good!
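For what it's worth, the die sizes in the parent comment are roughly consistent with simple area scaling. A back-of-the-envelope check (the 192mm/114mm figures are the parent's; real shrinks never scale perfectly because pads and analog blocks don't shrink):

```python
# Ideal die area scales with the square of the feature-size ratio.
k8_130nm_area = 192.0  # mm^2, the parent's figure for the .13 micron K8
ideal_90nm_area = k8_130nm_area * (90 / 130) ** 2
print(round(ideal_90nm_area))  # 92
# The quoted 114 mm^2 is larger than the ideal ~92 mm^2, as expected.
# A naive dual core would be 2 * 114 = 228 mm^2; the quoted ~215 mm^2
# suggests the two cores share some logic.
print(2 * 114)  # 228
```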

  • by skzbass (719269)
    How many people do you know with dual procs anyhow? The only one I know is a Mac friend. What kind of heat sink are we going to need for dualies? It's gotta weigh in around 5 lbs. and have the noise output of a Harley.
    • Well, if we're using automotive analogies, then I'd say as a dually [google.com], it won't need a heat sink, it'll need a radiator.
    • by mean pun (717227) on Saturday May 08, 2004 @06:36PM (#9096161)
      How many people do you know with dual procs anyhow? The only one I know is a Mac friend. What kind of heat sink are we going to need for dualies? It's gotta weigh in around 5 lbs. and have the noise output of a Harley.

      That's exactly what they try to avoid. Each core in a multi-core processor is simpler than a single processor of the current generation, but they make up for it by putting two or more of them on the same chip. Another way to look at it is that the parallel execution units of a current generation processor are made even more autonomous, and this is made explicit by declaring them to be separate processor cores.

      The point is to use the available transistors on a chip as effectively as possible. For a long time computer architects used the growing number of transistors to enlarge caches and pipelines, add execution units, and add other niceties (e.g. branch prediction, MMX), but the gains have gotten less and less (and were sometimes dubious to begin with).

      Multicore processors are only useful if people have enough parallelism in their applications to make it worthwhile. Therefore, it won't help every application, but that's also true for many tricks in existing architectures.

    • by jo_ham (604554)
      My Dual G5's heat sinks are large, but pretty light.

      They are rectangular boxes about 3"x3" square section, 5" long made of 1mm thick aluminium with lots of fins making an unobstructed tunnel for air flow.

      With a fan in front and behind each of these heat sinks, my G5 stays cool and quiet.

      The loudest fan in this box is the one up by the hard drives.

      The PPC970s in this box draw 51 watts each. The ones in the G5 Xserve draw 24 watts each.

      With careful design, the noise can be kept to a minimum. Sure, the he
      • by dfghjk (711126)
        Can you substantiate the claim that "Apple and other Mac developers have spent more time working on dual CPU optimised apps"? Apple didn't offer a true multitasking, multithreading platform until OS X. PCs have had them since OS/2 1.0, over 15 years ago. True multiprocessor support came shortly thereafter with NT 3.1. Macs are relative newcomers at this, and the Apple/Mac developer base is relatively small compared to Windows. The NT kernel, basis for current Windows platforms, has always worked well w
  • Are Intel... (Score:2, Interesting)

    by Phidoux (705500)
    ... trying to get themselves ready for Longhorn too?
  • Real impact (Score:5, Insightful)

    by onyxruby (118189) * <{onyxruby} {at} {comcast.net}> on Saturday May 08, 2004 @05:48PM (#9095883)
    Here's the real impact many of us will be feeling. Software vendors that license by the CPU have, in fair part, already indicated that they consider "dual core" chips to be two CPUs for licensing purposes.

    In other words, people are going to find themselves having to pay higher licensing fees with regular desktop computers as well as servers. Small workgroup servers could be really hard hit by this from some vendors.

    I wonder how this will play out with XP Home which only supports one CPU? AMD has the technology so they may well respond in kind when Intel does (dammit lead AMD, lead), which could have a fair impact in weaning the masses off XP Home. I don't think MS will let this go the route of hyperthreading with the "logical processor" support.
    • Re:Real impact (Score:2, Insightful)

      by Anonymous Coward
      Bah. You can be pretty sure if the Average Dell Machine comes with a dual-core chip, software vendors will be forced by customers to change their licencing policies.

      Instead, with everyone doing these small multi-core chips, you'll probably see "Per MIPS" pricing like in the mainframe world.
    • Re:Real impact (Score:4, Informative)

      by Anonymous Coward on Saturday May 08, 2004 @06:10PM (#9096017)
      I wonder how this will play out with XP Home which only supports one CPU?

      XP Home and Win2k3 do correctly recognize Xeons with hyperthreading as only one processor for licensing.

      Win2k thinks each logical processor in an HT Xeon is a real processor. So if you want a quad-Xeon box to run Win2k, you have to get the Win2k Advanced or Enterprise version. Regular Win2k only supports 4 processors.
      • Re:Real impact (Score:3, Informative)

        by IceFox (18179)
        Incorrect! If Hyperthreading is turned on in the BIOS, XP won't install on a quad box because it is a "license violation", stating that only four CPUs are supported.

        -Benjamin Meyer
    • In many cases a dual-core chip has similar performance to two conventional processors, so per-processor licensing isn't unfair. I don't think servers will have to pay more for licensing because people will just buy servers that have the same number of processors. (e.g. People will replace their 4-way servers with 2-chip/4-core servers, so licensing costs will be the same.)
  • by cubicledrone (681598) on Saturday May 08, 2004 @05:55PM (#9095929)
    "Sir, maybe we should introduce our dual core chip now!"

    "No... that's just what they'll be expecting us to do..."
  • by Anonymous Coward on Saturday May 08, 2004 @05:56PM (#9095940)
    ...will sell for the low-end and be called Halferon.
  • Dual core explained (Score:3, Informative)

    by tpengster (566422) <slash&tpengster,com> on Saturday May 08, 2004 @06:01PM (#9095963)
    Transistors are getting smaller and the chipmakers can fit more and more onto a chip. However, it is much cheaper (less design time) to simply run two cores with some "glue" hardware than to design a new core that is 8-way superscalar instead of 4 (for example).

    One way to look at dual-core is to view it as a dual-processor (MP) system with a very low communications cost, since both cores are on the same die. The disadvantage is similar; since the two units are not perfectly synchronized, such a system runs best with multithreaded code. A single-core CPU with the same number of transistors will run faster, while the dual-core is not quite "double the speed" of one of its cores.

    • Dual core explained fuzzily...

      You've got things quite a bit confused. The reason that engineers are going dual core is not because it's appreciably easier to design a dual 4-way CPU than an 8-way CPU. The reason they are going dual core is because there is not enough inherent parallelism in code (3-way is about the limit for most code) to feed an 8-way core. The reason for going dual core rather than 8-way SMT is because bigger CPUs are harder to scale to higher clock speeds.

      Synchronization has absolutely
  • Remember (Score:5, Insightful)

    by cubicledrone (681598) on Saturday May 08, 2004 @06:04PM (#9095978)
    Intel, like Microsoft, Dell and Sony, is a favored company.

    AMD, like Nokia, Apple and Nintendo, is not.

    AMD's strategy (Opteron instead of dual-core?) will therefore be called "a significant risk given the current market reality" while Intel's strategy (dual-core instead of Itanium?) will be called "a savvy decision for the technology giant," even though the media wouldn't know an Opteron or a dual-core CPU if one jumped up on their desk and did the tap number from 42nd Street.

    All of the general stories will make repeated and redundant references to the effect of Intel's strategy on the "tech-heavy Nasdaq."

    This is no different than the Sony vs. Nintendo console competition. The media doesn't like competition. Neither do the markets. (There is only room for three companies in any given market) It's so much easier to be a sycophant when your favored company has 80% of the market.
    • Nintendo isn't a favored company?

      When parents are considering what videogame console to buy for their little kids, do you think they even consider the other two?
      • Yes? Given that the PlayStation 2 is more popular than the Xbox and GameCube combined? You forget that it's the little kids who are spelling out precisely what the parents should buy!
    • Re:Remember (Score:5, Informative)

      by mikis (53466) on Saturday May 08, 2004 @07:19PM (#9096412) Homepage
      I mostly agree, only AMD announced their dual-core CPU strategy even before Intel did. In the words of Mr. Ruiz [eweek.com]:

      "One of the most powerful things next year is going to be our dual-core product. To me, that's going to really shock the hell out of everyone, because it's going to be hardware-compatible, infrastructure-compatible, pin-compatible. I mean, people that have a 2-P system can slap in a dual-core product and end up with a 4-P system for the price of a 2-P. That's been the biggest drawback, everyone tells me. What keeps them from going from a 2-P to a 4-P system? It's price"

  • I remember reading a while back that one of the Intel chips came out of the design group in India. Was that the Pentium-M? Or am I just remembering this totally wrong?

    It would be kind of funny if Intel cancelled its American chip designs in favor of continuing work on a design from India.
    • You're remembering wrong. The Pentium-M was designed in Israel. Intel did recently set up a design team in India, and they are investing a lot of money ($130 million) into their operations there.
  • Intel next year will sell chips for both desktop and notebook computers that combine two microprocessors onto a single piece of silicon, "like putting two cylinders in a car instead of having one big cylinder," Nathan Brookwood, an analyst with Insight 64, said.

    Or maybe Longhorn is so bloated, it needs its own CPU just to sustain the operating system, and another processor to run programs.
    • Wow. That really is one of the most interesting things I've heard in a while. Why not have the OS be a separate entity with its own proc, and one proc (and permissions set) devoted to user apps....

      Wow, I've been in the sauce today, but this idea is worth more thought.
      • Contrary to Slashdot ignorance and FUD, the OS doesn't spend most of its time running on the CPU.

        Most of what the OS does is IO, which idles the chip while waiting for the IO to complete. This is why all operating systems switch to the next task while waiting on IO. If your CPU is running at less than 100% usage, it's because every program is waiting on IO most of the time.
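The grandparent's point - that the chip idles while IO is outstanding, so the OS overlaps tasks - can be sketched with threads, using time.sleep as a stand-in for a blocking IO call (timings are approximate):

```python
import threading
import time

def fake_io_call():
    # Stand-in for a blocking read: the CPU does nothing while we "wait".
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=fake_io_call) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Four 0.2s "IO" waits overlap, so wall time is close to 0.2s, not 0.8s:
# the processor was never the bottleneck.
print(f"{elapsed:.2f}s")
```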

    • That's nothing new. For quite a long time now, running Windows on a dual processor system was preferred, because when Windows has some processing to do, it likes to eat the whole processor, and all applications become unresponsive... Sometimes just for a few seconds, but sometimes for a couple minutes. It's extremely annoying when you are trying to work on a Windows machine.

      Thank goodness it's just Microsoft's stupidity, because switching to any other OS solves the problem...
  • Part of the Longhorn specs, remember?
  • Deja vu...? (Score:4, Insightful)

    by YuppieScum (1096) on Saturday May 08, 2004 @06:29PM (#9096123) Journal
    Rival chip maker AMD says they have the capability to produce dual-core chips and will introduce the technology when they "feel there is a market need."
    Didn't Intel say that about 64-bit CPUs, right up until AMD released the Opteron and AMD64 CPUs... then had to play catch-up and eat a whole load of humble pie?
    • Except in this case AMD has already demonstrated a dual core Opteron system. I, for one, am very excited about all this. Dual processor going mainstream is a great thing, IMHO. I'm glad that Intel is doing it, because that means AMD will be soon to follow.
  • Wait... isn't "multiple core" technology the thing the PPC makers made a huge deal about introducing soon, back when the G4 was first introduced, and then quietly dropped?
  • by brg (37117) <brg @ d g a t e.org> on Saturday May 08, 2004 @06:33PM (#9096143) Homepage
    The UltraSPARC IV processor is also essentially two UltraSPARC III processors on a chip, integrated using chip multithreading (CMT) technology. Here is an article [internet.com] and some marketing blurbs [sun.com] about the UltraSPARC IV.

    The current IBM POWER4 and upcoming POWER5 chips are both dual-core chips. Here is a nice presentation [hotchips.org](PDF format) about the POWER5; you can see in the die photos where there are two cores. There have also been rumors of a dual-core PowerPC [theregister.co.uk] based on it, but nothing concrete yet.

    Broadcom (which bought SiByte) markets a dual-core, 1GHz 64-bit MIPS chip called the BCM1250 [broadcom.com] which has a lot of integrated networking goodies.

    Finally, it bears pointing out that on the other side of Intel's severed corpus callosum [disenchanted.com], they're also working on a dual-core chip [theinquirer.net].

  • While I think the Pentium-M is an excellent CPU, let's not forget that it is essentially just a Pentium III with a quad bus, lots of cache and SSE2 instructions.

    Furthermore, the P3 is just a Pentium Pro with MMX, SSE and on-die L2 cache.

    A little retro? Seems strange that the future is in a P6 architecture. Maybe when these get too hot we'll move to a massive array of 486s.
  • In the Register's typically irreverent and insightful style:

    Intel says Adios to Tejas and Jayhawk chips [theregister.com]

  • Lower power? (Score:5, Interesting)

    by CatGrep (707480) on Saturday May 08, 2004 @06:49PM (#9096246)
    It's not clear to me that a dual-core processor would take less power than a single-core processor. Sure, a dual-core processor _will_ take less power than two single-core processors on a board, so at the system level a dual-core machine will draw less than a dual-processor system - but the power problems we're seeing now are primarily at the chip level.

    BTW: As someone who 'knows' people that work at Intel, this decision was a pretty huge one on the 'Richter scale'. Thousands of people found out in the last couple of weeks that they were being redeployed to different projects (or making major changes on current projects). This decision is having a huge effect inside of Intel. I suspect that this kind of shake-up means that the higher-ups at Intel were very afraid that AMD is making major inroads, and they finally realized that they couldn't keep going in the direction they were headed without disastrous effects on market share.
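The chip-level power argument comes down to dynamic power scaling roughly as C·V²·f: two slower cores can run at a lower voltage, and the squared voltage term is where the savings come from. A sketch with made-up numbers (none of these figures come from Intel or the article):

```python
def dynamic_power(capacitance: float, voltage: float, freq_ghz: float) -> float:
    """Approximate dynamic CPU power: P ~ C * V^2 * f (illustrative units)."""
    return capacitance * voltage ** 2 * freq_ghz

# One hypothetical core pushed to 4 GHz, needing 1.4 V:
single = dynamic_power(1.0, 1.4, 4.0)
# Two cores at 2 GHz each, tolerating a lower 1.1 V:
dual = 2 * dynamic_power(1.0, 1.1, 2.0)

print(round(single, 2))  # 7.84
print(round(dual, 2))    # 4.84 - same nominal throughput on parallel
                         # work, ~40% less dynamic power
```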
  • What's with AMD?

    "When we feel there's a market need?"

    These morons introduced 64-bit chips 5 years before anyone cared, and crippled the technology by making it straight IA32 with more bits.

    The market needs dual-core CPUs to advance.

    There's no way to get CPUs any faster without reaching current draws that no power supply can reasonably handle in that space (hint: 100 Watts at 1.2 Volts is 83 Amps).

    The only solution is to divide the computation among several processors and parallelize.

    AMD's respo
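The parent's hint checks out: current is power divided by voltage, so at core voltages a 100 W chip needs enormous current delivery. A one-line sanity check:

```python
power_w = 100.0      # watts
core_voltage = 1.2   # volts
current_a = power_w / core_voltage
print(round(current_a, 1))  # 83.3 amps through the socket's power pins
```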
  • I'm picking up on a divergence trend here.

    AMD went with x86-64, and Intel said "we'll wait until there is a need."
    Intel is now going dual-core, and AMD says "we'll wait until there is a need."

    I think AMD has the upper hand, though. Intel has the 64-bit technology, but doesn't want to release it to the consumer market yet, more than likely because the 64-bit version of Windows sucks hardcore. AMD could double the cores on an x86-64 proc and beat Intel yet again.
  • by sweede (563231) on Saturday May 08, 2004 @10:13PM (#9097288)
    As explained on overclockers.com (copied so as not to /. the guy's website):

    According to Reuters [reuters.com] and the Wall Street Journal, Intel is supposed to officially announce today that they're not going to bother with the Tejas generation of PIVs/Xeons.

    This ought not come as too much of a surprise to those of you who read this [slashdot.org] last March, and we openly wondered whether Tejas was going to see the light of day a little while back [slashdot.org].

    Yes, this is a major announcement that will effectively knock Intel out of the box in the cutting-edge overclocking world for at least something close to eighteen months. This essentially leaves us with whatever AMD chooses to offer.

    Nonetheless, the biggest aspect to this story is not the "what," but the "why."

    A few days ago, the chief technology officer at IBM, Bernie Meyerson, told an industry forum that the traditional and expected increase in speed just from shrinking the manufacturing process is dead [eetimes.com].

    To quote:

    "Somewhere between 130-nm and 90-nm the whole system fell apart. Things stopped working and nobody seemed to notice. . . . Scaling is already dead but nobody noticed it had stopped breathing and its lips had turned blue."

    (This comes from the company that AMD paid $46 million to help build 90nm chips, BTW. It also comes from the company that was supposed to have 3GHz 90nm PowerPC chips ready for Apple in a couple months, but is now talking about eventually getting to 2.5GHz.)

    Meyerson said the biggest reason for the problem is power leakage, the same as what Intel has been saying. He also pointed out that the problem with power leakage is "nonlinear."

    That's a fancy term for saying "it doesn't get slowly worse; you get past a certain point, and everything suddenly falls apart on you."

    It's Not Quite Over

    Mr. Meyerson is not saying "it's all over." What he is saying is that the era of easy, big gains from each new generation of processors is over. As he put it, "60 to 70 percent of the benefit of each new generation of manufacturing would have to come from innovation."

    By that he means technologies like SOI and strained silicon, though he implied that these were not long-term fixes to the problem.

    What is clear is that future technological advances are going to be a lot harder to do, cost a good deal more, and be a lot harder to work with than has been the case in the past. The old way of doing things is broken, and there's no mature alternative around at the moment.

    Perhaps one will eventually show up, but the magic bag is empty at the moment, and it will probably take years to come up with some major new tricks.

    In the meantime, progress will slow down.

    Playing Noah's Ark

    In all likelihood, Intel's short-term answer to this problem is to stop revving and start adding. Processors, that is. The son of the Pentium M, which will become Intel's next generation, will almost certainly be a two-headed beast. In short, a 6GHz processor won't be a 6GHz processor; it will be two 3s.

    AMD plans to do exactly the same (which ought to tell you that SOI, good as it is, is no long-term fix to this problem).

    This is hardly something either party would willingly want to do rather than increase speed, simply because the vast majority of current programming does not (or even cannot) work better with two-headed action.

    It's certainly not something Microsoft wants to deal with on the OS side (much less the armies of non-MS programmers out there), and it's probably a big reason why Longhorn keeps getting pushed back.

    It's going to happen because the hardware people don't have a choice in the matter.
