
Intel To Redesign PC With "Grantsdale" Chip

MarkRH writes "Over at ExtremeTech, we tracked down some Intel roadmaps that discuss "Grantsdale", Intel's most important chipset in nearly a decade. Grantsdale brings PCI Express to the PC, so get ready to toss out your motherboard, AGP graphics card, and maybe a host of other components, too. Also check out our articles on the "Tejas" microprocessor, Intel's first CPU to forego pins (check out the waffle iron socket!), as well as the real reason Banias saves so much power."
  • by writertype ( 541679 ) on Friday February 28, 2003 @01:40AM (#5403844)
    It's going to be really interesting, I think, to see what this does for the holiday selling season. Since it's out there now that Grantsdale is going to have such a dramatic effect on PC architecture, what is this going to do for sales of graphics cards? Of sound cards?

    It looks like PCI will be supported in some way, but it's almost up to a motherboard manufacturer to come forward and say, "OK, we're only going to support one PCI slot, so figure out what you want to keep, now."

    My guess is that Nvidia's NV35 will be released later this year (fall?) on AGP8X, but that it will REALLY run well on PCI Express. So--wait, or buy? An old question, but with far more significance.
  • Why NewCard? (Score:4, Interesting)

    by KiahZero ( 610862 ) on Friday February 28, 2003 @01:43AM (#5403861)
    I don't understand why revamped PC-cards are being pushed for desktop computing. I can understand increasing the bus speed on PCI cards (faster real-time TV encoding... yay!), but why does this need to happen in cards the size of two quarters?

    Is the goal to make it so that users with two PCs can carry peripherals from one computer to the other? I would also hope that there will be legacy ports. I'm not planning on buying a new chip for a while, but I really don't feel like having to buy brand new hardware when I do. I'll have to buy a new video card (no AGP port), but they could at least put a few standard PCI ports on the mobo so I could slap in my more expensive expansion cards.
  • by craenor ( 623901 ) on Friday February 28, 2003 @01:47AM (#5403883) Homepage
    If you are thinking about making a Portable PC Purchase and are looking for either a performance or "road warrior" system...just wait a bit.

    The ones I've been playing with at work just absolutely rock. You can clearly see the difference that 1MB of L2 cache makes...and combined with systems that already have decent battery life, you won't have to worry about whether or not you'll be able to finish the Braveheart DVD on battery power.

    Craenor
  • PCI slots (Score:1, Interesting)

    by DrMrLordX ( 559371 ) on Friday February 28, 2003 @01:54AM (#5403911)
    On a side note, if they do design Grantsdale well, who cares about your legacy PCI slot? Stuff like sound cards, NICs, modems, etc. should all be integrated with the motherboard a la nForce 2. Or, at least, the option for such a configuration should exist. I, for one, know that the only PCI card I have right now that I actually use is a horribly dated Ensoniq AudioPCI. Integrated sound solutions, even now, kick its ass. Oh, wait, and my modem. wheee. 56k powah

    Anyway, old PCI stuff should be easily replaced by integrated components on the motherboard. One available legacy PCI slot would likely accommodate the rare exceptions.

    Support your slashdot trolls! View at -1, and mod up all troll, offtopic, and flamebait posts. Thank you.
  • by naktekh ( 517517 ) on Friday February 28, 2003 @01:58AM (#5403926)
    Nvidia's been releasing the NV30 cards (Geforce Ti4200 and MX series) as AGP8x modules.

    The problem is that these cards tend to sit behind a PCI-to-AGP bridge rather than being true AGP graphics cards, so the bandwidth offered by the AGP bus goes largely unused -- what you essentially have is a PCI card running at a slightly faster dedicated bus speed.

    If PCI Express can truly deliver, I'll be impressed... but Intel's known for making decisions that are not necessarily widely implemented in the long run (remember Rambus?). I'm taking a wait and see approach with this one.
  • by Anonymous Coward on Friday February 28, 2003 @01:59AM (#5403928)
    Basically, there is no need for more speed from systems; I agree with your point there. I suspect this is a case of moving forwards for the sake of moving forwards. There are very few REAL applications that require anywhere near 3GHz. Currently, even top-line servers rarely run much above the 1.x GHz mark, favouring more CPUs instead -- something that can be done more easily and less wastefully with current chipsets. Compatibility is also kept.

    Honestly, a PC with eight $20 CPUs would end up far more responsive than, and just as useful for every task as, one with a single several-hundred-dollar chip.
  • Joy of joys (Score:4, Interesting)

    by buffer-overflowed ( 588867 ) on Friday February 28, 2003 @02:11AM (#5403982) Journal
    Even more stuff that as someone who uses computers primarily for work, I don't need.

    Sure it looks good, yeah, I'm all excited about a "new era of computing," but it breaks backwards compatibility with all of my old stuff and I bet it still can't outperform the mainframe I program on now in terms of raw MIPS.

    Why did we ever move to PCs from thin clients in the first place? We have consoles for gaming, Windows for PC gaming, and *nix for serious work (try doing anything else under, say, Solaris -- and posting to slashdot doesn't count). Why all of the redundancy? Aren't we in an economic downturn? The bus speeds and improvements are nice, don't get me wrong... but in a PC? It removes the PCI bottleneck problem, but I don't see where it removes the HDD bottleneck in terms of raw speed.

    All in all I'd say it's a nifty gadget.

    When we get holographic/full immersion, give me a call. I'd love to see what my brain can output in raw source without needing to actually type.

    --I'm just continuing my tradition of posting drunk, pay me no heed. Don't post to slashdot under wine.
  • by buffer-overflowed ( 588867 ) on Friday February 28, 2003 @02:18AM (#5404012) Journal
    And sometimes you need an ISA slot. It's rare, but recently I've had occasion to really, really need one (in fact several...).

    Sure, they're slow, ancient, legacy (apologies for the redundancy there), but sometimes you just really need an older piece of hardware, or a board you can solder and design yourself without an EE degree.

    The same will be true of PCI. There are more PCI cards out there than ISA, so PCI Express should really be backwards compatible, capable of both modes. Or at least have a few slots that support both, then phase PCI out over a few years.

    Why don't major vendors get the fact that some of us like our legacy stuff and don't want to move just because we "have" to?
  • by writertype ( 541679 ) on Friday February 28, 2003 @02:36AM (#5404078)
    That's the question. If you read the article, there are going to be four PCI Express x1 slots.

    OK...so does that mean those are going to take the place of the PCI slots that will normally be found within a motherboard? PCI will be supported--but how many slots will we have to work with?

  • by MatthewNewberg ( 519685 ) on Friday February 28, 2003 @02:56AM (#5404146) Homepage
    One fast CPU is always going to have an advantage over multiple slower CPUs. It takes a lot of bookkeeping in the background to assign different tasks to different CPUs. Not to mention that programs need to be written multi-threaded to take advantage of additional processors.
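A quick way to see this point is Amdahl's law: if only part of a program's work can be spread across CPUs, the serial remainder caps the speedup no matter how many cheap chips you add. A minimal sketch in Python, assuming (purely for illustration) that 80% of the work parallelises cleanly:

```python
def amdahl_speedup(parallel_fraction: float, n_cpus: int) -> float:
    """Upper bound on speedup when only part of the work can be parallelised."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# Assume 80% of the work parallelises cleanly (an arbitrary example figure).
for n in (1, 2, 4, 8):
    print(f"{n} CPUs -> at most {amdahl_speedup(0.8, n):.2f}x")
# Even with 8 CPUs the ceiling is ~3.33x, so eight cheap slow chips can easily
# lose to one fast one on code that isn't written to be heavily multi-threaded.
```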
  • Re:Not necessary (Score:0, Interesting)

    by Anonymous Coward on Friday February 28, 2003 @02:59AM (#5404153)
    With Intel and AMD delivering faster and more powerful processors at a rate which makes your head swim, the consequences are plain as day. Apple is hurting, its spindly financial footing sinking ever deeper into that fiscal bog of no return. Frankly, many prominent industry analysts have crunched the numbers, concluding that Apple's outlook is bleak indeed.

    In Apple's latest numbers released in January for its fiscal first quarter of 2003, revenue fell from a year earlier and all of the company's major computer lines saw diminished numbers. PowerMac sales were down 20%, while iBook sales fell 8%.

    At the same time Apple's sales were falling, PC sales rose, though just slightly, according to figures from IDC released last month.

    The last time Apple was in this state, it brought back co-founder Steve Jobs to fix its issues. He fostered the development of the iMac and secured a US$150-million investment from Microsoft. But there aren't any new iMacs in Apple's future and Microsoft, bolstered by its victory over the U.S. Department of Justice, is clearly not going to help the beleaguered computer maker this time.

    So what have you got left? Apple is a company that controls around 3% of the computer market, has recently undergone a restructuring and is slowly fading into nothingness. Software makers don't even have Mac users on their radar and it's not like Apple can bring Mr. Jobs back to right the ship this time -- he's already there.

    Stick a fork in 'em -- this Apple is cooked.

  • Re:Most Important? (Score:4, Interesting)

    by josh crawley ( 537561 ) on Friday February 28, 2003 @03:06AM (#5404174)
    >>Intel's most important chipset in nearly a decade

    >Of course, because this will be the first chipset to fail in the marketplace
    >because computers are already fast enough for businesses, and gamers already have
    >overkill. The first market failure is always an important landmark.

    If anything, I'd like to see an add-on vector processor for high-speed math. The G4 already has this in the form of its AltiVec instruction set. I would also want the ability to program it directly (in chip asm) to do misc functions.

    Personally, they can take this waffle-chip and shove it. If anything, I'd want an architecture where EVERYTHING's on a very high speed, very high bandwidth quad-plane bus with basic controllable logic. You put drive cards on it, gfx cards, sound cards, network cards, memory, CPUs... anything. It would be the backbone of the system where anything would go. You could build a simple scan/bootstrap code to find what devices do what. It could be a simple hex line of "whatis" information (a toy sketch of this follows below). To those who say this isn't possible: I believe the Altair 8800 used a similar architecture. You want a "beowulf" system? Add one drive controller, and the rest CPU controllers. BAM! You now have an insta-BeoBox. You could also add DIFFERENT CPU architectures with this system, provided they conform to your bus setup (including the AltiVec and x86-like ones I want).
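Purely to illustrate the "scan/bootstrap code plus a hex line of whatis information" idea in the comment above, here is a toy enumeration loop. The bus layout, the descriptor format, and every name in it are invented for the sketch and do not correspond to any real hardware:

```python
# Toy device table: slot -> a made-up "whatis" descriptor string the firmware
# might read off each card at boot (class:vendor:model, all invented).
BUS_SLOTS = {
    0: "cpu:acme:vector86",
    1: "cpu:acme:altivec-like",
    2: "disk:acme:raid-ctl",
    3: "net:acme:10gbe",
    4: None,                      # empty slot
}

def scan_bus():
    """Walk every slot, decode its descriptor, and build a device map."""
    devices = {}
    for slot, whatis in BUS_SLOTS.items():
        if whatis is None:
            continue
        dev_class, vendor, model = whatis.split(":")
        devices.setdefault(dev_class, []).append((slot, vendor, model))
    return devices

found = scan_bus()
cpus = found.get("cpu", [])
print(f"{len(cpus)} CPU card(s) found -> add more and you have an insta-BeoBox")
```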
  • Who cares? (Score:5, Interesting)

    by Anonymous Coward on Friday February 28, 2003 @03:06AM (#5404175)
    First of all, PCI Express won't arrive until the second half of NEXT year.

    Second, PCI Express x16 merely doubles AGP 8X bandwidth. We can expect the same "dramatic" (1-2%) performance increase that we saw going from AGP 4X to AGP 8X. It will take many years until this kind of bandwidth is really needed. Since high-end video cards will have 512MB of very fast (~40GB/s) local memory in H2 2004, the 4GB/s of bandwidth offered by PCI Express won't make much difference compared to a 2GB/s AGP solution (rough numbers are sketched below).

    PCI Express add-on cards won't be popular anytime soon, since:
    1) The PCI replacement (PCI Express x1) offers just 250MB/s of bandwidth, which isn't a lot more than the 133MB/s offered by current PCI.

    2) >90% of users won't need any add-on cards in H2 2004. Currently we have the following integrated on the chipset/motherboard:
    - two 100Mbps NICs
    - sound with better quality than the original Audigy
    - FireWire/USB2, etc.

    In 2004 we will also have:
    - NICs updated to 1Gbps
    - wireless LAN
    - DSL modem

    3) In the server market, PCI Express won't be popular since it isn't compatible with PCI. Servers currently use PCI-X (1GB/s), and it will be replaced by PCI-X 2.0 (2GB/s). That is enough bandwidth for many SCSI RAIDs and Gigabit NICs.
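To put rough numbers on the bandwidth claims above: the figures below are the commonly quoted spec numbers (250MB/s per PCI Express 1.0 lane, ~2.1GB/s for AGP 8X, ~133MB/s for classic 32-bit/33MHz PCI); the snippet itself is only a back-of-the-envelope sketch:

```python
# Back-of-the-envelope bandwidth comparison for the figures quoted above.
PCIE_LANE_MB_S = 250          # PCI Express 1.0, per lane, per direction
AGP_8X_MB_S = 2100            # AGP 3.0 (8X)
PCI_MB_S = 32 // 8 * 33.3     # 32-bit bus * 33.3 MHz ~= 133 MB/s

def pcie_bandwidth(lanes: int) -> int:
    """Aggregate one-way bandwidth of a PCI Express link, in MB/s."""
    return PCIE_LANE_MB_S * lanes

print(f"PCIe x1 : {pcie_bandwidth(1):>5} MB/s vs PCI    {PCI_MB_S:.0f} MB/s")
print(f"PCIe x16: {pcie_bandwidth(16):>5} MB/s vs AGP 8X {AGP_8X_MB_S} MB/s")
# PCIe x1 is roughly 2x PCI, and PCIe x16 is roughly 2x AGP 8X -- which is the
# "just doubles the bandwidth" point the comment is making.
```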
  • by S_hane ( 86976 ) on Friday February 28, 2003 @03:17AM (#5404223)
    > Why don't major vendors get the fact that some of us like our legacy stuff and don't want to move just because we "have" to?

    Why don't consumers get the fact that their hardware would be faster, cleaner, easier to use, and downright sexier if legacy stuff didn't have to be supported?

    Take Intel CPUs. They're a kludge. A terrible, messy, evil kludge. And they're a kludge because they have to support legacy applications that ran on the 8086.

    Intel, of course, is making exactly the same mistake by attempting to emulate x86 modes on the Itaniums.

    If you really, really want to use legacy stuff, then go and get a PCI to ISA Bridge or something. But don't try and force ISA compatibility into PCI-Express, because that's just going to make things slower (and messier) for everyone else.

    -Shane
  • Re:Joy of joys (Score:4, Interesting)

    by be-fan ( 61476 ) on Friday February 28, 2003 @03:18AM (#5404232)
    Grognard. It's not 1992 anymore and PCs aren't lumbering beasts.

    Thin clients: How are people going to use this at home? Over their 28.8 dialup connection? With the work I do, I can peg pretty much anything you throw at me. You think they want users like me on shared systems? You think I want other users slowing me down?

    PC architecture: A modern PC has more resources than most RISC workstations that are 5x the price. Ever since the P4 came out, PC memory bandwidth (one of its traditional weak points) has skyrocketed. By the end of the year, it will be up at 6.4 GB/sec, which is an impressive number even for an SGI or Sun machine.

    Bottlenecks: What do you do where the HDD is the bottleneck? After an hour of use or so, my Linux system pretty much runs out of RAM. On workstation tasks, the HDD is often not the bottleneck. It isn't for the 3D rendering I do, the scientific sims, the gaming, the programming -- pretty much anything. In fact, I thought it was going to really suck moving to a P4 laptop, because of the slow 4200 RPM hard drive. Ever since I put 640MB of RAM in there, I haven't noticed any slowdown at all.
  • by rainwalker ( 174354 ) on Friday February 28, 2003 @03:30AM (#5404284)
    From the article: "...the "Granite Peak" initiative, which limits the number of driver revisions to one every six months, making the launch of each new chipset even more significant."

    So, what exactly does this mean? If I have a problem with Intel's drivers that, say, prevents my machine from booting (not that THAT has ever happened) I have to wait 6 months for the next revision? I don't understand what driver revision schedules have to do with product release cycles.

    Also from the article: "...[people buying] the latest GeForce card near the end of this year, when six months later it won't work [fit] inside a new PC?"

    This is a non-issue for most people, I think. Those people who buy new video cards every six months (you know who you are) aren't really going to balk at replacing motherboard, CPU, and video card all at the same time, if it yields a 25% performance improvement (or more). At the other end of the scale are people who upgrade video cards by buying a new Dell (or whatever), for whom this is also not an issue. Those of us in the middle just won't buy a new motherboard/CPU until we can afford to replace the whole shebang anyway. Once we do, we will most likely build a whole new machine.

    Anyway, it's not like nVidia and ATI are going to stop making AGP cards; I'm sure that both connections will be supported. If you look around, you can still get PCI versions of most cards on the market (shudder).
  • by -tji ( 139690 ) on Friday February 28, 2003 @04:12AM (#5404402) Journal
    Google [google.com] comes up with this link [sagebrushcorp.com] that says Texas was derived from the Hasinai Indian word "tejas", which means friend.
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Friday February 28, 2003 @04:23AM (#5404436) Journal
    The article mentions that Intel may do away with USB ports in Grantsdale systems -- that PCI Express may get rid of USB entirely -- but that if it does have USB, it will have at least 8 ports.

    OK, that's pretty weird. But why would they get rid of a popular, reasonably high-performance, and cheap interface like USB? Is FireWire 800 going to take its place? SATA? Is everything going to be wireless?

    thad
  • by master_p ( 608214 ) on Friday February 28, 2003 @07:16AM (#5404802)
    Sure, I need faster speed (Doom III comes to mind, and a whole load of cinema-quality games -- imagine Half-Life 2, for example, or Duke Nukem Forever), but there are some other things that bother me:

    1) The PC BIOS!!! For how long should we tolerate shitty 16-bit PC BIOSes? I mean, in the days of PCI-X and 800MHz memory buses, the PC's BIOS is still 16-bit and operating systems need to perform wild tricks to boot.

    2) The partitioning scheme. Only four primary partitions!!!! This is an artifact from the days of the original PC (see the sketch below for where the limit comes from).

    Ok, not so important but irritating nevertheless.
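For the curious, the four-partition limit mentioned in point 2 comes from the original MBR layout: the partition table is a fixed 64-byte region at offset 0x1BE of sector 0, with room for exactly four 16-byte entries. A minimal, illustrative parser (field offsets follow the standard MBR layout; the device path is only a placeholder):

```python
import struct

MBR_PARTITION_TABLE_OFFSET = 0x1BE   # fixed location in sector 0
ENTRY_SIZE = 16                      # each entry is 16 bytes...
NUM_ENTRIES = 4                      # ...and there is room for exactly four

def read_mbr_partitions(path="/dev/sda"):  # placeholder device path
    """Return (bootable, type_id, start_lba, num_sectors) for each MBR slot."""
    with open(path, "rb") as disk:
        sector0 = disk.read(512)
    entries = []
    for i in range(NUM_ENTRIES):
        off = MBR_PARTITION_TABLE_OFFSET + i * ENTRY_SIZE
        raw = sector0[off:off + ENTRY_SIZE]
        boot_flag, ptype = raw[0], raw[4]
        start_lba, num_sectors = struct.unpack_from("<II", raw, 8)
        entries.append((bool(boot_flag & 0x80), ptype, start_lba, num_sectors))
    return entries
```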

  • Re:What's the point? (Score:2, Interesting)

    by vofka ( 572268 ) on Friday February 28, 2003 @08:23AM (#5404961) Journal
    Sure, one GFX card may not saturate the bus, but what about more than one?

    Imagine having a board with several PCI Express slots - put a Good Graphics Card in, say, each of 3 slots, and multihead your games :)

    Also, if I understand it correctly, PCI Express is an upgrade to / replacement for PCI. Sure, it allows high-bandwidth communication to a graphics adapter, but also to SCSI/IDE controllers, NICs, video capture hardware, etc.
  • by Anonymous Coward on Friday February 28, 2003 @08:33AM (#5404978)
    I'm not really sure what they mean, but one interpretation could be that they are going to get rid of USB 1.1 for good and only support USB 2.0???

    I can't see Intel dumping USB totally...

  • Re:Joy of joys (Score:3, Interesting)

    by MtViewGuy ( 197597 ) on Friday February 28, 2003 @08:53AM (#5405026)
    I think PC architecture is going to undergo some drastic speed improvements over the next 24 months--and that's not including the CPU.

    Between faster chipsets, big increases in memory bandwidth (PC3200 DDR-SDRAM is only the beginning), and Serial ATA, you'll see overall faster computers anyway.
  • by dpilot ( 134227 ) on Friday February 28, 2003 @09:06AM (#5405049) Homepage Journal
    As others have said, so what if a new motherboard is needed - they're obsolete about as fast as a CPU chip, anyway. Another post indicates that PCI Express is a reasonably open standard.

    But the IP/lock-in aspects still bother me. Intel behaved like a spanked puppy for a few years after their Rambus fiasco, but lately they seem to be back at those games, again.

    They've taken steps to ensure that Banias/Centrino only sells with their chipset. It's only a logo program, but it probably carries a heavy enough advertising kickback behind it to have the force of law.
    The Itanium is *the most proprietary* CPU on the planet, or at least a contender for the crown. No second sources, no cross-licensing on any of the IP.

    So in this light, anyone want to bet that Tejas is not tied to Grantsdale?

    Assuming it is, the net effects are questionable. It appears that Intel is driving compatibility away from the CPU pins, and out to the motherboard plug interface. I seriously doubt they have the capability to push it any further than that. In the long run, this probably opens the market niche for AMD and Via C3, because it's closing the market for low-cost chipset providers to service Intel CPUs.
  • This is dumb (Score:2, Interesting)

    by Anonymous Coward on Friday February 28, 2003 @11:11AM (#5405939)
    The ultimate in CPU packaging will be RF, that is, one high bandwidth interconnect using a very dense modulation scheme like 1024 QAM. The bus of the motherboard will be Infiniband.
    The CPU will look like old style ceramic power triodes, with a built-in bonded heat sink. There will be two low inductance connections for power, and a hard line SMA connector for everything else.
  • by HiThere ( 15173 ) <charleshixsn@ear ... .net minus punct> on Friday February 28, 2003 @12:39PM (#5406728)
    There are certainly tradeoffs, but it's not a universally true statement.

    It's worth paying for a more complex design, etc., to get a faster unitary CPU instead of two slower ones, for the bookkeeping reasons that you point out, but not worth paying an unlimited amount more. OTOH, simple parallelization schemes have their limits also.

    My view of where we are headed is:
    1) CPUs will advance to some optimum level of power.
    2) Clusters of CPUs will increase the power available beyond that of the individual chips.
    3) Clusters will be linked along a fast bus, with one CPU out of each cluster attached to the bus. (This CPU is effectively a member of two separate clusters, but one of the clusters does practically nothing but manage communications.)
    4) A node at one end of the bus will be linked into an orthogonal bus, which will contain similar nodes...

    Now this is just a design for a maximally compute-intensive processor. At each step you must pay additional overhead, so you would be better off if the problem could be addressed by a system one level simpler. But if you can't...

    At this point we come up against the issue of "but how do you USE it!?" This system will probably not be effective until compilers, or possibly interpreters or VMs can automatically partition problems and assign the pieces to various chunks. This will probably require a message-passing operating system. (Not too unreasonable. I think that Linux could be adapted into one.) But, e.g., when a job wants to open a file, it wouldn't need to know where the disk was, it would just send out a message asking for the file. This is like the separation between files and devices, so the basic layers are already in the design. File permissions would need to include lock status, though, or else file access would need to be managed by a dedicated cpu (which could do it in about the current manner). Etc.
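To make the "send out a message asking for the file" idea a bit more concrete, here is a toy sketch of that style of request routing. Every name in it (the queue, the node roles, the message fields) is invented purely for illustration and is not from any existing system:

```python
import queue
import threading

# Toy message bus: jobs post requests without knowing which node owns the disk.
bus = queue.Queue()

def file_node(files):
    """A node that happens to own some files; it answers OPEN requests."""
    while True:
        msg = bus.get()
        if msg is None:
            break
        op, path, reply = msg
        if op == "OPEN" and path in files:
            reply.put(files[path])   # hand back the contents
        else:
            bus.put(msg)             # not ours; a real system would route
                                     # instead of blindly re-queueing

def open_file(path):
    """A job opens a file by broadcasting a request, not by naming a disk."""
    reply = queue.Queue()
    bus.put(("OPEN", path, reply))
    return reply.get(timeout=1)

node = threading.Thread(target=file_node,
                        args=({"/etc/motd": b"hello from some node"},),
                        daemon=True)
node.start()
print(open_file("/etc/motd"))   # the caller never knew where the file lived
bus.put(None)                   # shut the node down
```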

    There are lots of details that will need to be thrashed out, and the design isn't going to happen this year, or the next. (Probably.)

    At some point, all higher levels would get turned over to a TCP/IP like connection set, slightly redesigned to optimize it for use within one connected computer system. (Probably just a matter of adding some additional protocols for internal use that are more efficient over the known network configuration, and are less concerned with security, but which will only talk internally. [And which have their own limits, so that a virus can't spread unchecked.])

    I see all external communication occurring over standard TCP/IP, and possibly even using only a subset of the standard protocols. (N.B.: I'm talking about limiting protocols, not ports. Think of it as a sanitation measure. You want the system to be able to evaluate incoming communications, and react sensibly. If someone tells it "Drop dead!", it should decide not to obey the literal, or even the figurative, interpretation, but rather to consider that this is an expression of frustration. And this should be true even if the remote user is "root". Some commands should require local access.)

    N.B.: I talked as if this were a strictly hierarchical system, and, to a large extent it would be. But it should take advantage of sideways links also. This would be largely for error recovery, as the high speed communications would be hierarchical. But it should enable reconfiguration and recovery (and diagnosis) in case of hardware errors. (Think of the cell system for underground agents.)
  • by jayslambast ( 519228 ) <slambast AT yahoo DOT com> on Friday February 28, 2003 @12:59PM (#5406916)
    "Because of its abysmal performance, Intel has abandoned this approach and now uses a Celeron coprocessor to handle x86 software execution." I think you're mistaken. Itanium and Itanium 2 have dedicated logic to run x86 code. They don't emulate x86 code; they actually run it. The logic is neither Pentium- nor Celeron-based. The amount of engineering time to add a Celeron coprocessor to a processor the size of an Itanium, or to modify a chipset to include a Celeron, would not be cost-effective, nor would it be a sound engineering decision. BTW, everything else in your comment is well thought out.
  • Re:Who cares? (Score:4, Interesting)

    by poot_rootbeer ( 188613 ) on Friday February 28, 2003 @01:40PM (#5407262)
    Yes, PCI Express will only be an incremental improvement over the latest AGP spec. But there are other devices on the peripheral bus that need to move a lot of data around.

    Your processor runs at an internal clock speed of what, 1.5GHz? And your PCI bus? IIRC, it maxes out at a paltry 66MHz. The peripheral bus is already a bottleneck, today (some rough numbers are sketched below).

    I don't care how much they can integrate onto the mainboard, it's still going over the same bus -- the only difference is that the connections are etched onto the board instead of having card slots.

    Furthermore, bundling peripherals onto a mainboard is exactly as bad as bundling web browsers and such into an operating system: it's harder to choose solutions from other vendors even if they're better suited to your needs; you're paying for features you may never use or need; and there's no incentive for the hardware company NOT to cut corners and put the cheapest shite on there that they can find.

    The beauty of the x86 PC architecture, if any, is the extreme modularity. I hope that this feature of the design doesn't get eroded away by increasing levels of device integration, and a stronger, faster PCI spec can help a lot towards retaining openness and modularity.
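As a rough illustration of the "PCI is already a bottleneck" point above, here is the aggregate peak demand of a few common peripherals compared against the shared ~133MB/s of a 32-bit/33MHz PCI bus (all figures are approximate and purely illustrative):

```python
# Rough illustration of why a shared 133 MB/s PCI bus is already tight:
# add up ballpark peak demands of common PCI peripherals and compare
# against the shared bus ceiling.

PCI_SHARED_MB_S = 133   # 32-bit / 33 MHz PCI, shared by every device on the bus

peripherals_mb_s = {
    "Gigabit Ethernet NIC": 125,   # 1000 Mbit/s / 8
    "ATA-100 disk burst":   100,
    "TV capture card":       30,
}

demand = sum(peripherals_mb_s.values())
print(f"Aggregate peak demand: {demand} MB/s on a {PCI_SHARED_MB_S} MB/s bus")
# ~255 MB/s of demand against 133 MB/s of shared bandwidth: the bus, not the
# devices, becomes the limit -- one reason point-to-point PCI Express lanes
# help even for non-graphics devices.
```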
  • by heroine ( 1220 ) on Friday February 28, 2003 @02:21PM (#5407641) Homepage
    It's amazing that CPUs haven't increased performance on a clock-for-clock basis since 1997. Imagine if modern CPUs really ran 3000 times faster than a 6502.
