Intel Hardware

Will Intel Ship an x86-64bit Chip This Year? 336

Solid Paradox writes "According to The Register, American Technology Research predicts an x86-64-bit processor will 'soon' arrive from Intel and in another story, they also predict that Sun and IBM will be the major players in the future 64-bit boom. Meanwhile the Inquirer has a somewhat related article entitled Senior Intel PR man talks 64-bit extension talk, which follows their Pentium V will launch with 64-bit Windows Elements article that says that the chip is to be sampled internally this month."
  • Stack size (Score:1, Insightful)

    by Anonymous Coward on Monday January 05, 2004 @08:20AM (#7879918)
    Darn, my stack size is about to double...

    I guess recursive algorithms are about to become memory hogs.
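
    A minimal C sketch of the point above: on an LP64 system, pointers and longs double to 8 bytes while int stays at 4, so a recursive function whose stack frame is mostly pointers roughly doubles its per-call footprint. The struct here is invented purely for illustration.

        #include <stdio.h>

        /* The kind of pointer-heavy state a recursive tree or list
         * walker keeps on its stack for every call. On ILP32 each
         * pointer is 4 bytes; on LP64 it is 8. */
        struct frame {
            void *node;
            void *parent;
            void *scratch;
        };

        int main(void) {
            printf("sizeof(void *)       = %zu\n", sizeof(void *));
            printf("sizeof(long)         = %zu\n", sizeof(long));
            printf("sizeof(int)          = %zu\n", sizeof(int));
            printf("sizeof(struct frame) = %zu\n", sizeof(struct frame));
            return 0;
        }
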
  • Windows XP 64-bit (Score:5, Insightful)

    by Zog The Undeniable ( 632031 ) on Monday January 05, 2004 @08:22AM (#7879931)
    But will MS write their 64-bit XP to work on Athlon 64 and the new Intel chip, or will we have three different versions (Itanium, Athlon 64 and Intel x86-64)? At this rate Windows will become as fragmented as Linux ;-)
  • Re:Dumb question (Score:1, Insightful)

    by Anonymous Coward on Monday January 05, 2004 @08:34AM (#7879987)
    You would not notice any difference today.

    It's an evolutionary step.

    Five years from now, when you are completely used to 64bit, taking you back to 32bit will definitely be noticeable.

    Just like if we forced you to use a 16bit processor and operating system today, you'd notice, wouldn't you?
  • just what we need (Score:4, Insightful)

    by Anonymous Coward on Monday January 05, 2004 @08:41AM (#7880020)
    Great, just what we need: another patch on a 20+ year old design. It's not Apple who needs to switch platforms, it's us; the whole x86 platform should be dropped. Apple was able to pull off a processor change from the m68k to the PPC, and they maintained compatibility with legacy apps through emulation.
  • by los furtive ( 232491 ) <ChrisLamotheNO@SPAMgmail.com> on Monday January 05, 2004 @08:47AM (#7880043) Homepage
    ...they also predict that Sun and IBM will be the major players in the future 64-bit boom

    Isn't IBM already a major player [apple.com]?

  • Re:Dumb question (Score:5, Insightful)

    by argent ( 18001 ) <peter@slashdot.2 ... m ['.ta' in gap]> on Monday January 05, 2004 @08:52AM (#7880057) Homepage Journal
    In theory, 64bit should be better than 32bit (that goes without saying).

    Not actually true. The larger the word size, the more bits you have to move on every operation. Going to a larger word size is normally driven by application requirements: if an application doesn't need a larger address space or a wider ALU, a larger word can actually make it slower.

    What can you do with a 64-bit processor?

    Well, one thing you can do is directly address every byte on the largest disk drives we can get today. With an operating system that was designed for direct access, like Multics, you would never have to "read" any files: when you opened one, it would look just as if it had already been read in... all your physical memory would effectively be a big disk cache.

    For another, you can give each computer on the network part of the address space, so the same thing would be true for any file on your local LAN. Or any program on your LAN... no more messing around with protocols and remote file servers and databases... if you had the access rights, it would be as if they were local files.

    You could do the same thing for each instance of a program, so you wouldn't need complex mapping code when passing messages from one program to another... in fact you could just pass the address of a message and let the memory management system move it over when you actually need it. That would get rid of a LOT of redundant copying, since you probably don't need all parts of every message.

    The problem is, you'd need a whole new OS (or a whole old one... Multics is older than UNIX) to really take advantage of this kind of thing.
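
    A rough sketch of the "files just look like memory" idea using plain POSIX mmap(), which is about as close as a conventional UNIX gets to the single-level store described above; the path is hypothetical and error handling is minimal.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void) {
            /* Hypothetical file; with a 64-bit address space it could be
             * many gigabytes and still fit comfortably in the mapping. */
            int fd = open("/tmp/bigfile.dat", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            /* No read() loop: the kernel pages bytes in on demand, so the
             * file behaves as if it had "already been read in". */
            const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (data == MAP_FAILED) { perror("mmap"); return 1; }

            /* Touching a byte faults its page in; physical memory acts as
             * the big disk cache the comment describes. */
            printf("first byte: 0x%02x of %lld mapped bytes\n",
                   (unsigned char)data[0], (long long)st.st_size);

            munmap((void *)data, st.st_size);
            close(fd);
            return 0;
        }
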
  • Re:Pentium V (Score:1, Insightful)

    by Anonymous Coward on Monday January 05, 2004 @09:04AM (#7880108)
    Except the 32bit API/overlay system you speak of was actually a loader which put the processor into 32-bit protected mode and set up a nice playing environment for applications, instead of having to roll your own implementation each time. If 64-bit mode on the new x86 chips is enabled similarly to how 32-bit mode is enabled currently, then there is no need to care. Remember, when NT4/Win95 came out, dos4gw became a thing of the past...
  • Re:x86-64??? (Score:3, Insightful)

    by NanoGator ( 522640 ) on Monday January 05, 2004 @09:08AM (#7880132) Homepage Journal
    "If this happens, it will only reinforce the fact that Intel has lost it's leadership position in the x86 compatible market."

    What?

    Leadership is determined by who's got more out there, not by who's following whose standard. By your definition, AMD could never ever achieve leadership position because it's using Intel's instructions.

    AMD may be a threat, but it has not ousted Intel, not by a long shot.
  • Re:Very Likely (Score:5, Insightful)

    by The One KEA ( 707661 ) on Monday January 05, 2004 @09:47AM (#7880332) Journal
    It took 11 years for 32bit operating systems to finally displace 16bit operating systems. Your prediction of 32bit PCs being laughed at by December 2004 is probably a little too radical.

    However, your other comments about AMD and the Opteron are spot on, IMO - the enterprise world is NOT going to buy a competing, slightly incompatible 64bit platform when it has already invested in another 64bit platform that is ALREADY AVAILABLE and is KNOWN to be just as fast as or faster than a 32bit commodity platform or an older 64bit platform like a PPC box from IBM. It's hard enough these days for IT departments to support the current heterogeneous mix of 32bit commodity desktops and servers and the old/new 64bit platforms from AMD and IBM. Throwing in a third, which could cost even more and add more headaches, would be pretty hard to sell, IMHO.

    You were also right about marketing; AMD absolutely MUST find a way to conclusively show that GHz != speed. They need a new, aggressive marketing campaign ASAP - unless the rumours about Prescott being a bit of a dud are true...

    Either way, AMD knows that they're sitting on a goldmine; they just need to exploit it as much as they can.
  • Re:x86-64??? (Score:3, Insightful)

    by ajagci ( 737734 ) on Monday January 05, 2004 @10:38AM (#7880678)
    Leadership is determined by who's got more out there, not by who's following whose standard.

    No, the term "leadership" by itself is commonly understood to refer to "technical leadership", i.e., who sets the standard, not who moves more product. If there is any ambiguity, just be clear about it. In this case, from context, it should be clear that the term was used to talk about technical leadership.

    By your definition, AMD could never ever achieve leadership position because it's using Intel's instructions.

    The instructions Intel defined ten years ago don't help us determine who leads the industry now technically. What matters is recent changes, who made them, and who copied them.

    AMD may be a threat, but it has not ousted Intel, not by a long shot.

    And AMD may never "oust" Intel. But they can still be in a technical leadership position.
  • Re:Dumb question (Score:2, Insightful)

    by dimonic ( 688129 ) on Monday January 05, 2004 @10:47AM (#7880744)
    Sorry, I am not actually answering your question directly, but that is perhaps appropriate because your question raises a bigger one.

    I don't think anyone needs more speed than the best 32-bit CPUs provide today. The bigger problem today is bugs: memory leaks, security flaws, memory protection errors, you name it. If I understand him correctly, Linus Torvalds has weighed in to say that 64-bit architecture will allow a new way of addressing devices: 1:1 mapping. This will eliminate a huge amount of paging and caching code in OSs. If I read Linus correctly, he is saying that 64 bits is enough to map any entire hard drive directly into memory. This brings me to the comments of another luminary: Donald Knuth. Knuth has written about and demonstrated ways of writing very-low-bug-count software, and created a seminal, practical, non-trivial application, TeX (the basis of the LaTeX I use daily). Knuth has shown that a large, complex application with strong change management can achieve a very low bug count and still have enough features to have no real competition in its field.

    So what I am getting to is another question: is 64 bits enough, now and forever? I mean will we ever need more address space for anything? So can we write or change our OS so direct mapping of everything is the norm, and thus eliminate half the cleverness and most of the bugs in the OS? And expect that this will be "the last re-write". That all that will be needed in the future is new device drivers and re-compiles for new CPUs?
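
    For scale (back-of-the-envelope arithmetic, not a claim from the thread): a 32-bit address space is 2^32 bytes = 4 GiB, while a 64-bit one is 2^64 bytes = 16 EiB, i.e. about four billion times larger, which is why mapping whole disks and devices 1:1 looks feasible for a long time to come. A trivial C check of the numbers:

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint64_t space32 = UINT64_C(1) << 32;        /* bytes in a 32-bit address space */
            uint64_t ratio   = UINT64_C(1) << (64 - 32); /* how many such spaces fit in 64 bits */

            printf("32-bit space: %llu bytes (4 GiB)\n", (unsigned long long)space32);
            printf("a 64-bit space holds %llu of them (2^64 bytes = 16 EiB)\n",
                   (unsigned long long)ratio);
            return 0;
        }
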
  • Re:But... (Score:4, Insightful)

    by TeknoHog ( 164938 ) on Monday January 05, 2004 @11:15AM (#7880962) Homepage Journal
    many linux distros only have beta quality 64 bit OS'es.

    Just to nitpick, Linux has supported other 64-bit architectures (at least Alpha) from its early years, so it definitely has the 64-bitness sorted out already. X86-64 is a relatively new thing, but not quite the first one with 64 bits.

  • Re:But... (Score:2, Insightful)

    by October_30th ( 531777 ) on Monday January 05, 2004 @11:51AM (#7881233) Homepage Journal
    Lastly, I do not understand people's obsession with x86. Disco died in the early 80's, but we still want to use a computer architecture from the 70's?

    a) With the exception of the black magicians of embedded systems, people do not, in general, have to write bit-banging assembler code. Who cares if x86 is shite - and no-one's disputing that here - if the compiler/interpreter hides them nassty, nassty bitses.

    b) It is imperative that legacy code runs fast or that it can be easily recompiled. You mentioned that you've run Alphas. I too had an Alpha 164LX in the 1990s and ran Linux on it. It was fine and dandy, but after a while I got tired of fixing those stupid-programmer-cast-a-pointer-to-int bugs in order to compile free software. I expect tons and tons of similar problems on Opteron platforms, but on IA64 the problems would probably be even worse.
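
    For anyone who hasn't met the cast-a-pointer-to-int bug: on an LP64 platform (Alpha or Opteron under Linux) int stays 32 bits while pointers are 64, so round-tripping a pointer through an int silently discards the upper half. A generic sketch, not code from any particular package:

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdint.h>

        int main(void) {
            char *p = malloc(16);

            /* Classic 32-bit-era habit: stash a pointer in an int.
             * On LP64 the upper 32 bits are truncated, so q need not
             * equal p once allocations land above 4 GiB. */
            int cookie = (int)(intptr_t)p;
            char *q = (char *)(intptr_t)cookie;

            printf("p = %p\nq = %p (%s)\n", (void *)p, (void *)q,
                   p == q ? "survived" : "truncated");

            /* The fix is to carry pointers as pointers, or at worst
             * as intptr_t/uintptr_t, never as plain int. */
            free(p);
            return 0;
        }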

  • by bucky0 ( 229117 ) on Monday January 05, 2004 @02:56PM (#7882940)
    Let's just review a few facts:

    Let's.

    Many dual-cpu boards tie all the memory to one cpu, slowing down the other one.

    There are a few boards like that, but certainly not a majority. The difference is very small, however, considering that there is just one extra hop across an HT link to the processor with the memory. (The memory controllers are directly connected to HT links, which minimises latency.)

    Various versions of the AMD64 architecture are unreasonably expensive.

    True, some versions are expensive, but you're talking about a technology that was released approximately 3 months ago. Give it time, and prices on the high-end stuff will go down. That said, you can get a single-proc A64 system fairly cheap.

    I've heard rumors of Linux incompatibility with various boards and bioses.

    Rumors... you're giving people advice on whether or not to purchase a particular architecture based on rumors? What's the severity of the problems?

    AMD is also in the act of outsourcing its IT staff to India. While Intel undoubtedly does the same, AMD's action is more recent, and this sort of thing shouldn't be rewarded.

    I agree

    AMD's planning with Microsoft Win64 release was also obviously lackluster if Intel was able to delay it.

    That's a whole ton of speculation. There are any number of reasons that release was delayed. MS could be having trouble porting the legacy code over, Intel could have negotiated hard (keep in mind who has the much larger market share), MS could have wanted to wait for marketing reasons... who knows? It's silly to blame AMD for it, though.

    My 2 cents.
