64-bit x86 Computing Reaches 10th Anniversary 332
illiteratehack writes "10 years ago AMD released its first Opteron processor, the first 64-bit x86 processor. The firm's 64-bit 'extensions' allowed the chip to run existing 32-bit x86 code, in a bid to avoid the problems faced by Intel's Itanium processor. However, AMD suffered from a lack of native 64-bit software support, with the late-arriving and poorly supported Windows XP 64-bit Edition severely hampering its adoption in the workstation market."
But it worked out in the end.
Let us give thanks.... (Score:5, Funny)
Re:Let us give thanks.... (Score:5, Interesting)
Re:Let us give thanks.... (Score:5, Informative)
64 bit x86 worked out, but not for AMD (Score:3)
AMD may have helped create the x86-64 market, but now it's getting killed by it. Soon Intel will be the only major player. The ARM market is AMD's only hope.
Re:64 bit x86 worked out, but not for AMD (Score:5, Informative)
Re: (Score:2)
AMD smokes Intel in performance/price for most stuff that can be parallelized. It's only single thread performance where Intel wins.
Re: (Score:2)
AMD smokes Intel in performance/price for most stuff that can be parallelized. It's only single thread performance where Intel wins.
On CPU prices alone, yes... but they're also struggling on performance/watt, which feeds into performance/$ through power and cooling costs, and that's a fair bit of the total cost if you're running big, massively parallel jobs that keep all the cores busy for long periods. Anandtech summarized it like this [anandtech.com]:
Power consumption is also a big negative for Vishera. The CPU draws considerably more power under load compared to Ivy Bridge, or even Sandy Bridge for that matter.
Every dollar an AMD system costs you extra on the power bill is of course another dollar Intel can charge for a more efficient processor.
Re: (Score:2)
There's motherboard cost too: you can run an i7 on the cheapest motherboard.
But with an FX 8350 or lower, using the cheap 760G boards leads to trouble because the VRM circuitry can't handle more than 95 watts. Many buyers unknowingly make that mistake and end up with a great FX CPU stuck underclocked at 800MHz, or one that runs fast but throttles down and stutters whenever you do something demanding enough with it.
Re: (Score:2)
My 16-cores-per-processor servers question your statement. I don't think any other vendor beats AMD on the core density aspect.
Re:64 bit x86 worked out, but not for AMD (Score:5, Insightful)
Intel won't let AMD die. In fact, AMD is right where Intel wants them to be: big enough to ward off government regulators, small enough to not be a huge pain in the rear. Intel and other large companies are scared of government regulation and monopoly declarations, and we do know that Intel has committed enough sins that if the regulators looked hard enough, they could make a case to break up Intel, including separating the ASIC design and foundry parts (and we know Intel has a LOT of foundry capacity). And I'm sure Intel's shareholders would rather give up some revenue than take the much bigger hit that would come when government regulators step in.
It's entirely possible that Intel has a bunch of "AMD rescue" plans - ranging from simple "let's just buy up all of AMD's CPUs and bury them" to more elaborate schemes. Of course, Intel cannot directly fund AMD. Perhaps Intel could give AMD some patents in an emergency.
Heck, you could argue that Intel told Sony and Microsoft to buy AMD chips - it gives AMD a nice steady income for the next few years. Intel could've used their extensive fab capacity to make custom chips for the consoles (much more easily than AMD can), but you can bet an opportunity like this to help prevent AMD from keeling over was just perfect.
And no, this isn't unusual in the business world. What you see as competitors can have all sorts of incestuous relationships amongst themselves; it's not unknown for competitors to buy parts from each other. And you can bet Apple, Google, Microsoft, Samsung and others are far more chummy with each other than the patent lawsuits or settlements imply. There are enough back-room deals and arrangements to hide just how interdependent they all really are.
Re: (Score:2)
AMD may have helped create the x86-64 market, but now it's getting killed by it. Soon Intel will be the only major player. The ARM market is AMD's only hope.
not this shit again.
AMD doesn't own any plants, so how would licensing an ARM design and having it contract-manufactured save them?? how the hell?? what would AMD's business and research even be in that situation?? who the fuck would buy them??
"worked out" (Score:2, Insightful)
But it worked out in the end.
Yes, mostly due to the fact that we needed a way to get past the 4GB memory limitation, and not because we gave a damn about whether the processor was native x64 or not. AMD has had some great ideas, but they've almost always shorted themselves on the implementation, leaving the field wide open for Intel to come in with a better offering and take the lion's share of the profit.
Re: (Score:2)
AMD has had some great ideas, but they've almost always shorted themselves on the implementation, leaving the field wide open for Intel to come in with a better offering and take the lion's share of the profit.
Well, AMD can't magically just "be big", and even when they were kicking Intel's ass, fab capacity meant they couldn't take over this market overnight. Intel could afford to gamble on things like the Pentium 4 and Itanium while still working on entirely different lines like the Pentium III-M, which became the basis for the Core processors, and Atom, which has denied AMD much revenue on the low end. That is the sort of thing AMD never could afford to do; they had to design a jack-of-all-trades and hope that through Intel's ineptitude...
Re:"worked out" (Score:5, Insightful)
WRONG on many levels. Yes, we had to get past the 4GB memory limitation, but there had been, and still were at the time, several other true 64-bit microprocessors around when AMD introduced the Opteron: Alpha, UltraSPARC, MIPS, PowerPC, and yes, even IA-64 (not to mention IBM POWER and zSeries). But they all had the fatal flaw of NOT being compatible with Intel's 32-bit x86 processors and off-the-shelf Windows software. Only the Opteron had that, and that compatibility was so critical that Intel was grudgingly forced to adopt the x86-64 instruction set.
So, you may say, why didn't AMD take the IT world by storm? Because: 1) AMD was not Intel, and never could/would be; 2) Intel was paying manufacturers NOT to offer ANY AMD-based systems via marketing kickback agreements; 3) Intel would punish any manufacturer who did offer AMD systems with exorbitant price hikes on the Intel parts they did sell; 4) all this was taking place during the Bush years of federal laissez-faire non-enforcement policy, giving Intel free rein for those practices; 5) prejudice against AMD in the IT industry was widespread, and still is; 6) few people saw or acknowledged the need for a flat 64-bit address space; 7) those that did need 64-bit software were forced to spend exorbitant amounts of money on RISC workstations, which motivated them to look down their noses at commodity PCs, even 64-bit ones; 8) chicken-and-egg syndrome (no volume 64-bit hardware, thus no volume 64-bit software, thus no need for volume 64-bit hardware).
So AMD did not "short themselves on implementation". Their architecture was state of the art, and kicked both the 32-bit Pentium and the non-compatible IA-64 in the nuts. They had all of today's advanced hardware features years before Intel: the x86-64 architecture; HyperTransport to replace the front-side-bus bottleneck and enable point-to-point CPU links; and on-die memory controllers. AMD was not able to block Intel from poaching those features because of the pre-existing patent cross-licensing agreements. And anti-monopoly enforcement was practically non-existent at the time (and isn't much better today).
Of course, none of this is meant to imply that AMD was not partially or even mostly responsible for their troubles. They were (and still are) horrible at executing their own roadmaps. They were (and still are) horrible at marketing to consumers. They were (and still are) horrible at manufacturer relations. They were (and still are) unable to make a sane strategic decision if their life depended on it. They were (and still are) perceived as the el-cheapo Intel-knockoff copycat instead of pioneering leaders in their field.
So yeah, AMD is a hot mess, but there is plenty of blame to go around.
Re: (Score:2)
Use Debian testing (about to become Stable), and use multiarch. Never worry about 32bit vs 64bit again
It should not have been called XP... (Score:2, Interesting)
XP x64, Microsoft's ginger step-son of an OS. Ignored and dropped like a hot potato as soon as they could.
You couldn't get drivers for half the stuff, even MS didn't provide their own software for it, and lots of 'free for home, pay for commercial' stuff would detect it as 2003 Server and refuse to run/install.
Somewhat of a shame really as it wasn't a bad OS.
An Extra Bit of Register (Score:5, Insightful)
So those 32 extra bits of memory addressing are nice. But don't forget about that 1 extra bit for identifying registers!
Re:An Extra Bit of Register (Score:5, Informative)
And this is something people who've worked on RISC chips have known for ages. The x86 system architecture is essentially stuck in the early 80s. The 386 was just a simple extension on top of the 286 model; nothing really fundamentally changed, and you still had a limited number of registers, each with at least one specialized purpose. Maybe MMX and similar stuff fixed that, but you couldn't rely on everyone's PC having the instruction set you compiled for.
Intel was stuck supporting a very popular CPU with an instruction set that they knew was outdated, and they even tried replacements for it that failed to gain acceptance. The reason the Opteron caught on was that it was backwards compatible with x86, not because it was the first thing to try to break out of the mold. And the 386 was designed to be compatible with the 286, which was designed to be compatible with the 8086, which was designed to be compatible with the 8085, which is compatible with the 8080, which is (at the source level) compatible with the 8008, which followed hot on the heels of the 4004, the first commercially available microprocessor... (and all of those retain the original accumulator A register)
Re: (Score:2)
> one of the presenters said that one of the most surprising speed-ups for 64-bit code came from just having 16 real general purpose registers to work with.
Yeah, this has been known for ages. The technical term is called "register spill"* in compiler land.
* See: http://en.wikipedia.org/wiki/Register_allocation [wikipedia.org]
i.e., a compiler tries to optimize register usage by reusing temporaries and minimizing loads/stores, since memory is extremely SLOW compared to registers/L1/L2/L3.
Here's a practical example. Let
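(The comment is cut off above, so here instead is a minimal hypothetical sketch of the register-pressure idea, not the poster's own example; the compiler flags mentioned are the usual gcc ones.)

    /* All eight accumulators (plus the loop counter and pointer) are live across
     * every iteration.  Built as 32-bit x86 (e.g. "gcc -O2 -m32") the compiler
     * has only ~7 usable general purpose registers, so some values get spilled
     * to the stack each trip around the loop; built as 64-bit ("-m64") there
     * are 15 usable GPRs and everything can stay in registers. */
    unsigned mix(const unsigned *p, int n)
    {
        unsigned a = 0, b = 1, c = 2, d = 3, e = 4, f = 5, g = 6, h = 7;
        for (int i = 0; i < n; i++) {
            a += p[i];    b ^= a;
            c += b >> 3;  d ^= c;
            e += d;       f ^= e << 1;
            g += f;       h ^= g;
        }
        return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
    }

Comparing the assembly of the two builds (e.g. with "gcc -S") shows the extra stack traffic in the 32-bit version; that traffic is exactly the register spilling the parent is talking about.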
x32 ABI (Score:5, Informative)
And for those that want the best of both worlds, there is the x32 ABI, which uses all the good stuff from x86-64 (more registers, better floating-point performance, faster position-independent code shared libraries, function parameters passed via registers, faster syscall instruction... ) while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers.
They're working on porting Linux to the new ABI...kernel and compiler support is there, not sure about all the userspace stuff.
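A rough sketch of what poking at it looks like, assuming a toolchain and kernel built with x32 support (gcc has had -mx32 since 4.7; the sizes in the comment are the expected ones, not measured anywhere):

    /* Same source, three ABIs:
     *   gcc -m64  abi.c   -> sizeof(void *) == 8, sizeof(long) == 8  (amd64)
     *   gcc -mx32 abi.c   -> sizeof(void *) == 4, sizeof(long) == 4, but the
     *                        code still uses the full amd64 register set
     *   gcc -m32  abi.c   -> sizeof(void *) == 4, classic i386 ABI         */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
               sizeof(void *), sizeof(long));
        return 0;
    }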
Re: (Score:2)
Re: (Score:3)
except the ability to access more than 4GB of RAM
3GB typically. That limit applies only per process, and it's pretty rare for a typical user to have a single process that big.
Then, you have netbooks and/or vserver hosting where the entire [virtual] machine doesn't have that much physical memory.
x32 is also noticeably faster: over i386 for anything that wants registers, over amd64 for anything with more pointers than will fit in the CPU's cache. Benchmarks vary wildly, but figures around 7% faster than amd64 are typical.
Re: (Score:2)
3GB typically.
AIUI an x86 process running on an x64 linux kernel gets damn near 4GB of usable virtual address space. I presume the same applies to x32 processes running on that same kernel.
On a 32-bit kernel as you say 3GB is typical. There were "4G/4G" patches at one stage to increase this but afaict they never made it into mainline.
Re: (Score:2)
A 32 bit app can't possibly have access to a full 4GB of RAM. Doing that prevents you from having any way to interface with and pass data to and from the kernel. That 1GB at the top of the address space was where your kernel pretended to sit, so apps could talk back to the kernel and read data from the kernel.
Unless you remove the kernel interface, you can't remove the address ranges used by the kernel.
You can make them smaller, but there are limits, certainly can't go below page sizes.
Re:x32 ABI (Score:4, Informative)
kernel and compiler support is there, not sure about all the userspace stuff.
Just debootstrap it from Daniel Schepler's repository [debian.org]. Most of the work has since moved to official second-class repositories (AKA debian-ports), but because of the freeze you want both. So after debootstrapping, echo "deb http://ftp.debian-ports.org/debian [debian-ports.org] unstable main" >>/etc/apt/sources.list and you're set.
Nobody's said 64 bit Linux 4 years before Windows? (Score:5, Interesting)
On Slashdot? (Score:2)
Re: (Score:2)
Erm that 'smart memory management' (PAE) has a nice big performance hit. Somewhat bigger than a 3% slowdown.
Also 64 bit can handle bigger numbers (over 4.3 billion) an awful lot faster than 32bit can. It doesn't help with small numbers but for the bigger ones 32bit processes them rather inefficiently.
Re:Did it really work? (Score:5, Informative)
PAE is more or less old-school segmentation. You can't say 'it has a 3% slowdown' because it has zero slowdown if that particular page is already in memory, and if not... it has the same 'slowdown' as any other paging operation plus a fixed number of cycles. So if you're dealing with tiny amounts of 'more than 2/3GB' then the overhead is a lot higher than if you're mapping out 2GB on every window change. PAE is just another form of paging. It is slower, but you're making numbers up from nothingness.
The integer math performance of the processor has nothing to do with it being 64 bit. Most (all now?) x86-64 processors will internally process two 32-bit numbers in the same span as one 64-bit number if properly optimized by sending the 32-bit values through together. 64-bit code using less than the OS max for 32-bit code is actually slower than 32-bit code, due to the increased pointer sizes wasting the processor's registers by filling them with 0s.
You really have no idea how processors work. While nothing you said is illogical, it is still in fact wrong on every count. Under the hood, processors don't work anything like they do on the surface.
Other processors also do other weird things. I have an 8-bit CPU that can handle 32-bit numbers in a single clock cycle, exactly like it does 8-bit numbers... and the neat thing: it can do two 16-bit numbers in a single clock cycle! Why? Because the processor as I see it from a software developer's perspective isn't anything like the actual hardware doing the work. Processors have translation units in front of them to present you with one view while allowing themselves to rewire the backend in all sorts of different ways.
Re: (Score:3)
My first point was that PAE does have an overhead somewhat larger than the 3% the parent mentioned.
And that overhead increases with the amount of ram you have. Sure 32gig of ram has very little overhead with PAE. That is of course unless you actually use the 32gig of ram and then it will be constantly swapping memory pages around.
Yes I know most people don't use that much RAM. My point is still valid.
Also my 2nd point was that 64bit processors handle big numbers faster, not small numbers slower.
Yes different
Re: (Score:3)
Notwithstanding all of that, amd64 also has more registers, so there's less moving stuff to and from memory, and you can make most function calls by passing parameters in registers instead of on the stack. amd64 provides a worthwhile increase in performance just from having twice as many general purpose registers (actually, more than twice as many, because there are really only 4 proper general purpose registers on 32-bit x86 - amd64 adds 8 more).
Re: (Score:2)
PAE is more or less old school segmentation.
PAE isn't segmentation at all. It's a mode that changes the page table entry format to support more physical memory. Maybe you're thinking of something else as being "PAE", but Intel's (and AMD's and...) idea of PAE is a Physical Address Extension.
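To make the difference concrete, here is roughly how the two translation schemes break down a virtual address (standard numbers for 4KB pages; a sketch, not a description of every mode):

    Classic 32-bit paging:  two-level walk, 4-byte page table entries
        virtual address = 10-bit directory | 10-bit table | 12-bit offset
        a PTE holds a 20-bit frame number  ->  physical limit 2^32 = 4GB

    PAE paging:  three-level walk, 8-byte page table entries
        virtual address = 2-bit PDPT | 9-bit directory | 9-bit table | 12-bit offset
        the wider PTE has room for (at least) 36-bit physical frame addresses,
        so the machine can hold 64GB+ of RAM, while each process still only
        sees a 32-bit (4GB) virtual address space.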
Re: (Score:2)
My experience with moving applications to 64-bit that didn't need the massive single memory space was that I started paging a lot more, since they were allocating words twice as wide (and while I could now address every molecule in the computer separately, the amount of actual memory was still the same). Physical memories have since expanded to compensate, but I'd like to see some statistics on the entropy of the upper 32 bits of the average QWORD.
Re: (Score:2)
Re:Did it really work? (Score:5, Insightful)
My 32 GB of RAM, absolutely essential for my work, laughs at your "memory management" bullshit.
Re: (Score:2)
32 GB Ram High-Five! Seriously, anytime Asus is feeling poor, they can release a Crosshair motherboard that takes 64 GB or perhaps 128 GB of RAM.
I am not through upgrading until I can virtualize the speed and location of every particle in the universe. Then I'm going to see what exactly this Time dimension actually looks like from a different angle. Maybe. I have a few other ideas, but I probably won't be allowed near a computer this powerful if I announce them all at once. =^_^=
Re: (Score:2)
Not if by node you mean NUMA node.
Re: (Score:2)
In my day, a Beowulf cluster had 128MB ram per node.
Uphill. Both ways. In the snow.
Re: (Score:2)
Snow?
My first computer required that you toggle in the boot loader binary code from front panel switches!
That has to be the modern equivalent of hand crank started horseless carriages.
Re: (Score:2)
Snow?
My first computer required that you toggle in the boot loader binary code from front panel switches!
That has to be the modern equivalent of hand crank started horseless carriages.
Takes me back to loading those Interdata model 3s with the front buttons so we could load the paper tape. Then we could watch the registers with lights on the front as our code executed. Ah glad those days are over.
Re:Did it really work? (Score:4, Funny)
If it's such a success, why does 64-bit software generally run only marginally faster than its 32-bit build? 64-bit binaries are larger and might run at 103% of the speed of 32-bit if you're lucky.
Sure, it helps with the 4GB memory space limit, but so can smart memory management and other approaches.
I could see it being useful for super-computing things, but in general, there still just doesn't seem to be a point.
Wow, just wow. Do you actually work in the software field???
Re: (Score:2)
do you? for average PC applications (browsing the web, e-mail, office documents) 64 bit gives no advantage. for the above-average applications (multimedia creation/editing, CADD, running multiple VMs, ...) it's very helpful.
Re: (Score:2)
I've seen Firefox run into the 2GB user-mode address space / process limit many times... Chrome and (recent) IE don't have this problem due to per-tab processes, but Firefox definitely hits it when you use as many tabs as I do.
Re:Did it really work? (Score:5, Funny)
do you? for average PC applications (browsing the web, e-mail, office documents) 64 bit gives no advantage. for the above-average applications (multimedia creation/editing, CADD, running multiple VMs, ...) it's very helpful.
1) Yes, I do.
2) You are so wrong that it's actually funny.
Re: (Score:2)
do you? for average PC applications (browsing the web, e-mail, office documents) 64 bit gives no advantage. for the above-average applications (multimedia creation/editing, CADD, running multiple VMs, ...) it's very helpful.
On Debian Linux I can peg a stupid Zynga Flash game running past 3GB of RAM. For multimedia creation/editing you bet your sweet ass 64 bits matters. Then again, Linux doesn't have shit like GCD and quality OpenCL built into the OS, with app suites that can leverage both and welcome 32/64 GB of RAM with open arms. Quality drivers, quality OpenCL/OpenGL etc. are coming with all the hard work at LLVM/Clang, Mesa and more. When that shit lands you better believe 64 bit matters and any heavy engineering/s
Re: (Score:2)
He's never written anything that's tested the limits of computing...
Meanwhile, I need only load up my badly coded evolutionary program to see my machine scream at the ~12 GB hit to the RAM. I say badly coded because I have found a few tricks to help get some additional memory savings out of it...also on topic, the aggression level was kind of low, so I imagine future tests might break the 32 GB barrier easily. Currently thinking of giving it a SSD for virtual memory...
Re:Did it really work? (Score:4, Interesting)
Re: (Score:2)
Then you did something wrong.
There is no logical reason that an x86-64 processor in 64 bit mode would perform faster than in 32 bit mode unless you are memory constrained. Raw operations are not inherently faster in 64 bit mode than they are in 32 bit mode.
If you are not exceeding 32 bit memory limits, your 64 bit version SHOULD be a tiny little bit slower than the 32 bit version.
Let me guess, you ran it in 32 bit mode, then ran it again immediately after in 64 bit mode ... and then ignored the disk cache co
Re:Did it really work? (Score:5, Informative)
x64 has twice as many registers. That alone means less having to move stuff in and out of memory, so that will improve the speed when compared to 32 bit applications. 32 bit x86 has only 4 truly general purpose registers. x64 adds another 8 64 bit registers.
Re: (Score:2)
Thank you. The people spouting nonsense about 32-bit programming, and how they can't understand why 64-bit computing would be faster (in the x86 world), drive me loony... it's like they missed an entire year's worth of classes where we went over, in detail, the various changes and why it's faster... and they have the gall to ask for your notebook the night before the final. I mean, it's impressive, that kind of blindness, but they aren't getting the notebook without a pimp slap to go with it (extra baby po
Re:Did it really work? (Score:4, Insightful)
Design patterns for the most part are actually adaptations of pre-existing functional concepts. For example Chain of Responsibility is really just a slightly simplified monad (input must equal output). The first Iterator pattern was (map fn list). Flyweight is a simplified form of Memoization.
Packages and namespaces also appeared in many functional languages first. Encapsulation via lexical closures has been around since Scheme was invented in the 70s. Lambda functions? Those little gems, now making their way into every OOP language, were invented with Lisp.
You have missed the entire point, though, if you think OOP is about organizing your programs or something. OOP is largely about encapsulating moving parts into logical pieces. Functional code is largely about minimizing or removing "state" (aka moving parts) from your code; e.g., an input to a function should always give the same output. These concepts are not incompatible at all.
Re: (Score:3)
Why would a 64 bit program be slower when modern processors are optimized for 64-bit programs?
Re: (Score:3)
The program is written in C#. Only MS knows what is going on there.
He forgot to use a hash table (Score:3)
Re:Did it really work? (Score:5, Informative)
Most programs still don't need to work with numbers larger than 4 billion on a regular basis, so native 32-bit ints are just as fast as native 64-bit ones.
Most programs still don't need to map more than 2GB (not 4GB; in fact not even quite 2GB) at once, so there's no pressing need for 64-bit pointers.
Software does take advantage of the fact that you can fit twice as many 32-bit values into the standard x86 registers if the registers are 64 bits wide, in the same way that you can stuff two 16-bit ints into EAX on a 32-bit system if you want to. However, the performance gains from doing so end up in conflict with the reduced cache coherency of larger binaries (bigger instructions) and possibly larger (less well-packed) data, resulting in more frequent cache misses. That's why the perf gains are typically very modest, although it really depends on the application.
Where 64-bit does become really valuable is working with very, very large amounts of sequential data (want to allocate a 10GB array? Can't do that on x86, no way no how). That's hardly a typical requirement right now (although I wrote a program a few weeks ago that needed to do it). However, it's getting closer. Additionally, while clever memory mapping can allow a 32-bit process to access over 4GB of RAM (just not all at the same time), there is a (small) performance impact associated with the need to be constantly re-mapping that memory.
The other area where 64-bit really helps is with security, specifically exploit mitigation. High-entropy ASLR in recent versions of Windows and some other OSes randomly places 64-bit aware executables and their various data regions across their entire 64-bit address space. This not only makes it completely impossible to correctly guess the address of any given bit of code in memory, it also makes spraying (heap spray, JIT spray, etc.) attacks completely infeasible; to cover even a tenth of a percent of the address space, you'd need to spray 16 million gigabytes of data. That's not only quite impractical at modern CPU speeds (even on a blazingly fast CPU and done in parallel, it would take a week or more), it also is far more memory (physical or virtual) than any modern computer will be able to allocate.
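To make the "huge array" point concrete, a quick sketch (behaviour depends on available RAM/swap and the OS's overcommit policy, so treat the comments as the expected outcome rather than a guarantee):

    /* A single 10 GiB allocation.  As a 64-bit binary this can succeed; as a
     * 32-bit binary (e.g. "gcc -m32") the process has at most ~2-3 GiB of
     * virtual address space, and size_t can't even represent the request. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void)
    {
        unsigned long long want = 10ULL << 30;       /* 10 GiB */
        if (want > SIZE_MAX) {
            printf("32-bit build: size_t can't even express 10 GiB\n");
            return 1;
        }
        char *p = malloc((size_t)want);
        if (p == NULL) {
            printf("no 10 GiB of contiguous address space available\n");
            return 1;
        }
        printf("got 10 GiB at %p\n", (void *)p);
        free(p);
        return 0;
    }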
with 32 bit on some system you get like 2.5-3.7gb (Score:2)
with 32 bit on some systems you get like 2.5-3.7GB usable RAM. and yes, video RAM eats from the 4GB pool.
Re: (Score:3, Informative)
Software does take advantage of the fact that you can fit twice as many 32-bit values into the standard x86 registers if the registers are 64 bits wide, in the same way that you can stuff two 16-bit ints into EAX on a 32-bit system if you want to. However, the performance gains from doing so end up in conflict with the reduced cache coherency of larger binaries (bigger instructions) and possibly larger (less well-packed) data, resulting in more frequent cache misses. That's why the perf gains are typically very modest, although it really depends on the application.
You're arguing on the correct side, but what you wrote here is badly flawed. Packing multiple 32-bit values into a 64-bit register is near worthless, what is valuable is amd64 gives you twice as many general-purpose registers (that also happen to be 64-bits wide). A far bigger gain for 64-bit on x86 was the addition of full relative addressing. Instead of 32-bit jumps always being to absolute addresses, in 64-bit mode software can do addressing relative to the program counter. This helps a great deal with l
Re: (Score:2)
Most programs still don't need to map more than 2GB (not 4GB; in fact not even quite 2GB) at once, so there's no pressing need for 64-bit pointers.
Perhaps not most, but a whole awful lot of programs want more than that. I'd say the mean for "large" apps on my laptop is 1.5GB, and the resident size distribution is (to my eye) more or less gaussian. That means that few apps want more than 2GB today, but if the average app grew by 33%, about half of them would be over the 31-bit size limit.
Re: (Score:2)
Most programs don't need a GUI...but they tend to function better with one. Most computers don't need a SSD...but they tend to run faster with one, and users tend to agree that you can have your SSD back when you pry it from their cold dead fingers.
You don't have to fly First Class, you're getting there at the same time as the people in Business or Economy class...but it's a lot nicer.
Re: (Score:2)
Not just for the extra memory. (Score:5, Interesting)
In our algorithms lab there were programs that would gain more than 2x when compiled for 64 bit.
A more "real-world" example is when I started in 2005 at my current company. The engineers had 6-month old P4s @ 3.2 or 3.4GHz, running 32bit linux. For a project they used VisualStudio on VMWare and it took over a minute to compile the project. The company allowed engineers to choose their hardware, so I built an Athlon 64 @ 2.2 or 2.4GHz and I had it run 64bit SuSE. I remember the shock and awe from the first time I tried to compile the project under VMWare - a little more than 10 secs - the engineer next to me had his jaw drop. Of course most of the engineers immediately requested to switch to 64bit machines. I am not sure why it made such a difference in that application - perhaps the 16 general purpose registers come in really handy in this scenario? Of course it didn't help that the P4 was slower in everything (funny how at the time very few reviews really clarified this), but not order of magnitude slower...
Re: (Score:2)
I never knew it was suposta be faster
Re:Did it really work? (Score:5, Interesting)
I think if you understand how truly horrifying PAE is, you would have no doubt at all that 64 bit platforms were the way to go. There's a lot of memory management cruft in the Linux kernel that x86_64 eliminates.
x86_64 also slipped in a few much needed enhancements to the ia32 architecture, including some extra general purpose registers.
http://en.wikipedia.org/wiki/X86-64 [wikipedia.org]
Re: (Score:2)
Really? PAE is bad? Have you just learned to completely ignore segmentation unless it's named PAE?
Segmentation on x86 is utter tripe as well, but PAE is nothing but a spec on top of the other mess of bullshit known as segmentation.
Re:Did it really work? (Score:4, Informative)
but PAE is nothing but a spec on top of the other mess of bullshit known as segmentation.
Actually, no, it's a mode that changes the page table format to allow larger physical addresses in page table entries. Nothing to do with segmentation.
Re: (Score:2)
An awful lot of people run 10 year old computers, and also an awful lot of people run XP on computers that could handle 7 64bit or linux 64bits. So you'd better have a 32bit version of your program (Google Chrome, Google Earth, Firefox, whatever).
Though, it ought to be easier to have a fully 64bit system (a linux distro without Wine might do it, if you're careful not to install 32bit software and if Chromium and/or Firefox are 64bit there). But the benefit is only not storing and running duplicate 32bit lib
Re: (Score:2)
A 32-bit x86 app has access to 8 32-bit "general purpose" registers - they ain't really all general purpose because three of them are the stack pointer, frame pointer, and program counter.
You appear to have confused x86 with the PDP-11; the program counter ("instruction pointer" in x86land) is not one of the GPRs.
As for the frame pointer, GCC, for example, has a -fomit-frame-pointer flag that generates code that doesn't use EBP as a frame pointer, so it's available as a GPR. That might make debugging more difficult. If you're not just overlapping the data and stack segments, references through EBP implicitly go to the stack segment, so you'd have to use a segment-override prefix if it has
Re: (Score:2)
64-bit binaries are larger and might run 103% at the speed of 32-bit if you're lucky.
Maybe there is a lot of software written in C that uses int or unsigned when it should have typedef'd a size appropriate for its needs.
Software that's written in C, in all of the environments I know of for x86, has 32-bit ints (signed or unsigned) whether compiled 32-bit or 64-bit, so you're presumably not saying that those programs suddenly get 64-bit ints when compiled 64-bit. They will get 64-bit longs on UN*X (but not on Windows), and will get 64-bit pointers in either case.
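A tiny check of that point; the sizes noted in the comment are the usual conventions (LP64 on 64-bit UN*X, LLP64 on 64-bit Windows, ILP32 on 32-bit x86), not guarantees from the C standard:

    /* Typical results:
     *   64-bit UN*X (LP64):     int 4, long 8, long long 8, void* 8
     *   64-bit Windows (LLP64): int 4, long 4, long long 8, void* 8
     *   32-bit x86 (ILP32):     int 4, long 4, long long 8, void* 4 */
    #include <stdio.h>

    int main(void)
    {
        printf("int=%zu long=%zu long long=%zu void*=%zu\n",
               sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
        return 0;
    }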
Re: (Score:2)
Re: (Score:2)
No.
Re: (Score:2)
Does 64 bits really mean that every program is twice as big as it needs to be? Every time I hear about an innovation that requires things to be bigger, I question the necessity.
Nope. Doesn't mean that at all.
Maintaining backwards compatibility with 32-bit means that you have to compile it twice, and include both sets of binaries. Actual compiled code that doesn't bother with backwards compatibility isn't significantly larger than 32-bit code.
Re: (Score:2)
Re: (Score:2)
Nah - your primitives are doubled in size, which realistically represents something closer to a 25-33% size increase on average.
What makes you think your 'primitives' are doubled in size?
Re: (Score:2)
The fact that they are, maybe? ints become twice as big
No they don't, the size of an int is entirely compiler dependent.
Re: (Score:2)
I know
If you know then why did you say ints become twice as big when they don't?
I was giving an example from the real world on that platform that people of Slashdot are usually interested about, Linux.
That example doesn't in any way illustrate that primitive types would be twice as big, pointers yes, but not primitive types.
Re: (Score:3)
                     32-bit   64-bit
    sizeof (char)       1        1
    sizeof (short)      2        2
    sizeof (int)        4        4
    sizeof (long)       4        8
    sizeof (lon
Re: (Score:2)
You could both, I don't know, ACTUALLY FIND OUT the answer and present it.
I did, the answer is that it is dependent on the implementation, not the machine architecture. Or are you going to tell me that those are the size of those primitive types on 32bit and 64bit architecture? Because they aren't, they are just the values defined by the implementation you used.
The truth is somewhere in between.
No, the truth is exactly as I said: as your post demonstrates, primitives are not doubled in size on a 64bit architecture, and the reason why is that the decision on the size of primitives is not governed by th
Re: (Score:2)
The fact that they are, maybe? ints become twice as big,
If you're talking about C-language ints, on very few 64-bit platforms are they 64-bit. Most UN*Xes are LP64, not ILP64, and Windows is LLP64 (they didn't even make long 64-bit, unlike most UN*Xes).
so do pointers.
Re:Twice as big as it needs to be? (Score:5, Informative)
it's an easy choice unless you absolutely need 16-bit support.
The annoying thing being that an x86-64 processor in long mode can, in fact, run 16-bit protected mode code (like essentially all actual Windows 3.x programs) with the same compatibility sub-mode that runs 32-bit code. It's merely that Microsoft decided they didn't want to bother supporting it.
That this can be done is easy enough to prove; take a Win16 app and run it in WINE on 64-bit Linux.
Re: (Score:2)
Only if it's a fat binary, but thankfully these never needed to catch on with the x86 to x86-64 transition.
Re: (Score:2)
Only if it's a fat binary, but thankfully these never needed to catch on with the x86 to x86-64 transition.
...although they did anyway, in OS X (even though the vast majority of Macs had x86-64 processors).
Yes (Score:2)
That's why I switched to using 1 bit microprocessors. My programs are really small now. I just wrote a database which I can fit in my pocket.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Most of those same home users might get by with 512MB - 1GB of RAM and a $10 AGP video card; but with millions having multi-gigabyte machines with vector-processor GPUs, the potential for cheap, powerful distributed processing is enormous, if you can convince them to give up a few hours of CPU time occasionally.
Otherwise, well, it's probably just a waste of electricity, although PCs have become pretty darned efficient in the last few years.
Re: (Score:2)
It's not cheap, you jackass; you're just passing the bill off to someone else.
On top of that, it's incredibly shitty for the environment.
Re: (Score:2)
By your blinkered "thinking", all research that doesn't produce instant results is wasted.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Insightful)
Those were x86-based? The title was "64-bit x86 Computing Reaches 10th Anniversary", not "64-bit Computing Reaches 10th Anniversary".
Re:Whatever! PowerPC been doing 64-bit (Score:5, Funny)
Re: (Score:2)
SPARC would like a word with you as well. When the Ultra workstations first hit the market, 32 bit software actually ran slower under 64 bit Solaris.
Re: (Score:3)
POWER != PowerPC
Both POWER (all-caps) and PowerPC refer both to instruction set architectures and brand names used on processors that implemented them.
The PowerPC ISA took the POWER ISA, added some stuff such as general-register-based multiply and divide instructions, and removed a few instructions (and didn't add in the ones used in the POWER2 processor).
POWER3 [ibm.com] was a 64-bit processor that implemented the union of 64-bit PowerPC and POWER; I don't know whether any subsequent POWERn processors implemented the POWER ISA-o
Re: (Score:2)
Re: (Score:2)
And how many people actually owned a SPARC, POWER, or Itanic for that matter?
Well, some of the masses might have had G5 iMacs (PowerPC 970, 64-bit), but, yes, it took AMD to bring 64-bit to most of the masses.
At least one comment claims that the original title of the article was "64-bit Computing Reaches 10th Anniversary", which, if true, means the article came out with a bogus headline (there's more to "Computing" than stuff that runs on a mainstream desktop or laptop machine, and DEC OSF/1 came out in 1993, so it's been at least 20 years); if the original comment was posted befor
Re: (Score:2)
Uh, no. He never said anything like that. But hey, don't let the facts stop you... just keep repeating that retarded meme.
Re: (Score:2)
In my experience most hardware that works with other versions of x64 Windows works fine on XP x64. The only two exceptions I ran into were the Data Translation DT9816 (which worked with some APIs but not others, go figure) and the NI myDAQ (for which the software refused to install at all). Remember, from a driver point of view XP x64 is basically the same as Server 2003 x64, so all the core hardware that is used in both clients and servers is well supported.
As for adoption I know of a few dedicated simulation
Re: (Score:2)
Hmm. Depends. The global economy is a bit too unstable to make much progress for now, and people are still getting used to the 64-bit changeover.
We have multiple cores, but the software kits haven't evolved enough yet to take full advantage of them, or so I'm told.
Personally, I think the next big leap should be optical processing.
Re:How soon till we get 128-bit? (Score:4, Informative)
A long time.
We don't even have true 64-bit x86-64 processors yet. While programmers are told to* treat pointers as 64-bit, in the current implementation (referred to as a "48-bit implementation") there are only 47 usable bits for user-mode pointers**. That is enough to map 128 terabytes to one process; afaict the most RAM you can currently get in a PC-architecture machine is 2 terabytes.
If we assume the largest available memory size doubles every 1.5 years and we want to be able to map all the memory to one process then we have 9 years until the current implementation is used up and another 24 years after that before a "full 64-bit" (with one bit used to distinguish between kernel and user mode) implementation is used up.
* Of course just because programmers are told to do something doesn't mean they will http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=642750 [debian.org]
** A 48th bit is used to differentiate kernel and user addresses. The number is then sign-extended to produce a 64-bit number.
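Spelling out the arithmetic behind those figures (a back-of-the-envelope sketch; the 1.5-year doubling rate is the assumption stated above, not a law):

    2^47 bytes                         = 128 TB of user address space today
    2 TB  -> 128 TB                    = 6 doublings  x 1.5 years  ~  9 years
    2^63 bytes (the full "user half")  = 8 EB
    128 TB -> 8 EB                     = 16 doublings x 1.5 years  ~ 24 more years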