Posted by michael from the double-the-pleasure-double-the-fun dept.
msolnik writes: "Over at RealWorldTech they've published an article on the future of 64-bit performance. The article covers the different technologies, from SPARC to Hammer. It's a great read if you are looking for information on up-and-coming products from Intel, AMD, Sun, and Compaq."
AMD's going for a slightly different track: AMD is the only one trying to put 64-bit on the desktop. Now, for us Linux freaks, SuSE Linux and NetBSD will be fine for a 64-bit desktop, but if AMD wants to lock up some of the market into x86-64, they really need a mainstream OS. Unfortunately that means Windows, and "if we build it they will come" doesn't necessarily work if there is no competition. Still, in the meantime, Clawhammer will be a damn fine 32-bit chip as well, and Sledgehammer will bring high-end servers right down to mid-range prices.
This is not true; SGI and Sun have had 64-bit desktops for a long time! The Sun Blade 100 is a great example, and it's only ~$1,000. If you are a member of the academic community you can obtain one for around $795. It has a 64-bit processor - the same one as in the Sun Fire 15000.
UNIX is and has been on the desktop for years. Sun got their start on the desktop and has been strong there ever since! SuSE, Debian, NetBSD, and OpenBSD all support the Sun Blade 100 too!
Unfortunately that means Windows, and "if
we build it they will come" doesn't necessarily
work if there is no competition.
Luckily, Microsoft has abandoned the Windows 9x/ME kernel in favor of the Windows NT kernel for all of their desktops. Microsoft has been developing 64-bit versions of Windows NT for some time now, originally for Alpha, then (using Alphas for development and testing, even) for IA-64. If there is sufficient demand, we may see an x86-64 version of Windows XP (or whatever the next version will be called). I doubt it will be a lower-cost "Home" version; more likely a "Professional" version. All Microsoft has to do is realize that x86-64 owners will use Linux/BSD if they are limited to a 32-bit version of Windows, and suddenly they will be scrambling to make a port.
Using AMD CPUs and IDE drives in a server is good both financially and for performance.
AMD CPUs outperform every Intel CPU (don't fall into the MHz trap!), are cheaper, and are no less reliable. You're a fool if you buy the more expensive Intel CPUs these days.
SCSI isn't magic technology anymore, either. In fact, the latest IDE protocols surpass all existing SCSI technology in speed. Furthermore, the actual drive mechanics are the same for both SCSI and IDE versions of a drive, so reliability isn't any lower for IDE drives anymore. Yeah, you can chain more drives on a single SCSI bus than you can on IDE, but IDE controller cards are cheap. There are inexpensive IDE RAID controllers too. And, of course, the price for IDE drives is significantly lower. You can get two huge IDE drives for the price of a single 18 GB "high performance" SCSI drive.
The more I listen to the Intel and SCSI people, the more I believe that the so-called advantages of both technologies are nothing but hype to keep up the ridiculous pricing.
While I agree with you on the AMD front, I strongly disagree regarding SCSI vs. IDE in a server environment.
SCSI drives have disconnect abilities, which means commands can be sent to them and the bus then disconnected (freed for other use) while the drive seeks to the required sectors and buffers data into its internal RAM. This means that other drives can be given instructions during this 'dead' time. On a single-drive system this is irrelevant, but even on a small server (say a 0.5 TB disk array) it is crucial.
IDE drives hog the channel - which is why you can't get much more speed out of a RAID-0 array with 3 or 4 drives than one with 2 (masters) on a standard PC. There are only 2 channels, so only 2 drives can be accessed at once. Contrast this to a SCSI system, where anything up to ~64 disks might be attached to a single channel, but using disconnect to manage that channel amongst them.
To see why disconnect works so well, remember that the time it takes to seek the disk head is measured in milliseconds - this is several orders of magnitude slower than the time to send the commands/data over the bus to the host computer.
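To put rough numbers on that, here is a back-of-the-envelope sketch in C. The seek time, command size, and bus rate are assumed, era-typical values, not figures taken from this thread:

/* Compare a disk seek against the time to push a command block
 * over the bus. All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double seek_ms   = 8.0;    /* average seek time, ms (typical)     */
    double bus_mb_s  = 160.0;  /* Ultra160 SCSI transfer rate, MB/s   */
    double cmd_bytes = 64.0;   /* rough size of one command exchange  */

    double cmd_us  = cmd_bytes / bus_mb_s;  /* bytes / (MB/s) = us    */
    double seek_us = seek_ms * 1000.0;

    printf("command transfer: %.1f us\n", cmd_us);      /* ~0.4 us   */
    printf("seek:             %.0f us\n", seek_us);     /* 8000 us   */
    printf("ratio:            ~%.0f:1\n", seek_us / cmd_us);
    return 0;
}

The drive spends roughly four orders of magnitude longer seeking than talking, and that is exactly the window that disconnect hands back to the bus.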
Also remember that ATA-100 is (AFAIK) a burst speed, i.e. it can transfer at that rate when the source data is in the cache - it cannot read from the platters at that speed... The latest SCSI standard is 320 MBytes/sec (Seagate, I believe), although I think 160 MBytes/sec is the highest widely available standard. Given the architecture underlying both technologies, which do you think will have the best chance of filling its cache more often in a RAID array? (Hint: it starts with an 'S' :-)
The only company I have seen to make large-scale IDE RAID arrays work as fast as SCSI ones uses an IDE controller *per drive*, and attaches a SCSI/Fibrechannel front-end via custom hardware. It's still cheaper than SCSI, but not by that much, and getting people who know about it is more difficult when it goes wrong...
SCSI's disconnect ability looks good in theory, but in practice it's not such a great advantage. With SCSI you can attach up to 15 devices to a single channel and effectively access them all at the same time. With IDE you can attach up to two devices to a single channel, and only access one at a time. Sounds like SCSI is lots better, but only if you have a single IDE/SCSI channel and more than one drive. If you put each IDE drive on a separate channel - and you can buy IDE controllers with 8 channels - then there really is no advantage to SCSI's disconnect/reconnect ability.
This IDE vs. SCSI debate is getting really old. Buy what you believe in and what you want to spend. I for one prefer SCSI and I am willing to pay the extra cash. If you don't, then don't.
I've found that the high-end multi-channel IDE controllers (such as 3Ware's Escalade line) work well in small database servers. They get around IDE's shortcomings by devoting a separate channel to each drive. Mind you, cabling up 8 drives is a b*tch.
Of course, the drives are still lower performance, but with a healthy amount of RAM I get good results at a reasonable price.
Furthermore, the actual drive mechanics are the same for both
SCSI and IDE versions of a drive
Why do people keep repeating this myth? If you look at the physical parameters for any SCSI and IDE drive made in the last 5 years, you will see that they are completely different. I dare anyone to find a SCSI and an IDE drive from the same manufacturer, produced since 1998, that has the same number of heads, spins at the same speed, and has the same capacity. You won't find any.
Both SCSI and IDE are communications mechanisms, with SCSI winning out as the more intelligent (due to a variety of factors). That said, it's merely a function of the circuitry stuck on the back of the drive: why in the world would any drive manufacturer build completely different drives for SCSI and IDE? Seriously, I personally have never looked at the stats, but that seems absurd: it seems brutally obvious that they'd just pull them off the end of the line and stick on the SCSI board, or the IDE board, of course adding a 200% premium to the SCSI-equipped version as a sucker tax.
I find it interesting that you mentioned "since 1998", and it is perhaps true given that condition: IDE has permeated the market, and the only area where SCSI still has a presence is high-end servers, so it is possible that they only even bother sticking SCSI boards on the 15,000 RPM monsters anymore. However, I still disagree with your assertion that it's a "myth", as back in the day (when even desktops came with SCSI if you wanted "multitasking") every SCSI-versus-IDE review started off with a disclaimer that the drives were physically exactly the same and only the communications mechanism differed.
And I bet the premium will be more than 10%. AMD will still have the better performance/price ratio.
Certainly Northwood will (rightly) carry a bit of a price premium over Willamette, but mainly because Willy prices will drop by a lot. Northwood will almost certainly improve Intel's price/performance relative to AMD, for the simple reason that in addition to being able to clock faster and getting better performance at equivalent clock speeds, Northwoods are cheaper to make than Willamettes, because they're a lot smaller. (~130mm^2 vs. 217mm^2)
AMD will still offer better performance/price, of course, but mainly because they will cut prices in response. (And they had an awfully large lead to start with.)
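A rough die-per-wafer count shows why the smaller die matters so much. Using the standard approximation from Hennessy and Patterson's textbook (dies per wafer ~ pi*(d/2)^2/A - pi*d/sqrt(2A)) on a 200 mm wafer, with the die areas quoted above:

  Northwood (~130 mm^2): 31416/130 - 628/sqrt(260) ~ 242 - 39 ~ 203 candidate dies
  Willamette (217 mm^2): 31416/217 - 628/sqrt(434) ~ 145 - 30 ~ 115 candidate dies

Before yield even enters the picture, Intel gets nearly twice as many chips per wafer.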
AMD has kick-ass CPUs; they are fast and cheap. The same goes for IDE drives, which have become bigger and more reliable over the past few years, in case you haven't noticed.
I'm not bashing anyone here, I'm just stating a fact. And in case you wonder, my OpenBSD server runs on AMD and SCSI.
Maybe you were thinking of Alpha CPUs? Now THERE'S raw power for ya.
And how much N64 code is 64-bit? The spinning logo is the only 64-bit code I've seen in any N64 game. The logo was written by the SGI guys to show how to use the hardware properly.
If the Power Mac G5 is introduced at Macworld on Monday*, you can all have your 64-bit goodness by month's end!
*I'm not really expecting it to be released this soon, maybe later this year. But who knows? It could happen.
It would be great, but I don't see it happening. A top-of-the-line G4 currently uses PC133 SDRAM and ATA/66 hard drives. They will more than likely unveil future plans for 64-bit, but I think we will see something more modest in terms of hardware at the show, like maybe DDR SDRAM and ATA/133 hard drives in a brand new G5 (and I guess you could also count the new case as hardware too).
Yes, but the G5 replaces the 32-bit ALU with a 32/64-bit ALU. The PPC spec has included 64-bit instructions from day one, but they've only been used in IBM's mainframes. The problem with Apple using a standard 64-bit PPC is that there are a few minor differences in how certain generic instructions are handled (most instructions are specific to single- or double-words), which makes running code compiled for 32-bit PPC uncertain on 64-bit PPC. So what I'm assuming Mot has done with the G5 is add a "64-bit mode" that Apple disables by default and that applications must explicitly request.
The PPC spec has included 64-bit instructions from day one, but they've only been used in IBM's mainframes. The problem with Apple using a standard 64-bit PPC is that there are a few minor differences in how certain generic instructions are handled (most instructions are specific to single- or double-words), which makes running code compiled for 32-bit PPC uncertain on 64-bit PPC.
Is this really true? I run on a mixture of 32-bit and 64-bit 4-way POWER RS/6000 machines - all the software is compiled on the 32-bit platforms and runs seamlessly everywhere. So either your statement doesn't apply on AIX, or the PowerPC chips are subtly different from the POWER platform when it comes to 32-bit/64-bit.
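For what it's worth, much of the 32-bit/64-bit friction is about data models rather than instructions: 32-bit code is typically compiled ILP32 and 64-bit code LP64, so the same source makes different size assumptions. A tiny illustration (a generic sketch, not AIX-specific):

/* Under ILP32, long and pointers are 4 bytes; under LP64 they are 8.
 * Structures containing either therefore change layout between the
 * two worlds, which is why 32-bit and 64-bit objects can't be mixed. */
#include <stdio.h>

int main(void)
{
    printf("int:    %lu bytes\n", (unsigned long)sizeof(int));    /* 4 and 4 */
    printf("long:   %lu bytes\n", (unsigned long)sizeof(long));   /* 4 vs 8  */
    printf("void *: %lu bytes\n", (unsigned long)sizeof(void *)); /* 4 vs 8  */
    return 0;
}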
I know the PPC quite well. I write software for a living, and the PPC (the MacOS X BSD layer, with some components in asm) is one of the targets I have to keep synched. Gotta love that big-little endian mode switch...
I'm not currently speaking for 64-bit PPC, as I've never seen one. I've seen 64-bit POWER-4 servers, but that's a little different. I do, however, also target and maintain Solaris versions of my software, which are 64-bit aware. I do have to deal with the 32-bit library/64-bit application issues. I do have to deal with building both 32- and 64-bit versions. I even have to deal with testing gcc 3.0.x 32/64 modes against the Forte CC 32/64 modes. I'm pretty damned familiar with the issues involved in making software on mixed-addressing operating systems work.
Before I go on, let me note that a 64-bit application in the sparcv9 format cannot link to a 32-bit sparcv8 library, either static or dynamic. The only solution with a commercial library is to write an interface-by-interface transport layer, linking the 64-bit side of the transport layer to the application and the 32-bit side to the library, and take the penalty of using pipes for communication right on the jaw. Oh, and the 32-bit side will have the 4 GB memory limit, too...
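For the curious, here is a heavily simplified sketch of what one interface of such a transport layer can look like. Everything here is hypothetical (the "helper32" binary name, the request format); the point is only that raw bytes, never pointers, cross the 32/64 boundary:

/* 64-bit side: forward one library call to a separately compiled
 * 32-bit helper process over pipes. "helper32" is a hypothetical
 * 32-bit binary wrapping the real library; it reads a request on
 * stdin and writes a reply on stdout. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

struct request { int op; int pad; double arg; };    /* fixed layout, no pointers */
struct reply   { int status; int pad; double result; };

int call_lib32(const struct request *req, struct reply *rep)
{
    int to_child[2], from_child[2];
    pid_t pid;

    if (pipe(to_child) < 0 || pipe(from_child) < 0)
        return -1;

    pid = fork();
    if (pid == 0) {                      /* child: the 32-bit side   */
        dup2(to_child[0], 0);            /* stdin  <- request pipe   */
        dup2(from_child[1], 1);          /* stdout -> reply pipe     */
        close(to_child[1]);
        close(from_child[0]);
        execl("./helper32", "helper32", (char *)0);
        _exit(127);                      /* exec failed              */
    }
    close(to_child[0]);
    close(from_child[1]);

    /* Only plain bytes cross the boundary; the explicit padding
     * keeps the struct layout identical under ILP32 and LP64. */
    write(to_child[1], req, sizeof *req);
    close(to_child[1]);
    read(from_child[0], rep, sizeof *rep);
    close(from_child[0]);

    waitpid(pid, (int *)0, 0);
    return rep->status;
}

Two pipe transits and a context switch per call - that's the "penalty right on the jaw".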
While Sun does do a good job of making the 32 bit/64 bit transition look smooth, it's not, really. SGI and HP face similar issues. I'm told that Alpha Linux may have workarounds not available on the big iron platforms, but I don't know the details, as I don't do any serious Alpha work.
Now I am speaking of the PPC. Please take this as speculation based on POWER-4 details and the original PPC spec, not as insider knowledge.
The PPC is interesting. The original design calls for mode switching (like the SPARC or MIPS), but there's a provision for realtime mode switching in there. I expect you would take a heavy hit, but you might be able to link 64- and 32-bit binaries, if the linker were smart enough to insert mode-switch instructions into the calling sequence and if the compiler were set to interpret interface definitions (in headers) according to a dependency-determined pointer-size assumption. Come to think of it, it should have been possible to implement something like this for the SPARC and MIPS binary formats (e.g. _int_v8 and _int_v9 as separate types in the compiler's internal interpretation)... but there would be serious penalties for this as well.
Being realistic, I expect eventually we'll have a 64 bit kernel (Darwin) with 32 bit libraries provided as interfaces for mixed mode applications, and a handful of apps (Photoshop, FCPro) that require 4+GB memory being released in 64 bit form (requires G5(6?) or greater!!!) for power users... this of course, at the point in time where we have 2GB+ DDR modules, and four slots again... and another major transition. At least Apple has proven that they are good at tremendous transitions, remarkably so, considering...
There are other possible benefits to 64-bit computing beyond addressing. Some of them can be realized now... on the G4s and the P4s, there are ways to use 64-bit (or 128-bit, or even, in one case on the P4, 256-bit) bit-vector arithmetic to speed up comparisons, sometimes by unbelievable factors... some higher-precision mathematical processing is possible only with 128-bit floating point, which is generally coupled only with 64-bit integer registers, which are the basis of 64-bit memory addressing as a reasonable proposition...
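A concrete example of that kind of speedup (the well-known find-a-zero-byte trick, not something from this thread): with 64-bit registers you can test eight bytes per operation, which is the heart of fast strlen()/memchr()-style scans.

#include <stdio.h>

typedef unsigned long long u64;   /* assumed to be 64 bits here */

/* Nonzero iff some byte of x is 0x00: subtracting 1 from every byte
 * in parallel borrows into a high bit only where a byte was zero. */
static int has_zero_byte(u64 x)
{
    return ((x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL) != 0;
}

int main(void)
{
    printf("%d\n", has_zero_byte(0x4142434445464748ULL)); /* 0: "ABCDEFGH" */
    printf("%d\n", has_zero_byte(0x4142434400464748ULL)); /* 1: zero byte  */
    return 0;
}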
There's also a possible two-instruction-per-cycle trick that could be performed on a 64 bit CPU with a hybrid (64 bit with 32 bit support) kernel for certain operations. There's some documentation for this online, but I haven't tried anything of the sort myself (no current access to a POWER-4 server), so I can't vouch for the usefulness of this.
We're not talking about a trivial task, or any immediate benefits, so don't expect a 64 bit MacOS X anytime soon. Even if the CPUs are 64 bit. It should be transparent, however, as the PPC is upward compatible (32 bit binaries run on 64 bit CPUs) just as the sparc and MIPS are...
Been in 64-bit heaven since IRIX 6.0 in 1994. PowerIndigo2 (R8000) on the desktop, Challenge XL's in the server room (R4400 and R10000). And, today, Octane on the desktop and Origin 300 + Origin 3000 in the server room. A few UltraSPARC Suns, too, but Solaris took its sweet time making the move to 64 bit (Sun started the migration with Solaris 2.5 and finished with Solaris 7).
The single-page version is here [realworldtech.com]: that way you only have to wait a long-ass time for it to load once, instead of a long-ass time for each of the 5 or 6 pages.
by Anonymous Coward on Thursday January 03, 2002 @05:50AM (#2778100)
Looking Forward to 2002
By: Paul DeMone (pdemone@realworldtech.com) Updated: 01-02-2002
A Quick Look Back
In the last six months several noteworthy events and disclosures have occurred in the fast-moving world of microprocessors. AMD started shipping its Palomino K7 processor as the Athlon XP. Despite the controversy surrounding the performance-rating-based model naming scheme associated with the XP, it appears the latest refinement of AMD's venerable K7 design has, by most measures relevant to the PC world, eclipsed the performance of the 2 GHz Pentium 4 (P4), the highest speed grade offered for Intel's first implementation of its new x86 microarchitecture. However, this advantage should prove short-lived, as the second-generation 0.13 um Northwood P4 will be officially released in early January. The Northwood will offer higher clock rates, an L2 cache doubled in size, and minor internal performance enhancements.
Extending their rivalry on a different front, Intel and AMD unveiled microarchitectural details of their forthcoming 64-bit standard bearers at Microprocessor Forum in October. Although the McKinley and Hammer are both future flagship parts, and thus important symbols of Intel's and AMD's struggle for technological leadership, the two processor families will be sold into different markets and won't directly compete. In other 64-bit news, IBM officially unveiled the POWER4 processor in several different hardware configurations with clock rates as high as 1.3 GHz and took the top spot in both the integer and floating point performance categories of the SPEC CPU 2000 benchmark. However, preliminary "teaser" numbers from Compaq suggest that IBM will lose SPEC performance leadership when the EV7, the final major product introduction in the doomed Alpha line, is unveiled. Regardless of who wins bragging rights for technical computing, both processors will offer memory and I/O bandwidth far ahead of their competitors and both should do quite well on commercial workloads.
Sun Microsystems continues to slowly upgrade its UltraSPARC-III line in the face of an increasingly difficult competitive environment. Sun recently introduced its copper process based version of the US-III at 900 MHz. The latest device ostensibly includes a fix to the prefetch buffer bug that vexed the earlier aluminum based device. Far more interesting than the new silicon was the latest version of Sun's compiler. It raised the new copper US-III/900's SPECfp2k score by roughly 20% by spectacularly accelerating one of the 14 programs in the suite using an undisclosed optimization. A recent call was issued for new programs for the next generation of the SPEC CPU benchmark. Tentatively named SPEC 2004, it now seems like it couldn't come soon enough.
McKinley: Little more Logic, Lots more Cache
The most striking aspect of McKinley is its size and transistor count. Weighing in at a hefty 220 million transistors, this 0.18 um device occupies a substantial 465 mm2 of die area. The majority of McKinley's transistor count is tied up in its cache hierarchy. It is the first microprocessor to include three levels of cache hierarchy on chip. The first level of cache consists of separate 16 KB instruction and data caches, the second level of cache is unified and 256 KB in size, and the third level of cache is an astounding 3 MB in size. The die area consumed by the final level of on-chip cache can be seen in the floorplan of the McKinley and some representative server and PC class MPUs shown in Figure 1.
Figure 1 Floorplan of McKinley and Select Server and PC MPUs.
The Itanium (Merced) floorplan is shown as blank because although its chip floorplan has been previously disclosed its die size is still considered sensitive information by Intel and has not been released. The outlines shown indicate the range of likely sizes of the Itanium die based on estimates from a number of industry sources.
Both the first and second generation IA64 designs, Itanium/Merced and McKinley, are six issue wide in-order execution processors. In-order execution processors cannot execute past stalled instructions so it is important to have low average memory latency to achieve high performance. This focus on the memory hierarchy can be clearly seen in the McKinley [1]. Although it is not surprising that the on-chip level 3 cache in McKinley is much faster than the external custom L3 SRAMs used in the Itanium CPU module, it is interesting to see how much faster in terms of processor cycles the McKinley level 1 and 2 caches are despite the McKinley's 25 to 50 percent faster clock rate in the same 0.18 um aluminum bulk CMOS process.
The improvement in average memory latency between Itanium and McKinley can be approximated using the comparative access latencies presented by Intel at their last developers conference, combined with representative hit rates based on the size of each cache in the two designs and an assumed average memory access time of 160 ns. This data is shown in Table 1.
Table 1. Estimated average memory latency, Itanium vs. McKinley

                                 Itanium    McKinley
  Frequency (MHz)                    800        1000
  L1    Size (KB)                     16          16
        Latency (cycles)               2           1
        Miss rate                   5.0%        5.0%
  L2    Size (KB)                     96         256
        Latency (cycles)              12           5
        Global miss rate            1.8%        1.1%
  L3    Size (MB)                      4           3
        Latency (cycles)              21          12
        Global miss rate            0.5%        0.6%
  Mem   Latency (ns)                 160         160
        Latency (cycles)             128         160
  Total Average latency (cycles)    3.62        2.34
        Average latency (ns)        4.52        2.34
The back-of-the-envelope calculations in Table 1 suggest that a load instruction will be executed by McKinley with about half the average latency, in absolute time, that it would have on Itanium. No doubt this is a major contributor to the much higher performance of the second generation IA64 processor. Although the large die area of McKinley suggests a substantial cost premium compared to typical desktop MPUs, for large scale server applications the extra silicon cost is insignificant compared to the overall system cost budget. In fact, from the system design perspective, the ability to reasonably forgo board level cache probably more than pays for the extra silicon cost of McKinley through reduction of board/module area, power, and cooling requirements per CPU. Large scale systems based on the EV7 will also eschew board level cache(s), although with the Alpha it is the greater latency tolerance of the out-of-order execution CPU core plus the integration of high performance memory controllers that permit this, rather than gargantuan amounts of on-chip cache.
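The arithmetic behind Table 1 is easy to reproduce: average latency is the L1 latency plus each level's global miss rate times the latency of the next level down. A small sketch using the table's own numbers:

/* Average load latency per Table 1: L1 latency plus each level's
 * global miss rate times the next level's latency. */
#include <stdio.h>

static double avg_cycles(double l1, double m1, double l2,
                         double m2, double l3, double m3, double mem)
{
    return l1 + m1 * l2 + m2 * l3 + m3 * mem;
}

int main(void)
{
    /* Itanium at 800 MHz: 160 ns of memory latency = 128 cycles */
    double it = avg_cycles(2, 0.050, 12, 0.018, 21, 0.005, 128);
    /* McKinley at 1000 MHz: 160 ns = 160 cycles */
    double mk = avg_cycles(1, 0.050, 5, 0.011, 12, 0.006, 160);

    printf("Itanium:  %.2f cycles, %.2f ns\n", it, it / 0.8); /* 3.62, 4.52 */
    printf("McKinley: %.2f cycles, %.2f ns\n", mk, mk / 1.0); /* 2.34, 2.34 */
    return 0;
}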
Besides the greatly enhanced cache hierarchy, the McKinley will boast two more "M-units" than Itanium. These are functional units that perform memory operations as well as most types of integer operations. In a recent article I speculated about the nature of McKinley design improvements. I suggested that it would contain 2 more I-units and 2 more M-units than Itanium in order to simplify instruction dispatch and reduce the frequency of split issue due to resource oversubscription. In IA64 parlance, both I-units and M-units can execute simple ALU based integer instructions like add, subtract, compare, bitwise logical, simple shift and add, and some integer SIMD operations. I-units also execute integer instructions that occur relatively infrequently in most programs but require substantial and area intensive functional units. These include general shift, bit field insertion and extraction, and population count.
Because the integer instructions that cannot be executed by an M-unit are relatively rare, the McKinley designers saved significant silicon area with little performance loss by only adding two M-units (for a total of four) and staying with the two I-units of Itanium. Data on the relative frequency of different integer operations suggest that the vast majority of integer operations, about 90%, that occur in typical programs are of the type that can be executed by either an M-unit or I-unit [2]. If we consider a random selection of six integer operations, each with a 90% chance of being executable by an M-unit, then the odds are better than 98% that any six instructions are compatible with the MMI + MMI bundle pair combination and can be dual issued by McKinley. Thus there is practically no incentive to add two extra I-units to McKinley to permit the dual issue of the MII + MII bundle pair combination.
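The 98% figure follows from simple binomial arithmetic: treat each of the six instructions as I-unit-only with probability 0.1, and note that an MMI + MMI bundle pair can absorb at most two such instructions:

  P(dual issue) = C(6,0)(0.9)^6 + C(6,1)(0.1)(0.9)^5 + C(6,2)(0.1)^2(0.9)^4
                ~ 0.531 + 0.354 + 0.098
                ~ 0.984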
One curiosity in the McKinley disclosure was the fact that the basic execution pipeline was revealed to be 8 stages long. Although this is still 2 stages shorter than the pipeline in the slower clocked Itanium, it is one more stage than the 7 stages previously attributed to McKinley [3]. Whether this represents a slightly different way of counting the pipe stages or an actual design change isn't clear. Ironically, it has long been rumored that the Itanium pipeline was stretched by at least one stage quite late in development. It will be interesting to see if the new IA64 core under development by the former Alpha EV8 design team (now at Intel) also suffers this strange pipeline growth affliction.
Hammering x86 into the 64 bit World
In October AMD revealed some aspects of K8, its next generation x86 core code-named Hammer [4]. This new design is primarily distinguished by being the first processor to implement x86-64, AMD's extension to the x86 instruction set that supports 64 bit flat addressing and 64 bit GPRs, as well as other enhancements. As can be seen in Figure 2, the Hammer core heavily leverages AMD's highly successful K7 core.
Figure 2 Comparison of K7 Athlon and K8 Hammer Organization
The back end execution engine of the K8 Hammer core is basically identical to that of the K7 except that the integer schedulers are expanded from 5 to 8 ROPs. The increase in the integer out-of-order instruction scheduling capability this implies may have been intended to better hide the data cache's two cycle load-use latency, and thus slightly increase per clock performance. An alternative hypothesis is that the latency of some integer operations may have been increased to allow higher clock rates and the change was made to prevent a slight loss in per clock performance. The basic execution pipeline of the K7 and K8 are compared in Figure 3.
Figure 3 Comparison of K7 and K8 Basic Execution Pipeline
The K8 execution pipeline has two more stages than K7, and these new stages seem to be related to x86 instruction decode and macro op distribution to the integer and floating point schedulers. Although some of the stages have been renamed it appears that the final five pipe stages, representing the back end execution engine, are comparable. This is unsurprising as the most complex and difficult task in an x86 processor like the K7 or K8 is the parallel parsing of up to three variable length x86 instructions from the instruction fetch byte stream and their decoding into groups of systematized internal operations. In comparison, the execution engine is hardly much more complex than a typical out-of-order execution RISC processor.
Both the block diagram and execution pipeline indicate that AMD has spent nearly all of its Hammer development effort on revamping the front end of the K7 design. Some of the extra degree of pipelining may be related to the extra degree of complexity in decoding yet another level of extensions (x86-64) on top of the already Byzantine x86 ISA. Some of the increase may be related to increased flexibility in internal operation dispatch to reduce the occurrence of stall conditions and increase IPC. And, some of the increase may simply reflect a reduction in the work per stage to increase the clock scalability relative to the K7 core. Without a detailed description of each of the pipeline stages in the K8 it is difficult to correlate front end pipe stages in the K7 to the K8, and next to impossible to assess how the benefit of the extra two pipe stages is allocated between accounting for increased ISA complexity, measures to increase IPC, and reduction in timing pressure per pipe stage to allow higher clock rates.
Although the 64-bit instruction set extension makes for attention grabbing headlines in the technical trade press, the major performance enhancements in the Hammer series are much more prosaic from a processor architecture point of view. These enhancements are the direct integration of interprocessor communications interfaces and a high performance memory controller. Like a "poor man's EV7", the Hammer includes three bi-directional HyperTransport (HT) links and a memory controller supporting a 64 or 128-bit wide DDR memory system using unbuffered or registered DIMMs. With the latter, a K8 processor can directly connect to 8 DIMMs, although this number may be reduced at the higher memory speeds supported. It is interesting to compare the results of the same design philosophy applied to the high-end server and mainstream PC segments of the MPU market as shown in Table 2. Power and clock rates for the Hammer MPU are estimates.
Table 2. Same design philosophy, two markets

                      Alpha EV7 [5]             K8 Hammer
  Process             0.18 um bulk CMOS         0.13 um SOI CMOS
  Die Size            397 mm2                   104 mm2
  Power               125 W @ 1.2 GHz           ~70 W @ 2 GHz
  Comm Links          4 links, each 6.4 GB/s,   3 links, each ~6 GB/s
                      one 6.4 GB/s IO bus
  Memory Controller   2 x 64-bit DRDRAM,        64- or 128-bit DDR,
                      12.8 GB/s peak            2.7 or 5.4 GB/s peak
  Package             1443 LGA                  ?
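As a sanity check, the Hammer peak bandwidth figures in Table 2 are consistent with a DDR-333-class interface (an assumption; final memory speeds were not disclosed): a 64-bit channel moves 8 bytes per transfer, so 8 B x 333 MT/s ~ 2.7 GB/s, and a 128-bit channel doubles that to ~ 5.4 GB/s.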
Although the Intel McKinley and AMD Hammer are both 64 bit MPUs, these devices are directed at different markets. While the large and expensive McKinley will target medium and high-end server applications, the first member of the Hammer family, code named "Clawhammer", will target the high end desktop PC market. That is not to say that McKinley will outperform the Clawhammer device. Indeed, I expect the AMD device will easily beat the much slower clocked IA64 server chip in SPECint2K and many other integer benchmarks, as well as challenge much faster clocked Pentium 4 devices in both integer and floating point performance.
Exactly how much performance the Hammer core may provide is the subject of some controversy. AMD's Fred Weber was quoted as stating the Hammer core could offer SPECint2k performance as much as twice that of current processors. Although this comment is vague enough to drive a truck through (twice as fast as the best AMD processor? The best x86 processor? The best processor announced but not yet shipping? IA-32 or x86-64 code? Clawhammer or the big-cache Sledgehammer?) a few web based news sites interpreted this comment as meaning the Hammer would achieve 1400 SPECint2k, and now some people are incorrectly attributing this figure to Weber himself. Keep in mind that no Hammer device had even taped out as of the end of 3Q01, let alone been fabricated, debugged, verified, and benchmarked at the target clock frequency. Whatever figure Mr. Weber had in mind was derived from architectural simulation, and for a benchmark suite as cycle intensive as SPEC CPU simulation results are approximate at best [6][7]. As has been shown time and time again, it is best not to count performance chickens too closely before the silicon eggs hatch.
Alpha Goes Out With a Bang not a Whimper
Although Compaq announced the wind down of Alpha development in June and transferred nearly the entire EV8 development team to Intel over the summer, there is still one more surprise in store for the computer industry. The EV7, the final major design revision in store for Alpha, has been the subject of intense testing, verification, and system integration exercises since late spring. This design has been in the pipeline for a long time. It was first announced more than three years ago and finally taped out in early 2001. Because of the complexity of this device (basically a complex CPU and a large scale server chipset all on one die) and the incredible degree of shakedown server class MPUs and systems undergo, the EV7 will not go into volume production until the second half of 2002. To bridge the gap between current products and EV7 based systems Compaq will shortly release a 1.25 GHz version of the workhorse EV68.
Although general details of the EV7 design have been in the public domain for more than three years, and specific facts about the performance of this MPU's router and memory controllers were disclosed in February, I think the performance it will achieve when officially rolled out in 2H02 will surprise and dismay many in the industry (possibly including senior Compaq management). At the Microprocessor Forum in October Compaq's Peter Bannon unveiled some preliminary performance numbers for the EV7, namely 804 SPECint2k, 1253 SPECfp2k, and roughly 5 GB/s STREAM performance.
Although these numbers are quite good in absolute terms, comparable to the fastest speed grade POWER4 running in a contrived and unrealistic hardware configuration, the numbers failed to live up to my estimates given in a previous article. However, former members of the Alpha design team have privately confirmed my suspicions that Mr. Bannon was clearly sandbagging the EV7 numbers, keeping a not insignificant amount of performance off the table. For a product still more than six months from release that is a not unexpected tactic. I still hold the opinion that when it is all said and done the EV7 has a good chance of being the highest performance general purpose microprocessor ever fabricated in 0.18 um technology, a fitting ending to a remarkable and tragic technological saga (EV79, an EV7 shrink to 0.13 um SOI is on the roadmap for 1H04 but the continued turmoil at Compaq suggests a healthy amount of scepticism is in order).
Sun's Surprising Spike SPARCs SPECulation
Sun recently introduced a new member of its UltraSPARC-III family. This new 900 MHz device differs from earlier US-III parts by the use of copper interconnect instead of aluminum. Although Sun submitted official SPEC scores for a 900 MHz Sun Blade 1000 Model 1900 using an aluminum US-III in late 2000, yield was apparently poor and this speed grade wasn't generally available. A rarely occurring bug related to a prefetch buffer inside the US-III was discovered and as a work around this feature was disabled in firmware. Unfortunately for Sun Microsystems, this caused the SPECfp_base2k score for the Model 1900 to drop from an already lackluster 427 to a lamentable 369 in a second SPEC submission in the spring of 2001. So it comes as no small surprise that the Sun Blade 1000 Model 900 Cu workstation, based on the new copper processor turned in a SPECfp_base2k score of 629 in a recent submission. Both the Model 1900 and Model 900 Cu versions of the Blade 1000 feature 8 MB of L2 cache.
It is possible that the copper US-III incorporates improvements beyond a fix to the prefetch buffer bug as well as improvements to system level hardware between the Model 1900 and Model 900 Cu. However it appears much of the improvement can be attributed to the use of the Sun Forte 7 EA compiler instead of the earlier Forte 6 update 1 compiler used to generate the 427 and 369 scores. The reason why I say that with confidence can be seen quite readily in the graph in Figure 4.
Figure 4 SPECfp_base2k Component Scores for US-III and Competitors
The SPECfp_base2k scores for the 14 sub-component programs for the pre-bug fix Sun Blade 1000 Model 1900 submission using the Forte 6 compiler are compared to the recent Sun Blade Model 900 submission using the Forte 7 compiler. In addition, scores for the Itanium (4MB, 800 MHz version in an HP i2000), Alpha EV68C (1000 MHz version in an ES45/1000), and POWER4 (1300 MHz version in a pSeries 690 Turbo) are provided for reference. It is the new compiler's score on the 179.art program that quite literally stands out from the rest. Although several other programs see appreciable improvement (the 183.equake score nearly triples), the new compiler increases the score of 179.art by more than 800%. In absolute terms this score, 8176, is more than four times higher than that achieved by the Alpha EV68 and POWER4, MPUs that easily beat the copper US-III on nearly every other SPECfp2k program. The 179.art score achieved by the Forte 7 compiler is vital to the new machine's pumped up SPECfp_base2k score. If you leave 179.art out of the geometric mean then its SPECfp_base2k score would drop by nearly 18% from 629 to 516.
This remarkable improvement on 179.art is unusual in the field of compiler engineering where single digit percentage performance increases are often considered major victories. So it is no surprise that Sun's achievement immediately raised suspicions among industry observers and competitors about the nature of the optimization employed by the Forte 7 compiler. It is hard not to think of Intel's infamous eqntott compiler bug that erroneously increased the SPECint92 score of its processors by about 10% until caught and fixed [8]. This bug used an illegal optimization that allowed the output of 023.eqntott to pass result checking with the test data used but was invalid in the general case.
Although the exact nature of the new Sun optimization isn't known, suspicion has fallen on several inner loops within the 179.art program. Speculation is that this code was originally written in FORTRAN and converted to C. Because FORTRAN and C access two dimensional arrays in opposite row and column order it is presumed that 179.art accesses arrays by the wrong index in the innermost loop causing poor cache locality. It is possible that the new Sun compiler recognizes this situation and turns the nested loops that step through the array accesses "inside out" and achieves much lower cache miss rates. Whatever the exact nature of the Sun optimization turns out to be there is the question of whether it violates one of the SPEC rules, namely "Optimizations must improve performance for a class of programs where the class of programs must be larger than a single SPEC benchmark or benchmark suite".
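For illustration, the classic loop-interchange transformation looks like this in C. This is a hypothetical stand-in for whatever Forte 7 actually does to 179.art, not a claim about Sun's implementation:

#define N 1024
double a[N][N];

/* C stores a[][] row-major, so walking the first index fastest
 * strides N*sizeof(double) bytes per step and misses constantly. */
double column_order_sum(void)
{
    int i, j;
    double s = 0.0;
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

/* Interchanging the loops gives unit-stride, cache-friendly access
 * while computing exactly the same sum. */
double row_order_sum(void)
{
    int i, j;
    double s = 0.0;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            s += a[i][j];
    return s;
}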
Without knowing the nature of the new Sun optimization it is impossible to say whether Sun should be praised or scolded. But here are the words of Sun engineer John Henning who made the following comments in a November 27 post to the comp.arch usenet news group:
"Our compiler team believes that what Sun has done with art is (1) the result of perfrectly [sic] legitimate optimizations (2) compliant with SPEC's rules and (3) not appropriate for further discussion - if you want to figure out to make art faster, go work on it yourself, don't ask Sun how we did it!"
With the widespread attention this incident has engendered within the industry we can presume that compiler and benchmarking experts working for Sun's competitors have closely scrutinized the code Forte 7 generates for 179.art. The fact that Sun's new scores haven't been withdrawn from the SPEC official web site yet suggests that Mr. Henning is correct. No doubt we can expect competitors' processors to score much higher on 179.art in the months and years to come as the Sun optimization migrates to other compilers. Depreciation of a benchmark's value is seldom as spectacular as in the case of 179.art, but it still naturally occurs over time and provides incentive to accelerate the development of a successor to the SPEC CPU 2000 benchmark suite (which no doubt will not include 179.art). A message soliciting programs for this new suite, tentatively named SPEC 2004, was posted on comp.arch on December 28. Ironically, the author of this message, the secretary of the SPEC CPU subcommittee, is none other than the previously mentioned John Henning.
Conclusion
It is comforting to see the pace of innovation in the microprocessor field shows no sign of slackening. The great seesaw battle between Intel and AMD for share of silicon's richest prize, the x86 microprocessor market, is about to enter a new phase with the imminent release of the 0.13 um Northwood Pentium 4. Although AMD will also migrate its K7 core to 0.13 um later in 2002 with both bulk and SOI versions, it is unlikely to be in the position to regain the performance advantage over Intel it previously achieved with the T-bird and XP Athlon until its new 64-bit Hammer core ships. Unlike AMD, Intel plans to reserve its 64-bit offerings for the high-end market. With McKinley Intel hopes to address the significant performance difficulties seen in the Itanium in part by taking advantage of its capacious manufacturing facilities to incorporate a huge amount of on-chip cache on its sizable die.
It seems like the time it takes for new ideas and features to migrate down from high-end server MPUs to mass-market devices is shrinking. The integration of high performance interprocessor communication links and memory controller(s) onto a processor die has been on the drawing board for many years and will soon be realized in the high end server market in the form of the EV7. Remarkably, the same concepts will appear in a mass-market x86 processor, the first of AMD's Hammer series, not too much later. Although these features will naturally be more limited in scope in the x86 device to keep costs under control, they should still provide a large boost in performance from significantly reduced memory access latency as well as a dramatic reduction in the cost of producing multiprocessor systems based on this device.
Few topics in the computer and microprocessor field can raise a controversy, as well as blood pressure, as quickly as benchmarks and benchmarking. Sun managed to throw a hand grenade into the simmering debate between the supporters and detractors of the industry standard SPEC CPU benchmark by speeding up the execution of one of the fourteen programs in the floating point suite by nearly an order of magnitude through the use of a previously unexploited compiler optimization. This in turn raised the SPECfp2k score of its latest US-III processor by roughly 20%. We can now look forward to the spectacle of competing firms scrambling to reverse engineer Sun's new compiler trick and incorporate the same voodoo into their own wares.
References
[1] Krewell, K., "Intel's McKinley Comes Into View", Microprocessor Report, October 2001, Volume 15, Archive 10.
[2] Hennessy, J. and Patterson, D., "Computer Architecture: A Quantitative Approach", Morgan Kaufmann Publishers Inc., 1990, ISBN 1-55860-069-8, p. 181.
[3] "Advance Program", 2001 IEEE International Solid-State Circuits Conference, p. 35.
[4] Weber, F., "AMD's Next Generation Microprocessor Architecture", October 2001, downloaded from AMD web site.
[5] Jain, A. et al., "A 1.2 GHz Alpha Microprocessor with 44.8 GB/s Chip Pin Bandwidth", Digest of Technical Papers, ISSCC 2001, Feb 6, 2001, p. 240.
[6] Dulong, C. et al., "The Making of a Compiler for the Intel Itanium Processor", Intel Technology Journal, Q3 2001, downloaded from Intel web site.
[7] Desikan, R. et al., "Measuring Experimental Error in Microprocessor Simulation", Digest of Technical Papers, 28th Annual International Symposium on Computer Architecture, June 2001.
[8] "Intel OverSPECs Parts", Microprocessor Report, January 22, 1996, Volume 10, Number 1, p. 5.
by Anonymous Coward on Thursday January 03, 2002 @05:54AM (#2778108)
There is an interesting discussion over in comp.arch on Usenet about Compaq, Alpha, and the Itanium. The thread is called Alphacide [google.com]. Interesting stuff. It appears that Compaq drank the Kool-Aid.
By the way, Pricewatch is quoting about $3K for the low-end Itaniums running at about 700 MHz.
No thanks.
Impressive though 64-bit processors might be, I'm not convinced that the performance improvement is going to be as big as people are expecting.
Remember that the components in any digital system - and I'm not just talking about your windoze desktop PC, but servers, mainframes and embedded systems too - have to talk to each other in order to do anything remotely useful. Last time I looked, most PCI devices didn't utilise the provision for 64-bit data bus operation.
There's a perfectly good reason for this, of course... in order to attach a chip to a circuit board, you need an array of pins (or solder balls) that are macroscopic, so they can be soldered and handled without too much risk of accidental damage. Additionally, PCB tracks can only go so small (and so close together) without undesirable electrical effects and again, an inability to work with it in a production environment.
The "more bits" phenomenon has been sustained by improvements in VLSI and the advent of true System-on-a-chip design, but this too has its limits. If you compare a P4 motherboard with, say, a 386 mobo circa 1995, you'll see the chip count is drastically reduced. But fewer interconnected components means less repairability, upgradability, and interoperability. My old 486 had a VLB EIDE hard disk controller, which I swapped in after the last one failed. If my controller failed today, I couldn't do that; I'd either need to buy a new mobo or start replacing chips on the old one (which is just as expensive).
Don't get me wrong - I'm all for progress! And I expect we'll see more and more 64/128-bit chips springing up inside custom devices (e.g. 3D cards, routers) where the local interconnect can be made as fat as necessary. But the PC will remain shackled by slow frontside busses for a while yet, I reckon.
My old 486 had a VLB EIDE hard disk controller, which I swapped in after the last one failed. If my controller failed today, I couldn't do that; I'd either need to buy a new mobo or start replacing chips on the old one (which is just as expensive).
Perhaps your 486 MB was the first of its kind, but modern motherboards with integrated devices have the ability to disable them so that they can be replaced by cards in slots.
This all stems from the fact that those 'chips' that are taking ever more responsibility are trashable. I remember watching an old movie in gradeschool about the development of computers (this would've been in the 80s). A man recalled an interview where the reporter kept asking what sort of tiny tools the guy would use to go in and fix a part of the circuit (the reporter's mind was forever stuck with tubes). Eventually, the guy got through to him that the chip wouldn't be repaired, just replaced.
Thus, the chip count may be reduced, implying more complex chips, but they're not necessarily more expensive. On the other hand, they've become so cheap that it's more cost-effective to bundle the functions of what used to be multiple chips into a single chip.
But still, regarding your bus argument, there have been numerous articles all over the web about newer bus standards competing to be the future industry standard. Those buses will get big right when these chips do.
> Perhaps your 486 MB was the first of its kind,
> but modern motherboards with integrated devices
> have the ability to disable them so that they
> can be replaced by cards in slots.
True, but that presupposes the existence of spare slots;-)
I hear what you're saying about trashable chips, but I think the real phenomenon is the "trashable board". Think about it - if your mobo dies and your warranty has run out, you go buy a replacement and ditch the old board. If it happens still to be under its manufacturer's warranty, most likely you just take it back to the shop and swap it for a working one. What happens to the old one? Most likely, they throw it away. The cost of postage, packing, an engineer's time to find the problem, repairs, parts... it's more than the damn thing retails for anyway.
I think this is missing the point anyway. The integration idea goes like this: with today's technology, you could put the equivalent of an early Pentium processor, plus hard disk and graphics controllers, BIOS chipset, etc. onto a single piece of silicon. Pretty much all you'd be left with off-chip would be (a) RAM and (b) I/O circuitry, because they're both harder to integrate. So your computer is about four or five chips. This is approximately the case in palm-tops now.
The point is that you've lost all ability to choose your own components. That graphics block/macrocell has probably been chosen by the manufacturer because it was the best value for money (i.e. the cheapest they could find). If you're lucky, they will give you expansion ports so you can plug your own stuff in. But that costs money, and if they think you'll pay for the lesser product then they'll make that instead.
Does it matter? Probably not to the average user. But I think it would matter to the industry. The whole point of having standard architectures like PCI, SCSI, EIDE (and before them, ISA et al.) is that many vendors can compete to produce compatible products, which drives innovation and generally provides a good deal for the consumer.
But if the minimisation continues and the busses become subsumed into the very chips themselves, then the chances are the manufacturers will cut corners. They won't wait for the not-quite-standard-yet SuperBus2005 architecture... they'll design their own and make you buy their proprietary upgrades. Again, the economics work out such that you the consumer probably get a good deal. But trading off good deals today against innovation tomorrow is dangerous.
So, it would be much better to keep all those busses outside the individual components, right? But that's exactly what is keeping the PC architecture slow at the moment (which was the point of my previous post. I think.).
Remember that the components in any digital system - and I'm not just talking about your windoze desktop PC, but servers, mainframes and embedded systems too - have to talk to each other in order to do anything remotely useful. Last time I looked, most PCI devices didn't utilise the provision for 64-bit data bus operation.
PCI devices or PCI busses? Even the original old PCI buses support 64-bit transfers via multiplexing (two 32-bit transfers). So the bandwidth essentially remained the same, but usage as a "64-bit bus" was supported.
However, just because a CPU can process 64 bits does not mean it must communicate at 64 bits outside the CPU. 64-bit CPUs often support smaller word transfers.
It is true that most PCI devices are not true 64-bit PCI, but that is mainly because there is no need for the bandwidth that 64-bit PCI affords.
If the bandwidth of 32-bit at 33 MHz (132 MB/s) is not enough for your device to operate at its fullest potential, then it is probably available as a true 64-bit PCI device for a 64-bit, 66 MHz PCI (528 MB/s) slot, found in servers.
Realise that the IDE bus that may well be used in your computer is only 16 bits wide. A 64-bit CPU most certainly does not require 64 bits here, there and everywhere.
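The bus figures quoted above fall straight out of width times clock (one transfer per cycle on conventional PCI):

#include <stdio.h>

/* Peak PCI bandwidth: (width in bytes) x (clock in MHz) = MB/s. */
static double pci_mb_s(int bits, double mhz)
{
    return (bits / 8.0) * mhz;
}

int main(void)
{
    printf("32-bit @ 33 MHz: %3.0f MB/s\n", pci_mb_s(32, 33.0)); /* 132 */
    printf("64-bit @ 66 MHz: %3.0f MB/s\n", pci_mb_s(64, 66.0)); /* 528 */
    return 0;
}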
My old 486 had a VLB EIDE hard disk controller, which I swapped in after the last one failed. If my controller failed today, I couldn't do that; I'd either need to buy a new mobo or start replacing chips on the old one (which is just as expensive).
Not true; I've yet to see a mobo that would not allow the disabling of its onboard VGA, IDE, SCSI, serial, parallel, USB, etc. Adding a card to replace a busted and disabled onboard device usually works.
The real value of a 64-bit CPU over a 32-bit CPU is the ability to compute on more data at once - higher precision or larger numbers, much faster - and possibly to address far more memory, if a 64-bit address bus is being compared with a 32-bit one. A 64-bit address bus can access 4,294,967,296 *times* more data than a 32-bit bus.
Correct me if I'm wrong, but isn't this going to make programs bigger and heavier than before? I sure hope they will make it possible to do, for example, just one calculation in 64-bit and the rest in 32-bit, instead of wasting bits for no reason.
Just thinking out loud =)
No, as some other replies have indicated, the only reason for 64-bit applications is to access 64-bit addresses. If you don't need to address that amount of memory, then your applications might as well be 32-bit. Any good processor will still give you 64-bit registers etc to work with.
That's because each thread (under WinNT/2K/XP) reserves 1 MB of address space for its stack, by default. We ran into that one :-) We wondered why we couldn't allocate any more memory when we were only using half of what was there. We still had physical RAM, but no address space to map it to... 2 or 3 GB isn't enough. We need 64-bit CPUs.
It's possible to reduce the reserved stack size, but only for all the threads in a process. We switched to using only a few threads and assigning jobs to them.
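A sketch of the situation on Win32 (the ~2000-thread ceiling is just 2 GB of user address space divided by the 1 MB default reservation; note the per-thread override flag below arrived with Windows XP, so treat its availability as an assumption on older systems):

#include <windows.h>

DWORD WINAPI worker(LPVOID arg)
{
    /* ...pull jobs from a shared queue instead of one thread per job... */
    return 0;
}

int main(void)
{
    DWORD tid;
    HANDLE h;

    /* Default: each thread reserves 1 MB of address space, so a 2 GB
     * user address space caps out around 2048 MB / 1 MB = ~2000
     * threads, regardless of how much physical RAM is installed. */

    /* Reserve only 128 KB for this thread's stack instead. */
    h = CreateThread(NULL, 128 * 1024, worker, NULL,
                     STACK_SIZE_PARAM_IS_A_RESERVATION, &tid);
    if (h != NULL) {
        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
    }
    return 0;
}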
OK, we have known that Alpha will die for some time now; I even have my doubts about the EV7. Digital was shipping their 64-bit microprocessors before other people were, and their previous forays into RISC gave them some good insights on the architectural problems.
With Digital being sold to Compaq, then Alpha being sold to Intel, and Compaq possibly merging with HP, the future there is clouded. I have been working with Alphas and have been told that the future is Itanium coloured, but sorry, I don't really like the chip. EV7 will come out, but so far its performance doesn't look so competitive.
With a lot of former Digital talent working at AMD, I think this will be the better option. However, the K8 is not a clean design; it seems to be a 64-bit version of the K7 with some extras in the pipelining. I guess the chip is not going to be the easiest to get the best performance from.
EV7 will come out, but so far its performance doesn't look so competitive.
Huh????? EV7 will almost certainly be the fastest MPU available at the time of its launch (by this I mean highest scores in SPEC2k int and fp), even ahead of the extremely expensive POWER4 (which sort of "cheats" on the single-threaded SPEC2k because then the one active core on the device gets the entire 128MB of shared L3 cache to itself).
Of course Compaq's support is questionable, the upgrade path is zero, and there's no telling how quickly they'll get the high-end 32- and 64-way boxes out, but in terms of plain old CPU performance the EV7 is going to be the chip to beat. (BTW, it sounds like you didn't read the article if you didn't get this point.)
The way I read the article, Compaq were being very careful about their performance figures, and it doesn't compete well with the other processors that will appear shortly after the EV7 launch. However, to be fair, the article does make the point that it is normal for designers to be a little down about performance just before launch, possibly due to yield issues. Until they get the full yield, it is difficult to get the full performance out of all the chips.
I like the Alpha though and have been using it since it first appeared. I will be very sorry to see it go.
Much as I hate to say it, the Intel McKinley looks like a very well designed piece of kit, and it appears Intel have learned from their mistakes with the P4 by including a big, fast 3-level cache on the McKinley. It's also good to see them reducing their pipeline size, which means it may finally be able to compete with the G4 in terms of efficiency. However, this is of course going to kick them in the teeth in terms of competing on processor speed, which they have been pushing so hard recently in their marketing.
The same can't be said of AMD's offering, although in fairness the Hammer, unlike the McKinley, is not aimed at the server market. The pipeline is longer than both their previous design and the McKinley's, which is going to give them a performance hit. We can only hope that their cache is as good as Intel's.
What amazes me is that they can still keep adding instruction extensions without too much of a performance hit. Anyone looked at the latest instruction set documentation for these processors? Eugh! The pain of backwards compatibility...
You may want to read this [geek.com] about the McKinley, successor to Merced.
As far as I know, Merced was Intel's design and McKinley is HP's. So... you could say Intel is learning from their mistakes by letting HP engineers do a good job.
Anyway, it's a mutually beneficial thing because HP doesn't have the resources to market and drive the product, while Intel doesn't have the engineers or resources to design and implement the architecture in a 'good' way. Intel provides process and HP provides layout, and together they will take over the world!
At least that's what I've heard and read. Myself, I own a G4 and use a Mac, it's not exactly as if Itanium is going to strike me down anytime soon.
IA64 is a new and incompatible instruction set; Intel is not adding anything to their x86 ISA.
Hammer does not have a 3MB L3, but it has an integrated memory controller, which should drastically reduce the latency of cache misses.
Assuming AMD goes for a bigger-than-32KB L1 cache, and does not succeed in making cache hits as fast as McKinley's (speculation based on current offerings), the picture is a bit complicated:
Watch it: Hammer and McKinley each ask for an instruction or a piece of data. If both hit, McKinley wins. A more probable scenario is that McKinley misses and Hammer hits - a clear win for Hammer. A still more probable scenario is that both miss. If the data is in L2, McKinley is faster: it has a lower miss penalty and can fetch from L2 faster. But it is more probable that the data is in Hammer's cache and not in McKinley's, which benefits Hammer. If L2 misses too but McKinley scores an L3 hit, McKinley wins; if it suffers an L3 miss, it pays both the L3 miss latency and the memory latency, while Hammer pays no L3 miss latency and its memory latency is probably much lower. So with huge data sets processed in not-so-tight loops Hammer wins hands down, while for medium-sized data that fits into the L3 McKinley wins hands down.
Although McKinley is a server product and Hammer is not (or so it is said), an integrated memory controller benefits Hammer in multi-way systems so much that it may as well be positioned as a server product. No more asking the chipset to fetch a piece of data and waiting while the chipset serves other processors' requests; just go and grab it!
Finally, some of the Hammer line will have L3 caches, and the line will run at a higher clock rate than McKinley. If AMD can deliver what they have promised, they have a clear winner overall. But I'm still a bit sceptical.
What do you mean "backwards compatibility"? The McKinley uses Intel's new IA-64 ISA, not x86. (Instead, it has a small chunk of real estate devoted to translating x86 internally, which means the chip is not optimized for x86.) IA-64 is much, much cleaner than x86 and carries no backward-compatibility baggage.
The big question is whether the compilers for IA-64 will be any good or not... that's what caused Intel's last attempt to divorce from x86 to fail, and their back-up plan (the 386, the 32-bit extension of the 16-bit x86 ISA) to succeed and become the most popular desktop microprocessor ever.
This time around, Intel doesn't have a backup plan, and AMD is the one doing the extension of a tried and true system.
Conclusion? I put my money on AMD, but if Intel can pull off the compiler, they dramatically increase their chances.
First of all I'd like to say I am not biased either way... after all, I'm getting myself a new Athlon XP next week.
IA64 is very different from x86-64. AMD's 64-bit solution is nothing more than an extension to the current 32-bit instruction set. Of course there are some tweaks, but nothing very radical. You will still be able to run old 16- and 8-bit code efficiently.
Intel's IA64 is a huge step into the future... architecture-wise it is far superior to x86-64. Why?
Why do we need 64bit processors? Addressing? Nah, current processors can address enough space.. with 386 processors FAR addressing was introduced, which expanded allocatable address space drastically. (those silly DS, SS,.. registers) And newest processors can deal with them with same ease as with non-far addressing.
AMD's 64bit solution currently has no real value.. except for huge data storage (could work faster with 64bit data blocks) and probably some heavy encryption. x86-64 compiled Quake3 would make minimum use of 64bit registers.. and would probably be just a margin faster than IA32 compiled version.
Is IA64 better? Yes it is. IA64 has 128 usable 64bit registers, predicates... But that is not all.. in single 64bit register you can store 4 16bit values(common integer). (or 8 8bit or 2 32bit)And manipulate with them almost as much as you like. And if you have 4 integers in other register.. you can make 4 arithmetical operations with SINGLE instruction. You can do similar things with floating point operations... and with ILP you could do 3 instructions per cycle. This means that Quake2's VectorAdd/Subtract could be done in SINGLE cycle.
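Since the post above describes packing four 16-bit integers into one 64-bit register and operating on all of them at once, here is a minimal C sketch of the same idea done in software ("SWAR": SIMD within a register). This is generic illustrative C, not IA-64 code, and the helper names are made up; real SIMD hardware isolates the lanes for free, where plain C pays a couple of extra mask operations.

#include <stdint.h>
#include <stdio.h>

/* Pack four 16-bit lanes into one 64-bit word. */
static uint64_t pack4(uint16_t a, uint16_t b, uint16_t c, uint16_t d)
{
    return (uint64_t)a | (uint64_t)b << 16 | (uint64_t)c << 32 | (uint64_t)d << 48;
}

/* Add the four 16-bit lanes of x and y in one pass.  The masking
 * keeps a carry in one lane from spilling into its neighbour;
 * hardware SIMD does this isolation for free. */
static uint64_t add4x16(uint64_t x, uint64_t y)
{
    const uint64_t HI = 0x8000800080008000ULL;  /* top bit of each lane */
    return ((x & ~HI) + (y & ~HI)) ^ ((x ^ y) & HI);
}

int main(void)
{
    uint64_t v = add4x16(pack4(1, 2, 3, 4), pack4(10, 20, 30, 40));
    for (int i = 0; i < 4; i++)
        printf("%u ", (unsigned)((v >> 16 * i) & 0xFFFF));  /* 11 21 31 41 */
    printf("\n");
    return 0;
}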
Clawhammer will be better for a year or so.. but soon it will hit the ceiling. Intel will be able to get better performance from 1/2 clocked IA64.
And please don't respond with lame comments if you haven't read at least the whitepapers from Intel and AMD.
Don't current processors let you do the same thing? I mean, isn't the whole point of MMX to let you do, say, 4 16-bit operations at a time on a 64-bit register? And then there's SSE and 3DNow! for floating point.
From your description, it seems to me like IA64 is more MMX/SSE registers and an expanded instruction set, which is the same incremental change we've been getting from each version of Intel's chips for a while now.
Yes, to some degree... but IA64 is ten times more flexible than SSE2. For example, you can permute the 16-bit parts within a 64-bit register (I don't mean a plain shl/shr): 1,2,3,4 -> 4,3,2,1 or 1,2,3,4 -> 2,1,4,3.
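For illustration, the two permutations mentioned above can be spelled out in portable C with shifts and masks; a chip with a real permute instruction does each in one operation. A sketch with made-up helper names:

#include <assert.h>
#include <stdint.h>

/* Swap adjacent 16-bit lanes: (1,2,3,4) -> (2,1,4,3). */
static uint64_t swap_pairs(uint64_t x)
{
    return ((x >> 16) & 0x0000FFFF0000FFFFULL) | ((x << 16) & 0xFFFF0000FFFF0000ULL);
}

/* Reverse all four lanes: (1,2,3,4) -> (4,3,2,1). */
static uint64_t reverse_lanes(uint64_t x)
{
    x = swap_pairs(x);             /* (2,1,4,3) */
    return (x >> 32) | (x << 32);  /* then swap the 32-bit halves */
}

int main(void)
{
    uint64_t v = 0x0004000300020001ULL;  /* lanes 1,2,3,4, low to high */
    assert(swap_pairs(v)    == 0x0003000400010002ULL);  /* 2,1,4,3 */
    assert(reverse_lanes(v) == 0x0001000200030004ULL);  /* 4,3,2,1 */
    return 0;
}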
You can download the IA64 instruction set manual from Intel's website... any ASM programmer will be fascinated by the options IA64 offers. I know, I was.
Exactly: ASM programmers. Actual compilers are nowhere near dealing with all the toys in IA64, so you're left with programs that run slower on IA64 than on existing SPARC, Alpha and x86 machines.
Tough call. Is one grotesque backwards-compatibility botch better than another?
This probably qualifies as a "lame comment", but Alpha is arguably a cleaner and more elegant design. If you NEED a 64-bit server then Debian on Alpha will probably serve you well.
You probably don't though.
So why the fashion for 64bit x86-alikes? It's not like you'd want to run Windows on a serious server anyway.
And unfortunately it will be a LONG TIME before good non-buggy optimizing compilers will be available for such a complex architecture.
Software pipelining and parallel instructions give you a really complex monster CPU. Languages like C and C++ make it extremely hard to optimize for a CPU like this, since they were never designed for fine-grained parallelism and software pipelining. So the result is a lot of wasted clock cycles.
What I'd LOVE to see is a statically typed pure functional language that could be used to generate the code for IA64. Then it would be feasible to fully take advantage of the IA64's features!
In the meantime, people compiling IA64 C code with GCC will be extremely disappointed. People compiling IA64 C code with Intel's optimizing compiler will be happier but will only be mildly impressed.
I've worked with VLIW (256-bit instructions) software-pipelined DSPs before and learned very quickly that the C and C++ language standards are fundamentally limited for these things. I also learned very quickly that writing assembly language directly for them is an easy way to gain a special invitation to a padded room!!! I shudder to think what the compiler writers have to go through!
True... IA64 will be very compiler-dependent, but there are still 3-4 years to develop a good and efficient compiler. I really don't see a point in buying an Itanium at the moment (except if you are an OS developer). If you really need a supercomputer for a server, get yourself a Cray. :)
You made some good points. I've read both white papers, though I honestly only understood half, maybe two thirds, since I am no electrical engineer or assembly programmer.
The only real benefit I can see from an industry perspective is that it will drive down the price of high-end systems for corporations. Intel desperately wants to get into the mainframe, research and scientific markets, since the margins are much higher. As others have noted, IA64 isn't going to really revolutionize particle physics, astrophysics, realtime weather simulation or any other research requiring massive bandwidth and address space. It might make it easier for smaller universities to build faster, better, and slightly cheaper clusters in 2 years, but for now who cares if AMD's extensions have a limit.
Really, if we want faster 3D graphics, we need faster buses, memory and GPUs.
Firstly, it would appear that you have at least read some white papers on the web, but have you used a real Itanium?
I have, and let me say that the reason AMD will win is compatibility and making sure that the things don't sound like a jumbo jet taking off. These may seem like minor points but they are what will count.
The major point of the AMD solution is backward compatibility... Intel knows this; why do you think all their previous chips have been successful? Because they were the "best" solution for the job? NO, not by a long shot. They were, however, able to run the old software faster, and provide a route for new software to run even faster still.
AMD provides this: new software can take advantage of 64-bit addressing and of processing several integers in just one register, while at the same time old software runs faster than it would on a 32-bit processor at the same clock speed.
Look at it from an IT purchasing point of view: you can pick machine A, which will theoretically be faster in the future but needs a whole load of new software before you get anything even approaching decent performance, or you can pick machine B, which will run all your current software faster than your current machines and run future software even faster still. Which would you choose? People like a sure thing, not the promise of something good in the future.
AMD can then concentrate on moving towards a pure 64 bit machine once most of the applications have moved to 64 bit, this makes the most sense long term. You buy your 64 bit machine, run 32 and 64 bit mixed software quickly. Then once you are running mostly 64 bit you can move seamlessly to a 64 bit proven and tested environment.
Current Itaniums are slow, large and noisy, which makes a huge difference if you only have a small server room or have to run a server under your desk; some people really have to do this in smaller companies. You won't see them on the desktop market anytime soon: they are too slow, and 32-bit performance is not great.
The Itanium might have a good architecture, but I think its lack of speed and of compatibility with 32-bit applications, coupled with noise and heat, will cause it to lose the battle.
It all depends on how good the register renaming and out-of-order execution units on Hammer will be, and how well IA64 compilers generate asm code. It doesn't matter how many registers are visible to the programmer or how many actually exist; the only important thing is how many of them are actually used. The other advantages of the RISC-like architecture of IA64 over x86-64 also depend on implementation. E.g.: in all current offerings (except the P4), the x86 floating point registers are essentially an array, not a stack. Legacy code generators can produce stack-based code, and the CPU can execute it as if it were running on an array-based FPU. In the end you get a superior design's speed with full backward compatibility. Current x86 processors have a lot of tricks to sidestep the limitations of the x86 ISA; Hammer will have more. Let's wait until the real benchmarks pour in, shall we?
And if all else fails, SSE-3 and MMX-2 can provide your lovely 128 visible registers.
AMD has stated *explicitly* that the Hammer is an evolutionary rather than revolutionary design. They've said all along that it is an Athlon with 64-bit extensions and some minor tweaks (SSE2, extended pipeline). They haven't deceived anyone.
Now, as to the relative performance of the two architectures (x86-64 vs. IA-64): the Athlon XP 1900+ achieves a SpecInt2000 score of 701 (peak) while the 800MHz Itanium manages... 314. On floating point the Itanium does rather better: 645 vs. 634 for the Athlon. (The current leader is the IBM Power4, which gets 814 SpecInt and 1169 SpecFP.)
Having 128 64-bit registers is good, but remember that the Athlon and Hammer have far more physical registers than are presented in the programming model, and automatically map them according to the requirements of instructions in the pipeline. And the predicates and wide issue of the Itanium are balanced against the ability of the Athlon to *automatically* issue instructions speculatively and re-order the instruction queue to improve ILP.
And on the subject of manipulating multiple values with a single instruction: ever heard of MMX? 3DNow? SSE? Athlon has all of these, and Hammer will add SSE2. What do you think these are for?
As to the value of 64-bit addressing: I've programmed for machines (Suns and Compaq Alphas) with as much as 64GB of memory. While you *can* address that much with a 32-bit CPU, it means that you have to constantly re-map your view of memory, which is a royal pain. Moving to 64 bit addressing makes the problem disappear. And with current memory prices, even small commodity servers could make good use of more than 4GB of memory.
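To make the remapping pain concrete: on an LP64 system one mmap can cover an object larger than 4GB and you index it directly, where a 32-bit process would have to map and unmap ~2GB windows and juggle offsets. A hedged sketch; the file name and the 6GB/5GB figures are invented for illustration.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical 6 GB data file -- too big for a 32-bit address space */
    int fd = open("huge.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* On a 64-bit machine one mmap covers the whole file; a 32-bit
     * process would have to map sliding windows and remap constantly. */
    uint8_t *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Index any byte directly -- no window arithmetic. */
    printf("byte at 5GB: %u\n", (unsigned)p[(uint64_t)5 << 30]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}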
And 64-bit integer registers are good for a lot of things; while you can certainly use 64-bit integers on a 32-bit CPU, making them faster won't hurt.
So, Athlon currently has a huge performance advantage over Itanium on integer apps, and a huge price/performance advantage (with comparable absolute performance) on FP apps. AMD's aim with Hammer is to extend Athlon cheaply and effectively into the 64-bit realm.
Intel's aim with Itanium appears to be to crush all competition; unfortunately, they've placed a *huge* bet on improvements in compiler technology that just hasn't paid off yet, resulting in a high-end chip that lags behind not just the high-end RISC chips like Alpha and Power, but low-cost desktop chips. To achieve commercial success, the Itanium needs integer performance somewhere in the vicinity of their competitors, but they currently trail the pack by a huge margin. Even SGI do better, and they all but shut down their CPU design efforts years ago.
Maybe McKinley will be the answer - but it doesn't look like it, given that the promised speeds have dropped to 1GHz. IA-64 is an interesting architecture which may even have a future, but so far it just don't fly.
You don't have a clue. Let me just pick out a couple of the grossly wrong items...
> Why do we need 64bit processors? Addressing? Nah, current processors can address enough space.. with 386 processors FAR addressing was introduced, which expanded allocatable address space drastically. (those silly DS, SS,.. registers) And newest processors can deal with them with same ease as with non-far addressing.
Sheesh, where are you coming from? You can address 64 gig of physical memory with an x86 now, but you can only address 4 gig (at most!) of it linearly. 32-bit address registers, get it? Gosh, and far addressing was introduced with the 386, was it? Give me a break; try the 8086.
> AMD's 64bit solution currently has no real value.. except for huge data storage (could work faster with 64bit data blocks) and probably some heavy encryption. x86-64 compiled Quake3 would make minimum use of 64bit registers.. and would probably be just a margin faster than IA32 compiled version.
Right, and I'm supposed to believe you on this, given your performance above. Um, you seem to have ignored the value of being able to crunch 8 byte integers, or pixels 8 bytes at a time, nicely matching the width of the MMX registers. For starters. Repeat this to yourself: "sledge hammer". "sledge hammer". Good, that's more like it.
> Is IA64 better? Yes it is. IA64 has 128 usable 64bit registers, predicates... But that is not all.. in single 64bit register you can store 4 16bit values(common integer). (or 8 8bit or 2 32bit)
Um, and guess how many 16-bit values you can store in a 64-bit Sledgehammer register? Ah, and guess how many fp/mmx instructions Sledge can retire per cycle?
> Clawhammer will be better for a year or so.. but soon it will hit the ceiling. Intel will be able to get better performance from 1/2 clocked IA64.
You don't have any idea why it's called Itanic, do you? Moderators, take a look above. Remember, that's what 'random' looks like. Yes, I've got mod points right now. No, I won't waste them on you.
> Is IA64 better? Yes it is. IA64 has 128 usable 64bit registers, predicates... But that is not all.. in single 64bit register you can store 4 16bit values(common integer). (or 8 8bit or 2 32bit)
> Um, and guess how many 16-bit values you can store in a 64-bit Sledgehammer register? Ah, and guess how many fp/mmx instructions Sledge can retire per cycle?
If I understood the whitepaper, the answer is 2. You can also store two 8-bit values in a 64-bit Hammer register. I know the math doesn't hold, but the ISA does: you can't access an arbitrary byte, word or dword section of a Hammer register. Of course the number you can merely store is 64/r_size values of size r_size, but accessing them (except the lower ones) requires rotate or swap operations.
> Why do we need 64bit processors? Addressing? Nah, current processors can address enough space.. with 386 processors FAR addressing was introduced, which expanded allocatable address space drastically.
Far addressing can handle 4GB, but 4GB is not much by today's standards (it is not little either). You want a flat address space, and RAM is cheap these days. 32 bits gets you to 4GB; if you want more, you have to resort to tricks (those silly DS, SS, etc.).
With the first 64-bit Alphas they used this as an argument: it is useful for fast memory scans when using big databases.
Once we get the 64-bit hardware, we still have the MMOS (minor matter of software) to worry about.
Cases in point:
Silicon Graphics machines with MIPS R4400 (and up) CPUs were 64-bit, but the additional address and pointer space weren't utilized until IRIX 6.0 in 1994 -- over 18 months later. (And, of course, certain SGIs still run in 32-bit mode due to RAM concerns -- 64-bit requires more RAM -- all Indys, all Indigos, all O2s, and R4400 Indigo2s.)
Sun machines with UltraSPARC CPUs were 64-bit, but again, the additional address and pointer space had to wait for software support. (Multi-stage transition to 64-bit, starting with Solaris 2.5 and finally complete with Solaris 7 in 1998).
Then there's application optimization. Many apps can get slight speedups by processing data in larger (say, 32-bit or even 64-bit) chunks. Sometimes the difference is huge, many times it's small. But lots of little speedups can add up across an entire system. Still, someone has to make these changes to apps and compilers. It takes time, testing, and adoption. In better times, SGI did several such overhauls... they got some insane speed out of Netscape Enterprise and Netscape FastTrack web servers during the Everest project. One of their engineers also did some cool (but nonstandard) hacks to Apache, including the very first pure, clean 64-bit port/mod.
Newer, faster, wider, more-torque hardware is always great. But don't forget the software.
Even with a reference application (Oracle 8.1.6) on a reference OS (Solaris 8), the patch levels for the 64-bit version were 3 revs behind those for the 32-bit version when I last looked. What bothered me was that the bug I'd run into was fixed in the 32-bit version but still there in the 64-bit version. Guess which version I ran.
As I recall from an article I read in a magazine (that my mind won't reveal to me) many years ago...
32-bit CPUs use a 32-bit address space. That's space enough to address 2^32 bytes, or 4GB. With today's 100+GB hard drives and fractional to low GB RAM capacities, each requires its own addressing.
However, with 64-bits of addressing space, you have enough room to memory-map your entire stinking (yes, stinkin) hard drive into virtual RAM address space. This means your virtual address space would represent both RAM and your file system together.
Huh? you could do 32 bit addressing long before you could buy 4GB drives, but nobody thought memory mapping your hard drive into your address space was a good idea then... What would be the point?
You need sector-sized granularity for tracking changed sectors. The latter does not scale well.
The common implementations of virtual memory use page-sized granularity for tracking changed pages. Does that scale?
The big problem I've seen with using a single memory space is that applications often forget to implement multiple levels of undo. With many PDA applications, once you make two accidental changes to a file, the previous version is gone forever because many applications modify files in-place, breaking the "revert to last saved version of document" feature. This bit me in the butt several times on Newton OS.
FAR addressing is available on all current 32-bit systems, and it's as fast as near addressing.
I believe segment:offset is 48 bits long, which extends the addressable space to 8TB. I believe that is enough for anyone for the next 8 years.
64 bits on a server = larger databases (the better to catalog you with, my dear)
64 bits on a desktop = ?
I'm sorta missing why this is good for the rest of us...
There may well be a slew of 64-bit chips by year's end, but I doubt you are going to see much non-specialist application support for some time. Sure, Photoshop and a few other desktop applications will arrive fairly quickly, but look at Windows and 32-bit support: Intel shipped the 80386 in 1985 and only now can you boot a Windows PC without running 16-bit code from the HDD.
Actually, even that's not strictly true, since according to the Resource Kit documentation Windows XP's initial configuration detection is *still* 16 bit.
> Intel shipped the 80386 in 1985 and only now can you boot a Windows PC without running 16-bit code from the HDD.
Excuse me? Windows NT came out in the Windows for Workgroups era. Running, if I recall correctly, on 486 class machines. Not to mention MIPS, PowerPC, and, I believe at the time, SPARC.
I think you've misunderstood; the key phrase was "without running 16 bit code", not that NT wasn't using 32-bit code. The NT/XP codebase is *still* not fully 32-bit, as there are several initial steps of the OS boot process performed in real (16-bit) mode, including initial hardware detection by NTDETECT.COM; and if the underlying OS isn't yet fully 32-bit, then what is the likelihood that all the applications are?
Getting techy for a minute: a PC's BIOS transfers control via a disk's boot sector to the boot loader in real mode, and the boot loader (GRUB, LILO, NTLDR etc.) then loads the OS proper. Now, technically a boot loader could switch to protected (32-bit) mode before loading a fully 32-bit OS, but so far all mainstream PC OSes (yes, Linux and BSD too) run some initial boot code in real mode before making the switch to protected mode. Some make the switch sooner than others, and I'm sure some of the experimental OSes out there make the switch immediately after they gain control from the boot sector.
The main point though, was that for the 32 bit Windows platform (boot stubs aside) the process of Hardware support -> OS support -> decent app support has taken the best part of a decade. If you think the switch from 32 bit to 64 bit is going to happen much quicker, then you are probably going to be disappointed.
FYI, it didn't run on SPARC, and you missed Alpha. (Actually, I read it was coded on Alpha machines to ensure cross-platform discipline.) I'm not sure about PowerPC, I can't remember.
So the only thing noteworthy of the US-III is the Forte 7 compiler optimization? And just read this garbage about cache sizes:
> The majority of McKinley's transistor count is tied up in its cache hierarchy. It is the first microprocessor to include three levels of cache hierarchy on chip. The first level of cache consists of separate 16 KB instruction and data caches, the second level of cache is unified and 256 KB in size, and the third level of cache is an astounding 3 MB in size.
Which is later followed by:
> Both the [SunBlade] Model 1900 and Model 900 Cu versions of the Blade 1000 feature 8 MB of L2 cache.
Hmmm... 3MB is astounding, but 8MB is unremarkable... Well, I'd have to agree. I haven't bought a server with less than 4MB of cache in years. Oops, the SunBlade is only a workstation... Kinda makes you wonder.
Sun might be expensive, but it's solid, fast (enough), and predictable. I love x86 (usually Linux) at home, but wouldn't dream of putting it someplace business vital - much less mission critical.
Business Vital == 1 maintenance window per month, and a mean time to recover exceeding 6 hours potentially costs several million dollars.
Mission Critical == 1 maintenance window per quarter, and a mean time to recover exceeding 15 minutes potentially costs several million dollars.
So what you're telling me is that you're willing to put a mission-critical application, one that could cost millions in losses if it goes down, on PC-class hardware. Last time I checked, most low- and medium-end PC motherboards are still manufactured on a 3-5 layer process and use less stringent tolerances. Even high-end motherboards don't have the same level of tolerance as Sun's server motherboards.
Is it worth spending an extra 3-4K per machine to make sure you don't lose a couple million? I don't think math is needed to figure that one out. Plus, do you want to be the one responsible for it when it goes down? How much would a fully redundant system built from PC components cost, one that equals Sun servers like the 4500?
I for one would never put a mission-critical database or transaction application on a Linux PC. Not because Linux isn't a good operating system, but because PC components are designed to be thrown away in 1-2 years.
The number one question (but perhaps not in this forum) on most potential Sledgehammer owners' minds is: what OS are they going to run on their 64-bit box? Apparently not Windows. No announcement has been made by MS or AMD. Yet.
Win64 has been ported to Itanium for some time now. We've already ported our memory-hungry special-FX app to it. But few people outside the server space are going to be interested in getting an Itanium because the performance with legacy IA32 apps is dog-slow. I mean really slow, like P90 speeds. So we don't expect too many sales of that version, just a few for hardcore dedicated seats.
Sledgehammer is really interesting to us. Combine the best available x86-32 performance for running 3DS Max, Lightwave, Photoshop etc. with the serious memory and address space of a 64-bit CPU for our app, not to mention quite a bit more speed when doing 64-bit calculations (pretty common with 64-bit pixels), and you have a powerful and still flexible beast.
But without Windows support, the IA32 performance advantage is largely meaningless. In our market, that relegates Hammer to Linux-64 render farms - which is fine, but it's not where our money is, and it's not where the CPU would shine. You can use Win32 or Linux-32, of course (unlike Itanium), but that's kinda missing the point.
AMD better get MS & Win64 on their side soon, if they want to capture the workstation market. A lot of server apps still require Windows too. The reality of the market is that mainstream OS support is required, or you get niched PDQ.
I think the focus on 64-bit CPUs is a bit short-sighted. Hardware vendors should be jumping to 128-bit CPUs, a la the "Emotion Engine" in the PS2.
Why 128-bit? Because a 4-tuple (x,y,z,w) for vector and matrix operations can then be handled natively. (Yes, I'm spoiled by the VUs on the PS2.)
I do wonder when 64-bit CPUs will actually become a commodity item though. A 32-bit CPU provides 99% of the functionality most of the general public needs. It's only gaming, scientific computing, and multimedia that really need the 128-bit registers, correct? Or am I missing something?
Secondly, for 64-bit CPUs, is there a standard instruction set? Or do I need to compile our game code separately for IA64 and Hammer?
I do agree, that a 64-bit address would be a welcome change. I can imagine the Database guys jumping up and down with joy once cheap PC hardware supports 64-bit.
> Game consoles like to call themselves 64bit because they can move 64bits at a time.
Unfortunately true; the early consoles would play marketing games like this.
> By that standard even the Pentiums were 64bit.
The (classic) Pentium is classified as 32-bit because the *general purpose* CPU registers are only 32 bits. (There are a few 64-bit registers, i.e. TimeStampCounter, etc)
With MMX/SSE, the PentiumIII is actually a 32-bit / 128-bit hybrid. It has *native* instructions and registers for *both* 32 and 128-bit processing.
> However both the PS2 and PentiumIII are really 32bit.
Incorrect.
Pentium) I explained this above.
PS2) Do you even program on a PS2??
I think you need to re-read your "EE User's Manual", "EE Overview Manual", and "EE Core User's Manual" (Section 1.4) The core internal bus is 128 bits, and *ALL* the General Purpose Registers (total of 32) are 128 bits. What do you think LQ and SQ do? They load/store 128-bits to/from a register!
Now, it is true, that most PS2 instructions only deal with 32-bit (word) and 64-bits (doublewords), but there are native 128-bit multimedia instructions.
Don't let the fact that the PS2 treats the 128-bit registers as 2 * 64-bits, or 4 * 32-bits confuse you.
Technically the PS2 is a 64-bit/128-bit hybrid, much the same way the PentiumIII is.
One word: addressing. With those 32 bits, you can typically address up to 2-gig files on your machine - a limit easily encountered when you start working with video, for instance. It took hacks to get 4 gig of RAM working on x86 with the Linux kernel.
Go 64-bit, and that limit vanishes. You keep your linear addressing, none of those ugly segments like in the infamous real mode of PC-XT times.
I don't see what's really new about it all, though; we've had 64 bit since the Alpha, and there are several 64-bit architectures around. It may not be mainstream yet, but will IA64 or Hammer really change that (soon)? Allow me to have doubts.
Maybe, but it's not completely true. Linux does some hacks to work around this, but things will be made easier with 64 bits. For example, Linux on 32-bit Intel x86 can support up to 8GB of memory, but this is a hack and is not native to x86.
It doesn't vanish, but it does make room for a lot more RAM.
With 32 bits you can address 4 gigabytes' worth of addresses:
00000000000000000000000000000000 is one address,
00000000000000000000000000000001 is another,
00000000000000000000000000000010 is another, and so on.
With 64 bits you can address up to 16 exabytes of RAM, which is equal to 16384 petabytes or 17179869184 gigabytes. It shouldn't be too long before some program out there requires most of that to run, even though now it seems like infinite RAM.
> Even though it requires 2 more instructions to do it, just because its not ocupying the same register as eax dosent mean its [MMX] an ugly hack although i would prefer that it be an extention to the gpr.(sic)
I surely would call MMX an ugly hack. Not because one needs to use normal registers to access data, but because MMX uses the FPU registers. Hello? To use instructions designed for 64-bit integer calculations, you need to give up the FPU? And remember, this was done because OSes couldn't have supported task switching without changes if there hadn't been a hack like this. MMX is useful in such special cases that practically no compiler generates MMX code - it's always hand-tuned assembler.
This is wrong. The maximum linear address on IA-32 is 2^32. The maximum virtual address is also 2^32. Segments can't get you above that; they just adjust the base address and the limit. Although you can construct an address which goes up to 8GB (with a very large base and a large virtual offset), the address will wrap around to below 4GB.
And the physical address is LARGER than 32 bits: it is 36 bits on Willamette. The physical bus on the processors has 36 bits for the address (well, actually 33 bits, since all addresses are chunk-aligned, but that's an implementation detail).
FYI, on x86-64, the maximum linear and virtual addresses are 2^48, and the maximum physical address is 2^40.
The last sentence of that is misleading, if not plain wrong. That is the maximum number of virtual addresses you can form within one task, yes. But those addresses all map to 2^32 bytes of memory.
Chapter 3 of volume 3 of the current IA-32 manual is much more clear on this.
The easiest way to look at this is through paging, which clearly has a 32 bit size.
I'd be curious to see a more complete reference for that citation (URL?).
Since that was marked troll, I'll blow more karma...
With most operations, 64 bits isn't 2x as fast, it's 1x as fast - unless you deal with the stack, in which case it could be even slower.
Addressing has little to do with word size. The 8088 shows that.
Suns running in 64-bit mode are often slower than when running in 32-bit mode.
Nintendo 64 games are all 32-bit code with just a few 64-bit operations. The good emulators proved that.
As far as doing two 32-bit ops at once goes, I still don't need a 64-bit data path to do that; I just need several 32-bit data paths. What I don't need is to dump a bunch of unused 64-bit numbers on the stack every time an exception happens (which one of my computers has done about 1047563950 times in the last 51 days).
This is because the P4 is a very peculiar beast that needs many optimisations for the code to run fast. Indeed, when Intel first shipped it they had to ship a specially optimised MPEG decoder for it to appear any faster than the PIII on benchmarks. For more info, check this [emulators.com] out:
Compilers are notoriously slow at catching up with the latest processor design, and you can probably expect gcc to catch up with the P4 around the time it's superseded by the 64-bit babies.
This is not to slur gcc - M$'s Visual Studio compiler suite hasn't yet been optimised for the P4 as far as I know (although I expect the .NET version will be), despite the vastly greater resources they have to throw at it...
If you use Intel's C compiler (esp. when using -ipo (inter procedural optimization)), you may want to check the results. It sometimes trades speed for correct results. See this article [heise.de] (in German).
I don't get it. Every time this crank posts the same offtopic junk, plugging his pet project (AI in VB and Javascript? Have some taste please!) he gets modded up. Moderators, look at his posting history and realise that you screwed up.
> With the currently popular 32-bit CPU chips, Robot AI memory limitations are too severe because a memory of 2^32 size is not enough.
Ah, an attempt to be somewhat on topic. However, I don't buy it - how much memory is enough? Do you know, or are you blowing smoke? And seeing as few machines have this much RAM (2^32 bytes = 4GB), don't they use disk swap files or databases anyway? There are many file systems that can handle files this size already, so how exactly will 64-bit processors suddenly enable AI in VB that can't be done at present?
An increase in computer power is a rising tide that lifts all boats, even crank AI, but how exactly is the move to 64 bits a sudden huge leap for your Javascript "mind"?
You're a nut, man. The `singularity` thing is the most obvious giveaway; two-bit futurists have probably been babbling about computers becoming smarter than humans and going off on their own since Babbage.
AMD's gonna win (Score:3, Insightful)
AMD is the future. Glad to see an underdog win.
Al Qrapola's gonna win (Score:2, Funny)
This is the basis of Al Win Modem's plot to overthrow the world!
All your Athlon are belong to us!
Re:AMD's gonna win (Score:3, Informative)
SCSI drives have disconnect abilities, which means they can have commands sent to them, and the bus is then disconnected (free for other use) while the drive is seeking to the sectors required and buffering into its internal RAM. This means that other drives can be instructed during this 'dead' time. On a single-drive system this is irrelevant, but even on a small server (say a 0.5TB disk array) it is crucial.
IDE drives hog the channel - which is why you can't get much more speed out of a RAID-0 array with 3 or 4 drives than one with 2 (masters) on a standard PC. There are only 2 channels, so only 2 drives can be accessed at once. Contrast this to a SCSI system, where anything up to ~64 disks might be attached to a single channel, but using disconnect to manage that channel amongst them.
To see why disconnect works so well, remember that the time it takes to seek the disk head is measured in milliseconds - this is several orders of magnitude slower than the time to send the commands/data over the bus to the host computer.
Also remember that ATA-100 is (AFAIK) a burst speed, i.e. it can transfer at that speed when the source data is in the cache - it cannot read the data off the platters at that speed... The latest SCSI standard is 320 MBytes/sec (Seagate, I believe), although I think 160 MBytes/sec is the highest widely available. Given the architecture underlying both technologies, which do you think has the best chance of filling its cache more often in a RAID array? (Hint: it starts with an 'S'.)
The only company I have seen to make large-scale IDE RAID arrays work as fast as SCSI ones uses an IDE controller *per drive*, and attaches a SCSI/Fibrechannel front-end via custom hardware. It's still cheaper than SCSI, but not by that much, and getting people who know about it is more difficult when it goes wrong...
Simon.
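The seek-versus-bus-time argument above can be put into rough numbers. The C sketch below is entirely back-of-envelope, with every figure assumed (8 ms average seek, 0.1 ms of bus occupancy per command, 4 drives per channel); it only shows why overlapping seeks via disconnect raises aggregate throughput even though no single drive gets faster.

#include <stdio.h>

int main(void)
{
    /* Toy model with assumed numbers. */
    const double seek_ms = 8.0, bus_ms = 0.1;
    const int drives = 4;

    /* Channel held for the whole seek: requests serialize, so the
     * channel completes one I/O per (seek + bus), no matter how many
     * drives hang off it. */
    double serialized_iops = 1000.0 / (seek_ms + bus_ms);

    /* Disconnect: the bus is only busy for bus_ms per command, so all
     * drives seek in parallel and each waits briefly for the bus. */
    double disconnect_iops = drives * 1000.0 / (seek_ms + bus_ms * drives);

    printf("serialized: %.0f IO/s\n", serialized_iops);  /* ~123 */
    printf("disconnect: %.0f IO/s\n", disconnect_iops);  /* ~476 */
    return 0;
}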
Re:AMD's gonna win (Score:2)
It's a Free country, brother...
Re:AMD's gonna win (Score:2, Informative)
Of course, the drives are still lower performance, but with a healthy amount of RAM I get good results at a reasonable price.
Re:AMD's gonna win (Score:2, Interesting)
Both SCSI and IDE are communications mechanisms, with SCSI winning out as the more intelligent (due to a variety of factors). That said, it's merely a function of the circuitry stuck on the back of the drive: why in the world would any drive manufacturer manufacture completely different drives for SCSI and IDE? Seriously, I personally have never looked at the stats, but that seems absurd. It seems brutally obvious that they'd just pull them off the end of the line and stick on the SCSI board, or the IDE board, of course sticking a 200% premium on the SCSI-equipped version as a sucker tax.
I find it interesting that you mentioned "since 1998", and it is perhaps true given that condition: IDE has permeated the market, and the only area where SCSI still has a presence is high-end servers, so given that, it is possible that they only bother sticking SCSI boards on the 15,000 RPM monsters anymore. However, I still disagree with your assertion that it's a "myth", as back in the day (when even desktops came with SCSI if you wanted "multitasking") every SCSI-versus-IDE review started off with a disclaimer that the drives were physically exactly the same, and only the communications mechanism differed.
Re:AMD's gonna win (Score:2, Insightful)
And I bet it will cost more than 10%. AMD will still have the better performance/price ratio.
Re:AMD's gonna win (Score:2)
Certainly Northwood will (rightly) carry a bit of a price premium over Willamette, but mainly because Willy prices will drop by a lot. Northwood will almost certainly improve Intel's price/performance relative to AMD, for the simple reason that in addition to being able to clock faster and getting better performance at equivalent clock speeds, Northwoods are cheaper to make than Willamettes, because they're a lot smaller. (~130mm^2 vs. 217mm^2)
AMD will still offer better performance/price, of course, but mainly because they will cut prices in response. (And they had an awfully large lead to start with.)
Re:AMD's gonna win (Score:2, Informative)
AMD has kick-ass CPUs; they are fast and cheap. Same goes for IDE, including the fact that the drives have become bigger and more reliable over the past years, in case you haven't noticed.
I'm not bashing anyone here, I'm just stating a fact. And in case you wonder, my OpenBSD server runs on AMD and SCSI.
Maybe you were thinking of Alpha CPUs? Now THERE'S raw power for ya.
Might have 64-bit computing very soon. (Score:3, Interesting)
Re:Might have 64-bit computing very soon. (Score:2)
That doesn't sound very apple-like to me. They'll probably keep it a closely guarded secret.
I'm fairly sure they will ship in Jan (Score:2, Informative)
Apple is going to use an MPC85xx. Here is one, if not the, chip Apple will use: MPC8540 info [motorola.com]
64/32-bit processing, 333MHz DDR, RapidIO, etc.
HyperTransport will probably also find its way into the new motherboard designs. That's been done for a while now.
Re:Might have 64-bit computing very soon. (Score:2)
The PPC spec has included 64-bit instructions from day one, but they've only been used in IBM's mainframes. The problem with Apple using a standard 64-bit PPC is that there are a few minor differences in how certain generic instructions are handled (most instructions are specific to single- or double-words), which makes running code compiled for 32-bit PPC uncertain on 64-bit PPC.
Is this really true? I run on a mixture of 32-bit and 64-bit 4-way POWER RS6000 machines - all the software is compiled on the 32-bit platforms and runs seamlessly everywhere. So either your statement doesn't apply on AIX, or the PowerPC chips are subtly different from the POWER platform when it comes to 32-bit/64-bit.
Cheers,
Toby Haynes
Re:Might have 64-bit computing very soon. (Score:2)
I'm not currently speaking for 64 bit PPC, as I've never seen one. I've seen 64 bit POWER-4 servers, but that's a little different. I do, however, also target and maintain Solaris versions of my software, which are 64 bit aware. I do have to deal with the 32 bit library/64 bit application issues. I do have to deal with building both 32 and 64 bit versions. I even have to deal with testing gcc 3.0.x 32/64 modes against the Forte CC 32/64 modes. I'm pretty damned familiar with the issues involved in making software on mixed addressing operating systems work.
Before I go on, let me note that a 64-bit application in the sparcv9 format cannot link to a 32-bit sparcv8 library, either static or dynamic. The only solution with a commercial library is to write an interface-by-interface transport layer for the library, linking the 64-bit side of the transport layer to the application and the 32-bit side to the library, and take the penalty of using pipes for communication right on the jaw. Oh, and the 32-bit side will have the 4GB memory limit, too...
While Sun does do a good job of making the 32 bit/64 bit transition look smooth, it's not, really. SGI and HP face similar issues. I'm told that Alpha Linux may have workarounds not available on the big iron platforms, but I don't know the details, as I don't do any serious Alpha work.
Now I am currently speaking for the PPC. Please take this as speculation based on POWER-4 details and the original PPC spec, not as insider knowledge.
The PPC is interesting. The original design calls for mode switching (like the sparc or mips), but there's a provision for realtime mode switching in there. I expect you would take a heavy hit, but you might be able to link 64- and 32-bit binaries, if the linker were smart enough to insert mode-switch instructions into the calling sequence and if the compiler were set to interpret interface definitions (in headers) according to a dependency-determined pointer-size assumption. Come to think of it, it should have been possible to implement something like this for the sparc and mips binary formats... (e.g. _int_v8 and _int_v9 as separate types in the compiler's internal interpretation...)
Being realistic, I expect eventually we'll have a 64 bit kernel (Darwin) with 32 bit libraries provided as interfaces for mixed mode applications, and a handful of apps (Photoshop, FCPro) that require 4+GB memory being released in 64 bit form (requires G5(6?) or greater!!!) for power users... this of course, at the point in time where we have 2GB+ DDR modules, and four slots again... and another major transition. At least Apple has proven that they are good at tremendous transitions, remarkably so, considering...
There are other possible benefits to 64-bit computing, beyond addressing. Some of them can be realized now: on the G4s and the P4s, there are ways to use 64-bit (or 128-bit, or even, in one case on the P4, 256-bit) bit-vector arithmetic to speed up comparisons, sometimes by unbelievable factors. Some higher-precision mathematical processing is possible only with 128-bit floating point, which is generally coupled only with 64-bit integer registers, which in turn are the basis of 64-bit memory addressing as a reasonable proposition...
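One of those bit-vector comparison tricks is easy to show even without SIMD hardware: the well-known find-a-zero-byte-in-a-word test used by fast strlen implementations. A generic C sketch, nothing G4- or P4-specific about it:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* True if any of the 8 bytes in v is zero.  The subtraction borrows
 * through a byte only when that byte is 0x00, setting its top bit in
 * the result; the "& ~v" filters out bytes whose top bit was already
 * set to begin with. */
static int has_zero_byte(uint64_t v)
{
    return ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL) != 0;
}

int main(void)
{
    char s[32] = "compare 8 bytes at a time";  /* zero-padded to 32 bytes */
    uint64_t chunk;

    /* Scan the buffer one 64-bit word at a time instead of one byte
     * at a time -- the kind of wide-register win alluded to above. */
    for (size_t i = 0; i + 8 <= sizeof s; i += 8) {
        memcpy(&chunk, s + i, 8);
        if (has_zero_byte(chunk)) {
            printf("terminator is in the word at offset %zu\n", i);
            break;
        }
    }
    return 0;
}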
There's also a possible two-instruction-per-cycle trick that could be performed on a 64 bit CPU with a hybrid (64 bit with 32 bit support) kernel for certain operations. There's some documentation for this online, but I haven't tried anything of the sort myself (no current access to a POWER-4 server), so I can't vouch for the usefulness of this.
We're not talking about a trivial task, or any immediate benefits, so don't expect a 64 bit MacOS X anytime soon. Even if the CPUs are 64 bit. It should be transparent, however, as the PPC is upward compatible (32 bit binaries run on 64 bit CPUs) just as the sparc and MIPS are...
MIPS + ARM ?? (Score:2)
32-bit: you see ARM with about 60%-70%, MIPS with about 25%-35%, and the rest split over the remaining 5%, depending on who did the research.
64-bit: you see MIPS with 90%-95%, SPARC with about 3%-7%, and the rest splitting 2%-3%.
Ask yourself: what's in my set-top box?
A MIPS or ARM of some kind. What's in my printer? MIPS. What's in my phone? ARM. What's in my router or ADSL box? A MIPS.
Wake up, people: no one cares, the war is over.
Intel knows it; what do you think IA64 is about?
Oh, and notice that a billion Intel bucks bought an ARM licence; they don't care about IA32. StrongARM and StrongARM2, aka XScale, are taped out and earning cash.
64-bit MIPS went past 1GHz a while back, with dual cores on one die and speculative execution from NEC.
Really, these are the things to worry about.
regards
john jones
Contents of the Article (Score:3, Informative)
By: Paul DeMone (pdemone@realworldtech.com) Updated: 01-02-2002
A Quick Look Back
In the last six months several noteworthy events and disclosures have occurred in the fast moving world of microprocessors. AMD started shipping its Palomino K7 processor as the Athlon XP. Despite the controversy surrounding the performance rating based model naming scheme associated with the XP, it appears the latest refinement of AMD's venerable K7 design has, by most measures relevant to the PC world, eclipsed the performance of the 2 GHz Pentium 4 (P4), the highest speed grade offered for Intel's first implementation of its new x86 microarchitecture. However, this advantage should prove short-lived, as the second generation 0.13 um Northwood P4 will be officially released in early January. The Northwood will offer higher clock rates, an L2 cache doubled in size, and minor internal performance enhancements.
Extending their rivalry on a different front, Intel and AMD unveiled microarchitectural details of their forthcoming 64-bit standard bearers at Microprocessor Forum in October. Although the McKinley and Hammer are both future flagship parts, and thus important symbols of the Intel and AMD struggle for technological leadership, the two processor families will be sold into different markets and won't directly compete. In other 64-bit news, IBM officially unveiled the POWER4 processor in several different hardware configurations with clock rates as high as 1.3 GHz and took the top spot in both the integer and floating point performance categories of the SPEC CPU 2000 benchmark. However, preliminary "teaser" numbers from Compaq suggest that IBM will lose SPEC performance leadership when the EV7, the final major product introduction in the doomed Alpha line, is unveiled. Regardless of who wins bragging rights for technical computing, both processors will offer memory and I/O bandwidth far ahead of their competitors and both should do quite well on commercial workloads.
Sun Microsystems continues to slowly upgrade its UltraSPARC-III line in the face of an increasingly difficult competitive environment. Sun recently introduced its copper process based version of the US-III at 900 MHz. The latest device ostensibly includes a fix to the prefetch buffer bug that vexed the earlier aluminum based device. Far more interesting than the new silicon was the latest version of Sun's compiler. It raised the new copper US-III/900's SPECfp2k score by roughly 20% by spectacularly accelerating one of the 14 programs in the suite using an undisclosed optimization. A recent call was issued for new programs for the next generation of the SPEC CPU benchmark. Tentatively named SPEC 2004, it now seems like it couldn't come soon enough.
McKinley: Little more Logic, Lots more Cache
The most striking aspect of McKinley is its size and transistor count. Weighing in at a hefty 220 million transistors, this 0.18 um device occupies a substantial 465 mm2 of die area. The majority of McKinley's transistor count is tied up in its cache hierarchy. It is the first microprocessor to include three levels of cache hierarchy on chip. The first level of cache consists of separate 16 KB instruction and data caches, the second level of cache is unified and 256 KB in size, and the third level of cache is an astounding 3 MB in size. The die area consumed by the final level of on-chip cache can be seen in the floorplan of the McKinley and some representative server and PC class MPUs shown in Figure 1.
Figure 1 Floorplan of McKinley and Select Server and PC MPUs.
The Itanium (Merced) floorplan is shown as blank because although its chip floorplan has been previously disclosed its die size is still considered sensitive information by Intel and has not been released. The outlines shown indicate the range of likely sizes of the Itanium die based on estimates from a number of industry sources.
Both the first and second generation IA64 designs, Itanium/Merced and McKinley, are six issue wide in-order execution processors. In-order execution processors cannot execute past stalled instructions so it is important to have low average memory latency to achieve high performance. This focus on the memory hierarchy can be clearly seen in the McKinley [1]. Although it is not surprising that the on-chip level 3 cache in McKinley is much faster than the external custom L3 SRAMs used in the Itanium CPU module, it is interesting to see how much faster in terms of processor cycles the McKinley level 1 and 2 caches are despite the McKinley's 25 to 50 percent faster clock rate in the same 0.18 um aluminum bulk CMOS process.
The improvement in average memory latency between Itanium and McKinley can be approximated using the comparative access latencies presented by Intel at their last developers conference, combined with representative hit rates based on the size of each cache in the two designs and an assumed average memory access time of 160 ns. This data is shown in Table 1.
Table 1. Estimated average memory latency: Itanium vs. McKinley

  Processor                        Itanium   McKinley
  Frequency (MHz)                      800       1000
  L1    Size (KB)                       16         16
        Latency (cycles)                 2          1
        Miss rate                     5.0%       5.0%
  L2    Size (KB)                       96        256
        Latency (cycles)                12          5
        Global miss rate              1.8%       1.1%
  L3    Size (MB)                        4          3
        Latency (cycles)                21         12
        Global miss rate              0.5%       0.6%
  Mem   Latency (ns)                   160        160
        Latency (cycles)               128        160
  Total Average latency (cycles)      3.62       2.34
        Average latency (ns)          4.52       2.34
The back-of-the-envelope calculations in Table 1 suggest that a load instruction will be executed by McKinley with about half the average latency, in absolute time, that it would see on Itanium. No doubt this is a major contributor to the much higher performance of the second generation IA64 processor. Although the large die area of McKinley suggests a substantial cost premium compared to typical desktop MPUs, for large scale server applications the extra silicon cost is insignificant compared to the overall system cost budget. In fact, from the system design perspective, the ability to reasonably forgo board level cache probably more than pays for the extra silicon cost of McKinley through reduction of board/module area, power, and cooling requirements per CPU. Large scale systems based on the EV7 will also eschew board level cache(s), although with the Alpha it is the greater latency tolerance of the out-of-order execution CPU core plus the integration of high performance memory controllers that permit this, rather than gargantuan amounts of on-chip cache.
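For readers who want to check the arithmetic, here is a small C sketch that reproduces the bottom rows of Table 1 by weighting each level's latency with the global miss rate of the level above it (the 160 ns memory latency is the article's stated assumption):

#include <stdio.h>

struct cpu {
    const char *name;
    double mhz;
    double l1_lat, l1_miss;   /* cycles, fraction of accesses */
    double l2_lat, l2_gmiss;
    double l3_lat, l3_gmiss;
    double mem_ns;
};

static void report(struct cpu c)
{
    double mem_cycles = c.mem_ns * c.mhz / 1000.0;
    double avg = c.l1_lat                 /* every access pays L1 */
               + c.l1_miss  * c.l2_lat    /* 5% continue to L2 */
               + c.l2_gmiss * c.l3_lat    /* global L2 misses pay L3 */
               + c.l3_gmiss * mem_cycles; /* global L3 misses pay DRAM */
    printf("%-8s  %.2f cycles  %.2f ns\n", c.name, avg, avg * 1000.0 / c.mhz);
}

int main(void)
{
    report((struct cpu){"Itanium",   800, 2, 0.05, 12, 0.018, 21, 0.005, 160});
    report((struct cpu){"McKinley", 1000, 1, 0.05,  5, 0.011, 12, 0.006, 160});
    return 0;  /* prints 3.62/4.52 and 2.34/2.34, matching Table 1 */
}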
Besides the greatly enhanced cache hierarchy, the McKinley will boast two more "M-units" than Itanium. These are functional units that perform memory operations as well as most types of integer operations. In a recent article I speculated about the nature of McKinley design improvements. I suggested that it would contain 2 more I-units and 2 more M-units than Itanium in order to simplify instruction dispatch and reduce the frequency of split issue due to resource oversubscription. In IA64 parlance, both I-units and M-units can execute simple ALU based integer instructions like add, subtract, compare, bitwise logical, simple shift and add, and some integer SIMD operations. I-units also execute integer instructions that occur relatively infrequently in most programs but require substantial and area intensive functional units. These include general shift, bit field insertion and extraction, and population count.
Because the integer instructions that cannot be executed by an M-unit are relatively rare, the McKinley designers saved significant silicon area with little performance loss by only adding two M-units (for a total of four) and staying with the two I-units of Itanium. Data on the relative frequency of different integer operations suggest that the vast majority of integer operations, about 90%, that occur in typical programs are of the type that can be executed by either an M-unit or I-unit [2]. If we consider a random selection of six integer operations, each with a 90% chance of being executable by an M-unit, then the odds are better than 98% that any six instructions are compatible with the MMI + MMI bundle pair combination and can be dual issued by McKinley. Thus there is practically no incentive to add two extra I-units to McKinley to permit the dual issue of the MII + MII bundle pair combination.
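The better-than-98% figure is a straightforward binomial sum: an MMI + MMI bundle pair provides two I slots, so dual issue works whenever at most two of the six instructions are I-unit-only. A quick sketch, assuming each instruction independently has a 10% chance of needing an I-unit:

    #include <stdio.h>
    #include <math.h>

    /* P(at most 2 of 6 independent instructions are I-unit-only),
       with each instruction having a 10% chance of being I-only. */
    int main(void)
    {
        double p = 0.10, q = 1.0 - p, total = 0.0;
        int n = 6;
        for (int k = 0; k <= 2; k++) {
            /* binomial coefficient C(n, k) */
            double c = 1.0;
            for (int i = 0; i < k; i++)
                c = c * (n - i) / (i + 1);
            total += c * pow(p, k) * pow(q, n - k);
        }
        printf("P(MMI+MMI compatible) = %.4f\n", total); /* about 0.984 */
        return 0;
    }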
One curiosity in the McKinley disclosure was the fact that the basic execution pipeline was revealed to be 8 stages long. Although this is still 2 stages shorter than the pipeline in the slower clocked Itanium, it is one more stage than the 7 stages previously attributed to McKinley [3]. Whether this represents a slightly different way of counting the pipe stages or an actual design change isn't clear. Ironically, it has long been rumored that the Itanium pipeline was stretched by at least one stage quite late in development. It will be interesting to see if the new IA64 core under development by the former Alpha EV8 design team (now at Intel) also suffers this strange pipeline growth affliction.
Hammering x86 into the 64 bit World
In October AMD revealed some aspects of K8, its next generation x86 core code-named Hammer [4]. This new design is primarily distinguished by being the first processor to implement x86-64, AMD's extension to the x86 instruction set that supports 64 bit flat addressing and 64 bit GPRs, as well as other enhancements. As can be seen in Figure 2, the Hammer core heavily leverages AMD's highly successful K7 core.
Figure 2 Comparison of K7 Athlon and K8 Hammer Organization
The back end execution engine of the K8 Hammer core is basically identical to that of the K7 except that the integer schedulers are expanded from 5 to 8 ROPs. The increase in the integer out-of-order instruction scheduling capability this implies may have been intended to better hide the data cache's two cycle load-use latency, and thus slightly increase per clock performance. An alternative hypothesis is that the latency of some integer operations may have been increased to allow higher clock rates and the change was made to prevent a slight loss in per clock performance. The basic execution pipeline of the K7 and K8 are compared in Figure 3.
Figure 3 Comparison of K7 and K8 Basic Execution Pipeline
The K8 execution pipeline has two more stages than K7, and these new stages seem to be related to x86 instruction decode and macro op distribution to the integer and floating point schedulers. Although some of the stages have been renamed it appears that the final five pipe stages, representing the back end execution engine, are comparable. This is unsurprising as the most complex and difficult task in an x86 processor like the K7 or K8 is the parallel parsing of up to three variable length x86 instructions from the instruction fetch byte stream and their decoding into groups of systematized internal operations. In comparison, the execution engine is hardly much more complex than a typical out-of-order execution RISC processor.
Both the block diagram and execution pipeline indicate that AMD has concentrated nearly all of its Hammer development effort on revamping the front end of the K7 design. Some of the extra degree of pipelining may be related to the extra complexity of decoding yet another level of extensions (x86-64) on top of the already Byzantine x86 ISA. Some of the increase may be related to increased flexibility in internal operation dispatch to reduce the occurrence of stall conditions and increase IPC. And some of the increase may simply reflect a reduction in the work per stage to increase clock scalability relative to the K7 core. Without a detailed description of each of the pipeline stages in the K8 it is difficult to correlate front end pipe stages in the K7 to the K8, and next to impossible to assess how the benefit of the extra two pipe stages is allocated between accounting for increased ISA complexity, measures to increase IPC, and reduction in timing pressure per pipe stage to allow higher clock rates.
Although the 64-bit instruction set extension makes for attention grabbing headlines in the technical trade press, the major performance enhancements in the Hammer series are much more prosaic from a processor architecture point of view. These enhancements are the direct integration of interprocessor communications interfaces and a high performance memory controller. Like a "poor man's EV7", the Hammer includes three bi-directional HyperTransport (HT) links and a memory controller supporting a 64 or 128-bit wide DDR memory system using unbuffered or registered DIMMs. With the latter, a K8 processor can directly connect to 8 DIMMs, although this number may be reduced at the higher memory speeds supported. It is interesting to compare the results of the same design philosophy applied to the high-end server and mainstream PC segments of the MPU market as shown in Table 2. Power and clock rates for the Hammer MPU are estimates.
Table 2: High-end vs. mainstream integration, Alpha EV7 vs. K8 Hammer

                       Alpha EV7 [5]              K8 Hammer
  Process              0.18 um bulk CMOS          0.13 um SOI CMOS
  Die size             397 mm2                    104 mm2
  Power                125 W @ 1.2 GHz            ~70 W @ 2 GHz
  Comm links           4 links, each 6.4 GB/s,    3 links, each ~6 GB/s
                       plus one 6.4 GB/s IO bus
  Memory controller    2 x 64 bit DRDRAM,         64 or 128 bit DDR,
                       12.8 GB/s peak             2.7 or 5.4 GB/s peak
  Package              1443 LGA                   ?
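As a sanity check on the Hammer memory figures, the quoted peaks are consistent with a DDR333-class interface; peak bandwidth is simply bus width in bytes times the effective transfer rate (the 333 MT/s rate is my assumption, not an AMD disclosure):

    #include <stdio.h>

    /* Peak DDR bandwidth = bus width (bytes) x effective transfer rate.
       Assuming a DDR333-class interface (333 million transfers/s).
       The table's 5.4 GB/s entry is the rounded 2 x 2.7 figure. */
    int main(void)
    {
        double mts = 333e6;                      /* transfers per second */
        printf("64-bit DDR:  %.1f GB/s\n",  8 * mts / 1e9);  /* ~2.7 */
        printf("128-bit DDR: %.1f GB/s\n", 16 * mts / 1e9);  /* ~5.3 */
        return 0;
    }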
Although the Intel McKinley and AMD Hammer are both 64 bit MPUs, these devices are directed at different markets. While the large and expensive McKinley will target medium and high-end server applications, the first member of the Hammer family, code named "Clawhammer", will target the high end desktop PC market. That is not to say that McKinley will outperform the Clawhammer device. Indeed, I expect the AMD device will easily beat the much slower clocked IA64 server chip in SPECint2K and many other integer benchmarks, as well as challenge much faster clocked Pentium 4 devices in both integer and floating point performance.
Exactly how much performance the Hammer core may provide is the subject of some controversy. AMD's Fred Weber was quoted as stating the Hammer core could offer SPECint2k performance as much as twice that of current processors. Although this comment is vague enough to drive a truck through (twice as fast as the best AMD processor? The best x86 processor? The best processor announced but not yet shipping? Running IA-32 or x86-64 code? Clawhammer or the big cache Sledgehammer?), a few web based news sites interpreted it as meaning the Hammer would achieve 1400 SPECint2k, and now some people are incorrectly attributing this figure to Weber himself. Keep in mind that no Hammer device has even taped out as of the end of 3Q01, let alone been fabricated, debugged, verified, and benchmarked at the target clock frequency. Whatever figure Mr. Weber had in mind was derived from architectural simulation, and for a benchmark suite as cycle intensive as SPEC CPU, simulation results are approximate at best [6][7]. As has been shown time and time again, it is best not to count performance chickens too closely before the silicon eggs hatch.
Alpha Goes Out With a Bang not a Whimper
Although Compaq announced the wind down of Alpha development in June and transferred nearly the entire EV8 development team to Intel over the summer, there is still one more surprise in store for the computer industry. The EV7, the final major design revision in store for Alpha, has been the subject of intense testing, verification, and system integration exercises since late spring. This design has been in the pipeline for a long time. It was first announced more than three years ago and finally taped out in early 2001. Because of the complexity of this device (basically a complex CPU and large scale server chipset all on one die) and the incredible degree of shakedown that server class MPUs and systems undergo, the EV7 will not go into volume production until the second half of 2002. To bridge the gap between current products and EV7 based systems, Compaq will shortly release a 1.25 GHz version of the workhorse EV68.
Although general details of the EV7 design have been in the public domain for more than three years, and specific facts about the performance of this MPU's router and memory controllers were disclosed in February, I think the performance it will achieve when officially rolled out in 2H02 will surprise and dismay many in the industry (possibly including senior Compaq management). At the Microprocessor Forum in October Compaq's Peter Bannon unveiled some preliminary performance numbers for the EV7, namely 804 SPECint2k, 1253 SPECfp2k, and roughly 5 GB/s STREAM performance.
Although these numbers are quite good in absolute terms, comparable to the fastest speed grade POWER4 running in a contrived and unrealistic hardware configuration, they failed to live up to my estimates given in a previous article. However, former members of the Alpha design team have privately confirmed my suspicions that Mr. Bannon was clearly sandbagging the EV7 numbers, keeping a not insignificant amount of performance off the table. For a product still more than six months from release that is a not unexpected tactic. I still hold the opinion that when all is said and done the EV7 has a good chance of being the highest performance general purpose microprocessor ever fabricated in 0.18 um technology, a fitting ending to a remarkable and tragic technological saga (EV79, an EV7 shrink to 0.13 um SOI, is on the roadmap for 1H04, but the continued turmoil at Compaq suggests a healthy amount of scepticism is in order).
Sun's Surprising Spike SPARCs SPECulation
Sun recently introduced a new member of its UltraSPARC-III family. This new 900 MHz device differs from earlier US-III parts by the use of copper interconnect instead of aluminum. Although Sun submitted official SPEC scores for a 900 MHz Sun Blade 1000 Model 1900 using an aluminum US-III in late 2000, yield was apparently poor and this speed grade wasn't generally available. A rarely occurring bug related to a prefetch buffer inside the US-III was discovered, and as a workaround this feature was disabled in firmware. Unfortunately for Sun Microsystems, this caused the SPECfp_base2k score for the Model 1900 to drop from an already lackluster 427 to a lamentable 369 in a second SPEC submission in the spring of 2001. So it comes as no small surprise that the Sun Blade 1000 Model 900 Cu workstation, based on the new copper processor, turned in a SPECfp_base2k score of 629 in a recent submission. Both the Model 1900 and Model 900 Cu versions of the Blade 1000 feature 8 MB of L2 cache.
It is possible that the copper US-III incorporates improvements beyond a fix to the prefetch buffer bug, and that system level hardware improved between the Model 1900 and Model 900 Cu. However, it appears much of the improvement can be attributed to the use of the Sun Forte 7 EA compiler instead of the earlier Forte 6 update 1 compiler used to generate the 427 and 369 scores. The reason I can say that with confidence can be seen quite readily in the graph in Figure 4.
Figure 4 SPECfp_base2k Component Scores for US-III and Competitors
The SPECfp_base2k scores for the 14 sub-component programs from the pre-bug fix Sun Blade 1000 Model 1900 submission using the Forte 6 compiler are compared to the recent Sun Blade Model 900 submission using the Forte 7 compiler. In addition, scores for the Itanium (4MB, 800 MHz version in an HP i2000), Alpha EV68C (1000 MHz version in an ES45/1000), and POWER4 (1300 MHz version in a pSeries 690 Turbo) are provided for reference. It is the new compiler's score on the 179.art program that quite literally stands out from the rest. Although several other programs see appreciable improvement (the 183.equake score nearly triples), the new compiler increases the score of 179.art by more than 800%. In absolute terms this score, 8176, is more than four times higher than that achieved by the Alpha EV68 and POWER4, MPUs that easily beat the copper US-III on nearly every other SPECfp2k program. The 179.art score achieved by the Forte 7 compiler is vital to the new machine's pumped up SPECfp_base2k score. If 179.art is left out of the geometric mean, the SPECfp_base2k score drops by nearly 18%, from 629 to 516.
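The 18% figure checks out. SPECfp_base2k is the geometric mean of the 14 component scores, so removing one component with score s from a reported geomean g leaves (g^14 / s)^(1/13):

    #include <stdio.h>
    #include <math.h>

    /* SPECfp_base2k is a geometric mean of 14 component scores.
       Removing one component score s from a 14-program geomean g
       leaves a 13-program geomean of (g^14 / s)^(1/13). */
    int main(void)
    {
        double g = 629.0;   /* reported base score, Blade 1000 Model 900 Cu */
        double s = 8176.0;  /* 179.art component score */
        double without_art = pow(pow(g, 14.0) / s, 1.0 / 13.0);
        printf("geomean without 179.art: %.0f\n", without_art); /* 516 */
        return 0;
    }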
This remarkable improvement on 179.art is unusual in the field of compiler engineering where single digit percentage performance increases are often considered major victories. So it is no surprise that Sun's achievement immediately raised suspicions among industry observers and competitors about the nature of the optimization employed by the Forte 7 compiler. It is hard not to think of Intel's infamous eqntott compiler bug that erroneously increased the SPECint92 score of its processors by about 10% until caught and fixed [8]. This bug used an illegal optimization that allowed the output of 023.eqntott to pass result checking with the test data used but was invalid in the general case.
Although the exact nature of the new Sun optimization isn't known, suspicion has fallen on several inner loops within the 179.art program. Speculation is that this code was originally written in FORTRAN and converted to C. Because FORTRAN and C access two dimensional arrays in opposite row and column order it is presumed that 179.art accesses arrays by the wrong index in the innermost loop causing poor cache locality. It is possible that the new Sun compiler recognizes this situation and turns the nested loops that step through the array accesses "inside out" and achieves much lower cache miss rates. Whatever the exact nature of the Sun optimization turns out to be there is the question of whether it violates one of the SPEC rules, namely "Optimizations must improve performance for a class of programs where the class of programs must be larger than a single SPEC benchmark or benchmark suite".
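If that speculation is correct, the transformation itself is textbook loop interchange. Here is a minimal C illustration of the general idea; this is not code from 179.art or Sun's compiler output, just the pattern:

    #include <stddef.h>

    #define N 1024
    static double a[N][N], sum;

    /* Column-order traversal of a row-major C array: the inner loop
       strides N * sizeof(double) bytes per iteration, so nearly every
       access touches a new cache line. */
    void slow(void)
    {
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                sum += a[i][j];
    }

    /* Interchanged ("inside out") loops: unit stride in the inner
       loop, so each cache line is fully consumed before moving on. */
    void fast(void)
    {
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                sum += a[i][j];
    }

The subtlety for a compiler is proving that swapping the loops preserves the program's results in the general case, which is presumably where Forte 7 earns its keep.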
Without knowing the nature of the new Sun optimization it is impossible to say whether Sun should be praised or scolded. But here are the words of Sun engineer John Henning who made the following comments in a November 27 post to the comp.arch usenet news group:
"Our compiler team believes that what Sun has done with art is (1) the result of perfrectly [sic] legitimate optimizations (2) compliant with SPEC's rules and (3) not appropriate for further discussion - if you want to figure out to make art faster, go work on it yourself, don't ask Sun how we did it!"
With the widespread attention this incident has engendered within the industry we can presume that compiler and benchmarking experts working for Sun's competitors have closely scrutinized the code Forte 7 generates for 179.art. The fact that Sun's new scores haven't been withdrawn from the official SPEC web site yet suggests that Mr. Henning is correct. No doubt we can expect competitors' processors to score much higher on 179.art in the months and years to come as the Sun optimization migrates to other compilers. Depreciation of a benchmark's value is seldom as spectacular as in the case of 179.art, but it still naturally occurs over time and provides incentive to accelerate the development of a successor to the SPEC CPU 2000 benchmark suite (which no doubt will not include 179.art). A message soliciting programs for this new suite, tentatively named SPEC 2004, was posted on comp.arch on December 28. Ironically, the author of this message, the secretary of the SPEC CPU subcommittee, is none other than the previously mentioned John Henning.
Conclusion
It is comforting to see that the pace of innovation in the microprocessor field shows no sign of slackening. The great seesaw battle between Intel and AMD for share of silicon's richest prize, the x86 microprocessor market, is about to enter a new phase with the imminent release of the 0.13 um Northwood Pentium 4. Although AMD will also migrate its K7 core to 0.13 um later in 2002, with both bulk and SOI versions, it is unlikely to be in a position to regain the performance advantage over Intel it previously achieved with the T-bird and XP Athlon until its new 64-bit Hammer core ships. Unlike AMD, Intel plans to reserve its 64-bit offerings for the high-end market. With McKinley, Intel hopes to address the significant performance difficulties seen in the Itanium, in part by taking advantage of its capacious manufacturing facilities to incorporate a huge amount of on-chip cache on a sizable die.
The time it takes for new ideas and features to migrate down from high-end server MPUs to mass-market devices seems to be shrinking. The integration of high performance interprocessor communication links and memory controller(s) onto a processor die has been on the drawing board for many years and will soon be realized in the high end server market in the form of the EV7. Remarkably, the same concepts will appear in a mass-market x86 processor, the first of AMD's Hammer series, not too much later. Although these features will naturally be more limited in scope in the x86 device to keep costs under control, they should still provide a large boost in performance from significantly reduced memory access latency, as well as a dramatic reduction in the cost of producing multiprocessor systems based on this device.
Few topics in the computer and microprocessor field can raise a controversy, as well as blood pressure, as quickly as benchmarks and benchmarking. Sun managed to throw a hand grenade into the simmering debate between the supporters and detractors of the industry standard SPEC CPU benchmark by speeding up the execution of one of the fourteen programs in the floating point suite by nearly an order of magnitude through the use of a previously unexploited compiler optimization. This in turn raised the SPECfp2k score of its latest US-III processor by roughly 20%. We can now look forward to the spectacle of competing firms scrambling to reverse engineer Sun's new compiler trick and incorporate the same voodoo into their own wares.
References
[1] Krewell, K., "Intel's McKinley Comes Into View", Microprocessor Report, October 2001, Volume 15, Archive 10.
[2] Hennessy, J. and Patterson, D., "Computer Architecture A Quantitative Approach", Morgan Kaufmann Publishers Inc., 1990, ISBN 1-55860-069-8, p. 181.
[3] Advance Program, 2001 IEEE International Solid-State Circuits Conference, p. 35.
[4] Weber, F., "AMD's Next Generation Microprocessor Architecture", October 2001, Downloaded from AMD web site.
[5] Jain, A., et al., "A 1.2 GHz Alpha Microprocessor with 44.8 GB/s Chip Pin Bandwidth", Digest of Technical Papers, ISSCC 2001, Feb 6, 2001, p. 240.
[6] Dulong, C. et al, "The Making of a Compiler for the Intel Itanium Processor", Intel Technology Journal, Q3 2001, Downloaded from Intel web site.
[7] Desikan, R. et al, "Measuring Experimental Error in Microprocessor Simulation", Digest of Technical Papers, 28th Annual International Symposium on Computer Architecture, June 2001.
[8] "Intel OverSPECs Parts", Microprocessor Report, January 22, 1996, Volume 10, Number 1, P. 5.
Copyright © 1996-2001, Real World Technologies - All Rights Reserved
Compaq and Alphacide (Score:4, Interesting)
By the way, Pricewatch is quoting about $3K for the lowend Itaniums running at about 700 MHz. No thanks.
Mirror (Score:2, Informative)
Link [rogers.com]
Shrinkage (Score:5, Informative)
Remember that the components in any digital system - and I'm not just talking about your windoze desktop PC, but servers, mainframes and embedded systems too - have to talk to each other in order to do anything remotely useful. Last time I looked, most PCI devices didn't utilise the provision for 64-bit data bus operation.
There's a perfectly good reason for this, of course... in order to attach a chip to a circuit board, you need an array of pins (or solder balls) that are macroscopic, so they can be soldered and handled without too much risk of accidental damage. Additionally, PCB tracks can only go so small (and so close together) without undesirable electrical effects and again, an inability to work with it in a production environment.
The "more bits" phenomenon has been sustained by improvements in VLSI and the advent of true System-on-a-chip design, but this too has its limits. If you compare a P4 motherboard with, say, a 386 mobo circa 1995, you'll see the chip count is drastically reduced. But fewer interconnected components means less repairability, upgradability, and interoperability. My old 486 had a VLB EIDE hard disk controller, which I swapped in after the last one failed. If my controller failed today, I couldn't do that; I'd either need to buy a new mobo or start replacing chips on the old one (which is just as expensive).
Don't get me wrong - I'm all for progress! And I expect we'll see more and more 64/128-bit chips springing up inside custom devices (e.g. 3D cards, routers) where the local interconnect can be made as fat as necessary. But the PC will remain shackled by slow frontside busses for a while yet, I reckon.
Re:Shrinkage (Score:2)
Perhaps your 486 MB was the first of its kind, but modern motherboards with integrated devices have the ability to disable them so they can be replaced by cards in slots.
This all stems from the fact that those 'chips' that are taking ever more responsibility are trashable. I remember watching an old movie in gradeschool about the development of computers (this would've been in the 80s). A man recalled an interview where the reporter kept asking what sort of tiny tools the guy would use to go in and fix a part of the circuit (the reporter's mind was forever stuck with tubes). Eventually, the guy got through to him that the chip wouldn't be repaired, just replaced.
Thus, the chip count may be reduced, implying more complex chips, but they're not necessarily more expensive. On the other hand, they've become so cheap that it's more cost-effective to bundle the functions of multiple past chips into a single chip.
But still, regarding your bus argument, there have been numerous articles all over the web about newer bus standards competing to be the future industry standard. Those buses will get big right when these chips do.
Re:Shrinkage (Score:5, Interesting)
> but modern motherboards with integrated devices
> have the ability to disable them so that can be
> replaced by cards in slots.
True, but that presupposes the existence of spare slots.
I hear what you're saying about trashable chips, but I think the real phenomenon is the "trashable board". Think about it - if your mobo dies and your warranty has run out, you go buy a replacement and ditch the old board. If it happens still to be under its manufacturer's warranty, most likely you just take it back to the shop and swap it for a working one. What happens to the old one? Most likely, they throw it away. The cost of postage, packing, an engineer's time to find the problem, repairs, parts... it's more than the damn thing retails for anyway.
I think this is missing the point anyway. The integration idea goes like this: with today's technology, you could put the equivalent of an early Pentium processor, plus hard disk and graphics controllers, BIOS chipset, etc. onto a single piece of silicon. Pretty much all you'd be left with off-chip would be (a) RAM and (b) I/O circuitry, because they're both harder to integrate. So your computer is about four or five chips. This is approximately the case in palm-tops now.
The point is that you've lost all ability to choose your own components. That graphics block/macrocell has probably been chosen by the manufacturer because it was the best value for money (i.e. the cheapest they could find). If you're lucky, they will give you expansion ports so you can plug your own stuff in. But that costs money, and if they think you'll pay for the lesser product then they'll make that instead.
Does it matter? Probably not to the average user. But I think it would matter to the industry. The whole point of having standard architectures like PCI, SCSI, EIDE (and before them, ISA et al.) is that many vendors can compete to produce compatible products, which drives innovation and generally provides a good deal for the consumer.
But if the minimisation continues and the busses become subsumed into the very chips themselves, then the chances are the manufacturers will cut corners. They won't wait for the not-quite-standard-yet SuperBus2005 architecture... they'll design their own and make you buy their proprietary upgrades. Again, the economics work out such that you the consumer probably get a good deal. But trading off good deals today against innovation tomorrow is dangerous.
So, it would be much better to keep all those busses outside the individual components, right? But that's exactly what is keeping the PC architecture slow at the moment (which was the point of my previous post. I think.).
I could go on and on... <looks up> oh, wait...
Re:Shrinkage (Score:2)
PCI devices or PCI busses? Even the original old PCI buses support 64bit transfers via multiplexing (2 32bit transfers). So the bandwidth essentially remained the same, but usage as a "64 bit bus" was supported.
However, just because a CPU can process at 64 bits does not mean it must communicate at 64 bits outside the CPU. 64 bit CPU's do often support smaller word transfers.
It is true that most PCI devices are not true 64bit PCI, but that is mainly due to there being no need for the bandwidth that 64bit PCI affords.
If the bandwidth of 32bit at 33MHz (132MB/s) is not enough for your device to operate at its fullest potential, then it is probably available as a true 64bit PCI device for a 64bit 66MHz PCI (528MB/s) slot, found in servers.
Realise that the IDE bus that may well be used in your computer is only 16 bits wide. A 64bit CPU most certainly does not require 64 bits here, there and everywhere.
My old 486 had a VLB EIDE hard disk controller, which I swapped in after the last one failed. If my controller failed today, I couldn't do that; I'd either need to buy a new mobo or start replacing chips on the old one (which is just as expensive).
Not true, I've yet to see a mobo that would not allow the disabling of its onboard VGA, IDE, SCSI, SERIAL, PARALLEL, USB, etc. Adding a card to replace a busted and disabled onboard device usually works.
The real value of a 64bit CPU over a 32bit CPU is in the ability to compute more data at once, to handle higher precision or larger number data much faster, and possibly also to address way more data if a 32bit address bus is being compared with a 64bit address bus. A 64bit address bus can access 4,294,967,296 *times* more data than a 32bit bus.
Is it really a benefit 32 vs 64? (Score:1)
Re:Is it really a benefit 32 vs 64? (Score:2, Insightful)
Re:Is it really a benefit 32 vs 64? (Score:2)
When you have more than 2000 processes or threads on a box then you will see the difference that 64 bits makes.
Jeff
Re:Is it really a benefit 32 vs 64? (Score:2)
It's possible to reduce the reserved stack, but only for all the threads in a process. We switched to using only a few threads & assigning jobs to them.
Death of Alpha, Long Live AMD (Score:2)
With Digital being sold to Compaq and then Alpha being sold to Intel and Compaq possibly merging with HP, the future there is clouded. I have been working with Alphas and have been told that the future is Itanium coloured, but sorry, I don't really like the chip. EV7 will come out, but so far its performance doesn't look so competitive.
With a lot of former Digital talent working at AMD, I think this will be the better option. However, the K8 is not a clean design; it seems to be a 64-bit version of the K7 with some extras in the pipelining. I guess that the chip is not going to be the easiest to get the best performance from.
Re:Death of Alpha, Long Live AMD (Score:2)
Huh????? EV7 will almost certainly be the fastest MPU available at the time of its launch (by this I mean highest scores in SPEC2k int and fp), even ahead of the extremely expensive POWER4 (which sort of "cheats" on the single-threaded SPEC2k because then the one active core on the device gets the entire 128MB of shared L3 cache to itself).
Of course Compaq's support is questionable, the upgrade path is zero, and there's no telling how quickly they'll get the high-end 32 and 64-way boxes out, but in terms of plain old CPU performance the EV7 is going to be the chip to beat. (BTW, sounds like you didn't read the article if you didn't get this point.)
Re:Death of Alpha, Long Live AMD (Score:2)
I like the Alpha though and have been using it since it first appeared. I will be very sorry to see it go.
/.'ed already? (Score:1)
Error Diagnostic Information
An error occurred while attempting to establish a connection to the service.
The most likely cause of this problem is that the service is not currently running. You can use the 'Services' Control Panel to verify that the service is running and to restart it if necessary.
Windows NT error number 2 occurred.
Intel learning from their mistakes (Score:5, Insightful)
The same can't be said of AMD's offering, although in fairness the Hammer is not directed at the server market unlike the McKinley. The pipeline is longer than both their previous design and the McKinley, which is going to give them a performance hit. We can only hope that their cache is as good as Intel's.
What amazes me is that they can still keep adding instruction extensions without too much of a performance hit. Anyone looked at the latest instruction set documentation for these processors? Eugh! The pain of backwards compatibility...
Re:Intel learning from their mistakes (using HP!) (Score:2)
As far as I know, Merced is HP's design. McKinley is Intel's. So... you could say Intel is learning from their mistakes by letting HP engineers do a good job.
Anyway, it's a mutually beneficial thing because HP doesn't have the resources to market and drive the product, while Intel doesn't have the engineers or resources to design and implement the architecture in a 'good' way. Intel provides process and HP provides layout, and together they will take over the world!
At least that's what I've heard and read. Myself, I own a G4 and use a Mac, it's not exactly as if Itanium is going to strike me down anytime soon.
Re:Intel learning from their mistakes (Score:4, Informative)
Hammer does not have a 3MB L3, but it has an integrated memory controller that would drastically reduce the latency of cache misses.
Assuming AMD will go for bigger than 32 KB L1 caches, and will not succeed in making cache hits as fast as McKinley's (speculation based on current offerings), the picture is a bit complicated:
Watch it: Hammer and McKinley each ask for an instruction or piece of data. If both hit, McKinley wins, but a more probable scenario is that McKinley misses and Hammer hits - a clear win for Hammer - and a still more probable scenario is that both miss. If the data is in L2, McKinley is faster; it has a lower miss penalty and can fetch from L2 faster. But it is more probable that the data is in Hammer's cache and not in McKinley's, which would benefit Hammer. If L2 misses too but McKinley scores an L3 hit, McKinley wins; if it suffers an L3 miss, it has to pay both the L3 miss latency and the memory latency, while Hammer suffers no L3 miss latency and its memory latency is probably much lower. So with huge data processed in not-so-tight loops Hammer wins hands down, while for medium sized data that fits into L3 McKinley wins hands down.
Although McKinley is a server product and Hammer is not (or so it is said), an integrated memory controller benefits Hammer in multiway systems so much that it may as well be positioned as a server product. No more asking the chipset to fetch a piece of data and waiting until the chipset serves other processors' requests; just go and grab it!
Finally, some of the Hammer line will have L3 caches, and the Hammer line will have a higher clock rate than McKinley. If AMD can deliver what they have promised, they have a clear winner overall. But I'm still a bit sceptical.
Re:Intel learning from their mistakes (Score:2)
The big question is whether the compilers for IA-64 will be any good or not... that's what caused Intel's last attempt to divorce from x86 to fail, and their back-up plan (the 386, the 32-bit extension of the 16-bit x86 ISA) to succeed and become the most popular desktop microprocessor ever.
This time around, Intel doesn't have a backup plan, and AMD is the one doing the extension of a tried and true system.
Conclusion? I put my money on AMD, but if Intel can pull off the compiler, they dramatically increase their chances.
AMD is deceiving you (Score:3, Insightful)
IA64 is very different from x86-64. AMD's 64bit solution is nothing more than an extension to the current 32bit instruction set. Of course there are some tweaks, but nothing very radical. You will still be able to run old 16 and 8bit code efficiently.
Intel's IA64 is a huge step into the future... architecture wise it is far superior to x86-64. Why?
Why do we need 64bit processors? Addressing? Nah, current processors can address enough space.. with 386 processors FAR addressing was introduced, which expanded allocatable address space drastically. (those silly DS, SS,
AMD's 64bit solution currently has no real value.. except for huge data storage (could work faster with 64bit data blocks) and probably some heavy encryption. x86-64 compiled Quake3 would make minimum use of 64bit registers.. and would probably be just a margin faster than IA32 compiled version.
Is IA64 better? Yes it is. IA64 has 128 usable 64bit registers, predicates... But that is not all. In a single 64bit register you can store 4 16bit values (common integers), or 8 8bit or 2 32bit values, and manipulate them almost as much as you like. And if you have 4 integers in another register, you can do 4 arithmetical operations with a SINGLE instruction. You can do similar things with floating point operations... and with ILP you could do 3 instructions per cycle. This means that Quake2's VectorAdd/Subtract could be done in a SINGLE cycle.
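What is being described here is often called SIMD-within-a-register. A rough, portable C illustration of the packing idea, not IA64 code (real SIMD hardware does this in one instruction rather than with the masking dance below):

    #include <inttypes.h>
    #include <stdio.h>

    /* Add four packed 16-bit lanes with one 64-bit operation.
       Clear the top bit of each lane, add, then XOR the top bits
       back in; carries cannot cross lane boundaries, so each lane
       wraps modulo 2^16 just like a real SIMD add. */
    static uint64_t add4x16(uint64_t x, uint64_t y)
    {
        const uint64_t m = 0x7FFF7FFF7FFF7FFFULL;
        return ((x & m) + (y & m)) ^ ((x ^ y) & ~m);
    }

    int main(void)
    {
        uint64_t x = 0x0001000200030004ULL;   /* lanes 1, 2, 3, 4     */
        uint64_t y = 0x0010002000300040ULL;   /* lanes 16, 32, 48, 64 */
        printf("%016" PRIx64 "\n", add4x16(x, y));
        /* prints 0011002200330044: lanes 17, 34, 51, 68 */
        return 0;
    }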
Clawhammer will be better for a year or so.. but soon it will hit the ceiling. Intel will be able to get better performance from 1/2 clocked IA64.
And please don't respond with lame comments if you haven't read at least whitepapers from Intel and AMD.
Re:AMD is deceiving you (Score:1)
From your description, it seems to me like IA64 is more MMX/SSE registers and an expanded instruction set, which is the same incremental change we've been getting from each version of Intel chips for a while now.
--
Benjamin Coates
Re:AMD is deceiving you (Score:1)
You can download the IA64 instruction set manual from Intel's website... any ASM programmer will be fascinated with the options IA64 offers. I know, I was.
Re:AMD is deceiving you (Score:2)
Compilers are nowhere near dealing with all the toys in IA64, so you're left with programs that run slower on IA64 than on existing Sparc, Alpha and x86 machines.
Re:AMD is deceiving you (Score:1)
This probably qualifies as a "lame comment", but Alpha is arguably a cleaner and more elegant design. If you NEED a 64-bit server then Debian on Alpha will probably serve you well.
You probably don't though.
So why the fashion for 64bit x86-alikes? It's not like you'd want to run Windows on a serious server anyway.
Re:AMD is deceiving you (Score:3, Interesting)
Software pipelining and parallel instructions give you a really complex monster CPU. Languages like C and C++ make it extremely hard to optimize for a CPU like this, since C and C++ were never designed for fine-grained parallelism and software pipelining. So the result is a lot of wasted clock cycles.
What I'd LOVE to see is a statically typed pure functional language that could be used to generate the code for IA64. Then it would be feasible to fully take advantage of the IA64's features!
In the meantime, people compiling IA64 C code with GCC will be extremely disappointed. People compiling IA64 C code with Intel's optimizing compiler will be happier but will only be mildly impressed.
I've worked with VLIW (256 bit instructions) software pipelined DSP's before and learned very quickly that the C and C++ language standards are fundamentally limited for these things. I also learned very quickly that writing assembly language directly for them is an easy way to gain a special invitation to a padded room!!! I shudder to think what the compiler writers have to go through!
--jeff
Re:AMD is deceiving you (Score:1)
Re:AMD is deceiving you (Score:2)
jeff
Re:AMD is deceiving you (Score:2)
The only real benefit I can see from an industry perspective is that it will drive down the price of high end systems for corporations. Intel desperately wants to get into the mainframe, research and scientific market, since the margins are much higher. As others have noted, IA64 isn't going to really revolutionize particle physics, astrophysics, realtime weather simulation or any other research requiring massive bandwidth and address space. It might make it easier for smaller universities to build faster, better, and slightly cheaper clusters in 2 years, but for now who cares if AMD's extensions have a limit.
Really, if we want faster 3D graphics, we need faster buses, memory and GPUs.
probably get modded redundant, but who cares.
AMD will win (Score:2, Insightful)
Firstly, it would appear that you have at least read some white papers on the web, but have you used a real Itanium?
I have, and let me say that the reason AMD will win is compatibility and making sure that the things don't sound like a jumbo jet taking off. These may seem like minor points but they are what will count.
The major point of the AMD solution is backward compatibility... Intel knows this; why do you think that all their previous chips have been successful? Because they were the "best" solution for the job? NO, not by a long shot. They were, however, able to run the old software faster, and provide a route for new software to run even faster still.
AMD provides this solution, new software can take advantage of the 64 bit addressing and processing several integers in just one register. But at the same time old software can run on the system faster than it would on the same clock speed 32 bit processor.
Look at it from an IT purchasing point of view: you can pick machine A, which will theoretically be faster in the future, but to get anything even approaching decent performance you need to buy a whole load of new software. Or you can pick machine B, which will run all your current software faster than your current machines, and run future software even faster still. Which would you choose? People like a sure thing, not the promise of something good in the future.
AMD can then concentrate on moving towards a pure 64 bit machine once most of the applications have moved to 64 bit, this makes the most sense long term. You buy your 64 bit machine, run 32 and 64 bit mixed software quickly. Then once you are running mostly 64 bit you can move seamlessly to a 64 bit proven and tested environment.
Current Itaniums are slow, large and noisy; this makes a huge difference if you only have a small server room, or have to run a server under your desk - some people really have to do this in smaller companies. You won't see them on the desktop market anytime soon; they are too slow, and 32 bit performance is not great.
The Itanium might have a good architecture, but I think that its lack of speed, its poor compatibility with 32 bit applications, and its noise and heat will cause it to lose the battle.
Re:AMD is deceiving you (Score:2)
And if all else fails, SSE-3 and MMX-2 can provide your lovely 128 visible registers.
Re:AMD is deceiving you (Score:5, Informative)
AMD has stated *explicitly* that the Hammer is an evolutionary rather than revolutionary design. They've said all along that it is an Athlon with 64-bit extensions and some minor tweaks (SSE2, extended pipeline). They haven't deceived anyone.
Now, as to the relative performance of the two architectures (x86-64 vs. IA-64): the Athlon XP 1900+ achieves a SpecInt2000 score of 701 (peak) while the 800MHz Itanium manages... 314. On floating point the Itanium does rather better: 645 vs. 634 for the Athlon. (The current leader is the IBM Power4, which gets 814 SpecInt and 1169 SpecFP.)
Having 128 64-bit registers is good, but remember that the Athlon and Hammer have far more physical registers than are presented in the programming model, and automatically map them according to the requirements of instructions in the pipeline. And the predicates and wide issue of the Itanium are balanced against the ability of the Athlon to *automatically* issue instructions speculatively and re-order the instruction queue to improve ILP.
And on the subject of manipulating multiple values with a single instruction: ever heard of MMX? 3DNow? SSE? Athlon has all of these, and Hammer will add SSE2. What do you think these are for?
As to the value of 64-bit addressing: I've programmed for machines (Suns and Compaq Alphas) with as much as 64GB of memory. While you *can* address that much with a 32-bit CPU, it means that you have to constantly re-map your view of memory, which is a royal pain. Moving to 64 bit addressing makes the problem disappear. And with current memory prices, even small commodity servers could make good use of more than 4GB of memory.
And 64-bit integer registers are good for a lot of things, and while you can certainly use 64-bit integers on a 32-bit CPU, making them faster won't hurt.
So, Athlon currently has a huge performance advantage over Itanium on integer apps, and a huge price/performance advantage (with comparable absolute performance) on FP apps. AMD's aim with Hammer is to extend Athlon cheaply and effectively into the 64-bit realm.
Intel's aim with Itanium appears to be to crush all competition; unfortunately, they've placed a *huge* bet on improvements in compiler technology that just hasn't paid off yet, resulting in a high-end chip that lags behind not just the high-end RISC chips like Alpha and Power, but low-cost desktop chips. To achieve commercial success, the Itanium needs integer performance somewhere in the vicinity of their competitors, but they currently trail the pack by a huge margin. Even SGI do better, and they all but shut down their CPU design efforts years ago.
Maybe McKinley will be the answer - but it doesn't look like it, given that the promised speeds have dropped to 1GHz. IA-64 is an interesting architecture which may even have a future, but so far it just don't fly.
Re:AMD is deceiving you (Score:4, Insightful)
Why do we need 64bit processors? Addressing? Nah, current processors can address enough space.. with 386 processors FAR addressing was introduced, which expanded allocatable address space drastically. (those silly DS, SS,
Sheesh, where are you coming from? You can address 64 Gig of physical memory with an x86 now, but you can only address 4 Gig (at most!) of it linearly. 32 bit address registers, get it? Gosh, and far addressing was introduced with 386's was it? Give me a break, try 8086's.
AMD's 64bit solution currently has no real value.. except for huge data storage (could work faster with 64bit data blocks) and probably some heavy encryption. x86-64 compiled Quake3 would make minimum use of 64bit registers.. and would probably be just a margin faster than IA32 compiled version.
Right, and I'm supposed to believe you on this, given your performance above. Um, you seem to have ignored the value of being able to crunch 8 byte integers, or pixels 8 bytes at a time, nicely matching the width of the MMX registers. For starters. Repeat this to yourself: "sledge hammer". "sledge hammer". Good, that's more like it.
Is IA64 better? Yes it is. IA64 has 128 usable 64bit registers, predicates... But that is not all.. in single 64bit register you can store 4 16bit values(common integer). (or 8 8bit or 2 32bit)
Um, and guess how many 16 bit values you can store in a 64 bit sledgehammer register? Ah, and guess how many fp/mmx instructions sledge can retire per cycle?
Clawhammer will be better for a year or so.. but soon it will hit the ceiling. Intel will be able to get better performance from 1/2 clocked IA64.
You don't have any idea why it's called itanic, do you? Moderators, take a look above. Remember, that's what 'random' looks like. Yes, I've got mod points right now. No, I won't waste them on you.
Re:AMD is deceiving you (Score:2)
Um, and guess how many 16 bit values you can store in a 64 bit sledgehammer register? Ah, and guess how many fp/mmx instructions sledge can retire per cycle?
If I understood the whitepaper, the answer is 2. You can also store 2 8 bit values in a 64 bit hammer register. I know the math doesn't hold, but the ISA does; you can't access an arbitrary byte, word or dword section of hammer registers. Of course the number you can just store is 64/r_size of r_sized values, but accessing them (except the lower two) requires rotate or swap operations.
Re:AMD is deceiving you (Score:2)
I'm just writing to say that 8 + 4 = 12
FAR addressing. (Score:3, Insightful)
Far addressing can handle 4GB, but 4GB is not much by today's standards (it is not little either). You want a flat address space, and RAM is cheap these days. 32 bits gets you to 4GB; if you want more you have to resort to tricks (those silly DS, SS, etc).
With the first 64 bit Alphas they used this as an argument: it is useful for fast memory scans when using big databases.
--640 Kb will be enough....
Now we can wait for software support... (Score:5, Interesting)
Cases in point:
Silicon Graphics machines with MIPS R4400 (and up) CPUs were 64-bit, but the additional address and pointer space weren't utilized until IRIX 6.0 in 1994 -- over 18 months later. (And, of course, certain SGIs still run in 32-bit mode due to RAM concerns -- 64-bit requires more RAM -- all Indys, all Indigos, all O2s, and R4400 Indigo2s).
Sun machines with UltraSPARC CPUs were 64-bit, but again, the additional address and pointer space had to wait for software support. (Multi-stage transition to 64-bit, starting with Solaris 2.5 and finally complete with Solaris 7 in 1998).
Then there's application optimization. Many apps can get slight speedups by processing data in larger (say, 32-bit or even 64-bit) chunks. Sometimes the difference is huge, many times it's small. But lots of little speedups can add up across an entire system. Still, someone has to make these changes to apps and compilers. It takes time, testing, and adoption. In better times, SGI did several such overhauls... they got some insane speed out of Netscape Enterprise and Netscape FastTrack web servers during the Everest project. One of their engineers also did some cool (but nonstandard) hacks to Apache, including the very first pure, clean 64-bit port/mod.
Newer, faster, wider, more-torque hardware is always great. But don't forget the software.
Re:Now we can wait for software support... (Score:4, Informative)
Dunstan
Re:Now we can wait for software support... (Score:2)
Linux already runs fine on itanic, oops I mean itanium. Linux runs on AMD hammer even before it's out [x86-64.org].
64-bit is more than speed (Score:2)
32-bit CPUs use a 32-bit address space. That's space enough to address 2^32 bytes, or 4GB. With today's 100+GB hard drives and fractional to low GB RAM capacities, each requires its own addressing.
However, with 64 bits of address space, you have enough room to memory-map your entire stinking (yes, stinkin') hard drive into virtual RAM address space. This means your virtual address space would represent both RAM and your file system together.
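On a 64-bit machine that idea is directly expressible with plain mmap. A minimal POSIX C sketch, with a made-up file path, that maps a whole, possibly larger-than-4GB, file into memory - something a 32-bit process simply has no room to do:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map an entire (possibly > 4 GB) file into the virtual address
       space. With 32-bit pointers this fails for big files; with
       64-bit pointers the whole file fits with room to spare.
       The path is hypothetical. */
    int main(void)
    {
        int fd = open("/data/huge.db", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* the file is now ordinary memory: no seeks, no read() calls */
        long count = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == 0) count++;
        printf("%ld zero bytes\n", count);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }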
Re:64-bit is more than speed (Score:3, Insightful)
--
Benjamin Coates
Virtual memory filesystem breaks "revert to saved" (Score:2)
You need sector-sized granularity for tracking changed sectors. The latter does not scale well.
The common implementations of virtual memory use page-sized granularity for tracking changed pages. Does that scale?
The big problem I've seen with using a single memory space is that applications often forget to implement multiple levels of undo. With many PDA applications, once you make two accidental changes to a file, the previous version is gone forever because many applications modify files in-place, breaking the "revert to last saved version of document" feature. This bit me in the butt several times on Newton OS.
Re:64-bit is more than speed (Score:1)
What's the point? (Score:1)
64 bits on a desktop = ?
I'm sorta missing why this is good for the rest of us...
Re:What's the point? (Score:2, Interesting)
MS need all that extra space for their forthcoming operating systems, which will be more bloated than ever.
Linux will never be able to bloat as fast as MS, so MS will finally and inevitably win!
All your BSOD are belong to us!
Chips, maybe, but applications? (Score:3, Informative)
Actually, even that's not strictly true, since according to the Resource Kit documentation Windows XP's initial configuration detection is *still* 16 bit.
Re:Chips, maybe, but applications? (Score:2)
Re:Chips, maybe, but applications? (Score:2)
Getting techy for a minute, a PC's BIOS transfers control via a disk's boot sector to the boot loader in real mode, the boot loader (GRUB, LILO, NTLDR etc.) then loads the OS proper. Now, technically a boot loader could switch to protected (32 bit) mode before loading a fully 32 bit OS, but so far all mainstream PC OSs (yes, Linux and BSD too) run some initial boot code in real mode before making the switch to protected mode. Some make the switch sooner than others and I'm sure some of the experimental OS's out there make the switch immediately they gain control from the boot sector.
The main point though, was that for the 32 bit Windows platform (boot stubs aside) the process of Hardware support -> OS support -> decent app support has taken the best part of a decade. If you think the switch from 32 bit to 64 bit is going to happen much quicker, then you are probably going to be disappointed.
Re:Chips, maybe, but applications? (Score:2)
Bill Gates (Score:2, Funny)
He will finally be able to store his net worth in dollars in a single long int.
I'm tired of x86-centric articles... (Score:2)
Which is later followed by:
Hmmm... 3MB is astounding, but 8MB is unremarkable... Well, I'd have to agree. I haven't bought a server with less than 4MB of cache in years. Oops, the SunBlade is only a workstation... Kinda makes you wonder.
Sun might be expensive, but it's solid, fast (enough), and predictable. I love x86 (usually Linux) at home, but wouldn't dream of putting it someplace business vital - much less mission critical.
Business Vital == 1 maintenance window per month, and a mean time to recover exceeding 6 hours potentially costs several million dollars.
Mission Critical == 1 maintenance window per quarter, and a mean time to recover exceeding 15 minutes potentially costs several million dollars.
Re:I'm tired of x86-centric articles... (Score:2)
Is it worth spending an extra 3-4K per machine to make sure you don't lose a couple million? I don't think math is needed to figure that one out. Plus, do you want to be the one responsible for it when it goes down? How much would a fully redundant system cost to build with PC components equal to Solaris 1K+ servers like the 4500?
I for one would never put a mission critical database or transaction application on a Linux PC. Not because Linux isn't a good operating system, but because PC components are designed to be thrown away in 1-2 years.
But where is Windows for x86-64? (Score:2)
Win64 has been ported to Itanium for some time now. We've already ported our memory-hungry special-FX app to it. But few people outside the server space are going to be interested in getting an Itanium because the performance with legacy IA32 apps is dog-slow. I mean really slow, like P90 speeds. So we don't expect too many sales of that version, just a few for hardcore dedicated seats.
Sledgehammer is really interesting to us. Combining the best available x86-32 performance for running 3DS Max, Lightwave, Photoshop etc etc with the serious memory & address space of a 64 bit CPU for our app, not to mention quite a bit more speed when doing 64 bit calculations (pretty common with 64 bit pixels), makes for a powerful and still flexible beast.
But without Windows support, the IA32 performance advantage is largely meaningless. In our market, that relegates Hammer to Linux-64 render farms - which is fine, but it's not where our money is, and it's not where the CPU would shine. You can use Win32 or Linux-32, of course (unlike Itanium), but that's kinda missing the point.
AMD better get MS & Win64 on their side soon, if they want to capture the workstation market. A lot of server apps still require Windows too. The reality of the market is that mainstream OS support is required, or you get niched PDQ.
But I *already* have a 128-bit computer (Score:2)
Why 128-bit? Because a 4-tuple (x,y,z,w) for vector and matrix operations can then be natively done. (Yes, I'm spoiled with the VUs on the PS2)
I do wonder when 64-bit cpu's will actually become a commodity item though. A 32-bit cpu provides 99% functionality for most of the general public using them. It's only gaming, scientific computing, & multimedia that really need the 128-bit registers, correct? Or am I missing something?
Secondly, for 64-bit cpus, is there a standard instruction set? Or do I need to compile our game code specifically for IA64, and Hammer?
I do agree, that a 64-bit address would be a welcome change. I can imagine the Database guys jumping up and down with joy once cheap PC hardware supports 64-bit.
Re:But I *already* have a 128-bit computer (Score:2)
Unfortunately true, the early consoles would play marketing games like this.
> By that standard even the Pentiums were 64bit.
The (classic) Pentium is classified as 32-bit because the *general purpose* CPU registers are only 32 bits. (There are a few 64-bit registers, i.e. TimeStampCounter, etc)
With MMX/SSE, the PentiumIII is actually a 32-bit / 128-bit hybrid. It has *native* instructions and registers for *both* 32 and 128-bit processing.
http://x86.ddj.com/articles/sse_pt2/simd2.htm [ddj.com]
> However both the PS2 and PentiumIII are really 32bit.
Incorrect.
Pentium) I explained this above.
PS2) Do you even program on a PS2??
I think you need to re-read your "EE User's Manual", "EE Overview Manual", and "EE Core User's Manual" (Section 1.4) The core internal bus is 128 bits, and *ALL* the General Purpose Registers (total of 32) are 128 bits. What do you think LQ and SQ do? They load/store 128-bits to/from a register!
Now, it is true, that most PS2 instructions only deal with 32-bit (word) and 64-bits (doublewords), but there are native 128-bit multimedia instructions.
Don't let the fact that the PS2 treats the 128-bit registers as 2 * 64-bits, or 4 * 32-bits confuse you.
Technically the PS2 is a 64-bit/128-bit hybrid, much the same way the PentiumIII is.
Cheers
Re:So why do I need 64bits? (Score:1, Insightful)
Re:So why do I need 64bits? (Score:5, Interesting)
One word: addressing. With those 32 bits, you can typically address up to 2 gig files on your machine - which is a limit easily encountered when you start working with video, for instance.
It took hacks to get 4 gig of RAM working on x86 with the linux kernel.
Go 64 bit, and that limit vanishes. You keep your linear addressing, none of those ugly segments like in the infamous real-mode of PC-XT times.
I don't see what's really new about it all though, we've had 64 bit since Alpha, and there's several 64 bit architectures around. It may not be mainstream yet, but will IA 64 or Hammer really change that (soon)? Allow me to have doubts.
Re:So why do I need 64bits? (Score:1, Informative)
Linux does some hacks to work around this...
But things will be made easier with 64 bits.
For example, 32 bit Intel x86 Linux can support up to 8 GB of memory.
This is a hack, and is not native to x86.
Re:So why do I need 64bits? (Score:2, Informative)
With 32 bits you can address 4 gigabytes worth of addresses:
00000000000000000000000000000000 is one address
00000000000000000000000000000001 is another
00000000000000000000000000000010 is another, and so on.
With 64 bits you can address up to 16 exabytes of RAM, which is equal to 16384 petabytes or 17179869184 gigabytes. It shouldn't be too long before some program out there requires most of that to run even though now it seems like infinite RAM.
Re:So why do I need 64bits? (Score:2)
I surely would call MMX an ugly hack. Not because one needs to use normal registers to access data, but because MMX uses FPU registers. Hello? To use instructions designed for 64bit integer calculations, you need to disable the FPU? And remember, this was because OSes couldn't have supported task switching without changes if there hadn't been a hack like this. MMX is useful for such special cases that practically no compiler generates MMX code - it's always hand-tuned assembler.
Re:So why do I need 64bits? (Score:2)
And, the physical address is LARGER than 32 bits. It is 36 bits on Wmt. The physical bus on the processors has 36 bits for the address (well, actually 33 bits, since all addresses are chunk aligned, but that's an implementation detail).
FYI, on x86-64, the maximum linear and virtual addresses are 2^48, and the maximum physical address is 2^40.
Re:So why do I need 64bits? (Score:2)
Chapter 3 of volume 3 of the current IA-32 manual is much more clear on this.
The easiest way to look at this is through paging, which clearly has a 32 bit size.
I'd be curious of a more complete reference for that citation (URL?)
Re:So why do I need 64bits? (Score:3, Insightful)
With most operations, 64 bits isn't 2x as fast; it's 1x as fast, unless you deal with the stack, in which case it could be even slower.
Addressing has little to do with word size. The 8088 shows that.
Suns running in 64 bit mode are often slower than running in 32 bit mode.
Nintendo 64 games are all 32 bit code with just a few 64 bit operations. The good emulator proved that.
As far as doing two 32 bit ops at once, I still don't need a 64 bit data path to do that; I just need several 32 bit data paths. What I don't need is to dump a bunch of unused 64 bit numbers on the stack every time an exception happens (which one of my computers has done about 1047563950 times in the last 51 days).
Re:PCWORLD Link (Score:2, Informative)
Re:Why is gcc produced code so slow? (Score:2, Insightful)
Compilers are notoriously slow at catching up with the latest processor design, and you can probably expect gcc to catch up with the P4 around the time it's superseded by the 64-bit babies.
This is not to slur gcc - M$'s Visual Studio compiler suite hasn't yet been optimised for the P4 as far as I know (although I expect the
Re:Why is gcc produced code so slow? (Score:2)
Moderators on crack? (Score:2)
With the currently popular 32-bit CPU chips, Robot AI memory limitations are too severe because a memory of 2^32 size is not enough.
Ah, an attempt to be somewhat on topic. However, I don't buy it - how much memory is enough - do you know or are you blowing smoke? And seeing as few machines have this much RAM (2^32 = 4 GB), don't they use disk swap files or databases anyway? There are many file systems that can handle files this size already, so how exactly will 64bit processors suddenly enable AI in VB that can't be done at present?
An increase in computer power is a rising tide that lifts all boats, even crank AI, but how exactly is the move to 64 bits a sudden huge leap for your Javascript "mind"?
Crackpot spouts buzzword, film at 11 (Score:1)
--
Benjamin Coates
Re:64-bit Computing: Looking Forward to 64-bit AI (Score:1)
(But, well, yeah I guess he is.)