Understanding Bandwidth and Latency 160
M. Woodrow, Jr. writes "Ars has a very eye-opening article on the real causes of bandwidth latency and why we shouldn't just drool endlessly over maximum throughput figures. In particular, I think the author's look into the PowerPC 970 and the P4's frontside bus is interesting considering how we're constantly being told by marketers that more speed is always going to translate into massive performance gains. The issue is, of course, far more complex, and this article does a good job of thinking about the problem from an almost platform-agnostic point of view."
Bandwidth (Score:5, Informative)
Re:Bandwidth-questions? (Score:2, Interesting)
Don't some DRAMs have SRAM caches built in?
What about dual-ported RAM?
How about separate buses for C&C and data?
How about putting basic operations (zero out a section of memory) into the RAMs themselves?
Is there any memory ordering by the OS to facilitate BUS filling?
Aren't you getting tired of these questions?
Re:Bandwidth (Score:3, Insightful)
C//
Ultra 320 SCSI (Score:5, Insightful)
So actually, Ultra-320 SCSI is the shit.
Re:Bandwidth - Useless Without Latency (Score:3, Informative)
Re:Bandwidth - Useless Without Latency (Score:2)
Surely you're talking about something different from the article poster, who was referring to the causes of an entirely different (and uncommon) metric: "bandwidth latency".
it's easy ... (Score:4, Funny)
I can't wait... (Score:1, Funny)
Re:I can't wait... (Score:3, Interesting)
Hope this helps.
Re:I can't wait... (Score:1)
Many ISPs have bandwidth caps.
Re:I can't wait... (Score:2, Informative)
Sure that's true, except when it isn't. I've seen a site get
Re:I can't wait... (Score:2)
Re:I can't wait... (Score:2, Informative)
Anyone remember this (Score:3, Funny)
Anyone hear from these guys lately, or at least know a URL, if they haven't been bought out by the telecoms?
Re:Anyone remember this (Score:5, Interesting)
Wired had an article about it around the beginning of the year.
All the sceptics were correct, and eventually the believers let the idea slip out of the collective consciousness, not wanting to have to admit they were totally duped.
Re: your sig (Score:1, Offtopic)
Never put salt in your eyes.
Never put salt in your eyes.
Never put salt in your eyes.
Always put salt in your eyes.
AAAAAAUUUGH!!!!
God I miss kids in the hall. Thanks for the reminder.
Re:Anyone remember this (Score:2, Insightful)
It was pretty clear from the article that the guy was a crook and that there was nothing to his claims. But he got a lot of money from a lot of supposedly smart people.
By the way, what does the claim that "latency everywhere would be under 10ms" mean?
MM
--
Seems familiar... (Score:4, Informative)
Performance tip for software on modern processors (Score:5, Interesting)
Much software is not written to take advantage of the architecture of modern microprocessors. If you rewrite some of your software to take advantage of them, then it is not hard to double your speed.
The problem is that many, if not most programs are not very intelligent in how they access the CPU cache.
It is not uncommon for a CPU to be running at ten times the speed of the memory bus. To keep from starving the CPU, we have caches that run nearer or at the speed of the processor.
There are two problems. One is that the cache is limited in size. The other, less well understood, is that the cache comes in small blocks called "cache lines", which are typically 32 bytes.
So if you have a cache miss at all, or you fill up the cache and have to write a cache line back to memory, your memory bus is going to be occupied for the time it takes to write 32 bytes. The external data bus of the PowerPC is 64 bits (8 bytes) so there will be four memory cycles, during which the processor is essentially stopped.
What can you do to maximize performance? Make better use of the cache. If you use some memory, use it again right away. Use other memory that's right next to it. Avoid placing data values near each other that won't be used near each other in time.
Simply rearranging the order of some items in a struct or class member list may make cache usage more effective.
Also be aware of how your data structures affect the cache. Be aware of data you don't see, like heap block headers and trailers.
Arrays are often more efficient than linked lists, especially if you are going to traverse them all at once, because each item in a linked list will likely be loaded in a different cache line, whereas an array may get several items together in a cache line.
Finally, if you really have a structure that's full of small items that is accessed in a highly random way, consider turning off caching for the memory the data structure occupies. You won't get the benefit of the cache after you've accessed an item, but on the other hand you won't have to wait to fill a 32-byte cache line each time you read a single item.
Imagine a lookup table of bytes that's several hundred k in size, accessed very randomly - you would benefit to not use the cache.
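To make the parent's advice concrete, here's a minimal C++ sketch (the `Particle` struct and `advance` function are made-up illustrations, not from any real codebase): the "hot" fields used every iteration are grouped so they share cache lines, and the data lives in a contiguous array that is traversed sequentially.

```cpp
#include <vector>

// Hypothetical record: the hot fields used in the inner loop are grouped
// at the front so they land in the same cache line; the rarely-read name
// pointer is pushed to the end.
struct Particle {
    float x, y;        // hot: updated every frame
    float vx, vy;      // hot: read every frame
    const char* name;  // cold: only touched when debugging
};

// Sequential traversal over a contiguous vector touches each cache line
// once and uses all of it before moving on.
float advance(std::vector<Particle>& ps, float dt) {
    float sum = 0.0f;
    for (Particle& p : ps) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        sum += p.x;
    }
    return sum;
}
```

A linked list of the same records would likely pay a full cache-line fill per node; the contiguous vector amortizes each fill over several elements.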
Re:Performance tip for software on modern processo (Score:2, Informative)
Why the compiler can't help you (Score:5, Informative)
Also every compiler I have ever come across stores struct and class members in the order they are declared in the source file. I don't think that's guaranteed by either C or C++, but that's how it always is.
Also, the compiler is not going to make fundamental changes to your data structures and algorithms for you. If you write some code to manipulate a linked list, there's no way the compiler will change that to an array for you because it thinks it might be more efficient.
The one case I have seen tools able to affect cache access in a positive way is the use of code profilers that record the most common code paths in your program and then edit the executable binary so that all the less common code paths are towards the end of the file. Thus if you take an uncommon branch, you might jump back and forth a megabyte within a single subroutine.
Apple's MrPlus did that. It was based on an IBM RS-6000 tool whose name I don't recall.
This has the advantage not just of improving cache performance but of reducing paging - a greater percentage of the code pages that are resident in memory are used for something useful, rather than containing code that is mostly jumped over. Uncommonly used code will all be at the end of the file and may never be paged in.
One problem with a tool like this is that the results are only valid for a certain use of the program. If you have a program that can be used in many different ways, it may be difficult to find a test case that helps you.
Re:Why the compiler can't help you (Score:1)
Re:Why the compiler can't help you (Score:2, Informative)
Re:Why the compiler can't help you (Score:1)
No re-ordering classes
Re:Why the compiler can't help you (Score:2)
The preprocessor doesn't know what the layout of the structure is, and it doesn't have to. offsetof() is typically defined in <stddef.h> as something like:
#define offsetof(_T, _M) ((size_t)&((_T*)0)->_M)
which the compiler will evaluate based on the way it actually laid out the structure. But see my comment above.
Re:Why the compiler can't help you (Score:2, Informative)
Also every compiler I have ever come across stores struct and class members in the order they are declared in the source file. I don't think that's guaranteed by either C or C++, [...]
Yes, it is.
Re:Why the compiler can't help you (Score:4, Informative)
That's guaranteed to happen to a group of non-static member variables with no access specifiers among them. So for example in:
class foo
{
public:
    int bar;
    int baz;
private:
    int quux;
};
'baz' is guaranteed to be placed after 'bar' in an instance of class foo; but 'quux' might not be placed after 'baz'.
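That ordering guarantee can be verified at compile time with offsetof(); a small sketch (the struct name `record` is mine) using a plain struct, where all members share one access-specifier group:

```cpp
#include <cstddef>

// A plain struct: every member belongs to a single access-specifier
// group, so the standard guarantees later members live at higher offsets.
struct record {
    int bar;
    int baz;
    int quux;
};

// offsetof() reflects the layout the compiler actually chose, so the
// ordering guarantee can be checked mechanically at compile time.
static_assert(offsetof(record, bar) < offsetof(record, baz), "ordered");
static_assert(offsetof(record, baz) < offsetof(record, quux), "ordered");
```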
Re:Performance tip for software on modern processo (Score:3, Insightful)
When I worked there - we ran the DRG Game lab - which was for getting game developers to optimize their code to take advantage of new instructions etc on the latest processors.
This made the processors look better: any game that we tested that ran better on the processors after having the code optimized was pushed out with a big marketing hoopla, and Intel would say "HEY! come look at our new machines - look how great X software title runs on the latest and greatest"
But the truth is that this was pretty much all fake - rather than testing the software on the exact same boxes that had just two different processors, the tests were done on boxes that had totally different configurations - although we never told anyone about that little detail.
Marketing & Intel VTune Performance Analyzer (Score:1)
I haven't tried, but I would be surprised if VTune ran on an AMD processor.
For the very fastest code, you can take advantage of special instructions, write stuff in assembly with clever use of registers, etc. But the performance gains won't be portable.
Optimizing cache use could be considered a non-portable optimization, but it can be done directly in C or C++, and any processor most people are likely to use will use a cache. There will just be some variations in its size, the size of a cache line and stuff like that.
Re:Performance tip for software on modern processo (Score:1)
If you use some memory, use it again right away
That type of memory is called a 'register' in the CPU. The compiler will perform the optimisation you describe using these 'registers' for you.
What if you don't have that many registers? (Score:1)
Re:Performance tip for software on modern processo (Score:2)
Re:Performance tip for software on modern processo (Score:1)
Please forgive my failing memory, but isn't the functional unit requesting the load notified immediately when the requested word is available from the load/store unit? Unless I am imagining things, I seem to remember this procedure:
Wouldn't it be a silly implementation that forces the load/store unit to wait for the entire cache line to be read before returning the requested word?? In other words, doesn't the memory hierarchy bring the cache lines in "in the background" while the requested data is returned to the load/store unit? And wouldn't this mean that turning the cache off doesn't solve "cache line latency" since it doesn't really exist to begin with?
Re:Yes, you're right (Score:1)
Re:Performance tip for software on modern processo (Score:3, Funny)
The linux kernel guys pay attention to these things and code for them by hand. Hence their badass performance
Re:Performance tip for software on modern processo (Score:3, Informative)
It's worse than you think on PCs (whatever OS they're running). The article talks about "bus mastering" and "data tenure", but on real workstation-class hardware there is no bus (not even one with a "north bridge") - there's a proper switch, like Crossbow or GigaPlane. These give you point-to-point, non-blocking sustained peak I/O. On a switched system, if components A and B want to communicate they can do so at the switch's full speed, and so can components C and D, with no contention at all. That means no wasted cycles for the bus to constantly change ownership.
If you're doing a job that requires heavy use of the "bus" on an x86 system (lots of storage I/O, lots of random memory access, hence lots of L2 misses), then optimizing code for cache locality is the least of your problems; you'll never get around the fact that the inefficient design of the hardware itself is the bottleneck. Fancy FSBs and the like are just workarounds and don't address the real problem.
Re:It can be done better with self-modifying code (Score:2)
and for quiche.
Re:It can be done better with self-modifying code (Score:2)
Re:It can be done better with self-modifying code (Score:2)
Self-modifying code is bad... (Score:4, Informative)
This is obviously a troll, but seeing it's been moderated up I should warn the kiddies out there that this is a *bad idea*...
Self-modifying code makes pipelining, branch prediction, instruction caching (particularly on SMP systems) and a bunch of other things dangerous, and just slows down the processor as it checks for and deals with it. IIRC some architectures don't even explicitly check for it anymore and die horribly if you try it.
Aside from the fact that trying to debug self-modifying code is just asking for fscking trouble....
But self modifying code is fun! (Score:3, Informative)
Used with care, self-modifying code is a powerful and useful tool. And yes there are caching issues - most processors have separate data and code caches, so writing into code using data instructions will put the code into the wrong cache, so you have to flush it.
We couldn't have program loaders without self-modifying code!
A number of the products I wrote for the Mac back at Working Software [working.com] were self-modifying code, and they did very well.
You just have to know what you're doing, that's all.
Another use for them is dynamically relinking a running program as you edit its source code. Instead of relinking and relaunching the whole program, you can just reload the last subroutine that you edited. This is done by a number of development environments, and can greatly speed up the edit-compile-debug cycle.
Re:It can be done better with self-modifying code (Score:5, Informative)
It's going to take more than 1 cycle to keep those lines coherent, which is going to increase your average I-cache latency (and is exactly what you're trying to avoid). You really don't want to do this on modern processors. Besides, if your inner loop is big enough to thrash in your I-cache, you've got bigger problems (pun intended)... and if it's not big enough, you're not going through that slow memory bus, are you?
Bottom line: self-modifying code is a bad idea.
Second bottom line: Modern Java JITs end up doing this sort of thing, which gives computer architects a major headache!
Re:It can be done better with self-modifying code (Score:2)
And how do you think the new code gets into the cache in the first place? Whether the CPU loads it from memory or your program copies it from memory, most of it has to come from memory somewhere. But if you copy it yourself, it just gets there so much more slowly. Also, on many architectures, if you modify code, you have to synchronize the cache, which is very, very expensive.
The closest to self-modifying code these days is runtime code generation, as in JIT compilers. They win not because of cache effects but because they can generate code based on runtime information.
Re:It can be done better with self-modifying code (Score:4, Insightful)
So self-modifying code is rarely important (and of course very hard to write/maintain). Code with dynamic compilation (e.g. jvm) is possible to write in a sane way, and can give potentially large speedups. Of course, this goes for C as well. Sometimes for an inner loop, it's better to write a C-program at runtime, compile it, and load it as a dynamic library instead of having lots of parameters to the function. Of course, that is much more heavyweight than what the JVM does. It would be nice to have a portable alternative. But actually modifying that code afterwards is really hard (and inherently non-portable).
Of course, there are some uses for self-modifying code that can be made quite safe, and simple to understand. E.g. Knuth's MIX uses self-modifying code to store the return address in procedure calls. (I believe that was quite a common thing to do when writing FORTRAN compilers back then...).
On the x86, such tricks are relatively easy, because the x86 tends to almost always have instructions available where you can store a full 32-bit pointer/integer in the opcode (whereas most RISC architectures will not). But you will not get a speed benefit by using it, as explained above in the first paragraph.
Re:It can be done better with self-modifying code (Score:2)
Yes, that's why. But I really think it's still true, and not just true for some ancient pre-fortran.
I guess you could put a self-modifying trampoline on the stack containing the return address, but ... why not just store the address then?
I hope you are the only one who did pose that question. The rest of us would happily just store the return address :-)
Re:It can be done better with self-modifying code (Score:2)
Re:It can be done better with self-modifying code (Score:1)
1: Notoriously hard to debug (I don't know anybody who can write a few hundred lines of asm code without making a mistake)
2: Slow
3: We forgot it for a reason: Intel have branch prediction, which is a GODSEND - modify your code and you lose it for all those conditional jumps
I wrote some self modifying assembly code for a robot controller back in university CS - it was a bastard to debug because I hadn't meant to write self modifying code in the first place
Re:It can be done better with self-modifying code (Score:1)
You also potentially throw away deterministic behavior, which can be an especially bad thing in certain application realms.
Re:I'm no democrat. The demos are corporate whores (Score:2)
The miracle of cache (Score:5, Interesting)
Once upon a time, on mainframes of the 1960s, minicomputers of the 1970s, and desktop computers of the 1980s, there was no cache. Every time the CPU wanted something from memory, it went all the way out to the memory bus (which, in early minis and PCs, was also the peripheral bus). This was OK, because memory latencies were about 1000 ns, and that was reasonably well matched to CPU speeds in the 1 MHz range.
But today, we have 2GHz CPUs. We thus ought to have 0.5ns main memory to match, but what we have is about two orders of magnitude slower. The fact that modern systems are capable of papering over this issue is, when you think about it, a huge achievement. Of course, what really makes it go is that fast, but expensive, memory in the caches.
Virtual memory hasn't done as well over the years. In the 1960s, the fastest drums for paging turned at around 10,000 RPM. Today, the fastest disks for paging turn at around 10,000 RPM. (Bandwidth is way up, but it's RPM that determines latency.) Meanwhile, real main memory has become about 20x faster, and main memory as seen by the CPU at the front of the cache is about 1000x faster. There's nothing cheaper than DRAM but faster than disk to use for a cache, so caching isn't an option. As a result, virtual memory buys you less and less as time goes on. With RAM at $100/GB, it's almost time to kill off paging to disk. Besides, it runs down the battery.
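The drum/disk point is just arithmetic: average rotational latency is half a revolution, so RPM alone pins it down. A tiny sketch (the function name is mine):

```cpp
// Average rotational latency: on average the head waits half a
// revolution for the right sector to come around, so
//   latency_ms = (60 / RPM) / 2 * 1000
// A 10,000 RPM disk thus averages 3 ms -- roughly 6 million cycles on a
// 2 GHz CPU -- which is why paging to disk buys less every year.
double avg_rotational_latency_ms(double rpm) {
    return 60.0 / rpm / 2.0 * 1000.0;
}
```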
Caches old tech (Score:3, Interesting)
Re:The miracle of cache (Score:3, Insightful)
I agree with you except that having a gig or more of RAM won't exactly do wonders for your battery life either.
Fairly Unimpressive (Score:5, Interesting)
Summing up, the article doesn't inform the technical, will confuse the non-technical, doesn't follow any consistent set of example conditions, contains very arbitrary graphs, and is generally poorly written. It is possible that I couldn't do any better (before I get flamed), but I doubt any technical writer worth his/her salt would do much worse.
Re:Fairly Unimpressive (Score:2)
I think the pictures and graphs did their job (he chose those analogies for a reason), but you have to be on the ball.
All in all, a good read for a sysadmin who isn't an electronic engineer.
Re:Fairly Unimpressive (Score:1)
Andrew S. Tanenbaum (Score:5, Funny)
Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
The latency is terrible, though.
Re:Andrew S. Tanenbaum (Score:3, Funny)
hey! look at the bright side (Score:2)
Re:hey! look at the bright side (Score:2)
Wouldn't that be a `smashdotting'? (Score:2)
Re:hey! look at the bright side (Score:2)
I wouldn't worry about it too much (Score:2)
The rules of thumb are pretty much the same now as they ever were: preferentially, access memory sequentially, and for non-sequential accesses, keep the accesses local; there are a bunch of programming tricks for that that work as well now as they ever did. If you can, use a hand-optimized, architecture specific library like BLAS. As a last resort, rewrite tiny bits of performance critical code in a language like Fortran 77, where the compiler may be able to do a bit more optimization than C/C++.
If a processor, compiler, or system architecture requires any more specific hacks to reach its stated performance, then for practical purposes, its performance is overstated. The only way to know is to run your code (or a set of benchmarks similar to your code) on it and see whether it runs fast enough.
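One example of "keep the accesses local": when summing a row-major 2-D array, keep the column index in the inner loop so memory is walked sequentially. A minimal sketch (the function name is illustrative):

```cpp
#include <cstddef>
#include <vector>

// Row-major sum: the inner loop strides by one element, so each cache
// line is fully consumed before the next is fetched. Swapping the two
// loops would stride by `cols` elements and miss on almost every access
// once the matrix outgrows the cache.
double sum_row_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            s += m[i * cols + j];
    return s;
}
```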
This is more important to modern game optimization (Score:5, Interesting)
When I started in the mid 1990's the current thinking about optimization among those who cared was all about reducing cycle counts, and paring instructions for a Pentium. Memory system and bus behavior was mostly ignored or assumed to be rendered irrelevant by on-chip caches.
During this time, while I was working on the graphics core for Age of Empires, I had lunch with Michael Abrash, who was at id software working on Quake at the time. While eating Mexican food, he casually mentioned the results of some memory bandwidth testing he had done and how he was shaping the rasterizer to make use of the time spent waiting on memory writes. This interested me enough to perform similar tests on my own work, and the results were telling.
I wound up with core rendering code that, if you used the conventional cycle-counting wisdom of the time, appeared to be slower than what it replaced... but in fact was faster, especially for various effects processing. Both games had very large hand-written assembly software rendering routines, on the order of 10K+ lines.
The reason for this of course was that memory bandwidth was being maxed out, and with clever restructuring of code it was possible to put the wait time to use on related processing, even if the code appeared to be more awkward and cumbersome that way. Though the exact memory behaviors would vary from system to system, one thing that was true and only got more so was that CPU speed was outstripping memory speed. Games like Quake and Age of Empires would have to process, in what usually amounts to a mutated memory copy, large amounts of textures or sprites each frame; so the data in question was pretty much guaranteed not to be in the CPU caches.
You would think that with the current generation of games using Hardware 3D only, this issue would be reduced to upload speed across the AGP Bus, but if Age of Mythology is any indication, that's not going to happen. In Age of Mythology we were able to make some significant performance gains by using the same techniques of coding to make the most of the slower speed and latency of main memory.
As long as the effort keeps paying off in increased FPS rates, we're going to be coding our games to account for and best deal with the realities of how the CPU relates to and waits on cache and system memories.
A repeat duplicate article? (Score:1)
As in, on Slashdot at least 3 times in a short time.
There are too many issues, and it gets too complex (Score:5, Informative)
For example, a few synchronization commands, and eieio paranoia when not needed in drivers, can slow down IO.
A good PCI-X capable Fibre Channel card on a mac can get 49 microseconds per complete genuine 512 byte IO (over 20,000 IOs per second), and that's per channel, but just a few mistakes in the hardware interrupt handler or misunderstood cache-coherency paranoia can add many microseconds.
Even the fastest direct IDE cannot get speeds that fast (49 microseconds).
And SCSI 320 barely does.
But what about the REAL WORLD? As we all know from the press releases of the RC5 competition, a standard mac g4 laptop was over twice as fast as Pentium 4 desktop units.
In fact, apple only sells dual cpu systems now, and the ones they sold in Feb 2002 got over 21,129,654 RC5 keyrate for dual 1.0 ghz macs.
The fastest AMD boards, dual cpu, no l3 cache available, get only 10,807,034 RC5 keyrate!
half for AMD
way less than that for Pentium 4.
Why? The Pentium 4 lacks a good 32 bit barrel shifter.(4 clock latency on left shift!)
Why is the AMD so slow? Perhaps because it has no L3 cache, but the object code and data set of the RC5 benchmark (get the source yourself) fits in the AMD's L2 cache.
Cold memory random read and write is FASTER on macs than DDR machines, as seen in benchmarks, but this author does hit upon that topic indirectly a little. Even if macs in Feb 2002 were faster than AMD for scattered random read and write, the current 3 desktop macs all use DDR ram now, so they probably lack the speed boost for that action, but do have write aggregation (combined writes) across the pci bus and other tricks.
Macs also have a lot of other little advantages to offset the penalty of huge RISC instructions... a great C-language way of programming the SIMD execution engine (called Altivec by Moto), and its SIMD is very good. Its SIMD has a few very minor assists for RC5, but as experts have shown, removing them competently does not cripple apples speed much.
The fastest macs have always had the fastest GENUINE IO.
In fact, copying data in 1992 was twice as fast to do for real using RAID than copying to /dev/null (nothing transferred) on a high end SUN!
People complained that /dev/null was not optimized.
the truth is that commands that xfer data using cache controller tricks and not using cpu registers on macs help out enormously. Motorola 040 machines xfer 128 bit aligned data 16 bytes per cycle using the strange and special cache controller command (trick) called Move16.
move16 made the sun servers look slow and silly, not a badly written /dev/null.
in 1995 I saw with my own eyes 6 Seagate ST 12450W drives (each had two heads per surface - very very rare drives) transfer almost 65 megabytes per second sustained on a high end mac.
that was 7 years ago, and the fastest PC for all the money you had, with the fastest adaptec controller you could find and the best raid, was: LESS THAN ONE FIFTH AS FAST.
And now in 2002 you have people endlessly worrying about AGP and PCI-X without understanding those are OUTPUT tweaks, not INPUT speedup tweaks, and people trying to speed up the streaming speed of ram faster and faster without realizing that the speed of the L1 and L2 cache is key.
Or the ability to SHARE the L2 cache amongst multiple cpus.
The hidden "backside only" cache of the Pentium 4, and older macs, is the reason you could only have one cpu.
having two fast, low voltage, high speed cpus or more is key to performance in 2003.
you cannot do this with Pentium 4, you need to use expensive xeons if you want 2 intel chips on one board, else use pentium 3.
And pricewatch this week shows an 800 MHz itanium from intel (base model now) at over 7 thousand dollars.
7 thousand! no wonder 6 or 8 box vendors dropped plans to use itanium this year. Geeeez.
FAST L2 and L3 cache is where its at.
The latest mac cpus to come out in a couple months (not the Power4 based ones in august), the moto ones, will allow 4 megabytes of L3 cache instead of 2, and have a staggering 512K of L2 cache running at 1 ghz, instead of 500 Mhz.
I did not even think that was possible in today's world.
feeding a RISC chip is harder than an intel one, because the code cache only holds half as much logic with the wasteful 32 bit opcodes, but the ALIGNED data, the sweet wonderful mac world of ALIGNED DATA, helps the mac enormously.
There is no "PACK(1)" pragma for c structures on a mac.
I am not kidding.
Its not part of the mac experience.
True, many fields are 2 byte aligned instead of 4 byte aligned at times, but since 1995 apple has stressed 32 bit aligned integers and 64 bit aligned quads religiously.
Macs perform well because of ALIGNMENT of structures.
Do architecture people understand how many obscene PACK(1) (8 bit aligned) structures there are in Win32?
do they even code on multiple systems?
I do. If you use a 64 bit integer that is 2 byte aligned on a Pentium and pass it as argument to MS Win32 it will silently fail in some of its timer routines. That never happens on a Mac, plus mac routines tend to paranoia check a little more often on input, but not always.
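The PACK(1) complaint is easy to demonstrate; here's a hedged sketch (struct names are mine) contrasting a naturally aligned layout with a packed one, using the #pragma pack syntax GCC, Clang, and MSVC all accept:

```cpp
#include <cstddef>
#include <cstdint>

// Naturally aligned: on a typical 64-bit ABI the compiler pads `tag` out
// so that `stamp` sits on an 8-byte boundary.
struct Natural {
    std::uint32_t tag;
    std::uint64_t stamp;
};

// Packed ("PACK(1)" style): 4 bytes smaller, but `stamp` lands at offset
// 4 -- a misaligned 64-bit field, which costs extra cycles (or faults
// outright) on some architectures.
#pragma pack(push, 1)
struct Packed {
    std::uint32_t tag;
    std::uint64_t stamp;
};
#pragma pack(pop)
```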
multiple registers help a coder
multiple registers help assembly coders avoid push-pop hell
people need to think about those things too before wasting time religiously bragging about high end streaming speed of RAM.
ever timed REAL IO? Real IO pumped from card to card faster using good DMA back-to-back faster than could ever be moved using conventional single registers?
architecture is all about asking why?
Why use floppy disks in 2002?
Why use big hot parallel printer connectors in 2002 or ever ( IBM CHRP ref spec demanded it on hand helds!)
(IBM "PREP" spec required centronics connector on handhelds too!!!, MS Win 95 spec insisted on it strongly, but said SCSI was not highly important)
Why use ISA in 2002?
Why use hot hot steamy chips that do lots of speculative branching, eating up power? Apples fastest machines use microcontrollers. I kid you not. They are using MICROCONTROLLER cpus with very very short pipelines, very very little speculative branching, and very low power requirements
Why use PS2 keyboards?
Why insist on VGA at boot?
Why insist on legacy BIOS calls that have no relevance except for ancient OSes that are not even guaranteed to run by motherboard vendors?
I respect legacy too, but the legacy of Apple spurned all of these in 1984. Yup. macs never had any of that slop, though they do have open-boot style pci, and now use vga style connectors (though the connectors have detect diodes in them to see what size monitor you have), and have IDE now as the default drive, though very fast performing vs pci bus contention. In fact apples 14 drive server uses 14 IDE controller chips, one for each of the 14 IBM GXP 120 gig drives. 14 chips! 14 masters! Each pumping 35 megabytes sustained or more, and for only 15,000 bucks with Fibre Channel. Unfortunately its a 3U, but the drives are cold.
I think its funny that people try to write papers yapping about things that can change rapidly in one or two years, or have little bearing on true io speeds.
The sad truth is that right now... RIGHT NOW... in 2003 November not ONE motherboard on pricewatch or for sale that I know of supports PCI-X, except for rich-man XEON and rich-man itanium.
NONE.
No Pentium 4 with PCI-X, no mac (though apple X-Serve is 488 megabytes per second per slot), and no MP AMD and no AMD thunderbird class.
Just vapor-hardware and promises for 3 straight years.
Now AMD said they will give fast PCI only to Hammer chips and hammer chips are getting horrible benchmark speeds.
Does anyone realize how pathetic PCI slots are in 2002?
I have in my machine 3 different pci-X cards and i have to run all of them at slower speeds even though some are capable of 770 megabytes per second bidirectionally (in-out simultaneous), at 133Mhz.
This world sucks.
And RAM? Don't make me laugh! Try to find an AMD board that takes 4 gigabytes of RAM and USES it as fast as the fastest AMD can. Every tweaker site says you can only use one 512MB part and have a max of 512MB.
Thats insane. I have not one machine with less than 768 MB in this house, and my main mac from 1995 supported and allowed a single user process to hold and lock (physical real ram) 1.5 GB of memory.
In 2002 no linux with any normal tweak allows a user task to hold and lock 1.5GB of real ram; its all virtual or fake.
Even most UNIX never allows more than 3 GB of physical REAL RAM in total usage ever... its all wasted on bad VM designs.
nobody cares. Everyone says "I know 7 different unix OS that support 4 GB of ram" and then you have to remind them that VM is not RAM, that physical RAM can be easily proven to be there or not, and that no intel unix allows tasks to utilize 1.5 gigabytes of real physical RAM normally. And even if netbsd is hacked, it runs no shrinkwrapped software. All shrinkwrapped software is mac or windows.
Thankfully Apple is migrating to a 40-bit physical address space soon, in August, with the new lightweight Power4.
Does anyone think this nightmare of limited physical RAM in OSes is a real problem or not?
sure NT has a
Arrrgh! I hate all this least common-denominator lowest-cost-component world.
Fake power supplies that lie about ratings over 450 watts
cheap-ass capacitors that overheat, blow, and leak because tantalum costs too many extra cents
traces that corrode instantly in salt air near ANY coast, especially in Florida
fans that silently die and expensive fans doing the same
drives that have 34% failure rates after 18 months of use (Fujitsu lawsuit, IBM lawsuit)
And to think that people try to make themselves feel good that they can move memory from one area to another quickly using RAM streaming commands. BIG DEAL! Try moving it to a disk drive, or through a network connector, or to another CPU. (Many multi-CPU designs cap inter-CPU speed at 50% or 25%.)
Who cares about RAM streaming! Bus contention, PCI latency, and cold-RAM random reads are far more critical issues.
But no one cares. They just want to download mp3s, porn, DVD rips, and console warez, and you can do that on any 5-year-old box.
What a terrible world, when a Seagate hard drive in 1995 did 12 megabytes per second SUSTAINED, and in November 2002 the fastest single-spindle drives sustain only 39 MB per second or so.
What garbage.
And the PCI bus is not 50 times faster after all these years, or 40 times, or 20 times, or 10 times faster; it's so slow, even at 64 bits and 66 MHz, that I want to just cry.
Re:There are too many issues, and it gets too comp (Score:2)
Re:There are too many issues, and it gets too comp (Score:3, Funny)
To be able to read a 2 page long comment.
Especially when it would normally be just a small paragraph.
Except that the author thought that it wasn't long enough.
So they typed it like this,
And made everyone hate them.
Re:There are too many issues, and it gets too comp (Score:2, Funny)
Re:There are too many issues, and it gets too comp (Score:3, Informative)
False for non-pretend 64-bit architectures (e.g., UltraSPARC), and it has been for years.
Re:There are too many issues, and it gets too comp (Score:2)
The really sad thing is: we could get by without those fans at all. Run the CPUs a few percent slower and use non-power-hog architectures, put a real heatsink on the PSU instead of toys and a blower, put multiple heads in the drives instead of spinning them faster (or better still, install more RAM so the disk gets hit less often).
And seal the case up completely. No corrosion problems - with optical connections and batteries (machine consuming a fraction of the power that your desktop P4 does) you could in theory take your computer swimming. What would you call a mouse that operates in water? An eel?
And how about `level 5' cache: buckets of slower, low-power, low-cost RAM for swapping, temp files, disk cache etc?
Re:There are too many issues, and it gets too comp (Score:5, Interesting)
A good PCI-X capable Fiber Channel card on a mac [...]
There are no Macs that support PCI-X. I am therefore suspicious of the numbers you claim for this configuration.
Next, RC5. The rant here seems similar to another Anonymous Coward post back here [slashdot.org]; I'm not going to copy in my response [slashdot.org] again; quick summary: I didn't buy my computer to run RC5 really fast, and neither did you.
Cold-memory random read and write is FASTER on Macs than on DDR machines, as seen in benchmarks, and this author does touch on that topic indirectly a little. Even if Macs in February 2002 were faster than AMD for scattered random reads and writes, the current three desktop Macs all use DDR RAM now, so they probably lack the speed boost for that access pattern, but they do have write aggregation (combined writes) across the PCI bus and other tricks.
This paragraph is confused. Yes, "cold start" memory latency is very important for many tasks, and is often overlooked. But how can the first sentence be true when many Macs are DDR machines? And where are these benchmarks? I just went looking for DDR Mac latency scores and couldn't find anything. Does anyone have lmbench memory latency numbers for the Xserve or the current PowerMacs? Oh, and write combining is hardly a Mac trick.
The hidden "backside only" cache of the Pentium 4, and of older Macs, is the reason you could only have one CPU.
Incorrect. You just need a cache coherency protocol between your processors. "Backside" has nothing to do with it. For example, the dual-processor Pentium III box I'm typing this on has "backside" cache on each processor; it's just hidden inside the CPU packaging rather than brought out to extra pins to connect to an external cache.
There is no "PACK(1)" pragma for C structures on a Mac.
#include <stdio.h>

struct foo { char c; int i; } __attribute__ ((packed));
struct foo foo_inst;
int main(void) { printf("%d\n", (int)((char *)&foo_inst.i - (char *)&foo_inst)); return 0; }
happily prints "1" on 10.2. In fact, if i doesn't cross a double-word boundary, there is no penalty for using it on later CPUs. Yes, I just verified this on the G4 downstairs.
And RAM? Don't make me laugh! Try to find an AMD board that takes 4 gigabytes of RAM and USES it as fast as the fastest AMD can. Every tweaker site says you can only use one 512 MB part and have a max of 512 MB.
Although you can't get the absolute topped-out single-CPU performance with it, dual-CPU boards like the Tyan Thunder K7X Pro support up to 4 GB of registered PC2100 RAM now; these boxes still comfortably beat current top-end G4s at tasks like SPEC CPU2000. If you really want a lot of memory you'll have to get a box from a major vendor; the Dell PowerEdge 6650 [dell.com] comes to mind as a 16 GB machine. Unfortunately, there aren't any AMD boxes like this that I know of, but Hammer will change that.
In 2002 no Linux with any normal tweak allows a user task to hold and lock 1.5 GB of real RAM; it's all virtual or fake.
Get an Alpha. Although I have no direct experience with this, reliable sources claim you've been able to go past the 32-bit 4G address space limit for several years.
Thankfully Apple is migrating to a 40-bit physical address space soon, in August, with the new lightweight Power4.
Why wait? Apple isn't the only vendor out there.
Re:There are too many issues, and it gets too comp (Score:2)
Calculating Latency (Score:5, Informative)
Ace's Guide to Memory Technology [aceshardware.com]
Basically, the latency of the whole memory system (from FSB to DRAM) is equal to the sum of:
If you want to calculate the latency that the CPU sees, you multiply the latency of the memory system by the CPU's multiplier. So a 500 MHz (5 x 100 MHz) CPU will see 5 x 9 = 45 cycles of latency: that CPU will have to wait at least 45 cycles before information that could not be found in the L2 cache becomes available there.
Re:Calculating Latency (Score:1)
Lame article, by Ars standards (Score:3, Insightful)
And why all that complex hand-waving about practical upper limits on burst length? He gave all kinds of secondary limiting factors, but missed the obvious one: the simple argument that long bursts are useless unless you have a reasonable expectation that the speculatively fetched portion of the data will be consumed. Moving lots of data fast is only useful if a substantial fraction of it is data you care about.
(It's the same reason that there's an upper bound on the useful cache line size.)
How Does Increasing FSB affect Performance? (Score:3, Informative)
Athlon XP 2800+: 333 MHz FSB and nForce 2 [aceshardware.com]
First of all, we tested the Athlon XP 2800+ on the "normal" KT333 platform with a 17x multiplier, the FSB set at 133 MHz DDR (266 MHz) and the memory set at 166 MHz DDR (333 MHz), CAS at 2, RAS to CAS at 3, Precharge at 3. The second time, the KT333 platform (ASUS A7V333) was set at a FSB of 166 MHz (333 MHz) and the multiplier was set to 13.5x.
Where do I start? There is an enormous amount of info hidden in this table. Let us first start with the 266 MHz versus 333 MHz FSB discussion.
There have been many reports that show that the Athlon does not benefit much from an increase in FSB clockspeed, moving from 266 MHz to 333 MHz. But Membench tells us exactly why. First of all, compare the two KT333 latency numbers (64 byte strides). All BIOS settings were exactly the same, only the FSB speed, and thus the multiplier, are different. Normally one would expect, everything else being equal, that the Athlon with the 166 MHz FSB would see 25% lower latency, but the CPU with the 166 MHz FSB version actually sees a higher latency! This shows that the (ASUS) KT333 board, in order to guarantee proper stability, increases certain latencies of the memory controller. Memory bandwidth increases by 14%, which is also less than expected.
Now what does this mean for "real world" performance? It means that many applications will see either a very small performance increase or none at all, as it is latency and not bandwidth that is the most important performance factor. Let us explain this in more detail.
Re:How Does Increasing FSB affect Performance? (Score:3, Interesting)
The real-world scoop on this is that someone typing a document in OpenOffice or surfing the Internet won't see any performance increase over a Pentium II 233 MHz machine with 64 MB of 60 ns RAM. Gamers might get 46 gajillion frames per second instead of 42 gajillion, which is completely indistinguishable to humans, so they won't notice either.
I, on the other hand, might see a simulation of a 5250-node electromagnetic scattering problem take 36 hours instead of 39 hours, which is quite significant. But, I would probably get the same increase in performance by going through and cleaning up my code a little. FORTRAN is funny that way...
Making computers faster to the nth power only makes code that's worse to the n+1th power :)
Re:How Does Increasing FSB affect Performance? (Score:2)
It should read: Making computers faster to the nth power only makes code that's worse to the (n+1)th power
Otherwise, what you have is n+(1th) power...
The bliss of SDRAM banking (Score:3, Interesting)
This avoids incurring latency, since read commands can be issued in parallel with incurring the CAS latency.
There are more details on this in the SDRAM specification (I've lost the URL, but it's out there; I think it was Intel who wrote it, though).
Read this (Score:2, Informative)
Everyone who is interested in issues of bandwidth and latency should read this book :
Computer Architecture : A Quantitative Approach, by John L. Hennessy and David A. Patterson
What kind of bandwidth and latency? (Score:1)
Old sayings... (Score:1)
- Never underestimate the bandwidth of a station wagon filled with magnetic tape.
- What's the fastest way to get 1 TB of data from LA to NYC? FedEx.
We can translate those into modern terms, but the idea is the same: just because bandwidth is high doesn't mean that latency is low.
My favorite old saying (Score:1)
"Money can buy bandwidth, but latency is given by God"
(You can always increase bandwidth by adding more bits, but the speed of light is fixed...)
So old it clunks (Score:1)
In the distant past, embedded systems used EPROMs that were rather slow, so memory accesses needed several wait states - the author doesn't seem to know this ancient term - while the EPROM went "duh, that's address #F0F0, better go back in the stores and find the data". So as soon as fast RAM was cheap enough, we would load the EPROM contents into RAM at power-up (or at least the frequently accessed bits) and then run from RAM, where no wait states were needed. This was usually a 50% performance boost without changing the processor.
And there you have it. Substitute L1 cache for fast RAM and DRAM for EPROM, and despite the fanciness of the modern technology, and the enormously bigger memory space, nothing has really changed.
Re:Who cares? (Score:5, Interesting)
Also, I've been doing a lot of numerical calculations in Python, because the time saved writing the code is much greater than the time spent waiting for it to execute. Nevertheless, knocking a run time down from 7 hours would still be really nice, even if I have it running on someone else's computer. Even the five-minute solves that could be reduced to 1 minute would make a difference, because five minutes isn't enough time to do something else.
Re:Who cares? (Score:4, Insightful)
Re:Who cares? (Score:3, Insightful)
Remember, we are living in a day of massive progress. Just because you cannot see any immediate use for the processing power doesn't mean there isn't any, or that you won't need it in the future.
What about 3D animators? Compile times? People in the print field who deal with massive 300DPI images? What about actually being able to have that Microsoft Paperclip run without 100% CPU usage?
Games are certainly pushing the CPU mark along in one area, but remember, computing isn't limited to home use, either.
Hope this helps.
Re:Who cares? (Score:1, Interesting)
Re:Who cares? (Score:3, Insightful)
Well, let's just say you decide to buy a new computer. And the sales guy starts telling you that you could get this one for $1300, but this over here is much "faster" and it's only $1600. (Yeah, I know, I build my own too. But this article is written to be accessible to those who may not be quite so handy.) Knowing what those numbers mean is very important in making that decision. It will help people realize that they don't need that speed, just as you mention.
Hear, hear! (Score:4, Insightful)
I just wish this information were more widely distributed, and that people would actually research what they're getting into. They treat it like clothes shopping: they stop in and take something home that looks cool. But if you're plunking down a big wad of money you should do the research; sadly they don't, and then they're pissed when they realize a week later that they were suckered. And since I work the customer service counter, I get to play whipping boy.
(On a truly sad note, one customer swore at me and said it was "horse shit" that we didn't carry Dell, even though I told him it wasn't our decision to make...)
Re:Who cares? (Score:1)
But honestly, this kind of stuff needs to be looked at. Right now computers don't operate to their potential, and if it weren't for people out there constantly looking for ways to improve them, we would still be using computers the size of office buildings that take half a city's power just to run... oh, and they would operate slow as hell. Do you have a laptop? How about a PDA? A cell phone? These things are only here now, with all of those sexy features, because of people constantly trying to make things run as efficiently as possible. Oh, and you're right: it does make the games better, too.
Re:Who cares? (Score:1)
What do you drive? I'm willing to bet it's not a Geo Metro. But why not? Why do you really need anything faster? I mean, a Metro does the speed limit, right? Do you really need better acceleration and a higher top speed than what a Metro provides?
For those of us that strive for the best, the fastest computer is NEVER fast enough.
Re:Who cares? (Score:4, Insightful)
I have.
I write server-side Java code for a living at a web agency. Until recently, I had a P3 450 with 384meg of RAM, and it was too damn slow. I develop in JBuilder, and deploy my code using Resin (see caucho.com), and for complex sites it could take literally minutes for the server to start, then a couple more minutes per page in debug mode for the pages to be parsed and compiled. You're looking at 10-15 minutes to check to see if a one-line bugfix has worked, and hasn't had any unexpected side-effects, etc. That's 10-15 minutes of waiting, waiting, waiting, clicking a link, waiting, waiting, waiting, entering some details and hitting submit, waiting, waiting....
Extremely frustrating, especially if you're working late, especially when you know that 30-45 minutes spent going to the nearest high-street electrical retailer will buy you a machine 4 times as fast.
That's all before we even get on to the responsiveness of JBuilder...
I now have a P4 1.9 GHz with 3/4 gig of RAM, and the difference is incredible. JBuilder is much more responsive (it feels like native code rather than Java most of the time), compile-run-test-debug cycles are much shorter, etc. This has a knock-on effect - people are generally happier and less frustrated, stress levels are lower, there's less swearing (one or two of my colleagues regularly vented steam by hitting their PCs and swearing loudly) - work is all-round more enjoyable.
The bottom line is that anything that means I spend less time waiting for things to compile, or start up, or whatever, is a good thing. Do not make the mistake of thinking that you'll never have a use for all that power; you'll find a use. The laser, for example, sat around in research labs for years before anyone thought of anything practical to do with it. Now practically every PC has at least one, not to mention hi-fis, DVD players, etc.
Re:Who cares? (Score:1)
Re:Who cares? (Score:2)
Say something takes 7 hours on one computer (like a complex database report) but 2 minutes on another: the chances that you run it on the slower one are much lower, and you just don't do it.
On the other hand, if the difference for a task is small, 5 seconds vs. 10 seconds, the chances of it getting done are not much greater on the faster computer.
The difference a fast computer makes is not really speed, but whether or not you do that task at all.
Re:never trust the back of the box. (Score:2, Insightful)
While we're on the subject: Ars talks about 8 bytes as being called a "word". As a programmer I was under the impression that a "word" is 2 bytes, a double word (DWORD) is 4 bytes, and a quadword is 8 bytes, or 64 bits. What's he on about?
Re:never trust the back of the box. (Score:2)
I believe word size depends on the processor; i.e., a 32-bit processor has a 32-bit (4-byte) word and a 64-bit processor has a 64-bit (8-byte) word.
Re:never trust the back of the box. (Score:3, Insightful)
The Pentium is a 32-bit processor, but for historical reasons Intel still calls 16 bits a word and 32 bits a dword. This is only to confuse you; pay no attention to the marketing behind it...
As a side note, it could be said that when C was designed, an "int" really was intended to be a "word". For compatibility reasons, most 64-bit processors now expose their 64-bit words as either "long" or "long long", since everyone for the last few decades has assumed that an int is 32 bits.
Of course, that was for processor words, which are the most interesting to the programmer. But in this article it was the memory bus being discussed, and the bus is of course allowed to define what it thinks a word is (how many bits are transferred at once when you read or write a memory address).
Having the memory bus be 64 bits wide on a 32-bit processor is perfectly sane and acceptable, as is having the memory bus 16 bits wide on a 32-bit processor (I believe the 386 did this).
In the end, because of marketing and other reasons, it's best not to use "word" at all. Personally, I see a "word" as something that, when put together with others, results in speech or text.
Re:never trust the back of the box-confusion (Score:1)
what with 18-bit address registers and 60-bit operand registers, and 6-bit characters
Re:never trust the back of the box-confusion (Score:1)
IIRC... The i386SX had a 16-bit external data bus. The Motorola 68000 had a 16-bit external data bus as well (and 32-bit registers). The 68008 was a 68000 with an 8-bit external data bus. IIRC, the HP48 family of processors were 64-bit internally but 4-bit external data bus (to save power and space).