Hardware

Understanding Bandwidth and Latency (160 comments)

M. Woodrow, Jr. writes "Ars has a very eye-opening article on the real causes of bandwidth latency and why we shouldn't just drool endlessly over maximum throughput numbers. In particular, I think the author's look into the PowerPC 970 and the P4's frontside bus is interesting considering how we're constantly being told by marketers that more speed is always going to translate into massive performance gains. The issue is, of course, far more complex, and this article does a good job of thinking about the problem from an almost platform-agnostic point of view."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Bandwidth (Score:5, Informative)

    by Patik ( 584959 ) <cpatik@NoSPAM.gmail.com> on Thursday November 07, 2002 @01:50AM (#4614944) Homepage Journal
    Here's a handy bandwidth chart [acme.com] for common components to bookmark for easy reference.
    • by Anonymous Coward
      What about crossbar switches like the newer video cards use?

      Don't some DRAMs have SRAM caches built in?

      What about dual-ported ram?

      How about separate buses for C&C and data?

      How about putting base instructions (zero out section of memory) into RAMs?

      Is there any memory ordering by the OS to facilitate BUS filling?

      Aren't you getting tired of these questions? :)
    • Re:Bandwidth (Score:3, Insightful)

      by Courageous ( 228506 )
      Actually, this was quite an interesting chart. Seeing that Ultra-320 is twice PCI, I wonder if A) anyone makes Ultra-320 for PCI, and B) anyone is stupid enough to buy it?

      C//
      • Ultra 320 SCSI (Score:5, Insightful)

        by Bullseye_blam ( 589856 ) <bullseye_1.yahoo@com> on Thursday November 07, 2002 @02:42AM (#4615090) Journal
        Yes, while the theoretical rate is much faster than PCI (as you noted), I believe that these cards are designed for 64-bit PCI slots, which, as you can see from the chart (which only lists fast/wide PCI), can be 4x faster. A standard 64-bit slot running at 33 MHz (the speed at which most 32-bit slots run) is twice as fast as standard PCI.
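
        For reference, the arithmetic behind those theoretical peaks: standard PCI is 32 bits x 33 MHz = 133 MB/s; a 64-bit slot at 33 MHz doubles that to 266 MB/s; a 64-bit/66 MHz slot reaches 533 MB/s, comfortably above Ultra-320's 320 MB/s.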

        So actually, Ultra-320 SCSI is the shit. ;)
    • You're making the classic mistake of assuming that Bandwidth is what matters, when in fact it's the latency which kills applications. Having bandwidth numbers without the corresponding latency is worse than useless, it's misleading for the uninformed.
      • Bandwidth is what matters, when in fact it's the latency which kills applications

        Surely you're talking about something different from the article poster, who was referring to the causes of an entirely different (and uncommon) metric: "bandwidth latency". ;-)
  • by ZeekWatson ( 188017 ) on Thursday November 07, 2002 @01:51AM (#4614950)
    latency causes there to already be 100 posts when you bring up the comments page ... and you thought you were first! :)
  • until this site is Slashdotted...
    • Re:I can't wait... (Score:3, Interesting)

      by Anonymous Coward
      Sorry if I made this topic a little unclear for some moderators. When a site is Slashdotted it isn't due to a "lack of bandwidth"; do you think someone's servers just "run out"? A site which is truly Slashdotted runs out of RAM and processing power to keep enough daemons alive to sustain the number of hits, and the daemon itself crashes under the load, or gets so heavily bogged down it never recovers unless it is restarted. Therefore an article on CPU latency which is Slashdotted is ironic.

      Hope this helps.
      • Except that in a lot of cases, sites do "run out" of bandwidth.

        Many ISPs have bandwidth caps.

      • Re:I can't wait... (Score:2, Informative)

        by virtual_mps ( 62997 )

        When a site is Slashdotted it isn't due to a "lack of bandwidth"; do you think someone's servers just "run out"? A site which is truly Slashdotted runs out of RAM and processing power


        Sure, that's true, except when it isn't. I've seen a site get /.'d, and the machine was fine--but the entire organization where the machine was located ran out of bandwidth. Local users could access the web site but traffic to and from the internet was halted. It really depends on your mix of static/dynamic pages, and the average request size. For a static site it's fairly easy to max out a 100 Mbit LAN--which is more internet bandwidth than most people outside of hosting facilities can easily obtain.
    • This is not OT. Please moderate parent up. I guess +1, Insightful would be the best fit, although in this case I would prefer +1, Clever.
    • Re:I can't wait... (Score:2, Informative)

      by CyberBry ( 196935 )
      Ars gets slashdotted all the time - I've never seen their server even flinch.
  • by Rooked_One ( 591287 ) on Thursday November 07, 2002 @02:12AM (#4615015) Journal
    There was a guy who demonstrated a way to transmit data over the electromagnetic field surrounding every powerline. All you do is plug your computer into a power outlet, basically. The throughput was incredible, and latency everywhere would be under 10ms, as they demonstrated.

    Anyone hear from these guys lately, or at least know a URL, if they haven't been bought out by the telecoms?
    • by AvitarX ( 172628 ) <me AT brandywinehundred DOT org> on Thursday November 07, 2002 @02:20AM (#4615036) Journal
      I am probably being seriously trolled, but the guy was shown to be a total fraud.

      Wired had an article about it around the beginning of the year.

      All the sceptics were correct, and eventually the believers let the idea slip out of the collective consciousness, not wanting to have to admit they were totally duped.
      • Re: your sig (Score:1, Offtopic)

        by Myco ( 473173 )
        Never put salt in your eyes.
        Never put salt in your eyes.
        Never put salt in your eyes.
        Never put salt in your eyes.
        Always put salt in your eyes.

        AAAAAAUUUGH!!!!

        God I miss kids in the hall. Thanks for the reminder.

    • Wired magazine wrote an article about a company (which was mostly just a front for one man) that fits your description.

      It was pretty clear from the article that the guy was a crook and that there was nothing to his claims. But he got a lot of money from a lot of supposedly smart people.

      By the way, what does the claim that "latency everywhere would be under 10ms" mean?

      MM
      --
  • Seems familiar... (Score:4, Informative)

    by josh crawley ( 537561 ) on Thursday November 07, 2002 @02:13AM (#4615018)
    This description of bandwidth and latency in CPUs and memory is almost the same as in network transmissions. It's really easy to increase the bandwidth (10 Mbit to 100 Mbit to 1000 Mbit)... but try as hard as you can to make those electrons go faster along with the equipment...
  • by MichaelCrawford ( 610140 ) on Thursday November 07, 2002 @02:16AM (#4615023) Homepage Journal
    Here's a dead horse I've been beating for years.

    Much software is not written to take advantage of the architecture of modern microprocessors. If you rewrite some of your software to take advantage of them, then it is not hard to double your speed.

    The problem is that many, if not most programs are not very intelligent in how they access the CPU cache.

    It is not uncommon for a CPU to be running at ten times the speed of the memory bus. To keep from starving the CPU, we have caches that run nearer or at the speed of the processor.

    There are two problems. One is that the cache is limited in size. The other, less well understood, is that the cache comes in small blocks called "cache lines", which are typically 32 bytes.

    So if you have a cache miss at all, or you fill up the cache and have to write a cache line back to memory, your memory bus is going to be occupied for the time it takes to write 32 bytes. The external data bus of the PowerPC is 64 bits (8 bytes) so there will be four memory cycles, during which the processor is essentially stopped.

    What can you do to maximize performance? Make better use of the cache. If you use some memory, use it again right away. Use other memory that's right next to it. Avoid placing data values near each other that won't be used near each other in time.

    Simply rearranging the order of some items in a struct or class member list may make cache usage more effective.
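
    As a minimal sketch of what I mean (field names invented; assuming the 32-byte lines mentioned above): group the members your hot loop touches so they share a cache line.

    #include <stddef.h>

    /* Hypothetical layout: pos and vel are read every frame ("hot"),
       name and debug_info only rarely ("cold"). */
    struct particle_bad {
        float pos[3];          /* bytes 0-11 */
        char  name[32];        /* cold data between the hot fields... */
        char  debug_info[28];
        float vel[3];          /* ...pushes vel out to offset 72 */
    };

    /* Same members, reordered: pos and vel are now adjacent, so the hot
       24 bytes usually arrive with one line fill instead of two. */
    struct particle_good {
        float pos[3];
        float vel[3];
        char  name[32];
        char  debug_info[28];
    };

    /* The inner loop now touches only the front of each struct. */
    void step(struct particle_good *p, size_t n, float dt)
    {
        for (size_t i = 0; i < n; i++)
            for (int k = 0; k < 3; k++)
                p[i].pos[k] += p[i].vel[k] * dt;
    }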

    Also be aware of how your data structures affect the cache. Be aware of data you don't see, like heap block headers and trailers.

    Arrays are often more efficient than linked lists, especially if you are going to traverse them all at once, because each item in a linked list will likely be loaded in a different cache line, where an array may get several items together in a cache line.
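
    To make that concrete, here's a quick sketch (names are mine, not from the article): the same sum over contiguous storage and over heap-scattered nodes.

    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Contiguous: each 32-byte line fill brings in eight useful 4-byte ints. */
    long sum_array(const int *a, size_t n)
    {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Scattered: each node was malloc'd separately, so every 'next'
       dereference is likely a fresh cache line, and part of each line is
       wasted on the pointer and the heap block header. */
    long sum_list(const struct node *p)
    {
        long s = 0;
        for (; p != NULL; p = p->next)
            s += p->value;
        return s;
    }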

    Finally, if you really have a structure that's full of small items that is accessed in a highly random way, consider turning off caching for the memory the data structure occupies. You won't get the benefit of the cache after you've accessed an item, but on the other hand you won't have to wait to fill a 32-byte cache line each time you read a single item.

    Imagine a lookup table of bytes that's several hundred K in size, accessed very randomly - you would benefit from not using the cache.

    • by Anonymous Coward
      Sounds like a job for the compiler to me, and btw, you never have to wait on the cache. The trick is to query the cache and memory at the same time for a data item. If it's in cache, then the memory request will be cancelled, if it's not in cache, then memory goes just as fast as it ever would. Cache is truly amazing in that if you are using a write-through scheme, it only provides a boost to performance...there's no speed-size tradeoff at all.
      • by MichaelCrawford ( 610140 ) on Thursday November 07, 2002 @02:53AM (#4615112) Homepage Journal
        I don't think you're going to be able to find a compiler that can reorder your struct or class members depending on how they are accessed. It may be possible to have one do that based on profiling, but I think that is beyond current compiler technology.

        Also every compiler I have ever come across stores struct and class members in the order they are declared in the source file. I don't think that's guaranteed by either C or C++, but that's how it always is.

        Also, the compiler is not going to make fundamental changes to your data structures and algorithms for you. If you write some code to manipulate a linked list, there's no way the compiler will change that to an array for you because it thinks it might be more efficient.

        The one case I have seen tools able to affect cache access in a positive way is the use of code profilers that record the most common code paths in your program and then edit the executable binary so that all the less common code paths are towards the end of the file. Thus if you take an uncommon branch, you might jump back and forth a megabyte within a single subroutine.

        Apple's MrPlus did that. It was based on an IBM RS/6000 tool whose name I don't recall.

        This has the advantage not just of improving cache performance but of reducing paging - a greater percentage of the code pages that are resident in memory are used for something useful, rather than containing code that is mostly jumped over. Uncommonly used code will all be at the end of the file and may never be paged in.

        One problem with a tool like this is that the results are only valid for a certain use of the program. If you have a program that can be used in many different ways, it may be difficult to find a test case that helps you.

        • The compiler can't really reorder fields of a class/struct because the programmer could potentially address directly into the class without using the variable. There would be some trouble with that if the programmer couldn't predict where the data was going to be.
          • by Anonymous Coward
            this is why the programmer should use the offsetof macro :-)
            • The preprocessor replaces the macro before the code is actually compiled. If your optimizer then reordered all the fields in the struct, offsetof screws up as well.

              No re-ordering classes :)
              • The preprocessor doesn't know what the layout of the structure is, and it doesn't have to. offsetof() is typically defined in <stddef.h> as something like:

                #define offsetof(_T, _M) ((size_t)&((_T*)0)->_M)

                which the compiler will evaluate based on the way it actually laid out the structure. But see my comment above.
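
                For anyone who hasn't seen it used, a tiny demonstration (the struct is made up); the printed offset reflects whatever layout and padding the compiler actually chose:

                #include <stddef.h>
                #include <stdio.h>

                struct record { char tag; double value; };

                int main(void)
                {
                    /* Typically prints 8, not 1, because of alignment padding. */
                    printf("%lu\n", (unsigned long)offsetof(struct record, value));
                    return 0;
                }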

        • Also every compiler I have ever come across stores struct and class members in the order they are declared in the source file. I don't think that's guaranteed by either C or C++, [...]

          Yes, it is.

        • by Ben Hutchings ( 4651 ) on Thursday November 07, 2002 @07:45AM (#4615803) Homepage
          Also every compiler I have ever come across stores struct and class members in the order they are declared in the source file. I don't think that's guaranteed by either C or C++, but that's how it always is.

          That's guaranteed to happen to a group of non-static member variables with no access specifiers among them. So for example in:

          class foo
          {
          public:
              int bar;
              int baz;
          private:
              int quux;
          };

          'baz' is guaranteed to be placed after 'bar' in an instance of class foo; but 'quux' might not be placed after 'baz'.

    • This is why companies like Intel have whole departments dedicated to getting people to write software that is optimized for whatever new features are available on a new processor.

      When I worked there - we ran the DRG Game lab - which was for getting game developers to optimize their code to take advantage of new instructions etc on the latest processors.

      This made the processors look better, any game that we tested that ran better on the processors after having the code optimized was pushed out with a big marketing hoopla and Intel would say "HEY! come look at our new machines - look how great X software title runs on the latest and greatest"

      But the truth is that this was pretty much all fake - rather than testing the software on the exact same boxes that had just two different processors, the tests were done on boxes that had totally different configurations - although we never told anyone about that little detail.
      • Intel VTune Performance Analyzer [intel.com] is an impressive code profiler, and can even profile Linux code (over the net, with the UI hosted on Windows), but Intel's marketing shows through clearly in the advice it gives you on how to optimize your program - by making use of assembly opcodes that are only available on Intel processors, and only the very latest ones at that.

        I haven't tried, but I would be surprised if VTune ran on an AMD processor.

        For the very fastest code, you can take advantage of special instructions, write stuff in assembly with clever use of registers, etc. But the performance gains won't be portable.

        Optimizing cache use could be considered a non-portable optimization, but it can be done directly in C or C++, and any processor most people are likely to use will use a cache. There will just be some variations in its size, the size of a cache line and stuff like that.
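
        If you'd rather not hard-code the line size, you can sometimes ask the OS for it; a sketch for Linux/glibc (the sysconf name is a glibc extension, hence the guard and the fallback):

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long line = -1;
        #ifdef _SC_LEVEL1_DCACHE_LINESIZE
            line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);  /* glibc-specific */
        #endif
            if (line <= 0)
                line = 32;  /* conservative fallback, per the posts above */
            printf("assuming %ld-byte cache lines\n", line);
            return 0;
        }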

    • If you use some memory, use it again right away

      That type of memory is called a 'register' in the CPU. The compiler will perform the optimisation you describe using these 'registers' for you.

      • Yes, it's more efficient to keep reusing registers for the same data, but the cache can store considerably more data than the registers can.

      • Sometimes registers don't cut it. For example, you cannot pass a pointer to a register. If you have a function with several disparate return parameters, you might well pass pointers to the places to put the returns. And then, obviously, you are going to use the returned data straight away - else why did you ask for it? So you group the ints into which the values will be returned in order to take advantage of the cache if you can.
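
        A sketch of that idea (the function and fields are invented): returning several values through one small struct keeps them adjacent in memory, where separate out-pointers could land anywhere.

        #include <stddef.h>

        struct stats { int min; int max; long sum; };

        /* Instead of stats(const int *a, size_t n, int *min, int *max, long *sum),
           pack the results so they travel together in one cache line. */
        static struct stats compute_stats(const int *a, size_t n)  /* assumes n >= 1 */
        {
            struct stats s = { a[0], a[0], 0 };
            for (size_t i = 0; i < n; i++) {
                if (a[i] < s.min) s.min = a[i];
                if (a[i] > s.max) s.max = a[i];
                s.sum += a[i];
            }
            return s;
        }
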
    • but on the other hand you won't have to wait to fill a 32-byte cache line each time you read a single item.

      Please forgive my failing memory, but isn't the functional unit requesting the load notified immediately when the requested word is available from the load/store unit? Unless I am imagining things, I seem to remember this procedure:

      1. word load is encountered in the code
      2. all functional units requiring the result of this load wait for load/store unit to succeed
      3. load/store unit fires off request to cache controller, which continues the request up the memory hierarchy until word is found
      4. requested word is returned immediately down the hierarchy to the load/store unit, where it is directed to the appropriate register in the register file (or data bus a la Tomasulo)
      5. functional units waiting for load to finish proceed, while simultaneously the cache hierarchy loads the rest of the cache line for that word into the cache

      Wouldn't it be a silly implementation that forces the load/store unit to wait for the entire cache line to be read before returning the requested word?? In other words, doesn't the memory hierarchy bring the cache lines in "in the background" while the requested data is returned to the load/store unit? And wouldn't this mean that turning the cache off doesn't solve "cache line latency" since it doesn't really exist to begin with?


    • The Linux kernel guys pay attention to these things and code for them by hand. Hence their badass performance :)
    • Much software is not written to take advantage of the architecture of modern microprocessors. If you rewrite some of your software to take advantage of them, then it is not hard to double your speed.

      It's worse than you think on PCs (whatever OS they're running). The article talks about "bus mastering" and "data tenure", but on real workstation-class hardware there is no bus (not even one with a "north bridge"); there's a proper switch, like Crossbow or GigaPlane. These give you point-to-point, non-blocking sustained peak I/O. On a switched system, if components A and B want to communicate they can do so at the switch's full speed, and so can components C and D, with no contention at all. That means no wasted cycles for the bus to constantly change ownership.

      If you're doing a job that requires heavy use of the "bus" on an x86 system (lots of storage I/O, lots of random memory access hence lots of L2 misses), then optimizing code for cache locality is the least of your problems, you'll never get around the fact that the inefficient design of the hardware itself is the bottleneck. Fancy FSBs and the like are just workarounds and don't address the real problem.
  • The miracle of cache (Score:5, Interesting)

    by Anonymous Coward on Thursday November 07, 2002 @02:16AM (#4615025)
    The article doesn't go into the miracles of modern cache architecture. It's impressive that memory that's about 50x too slow for its CPU can be made to work effectively at all.

    Once upon a time, on mainframes of the 1960s, minicomputers of the 1970s, and desktop computers of the 1980s, there was no cache. Every time the CPU wanted something from memory, it went all the way out to the memory bus (which, in early minis and PCs, was also the peripheral bus). This was OK, because memory latencies were about 1000ns, and that was reasonably well matched to CPU speeds in the 1 MHz range.

    But today, we have 2GHz CPUs. We thus ought to have 0.5ns main memory to match, but what we have is about two orders of magnitude slower. The fact that modern systems are capable of papering over this issue is, when you think about it, a huge achievement. Of course, what really makes it go is that fast, but expensive, memory in the caches.

    Virtual memory hasn't done as well over the years. In the 1960s, the fastest drums for paging turned at around 10,000 RPM. Today, the fastest disks for paging turn at around 10,000 RPM. (Bandwidth is way up, but it's RPM that determines latency.) Meanwhile, real main memory has become about 20x faster, and main memory as seen by the CPU at the front of the cache is about 1000x faster. There's nothing cheaper than DRAM but faster than disk to use for a cache, so caching isn't an option. As a result, virtual memory buys you less and less as time goes on. With RAM at $100/GB, it's almost time to kill off paging to disk. Besides, it runs down the battery.

    • Caches old tech (Score:3, Interesting)

      by Goonie ( 8651 )
      Caches have been used in mainframes and minis since 1969, when the IBM 360/85 used one for exactly the same reason modern CPUs need cache - the low-cost memory technology of the time (magnetic cores, IIRC) was much slower than the CPU, and memory that was fast enough was expensive.
    • With RAM at $100/GB, it's almost time to kill off paging to disk. Besides, it runs down the battery.

      I agree with you except that having a gig or more of RAM won't exactly do wonders for your battery life either.
  • Fairly Unimpressive (Score:5, Interesting)

    by Kommet ( 27381 ) on Thursday November 07, 2002 @02:22AM (#4615039) Homepage
    First, a caveat: I've been a regular Ars reader for the last two years. That said, I did not care for this article for the following reasons:
    • It was too shallow for the truly technical and too contorted for the uninitiated to follow. The author mixed metaphors, then piled confusing illustrations atop constant admonitions not to let the illustration mislead you.
    • It tried to cover theory and therefore didn't include any real-world examples drawn from either modern or historic system designs, with the exception of a short blurb about the Apple G3. It switched haphazardly from assuming a 3-cycle latency on memory reads to 9, then back to 3, then to 6, without explaining where those numbers came from. Graphs have large ranges with no explanation of whether one would ever see a situation that mimics the higher end of the graph.
    • It was not internally consistent. The choice of bus speeds in the bandwidth examples jumps back and forth between 100 MHz and 133 MHz, which means that the examples cannot be compared to each other. Also, the illustrations show what the bandwidth usage would be for a 4-word burst, then show a graph that goes into the low hundreds of words.

    Summing up, the article doesn't inform the technical, will confuse the non-technical, doesn't follow any consistent set of example conditions, contains very arbitrary graphs, and is generally poorly written. It is possible that I couldn't do any better (before I get flamed), but I doubt any technical writer worth his/her salt would do much worse.

    • Hmmm, maybe the article was aimed at people like me. It was interesting to have a peek at the guts of a computer, but luckily I'm technical enough not to get confused by his rather odd illustrations.

      I think the pictures and graphs did their job (he chose those analogies for a reason), but you have to be on the ball.

      All in all, a good read for a sysadmin who isn't an electronic engineer.
    • ... And word burst graphs should be discrete. You can't burst 1.5 words.
  • by Kj0n ( 245572 ) on Thursday November 07, 2002 @02:24AM (#4615048)
    ... once wrote:
    Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

    The latency is terrible, though.
  • The tradeoffs that system designers make change constantly, and there are many other factors besides SDR/DDR that affect latency and throughput. Compiler writers also keep changing their minds about how they optimize and what cases they handle and don't handle.

    The rules of thumb are pretty much the same now as they ever were: preferentially, access memory sequentially, and for non-sequential accesses, keep the accesses local; there are a bunch of programming tricks for that that work as well now as they ever did. If you can, use a hand-optimized, architecture specific library like BLAS. As a last resort, rewrite tiny bits of performance critical code in a language like Fortran 77, where the compiler may be able to do a bit more optimization than C/C++.
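
    The classic illustration of "access memory sequentially" is loop order over a 2-D array (a sketch of mine, not from the parent): in C, the rightmost index is the one adjacent in memory.

    #define N 1024
    static double m[N][N];

    /* Row-major friendly: the inner loop walks consecutive addresses,
       so each cache line fill serves several iterations. */
    double sum_fast(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    /* Same result, but the inner loop strides N*8 bytes per step,
       touching a new cache line (and eventually a new page) each time. */
    double sum_slow(void)
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }

    (Fortran lays arrays out column-major, so there the fast order is the transpose of this; worth remembering when rewriting inner loops in Fortran 77 as suggested above.)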

    If a processor, compiler, or system architecture requires any more specific hacks to reach its stated performance, then for practical purposes, its performance is overstated. The only way to know is to run your code (or a set of benchmarks similar to your code) on it and see whether it runs fast enough.

  • by The Optimizer ( 14168 ) on Thursday November 07, 2002 @02:39AM (#4615082)
    I have worked on low-level systems for commercial PC games for over 6 years now.

    When I started in the mid 1990's, the current thinking about optimization among those who cared was all about reducing cycle counts and pairing instructions for the Pentium. Memory system and bus behavior was mostly ignored or assumed to be rendered irrelevant by on-chip caches.

    During this time, while I was working on the graphics core for Age of Empires, I had lunch with Michael Abrash, who was at id software working on Quake at the time. While eating Mexican food, he casually mentioned the results of some memory bandwidth testing he had done and how he was shaping the rasterizer to make use of the time spent waiting on memory writes. This interested me enough to perform similar tests on my own work, and the results were telling.

    I wound up with core rendering code that, if you used the conventional cycle-counting wisdom of the time, appeared to be slower than what it replaced... but in fact was faster, especially for various effects processing. Both games had very large hand-written assembly software rendering routines, on the order of 10K+ lines.

    The reason for this, of course, was that memory bandwidth was being maxed out, and with clever restructuring of code it was possible to put the wait time to use on related processing, even if the code appeared to be more awkward and cumbersome that way. Though the exact memory behaviors would vary from system to system, one thing that was true, and only got more so, was that CPU speed was outstripping memory speed. Games like Quake and Age of Empires would have to process, in what usually amounts to a mutated memory copy, large amounts of textures or sprites each frame; so the data in question was pretty much guaranteed not to be in the CPU caches.

    You would think that with the current generation of games using Hardware 3D only, this issue would be reduced to upload speed across the AGP Bus, but if Age of Mythology is any indication, that's not going to happen. In Age of Mythology we were able to make some significant performance gains by using the same techniques of coding to make the most of the slower speed and latency of main memory.

    As long as the effort keeps paying off in increased FPS rates, we're going to be coding our games to account for and best deal with the realities of how the CPU relates to and waits on cache and system memory.

  • That same content was previously a duplicate article... meaning that it's the third time here that I know of.

    As in, on Slashdot at least 3 times in a short time.
  • by Anonymous Coward on Thursday November 07, 2002 @03:00AM (#4615125)
    There are too many issues, and it gets too complex quickly.

    For example, a few synchronization commands, and eieio paranoia when not needed in drivers, can slow down IO.

    A good PCI-X capable Fibre Channel card on a Mac can get 49 microseconds per complete genuine 512-byte IO (over 20,000 IOs per second), and that's per channel, but just a few mistakes in the hardware interrupt handler or misunderstood cache-coherency paranoia can add many microseconds.

    Even the fastest direct IDE cannot get speeds that fast (49 microseconds).

    And SCSI 320 barely does.

    But what about the REAL WORLD? As we all know from the press releases of the RC5 competition, a standard Mac G4 laptop was over twice as fast as Pentium 4 desktop units.

    In fact, Apple only sells dual-CPU systems now, and the ones they sold in Feb 2002 got over a 21,129,654 RC5 keyrate for dual 1.0 GHz Macs.

    The fastest AMD boards, dual CPU, no L3 cache available, get only a 10,807,034 RC5 keyrate!

    Half for AMD,

    way less than that for Pentium 4.

    Why? The Pentium 4 lacks a good 32-bit barrel shifter (4-clock latency on a left shift!).

    Why is the AMD so slow? Perhaps because it has no L3 cache, though the object code and data set of the RC5 benchmark (get the source yourself) fit in the AMD's L2 cache.

    Cold memory random read and write is FASTER on Macs than DDR machines, as seen in benchmarks, but this author does hit upon that topic indirectly a little. Even if Macs in Feb 2002 were faster than AMD for scattered random read and write, the current 3 desktop Macs all use DDR RAM now so they probably lack the speed boost for that action, but they do have write aggregation (combined writes) across the PCI bus and other tricks.

    Macs also have a lot of other little advantages to offset the penalty of huge RISC instructions... a great C-language way of programming the SIMD execution engine (called AltiVec by Moto), and its SIMD is very good. Its SIMD has a few very minor assists for RC5, but as experts have shown, removing them competently does not cripple Apple's speed much.

    The fastest Macs have always had the fastest GENUINE IO.

    In fact, copying data in 1992 was twice as fast to do for real using RAID than copying to /dev/null (nothing transferred) on a high-end Sun!

    People complained that /dev/null was not optimized.

    The truth is that commands that transfer data using cache controller tricks, without going through CPU registers, help Macs out enormously. Motorola 040 machines transfer 128-bit-aligned data 16 bytes per cycle using the strange and special cache controller command (trick) called Move16.

    Move16 made the Sun servers look slow and silly, not the badly written /dev/null.

    In 1995 I saw with my own eyes six Seagate ST12450W drives (each had two heads per surface, very very rare drives) transfer almost 65 megabytes per second sustained on a high-end Mac.

    That was 7 years ago, and the fastest PC for all the money you had, with the fastest Adaptec controller you could find and the best RAID, was LESS THAN ONE FIFTH AS FAST.

    And now in 2002 you have people endlessly worrying about AGP and PCI-X without understanding those are OUTPUT tweaks, not INPUT speedup tweaks, and people trying to speed up the streaming speed of RAM faster and faster without realizing that the speeds of the L1 and L2 caches are key.

    Or the ability to SHARE the L2 cache amongst multiple CPUs.

    The hidden "backside only" cache of the Pentium 4, and older Macs, is the reason you could only have one CPU.

    Having two or more fast, low-voltage, high-speed CPUs is key to performance in 2003.

    You cannot do this with the Pentium 4; you need to use expensive Xeons if you want 2 Intel chips on one board, else use a Pentium 3.

    And Pricewatch this week shows an 800 MHz Itanium from Intel (the base model now) at over 7 thousand dollars.

    7 thousand! No wonder 6 or 8 box vendors dropped plans to use Itanium this year. Geeeez.

    FAST L2 and L3 cache is where it's at.

    The latest Mac CPUs to come out in a couple months (not the Power4-based ones in August), the Moto ones, will allow 4 megabytes of L3 cache instead of 2, and have a staggering 512K of L2 cache running at 1 GHz, instead of 500 MHz.

    I did not even think that was possible in today's world.

    Feeding a RISC chip is harder than an Intel one, because the code cache only holds half as much logic with the wasteful 32-bit opcodes, but the ALIGNED data, the sweet wonderful Mac-world ALIGNED DATA, helps the Mac enormously.

    There is no "PACK(1)" pragma for C structures on a Mac.

    I am not kidding.

    It's not part of the Mac experience.

    True, many fields are 2-byte aligned instead of 4-byte aligned at times, but since 1995 Apple has stressed 32-bit-aligned integers and 64-bit-aligned quads religiously.

    Macs perform well because of ALIGNMENT of structures.

    Do architecture people understand how many obscene PACK(1) (8-bit aligned) structures there are in Win32?

    Do they even code on multiple systems?

    I do. If you use a 64-bit integer that is 2-byte aligned on a Pentium and pass it as an argument to MS Win32, it will silently fail in some of its timer routines. That never happens on a Mac; plus Mac routines tend to paranoia-check input a little more often, but not always.

    Multiple registers help a coder.
    Multiple registers help assembly coders avoid push-pop hell.

    People need to think about those things too before wasting time religiously bragging about the high-end streaming speed of RAM.

    Ever timed REAL IO? Real IO pumped from card to card using good back-to-back DMA, faster than it could ever be moved using conventional single registers?

    Architecture is all about asking why.
    Why use floppy disks in 2002?
    Why use big hot parallel printer connectors in 2002, or ever? (The IBM CHRP reference spec demanded one on handhelds!)
    (The IBM "PReP" spec required a Centronics connector on handhelds too!!! The MS Win 95 spec insisted on it strongly, but said SCSI was not highly important.)

    Why use ISA in 2002?
    Why use hot, hot, steamy chips that do lots of speculative branching, eating up power? Apple's fastest machines use microcontrollers. I kid you not. They are using MICROCONTROLLER CPUs with very, very short pipelines, very little speculative branching, and very low power requirements.

    Why use PS2 keyboards?
    Why insist on VGA at boot?
    Why insist on legacy BIOS calls that have no relevance except for ancient OSes that are not even guaranteed to run by motherboard vendors?

    I respect legacy too, but Apple spurned all of these in 1984. Yup. Macs never had any of that slop, though they do have Open Boot-style PCI, and now use VGA-style connectors (though the connectors have detect diodes in them to see what size monitor you have), and now have IDE as the default drive, though it performs very fast versus PCI bus contention. In fact, Apple's 14-drive server uses 14 IDE controller chips, one for each of the 14 IBM GXP 120-gig drives. 14 chips! 14 masters! Each pumping 35 megabytes sustained or more, and for only 15,000 bucks with Fibre Channel. (Unfortunately it's a 3U, but the drives are cold.)

    I think it's funny that people try to write papers yapping about things that can change rapidly in one or two years, or have little bearing on true IO speeds.

    The sad truth is that right now... RIGHT NOW... in November 2002, not ONE motherboard on Pricewatch or for sale that I know of supports PCI-X, except for rich-man Xeon and rich-man Itanium.

    NONE.

    No Pentium 4 with PCI-X, no Mac (though the Apple Xserve is 488 megabytes per second per slot), no MP AMD, and no AMD Thunderbird class.

    Just vapor-hardware and promises for 3 straight years.

    Now AMD says they will give fast PCI only to Hammer chips, and Hammer chips are getting horrible benchmark speeds.

    Does anyone realize how pathetic PCI slots are in 2002?

    I have in my machine 3 different PCI-X cards, and I have to run all of them at slower speeds even though some are capable of 770 megabytes per second bidirectionally (in-out simultaneous) at 133 MHz.

    This world sucks.

    And RAM? Don't make me laugh! Try to find an AMD board that takes 4 gigabytes of RAM and USES it as fast as the fastest AMD can. Every tweaker site says you can only use one 512MB part and have a max of 512MB.

    That's insane. I have not one machine with less than 768 MB in this house, and my main Mac from 1995 supported and allowed a single user process to hold and lock (physical, real RAM) 1.5 GB of memory.

    In 2002 no Linux with any normal tweak allows a user task to hold and lock 1.5 GB of real RAM; it's all virtual or fake.

    Even most UNIX never allows more than 3 GB of physical REAL RAM in total usage ever... it's all wasted on bad VM designs.

    Nobody cares. Everyone says "I know 7 different Unix OSes that support 4 GB of RAM" and then you have to remind them that VM is not RAM, that physical RAM can easily be proven to be there or not, and that no Intel Unix allows tasks to utilize 1.5 gigabytes of real physical RAM normally. And even if NetBSD is hacked, it runs no shrinkwrapped software. All shrinkwrapped software is Mac or Windows.

    Thankfully Apple is migrating to a 40-bit physical address space soon, in August, with the new lightweight Power4.

    Does anyone think this nightmare of no real physical RAM in OSes is a real problem or not?

    Sure, NT has a /3GB switch, and another version allows bank-switching 16 GB of RAM slowly, but no NT system allows a single process to utilize over 1 GB of real, physical, genuine RAM (critical for FDTD 3D energy simulations).

    Arrrgh! I hate all this least common-denominator lowest-cost-component world.

    Fake power supplies that lie about ratings over 450 watts

    cheap-ass capacitors that heat up, blow, and leak because tantalum costs too many extra cents

    traces that corrode instantly in salt air near ANY coast, especially in Florida

    fans that silently die and expensive fans doing the same

    drives that have 34% failure rates after 18 months of usage (Fujitsu lawsuit, IBM lawsuit)

    And to think that people try to make themselves feel good that they can move memory from one area to another quickly using RAM streaming commands. BIG DEAL! Try moving it to a disk drive, or through a network connector, or to another CPU. (Many multi-CPU designs cap inter-CPU speed to 50% or 25%.)

    Who cares about RAM streaming! Bus contention, PCI latency, and cold RAM jump reading are far more critical issues.

    But no one cares. They just want to download MP3s, porn, DVD rips, and console warez, and you can do that on any 5-year-old box.

    What a terrible world when a hard drive from Seagate in 1995 allowed 12 megabytes per second SUSTAINED, and in Nov 2002 the fastest single-spindle drives sustain only 39 MB per second or so.

    What garbage.

    And the PCI bus is not 50 times faster after all these years, or 40 times, or 20 times faster, or 10 times faster; it's so slow even at 64-bit, 66 MHz that I want to just cry.

    • Very interesting AC. Get an account, please, we need more like that.
    • Wow it sure is nice,

      To be able to read a 2 page long comment.

      Especially when it would only normally be a small paragraph.

      Except that the author thought that it wasn't long enough.

      So they typed it like this,

      And made everyone hate them.
      • by Anonymous Coward
        Considering each line is a different idea, you fool, you would have a 10-page article if each line were expanded into an eloquent paragraph. Additionally, with requisite sentences crafted at the beginning and end of each paragraph, over 30% would become filler. If you understand the principles of reading, you will know, based on psychological testing of comprehension and legibility speed, that horizontal sentences with whitespace above and below are rapidly read in 8-word clusters by people with high IQs. I bet you are one of those anal fools that used text-mode white-on-black fixed-point MS-DOS text through the 1980s and 1990s while everyone else went Macintosh-style modern fontography and legibility. In reality you are a closet Mac bigot and hate yourself for not knowing anything concrete to criticize except poking fun at the extra linefeed characters that separate the countless separate topics in the post. Did you ever think for a moment that perhaps the extra line feeds were placed there DELIBERATELY, just to provoke people similar to yourself? Well, it's probably the case, so the intention and effort reached their mark well. Long live AGIT-PROP!

    • In 2002 no Linux with any normal tweak allows a user task to hold and lock 1.5 GB of real RAM; it's all virtual or fake.

      False for not-pretend 64-bit architectures (e.g. UltraSparc) and has been for years.

    • fans that silently die and expensive fans doing the same


      Really sad thing is: we could get by without those fans at all. Run the CPUs a few percent slower and use non-power-hog architectures, put a real heatsink on the PSU instead of toys and a blower, put multiple heads in the drives instead of spinning them faster (or better still, install more RAM so the disk gets hit less often).

      And seal the case up completely. No corrosion problems - with optical connections and batteries (machine consuming a fraction of the power that your desktop P4 does) you could in theory take your computer swimming. What would you call a mouse that operates in water? An eel?

      And how about `level 5' cache: buckets of slower, low-power, low-cost RAM for swapping, temp files, disk cache etc?
    • by Jay Carlson ( 28733 ) on Thursday November 07, 2002 @11:47AM (#4617319) Homepage
      Here we go again. I really don't have all day to poke holes in this, and because I'm actually trying to cite and verify, I'm going to completely miss the moderation window and lose readership. While some of the claims are correct, don't assume I agree with any of them just because I didn't refute them.

      A good PCI-X capable Fibre Channel card on a Mac [...]

      There are no Macs that support PCI-X. I am therefore suspicious of the numbers you claim for this configuration.

      Next, RC5. The rant here seems similar to another Anonymous Coward post back here [slashdot.org]; I'm not going to copy in my response [slashdot.org] again; quick summary: I didn't buy my computer to run RC5 really fast, and neither did you.

      Cold memory random read and write is FASTER on Macs than DDR machines, as seen in benchmarks, but this author does hit upon that topic indirectly a little. Even if Macs in Feb 2002 were faster than AMD for scattered random read and write, the current 3 desktop Macs all use DDR RAM now so they probably lack the speed boost for that action, but they do have write aggregation (combined writes) across the PCI bus and other tricks.

      This paragraph is confused. Yes, "cold start" memory latency is very important for many tasks, and is often overlooked. But how can the first sentence be true when many Macs are DDR machines? And where are these benchmarks? I just went looking for DDR Mac latency scores and couldn't find anything. Does anyone have lmbench memory latency numbers for the Xserve or the current PowerMacs? Oh, and write combining is hardly a Mac trick.

      The hidden "backside only" cache of the Pentium 4, and older Macs, is the reason you could only have one CPU.


      Incorrect. You just need a cache coherency protocol between your processors. "Backside" has nothing to do with it. For example, the dual-processor Pentium III box I'm typing this on has "backside" cache on each processor; it's just hidden inside the CPU packaging rather than brought out to extra pins to connect to an external cache.

      There is no "PACK(1)" pragma for C structures on a Mac.

      #include <stdio.h>

      struct foo { char c; int i; } __attribute__((packed));
      struct foo foo_inst;

      int main(void) { printf("%d\n", (int)((char *)&foo_inst.i - (char *)&foo_inst)); return 0; }


      happily returns "1" on 10.2. In fact, if i doesn't cross a double-word boundary, there is no penalty for use on later CPUs. Yes, I just verified this on the G4 downstairs.

      And RAM? Don't make me laugh! Try to find an AMD board that takes 4 gigabytes of RAM and USES it as fast as the fastest AMD can. Every tweaker site says you can only use one 512MB part and have a max of 512MB.

      Although you can't get the absolute, topped out single-CPU performance with it, dual-CPU boards like the Tyan ThunderK7Xpro support up to 4G of registered PC2100 RAM now; these boxes still comfortably beat current top-end G4s at tasks like SPEC CPU2000. If you really want a lot of memory you'll have to get a box from a major vendor; the Dell PowerEdge 6650 [dell.com] comes to mind as a 16G machine. Unfortunately, there aren't any AMD boxes out there like this that I know of, but Hammer will change that.

      In 2002 no Linux with any normal tweak allows a user task to hold and lock 1.5 GB of real RAM; it's all virtual or fake.

      Get an Alpha. Although I have no direct experience with this, reliable sources claim you've been able to go past the 32-bit 4G address space limit for several years.

      Thankfully Apple is migrating to a 40-bit physical address space soon, in August, with the new lightweight Power4.

      Why wait? Apple isn't the only vendor out there.
    • In most operating systems with memory protection, ALL the memory is virtual, from the standpoint of a userland program. It's up to the OS, not the hardware, to decide whether that virtual memory address is going to be mapped onto RAM or disk.
  • Calculating Latency (Score:5, Informative)

    by SailorBob ( 146385 ) on Thursday November 07, 2002 @04:07AM (#4615264) Homepage Journal
    From:
    Ace's Guide to Memory Technology [aceshardware.com]

    Basically, the latency of the whole memory (From FSB to DRAM) system is equal to the sum of:
    1. The latency between the FSB and the chipset (+/- 1 clock cycle)
    2. The latency between the chipset and the DRAM (+/- 1 clock cycle)
    3. The RAS to CAS latency (2-3 clocks; activating the right row)
    4. The CAS latency (2-3 clocks; getting the right column)
    5. 1 cycle to transfer the data.
    6. The latency to get this data back from the DRAM output buffer to the CPU (via the chipset) (+/- 2 clock cycles)
    This gets you the first word (8 bytes). A good PC100 SDRAM at CAS 2 will have a latency of about 9 cycles, and in the next 3 cycles another 24 bytes will be ready. The PC100 SDRAM will, in this case, be able to get 32 bytes in 12 cycles.

    If you want to calculate the latency that the CPU sees, you need to multiply the latency of the memory system by the multiplier of the CPU. So a 500 MHz (5 x 100 MHz) CPU will see 5 x 9 cycles of latency. This CPU will have to wait at least 45 cycles before the information that could not be found in the L2 cache will be available in the cache.
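
    The arithmetic from that last paragraph, spelled out as a trivial sketch (numbers straight from the quote):

    #include <stdio.h>

    int main(void)
    {
        int mem_latency = 9;                 /* PC100 CAS-2 first-word latency, in bus cycles */
        int bus_mhz = 100, cpu_mhz = 500;
        int multiplier = cpu_mhz / bus_mhz;  /* 5 */
        printf("CPU stalls %d cycles on an L2 miss\n", mem_latency * multiplier);  /* 45 */
        return 0;
    }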

    • Wouldn't light speed be an issue too? I did an estimation a while ago, and at 4 GHz a clock cycle should be lost while waiting for the signal to travel over the wire. Sure it's not a huge difference if you add it to the 45 cycles above, but as CPUs get faster it'll grow.
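
      For what it's worth, the back-of-the-envelope version (vacuum numbers; signals in copper traces propagate at roughly half to two-thirds of c, so the real reach per cycle is even shorter):

      #include <stdio.h>

      int main(void)
      {
          double c = 3.0e8;  /* speed of light, m/s */
          double f = 4.0e9;  /* 4 GHz clock */
          printf("%.1f cm per cycle\n", (c / f) * 100.0);  /* ~7.5 cm */
          return 0;
      }
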
  • by RockyMountain ( 12635 ) on Thursday November 07, 2002 @04:18AM (#4615295) Homepage
    How can an article about frontside bus and memory latency entirely ignore the concept of request pipelining? Huh?

    And why all that complex hand-waving about practical upper limits to burst length? He gave all kinds of secondary limiting factors, but missed the obvious one: the simple argument that long bursts are useless unless you have a reasonable expectation that the speculatively fetched portion of the data will be consumed. Moving lots of data fast is only useful if a substantial fraction of it is data that you care about.

    (It's the same reason that there's an upper bound on the useful cache line size.)
  • by SailorBob ( 146385 ) on Thursday November 07, 2002 @04:26AM (#4615320) Homepage Journal
    More From Ace's:

    Athlon XP 2800+: 333 MHz FSB and nForce 2 [aceshardware.com]

    First of all, we tested the Athlon XP 2800+ on the "normal" KT333 platform with a 17x multiplier, the FSB set at 133 MHz DDR (266 MHz) and the memory set at 166 MHz DDR (333 MHz), CAS at 2, RAS to CAS at 3, Precharge at 3. The second time, the KT333 platform (ASUS A7V333) was set at a FSB of 166 MHz (333 MHz) and the multiplier was set to 13.5x.

    ...

    Where do I start? There is an enormous amount of info hidden in this table. Let us first start with the 266 MHz versus 333 MHz FSB discussion.

    There have been many reports that show that the Athlon does not benefit much from an increase in FSB clockspeed, moving from 266 MHz to 333 MHz. But Membench tells us exactly why. First of all, compare the two KT333 latency numbers (64 byte strides). All BIOS settings were exactly the same, only the FSB speed, and thus the multiplier, are different. Normally one would expect, everything else being equal, that the Athlon with the 166 MHz FSB would see 25% lower latency, but the CPU with the 166 MHz FSB version actually sees a higher latency! This shows that the (ASUS) KT333 board, in order to guarantee proper stability, increases certain latencies of the memory controller. Memory bandwidth increases by 14%, which is also less than expected.

    Now what does this mean for "real world" performance? It means that many applications will see either a very small performance increase or none at all, as it is latency and not bandwidth that is the most important performance factor. Let us explain this in more detail.

    • Now what does this mean for "real world" performance? It means that many applications will see either a very small performance increase or none at all, as it is latency and not bandwidth that is the most important performance factor. Let us explain this in more detail.

      The real world scoop on this is that someone typing a document in OpenOffice or surfing the internet won't see any performance increase over a Pentium-II 233 MHz machine with 64MB of 60ns RAM. Gamers might get 46 gajillion frames per second instead of 42 gajillion frames per second, which is completely indistinguishable to humans, so they won't notice either.

      I, on the other hand, might see a simulation of a 5250-node electromagnetic scattering problem take 36 hours instead of 39 hours, which is quite significant. But, I would probably get the same increase in performance by going through and cleaning up my code a little. FORTRAN is funny that way...

      Making computers faster to the nth power only makes code that's worse to the n+1th power :)

  • by gwappo ( 612511 ) on Thursday November 07, 2002 @04:54AM (#4615397)
    What I couldn't find in the article is that it is possible to reach the maximum transfer rate of SDRAM because an SDRAM chip has multiple banks - e.g. one can issue a command to one bank while still receiving data from another.

    This hides latency, since read commands to one bank can be issued in parallel with incurring the CAS latency of another.

    There are more details on this in the SDRAM specification (lost the URL, but it's out there - I think it's Intel who wrote it, though).

  • Read this (Score:2, Informative)

    by chthon ( 580889 )

    Everyone who is interested in issues of bandwidth and latency should read this book :

    Computer Architecture : A Quantitative Approach, by John L. Hennessy and David A. Patterson

  • It would be useful, for people who can't dedicate much time to reading /., to state clearly in the headline what the article is really talking about: opcode latency? Memory latency? Network latency?
  • Old sayings about bandwidth:

    - Never underestimate the bandwidth of a station wagon filled with magnetic tape.
    - What's the fastest way to get 1 TB of data from LA to NYC? FedEx.

    We can translate that to modern terms but the idea is the same. Just because bandwidth is high, doesn't mean that the latency is low.
    • There are a few different versions of this (google comp.arch to see some examples):

      "Money can buy bandwidth, but latency is given by God"

      (You can always increase bandwidth by adding more bits, but the speed of light is fixed...)
  • This article could have been summarised in about 400 words; in fact I would do it myself if I hadn't got a deadline to meet. This is old, old stuff. So here comes a boring oldtimer bit of information.

    In the distant past, embedded systems used EPROMs that were rather slow, so memory access needed several wait states - the author doesn't seem to know this ancient term - while the EPROM went "duh, that's address #F0F0, better go back in the stores and find the data". So as soon as fast RAM was cheap enough, we would load the EPROM contents into RAM at power up (or at least the frequently accessed bits) and then run from RAM, where no wait states were needed. This was usually a 50% performance boost without changing the processor.

    And there you have it. Substitute L1 cache for fast RAM and DRAM for EPROM, and despite the fanciness of the modern technology, and the enormously bigger memory space, nothing has really changed.
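
    The shadow-copy trick described above, as a hedged sketch in C (the addresses, sizes, and names are all invented; real firmware does this in startup code before main runs):

    #include <string.h>

    /* Hypothetical memory map for an old embedded board. */
    #define ROM_BASE  ((const unsigned char *)0x00F00000)  /* slow EPROM, wait states */
    #define RAM_BASE  ((unsigned char *)0x00100000)        /* zero-wait-state RAM */
    #define CODE_SIZE 0x8000

    void shadow_rom(void)
    {
        /* Copy the EPROM image into fast RAM once at power-up... */
        memcpy(RAM_BASE, ROM_BASE, CODE_SIZE);
        /* ...then remap or jump so all later fetches come from RAM,
           eliminating the per-access wait states. */
    }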
