Data Storage Hardware

Memory vs. Disk vs. CPU: How 35 Years Has Changed the Trade-Offs (wordpress.com) 103

Long-time Slashdot reader 00_NOP, a software engineer with a PhD in real-time computing, revisits a historic research paper on the financial trade-offs between disk space (then costing about $20,000 per kilobyte) and (volatile) memory (costing about $5 per kilobyte): Thirty-five years ago that report for Tandem Computers concluded that the cost balance between memory, disk and CPU on big iron favoured holding items in memory if they were needed every five minutes, and using five bytes to save one instruction.

Update the analysis for today and what do you see?

Well my estimate is that we should aim to hold items that we have to access 10 times a second.
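For readers who want to see where such a threshold comes from, here is a minimal sketch of the break-even arithmetic, assuming the classic Gray/Putzolu formulation of the five-minute rule; the prices and rates below are illustrative placeholders, not figures from the paper or the blog post, and plugging in different hardware numbers moves the answer by orders of magnitude.

    // Sketch of a five-minute-rule style break-even calculation
    // (assuming the Gray/Putzolu formulation; all numbers are placeholders).
    public class BreakEven {
        public static void main(String[] args) {
            double pagesPerMBofRam = 128;          // 8 KB pages
            double accessesPerSecondPerDisk = 200; // random IOPS one device sustains
            double pricePerDisk = 100.0;           // $ per drive
            double pricePerMBofRam = 0.005;        // $ per MB of DRAM

            // Keep a page cached in RAM if it is re-referenced at least once
            // every breakEvenSeconds; otherwise it is cheaper to re-read it.
            double breakEvenSeconds =
                    (pagesPerMBofRam / accessesPerSecondPerDisk)
                    * (pricePerDisk / pricePerMBofRam);

            System.out.printf("Break-even re-reference interval: %.0f seconds%n",
                    breakEvenSeconds);
        }
    }

With those made-up numbers the interval comes out around 12,800 seconds; the point is the shape of the trade-off, not the specific value.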

And needless to say, some techniques for saving data space are more efficient than they were 35 years ago, their article points out.

"The cost of an instruction per second and the cost of a byte of memory are approximately equivalent — that's tipped the balance somewhat towards data compression (eg., perhaps through using bit flags in a byte instead of a number of booleans for instance), though not by a huge amount."


  • Reality (Score:5, Funny)

    by Tough Love ( 215404 ) on Sunday November 22, 2020 @04:42PM (#60754720)

    Does of reality. The software world is now dominated by self described PHP gurus living in an echo chamber where 19 layers of slushy "frameworks" that slows down the internet by a factor of 100 is easier and cheaper to stitch together than anything remotely resembling competent software engineering. These clowns have no clue whatsoever what a latency hierarchy is. For them, an article like this is just dogs watching television.

    • Re:Reality (Score:5, Informative)

      by Tough Love ( 215404 ) on Sunday November 22, 2020 @04:43PM (#60754724)

      Dose even. Slashdot: let me edit my posts, or did you forget how to code?

      • They can't (won't? the lack of communication continues to be a problem) even write simple text filters that are worth a damn - would you trust them to write code that interacts with the database? Personally, I think the lack of editing here is a plus, and the sense of peril is something that has always drawn me to this site. If it ever gets implemented, it just wouldn't be the same, and I'd have one less reason to use it over any other discussion forum. An extremely short time limit (10 minutes tops, same a
      • Re:Reality (Score:4, Funny)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday November 23, 2020 @10:01AM (#60756738) Homepage Journal

        Slashdot: let me edit my posts, or did you forget how to code?

        You can edit your posts after previewing them, but before submitting them. Or did you forget how to preview?

        Letting people edit posts is a misfeature that leads to confusion. If correctness were important to you, you would have used preview.

        • Letting people edit posts is a misfeature that leads to confusion.

          Sounds good on paper. So why doesn't that happen on Reddit?

          • Letting people edit posts is a misfeature that leads to confusion.

            Sounds good on paper. So why doesn't that happen on Reddit?

            Because reddit prides itself on being a shithole where people write something in reply to "Comment deleted by user".

            My personal favourite is the fuckwits who lose an argument, call you names so the reply appears in your recent feed and then delete their message so that you can't reply to them.

            Does that sound like the kind of bullshit you want for Slashdot? I mean you see what ACs here did with endless nazzi ascii art, do you think the world would be better with even less accountability?

            No that was not a typ

            • Yah no, that's not my experience. But feel fine in your bubble, that's your prerogative.

              • If you haven't come across it then it's not me living in a bubble. I invite you to slide those sliders at the top of the screen to -1 if you feel the need to remind yourself that yes, there are enough fuckwits out there just dying to abuse any commenting feature you give them.

          • Having hardly ever felt a temptation to go to read something on Reddit (and even less often actually gone there - maybe 2 or 3 times in however-long Reddit has been going), isn't the whole of Reddit well described as a "mis-feature"?
            • Each subreddit has its own rules and moderation or lack of it. Quality varies enormously between them, as does civility.

              • I'm not here to advertise Reddit. Rather, to throw rotten fruit at Slashdot owners for a deficient UI that is apparently frozen in time.

                • The UI that's frozen in time (classic) is the best UI on the site.

                  The lack of Unicode support - even just an allow list of commonly needed characters - is embarrassing, but that's more an architectural than an interface problem.

        • Agreed that unrestricted editing would be bad. But it would be nice to be allowed to amend, while putting a strikethrough mark on the portion of text you're correcting. That way the original text is still available so you can't pretend you never made the error. But the correction is still placed front and center right next to the error, instead of you needing to reply to yourself and hope someone mods up your correction as much as they do your main comment containing the error.

          But it may all be putting
    • Going way back here, but one of the projects I worked on was coding real-time embedded radar systems, back when available space was never close to a megabyte. Our EEPROM was never that generous, and the system worked. No disk, little RAM. Gotta go, my lawn needs watering.
    • Re:Reality (Score:5, Insightful)

      by realmolo ( 574068 ) on Sunday November 22, 2020 @05:02PM (#60754800)

      You're right, of course, but...does it matter? Making it easier to write useful software, even at the expense of efficiency, is a good thing.

      And don't forget - the stuff that REALLY needs to be fast still is. Yeah, most websites have horribly inefficient back-ends, but so what? It doesn't matter. Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.

      It's easy to become nostalgic for the "old days" when developers could realistically know *every single thing* about the hardware they were using, and the software they wrote used every resource possible. But it's also easy to forget how limited software used to be. Every piece of software was an island. Communication between different programs was almost non-existent. It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible. It's a mess, but it's a beautiful mess.

      • Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser - damn near an operating system itself, and chews up a few GB of RAM for the simplest of applications. But for the most part, yeah, I agree. People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.

          Hey, at least our "ridiculously limited" old systems didn't get infected with ransomware every other week. Or require permission from Apple, every time you went to use the thing...

        • Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser

          I can't help but notice that the Electron app seamlessly runs on anything. I tinker with the code in Visual Studio on Windows, do the layout on one screen with one aspect ratio, and with a quick push of a button that app is running on an ARMv6 platform under Linux, working completely identically to how it did in Windows on x64 or x86 - honestly I don't even know what the arch target was. All the while letting novices like me who've done little more than some HTML and JavaScript with a side of C for microcontrolle

        • Echo, meet chamber.

        • by teg ( 97890 )

          Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser - damn near an operating system itself, and chews up a few GB of RAM for the simplest of applications. But for the most part, yeah, I agree. People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.

          Remember the classical example of bloat? Eight Megabytes And Constantly Swapping [gnu.org].

        • That should only be a problem when you are running several Electron apps AND:
          have EMACS open - at the same time.

        • Emacs is my favorite operating system. The text editor is a bit weak, but fairly good overall.

      • by Anonymous Coward

        All those extra cycles to fill Facebook's dossiers on citizens matter for energy consumption. And across all datacenters, millions of processors that could be running at a much lower load also mean less thermal output. The end result is less hardware/infrastructure required, and therefore less energy input.

      • Re:Reality (Score:5, Insightful)

        by quonset ( 4839537 ) on Sunday November 22, 2020 @05:45PM (#60754890)

        It's a mess, but it's a beautiful mess.

        Yes, because nothing says a beautiful mess like needing to run ten scripts just to play a video on a web page, or needing at least sixty scripts to display a web page. And that doesn't include all the other cruft needed so people can look at cat pictures.

        Software expands to fill the available memory [embeddedrelated.com]. As a result, we need to have faster processors and more RAM just to keep the speed of current software running the same as previous software running on slower systems. That does not sound beautiful.

        • How many times have you counted the number of scripts running on a website? Or how many bolts hold the engine together on your car? The reality is that "beautiful" here means "looking pretty" and "doing what it's supposed to do". If that means 60 scripts, then execute away; my computer is otherwise idle anyway.

          Which brings me to my next point: Software expands to fill the available space because space is what restricts software. It's been a solid 20 years since someone upgraded a general purpose computer because of

          • Most users have RAM sitting there being wasted.

            "Wasted?"
            Not being allocated to a process, not being used by a single process when it doesn't need it, isn't "wasted." It's there to be used by other processes, or by the O/S as cache, among other purposes.

            Seriously. I don't know where this idea that anything less than all your RAM being used all the time started being seen as bad, but it's fucking stupid.

            Unless you like hitting swap, and going from that to continually causing thrashing.

            • Now, eating up memory "JUST BECAUSE" - without a need to use it, where other processes can't use it when they need it, and in a way that brings in unnecessary overhead from invoking the O/S to clean up your mess - that's by definition wasteful: of memory, AND CPU cycles IMO.
            • Not being allocated to a process, not being used by a single process when it doesn't need it, isn't "wasted."

              Yes it is. RAM is the fastest form of storage for CPU-based activities in the system. Any RAM not actively being used is potentially costing performance: data that could have been cached there instead has to be fetched from slower storage when it's needed.

              I don't know where this idea that anything less than all your RAM being used all the time started being seen as bad, but it's fucking stupid.

              LOL the Linux kernel developers would like to talk to you about your views.

              Unless you like hitting swap, and going from that to continually causing thrashing.

              Allocating releasable RAM and filling it with data does not cause you to hit swap when another application needs it. You have a lot to learn about how computer memory works.
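              To make that concrete, here is a minimal sketch, assuming Linux (kernel 3.14+ for MemAvailable) and Java 11+: it contrasts MemFree with MemAvailable from /proc/meminfo, and the gap between them is largely page cache that the kernel hands back the moment a process actually asks for the memory.

                  import java.io.IOException;
                  import java.nio.file.Files;
                  import java.nio.file.Path;
                  import java.util.LinkedHashMap;
                  import java.util.Map;

                  // Sketch: "free" RAM and "available" RAM are not the same thing on Linux,
                  // because otherwise-idle pages are used as reclaimable file cache.
                  public class MemInfo {
                      public static void main(String[] args) throws IOException {
                          Map<String, Long> kb = new LinkedHashMap<>();
                          for (String line : Files.readAllLines(Path.of("/proc/meminfo"))) {
                              String[] parts = line.split("\\s+");   // e.g. "MemFree: 123456 kB"
                              kb.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
                          }
                          long freeMB  = kb.getOrDefault("MemFree", 0L) / 1024;
                          long cacheMB = kb.getOrDefault("Cached", 0L) / 1024;
                          long availMB = kb.getOrDefault("MemAvailable", 0L) / 1024;
                          System.out.println("MemFree:      " + freeMB  + " MB");
                          System.out.println("Cached:       " + cacheMB + " MB (page cache, reclaimable)");
                          System.out.println("MemAvailable: " + availMB + " MB (what you can actually allocate)");
                      }
                  }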

      • Comment removed based on user account deletion
      • Re:Reality (Score:4, Informative)

        by lrichardson ( 220639 ) on Sunday November 22, 2020 @08:10PM (#60755166) Homepage

        "Making it easier to write useful software, even at the expense of efficiency, is a good thing."

        No, it isn't. Making it easier to write software is a good thing. Losing efficiency is a bad thing.
        Saw one DB2 application replaced with a modern, fancy, graphic app ... which added zero new functionality ... and brought the system to its knees when deployed. Final tally was the new program required approximately 50x the CPU of the old. But, ya know, shiny and pretty!

        The big driver here is accountability. If the new software allows you to write something in less time than the old stuff, and it runs, well, any issues with performance can be fixed by the HARDWARE group ... it's not your problem anymore once the code works.

        • Re:Reality (Score:5, Informative)

          by tlhIngan ( 30335 ) <[ten.frow] [ta] [todhsals]> on Monday November 23, 2020 @05:08AM (#60756180)

          50 years ago, computing time and memory was expensive. Thus, having people spend time working things out on paper and coding it up on paper and optimizing the heck out of it was emphasized, because you got one run per day. So you spent hours simulating the code so it would run correctly with as few tries as possible. Tries includes assembling or compiling it, so you checked to make sure you didn't have syntax errors.

          35 years ago is the mid-80s, close to the inflection point where human time started becoming more valuable than computing time. It's when we started having interactive debuggers and compiling was just a few keystrokes away. It was cheaper to do the edit-compile-debug cycle interactively, so you could see the results instantly, than to have the human spend hours figuring it out.

          These days human time hasn't gotten much cheaper. So people use libraries to help write less code that does more. Again, computer time is cheap.

          • Both things can be true at once. Wasting all this CPU time means wasting a lot of energy, which means producing a lot of pollution. It also means a shorter upgrade cycle, which has the same problems. We get a lot of software we never could have had otherwise, but we also sell out the future.

      • Mostly things that need to be fast are. I do still run into people writing bloated software when things really are performance critical.

        It all depends. Even in the modern world sometimes you want a simple piece of code that is very efficient - and people have to remember how to write that.

        Then there are embedded systems. There is a junk food machine (new) at work where I can type inputs faster than it can process them, and I can fill up its input buffer. How in god's name can you make a modern microcont

        • Yes, but on the upside, if you crank away at it enough, there's a pretty good chance you could get free drinks out of it.

      • by Somervillain ( 4719341 ) on Sunday November 22, 2020 @10:16PM (#60755364)

        You're right, of course, but...does it matter? Making it easier to write useful software, even at the expense of efficiency, is a good thing.

        And don't forget - the stuff that REALLY needs to be fast still is. Yeah, most websites have horribly inefficient back-ends, but so what? It doesn't matter. Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.

        Check your AWS bill. Does efficiency matter? Sure, most customers aren't going to shop at Target if WalMart.com loads in .5s instead of 0.1s. However, if you're using 5x as much CPU, that does cost money. I thought the biggest hidden bright side of cloud computing was it would make organizations clearly see how much stupid is costing them money and motivate them to think things through more carefully.

        Bad engineering has costs. It costs network bandwidth when you send too much data. It costs electricity to process transactions. That heat generated has to be cooled. The number of users you can serve per server goes down. You'd never do this with a car. You'd never leave your car running for 2h for no reason. You never load up your car with bricks and leave them there for 2 years for fun. Most people turn off the TV when they're done watching. Most never leave the faucet running for no reason.

        So...why write a server side application with 20x more layers than it needs? Why use terrible tools for the job? The biggest offender I personally run across is Hibernate/JPA. I have seen so many applications load entire object hierarchies into memory, use 1% of what was loaded and throw the rest away...in a loop, across all applications. For those who don't work with Hibernate, this can be remedied by writing a query to get the exact 1% of data you need...but that requires some basic thought and a minimal understanding of the tool you're working with, and most "full stack" developers are competent in 2 tiers at most and wildly incompetent in the rest, usually the DB.

        Don't get me wrong, JPA/Hibernate and ORM generally is a great tool when used by a skilled developer, but people view tools and frameworks as religions..."if we're a Hibernate shop, it's blasphemy to write a native SQL query, even if it improves performance by 1000x" (I have literally had to fight to use native SQL to take an import job that took 15 minutes and failed regularly due to deadlock issues down to 0.2 seconds by moving from sloppy JPA to native SQL)...the team thought it too hard to understand since it wasn't vanilla Spring JPA.

        I had to deal with that "does it matter?" question. Instead of learning DB 101, they just asked the customers if they would leave the company if we didn't make that functionality faster. It is important to ask "does it matter?" for very small optimizations. However, I have no patience for people who use that logic to justify not knowing how to do their job. Take some pride in your job and learn how to do it. Your customers will thank you. Your AWS bill will thank you.

        Cloud computing eats a lot of power...it releases carbon into the atmosphere....so when you waste it, you're making your users miserable, your company poorer, and shitting on the environment....why? Because you didn't want to learn SQL? You wanted to write your app in Node (and few who do learn how to do it properly)? You thought it was too hard for trained programmers making over 150k a year to think through use cases as to whether to use the default framework or something lower level when the framework is a bad fit? I have to argue with people like you every workday. I'll say it again: take some pride in your profession. Learn how to do your job. Everyone will thank you, including me.
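        To illustrate the pattern the parent comment describes, here is a hedged sketch with a hypothetical PurchaseOrder entity (the names, mappings and the jakarta.persistence package are my assumptions, not anything from the comment; older stacks use javax.persistence): the first method hydrates every entity just to sum one field, while the second asks the database for the single value that is actually needed.

            import jakarta.persistence.Entity;
            import jakarta.persistence.EntityManager;
            import jakarta.persistence.Id;
            import java.math.BigDecimal;
            import java.util.List;

            // Hypothetical entity, reduced to the one field the report needs.
            @Entity
            class PurchaseOrder {
                @Id Long id;
                BigDecimal total;
                BigDecimal getTotal() { return total; }
            }

            public class OrderReport {
                // Anti-pattern: load every PurchaseOrder into memory, use one field each.
                static BigDecimal totalRevenueTheHardWay(EntityManager em) {
                    List<PurchaseOrder> orders =
                            em.createQuery("select o from PurchaseOrder o", PurchaseOrder.class)
                              .getResultList();               // whole table hydrated as entities
                    BigDecimal sum = BigDecimal.ZERO;
                    for (PurchaseOrder o : orders) {
                        sum = sum.add(o.getTotal());
                    }
                    return sum;
                }

                // Same answer: the database does the aggregation, one scalar comes back.
                static BigDecimal totalRevenueWithAQuery(EntityManager em) {
                    return em.createQuery("select sum(o.total) from PurchaseOrder o", BigDecimal.class)
                             .getSingleResult();
                }
            }

        The second form is still plain JPQL, so it doesn't even require dropping to native SQL; the point is simply to let the database return only what the code will actually use.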

        • So...why write a server side application with 20x more layers than it needs? Why use terrible tools for the job?
          Because it is cheaper. Same reason companies used coal plants for a long time.

          The biggest offender I personally run across is Hibernate/JPA.
          Seriously? In what regard?
          Both are super efficient in memory and CPU.

          "if we're a Hibernate shop, it's blasphemy to write a native SQL query, even if it improves performance by 1000x"
          Sorry, that is just ridiculous. Hibernate is not used for circumstances like tha

          • by sjames ( 1099 )

            But it's not cheaper. It's the same 'savings' offered by rent-a-center. Pay less today but pay forever.

        • The thing is, companies are run by MBAs. If server efficiency actually shows up in the costs, then it gets addressed. It's precisely the accounting that drives both the development of efficient code and, at the same time, the development of inefficient code, depending on the specific application.

      • It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible. It's a mess, but it's a beautiful mess.

        Yes, frameworks have made possible vast amounts of (nearly) working software. Granted that, the problem with them is that after using one comfortably for the first 90% you are stuck with corner cases which end up taking most of your time. Once the whole thing's working you have a mess which you barely understand and don't dare touch to "optimise" it, and also have no time left for optimisation.

        IOW, when working with a framework, you don't have the luxury of exploring anything other than cajoling it into wor

      • by nagora ( 177841 )

        Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.

        No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.

        It's easy to become nostalgic for the "old days" when developers could realistically know *every single thing* about the hardware they were using, and the software they wrote used every resource possible. But it's also easy to forget how limited software used to be. Every piece of software was an island. Communication between different programs was almost non-existent.

        Was this 1950?

        It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible.

        You're deluded.

        It's a mess, but it's a beautiful mess.

        It's not. It's just a mess.

        • No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.

          The gap between what Azure or AWS charges you and the cost of the hardware platform is huge, it's kind of absurd to describe server hardware as expensive based on the cloud providers' billing.

          Not only do they bake in their entire cost for physical capitalization (and probably long term expansion), but I'm sure it's all done at replacement/upgrade rates, along with operations, networking and big profit margins.

          My only hope is that the scheme is to get all the early adopters to pay for build-out and scale-up

          • by nagora ( 177841 )

            No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.

            The gap between what Azure or AWS charges you and the cost of the hardware platform is huge, it's kind of absurd to describe server hardware as expensive based on the cloud providers' billing.

            Yeah. I realised that after I posted. I've just got so used to everyone talking about servers on the cloud.

            My long term worry is the cost of cloud computing doesn't come down but adoption is high enough that on premise servers go up in price, and computing becomes a lot less egalitarian unless you can afford the monthly consumption cost.

            I know what you mean. We're being charged per month what it would cost us to buy the same amount of storage. Even adding in costs like cooling, electricity, and man-hours of support/maintenance it's pretty awful. Adding in redundancy it is a bit better but we're forking over $60k per month (for storage) for something I think we could do inhouse for a quarter of that. That's a good few salaries pissed a

            • I sometimes wonder if "the future" isn't just some company where literally everything is outsourced. Some guy with an idea buys consultants to develop it, hires contract manufacturers to make it, logistics to ship it, contract marketers to sell it, accounting firms to keep the books and the rest just goes into his pocket, with zero jobs/wages involved.

      • by sjames ( 1099 )

        Then there was Unix where the user could just pipe the output of one program to the input of the next even if they were never meant to inter-operate. At least until all those libraries and frameworks made it impossible without 3GB of glue code.

    • Re:Reality (Score:5, Insightful)

      by StormReaver ( 59959 ) on Sunday November 22, 2020 @06:40PM (#60755036)

      ...cheaper to stitch together than anything remotely resembling competent software engineering.

      The cause of that is upper management, not the developers. Most developers are under very tight, very real deadlines to get miracles working under a charlatan's constraints. I absolutely LOVE to write everything from scratch, but I have too much to do and not anything even remotely resembling enough time in which to do it. As such, I look for pre-existing libraries and frameworks to shorten my development time. Some of those libraries and frameworks are efficient and well written, and some of them are not. All upper management cares about is the end result. They don't care about professional pride or craftsmanship.

      As a craftsman, I can write beautiful, highly efficient code in about ten thousand times the amount of time it takes to find and install a ready-made library that does the same job in ten minutes of my time (because those developers have already spent ten thousand times that amount of time writing and debugging it). I will try to find the highest quality library available, but sometimes all that exists is crap that gets the job done.

      Developers everywhere are in the same boat.

    • Re:Reality (Score:4, Funny)

      by MarkRose ( 820682 ) on Monday November 23, 2020 @12:07AM (#60755622) Homepage

      Could be worse. Imagine using a news site for nerds written in Perl.

    • Gotta agree with ya here.
      It seems like great hardware advances are rendered far less beneficial by truly sloppy programs.
      Much of that sloppy software is also a major vector for hacks.

      Today's software developers are kind-a like politicians. Too many are unqualified yet manage to pull the wool over the eyes of their constituents.
  • Fixed storage cost (Score:5, Informative)

    by Ichijo ( 607641 ) on Sunday November 22, 2020 @04:55PM (#60754780) Journal

    disk space (then costing about $20,000 per kilobyte)

    Actually, it was $20,000 per 540 megabytes, or 3.7 cents per kilobyte.

    • Indeed. That was where I stopped reading TFA.

    • I bought my first PC (a demo model from the Hannover Messe) around that time, I think its hard drive was 10MB or 20MB. There is absolutely no way I would have paid more than 1000 Deutschmark for it. It was a 386/20 I believe and I finally junked it less than 18 months ago.
      It did not have a lot of memory, 4MB (that was with an expansion board) at most.

      • by PolygamousRanchKid ( 1290638 ) on Sunday November 22, 2020 @06:09PM (#60754976)

        I bought my first PC (a demo model from the Hannover Messe) around that time . . . I finally junked it less than 18 months ago.

        By a bizarre coincidence, the Hannover Messe junked the CeBIT computer exhibition about 18 months ago.

        The farewell one was in 2018.

      • I bought my first PC (a demo model from the Hannover Messe) around that time, I think its hard drive was 10MB or 20MB. There is absolutely no way I would have paid more than 1000 Deutschmark for it. It was a 386/20 I believe and I finally junked it less than 18 months ago.
        It did not have a lot of memory, 4MB (that was with an expansion board) at most.

        You know how I know you're lying? You didn't use the word "winchester".

        • You know how I know you're lying? You didn't use the word "winchester".
          And you are just an idiot who does not know what a Winchester is. You probably do not even know the gun ...
          Hint: there never was a Winchester "hard drive".

    • by 00_NOP ( 559413 )

      Yes - you are right - I've fixed that now. It didn't alter the overall conclusions, but it's an embarrassing mistake nonetheless!

      • by Anonymous Coward

        Yes - you are right - I've fixed that now. It didn't alter the overall conclusions, but it's an embarrassing mistake nonetheless!

        Of course it would have changed the conclusions. The reality is that disk space was much cheaper than volatile memory, so only hold data in memory if [conditions] are met. If disk space was insanely more expensive than volatile memory, in line with the figure in the summary, then you'd keep everything in memory if you possibly could.

  • by mykepredko ( 40154 ) on Sunday November 22, 2020 @05:45PM (#60754894) Homepage

    is a puppy.

    • Nope, Longtime Slashdot *poster* may be a puppy; if he's anything like many of us, he had probably been reading Slashdot for many years before making an account.

  • But, as we are making estimates here we will opt for 3000 MIPS costing you £500 and so a single MIPS costing £6

    Doesn't 3000/500 = 6 MIPS/£? So wouldn't a single MIPS cost you 1/6 of a £, or ~17p?

  • by SpaceLifeForm ( 228190 ) on Sunday November 22, 2020 @06:54PM (#60755056)
    November 15, 2020 Adrian McMenamin
    Updating the five minute and the five byte rules

    (As been pointed out I misread the original paper – it was $20,000 for a 540MB disk or about 3 cents per KB – quite a major error of scale. I also realised I wasn’t using the same comparison points as the original paper – so I’ve updated that too – the break even point is now 5 seconds on cache-ing and not 1/10th of a second. Obviously that’s a big difference, but the same general points apply. Sorry for my errors here.)
    • by rew ( 6140 )

      If the "original article" was 35 years ago, that would put it in 1985. I bought a 30Mb harddisk for HFL 300 in 1988. HFL 10 per Mb, one cent per kb. Moore says: 3x every 3 years, so 1985: 3 cents/kb : Sounds about right. I was going into this calculation expecting to find: "You're still wildly off!", but I was wrong. in 1985, $0.03 per kb is about right.

  • by MtHuurne ( 602934 ) on Sunday November 22, 2020 @09:03PM (#60755236) Homepage

    In the old days, you would get very accurate performance estimates by adding up the clock cycles needed for each instruction. These days, often performance is dominated by cache misses rather than by actual CPU cycles spent on instructions.

    So while having lots of memory is cheap, getting data from that memory into the CPU isn't. Making your data cache-friendly (compact and high locality), even at the cost of using a few more instructions, is very much worth it.
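    To see how much the memory system dominates, here is a minimal sketch (the array size and the exact ratio you observe are machine-dependent, and JIT warm-up makes single-run timings rough): the same additions are done once walking memory sequentially and once with a large stride, so the only difference is cache behaviour.

        // Sketch: identical work, different access patterns over a 64 MB array.
        public class Locality {
            public static void main(String[] args) {
                final int n = 4096;              // 4096 x 4096 ints = 64 MB, much bigger than cache
                int[] a = new int[n * n];

                long t0 = System.nanoTime();
                long rowSum = 0;
                for (int i = 0; i < n; i++)      // row-major: sequential, cache/prefetch friendly
                    for (int j = 0; j < n; j++)
                        rowSum += a[i * n + j];
                long t1 = System.nanoTime();

                long colSum = 0;
                for (int j = 0; j < n; j++)      // column-major: 16 KB stride, a miss on nearly every read
                    for (int i = 0; i < n; i++)
                        colSum += a[i * n + j];
                long t2 = System.nanoTime();

                System.out.printf("sequential: %d ms, strided: %d ms (checksums %d, %d)%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rowSum, colSum);
            }
        }

    Same instruction count, very different wall-clock time, which is the point being made above.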

    • That's very true, and as you mention, apart from total operations, getting larger sets of similar data handled simultaneously, like with SIMD instruction sets, can be hugely more efficient.

      I'd guess this type of optimization at the PC buyer level doesn't make sense any more. If anything, the replaced idea would be that buying similarly matched clocks between a processor and memory makes more sense than overspending on one part or the other. On Zen2 for example the advantage of getting memory fast to match th

    • In addition to keeping the pipelines fed, there are other benefits to having large amounts of fast memory. Part of why I used to opt for Intel 'enthusiast' platforms (though I have yet to upgrade beyond Sandy/Ivy Bridge) - RAM drive performance, thanks to 4 memory channels per socket. These days, NVMe SSDs are plenty fast for anything I'd think to do (though only 10-30% as fast), but being able to do silly stuff like setting up a 30GB RAM drive (had two machines with 32GB RAM, connected by "10" gbit/s infin
    • even at the cost of using a few more instructions, is very much worth it.

      You're assuming the cache miss costs *you* money.

  • Nowhere in that article (nor my memory) was RAM that expensive. Hell, I don't think it was that expensive when it was wire-wrapped core. Get your units right, folks.

  • 35 years ago was 1985. That's one year after the first Macintosh was released, which for $2,499 had a million instructions per second, 128 Kbyte of RAM and 400 Kbyte of storage on floppy disks that cost $10 each. And it wasn't exactly known as a cheap computer.

    So where do these $20,000 for one Kbyte of disk storage come from? In 1975, my university had I think 60 MB disk drives. The size of a washing machine admittedly, but I'm sure they didn't cost 1.2 billion dollars each.
  • *Shudder*

    Many years ago, in the Olden Days of the 90's when Programmers were Programmers and sometimes still debugged with oscilloscopes, I had to deal with a system that was coded like that. A machine had pallets holding objects. The design allowed for 24 pallets, but at first it only held 16 pallets. For each pallet there needed to be a flag saying whether it was present, otherwise the machine might be damaged by trying to access a pallet that wasn't there. So the original programmer just stored the

    • Well, that was a pretty extreme case. I don't know if the limitations of the system actually justified doing things that way (it seems they didn't since you could apparently use a long with no negative consequences) but one thing I'm totally sure about: If I were to do something like that at least I would comment it.
  • Memory back then didn't cost that much.

    In 1985, 1024 bytes of RAM cost around 1.50 Mark or 0.50 Dollar.

    The same year, a DD floppy disk (80 tracks, double sided) holding up to 800 kByte was around 3.00 Mark or 1.00 Dollar, which works out to roughly 0.00375 Mark/kByte or 0.00125 Dollar/kByte.

    A naked 5 MByte hard disk without controller was around 500 Mark or 150 Dollars, equal to 0.10 Mark or 0.03 Dollars per kByte.

    Professional tape prices were around floppy disk prices but offered much higher capacity, while consumer ta

  • $5 is cheap. That's like a 1980s price for DRAM. I think core memory in the 1960s was a penny a bit, or $80/kilobyte.

  • My estimates based on recall are very different:

                  1985 High   1985 PC   2020 PC
    Memory (MB)   $15K        $1K       $0.01
    Disk (MB)     $40         $10       $0.00003
    CPU (MIP)     $1M         $5K       $1

    Note that:
    "High" means high end mainframe or Tandem server.
    The CPU MIP price includes th
