Memory vs. Disk vs. CPU: How 35 Years Has Changed the Trade-Offs (wordpress.com)
Long-time Slashdot reader 00_NOP, a software engineer (with a PhD in real-time computing), revisits a historic research paper on the financial trade-offs between disk space (then costing about $20,000 per kilobyte) and (volatile) memory (costing about $5 per kilobyte):
Thirty-five years ago, that report for Tandem Computers concluded that the cost balance between memory, disk and CPU on big iron favoured holding items in memory if they were needed every five minutes, and using five bytes to save one instruction.
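To see where the five-minute figure comes from, here is a minimal sketch of the underlying break-even calculation. The inputs are illustrative assumptions, not the paper's exact numbers: 1KB pages, roughly 15 random disk accesses per second per drive, a $20,000 drive and RAM at $5 per kilobyte ($5,000 per megabyte).

public final class FiveMinuteRule {

    // Hold a page in RAM if it is referenced more often than the break-even
    // interval: the point where the RAM it occupies costs the same as the
    // disk-arm capacity needed to keep re-reading it from disk.
    static double breakEvenSeconds(double pagesPerMBofRam,
                                   double accessesPerSecPerDisk,
                                   double pricePerDiskDrive,
                                   double pricePerMBofRam) {
        return (pagesPerMBofRam / accessesPerSecPerDisk)
             * (pricePerDiskDrive / pricePerMBofRam);
    }

    public static void main(String[] args) {
        // Assumed 1987-era magnitudes: 1KB pages (1024 per MB), ~15 random
        // reads/sec per drive, a $20,000 drive, RAM at $5/KB = $5,000/MB.
        double s = breakEvenSeconds(1024, 15, 20_000, 5_000);
        System.out.printf("break-even ~ %.0f s (~%.1f min)%n", s, s / 60);
        // prints roughly 273 s - about five minutes
    }
}

Plug in today's prices and the interval moves with the disk/RAM price ratio, which is the whole point of redoing the analysis.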
Update the analysis for today and what do you see?
Well my estimate is that we should aim to hold items that we have to access 10 times a second.
And needless to say, some techniques for saving data space are more efficient than they were 35 years ago, their article points out.
"The cost of an instruction per second and the cost of a byte of memory are approximately equivalent — that's tipped the balance somewhat towards data compression (eg., perhaps through using bit flags in a byte instead of a number of booleans for instance), though not by a huge amount."
Reality (Score:5, Funny)
Does of reality. The software world is now dominated by self-described PHP gurus living in an echo chamber where 19 layers of slushy "frameworks" that slow down the internet by a factor of 100 are easier and cheaper to stitch together than anything remotely resembling competent software engineering. These clowns have no clue whatsoever what a latency hierarchy is. For them, an article like this is just dogs watching television.
Re:Reality (Score:5, Informative)
Dose even. Slashdot: let me edit my posts, or did you forget how to code?
Re:Reality (Score:4, Funny)
Slashdot: let me edit my posts, or did you forget how to code?
You can edit your posts after previewing them, but before submitting them. Or did you forget how to preview?
Letting people edit posts is a misfeature that leads to confusion. If correctness were important to you, you would have used preview.
Re: (Score:2)
Letting people edit posts is a misfeature that leads to confusion.
Sounds good on paper. So why doesn't that happen on Reddit?
Re: (Score:2)
Letting people edit posts is a misfeature that leads to confusion.
Sounds good on paper. So why doesn't that happen on Reddit?
Because reddit prides itself on being a shithole where people write something in reply to "Comment deleted by user".
My personal favourite is the fuckwits who lose an argument, call you names so the reply appears in your recent feed and then delete their message so that you can't reply to them.
Does that sound like the kind of bullshit you want for Slashdot? I mean, you see what ACs here did with endless nazzi ASCII art; do you think the world would be better with even less accountability?
No that was not a typ
Re: (Score:2)
Yah no, that's not my experience. But feel fine in your bubble, that's your prerogative.
Re: (Score:2)
If you haven't come across it, then it's not me living in a bubble. I invite you to slide those sliders at the top of the screen to -1 if you feel the need to remind yourself that yes, there are enough fuckwits out there just dying to abuse any commenting feature you give them.
Re: (Score:2)
Each subreddit has its own rules and moderation or lack of it. Quality varies enormously between them, as does civility.
Re: (Score:2)
I'm not here to advertise Reddit. Rather, to throw rotten fruit at Slashdot owners for a deficient UI that is apparently frozen in time.
Re: (Score:2)
The UI that's frozen in time (classic) is the best UI on the site.
The lack of Unicode support, with an allow-list for commonly needed characters, is embarrassing, but that's more an architectural than an interface problem.
Re: (Score:2)
But it may all be putting
Re: Reality (Score:5, Funny)
Gotta go, my lawn needs watering.
With an eyedropper, no doubt.
Re:Reality (Score:5, Insightful)
You're right, of course, but...does it matter? Making it easier to write useful software, even at the expense of efficiency, is a good thing.
And don't forget - the stuff that REALLY needs to be fast still is. Yeah, most websites have horribly inefficient back-ends, but so what? It doesn't matter. Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.
It's easy to become nostalgic for the "old days" when developers could realistically know *every single thing* about the hardware they were using, and the software they wrote used every resource possible. But it's also easy to forget how limited software used to be. Every piece of software was an island. Communication between different programs was almost non-existent. It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible. It's a mess, but it's a beautiful mess.
Re: (Score:2)
Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser - damn near an operating system itself, and chews up a few GB of RAM for the simplest of applications. But for the most part, yeah, I agree. People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.
Re: (Score:2, Insightful)
People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.
Hey, at least our "ridiculously limited" old systems didn't get infected with ransomware every other week. Or require permission from Apple, every time you went to use the thing...
Re: (Score:2)
Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser
I can't help but notice that Electron apps seamlessly run on anything. Tinkering with the code in Visual Studio on Windows, I do the layout on one screen with one aspect ratio, and with a quick push of a button that app is running on an ARMv6 platform under Linux, working completely identically to how it did in Windows on x64 or x86 (honestly I don't even know what the arch target was). All the while letting novices like me who've done little more than some HTML and JavaScript with a side of C for microcontrollers
Re: (Score:2)
Echo, meet chamber.
Re: (Score:3)
Come on... let the old greybeards grump in peace about how "bloated" modern software is. Granted, I think maybe they have a point when an Electron application carries with it an entire browser - damn near an operating system itself, and chews up a few GB of RAM for the simplest of applications. But for the most part, yeah, I agree. People tend to forget how ridiculously limited and fragile those older systems tended to be compared to modern software.
Remember the classical example of bloat? Eight Megabytes And Constantly Swapping [gnu.org].
Re: (Score:2)
That should only be a problem when you are running several Electron apps AND have EMACS open - at the same time.
Emacs is my favorite operating system (Score:2)
Emacs is my favorite operating system. The text editor is a bit weak, but fairly good overall.
Re: (Score:1)
All those extra cycles to fill Facebook's dossiers on citizens matter for energy consumption. And across all datacenters, millions of processors that could be running at a much lower load also mean less thermal output. The end result is less hardware/infrastructure required, and therefore less energy input.
Re:Reality (Score:5, Insightful)
It's a mess, but it's a beautiful mess.
Yes, because nothing says a beautiful mess like needing to run ten scripts just to play a video on a web page, or needing at least sixty scripts to display a web page. And that doesn't include all the other cruft needed so people can look at cat pictures.
Software expands to fill the available memory [embeddedrelated.com]. As a result, we need to have faster processors and more RAM just to keep the speed of current software running the same as previous software running on slower systems. That does not sound beautiful.
Re: (Score:2)
How many times have you counted the number of scripts running on a website? Or how many bolts hold the engine together on your car? The reality is that beautiful here means "looking pretty" and "doing what it's supposed to do". If that means 60 scripts, then execute away; my computer is otherwise idle anyway.
Which brings me to my next point: software expands to fill the available space because space is what restricts software. It's been a solid 20 years since someone upgraded a general-purpose computer because of
Re: (Score:2)
Most users have RAM sitting there being wasted.
"Wasted?"
Not being allocated to a process, not being used by a single process when it doesn't need it, isn't "wasted." It's there to be given to other processes, or used by the OS as cache, among other purposes.
Seriously. I don't know where this idea that anything less than all your RAM being used all the time started being seen as bad, but it's fucking stupid.
Unless you like hitting swap, and going from that to continually causing thrashing.
Re: (Score:2)
Not being allocated to a process, not being used by a single process when it doesn't need it, isn't "wasted."
Yes it is. It's the fastest form of storage for CPU-based activities in the system. Any RAM not actively being used is potentially causing performance degradation: should any data be needed, it has to be fetched from slower storage instead.
I don't know where this idea that anything less than all your RAM being used all the time started being seen as bad, but it's fucking stupid.
LOL the Linux kernel developers would like to talk to you about your views.
Unless you like hitting swap, and going from that to continually causing thrashing.
Allocating releasable RAM and filling it with data does not cause you to hit swap when another application needs it. You have a lot to learn about how computer memory works.
Re:Reality (Score:4, Informative)
"Making it easier to write useful software, even at the expense of efficiency, is a good thing."
No, it isn't. Making it easier to write software is a good thing. Losing efficiency is a bad thing.
Saw one DB2 application replaced with a modern, fancy, graphic app ... which added zero new functionality ... and brought the system to its knees when deployed. Final tally was the new program required approximately 50x the CPU of the old. But, ya know, shiny and pretty!
The big driver here is accountability. If the new software allows you to write something in less time than the old stuff, and it runs, well, any issues with performance can be fixed by the HARDWARE group ... it's not your problem anymore once the code works.
Re:Reality (Score:5, Informative)
50 years ago, computing time and memory were expensive. Thus, having people spend time working things out on paper, coding it up on paper, and optimizing the heck out of it was emphasized, because you got one run per day. So you spent hours simulating the code so it would run correctly in as few tries as possible. Tries included assembling or compiling it, so you checked to make sure you didn't have syntax errors.
35 years ago is mid-80s and close to where the inflection point happened where human time started becoming more valuable than computing time. It's where we started having interactive debuggers and compiling was just a few keystrokes away. It was cheaper to do the edit-compile-debug cycle interactively so you could see the results instantly than have the human spend hours figuring it out.
These days human time hasn't gotten much cheaper. So people use libraries to help write less code that does more. Again, computer time is cheap.
Re: (Score:2)
Both things can be true at once. Wasting all this CPU time means wasting a lot of energy, which means producing a lot of pollution. It also means a shorter upgrade cycle, which has the same problems. We get a lot of software we never could have had otherwise, but we also sell out the future.
Re: (Score:2)
Mostly, things that need to be fast are. I do still run into people writing bloated software when things really are performance-critical.
It all depends. Even in the modern world sometimes you want a simple piece of code that is very efficient - and people have to remember how to write that.
Then there are embedded systems. There is a junk food machine (new) at work where I can type inputs faster than it can process them, and I can fill up its input buffer. How in God's name can you make a modern microcont
Re: (Score:2)
Yes, but on the upside, if you crank away at it enough, there's a pretty good chance you could get free drinks out of it.
Consider Environmental Impact of your shit code (Score:5, Informative)
You're right, of course, but...does it matter? Making it easier to write useful software, even at the expense of efficiency, is a good thing.
And don't forget - the stuff that REALLY needs to be fast still is. Yeah, most websites have horribly inefficient back-ends, but so what? It doesn't matter. Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.
Check your AWS bill. Does efficiency matter? Sure, most customers aren't going to shop at Target if WalMart.com loads in 0.5s instead of 0.1s. However, if you're using 5x as much CPU, that does cost money. I thought the biggest hidden bright side of cloud computing was that it would make organizations clearly see how much stupid is costing them money and motivate them to think things through more carefully.
Bad engineering has costs. It costs network bandwidth when you send too much data. It costs electricity to process transactions. The heat generated has to be cooled. The number of users you can serve per server goes down. You'd never do this with a car. You'd never leave your car running for 2h for no reason. You'd never load up your car with bricks and leave them there for 2 years for fun. Most people turn off the TV when they're done watching. Most never leave the faucet running for no reason.
So...why write a server side application with 20x more layers than it needs? Why use terrible tools for the job? The biggest offender I personally run across is Hibernate/JPA. I have seen so many applications load entire object hierarchies into memory, use 1% of what was loaded and throw the rest away...in a loop, across all applications. For those who don't work with Hibernate, this can be remedied by writing a query to get the exact 1% of data you need...but that requires some basic thought and a minimal understanding of the tool you're working with, and most "full stack" developers are competent in 2 tiers at most and wildly incompetent in 2 or more, usually the DB. Don't get me wrong, JPA/Hibernate and ORM generally is a great tool when used by a skilled developer, but people view tools and frameworks as religions..."if we're a Hibernate shop, it's blasphemy to write a native SQL query, even if it improves performance by 1000x". (I have literally had to fight to use native SQL to convert an import job from 15 min that failed regularly due to deadlock issues to 0.2 seconds by moving from sloppy JPA to native SQL.) The team thought it too hard to understand since it wasn't vanilla Spring JPA. I had to deal with that "does it matter?" question. Instead of learning DB 101, they just asked the customers if they would leave the company if we didn't make that functionality faster. It is important to ask "does it matter?" for very small optimizations. However, I have no patience for people who use that logic to justify not knowing how to do their job. Take some pride in your job and learn how to do it. Your customers will thank you. Your AWS bill will thank you.
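To make the Hibernate/JPA point concrete, here is a hedged sketch using a hypothetical Customer entity: the first method materializes whole entities (and, with eager mappings, their object graphs) just to read one column; the other two fetch only that column.

import java.util.List;
import java.util.stream.Collectors;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

@Entity
class Customer {
    @Id Long id;
    String email;
    // ...plus the relations (orders, addresses) that make full loads costly
    String getEmail() { return email; }
}

class CustomerEmailQueries {

    // Anti-pattern: load every Customer just to read a single field.
    static List<String> viaEntities(EntityManager em) {
        return em.createQuery("select c from Customer c", Customer.class)
                 .getResultList()
                 .stream()
                 .map(Customer::getEmail)
                 .collect(Collectors.toList());
    }

    // Fetch only what is needed: a JPQL scalar projection...
    static List<String> viaProjection(EntityManager em) {
        return em.createQuery("select c.email from Customer c", String.class)
                 .getResultList();
    }

    // ...or, where JPQL is a bad fit, plain native SQL.
    @SuppressWarnings("unchecked")
    static List<String> viaNativeSql(EntityManager em) {
        return em.createNativeQuery("select email from customer")
                 .getResultList();
    }
}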
Cloud computing eats a lot of power...it releases carbon into the atmosphere....so when you waste it, you're making your users miserable, your company poorer, and shitting on the environment....why? Because you didn't want to learn SQL? You wanted to write your app in Node (and few who do learn how to do it properly)? You thought it was too hard for trained programmers making over 150k a year to think through whether to use the default framework or something lower level when the framework is a bad fit? I have to argue with people like you every workday. I'll say it again. Take some pride in your profession. Learn how to do your job. Everyone will thank you, including me.
Re: (Score:2)
So...why write a server side application with 20x more layers than it needs? Why use terrible tools for the job?
Because it is cheaper. Same reason companies used coal plants for such a long time.
The biggest offender I personally run across is Hibernate/JPA.
Seriously? In what regard?
Both are super efficient in memory and CPU.
"if we're a Hibernate shop, it's blasphemy to write a native SQL query, even if it improves performance by 1000x"
Sorry, that is just ridiculous. Hibernate is not used for circumstances like that
Re: (Score:2)
But it's not cheaper. It's the same 'savings' offered by rent-a-center. Pay less today but pay forever.
Re: (Score:2)
The thing is, companies are run by MBAs. If server efficiency actually shows up in the costs, then it gets addressed. It's precisely the accounting that drives both the development of efficient code and, at the same time, of inefficient code, depending on the specific application.
Re: (Score:2)
It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible. It's a mess, but it's a beautiful mess.
Yes, frameworks have made possible vast amounts of (nearly) working software. Granting that, the problem with them is that after using one comfortably for the first 90%, you are stuck with corner cases which end up taking most of your time. Once the whole thing's working, you have a mess which you barely understand and don't dare touch to "optimise", and also have no time left for optimisation.
IOW, when working with a framework, you don't have the luxury of exploring anything other than cajoling it into working
Re: (Score:2)
Server hardware is cheap, and the latency/speed of the network undoes any efficiencies gained on the back-end anyway.
No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.
It's easy to become nostalgic for the "old days" when developers could realistically know *every single thing* about the hardware they were using, and the software they wrote used every resource possible. But it's also easy to forget how limited software used to be. Every piece of software was an island. Communication between different programs was almost non-existent.
Was this 1950?
It was a nightmare. And the reason that nightmare is mostly over is that we have PILES of libraries/frameworks that make all of it possible.
You're deluded.
It's a mess, but it's a beautiful mess.
It's not. It's just a mess.
Re: (Score:2)
No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.
The gap between what Azure or AWS charges you and the cost of the hardware platform is huge, it's kind of absurd to describe server hardware as expensive based on the cloud providers' billing.
Not only do they bake in their entire cost for physical capitalization (and probably long term expansion), but I'm sure it's all done at replacement/upgrade rates, along with operations, networking and big profit margins.
My only hope is that the scheme is to get all the early adopters to pay for build-out and scale-up
Re: (Score:2)
No, and no. Server hardware is expensive (looking at what we are charged for Azure at least), and when you have 2000 simultaneous connections to your webserver the latency of the network is the least of your problems.
The gap between what Azure or AWS charges you and the cost of the hardware platform is huge, it's kind of absurd to describe server hardware as expensive based on the cloud providers' billing.
Yeah. I realised that after I posted. I've just got so used to everyone talking about servers on the cloud.
My long-term worry is that the cost of cloud computing doesn't come down, but adoption gets high enough that on-premise servers go up in price, and computing becomes a lot less egalitarian unless you can afford the monthly consumption cost.
I know what you mean. We're being charged per month what it would cost us to buy the same amount of storage. Even adding in costs like cooling, electricity, and man-hours of support/maintenance, it's pretty awful. Adding in redundancy it is a bit better, but we're forking over $60k per month (for storage) for something I think we could do in-house for a quarter of that. That's a good few salaries pissed away
Re: (Score:2)
I sometimes wonder if "the future" isn't just some company where literally everything is outsourced. Some guy with an idea buys consultants to develop it, hires contract manufacturers to make it, logistics to ship it, contract marketers to sell it, accounting firms to keep the books and the rest just goes into his pocket, with zero jobs/wages involved.
Re: (Score:2)
Then there was Unix where the user could just pipe the output of one program to the input of the next even if they were never meant to inter-operate. At least until all those libraries and frameworks made it impossible without 3GB of glue code.
Re: (Score:2)
Yeah, one of those worked on my outboard. Fucked it up.
Re: (Score:2)
I've never rebuilt a carburetor, but I can follow directions and I know how to use all the tools. And I also have mechanical sympathy and tend not to strip out screws and such at this point in my life. If I had to, I could do it. But this is 2020, and by now most outboards are four strokes with electronic ignition.
Re: (Score:2)
Four strokes with electronic ignition have carburetors. You're thinking about fuel injection.
Re: Reality (Score:2)
Yeah, that's what I meant to say. They have all of those things. It is difficult to meet modern emissions standards with carburetors. Even boats have them, thanks to California.
Re: (Score:2)
Fuel-injected outboards are more efficient, more powerful per pound, more complex and (this is the dominant one) more expensive. In general, emission standards are looser for marine engines, so there are still a lot of carbureted engines in the market, especially the small ones. Which in some cases can be replaced by trolling motors, but the performance and endurance of the latter are, so far, far from competitive. Not sure I'd be fully happy with battery power in a tender, in a storm, either.
Re: (Score:2)
Trolling motors are cool, these days they can hold a GPS position against a current, if it's not too strong. But they don't really do long hauls, and it's generally clever to have multiple engines anyway.
Re: (Score:2)
Bow-wow.
Re:Reality (Score:5, Insightful)
...cheaper to stitch together than anything remotely resembling competent software engineering.
The cause of that is upper management, not the developers. Most developers are under very tight, very real deadlines to get miracles working under a charlatan's constraints. I absolutely LOVE to write everything from scratch, but I have too much to do and not anything even remotely resembling enough time in which to do it. As such, I look for pre-existing libraries and frameworks to shorten my development time. Some of those libraries and frameworks are efficient and well written, and some of them are not. All upper management cares about is the end results. They don't care about professional pride or craftsmanship.
As a craftsman, I can write beautiful, highly efficient code in about ten thousand times the amount of time it takes to find and install a ready-made library that does the same job in ten minutes of my time (because those developers have already spent ten thousand times that amount of time writing and debugging it). I will try to find the highest quality library available, but sometimes all that exists is crap that gets the job done.
Developers everywhere are in the same boat.
Re:Reality (Score:5, Insightful)
The echo chamber is when they all convince each other that their work is genius, and that they are geniuses.
Re:Reality (Score:4, Funny)
Could be worse. Imagine using a news site for nerds written in Perl.
Re: (Score:1)
News in Perl. Stuff that don't stutter.
Re: (Score:1)
It seems like great hardware advances are rendered far less beneficial by truly sloppy programs.
Much of that sloppy software is also a major vector for hacks.
Today's software developers are kinda like politicians. Too many are unqualified yet manage to pull the wool over the eyes of their constituents.
Fixed storage cost (Score:5, Informative)
Actually, it was $20,000 per 540 megabytes, or 3.7 cents per kilobyte.
Re: (Score:2)
Indeed. That was where I stopped reading TFA.
Re: (Score:2)
The article gets it right, it's the summary that is broken.
Re: (Score:2)
Never mind, the article is fine now, but apparently wasn't before.
Re: (Score:1)
Never mind, the article is fine now, but apparently wasn't before.
But, doesn't 3000MIPS / 500£ = 6 MIPS/£? So wouldn't a single MIPS cost you 1/6 of a £, or ~17p?
Re: (Score:2)
I bought my first PC (a demo model from the Hannover Messe) around that time, I think its hard drive was 10MB or 20MB. There is absolutely no way I would have paid more than 1000 Deutschmark for it. It was a 386/20 I believe and I finally junked it less than 18 months ago.
It did not have a lot of memory, 4MB (that was with an expansion board) at most.
Re:Fixed storage cost (Score:5, Interesting)
I bought my first PC (a demo model from the Hannover Messe) around that time . . . I finally junked it less than 18 months ago.
By a bizarre coincidence, the Hannover Messe junked the CeBIT computer exhibition about 18 months ago.
The farewell one was in 2018.
Re: (Score:2)
I bought my first PC (a demo model from the Hannover Messe) around that time, I think its hard drive was 10MB or 20MB. There is absolutely no way I would have paid more than 1000 Deutschmark for it. It was a 386/20 I believe and I finally junked it less than 18 months ago.
It did not have a lot of memory, 4MB (that was with an expansion board) at most.
You know how I know you're lying? You didn't use the word "winchester".
Re: (Score:3)
You know how I know you're lying? You didn't use the word "winchester". ...
And you are just an idiot who does not know what a Winchester is. Probably you do not even know the gun.
Hint: there never was a Winchester "hard drive".
Re: (Score:2)
Yes - you are right - I've fixed that now. It didn't alter the overall conclusions, but it's an embarrassing mistake nonetheless!
Re: (Score:1)
Yes - you are right - I've fixed that now. It didn't alter the overall conclusions, but it's an embarrassing mistake nonetheless!
Of course it would have changed the conclusions. The reality is that disk space was much cheaper than volatile memory, so only hold data in memory if [conditions] are met. If disk space was insanely more expensive than volatile memory, in line with the figure in the summary, then you'd keep everything in memory if you possibly could.
"Long-time Slashdot reader 00_NOP" (Score:3)
is a puppy.
Re: (Score:2)
Nope, a longtime Slashdot *poster* may be a puppy; if he's anything like many of us, he'd probably been reading Slashdot for many years before making an account.
Math much? (Score:1)
But, as we are making estimates here we will opt for 3000 MIPS costing you £500 and so a single MIPS costing £6
Doesn't 3000/500 = 6 MIPS/£? So wouldn't a single MIPS cost you 1/6 of a £, or ~17p?
Note update to article (Score:3)
Updating the five minute and the five byte rules
(As has been pointed out, I misread the original paper – it was $20,000 for a 540MB disk, or about 3 cents per KB – quite a major error of scale. I also realised I wasn't using the same comparison points as the original paper – so I've updated that too – the break-even point is now 5 seconds on caching and not 1/10th of a second. Obviously that's a big difference, but the same general points apply. Sorry for my errors here.)
Re: (Score:2)
If the "original article" was 35 years ago, that would put it in 1985. I bought a 30Mb harddisk for HFL 300 in 1988. HFL 10 per Mb, one cent per kb. Moore says: 3x every 3 years, so 1985: 3 cents/kb : Sounds about right. I was going into this calculation expecting to find: "You're still wildly off!", but I was wrong. in 1985, $0.03 per kb is about right.
Memory bandwidth is a major issue (Score:5, Insightful)
In the old days, you would get very accurate performance estimates by adding up the clock cycles needed for each instruction. These days, performance is often dominated by cache misses rather than by actual CPU cycles spent on instructions.
So while having lots of memory is cheap, getting data from that memory into the CPU isn't. Making your data cache-friendly (compact and high locality), even at the cost of using a few more instructions, is very much worth it.
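A hedged sketch of what "compact and high locality" means in practice (the names and fields here are illustrative): scanning one field through an array of objects chases a pointer per element, while a parallel primitive array keeps the scanned values contiguous, so every cache line fetched is full of useful data.

final class Locality {
    static final class Particle { double x, y, mass; }

    // Object layout: each element is a separate heap allocation; summing
    // mass drags whole objects (and their unused fields) through the cache.
    static double totalMass(Particle[] ps) {
        double sum = 0;
        for (Particle p : ps) sum += p.mass;  // pointer chase per element
        return sum;
    }

    // Structure-of-arrays layout: mass values sit back to back, so the loop
    // streams through memory and the hardware prefetcher can keep up.
    static double totalMass(double[] mass) {
        double sum = 0;
        for (double m : mass) sum += m;
        return sum;
    }
}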
Re: (Score:2)
That's very true. And as you mention, apart from total operation counts, getting larger sets of similar data handled simultaneously, as with SIMD instruction sets, can be hugely more efficient.
I'd guess this type of optimization at the PC buyer level doesn't make sense any more. If anything, the replaced idea would be that buying similarly matched clocks between a processor and memory makes more sense than overspending on one part or the other. On Zen2 for example the advantage of getting memory fast to match th
Re: (Score:2)
even at the cost of using a few more instructions, is very much worth it.
You're assuming the cache miss costs *you* money.
$20,000/KB... NOT. (Score:2)
Nowhere in that article (nor my memory) was RAM that expensive. Hell, I don't think it was that expensive when it was wire-wrapped core. Get your units right, folks.
Re: (Score:2)
Oh, it was, but not in the microcomputer era.
That's $20/byte.
One of the major price breakthroughs was when memory dropped to a buck a byte--as the *monthly rental* cost in the 60s.
Re: $20,000/KB... NOT. (Score:2)
Hmm... maybe the waaay early 60's; looks like it was down to a low, low $5/byte, presumably to own (vs. rent), fairly early on after ditching vacuum tubes:
https://jcmit.net/memoryprice.... [jcmit.net]
Idiotic numbers (Score:2)
So where do these $20,000 for one Kbyte of disk storage come from? In 1975, my university had I think 60 MB disk drives. The size of a washing machine admittedly, but I'm sure they didn't cost 1.2 billion dollars each.
Re: (Score:2)
In the actual article, it's $20k for 540MB.
using bit flags in a byte (Score:2)
*Shudder*
Many years ago, in the Olden Days of the 90's, when Programmers were Programmers and sometimes still debugged with oscilloscopes, I had to deal with a system that was coded like that. A machine that had pallets holding objects. The design allowed for 24 pallets, but at first it only held 16 pallets. For each pallet there needed to be a flag saying whether it was present, otherwise the machine might be damaged by trying to access a pallet that wasn't there. So the original programmer just stored the
Wrong numbers (Score:2)
Memory back then didn't cost that much.
In 1985, 1024 bytes of RAM cost around 1.50 Mark or 0.50 dollars.
The same year, a DD floppy disk (80 tracks, double-sided) holding up to 800KB was around 3.00 Mark or 1.00 dollar, which works out to roughly 0.00375 Mark/KB or 0.00125 dollars/KB.
A naked 5MB hard disk without controller was around 500 Mark or 150 dollars, equal to 0.10 Mark or 0.03 dollars per KB.
Professional tape prices were around floppy disk prices but offered much higher capacity, while consumer ta
$5/kilobyte of volatile memory? (Score:2)
$5 is cheap. That's like a 1980's price for DRAM. I think core memory in the 1960's was a penny a bit, or $80/kilobyte.
My unit cost estimates are very different (Score:1)
           1985 High   1985 PC   2020 PC
Memory MB  $15K        $1K       $0.01
Disk MB    $40         $10       $0.00003
CPU MIP    $1M         $5K       $1
Note that:
"High" means high end mainframe or Tandem server.
The CPU MIP price includes th