The Legacy of CPU Features Since the 1980s
jones_supa writes: David Albert asked the following question:
"My mental model of CPUs is stuck in the 1980s: basically boxes that do arithmetic, logic, bit twiddling and shifting, and loading and storing things in memory. I'm vaguely aware of various newer developments like vector instructions (SIMD) and the idea that newer CPUs have support for virtualization (though I have no idea what that means in practice). What cool developments have I been missing? "
An article by Dan Luu answers this question and provides a good overview of various cool tricks modern CPUs can perform. The slightly older presentation Compiler++ by Jim Radigan also gives some insight on how C++ translates to modern instruction sets.
"My mental model of CPUs is stuck in the 1980s: basically boxes that do arithmetic, logic, bit twiddling and shifting, and loading and storing things in memory. I'm vaguely aware of various newer developments like vector instructions (SIMD) and the idea that newer CPUs have support for virtualization (though I have no idea what that means in practice). What cool developments have I been missing? "
An article by Dan Luu answers this question and provides a good overview of various cool tricks modern CPUs can perform. The slightly older presentation Compiler++ by Jim Radigan also gives some insight on how C++ translates to modern instruction sets.
Virtualisation dates from the 1960s! (Score:5, Informative)
The first large-scale availability of virtualisation was with the IBM 370 series, dating from June 30, 1970, but it had been available on some other machines in the 1960s.
So the idea that "newer machines have support for virtualisation" is a bit old.
Re: (Score:2)
Vector processors [wikipedia.org] are also from the 70s.
Yeah, I remember when VMWare first came out... (Score:5, Informative)
I remember when VMWare first came out, and there was all this amazement about all the cool things you could do with Virtual Machines. Very little mention anywhere that these were things you could do for decades already on mainframes.
Same thing with I/O offloading (compared to mainframes, x86 and UNIX I/O offload is still quite primitive and rudimentary), DB-based filesystems (MS has been trying to ship one of those for over 20 years now; IBM has been successfully selling one (the AS/400 / iSeries) for 25), built-in encryption features, and a host of other features.
Re: (Score:3)
I remember when the 8086 came out, Intel also brought out the 8087 FPU and the 8089 I/O Processor. The former got bundled into the CPU a few generations later. I don't remember details of the 8089, but it seems to have withered away. Nor does Wikipedia say much about it, once you differentiate it from the Hoth Wampa Cave Lego set.
Re:Yeah, I remember when VMWare first came out... (Score:4, Informative)
http://pdf.datasheetcatalog.co... [datasheetcatalog.com]
It wasn't used in IBM PCs, but it was in some other systems such as the Apricot PC and the Altos 586.
Re: (Score:3)
Even considering VMware's periodic moves into 'We still have the x86 virtualization market by the balls, right?' pricing models, being able to do virtualization on practically disposable Dell pizzaboxes looks like a revelation if you've previously been enjoying the pleasure of juggling your PVUs, subcapacity licenses, capped and uncapped LPARs, and sim
Re: (Score:3)
Not only do their prices sting, but they suffer heavily from living on their own little island: they steadfastly refuse to use standard terminology and seem to do a lot of stuff differently just because they can, not because it makes sense.
Re: (Score:2, Interesting)
In the before time, in the long, long ago, Quarterdeck first offered DESQView. I built VMs operating Index BBS on i286 platforms using digiboards.
Sweet Jesus little has changed in >25 years.
Re: (Score:3)
People were aware that mainframes could do virtualization back then, not least because all the magazine articles about VMWare mentioned it. What was surprising was that a feature previously only available on room sized computers costing as much as your lifetime earnings was now available on cheap commodity hardware.
Re:Yeah, I remember when VMWare first came out... (Score:5, Funny)
Bugger me, when I first got hold of VMWare as a teenager then heavily into Linux, I went mad. And I mean maaaaaaaaaaaaaaaaaad.
My home server was a simple affair - 6GB hard disk, 512MB ram.
So what did I do? Bring up as many Redhat VMs as I could - all with 4MB ram :D It was like a drug, 10 wasn't enough, so I did more. I got to 50 and just knew I had to do 100. I eventually ran out of free ram, but hell, I had more than 100 servers at my disposal!
What did I do with them? Uhm, nothing. Apart from sshing into a few just because I could.
*sigh* Thems were the days....
Re: (Score:2)
And in turn, that reminds me of the Zorro PC bridge card you could get for the Amiga - turning an Amiga into an Apple Mac was easy, same hardware etc, but you could also turn it into a 286 PC if you stuck a Zorro expansion card into the computer (it basically came with all the PC guts you needed for it to work). Pretty amazing stuff back in the day.
Re: (Score:2)
It depends on the VMware product; Workstation 1.0 was available for around $70 back then, I believe.
Re: (Score:2)
Back then I, along with pretty much everyone else on Freenode or Arcnet, didn't care about pirating things. Thankfully I have matured somewhat in my opinions on the matter.
Re: (Score:3)
This is probably due to the fact that for most values of "you," you don't have a mainframe. So the cool things switched from "things a-few-people-who-aren't-me can do" to "things I can do." That increases relevance.
Re:Yeah, I remember when VMWare first came out... (Score:5, Funny)
If it's a proper mainframe you don't have it in your house, you have your house in it.
Re: (Score:3)
While it's true that microprocessors never really acquired mainframe I/O, they've had virtual machine support as far back as the 680x0 series, back in the 80s.
Re: (Score:2)
but historically only available on IBM systems that leased for the GDP of a small nation state...
Apart from virtualisation, not much has been pioneered by IBM.
And we used to joke back in the day that the reason IBM were so keen on virtualisation was that they couldn't write a multi-user operating system, so they worked out how to let each user run his own copy of a single user operating system.
Re: (Score:3)
Riiight. Other than minor little things like:
System architecture being independent of machine implementation
8 bit bytes
32 bit words
Byte addressable memory
Standard IO connections
And that is just stuff from the 360 family, 50 years ago.
Re: (Score:2)
The 360/20 was an exception. All other 360s ran the same microcoded machine language, however different their internals were.
Re: (Score:2)
It's a bit before my time, but I believe OS/VM was not originally an IBM product, but was from people in the User Group. There was a Free Software/Open Source culture back then, and then as now anybody who had a computer could participate. This included hacks to the OS itself, like HASP, which would read a whole card deck at a time and put it on disk, rather than the program reading cards from the reader each time it wanted more input.
Re: (Score:2)
There is no OS/VM. VM-370 came from CP-67, which was an IBM project. No doubt many users contributed later, but the origins were IBM.
HASP was an extension to OS/360 (z/OS predecessor) and has nothing to do with VM.
Re: (Score:2)
The first large-scale availability of virtualisation was with the IBM 370 series, dating from June 30, 1970, but it had been available on some other machines in the 1960s.
So the idea that "newer machines have support for virtualisation" is a bit old.
This point has been made ever since the first virtualization software on microcomputers was being experimented with. Those who don't know history are doomed to repeat it (or something similar, depending on how diligent your citation tracking is).
I'm still waiting for someone to tell us that IBM discovered perceptually acceptable lossy compression, such as JPEG, MP3, and MPEG, back in the mid-1960s mainframe era, to generate images and videos for punch-card distribution.
And Xerox PARC labs had a portable MP3 player prot
Re: (Score:3)
Yes, but he is clearly writing for people who grew up professionally with the x86.
In the PC world, efficient virtualization is a new thing even though mainframes had it long ago.
Re: (Score:2)
Yes, but he is clearly writing for people who grew up professionally with the x86.
There was never an 8-bit x86, and that includes the 8088.
Re: (Score:2)
The 8088 had an 8 bit external bus. Meanwhile, the z80 could do 16 bit fetches in a manner quite similar to the way an 8088 did a 16 bit fetch. Either way, 1 memory cycle got you 8 bits, so which is 8 bit and which is 16?
Re: (Score:2)
The 8088 had an 8 bit external bus
bus width
address lines
fastest word size
Which one of these has never been used to define the bitness of a machine? Yes, it's the one you are using.
Anyone with a triple-channel i7 has a 192-bit desktop right now (that's the width of the data bus of first-gen i7s) according to your idea of what makes a machine 8-bit...
No IBM PC was ever 8-bit.. never.. they started with a 16-bit word size and 20-bit addresses.. some might argue they were 20-bit, but it
Re: (Score:2)
Now go upstairs and change your pants.
You also apparently don't know how multiple memory channels work. Hint: it's separate busses, that's where the speed comes from.
Re: (Score:2)
I'd get off your lawn but it's only artificial turf.... some dork that signed up to slashdot really early but never actually did shit...
Re: (Score:2)
You might like the second sentence of the article:
things that were new to x86 were old hat to supercomputing, mainframe, and workstation folks.
Just curious, did you read one sentence of the article before commenting on it?
At least he didn't stop at "asked" and immediately fired off a diatribe against Ask Slashdot.
Easily my favorite modern features (Score:5, Informative)
The latest generation of CPUs have instructions to support transactional memory.
Near-future CPUs will have a SIMD instruction set taken right out of GPUs, where you can conditionally execute without branching.
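For the curious, here is a rough sketch of what using those transactional-memory instructions looks like on Intel (the RTM intrinsics from immintrin.h, built with -mrtm). The counter and lock names are just mine for illustration, it assumes a CPU that actually has RTM, and a production lock-elision scheme would also read the fallback lock inside the transaction:
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

void increment(void)
{
    unsigned status = _xbegin();               /* try to start a hardware transaction */
    if (status == _XBEGIN_STARTED) {
        counter++;                             /* runs transactionally; a conflict aborts it */
        _xend();                               /* commit */
    } else {
        pthread_mutex_lock(&fallback_lock);    /* aborted or unsupported: take a real lock */
        counter++;
        pthread_mutex_unlock(&fallback_lock);
    }
}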
Re: (Score:3)
A lot of systems are going with a traditional general purpose CPU for the control system, but DSP CPUs for the hard core or time critical calculations. DSPs already have a lot of SIMD features.
What's interesting is that DSPs are also adding more and more general purpose capabilities.
Re: (Score:3)
Yes, but instead of having a status register, you compare each element in one vector with the corresponding element in another and get the results as a vector of booleans.
Then execute a SIMD instruction where each component scalar operation is conditional on the corresponding boolean.
Or, you could convert that vector of booleans into something else. For instance, you could count the number of leading 1's in the vector and store it into a scalar, which would allow you to implement operations such as strlen() or strcmp(
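A small SSE2 sketch of that idea (my own illustration, not from the parent post): compare 16 bytes against zero, turn the boolean vector into a bitmask, and use the bitmask to find the terminator, strlen-style. It assumes the buffer is padded to a multiple of 16 bytes so the loads stay in bounds, and it uses the GCC/Clang __builtin_ctz:
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stddef.h>

size_t padded_strlen(const char *s)
{
    const __m128i zero = _mm_setzero_si128();
    for (size_t i = 0; ; i += 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)(s + i));
        __m128i eq    = _mm_cmpeq_epi8(chunk, zero);  /* 0xFF in each lane that equals 0 */
        int mask = _mm_movemask_epi8(eq);             /* one bit per byte lane */
        if (mask)                                     /* some lane held the '\0' */
            return i + (size_t)__builtin_ctz(mask);   /* index of the first match */
    }
}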
Re: (Score:2)
Not only on Intel: IBM Z and POWER processors have transactional memory as well.
Cooking (Score:5, Funny)
There was a period in the '00s when PC processors were good for cooking eggs. You had to be careful with the AMD ones though; they had a tendency to burn the egg quickly.
Re: (Score:2)
I would imagine that the prizes for that would go to the Itanium and the Alpha. I've always thought it would be neat if one could establish a cluster farm in the open in the North Pole of Cold in Siberia, where the CPUs could be laid out w/o heat sinks and overclocked. It could be a good way of warming the air, while also providing fantastic supercomputing facilities. Of course, they'd have to be laid out in ways that they always face a wind chill of -40.
Of course, one is more likely to get something o
Re: (Score:2)
I've never worked with Itanium or Alpha, but on PCs the Pentium 4 was the undisputed King of Watts. Power usage on it got rather silly. You see that odd little four-pin power cable going to the mainboard in addition to the main power bundle? That's the supplementary power supply introduced to feed the insatiable demands of the P4.
Re: (Score:2)
Not quite that, but there is a supercomputing center in Alaska to take advantage of easy cooling.
Re: (Score:2, Funny)
Since then, we have offloaded that task to the graphics card, thereby freeing up the main CPU for hash browns.
Re:Cooking (Score:5, Informative)
That was during the Mega/Gigahertz war, during the Golden Age of the Desktop.
Power usage wasn't an issue. Heat wasn't much of an issue either, as you could fit big honking liquid-cooled heat sinks on your CPUs. We had these big upgradable towers which gave us room to fill with stuff.
So we had hotter CPUs because we wanted the faster clock speed.
What happened? Well, first 3GHz kinda became the max you could go, and we moved to more parallel systems (multi-core CPUs and GPUs), and we wanted more portable devices. Laptops became far more popular, then smartphones and tablets. So power usage, size, and heat became bigger issues.
Re: (Score:2)
Shirley you mean Intel. Around 2000 the Pentium 4 silliness was in full swing. Super long execution pipeline, totally reliant on high clock speeds for performance. Those things could burn the Intel logo into your toast in under 5 seconds.
Re: (Score:3)
Why did you call that person Shirley?
What happened to my /.? (Score:5, Funny)
We just had a story about low-level improvements to the BSD kernel, and now we get an article about chip-level features and how compilers use them?
Is this some sort of pre-April-Fools /. in 2000 joke? Where are my Slashvertisements for gadgets I'll never hear about again? My uninformed blog posts declaring this the "year of Functional Declarative Inverted Programming in YAFadL"? Where the hell are my 3000-word /. editor opinions on the latest movie?
If this keeps up, this site might start soaking up some of my time instead of simply being a place I check due to old habits.
Slashdot is powered by your submissions (Score:5, Informative)
If you want to see more Slashdot-in-2000 style posts, and you have access to the sort of articles that Slashdot-in-2000 might have posted, Slashdot welcomes your submissions. You could even become a "frequent contributor".
Re: (Score:2)
If you want to see more Slashdot-in-2000 style posts, and you have access to the sort of articles that Slashdot-in-2000 might have posted, Slashdot welcomes your submissions. You could even become a "frequent contributor".
Um, no. This used to be the case, but the only stories I've submitted that have actually been picked by the editors in the last 5 years have been the "look at this new tech", "look at this glaring mistake" or the political kind. Anything about explaining actual existing tech or showing novel new uses for existing tech has never made it past the editors.
These days it's extremely easy to become a "frequent submitter" without becoming a contributor at all -- even when you do your own editing, have a reasonab
Re: (Score:2)
I used to be one/1 of the top /. submitters until it changed. I rarely submit to /. these days, but do on other web sites like Reddit [reddit.com], Blue's News [bluesnews.com], etc. :)
Should hardware even be a concern? (Score:2)
I have had arguments over this. People in various fora have asked what programming languages they should learn. I always put assembly in my list. But is it really important enough to learn these days? Is hardware still relevant?
8-bit MCUs (Score:2)
If you want to get the most out of an 8-bit microcontroller, you'll need assembly language. Until recently, MCU programming wasn't easily accessible to the general public, but Arduino kits changed this.
Re: (Score:2)
If you want to get the most out of an 8-bit microcontroller, you'll need assembly language
Or you just throw your 8-bitter in the garbage, and grab one of the many ARM MCUs out there.
depends what you're doing (Score:5, Insightful)
For example, I worked for a decade in the linux kernel and low-level userspace. Assembly definitely needed. I tracked down and fixed a bug in the glibc locking code, and you'd better believe assembly was required for that one. During that time I dealt with assembly for ARM, MIPS, powerpc, and x86, with both 32 and 64-bit flavours of most of those. But even there most of the time you're working in C, with as little as possible in assembly.
If you're working in the kernel or in really high-performance code then assembly can be useful. If you're working with experimental languages/compilers where the compilers might be flaky, then assembly can be useful. If you're working in Java/PHP/Python/Ruby/C# etc. then assembly is probably not all that useful.
Re: (Score:3)
Let's not forget something as simple as debugging. Someone who understands the hardware and knows Assembler will always have an advantage when it comes to debugging.
Re:depends what you're doing (Score:5, Insightful)
Again, it depends on what they're debugging. If it's a syntax error in a SQL string, then assembly helps not so much.
Re: (Score:2)
Hardware should always be a concern, because hardware is the reality that implements the abstraction of a program. No matter how efficient something is in purely mathematical terms, it's the hardware that determines the actual performance, complexity and problems. ISA, I/O capabilities, amount of RAM etc all matter in deciding what will be the best way to implement something.
No matter how many layers of abstraction you put in to provide the illusion of being able to ignore the hardware, the reality of hardw
Re: (Score:2)
There are also other problems that really play into multi-threading, where reducing the number of dirty cache-lines you need to access is important, which means understanding memory alignment and many other things.
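As a rough illustration of the dirty-cache-line point (my own sketch, assuming 64-byte cache lines; the names are made up): pad per-thread counters out to their own lines so two threads don't keep bouncing the same line between cores.
#include <stdalign.h>
#include <pthread.h>

/* alignas(64) pads each counter out to its own cache line (C11) */
struct padded_counter {
    alignas(64) long value;
};

static struct padded_counter counters[2];

static void *worker(void *arg)
{
    struct padded_counter *c = arg;
    for (long i = 0; i < 100000000; i++)
        c->value++;                 /* each thread dirties only its own line */
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &counters[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}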
Re: (Score:2)
There are only two situations I can think of where a working knowledge of assembly would be useful:
1. You write compilers.
2. You work in embedded device programming, where some I/O requires you to count instruction cycles to ensure correct timing.
Re: (Score:2)
Or you need to do stuff that requires special instructions, like enabling/disabling interrupts, or magic co-processor instructions that flush the cache. Of course, if you're lucky, somebody made C wrappers for them, but that's not always the case.
Re: (Score:2)
It's not so much that assembly language isn't used as much any more as that the machine language on current processors is much more of a mess than the ones I learned and used a long time ago. If there were something like the old Motorola 6809, that would be great to learn on.
Caches, threading, SIMD/GPUs, and floating point (Score:2)
I haven't seen the article or video. But for 99% of developers, I'd say the only CPU-level changes since the 8086 that matter are caches, support for threading and SIMD, and the rise of external GPUs.
Out-of-order scheduling, branch prediction, VM infrastructure like TLBs, and even multiple cores don't alter the programmer's API significantly. (To the developer/compiler, multicore primitives appear no different than a threading library. The CPU still guarantees microinstruction execution order.)
Some of th
Re: (Score:2)
And of course, ever since the 80486 (1989), all CPUs support floating point instructions.
486 SX chips had the FPU disabled or absent, so not all CPUs (or even all 80486 CPUs). As far as I'm aware, the Pentium (586) did not have a model without FPU support (although in the MMX models, you couldn't use MMX and the FPU at the same time).
Chipset Integration (Score:2)
I'm not a CPU expert so feel free to take my opinions below with a grain of salt... (grin)
The biggest change to processors in general is the increased use and power of desktop GPUs to offload processing-intensive math operations. The parallel processing power of GPUs outstrips that of today's CPUs. I'm sure we will be seeing desktop CPUs with increased GPU-like parallel processing capabilities in the future.
http://en.wikipedia.org/wiki/G... [wikipedia.org]
http://www.pcworld.com/article... [pcworld.com]
Re: (Score:3)
Indeed, the architecture of stream processors is quite a bit different from the general-purpose processors we are used to programming. It's kind of exciting that programming stream processors through shaders, OpenCL, and CUDA has gone mainstream. And for a few hundred dollars a poor college student can afford to build a small system capable of running highly parallel programs. While not equivalent in performance to a supercomputer, it has structural similarities sufficient for experimentation and learning.
SIMD is still von Neumann (Score:2)
I'm not sure SIMD really falls outside of a "1980s" model of a CPU. Maybe if your model means Z80/6502/6809/68K/80[1234]86/etc, rather than including things like Cray that at least some students and engineers during the 80s would have been exposed to.
von Neumann execution with Harvard caches became commonplace in the 1990s. Most people didn't need to know much about the Harvard-ness unless they needed to do micro-optimization.
Things don't get radically different until you start thinking about Dataflow archit
Re: (Score:2)
TFA's second line indicates that everything in the article refers to the x86/64 architecture. It is true many of these features originate in much older archs, but they are being introduced into the x86/64 arch over time.
Re: (Score:2)
Sorry if you missed it, but my post tried to point out that x86/64 is not really different from "1980s" CPU architecture. It's an 8085 with Harvard caches, out-of-order execution, and RISC translation instead of microcode (not that that is a significant difference, in my opinion).
x86 will eventually get transactional memory, making it competitive with a 1970s Cray. And x86 has gotten hardware virtualization, making it competitive with a 1960s IBM.
Everything old is new again!
Is this what they mean by RAM? (Score:2)
/* pick random elements until every one has been visited exactly once */
while (hitCount < arraySize) {
    i = rand() % arraySize;
    if (hitArray[i] == 0) {     /* not visited yet */
        sum += array[i];
        array[i] = 0;
        hitArray[i] = 1;
        hitCount++;
    }
}
Re: (Score:2)
/. constantly fucks up ecode indenting ... blaming the author doesn't solve anything.
Any computer scientist worth their 2 bits can figure out the algorithm; being pedantic only makes you look like an idiot.
Re: (Score:3)
/* this line is misindented */
Whoa there. He's not composing python. He's writing in a real programming language.
wow, too complicated (Score:2)
I used to sort of understand how a computer works. Not anymore. It's just magic.
Summary of Summary. (Score:2)
Guy who doesn't understand how CPUs work amazed about how CPUs work. /thread
Re: (Score:2)
Guy who used to understand CPUs is amazed at how they've changed.
Re: (Score:2)
Guy who used to understand CPUs is amazed at how they've changed.
Ya, but, as several posters have pointed out, many of the things you mentioned, like vector instructions and virtualization *have* been around since the 1980s (and/or earlier) -- heck, I was a sysadmin at the NASA Langley Research Center in the mid-80s and worked on a Cray-2 (a vector processor system) and an Intel system with 1024 processors.
So either your CPU experience is *really* old, or perhaps your CPU experience is more limited than you think, rather than CPUs having changed to the extent you think.
Can someone explain the puzzles? (Score:2)
Re: (Score:2)
I believe this is a scenario in which two threads performing an incl instruction without the lock prefix could overwrite the same memory address in such a way that the end result would be 2.
thread 1, iteration 0 of 9999: load 0 from memory
thread 2, iteration 0 of 9999: load 0 from memory
thread 2, iteration 0 of 9999: add 1 to 0 = 1
thread 2, iteration 0 of 9999: save 1 to memory
thread 2, iteration 9998 of 9999: load 9998 from memory
thread 2, iteration 9998 of 9999: add 1
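Here's a minimal pthreads sketch (mine, not the article's code) that exhibits that lost-update behaviour: the plain counter++ compiles to a load/add/store (or an incl without the lock prefix), so updates can be lost and the final count usually comes out short of 20000.
#include <pthread.h>
#include <stdio.h>

static volatile long counter = 0;     /* shared, deliberately not atomic */

static void *worker(void *arg)
{
    for (int i = 0; i < 10000; i++)
        counter++;                    /* load/add/store: updates can be lost */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (20000 expected, often less)\n", counter);
    return 0;
}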
Cray 1 from the 1970s used SIMD (Score:3)
If you understand scalar assembly, understanding the basic "how" of vector/SIMD programming is conceptually similar.
Actually, if you think back to pre-32-bit x86 assembler, where the X registers (AX, BX) were actually addressable as half-registers (AH and AL were the high and low sections of AX), you already understand SIMD to some extent.
SIMD just generalizes the idea that a register is very big (e.g. 512 bits), and the same operation is done in parallel to subregions of the register.
So, for instance, if you have a 512-bit vector register and you want to treat it like 64 separate 8-bit values, you could write code like the following:
C = A + B
If C, A, and B are all 512-bit registers, partitioned into 64 8-bit values, then logically the above vector/SIMD code does the same work as the scalar code below:
for (i = 0; i < 64; i++) {
    c[i] = a[i] + b[i];
}
If the particular processor you are executing on has 64 parallel 8-bit adders, then the vector code
C = A + B
Can run as one internal operation, utilizing all 64 adder units in parallel.
That's much better than the scalar version above - a loop that executes 64 passes..
A vector machine could actually be implemented with only 32 adders, and could take 2 internal ops to implement a 64 element vector add... that's still a 32x speedup compared to the scalar, looping version.
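For the x86 crowd, here is a small SSE2 illustration of the same idea (my own sketch, assuming n is a multiple of 16 and the function name is made up): one _mm_add_epi8 does 16 byte-wide adds per iteration instead of 16 trips through a scalar loop.
#include <emmintrin.h>

void add_bytes(unsigned char *c, const unsigned char *a,
               const unsigned char *b, int n)
{
    for (int i = 0; i < n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        __m128i vc = _mm_add_epi8(va, vb);      /* 16 independent 8-bit adds */
        _mm_storeu_si128((__m128i *)(c + i), vc);
    }
}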
The Cray 1 was an amazing machine. It ran at 80MHz in 1976.
http://en.wikipedia.org/wiki/C... [wikipedia.org]
According to WP, the only patent on the Cray 1 was for its cooling system...
Re: (Score:3)
There is some interesting stuff, but it mostly boils down to ways to optimize code. The older chips may have had the idea for something but didn't implement it due to the enormous cost. Sometimes it's handy to have just a couple of instructions to help out rather than add a giant feature; as in having no floating point or multiplication (early RISC machines) but having an instruction to find the first or last bit set, which makes the software routine that does this much faster.
There are instructions to help out c
Re: (Score:3, Insightful)
He wrote, "introduced to x86 since the early 80s include paging / virtual memory, pipelining, and floating point." We know that some platforms had some of these features earlier than x86, but he was speaking to those who had been programming on the x86 platform. Of course, this ignores the x87 math coprocessor, but I digress.
Re: (Score:2)
True. The one article linked is very specifically x86-oriented (all hail to the monoculture). There really are far, far too many people out there who still act as if microcomputers were the beginning of computer history.
Re: (Score:2)
What does that even mean? Pipeline-able executions? How much data? There is no context.
I think we can safely assume "yes" and "one machine word" here
[rants about how crude the author's understanding of the matter is, without giving a grain of indication that he's got a better understanding]
Speaking of people who sound like they're in junior high.
Re:1980s? (Score:5, Insightful)
I am staying away from your lawn, that's for sure. If my frisbee lands over there, you can keep it; you've earned it.
Re: (Score:2)
You can have memory bus interfaces wider than 64 bits. This has nothing to do with word size or address-space size; it's just that reading and writing more bits at once is a big speed increase. I.e., DIMM memory with a 64-bit interface (more if you count ECC) was common long before there were 64-bit PCs.
There are also floating point representations in common use with 80 bits, and a 128 bit format is in use but less common.
Re: (Score:3)
You are a retarded idiot. The author states right at the beginning of the article that he's focusing on x86. In the (late) 80s, most people had an IBM PC, if they had anything.
"Most", maybe. But the late 1980s were the heyday of the Macintosh, Amiga and Atari ST.
Come to think of it, I'm not even sure of "most" outside the business world. The Commodore 64 and Apple computers fit in there somewhere.
Re: (Score:2)
You are a retarded idiot. The author states right at the beginning of the article that he's focusing on x86. In the (late) 80s, most people had an IBM PC, if they had anything.
Re: (Score:2)
What about that part of the question?
The things I’m most interested in are things that programmers have to manually take advantage of (or programming environments have to be redesigned to take advantage of) in order to use and as a result might not be using yet. I think this excludes things like Hyper-threading/SMT, but I’m not honestly sure.
That is what the article is really about and what it is answering.
The answers seem comprehensive and useful, i.e. you can switch page size to reduce pressure on the TLBs if your application benefits from it, you can ignore the branch penalties due to the hardware being so efficient at running branches.
Your old rat's nest mainframe did not run at over 1GHz with a million or however many transistors dedicated to OoOE and branches you dumbfuck.
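To make the page-size point concrete, a hedged Linux-only sketch (my own, not from TFA; it assumes transparent huge pages are enabled and the buffer size is arbitrary): madvise(MADV_HUGEPAGE) asks the kernel to back a region with huge (e.g. 2MB) pages, so a large working set needs far fewer TLB entries.
#define _DEFAULT_SOURCE
#include <sys/mman.h>

#define BUF_SIZE (64UL * 1024 * 1024)

int main(void)
{
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;
    madvise(buf, BUF_SIZE, MADV_HUGEPAGE);   /* hint: use huge pages for this range */
    /* ... touch buf with the real workload ... */
    munmap(buf, BUF_SIZE);
    return 0;
}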
Re: (Score:2)
A lifetime of stereotyping will do that to you.
Re: (Score:2)
It's not that we're all crotchety, though. But these articles are like going to a history class where you're taught that everything before 1960 isn't relevant, so just assume that JFK was the first president. There's a monoculture out there with the PC, but it's not representative of the state of the art, in the past or the present. It's not the most common chip, it's not the best-designed chip, it's not a good chip for learning architecture with, there's nothing much to recommend it for except that it's
Re: (Score:2)
8 bit was not the most popular back then. And 64 bit is not the most popular today. This may be true back then if you consider only the newly created microcomputer segment, and it may be true today if you consider only the PC & Mac segment.
When 8 bit CPUs were new and thus few in number we had lots of computers already with 16 bits and more. The 8 bit CPUs were primarily used by hobbyists at the time, or as support for larger computers.
Today the x86-64 is not the most common chip, because the majorit
Re: (Score:2)
Moderation is our form of peer review.
Re:You are still wrong (Score:4, Informative)
The IBM 360/370 line and its successors have had decimal arithmetic (in addition to binary and after the 370/158 floating point) since the 1960/70s. Others have had these also.
Re:L1,2,3,4 Cache? (Score:5, Informative)
Higher latency of larger caches
Higher latency of more layers of cache
Poor transistor scaling of fully associative caches, or an increased rate of false evictions for n-way caches.
Increased power usage. It's very difficult to turn off part of your cache to save power, but it's very easy to turn off a core.
Not all problems scale well with more cache
I'm sure there are many other reasons.