Understanding the Microprocessor
Citywide writes "Ars has a very thorough technical piece up entitled Understanding the Microprocessor. It's pitched lower than many Ars articles (all of which are a bit over my head, to be honest), but that's why it's worth checking out: it explains the fundamentals in a very clear and useful way. And as the author notes, this kind of information is really crucial to get a grip on before Hammer arrives."
Oh really? (Score:2, Insightful)
The only information you'll need to know once Hammer has arrived is that it's the fastest thing on the planet, and the only mass-market 64-bit processor.
Oh yeah, and where to buy one. :-)
Re:Oh really? (Score:3, Insightful)
Oh yeah, and where to buy one.
Except for those of us who have to make informed decisions about future upgrade paths and which processors are going to provide the best cost and performance for our specific applications. Sometimes you need to know the specifics of how a processor operates and what its specific strengths and benefits are before you recommend changing a company's whole server base, etc...
Guess what? (Score:3, Interesting)
chances are you either don't need this article, or you could have written it.
Also, if you are a big enough player, you get some sample procs and run some benchmark tests, maybe even write some of your own.
Re:Guess what? (Score:3, Insightful)
Re:Guess what? (Score:2)
Actually, I'm spoiled. My Microprocessors class had a hand written text book that was SUPER FANTASTIC. This guy could teach. He could also design systems like a madman.
But really, reading a book (because you're gonna need something on hand you can reference) and then getting one of those trainer boards (with the hex input and 8-segment LED display of IAR and register A), and you are set.
Re:Guess what? (Score:2)
I don't think you're trolling, but there's no reason the information shouldn't be available in multiple places, especially places where it's free.
I'm getting off topic... (Score:2)
That relation is so tenuous that you could say "My boss might buy a computer, so I need to read this."
Re:I'm getting off topic... (Score:2)
Re:Guess what? (Score:1)
The kind of person who goes and buys a WLAN, deploys it without locking it down, and goes 'oh I was trying it out' when the resident security weeny tries to kill them. All with higher up technophile management approval of course. They don't bother to consider the implications of new tech, they just go 'ugh, shiny new laptop. Must be better. Must have'.
Re:Oh really? (Score:3, Interesting)
No one with a clue would ever do this any other way than by buying/borrowing a system for evaluation and running the specific application as a benchmark.
The beauty of Hammer is that doing so will be quite inexpensive compared to other comparable options. :-)
I stand by my original post.
(BTW, my vote for most innovative Hammer feature is the integrated memory controller(s) - memory bandwidth scales with processor count in SMP systems.)
Ahh! (Score:3, Funny)
Oog simple Caveman, like Hammer. Oog use 64-bit Hammer bash! Oog buy AMD. Oog love AMD!
Re:Oh really? (Score:1)
I concur! Not a joke! (Score:4, Funny)
As Socrates said, the unexamined life is not worth living.
But as many EE or even ECE people know, most programmers don't give a rat's ass about what the hardware is doing. Those that do have this understanding (OS people, real-time people, embedded people, well, a lot of people!) have it because they need it.
I'm not arguing that it isn't beneficial to know the difference between SIMD, SISD, MIMD, MISD systems, but if you aren't programming or designing for parallel systems, how will this help you when a new processor comes to market?!
The "Hammer" line is just a fumble for relevance. Guess what? We're reading this on a computer. The relevance is already there!
Re:I concur! Not a joke! (Score:5, Funny)
embedded people, are they, like, fetuses?
But seriously (and to stay on topic), I am really excited about hammer too. 64-bit processors for the people! I hope the mobo manufacturers get some nice, commodity products out there so that hammer is a viable choice for my desktop!
Re:I concur! Not a joke! (Score:2)
SIMD (Single Instruction Multiple Data) is the hot 'new' kid on the block and the basic abstraction/concept behind AltiVec, MMX, SSE, VIS, etc.
MIMD (Multiple Instruction Multiple Data) would (IMNSHO) be just a misnomer for VLIW (Very Long Instruction Word), which is almost the same thing as EPIC (Explicitly Parallel Instruction Computing), aka Itanic
Now what exactly would a MISD(multiple instructions single data) system be?! And, can anyone point to an example of such a system?
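The SISD/SIMD distinction is easy to show in code. Here's a minimal sketch in C; the vector type uses GCC/Clang vector extensions as a portable stand-in for what MMX/SSE/AltiVec registers do in hardware, and all names are my own:

```c
#include <stddef.h>
#include <assert.h>

/* SISD: one instruction consumes one pair of operands per step. */
void add_sisd(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];            /* one scalar add at a time */
}

/* SIMD: one instruction consumes a whole vector of operands.
   This vector type is a GCC/Clang extension, standing in for a
   hardware SIMD register holding four packed floats. */
typedef float v4f __attribute__((vector_size(16)));

v4f add_simd(v4f a, v4f b) {
    return a + b;     /* one "instruction": four adds in parallel */
}
```

The point of MMX/SSE/AltiVec is simply that the second function finishes in roughly a quarter of the steps of the first when the hardware really executes it.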
Re:I concur! Not a joke! (Score:2, Insightful)
They don't exist; it's just the theoretical fourth type, included to complete the set. It always comes up in computer science courses, but none was ever built.
Re:I concur! Not a joke! (Score:3, Informative)
MISD is fairly near useless, since it's basically one great big implicit race condition. It's included in the list for completeness only.
(Tell these two processors in parallel: add 1 to this element, and multiply it by 2. So do you get 2(x + 1), or do you get 2x + 1?)
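The race above can be written down directly. A toy C sketch (names mine) where the outcome of "two instructions on one datum" depends entirely on which instruction lands first:

```c
#include <assert.h>

/* MISD as a race: two "processors" each apply their own instruction
   to the SAME datum; the answer depends on who finishes first. */
int misd_result(int x, int add_wins_race) {
    if (add_wins_race)
        return (x + 1) * 2;   /* add-1 happened first: 2(x+1) */
    else
        return x * 2 + 1;     /* multiply happened first: 2x+1 */
}
```

With x = 3 you get either 8 or 7, which is exactly why nobody builds the thing.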
Re:I concur! Not a joke! (Score:1)
MISD (Score:1)
Re:I concur! Not a joke! (Score:2)
Re:I concur! Not a joke! (Score:2)
Plenty of OS, real time and embedded people have no need for anything beyond instruction timings, if that. C is a wonderful thing.
Don't get me wrong, you should read the article just to appreciate the technology. But to imply that reading the article is necessary for a programmer (even an assembly level one), much less an end user, is a big overreach. That was my original point...along with the fact that Hammer will rock! :-)
BTW, whoever moderated my original post a "troll"...get a life. :-)
As someone else's tagline reminds us: "To moderate is human, to reply divine." ;-)
Moderation Totals: Troll=1, Insightful=4, Overrated=3, Total=8. Heh. I guess I have some anti-fans. Most likely Intel employees or fans I guess...diversify those portfolios guys! ;-)
Disclaimer: I don't currently hold AMD or INTC stock. That will change soon though. =)
Re:Oh really? (Score:3, Insightful)
Alex
Re:Oh really? (Score:1)
Actually, Alpha was never really marketed much at all, which is the main reason why it never did very well, despite its technical strengths.
The original poster was quite correct, the AMD Hammer will be the first 64-bit, general purpose CPU that is mass marketed.
Re:Oh really? (Score:2)
And not one second too soon. See this machine (points to the floor and right)? It's got 2 GB of memory (2^31 bytes), and it's my friggin' home computer. The fact that Intel is not pushing for 64-bit desktops is very strange indeed, considering that they will be necessary even for high-end consumers like myself within the year.
And don't give me that crap about 36-bit virtual addressing... The reason I use 2 GB in my machine is that I USE >1GB, in one process, and in a very random fashion (in fact, the hobby program I'm developing would really like ~7 GB of RAM (yes, just for this one process), but I can't afford that just yet).
For those who are wondering what kind of "hobby program" I'm writing that needs such a shit-load of memory: It's an application that displays the globe, using the Blue Marble [nasa.gov] world texture maps at 1x1 km resolution from NASA (40,000x20,000 pixels, night and day side). And sometime soon-ish they'll release 100 m maps, and I will need 700 GB of RAM; then 10 m/70 TB, 1 m/7 PB...
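For scale, the poster's ~7 GB figure roughly follows from the map sizes given. A back-of-envelope sketch in C (my assumptions, not the poster's: 24-bit RGB, day + night maps, and a mip chain adding about a third on top of the base level):

```c
#include <stdint.h>
#include <assert.h>

/* Rough texture footprint for the Blue Marble maps described above. */
uint64_t texture_bytes(uint64_t w, uint64_t h, uint64_t maps) {
    uint64_t base = w * h * 3 * maps;   /* 24-bit RGB, full resolution */
    return base + base / 3;             /* mip chain: 1/4 + 1/16 + ... */
}
```

texture_bytes(40000, 20000, 2) comes to 6.4 billion bytes before any padding or working buffers, which lands squarely in the "~7 GB" ballpark.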
Re:Oh really? (Score:2)
Given your application and the reasonable maximum resolution of 1920x1440, a 4kx4k texture map should be sufficient: just downsample your original map and display it. Even if you made your map zoomable, you wouldn't need gobs of RAM. Ideally, you could store the large map on disk as a series of large squares, thus allowing efficient access to data in the shape you're likely to need. Not a major memory requirement, and probably fairly interesting to build.
Re:Oh really? (Score:2)
Of course I keep the humongous texture map in memory. The whole point of my program is to be able to see the entire globe, quickly zoom in on an area of interest, then quickly zoom out and move to another area of interest. Basically, I'm trying to reproduce the effect in MiB (although that movie hadn't been made yet when I started in 1996, on a 128 MB SparcStation with 8192x4096 textures): to smoothly scan to any region of the Earth at great detail.
Regarding keeping the interesting parts of the map in memory, and swapping the others to disk... That's what virtual memory is supposed to do for you, yes? I do use mip-mapping, i.e. having a hierarchy of textures, each with half the resolution of the last. When looking at the whole globe, you're just using the 1024x512 texture, but when you're fully zoomed in, you're using (a small subset of) the 40,000 x 20,000 map.
Basically, you are suggesting that I should use complex algorithms to shift any un-needed data to disk (when normal paging will probably do the same task faster and more simply).
I should use a SCSI RAID (or perhaps a fibre channel SAN?) to make globe-turning faster? Yes, it will be less than turgid, but it won't be smooth. Besides, I run this on a normal consumer computer, no RAID in sight... What you don't see here is that I simply, for good reasons, want to keep it ALL in memory, allowing beautiful, fast sweeps from any region on the Earth to any other. I DON'T want to swap to disk. Thankfully, I won't have to, in a year or so (given current RAM increases), IFF Intel finally realizes that some programs do need more than 4 GB. AMD does. Guess which system I'll buy.
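The mip-mapping described above (each level half the resolution of the last) boils down to a shift. A sketch of level selection in C; the rule here is a simplification of mine, not necessarily what the poster's program does:

```c
#include <assert.h>

/* Each mip level halves the previous one, so level k of a map that is
   full_width pixels across is (full_width >> k) pixels across. Pick
   the deepest level that still meets the width the current zoom needs. */
int mip_level(int full_width, int needed_width) {
    int level = 0;
    while ((full_width >> (level + 1)) >= needed_width)
        level++;
    return level;
}
```

For a 40,000-pixel-wide map and a viewport needing ~1024 pixels, this picks level 5 (a 1250-pixel-wide texture), which matches the "whole globe uses the small texture, zoomed-in uses the big one" behavior described.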
Re:Oh really? (Score:2, Insightful)
so maybe not "fastest", but it will be fast.
Re:Oh really? (Score:2)
Oh really? (seems to be my phrase for the day) Please point me to the SPEC numbers for Hammer...
We'll see what the best compilers available at the time of release can do. Also, there will be PR 4000+ Opterons (according to leaked information) in the first half of '03. Those should beat the Itaniums shipping at that time, even in FP. Don't forget, the highest end Opterons have two memory controllers, and double the memory bandwidth of the baseline Hammers.
Regardless of whether or not Hammer is behind by a minuscule amount in FP performance, I expect it will cost less than half what Itanium costs, even in its best Opteron incarnation. Finally, AMD will really be able to put the hurt on Intel. :-)
Myself, I just want one of these babies coupled to a GeForce FX card. I'm thinking about June of next year... =)
Re:Oh really? (Score:2)
hammer (AMD's projected numbers):
specint 1202
specfp 1170
itanium2 numbers are real SPEC results, available on their site:
specint 674
specfp 1431
AMD says they expect hammer-aware compilers to render a 20% performance improvement in FP, thanks to using the extra registers, etc. This is likely true, but also convenient that it's just enough to bring them to the same level as I2.
In the end, hammer will always dominate I2 in performance per $, just based on market volumes. As time progresses, itanium will ride Moore's law wider, whereas it's not clear how hammer will capitalize on die shrinks. Interconnect delay is becoming the dominant issue, so riding the clock advance like processor makers have for the last 5 years will not be so straightforward.
I'm eager to buy a hammer as well. Don't think I'm somehow badmouthing the chip. Just wanted to balance out over-enthusiastic statements like "it's the fastest thing ever!!!".
Sure (Score:4, Funny)
Re:Sure (Score:4, Funny)
You mean I've been sacrificing my social life for nothing? Say it ain't so!
You bastard... You cruel, cruel bastard... Can't you break things to the slashdot community gently?
Unless of course (Score:2)
Nomination (Score:5, Funny)
Re:Nomination (Score:4, Funny)
On a related note: anybody wanna buy a used book on architecture, programming, and interfacing with the 8086 and 8088 microprocessors? Rarely used, little wear, only used page 33 (ASCII table)
Re:Nomination (Score:4, Funny)
I would, but I don't want to buy a book with a used ASCII table
Re:Nomination (Score:4, Informative)
Too late. Charles Petzold has already done it. See CODE [barnesandnoble.com]. It should be on every geek's bookshelf.
Re:Nomination (Score:2)
This book reminds me of my university education [carleton.ca]. When I was done, we had done transistors, digital logic, PC architecture, assembly language, C and C++ and network protocols. I have always felt that having an understanding of how everything works has made me a better programmer.
Re:Nomination (Score:3, Funny)
Re:Nomination (Score:2, Informative)
Computer Organization and Design: The Hardware/Software Interface is the ultimate intro to microprocessors book. It covers instruction sets and architecture extremely well. As the title suggests, it shows how (and why) the instruction set and architecture are tied together. (Most of the discussions revolve around MIPS, but they have stuff on Pentiums and PowerPCs, too.) The meat of the book involves actually designing a pipelined, MIPS-like processor from scratch. Really cool stuff - you can actually implement the final design on an FPGA board pretty easily. The final design is, of course, much much much simpler than an actual processor, but it really gives you a sense of what all the components do and how they function together. Anyways, highly recommended...
Computer Organization and Design: The Hardware/Software Interface [amazon.com] from Amazon
Re:Nomination (Score:1)
I'd hope not
The whole point of dumbing down is so you _can_ start to understand it. You start at a fairly high level (this is a processor; it does stuff) and work your way down to the detail. If I were to just throw a schematic of Hammer at someone, very few would have any idea what it was, let alone have a detailed understanding of it.
Re:Nomination (Score:5, Insightful)
Everybody has a 'level' they need to start at when learning ANYTHING.
More often than not, the starting level offered to someone is a bit higher than their current comprehension level on a given subject. It happens all the time to college freshmen. It doesn't mean they are dumb. They have the capability to learn it. It means that their life experiences and knowledge are incompatible with the method in which the subject material is being presented.
With the assistance of an additional reference level (a bridge to knowledge, if you will), a person can then make 'the connection' to the material being taught. The microprocessor diagram helped me a lot. I learned about microprocessor theory from many books and diagrams, but I'll be damned if I'll ever be able to share that knowledge with anyone else, because it seemed tough to explain after I understood how it worked. I've always had a problem with 'dumbing down' anything I needed to explain. People always complain that I talk over their heads when they want an answer to a 'simple' question.
The example given in this article greatly streamlines the concept. Now I can give a quick intro to microprocessors the next time the subject comes up.
Yo yo (Score:4, Funny)
Yah, you don't want to be caught without da knowledge when the MC gets back in town to teach these new kids a "lesson".
2 legit 2 quit! Hammer time, yo!! Word!
Re:Yo yo (Score:1)
We got to pray
Just to make it today
I said we pray(pray) ah,yeah,pray(pray)
We got to pray
Just to make it to pray
That's word,we pray
I mean, yeah MC Hammer rocks and all, but come on... what does this have to do with Microprocessors?
But in that case ... (Score:2)
Re:Yo yo (Score:3, Funny)
Finally, someone really understands me.
Here's What I'd like to see (Score:2, Insightful)
you're not alone. (Score:2)
Some things, like microprocessor design, simply can't be gleaned without a proper education and then experience working in the field. Even the best undergrad program will only take you to about the Pentium II level in design. I had a course where we built (paper design, and simulated, of course) a processor that was a PII equivalent.
Anything higher than that... well, go get a job with amd or intel.
Re:you're not alone. (Score:2)
Cuz that's pretty much all you need to do!
I've done some sili-layout, first with L-Edit and then with Cadence (ugh!), but you write the VHDL first and let that lay out your chip. Ship that to the fab, and you've got yourself a nifty micro.
pah! VHDL is for pussies! (Score:2)
That was fun, let me tell you.
Re:pah! VHDL is for pussies! (Score:2)
Thank god for the Gentleman's C!
Re:Here's What I'd like to see (Score:5, Interesting)
http://www.aduni.org/courses/how_computers_work
Re:Here's What I'd like to see (Score:2, Interesting)
Re:Here's What I'd like to see (Score:4, Informative)
I think what you would like, although it's a bit dated, would be Understanding Digital Computers [amazon.com]. This book starts at the gate level and goes through the layout and operation of a simple 8-bit CPU. I got this book when I was 13. When I went to college and took my digital architecture classes I aced them, and even though that material was much more difficult, I credit my success to having read this book first instead of diving in naked like most students do. It's been forever since I've read it, but I still have it on my bookshelf.
Re:Here's What I'd like to see (Score:1, Offtopic)
$ dict -P - layman (Score:2, Insightful)
From Webster's Revised Unabridged Dictionary (1913) [web1913]:
Layman \Lay"man\n.; pl. {Laymen}. [Lay, adj. + man.]
1. One of the people, in distinction from the clergy; one of
the laity; sometimes, a man not belonging to some
particular profession, in distinction from those who do.
From WordNet (r) 1.7 [wn]:
layman
n : someone who is not a clergyman or a professional person
[syn: {layperson}] [ant: {clergyman}]
I understand microprocessors... (Score:1)
That's about all I ever needed to know about microprocessors.
Craenor
This is news how? (Score:5, Funny)
Re:This is news how? (Score:1)
Wait a minute.
Re:This is news how? (Score:4, Funny)
Re:This is news how? (Score:2)
Re:This is news how? (Score:1)
This reminds me... (Score:3, Funny)
But i thought (Score:4, Funny)
Suppose I'd better stop putting food for them in the coffee cup holder. Who would have thought that the nice man from IT support was right all along.
Re:But i thought (Score:2)
question is...where has the food you put in your CD-dri^H^H^H^H^H^Hcoffee cup holder gone??
--paul
Re:But i thought (Score:2)
I thought that computers all had an autistic kid inside.
Re:But i thought (Score:1)
if you *really* want to understand (Score:4, Interesting)
Not too low! (Score:2, Insightful)
I used to feel the same way, but now that I have had several courses in this area, I find Ars's usual level of detail about right. (If they're still pitched too low, you can always read the references.) If you're a person who understands this stuff but doesn't want to spend the time reading the latest journals and conferences, Ars articles often provide a great way to stay up to date, although they may not be accessible to some (I've been there). I hope this "lower pitch" doesn't become a trend.
Re:Not too low! (Score:2)
on the other hand, i think this article is a good idea too--it helps out those of us who would like some more background to help us understand the normal stories.
all in all, i think that running articles giving background info is good, but i hope they keep it separate from the other articles. when i already have the background, i'd rather not have to trudge through it every time i want to read an article on the subject.
Re:Not too low! (Score:1)
Behold the power of nerdism... (Score:1)
wow (Score:4, Insightful)
However, I still don't see how this is relevant to Hammer, as the article doesn't even go into detail about different takes on architecture vis-a-vis Intel and AMD. There are a few links at the end to a discussion of the differences between the G4e and the P4, but nothing on the AMD side.
[offtopic]
Personally, I'm getting wary of various AMD products. I continually see issues w/ AMD and games (the EQ debacle being one of them), I see general weirdness w/ my software on my Athlon, and it just reminds me of all the hideously weird incompatibilities I've had over the years (some that aren't even regularly reproducible; maybe it's a bad mobo?), and it makes me recall a discussion w/ some of my friends:
"If you want it to run right, use Intel. Everyone, _everyone_ tests w/ Intel stuff first. From MS (yah, boo, whatever) to id, from nVidia to Creative Labs, everyone tests on Intel _first_."
I'm not trying to bash AMD, it's just that, well, every time I use an AMD system, I end up experiencing weird glitchy errors, that come and go as they please. While my Athlon setup has been orders of magnitude more stable than past AMD systems, it's still not the rock that my P3 was.
[/offtopic]
Re:wow (Score:4, Insightful)
AMD's problem is that their image is that of a "cost-saving" choice. So some system builders who use AMD go into "cost-saving" mode on all the other components of the system, leading to a greater chance of instability and a bad rep for AMD.
Re:wow (Score:2)
I have found Athlons to be just as stable as any Pentium III or IV. I sell only Athlons to all new customers, and will install P4's if specifically requested. But I spend, and I do mean spend, tons of painstaking time researching AMD motherboards. Almost all of the cheap ones are crap (along with almost anything VIA made more than 1 1/2 years ago). Please consider this before you let anyone tell you otherwise about Intel chips or Athlons. My best friend works at Cray Inc., and they are building a super cluster computer using all Athlons. Think about it. Those are not VIA chipsets in those beasts.
Not had any problems (Score:2)
I've been using home-built AMD-based PCs since the K6 series. I've run Linux, Windows (various flavors), one or two BSDs and BeOS on them -- with all manners of software. I've never had any issues whatsoever with the processor. I have had issues with dodgy motherboards/chipsets, but never the CPU (e.g., same CPU, different mainboard or new BIOS clears up the problem). I'd look elsewhere for the source of your problems. Heat, specifically, would be a good start. I'd check memory next (a lot of the "weird", or at least intermittent, errors I've seen have had to do with one of the two).
As for the "everyone tests on Intel", well, possibly. But they also probably test on AMD, and it really shouldn't matter much anyway since the two are essentially completely compatible as far as your game is concerned. In addition, I know a lot of people using AMD now in fairly intensive environments (for things like clustering) since you can effectively reduce the cost per cycle in half. I haven't heard them complain of "glitchy errors" or "general weirdness". I imagine when the Hammer line comes out they will be even more popular, especially on the low cost high-end.
-B
Computer Organization and Design (Score:3, Interesting)
Re:Computer Organization and Design (Score:1, Informative)
In Soviet Russia... (Score:3, Funny)
Re:In Soviet Russia... (Score:2)
Re:In Soviet Russia... (Score:2)
Re:In Soviet Russia... (Score:3, Interesting)
One of them has a message to Russian reverse engineers, in Cyrillic Russian, to the effect of "only steal from the best".
Re:In Soviet Russia... (Score:1)
It must have been strange to be a USSR engineer at the time: on one hand playing along with all the anti-West rhetoric, while at the same time having to steal technology just to keep up.
Re:In Soviet Russia... (Score:2)
Re:In Soviet Russia... (Score:2)
Many years ago, I heard a rumor about a VAX-11/780 (first model of the VAX) disappearing while being shipped on a train in West Germany. Supposedly it was taken to East Germany for reverse engineering.
Pretty Useful (Score:3, Interesting)
I'd like to see a series of books on the way computers work, at various levels of knowledge, so people can get the knowledge in bite-sized chunks. It'd be helpful to me, since I often end up being "Mr. Explainer" and I'd LOVE to just hand someone a book and get back to work.
A good introduction, but... (Score:2)
But if you need it, you shouldn't be reading Slashdot, or at least, not posting stories.
The classic on this subject, is, of course, Von Neumann's First draft report on the EDVAC [stanford.edu].
Re:A good introduction, but... (Score:2)
Also, Slashdot is good for people to learn new things; the whole point of the articles is to bring to light information that people can discuss and be informed by!
what I understand about microprocessors (Score:2, Funny)
Just a few problems (Score:4, Informative)
Second, the first two instruction types given are arithmetic and load/store. Unfortunately something like half the instructions (or more) in a program are usually arithmetic and branch instructions (conditional jumps in fact.) So those are definitely the things to discuss first, before load/store, if you're going to do it that way. I personally would bring all three types of operation to the front right away and then delve into how they work, but that's a personal decision.
Speaking of branching instructions, he describes forward and backward branches. This is silly. There are two kinds of branches, relative (offset) and absolute. You can jump to a location which is +/- however far from your current position, or you can jump to a specific address. Some CPUs only allow one or the other of these. x86 uses both. (A short jump is an 8-bit signed jump, a -128/+127 offset from your current location. A near jump is 16-bit. A far jump specifies a segment and offset, because x86 uses a segmented memory model.) So branching forward or backward is only a significant concept (at all; of course the assembler handles this for you) when talking about relative branches.
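Both flavors can be written out in a few lines. A C sketch of the arithmetic (simplified real-mode 16-bit x86; helper names are mine):

```c
#include <stdint.h>
#include <assert.h>

/* Relative: a short jump adds a signed 8-bit offset (-128..+127) to
   the current instruction pointer, so the target moves with the code. */
uint16_t short_jump(uint16_t ip, int8_t offset) {
    return (uint16_t)(ip + offset);
}

/* Absolute: a far jump replaces segment and offset outright; in real
   mode the linear address is segment * 16 + offset. */
uint32_t far_jump_linear(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}
```

The relative form is what "forward vs. backward" can even apply to: the sign of the offset is the direction.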
I thought that this article was going to talk about how it was actually done. Maybe I'm just special (where's my helmet?) but I've got most of this material (in this article) out of previous ars technica articles. The stuff in this comment I'm writing now, on the other hand, is based on a class in x86 assembly, the final for which is on this coming Tuesday. I want to know how the instruction decoder is put together, for example.
If you ignore every other point I've made in this, consider the possibility that it is a big mistake to start by talking about heavily pipelined CPUs. It would be best to start with the classic four-stage pipeline (fetch -> decode -> execute -> write), in which an instruction is fetched from memory via the program counter. In x86 the program counter is coupled with the CS register (code segment) and called the instruction pointer (IP, or EIP on 32-bit CPUs in 32-bit mode), so you load the new instruction from CS:IP. As per my above paragraph, a short or near jump updates IP or EIP; a far jump updates CS and [E]IP.
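The fetch/decode/execute/write cycle can be shown as a toy interpreter. A C sketch, with opcodes and encoding invented for illustration (this is not x86):

```c
#include <assert.h>

enum { OP_HALT = 0, OP_ADD = 1, OP_JMP = 2 };   /* invented opcodes */

/* One instruction at a time: fetch via the program counter, decode
   the opcode/operand pair, execute, write the result back. */
int run(const int *mem, int len) {
    int pc = 0, acc = 0;
    while (pc >= 0 && pc + 1 < len) {
        int op  = mem[pc];        /* fetch  */
        int arg = mem[pc + 1];    /* decode */
        if (op == OP_HALT) break;
        if (op == OP_ADD) { acc += arg; pc += 2; }  /* execute + write */
        else if (op == OP_JMP) { pc += arg; }       /* relative jump   */
        else break;
    }
    return acc;
}
```

Real pipelining means overlapping these four stages for consecutive instructions instead of finishing one before fetching the next; this loop is the un-pipelined baseline worth understanding first.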
Finally, is it just me or is it amusing that we're supposed to understand this before hammer arrives but every page has a gigantic animated Pentium IV ad? Up yours, ars adsica.
Re:Just a few problems (Score:1)
The flat memory model has been the standard on x86 since the advent of win32. Maybe the segmented memory model is an interesting historical footnote, but I can't see why it would actually be taught as part of an x86 assembly language course.
Re:Just a few problems (Score:1)
Martin Tilsted
Re:Just a few problems (Score:3, Informative)
Probably everyone who takes a class in x86 assembler (I would have preferred 68k but x86 is what was offered here) is learning about segmented addressing. This is because:
In addition even in 32 bit mode you still have and use segment registers. Oh, you might never CHANGE them, but they are still there. As the sibling to this comment points out, you can use them for optimization (changing DS and ES to allow your offsets to be the same, or smaller than they otherwise would be) and save a few cycles. The registers are there, and still in use.
Re:Just a few problems (Score:5, Informative)
Did you read the article, or did you just skim it? Nowhere do I launch into a discussion of SIMD. The only reason the term is present is because I used a diagram from a previous article.
"Second, the first two instruction types given are arithmetic and load/store. Unfortunately something like half the instructions (or more) in a program are usually arithmetic and branch instructions (conditional jumps in fact.) So those are definitely the things to discuss first, before load/store, if you're going to do it that way. I personally would bring all three types of operation to the front right away and then delve into how they work, but that's a personal decision. "
Yes, it's a "personal decision," and I opted to go a different route. I think the order in which I introduced the concepts works. Other orders are, of course, possible.
"Speaking of branching instructions he describes forward and backward branches. This is silly. There are two kinds of branches, relative (offset) and absolute. You can jump to a location which is +/- however far from your current position, or you can jump to a specific address."
Once you're done with your little intro to ASM, chief, you might stick around for some more advanced courses. In them, you'll learn that what branch prediction algorithms care about is whether a branch is forward or backward, because this tells you whether or not to assume it's part of a loop condition. I won't explain further, though, because a. I've covered the topic in previous articles, and b. I don't like to feed trolls any more than I have to.
"I thought that this article was going to talk about how it was actually done. Maybe I'm just special (where's my helmet?) but I've got most of this material (in this article) out of previous ars technica articles."
Maybe if you'd have read the intro a little more closely, you'd know that I made it clear that everything in that article was covered in more depth in previous Ars articles. This article was intended as background for those articles.
"If you ignore every other point I've made in this, consider the possibility that it is a big mistake to start talking about heavily pipelined CPUs."
I don't discuss heavily pipelined CPUs, or pipelining in general, in this article. I do refer back to previous articles on the P4, but that's recommended as further reading. I'll cover pipelining in a future article (a point that I made clear in the conclusion). And yes, I know that PC = IP in x86 lingo. Thank you. Now we all know that you know, too. Here's a cookie.
"Finally, is it just me or is it amusing that we're supposed to understand this before hammer arrives but every page has a gigantic animated Pentium IV ad? Up yours, ars adsica. "
I made one reference to Hammer in the intro, along with a reference to Itanium2, Yamhill, etc. Let it go, man. This article doesn't pretend to have much of anything specific to do with AMD.
Re:Just a few problems (Score:2)
Uh, I wasn't trying to do that; I was showing why what he wrote was erroneous. As I said, the kinds of jumps are not forward and backward: those are just ways of describing jumps by offset, which on x86 (the only processors I know very well are old Intel ones, plus I did writeups on the G3 and G4 chips for E2) means short and near jumps, and he's completely ignoring direct jumps as opposed to jumps by offset. Saying that there are two kinds of jumps, forward and backward, is like saying that there are two ways of moving along the X axis, or that a door can be open or closed. That's true, but it doesn't tell anyone anything. It's more interesting to explain the actual kinds of jumps (again, relative and absolute; I wouldn't go into as much detail for an article like that as I did in my /. comment, because I assume people here can handle it, and if they can't, that's not my problem). If my point is to explain things to people so they can understand them, then I do so. That's not what I'm doing here.
It's amazing how you decide that I am "just a geek" -- as if being a geek made you less of a person, when in my opinion it makes you more of one, whoa I'm getting off topic -- based on one slashdot comment I've written. Even if every one linked from my user page currently makes me look like I can only speak technically, it still wouldn't mean that was true.
My point (as I say above) is that the information he was providing was incorrect. I didn't feel the entire article was incorrect, or I would have said so. But again saying you are making a jump forward or backwards is true but it doesn't mean anything.
I think I'll call you Hoover J. Electrolux.
Undefined Acronyms (Score:1)
Re:IN SOVIET RUSSIA (Score:1)
Re:why? (Score:2)
And almost certainly higher cost and higher power consumption.
Re:why? (Score:3, Informative)
I suggest you go read the developer docs [amd.com] for the hammer.
Memory addressing has been re-designed, because no one uses the protection model in x86, etc.
Do you understand me, or the developer docs? Probably not.
now read Understanding the Microprocessor and say that the Hammer isn't an improvement.
Re:why? (Score:2)
I suspect some people do use the alternate addressing, since Intel put it in the Xeon; it has a (slower) 36-bit mode (PAE) that can address up to 64 GB of physical memory.
The Hammer has 48-bit virtual and 40-bit physical addressing, not 64-bit, although there's a good bit of confusion on this (particularly since AMD's own docs refer to 64-bit addressing in some areas, but to the 40/48-bit limits in others). I doubt they implemented full 64-bit addressing; it'd be massive overkill for the lifetime of the CPU.
I agree that Hammer is important for much more than address space though.
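To put those bit widths in perspective, a trivial C sketch (the 48/40-bit limits are as the post states; the helper is mine):

```c
#include <stdint.h>
#include <assert.h>

/* How much a given number of address bits can reach, in TiB.
   Valid for bits in 40..63; 2^bits bytes shifted down to TiB. */
uint64_t reach_tib(unsigned bits) {
    return (1ULL << bits) >> 40;
}
/* 40 physical bits -> 1 TiB of RAM; 48 virtual bits -> 256 TiB.
   A full 64 bits would be 2^24 = 16 million TiB, which is why
   implementing all of it now would be massive overkill. */
```

So even the "reduced" 48-bit virtual space is 256 TiB, orders of magnitude beyond the 4 GB ceiling the thread is complaining about.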
Re:why? (Score:2, Insightful)
This is a small step away from CISC, since the memory access model is becoming simpler, which in turn can give performance improvements.