Hardware

Understanding the Microprocessor

Citywide writes "Ars has a very thorough technical piece up entitled Understanding the Microprocessor. It's pitched lower than many Ars articles (all of which are a bit over my head, to be honest), but that's why it's worth checking out: it explains the fundamentals in a very clear and useful way. And as the author notes, this kind of information is really crucial to get a grip on before Hammer arrives."
  • Oh really? (Score:2, Insightful)

    by Glock27 ( 446276 )
    And as the author notes, this kind of information is really crucial to get a grip on before Hammer arrives.

    The only information you'll need to know once Hammer has arrived is that it's the fastest thing on the planet, and the only mass-market 64-bit processor.

    Oh yeah, and where to buy one. :-)

    • The only information you'll need to know once Hammer has arrived is that it's the fastest thing on the planet, and the only mass-market 64-bit processor.
      Oh yeah, and where to buy one. :-)

      Except for those of us who have to make informed decisions about future upgrade paths and which processors are going to provide the best cost and performance for our specific applications. Sometimes you need to know the specifics of how a processor operates and what its specific strengths and benefits are before you recommend changing a company's whole server base, etc.
      • Guess what? (Score:3, Interesting)

        by mekkab ( 133181 )
        If you are in that position, chances are you either don't need this article, or you could have written it yourself.

        Also, if you are a big enough player, you get some sample procs and run some benchmark tests, maybe even write some of your own.
        • Re:Guess what? (Score:3, Insightful)

          by jgerman ( 106518 )
          Except for those who may want to be in that position in the future, except for students who want to learn about it, except for children taking their first steps into becoming really good with computers. Except for ANYONE who doesn't want to be clueless about what's going on with something they bought, and why. ;)
          • Most of the time you don't need ALL the nitty-gritty, just a little bit of the goods to make good high-level decisions. And if you do want the nitty: not to troll, but go read a book! Take a class!

            Actually, I'm spoiled. My Microprocessors class had a hand-written textbook that was SUPER FANTASTIC. This guy could teach. He could also design systems like a madman.

            But really, reading a book (because you're going to need something on hand you can reference) and then getting one of those trainer boards (with the hex input and 8-segment LED display of the IAR and register A) and you are set.
            • Need is separate from want. Personally, I don't need to go and buy another book on the subject. When I was a kid and first getting into computers, I couldn't afford all the books I would have needed (of course, there was no web at that time either).


              I don't think you're trolling, but there's no reason the information shouldn't be available in multiple places, especially places where it's free.

              • YES I agree. I think the initial point was in reference to the "this is in some way related to the Hammer coming out" comment.

                That relation is so tenuous, that you could say "My boss might buy a computer, so I need to read this."
                • Sounds good to me. I was actually replying to the "the only thing you need to know" parent. Doesn't really matter; those that are interested in this kind of thing will check it out, those that aren't will complain... something for everybody ;)
        • Sorry, from what I've seen, the people who advocate, trial, and recommend new tech are usually at management level looking for cushy numbers.
          The kind of person who goes and buys a WLAN, deploys it without locking it down, and goes 'oh, I was trying it out' when the resident security weenie tries to kill them. All with higher-up technophile management approval, of course. They don't bother to consider the implications of new tech; they just go 'ugh, shiny new laptop. Must be better. Must have'.
      • Re:Oh really? (Score:3, Interesting)

        by Glock27 ( 446276 )
        Except for those of us who have to make informed decisions about future upgrade paths and which processors are going to provide the best cost and performance for our specific applications. Sometimes you need to know the specifics of how a processor operates and what its specific strengths and benefits are before you recommend changing a company's whole server base, etc.

        No one with a clue would ever do this any other way than by buying/borrowing a system for evaluation and running the specific application as a benchmark.

        The beauty of Hammer is that doing so will be quite inexpensive compared to other comparable options. :-)

        I stand by my original post.

        (BTW, my vote for most innovative Hammer feature is the integrated memory controller(s) - memory bandwidth scales with processor count in SMP systems.)

      • Ahh! (Score:3, Funny)

        by Inoshiro ( 71693 )
        Words bad, hurt Oog head!

        Oog simple Caveman, like Hammer. Oog use 64-bit Hammer bash! Oog buy AMD. Oog love AMD!
      • I just want it for the kewlness factor! Yep here is my Athlon XP 2100+, and my Athlon 64 2XXX. :-) I could probably heat my house with just those two boxes....
    • by mekkab ( 133181 ) on Wednesday December 04, 2002 @12:20PM (#4810687) Homepage Journal
      It is nice to have an appreciation for the underlying mechanisms of the things we use.
      As Socrates said, the unexamined life is not worth living.

      But as many EE or even ECE people know, most programmers don't give a rat's ass about what the hardware is doing. Those that do have this understanding (OS people, real-time people, embedded people, well, a lot of people!) have it because they need it.

      I'm not arguing that it isn't beneficial to know the difference between SIMD, SISD, MIMD, MISD systems, but if you aren't programming or designing for parallel systems, how will this help you when a new processor comes to market?!

      The "Hammer" line is just a fumble for relevance. Guess what? We're reading this on a computer. The relevance is already there!

      • by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Wednesday December 04, 2002 @12:34PM (#4810799) Homepage Journal
        (OS people, real-time people, embedded people, well, a lot of people!)

        embedded people, are they, like, fetuses?

        But seriously (and to stay on topic), I am really excited about Hammer too. 64-bit processors for the people! I hope the motherboard manufacturers get some nice commodity products out there so that Hammer is a viable choice for my desktop!

        Ok, I am quite familiar with SISD (Single Instruction, Single Data); this is the basic abstraction that pretty much every computer out there uses.

        SIMD (Single Instruction, Multiple Data) is the hot 'new' kid on the block and the basic abstraction/concept behind AltiVec, MMX, SSE, VIS, etc.

        MIMD (Multiple Instruction, Multiple Data) would (IMNSHO) be just a misnomer for VLIW (Very Long Instruction Word), which is almost the same thing as EPIC (Explicitly Parallel Instruction Computing), aka Itanic ... er ... ia64 ... er ... itanium(2) ... er ... whatever.
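
        To make the SISD/SIMD distinction concrete, here is a minimal C sketch (an illustration added here, not something from the article; the function names are made up): the scalar loop does one add per instruction, while the SSE version uses the _mm_add_ps intrinsic to add four floats at once. It assumes an x86 compiler with SSE support and an array length that is a multiple of four.

        /* SISD vs. SIMD, as a rough illustration. Compile with SSE enabled
           (e.g. gcc -msse); n is assumed to be a multiple of 4. */
        #include <xmmintrin.h>

        void add_scalar(const float *a, const float *b, float *out, int n)
        {
            int i;
            for (i = 0; i < n; i++)            /* one add per instruction (SISD) */
                out[i] = a[i] + b[i];
        }

        void add_sse(const float *a, const float *b, float *out, int n)
        {
            int i;
            for (i = 0; i < n; i += 4) {       /* four adds per instruction (SIMD) */
                __m128 va = _mm_loadu_ps(&a[i]);
                __m128 vb = _mm_loadu_ps(&b[i]);
                _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
            }
        }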

        Now, what exactly would a MISD (Multiple Instruction, Single Data) system be?! And can anyone point to an example of such a system?
        • Now, what exactly would a MISD (Multiple Instruction, Single Data) system be?! And can anyone point to an example of such a system?

          They don't exist - just a theoretical fourth type to complete the set. Always in computer science courses, but none ever built.
        • SIMD ~= Array processing. Take this bazillion element array and add 1 to each. MIMD ~= current multiprocessing. x processors running x separate bits of code.
          MISD is fairly near useless, since it's basically one great big implicit race condition. It's included in the list for completeness only.
          (Tell these two processors, in parallel: add 1 to this element, and multiply it by 2. So do you get 2(x+1), or do you get 2x + 1?)
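
          The 2(x+1)-versus-2x+1 ambiguity above is easy to demonstrate with two ordinary threads hammering one datum; a hypothetical pthreads sketch (nothing from the article, just an illustration of the race):

          /* Two "instruction streams", one datum: whichever runs first decides
             the answer. Compile with gcc -pthread. Starting from x = 3, the
             result is 8 if the add runs first, 7 if the multiply runs first. */
          #include <pthread.h>
          #include <stdio.h>

          static int x = 3;

          static void *add_one(void *arg)   { x = x + 1; return NULL; }
          static void *times_two(void *arg) { x = x * 2; return NULL; }

          int main(void)
          {
              pthread_t t1, t2;
              pthread_create(&t1, NULL, add_one, NULL);
              pthread_create(&t2, NULL, times_two, NULL);
              pthread_join(t1, NULL);
              pthread_join(t2, NULL);
              printf("x = %d\n", x);
              return 0;
          }
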
        • How about a graphics processor?
        • ARM has a feature that could be considered MISD, whereby a single instruction can do a shift/rotate on one operand of an arithmetic or logical operation (e.g. add r0, r1, r2, lsl #3 to place the value of r1 + (r2 << 3) in r0).
      • But as many EE or even ECE people know, most programmers don't give a rat's ass about what the hardware is doing. Those that do have this understanding (OS people, real-time people, embedded people, well, a lot of people!) have it because they need it.

        Plenty of OS, real-time and embedded people have no need for anything beyond instruction timings, if that. C is a wonderful thing.

        Don't get me wrong, you should read the article just to appreciate the technology. But to imply that reading the article is necessary for a programmer (even an assembly level one), much less an end user, is a big overreach. That was my original point...along with the fact that Hammer will rock! :-)

        BTW, whoever moderated my original post a "troll"...get a life. :-)

        As someone else's tagline reminds us: "To moderate is human, to reply divine." ;-)

        Moderation Totals: Troll=1, Insightful=4, Overrated=3, Total=8. Heh. I guess I have some anti-fans. Most likely Intel employees or fans I guess...diversify those portfolios guys! ;-)

        Disclaimer: I don't currently hold AMD or INTC stock. That will change soon though. =)

    • Re:Oh really? (Score:3, Insightful)

      by adewolf ( 524919 )
      The DEC Alphas have been 64-bit for a long time, and the Alpha backplane is the fastest in the biz.

      Alex
      • The Alpha (now more of an Intel chip than anything else) is not currently being mass-marketed by anyone, and it never was mass-marketed.

        Actually, Alpha was never really marketed much at all, which is the main reason why it never did very well, despite its technical strengths.

        The original poster was quite correct: the AMD Hammer will be the first 64-bit, general-purpose CPU that is mass-marketed.
        • And not one second too soon. See this machine (points to the floor and right)? It's got 2 GB of memory (2^31 bytes), and it's my friggin' home computer. The fact that Intel is not pushing for 64-bit desktops is very strange indeed, considering that they will be necessary even for high-end consumers like myself within the year.

          And don't give me that crap about 36-bit virtual addressing... The reason I use 2 GB in my machine is that I USE >1 GB, in one process, and in a very random fashion (in fact, the hobby program I'm developing would really like ~7 GB of RAM (yes, just for this one process), but I can't afford that just yet).

          For those who are wondering what kind of "hobby program" I'm writing that needs such a shit-load of memory: It's an application that displays the globe, using the Blue Marble [nasa.gov] world texture maps at 1x1 km resolution from NASA (40,000x20,000 pixels, night and day side). And sometime soon-ish they'll release 100m maps, and I will need 700 GB of ram, then 10m/70TB, 1m/7PB...

          • Given your application and the reasonable maximum resolution of 1920x1440, a 4kx4k texture map should be sufficient - just downsample your original map and display it. Even if you made your map zoomable, you wouldn't need gobs of RAM. Ideally, you could store the large map on disk as a series of large squares, thus allowing efficient access to data in the shape you're likely to need. Not a major memory requirement, and probably fairly interesting to build.
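
            A rough sketch of that tiling idea, assuming a hypothetical raw RGB file laid out tile-by-tile (the tile size, layout, and function name here are invented for illustration):

            /* Pull in just the tile containing pixel (x, y) instead of the whole
               40,000 x 20,000 map. Tile size and layout are hypothetical; a file
               this large would also want 64-bit offsets (fseeko/off_t). */
            #include <stdio.h>

            #define MAP_W  40000            /* map width in pixels     */
            #define TILE   1000             /* 1000 x 1000-pixel tiles */
            #define BPP    3                /* RGB, 3 bytes per pixel  */

            int read_tile(FILE *f, int x, int y, unsigned char *buf)
            {
                long tiles_per_row = MAP_W / TILE;
                long tile_index    = (long)(y / TILE) * tiles_per_row + (x / TILE);
                long tile_bytes    = (long)TILE * TILE * BPP;

                if (fseek(f, tile_index * tile_bytes, SEEK_SET) != 0)
                    return -1;
                return fread(buf, 1, (size_t)tile_bytes, f) == (size_t)tile_bytes ? 0 : -1;
            }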

    • Re:Oh really? (Score:2, Insightful)

      Great enthusiasm, but a little misinformed. As far as SPEC2k results go, Itanium 2 is faster in floating point. And in the real world, "fastest" can vary by individual application, or even the particular inputs you give those applications.

      So maybe not "fastest", but it will be fast.
        • Great enthusiasm, but a little misinformed. As far as SPEC2k results go, Itanium 2 is faster in floating point. And in the real world, "fastest" can vary by individual application, or even the particular inputs you give those applications.

        Oh really? (seems to be my phrase for the day) Please point me to the SPEC numbers for Hammer...

        We'll see what the best compilers available at the time of release can do. Also, there will be PR 4000+ Opterons (according to leaked information) in the first half of '03. Those should beat the Itaniums shipping at that time, even in FP. Don't forget, the highest end Opterons have two memory controllers, and double the memory bandwidth of the baseline Hammers.

        Regardless of whether or not Hammer is behind by a minuscule amount in FP performance, I expect it will cost less than half what Itanium costs, even in its best Opteron incarnation. Finally, AMD will really be able to put the hurt on Intel. :-)

        Myself, I just want one of these babies coupled to a GeForce FX card. I'm thinking about June of next year... =)

        • The SPEC numbers released by AMD are "estimated" and used the Intel compiler without the native x86 mode. They're all over the web:

          Hammer:
          SPECint 1202
          SPECfp 1170

          Itanium 2 numbers are real SPEC results available on their site:
          SPECint 674
          SPECfp 1431

          AMD says they expect Hammer-aware compilers to deliver a 20% performance improvement in FP, thanks to using the extra registers, etc. This is likely true, but it's also convenient that it's just enough to bring them to the same level as Itanium 2.

          In the end, Hammer will always dominate Itanium 2 in performance per dollar, just based on the volumes of the markets. As time progresses, Itanium will ride Moore's Law wider, whereas it's not clear how Hammer will capitalize on die shrinks. Interconnect delay is becoming the dominant issue, so riding the clock advance the way processor makers have for the last five years will not be so straightforward.

          I'm eager to buy a Hammer as well. Don't think I'm somehow badmouthing the chip. Just wanted to balance out over-enthusiastic statements like "it's the fastest thing ever!!!".
  • Sure (Score:4, Funny)

    by gowen ( 141411 ) <gwowen@gmail.com> on Wednesday December 04, 2002 @12:14PM (#4810634) Homepage Journal
    This kind of information is really crucial to get a grip on before Hammer arrives
    Yeah, right. In exactly the same way it's necessary to understand the principles of the cathode ray tube and sideband modulation before the new season of Buffy [upn.com] starts.
  • Nomination (Score:5, Funny)

    by Apathy costs bills ( 629778 ) on Wednesday December 04, 2002 @12:15PM (#4810637) Homepage Journal
    Nomination for Best Diagram Ever [arstechnica.com]. I really wish my "Introduction to MicroProcessors" had had something like that; instead we were drowned in the whiteboard handwavings of a man with an accent I could hardly understand. Maybe this guy should spin this off into a book, make a killing selling it to Undergrad CS students lost in space...
    • by greechneb ( 574646 ) on Wednesday December 04, 2002 @12:22PM (#4810709) Journal
      Fortunately, my microprocessor teacher didn't have an accent, but nonetheless, the diagrams were on a whiteboard, and highly illegible. A book like this would be nice for students taking such a class. The worst part of my class was the $150 book, and using it only for the ASCII table. Then they decided to change the book the next year so I couldn't sell it back -

      On a related note - Anybody wanna buy a used book on architecture, programming, and interfacing with the 8086 and 8088 microprocessors? Rarely used, little wear, only used page 33 (ascii table)
      • by Anonymous Coward on Wednesday December 04, 2002 @12:34PM (#4810798)
        Anybody wanna buy a used book on architecture, programming, and interfacing with the 8086 and 8088 microprocessors? Rarely used, little wear, only used page 33 (ascii table)

        I would, but I don't want to buy a book with a used ASCII table
    • Re:Nomination (Score:4, Informative)

      by JonTurner ( 178845 ) on Wednesday December 04, 2002 @12:35PM (#4810805) Journal
      Maybe this guy should spin this off into a book,
      Too late. Charles Petzold has already done it. See CODE [barnesandnoble.com]. It should be on every geek's bookshelf.
      • This book reminds me of my university education [carleton.ca]. When I was done, we had done transistors, digital logic, PC architecture, assembly language, C and C++ and network protocols. I have always felt that having an understanding of how everything works has made me a better programmer.

    • by ccweigle ( 25237 )
      On the other hand, I found this one [arstechnica.com] pretty confusing.
    • Re:Nomination (Score:2, Informative)

      by ShotgunEd ( 621584 )
      Maybe this guy should spin this off into a book, make a killing selling it to Undergrad CS students lost in space...

      Computer Organization and Design: The Hardware/Software Interface is the ultimate intro-to-microprocessors book. It covers instruction sets and architecture extremely well. As the title suggests, it shows how (and why) the instruction set and architecture are tied together. (Most of the discussions revolve around MIPS, but they have stuff on Pentiums and PowerPCs, too.) The meat of the book involves actually designing a pipelined, MIPS-like processor from scratch. Really cool stuff - you can actually implement the final design on an FPGA board pretty easily. The final design is, of course, much much much simpler than an actual processor, but it really gives you a sense of what all the components do and how they function together. Anyways, highly recommended...

      Computer Organization and Design: The Hardware/Software Interface [amazon.com] from Amazon
  • Yo yo (Score:4, Funny)

    by Burgundy Advocate ( 313960 ) on Wednesday December 04, 2002 @12:17PM (#4810660) Homepage
    And as the author notes, this kind of information is really crucial to get a grip on before Hammer arrives.

    Yah, you don't want to be caught without da knowledge when the MC gets back in town to teach these new kids a "lesson".

    2 legit 2 quit! Hammer time, yo!! Word!
  • Ok, I know that this probably doesn't jibe with the majority of people who post here; however, I'm not totally a tech person. I have my moments, but overall I'm just not. I'd like to get a better understanding of the inner workings of microprocessors from a layman's perspective... kind of like I'm in 4th grade. I feel like if I had that understanding I would better understand my machine, but this stuff is just too much for me...
    • Most people here know very little about how the machines actually work. They just like to claim they do because they can write insignificant shell scripts.

      Some things, like microprocessor design, simply can't be gleaned without a proper education and then experience working in the field. Even the best undergrad program will only take you to about the Pentium II level in design. I had a course where we built (paper design, and simulated of course) a processor that was a PII equivalent.

      Anything higher than that... well, go get a job with amd or intel.

      • When you say "paper", do you mean VHDL?
        Cuz that's pretty much all you need to do!

        I've done some silicon layout, first with L-Edit and then with Cadence (ugh!), but you do the VHDL first and let that lay out your chip. Ship that to the fab, and you've got yourself a nifty micro.
        • We did use VHDL for most of it, though. I recall an early design class I had where the professor demanded we design a multiplier - at the transistor level.

          That was fun, let me tell you.

          • We (tried) to design an entire 8-bit micro at the silicon level. Yep: here's your P region, here's your N region, run some poly here, some metal 1 here, a via and then some metal 2 here, and VOILA! You have an 8-bit register that shifts and increments. My partner made the ALU. We got about as far as designing the RAM and WHOOSH, the semester was over.

            Thank god for the Gentleman's C!

    • by baywulf ( 214371 ) on Wednesday December 04, 2002 @12:30PM (#4810779)
      I've been studying hardware design for a while now, and the following course documents from the (former) ArsDigita University are a clear yet concise depiction of what you would learn in a beginning microprocessor design course.

      http://www.aduni.org/courses/how_computers_work/
    • I am currently enjoying Charles Petzold's book "Code", which essentially walks you through the workings of a CPU by describing one built with telegraph equipment from the 19th century. Lots of interesting history as well. This is the best written popular tutorial on microprocessors I've seen.
    • by dohnut ( 189348 ) on Wednesday December 04, 2002 @01:28PM (#4811193)

      I think what you would like, although it's a bit dated, would be Understanding Digital Computers [amazon.com]. This book starts at the gate level and goes through the layout and operation of a simple 8-bit CPU. I got this book when I was 13. When I went to college and took my digital architecture classes I aced them, and even though that was much more difficult I credit my success to having read this book first instead of diving in naked like most students do/did. It's been forever since I've read it, but I still have it on my bookshelf.
  • I used to have a t-shirt with "2b or not 2b" in Boolean Algebra on the back.

    That's about all I ever needed to know about microprocessors.

    Craenor
  • by radiumhahn ( 631215 ) on Wednesday December 04, 2002 @12:23PM (#4810711)
    Everybody knows computers work because the ONEs and ZEROs are at war with each other...
    • And then there is the occasional TWO, AKA The One. All the ONEs and ZEROs tremble in his presence.

      Wait a minute.
    • by vectra14 ( 470008 ) on Wednesday December 04, 2002 @12:47PM (#4810891)
      You should never bend computer wires too much. You see, 0's are round, so they get through fine, but 1's tend to get stuck in the bends =)
    • I should clarify... I'm rooting for the ZEROs. I hate those ONEs with their picture perfect good looks and their "holier than thou" attitudes. People of the world hold down your ZERO keys! We'll make an army the likes of which no ONEs have ever seen! If the freedom fighters can just hold out there will be only ZEROs after the new year!
    • You can compress data by removing all zeros, because they don't contain any information anyway. Besides, a "0" is more bulky than a "1" so you'll save more than half of the space.
  • by youngerpants ( 255314 ) on Wednesday December 04, 2002 @12:23PM (#4810714)
    that my 'puter was powered by a series of little mice on little wheels.

    Suppose I'd better stop putting food for them in the coffee cup holder. Who would have thought that the nice man from IT support was right all along
  • by ch-chuck ( 9622 ) on Wednesday December 04, 2002 @12:35PM (#4810810) Homepage
    build one of these [widomaker.com]
  • Not too low! (Score:2, Insightful)

    by Anonymous Coward
    "It's pitched lower than many Ars articles (all of which are a bit over my head, to be honest)"

    I used to feel the same way, but now that I have had several courses in this area, I find Ars's usual detail about right. (If they are still too low-pitched, you can always read the references.) If you're a person who understands this stuff but doesn't want to spend the time reading the latest journals and conferences, Ars articles often provide a great way to stay up to date, although they may not be accessible to some (I've been there). I hope this "lower pitch" doesn't become a trend.
      i agree--for those of us who don't need any more background, the articles are a great resource, and it would take a lot away for them to start writing all stories with a "lower pitch".

      on the other hand, i think this article is a good idea too--it helps out those of us who would like some more background to help us understand the normal stories.

      all in all, i think that running articles giving background info is good, but i hope they keep it separate from the other articles. when i already have the background, i'd rather not have to slog through it every time i want to read an article on the subject.

    • Yeah, I'd tend to agree. But those of us with a couple of classes or a couple of good books under our belt are different from people who'd like a better understanding, but don't want it enough to go read a couple hundred pages of Hennessy and Patterson or whoever.
  • Try walking out of the lab and explaining what a multimasked register is... then you'll understand why this article is nice!
  • wow (Score:4, Insightful)

    by netwiz ( 33291 ) on Wednesday December 04, 2002 @12:42PM (#4810852) Homepage
    This is quite possibly the best high-level "intro to computers" that I've ever seen, and it even delves into some of the specifics of CPU operation. Kudos to Ars...

    However, I still don't see how this is relevant to Hammer, as the article doesn't even go into detail about different takes on architecture vis-à-vis Intel and AMD. There are a few links at the end to a discussion of the diffs in the G4e and the P4, but nothing on the AMD side.

    [offtopic]
    Personally, I'm getting wary of various AMD products. I continually see issues w/ AMD and games (the EQ debacle being one of them), I see general weirdness w/ my software on my Athlon, and it just reminds me of all the hideously weird incompatibilities I've had over the years (some that aren't even regularly reproducible; maybe it's a bad mobo?), and it makes me recall a discussion w/ some of my friends:

    "If you want it to run right, use Intel. Everyone, _everyone_ tests w/ Intel stuff first. From MS (yah, boo, whatever) to id, from nVidia to Creative Labs, everyone tests on Intel _first_."

    I'm not trying to bash AMD, it's just that, well, every time I use an AMD system, I end up experiencing weird glitchy errors, that come and go as they please. While my Athlon setup has been orders of magnitude more stable than past AMD systems, it's still not the rock that my P3 was.
    [/offtopic]
    • Re:wow (Score:4, Insightful)

      by aero6dof ( 415422 ) <aero6dof@yahoo.com> on Wednesday December 04, 2002 @01:26PM (#4811179) Homepage
      Having built and bought systems for many years now, I've decided that the processor doesn't matter much for stability. If you want a stable system, you need to put thought and money into selecting a solid motherboard, chipset, and power supply.

      AMD's problem is that their image is that of a "cost-saving" choice. So some system builders who use AMD go into "cost-saving" mode on all the other components of the system -- leading to a greater chance of instability and a bad rep for AMD.

      • I have found Athlons to be just as stable as any Pentium III or IV. I sell only Athlons to all new customers, and will install P4's if specifically requested. But I spend, and I do mean spend, tons of painstaking time researching the AMD motherboards. Almost all of the cheap ones are crap (along with almost anything VIA used to make more than 1 1/2 years old). Please consider this before you let anyone tell you otherwise about Intel chips or Athlons. My best friend works at Cray Inc., and they are building a super cluster computer using all Athlons. Think about it. Those are not VIA chipsets in those beasts.
    • Personally, I'm getting wary of various AMD products. I continually see issues w/ AMD and games ...

      I've been using home-built AMD-based PCs since the K6 series. I've run Linux, Windows (various flavors), one or two BSDs and BeOS on them -- with all manners of software. I've never had any issues whatsoever with the processor. I have had issues with dodgy motherboards/chipsets, but never the CPU (e.g., same CPU, different mainboard or new BIOS clears up the problem). I'd look elsewhere for the source of your problems. Heat, specifically, would be a good start. I'd check memory next (a lot of the "weird", or at least intermittent, errors I've seen have had to do with one of the two).

      As for the "everyone tests on Intel", well, possibly. But they also probably test on AMD, and it really shouldn't matter much anyway since the two are essentially completely compatible as far as your game is concerned. In addition, I know a lot of people using AMD now in fairly intensive environments (for things like clustering) since you can effectively reduce the cost per cycle in half. I haven't heard them complain of "glitchy errors" or "general weirdness". I imagine when the Hammer line comes out they will be even more popular, especially on the low cost high-end.

      -B

  • by Snoochie Bootchie ( 58319 ) on Wednesday December 04, 2002 @01:09PM (#4811040) Journal
    For a more detailed treatment of the topic, take a look at David Patterson's and John Hennessy's _Computer Organization & Design_. It is an excellent textbook on the topic.
  • by Phosphor3k ( 542747 ) on Wednesday December 04, 2002 @01:22PM (#4811140)
    The microprocessors understand YOU!
    • Actually, a question that always struck me is: what computers did the Soviets use? These people had a comparable space program, and yet I have never heard of the computers they were using. I tried a while ago to make some Google searches (yes, I do know, in Soviet Russia Google searches YOU), but the only thing I found was a pay-per-view IEEE article. Does anyone know a good website on the matter?
      • They used a fair amount of Western technology, mainly at the board level. Back in the '70s it was common knowledge (I worked in the industry) that certain distributors that covered "University accounts" shipped into the Eastern Bloc. It was centered around Finland, Austria and, for telecom products, France. They paid a huge premium, as the boards were marked up by two or three levels of distribution.

        • Do a Google search for "silicon zoo"; you should find a site which has loads of pictures of "silicon art", basically "doodles" (well, too high-tech to call them doodles, really) made on production chips, in the die margins, wherever.

          One of them has a message to Russian reverse engineers, in Cyrillic Russian, to the effect of "only steal from the best" :)
          • Thanks, I vaguely remember I have seen this before.

            It must have been strange to be a USSR engineer at the time: on one hand playing along with all the anti-West rhetoric, while at the same time having to steal technology just to keep up.

      • There were lots of Eastern Bloc Z80 clones. If you google, you can find loads of Sinclair Spectrum clones from Russia, East Germany etc. IIRC they were actually ahead of the West in supercomputer design though, because the Soviet government poured large amounts of cash into it.
      • Some of their computers were clones of popular U.S. computers such as the IBM 360, PDP-11 and VAX.

        Many years ago, I heard a rumor about a VAX-11/780 (first model of the VAX) disappearing while being shipped on a train in West Germany. Supposedly it was taken to East Germany for reverse engineering.

  • Pretty Useful (Score:3, Interesting)

    by Badgerman ( 19207 ) on Wednesday December 04, 2002 @01:56PM (#4811414)
    This was definitely worth posting - it's a good, helpful summary. It's the kind of thing that I wish there was more of since I can pass the article on to people who need it.

    I'd like to see a series of books on the way computers work, at various levels of knowledge, so people can get the knowledge in bite-sized chunks. It'd be helpful to me, since I often end up being "Mr. Explainer" and I'd LOVE to just hand someone a book and get back to work.
  • It's a good introduction to how a computer works.

    But if you need it, you shouldn't be reading Slashdot, or at least, not posting stories.

    The classic on this subject, is, of course, Von Neumann's First draft report on the EDVAC [stanford.edu].

    • How arrogant can you get? People have different skills and different knowledge.
      Also, Slashdot is good for people to learn new things; the whole point of articles is to bring to light information that people can discuss and be informed by!
  • There are 10 kinds of people on this board: those who understand the language of microprocessors and those who don't. As for myself, I fall into the latter category because the darn thing is /.ed already.
  • Just a few problems (Score:4, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday December 04, 2002 @02:30PM (#4811753) Homepage Journal
    First of all, I think it would have been beneficial to examine a really stupid CPU (like the 8086 perhaps) before launching into stuff like SIMD.

    Second, the first two instruction types given are arithmetic and load/store. Unfortunately something like half the instructions (or more) in a program are usually arithmetic and branch instructions (conditional jumps in fact.) So those are definitely the things to discuss first, before load/store, if you're going to do it that way. I personally would bring all three types of operation to the front right away and then delve into how they work, but that's a personal decision.

    Speaking of branching instructions he describes forward and backward branches. This is silly. There are two kinds of branches, relative (offset) and absolute. You can jump to a location which is +/- however far from your current position, or you can jump to a specific address. Some CPUs only allow one or the other of these. x86 uses both. (A short jump is an 8 bit signed jump, -128/+127 offset from your current location. A near jump is 16 bit. A far jump specifies a segment and offset, because x86 uses a segmented memory model.) So branching forward or backward is only a significant concept (at all - of course the assembler handles this for you) when talking about relative branches.

    I thought that this article was going to talk about how it was actually done. Maybe I'm just special (where's my helmet?) but I've got most of this material (in this article) out of previous ars technica articles. The stuff in this comment I'm writing now, on the other hand, is based on a class in x86 assembly, the final for which is on this coming Tuesday. I want to know how the instruction decoder is put together, for example.

    If you ignore every other point I've made in this, consider the possibility that it is a big mistake to start talking about heavily pipelined CPUs. It would be best to start with the classic four-stage pipeline (fetch -> decode -> execute -> write), in which an instruction is fetched from memory via the program counter. In x86 this is coupled with the CS register (code segment) and is called the instruction pointer (IP, or EIP on 32-bit CPUs in 32-bit mode), so you load the new instruction from CS:IP. As per my paragraph above, a short or near jump updates IP or EIP, while a far jump updates CS and [E]IP. (A sketch of that classic cycle follows below.)

    Finally, is it just me or is it amusing that we're supposed to understand this before hammer arrives but every page has a gigantic animated Pentium IV ad? Up yours, ars adsica.
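
    To make the fetch -> decode -> execute -> write cycle mentioned above concrete, here is a toy accumulator-machine interpreter in C; the opcodes and memory layout are invented purely for illustration and match no real ISA (note that its JNZ takes an absolute target, unlike the PC-relative branches most real ISAs favor):

    /* A toy accumulator machine: fetch -> decode -> execute -> write back.
       Opcodes and layout are made up for illustration (needs C99 for the
       designated initializers). */
    #include <stdio.h>

    enum { LOAD, ADD, STORE, JNZ, HALT };   /* each instruction: opcode, operand */

    int main(void)
    {
        int mem[32] = {
            LOAD, 20, ADD, 21, STORE, 20,   /* total   += counter          */
            LOAD, 21, ADD, 22, STORE, 21,   /* counter += -1               */
            JNZ, 0, HALT, 0,                /* loop while counter != 0     */
            [20] = 0, [21] = 5, [22] = -1,  /* data: total, counter, -1    */
        };
        int pc = 0, acc = 0, running = 1;

        while (running) {
            int op  = mem[pc];              /* fetch opcode                */
            int arg = mem[pc + 1];          /* fetch operand               */
            pc += 2;
            switch (op) {                   /* decode and execute          */
            case LOAD:  acc = mem[arg];        break;
            case ADD:   acc += mem[arg];       break;
            case STORE: mem[arg] = acc;        break;  /* write back       */
            case JNZ:   if (acc) pc = arg;     break;  /* absolute branch  */
            case HALT:  running = 0;           break;
            }
        }
        printf("sum = %d\n", mem[20]);      /* prints 15 (5+4+3+2+1)       */
        return 0;
    }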

    • Is this a troll, or are some people still being taught about x86 segments and offsets?

      The flat memory model has been the standard on x86 since the advent of win32. Maybe the segmented memory model is an interesting historical footnote, but I can't see why it would actually be taught as part of an x86 assembly language course.
      • How would you address 16 GB of RAM with only 32 bits and no segments? Besides, segments always start at 0 (virtual address), which allows for some optimization. (Well, nobody does that kind of optimizing anymore, and the only OS which allows you to control the segments is OS/2.) But still :}

        Martin Tilsted
      • Is this a troll, or are some people still being taught about x86 segments and offsets?

        Probably everyone who takes a class in x86 assembler (I would have preferred 68k but x86 is what was offered here) is learning about segmented addressing. This is because:

        1. x86 CPUs still use segmented addressing. It is necessary to cross the 4GB boundary. In addition, the BIOS executes in real mode, so even the most modern x86-based systems do segmented addressing at some point in the boot process.
        2. ASM is used for three things. The first is inline assembler for optimization. The second is drivers, where things have to happen on schedule (one operation MUST follow the prior and be followed by another specific operation). The third is embedded systems, where ALL of the code is sometimes STILL written in assembler, from front to back; it typically runs on small x86-based computers running some form of DOS, and is typically 16-bit real-mode code.

        In addition, even in 32-bit mode you still have and use segment registers. Oh, you might never CHANGE them, but they are still there. As the sibling to this comment points out, you can use them for optimization (changing DS and ES to allow your offsets to be the same, or smaller than they otherwise would be) and save a few cycles. The registers are there, and still in use.
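
        For anyone who has only ever seen the flat model, the real-mode segment:offset arithmetic under discussion boils down to linear = segment * 16 + offset. A minimal sketch of that formula (protected-mode segmentation goes through descriptor tables instead, which this ignores):

        /* Real-mode x86 addressing: a 16-bit segment and a 16-bit offset
           combine into a 20-bit address; different pairs can name the same byte. */
        #include <stdio.h>
        #include <stdint.h>

        static uint32_t linear_address(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + offset;
        }

        int main(void)
        {
            printf("F000:FFF0 -> %05X\n", (unsigned)linear_address(0xF000, 0xFFF0)); /* FFFF0 */
            printf("FFFF:0000 -> %05X\n", (unsigned)linear_address(0xFFFF, 0x0000)); /* FFFF0 */
            return 0;
        }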

    • by Hannibal_Ars ( 227413 ) on Wednesday December 04, 2002 @04:11PM (#4812737) Homepage
      "First of all, I think it would have been beneficial to examine a really stupid CPU (like the 8086 perhaps) before launching into stuff like SIMD."

      Did you read the article, or did you just skim it? Nowhere do I launch into a discussion of SIMD. The only reason the term is present is because I used a diagram from a previous article.

      "Second, the first two instruction types given are arithmetic and load/store. Unfortunately something like half the instructions (or more) in a program are usually arithmetic and branch instructions (conditional jumps in fact.) So those are definitely the things to discuss first, before load/store, if you're going to do it that way. I personally would bring all three types of operation to the front right away and then delve into how they work, but that's a personal decision. "

      Yes, it's a "personal decision," and I opted to go a different route. I think the order in which I introduced the concepts works. Other orders are, of course, possible.

      "Speaking of branching instructions he describes forward and backward branches. This is silly. There are two kinds of branches, relative (offset) and absolute. You can jump to a location which is +/- however far from your current position, or you can jump to a specific address."

      Once you're done with your little intro to ASM, chief, you might stick around for some more advanced courses. In them, you'll learn that what branch prediction algorithms care about is whether a branch is forward or backward, because this tells you whether or not to assume it's part of a loop condition. I won't explain further, though, because (a) I've covered the topic in previous articles, and (b) I don't like to feed trolls any more than I have to.
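
      The static heuristic being alluded to is often summarized as "backward taken, forward not taken": a backward branch usually closes a loop, so guess taken. A toy version of that rule, purely illustrative and not any real CPU's predictor:

      /* Static branch prediction sketch: backward branches (target below the
         branch address) are assumed to close loops and are predicted taken. */
      int predict_taken(unsigned long branch_addr, unsigned long target_addr)
      {
          return target_addr < branch_addr;   /* backward => probably a loop => taken */
      }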

      "I thought that this article was going to talk about how it was actually done. Maybe I'm just special (where's my helmet?) but I've got most of this material (in this article) out of previous ars technica articles."

      Maybe if you'd read the intro a little more closely, you'd know that I made it clear that everything in that article was covered in more depth in previous Ars articles. This article was intended as background for those articles.

      "If you ignore every other point I've made in this, consider the possibility that it is a big mistake to start talking about heavily pipelined CPUs."

      I don't discuss heavily pipelined CPUs, or pipelining in general, in this article. I do refer back to previous articles on the P4, but that's recommended as further reading. I'll cover pipelining in a future article (a point that I made clear in the conclusion). And yes, I know that PC = IP in x86 lingo. Thank you. Now we all know that you know, too. Here's a cookie.

      "Finally, is it just me or is it amusing that we're supposed to understand this before hammer arrives but every page has a gigantic animated Pentium IV ad? Up yours, ars adsica. "

      I made one reference to Hammer in the intro, along with a reference to Itanium2, Yamhill, etc. Let it go, man. This article doesn't pretend to have much of anything specific to do with AMD.
  • Anyone else notice the number of acronyms in this "basic introduction" type article with no definitions? Just off the top of my head there were: SISD (single instruction, single datum??), SIMD (single instruction, multiple data??), ISA (instruction set architecture?????)
