Hardware

Processors and the Limits of Physics 168

An anonymous reader writes: As our CPU cores have packed more and more transistors into increasingly tiny spaces, we've run into problems with power, heat, and diminishing returns. Chip manufacturers have been working around these problems, but at some point, we're going to run into hard physical limits that we can't sidestep. Igor Markov from the University of Michigan has published a paper in Nature (abstract) laying out the limits we'll soon have to face. "Markov focuses on two issues he sees as the largest limits: energy and communication. The power consumption issue comes from the fact that the amount of energy used by existing circuit technology does not shrink in a way that's proportional to their shrinking physical dimensions. The primary result of this issue has been that lots of effort has been put into making sure that parts of the chip get shut down when they're not in use. But at the rate this is happening, the majority of a chip will have to be kept inactive at any given time, creating what Markov terms 'dark silicon.' Power use is proportional to the chip's operating voltage, and transistors simply cannot operate below a 200 milli-Volt level. ... The energy use issue is related to communication, in that most of the physical volume of a chip, and most of its energy consumption, is spent getting different areas to communicate with each other or with the rest of the computer. Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other."
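For a sense of scale on the communication limit described above, here is a quick back-of-envelope sketch in Python. The die size and the effective on-chip signal speed are illustrative assumptions (real on-chip wires are RC-limited and much slower than light), not figures taken from Markov's paper:

```python
# How far can a signal travel in one clock period?
# DIE_EDGE_MM and the "fraction of c" values are illustrative assumptions,
# not numbers from the paper; real on-chip wires are RC-limited and slower.

C = 3.0e8            # speed of light in vacuum, m/s
DIE_EDGE_MM = 20.0   # assumed edge length of a large die, in mm

for freq_ghz in (1, 3, 5, 10):
    period = 1.0 / (freq_ghz * 1e9)      # clock period, seconds
    for frac in (1.0, 0.1):              # ideal light speed vs. a much slower wire
        reach_mm = C * frac * period * 1e3
        verdict = "covers" if reach_mm >= DIE_EDGE_MM else "misses"
        print(f"{freq_ghz:>2} GHz at {frac:.0%} of c: "
              f"reach/cycle = {reach_mm:6.1f} mm ({verdict} a {DIE_EDGE_MM:.0f} mm die edge)")
```

At the vacuum speed of light the 5 GHz case still clears an assumed 20 mm die edge; it is the far slower effective speed of real, RC-limited wires that makes cross-chip signalling in a single cycle impractical.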
This discussion has been archived. No new comments can be posted.


  • Dupe (Score:5, Informative)

    by Anonymous Coward on Saturday August 16, 2014 @10:38AM (#47684573)
  • by Anonymous Coward

    Well, except that every other technology has hit limits, except computers! They'll just endlessly get better. Forever.

    • Re: (Score:2, Insightful)

      by blue trane ( 110704 )

      Yes, like Simon Newcomb [wikipedia.org] proved we had hit limits in heavier-than-air flight, in 1903!

      In the October 22, 1903, issue of The Independent, Newcomb made the well-known remark that "May not our mechanicians . . . be ultimately forced to admit that aerial flight is one of the great class of problems with which man can never cope, and give up all attempts to grapple with it?"

  • by Jody Bruchon ( 3404363 ) on Saturday August 16, 2014 @10:44AM (#47684595)
    Clockless logic circuits [wikipedia.org] might be an interesting workaround for the communication problem. The other side of the chip starts working when the data CAN make it over there, for example. I don't claim to know much about CPU design beyond how they work on a basic logical level, but I'd love to hear the opinions of someone here who does regarding CPUs and asynchronous logic.
    • Chuck Moore does these. Of course, they're extremely simple at the moment (comparatively speaking). But they are indeed extremely energy efficient, and the self-timing thingy works great for them.
      • How are they energy efficient?

        More gates == more static power draw.
        Leaving a circuit switched on because you don't know when asynchronous transitions will arrive == more static power draw.

        Global async design may have made sense in 1992, but not these days. Silicon has moved on.

        • It depends. This is the same guy that Intel licenses a lot of power-saving patents from. You'd have to ask him, but the static power draw of his circuits is indeed minimal. Perhaps the reason is that he doesn't use manufacturing processes with high static power draw on purpose, I really don't know. It may also be the case that a switch from contemporary silicon to something else in the future will make this design more relevant again, power-wise (but the timing considerations, as well as the speed of light,
          • It depends.

            Yes. In semiconductors, there's a basic tradeoff between static power dissipation and propagation latency. In a slow/low static current process, you might well be able to use asynchronous design to improve power efficiency. This is more the realm of RFID tags, payment cards and smart card chips. You won't be finding much of that going on in a desktop or phone CPU.

    • I guess that's me then.

      Every D flip-flop is an async circuit. We use a variety of other standard small async circuits that are a little bigger. Receiving clock-in-data signals like DS links is a common example. What you're talking about is async across larger regions.

      Scaling fully asynchronous designs to a whole chip is a false economy. The area cost is substantially greater than that of a synchronous design, and with the static power draw of circuits now dominating, the dynamic power savings of asynchronous

    • More relevant links to asynchronous/clockless computing:
      http://www.embedded.com/design... [embedded.com]
      http://www.technologyreview.co... [technologyreview.com]
      http://www.scientificamerican.... [scientificamerican.com]
      http://www.nytimes.com/2001/03... [nytimes.com]
  • Go vertical! (Score:5, Interesting)

    by putaro ( 235078 ) on Saturday August 16, 2014 @10:48AM (#47684603) Journal

    Stacking dies or some other form of going from flat to vertical will get you around some of the signaling limits. If you look back at old supercomputer designs there were a lot of neat tricks played with the physical architecture to work around performance problems (for example, having a curved backplane lets you have a shorter bus but more space between boards for cooling). Heat is probably the major problem, but we haven't gone to active cooling for chips yet (e.g. running cooling tubes through the processor rather than trying to take the heat off the top).

    • Ah, Prime Radiant !

    • by Nemyst ( 1383049 )
      This. It won't be easy, of course not, but there's this entire third dimension we're barely even using right now, which would give us an entirely new way to scale up. The possible benefits can already be seen in, for instance, Samsung's new 3D NAND, where they get density similar to current SSDs while using much larger NAND cells, thus improving reliability while keeping capacity and without significantly increasing cost. Of course, CPUs generate far more heat than SSDs, but the benefits could be tremendous. If anyt
      • But as you said yourself, CPUs (and GPUs) generate a lot more heat. They are already challenging enough on their own, imagine how hot the CPU or GPU at the middle of the stack would get with all that extra thermal resistance and heat added above and below it. As it is now, CPU manufacturers already have to inflate their die area just to fit all the micro-BGAs under the die and get the heat out.

        Unless you find a way to teleport heat out from the middle and possibly bottom of the stack, stacking high-power ch

    • by guruevi ( 827432 )

      There have been plenty of concept designs, and current chips use 3D technology to an extent. The problem IS cooling. On a flat plane, you can simply put a piece of metal on top and it will cool it. Current chips sometimes stoke away close to 200W. With 3D designs, you need to build in the heat transfer (taking up space you can't use for chips or communications) in between, and both planes will produce equal amounts of heat, so either the heat transfer needs to be really, really good or you need a heat sink several

      • Pump the coolant through the chip.
      • by putaro ( 235078 )

        Think different!

        Maybe instead of stacking the chips, you put one on the bottom and have it double as a backplane and then mount additional dies to it vertically (like itty bitty expansion cards). Then you can get some airflow or other coolant flow in between those vertically mounted dies.

        These kinds of funky solutions will only show up when they're cost-effective (that is, absolutely needed). The reason we stick with flat dies (and single die packages) is because it's cheaper to make/mount a single die in

  • So why don't we use Alpha radiation particles?
    • So how would we use alpha particles?

      • They work just like electrons, but faster. Heat would be an issue though. And radiation, of course.
        • by slew ( 2918 )

          So just why would alpha particles (which are basically a helium nucleus consisting of 4 really heavy particles) be somehow faster than electrons (which are much lighter and take less energy to manipulate)?

          Another problem is that we aren't currently using free-space electrons either, but electrons in a wave guide (where we lay down conductors to steer the electrons around the circuits we design). Not as easy to do with alpha particles...

  • by dbc ( 135354 ) on Saturday August 16, 2014 @11:01AM (#47684637)

    "Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other." ... in a single clock.

    So in the 1980's I was a CPU designer working on what I call "walk-in, refrigerated, mainframes". It was mostly 100K-family ECL in those days and compatible ECL gate arrays. Guess what -- it took most of a clock to get to a neighboring card, and certainly took a whole clock to get to another cabinet. So in the future it will take more than one clock to get across a chip. I don't see how that is anything other than a job posting for new college graduates.

    That one statement in the article reminds me of when I first moved to Silicon Valley. Everybody out here was outrageously proud of themselves because they were solving problems that had been solved in mainframes 20 years earlier. As the saying goes: "All the old timers stole all our best ideas years ago."

    • by Rockoon ( 1252108 ) on Saturday August 16, 2014 @11:27AM (#47684693)
      Even more obvious is that even today's CPUs don't perform any calculation in a single clock cycle. The distances involved only affect latency, not throughput. The fact that a simple integer addition operation has a latency of 2 or 3 clock cycles doesn't prevent the CPU from executing 3 or more of those additions per clock cycle.

      Even AMD's Athlon designs did that. Intel's latest offerings can be coerced into executing 5 operations per cycle that each have 3-cycle latency, and that's on a single core with no SIMD.

      It's not how quickly the CPU can produce a value, it's how frequently the CPU can retire(*) instructions.

      (*) That's actually a technical term.
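      To make the latency-versus-throughput distinction concrete, here is a toy pipeline model in Python. The 3-cycle latency and the single issue port are assumptions for illustration only; it is not modeled on any particular Intel or AMD core, and whatever the real latency is, the same argument applies:

```python
# Toy model: a pipelined adder with a fixed result latency. Each add takes
# LATENCY cycles to finish, but a new add can be issued every cycle, so
# sustained throughput approaches one add per cycle. Purely illustrative;
# real cores also issue several such operations per cycle across multiple ALUs.

LATENCY = 3          # cycles from issue to result (assumed)
NUM_ADDS = 1000      # independent additions to execute

# Unpipelined: each add must finish before the next one starts.
unpipelined_cycles = NUM_ADDS * LATENCY

# Pipelined: one add issued per cycle, plus the tail of the last add draining.
pipelined_cycles = NUM_ADDS + (LATENCY - 1)

print(f"unpipelined: {unpipelined_cycles} cycles "
      f"({NUM_ADDS / unpipelined_cycles:.2f} adds/cycle)")
print(f"pipelined:   {pipelined_cycles} cycles "
      f"({NUM_ADDS / pipelined_cycles:.2f} adds/cycle)")
```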
      • I think the informed among us can agree that this whole article combines a special lack of imagination, a misunderstanding of physics, and a complete lack of understanding of how computers work to come up with something ridiculous that sounds like it was written by Chicken Little :-)
      • > The fact that a simple integer addition operation has a latency of 2 or 3 clock cycles doesn't prevent the CPU from executing 3 or more of those additions per clock cycle.

        That's just wrong. It doesn't take three clock periods to propagate through an adder on today's silicon unless it's a particularly huge adder. It might take several cycles for an add instruction to propagate through a CPU pipeline, but that is completely different.

        • Maybe he was talking about FADD.

          In a float addition, you need to denormalize the inputs, do the actual addition and then normalize the output. Three well-defined pipelining steps, each embodying one distinct step of the process.

    • by Splab ( 574204 )

      Are you saying electrons were moving slower in the 80's?

      • There is no such constant in physics as "the speed of the electron." The speed of an electron depends on the medium it is travelling through as well as the force applied to it. That's why an electron's speed is not the same in an old CRT monitor as in the LEP (the Large Electron-Positron Collider, the predecessor of the LHC in Geneva).

        • by Splab ( 574204 )

          Erm, well true, but same goes for light, yet we speak about the speed of light as a constant...

          The point I was trying to make (obviously, the Slashdot of old has gone away, so I guess you need to pencil it out in stone) was that the guy is claiming a clock cycle took ages to propagate through the system, which tells us he has no idea what was and is going on in a computer. Now, syncing a clock across several huge monolithic machines back then was easy, because a clock cycle was happening almost at a walking pa

      • The speed of the electrons is immaterial. The speed of the electric field in the wires is what matters.
        The electrons move really slowly.
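        A quick calculation shows just how large that gap is; the current and wire cross-section below are illustrative assumptions, not measurements:

```python
# Electron drift velocity vs. signal (field) propagation in a copper wire.
# Illustrative assumptions: 1 A through a 1 mm^2 copper conductor, and a
# signal velocity of ~0.6 c, typical of many cables.

I = 1.0                 # current, amperes (assumed)
A = 1.0e-6              # cross-sectional area, m^2 (1 mm^2, assumed)
n = 8.5e28              # free-electron density of copper, per m^3
q = 1.602e-19           # elementary charge, coulombs

drift = I / (n * q * A)       # electron drift velocity, m/s
signal = 0.6 * 3.0e8          # assumed field propagation velocity, m/s

print(f"electron drift velocity ≈ {drift * 1000:.3f} mm/s")
print(f"signal (field) velocity ≈ {signal / 1000:.0f} km/s")
print(f"ratio ≈ {signal / drift:.1e}")
```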

  • "Impossible" is just a state of individual mind, not an objective property of anything. Anyone still believes the machines havier than air cannot fly, just because some authority said so?
    • by AK Marc ( 707885 )
      We all know Bees can't fly.
  • Each semiconductor node shrink is faster and more power efficient than the previous one. For instance, TSMC's 20nm process offers 30% higher speed, or 25% less power, than 28nm. Likewise, 16nm will provide 60% power savings over 20nm.

    • The summary is so confused, the only explanation I can think of is that it doesn't reflect what is in the article. Which is behind a paywall, so I won't be reading it any time soon.
    • by Anonymous Coward

      Something called leakage grows as process size goes down.
      For 40nm, leakage was around 1 to 4% depending on the process variant chosen.
      For 28nm, it jumped to 5 to 10%.
      For 20nm, it is around 20 to 25%. This means that just turning a circuit on and doing nothing (0 MHz) adds to the power consumption.

  • by Anonymous Coward

    You don't need to constantly shrink everything. My computer is about 2 feet tall and wide. I don't care if it's a couple more inches in any direction. Make a giant processor that weighs 20 pounds.

    • by gl4ss ( 559668 )

      shrinking can allow for higher speed.

      that's what makes this article sound dumb just from the blurb (the "it takes x amount of time to get to the other side of the chip and thus the chip can't run faster" bullcrap).

      I mean, current overclocking records are way, way, wayyy over 5GHz. So what is the point?

      • Re:So what (Score:4, Informative)

        by ledow ( 319597 ) on Saturday August 16, 2014 @12:43PM (#47685017) Homepage

        Nobody says 5GHz is impossible. Read it.

        It says that you can't traverse the entire chip while running at 5GHz. Most operations don't - why? Because the chips are small and any one set of instructions tends to operate in a certain smaller-again area.

        What they are saying is that chips will no longer be synchronous - if chips get any bigger, your clock signal takes too long to traverse the entire length of the chip and you end up with different parts of the chip needing different clocks.

        It's all linked. The size of the chip can get bigger and still pack in the same density, but then the signals get more out of sync, the voltages have to be higher, the traces have to be straighter, the routing becomes more complicated, and the heat output gets higher. Oh, and you'll have to have parts of it "go dark" to avoid overheating neighbours, etc. This is exactly what the guy is saying.

        At some point, there's a limit at which it's cheaper and easier to just have a bucket load of synchronous-clock chips tied together loosely than one mega-processor trying to keep everything ticking nicely.

        And current overclocking records are only around 8GHz. Nobody says you can't make a processor operating at 10THz if you want. The problem is that it has to be TINY and not do very much. Frequency, remember, is high in anything dealing with radio - your wireless router can do some things at 5GHz and, somewhere inside it, is an oscillator doing just that. But not the SAME kinds of things as we expect modern processors to do.

        Taking into account that most of those overclocking benchmarks probably exercise small areas of the silicon, are run in mineral oil or similar, and represent the literal peak speed of a benchmark on a complicated chip that ALREADY accounts for signals taking so long that clocks can get out of sync across it, we don't have much leeway at all. We hit a huge wall at 2-3GHz and that's where people are tending to stay despite it being - what, a decade or more? - since the first 3GHz Intel chip. We add more processors and more cores and more threading, but pretty much we haven't got "faster" over the last decade; we're just able to have more processors at that speed.

        No doubt we can push it further, but not forever, and not with the kind of on-chip capabilities you expect now.

        With current technology (i.e. no quantum leaps of science making their way into our processors), I doubt you'll ever see a commercially available 10GHz chip that'll run Windows. Super-parallel machines running at a fraction of that but delivering more gigaflops - yeah - but basic sustainable core frequency? No.

        • by AK Marc ( 707885 )
          async is still synchronous. You would have the region close to the clock input running at C+0. On the other side of the chip, you'd be running with a clock at C+0.9. Where the sections converged, the clocks would also converge. Two related synchronous functions (even off the same clock) are not async just because they are not in phase.

          My words fail me. The operation is clocked. That the clock doesn't happen at the same time everywhere doesn't change the nature of the operation being clocked. And
    • The reason processors are small is mostly due to yield. Silicon wafers have a more or less constant number of defects per unit of area. What this means is that the larger your chip is, the fewer working processors you end up with. The smaller the chip, the more working processors you end up with per wafer.
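      A standard first-order way to see this is a Poisson defect-yield model, where the chance of a defect-free die is exp(-D*A). The defect density below is an assumed illustrative value, not any fab's published figure:

```python
# Poisson yield model: probability that a die has zero defects is exp(-D * A).
# Defect density and die areas are illustrative assumptions; edge losses
# and repairable defects are ignored for simplicity.

import math

D = 0.25                                  # defects per cm^2 (assumed)
wafer_area = math.pi * (30.0 / 2) ** 2    # 300 mm wafer, area in cm^2

for die_area in (1.0, 2.0, 4.0):          # die area in cm^2
    die_yield = math.exp(-D * die_area)   # fraction of dies with no defect
    candidates = wafer_area / die_area    # dies per wafer, ignoring edge effects
    good = candidates * die_yield
    print(f"die area {die_area:.0f} cm^2: yield ≈ {die_yield:.0%}, "
          f"good dies per wafer ≈ {good:.0f}")
```

Doubling or quadrupling the die area cuts the number of good dies per wafer much faster than the area alone would suggest, which is why huge monolithic dies are so expensive.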
  • Yet another reason to find a way around the speed of light.

    Actually I've always said (jokingly) that if anyone does find a way to go FTL, it'll be the computer chip manufacturers. In fact Brad Torgersen and I had a story to that effect in Analog magazine a couple of years ago, "Strobe Effect".

    • Master and ultimately harness quantum entanglement as it pertains to quantum computing, and the limits we face right now go right out the window.

  • by gman003 ( 1693318 ) on Saturday August 16, 2014 @12:13PM (#47684897)

    Congratulations, you identified the densest possible circuits we can make. That doesn't even give an upper bound to Moore's Law, let alone an upper bound to performance.

    Moore's Law is "the number of transistors in a dense integrated circuit doubles every two years". You can accomplish that by halving the size of the transistors, or by doubling the size of the chip. Some element of the latter is already happening - AMD and Nvidia put out a second generation of chips on the 28nm node, with greatly increased die sizes but similar pricing. The reliability and cost of the process node had improved enough that they could get a 50% improvement over the last gen at a similar price point, despite using essentially the same transistor size.

    You could also see more fundamental shifts in technology. RSFQ seems like a very promising avenue. We've seen this sort of thing with the hard drive -> SSD transition for I/O bound problems. If memory-bound problems start becoming a priority (and transistors get cheap enough), we might see a shift back from DRAM to SRAM for main memory.

    So yeah, the common restatement of Moore's Law as "computer performance per dollar will double every two years" will probably keep running for a while after we hit the physical bounds on transistor size.
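    A quick sanity check of the "shrink the transistor or grow the die" arithmetic mentioned above, with made-up starting numbers:

```python
# Two ways to double transistor count: shrink the linear feature size by ~0.7x
# (so area per transistor roughly halves), or keep the process and double the
# die area. The starting count and die area are assumed, purely for arithmetic.

transistors = 1.0e9       # starting transistor count (assumed)
die_area = 3.0            # starting die area in cm^2 (assumed)

# Option 1: 0.7x linear shrink -> area per transistor scales by 0.7^2 ≈ 0.49
shrink = 0.7
by_shrink = transistors / (shrink ** 2)

# Option 2: same process, twice the die area
by_bigger_die = transistors * (2 * die_area) / die_area

print(f"0.7x linear shrink, same die: {by_shrink / 1e9:.2f} B transistors")
print(f"same process, 2x die area:    {by_bigger_die / 1e9:.2f} B transistors")
```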

    • by slew ( 2918 ) on Saturday August 16, 2014 @07:30PM (#47686591)

      Moore's Law is "the number of transistors in a dense integrated circuit doubles every two years". You can accomplish that by halving the size of the transistors, or by doubling the size of the chip. Some element of the latter is already happening - AMD and Nvidia put out a second generation of chips on the 28nm node, with greatly increased die sizes but similar pricing. The reliability and cost of the process node had improved enough that they could get a 50% improvement over the last gen at a similar price point, despite using essentially the same transistor size.

      Bad example, the initial yield on 28nm was so bad that the initial pricing was hugely impacted by wafer shortages. Many fabless customers reverted to the 40nm node to wait it out. TSMC eventually got things sorted out so now 28nm has reasonable yields.

      Right now, the next node is looking even worse. TSMC isn't counting on the yield-times-cost of their next gen process to *ever* get to the point when it crosses over 28nm pricing per transistor (for typical designs). Given that reality, it will likely only make sense to go to the newer processes if you need its lower-power features, but you will pay a premium for that. The days of free transistors with a new node appear to be numbered until they make some radical manufacturing breakthroughs to improve the economics (which they might eventually do, but it currently isn't on anyone's roadmap down to 10nm). Silicon architects need to now get smarter, as they likely won't have many more transistors to work with at a given product price point.

      If memory-bound problems start becoming a priority (and transistors get cheap enough), we might see a shift back from DRAM to SRAM for main memory.

      Given the above situation, and that fast SRAMs tend to be quite a bit larger than fast DRAMs (6T vs 1T+C) and the basic fact that the limitation is currently the interface to the memory device, not the memory technology, a shift back to SRAM seems mighty unlikely.

      The next "big-thing" in the memory front is probably WIDEIO2 (the original wideio1 didn't get many adopters). Instead of connecting an SoC (all processors are basically SoC's these days) to a DRAM chip, you put the DRAM and SoC in the same package (either stacked with through silicon vias or side-by-side in a multi-chip package). Since the interface doesn't need to go on the board, you can have many more wire to connect the two, and each wire will have lower capacitance which will increase the available bandwidth to the memory device.

      • Odd that TSMC is so pessimistic, because Intel claims their 22nm node was their highest-yielding ever, and even their 14nm yield is pretty high for this early in development. Perhaps the multi-gate FinFETs helped? I know TSMC is planning FinFET for 16nm later this year. That's not a "radical manufacturing breakthrough" but it is a pretty substantial change that could change their yields considerably.

  • I see increasing emphasis in the future on unconventional architectures to solve certain problems
    http://www.research.ibm.com/ar... [ibm.com]
    http://en.wikipedia.org/wiki/Q... [wikipedia.org]

    and a little further into the future, single molecule switches and gates.
    http://en.wikipedia.org/wiki/M... [wikipedia.org]

    We have a ways to go, but at some point we are going to have to say bye-bye to the conventional transistor.

  • The human brain is a marvel of technology. Brain waves move through it as waves of activity. It only consumes (most) energy where the wave of intensified activity is passing through it. If a 3D circuit could be made to sense when a signal is incoming, then it could be more efficient. In this paradigm it's not 1's and 0's, but rather circuit on vs. circuit off. In addition, if you could turn those on/off cycles into charge pump circuits then you could essentially recycle a portion of that charge and reuse i
  • by Anonymous Coward

    Maybe Markov should go back to school... Power use is modeled as the voltage squared, not as directly proportional to it.
    Apologies to Markov if it is just the summary that is wrong.

    • by Mr Z ( 6791 )

      That's true for active power (V^2/R). For leakage power, it's even worse: that looks closer to exponential. [eetimes.com] I've seen chips for which leakage accounted for close to half the power budget.

      Supposedly FinFET /Tri-gate will help dramatically with leakage. We'll see.
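      For reference, the usual first-order model of the switching component is P ≈ alpha * C * V^2 * f. A tiny sketch with assumed capacitance, activity, and frequency values shows how strongly the V^2 term bites; leakage is deliberately not modeled here:

```python
# First-order CMOS dynamic (switching) power: P ≈ alpha * C * V^2 * f.
# The activity factor, switched capacitance, and frequency are assumed
# purely for illustration; leakage power is ignored.

ALPHA = 0.1        # activity factor: fraction of capacitance switching per cycle
C_TOTAL = 1.0e-8   # total switched capacitance, farads (assumed, ~10 nF)
FREQ = 3.0e9       # clock frequency, Hz (assumed)

for vdd in (1.2, 1.0, 0.8):
    power = ALPHA * C_TOTAL * vdd ** 2 * FREQ
    print(f"Vdd = {vdd:.1f} V -> dynamic power ≈ {power:.2f} W")
```

Dropping the supply from 1.2 V to 0.8 V (a third lower) cuts the modeled switching power by more than half, which is why voltage scaling has carried so much of the power-reduction burden.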

  • Power use is proportional to the chip's operating voltage, and transistors simply cannot operate below a 200 milli-Volt level

    Wow. To me it looks like P ~ U^2. So it does depend on voltage, just not linearly.
    And where would that 200 mV level come from? In my understanding it depends very much on the semiconductor used.

    • by slew ( 2918 )

      200mV likely comes from a generic analysis of CMOS on Silicon wafer oxide assuming you don't want a leakage factor more than 50% the current (most of which comes from the subthreshold conduction current) and you don't do any weird body-biasing techniques (which would consume lots of circuit area). It isn't a hard number but a general ballpark. Since everyone is scaling down the supply voltage, we must also scale down the threshold voltage and then the amount a signal is below the threshold voltage when you
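      The few-hundred-millivolt ballpark ties back to the thermal voltage: subthreshold current falls by at most one decade per roughly 60 mV at room temperature, so a usable on/off ratio costs a few hundred millivolts of swing. A small sketch assuming an ideal slope factor (real transistors need more than this):

```python
# Thermal voltage and ideal subthreshold swing at room temperature.
# Assumes an ideal slope factor of 1; real transistors are worse, and this
# is only a ballpark argument, not a derivation of any specific 200 mV limit.

import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

vt = k * T / q                  # thermal voltage
swing = vt * math.log(10)       # ideal subthreshold swing, volts per decade

print(f"thermal voltage kT/q ≈ {vt * 1000:.1f} mV")
print(f"ideal subthreshold swing ≈ {swing * 1000:.1f} mV/decade")

# Swing needed for a given on/off current ratio (ideal case):
for decades in (3, 4, 5):
    print(f"{decades} decades of on/off ratio needs ≥ {decades * swing * 1000:.0f} mV")
```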

    You need single isotope silicon. Silicon-28 seems best. That will reduce the number of defects, thus increasing the chip size you can use, thus eliminating chip-to-chip communication, which is always a bugbear. That gives you an effective performance increase.

    You need better interconnects. Copper is way down on the list of conducting metals for conductivity. Gold and silver are definitely to be preferred. The quantities are insignificant, so price isn't an issue. Gold is already used to connect the chip to out

    • Heat is only a problem for those still running computers above zero Celsius.

      Good luck fitting your frozen computer into a laptop case or something else that can be used while riding public transit. Not everybody is content to just "consume" on a "mobile device" while away from mains power.

      • by jd ( 1658 )

        Hemp turns out to make a superb battery. Far better than graphene and Li-Ion. I see no problem with developing batteries capable of supporting sub-zero computing needs.

        Besides, why shouldn't public transport support mains? There's plenty of space outside for solar panels, plenty of interior room to tap off power from the engine. It's very... antiquarian... to assume something the size of a bus or train couldn't handle 240V at 13 amps (the levels required in civilized countries).

    • Finally, if we dump the cpu-centric view of computers that became obsolete the day the 8087 arrived (if not before), we can restructure the entire PC architecture to something rational. That will redistribute demand for capacity, to the point where we can actually beat Moore's Law on aggregate for maybe another 20 years.

      Please explain how your vision is different from, say, OpenCL?

      • by jd ( 1658 )

        OpenCL is highly specific in application. Likewise, RDMA and Ethernet Offloading are highly specific for networking, SCSI is highly specific for disks, and so on.

        But it's all utterly absurd. As soon as you stop thinking in terms of hierarchies and start thinking in terms of heterogeneous networks of specialized nodes, you soon realize that each node probably wants a highly specialized environment tailored to what it does best, but that for the rest, it's just message passing. You don't need masters, you don

        • OpenCL is highly specific in application. Likewise, RDMA and Ethernet Offloading are highly specific for networking, SCSI is highly specific for disks, and so on.

          Well, since the CPU already specializes in general-purpose serial computation, other nodes in a heterogeneous environment must logically specialize for either generic parallel computation or specific applications, otherwise you have just plain old SMP.

          But it's all utterly absurd. As soon as you stop thinking in terms of hierarchies and start thinki

          • by jd ( 1658 )

            Let's start with basics. Message-passing is not master-slave because it can be instigated in any direction. If you look at PCI Express 2.1, you see a very clear design - nodes at the top are masters, nodes at the bottom are slaves, masters cannot talk to masters, slaves cannot talk with slaves, only devices with bus master support can be masters. Very simple, totally useless.

            Ok, what specifically do I mean by message passing? I mean, very specifically, a non-blocking, asynchronous routable protocol that con

            • But now let's totally eliminate the barrier between graphics, sound and all other processors. Instead of limited communications channels and local memory, have distributed shared memory (DSM) and totally free communication between everything.

              This sounds a lot like NUMA. Which, I might add, absolutely requires differentiating between local and non-local memory, since the latter is much slower.

              Thus, memory can open a connection to the GPU,

              Like GPUs have done since the time of AGP? Or did you mean memory wil

    • by slew ( 2918 )

      Single isotope silicon? Silicon wafer surfaces (where the transistors are) are generally doped with ions using diffusion and etched, and the most serious defects are usually parametric, due to patterning issues. We've got a long way to go before isotope purity is actually going to be a limiting factor...

      Conductivity of gold vs copper? Copper is a better conductor than gold (although silver is a better conductor than both of them). The reason that gold is used for *connections* is that it is more malle

      • The most recent IC transistors - FinFETs and the like - have the control element (gate electrode) on three of the four sides of the gate. The gate-to-substrate region is rather a small part of the gate surface, compared to processes a decade ago; this should reduce the magnitude of the floating-body problem.
        Alas, my knowledge of this is becoming obsolete, so I could easily be wrong.
  • one day, computers will be twice as fast and ten times as big -- vacuum tubes? meet transistors.
    computers can't get any more popular because we'll run out of copper. . . zinc. . . nickel -- welcome to silicon. Is there enough sand for you?

    everything will stay the way it is now forever. things will never get any faster because these issues that aren't problems today will eventually become completely insurmountable.

    relax. take it easy. we don't solve problems in-advance. capitalism is about quickly solvi

  • by ChrisMaple ( 607946 ) on Saturday August 16, 2014 @09:26PM (#47686971)

    Power use is proportional to the chip's operating voltage

    Wrong.

    transistors simply cannot operate below a 200 milli-Volt level

    Wrong. Get the voltage too low and they won't be fast, but they won't necessarily stop working.

    And of course, the analysis of the communications issue is also wrong.

    There are obvious and non-obvious physical limitations that limit scaling, but nobody is being helped by this muddy, error-ridden presentation.
