Programming Hardware

'A Quadrillion Mainframes On Your Lap' (ieee.org) 101

"Your laptop is way more powerful than you might realize," writes long-time Slashdot reader fahrbot-bot.

"People often rhapsodize about how much more computer power we have now compared with what was available in the 1960s during the Apollo era. Those comparisons usually grossly underestimate the difference."

Rodney Brooks, emeritus professor of robotics at MIT (and former director of their AI Lab and CSAIL) explains in IEEE Spectrum: By 1961, a few universities around the world had bought IBM 7090 mainframes. The 7090 was the first line of all-transistor computers, and it cost US $20 million in today's money, or about 6,000 times as much as a top-of-the-line laptop today. Its early buyers typically deployed the computers as a shared resource for an entire campus. Very few users were fortunate enough to get as much as an hour of computer time per week.

The 7090 had a clock cycle of 2.18 microseconds, so the operating frequency was just under 500 kilohertz. But in those days, instructions were not pipelined, so most took more than one cycle to execute. Some integer arithmetic took up to 14 cycles, and a floating-point operation could hog up to 15. So the 7090 is generally estimated to have executed about 100,000 instructions per second. Most modern computer cores can operate at a sustained rate of 3 billion instructions per second, with much faster peak speeds. That is 30,000 times as fast, so a modern chip with four or eight cores is easily 100,000 times as fast.
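To make the arithmetic explicit, here is a minimal C sketch of the numbers in that paragraph (the instruction rates are the article's estimates, not measurements):

    #include <stdio.h>

    int main(void) {
        /* IBM 7090: 2.18 microsecond cycle, ~100,000 instructions/s
           (the article's estimate, given multi-cycle instructions). */
        double cycle_s  = 2.18e-6;
        double clock_hz = 1.0 / cycle_s;        /* ~459 kHz */
        double ips_7090 = 100e3;

        /* Modern core: ~3e9 sustained instructions/s (article's figure). */
        double ips_core = 3e9;
        double per_core = ips_core / ips_7090;  /* ~30,000x */

        printf("7090 clock:  %.0f kHz\n", clock_hz / 1e3);
        printf("one core:    %.0fx a 7090\n", per_core);
        printf("8-core chip: %.0fx a 7090\n", per_core * 8);
        return 0;
    }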

Unlike the lucky person in 1961 who got an hour of computer time, you can run your laptop all the time, racking up more than 1,900 years of 7090 computer time every week....

But, really, this comparison is unfair to today's computers. Your laptop probably has 16 gigabytes of main memory. The 7090 maxed out at 144 kilobytes. To run the same program would require an awful lot of shuffling of data into and out of the 7090 — and it would have to be done using magnetic tapes. The best tape drives in those days had maximum data-transfer rates of 60 KB per second. Although 12 tape units could be attached to a single 7090 computer, that rate needed to be shared among them. But such sharing would require that a group of human operators swap tapes on the drives; to read (or write) 16 GB of data this way would take three days. So data transfer, too, was slower by a factor of about 100,000 compared with today's rate.
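The three-day figure checks out; a quick C sketch using the article's numbers:

    #include <stdio.h>

    int main(void) {
        double bytes   = 16e9;   /* 16 GB, a modern laptop's main memory   */
        double rate    = 60e3;   /* 60 KB/s, best-case aggregate tape rate */
        double seconds = bytes / rate;
        printf("%.0f s = %.1f days\n", seconds, seconds / 86400);
        /* prints: 266667 s = 3.1 days */
        return 0;
    }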

So now the 7090 looks to have run at about a quadrillionth (10^-15) the speed of your 2021 laptop. A week of computing time on a modern laptop would take longer than the age of the universe on the 7090.


Comments Filter:
  • by Rosco P. Coltrane ( 209368 ) on Sunday December 26, 2021 @11:39AM (#62116705)

    interpret javascript inside a virtual machine inside a browser to exchange gigafucktons of XML data with another piece of java code running inside a JVM running a virtualized server running inside a physical server to inform some gigantic SQL database running inside another virtualized server on another physical server that the user has clicked on a promotional link for a pair of kacks.

    • by demon driver ( 1046738 ) on Sunday December 26, 2021 @12:03PM (#62116773) Journal

      Actually, it is worse. Much worse. Most of that power isn't even used to solve the problem, regardless of how efficiently or inefficiently that might be done—most of that power is used for presenting the results.

      And then I'm not even talking about those modern, graphical business applications whose text-mode terminal ancestors were often less cluttered, more manageable for the users, and quicker to operate...

      • When I worked for IBM we used to use 3270 emulators on Windows PCs to run many IBM-internal tasks because they were vastly faster, more responsive, less likely to glitch or crash, and just downright better than their GUI-based replacements.
        • I am still using 3270 emulation on Linux for my UIs. It's not just faster for the machine, it is also faster for the user.

          The complete lack of icons means not only is it easy to use (by people who can read), it can't be used by people who can't read. Almost halving the number of idiots who need to be supported ;-}

    • by timeOday ( 582209 ) on Sunday December 26, 2021 @12:10PM (#62116793)
      Actually I would wager the vast majority of ops on modern computers are used to shuffle pixels for display - decompression, applying scaling and anti-aliasing, that sort of thing. Similarly, the vast majority of bandwidth on the internet is video. And the vast majority of a video game download is content. To most users, these operations aren't even recognizable as 'math' in the way that the 1950's computers were thought of.

      Virtualization and redundant security checking slow things down by a constant factor of maybe 5, but the difference between printing a page on teletype and streaming a minute of video is VAST.

      • by BranMan ( 29917 ) on Monday December 27, 2021 @05:20PM (#62120379)

        Yep, and it is very annoying that the vast majority of informational "How-To's" on the internet have moved to video. Finding out how to do something - that could be explained in a single paragraph of a few hundred bytes and read, lazily, in 30 seconds - now takes a 10-minute video and tens of megabytes of data, for some expert to *tell* me how to do something at a speed of up to 39 baud (bits per second, which is about the speed at which any spoken language can transfer data) when I can read at several times that speed.

        And the most annoying part is that if it's anything remotely complicated, I need to take notes to be sure I've got it all down and in the correct order. Producing the very paragraph of text that the video replaced.

        This is progress?

        • by hawk ( 1151 )

          I once found results dominated by videos when searching for . . .

          . . . the firing order for a Ford V-8.

          This is simply the numerals 1 to 8 in some order. Even a single word of padding would outweigh the actual information content.

    • that the user has clicked on a promotional link for a pair of kacks.

      You focus a lot on the bad stuff but seem to gloss over the fact that a user can now click on a promotional link for a pair of kacks! That's our world now. The world where you can say I want fucking kacks, and with a bit of computer magic get a pair of fucking kacks delivered to you within 24 hours.

      The TV remote control has nothing on the power that the complexity of modern web development has enabled. Sure every website runs bloated javascript, but they do so because every moron can use that framework witho

      • What the hell is a "kacks"?

        • Honestly I didn't even know until after I finished typing my post. I just went with it. But apparently it's slang for underwear, and so I added an extra sentence to my post :-)

  • It is 'news' for nerds at least.
  • Wow (Score:2, Interesting)

    by pele ( 151312 )

    Point being?

    • The point is, if you think the world of computing today is craptastic compared to the days of yore... well yeah, but it's FAST craptastic!

      • by Kokuyo ( 549451 )

        Slowness is easier to endure when it feels like you're on an adventure. I can tell you that the 1990s internet felt way better than it does today.

    • Re:Wow (Score:5, Funny)

      by thegarbz ( 1787294 ) on Sunday December 26, 2021 @02:24PM (#62117257)

      Point being?

      Sorry for writing something technical. We'll get right back to your general political posts Slashdot so adores these days so you can return to your regularly scheduled outrage.

  • "so a modern chip with four or eight cores is easily 100,000 times as fast."

    Am I the only one who finds that rather low given it's compared to a mainframe from 60 years ago? I'd have thought it would be in the millions by now, if not more.

    • The average portable laptop you can get today for less than a month's salary is 100,000 times as fast, on one of its cores, as the top-of-the-line, room-filling mainframe that only the biggest companies and governments could afford 60 years ago. And not all progress is about one metric.
    • The article's estimate for CPU alone is too low. The simple difference in clock speed gets you a factor of ~8,000 (0.5 MHz vs. 4 GHz), and modern laptops have 4-8 _hyperthreaded_ cores, which is at least another factor of ~10. Then you have pipelining, plus a 64-bit CPU vs. the IBM 7090's 36-bit words, which is at least another factor of 2. I would guess a factor of a million is more reasonable, since we have also ignored that machine-code instruction sets are a lot more powerful today.
  • A similar argument was made about early steam engines. It was said that their horsepower was overrated because it used the average power of a horse and not the maximum power. The retort was that an engine could run more or less continuously, but a horse had to rest.

    But we should still compare apples with apples, or oranges with oranges. A lot of modern computation is spent on the GUI. The original Macs were still slower machines than the PC even though they were much more advanced in

  • by ElitistWhiner ( 79961 ) on Sunday December 26, 2021 @12:11PM (#62116799) Journal

    People use a laptop to look. Browsers have eaten people. Whole companies are built on consuming eyeballs. Human senses are rendered useless and AI is growing on consuming human conscious data stores.

    Once thought computers could teach people faster than classrooms. Twice believed computers were the future. Future is here and we got computers everywhere. The only way forward is for computers to eat computers and software its own.

    People can look forward to races, competitions and battles for more eyeballs, what else?

    • [sarcasm]Because everyone I know would have coded in C on their desktops if not for Netscape's browser. . . [/sarcasm] Wait no . . . They would not have purchased a computer because it would have had no use for them. This is the same argument against tablets: just because people do not use tablets for the computing tasks that you think they should perform does not mean there is no valid purpose. Not everyone creates the content you want them to create.
    • You sound like someone reading poetry in an empty coffee shop.

    • And personal computers can castrate the users [theguardian.com] as well...
  • by cjonslashdot ( 904508 ) on Sunday December 26, 2021 @12:14PM (#62116813)

    Despite the fact that today's MS Word does little of importance that it did not do 30 years ago, it still takes multiple seconds to open on my new Macbook. So something is very wrong.

    And I know what it is: dynamic typing.

    In the C language, it is possible for the linker to strip out all functions that are not actually called. But when a language is dynamically typed, a linker cannot know which functions will be called, so it has to include everything.

    And so today, if a program uses a library, it has to link that entire library with the executable, even if only one function from the library is used.

    And that little library itself uses ten other libraries, and so on.

    And so even "little" programs have huge multi-MB executables. And they take a long time to load.

    So we are always waiting for things, no matter how fast the computers are.
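    (For what it's worth, C toolchains can strip unused functions at link time if asked; a minimal sketch, assuming GCC or Clang with a GNU-style linker:)

        /* unused.c - link-time dead-code removal demo.
         * Build with each function in its own section, then let the
         * linker garbage-collect unreferenced sections:
         *   cc -ffunction-sections -fdata-sections -c unused.c
         *   cc -Wl,--gc-sections unused.o -o demo
         * never_called() is dropped from the final binary because
         * nothing references its section. */
        #include <stdio.h>

        void never_called(void) {
            printf("stripped by --gc-sections\n");
        }

        int main(void) {
            printf("only referenced code survives\n");
            return 0;
        }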

    • That's not the problem. Even programs written in statically typed languages take forever to start up when they are written by a team that does not care about startup time.

    • MS Word is written in statically typed C++.
      • Is it? What about the DLLs that it links with - the Microsoft Foundation Classes and the .NET libraries? I think those have a lot of dynamic typing.

        But you also make me think that there is a problematic core issue in how linking happens with such libraries - my understanding is that in Windows, one has to load an entire DLL, even if one only needs one function from it. But I am not a Windows programmer so I am not sure.

        • I think those have a lot of dynamic typing.
          Then you think wrong.

          The core libraries are all written in C#/C++: statically typed.

          my understanding is that in Windows, one has to load an entire DLL, even if one only needs one function from it. But I am not a Windows programmer so I am not sure.
          Yes and no. Or more precisely: yes. But that is not the topic. DLLs have nothing to do with dynamic typing; DLL means dynamically loaded/linked library.
          Has nothing to do with Windows either. Every OS behaves the same in that regard.
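          A minimal C sketch of what "dynamically loaded" means in practice, assuming Windows and the MessageBoxA export of user32.dll; mapping the DLL does not read the whole file, since pages are faulted in as they are touched:

              #include <windows.h>

              /* MessageBoxA's signature, so we can call it via a pointer. */
              typedef int (WINAPI *MsgBoxFn)(HWND, LPCSTR, LPCSTR, UINT);

              int main(void) {
                  HMODULE lib = LoadLibraryA("user32.dll"); /* map, not "read it all" */
                  if (!lib) return 1;

                  MsgBoxFn box = (MsgBoxFn)GetProcAddress(lib, "MessageBoxA");
                  if (box)
                      box(NULL, "one function out of a whole DLL", "demo", MB_OK);

                  FreeLibrary(lib);
                  return 0;
              }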

          • With dynamic typing, one cannot determine ahead of time which functions are needed. And so one has to either statically link the whole library, or one must dynamically load the whole library. With static typing, one can statically link just what is needed at link time.
            • First of all: you are WRONG.

              Secondly: which part of, "MS Word is not written in dynamic typed languages" do you not get? It is all static typed C#/C++.

              • You ignored the points I raised. You are also rude, so I no longer care what you think. I don't discuss things with rude people.
          • JavaScript is dynamic but runs a lot faster than your model would imply. Turns out it's possible, albeit difficult, to make such languages run fast.

          • by dryeo ( 100693 )

            You're slightly wrong in a couple of ways.
            *nix shared libraries are not quite the same as DLLs, which have fix-up tables to be position-independent rather than being compiled with PIC.
            And back in the day (Win9x), one copy of a DLL was loaded and shared. That led to DLL hell, where 2 slightly different DLLs with the same name meant that one was loaded and the app that needed the other one crashed.
            Today, for example, if you load 2 versions of Firefox, they'll run side by side, each using its own xul.dll; load Thunderbird

    • by jmccue ( 834797 )

      Despite the fact that today's MS Word does little of importance that it did not do 30 years ago, it still takes multiple seconds to open on my new Macbook. So something is very wrong.

      I speculate you may be forgetting the "phone home" bit for license checking (never mind other things it may or may not do). If the network is slow or the server is slow, time is added. Maybe try disabling the network and see if it makes a difference. I have never used Word or Excel, so I have no way of validating my guess.

    • it still takes multiple seconds to open on my new Macbook

      It may be perspicacious to put a few drops of oil on the hinges, or perhaps to disassemble the hinges, remove any water with a water displacer (e.g., WD-40) and blow in some molybdenum lubricant. This should make opening your MacBook quicker (and quieter).

    • MS Word is most likely written in C#/C++ hence no dynamic typing.

      And even if it were: it has nothing to do with linking either. Such bloatware is a bunch of dynamically linked libraries, and they only get loaded when needed.

      And so today, if a program uses a library, it has to link that entire library with the executable, even if only one function from the library is used.
      Simply wrong, sorry. You could argue that developers usually do not optimize the linking process, as noted above: usually 90% of the c

      • But DLLs are usually large collections, are they not? And you can't select to load only particular functions from a DLL, correct?
        • And you can't select to load only particular functions from a DLL, correct?
          Correct. But you can link with the static version instead of using the DLL version.
          They always come in pairs: static and dynamic. Well, usually. Perhaps sometimes one does not make the static one.

          • "And you can't select to load only particular functions from a DLL, correct? Correct. But you can link with the static version instead of using the DLL version. They always come in pairs: static and dynamic. Well, usually. Perhaps sometimes one does not make a the static one."

            So my point is that if one goes the DLL route, a DLL is still a huge chunk - it is a whole library (not just the functions you need). But if you go the static route, then the linker needs to be able to statically determine which functi

            • by bws111 ( 1216812 )

              What is a 'C++ linker'? Linkers work with object code, not source. They don't care what the source language was. If an object file has a reference to a symbol, the linker includes it. If there is no reference to a symbol, the linker can leave it out.

              Comparing a GUI program like Word to a command line program like make is pointless.

              I built a 'hello world' program in both C and Go. When statically linked (the only option supported by Go), the executables were approximately the same size, about 2MB. A dy
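              The C half of that experiment is easy to reproduce; a sketch (exact sizes depend on the libc and platform):

                  /* hello.c - compare static vs. dynamic link sizes:
                   *   cc hello.c -o hello-dyn             (dynamic: a few KB)
                   *   cc -static hello.c -o hello-static  (static: ~1-2 MB w/ glibc)
                   *   ls -l hello-dyn hello-static
                   * The static binary carries every libc routine the linker
                   * pulled in; the dynamic one defers those to the shared
                   * library at run time. */
                  #include <stdio.h>

                  int main(void) {
                      puts("hello, world");
                      return 0;
                  }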

              • "What is a 'C++ linker'? Linkers work with object code"

                True. But linkers evolved as languages evolved. E.g., the way that Go links is very different.

                "I built a 'hello world' program in both C and Go. When statically linked (the only option supported by Go), the executables were approximately the same size, about 2MB."

                I would be curious why it was so large. Back in the days when "make" was written, computers did not even have a MB of memory.

                • by bws111 ( 1216812 )

                  Because we are no longer 'back in the day'. You can find the sources for both SYS V and glibc online. Take a look at the sources for something like printf. The first thing you will notice is that the glibc source is about 3x the size of SYS V. Then compare them. You will see that the old stuff has no support for, for instance, wide characters. No long longs. No long doubles, etc. And you will notice that 'back in the day' they used a fixed size buffer on the stack to work in. No security problems

                  • Those are good points, but it is hard to see how that can add up to megabytes. The size of, say, the make program is 31,488 bytes on my 2020 Intel Mac. That's today's version - not a version from a 16-bit machine. So if one adds buffer protection and all the other things, it is hard to see how it would go from 32 KB to a megabyte.

                    My theory is that today things get linked with large libraries, and linkers don't strip out unused functions, because they can no longer assume that a function that is not statically referenced w

                    • by bws111 ( 1216812 )

                      The make binary on your Mac is not statically linked. Do you actually think that if that executable included all of the I/O functions, string functions, memory functions, process functions (fork(), etc.) it would be 32 KB? Those functions are all in libraries, and no doubt add up to a few MB.

                      Do you not understand the difference between static and dynamic linking? In static linking, the linker will see that you made a reference to 'printf' (for example). It will then copy the code that IMPLEMENTS printf into your

                    • "Do you not understand the difference between static and dynamic linking?"

                      Why so nasty?

                      Of course I understand the difference. Do you know that I have written compilers, including hardware synthesis compilers? Was an electrical engineer, was a nuclear engineer before that? Started and grew an IT services company to 200 people? Wrote six books? Ran a performance lab?

                      It is true that I don't know the executable format of Windows or the Mac, because I have not written code for those platforms. My compiler experi

              • What is a 'C++ linker'? Linkers work with object code, not source. They don't care what the source language was
                Actually they do.
                Before "special C++ linkers" got developed, they e.g. would not remove unnecessary expanded template functions etc. Or more precisely: template functions that had different names (because of different template argument types), but expanded to the exact same assembly code, where not merged into one function.

            • DLLs are shared between processes. I mentioned that already.
              What I did not mention: they are only loaded page by page when a page is accessed.
              So: there is no huge delay at start up time because of DLL loading.

              C++ has both, but do C++ linkers strip unused functions? I don't think so, because it would have to examine the kind of binding used, and I don't think the linker is that smart. Not sure.
              The linker is that smart. Because it does not require any smartness, either the symbol is referenced or it is not. T

              • Then it is a mystery why today's programs are so large. Why something that would have required 20 KB on a 1985 Unix system requires 20 MB on a Mac today. I know that programs were usually in the kilobytes in the 80s, and they were real programs - Ada compilers and the like. But try creating a 20 KB Ada compiler today. Word size cannot explain the difference, nor can buffer management. I am puzzled.

                Re. Go, I used Go for about a year around six years back. I seem to recall that it did not generate a normal object

      • Why MS Word loads slowly: no idea.

        It probably has something to do with the fact that it was written by Microsoft.

  • IBM's 1401 variable-field-length business computer had an 11.5-microsecond cycle time. If you had the extra-cost feature that included multiply/divide, and the washing-machine-sized 12K-character extra memory (and these K were decimal 1,000), it was possible to set up maximum-feasible-length fields in memory and then run a divide instruction. Completion time was just under one minute.
  • by DERoss ( 1919496 ) on Sunday December 26, 2021 @12:18PM (#62116831)

    I began my career in software, programming an IBM 7090 at UCLA. The campus actually had three of them. The one I used was upgraded to an IBM 7094, which was slightly faster. It had other improvements; but over a half-century later, I cannot remember what they were.

    When I left UCLA, I went to a commercial software company that was supporting NASA at the Jet Propulsion Lab (JPL). I was still programming an IBM 7094, this time for software used in the Voyager project before either of those two spacecraft was launched. (They are now beyond Pluto.)

    That job did not last long. I then went to the System Development Corporation (SDC) to test software that ran on a CDC 3800. I stuck around for 24 years, eventually testing software in a client-server environment. Meanwhile, SDC sold itself to Burroughs, which did a hostile takeover of Univac and became Unisys.

    After a two-year stop at Science Applications International Corporation (SAIC) and a few months at Omnikron, I eventually retired from TRW shortly after it was bought by Northrop Grumman. All that time, I was still testing client-server software.

    Yes, my PC has more memory and power than an IBM 7090; even some "dumb" phones have more memory and power. However, much of the computing power today still involves client-server environments. That includes E-mail, the Web, and social networking. When you use Facebook or send a tweet, you are using a client that feeds into a server.

  • by bettodavis ( 1782302 ) on Sunday December 26, 2021 @12:49PM (#62116911)
    We could have expected to have such incredibly fast apps nowadays, but we've got bloated gigabytes-of-ram-eating browser tabs instead.

    Seems all that computing power went into engineering hubris ("oh, a multi-layer VM + JIT binary-translated boondoggle would be soo cool!") and helping lazy developers (e.g. garbage collectors plus bloated frameworks), not the users.

    Software is like a gas, always expanding to occupy its container, same as our hubris and laziness.
  • This argument, while certainly valid, neglects many orders of magnitude. It has been multiple decades since the network was fast enough to support over-the-net paging that was of acceptable speed to support computation. We now have bandwidth that supports what amounts to remote procedure calls that can engage thousands upon thousands of remote servers when, for example, we type a keyword into a search engine. We have at least three, if not four, five, or six additional orders of magnitude of computationa

  • watches will have one quadrillion times the compute power of today's laptops. I wonder what they will do with all that. Watch your every move/thought?
    • by Tablizer ( 95088 )

      Maybe they will download an emulation of their grandparents' brains (us) so that we can continue to troll the future and kick people off our e-lawns.

  • I started using computers a bit after that era, but even then computer speed was not the limiting factor. You submitted your punched cards through the window and waited an hour for the output. Then you checked the error messages, found the typos in your card deck and did it again. If you were lucky, it only took you a few hours to complete a simple program. I also remember standing by the window of the computer room in the summer since it was the only air-conditioned room on campus. Humans are still th
  • From the article:
    - Individual cores are about 10^5 times as fast in instructions per second
    - Modern CPUs have ~10^1 times as many cores
    - 64-bit instructions and SIMD vs. 32-bit give you ~10^1.5 times more math per second
    - You can run your own PC 24/7, for ~10^2.5 times more CPU time
    Ok, I see 10^10 there.

    Then the article says memory bandwidth is 10^5 times faster. And somehow *multiplies* that with the core speed? How does that work? The additional 10^5 memory bandwidth is simply what's necessary to allow the modern CPU to execute 10^5 times as many instructions.
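    Adding up those exponents (multiplying factors means adding powers of ten), as a quick C sketch:

        #include <stdio.h>

        int main(void) {
            /* powers of ten from the list above */
            double core_speed = 5.0;  /* per-core instructions per second */
            double core_count = 1.0;  /* number of cores                  */
            double word_simd  = 1.5;  /* 64-bit + SIMD vs. 32-bit         */
            double uptime     = 2.5;  /* 24/7 vs. an hour a week          */

            printf("combined: 10^%.1f\n",
                   core_speed + core_count + word_simd + uptime);
            /* prints: combined: 10^10.0 */
            return 0;
        }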

    • Exactly. Came to post exactly that, including the car analogy.

      Multiplying all the individual improvement factors of individual system components makes no sense.

      By the same token we might say the Concorde is 20,000x faster than the Wright brothers' first flight because it is about 40x faster and has 500x as many atoms.

  • Read TFA, and this guy's analysis is confused. First off, he only explicitly defined two factors of 100,000, which when multiplied give a factor of 10^10 "more powerful" (not faster), not 10^15. I think he is getting that extra factor of 100,000 by double-counting the memory effect. 16 GB is about 100,000 times larger than 144 KB, but then he says "So data transfer, too, was slower by a factor of about 100,000 compared with today's rate," since it would take 3 days to load 16 GB from a tape drive.

    • It's very difficult to compare, because different problems have different computational needs. Some are primarily computation-based with small memory requirements; others are the opposite. One way to compare the combined set of resources and constraints is to say: given a set of problems whose memory requirements match the maximum available memory on an avg laptop, and given the number of cores on an avg laptop, and whose execution lasts for X amount of time, how long would those same problems take on the old m
  • by AlanObject ( 3603453 ) on Sunday December 26, 2021 @01:18PM (#62117037)

    The first all-transistor computer, I am pretty sure, was the IBM 1620 [wikipedia.org], first delivered about 1959. Core cycle time was about 20 microseconds.

    My friends and I got access to one as high schoolers through a government agency. By then the 1620 was not used for much, since by that time CDC 6000/7000 and Univac 1108 mainframes were available for remote job entry.

    Those mainframes might seem pretty weak compared to today's hardware, but the CPU cycles weren't wasted on driving a bloated user interface. Using RJE with card decks and line printers, one CDC 6000 series would handle hundreds of jobs per hour, serving the needs of hundreds or thousands of users. Your project would be charged by the second for CPU time and kiloword-second for memory.

    A different world, then.

  • by Tablizer ( 95088 ) on Sunday December 26, 2021 @01:20PM (#62117043) Journal

    ...1960s technicians that in the future all our grand computing power would be used to display animated spam and high-def cat videos, and to scan for malware.

  • I saw the memory size and wondered why a byte is not 12 bits long (in the USA) and 10 bits long in the rest of the world. Then I googled it and found that the convention used to be 6 bits and that the IBM 7090 used a 36-bit word length. One more thing to factor into speed comparisons.
    • One small quibble with the figure in the article: the maximum 7090 memory was 64K x 36-bit words (2 x IBM 7032 core storage units), or 2,359,296 bits, or 294,912 8-bit bytes, or 288 KB. Normally the 7090 topped out at 32K x 36-bit words, but a second 7032 could be added via RPQ E02120 (Request for Price Quotation).
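      The arithmetic, spelled out in a few lines of C:

          #include <stdio.h>

          int main(void) {
              long words = 64L * 1024;  /* 64K 36-bit words (two IBM 7032 units) */
              long bits  = words * 36;
              long bytes = bits / 8;
              printf("%ld bits = %ld bytes = %ldK\n", bits, bytes, bytes / 1024);
              /* prints: 2359296 bits = 294912 bytes = 288K */
              return 0;
          }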
  • In an '80s Industrial Technology class using a Computer Numerical Control (CNC) mill, students had a bad habit of busting cutters by hitting clamps as they fat-fingered the tape program and ran it. So I wrote a CNC simulator that would read the program and "cut" paths in a 3-dimensional array of bits, each 0.001 on a side. Since the machine had a raster graphics board, they could watch the cutter do its thing in "real time". We had gotten one machine for the lab but if the machine was useful would get more for

  • I took my CPSC degree in 1982-1985, when the IBM PC had just appeared and most "home computers" ran the same 8-bit 6502 chip at 1 MHz.

    We had that chip in some custom-built computers in the lab where they taught us assembler programming, hands-on. Job 1 was to be able to monitor the keyboard, type upon it and see the same letters come up on-screen. We were handed pre-built memory locations where if you dropped a character and called a subroutine address, it would appear on-screen. Told another address

  • In other news, your internet bandwidth increased from a 56 kbit/s modem in the '90s (0.007 MB/s) to the average US bandwidth of 100 Mbit/s (12.5 MB/s). That is only about 1,800 times.
  • No, long-time Slashdot reader fahrbot-bot didn't write "Your laptop is way more powerful than you might realize"; s/he quoted it from the article s/he mentioned here. Apart from that, it's news like "my iPhone is more powerful than your Texas Instruments TI-30X!" Good to know, thank you, fahrbot-bot!
  • A computer that's half the speed of another and also has half the memory is not one quarter the speed of the other. It's half the speed for certain tasks, and either can't do other tasks or it's half the speed plus (not multiplied by) the speed of swapping, based on how much swapping needs to be done and how often.

    ieee.org should know better.

    • >ieee.org should know better.

      Maybe they have trouble finding quality authors for Spectrum (the IEEE's primary magazine).
      Have you submitted any articles for Spectrum to publish?

  • It's fun to think about how many 7090s it would take to equal a decent modern laptop, but when I started thinking about it, I realized the more impressive reduction by far is power consumption. I kind of got lost in the weeds trying to work out a satisfying answer to how much power that many 7090s would consume. Let's just say the number is a lot closer to the total electrical generation capacity of the entire planet than it is to the mere 750 W powering my desktop.
    • Power usage specs of a 7090 are hard to find, but seeing the photos of a room full of equipment I'd guess 20 kW.

      CPU is about a factor of 1e+6,
      memory 1e+5,
      storage bandwidth a mere 1e+2 (100 kB/s vs. 100 MB/s),
      storage random access time 1e+6 (1e+2 s vs. 1e-4 s),
      power 1e+3 (20 kW vs. 20 W).

      For a factor of 1e+5 in overall performance, one laptop's worth of 7090s would draw about 2 GW of power. About one big electrical power plant.

      (As others have pointed out, multiplying the individual performance metrics to get 1e+15 makes no sense.)
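      And the 2 GW figure, spelled out in C (both inputs are the rough guesses above):

          #include <stdio.h>

          int main(void) {
              double factor     = 1e5;   /* laptops-per-7090 performance guess  */
              double watts_7090 = 20e3;  /* ~20 kW per 7090, guessed from photos */
              printf("%.0f x 7090 = %.1f GW\n",
                     factor, factor * watts_7090 / 1e9);
              /* prints: 100000 x 7090 = 2.0 GW */
              return 0;
          }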

  • The SIMH emulator http://simh.trailing-edge.com/ [trailing-edge.com] can emulate an old mainframe or mini on, for example, a Raspberry Pi, with plenty of oomph left over to do other Pi stuff. Even something relatively graphics-heavy like the classic Lunar Lander simulator running on RT-11, unthrottled, on a virtual PDP-11/70 with a simulated VR14 doesn't stress a Raspberry Pi 3 too much. The days we live in.

    And if you're nostalgic for blinking lights and toggle switches on PDPs, and have a spare Raspberry Pi sitting around (or want to

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...