
Bitcoin Mining On an Apollo Guidance Computer: 10.3 Seconds Per Hash (righto.com) 103

Slashdot reader volvox_voxel shares an excerpt from the latest blog post by software engineer Ken Shirriff, who is well known for his work on restoring some of the rarest computing hardware to working condition: We've been restoring an Apollo Guidance Computer. Now that we have the world's only working AGC, I decided to write some code for it. Trying to mine Bitcoin on this 1960s computer seemed both pointless and anachronistic, so I had to give it a shot. Implementing the Bitcoin hash algorithm in assembly code on this 15-bit computer was challenging, but I got it to work. Unfortunately, the computer is so slow that it would take about a billion times the age of the universe to successfully mine a Bitcoin block. He wasn't kidding about how long it would take. "The Apollo Guidance Computer took 5.15 seconds for one SHA-256 hash," writes Shirriff. "Since Bitcoin uses a double-hash, this results in a hash rate of 10.3 seconds per Bitcoin hash. Currently, the Bitcoin network is performing about 65 EH/s (65 quintillion hashes per second). At this difficulty, it would take the AGC 4x10^23 seconds on average to find a block. Since the universe is only 4.3x10^17 seconds old, it would take the AGC about a billion times the age of the universe to successfully mine a block."
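
A quick check of those figures, as a small Python sketch; the constants are the ones quoted above, plus Bitcoin's 600-second target block interval:

    age_of_universe_s = 4.3e17      # seconds, as quoted above
    agc_bitcoin_hash_s = 2 * 5.15   # double SHA-256: 10.3 s per Bitcoin hash
    network_rate_hs = 65e18         # ~65 EH/s, as quoted above
    block_interval_s = 600          # Bitcoin targets one block every 10 minutes

    # Expected hashes to find one block at this difficulty:
    hashes_per_block = network_rate_hs * block_interval_s   # ~3.9e22

    # Expected time for a lone AGC, in seconds and in ages of the universe:
    agc_seconds = hashes_per_block * agc_bitcoin_hash_s     # ~4.0e23 s
    print(agc_seconds / age_of_universe_s)                  # ~9.3e5

At these figures the ratio comes out near a million ages of the universe rather than a billion; the first comment below makes the same point.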
  • by cruff ( 171569 ) on Tuesday July 09, 2019 @07:24AM (#58895004)
    6 orders of magnitude is only a million, not a billion. The AGC is a thousand times faster than the summary claims.
  • by Anonymous Coward

    nobody cares about bitcoins...

    • Only nocoiners don't care about bitcoin...
      • I'm not a nocoiner and I care nothing about bitcoin. It could succeed or completely fail tomorrow and I wouldn't care either way.

        Cryptocoin is of absolutely no interest or usefulness to me.
        • If you TRULY didn't care about bitcoin, you wouldn't even read this article, let alone comment on it. So stop claiming something that is completely false.
          • I read it because it concerned the Apollo guidance computer.
          • by Anonymous Coward

            If you TRULY didn't care about bitcoin, you wouldn't even read this article

            I don't give a shit about bitcoin, but I'll click on anything involving restored Apollo era hardware.

            Now that we have the world's only working AGC, I decided to write some code for it. Trying to mine Bitcoin on this 1960s computer seemed both pointless and anachronistic, so I had to give it a shot.

            Something pointless and anachronistic on Apollo hardware is pretty much the coolest thing I can imagine.

            The architecture description of th…

  • Since Bitcoin uses a double-hash, this results in a hash rate of 10.3 seconds per Bitcoin hash

    Yes, but you can optimize this a little bit. When only the nonce word changes, you can omit the first 3 rounds of SHA-256.
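
    A minimal Python sketch of why that optimization works (the header field values here are made up for illustration): the nonce sits at bytes 76-79 of the 80-byte header, so the whole first 64-byte SHA-256 block is constant across nonces and its compression (the "midstate") can be computed once, while within the second block only message word 3 changes, which is what allows the first three rounds of that compression to be skipped.

        import struct

        def header(version, prev_hash, merkle_root, timestamp, bits, nonce):
            # 80-byte Bitcoin block header; integer fields are little-endian
            return (struct.pack("<I", version) + prev_hash + merkle_root +
                    struct.pack("<III", timestamp, bits, nonce))

        h0 = header(2, b"\x00" * 32, b"\xab" * 32, 1562630400, 0x1745FB53, 0)
        h1 = header(2, b"\x00" * 32, b"\xab" * 32, 1562630400, 0x1745FB53, 99)

        assert h0[:64] == h1[:64]      # first block: hash once, reuse the midstate
        assert h0[64:76] == h1[64:76]  # second block, words 0-2: also unchanged
        assert h0[76:80] != h1[76:80]  # word 3 is the nonce -- the only change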

    • Yes, but you can optimize this a little bit. When only the nonce word changes, you can omit the first 3 rounds of SHA-256.

      "Nonce" is a really unfortunate term. I would quietly suggest changing it.

  • by Anonymous Coward

    Pretty interesting stuff. I watched a video on how it worked and looked up some info on it (it uses only NOR gates). I started making my own, and I think it isn't too difficult, but it would be very time consuming. It's an incredibly simple design, and all the documents are easily available.

    He doesn't have it running on the actual rope memory; he uses a rope memory simulator (whose inner workings he isn't entirely sure of) that lets a laptop supply the memory for the programs.
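
    The NOR-only design is easy to demonstrate: the AGC was built from ICs each containing dual 3-input NOR gates, and NOR is functionally complete, so every other gate falls out of it. A little Python sketch:

        def NOR(*inputs):
            return int(not any(inputs))

        def NOT(a):    return NOR(a)
        def OR(a, b):  return NOR(NOR(a, b))
        def AND(a, b): return NOR(NOR(a), NOR(b))
        def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

        # Verify against Python's own bit operators:
        for a in (0, 1):
            for b in (0, 1):
                assert AND(a, b) == a & b
                assert OR(a, b) == a | b
                assert XOR(a, b) == a ^ b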

  • by Laxator2 ( 973549 ) on Tuesday July 09, 2019 @08:26AM (#58895186)

    Yes, the Apollo Guidance Computer is "a mere abacus" compared with the Bitcoin network when it comes to number crunching. But I wouldn't trust the Bitcoin network in a space mission where reliability is paramount. I would not like a reboot during the engine burn that puts the spacecraft on the transfer trajectory back to Earth.
    You have to hit the atmosphere at a very precise angle; otherwise you'll either experience a nice flattening at >20 Gs, or you'll be the first human on a very slow boat to the outer solar system.

    • by Anonymous Coward on Tuesday July 09, 2019 @09:18AM (#58895404)

      The irony of this comment is amazing. The actual guidance computer for Apollo 11 repeatedly crashed during lunar descent because it had no spare processing power and was overloaded processing electrical interference from a radar left on as a backup for an abort, which wasn't supposed to be sending any data to the computer. Later versions of the radar were redesigned to eliminate the interference. That being said, its ability to reboot nearly instantly, unlike a modern computer, enabled the mission to be completed successfully.

      • by cruff ( 171569 ) on Tuesday July 09, 2019 @10:09AM (#58895560)
        The LEM's AGC didn't actually crash; as designed, it culled the less important tasks so that the critical guidance tasks could still complete at their required intervals. A full crash would have lost the current vehicle state and forced a landing abort.

          by Anonymous Coward

          It actually culled all tasks, then added tasks back in priority order. It was able to do this quickly enough that state was not lost unless it crashed twice in rapid succession. It did not have the capability to cull only the unimportant tasks, although the end result was much the same, since critical tasks were restarted first.
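
          A toy Python sketch of that restart discipline; the job names and priorities here are invented, not actual AGC task names:

              import heapq

              RESTART_TABLE = [             # (priority, job); lower = more critical
                  (0, "guidance_servicer"),
                  (1, "throttle_control"),
                  (2, "dsky_display"),
                  (3, "rendezvous_radar"),  # the low-priority work that got shed
              ]

              def software_restart(queue):
                  queue.clear()                # cull *all* scheduled jobs...
                  for entry in RESTART_TABLE:  # ...then re-enter them by priority
                      heapq.heappush(queue, entry)

              queue = []
              software_restart(queue)          # what a restart did
              while queue:
                  priority, job = heapq.heappop(queue)
                  print(priority, job)         # critical jobs run first again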

        • Didn't the 1201 and 1202 alarms (which you can hear them call out during the descent) trigger automatic restarts? As I recall, 1201/1202 occurred because the computer was being overloaded by radar data.

      • Comment removed based on user account deletion
      • That being said its ability to reboot nearly instantly unlike a modern computer

        If you want to get technical, a modern computer with a fast-boot option in the BIOS does boot nearly instantly. It's our desire to load a frigging huge OS for the sole purpose of splitting hairs on the internet that causes the delay in function.

        But our PCs actually start processing code within about a second of hitting the power button, thanks to BIOSes not prepping each individual device before executing the boot function.

        I wonder how quickly we could get Windows 1.11 to load on a modern machine, and I'm not ta…

      • by pz ( 113803 ) on Tuesday July 09, 2019 @12:33PM (#58896148) Journal

        "Reboot" here is a stretch, at least if you mean it with a modern connotation.

        "Reset" would be a closer analogy to modern computers. The BIOS in your laptop / desktop is more complicated than the AGC software.

        As a comparison, I was involved in the design of an instrument that was required to run for months on end in a remotely located trailer without any human interaction, in the days before internet access was ubiquitous. Accessibility was non-existent, except through a low-baud-rate phone line. Reliability was, therefore, of utmost importance. The solution was to design a microcontroller-based system (something derived from a Motorola 6809, if memory serves) and add an auxiliary clock that asserted the RESET_ line every 100 ms. The periodic RESET_ would bring the hardware into a known state using the ultimate non-maskable interrupt. Whatever computations needed to be done were either re-entrant, or were completed within 100 ms. Well after that project, I happened to meet a Federal inspector who made rounds of these trailers; they said that ours were the only instruments that *always* worked.

        I can't imagine that we were the first to think of this approach, so perhaps the AGC used a similar idea?
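
        A toy Python simulation of that periodic-reset discipline; the real thing was a hardware RESET_ line into a 6809-derived part, and the names and numbers here are invented:

            import time

            persistent = {"total": 0, "count": 0}   # state that survives each reset

            def reset_entry(read_sensor):
                # Everything the firmware does in one pass; it must be re-entrant
                # and finish well inside the 100 ms reset period.
                persistent["total"] += read_sensor()
                persistent["count"] += 1

            def run(read_sensor, periods):
                for _ in range(periods):    # stands in for the auxiliary reset clock
                    reset_entry(read_sensor)
                    time.sleep(0.1)         # next RESET_ arrives in 100 ms

            run(lambda: 42, periods=3)
            print(persistent)               # {'total': 126, 'count': 3}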

      • by proto ( 98893 )
        I remember someone posted a simulator of the guidance computer on a website for interactive use. Does anyone know the website and whether it's still up?
      • by Agripa ( 139780 )

        That being said its ability to reboot nearly instantly unlike a modern computer enabled the mission to be completed successfully.

        Nothing prevents modern CPUs from operating the same way, and some embedded systems do. I have even written state-machine-controlled programs in scripting languages that could do it.

  • by Anonymous Coward

    The performance and design of the algorithm both could have been easily improved using ladder logic, which is the only language really suitable for working directly with hardware with limited memory capacity. It also makes it easier to line up the code with the circuit designs. I would hope NASA has standardized on ladder logic for all future software needs.

    Best regards,
    PointyPete

  • And a whole lot more dependable too.

    • Actually, it is amazing how reliable modern technology is today compared to the old stuff.
      We used to have hardware failures all the time, with RAM getting corrupted by the slightest EM interference. I used to be able to crash my Amstrad CPC 1512 by turning on a box fan. Mainframe systems, while housed in nearly indestructible cases, had easy access to all the components for a reason: they failed and needed to be fixed and replaced a lot.

      Sure, us old-timers put on rose-colored nostalgia glasses, forgetting how we had to take these…

      • Comment removed based on user account deletion
      • The reason you haven't seen BSODs for years is that they were usually caused by kernel design/implementation and driver issues, not hardware problems. BSODs still happen, but they aren't displayed anymore. Any system can become more reliable if you ignore all the times it exhibits problems and keep on trucking. You are essentially saying that your car's exhaust system fixed itself when you turned up the radio.
      • Now I noticed a massive dip in quality in Computing in the mid-late 1990's where PC's were getting cheap and cutting a lot of corners…

        I'm pretty sure the elimination of lead solder and the resulting tin-whisker problem was at least partly to blame.

        It took a few years before people stopped (or mostly stopped?) using cheap solder that would short-circuit after a few years.

        So, yes, "cutting a lot of corners" is true, but one of the corners they cut only became an issue because the industry moved - or was forced to move - away from the previously-available cheap, reliable, but environmentally-dangerous lead-based solder.

        • That may be part of it. Other things that became popular were those "Win" devices, such as WinModems and WinPrinters.
          A WinModem was in essence a D/A and an A/D converter, where we relied on the Windows driver to play and listen to the tones that create the sounds and data. WinPrinters relied on the driver to handle every pixel, whereas the older models had a high-level interface such as Hayes AT or PostScript. That was superior, because your 50-200 MHz PC, which had only just gained multitasking as a feature…

      • The reason why companies are soldering in components now, is because they fail less often.

        If you solder otherwise-upgrade-able parts like RAM or storage to force people to "throw it away and buy a new one" every few years, that's greed, not reliability.

        I'm talking about PCs and most laptops, where any "gain" for the end user from having the laptop 1 mm thinner and $1 cheaper is more than offset by the inconvenience of a "what you buy is what you are stuck with" device. For most phones and most embedded devices, where size and shape may be paramount, and where other factors such as changing telepho…

  • by scattol ( 577179 ) on Tuesday July 09, 2019 @09:47AM (#58895498)
    You can watch the progress of the AGC restoration on YouTube. Here's the first episode: https://www.youtube.com/watch?... [youtube.com] It's on Marc Verdiell's channel. They are up to 15 episodes now, and SPOILER ALERT:



    They pretty much succeeded.
    It's still an ongoing project. Go check it out. It's incredible to see a computer being debugged at the gate level using a scope and jumper wires.
  • ...constantly being reminded why I love technology, even the seemingly useless kind.

  • The AGC was designed to perform the calculations to get to the Moon and back. It was fault tolerant and highly I/O dependent. It was also one of the first systems, if not the first, to use integrated circuits.

    Still, it's an interesting experiment to say the least.

  • But can it run Crysis?

  • We used to have people around here who read Apollo Assembly listings in their free time.... Now this bullshit.
  • The AGC supposedly uses about 55 W of power. So by my calculations, if 65e18 hashes/sec were computed with a cluster of AGCs, it would consume about 1.8e22 watts. That's more than one billion times the total current energy consumption of human civilization, or about 0.005% of the total output of the sun. The total mass of the AGC computers would also be about 1/7 the mass of the moon.

    I conclude that at current Bitcoin prices, this experiment is unlikely to make a profit.
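
    Those numbers reproduce if you take the single-hash time of 5.15 s and assume a flight AGC mass of roughly 32 kg; a quick Python recheck, with rough published values for the other constants:

        agc_power_w = 55         # per AGC, as stated above
        hash_time_s = 5.15       # one SHA-256 on the AGC
        network_hs = 65e18       # ~65 EH/s

        agcs_needed = network_hs * hash_time_s        # ~3.3e20 machines
        cluster_w = agcs_needed * agc_power_w         # ~1.8e22 W

        humanity_w = 1.8e13      # ~18 TW, rough world power consumption
        sun_w = 3.8e26           # total solar luminosity
        agc_mass_kg = 32         # approximate flight AGC mass (assumption)
        moon_mass_kg = 7.35e22

        print(cluster_w / humanity_w)                     # ~1e9: a billion times
        print(100 * cluster_w / sun_w)                    # ~0.005 (% of the sun)
        print(agcs_needed * agc_mass_kg / moon_mass_kg)   # ~0.15, about 1/7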

  • Put in layman's terms, the AGC cannot conceivably mine enough bitcoins to cover the cheque at The Restaurant At The End Of The Universe.

  • And yet, we went to the Moon - six times - with computers like these, and slide rules. Too bad we can't figure out something really worthwhile to do with our modern computers.

  • The point of a blockchain is to make it very, very computationally expensive to crack.

    So in a word - Good.
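
    A toy Python illustration of that asymmetry: finding a nonce whose double SHA-256 falls under the target takes many attempts, while verifying a claimed solution takes one hash. The "header" and difficulty here are made up, and far easier than Bitcoin's:

        import hashlib

        def pow_hash(data: bytes, nonce: int) -> int:
            digest = hashlib.sha256(
                hashlib.sha256(data + nonce.to_bytes(4, "little")).digest()
            ).digest()
            return int.from_bytes(digest, "big")

        target = 1 << 236               # toy difficulty: ~20 leading zero bits
        data = b"toy block header"

        nonce = 0
        while pow_hash(data, nonce) >= target:  # expensive: ~2**20 tries expected
            nonce += 1

        print("found nonce:", nonce)
        assert pow_hash(data, nonce) < target   # cheap: one hash to verify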
