Hardware

Superconducting Microprocessors? Turns Out They're Ultra-Efficient (ieee.org)

Long-time Slashdot reader AmiMoJo quotes IEEE Spectrum: Computers use a staggering amount of energy today. According to one recent estimate, data centers alone consume two percent of the world's electricity, a figure that's expected to climb to eight percent by the end of the decade. To buck that trend, though, perhaps the microprocessor, at the center of the computer universe, could be streamlined in entirely new ways.

One group of researchers in Japan has taken this idea to the limit, creating a superconducting microprocessor — one with zero electrical resistance. The new device, the first of its kind, is described in a study published last month in the IEEE Journal of Solid-State Circuits ...

The price of entry for the niobium-based microprocessor is of course the cryogenics and the energy cost for cooling the system down to superconducting temperatures. "But even when taking this cooling overhead into account," says Christopher Ayala, an Associate Professor at the Institute of Advanced Sciences at Yokohama National University, in Japan, who helped develop the new microprocessor, "The AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today."
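To see why the cooling overhead doesn't automatically erase the advantage, here is a back-of-envelope sketch in Python. Every number in it (temperatures, cryocooler efficiency, per-operation energies) is an illustrative assumption, not a figure from the paper:

```python
# Back-of-envelope sketch of the cooling-overhead argument.
# All numbers are illustrative assumptions, not values from the study.

T_HOT = 300.0   # K, room temperature where the cryocooler rejects heat
T_COLD = 4.2    # K, a typical superconducting operating point

# Ideal (Carnot) work needed to pump 1 J of heat out of the cold stage:
carnot_joules_per_joule = (T_HOT - T_COLD) / T_COLD   # ~70 J per J
# Real cryocoolers reach only a few percent of Carnot; assume 1%:
real_overhead = carnot_joules_per_joule / 0.01        # ~7,000 J per J

E_CMOS = 1e-15  # J per switching event, assumed CMOS order of magnitude
E_AQFP = 1e-20  # J per switching event, assumed AQFP order of magnitude

effective_aqfp = E_AQFP * (1 + real_overhead)
print(f"AQFP incl. cooling: {effective_aqfp:.1e} J/op")
print(f"Advantage over CMOS: {E_CMOS / effective_aqfp:.0f}x")
```

Even with these made-up inputs the superconducting logic comes out well ahead; the 80x figure presumably reflects measured device energies rather than these guesses.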

  • No (Score:5, Funny)

    by nospam007 ( 722110 ) * on Saturday January 16, 2021 @01:17PM (#60952178)

    That would be Ultraconducting Microprocessors.

    Superconducting Microprocessors are only super-efficient.

    • Virtual +1, I was about to write the same thing.

    • by Togden ( 4914473 )
      Ultraconductor - a fictional class of materials that cool in proportion to the current. Early investigations into building microprocessors out of these materials were hopeful until it was very quickly realized that this is just not a good idea. As the material cools, extreme mechanical stresses are experienced and the material becomes brittle, so they inevitably fracture unless heated. Also the voltage climbs due to the negative-resistance property, leading to runaway cooling effects making thermal control difficult.
  • With Cryogenics built in
    • Not cryogenic, but Asetek sold a few sub-zero coolers in the mid-2000s. They weren't terribly practical and (obviously) didn't become popular.
      https://www.extremeoverclockin... [extremeoverclocking.com]

      Back then, this was on my "money is no object" list, along with tricked-out G4 Cubes and Cinema Displays.

      • the best part is when humid air condenses in your PC case like on a glass of lemonade and drips puddles of water everywhere.

  • by account_deleted ( 4530225 ) on Saturday January 16, 2021 @01:25PM (#60952216)
    Comment removed based on user account deletion
    • by jonored ( 862908 ) on Saturday January 16, 2021 @01:31PM (#60952238)
      Power draw reduction is _also_ cooling requirement reduction, too. Ultra-cold is almost free if nothing is producing heat, you're just dealing with leakage through your insulators. The cost of keeping it there is probably pretty small with 80x less energy to move.
      • Power draw reduction is _also_ cooling requirement reduction, too. Ultra-cold is almost free if nothing is producing heat, you're just dealing with leakage through your insulators. The cost of keeping it there is probably pretty small with 80x less energy to move.

        Not at the temperatures where superconduction usually takes place, it is not as easy to insulate your way to.

        • t is not as easy to insulate your way to.
          a) yes, it is easy
          b) the 80-times reduction in energy usage includes cooling - that's clear from the summary

          • t is not as easy to insulate your way to.
            a) yes, it is easy
            b) the 80-times reduction in energy usage includes cooling - that's clear from the summary

            I was only arguing against the statement "Ultra-cold is almost free".

            Nothing is free at close-to-0K temperatures, which are sometimes below those of cold space. The experiments I have seen required insane insulation and active cooling, and suffered a constant loss of precious cryogenic liquids.

      • If it ever leaks and goes beyond critical temperature, you get an explosion that literally combusts your electronics though. And it will probably be quite unhealthy to breathe, too.

        • If it ever leaks and goes beyond critical temperature, you get an explosion that literally combusts your electronics though.

          An obvious solution would be to turn off the power when the temperature starts to rise.

      • The cost of maintenance when parts fail also goes up by a massive factor, however...

    • by ytene ( 4376651 ) on Saturday January 16, 2021 @02:21PM (#60952412)
      Something that interests me about this is that collectively, the big data center owners haven't put more thought into getting positive benefit from this surplus heat.

      To be fair, I've read that some major data center using companies have explored the idea of building such facilities, for example, inside the Arctic circle [although obviously this requires good physical network connectivity to be feasible].

      There's also the challenge of the actual temperature of warmed air or coolant from a data center - it's typically not enough, for example, to heat a building.

      But if you're going to be building a data center in northern latitudes to get the benefit of cooler ambient air, what about using the warmed air, for example, for providing a boost to local greenhouses... With data centers running 24x7, warm air would be available continuously, which means it might be possible to provide cost-effective heating and grow more fresh food in locations where, today, heated greenhouses simply wouldn't be economically viable [with the cost of heat energy growing steadily]. But now we're talking about "waste heat".

      The typical temperature and humidity that kit in data centers loves would be close to perfect for things like salad vegetables, some citrus fruit and so on.

      Time for more out-of-the-box thinking on some of this stuff...
      • There's a general challenge in that people mostly like to live in places where they don't freeze too much. Distance puts a limit on how far north of population centers you can place your datacentre before latency becomes an issue. With all the cloud gaming and cloud whatever, it seems that the measure of acceptable latency keeps going down.

        That said, there are many opportunities for doing what you suggest, and it has actually been done. Northern Europe in particular is well suited; Google, for example, has a datacenter there.

      • Something that interests me about this is that collectively, the big data center owners haven't put more thought into getting positive benefit from this surplus heat.

        My home lab heats my office in the winter. Sadly it also heats it in the summer ... but if I was someplace that was always cold it would take some of the sting out of the bill every month.

    • operating them at liquid nitrogen temperatures could be a major win.

      The IC in TFA operates at 10K. Liquid N2 is 77K.

      So liquid Helium would be needed, which is 20 times the price of liquid N2.

    • Power efficiency isn't really about the utility bill; it's about how much actual compute power you can cram into a chip without it overheating and self-destructing. The limit is cooling. So if you have tech that needs 80x less cooling per computation to begin with, that's a huge advantage, provided you can scale it up. That is of course not a given with non-silicon semiconductors.
  • The problem is that the cooling can't be very miniaturized, due to issues of freezing and condensation, so an 80x increase in efficiency might easily be dwarfed by corresponding inefficiencies in racking and the space needed to manage all the cooling and its associated problems.
  • Isn't that faster (as well)?
    • Probably, but at high frequencies, there's still inductance that creates AC resistance.
      • Probably, but at high frequencies, there's still inductance that creates AC resistance.

        Even at ~0K?

          • Of course. AC "resistance" isn't really resistance; it comes from capacitance and inductance, and that doesn't go anywhere just because the actual resistance goes to zero.
      • It is the charging and discharging of the circuit capacitance that consumes most of the power in a high-speed microprocessor. The only real advantage of running with zero resistance is that one can reach these speeds at a lower voltage. The energy required to charge a capacitor is proportional to voltage^2, so dropping the voltage will certainly help. But 80 times? Unlikely... Still, it would be interesting to see just how fast they can push a CPU before things like circuit inductance cause more loss than resistance does.
        • by GrpA ( 691294 )

          You've forgotten the basics there. Yes, capacitance is important, but it doesn't exist without resistance.

          If you look at an RC circuit (resistor/capacitor), you note that the time taken for a capacitor to charge is based on the resistance through which the charge flows into the capacitor and, to a smaller extent, the resistance of the capacitor plate itself.

          If you can reduce the resistance within that circuit, then accordingly the rise time of the circuit changes proportionally. Halve the resistance, halve the rise time.
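A minimal numeric sketch of that proportionality (component values are arbitrary illustrations):

```python
# The RC time constant tau = R*C scales linearly with R, so halving
# the resistance halves the rise time. Component values are arbitrary.

C = 1e-12  # farads (1 pF)
for R in (1000.0, 500.0):
    tau = R * C  # time to reach ~63% of the final voltage
    print(f"R = {R:6.0f} ohm -> tau = {tau:.2e} s, 10-90% rise = {2.2 * tau:.2e} s")
```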

          • You are making the assumption that transistors act like linear devices. They do not. The RC circuit does act just like you described, but ICs have non-linear elements that prevent this model from being accurate.

            Fundamentally you have a supply voltage with a minimum required voltage for the transistors to operate. These transistors have capacitance and require energy to be either inserted or removed in order to change state. Establishing the electric field requires energy - specifically 1/2*C*V^2.
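To put rough numbers on the voltage-squared point (the capacitance below is an assumed example value):

```python
# Switching energy E = 1/2 * C * V^2 falls with the square of the
# supply voltage. The node capacitance is an assumed example value.
import math

C = 1e-15  # farads
for V in (1.0, 0.7, 0.5):
    print(f"V = {V:.1f} V -> E = {0.5 * C * V**2:.2e} J per transition")

# An 80x saving from voltage scaling alone would require cutting V by
# sqrt(80) ~= 8.9x, far beyond what working transistors tolerate.
print(f"Voltage cut needed for 80x: {math.sqrt(80):.1f}x")
```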

  • This is fantastic news, but I really do wish they would use a higher-temperature superconductor, because cooling to below 10 kelvin is no easy feat. Either way, a datacenter of superconducting servers is going to be a bunch of small refrigerators inside a virtual meat locker. If they stick with 10-kelvin superconductivity then I do not envy the poor bastard who goes to the datacenter in summer and inevitably ends up putting his sweat-dampened bare hand on the outside of one of these ultra-cold boxes and finds his hand frozen to it.

    • This is a research project. I assume the processor is a simple adder or something of the sort. The idea was to get electrons flowing at this temperature with "off the shelf" cooling and superconductors. This is decades from commercialization, so I think you can expect this team (or one of the thousands of other electronic materials science teams worldwide) to get a grad student working on higher temp materials ASAP.
      • This is a research project. I assume the processor is a simple adder or something of the sort.

        Indeed. According to TFA, it has 10,000 transistors.

        A modern CPU has 30,000 times as many.

        The device uses Josephson junctions, not FinFETs. So it is unclear how much it can be shrunk. It can't be manufactured by existing fabs.

        • The gates are so huge that perhaps they might be manufactured with multilayer roll-to-roll processing, which is getting into the range of 100 nm feature size.

          If you can't make them small, make them cheap. Volume-wise it would still be small enough if you stack things up in 3D.

    • We do have them!
      They're just not using them.

      BSCCO being an example, at 110 kelvin and with no rare-earth materials.

      Or HgTlBaCaCuO at 164 Kelvin.

      Hell, if you pressurize the package to 155 GPa, H2S superconducts just fine.
      Which sounds like a lot, but in a 5×5×1 mm package (70 mm^2 inner surface area), that's just 1110 metric tons. Not entirely unimaginable to solve.
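For what it's worth, the arithmetic above checks out; a quick sketch using the same numbers as the parent comment:

```python
# Checking the figure above: clamping force = pressure x area,
# expressed as the mass whose weight would supply that force.
P = 155e9   # pascals
A = 70e-6   # square meters (70 mm^2)
g = 9.81    # m/s^2

force = P * A
print(f"{force:.2e} N ~= {force / g / 1000:,.0f} metric tons-force")
# -> about 1.09e7 N, i.e. roughly 1,100 metric tons.
```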

      • I agree, 1110 metric tons isn't that much.

    • The trick is not to find something that is superconducting, or at a high enough temperature.
      The trick is to have a material you can make Josephson junctions with.
      And that is why they are playing with niobium-based materials.

    • by robi5 ( 1261542 )

      Put it in orbit, problem solved

      • That's going to be problematic because there is a big fucking star heating everything up. Passive cooling isn't going to be an option this close to a star.

  • That was the whole premise!!

    The news here is that somebody actually did it!

    Sadly, the news isn't that they used one of dem fancy new high-temperature superconductors that only need a regular freezer.
    Because add an emergency shutdown to that for whenever the temperature would approach the limit within, say, an hour, and any gamer would yell at you to take their money!

  • This is as expected.

  • by ClickOnThis ( 137803 ) on Saturday January 16, 2021 @02:07PM (#60952356) Journal

    If this technology sees the light of day, I predict bitcoin and other crypto-currencies will drop in value like a rock.

    • Since bitcoin costs $6K to mine, it's already clear that it's just a casino chip, a gaming token, its value divorced from any reality such as the cost to mine. Just look at some headlines yesterday: "bitcoin roaring towards $40K", but at the moment it's making whoopie-cushion noises as it plummets to $30K and below. What a pump-and-dump for suckers.

    • If this technology sees the light of day, I predict bitcoin and other crypto-currencies will drop in value like a rock.

      I am sorry, but this is not going to happen.

      All Proof-Of-Work coins have an automatic adjustment algorithm that increases mining difficulty as soon as total processing power of the network is increased or decreased.

      In short: The amount of minted coins will always remain pretty much the same, no matter the amount of processing power you throw at it.

      You are highly upvoted, yet you don't know anything at all about the topic. Peculiar.

      • EDIT:

        Actually it should be: "adjusts mining difficulty as soon as total processing power of the network is increased or decreased."
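For anyone curious what that adjustment looks like, here is a hedged sketch loosely modeled on Bitcoin's retargeting rule; other coins differ in the details:

```python
# Simplified difficulty retargeting, loosely following Bitcoin's rule
# (retarget every 2016 blocks, each step clamped to a factor of 4).
# Real implementations work on compact "target" encodings; this only
# illustrates the feedback loop described above.

EXPECTED_SECONDS = 2016 * 600  # 2016 blocks at one per ten minutes

def retarget(difficulty: float, actual_seconds: float) -> float:
    """Blocks found too fast -> difficulty rises; too slow -> it falls."""
    ratio = EXPECTED_SECONDS / actual_seconds
    ratio = max(0.25, min(4.0, ratio))  # clamp each adjustment to 4x
    return difficulty * ratio

# Suppose miners suddenly become 80x more efficient:
HASHPOWER = 80.0
d = 1.0
for period in range(4):
    actual = EXPECTED_SECONDS * d / HASHPOWER  # blocks arrive faster
    d = retarget(d, actual)
    print(f"after period {period + 1}: difficulty = {d:.1f}")
# -> 4.0, 16.0, 64.0, 80.0: difficulty converges to the new hashpower,
# block times return to ~10 minutes, and issuance stays on schedule.
```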
        • Well, color me informed now. Thanks for the improvement.

          Even so, I think this speaks to the general non-viability of cryptocurrencies in the longer term. Currencies hold their value in part due to controls on their supply. If the bitcoin algorithms adjust their complexity to compensate for changes in the network's processing power (and thus maintain controls on supply) then that implies to me that the cost of energy required to mine a coin will remain relatively constant, or at least will not be allowed to

    • Rocks dropped in value?
  • by Computershack ( 1143409 ) on Saturday January 16, 2021 @02:17PM (#60952396)
    If they concentrated on coding more efficiently then we'd not need ever more powerful processors. Over the decades, all that increasing CPU power has done is allow sloppy coders to write more and more inefficient, bloated code, knowing that hardware performance will increase to let the shite they spew out run at a reasonable rate. Some code comes with comments whose word count rivals a dissertation, and all that needless shit needs to be processed even if it doesn't perform any task.
    • Is there a reasonably accurate way to measure energy consumed by processing comments?

      When might it be worthwhile to treat comments like source code instead of leaving them as excess baggage, and what might be the wise way to view comments as needed?

      Motorists don't generally carry service manuals in their vehicles but many, self included, do for convenience. I don't read them while driving.

    • by AmiMoJo ( 196126 )

      There is only so far you can go with efficient coding before you start to compromise on security and features. Yes, layers and sandboxes add overhead, but they also stop you getting hacked.

      Frameworks and the like... Well, the upside is that software is a lot cheaper and more accessible thanks to it.

      Besides, more compute power is good; there are things that just need lots of it, and this makes them cheaper.

    • I mostly agree, but there are still some applications where the developers work very hard on efficiency. Usually not on consumer desktops - but then I doubt we'll see cryogenic systems on consumer desktops either. Of course, in compiled languages, comments don't use resources. I assume Python interpreters are smart enough to completely ignore them as well - but if you are using Python, you don't care about efficiency anyway.
    • Did you read the summary? They are presenting processors that consume less power, not providing more performance. In fact, performance is not mentioned at all.

    • What languages do you use which consume CPU cycles for comments? Some interpreted languages perhaps? Compiled languages skip over comments during compilation, and they don't make the resulting executable binary any larger. Or have I dated myself by talking about executables - is that not a thing anymore amongst the current generation of programmers?
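The same holds even for interpreted Python, which is easy to demonstrate (a small sketch; CPython drops comments during tokenization, before any bytecode exists):

```python
# Comments never reach the executed bytecode: CPython discards them
# during tokenization, so these two functions compile to byte-for-byte
# identical code objects.
import dis

def plain(x):
    return x * 2

def commented(x):
    # A comment as long as a dissertation chapter would still
    # vanish here before execution.
    return x * 2

print(plain.__code__.co_code == commented.__code__.co_code)  # True
dis.dis(commented)  # the disassembly shows no trace of the comment
```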

    • by dargaud ( 518470 )
      gcc provides the "-march=native" option to benefit from all those advanced features present on your processor. When you run a downloaded executable it actually contains only a subset of instructions common to the entire architecture, so it's never fully optimized. Yes, even on Linux, unless you are using Gentoo, which recompiles everything. I never understood why this isn't more widespread.
    • Did you just say comments must be processed?

      How long have you been programming?

  • by backslashdot ( 95548 ) on Saturday January 16, 2021 @02:21PM (#60952418)

    How many computations and database lookups does the human brain run per second? How many neurons have to compute and process information when reading Slashdot, for example? Just the image processing and the analytical aspect must be a lot. I am talking about the average brain here, not mine, which is like 100x. But anyway, someone pointed out a while back that the brain uses electrical and chemical impulses for signaling yet is full of conducting fluid. That's just crazy how it's not crazy.

    • We don't really know yet. We have roughly 86 billion neurons, each with thousands of connections to other neurons, but the messaging algorithm used by each connection is very complex. For example, one isotope of lithium has a profound effect on the brain and the other is essentially inert. Chemically they are the same, but because one isotope is heavier, the rates of the chemical reactions are different. It's hard to estimate, but our understanding is improving greatly.
  • This could be a major breakthrough if we can clock thousands of times faster. Maybe we can host in space.
  • by Ancient_Hacker ( 751168 ) on Saturday January 16, 2021 @07:02PM (#60953206)

    Scratching my head.

    The power dissipation in a large IC has little to do with the resistance of the conductors. Most of the power gets used in charging and discharging the capacitances, and most of the rest in leakage currents. Nothing to do with the conductors. So someone please explain how it's "80 times" better with superconducting wires.

    • These are Josephson junctions, not FETs. Storage elements and logic are circulating currents, not charges stored on gates and capacitors, so the speed calculations are much different and far beyond my own knowledge.
    • You are partly right. In conventional CMOS logic, the gate capacitance has to be charged and discharged, but the current to do that goes through resistance in the FET channel and in the interconnect, which is where most of the losses are. And it turns out not to matter what the resistance is, even infinitesimal: the I^2*R loss is the same. The maths falls down with zero resistance, because then you theoretically get infinite current for zero time.

      However the fine article talks about "adiabatic" logic [wikipedia.org] and Josephson junctions.
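The "same loss regardless of R" claim above is easy to confirm numerically; a crude simulation sketch with arbitrary component values:

```python
# Charging a capacitor from a fixed supply through a resistor burns
# 1/2*C*V^2 in the resistor no matter what R is. Crude forward-Euler
# simulation over ~10 time constants; component values are arbitrary.

def dissipated(R, C=1e-12, V=1.0, steps=200_000):
    dt = 10.0 * R * C / steps
    vc = 0.0  # capacitor voltage
    e = 0.0   # accumulated I^2*R loss
    for _ in range(steps):
        i = (V - vc) / R
        e += i * i * R * dt
        vc += i * dt / C
    return e

for R in (1e2, 1e4, 1e6):
    print(f"R = {R:.0e} ohm -> E_loss = {dissipated(R):.3e} J")
# All three print ~5.0e-13 J = 1/2 * 1e-12 * 1.0^2, independent of R.
# Adiabatic logic (like the AQFP) sidesteps this by ramping supply
# voltages slowly instead of switching them abruptly.
```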

  • ... a Beowulf cluster of these.

    Pretty cool.

  • There is a long history of similar tech, a very cool article can be found here:

    https://spectrum.ieee.org/tech... [ieee.org]

    I never understood why this research was dropped given what it could mean for servers and computation.

    The rest of the "heroic failures" articles in IEEE are worth a read as well.

    • Interesting article, especially as it covers old-school stuff coming from vacuum tubes.
      As for "not continuing", I doubt it ever really stopped. In Karlsruhe (Universität Karlsruhe - now KIT), they tried to make transistors (Josephson junctions) until at least the mid-1990s. Around that time I lost contact with the research there. Around the same time a team in Japan attempted to build a 6502 on superconducting material.
      At KIT the problem was the manufacturing process. Attempting to make 5000 flip flops ended

  • Don't processors get hot because of friction from physical movement of transistors and not electrical resistance? Superconducting doesn't help that.
  • If you are already using superconductors, then why not try for the holy grail and build a processor that needs zero energy to run? With superconductors, it is possible to give it a kick of energy to get the electrons flowing in a circle. Just make that circle the entire path through the circuit. Since there is no loss from heat, you would only have to give it a little extra boost every now and then, kind of like the supercollider gives the particles a boost around the loop.

    https://en.wikipedia.org/wiki/ [wikipedia.org]
