
Yale Physicists Measure 'Persistent Current' 68

eldavojohn writes "Modern processors rely on wires mere nanometers wide, and now Yale physicists have successfully measured a theoretical 'persistent current' that flows through them when they are formed into rings. The researchers predict this will help us understand how electrons behave in metals — more specifically, the quantum mechanical effect that influences how these electrons move through the metals. Hopefully, this work will shed new light on what dangers (or uses) quantum effects could have on classical processors as the inner workings shrink in size. The breakthrough involved rethinking how to measure this theoretical effect, as they previously relied on superconducting quantum interference devices to measure the magnetic field such a current would create — complicated devices that gave incorrect and inconsistent measurements. Instead, they turned to nothing but mechanical devices, known as cantilevers ('little floppy diving boards with the nanometer rings sitting on top'), that yielded measurements with a full order of magnitude more precision."
  • I'm not quite sure what the application would be for persistent current, although my wife might have some ideas on the subject. In any case, I'm always amazed at folks who can work and innovate at such a small scale. It's like they could build a model ship in a bottle wearing boxing gloves.
    • I'm not quite sure what the application would be for persistent current, although my wife might have some ideas on the subject.

      I'm sorry for your wife.

    • by Bat Country ( 829565 ) on Saturday October 10, 2009 @11:42AM (#29703877) Homepage

      I had an EE teacher who owned his own little company. His company had done some research using a genetic algorithm (GA) to evolve a minimal adder circuit on an FPGA. The adder it found was simpler than the theoretically optimal adder circuit, using fewer gates than should be possible.

      They really thought they had something (it worked every time with no apparent variation on real hardware) and started putting it on a few other FPGAs to test the solution. It didn't work on the other FPGAs.

      They did a full analysis of the solution and found that although some inputs and outputs were mapped to a closed loop connected to neither VDD nor GND (they had no power and no output), removing them from the program on the working FPGA made the adder stop working. They finally had to chalk it up to relying on electron migration and/or induction currents in the closed loops for a correct answer. They'd accidentally made something like a quantum adder, but it was entirely specific to the silicon they'd evolved it on, making it useful, but not interesting.
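The flavor of that search is easy to sketch. Below is a toy hill climber that evolves a half-adder netlist over a small gate set — my own hypothetical setup, purely digital, so it cannot reproduce the analog weirdness in the story; it just shows how random mutation plus a fitness score converges on a working circuit.

```python
import random

# Toy gate library (hypothetical; not the FPGA's actual primitives)
OPS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

N_GATES = 6  # signals 0 and 1 are the inputs; the last two gates are (sum, carry)

def random_gate(n_prior):
    # A gate may read any earlier signal, keeping the netlist feed-forward.
    op = random.choice(sorted(OPS))
    return (op, random.randrange(n_prior), random.randrange(n_prior))

def run(circuit, a, b):
    sig = [a, b]
    for op, x, y in circuit:
        sig.append(OPS[op](sig[x], sig[y]))
    return sig[-2], sig[-1]  # (sum, carry) of a half adder

def fitness(circuit):
    # Score each output independently across all four input combinations.
    score = 0
    for a in (0, 1):
        for b in (0, 1):
            s, c = run(circuit, a, b)
            score += int(s == (a ^ b)) + int(c == (a & b))
    return score  # 8 = correct on every input

def evolve(seed=0, iters=50000):
    random.seed(seed)
    best = [random_gate(i + 2) for i in range(N_GATES)]
    for _ in range(iters):
        child = list(best)
        i = random.randrange(N_GATES)
        child[i] = random_gate(i + 2)        # point mutation of one gate
        if fitness(child) >= fitness(best):  # accept ties so the search can drift
            best = child
        if fitness(best) == 8:
            break
    return best, fitness(best)
```

The real experiment differed crucially: its fitness was measured on physical hardware, so the GA could (and did) exploit analog effects that no simulator like this one models.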

      • by Bat Country ( 829565 ) on Saturday October 10, 2009 @11:49AM (#29703921) Homepage
        gah! Interesting, but not useful.
        • (wandering two steps offtopic here)

          I think FPGA/GA evolved, silicon specific processing would be useful.

          GA and FPGA are two animals of science for which I have great interest.

          In this case, although this creation was specific to that silicon, it was still simpler than should theoretically be possible.

          My point is that, even though *mass-production* of "magic chips" may not be possible, simple unit-by-unit production of them may be.

          For supercomputing tasks or other very specialized areas of computing,

          • Re: (Score:3, Interesting)

            by Bat Country ( 829565 )

            Well, in theory they should have been able to reproduce the process on a different FPGA, resulting in a different "optimal" adder which may be more or less optimal. Since it seemed to rely on self-interference caused by imperfections in the chip, you'd just have to evolve on other chips until you found a similarly optimal solution. The reason it was only interesting but not useful is that an FPGA is a lot bigger than an actual adder circuit. It took the whole FPGA to evolve the minimal adder and trying to

            • Excellent clarification!

              My misunderstanding indeed hinged on the fact that the FPGA itself was the only device on which this 'simple' adder could operate (due to specifics of its material structure). I must have also thought the FPGA itself to be simpler than an analogous, hard-wired circuit.

              Thank you :)

              • by oPless ( 63249 )

                Grandparent post is correct. To anthropomorphize things a little...

                GA/GP cheat unreservedly. If they find even the smallest amount of wiggle room in your fitness function, they will leverage it, and give you 'interesting' results.

                That includes not only the fitness scale you give it, but also how you measure the fitness.

                All heady stuff. I've wasted many days of CPU cycles playing around with GP - great fun :o)
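The "wiggle room" point is easy to demonstrate with a deliberately flawed toy (my own hypothetical example, not from the thread): ask a hill climber to "sort" a list, but score only adjacent ordering without checking that the values survive. The search maxes the score by overwriting data instead of sorting it.

```python
import random

def sortedness(xs):
    # Intended fitness: count of adjacent in-order pairs.
    # The loophole: nothing checks that the original values are preserved.
    return sum(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def hill_climb(xs, iters=20000, seed=1):
    random.seed(seed)
    best = list(xs)
    for _ in range(iters):
        cand = list(best)
        i, j = random.randrange(len(cand)), random.randrange(len(cand))
        cand[i] = cand[j]                      # "mutation": copy one slot over another
        if sortedness(cand) >= sortedness(best):
            best = cand
    return best

data = [5, 3, 8, 1, 9, 2]
cheat = hill_climb(data)
# cheat scores perfectly on the fitness function, but it is no longer a
# permutation of data -- the search exploited the loophole instead of sorting.
```

Exactly the failure mode described above: the optimizer satisfied the letter of the fitness function while ignoring its intent.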

          In this case, although this creation was specific to that silicon, it was still simpler than should theoretically be possible.

            Seem reasonable to me. Until the LM741 was put into production transistor op-amps were voodoo. Each assembly would have to be manually tuned for the desired function. Even with the LM741 (and other integrated op-amps) some tweaking is still required. I am sure that many common digital functions in IC applications have to be 'tuned' during final assembly to make them work as designed. What we get in the part-tray/anti-static tube is a tuned circuit trimmed to spec.

      • by eh2o ( 471262 ) on Saturday October 10, 2009 @02:31PM (#29704979)

        Induction isn't a quantum-scale effect, just plain old electromagnetics. Floating pins (that is, any pin configured as an input but not connected to a circuit) are notorious for causing strange effects that mess up both digital logic and analog sensing. Some pretty spooky behavior can result, like states that change when you wave your hand over the chip (due to the capacitance created by the proximity of the hand).

        The input pin-state doesn't allow significant current to flow, and all microcontrollers use it as the default state on power-up since it negates the possibility of a short circuit. But if an input isn't connected to anything, it will generate spontaneous readings due to charge accumulation / current drift--some of which might be inductive but it can also be plain old resistive pathways since there are no perfect insulators (for example a PCB might look like a resistor of a few dozen megaohms, or even less if the board is dirty from handling).

        This is easily dealt with by grounding the unused pins, either externally or internally (by switching them to an output-low state). This comes up often enough that forgetting to tie disconnected pins to ground is what I'd call a classic "101" embedded hardware design bug. I've done it a few times myself, with various apparently inexplicable results, followed by feeling stupid when I realize what's actually happening.

        • I'm not an electronics expert, so tell me if you've heard this question before. Is "capacitance" the reason the screen image improved when I touched the "rabbit ear" antennas of my old analog TV? Now that digital TV is here the question has lost its relevance, but it's an old question I never had an answer for.

    • Re: (Score:1, Interesting)

      by mulaz ( 1538147 )
      If we study really small currents and develop the technology around them, bringing "normal" currents (~mA) down to ~uA, a battery that today lasts one day (a smartphone under heavy use) would last 1000 times longer (about 3 years).

      Of course, this is only true for logic circuits, etc... power used for, e.g., the (back)light can only be brought down so far (not even close to uA) before we get near 100% power-to-light efficiency.
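The 1000x claim is just proportional scaling, which a back-of-envelope check confirms (the capacity and draw numbers below are hypothetical but typical for the era):

```python
# Battery life scales inversely with average current draw.
capacity_mah = 1500          # hypothetical smartphone battery capacity
heavy_draw_ma = 60           # ~mA average draw under heavy use

hours_today = capacity_mah / heavy_draw_ma            # 25 h: about one day
hours_at_ua = capacity_mah / (heavy_draw_ma / 1000.0) # same load scaled to ~60 uA
days_at_ua = hours_at_ua / 24                         # ~1040 days, roughly 3 years
```

As the comment notes, this only applies to the logic; the backlight and radio budgets don't shrink with the transistors.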
  • As an ME... (Score:2, Offtopic)

    by Thelasko ( 1196535 )
    As one of the few mechanical engineers on Slashdot, I approve of this experiment.
  • Wait... (Score:4, Interesting)

    by Anonymous Coward on Saturday October 10, 2009 @10:47AM (#29703525)

    “Yet these currents will flow forever, even in the absence of an applied voltage.” Is this some form of perpetual energy, or am I a fool?

    • by NoYob ( 1630681 )

      “Yet these currents will flow forever, even in the absence of an applied voltage.” Is this some form of perpetual energy, or am I a fool?

      Try to harness it.

      My guess: the force that's moving the current is related to the force that keeps the electrons buzzing around the nucleus.

      I'm sure there's going to be someone, sometime, who's going to "develop" some technology with some really good looking math, put wires perpendicular to the rings to get the "induced current" and sell it to Wall Street as a perpetual motion generator - doesn't violate the First Law of Thermodynamics because it's on the quantum level!

      Hmmmmmmmmm, I'm thinking....

    • It's QM. Just as a vacuum is not a real absolute vacuum, and stuff zaps in and out randomly. Conservation of energy will be statistical, is my guess, and there will be a catch against trying to make use of temporary disequilibrium. Like trying to steal energy from the past and the future with a time machine.
      • Re:Wait... (Score:5, Informative)

        by Rising Ape ( 1620461 ) on Saturday October 10, 2009 @11:44AM (#29703899)

        Conservation of energy is absolute, as far as we know, not statistical, even in QM. It would be a major revolution in physics if that weren't the case, as conservation is associated with important symmetries, such as the laws of physics not changing from past to future.

        Entropy increasing *is* statistical, but no, you can't get around it. See Maxwell's Demon or the Brownian ratchet.

        There are existing examples of persistent currents in superconductors. There's no way to get energy out without reducing the current, of course, and you have to put energy in to get it back.

        • Re: (Score:2, Interesting)

          by oldhack ( 1037484 )
          Hm... I should have wrote CoE is averaged over time/space instead of "statistical"? Otherwise, stuffs can't pop in/out into "vacuum"?
          • Re: (Score:3, Interesting)

            by Rising Ape ( 1620461 )

            It doesn't pop in and out as such - certainly you can't just pluck particle/antiparticle pairs out of empty space without supplying energy. For a quantized field, the vacuum expectation value (crudely, the average value in empty space) for certain quantities can be non-zero, just like atoms in a ground state have a definite non-zero energy.

            Even if the energy of a ground state is non-zero, you can't take that energy out - energy must be conserved and there's no lower energy state for free space to fall into.

            • by tenco ( 773732 )

              The energy density of the vacuum is in essence undefined (in quantum theory at least - in general relativity it's a different matter, which is where problems come in). Only energy differences matter.

              Is it really undefined? Can't we say it's infinite? Sure, you can discriminate part of the frequencies by setting boundary conditions with a cavity (and that's where the Casimir effect comes in) - but that's it.

          • Hm... I should have wrote CoE is averaged over time/space instead of "statistical"? Otherwise, stuffs can't pop in/out into "vacuum"?

            That's correct. It's an example of the Heisenberg uncertainty principle. Time and energy are conjugate variables, so the shorter the time you look at something, the larger the apparent non-conservation of energy can be, according to the relation Delta E times Delta t ~ hbar.
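To put a number on that scale (CODATA constants; the "energy borrowing" reading of the time-energy relation is a heuristic, not a rigorous statement):

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
EV = 1.602176634e-19      # joules per electron-volt

def energy_scale_ev(dt_s):
    # Order-of-magnitude reading of the time-energy relation:
    # a fluctuation of size Delta E ~ hbar / Delta t fits inside a window Delta t.
    return HBAR / dt_s / EV

de = energy_scale_ev(1e-15)   # a femtosecond window -> roughly 0.66 eV
```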

    • by frieko ( 855745 )
      Well, yes. The law of conservation of energy states that ALL energy is perpetual. Perhaps you're thinking of perpetual energy production.
    • Re: (Score:2, Insightful)

      by PPH ( 736903 )

      An applied voltage accelerates electrons. In the absence of anything slowing them down, like interactions with atoms where they lose energy, no voltage should be required for them to keep moving forever.

      So the result of this experiment raises the question: Why are these electrons not interacting with anything? We have some good ideas about why they don't in superconducting materials. So this extends the realm of this behavior into other states of matter. Or its new behavior.

    • As long as you're not taking energy out of it, no, it's not. Well, actually, energy is perpetual; it's power that's not. Perpetual motion exists in a vacuum. It just doesn't on Earth, with all that friction that requires perpetual power to counteract.
      You can also maintain a perpetual current in a superconductor, as long as you're not messing with the magnetic field it generates. But just like a hard vacuum, it's not a natural state down here.

    • by Richard Kirk ( 535523 ) on Sunday October 11, 2009 @06:29AM (#29709987)

      The energy is perpetual, so you aren't a fool. Congratulations. However, for as long as it lasts, no one gets any power out of it. It is just a tiny, fixed current going in a circle, giving a small, static magnetic field.

      On a smaller scale, consider electrons circling a nucleus. They are waves, not little planets orbiting a sun, but some of them are going in circles endlessly. They aren't losing energy because they have to be in one quantum state, or emit or absorb a whole chunk of energy to move to another. They can't slowly leak their orbital energy away and spiral into the nucleus, which is a good thing for us, as matter as we know it would rapidly cease to exist.

      What we have here in our little ring is the same sort of thing, but on a larger scale. You have lots of electrons, all in a stable state. Instead of a few electrons orbiting a single nucleus, you have a lot of outer electrons spread out among a lot of nuclei. If you have a stable state, then the loop will enclose an integer number of magnetic flux quanta. The most likely state, and the lowest energy state if there is no applied field, is to have no persistent current and zero flux quanta. However, at a finite temperature, it is likely that the system is not in its lowest energy state.

      Why doesn't the loop let the flux quanta out and drop to the lowest energy state? Well, the quantum maths is a bit tricky, but a rough explanation goes like this: to let the flux go, the ring has to stop conducting at one point and put up a resistance. This will let out the flux quantum and absorb the energy as it goes. While this makes sense in energy terms, there is no reason why one bit of the loop should do it rather than another. The superconducting SQUID devices mentioned in the article are a superconducting loop with a weak point, so you can have all sorts of elegant fun with the physics as flux quanta go in and out.

      So, this is no use as an energy source, but it could be very useful as a form of memory. Suppose you have a loop of 18 carbon atoms with one hydrogen on each - a bit like benzene but bigger. Like benzene, it has a loop of pi electrons above and beneath, and these electrons can do the same thing. The first energy state (one flux quantum in the loop) is about 0.5 eV above the ground state, so it should be stable at room temperature. You can read the energy state non-destructively by approaching a similar loop with a weak point (a bit like a SQUID, again), or you can destructively blank the state by twisting the ring, destroying the pi delocalization. This is not a new idea - I know it was talked about in the eighties.
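The "stable at room temperature" claim for a 0.5 eV gap is easy to sanity-check with a Boltzmann factor (a textbook estimate, not a figure from the article):

```python
import math

K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def excitation_probability(gap_ev, temp_k):
    # Boltzmann factor: relative probability of a thermal jump across the gap.
    return math.exp(-gap_ev / (K_B_EV * temp_k))

p_room = excitation_probability(0.5, 300)   # ~4e-9: effectively frozen in
p_hot = excitation_probability(0.5, 600)    # doubling T raises the odds enormously
```

At 300 K the gap is about 19 kT, so spontaneous decay of the stored flux state is strongly suppressed, which is what would make it usable as memory.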

  • by John Hasler ( 414242 ) on Saturday October 10, 2009 @11:08AM (#29703645) Homepage


    • Re: (Score:3, Informative)

      by ikkonoishi ( 674762 )

      Possibly. It was just measured; they need time to figure out what its limits are.

    • Re: (Score:2, Interesting)

      I looked into the effect of persistent current a bit, and it turns out that someone has figured out how to use it as a photonic memory. Check out the Wikipedia article on Aharonov-Bohm nano-rings.

      The Harris Lab website has a number of papers on the persistent current effect. The Aharonov-Bohm effect is one of the weirdest observed effects in physics, so reading about the persistent current effect that arises from it is (arguably) a fun read.

  • I'm no EE (Score:2, Offtopic)

    by camperdave ( 969942 )
    I'm no electronics engineer or chip designer, but couldn't they make things more compact by going vertical? Chips are always planar. Wouldn't you get a faster IC if you stacked the components instead? Make a sandwich; silicon, insulator, aluminum, insulator, silicon, insulator, aluminum, etc. (The aluminum would be for heat sink purposes.) Build it up 16, or 32, or even 64 layers thick. Each layer could be a processor core.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      They are already developing them.

      googling it and picking a random one

    • Re:I'm no EE (Score:4, Informative)

      by Xiaran ( 836924 ) on Saturday October 10, 2009 @11:17AM (#29703689)
      I am an electronics engineer, and you are forgetting about heat dissipation. We would love to have three-dimensional integrated circuits, but unless we come up with a good way to dissipate the heat, they will be little molten balls of almost pure Si (or GaAs).
      • by ahem ( 174666 )

        So the GP suggested a layer of aluminum for just that purpose. Is the heat carrying capacity of aluminum insufficient? What if you had active cooling sucking heat out of the aluminum at the chip's edge?

        • Re:I'm no EE (Score:4, Informative)

          by Anpheus ( 908711 ) on Saturday October 10, 2009 @11:31AM (#29703791)

          It takes a heatsink the size of a small house to deal with current overclocked CPUs, and that's on a single plane. The more layers you put between your heatsink and the bottom-most layer of your CPU, the poorer the conduction of heat away from it and the worse off you'll be.

          He's quite right: without a heatsink, the latest CPUs instantly rise to over 90°C and then reset or throttle themselves down to unusable levels.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            That is assuming the processor manages to react in time. I've seen videos of processors whose heatsinks have been removed, where the processors appear to vaporize. What they actually did was apparently heat so quickly they deformed in such a way as to launch themselves from the socket at great velocity. (Too fast for the cameras to catch with any real clarity.)

            So basically, the processor took off like a rocket, (albeit technically it was a projectile upon leaving the motherboard, rather than having any peri

        • Re: (Score:1, Interesting)

          by Anonymous Coward

          we already do something similar, and the problem is that it is not sufficient for 3D. Now if we could stack individual components of a motherboard on top of each other and use lateral cooling current ...

        • Re:I'm no EE (Score:5, Interesting)

          by amorsen ( 7485 ) on Saturday October 10, 2009 @11:33AM (#29703811)

          Is the heat carrying capacity of aluminum insufficient?

          Depends on how thick you make the layer. Look at the kind of heatsinks high-performance chips need today, for chips with just one layer. Multilayer chips would need comparable cooling performance per layer.

          You can of course add a few low-power layers to a high-power chip, which may be worth it at some point just to shorten interconnect wires (or in order to use inductive coupling). It's a lot of complexity for what is so far a small gain though.
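The per-layer problem compounds because heat from the lower dies must cross every die above them. A crude 1-D series-resistance model makes the point (all thermal resistance and power numbers below are made up for illustration):

```python
def bottom_die_rise(n_layers, p_layer_w, r_layer, r_sink):
    # Hypothetical 1-D stack model: all heat exits through the top heatsink,
    # so the interface below the j-th die from the top carries the power of
    # every die beneath it. The bottom die runs hottest.
    rise = n_layers * p_layer_w * r_sink          # total power through the sink
    for j in range(1, n_layers):
        rise += (n_layers - j) * p_layer_w * r_layer
    return rise

# 25 W per die, 0.5 C/W per die layer, 0.4 C/W heatsink (illustrative numbers):
single = bottom_die_rise(1, 25, 0.5, 0.4)    # 10 C above ambient
stacked = bottom_die_rise(4, 25, 0.5, 0.4)   # 115 C above ambient
```

Four dies draw 4x the power but the bottom die's temperature rise grows faster than 4x, which is why stacking only low-power layers, as suggested above, is the practical compromise.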

        • So the GP suggested a layer of aluminum for just that purpose. Is the heat carrying capacity of aluminum insufficient?

          Yes. I'm no EE but I am something of a hardware geek who has installed a lot of processors. There's a reason heatsinks are so massive. If a thin layer of anything was so good that it could be stuck between two processors and provide adequate cooling to both with no airflow, then CPUs wouldn't even have fans.

          What if you had active cooling sucking heat out of the aluminum at the chip's edge?

          Active cooling is not that good. Way better than passive cooling of course, but you're asking too much of it. Assume a "3d" processor was effectively 4 Core 2 Duos sandwiched together. Look at the heatsi

      • by Anonymous Coward

        The problem is not heat dissipation: it is the inefficiency of our computational machinery.

        Logic devices have almost zero efficiency, in that for each watt going in, nothing or almost nothing goes into the logic operations themselves. Almost everything is converted into heat or physical motion.

        So... 500 watts into a server is 500 watts of heat to dissipate... and zero watts of computing (whatever that would be).

        What we need is a more efficient computing design.

      • 3-d is the past, upwards and onwards!

        (Actually, I'm not kidding. The brain is a 4-dimensional circuit/computer. In addition to being spatially extended in three dimensions, computations are also temporally extended (thus adding a fourth). What might be an atomic instruction on a modern "2-d" CPU could require all four dimensions of the brain. Think in terms of remembering a word, or the lines to a poem. You get the first part, and some aspect of that influences the trajectory of the system to move to a s

    • There was a paper released a little while back (and discussed here IIRC) which mentions that when you put water through a smaller and smaller capillary tube, when you get it down to absurdly small sizes the water actually goes faster. I forget what effect was responsible, but perhaps the solution is to have a tiny heat pipe integrated into the IC, and to use another heat pipe to carry heat away from it (at the CPU cooler level.) No solid metal can dissipate heat fast enough for what you suggest; a single-la

  • Sounds like perpetual motion!

  • Nanometers? (Score:4, Funny)

    by one cup of coffee ( 1623645 ) on Saturday October 10, 2009 @11:22AM (#29703729)
    "Modern processors rely on wires mere nanometers wide." -Nothing to see here, move along.
  • But what does it all mean, Basil?
  • wait, did they just say theoretical?

    granted, it's all a theory, but is it really theoretical if they've measured it?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Yes, it just makes the theory stronger. Just like the theory of evolution. For all practical purposes it is fact. We just aren't so arrogant as to call it a law anymore.

    • Yale physicists have successfully measured a theoretical 'persistent current'

      Yes, they said theoretical! They must've measured it theoretically, I'm sure!

      I guess they mean they have measured a quantity that until that point had only been postulated (based on theory).

  • This reminds me of people who plug power strips back into themselves, and then wonder why it doesn't power their devices.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Funny you should say that. I first began considering the idea of "persistent current" when I realized your setup could actually retain a current if there were no resistance.

  • Short explanation (Score:5, Informative)

    by Anonymous Coward on Saturday October 10, 2009 @03:21PM (#29705387)

    I am a solid-state physics Ph.D. student. There seems to be a lot of confusion about how these things work, which is unsurprising given the lack of details in this slightly sensationalist story published by Yale about work done at Yale. Hopefully this helps a bit.

    First, these currents don't spontaneously arise out of the blue. There is an external applied magnetic field, so every metal ring has at least one flux line passing through it. As most should know, a changing magnetic field induces an electric current. Normally, in non-superconducting metals, inelastic scattering of electrons causes the current to dissipate (i.e., there is resistance).

    The unique thing about these metal rings is that they are smaller than the electron's phase coherence length, or the distance the electron travels before it is scattered inelastically. Electrons will scatter elastically off of impurities, but those collisions are not dissipative.

    This Yale group by no means discovered this phenomenon, nor are they the first to measure it. What they did was measure it with greater accuracy. The things that have been unclear for a while are the direction the current travels in and its magnitude. Hopefully these new measurements will shed some light on the matter.
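For a feel for the magnitudes in dispute, the standard textbook estimate for a clean (ballistic) ring is I ~ e·v_F / L. This is my own order-of-magnitude sketch, not a number from the Yale paper, and disorder in real diffusive rings suppresses it substantially:

```python
E_CHARGE = 1.602176634e-19   # electron charge, C
V_FERMI = 1.0e6              # m/s, typical metallic Fermi velocity (assumed)

def ballistic_persistent_current(circumference_m):
    # Order-of-magnitude estimate I ~ e * v_F / L for a clean 1-D ring.
    # Real rings are diffusive, so measured currents come out much smaller,
    # which is part of why the magnitude has been hard to pin down.
    return E_CHARGE * V_FERMI / circumference_m

i_est = ballistic_persistent_current(1e-6)   # ~0.16 microamps for a 1 um ring
```

That nanoamp-to-sub-microamp scale is exactly why the magnetic signal is so hard to read with SQUIDs, and why a more precise cantilever technique matters.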

    P.S. I hate Slashdot's comment system. Every time I clicked off this typing box, it refused to accept any input until I clicked randomly around the screen for at least 15 seconds.

    • Re:Short explanation (Score:5, Informative)

      by Anonymous Coward on Saturday October 10, 2009 @03:32PM (#29705465)

      I should also add to this that one must remember that electrons are as much waves as they are particles. Because of the circular geometry, electron wave functions around the loop acquire a phase in integer multiples.

      The group is measuring the changes in magnetic moment that these currents produce.

    • What, besides lowering the temperature, can be done to increase the phase coherence length?

    • Actually, Slashdot's comment system is run using a quantum plugin that actually implements these loop current effects. Most computers run this just fine but you, being a physics Ph.D. student have skewed the system since it can sense that you are starting to understand how it works. It cannot tolerate this encroachment so it is trying to throw you off track by distracting you.

      Do not be discouraged. Look on this as a challenge. Keep clicking randomly and this will confuse it enough for it to start worki
