Can We Surpass Moore's Law With Reversible Computing? (ieee.org) 118

"It's not about an undo button," writes Slashdot reader marcle, sharing an article by a senior member of the technical staff at Sandia National Laboratories who's studying advanced technologies for computation. "Just reading this story bends my mind." From IEEE Spectrum: [F]or several decades now, we have known that it's possible in principle to carry out any desired computation without losing information -- that is, in such a way that the computation could always be reversed to recover its earlier state. This idea of reversible computing goes to the very heart of thermodynamics and information theory, and indeed it is the only possible way within the laws of physics that we might be able to keep improving the cost and energy efficiency of general-purpose computing far into the future... Today's computers rely on erasing information all the time -- so much so that every single active logic gate in conventional designs destructively overwrites its previous output on every clock cycle, wasting the associated energy. A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect...

[I]t's really hard to engineer a system that does something computationally interesting without inadvertently incurring a significant amount of entropy increase with each operation. But technology has improved, and the need to minimize energy use is now acute... In 2004 Krishna Natarajan (a student I was advising at the University of Florida) and I showed in detailed simulations that a new and simplified family of circuits for reversible computing called two-level adiabatic logic, or 2LAL, could dissipate as little as 1 eV of energy per transistor per cycle -- about 0.001 percent of the energy normally used by logic signals in that generation of CMOS. Still, a practical reversible computer has yet to be built using this or other approaches.
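
(Not part of TFA: a quick numeric sanity check of the figures above, in ordinary Python, assuming room temperature. The Landauer limit, k_B * T * ln 2, is the thermodynamic floor for erasing one bit; the 1 eV 2LAL figure and the "0.001 percent of CMOS signal energy" comparison come straight from the summary.)

import math

K_B = 8.617333262e-5              # Boltzmann constant, eV per kelvin
T = 300.0                         # assumed room temperature, kelvin

landauer = K_B * T * math.log(2)  # minimum cost of erasing one bit
lal = 1.0                         # ~1 eV per transistor per cycle (2LAL figure above)
cmos = lal / 1e-5                 # 2LAL is 0.001% of CMOS signal energy => ~100 keV

print(f"Landauer limit at {T:.0f} K : {landauer:.3f} eV per erased bit")
print(f"2LAL simulation figure    : {lal:.0f} eV (~{lal / landauer:.0f}x the limit)")
print(f"Implied CMOS signal energy: {cmos / 1e3:.0f} keV")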

The article predicts "if we decide to blaze this new trail of reversible computing, we may continue to find ways to keep improving computation far into the future. Physics knows no upper limit on the amount of reversible computation that can be performed using a fixed amount of energy."

But it also predicts that "conventional semiconductor technology could grind to a halt soon. And if it does, the industry could stagnate... Even a quantum-computing breakthrough would only help to significantly speed up a few highly specialized classes of computations, not computing in general."
This discussion has been archived. No new comments can be posted.

Can We Surpass Moore's Law With Reversible Computing?

  • No (Score:5, Interesting)

    by Kohath ( 38547 ) on Saturday September 09, 2017 @06:08PM (#55166683)

    Moore's Law [wikipedia.org] is about device sizes and economics, not about energy use.

    • Re:No (Score:5, Informative)

      by marcle ( 1575627 ) on Saturday September 09, 2017 @06:20PM (#55166743)

      Moore's Law [wikipedia.org] is about device sizes and economics, not about energy use.

      Absolutely right; the editor inserted this headline. The reason I submitted it isn't that this will have any immediate effect on the processor industry, but that the concepts are really interesting, and if they actually have practical application, well, that's amazing.

      • by Kohath ( 38547 )

        It does seem interesting, but the article has limited info on how it could be practical.

        Some sort of very fancy reversible charge storing logic would be good for an ADC design.

        • I'll make an airplane analogy:

          - In 1961, researchers proposed that flying pigs, if they could be found, could be used to replace airplanes. With the rapid technological advances in actual airplanes, though, the research languished for decades
          - In 1973, new research showed that, if flying pigs did exist, they could be herded in such a way that airline operations would be made a lot more efficient.
          - The research then languished again for many years, but recently more progress has been made. Methods hav

          • Birds?
          • Ah, but in the meantime, genetic research isolated the genes responsible for the condor's giant wing and wingspan. Researchers are in the process of selecting a species of very small pig to attempt for the first time to create a new species of pig, one with actual wings and hence in principle capable of flight according to the syllogism "If pigs had wings, they could fly".

            Also, Pink Unicorn spotted trotting down US 70 near Dover, NC, halfheartedly pursued by gracefully dancing bears! More news at 11!

    • by Anonymous Coward

      Moore's Law is very much about energy use. In fact, the ability to decrease transistor size is directly tied to the ability to control the energy these transistors consume.

      When transistors get smaller they naturally consume less energy. But that's not enough. Significant effort is required to ensure that they consume even less than that, particularly when we're dealing with 22 nm and especially 14 nm processes.

      Why is that? Electromagnetic interference.

      When you're dealing at extraordinarily small scales like n

      • by Kohath ( 38547 )

        Yeah, energy use is the #1 or #2 factor for CPUs. But Moore's Law is not about energy use, it's about device sizing and economics. Transistor scaling is not primarily limited by energy use.

        • But Moore's Law is not about energy use, it's about device sizing and economics.

          To be fair, TFA gets it right. It is only the Slashdot summary headline that mangles Moore.

      • I think you've got the causality wrong.

      • by Agripa ( 139780 )

        Moore's Law is very much about energy use. In fact, the ability to decrease transistor size is directly tied to the ability to control the energy these transistors consume.

        Moore's Law is about the economics of increasing integration. If we had some way to make silicon area cheaper, which has happened on a small scale, then we could duplicate Moore's Law with increasingly large integrated circuits without decreasing transistor size. Moore's Law is not even about performance which was reduced during some process generations.

        In recent fabrication generations for integrated circuits, power has become important because it limits transistor density. Above a certain power per are

    • The silicon medium can only get so thin before it starts becoming improbable that the electrons are where you expect them to be. I remember an article about this in Wired some years ago, talking about Heisenberg Uncertainty, the limits of silicon, and a research team taking advantage of it to produce electron shells without nuclei.

  • Can We Surpass Moore's Law With Reversible Computing? NO. This does nothing to address Moore's law and shows an ignorance of what Moore's law is by posing the question.
  • My favorite Slashdot stories are the ones that I absolutely do not understand. Honestly. I'm a lot more likely to actually read TFA when the summary means absolutely nothing to me.

    • I read the summary as "level-two diabetic logic" so I'm not off to a good start either.

    • by Dutch Gun ( 899105 ) on Saturday September 09, 2017 @06:33PM (#55166801)

      Yeah, this one was a bit of a brain burner. I actually had to RTFA to get a clue as well. Hopefully we get more of these articles. Wouldn't that be nice: tech-heavy stories on a tech-site...

      I'm still going to point out some silliness in the article, mainly, this quote:

      There’s not much time left to develop reversible machines, because progress in conventional semiconductor technology could grind to a halt soon. And if it does, the industry could stagnate, making forward progress that much more difficult. So the time is indeed ripe now to pursue this technology, as it will probably take at least a decade for reversible computers to become practical.

      That seems like a stretch. As soon as we actually hit the wall, there's going to be a great incentive to push forward with alternative technology. In the meantime, the world is not going to collapse because we can't keep increasing our computational power at the same ridiculous rate. In fact, it might actually be nice to take a bit of a breather and just work at hardening and optimizing our existing infrastructure (hah!).

      Rather, it sounds like a marketing pitch for more funding, and seems more than a little self-serving. Still, that's fine. I hope there remains some amount of funding for blue-sky projects like this and quantum computing. Even if it doesn't pan out as hoped, it's very likely we'll still learn valuable things.

      • There is some number Y(x) of articles written purely as funding pitches for every amount X of funding that might possibly be funneled to that research, and some function D(x, y) that determines whether there's anything of value in the research or the article.

      • by nasch ( 598556 )

        The only serious problem I can see is this scenario:

        - conventional processors stop improving much
        - people buy processors less often because they're not getting better
        - some processor makers go out of business due to reduced demand

        Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday September 10, 2017 @12:28AM (#55167711) Homepage Journal

          Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.

          It's already happened. There are only three vendors of conventional processors for PCs, and one (Via) trails the others (you know) by a wide margin. There are loads of other vendors who only make embedded processors, even some who specialize in x86 and who used to make processors which went into the competition's motherboards. Now they make whole boards with their own chips and sell them for embedded use. But there used to be at least another handful [wikipedia.org] of corporations which made processors which you could buy and stick into a socket on your PC motherboard.

          And if we look beyond x86, we see more of the same. Oracle looks to be getting out of future SPARC development completely, leaving that to Fujitsu. How long will they be able to justify development of their own architecture? That leaves just IBM.

          • by Anonymous Coward

            Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.

            Oracle looks to be getting out of future SPARC development completely, leaving that to Fujitsu. How long will they be able to justify development of their own architecture? That leaves just IBM.

            I don't know about that. ARM Holdings Ltd. seems to be doing all right. To the point, in fact, where it has killed off all other competition in the embedded space. And now Apple is rumoured to bring its tablets into the laptop segment, bringing along their ARM derivative on steroids. I guess it's always going to be about who's the bigger bird of prey.

            - https://en.wikipedia.org/wiki/ARM_Holdings

            • I don't know about that. ARM Holdings Ltd. seems to be doing all right. To the point in fact, where it has killed off all other competition in the embedded space.

              We are talking about conventional processors here. Those are embedded processors. We all know that those systems are dominated by ARM. But ARM has also shown no ability to scale their processors up to the point where the single-thread performance is suitable for modern desktop computing.

              I'll grant you that tablets and phones cover many people's needs, as it's a point I've made before. But ARM is not even on the radar for desktop computing. I've tried using a 64-bit, quad-core ARM as a desktop box, and the systems have neither the CPU power nor the bus bandwidth to actually do the job.

              • I'll grant you that tablets and phones cover many people's needs, as it's a point I've made before. But ARM is not even on the radar for desktop computing. I've tried using a 64-bit, quad-core ARM as a desktop box, and the systems have neither the CPU power nor the bus bandwidth to actually do the job.

                Desktop computing is dying. Most people I know who still have desktop computers leave them gathering dust in a corner. Parents are no longer buying desktops or even laptops for their kids for school. The only consumer market left is handhelds, tablets, and tablets with keyboards. There is still a need for desktop processors in the business/server space and in the cloud space but those spaces have completely different constraints than desktop computing and have freedoms to do things not possible on th

                • Desktop computing is dying.

                  It is leaving behind workstation computing and game consoles which are based on desktop processors. There's still nothing ARM-based which can do that job.

                • completely different constraints than desktop computing

                  Yes, they're even more sensitive to the cost of power than mobile computing is. The chief cost of running a datacentre is not the hardware, but the power, and the cooling.

              • by nasch ( 598556 )

                Who says we're only talking about desktop processors?

        • by Agripa ( 139780 )

          The only serious problem I can see is this scenario:

          - conventional processors stop improving much
          - people buy processors less often because they're not getting better
          - some processor makers go out of business due to reduced demand

          Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.

          - Already happened.
          - Already happened.
          - Already happened.

      • by Kjella ( 173770 )

        That seems like a stretch. As soon as we actually hit the wall, there's going to be a great incentive to push forward with alternative technology.

        Why? How? Is there any process today that would pay 100x to have it solved at 10x the speed? Is there any reason to believe it won't be like the Concorde, technically superior but not really fast enough to be economically sustainable? We have gigahertz processors with gigabytes of memory and terabytes of storage, what are we really short on? I'd like to think of myself as a computer geek, in fact I'm pretty sure I am one... yet I know I could comfortably buy 128GB of memory but in practice I haven't had

        • Why? How? Is there any process today that would pay 100x to have it solved at 10x the speed?

          Various engineering simulations might be worth it.

          what are we really short on?

          You could always use more memory bandwidth. And you can always use more CPU. We have a ways to go yet before photorealistic imagery is ubiquitous in computer-generated entertainment, for example. Graphics pushes both bandwidth and processing speed.

    • by hord ( 5016115 )

      Overwriting memory releases the old value into the environment as waste heat when the new value is written. A reversible computing circuit would not overwrite the old value and would simply use a new storage location, thus using less energy. The problem is that you quickly run out of memory doing this. The article mentions that the solution is to simply "undo" these old states. That would create a closed (adiabatic) system that is constantly generating new state while cycling old state.

      The claim is that
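
(A minimal, hypothetical Python sketch of the uncompute idea in the comment above, not taken from TFA: compute an AND into a scratch bit with a Toffoli gate, copy the answer out with a CNOT onto a zeroed bit, then run the Toffoli again so the scratch bit returns to zero and can be reused without ever being erased.)

def toffoli(a, b, t):
    """Reversible AND: flip target t iff both controls a and b are 1."""
    return a, b, t ^ (a & b)

def cnot(c, t):
    """Reversible copy-onto-zero: flip target t iff control c is 1."""
    return c, t ^ c

for a in (0, 1):
    for b in (0, 1):
        scratch, out = 0, 0
        a, b, scratch = toffoli(a, b, scratch)  # compute a AND b into scratch
        scratch, out = cnot(scratch, out)       # copy the result to the output bit
        a, b, scratch = toffoli(a, b, scratch)  # uncompute: scratch goes back to 0
        assert scratch == 0                     # reusable, nothing was erased
        print(f"{a} AND {b} = {out}")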

      • Overwriting memory releases the old value into the environment as waste heat when the new value is written. A reversible computing circuit would not overwrite the old value and would simply use a new storage location, thus using less energy. The problem is that you quickly run out of memory doing this. The article mentions that the solution is to simply "undo" these old states. That would create a closed (adiabatic) system that is constantly generating new state while cycling old state.

        Thank you for the exp

      • They've discovered antispacetime.
      • by Anne Thwacks ( 531696 ) on Sunday September 10, 2017 @03:57AM (#55168117)
        While I accept that my understanding of the article is not perfect, it seems to me that what they are proposing is essentially passive logic.

        Passive logic predates active logic by many hundreds of years. However, although it theoretically requires less energy than active logic using the same technology*, it still requires energy. As a result, after a few layers of passive logic (generally two) you require active logic to restore the noise margin. You then discover that the passive logic is slower because the losses in it, while small, are effectively a series resistance, and whatever follows it is effectively a capacitance, however small. This is an RC delay circuit. The result is that the more layers of passive logic you have, the slower the whole thing is. To make it go faster, you reduce the passive layers and increase the number of active stages.

        There is another issue too - all the stray Rs and Cs are somewhat indeterminate in value (generally very temperature sensitive), so, in order to make sure everything is in sync, you use clocked logic, and to make it go faster, you keep the layers of logic between registers short (that is what pipelining does).

        In short, in the real world there is a tradeoff between pumping power in to make it go faster, and not pumping power in, and having it go slow.

        This was well known by 1970, and most probably known by all interested parties in 1941.

        Anyone who thinks that logic requires data to be cleared before it is over-written, is still using core memories from the 1970's. No one clears the old result and then writes a new one. The new result overwrites the old one. Preferably with due allowance to avoid the data being used during the transition (requires clocking, requires active devices).

        In short, unless I am completely wrong - in which case, much better written documents are required - the authors of the report have no clue at all.

        * Passive logic as implemented in Victorian railway signalling requires at least a million times more energy per signal transition than (active) 1970's TTL.
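
(A rough numerical illustration of the RC-cascade argument above, with made-up component values and the standard Elmore-delay approximation: each resistor in a passive chain has to charge all of the capacitance downstream of it, so the delay grows roughly quadratically with the number of passive stages.)

def elmore_delay(stages, r_ohms=1e3, c_farads=1e-14):
    """Elmore delay (seconds) of a chain of identical series-R, shunt-C stages."""
    # Resistor i drives its own capacitor plus every capacitor after it.
    return sum(r_ohms * c_farads * (stages - i) for i in range(stages))

for n in (1, 2, 4, 8):
    print(f"{n} passive stage(s): ~{elmore_delay(n) * 1e12:.0f} ps")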

        • by Agripa ( 139780 )

          Anyone who thinks that logic requires data to be cleared before it is over-written, is still using core memories from the 1970's. No one clears the old result and then writes a new one. The new result overwrites the old one.

          Dynamic logic [wikipedia.org] erases the previous result before doing the next computation and it is not a technology which died with core memory; it is still used for high performance logic.

      • In theory, if instead of having a state that is overwritten each time, you have a state that "flips" from one state to another then the amount of energy required to flip the state could be significantly less than the energy required to create the state. For instance on a balanced scale with 100 pounds on each side perfectly balanced, a single pound added or removed from one side would cause the state to flip to the other side.

    • This isn't the first time I've heard of reversible computing and its purported benefits, and over the years every time I looked it up I haven't seen significant advancements or implementations. This article is no exception. And I'm still not convinced that any design for a reversible-logic processor would be practical or useful for general-purpose computing, even assuming that the physical hardware problems have been solved.

      I would be quite happy to see a software simulation of an 8 bit processor with a simple

  • by haruchai ( 17472 ) on Saturday September 09, 2017 @06:27PM (#55166781)

    therefore, no

  • "...wasting the associated energy."

    Surely the energy is not "wasted", it has been used to create the output.
    • by Nemyst ( 1383049 )
      The point is that most of the energy expended when creating the output was transformed into heat rather than directly used to create said output. Thus, all of that energy was wasted. To make a car analogy, it's like the difference in efficiency between a gasoline-fueled car (roughly 20% of the energy content of gasoline is turned into work) and an electric car (80% instead). Sure, both are doing work when expending all of that energy, but one wastes most of it generating heat and sound.
    • "Surely the energy is not "wasted", it has been used to create the output."

      If you could, in theory, use orders of magnitude less energy to achieve the same output, then a good proportion of the energy consumed is indeed wasted.

  • by Jfetjunky ( 4359471 ) on Saturday September 09, 2017 @06:40PM (#55166819)
    Here is what they mean. Imagine logic elements are people passing notes. Except when you pass a note, the next person reads it, throws it in the trash, then rewrites a new one to pass on. Big waste, right? They are proposing logic gates that simply pass the note along based on its content. Much more efficient, right?

    The bad news? Good luck doing that at today's speeds. We lose more energy simply biasing the transistors heavily to make them switch faster than we ever do by erasing states. We have heat limitations due to this much more than charge lost every state transition. It might give incremental improvement in density, but it's not some silver bullet.
    • More like, transistors are groups of people sitting close together who stand up and hold hands in various ways from where they stand, reaching down with a free hand to grab one hand of another seated nearby, pulling them up and forcing them to turn in a way that decides where and who that free hand in turn can reach. And once you have a given situation someone is supposed to shout something to the teacher and then they all sit down again. The proposal seems to be that just some of them should sit down, but

    • So, I read this as a sort of Pachinko machine, where the computation flows through a series of gates and those gates aren't reused (as quickly)... with advances in shrinking transistor size, and the reduction in operating frequency this would bring, it might be an interesting twist on parallel computing. Instead of having 80 processors split up a problem and bring it back together, spread out a processor, make it 80x larger and recycle through the gates 80x slower, or maybe only 20x slower and net a 4x spe

    • Switching is an erasure of state
  • by hord ( 5016115 )

    The trick is to undo the operations that produced the intermediate results. This would allow any temporary memory to be reused for subsequent computations without ever having to erase or overwrite it.

    ... which results in thermal dissipation... which results in increased entropy... which is exactly the thing that you were trying to avoid in the first place. Yet Another Free Lunch.

    The only way I can see this working is if you use very low-temperature superconducting grids... like they already do in quantum computers. I just can't see any improvement here without material science being involved.

  • Honestly, I think we'll have quantum dot cellular automata [wikipedia.org] before we get reversible computing. In doing so, it would eliminate our power consumption issues in regard to computing. As always, the real problems lie with the manufacturing of these devices.

  • yeah, and? (Score:4, Insightful)

    by eyenot ( 102141 ) <eyenot@hotmail.com> on Saturday September 09, 2017 @06:56PM (#55166863) Homepage

    This is something that calls for a proof of concept in the form of linear programming. Go ahead, show me the machine tree and its related Karnaugh maps and show this bi-directional computation performing several classic computing staples like stacks, sorts and finding primes in a manner that involves fewer steps.

    Information has its limits, too, and laws somewhat similar to thermodynamics appear to govern these limits. If you have some linear function g(c(b(a))), that doesn't necessarily mean you can complete it as g(a,b,c) if c depends on b, which depends on a.

    For instance, there are bidirectional programming languages, but you are still forced to rethink the problem you're solving so that work toward the solution can still be done in reverse, and frankly I doubt that all real-world problems have a solution where time=t can be decremented. For starters, if you need more than one output for a given input, you're kind of screwed for any linear task.

    I have to agree with those who see this as a gag to win more funding. It's the equivalent of bringing, say, bidirectional programming over to the hardware level; go ahead and find me all the amazing examples of what you can do with bidirectional programming languages (there are several, and some are a number of years old).
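
(For what it's worth, toy examples of the "run it backwards" property are easy to write down, even if they say nothing about practical hardware. A hypothetical Python sketch, not from TFA: an in-place modular add maps (x, y) to (x, x+y), discards no information, and is undone exactly by the matching subtract.)

BITS = 8
MASK = (1 << BITS) - 1

def add_forward(x, y):
    """Reversible in-place add: (x, y) -> (x, (y + x) mod 2**BITS)."""
    return x, (y + x) & MASK

def add_backward(x, y):
    """Exact inverse: (x, y) -> (x, (y - x) mod 2**BITS)."""
    return x, (y - x) & MASK

state = (42, 200)
forward = add_forward(*state)
recovered = add_backward(*forward)
print(state, "->", forward, "->", recovered)
assert recovered == state   # no information was lost along the way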

  • Deja Pensee (Score:5, Informative)

    by Anonymous Coward on Saturday September 09, 2017 @06:58PM (#55166875)

    It is interesting, as a pure mathematician, to read:

    "[F]or several decades now, we have known that it's possible in principle to carry out any desired computation without losing information -- that is, in such a way that the computation could always be reversed to recover its earlier state."

    Now this 'can get back to earlier state' thing is basically the 'existence of inverses' axiom of group theory. A semigroup is a structure with a well-defined associative operator, but not necessarily an identity, nor existence of inverses. Now going from one computation state to the next, as a CPU does, is essentially a semigroup operation. Or at least something like that.

    Reversible computing is effectively the faithful transformation of an abstract structure (e.g. rotating an icosahedron) on which the possible transformations form a group. Such a condition means that an unbounded number of operations can be chained without loss. This means the transformation must take zero energy. Thus, in fact, no change takes place. That means that what you think is a computation is, in fact, a fixed point that you're somehow conjuring into what appears to be a non-fixed computation. Interestingly, to me this stuff isn't new, nor even recent. What the ancient mystics, yogis and others obsessed over was this sort of aspect of reality.

    Getting back to a less abstract point of view, the problem I see is that if these guys (and girls) insist on reinventing group theory the hard way, they won't even be able to catch up with where group theory was middle of last century. Indeed there is a dire need to more thoroughly think through what computation itself _actually is_. The 'Turing Machine+ChurchTuringThesis' thing is a half-decent first stab, but nothing more. The infinite tape, like the successor and infinity axioms of Peano Arithmetic and ZF Set Theory also, is akin to a naive C programmer assuming that malloc() will never fail. When you're knocking up a quick prototype, and you're not bothered if a malloc() failure crashes the program, fine. On the other hand, Linux kernel module authors seem to understand the need to use malloc() when it works, but never to trust it for critical duties, whether explicitly, or implicitly (via e.g. printf).
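
(The group-versus-semigroup point above can be made concrete with a small injectivity check; illustrative Python, my own framing rather than the commenter's: a reversible gate such as CNOT permutes the state space and therefore has an inverse, whereas a plain AND collapses distinct inputs onto the same output and cannot be undone.)

from itertools import product

states = list(product((0, 1), repeat=2))

cnot = {s: (s[0], s[0] ^ s[1]) for s in states}      # reversible gate
and_gate = {s: (s[0], s[0] & s[1]) for s in states}  # irreversible gate

def is_bijection(mapping):
    """A map on a finite set is invertible iff its outputs are all distinct."""
    return len(set(mapping.values())) == len(mapping)

print("CNOT invertible:", is_bijection(cnot))        # True: it is a group element
print("AND  invertible:", is_bijection(and_gate))    # False: information is erased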

    • Yeah, but you can create a memory management structure where malloc() always works as long as it's the only program running, i.e. as a platform for any other programs which must also adhere to the rules of the MMS. For example "object-oriented C", where the platform and every program on it are all implemented in OOC.

      Not to disparage your remarks, because I think your first two paragraphs word my own objections more fundamentally than I managed to. (As a Math Minor merely requisite to Computer Engineering, I

  • So I read the article and have a basic understanding of the technology. I can see how "reversible" applies at the low level, but it is a poor choice for a description of this process. Adiabatic computing might be better, but people who have never taken thermodynamics probably don't understand that word. I'd suggest something like "No Waste Computing" or "No Heat Computing" might be a better description (neither is strictly true, but the potential waste heat is extremely low, i.e. just saying "low heat" does
  • "Still, a practical reversible computer has yet to be built using this or other approaches."

    Since quantum computers of any kind have to be reversible due to the very nature of QM, every realization of quantum computation is a reversible computer.

    This includes the controversial D-Wave machine [wavewatching.net] as well as IBM's QC chip that you can play with online [ibm.com].

  • I suggest that people who are really interested in understanding this subject read and understand the papers reprinted in "Maxwell's Demon: Entropy, Information, Computing" (first edition: ISBN: 978-0691605463; second edition: ISBN: 978-0750307598).

  • Imagine you have a container with a wall in the middle. There is a door that can be opened or closed. There is a person who can open or close that door to let individual gas particles through. Now suppose he opens or closes this door such that all the fast moving particles are moved to the left side of the container and all the slow to the right. It turns out that this person can do this without consuming any energy so long as he remembers how he did it in such a way that it can be reversed. The minimu
  • I did not read this article, so I'll just make up some numbers I find plausible.

    Somehow we need to equate entropy (or information loss) with energy. The assumption of 1 eV per bit is probably OK: one electron, either changing potential by 1 V or not.

    So by making computations reversible, we could avoid the 1 eV loss that is otherwise inevitable when the computation is not reversible. Nice. But if we currently burn 1 keV per switch, there is no point talking about this technology right now. Let's shave off another 990 eV first. Then

    • Using your example, if you have a balanced scale with 1000 eV on each side, and you can flip it back and forth with a single eV, then you are 1000x more efficient than having to move all 1000 eV every time you want to flip a switch. This might be more plausible than trying to make 1 eV switches.

      • I doubt that the authors of the paper can build a 1eV transistor right now. It can be done in theory. In theory you can also make the irreversible transistor much better.

        And now that I skimmed over the article, it says that only 1 meV is theoretically lost per bit. This makes my point even more valid. We can improve current technology to be one million times more efficient before hitting this thermodynamical barrier.

        Also, you finish your post with an unsupported statement. It might also be less plausible to

  • reverse computation back. Honestly, or was it 1997? Unfortunately, I dumped all my old Bytes due to not enough space, but I know that somewhere in the second half of the 1990s there was already an article about reversible computing. Since in those 20 years, there haven't been further advancements in this field, I would think that this is an idea that was born dead. Also, as reversible computing can be thought of as the electronic equivalent of a weight-counterweight system, I do not see how this helps M
    • Since in those 20 years, there haven't been further advancements in this field, I would think that this is an idea that was born dead.

      There have been billions (if not trillions) of dollars of R&D poured into silicon and existing technology. Even if someone came up with something that potentially could perform better than existing technologies after the same amount of R&D, getting the investment needed to ramp it up to compete with existing technologies would be next to impossible. Unless we hit a brick wall, incremental improvement of existing technologies will likely always be a better path than starting over from scratch with

  • As I understand reversible computing, it's basically a recycling of data to preserve electrons before they are allowed to dissipate as heat. The idea being that the more you reuse an electron, the less heat a chip will create. The problem is not so much that the chips aren't designed this way today; it's got more to do with how fast chips lose electrons as heat due to the fabrication technology they are built with. As most hardware-savvy people know, the smaller the chip the less space between transistors there
  • When I see "frictionless", I think perpetual motion machine. Which can't exists because of the laws of thermodynamics. Then there is this "fully reversible" concept here that claims it's not only allowed, but inspired by the laws of thermodynamics. So I'd like to ask : Although we are putting new energy in the processors, how can we make information go through it without it loosing any energy or experiencing any friction? Isn't it impossible? Or how do we compensate?
