Can We Surpass Moore's Law With Reversible Computing? (ieee.org) 118
"It's not about an undo button," writes Slashdot reader marcle, sharing an article by a senior member of the technical staff at Sandia National Laboratories who's studying advanced technologies for computation. "Just reading this story bends my mind." From IEEE Spectrum:
[F]or several decades now, we have known that it's possible in principle to carry out any desired computation without losing information -- that is, in such a way that the computation could always be reversed to recover its earlier state. This idea of reversible computing goes to the very heart of thermodynamics and information theory, and indeed it is the only possible way within the laws of physics that we might be able to keep improving the cost and energy efficiency of general-purpose computing far into the future...
Today's computers rely on erasing information all the time -- so much so that every single active logic gate in conventional designs destructively overwrites its previous output on every clock cycle, wasting the associated energy. A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect...
[I]t's really hard to engineer a system that does something computationally interesting without inadvertently incurring a significant amount of entropy increase with each operation. But technology has improved, and the need to minimize energy use is now acute... In 2004 Krishna Natarajan (a student I was advising at the University of Florida) and I showed in detailed simulations that a new and simplified family of circuits for reversible computing called two-level adiabatic logic, or 2LAL, could dissipate as little as 1 eV of energy per transistor per cycle -- about 0.001 percent of the energy normally used by logic signals in that generation of CMOS. Still, a practical reversible computer has yet to be built using this or other approaches.
The article predicts "if we decide to blaze this new trail of reversible computing, we may continue to find ways to keep improving computation far into the future. Physics knows no upper limit on the amount of reversible computation that can be performed using a fixed amount of energy."
But it also predicts that "conventional semiconductor technology could grind to a halt soon. And if it does, the industry could stagnate... Even a quantum-computing breakthrough would only help to significantly speed up a few highly specialized classes of computations, not computing in general."
No (Score:5, Interesting)
Moore's Law [wikipedia.org] is about device sizes and economics, not about energy use.
Re:No (Score:5, Informative)
Moore's Law [wikipedia.org] is about device sizes and economics, not about energy use.
Absolutely right; the editor inserted this headline. The reason I submitted it isn't because this will have any immediate effect on the processor industry, but because the concepts are really interesting, and if they actually have practical application, well, that's amazing.
Re: (Score:2)
It does seem interesting, but the article has little info on how it could be practical.
Some sort of very fancy reversible charge storing logic would be good for an ADC design.
Re: (Score:2)
I'll make an airplane analogy:
- In 1961, researchers proposed that flying pigs, if they could be found, could be used to replace airplanes. With the rapid technological advances in actual airplanes, though, the research languished for decades
- In 1973, new research showed that, if flying pigs did exist, they could be herded in such a way that airline operations would be made a lot more efficient.
- The research then languished again for many years, but recently more progress has been made. Methods hav
Re: No (Score:1)
Re: (Score:2)
Ah, but in the meantime, genetic research isolated the genes responsible for the condor's giant wing and wingspan. Researchers are in the process of selecting a species of very small pig to attempt for the first time to create a new species of pig, one with actual wings and hence in principle capable of flight according to the syllogism "If pigs had wings, they could fly".
Also, Pink Unicorn spotted trotting down US 70 near Dover, NC, halfheartedly pursued by gracefully dancing bears! More news at 11!
Moore's Law is about energy use. (Score:3, Informative)
Moore's Law is very much about energy use. In fact, the ability to decrease transistor size is directly tied to the ability to control the energy these transistors consume.
When transistors get smaller they naturally consume less energy. But that's not enough. Significant effort is required to ensure that they consume even less than that, especially when we're dealing with 22 nm and 14 nm processes.
Why is that? Electromagnetic interference.
When you're dealing at extraordinarily small scales like n
Re: (Score:2)
Yeah, energy use is the #1 or #2 factor for CPUs. But Moore's Law is not about energy use, it's about device sizing and economics. Transistor scaling is not primarily limited by energy use.
Re: (Score:2)
But Moore's Law is not about energy use, it's about device sizing and economics.
To be fair, TFA gets it right. It is only the Slashdot summary headline that mangles Moore.
Solution: Gas transisters (Score:1)
Re: (Score:2)
Just off the top of my head, the die area is shrinking as a square so power would need to scale at the same rate or your devices will overheat. The easy way is to lower voltages but then you are dealing with distinguishing between ground state and signal i.e. logic levels.
The problem is that power is made up of several different components and the total does *not* scale with the inverse square so power density has risen to the limit which may be economically handled.
If you check the die area for a maximum power of Intel's processors going back several generations, you will find that power per area has stayed roughly constant while area has decreased. Newer Intel processors use less power because they must in order to increase transistor density following Moore's Law.
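The scaling argument in this thread can be sketched with the textbook Dennard-scaling idealization (these are the classical scaling relations, not measurements from any real Intel process):

```python
# Illustrative sketch of Dennard-style scaling (textbook idealization).
# Under ideal scaling by factor s: feature size and voltage scale as 1/s,
# frequency as s. Dynamic power per transistor ~ C * V^2 * f, with C ~ 1/s.

def power_density_ratio(s, voltage_scales=True):
    """Power density relative to the unscaled process."""
    cap = 1.0 / s                      # capacitance shrinks with feature size
    volt = 1.0 / s if voltage_scales else 1.0
    freq = s                           # faster switching
    power_per_transistor = cap * volt**2 * freq
    area_per_transistor = 1.0 / s**2   # area shrinks as the square
    return power_per_transistor / area_per_transistor

# Ideal Dennard scaling: power density stays constant across shrinks.
print(power_density_ratio(2.0))                        # -> 1.0
# With voltage no longer scaling (the post-Dennard reality): density rises s^2.
print(power_density_ratio(2.0, voltage_scales=False))  # -> 4.0
```

This is why roughly constant power per area holds only as long as voltage keeps scaling; once it stops, density must be bought back some other way.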
Re: Moore's Law is about energy use. (Score:1)
I think you've got the causality wrong.
Re: (Score:2)
Moore's Law is very much about energy use. In fact, the ability to decrease transistor size is directly tied to the ability to control the energy these transistors consume.
Moore's Law is about the economics of increasing integration. If we had some way to make silicon area cheaper, which has happened on a small scale, then we could duplicate Moore's Law with increasingly large integrated circuits without decreasing transistor size. Moore's Law is not even about performance which was reduced during some process generations.
In recent fabrication generations for integrated circuits, power has become important because it limits transistor density. Above a certain power per are
Re: No (Score:1)
The silicon medium can only get so thin before it starts becoming improbable that the electrons are where you expect them to be. I remember an article about this in Wired some years ago, talking about Heisenberg Uncertainty, the limits of silicon, and a research team taking advantage of it to produce electron shells without nuclei.
NO (Score:2)
Re:NO (Score:5, Funny)
Wouldn't it be simpler to just find this Moore guy and force him to change his damn law?
Re:NO (Score:5, Funny)
Wouldn't it be simpler to just find this Moore guy and force him to change his damn law?
Got a shovel?
Re:NO (Score:5, Funny)
Yes I do, and strangely enough it's twice as big and much cheaper than the last one I bought 18 months ago.
Re: (Score:2)
someone go get John Cena
Re: (Score:2, Funny)
Just tell Trump that it was Obama that passed it.
Re: (Score:2)
It's not "Moore's Executive Order", it's "Moore's Law".
Re: (Score:2)
Re: (Score:2)
Wouldn't it be simpler to just find this Moore guy and force him to change his damn law?
Unfortunately that will not work because Moore is dead.
It is possible to reduce the noise of a resistor by reducing temperature, bandwidth, or resistance but it is NOT possible to reduce Boltzmann's Constant because Boltzmann is dead. - Analog Devices Application Note 280
two-level adiabatic logic (Score:2)
My favorite Slashdot stories are the ones that I absolutely do not understand. Honestly. I'm a lot more likely to actually read TFA when the summary means absolutely nothing to me.
Re: (Score:2)
I read the summary as "level-two diabetic logic" so I'm not off to a good start either.
Re:two-level adiabatic logic (Score:5, Interesting)
Yeah, this one was a bit of a brain burner. I actually had to RTFA to get a clue as well. Hopefully we get more of these articles. Wouldn't that be nice: tech-heavy stories on a tech-site...
I'm still going to point out some silliness in the article, mainly, this quote:
There’s not much time left to develop reversible machines, because progress in conventional semiconductor technology could grind to a halt soon. And if it does, the industry could stagnate, making forward progress that much more difficult. So the time is indeed ripe now to pursue this technology, as it will probably take at least a decade for reversible computers to become practical.
That seems like a stretch. As soon as we actually hit the wall, there's going to be a great incentive to push forward with alternative technology. In the meantime, the world is not going to collapse because we can't keep increasing our computational power at the same ridiculous rate. In fact, it might actually be nice to take a bit of a breather and just work at hardening and optimizing our existing infrastructure (hah!).
Rather, it sounds like a marketing pitch for more funding, and seem more than a little self-serving. Still, that's fine. I hope there remains some amount of funding for blue-sky projects like this and quantum computing. Even if it doesn't pan out as hoped, it's very likely we still learn valuable things.
Re: two-level adiabatic logic (Score:1)
There is some number Y(x) of articles that are written purely as funding pitches for every amount X of funding that might possibly be funneled to that research, and some D(x, y) that determines whether there's anything of value in the research or the article.
Re: (Score:3)
The only serious problem I can see is this scenario:
- conventional processors stop improving much
- people buy processors less often because they're not getting better
- some processor makers go out of business due to reduced demand
Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.
Re:two-level adiabatic logic (Score:4, Informative)
Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.
It's already happened. There are only three vendors of conventional processors for PCs, and one (Via) trails the others (you know) by a wide margin. There are loads of other vendors who only make embedded processors, even some who specialize in x86 and who used to make processors which went into the competition's motherboards. Now they make whole boards with their own chips and sell them for embedded use. But there used to be at least another handful [wikipedia.org] of corporations which made processors which you could buy and stick into a socket on your PC motherboard.
And if we look beyond x86, we see more of the same. Oracle looks to be getting out of future SPARC development completely, leaving that to Fujitsu. How long will they be able to justify development of their own architecture? That leaves just IBM.
Re: (Score:1)
Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.
Oracle looks to be getting out of future SPARC development completely, leaving that to Fujitsu. How long will they be able to justify development of their own architecture? That leaves just IBM.
I don't know about that. ARM Holdings Ltd. seems to be doing all right. To the point, in fact, where it has killed off all other competition in the embedded space. And now Apple is rumoured to bring its tablets into the laptop segment, bringing along their ARM derivative on steroids. Guess it's always going to be about who's the bigger bird of prey.
- https://en.wikipedia.org/wiki/ARM_Holdings
Re: (Score:3)
I don't know about that. ARM Holdings Ltd. seems to be doing all right. To the point in fact, where it has killed off all other competition in the embedded space.
We are talking about conventional processors here. Those are embedded processors. We all know that those systems are dominated by ARM. But ARM has also shown no ability to scale their processors up to the point where the single-thread performance is suitable for modern desktop computing.
I'll grant you that tablets and phones cover many people's needs, as it's a point I've made before. But ARM is not even on the radar for desktop computing. I've tried using a 64-bit, quad-core ARM as a desktop box, and the systems have neither the CPU power nor the bus bandwidth to actually do the job.
Re: (Score:2)
I'll grant you that tablets and phones cover many people's needs, as it's a point I've made before. But ARM is not even on the radar for desktop computing. I've tried using a 64-bit, quad-core ARM as a desktop box, and the systems have neither the CPU power nor the bus bandwidth to actually do the job.
Desktop computing is dying. Most people I know who still have desktop computers have them gathering dust in the corners. Parents are no longer buying desktops or even laptops for their kids for school. The only consumer market left is handhelds, tablets, and tablets with keyboards. There is still a need for desktop processors in the business/server space and in the cloud space but those spaces have completely different constraints than desktop computing and have freedoms to do things not possible on th
Re: (Score:2)
Desktop computing is dying.
It is leaving behind workstation computing and game consoles which are based on desktop processors. There's still nothing ARM-based which can do that job.
Re: (Score:2)
completely different constraints than desktop computing
Yes, they're even more sensitive to the cost of power than mobile computing is. The chief cost of running a datacentre is not the hardware, but the power, and the cooling.
Re: (Score:2)
Who says we're only talking about desktop processors?
Re: (Score:2)
The only serious problem I can see is this scenario:
- conventional processors stop improving much
- people buy processors less often because they're not getting better
- some processor makers go out of business due to reduced demand
Then when the industry picks back up again, there are fewer competitors. I have no idea how likely that all is though.
- Already happened.
- Already happened.
- Already happened.
Re: (Score:2)
That seems like a stretch. As soon as we actually hit the wall, there's going to be a great incentive to push forward with alternative technology.
Why? How? Is there any process today that would pay 100x to have it solved at 10x the speed? Is there any reason to believe it won't be like the Concorde, technically superior but not really fast enough to be economically sustainable? We have gigahertz processors with gigabytes of memory and terabytes of storage, what are we really short on? I'd like to think of myself as a computer geek, in fact I'm pretty sure I am one... yet I know I could comfortably buy 128GB of memory but in practice I haven't had
Re: (Score:2)
Why? How? Is there any process today that would pay 100x to have it solved at 10x the speed?
Various engineering simulations might be worth it.
what are we really short on?
You could always use more memory bandwidth. And you can always use more CPU. We have a ways to go yet before photorealistic imagery is ubiquitous in computer-generated entertainment, for example. Graphics pushes both bandwidth and processing speed.
Re: (Score:2)
Overwriting memory releases the old value into the environment as waste heat when the new value is written. A reversible computing circuit would not overwrite the old value and would simply use a new storage location, thus using less energy. The problem is that you quickly run out of memory doing this. The article mentions that the solution is to simply "undo" these old states. That would create a closed (adiabatic) system that is constantly generating new state while cycling old state.
The claim is that
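The "compute into fresh storage instead of overwriting" idea described above can be sketched with the classic reversible primitive, the Toffoli (controlled-controlled-NOT) gate, which computes AND into a spare bit without destroying its inputs and is its own inverse:

```python
# Sketch of reversible logic using the Toffoli gate: it computes AND into a
# spare "ancilla" bit instead of destructively overwriting anything, and
# applying it a second time undoes the computation exactly.

def toffoli(a, b, c):
    """Flip c iff a and b are both 1; a and b pass through unchanged."""
    return a, b, c ^ (a & b)

# Forward: with the ancilla initialized to 0, c becomes AND(a, b).
state = toffoli(1, 1, 0)
print(state)            # (1, 1, 1) -- inputs preserved, result in the ancilla

# Reverse: the same gate again restores the earlier state with nothing erased.
print(toffoli(*state))  # (1, 1, 0)
```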
Re: (Score:2)
Thank you for the exp
Anti-time (Score:1)
Re:two-level adiabatic logic (Score:4, Interesting)
Passive logic predates active logic by many hundreds of years. However, although it theoretically requires less energy than active logic using the same technology*, it still requires energy. As a result, after a few layers of passive logic (generally two) you require active logic to restore the noise margin. You then discover that the passive logic is slower because the losses in it, while small, are effectively a series resistance, and whatever follows it is effectively a capacitance, however small. This is an RC delay circuit. The result is that the more layers of passive logic, the slower the whole thing is. To make it go faster, you reduce the passive layers and increase the number of active stages.
There is another issue too - all the stray Rs and Cs are somewhat indeterminate in value (generally very temperature sensitive), so, in order to make sure everything is in sync, you use clocked logic, and to make it go faster, you keep the layers of logic between registers short (that is what pipelining does).
In short, in the real world there is a tradeoff between pumping power in to make it go faster, and not pumping power in, and having it go slow.
This was well known by 1970, and most probably known by all interested parties in 1941.
Anyone who thinks that logic requires data to be cleared before it is over-written, is still using core memories from the 1970's. No one clears the old result and then writes a new one. The new result overwrites the old one. Preferably with due allowance to avoid the data being used during the transition (requires clocking, requires active devices).
In short, unless I am completely wrong - in which case, much better written documents are required - the authors of the report have no clue at all.
* Passive logic as implemented in Victorian railway signalling requires at least a million times more energy per signal transition than (active) 1970's TTL.
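The "more passive layers means slower" point above follows from the standard Elmore delay estimate for a chain of identical RC segments, which grows quadratically rather than linearly with the number of stages:

```python
# Elmore-delay sketch for the RC argument above: a chain of N identical
# passive stages (series resistance R, node capacitance C) has an estimated
# delay of R*C*N*(N+1)/2 -- quadratic in the number of passive layers,
# which is why active buffering stages get inserted.

def elmore_delay(n_stages, r=1.0, c=1.0):
    """Elmore delay of n identical RC segments in series (arbitrary units)."""
    # Capacitor j (1-indexed) sees the resistance of the first j segments.
    return sum(j * r * c for j in range(1, n_stages + 1))

for n in (1, 2, 4, 8):
    print(n, elmore_delay(n))   # 1, 3, 10, 36 -- quadratic, not linear
```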
Re: (Score:2)
Anyone who thinks that logic requires data to be cleared before it is over-written, is still using core memories from the 1970's. No one clears the old result and then writes a new one. The new result overwrites the old one.
Dynamic logic [wikipedia.org] erases the previous result before doing the next computation and it is not a technology which died with core memory; it is still used for high performance logic.
Re: (Score:2)
In theory, if instead of having a state that is overwritten each time, you have a state that "flips" from one state to another then the amount of energy required to flip the state could be significantly less than the energy required to create the state. For instance on a balanced scale with 100 pounds on each side perfectly balanced, a single pound added or removed from one side would cause the state to flip to the other side.
Show me the money! (Score:1)
This isn't the first time I've heard of reversible computing and its purported benefits, and over the years every time I looked it up I haven't seen significant advancements or implementations. This article is no exception. And I'm still not convinced that any design of a reversible-logic processor would be practical or useful for general purpose computing, even assuming that the physical hardware problems have been solved.
I would be quite happy to see a software simulation of an 8 bit processor with a simple
Betteridge's Law trumps Moore's Law (Score:3)
therefore, no
Re: (Score:1)
Pepperidge Law remembers.
Wasted energy? (Score:2)
Surely the energy is not "wasted", it has been used to create the output.
Re: (Score:2)
Re: (Score:2)
Your analogy is right on, but tell me it's a waste of energy to produce this sound: https://www.youtube.com/watch?... [youtube.com] ;-)
Re: (Score:2)
"Surely the energy is not "wasted", it has been used to create the output."
If you could, in theory, use orders of magnitude less energy to achieve the same output, then a good proportion of the energy consumed is indeed wasted.
A real summary (Score:3)
The bad news? Good luck doing that at today's speeds. We lose more energy simply biasing the transistors heavily to make them switch faster than we ever do by erasing states. We have heat limitations due to this much more than charge lost every state transition. It might give incremental improvement in density, but it's not some silver bullet.
Re: A real summary (Score:2)
More like, transistors are groups of people sitting close together who stand up and hold hands in various ways from where they stand, reaching down with a free hand to grab one hand of another seated nearby, pulling them up and forcing them to turn in a way that decides where and who that free hand in turn can reach. And once you have a given situation someone is supposed to shout something to the teacher and then they all sit down again. The proposal seems to be that just some of them should sit down, but
Re: A real summary (Score:1)
Err: *"transistors are..." -> "transistor logic arrays are..."
Re: (Score:2)
Can't someone turn this into a car analogy?
Re: (Score:2)
Re: (Score:2)
So, I read this as a sort of Pachinko machine, where the computation flows through a series of gates and those gates aren't reused (as quickly)... with advances in shrinking transistor size, and the reduction in operating frequency this would bring, it might be an interesting twist on parallel computing. Instead of having 80 processors split up a problem and bring it back together, spread out a processor, make it 80x larger and recycle through the gates 80x slower, or maybe only 20x slower and net a 4x spe
Re:switching (Score:1)
No. (Score:2)
The trick is to undo the operations that produced the intermediate results. This would allow any temporary memory to be reused for subsequent computations without ever having to erase or overwrite it.
... which results in thermal dissipation... which results in increased entropy... which is exactly the thing that you were trying to avoid in the first place. Yet Another Free Lunch.
The only way I can see this working is if you use very low temperature super conducting grids... like they already do in quantum computers. I just can't see any improvement here without material science being involved.
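For what it's worth, the "undo the operations" trick the quoted passage describes is usually attributed to Bennett: run a reversible circuit forward, copy out the answer, then run the circuit backward so every temporary bit returns to zero and can be reused without ever being erased. A toy sketch using reversible CNOT and Toffoli gates:

```python
# Toy sketch of the compute/copy/uncompute trick with reversible gates.
# Temporary (ancilla) bits are returned to 0 by running the computation
# backward, so they can be reused without erasure.

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

def cnot(a, b):
    return a, b ^ a

def and_with_uncompute(a, b):
    # Step 1: compute AND(a, b) into a zeroed ancilla.
    a, b, anc = toffoli(a, b, 0)
    # Step 2: copy the result into a zeroed output bit (CNOT as fan-out).
    anc, out = cnot(anc, 0)
    # Step 3: uncompute -- Toffoli is self-inverse, so this resets the ancilla.
    a, b, anc = toffoli(a, b, anc)
    assert anc == 0          # ancilla is clean again, ready for reuse
    return a, b, out

print(and_with_uncompute(1, 1))  # (1, 1, 1)
print(and_with_uncompute(1, 0))  # (1, 0, 0)
```

Whether this wins anything in practice is exactly the dissipation question being argued in this thread; the sketch only shows the bookkeeping, not the energetics.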
Re: (Score:2)
It's only reversible because you have saved state. Where did you save it? It was embedded, energetically, into something that you now have to reverse. Sorry, I don't buy it. Computing logic by its nature is reversible except for the fact that state is not kept.
Don't hold your breath (Score:2)
Honestly, I think we'll have quantum dot cellular automata [wikipedia.org] before we get reversible computing. In doing so, it would eliminate our power consumption issues in regard to computing. As always, the real problems lie with the manufacturing of these devices.
yeah, and? (Score:4, Insightful)
This is something that calls for a proof of concept in the form of linear programming. Go ahead, show me the machine tree and its related Karnaugh maps and show this bi-directional computation performing several classic computing staples like stacks, sorts and finding primes in a manner that involves fewer steps.
Information has its limits, too, and laws somewhat similar to thermodynamics appear to govern these limits. If you have some linear function g(c(b(a))) that doesn't necessarily mean you can complete it as g(a,b,c) if c is dependent on b is dependent on a.
For instance, there are bi-directional programming languages, but you are still forced to rethink the problem to be solved so that the work toward the solution can still be done in reverse, and frankly I doubt that all real-world problems have a solution where time=t can be decremented. For starters, if you need more than one output for a given input, you're kind of screwed for any linear task.
I have to agree with those who see this as a gag to win more funding. It's the equivalent of bringing, e.g., bidirectional programming over to the hardware level, so go ahead and find me all the amazing examples of what you can do with bidirectional programming languages (go ahead, there are several and some are a number of years old).
Re: (Score:2)
University of Florida has very advanced studies in grant writing.
Are you a Tachikoma? (Score:1)
Deja Pensee (Score:5, Informative)
It is interesting, as a pure mathematician, to read:
"[F]or several decades now, we have known that it's possible in principle to carry out any desired computation without losing information -- that is, in such a way that the computation could always be reversed to recover its earlier state."
Now this 'can get back to earlier state' thing is basically the 'existence of inverses' axiom of group theory. A semigroup is a structure with a well-defined associative operator, but not necessarily an identity, nor existence of inverses. Now going from one computation state to the next, as a CPU does, is essentially a semigroup operation. Or at least something like that.
Reversible computing is effectively the faithful transformation of an abstract structure (e.g. rotating an icosahedron) on which the possible transformations form a group. Such a condition means that an unbounded number of operations can be chained without loss. This means the transformation must take zero energy. Thus, in fact, no change takes place. That means that what you think is a computation is, in fact, a fixed point that you're somehow conjuring into what appears to be a non-fixed computation. Interestingly, to me this stuff isn't new, nor even recent. What the ancient mystics, yogis and others obsessed over was this sort of aspect of reality.
Getting back to a less abstract point of view, the problem I see is that if these guys (and girls) insist on reinventing group theory the hard way, they won't even be able to catch up with where group theory was middle of last century. Indeed there is a dire need to more thoroughly think through what computation itself _actually is_. The 'Turing Machine+ChurchTuringThesis' thing is a half-decent first stab, but nothing more. The infinite tape, like the successor and infinity axioms of Peano Arithmetic and ZF Set Theory also, is akin to a naive C programmer assuming that malloc() will never fail. When you're knocking up a quick prototype, and you're not bothered if a malloc() failure crashes the program, fine. On the other hand, Linux kernel module authors seem to understand the need to use malloc() when it works, but never to trust it for critical duties, whether explicitly, or implicitly (via e.g. printf).
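The group-theoretic framing in the first paragraph can be made concrete: a reversible gate acting on n-bit states is a permutation of the state set (an element of the symmetric group on 2^n states), so it always has an inverse that recovers the earlier state, while an irreversible gate like AND is many-to-one. A small check using the 3-bit Toffoli gate:

```python
# Sketch of the group-theory point: a reversible gate is a bijection on
# the set of machine states, while irreversible AND collapses many states
# onto few outputs (which is where the information loss lives).

from itertools import product

def toffoli(state):
    a, b, c = state
    return (a, b, c ^ (a & b))

states = list(product((0, 1), repeat=3))
image = [toffoli(s) for s in states]

print(sorted(image) == sorted(states))   # True: a permutation of the 8 states

# Irreversible AND, by contrast, maps 8 input states onto 2 outputs.
and_image = {a & b for a, b, _ in states}
print(len(and_image))                    # 2
```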
Re: Deja Pensee (Score:1)
Yeah, but you can create a memory management structure where malloc() always works as long as it's the only program running, i.e. as a platform for any other programs, which must also adhere to the rules of the MMS. For example "object-oriented C", where the platform and every program on it are all implemented in OOC.
Not to disparage your remarks, because I think your first two paragraphs word my own objections more fundamentally than I managed to. (As a Math Minor merely requisite to Computer Engineering, I
Naming this "Reversible Computing" is confusing (Score:2)
Re: Naming this "Reversible Computing" is confusin (Score:1)
Yeah, but, where's the example of an implementation that saves anything?
Patently false statement (Score:2)
"Still, a practical reversible computer has yet to be built using this or other approaches."
Since quantum computers of any kind have to be reversible due to the very nature of QM, every realization of quantum computation is a reversible computer.
This includes the controversial D-Wave machine [wavewatching.net] as well IBM's QC chip that you can play with online [ibm.com].
Re: (Score:2)
All true, yet at its heart every unitary evolution of a quantum algo is necessarily reversible. The energy requirements simply stem from the enormous effort required to cool and insulate the QC chip.
Recommended Reading (Score:2)
I suggest that people who are really interested in understanding this subject read and understand the papers reprinted in "Maxwell's Demon: Entropy, Information, Computing" (first edition: ISBN: 978-0691605463; second edition: ISBN: 978-0750307598).
Thermodynamics explanation (Score:2)
That's theory, but in practice? (Score:1)
I did not read this article, so I'll just make up some numbers I find plausible.
Somehow we need to equate entropy (or information loss) with energy. The assumption of 1 eV per bit is probably OK: one electron, either changing potential by 1 V or not.
So by making computations reversible, we could avoid this otherwise inevitable 1 eV loss. Nice. But if we currently burn 1 keV per switch, there is no point talking about this technology right now. Let's shave off another 999 eV first. Then
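As a yardstick for the per-bit numbers being tossed around here: Landauer's bound says erasing one bit must dissipate at least k_B T ln 2, which works out to roughly 0.018 eV at room temperature.

```python
# Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2.
# At room temperature that is about 0.018 eV per erased bit.

import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
EV = 1.602176634e-19    # joules per electronvolt

def landauer_limit_ev(temperature_k):
    """Minimum energy in eV to erase one bit at the given temperature."""
    return K_B * temperature_k * math.log(2) / EV

print(round(landauer_limit_ev(300), 4))   # ~0.0179 eV at 300 K
print(landauer_limit_ev(4))               # far lower when cryogenically cooled
```

So the made-up 1 eV figure is a couple of orders of magnitude above the room-temperature floor, and the floor itself drops linearly with temperature.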
Re: (Score:2)
Using your example, if you have a balanced scale with 1000eV charge on each side, and you can flip it back and forth with a single eV then you are 1000x more efficient than having to move all 1000eV every time you want to flip a switch. This might be more plausible than trying to make 1eV switches.
Re: (Score:2)
I doubt that the authors of the paper can build a 1eV transistor right now. It can be done in theory. In theory you can also make the irreversible transistor much better.
And now that I've skimmed over the article, it says that only 1 meV is theoretically lost per bit. This makes my point even more valid. We can improve current technology to be a million times more efficient before hitting this thermodynamic barrier.
Also, you finish your post with an unsupported statement. It might also be less plausible to
Byte 1998 called, it wants its article about (Score:2)
Re: (Score:2)
Since in those 20 years there haven't been further advancements in this field, I would think that this is an idea that was born dead.
There have been billions (if not trillions) of dollars of R&D poured into silicon and existing technology. Even if someone came up with something that potentially could perform better than existing technologies after the same amount of R&D, getting the investment needed to ramp it up to compete with existing technologies would be next to impossible. Unless we hit a brick wall, incremental improvement of existing technologies will likely always be a better path than starting over from scratch with
Reuse of electrons. (Score:1)
Don't laws of thermodynamics prohibit this? (Score:1)
Re: (Score:1, Offtopic)
Re: (Score:1)
While I agree with some of the causes of your angst, please remember that crowd moderation will always have this kind of side effect.
Any crowd-moderated community will frown upon comments which don't agree with said community's majority point of view/beliefs/preferences. Here on Slashdot it happens to be Linux, Android, F/OSS, and the USA.
Posts disagreeing with groupthink will be downvoted anywhere, and if that's not possible, they will simply be ignored, with the same outcome. Just deal with it or go to a forum
Re: Eliminate moderation (Score:1)
Re: (Score:2)
The moderation system is designed around the law of large numbers, with one filter: the better an account's karma is, the more moderation points it receives in time.
I have been wrongly moderated a couple times, in topics where TFS, TFA or both were incorrectly bashing Microsoft and I pointed it out, but at the same time I expected the moderation to swing that way. But generally I am happy with how it works.
The trick is to set the right expectation and not care too much.