Neuroscience Can't Explain How a Microprocessor Works (economist.com)

mspohr writes: The Economist has an interesting story about two neuroscientists/engineers -- Eric Jonas of the University of California, Berkeley, and Konrad Kording of Northwestern University, in Chicago -- who decided to test the methods of neuroscience on a 6502 processor. Their results are published in the journal PLOS Computational Biology. Neuroscientists explore how the brain works by looking at damaged brains and by monitoring inputs and outputs to try to infer the intermediate processing. The researchers did the same with the 6502, the processor used in early Atari, Apple and Commodore computers. What they discovered was that these methods were sorely lacking: they often pointed in the wrong direction and missed important processing steps.
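To make the method being tested concrete: the paper's approach is essentially a "lesion study" run on a system whose ground truth is fully known. The sketch below is illustrative only (the actual study lesions individual transistors of a full 6502 simulation, not a five-gate toy adder); it knocks out one gate of a known circuit at a time and records which externally visible behaviours break, showing how easily "removing X breaks Y" slides into "X is the Y unit."

from itertools import product

GATES = ["xor1", "xor2", "and1", "and2", "or1"]   # elements of the toy circuit
lesioned = set()                                  # gates currently knocked out

def gate(name, fn, *args):
    """Evaluate a named gate, unless it has been lesioned (stuck at 0)."""
    return 0 if name in lesioned else fn(*args)

def full_adder(a, b, cin):
    """A 1-bit full adder built from five named gates."""
    x1 = gate("xor1", lambda p, q: p ^ q, a, b)
    s = gate("xor2", lambda p, q: p ^ q, x1, cin)
    a1 = gate("and1", lambda p, q: p & q, a, b)
    a2 = gate("and2", lambda p, q: p & q, x1, cin)
    cout = gate("or1", lambda p, q: p | q, a1, a2)
    return s, cout

def behaviours():
    """The externally observable 'behaviours': is the sum bit right? the carry?"""
    sum_ok = carry_ok = True
    for a, b, cin in product((0, 1), repeat=3):
        s, cout = full_adder(a, b, cin)
        total = a + b + cin
        sum_ok &= (s == total % 2)
        carry_ok &= (cout == total // 2)
    return {"sum": sum_ok, "carry": carry_ok}

# Lesion one gate at a time and record which behaviours break.
for g in GATES:
    lesioned = {g}
    broken = [name for name, ok in behaviours().items() if not ok]
    print(f"lesioning {g} breaks: {broken}")
lesioned = set()

In this toy run, lesioning the final XOR breaks only the "sum" behaviour, which invites the tidy but wrong conclusion that that gate is "the sum unit," even though the sum is a property of the whole circuit; that is essentially the failure mode described in the summary above.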
  • by Anonymous Coward on Thursday January 19, 2017 @10:10PM (#53700553)

    Anyone with even an elementary education in cognitive science will tell you that attempting to model thought processes is always done according to the dominant technology of the time in question. First it was machinery, then it was circuits, then it was computers.

    This does not mean the model is accurate or even useful.

    • by im_thatoneguy ( 819432 ) on Thursday January 19, 2017 @10:22PM (#53700595)

      This isn't so much about modeling thought processes as it is about illustrating how even in a simplified model one of our debugging approaches fails.

      The logic that they're arguing appears to be:

      "If we can't even properly reverse engineer an extremely simple deterministic computer chip using fault modeling, it's extremely unlikely that we can infer the mechanisms of an extremely complex non-deterministic processor like the brain."

      • by ShanghaiBill ( 739463 ) on Thursday January 19, 2017 @10:34PM (#53700639)

        "If we can't even properly reverse engineer an extremely simple deterministic computer chip using fault modeling, it's extremely unlikely that we can infer the mechanisms of an extremely complex non-deterministic processor like the brain."

        But that logic only makes sense if microprocessors and brains were similar enough that comparable methods could be used to attempt to understand them. But that isn't true. That is like saying you can't understand how to plow a field with a horse if you don't understand how a tractor engine works. Although horses and tractors have some similarities, understanding how one works doesn't really help you with the other.

        • by dwywit ( 1109409 )

          I don't want to enter the high-level debate here - I'm not qualified (and that's not sarcasm) - and I do know that this example doesn't really mean anything, or add to the debate, but:

          Watch this:
          http://www.visual6502.org/JSSi... [visual6502.org]

          then watch this:
          https://www.youtube.com/watch?... [youtube.com]

          and think about them for a minute. It never fails to make me stop and wonder.

        • The point of the argument is to challenge the implicit assumption that current neuroscience methods work as well as people think they do. If you just assume your research methods work, you are resting on blind faith in your methods. One step in showing the need to challenge those foundational assumptions is to use this example to *illustrate* how they can fail. Using microprocessors allows us the luxury of total knowledge of what we are investigating, at the expense of it being quite different from the brain. The quoted bit needs fixing:

          "If we can't even properly reverse engineer an extremely simple deterministic computer chip using fault modeling, it's extremely unlikely that the same fault modelling will work reliably with something extremely complex like the brain."

          It does not show whether 'fault modelling' works for the brain or not, but it gives good justification for the claim that we cannot take the efficacy of 'fault modelling' for granted when studying the brain.

          • by Bongo ( 13261 )

            Indeed, and I vaguely gather there's the notion that "science" actually starts with "thinking about thinking"
            i.e.
            asking not just, how do we know?
            but asking, how do we know whether, how we know, really allows us to know?
            or to be less wordy,
            why do we trust this method?

          • No, the point of the article was that it "questions whether more information is the same thing as more understanding."

            Using computer chips as test subjects does not validate or invalidate the research methods used in neuroscience. What validates or invalidates the methods, and the results of the methods, is how well they predict human behavior. That's all that counts. Do these results provide insights or not? Studying a different subject matter does not get you any closer to understanding how the human brain functions.
            • No, the point of the article was that it "questions whether more information is the same thing as more understanding."

              No, that was not the point of the article at all. The point of the article was that there is an implicit assumption in the field that we just lack sufficient data: that the methodologies used to analyze that data are fine, and that we fail to understand cognition only because we don't have enough of it. The authors argue that, no, it is not just a lack of data; the methodologies themselves are flawed and need to be validated. But because we don't have a ground truth with

            • by HiThere ( 15173 )

              If they were looking at calcium channel opening, then I'd agree with you. They appear to be looking at things from a much more abstract level. And their results aren't proof, but certainly raise reasonable questions.

          • For what it's worth, what I'm taking away from this, is that they're willing to question whether their methods of analysis are valid/sufficient, which I'll take as a good sign.
        • But that logic only makes sense if microprocessors and brains were similar enough that comparable methods could be used to attempt to understand them. But that isn't true.

          Actually, they are arguing that it is true. From the article,

          "Obviously the brain is not a processor, and a tremendous amount of effort and time have been spent characterizing these differences over the past century [22, 23, 59]. Neural systems are analog and and biophysically complex, they operate at temporal scales vastly slower than this classical processor but with far greater parallelism than is available in state of the art processors. Typical neurons also have several orders of magnitude more inputs

          • RTFA. No, they don't argue it is true. They just argue that the methods should be able to deduce how the microchip works because they have all this extra data.

            More to the point: "Gaël Varoquaux, a machine-learning specialist at the Institute for Research in Computer Science and Automation, in France, says that the 6502 in particular is about as different from a brain as it could be."

            Your knowledge and understanding of brain research is very primitive.
            • No, they argue that the 6502 (or just a microprocessor) is an acceptable model for validating the approaches used in neuroscience to analyze complex data sets, which is exactly what I said in my comment above. In other words, if they can successfully determine the ground truth of the microprocessor using those approaches and with limited a priori knowledge, then the methodologies have potential. Otherwise, they need to be refined until they are able to do this. Validating against an imperfect model is better than no validation at all.

        • But that logic only makes sense if microprocessors and brains were similar enough that comparable methods could be used to attempt to understand them. But that isn't true.

          Yes, there's no guarantee that the methods that work on the brain also have to work on microprocessors. However, there's also no guarantee that they won't work in both cases. There are many methods/observations that are so generally useful that they apply to a huge range of problems. This is important because there's no guarantee that methods that work on one part of the brain also work on another part of the brain. Maybe the part of the brain responsible for facial recognition and the part of the brain tha

      • I agree.

        If your model fails to predict an event, your model is faulty. Full stop.

        So whatever methods they used aren't enough to capture the 6502. A random number generator, given an infinitely long time, would build a 6502 eventually. So the point here is not "It cannot be done." It's simply that "Given the methods we tried -- which may be ALL the ones available to us in 2017 -- we couldn't do it." But "we couldn't do it with our tools" != "nobody could ever do it with newer tools."

        • Right!
          What they need is super fine granularity on an fMRI, which they will have to wait for...
          OR
          Use a normal fMRI on a HUUUUGE brain!
          If only we could find one of those...
      • It's worth mentioning that each person has a brain in their head they can observe in more detail than any MRI will ever give.

        That is, observing your own thoughts isn't perfect, but it can give you a ton of data if you're willing to look.
        • Except this isn't about any given thought or emotion. This isn't about muscle feedback loops and controls.

          This is about how each neuron fires, and why it fires in that order.

          We can make a logic gate, but the brain doesn't use logic gates, yet it still gets the correct answer (sometimes). How it does that is the biggest mystery of neuroscience. In the brain, memory and processing are one and the same.

          What could a computer do if you gave it 32 gigabytes of level 3 cache? What if you gave it 1 terabyte?

          • I'm not really sure what you're saying, and how it relates to what I said.
          • Computer main memory, and even SSD drives, are faster than human memory.

            The elementary parts that make up a brain can be emulated by digital logic. The connections and their changes can be emulated by digital logic. The appropriate question is not can it be done, but how can it be done and is it practical to do it?
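For what it's worth, the textbook cartoon of "a neuron emulated by digital logic" is something like the leaky integrate-and-fire model, which reduces each unit to a couple of arithmetic operations per time step. A minimal sketch, with arbitrary illustrative parameters rather than values fitted to any real neuron:

def lif_neuron(input_currents, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """Return the time steps at which this toy model neuron 'spikes'."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_currents):
        # the membrane potential leaks back toward rest and integrates its input
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike, then reset
            spikes.append(t)
            v = v_rest
    return spikes

# A constant drive of 0.06 per step makes this unit fire periodically.
print(lif_neuron([0.06] * 100))

Whether units this simple capture what real neurons do is, of course, part of what the article is questioning.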

      • There's a fundamental flaw.

        Brains are extremely parallel, highly distributed processing units.
        Some regions are more specialised in some tasks, but as a whole, no part of the brain absolutely needs another part for the brain to keep working.

        From that perspective, a CPU is a small, single-function device. It either works or it doesn't. It's hard to have a *half-functioning* CPU (unless you very specifically manage to burn a peculiar part of the silicon that isn't core to its functioning. I don't see how that would be

        • The 6502 is not a microcoded processor.
          • Yup, I have never had any experience coding for the 6502 (only the 8088/8086 and up).
            Just noticed now that it lacks multiplication/division instructions (and thus probably has no microcode to do them as a series of additions/subtractions and shifts -- software has to do that instead, as sketched below).
            Thanks for correcting me.
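For the curious, the shift-and-add workaround mentioned above looks roughly like this (rendered in Python for readability; on a real 6502 it would be a short loop of ASL/LSR/ADC instructions, and the function name here is just for illustration):

def mul8(a, b):
    """Unsigned 8-bit x 8-bit multiply by shift-and-add, giving a 16-bit result.

    This mirrors what 6502 software has to do in the absence of a MUL
    instruction: walk the multiplier one bit at a time, adding a shifted
    copy of the multiplicand whenever that bit is set.
    """
    assert 0 <= a <= 0xFF and 0 <= b <= 0xFF
    result = 0
    for _ in range(8):
        if b & 1:            # low bit of the multiplier set?
            result += a      # add the (shifted) multiplicand
        a <<= 1              # shift multiplicand left  (ASL on the 6502)
        b >>= 1              # shift multiplier right   (LSR on the 6502)
    return result & 0xFFFF

assert mul8(200, 99) == 19800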

      • by gweihir ( 88907 )

        I agree. I mean, a 6502 is a pretty simple piece of electronics, and a description of the complete functionality and instruction set can be done in 20-30 pages or so. In addition, it is completely deterministic and has a very small internal state (around 8 bytes). If you cannot model that, then forget about modeling more than a single neuron or a very small cluster of neurons.
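For concreteness, the architecturally visible state being referred to really is tiny. A sketch of the 6502 programming model (the class below is purely illustrative, not code from the study):

from dataclasses import dataclass

@dataclass
class MOS6502State:
    """The architecturally visible state of a 6502: about 7 bytes in total."""
    a: int = 0    # accumulator            (8 bits)
    x: int = 0    # X index register       (8 bits)
    y: int = 0    # Y index register       (8 bits)
    s: int = 0    # stack pointer          (8 bits)
    p: int = 0    # status flags NV-BDIZC  (8 bits)
    pc: int = 0   # program counter        (16 bits)

# Five one-byte registers plus a two-byte program counter: roughly the
# "around 8 bytes" of internal state mentioned above.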

      • This isn't so much about modeling thought processes as it is about illustrating how even in a simplified model one of our debugging approaches fails.

        The logic that they're arguing appears to be:

        "If we can't even properly reverse engineer an extremely simple deterministic computer chip using fault modeling, it's extremely unlikely that we can infer the mechanisms of an extremely complex non-deterministic processor like the brain."

        I do wonder at what level the reverse engineering is done. I also wonder if their method was pure enough to initially consider the 6502 to be analog rather than digital. That would be a nice trip down the garden path right from the get-go.

        Now I would say that many fields of study at the higher levels, such as economics, medicine, etc., are incomplete. There's a lot still to be learned. And taking the sidestep of looking at an artificial "brain" from a neuroscience perspective is a good way to navel gaze,

      • Ironically, it is possible to reverse engineer the brain. Once you begin to understand its core algorithms, the brain is, at heart, not even a very complex machine. The secret to the mind is abstraction, generic logic and the Turing machine - and the key to all of those is computer and CPU engineering. That's the irony.

    • by ElephanTS ( 624421 ) on Friday January 20, 2017 @06:42AM (#53702163)

      Exactly. I have a degree in cognitive science and this is what we are taught. So much of the language of computers has crept into psychology it's unbelievable, and most of it is wrong and misleading. A hundred years ago the personality was being modelled in hydraulic terms (the cool new tech of the age) and even physical models were made. All wrong, of course.

    • It's done more for explanation than for actual comparison.

      FTA: "Gaël Varoquaux, a machine-learning specialist at the Institute for Research in Computer Science and Automation, in France, says that the 6502 in particular is about as different from a brain as it could be. Such primitive chips process information sequentially. Brains (and modern microprocessors) juggle many computations at once. And he points out that, for all its limitations, neuroscience has made real progress. The ins-and-outs of
  • by Anonymous Coward on Thursday January 19, 2017 @10:15PM (#53700567)

    In order to understand the DNA of an Orange, we "scientists" dissected an alarm clock. This _proved_ that our methods of studying oranges, and fruit in general, have been wrong for centuries.

    • by Tablizer ( 95088 ) on Friday January 20, 2017 @02:50AM (#53701531) Journal

      I see a lesson in humility here, looking at how poorly human scientists do at modelling-by-studying-defects in a general sense.

      It suggests that models of the brain derived by seeing what effects damaged sections have on patient behavior may be worse than originally expected.

      • I see a lesson in humility here, looking at how poorly human scientists do at modelling-by-studying-defects in a general sense.

        It suggests that models of the brain derived by seeing what effects damaged sections have on patient behavior may be worse than originally expected.

        But just like any science, that's not the only thing they do. They compare different methods and models and come to a consensus. If you see that people who have damage to region X of the brain can't do Y, and you see that region X of the brain is active in healthy people when they do Y, then you have two data points that point to the same conclusion. They do the same thing with carbon dating, quantum physics, gravitational waves, etc... As long as the different measurements all agree, then you assume that you

        • by Tablizer ( 95088 )

          That shows that X has some relationship to Y. But the researchers were caught over-interpreting this with descriptions such as "X controls Y" in the chip experiment.

    • > In order to understand the DNA of an Orange, we "scientists" dissected an alarm clock.

      In the case of the orange, this method isn't wrong [wikipedia.org]...
    • Now I'm annoyed the editors rejected my submission "Carpentry Can't Explain How a Poem Works".

  • by glitch! ( 57276 ) on Thursday January 19, 2017 @10:22PM (#53700593)

    I know I'll catch hell for my religious beliefs, but...

    I think that the 6502 was not the result of evolution, but rather it had a Creator and was the product of Intelligent Design. There are just so many subtle clues that suggest features that were deliberately put in there. Could natural selection really explain how it had two different indirect access modes, one that selects a direct index from an offset, and another that adds the offset to the index? (Both modes are sketched below.)

    These researchers may be trying to apply the wrong methods to a device that is almost certainly the product of a higher power.
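Joking aside, the two indirect addressing modes mentioned above are real 6502 features: (zp,X) indexes into the zero page before dereferencing, while (zp),Y dereferences first and then adds the index. A rough sketch of the effective-address calculation, where memory stands in for a hypothetical 64 KB byte array (an illustration, not an emulator):

def indexed_indirect_addr(memory, operand, x):
    """The (zp,X) mode: add X to the zero-page operand *first*, then read a
    16-bit pointer from that zero-page location (wrapping within page zero)."""
    p = (operand + x) & 0xFF
    return memory[p] | (memory[(p + 1) & 0xFF] << 8)

def indirect_indexed_addr(memory, operand, y):
    """The (zp),Y mode: read the 16-bit pointer from the zero page *first*,
    then add Y to it to form the effective address."""
    base = memory[operand] | (memory[(operand + 1) & 0xFF] << 8)
    return (base + y) & 0xFFFF

# Example: with a pointer of 0x1234 stored at zero-page 0x10,
# the (0x10),Y mode with Y=5 reads from 0x1239.
memory = bytearray(0x10000)
memory[0x10], memory[0x11] = 0x34, 0x12
assert indirect_indexed_addr(memory, 0x10, 5) == 0x1239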

    • by Anonymous Coward

      Don't be silly. The 6502 evolved from earlier 4-bit microprocessors. This is clear because "evolved" now refers to anything whatsoever where B follows A.

      If still unconvinced, adjust your confirmation bias upward until you're blissfully avoiding risk of academic crimethink.

      • Re: (Score:3, Funny)

        by Anonymous Coward
        If your elitist, East Coast evolution is real, why has no one found the missing link between 6502 and earlier 4-bit microprocessors?
        • by Anonymous Coward

          Because, obviously, there were only a few necessary mutations between 4-bit and 8-bit processors, and, naturally, each transitional random rearrangement of the transistors met the constraint of full functionality of the processors at each progressive step.

          We only acknowledge one meaning of "IC" here.

          • by umghhh ( 965931 )
            This was one of the most troubling AC discussions I have ever read on /. Not even the non-AC posts I have read so far could compare. It is probably not proof of the Creator's existence, but of his enemy's, for sure.
        • The 6800 was the missing link between the 4-bit processors and the 6502.
    • by Anonymous Coward on Thursday January 19, 2017 @10:36PM (#53700645)

      And the great and powerful Woz spake thusly: Let the Intel become the brain of my new creation! But lo! The Book of Jobs decreed the creation be cost effective and priced by the Number of the Beast. And so out of the land of Silicon came Forth the 6502 to eat from the Apple tree. Eight shall be the number of bits, no more and no less.

    • by Anonymous Coward on Thursday January 19, 2017 @10:36PM (#53700647)

      All jokes aside, I think the point here was that both devices (6502 or fatty thinkmeats) were modeled as a black box. I'd be willing to bet that a significant fraction of the neuroscientist population would argue that the 6502 is the simpler system, so the black-box approach should (one would hope) be able to model that device more easily. If they find that their black-box approach to understanding a 6502 leads to incorrect results, then it raises questions as to the effectiveness of the approach on the thinkmeats.

    • by Jeremi ( 14640 )

      These researchers may be trying to apply the wrong methods to a device that is almost certainly the product of a higher power.

      That may well be the case, but if so, it's also quite clear that the higher power used evolution and natural selection as his development tool.

      If human brains had just been magic'd into existence by divine fiat, there would be no reason for them to look like a specialized version of the brains of earlier hominids (which in turn look like specialized versions of the brains of earlier mammals, and so on for as far back as you care to look).

      • by glitch! ( 57276 ) on Thursday January 19, 2017 @11:14PM (#53700773)

        Obligatory Princess Bride quote:
        "Truly, you have a dizzying intellect."

      • Before we can answer the question, "Is there any reason for brains to look like each other if they were magic'd into existence by divine fiat?" it might be helpful to look into the question, "Do microprocessors look like each other?" Alternatively, we can ponder the statement, "If brains were magic'd into existence by divine fiat, there is no reason for brains to either look or not look like others." However, if information is self-perpetuating, one of the forms it might take is the form of something that has be
      • All of existence magic'd into existence [wikipedia.org] by divine fiat 1 microsecond ago.
        • first time i read this i thought it said "divine fart".

          i'm sure there are people who believe that, but i don't particularly want to meet them.

          or hear them praying.

    • by Anonymous Coward

      Flame war on: it was not designed by Intel, it was designed by MOS Technology!!!

    • by Anonymous Coward

      I think that the 6502 was not the result of evolution, but rather it had a Creator and was the product of Intelligent Design.

      And yet there were clearly indications of evolution at work. Subsequent generations, including the 65C02 and 65C816, clearly had not only new instructions that simply didn't exist in the early generation processors, but also expanded addressing and ever faster speeds.

    • the 6502 was not the result of evolution, but rather it had a Creator and was the product of Intelligent Design

      And proof there is in the name: it was created 6502 years ago.

    • You forgot:

      Checkmate, atheists!

    • by slew ( 2918 )

      Actually natural selection *can* explain how the 6502 had two different indirect access modes.

      The PDP-11 (one of the great ancestor computers) had two different indirect access modes (6n and 7n). The computer ecosystem flourished and spawned many different types of computer chips, one of which was the 6800, which inherited instruction-set traits from that line. Later, however, the computer ecosystem saw more price competition from descendants of other computer chip lines. This put evolutionary pressure

  • by mmell ( 832646 ) on Thursday January 19, 2017 @10:23PM (#53700597)
    We once had machinery that did computations (example: adding machines). It seemed natural to try to model the brain as a complex machine then.

    We once had electronic circuits designed to perform calculations (example: Enigma). It seemed natural to try to model the brain as a complex electronic device.

    We now routinely use silicon integrated circuits to perform calculations (example: the IBM PC-XT). It seems natural now to try to model the brain as a complex general computing device.

    The take-away point I get from this is that we may need another revolutionary technology or two (fully three-dimensional integrated circuits? ICs based on carbon instead of silicon?) before we can model the sentient mind as similar to an artificially created device. Such advances may also be required before we can create (invent?) a true "artificial intelligence".

    • >The take-away point I get from this is that we may need another revolutionary technology or two (fully three-dimensional integrated circuits? IC's based on carbon instead of silicon?) before we can model the sentient mind as similar to an artificially created device

      Memristors already exist and are going to revolutionize the computing world by combining processing with storage (and eliminating the difference between RAM and long term storage). If somebody knows if that will take 5 or 50 years to get out

      • by Anonymous Coward

        It's been almost 50 years since memristors were proposed (1971) and over 5 years since they were produced in a lab (they were around before I finished undergrad, 7 years ago).

        We have an enormous amount of production infrastructure built around producing transistors and not much else. I would not bet on memristors becoming competitive any time soon.

        • We've been replacing "transistor infrastructures" with ever smaller "transistor infrastructures". They will eventually find a way to make cost effective "memristor infrastructures" and then they will in short order be built... and then improvements will be made and the same cycles will be seen with memristors, and light based designs, and quantum designs... and as an outlier maybe even biological ones. But biologic-nonbiologic interfaces which may employ some of the above technologies will abound as well.
          • We already have flash, which performs the same system function as memristors. I don't see any dramatic advantage coming from the incorporation of memristors in commercial CPUs.
      • by AHuxley ( 892839 )
        Funding is the key. Who would want to risk some wisenheimer AI that needs to learn for years vs really fast sorting now? Would lots of really fast new cheap storage for a lot of information help more? What kind of AI? Something that can learn how to sort better? Recall from a lot of data more quickly? Learn something new from a lot of data given lots of questions? Sort a lot of data really quickly if asked in a new or different way?
        Phrase the funding request to a gov/mil and enjoy decades of funding.
    • Humans are good at pattern recognition. We naturally attempt to fit the data to patterns we know. If your primitive tribe knows nothing of aircraft then saying an airplane is a bird makes sense.
    • Those other things you listed are just different ways to realize a Turing Machine, and through computational equivalence [wikipedia.org] they're all the same (see the sketch below).

      If you really want to blow your mind on something, watch this talk [youtube.com] on all possible sentient spaces, as in the set of possible intelligences/consciousnesses.
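A toy version of the computational-equivalence claim above: any substrate that can implement a step rule like the one below is, up to resource limits, as capable as any other. This is a hypothetical minimal simulator written for illustration; it is not taken from the linked talk or article:

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Tiny Turing-machine simulator.

    rules maps (state, symbol) -> (new_state, symbol_to_write, head_move),
    with head_move being -1 (left), 0 (stay) or +1 (right). The machine
    stops when it reaches the state "halt" (or after max_steps).
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# A trivial machine that flips every bit of its input, then halts on the blank.
FLIP = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(FLIP, "10110"))   # prints 01001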

    • by dbIII ( 701233 )
      We could probably model it with the hardware we have now (I'm not suggesting anything close to realtime) if we had a better idea of what we are trying to model. There is a lot of electrochemical weirdness going on where tiny traces of things appear to mean something.

      Throwing a shitload of computing power at a problem doesn't work unless you have far more than a vague idea of what you are trying to simulate.

      If we had a magic SF computer available asking "what do you want me to do Dave?" we still have to h
  • by ndykman ( 659315 ) on Thursday January 19, 2017 @10:23PM (#53700599)

    Neurons aren't digital processors. A set of connected neurons isn't either. Neuroscience already knows that it's really difficult to learn about the structure and function of the brain from the available tools. What was more interesting was that they were able to pick up anything. They found that the chip had a master clock, for example.

    There are people already challenging the use of viewing the brains as a computer (signal ins and outs) in terms of really understanding how brains organize and function. So, given all this, it's not surprising that the methods didn't fare well. The neuroscientists already knew they had a very tough task, it's those in CS and AI that are assuming that understanding the brain is the same as understanding a collection of digital circuits.

    • Re: (Score:2, Insightful)

      by TapeCutter ( 624760 )
      Neuroscience can't explain a microprocessor, and computer science can't explain a mind. In no way does this mean that neuroscience cannot be advanced by computer science, or vice versa.
    • I'm not sure how much the brain really has been viewed as a bunch of digital circuits. But those who claim that it has, and that our understanding of the brain is therefore flawed, don't seem to know that artificial analog circuits have been around longer than digital ones.
    • Say you're an alien on an alien world, and some probe you sent out to Earth comes back with a fully-functioning Chevy muscle-car, complete with fuel in the tank and a fully charged battery. Your civilization never used fossil fuels or internal combustion engines, but you're a technological civilization regardless of that. You start the car up, figure out how to make it move and stop, and how to shut it off again; you know what it's for now. Now, carefully, you dismantle the engine and the drivetrain, making
  • C'MON (Score:5, Funny)

    by dmomo ( 256005 ) on Thursday January 19, 2017 @10:31PM (#53700625)

    It's not brain surgery.

  • ...a 6502 is not a brain.

    The issue is that the 6502 is several orders of magnitude less complex than a brain. The brain could be likened to a massively parallel computer that is running thousands of programs all at once. So, on the scale of the brain, it is completely reasonable to conclude from damage to an area that affects hearing in a dozen people that that part of the brain is responsible for hearing. Damaging a couple of transistors in a 6502, a single processor, is akin to damaging a

  • by Anonymous Coward

    It's like asking my boss to explain the technical details of what I do. Whenever he asks me to explain the details, I know it's going to be a really short conversation/meeting. About 3 sentences in, he waves his hands in the air and says "I don't need to know the details!"

  • by Anonymous Coward

    "Neuroscience Can't Explain How a Microprocessor Works" is like saying "Herpetology Can't Explain How a Bicycle Works."

  • by Anonymous Coward

    I think the biggest flaw in this paper is that perturbing an analog system is nothing like perturbing a digital system. To be clear, if the brain is anything comparable to a computer, it is a computer built from millions of parallel analog processors. Perturbing an analog system can be informative in ways that perturbing a digital system would not be -- analog systems can reveal half answers and shades of grey even when severely disrupted. A microprocessor will throw a fit if a single bit gets flipped unexpectedly.
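The analog/digital contrast above is easy to make concrete. In this made-up toy comparison (not from the paper), flipping one high-order bit of a digitally stored result produces garbage, while adding a few percent of noise to every element of an "analog" sum merely makes the answer fuzzy:

import random

values = list(range(1, 101))       # the "signal" both systems are carrying
true_sum = sum(values)             # 5050

# Digital perturbation: flip a single high-order bit of the stored result.
digital = true_sum ^ (1 << 12)     # one flipped bit, and the answer is off by 4096

# Analog perturbation: disturb *every* element with a few percent of noise.
random.seed(0)
analog = sum(v * (1 + random.uniform(-0.05, 0.05)) for v in values)

print(f"true answer      : {true_sum}")
print(f"one flipped bit  : {digital}  (error {abs(digital - true_sum)})")
print(f"5% noise on all  : {analog:.0f}  (error {abs(analog - true_sum):.0f})")

That graceful degradation is one reason perturbation experiments can be more forgiving, and more informative, on analog systems.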

  • I know. It's really not that difficult and does not take any math beyond simple logic.
  • This is such a good idea.
  • by Anonymous Coward
    The microprocessor is the result of decades of research, and this experiment is an effort to short-circuit (pun intentional) much of that research. A more interesting experiment would be to start with a neurological model of Boolean logic and then present it with progressively more challenging problems to solve. It would be very interesting to see whether those solutions follow the evolution of von Neumann machines, Harvard architectures, or something entirely different.
    • by ledow ( 319597 )

      What you're proposing is basically a GA: a genetic algorithm (a minimal sketch of one follows below).

      Even when you give a system a biological analogy as its base, the results are unpredictable, un-interpretable, and don't conform to any logical architecture.

      There is a famous example of a chip designed to detect two different fixed frequencies of an input signal and output which one is active (if any). Designing the chip by hand results in a working, logical model of a certain size.

      If you allow a GA to run random "evolution" over the circuit contents, p
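For anyone who hasn't met one, a genetic algorithm in miniature looks something like the sketch below: a toy run that evolves a bit string toward a target "behaviour," standing in for evolving circuit configurations. The names and parameters are invented for illustration; the famous frequency-discriminator experiment evolved configurations on a real FPGA, not bit strings in Python.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # desired "behaviour"

def fitness(genome):
    """How many output bits match the behaviour we are selecting for."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Randomly flip bits: the only source of variation in this toy GA."""
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                          # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]        # mutation-only offspring

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{len(TARGET)}")

As with the evolved chip described above, the survivors are selected purely for behaviour, so nothing forces the result to be interpretable.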

  • This is why I don't get a nephrologist to fix my car.

  • How do they work?
  • can't explain how a steam engine works. So?

  • ...6502, which was used in early Atari, Apple and Commodore computers

    Oh dear, do we really need that stuff explained here these days?

  • Computer people commenting on neuroscience like they're experts. Yikes. Move along, nothing to see here other than a profound lack of knowledge and a great ignorance.
  • That these researchers were able to obtain *any* information about the underlying hardware is remarkable. Models can never be completely right; the map will never, ever be the territory. Empirical adequacy [wikipedia.org] is the best we can hope for. Looking at failure states to infer causal connections is exactly what I did as a sysadmin back in the day. These researchers are doing the same thing. It worked for me as a sysadmin, and it works in neuroscience as well, though with one caveat. Ethically, you can't just
  • The patterns were a mishmash of unrelated structures that were as misleading as they were illuminating.

    This pretty much describes the state of every branch of science after a major influx of new data. Just look at the maps of the world produced after Europe became aware of North America. Early maps sometimes show California as an island [wikimedia.org], and it's not because the cartographer was stupid; he just put the data at his disposal together into what was, at the time, a plausible conjecture. And in fact the problem might not even have been that he was ignorant. He may have misinterpreted some of the (at that stage)

  • They should've funded a brain-scanning gadget for Apple IIs.

    https://hardware.slashdot.org/... [slashdot.org]
