
A Mind Made From Memristors 320

Csiko writes "Researchers at Boston University's Department of Cognitive and Neural Systems are working on an artificial brain implemented with memristors. 'A memristor is a two-terminal device whose resistance changes depending on the amount, direction, and duration of voltage that's applied to it. But here's the really interesting thing about a memristor: Whatever its past state, or resistance, it freezes that state until another voltage is applied to change it. Maintaining that state requires no power.' Though long described in theory, solid-state memristors were not implemented until recently. Now the Boston researchers claim that memristors are the key technology for implementing highly integrated, powerful artificial brains on cheap, widely available hardware within five years."
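
For the curious, the behavior described in the summary can be sketched numerically. Below is a toy simulation loosely following the linear ion-drift model HP published for its 2008 device; the parameter values, the lumped drift coefficient, and the crude Euler integration are illustrative assumptions, not figures from the article.

    # Toy memristor: resistance depends on the history of applied voltage,
    # and the state persists when no voltage is applied. Illustrative only.
    R_ON, R_OFF = 100.0, 16000.0  # doped/undoped limit resistances (ohms)
    DRIFT = 1e6                   # lumped drift coefficient mu*R_ON/D^2 (assumed)
    DT = 1e-6                     # integration step (seconds)

    class Memristor:
        def __init__(self, w=0.5):
            self.w = w  # normalized internal state in [0, 1]

        def resistance(self):
            return R_ON * self.w + R_OFF * (1.0 - self.w)

        def apply(self, volts, seconds):
            """State drifts with the amount, sign, and duration of the bias."""
            for _ in range(int(seconds / DT)):
                current = volts / self.resistance()   # Ohm's law
                self.w += DRIFT * current * DT        # linear ion drift
                self.w = min(1.0, max(0.0, self.w))   # physical limits

    m = Memristor()
    m.apply(+1.0, 2e-3)      # positive bias drives the resistance down...
    print(m.resistance())
    m.apply(0.0, 2e-3)       # ...zero bias: the state simply stays put
    print(m.resistance())    # unchanged; retention needs no power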
This discussion has been archived. No new comments can be posted.
  • by Monkeedude1212 ( 1560403 ) on Friday December 03, 2010 @02:57PM (#34436094) Journal

    How do you read that state back without applying any voltage to it?

    I look at the Wikipedia page and it's all Greek to me.

  • by InsaneProcessor ( 869563 ) on Friday December 03, 2010 @02:59PM (#34436132)
    This is nothing like the cognitive human brain. This is only a variable memory device.
    • by ColdWetDog ( 752185 ) on Friday December 03, 2010 @03:03PM (#34436206) Homepage

      This is nothing like the cognitive human brain. This is only a variable memory device.

      Variable memory, eh? Perhaps we can use it to replace politicians.

    • What is a man?
      A miserable little pile of secrets! But enough of this, have at you!
    • This is nothing like the cognitive human brain. This is only a variable memory device.

      My hope is that these artificial brains come with an easy way to back up their memory. Since no one does computer backups, I can imagine it would be the same with their brains...

      I can imagine it now...in 2025...

      Kid: What's wrong with dad?
      Mom: He crashed last night.
      Kid: Did you do a full restore?
      Mom: Yes, but we haven't done a backup since 2012. So he thinks he's 25, doesn't remember you, and he keeps talking about President Palin.

    • Here is why this is just yet another pipe dream: any hardware that we can build can be emulated identically in software. It will perhaps run slower, but it will do the same thing. No software agent modeled in the past 50 years has come anything close to 'real AI', so why would shifting the problem to hardware do anything to advance the underlying problem? Speed and transistor counts don't make up for a lack of understanding.
  • by digitaldc ( 879047 ) * on Friday December 03, 2010 @02:59PM (#34436136)
    Now that's 'Change We Can Believe In!'
  • by mykos ( 1627575 ) on Friday December 03, 2010 @03:03PM (#34436196)
    I'm going to be the laziest bastard alive
  • by dreamchaser ( 49529 ) on Friday December 03, 2010 @03:04PM (#34436218) Homepage Journal

    Ever notice that anytime some cool-sounding new development is announced, the people behind it say 'we see this having applications in/within/in about five years'?

    Call me when you actually have something to show us.

    • by blueg3 ( 192743 )

      Ever notice that anytime an interesting piece of science or technology is talked about, someone complains about how people say "we see this having applications in about five years", even when it's not really relevant?

      • It's perfectly relevant to point out that, more often than not, they're just pulling that number out of their ass, but of course you're free to disagree.

      • by retech ( 1228598 )
        It's so that you forget about it and they're never called out when they fail to deliver.
      • Ever notice that every time someone complains that people complain about how people say "we see this having applications in about five years", people are making exactly the same complaint five years later?

  • Cybernetic Lifeform Node!

  • Neuromorphic CPUs (Score:5, Informative)

    by Kensai7 ( 1005287 ) on Friday December 03, 2010 @03:05PM (#34436238)

    Even if the rest of what the article describes is many years away, the last couple of paragraphs explain the trend:

    Neuromorphic chips won't just power niche AI applications. The architectural lessons we learn here will revolutionize all future CPUs. The fact is, conventional computers will just not get significantly more powerful unless they move to a more parallel and locality-driven architecture. While neuromorphic chips will first supplement today's CPUs, soon their sheer power will overwhelm that of today's computer architectures.

    The semiconductor industry's relentless push to focus on smaller and smaller transistors will soon mean transistors have higher failure rates. This year, the state of the art is 22-nanometer feature sizes. By 2018, that number will have shrunk to 12 nm, at which point atomic processes will interfere with transistor function; in other words, they will become increasingly unreliable. Companies like Intel, Hynix, and of course HP are putting a lot of resources into finding ways to rely on these unreliable future devices. Neuromorphic computation will allow that to happen on both memristors and transistors.

    It won't be long until all multicore chips integrate a dense, low-power memory with their CMOS cores. It's just common sense.

    Our prediction? Neuromorphic chips will eventually come in as many flavors as there are brain designs in nature: fruit fly, earthworm, rat, and human. All our chips will have brains.

    Hopefully, this is the solution to 2018's problem of reaching atomic levels of miniaturization. We'd have a breakthrough that lets Moore's law continue beyond current technology.

    • Re: (Score:3, Insightful)

      I think Moore's law is becoming increasingly pointless to most of the world. It talks about speed, yet at this point few manufacturers are trying to win speed competitions. It's all about form factor and efficiency. To use a car analogy, the past several years were the horsepower wars of the late '60s and early '70s. Now we have seen a switch to fuel (energy) economy as the main driver of development.

      That being said, I think it's cool this is a possible future - it's not that we need more power, we need a

      • I think Moore's law is becoming increasingly pointless...It talks about speed...

        Actually, I think it talks about transistor density, not CPU frequency (speed). And transistor density keeps going up, year after year. In 2007 we had the CPU that beat Kasparov in 1997 and weighed 1.5 tons. This info is in the article, btw.

        • Re:Neuromorphic CPUs (Score:4, Informative)

          by limaxray ( 1292094 ) on Friday December 03, 2010 @03:50PM (#34436934)
          Actually, it talks about transistor density per unit cost - as long as manufacturing continues to improve and drive down costs, Moore's law will continue beyond the physical limitations of transistor density (stuff will continue to get cheaper even if it doesn't get 'faster').

          I don't understand why most people focus on the maximizing transistor density part when 99% of applications call for minimizing cost.
      • While processing speeds are certainly linked to Moore's law, it is really only about the doubling of transistor count roughly every two years while keeping prices roughly the same. Increasing the number of cores and adding more on-die memory are easy ways to keep Moore's law going.

        ...well, easier than decreasing the half-pitch below 12nm.

        By the way, Moore's law applies to memory density and CCD properties as well, neither of which appear to be close to their limits.

      • Re:Neuromorphic CPUs (Score:4, Informative)

        by StikyPad ( 445176 ) on Friday December 03, 2010 @04:14PM (#34437332) Homepage

        I think Moore's law is becoming increasingly pointless to most of the world. It talks about speed

        It doesn't actually talk about speed at all; it talks about the cost of manufacturing chips of 2^n density, where n increments every 18-24 months while cost remains constant. It is, in fact, exactly what you go on to say is relevant; what you're describing IS Moore's Law.
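
        To put rough numbers on the density-per-cost reading, here's a back-of-the-envelope sketch; the clean 18-month cadence and the 2010 baseline are illustrative assumptions, not figures from the article:

            # Transistors per dollar under a clean 18-month doubling law.
            DOUBLING_MONTHS = 18
            BASE_2010 = 1.0e6  # assumed baseline: 1M transistors per dollar in 2010

            def transistors_per_dollar(months_elapsed):
                return BASE_2010 * 2 ** (months_elapsed / DOUBLING_MONTHS)

            for year in (2010, 2012, 2014, 2016, 2018):
                print(year, "%.2e" % transistors_per_dollar((year - 2010) * 12))

        Cost per transistor keeps falling even in years when clock speed doesn't move, which is the whole point.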

  • ...to implement highly integrated, powerful artificial brains on cheap and widely available hardware within five years.

    *snicker* Is it April 1st already, Soulskill?

    Don't get me wrong, this is cool research, but cheap, available, artificial brains in five years? In 2015? Color me skeptical. I say give it 25 at least.

  • by retech ( 1228598 ) on Friday December 03, 2010 @03:25PM (#34436556)
    Pity Frank Herbert isn't still around to see the fruits of his imagination!
  • FTFA...

    By the middle of next year, our researchers will be working with thousands of candidate animats at once, all with slight variations in their brain architectures. Playing intelligent designers, we'll cull the best ones from the bunch and keep tweaking them until they unquestionably master tasks like the water maze and other, progressively harder experiments. We'll watch each of these simulated animats interacting with its environment and evolving like a natural organism. We expect to eventually find the "cocktail" of brain areas and connections that achieves autonomous intelligent behavior. We will then incorporate those elements into a memristor-based neural-processing chip. Once that chip is manufactured, we will build it into robotic platforms that venture into the real world.

    Then, once they become self-aware, we can turn Arnold Schwarzenegger loose on them.

  • by Quince alPillan ( 677281 ) on Friday December 03, 2010 @03:35PM (#34436686)
  • There is no reason to suppose that people would not ally themselves with an artificial brain. People have already aligned themselves with Pol Pot, Idi Amin, Adolf Hitler, and Josef Stalin--allegiances with undisputedly bad people who ultimately served them very poorly. There is every reason to expect that people will form an allegiance to an artificial brain if that artificial brain causes those people to receive adequate food, shelter, and medical care.

    That will be seriously weird. I can envision electio

  • Never mind AI (Score:4, Insightful)

    by Lilith's Heart-shape ( 1224784 ) on Friday December 03, 2010 @03:56PM (#34437036) Homepage
    Couldn't this be used to make cheaper solid-state storage?
    • It can and is being designed for that use, but I believe there have been problems with the reliability of individual memristor units. However, in a neuromorphic design (non-Von Neumann architecture) you only need a certain percentage of the units to be reliable, as the information is highly distributed and fault tolerant (a toy demonstration follows below). Think of the massive cell death that occurs in Alzheimer's disease, yet patients are still fairly normal well into that process.

      The other main advantage is that you can represent a single syn
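
      The fault-tolerance claim is easy to demonstrate with a toy population code (purely illustrative, not the architecture from the article): store one value redundantly across many noisy units, kill a third of them at random, and the decoded value barely moves.

          import random

          # Toy distributed code: one value spread across many unreliable units.
          random.seed(1)
          value = 0.7
          units = [value + random.gauss(0, 0.05) for _ in range(10000)]

          def decode(population):
              return sum(population) / len(population)

          print("intact:  %.3f" % decode(units))
          # Kill 30% of the units at random (cf. cell death in Alzheimer's):
          survivors = [u for u in units if random.random() > 0.30]
          print("damaged: %.3f" % decode(survivors))  # still ~0.7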

  • Why do we think we are so much smarter than those scientists on Caprica? They were much further advanced than us. Shouldn't we be learning from their mistakes instead of trying to recreate them?

  • by Stuntmonkey ( 557875 ) on Friday December 03, 2010 @04:59PM (#34438008)

    This technology fundamentally misidentifies the hard part of building brains as adaptable as biological ones. The physical instantiation is not important, if the Church-Turing thesis is true. (And if you're saying Church-Turing is false, that's an enormous claim and you'd better have very compelling evidence to back you up.)

    The hard part about building a brain is figuring out the patterns of connectivity between neurons. Biology solves this in some brilliant way, starting from a seed with almost no information (the genome) and implementing some process to incorporate environmental data, self-organizing into a very robust and complex structure with orders of magnitude more information. The great unknown is the process whereby this growth and self-organization occurs. Figure that out, and you'll be able to make any kind of computer you like function as a brain.
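
    To make the "small seed, big structure" point concrete, here is a minimal Hebbian-style toy (my own illustration, nothing from TFA): the "genome" amounts to a two-line learning rule, and whatever connectivity emerges comes from the input the network is exposed to, not from the seed.

        import random

        # Minimal Hebbian toy: the "seed" is just a learning rule; the pattern
        # of connectivity emerges from environmental input. Illustrative only.
        random.seed(0)
        N = 8
        w = [[0.0] * N for _ in range(N)]  # synaptic weights, all zero at birth
        RATE, DECAY = 0.1, 0.01

        def expose(pattern, steps=200):
            """Units that fire together wire together (with slow forgetting)."""
            for _ in range(steps):
                for i in range(N):
                    for j in range(N):
                        if i != j:
                            w[i][j] += RATE * pattern[i] * pattern[j]
                            w[i][j] -= DECAY * w[i][j]

        # The "environment": two groups of neurons that tend to co-activate.
        expose([1, 1, 1, 1, 0, 0, 0, 0])
        expose([0, 0, 0, 0, 1, 1, 1, 1])

        print(w[4][5], w[0][5])  # strong within-group weight, zero across groups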

    • by Schroedinger ( 141945 ) on Friday December 03, 2010 @05:47PM (#34438708) Homepage

      The process of figuring this out isn't going to occur magically. You need to test out your models at the systems level, with all the components working together. The more powerful the hardware we have to do this, the more we can test and refine our models of how the brain achieves the same thing. This is true whether you're modeling existing neural architectures (like BU is) or modeling evolutionary approaches like you describe above.

      These memristive neuromorphic architectures hold the promise to get us orders of magnitude more processing speed while also keeping power levels low.

    • by timeOday ( 582209 ) on Friday December 03, 2010 @06:44PM (#34439248)
      The summary really only promises enhanced speed and efficiency, but after reading the article, I agree with your complaint: "Researchers have suspected for decades that real artificial intelligence can't be done on traditional hardware, with its rigid adherence to Boolean logic and vast separation between memory and processing." Huh?

      Now, I have some sympathy for the pragmatic argument that getting good tools into enough hands is the best way to raise the odds of cracking hard problems. Some people will point out (for example) that a modern 3D game like Crysis might have been emulated, at a fraction of real-time speed, 20 years ago, but nobody figured out how or bothered to do so (and no, Castle Wolfenstein doesn't count), because hardware limitations made it too cumbersome and only a few parties had the resources to even try.

      Even so, claiming it "can't be done" is going too far. People are building conventional computers that simulate neurons on the order of a cat brain [forbes.com], but programming them is the problem.

      • Yeah, that was the same passage that made me double-take. A surprising thing to read on the IEEE site.

        I agree that performance can matter. Especially so for brains, which interact with the physical world and have to respond on physical timescales (e.g., within hundreds of milliseconds in order to coordinate walking). If the technology were a lot faster than conventional machines for simulating neurons then that would be a meaningful advance, but this was not demonstrated in the article. The central argu
