AI Hardware

A New Approach to Computation Reimagines Artificial Intelligence: Hyperdimensional Computing (quantamagazine.org) 43

Quanta magazine thinks there's a better alternative to the artificial neural networks (or ANNs) powering AI systems. (Alternate URL) For one, ANNs are "super power-hungry," said Cornelia Fermüller, a computer scientist at the University of Maryland. "And the other issue is [their] lack of transparency." Such systems are so complicated that no one truly understands what they're doing, or why they work so well. This, in turn, makes it almost impossible to get them to reason by analogy, which is what humans do — using symbols for objects, ideas and the relationships between them....

Bruno Olshausen, a neuroscientist at the University of California, Berkeley, and others argue that information in the brain is represented by the activity of numerous neurons... This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector. A vector is simply an ordered array of numbers. A 3D vector, for example, comprises three numbers: the x, y and z coordinates of a point in 3D space. A hyperdimensional vector, or hypervector, could be an array of 10,000 numbers, say, representing a point in 10,000-dimensional space. These mathematical objects and the algebra to manipulate them are flexible and powerful enough to take modern computing beyond some of its current limitations and foster a new approach to artificial intelligence...
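
To make this concrete, here is a minimal sketch of the basic hypervector operations the article describes, written in Python/NumPy purely for illustration: the dimensionality, the role/value names, and the choice of binding by element-wise multiplication and bundling by addition are assumptions of the sketch, not details taken from any particular system.

    import numpy as np

    D = 10_000                      # dimensionality of every hypervector
    rng = np.random.default_rng(0)

    def random_hypervector():
        """A random bipolar hypervector: D entries drawn from {-1, +1}."""
        return rng.choice([-1, 1], size=D)

    # One hypervector per role (make, model, color) and per value (illustrative names).
    make, model, color = (random_hypervector() for _ in range(3))
    honda, civic, blue = (random_hypervector() for _ in range(3))

    # "Binding" a role to a value is element-wise multiplication;
    # "bundling" the bound pairs into one record is addition, then re-binarizing.
    car = np.sign(make * honda + model * civic + color * blue)

    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Unbinding (multiplying by the role again) recovers something close to the value.
    print(cosine(car * color, blue))   # roughly 0.5: clearly the stored value
    print(cosine(car * color, civic))  # roughly 0.0: an unrelated vector

Because random 10,000-dimensional vectors are nearly orthogonal, unbinding recovers the stored value with similarity far above chance, while every unrelated vector scores near zero.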

Hyperdimensional computing tolerates errors better, because even if a hypervector suffers significant numbers of random bit flips, it is still close to the original vector. This implies that any reasoning using these vectors is not meaningfully impacted in the face of errors. The team of Xun Jiao, a computer scientist at Villanova University, has shown that these systems are at least 10 times more tolerant of hardware faults than traditional ANNs, which themselves are orders of magnitude more resilient than traditional computing architectures...
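
As a rough illustration of that robustness claim, the sketch below (again Python/NumPy, with made-up parameters) flips a random fraction of a bipolar hypervector's entries and reports how similar the corrupted copy remains to the original; for bipolar vectors the cosine similarity degrades only linearly, as 1 - 2 x (fraction flipped).

    import numpy as np

    D = 10_000
    rng = np.random.default_rng(1)
    original = rng.choice([-1, 1], size=D)

    def flip_random_bits(vec, fraction):
        """Negate a randomly chosen fraction of the entries (simulated bit flips)."""
        corrupted = vec.copy()
        idx = rng.choice(D, size=int(fraction * D), replace=False)
        corrupted[idx] *= -1
        return corrupted

    for fraction in (0.01, 0.10, 0.25):
        noisy = flip_random_bits(original, fraction)
        cos = (original @ noisy) / D   # cosine similarity of bipolar vectors
        print(f"{fraction:.0%} of entries flipped -> similarity {cos:.2f}")

Even with a quarter of the entries corrupted, the vector keeps a similarity of about 0.5 to its original, while an unrelated random hypervector would score near zero.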

All of these benefits over traditional computing suggest that hyperdimensional computing is well suited for a new generation of extremely sturdy, low-power hardware. It's also compatible with "in-memory computing systems," which perform the computing on the same hardware that stores data (unlike existing von Neumann computers that inefficiently shuttle data between memory and the central processing unit). Some of these new devices can be analog, operating at very low voltages, making them energy-efficient but also prone to random noise.

Thanks to Slashdot reader ZipNada for sharing the article.
Comments Filter:
  • by 93 Escort Wagon ( 326346 ) on Sunday June 18, 2023 @12:14AM (#63612024)

    "A hyperdimensional vector, or hypervector, could be an array of 10,000 numbers, say, representing a point in 10,000-dimensional space."

    Also known as a perl hash.

  • Quantum carburetor? Jesus, Morty. You can't just add a sci-fi word to a car word and hope it means something. Huh, looks like something's wrong with the microverse battery.

    Well I'll see your hyperdimensional computing and raise you a fermion superfluidity matrix.

  • Word Embeddings (Score:4, Insightful)

    by alecdacyczyn ( 9294549 ) on Sunday June 18, 2023 @12:52AM (#63612062)
    This sounds a lot like what most AI systems already do with their word embeddings, which do much of the heavy lifting. Obligatory Computerphile video: https://www.youtube.com/watch?... [youtube.com] Vector Space Models are nothing new.
    • Yeah, they redefined a word, nothing to see here. Look up "word embeddings", "latent space", "transformer inner dimension". This has been used in AI papers since 2011.
    • Re:Word Embeddings (Score:5, Informative)

      by cstacy ( 534252 ) on Sunday June 18, 2023 @01:50AM (#63612104)

      This sounds a lot like what most AI systems already do with their word embedding.

      Vector Space Models are nothing new.

      Multidimensional representations of various kinds have been the basis of a lot of classic AI, going back to the late 1970s. Frames, K-Lines, Feature Vectors, and so on. The challenge is the specific representation with respect to computational limits. And the early AI processors (such as Hillis's original Connection Machine) were highly parallel machines: in-memory nodes connected by hyperdimensional communication networks. And analogical reasoning was one of the basic approaches (Winston, Mallery, et al). So in that sense, none of this stuff is new.

      The trick is (as always) what is being represented and what operations are available to manipulate that. The current NN/ML systems represent very primitive things -- not, for example, "facts". That includes the LLMs which represent something rather dubious: symbols without any meaning (or any way to really attach any meaning).

      From TFS, it sounds like someone wants to do some real AI for a change, and is looking at the computational power available for more realistic "neurons", and the old basics of actual knowledge representation. The logic/analogy/inference operations would be on top of the hyperdimensional vectors. So this is interesting, and quite beyond what current NNs are doing.

      At least I hope that's what they're talking about; I just read TFS.

    • by Stash of Code ( 5682688 ) on Sunday June 18, 2023 @02:13AM (#63612130)
      Maybe the news here is the number of dimensions. Word embedding implies vectors with a much lower number of dimensions (in his book, François Chollet tells us about 256-, 512- or 1024-dimensional vectors), which is far fewer than one-hot encoding requires. Also, considering the huge number of dimensions, perhaps the maths in this space would be more related to semantics? In their article about word2vec, the authors write that this was a surprise, whereas now it may be intended: "Somewhat surprisingly, many of these patterns can be represented as linear translations. For example, the result of a vector calculation vec(“Madrid”) - vec(“Spain”) + vec(“France”) is closer to vec(“Paris”) than to any other word vector" (in "Distributed Representations of Words and Phrases and their Compositionality").
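
      As an illustration of that analogy arithmetic: it is just vector addition followed by a nearest-neighbour search over the vocabulary. The Python/NumPy sketch below uses random placeholder vectors in place of trained embeddings, so it only shows the mechanics; with a real model such as word2vec the query would rank "Paris" closest.

          import numpy as np

          rng = np.random.default_rng(0)
          words = ["Madrid", "Spain", "France", "Paris", "Berlin"]
          # Placeholder 300-dimensional vectors; a trained embedding model would supply real ones.
          embeddings = {w: rng.standard_normal(300) for w in words}

          def cosine(a, b):
              return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

          query = embeddings["Madrid"] - embeddings["Spain"] + embeddings["France"]

          # Rank the vocabulary by similarity to the query vector.
          ranked = sorted(words, key=lambda w: cosine(query, embeddings[w]), reverse=True)
          print(ranked)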
  • Heavy on HYPE, light on dimensional.
  • by Anonymouse Cowtard ( 6211666 ) on Sunday June 18, 2023 @02:11AM (#63612128) Homepage
    The most immediate limitation in current applications is that they are based on a massive data dump of everything said, thought or photographed over the past few years. If this is allowed to feed back on itself then the models that use this approach will stagnate and become repetitive and predictable.

    It could be that we need to take a step backwards before the next step forward.

  • A machine which could somehow, like Roman concrete, take advantage of impurity, rather than merely detecting errors, correcting them, and getting back on its Turing track.
  • I counted at least 6 buzzwords to justify the reasoning.

  • by WaffleMonster ( 969671 ) on Sunday June 18, 2023 @03:22AM (#63612170)

    Sadly, I have been hearing about PIM for decades and very little has ever seemed to materialize. The closest I know of is Mythic AI's analog processor, which for reasons I cannot understand is operating on a shoestring budget rather than having billions of dollars thrown at it.

    An interesting comparison between HDC and NN:
    https://arxiv.org/pdf/2207.129... [arxiv.org]

  • Represent each input and neuron by its outgoing weights, and you get vectors.

    Or for all the outgoing weights for a layer, represent it with a matrix.

    Before we had hardware acceleration I got a lot of mileage out of BLAS when training ANNs.
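
    That observation (each layer's outgoing weights form a matrix, so a forward pass is one BLAS-friendly matrix product) looks roughly like the following NumPy sketch; the layer sizes and the tanh nonlinearity are arbitrary choices for illustration.

        import numpy as np   # matrix products dispatch to the underlying BLAS

        rng = np.random.default_rng(0)
        n_inputs, n_hidden = 784, 256

        # One row of W per hidden unit, one column per input:
        # the layer's outgoing weights collected into a single matrix.
        W = 0.01 * rng.standard_normal((n_hidden, n_inputs))
        b = np.zeros(n_hidden)

        def forward(x):
            """Forward pass for a batch of examples (one row per example)."""
            return np.tanh(x @ W.T + b)

        batch = rng.standard_normal((32, n_inputs))
        print(forward(batch).shape)   # (32, 256)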

  • To solve the problem using hyperdimensional computing, the team first created a dictionary of hypervectors to represent the objects in each image; each hypervector in the dictionary represents an object and some combination of its attributes. The team then trained a neural network to examine an image and generate a bipolar hypervector — an element can be +1 or −1 — that’s as close as possible to some superposition of hypervectors in the dictionary; the generated hypervector thus contains in
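
    A minimal sketch of the dictionary-plus-superposition scheme that excerpt describes, in Python/NumPy with made-up object names (the neural network that maps an image to a hypervector is omitted): several bipolar dictionary entries are bundled into one scene vector, and decoding is just a similarity check against every entry.

        import numpy as np

        D = 10_000
        rng = np.random.default_rng(2)

        # Hypothetical dictionary: one random bipolar hypervector per object.
        names = ["red square", "blue circle", "green triangle", "yellow star"]
        dictionary = {name: rng.choice([-1, 1], size=D) for name in names}

        # A scene containing three of the objects, encoded as a re-binarized superposition.
        scene = np.sign(dictionary["red square"]
                        + dictionary["blue circle"]
                        + dictionary["green triangle"])

        # Decoding: objects present in the superposition score well above
        # chance; the absent object scores near zero.
        for name, hv in dictionary.items():
            print(f"{name:>14}: {scene @ hv / D:+.2f}")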

  • I distinctly remember hearing about this form of representation for concepts in an undergraduate seminar class I took, but the use case wasn’t AI back then, it was translation. More or less, words in different languages that have identical meanings would live at the same hyper dimensional point, and the closer a non-identical value was to another, the closer their meanings.

    Here we are, 20 years later. I haven’t exactly heard about this work taking the world by storm, which gives me the impressio

    • by ffkom ( 3519199 )
      "Hyperdimensional vectors" are used in industry standards like VSELP [wikipedia.org], probably right now in your mobile phone. For the purpose of robust vector quantization, they are a really useful tool. And I could envision how they may also be useful for some AI applications, but I would, like you, first like to see this tested with good results before making bold claims like the article does.
  • What I find remarkable is that at the 10,000 foot level it all comes down to matrix operations.
  • no one truly understands what they're doing

    This is false. Just because the *author* doesn't understand how LLMs work, doesn't mean that *no one* does.

    If no one understood, then LLMs wouldn't work, and it wouldn't be possible for competitors to spring up with their own versions. The people who develop LLMs not only understand how they work, but they are able to tweak and mold the output based on requirements. For example, it won't take long before LLM designers figure out how to incorporate advertising into their systems, because they're not going to keep offering their systems to people for free. And as they discover disturbing types of patterns (such as those relating to extremist political ideologies) the designers will find ways to exclude those topic areas.

    The average layperson doesn't understand how a computer works, but many of us here on slashdot understand computers and software at varying levels. Why wouldn't LLMs have the same pattern of being well-understood by some, and not at all by others?

    • This is false. Just because the *author* doesn't understand how LLMs work, doesn't mean that *no one* does.

      It's also true that just because *your* statement is correct doesn't necessarily mean *anyone* understands either.

      Understanding isn't required to make use of something. People, for example, have developed drugs without understanding why they work. Some industrial annealing processes rely on chance discoveries of effects that are exploited without any underlying explanation.

      If no one understood, then LLMs wouldn't work, and it wouldn't be possible for competitors to spring up with their own versions. The people who develop LLMs not only understand how they work, but they are able to tweak and mold the output based on requirements. For example, it won't take long before LLM designers figure out how to incorporate advertising into their systems, because they're not going to keep offering their systems to people for free. And as they discover disturbing types of patterns (such as relating to extremist political ideologies) the designers will find ways to exclude those topic areas.

      People understand enough to train models and execute them, yet the training process is based on learned experience and trial and error rather than understanding.

      • You seem to be disagreeing with my statement, but none of your points actually contradict my statement, as far as I can see.

        No one understands ALL of anything. They understand enough to make use of the things they need to make use of. But that doesn't mean no one understands things, including LLMs. Somebody built it, somebody maintains it, somebody enhances it, somebody trains it. Each of those people understand enough to do their jobs. Parts of the system are a black box to some of those people, but not all.

        • No one understands ALL of anything. They understand enough to make use of the things they need to make use of. But that doesn't mean no one understands things, including LLMs. Somebody built it, somebody maintains it, somebody enhances it, somebody trains it. Each of those people understand enough to do their jobs. Parts of the system are a black box to some of those people, but not all.

          The quote is talking explicitly about the underlying trained neural network.

          Somebody built it, somebody maintains it, somebody enhances it, somebody trains it. Each of those people understand enough to do their jobs. Parts of the system are a black box to some of those people, but not all.

          This is not a problem of compartmentalization. It's a problem of being able to understand the structure of trained neural models. You can communicate with the model through interfaces, yet understanding what it is doing to arrive at an output is largely magic to all.

          • I do realize we're talking about the underlying neural network. I've personally built and trained a neural network, back in the 90s. They aren't so mysterious once you start to see how they actually work. People tend to superimpose mystery and human intelligence on what is really, at its core, a simple structure and process. At its core, it's really about pattern recognition and using those patterns to make predictions.

    • This is false. Just because the *author* doesn't understand how LLMs work, doesn't mean that *no one* does. If no one understood, then LLMs wouldn't work...

      You've misunderstood the context. It's not at all about understanding how neural networks are constructed, trained, or queried; it's about understanding the encoded information structure within any particular multi-billion parameter network after it has been trained, such that one can know what token sequence it will produce for any given query.

      • It certainly is possible to debug LLMs.

        https://blog.allenai.org/intro... [allenai.org]

        A lot of traditional code was impossible or difficult to debug because it was written procedurally, leading to a lot of spaghetti. As the art and science of programming developed, we found ways to reorganize our code in ways that could be empirically debugged. LLMs are no different. An inability to debug them indicates either 1) the programmer lacks the skill or 2) the tool was written in a way so as to make debugging unnecessarily difficult.

  • Sounds like hype to me.

  • So you represent your data points as "hyperdimensional" vectors. They're still data points, and the patterns that emerge from them can still be analyzed through traditional AI models. AI doesn't care how a data point is represented, it just looks for patterns.

  • This seems to be completely trivial. Essentially all learning programs represent data as vectors in a very high dimensional space. A single 4k color image is a vector in a 24 million dimensional space. Neural nets represent intermediate data in a space of as many dimensions as there are "neurons." Changing bits is irrelevant in the face of efficient error-correcting codes. Etc. etc. Seems trivial. I have not read more than the summary, so maybe the summary is just stupid.

  • Sounds like the computer "Deep Thought" from Douglas Adams' "Hitchhiker's Guide to the Galaxy".
  • Changing the virtual model doesn't change the hardware - and neural nets using clusters of neurons with low triggering thresholds are supposed to be pretty efficient. It's also been a while since IBM said they'd build the first 'memristor', which would enable building neural nets in hardware with higher density and lower power use than transistors.

    Did memristors fail for some reason, or are they just coming along a lot more slowly than anticipated?

  • What color hair will this anime girl get?
