
Stealth Startup Plans Fundamentally New Kind of Computer with Circuit-Rearranging Processor (zdnet.com) 107

VCs have given nearly half a billion dollars to a stealth startup called SambaNova Systems to build "a new kind of computer to replace the typical Von Neumann machines expressed in processors from Intel and AMD, and graphics chips from Nvidia."

ZDNet reports: The last thirty years in computing, said CEO Rodrigo Liang, have been "focused on instructions and operations, in terms of what you optimize for. The next five, to ten, to twenty years, large amounts of data and how it flows through a system is really what's going to drive performance." It's not just a novel computer chip, said Liang, rather, "we are focused on the complete system," he told ZDNet. "To really provide a fundamental shift in computing, you have to obviously provide a new piece of silicon at the core, but you have to build the entire system, to integrate across several layers of hardware and software...."

[One approach to training neural networks with very little labeled data] is part of the shift of computer programming from hard-coded to differentiable, in which code is learned on the fly, commonly referred to as "software 2.0." Liang's co-founders include Stanford professor Kunle Olukotun, who says a programmable logic device similar to a field-programmable gate array could change its shape over and over to align its circuitry [to] that differentiated program, with the help of a smart compiler such as Spatial. [Spatial is "a computing language that can take programs and de-compose them into operations that can be run in parallel, for the purpose of making chips that can be 'reconfigurable,' able to change their circuitry on the fly."]
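As a rough illustration of what that kind of decomposition means (a toy sketch in Python, not Spatial itself), a program can be treated as a graph of operations whose only ordering constraints are data dependencies; any operation whose inputs are ready can run in the same "wave," i.e. in parallel:

    # Toy sketch only. Each op names its operator and its two inputs.
    ops = {
        "t1": ("mul", "a", "b"),
        "t2": ("mul", "c", "d"),
        "t3": ("add", "t1", "t2"),   # depends on t1 and t2, so it runs one wave later
    }

    def schedule(ops, available=("a", "b", "c", "d")):
        done, waves, remaining = set(available), [], dict(ops)
        while remaining:
            wave = [name for name, (_, x, y) in remaining.items()
                    if x in done and y in done]
            waves.append(wave)
            done.update(wave)
            for name in wave:
                del remaining[name]
        return waves

    print(schedule(ops))   # [['t1', 't2'], ['t3']] -- t1 and t2 can run side by side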

In an interview in his office last spring, Olukotun laid out a sketch of how all that might come together. In what he refers to as a "data flow," the computing paradigm is turned inside-out. Rather than stuffing a program's instructions into a fixed set of logic gates permanently etched into the processor, the processor re-arranges its circuits, perhaps every clock cycle, to variably manipulate large amounts of data that "flows" through the chip.... Today's chips execute instructions in an instruction "pipeline" that is fixed, he observed, "whereas in this reconfigurable data-flow architecture, it's not instructions that are flowing down the pipeline, it's data that's flowing down the pipeline, and the instructions are the configuration of the hardware that exists in place."
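A crude way to picture "the instructions are the configuration, the data flows" (plain Python, not a model of SambaNova's actual hardware): set up a pipeline of stages once, then stream values through it while the stages stay put.

    # Toy model: the "configuration" is the list of stages; only data moves.
    configuration = [
        lambda x: x * 2,       # stage 1: scale
        lambda x: x + 1,       # stage 2: offset
        lambda x: max(x, 0),   # stage 3: clamp
    ]

    def run(config, stream):
        for value in stream:           # data flows...
            for stage in config:       # ...through circuitry that stays in place
                value = stage(value)
            yield value

    print(list(run(configuration, [-3, 0, 4])))   # [0, 1, 9]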

  • Yawn (Score:5, Insightful)

    by sexconker ( 1179573 ) on Saturday February 29, 2020 @07:45PM (#59783026)

    Smoke and mirrors and faked demos until they finally produce a prototype that's nothing more than an FPGA.
    Sorry investors, you're not catching this unicorn.

    • Is there some tax loophole where these "investors" get to write off all their losses in a minority-owned investment [in a fashion which is much more lucrative than normal write-offs]?

      PRO-TIP: It takes about 12 years to get to the point where the brains of the smartest human kids can be trained to perform simple calculations involving the Fundamental Theorem of Calculus [although only about 1% of those smartest kids will have any clue what the FTC actually says].

      Machine learning in hardware is an awesome
    • For that matter, of course, any extant processor can do what they're talking about in software. It might be somewhat slower than flexible hard wiring, but then just throw more cheap parallel off-the-shelf hardware at the problem if you're really bottlenecked.

      • more cheap parallel off the shelf hardware

        It's very very difficult to compete with off-the-shelf Turing-Complete-ish Hardware/Compiler combos which have had decades of debugging invested in them already.

        As expensive as this thing would have to be [if they could even debug 99.9% of its functionality in the next decade], it would be damned near impossible to keep up with Intel & AMD & Samsung foundries pushing out generic chips at a tiny fraction of the cost of any specialized chips.

        It seems lik
        • Well, the blurb quite literally describes a system with a control/loader CPU and an FPGA.

          That the blurb does not mention an FPGA is what makes it suspect. If they could make super, super cheap FPGAs, okay, they would have something going for them. But if they just have SoC+FPGA at existing SoC+FPGA pricing, then who cares?

    • My experience is that many tech "investors" get lucky once and make >1000 times their money, leaving them flush with cash and thinking every decision they've ever made is correct and that they are experts in everything.

      Such people are easily separated from their money by others with a good story and a bit of faked enthusiasm. "I don't really get how they're going to do it, but the inventor is excited so it must be good, and I can definitely see the applications" is a typical sentence often heard.

      A
    • Not "investing". It's money laundering and/or tax evasion when the bullshit is this deep.

      • What? I guess the whole "lose a billion dollars to cut your tax bill by $300 million" is a new way to evade taxes? That makes about as much sense as "we lose $1 on each unit we sell but we make it up in volume"...
        • Re:Yawn (Score:4, Informative)

          by Tuidjy ( 321055 ) on Saturday February 29, 2020 @09:34PM (#59783208)

          Try "spend 1 million of dirty money and get 300,000 of clean income" that you can move around. Or funnel a bunch of money you cannot explain into vendors that you actually control. Or "lose a few million that you write off on your taxes, because you just paid them to a shell company of yours". Or "drive your company's stockholders bankrupt, because you bought cheap services at exorbitant prices from a company whose profits you reap".

          But why should I give you an education in money-laundering? And why do you need one? Hmm...

          • How does the investor get their money back out of their investment, so they can move it around? And "writing off taxes" simply means you lower your tax load, but - at best - that's a savings of 35% of the money you lost. So yeah, lose $100 million, and you get to "save" $35 million on taxes, so you only lost $65 million instead... It's still a massive loss.
        • How about "lose a billion dollars in this tax jurisdiction" to make a billion dollars in another, lower-taxed place? And as an added bonus, save an extra 300 million as well in the first one.
      • by b3e3 ( 6069888 )
        VC is gambling with extra steps.
    • Re:Yawn (Score:4, Informative)

      by phantomfive ( 622387 ) on Saturday February 29, 2020 @09:04PM (#59783162) Journal
      It's an FPGA management system. It analyzes your code, and programs the FPGA in a way that is optimized for your code. That way if you are an AI researcher, you don't have to become an FPGA expert in addition to a machine learning expert. A lot of FPGA companies have been trying something like this, but I haven't seen one that was successful.

      The motivation is that most deep learning neural networks use the hardware inefficiently. [youtube.com] You can have a bunch of Tensor processing units, but they only get 10%-50% utilization for most NN calculations. So if these guys can figure out how to make it work better, that would be a 10x speedup over current hardware (rough arithmetic at the end of this comment).

      After Google created their TPU (and let's be honest, Nvidia and their graphics cards), a lot of people have had ideas on how to make it better. So now VCs are throwing money at them, hoping something sticks. If the technology is good enough, it could end up being in every cell phone.
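      Rough arithmetic behind the utilization point above, with assumed numbers (the 30% is just an example inside the quoted 10%-50% range):

          peak_tops   = 100      # assumed accelerator peak, in TOPS
          utilization = 0.30     # example value from the quoted 10%-50% range
          effective   = peak_tops * utilization   # 30 TOPS actually delivered
          headroom    = peak_tops / effective     # ~3.3x possible from better scheduling
          print(f"{effective:.0f} TOPS delivered; at most {headroom:.1f}x left on the table")

      At 10% utilization the same arithmetic gives the 10x figure mentioned above.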
      • by mosel-saar-ruwer ( 732341 ) on Saturday February 29, 2020 @09:52PM (#59783248)
        If the technology is good enough, it could end up being in every cell phone

        Why in the name of Phuck Almighty would you want a hardware-based AI in your cellphone?

        So that the "You make-a me so hawny, wanna suckie suckie you big White round-eyes cock" anime pr0n can be tailored to the peccadilloes of each individual incel?

        Will it cum with a $19.95 per month subscription to (((the pr0n cloud)))?
        • Data compression is likely to be AI-driven in the future. It can self-optimize for the content. The compressed stream consists of data and neural network parameters for the decoder.
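          A toy of that "model parameters + data" framing (plain Python, standard library only; the "model" here is a single learned byte standing in for a real neural predictor):

              import json, zlib

              def compress(data: bytes) -> bytes:
                  # "Train" a one-parameter model: the most common byte is the prediction.
                  predicted = max(set(data), key=data.count) if data else 0
                  residuals = bytes((b - predicted) % 256 for b in data)
                  header = json.dumps({"predicted_byte": predicted}).encode()
                  return header + b"\n" + zlib.compress(residuals)   # params + encoded residuals

              def decompress(payload: bytes) -> bytes:
                  header, blob = payload.split(b"\n", 1)
                  predicted = json.loads(header)["predicted_byte"]
                  return bytes((r + predicted) % 256 for r in zlib.decompress(blob))

              sample = b"aaaabaaaacaaaa"
              assert decompress(compress(sample)) == sample   # bit-exact round trip

          A real neural codec would ship network weights instead of one byte and use them to drive an entropy coder, but the container idea is the same.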
          • by xonen ( 774419 )

            Data compression is likely to be AI-driven in the future. It can self-optimize for the content. The compressed stream consists of data and neural network parameters for the decoder.

            That's just another snake-oil application. Data compression needs to use every bit available and reproduce a bit-accurate copy of the input, and neural network parameters are anything but that. Even if the net did work, it's very hard to achieve better compression than with the current algorithms, and now you've also added a bunch of overhead.

            The lossless compression algorithms we have today already achieve results very close to the theoretical maximum (or minimum, as you wish). There is no silver bu

        • iPhones already have hardware AI modules. They help you take better pictures [wired.com].
      • This thing is over-hyped as shit, but an FPGA that could reprogram its structure every clock cycle would be pretty impressive - normally they take upwards of several seconds (on the fast end) to flash a new image onto, and some take on the order of several minutes. I have my doubts, but if it's an FPGA with a coprocessor fast enough to reimage it every tick, I'd want one.
        • I don't think they're inventing new hardware. It's aimed at people who spend weeks training models, hoping to cut the training time down to days.
        • FPGA configuration is stored in flash, but the device runs from RAM, so you have to load the configuration from flash into RAM at startup. Note that flash is slow for both reads and writes compared to RAM. I cannot see the use of changing the configuration every clock cycle, but I guess the faster the better. The programming time is really the time to read the configuration and store it into the config registers. To do that in a single cycle you would need quite a wide bandwidth (well, as wide as the total configuration of the FPG
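          Back-of-the-envelope version of that bandwidth point, with assumed numbers (neither figure comes from the article):

              config_bits = 100e6                      # assume a ~100 Mbit configuration bitstream
              clock_hz    = 1e9                        # assume a 1 GHz reconfiguration target
              per_cycle_bytes = config_bits / 8        # 12.5 MB rewritten every cycle
              bandwidth = per_cycle_bytes * clock_hz   # bytes per second
              print(f"~{bandwidth / 1e15:.1f} PB/s to rewrite the whole fabric each cycle")

          Which presumably means any realistic scheme reconfigures only small regions, or keeps several configurations resident and switches between them.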
          • by b3e3 ( 6069888 )
            It doesn't need to reconfigure itself on-the-fly that often to be useful; just being able to do so without a bunch of specialized FPGA knowledge would be great. I'm imagining consumer-level stuff like complex lighting physics and branching enemy AI in games [written through a user-friendly interface], then you switch tasks and Photoshop has really good edge detection, then you play around with bitmining... The stuff GPUs are getting good at, but with hardware that can reconfigure itself for maximum efficien
      • by AmiMoJo ( 196126 )

        Usually when someone comes up with some clever way of optimizing a problem it rapidly becomes obsolete as more general hardware overtakes it. Being more general means high volumes and lower costs so the advantage of more expensive specialised hardware quickly disappears.

        GPUs are a good example. Lots of clever ideas but in the end raw polygon pushing power won. I expect it will be the same with AI, someone will get their strong AI working and then all the effort will go into optimising for that method and ev

        • FPGAs are not fast when compared to ASICs. A good FPGA will be ~40x slower than an ASIC at the equivalent node size. FPGAs are also much more expensive; you will be looking at 20x+ on the cost side per unit. FPGAs are also much more power hungry than processors for the equivalent processing. FPGAs do not have a big tooling cost, though: doing an ASIC will cost millions, but if you only need 10 then you can just pay $100 per FPGA and have the job done. There are small FPGAs which can cost under $5 and can still do a
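          Rough break-even math for that tooling-cost point, using the parent's per-unit figures plus an assumed NRE cost:

              asic_nre  = 2_000_000   # assumed one-off cost of an ASIC run, in dollars
              asic_unit = 5           # per-unit figure from the parent comment
              fpga_unit = 100         # per-unit figure from the parent comment
              break_even = asic_nre / (fpga_unit - asic_unit)
              print(f"The ASIC only pays off beyond ~{break_even:,.0f} units")   # ~21,053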
      • by epine ( 68316 )

        I know just enough about hardware multipliers and the internal FPGA fabric to suspect that nobody is ever going to program a multiplier using and-or trees. The fabric latency on a gate-efficient tree would be crazy, while a fat tree would almost certainly nuke your thermal budget.

        In all likelihood, the "logic" is everything else provided by the FPGA in the vicinity of the specific hardware multipliers, which are distributed ubiquitously, like transuranic diplumonium 119 in a nano-crystalline lattice. This

        • Transmeta actually produced useful processors... But the time to market was too long at the current churn rate so they weren't cost-effective. This sort of stuff only makes sense when you can't get performance by any other means, and improvement has stalled. We're not there yet. This is even nuttier than Transmeta.

          • There are some workloads that use a bunch of 8-bit integer multiplications followed by a 32-bit multiply at the end (and I mean a lot of multiplies; AlphaGo had hardware specifically created to do the 8-bit multiplications, and they were still burning through $3000 a game in electricity costs). A toy version of that kind of kernel is sketched at the end of this comment.

            GPU shaders are basically the same thing, but more general: you compile your shader code, distribute it to the shaders, and then they run for a while.

            This is kind of an intermediate thing between an ASIC and a
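            A toy version of the kernel described above (numpy; just an illustration of 8-bit multiplies feeding a 32-bit accumulator, not anyone's actual hardware):

                import numpy as np

                a = np.random.randint(-128, 128, size=1024, dtype=np.int8)
                b = np.random.randint(-128, 128, size=1024, dtype=np.int8)

                # Multiply in 8-bit, accumulate in a wider type so the sum can't overflow.
                acc = np.dot(a.astype(np.int32), b.astype(np.int32))
                print(acc)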
            • "So if they are lucky, their product will replace GPUs for deep-learning."

              How are they going to compete with such a high-volume product, though?

              • Contract out manufacturing to a fab like Nvidia does? Not sure I understand the question.
                • GPUs are cheap because of volume, etc.

                  • Oh, I was wondering that too, doesn't deep learning seem like too small a market? But I don't have the numbers, so who knows? All the major clouds have support for deep learning, and maybe Amazon will want something to compete with the TPUs in the Google cloud?
                  • Also worth mentioning these high-end GPUs are not cheap, and I guess Nvidia is probably taking a large profit margin on them. I mean, check out these numbers [nvidia.com]. If you can get $500 per customer, you don't need as many users as Facebook.
    • "There's a series of NAND gates controlled by pin 6." - Jez BSing the investor

      Shooting Fish (1997) [imdb.com]

    • Neural computers (Score:4, Informative)

      by goombah99 ( 560566 ) on Saturday February 29, 2020 @09:31PM (#59783202)

      Neural computers (NOT neural nets) are _likely_ the real breakout from Von Neumann machines. There are several startups bringing real silicon to the market this year in sampling, test-board quantities. How you program these is still a work in progress. But what is known is that they have energy/compute efficiencies, without exaggeration, 3 to 4 orders of magnitude better than Von Neumann machines, without losing any speed. It appears they will outperform any foreseeable quantum computer for decades.

      • The energy efficiency is far from certain. People like LeCun argue that the gains in analog neuromorphic chips are more than offset by the necessary DA and AD conversions unless your entire system is analog (not likely for now). Related to your programming point, a problem (or a feature, depending on how you look at it) of analog designs is that every chip is unique, i.e. the same neuron on two chips behaves differently on each. So first, you need to figure out the characteristics of your particular chip. I'
    • Just Google a bit. The bottleneck is actually I/O, so IBM on its mainframe processors had dozens of helper processors and hardware to handle I/O, memory security and VM semaphores. We could talk about processors' multiple instruction extensions, but the video gaming cards are pretty damn potent outboard processors - just wait till they get the 7nm shrink. The supercomputers in the '80s all had smart compilers to optimize for supercomputers, and it was real funny when Fortran, PL/I and COBOL compilers on Fujitsu
    • by ceoyoyo ( 59147 )

      Once you strip away the handwaving and jargon, it sounds like they're trying to make a TPU. Like Google did. Already.

    • Exactly this.

      You could achieve most of the benefits by adding a little bit of "content addressable memory" to your conventional Von Neumann processor. This allows you to do table lookups (and therefore switch statements) in a single memory cycle instead of strolling down the table saying "is it this one?" (a minimal sketch follows at the end of this comment).

      We were discussing this over 30 years ago. It was doable then, and it's doable now.

      However, unless you live in silicon valley and have access to people who are investing other people's money while takin
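      A minimal stand-in for that single-cycle lookup (a Python dict plays the role of the CAM; a real CAM does the match in parallel hardware):

          # Linear scan: the "is it this one?" walk described above.
          table = [(0x10, "handle_add"), (0x20, "handle_mul"), (0x30, "handle_jmp")]

          def dispatch_scan(opcode):
              for key, handler in table:      # up to N comparisons
                  if key == opcode:
                      return handler
              return "handle_illegal"

          # CAM-style: all keys are compared at once; a hash map stands in for that here.
          cam = dict(table)

          def dispatch_cam(opcode):
              return cam.get(opcode, "handle_illegal")   # one lookup, no walk

          assert dispatch_scan(0x20) == dispatch_cam(0x20) == "handle_mul"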

      • Log-antilog is expensive in hardware and time. In what sort of application is this better than just doing multiply?
    • Beat me to it! Why no FPGA!
  • Yeah, new stuff seems to come out of China ... VoCores, Coronas and homebrew CPUs ... with no investors attached. I DO wonder why, after all that time, von Neumann improved on Turing but no one ever improved on von Neumann (unless the mythological quantum computer counts ... which probably has the same structure, only with qubits?). What do I know, I'm so sore I can barely see with my glasses on today.
    Investors seem to like generation ICO, as I like to call it, but I can't find any here who will give me their billions for me sayi
  • Old is new again (Score:5, Interesting)

    by gustavojralves ( 623794 ) on Saturday February 29, 2020 @07:45PM (#59783030) Homepage
    Transmeta?
  • So... we might eventually design something as capable as a stem cell's design.

    Better late than never...

  • How are they going to predict a new complex operation for a predefined set of registers? How are they going to know beforehand that such an operation, complex or not, is going to become actually useful for optimizing a novel mathematical concept that suddenly gets abused by every compiler?
    • This is not for general purpose programming, it's for things like Apple's neural engine [wikipedia.org]. It is for operations that are performed similarly over and over again, without many if statements. Think of it as Hadoop but for a single computer.
  • by bobstreo ( 1320787 ) on Saturday February 29, 2020 @08:08PM (#59783094)

    you need to add how valuable this is to blockchain operations.

    That should send the VC vultures soaring.

  • Gotta Show proof! (Score:2, Interesting)

    by SirAstral ( 1349985 )

    This is likely a patent gimmick: put together something that "sounds" like it might work and patent it. Just another example of the problems with our IP system.

    If these guys have proof, which is not likely, then we finally have one of the pieces needed for constructing actual AI. AI has a requirement for being able to rewire itself... not just being able to re-code itself. Humans rewire, re-code, create, destroy, repair, and replicate... until machines are capable of this we cannot get started. There are likely m

    • They're just making hardware that accelerates certain types of calculations. Just like iPhones have Core ML3 chips now.
    • by Pulzar ( 81031 )

      >>This is likely a patent gimmick.. put together something that "sounds" like it might work and patent it. Just another example problem with our IP system.

      If these guys have proof, which is not likely, then we finally have one of the pieces needed for constructing actual AI.

      What proof are you talking about? I mean, they have working silicon... and there are plenty of FPGAs which "rewire" themselves the same way.

      The point of this is to increase the utilization of the math units; it doesn't do any other

    • AI has a requirement for being able to rewire itself... not just able to re-code itself. ...until machines are capable of this we cannot get started.

      Your statement is based on meaningless semantics. What this article is discussing must still be a Turing machine, and nothing more. Thus everything they are doing can be simulated purely in code. The only difference is run-time performance, and as I've stated many times over many years, performance is not the reason we do not have demonstrable true AI or self-awareness/consciousness. Whether an AI takes 2 days or 2 seconds to formulate a response to a human interaction or question is inconsequential in

  • As other comments say, FPGA, Transmeta.

    Further, this is how all modern computers already work. We use high-level programming languages whose instructions are translated into groups of smaller instructions, and then we pass those instructions to processors which decompose them into yet smaller instructions that are executed by the functional units... often in parallel. (A toy lowering pass in this spirit is sketched at the end of this comment.)

    One day when/if we have holographic, three-dimensional optical computing blocks then reconfiguration might actually make sense. But we're still
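    The toy lowering pass mentioned above (plain Python; real compilers and CPU decoders do this in far more elaborate ways):

        def lower(expr):
            # expr is nested tuples like ("add", ("mul", "a", "b"), "c");
            # the output is a flat list of three-address ops, the "smaller instructions".
            ops, counter = [], [0]

            def walk(e):
                if isinstance(e, str):              # a plain operand
                    return e
                op, lhs, rhs = e
                left, right = walk(lhs), walk(rhs)  # lower the sub-expressions first
                dest = f"t{counter[0]}"; counter[0] += 1
                ops.append((op, left, right, dest))
                return dest

            walk(expr)
            return ops

        print(lower(("add", ("mul", "a", "b"), "c")))
        # [('mul', 'a', 'b', 't0'), ('add', 't0', 'c', 't1')]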

    • by jythie ( 914043 )
      But now they are throwing machine learning at it!

      To be fair though, a lot of advancements in tech come out of reexamining old ideas and seeing if modern techniques or horsepower are enough to overcome their initial limitations.
  • Moving forward we will harness our ki in a disruptive manner to focus our core synergies in computer blockchain crypto moonbuggie dinosaur. Now where are my millions in VC funding?
  • by meerling ( 1487879 ) on Saturday February 29, 2020 @08:40PM (#59783142)
    They mean Von Neumann Architecture, which is what our computers currently use.
    A Von Neumann Machine is a theorized machine that self replicates. Those don't yet exist outside of scifi.
    Funny how much the meaning changes when someone gets one word wrong.
    • by PPH ( 736903 )

      a theorized machine that self replicates. Those don't yet exist outside of scifi.

      That pile of old PCs in my garage which seems to grow on its own begs to differ.

    • Sorry to burst your bubble.
      There are two "von Neumanns":

      One is the namesake of the "von Neumann" architecture, which covers every processor with a fetch, decode, execute instruction cycle.

      The other is the one who coined the idea of self-replicating machines.

  • Rather than stuffing a program's instructions into a fixed set of logic gates permanently etched into the processor, the processor re-arranges its circuits, perhaps every clock cycle, to variably manipulate large amounts of data that "flows" through the chip....

    No matter how I read or think about this I can't see it as anything other than gibberish that sounds vaguely like new-age mysticism. At the end of the day your processor is simply going to be a state machine that has to have well-defined steps from one state to another. Even quantum computing has this requirement.

    There is always the issue of how to do this more efficiently. How to get from one state to another useful state with fewer steps or consuming less power. But there ain't no magic and if there

    • It's possible. It's just not efficient. Think of it this way. Would you like a transformer plane that turns into a robot? Well maybe. Except it's not going to be as good at being a plane as a dedicated plane, or as good at being a robot as a dedicated robot.

      Similarly, while I'm sympathetic to the concept, this just isn't going to be as good at any one thing as dedicated hardware that does that one thing. Transformable chips are only any good if you don't know what you want to do with them.

  • So it's Starbridge Systems again? I mean, that was ages ago that they supposedly had on-the-fly reconfiguring computers that you could shoot holes in and they would keep on ticking.

  • by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Saturday February 29, 2020 @09:35PM (#59783210) Homepage

    I know it's a bit pedantic, but it's kinda misleading to call chips that rearrange themselves (FPGAs etc.) non-Von Neumann architectures. I mean, the configuration of the chip is just another state of the machine, not fundamentally different from cache coherence and ordering guarantees.

    I mean, either we should already be calling GPUs and (current multi-core) CPUs non-Von Neumann machines, or call this one one as well.

    • This appears not to be as programmable as an FPGA; it's more like a programmable bus. That's why they talk about filtering and data flows; it would be optimized in specific ways that make it unsuitable as a general-purpose computer.

      • Yah, I understand that. I'm just objecting to the use of Von Neumann architecture to mean 'general-purpose programmable computer.' Whether or not you are a Von Neumann machine is about whether the architecture unifies code and data (instructions are another type of data), and stuff like 'acts like a programmable bus' is too high-level for this distinction.

        I know that's how we appear to use it now so I'm just grumbling.

  • They had fine-grain run-time reconfiguration as standard.

  • Something I always found rather fascinating about tech: how hard it is to tell whether something is just a buzzword-laden, VC-flattering revisiting of past failures, or a real, honest reexamination of something that was not previously viable but that modern advances might actually make work. I suspect this is more the former than the latter in this case, though.
  • Seriously?

    This is such ancient stuff, I learnt about it in university, and that was when we still wrote 19 in front of our years.

    Oh look, even WP is up to it: https://en.wikipedia.org/wiki/... [wikipedia.org]

    Literally, I've always wondered why so few of these machines actually exist, so I wish them the best of luck, but /. should know better than to claim this is some new invention.

  • that runs Ada really, really well.
  • by Casandro ( 751346 ) on Sunday March 01, 2020 @05:03AM (#59783720)

    Data-flow-based computing is actually even older than our current digital computers. Pure data flow is hard to manage, so there have been numerous ways to mix it with the flexibility of general-purpose computers.

    One rather successful approach was vector computers, where an instruction sets up a computing pipeline which, in parallel, gathers the data, feeds it through the ALUs and stores the results (a one-line illustration appears at the end of this comment). Eventually RAM became the bottleneck, so the advantages became less and less important, but in the 1980s and 1990s this was how most high-performance computing was done.

    Another way was the Transputer. You had lots of tiny little CPUs, each with their own RAM and high-speed (for the time) interfaces between them. You could then spread your computation out over a herd of those. One interesting aspect was that individual Transputers could be so cheap you could use them for embedded systems, which would have created sales volumes making them even cheaper.

    The reason those ideas don't matter as much as they used to is that RAM hasn't grown in speed as much as logic has. Also, the control logic of a simple CPU isn't as big a deal as it used to be: a 32x32-bit hardware multiplier is easily more complex than a whole 6502. Going the FPGA route, on the other hand, means that you will need to drive fairly long wires inside your chip, and charging and discharging those takes time and energy.

    So I call BS on this, but of course in an age where banks don't pay interest any more investors will pump their money into anything.
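    For the vector-computer paragraph above, one numpy line captures the flavor: a single vector "instruction" implies a whole pipeline of loads, multiply-adds and stores, instead of one scalar operation per element (illustration only):

        import numpy as np

        x = np.arange(1_000_000, dtype=np.float64)
        y = np.arange(1_000_000, dtype=np.float64)
        z = 2.5 * x + y    # one vector operation sweeps a million elements through the ALUs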

    • My guess is that the collective memory of the field spans a shorter time than the memory of some individuals still in the field.
      Therefore, the same roads will be traveled over and over again.

      • Well "Great minds think alike", and it's not uncommon for multiple people to get the same idea independently.

        Now if you found a startup, chances are that you haven't thought a lot about what you are doing. You'll usually be mostly occupied with dealing with investors.

  • Ones that support open standards like Freesync.
  • While this hardware sounds spiffy, I'm trying to envision the debugging process for a system whose configuration changes with every instruction.
    It seems a bigger challenge than the hardware.

  • Alexia pioneered things like self-modifying code and superoptimization (some papers were written under Henry Massalin before her transition). Combine that with Transmeta tech and JIT compilation from projects like Clang or even Lua, apply it to Epiphany-style parallel computing supplemented by FPGAs, and you might get something useful to a niche market... DARPA may fund it.
  • I got a pitch for a very similar idea at SC2000 (yes, 20 years ago). There seemed to be a lot of very important but unanswered questions about how it was going to actually work. Notably, no sign of an actual production rollout.

    The same questions seem unanswered now. Others here have already done a pretty good job asking those questions.

  • This startup "says" their chip is fundamentally new.

    If you've got to say it in your press release, it's called hype.

    If other people say it on their own, then you've got something.
