DARPA Funds Development of New Type of Processor (eetimes.com)
The Defense Advanced Research Projects Agency (DARPA) is funding a completely new kind of non-von-Neumann processor called a HIVE -- Hierarchical Identify Verify Exploit. According to EE Times, the funding is to the tune of $80 million over four-and-a-half years, and Intel and Qualcomm are participating in the project, along with a national laboratory, a university, and defense contractor Northrop Grumman. From the report: Pacific Northwest National Laboratory (Richland, Washington) and Georgia Tech are involved in creating software tools for the processor, while Northrop Grumman will build a Baltimore center that uncovers and transfers the Defense Department's graph analytic needs for what is being called the world's first graph analytic processor (GAP). Graph analytic processors do not exist today, but they theoretically differ from CPUs and GPUs in key ways. First of all, they are optimized for processing sparse graph primitives. Because the items they process are sparsely located in global memory, they also involve a new memory architecture that can access randomly placed memory locations at ultra-high speeds (up to terabytes per second). Together, the new arithmetic-processing-unit (APU) optimized for graph analytics plus the new memory architecture chips are specified by DARPA to use 1,000-times less power than today's supercomputers. The participants, especially Intel and Qualcomm, will also have the rights to commercialize the processor and memory architectures they invent to create a HIVE. The graph analytics processor is needed, according to DARPA, for Big Data problems, which typically involve many-to-many rather than the many-to-one or one-to-one relationships for which today's processors are optimized. A military example, according to DARPA, might be the first digital missives of a cyberattack.
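To make the summary's "sparse graph primitives" point concrete, here is a minimal sketch (a toy data layout, not anything from DARPA's actual design) of why graph traversal stresses the memory system rather than the arithmetic units: each hop reads neighbor lists scattered across memory, so performance is bound by random-access bandwidth rather than FLOPS.

    import collections

    # Compressed-sparse-row adjacency for a toy 4-vertex graph: the
    # neighbors of vertex v live at adj[offsets[v]:offsets[v+1]].
    # On a large sparse graph these reads land at effectively random
    # addresses -- the access pattern HIVE's memory system targets.
    offsets = [0, 2, 3, 5, 5]
    adj = [1, 2, 3, 0, 3]

    def bfs(source):
        seen = {source}
        frontier = collections.deque([source])
        order = []
        while frontier:
            v = frontier.popleft()
            order.append(v)
            for w in adj[offsets[v]:offsets[v + 1]]:  # scattered reads
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
        return order

    print(bfs(0))  # [0, 1, 2, 3]

Almost no arithmetic happens per byte fetched, which is why a conventional cache hierarchy, tuned for dense and predictable access, is a poor fit for this workload.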
Re: (Score:2)
Computers are machines that help us think. If we can think better, we can do all the rest of those things you are talking about much, much more easily. I would say more and better computing power is the only thing that is going to elevate us out of the purely biological drive to expand until collapse.
Re: (Score:2)
cure cancer, mitigate climate change
To be honest, this processor design might have applications for both.
Re: (Score:2)
Imagine a Beowulf cluster!
Re: (Score:2)
Imagine a Beowulf cluster!
Is there such a thing any more? I recently had access to a lot of second-hand low-end machines and thought it'd be fun to set up a cluster, but I couldn't find the software - it had vanished into a haze of different distros.
Re: Simple question (Score:1)
Amen. DARPA should instead focus on things that matter, such as studies on gender inequality, and the apparent inability of the English language to express any notion that is not perceived as racist and misogynistic by the media. The government should instead spend time identifying the race, gender, sexual identity, and occupation of everyone alive and focus on how they have been marginalized by evil racist trump supporters who luv Russia and want to destroy mother earth. This is how we should conduct
Re: (Score:2)
I'm not saying your other priorities are unimportant, but you shouldn't expect DARPA to fund non-defense-related research.
Re: (Score:2)
DARPA is an acronym for Defense Advanced Research Projects Agency. They spend our money on defense and weapons research.
And as a bonus, it sounds very close to DERPY.
Re: Simple question (Score:2)
Cancer cures, global warming, Mars colonization. All those are pushing computer simulations to their max today. A better processor that could truly "multitask" would be a huge leap in toolset capability for any of those fields.
Re: (Score:1)
They use big data algorithms for medical research. See "folding@home". There's a lot of data out there that needs to be processed, and processing more medical / chemistry data means trying out more combos and quicker / better results. A customized processor that does a specific type of task far faster than traditional processors is a great investment.
Re: (Score:1)
BTW: "the new arithmetic-processing-unit (APU) optimized for graph analytics plus the new memory architecture chips are specified by DARPA to use 1,000-times less power than using today's supercomputers."
A supercomputer designed to scale better for big data processing, that uses 1000 times less power than current supercomputers. Thus, it needs 1000 times less cooling. Currently we're effectively using networked PCs scaled up to fill data centers for our processing needs. A system designed with big data i
Re: (Score:3)
And the reason it needs government funding is because then it's an open platform that anyone can use, instead of locked down with patent lawsuits for decades. This way, it gets built by the best of the best from multiple companies and it's openly publishable technology. The free market gave you Comcast and Verizon, it's DARPA that gave us the internet in the first place.
If only. Typically as part of the deal, DARPA contractors (like Intel and Qualcomm) are allowed to file and own the patents used to commercialize the technology. That may or may not mean an open commercial platform, but it certainly doesn't mean they won't get to own patents on key parts of the technology to potentially keep competitors at a disadvantage.
Re: (Score:2)
You can have a thousand times more, but you can't have a thousand times less!
Re: (Score:2)
BTW, I have mod points but think you have a fair question.
Now if you are going to question why we nee
Re: (Score:1)
Overtime and contracts to look after the nuclear weapons stockpile.
Ensure the existing weapons work.
Simulate using the existing weapons.
See what upgrades can be done.
Simulate the new upgrades.
Design nuclear weapons systems.
Simulate their use.
Decades of contracts and new work.
Graph databases (Score:1)
Take a look at neo4j.com. When you organize graph-like data as a graph instead of the typical set of relational tables, you can vastly speed up certain kinds of queries, and thinking about the solution becomes much clearer.
This is generic technology with uses far outside military applications. My own needs are for event correlation: finding the cause amongst a lot of data telling you the effects of a system outage.
Marry graph databases to a CPU that is specially tailored for this kind of work and y
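To illustrate the point about graph-shaped queries (a toy sketch only, in plain Python rather than neo4j, with made-up component names): the event-correlation question "what is affected by this failed component?" is a reachability traversal over an adjacency structure, where the relational equivalent needs one self-join per hop.

    # Edges point from a component to the things that depend on it.
    depends_on_me = {
        "db-primary": ["app-server-1", "app-server-2"],
        "app-server-1": ["load-balancer"],
        "app-server-2": ["load-balancer"],
        "load-balancer": ["public-site"],
    }

    def affected_by(failed):
        # Follow dependency edges outward from the failed component.
        stack, hit = [failed], set()
        while stack:
            node = stack.pop()
            for dep in depends_on_me.get(node, []):
                if dep not in hit:
                    hit.add(dep)
                    stack.append(dep)
        return hit

    print(sorted(affected_by("db-primary")))
    # ['app-server-1', 'app-server-2', 'load-balancer', 'public-site']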
HIVE (Score:2)
That's the worst backronym I have ever heard.
Barking up the wrong tree. (Score:1)
The problem we have currently is that we are focused on creating specific types of circuits. What we should be doing is working on dirt-cheap generic circuits that can reconfigure into anything you want, aka neural network chips. The advantage of these is that you can have flaws in the fabrication process and make up for it by just making a shitload of identical neurons. You could even make up for having a low-speed system by having an ungodly number of neurons in a single machine. This opens the door to
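A rough sketch of the redundancy argument (illustrative numbers only, not any real chip design): if a fraction of identical "neurons" come out of the fab dead, an averaged output degrades gracefully instead of failing outright, which is the case for tolerating cheap, flawed fabrication.

    import random

    def redundant_output(x, n=10000, dead_fraction=0.10):
        # n identical neurons each try to output x; a random 10% are
        # fabrication duds stuck at 0. Averaging over all of them
        # degrades gracefully rather than failing outright.
        outputs = [0.0 if random.random() < dead_fraction else x
                   for _ in range(n)]
        return sum(outputs) / n

    print(redundant_output(1.0))  # roughly 0.9 despite 10% dead units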
Those are called FPGAs (Score:4, Insightful)
Generic chips that can be programmed into anything you want in the field. It's a huge industry, they get used in everything from your car to your TV, but they have limitations that mean they are never going to be a be-all, end-all.
There's a place for processors, FPGAs and ASICs, usually all combined.
Re: (Score:3)
The problem is that FPGAs are really expensive. Neural networking chips are going to be the death of FPGAs because they can be made with cheap-o fab systems.
Re: (Score:2, Informative)
No they're not because FPGAs and neural networks are different architecturally and serve different purposes.
Re: (Score:2)
Neural networks don't work for generic tasks. Try to sort a bank's records with a neural network, or try to use them to display a UI description.
Re: (Score:2)
Wow, you aren't even trying to hide your ignorance.
Re: (Score:2)
Please explain how a neural network would sort bank records. Or how to implement Excel using a neural network. Or ...
Re: (Score:2)
Forget implementing Excel, you could run the damn original by creating an x86 processor in a neural network. You do realize that brains are neural networks, right?
Re: (Score:2)
You do realize that our brains are really, really imperfect: forgetful and illogical?
Re: (Score:2)
Is this your way of admitting that you know jack shit about neural networks?
Re: (Score:2)
I don't know how you think about our brains, but implementing an x86 in them is not a great idea. That's much better done in silicon, where the components are reliable, small and fast. The fastest signal in our CNS travels at about 20m/s.
Re: (Score:2)
There's a 0 missing, but you get my drift.
Re: (Score:2)
I don't know how you think about our brains, but implementing an x86 in them is not a great idea.
you're a step behind this fellow. [slashdot.org]
Re: (Score:2)
Why would you want to do that?
Re: (Score:2)
I never said you would, I said you could. Know the difference.
Re: (Score:2)
1 - If you only said "could" knowing that it's not really a good solution, why post it at all?
2 - Given that you seem to agree that it is not a good solution (e.g. not efficient), what is your answer to the poster that asked you about how you would sort bank records? (the implied full question of course is "how would you do it efficiently with NN compared to current
Re: (Score:2)
1 - If you only said "could" knowing that it's not really a good solution, why post it at all?
To make a point about the flexibility of NNs, duh.
2 - Given that you seem to agree that it is not a good solution (e.g. not efficient), what is your answer to the poster that asked you about how you would sort bank records?
Processors designed as NN chips will be faster and scale to new heights. Also, sorting bank records is actually something trained NNs would kick ass at.
Re: (Score:2)
> To make a point about the flexibility of NNs, duh.
You don't seem to get that for a neural network there is no difference between memory and processing.
> sorting bank records is actually something trained NNs would kick ass at.
I really would like a demonstration of that. Input: a few million bank records. Output: the same bank records, ordered according to some criterion like SSN, bank account or credit.
But you're a troll, aren't you?
Re: (Score:2)
No, they're not. A brain is a neural network as much as a tree (plant) is a tree (data structure).
I say that with a bit of rhetorical fun, but artificial neural networks aren't actually made from neurons. They don't get drunk and have sex (in no particular order), they don't have a chemical nature and unknown features, nor do the computer "neurons" have DNA.
Raytracing anyone? (Score:1)
On the minus side, they will create Skynet or something more insidious, with slimy humans in charge.
On the plus side, isn't this how you would make hardware ideal for a raytracer? Fixated on memory, a big heap of global memory (you can't really dice up the memory locality too much; rays have to go all over the whole scene) with huge bandwidths flying everywhere, and you just want to do zillions of intersections.
The "Graph Acceleration Processor" could operate on e.g. a sparse octree, to give a simple example?
I woul
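For what it's worth, a sketch of the core operation the parent has in mind (a standard slab-test ray/box intersection, not tied to any HIVE design): descending a sparse octree repeats this test on nodes scattered across memory, which is the same chase-pointers-into-random-memory workload the summary describes.

    # Slab test: a ray hits an axis-aligned box iff the parameter
    # intervals where it lies between each pair of parallel planes
    # all overlap.
    def ray_hits_box(origin, direction, box_min, box_max):
        t_near, t_far = float("-inf"), float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if d == 0.0:
                if not (lo <= o <= hi):
                    return False  # parallel to this slab and outside it
                continue
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far or t_far < 0.0:
                return False
        return True

    print(ray_hits_box((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True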
It's an accelerator chip (Score:4, Informative)
An application-specific integrated circuit, or ASIC. Not a new type of CPU.
Re: (Score:2)
My distinction is not at all arbitrary. An "application processor" is the same thing as an auxiliary CPU; they'll lack things like PCIe lanes and paged memory management. Compare to some ARM chips that'd have a crippled 100% Turing-complete ARM core next to a proper CPU for "media processing", basically as an extra FPU to decode video, but unable to execute POSIX software (or a real OS kernel) for having no MMU.
Re: (Score:2)
It's an ASIC in the sense that a CPU is an ASIC.
Re: (Score:2)
Programmability is commonplace in accelerator chips. For example, GPUs. The article's chip cannot support an operating system on its own.
Awesome! (Score:2)
Content Addressable File Store (Score:5, Interesting)
This reminds me of the Content Addressable File Store [wikipedia.org] that ICL developed some 50 years ago. OK: different implementation, but today a huge amount of RAM is affordable whereas CAFS needed to search for the data on disk.
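For readers who haven't met CAFS: the idea was pushing selection-by-content down into the disk controller, so records were filtered as they streamed off the platters. In modern in-memory terms it amounts to a predicate scan (a rough sketch, assuming simple dict-shaped records):

    # CAFS-style retrieval: select records by what they contain,
    # not by where they are stored. CAFS did this in hardware at
    # the disk; with cheap RAM it's a linear scan over memory.
    records = [
        {"name": "ada", "dept": "research"},
        {"name": "bob", "dept": "ops"},
        {"name": "eve", "dept": "research"},
    ]

    def select(predicate):
        return [r for r in records if predicate(r)]

    print(select(lambda r: r["dept"] == "research"))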
Re: (Score:2)
This is what happened with mini-computers. They typically were designed for business processing workloads and had features (e.g. special disk capabilities for hi
Re: (Score:2)
Why would you compare to a general-purpose computer? This will be a special-purpose computer, the relevant comparison is to the performance of other architectures in the same problem domain.
In a similar vein, the operating system itself doesn't necessarily need to leverage the new architecture particularly well, except for the performance-critical subsystems such as memory management - and even that might be handled primarily by the client software. Heck, initially it might not even use an operating syste
non-specific failure mode (Score:2)
This kind of weird name is given to pie-eyed future technology projects so that when the dust settles no-one really knows precisely what didn't pan out.
Because odds are, they're going to have to fund this again—with an inkling of clue & a vaguely comprehensible name—before this twinkle finally deposits a nugget, third time lucky.
What else were my professors wrong about? (Score:2)
Back when I was in university for engineering, I had a professor tell us that there was nothing better than Von Neumann architecture so don't bother looking for it. One has to wonder what else university professors are wrong about.