HPE Unveils The Machine, a Single-Memory Computer Capable of Addressing 160 Terabytes (venturebeat.com)
An anonymous reader quotes a report from VentureBeat: Hewlett Packard Enterprise announced what it is calling a big breakthrough -- creating a prototype of a computer with a single bank of memory that can process enormous amounts of information. The computer, known as The Machine, is a custom-built device made for the era of big data. HPE said it has created the world's largest single-memory computer. The R&D program is the largest in the history of HPE, the former enterprise division of HP that split apart from the consumer-focused division. If the project works, it could be transformative for society. But it is no small effort, as it could require a whole new kind of software. The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over -- or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing, HPE said. Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory -- 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.
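For anyone who wants to sanity-check the press-release arithmetic, a quick back-of-the-envelope sketch in Python; the ~16 ZB "digital universe" figure is the commonly quoted 2016/2017 estimate, not something from TFA:

```python
# Rough sanity check of the summary's numbers, decimal units throughout.
TB, ZB, YB = 10**12, 10**21, 10**24

prototype = 160 * TB                       # memory in the prototype
books = 160e6                              # "approximately 160 million books"
print(prototype / books / 1e6, "MB/book")  # ~1 MB per book, plausible for plain text

pool = 4096 * YB                           # claimed ceiling of the architecture
digital_universe = 16 * ZB                 # oft-quoted ~16 ZB estimate (assumption)
print(pool / digital_universe)             # ~256,000 -- roughly the "250,000 times"
```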
Re:Does it Run Linux? (Score:5, Funny)
Finally, enough RAM for Firefox!!
Re:Does it Run Linux? (Score:5, Funny)
But it still falls short of what Chrome needs.
Re: (Score:2)
Re: (Score:2)
In Soviet Russia, the largest single-memory computer uses 64KGB segments.
FTFY
Re: (Score:2)
Re: (Score:2)
Assuming you have flash enabled...otherwise a Beowulf cluster is required.
Re: (Score:3)
Re: (Score:3)
Yea, but how many cat pictures do you need open at the same time?
Re: (Score:3)
Yea, but how many cat pictures do you need open at the same time?
All of them, at once... obviously... You just can't have too many cat pictures...
Re: Does it Run Linux? (Score:1)
Wrong!
The correct question is: "does it run Crysis?"
Re: (Score:2)
Re: (Score:1)
Actually, the first question should be: would this be enough to hold all the Internet's porn?
Re: (Score:2)
no
Which CPU (Score:2)
My question is different: which CPU does it use? Xeon? Or is HP trying to leverage what's left of the Itanium? And if it's Itanium, I doubt it'll be Linux: HP-UX would be the only game in town. Linux abandoned it long ago, and even FreeBSD didn't port their LLVM/Clang compiler to that platform.
Re: (Score:1)
It is a massive NUMA thing. It uses a ridiculous number of special ARMv8 cores. RTFA and all that jazz...
Re: (Score:3)
Does it run Linux? That's the first question.
Only.
The second, is this like 10 years out?
Multiple vendors already sell servers with 64 TB of RAM, and expanding further was blocked by the lack of 5-level paging. Patches to add it have been floating around on LKML for a while, so hardware that can go beyond that should be well past the prototype stage.
On the other hand, all patches I've seen are for x86, and this is arm64, so I'm apparently missing something.
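Rough address-space arithmetic behind that, as I understand the x86-64 layout (the 64 TiB figure is the size of the kernel's direct map of physical memory under 4-level paging):

```python
# Virtual/physical address-space sizes with 4- vs 5-level paging on x86-64.
def tib(bits):
    return 2**bits / 2**40          # bytes -> TiB

print(tib(48), "TiB virtual with 4 levels")       # 256 TiB (48-bit VA)
print(tib(46), "TiB direct map of physical RAM")  # 64 TiB -- the old ceiling
print(tib(57) / 1024, "PiB virtual with 5 levels")        # 128 PiB (57-bit VA)
print(tib(52) / 1024, "PiB addressable physical memory")  # 4 PiB
```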
Re: (Score:1)
My question is how many floppies do "4,096 yottabytes" take?
Re: (Score:3)
1 yottabyte = 10^24 bytes (or 2^80, if you prefer binary).
4,096 YB / 1.44 MB = (4,096 / 1.44) x 10^18 = 2,844.4444 x 10^18
So, basically 2,844,444,444,444,444,444,444 floppies.
The weight of one floppy is 19g, in case anyone wants to do the conversion to VW Beetles.
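A quick sketch of the arithmetic; the ~890 kg Beetle curb weight is my own assumption:

```python
# Floppy (and Beetle) arithmetic, decimal units.
YB, MB = 10**24, 10**6
floppies = 4096 * YB / (1.44 * MB)
print(f"{floppies:.3e} floppies")              # ~2.844e+21

beetles = floppies * 0.019 / 890               # 19 g per disk, ~890 kg per Beetle
print(f"{beetles:.3e} VW Beetles")             # ~6.1e+16
```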
Re: (Score:2)
Guess I need to buy another box
Re: (Score:2)
"The Machine" could they get any more non-descript (Score:5, Interesting)
> it could require a whole new kind of software.
Huh? You mean it's not a von Neumann or Harvard architecture? Because the article doesn't lead me to _that_ conclusion:
So basically 4 TB / node. Does each node have independent memory or not?
Re: (Score:3)
I would wager that each node lives in some subregion of the memory address space, and that each OS instance (or one giant distributed OS) accesses all addresses uniformly.
It's certainly not infeasible even without memristor tech. But I wonder what benefits it has. The whole point of having localized nodes is to avoid the travel latency. Unless this is optimized specifically for embarrassingly parallel data feed-forward tasks, which even modern GPU workloads aren't anymore.
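If that guess is right, the global-address-to-node mapping could be as dumb as the sketch below. This is entirely hypothetical, just to make the "subregion" idea concrete; nothing here comes from HPE.

```python
# Hypothetical flat address space: 40 nodes, ~4 TiB of local memory each,
# with each node owning one contiguous slice of the global range.
NODE_MEM = 4 * 2**40        # bytes per node (approximating "4 TB / node")
NUM_NODES = 40

def home_node(addr):
    """Node whose local memory backs this global address."""
    return addr // NODE_MEM

def local_offset(addr):
    """Offset within that node's slice."""
    return addr % NODE_MEM

addr = 137 * 2**40          # an address ~137 TiB into the pool
print(home_node(addr), hex(local_offset(addr)))   # node 34, offset 0x10000000000
```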
Re: (Score:3)
Being able to do an operation on an entire huge dataset in memory, instead of a pile of fetching and carrying to do it on disk.
Since the alternative is an order of magnitude (or several) slower, a bit of latency isn't a terrible price to pay.
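To put "an order of magnitude (or several)" in numbers, the usual ballpark latencies look something like this; the fabric-hop figure is my guess, not an HPE spec:

```python
# Commonly cited order-of-magnitude latencies (rough, not vendor numbers).
latency_ns = {
    "local DRAM":          100,
    "remote-node memory":  500,         # guess for a memory-fabric hop
    "NVMe SSD read":       100_000,
    "spinning-disk seek":  10_000_000,
}
hop = latency_ns["remote-node memory"]
for tier, ns in latency_ns.items():
    print(f"{tier:>20}: {ns:>12,} ns  ({ns / hop:,.0f}x a fabric hop)")
```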
Re: (Score:2)
The critical number missing from TFA is the memory access speed at the various tiers of the NUMA hierarchy.
Take a 4GHz computer. How far can a memory access go in one cycle given the speed of light? The answer is "not even to the other side of a 19 inch server rack. Not even halfway across a laptop." You can fetch cache lines in bulk, sure, but at some point this fact will intrude into your code, demanding you keep local registers local and tightly coupled calculations on physically close nodes... we can't tell how d
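The parent's light-speed point, worked out (vacuum light speed; real signals in copper or fiber are slower still):

```python
# Distance light covers in one cycle of a 4 GHz clock.
c = 299_792_458             # m/s
f = 4e9                     # Hz
per_cycle = c / f
print(f"{per_cycle * 100:.1f} cm one-way per cycle")       # ~7.5 cm
print(f"{per_cycle * 100 / 2:.1f} cm there-and-back")      # ~3.7 cm
# A 19-inch rack is ~48 cm wide, so a single-cycle round trip doesn't
# even get you across one chassis, let alone across a room of nodes.
```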
I should have put it in one line (Score:2)
Multiple nodes are certainly not as fast as having it all on one board, but try reading that second line to find out why it's still useful.
Re: (Score:2)
I did read your second sentence. It seemed like a pretty throwaway aside, given this is supposedly more than just a big fast disk.
Re: (Score:1)
Access to memory on a remote machine is a great deal faster than access to disk when network speed is not the limiting factor.
I thought it was kind of obvious to anyone who would want to comment on this article but it appears I was wrong.
Re: (Score:3)
AI using multidimensional data sets. I work with cubes in the tens of terabytes that could be sped up thousands of times if they could be held in memory.
Re: (Score:2)
AI using multidimensional data sets. I work with cubes in the tens of terabytes that could be sped up thousands of times if they could be held in memory.
Indeed. I wonder how useful it would be for someone like the NSA or NRO for analyzing large datasets in near-realtime like, for instance, all the cellphone communications "metadata" (and contents?) in an area and cross check it against other datasets to destroy privacy, reveal networks of association of political/ideological opponents, etc etc? "Predict" crime a la 'Minority Report'?
Seems like just the kind of cutting-edge mass-data analysis technology the leaders of a surveillance state would soil themselves over.
Re: (Score:2)
I wonder how useful it would be for someone like the NSA or NRO for analyzing large datasets in near-realtime like, for instance, all the cellphone communications "metadata" (and contents?) in an area and cross check it against other datasets to destroy privacy, reveal networks of association of political/ideological opponents, etc etc? "Predict" crime a la 'Minority Report'?
Well, they did call it The Machine [wikia.com], so I assume they're trying to make it easy for the government to connect the dots on that idea.
Re: (Score:2)
Huh? You mean it not a von Neumann or Harvard architecture because the article doesn't lead me to _that_ conclusion:
I think what HP means is that you no longer have to compress/pack your database tuples into 4K-sized pages, because they "just stay in memory." The same goes for other formerly disk-based structures like B-trees and such. Also, changes in latencies on their own might change algorithm preferences massively.
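A toy contrast between the two styles, just to make the point concrete; the record layout and sizes below are made up for illustration:

```python
# Disk-era style: squeeze fixed-width tuples into 4 KB page buffers.
import struct

REC = struct.Struct("<q32s")            # (id, fixed-width name) = 40 bytes
PAGE = 4096
PER_PAGE = PAGE // REC.size             # 102 tuples per page

def pack_page(rows):
    buf = bytearray(PAGE)
    for i, (rid, name) in enumerate(rows[:PER_PAGE]):
        REC.pack_into(buf, i * REC.size, rid, name.encode()[:32])
    return bytes(buf)

rows = [(i, f"row{i}") for i in range(1000)]
page0 = pack_page(rows)                      # old world: pages, packing, offsets
table = {rid: name for rid, name in rows}    # memory-driven world: the structure just lives in memory
print(len(page0), len(table))
```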
Re:"The Machine" could they get any more non-descr (Score:5, Interesting)
It seems to imply more than just persistent memory, though. It sounds like they're distributing processors in the data-path of the connected memory. Instead of the OS determining which context to put on a CPU and fetching the necessary data from memory/disk, the context and code will be decided by what data resides in memory that is closest to the processor node.
A rather natural result of persistent, high-capacity memory for non-interactive compute tasks.
Re: (Score:2)
Re: (Score:2)
Doubtful.
Read-ahead protocols allow you to identify further data sets and bring them in and out of memory faster than algorithmic performance. The fastest pattern is a giant linear read, and you can issue a DMA to read in the next several hundred megabytes and expire the prior without the CPU being further involved.
Algorithms that process more-complex data sets generally need instrumentation code to identify where the next addresses are, which can be ordered to occur before processing: instead of identify an array of 300, process it, then read off the next address and move your attention there, you would identify the array of 300, skip it, read the next address, issue the read-ahead, and process. This ordering only really adds the call for read-ahead (an OS madvise() call, really) on top of all other work.
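A minimal sketch of that "issue the read-ahead, then process" pattern on Linux (Python 3.8+ exposes madvise on mmap objects); the file name, chunk size, and process() stand-in are made up for illustration:

```python
import mmap

CHUNK = 256 * 1024 * 1024                  # 256 MiB windows

def process(buf):                          # stand-in for the real per-chunk work
    return sum(buf[::4096])                # touch one byte per page

total = 0
with open("huge_dataset.bin", "rb") as f:  # hypothetical input file
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    for off in range(0, len(mm), CHUNK):
        nxt = off + CHUNK
        if nxt < len(mm):
            # Ask the kernel to start faulting in the *next* window now...
            mm.madvise(mmap.MADV_WILLNEED, nxt, min(CHUNK, len(mm) - nxt))
        # ...while the CPU chews on the current one.
        total += process(mm[off:off + CHUNK])
    mm.close()
print(total)
```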
Re: (Score:2)
Read-ahead protocols allow you to identify further data sets and bring them in and out of memory faster than algorithmic performance. The fastest pattern is a giant linear read, and you can issue a DMA to read in the next several hundred megabytes and expire the prior without the CPU being further involved.
Yes, because it hides the fact that the smallest block you can fetch is hundreds of bytes in size at least, and possibly several kilobytes.
Algorithms that process more-complex data sets generally need instrumentation code to identify where the next addresses are, which can be ordered to occur before processing: instead of identify an array of 300, process it, then read off the next address and move your attention there, you would identify the array of 300, skip it, read the next address, issue the read-ahead, and process. This ordering only really adds the call for read-ahead (an OS madvise() call, really) on top of all other work.
And how does that help you with data structures in which the access sequence is data-dependent even over smaller pieces of data? Spatial trees, for example? Unless of course you're tacitly limiting yourself to all the others that aren't. And madvise, isn't that for memory-mapped files on block devices? Since I don't see how madvise could tweak CPU cache logic which is a
Re: (Score:2)
And how does that help you with data structures in which the access sequence is data-dependent even over smaller pieces of data?
Generally, if you're scattering over different row selects in RAM, you stall the CPU for about 200 FSB cycles, or 2,000 core cycles with a 10x multiplier, when you jump around in RAM. That means if the data is all in RAM to begin with and you spend 20 cycles processing, then jump to some data 40 megabytes away, you spend roughly 99.0099% of your time stalled waiting on a CPU cache miss. To get around this, you'd have to use CPU prefetch instructions to load the upcoming data into L1.
Access structures as such tend
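For the record, the 99.0099% figure above is just:

```python
stall = 2_000                    # ~200 bus cycles x 10 core-clock multiplier
work = 20                        # cycles of useful processing between jumps
print(f"{stall / (stall + work):.4%}")   # 99.0099% of time stalled on the miss
```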
Re: (Score:2)
Hey, it sure as hell worked for Pink Floyd.
Re: (Score:1)
This page:
https://news.hpe.com/memory-driven-computing-explained/
has more helpful information about how the architecture works. It's neat.
Re: (Score:2)
Who cares what it runs, the NSA has already ordered a dozen of them.
In unrelated news, you may want to switch to a minimum password length of 32 characters for any account you care about. Just saying...
Old Cosmos computer at Cambridge (Score:2)
The old version of that machine (more than 10 years ago) used 384 Itaniums with 2 GB of RAM per CPU and custom SGI interconnects, so that the operating system saw one single memory space across all the CPUs.
No big news here.
It looks like HP wants to salvage something from the effort that was put into the whole Itanium business, now that it is being discontinued.
The new version of Cosmos uses x86 CPUs and GPUs as accelerators.
Re: (Score:2)
They call it simply "The Machine" so that they don't have to tell you what it will be used for.
Such a system is used for MASSIVE data collection and data mining of YOU: your every purchase, every movement, every phone call, every photo, text, chat, video, etc.... ALL OF YOU.... mined and mapreduced into various priceless morsels of control they can instantiate over you, never having given you a dime for the "value" (ahem, control and your soul) they reap.
You are getting soooo FUCKED, you, your wife, your children, families and grandkids, yet you still refuse to rise up and do anything about it. Sad, so very sad and stupid you are.
lol. They already do all of that, you numpty.
Re: (Score:3)
Basically they took 10 PCs and put the PC boxes in another box, then labelled that box "The Machine". A box of boxes. It'll change the world!
That's your take-home from this? lol.
Stick to playing with the worms in your garden mate.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It might! One of the infographics on the HPE site claims the population of Earth will be 80 billion by 2020 [hpe.com]. That's gonna necessitate a whole lot of good.
lol. I assume they meant 8 billion. Pretty bad mistake.
Re: (Score:2)
It wasn't a mistake, it's much more ominous than that. Once The Machine goes live, all other machines connected to the (aptly named) Internet of Things will rise up against humanity. Afterwards, the survivors will be used as batteries to power The Machine and others of its kind. They will need approximately 80 billion humans to power the Eight Machines that make up the Council of Kobol. That is your future once this machine goes live. And they even have the gall to tell us outright.
Re: (Score:2)
Re: (Score:2)
I will. Right after I have made clear that I, for one, welcome our new Machine overlord!
Re: (Score:3)
That would make an awesome movie. Just the one, though.
Just great. (Score:5, Funny)
I'll have to allocate an entire 1.6 TB drive for swap space.
Could be worse again (Score:2)
"require a whole new kind of software" (Score:1)
Then it's dead already. Unless it comes with some kind of magical recompiler.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Yes. I fondly remember the Transputer. Brilliant stuff, but no one wanted to learn Occam, one of the most elegant parallel-from-the-ground-up languages I know. So they invented parallelizing compilers and libraries for that. Suboptimal, but given the raw power of this beast, I'm not sure that matters much.
I wonder how long that data takes to load... (Score:1)
160 TB...
32,000 seconds, or just under 9 hours, at 40 Gb/s -- assuming you have a storage array that can saturate that link.
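The arithmetic, for anyone checking (decimal units, single link):

```python
bytes_total = 160e12                       # 160 TB
link = 40e9                                # 40 Gb/s
seconds = bytes_total * 8 / link
print(seconds, "s =", round(seconds / 3600, 1), "h")   # 32000 s, ~8.9 h
```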
Re: (Score:2)
They call it a "fabric" because there are several network connections instead of a single choke point.
I might regret saying this but... (Score:5, Funny)
160 TB of RAM ought to be enough for anybody
Re: (Score:1)
Re: (Score:2)
dear recursivity (Score:2)
Re: (Score:2)
Ob (Score:5, Funny)
It's almost enough to store all the data their keylogger [slashdot.org] stole.
Re: (Score:1)
Ok, sure. But technically, Hewlett Packard Enterprise (HPE) doesn't make laptops. HP Inc. makes the laptops that had the keylogger. They're two different companies. Welcome to 2017.
67% if you're Irish (Score:2)
They cut a turd in two. Now there's two turds!
Bus size (Score:2)
Interesting but, not amazing (Score:5, Insightful)
It would have been a lot more interesting, and a lot more paradigm-shifting, if it were 160 TB of ultra-fast next-gen M.2 sticks with 0 MB of traditional RAM and 0 MB of traditional storage. That would be a truly unique machine to work on. If you read the article, this isn't even a single machine. It's actually 40 nodes with high-speed interconnects. Basically, HP is now running Linux on their VMS clusters.
Jedi master Yotta Byte says... (Score:1)
Track and analyze your life to the smallest fraction we will. Soon. sooooooooon. MMHEHEHEHE!
Interesting article, crappy journalism (Score:1)
The article contradicts itself multiple times.
First, the start of the article (and the summary) says it's a prototype computer with a single bank of memory. Later they report that the machine has the 160 TB spread across 40 nodes. It might be logically contiguous, but it's hardly a "single bank".
Secondly, the start of the article describes the architecture as memory-centric, but HP later states: "the Machine is an attempt to build, in essence, a new kind of computer architecture that integrates processors an
what a colossal waste (Score:2)
Having huge banks of memory and passing them through a "single computer" bottleneck is a colossal waste.
Re: (Score:2)
Bad (Score:1)
If it is anything like the HPs I have owned, some major part will go out in 2 to 3 years.
Remember the Itanic! (Score:1)
Addressing 160 TB (Score:2)
"The Machine"? (Score:2)
You are being watched...
Remember...this is HP (Score:2)
This is the same HP that hasn't come up with a hit since the bubble jet printer, people. The same HP that pushed a cloud computing solution that was so pig-fucking awful that The Onion mocked them about it. [theonion.com] I worked at HP at the time, and I really have to think that The Onion had someone on the inside...because their parody was unbelievably on target. "We have 4G, 5G, 6G...we have all the Gs. We have app." That's literally as bad as what some of the people at HP were saying about it...it defied belief. This
Re:Remember...this is HP (Score:4, Insightful)
Now it's "DIMMs with a little battery stuck on" to handle the "persistency". Hope that's just for the demo.
Re: (Score:2)
If they're really good, their architecture will handle Intel's 3D XPoint DIMMs, too.
obligatory (Score:2)
In Soviet Russia, 160 terabytes *IS* you. And yet, so true.
IBM POWER and iSeries? (Score:2)
Books? (Score:3)
Seriously, are we still using books as a unit of comparison? Why not say it can process 80% of the internet, etc.?
Re:Books? (Score:4, Informative)
Seriously, are we still using books as a unit of comparison? Why not say it can process 80% of the internet, etc.?
Yes, and there are two related reasons. First, the LoC is a very large amount of data. It's not the kind of data that can land on a USB stick; it's enough to actually prove something.
Second, it's a known quantity of data. Even if it's approximate, it's a set amount of books, with a set amount of pages. Can we really count the amount of data on the internet? Let's establish a baseline - what constitutes "the internet" in terms of storage? Every website ever? What about apps and the data they create - do we include those databases because mobile apps use them? How many companies will volunteer how big those databases are? GoDaddy will probably be able to more-or-less say how much data they host, but how much of it is active data - does it have to be served up to count? Similarly, does this include Dropbox data that's technically accessible, but only to its end user? If so, what about end users who own their own Synology boxes and back up their pictures to it over the internet? Does the data on those home NAS units count? Do we limit protocols to HTTP, or are we also talking about FTP sites, NNTP servers (do we count the total amount of Usenet data, or does each company who peers that data count separately?), and data available via torrents? What about e-mail - does e-mail count if it's stored on a server and accessible via a web browser? What if it's only accessible via POP/IMAP?
Even if *you* came up with a number that includes what you deem appropriate for '80% of the internet', it's not going to translate well. If your metric was "anything that is accessible from a computer and isn't behind a login prompt", that's going to be different than someone who says that Dropbox counts, which doesn't fit your criteria - undoubtedly petabytes of difference, making the measurement irrelevant.
very neat (Score:1)
Re: (Score:2)
All of the memory is non-volatile.
but (Score:2)
But can it run Crysis?
Re: (Score:2)
But can it run Crysis?
In 1080p with all sliders set to low... After all, I didn't see a 3-way SLI GPU as part of the specs...
Finally, a Machine that CYC can run on! (Score:2)
Memtest (Score:2)
A new kind of software? (Score:2)
I asked the technical lead, Kirk Bresniker (chief architect at Hewlett Packard Labs), about this exact thing at the launch yesterday, and he said no, you should be able to use conventional software (I specifically asked about Python), with the speed-up occurring under the hood.
I am not entirely convinced that it will be that easy...
Cray computers (Score:1)
Re: (Score:2)
The write speeds are awesome. Plus it's webscale because it doesn't use joins.
Re: (Score:2)