HP Unveils 'The Machine,' a New Computer Architecture

pacopico writes: HP Labs is trying to make a comeback. According to Businessweek, HP is building something called The Machine. It's a type of computer architecture that will use memristors for memory and silicon photonics for interconnects. Their plan is to ship within the next few years. As for The Machine's software, HP plans to build a new operating system to run on the novel hardware. The new computer is meant to solve a coming crisis due to limitations around DRAM and Flash. About three-quarters of HP Labs personnel are working on this project.
  • by The123king ( 2395060 ) on Wednesday June 11, 2014 @01:00PM (#47214557)
    What's the point in running a brand new OS on it? Is HP-UX not good enough? Or the many other *NIXes? I'll put money on Linux being ported to it before it even ships to Joe Public.
    • by maliqua ( 1316471 ) on Wednesday June 11, 2014 @01:01PM (#47214573)

      As someone who has worked extensively with HP-UX:

      No, it's not good enough.

      As for other Unixes, well, HP likes to sing its own song, even if it's off key and makes no sense.

    • I love Linux/Unix, but that sounds kind of sad to me.

      • by Immerman ( 2627577 ) on Wednesday June 11, 2014 @02:05PM (#47215489)

        I doubt it; forever is a long time. But I imagine most OSes for centuries to come will have BSD or Linux in their ancestry. It's simply a matter of efficient allocation of resources - a modern OS is a massive, complicated system - why reinvent the wheel when you can adopt extremely flexible existing technology for free? Certainly there may be room for other OSes, but only if you're doing something fundamentally new, and probably initially simple. Otherwise, why waste the resources building something from scratch when you could instead spend those resources refining or replacing the specific bits of a 'nix that *almost* does what you want?

        And actually I find the prospect heartening. In the consumer market early on we had a wide variety of OSes, almost one per machine, and all of them were embarrassingly simple by modern standards and highly incompatible with each other. Then the PC and DOS took over, and it was... well, not *good*, but adequate. And the proliferation of DOS, and later Windows, opened the consumer world to the easy exchange of software and data, rather than being unable to share your C64 stuff with the CP/M user down the street. Obviously all the non-PC users missed out, but they had become a small minority. The only problem was that Microsoft was an expansive monopolistic tyrant, and any time it expanded into new markets it did everything in its power to crush any competition, destroying many good products and companies and leaving us with barely adequate MS products across a wide swath of the lucrative business software market. And of course they ruthlessly defended their core OS market, which was so often the key to crushing their competitors.

        Then Linux grew up, and today we are beginning to have a vibrantly competitive OS market once again. True, it's mostly Linux-based, but Linux has become so flexible that various distros, especially specialty stuff, can bear little resemblance to each other - and yet software built for one distro can generally be recompiled for another with only minimal porting effort. A world of many varied and competing, yet mostly interoperating OSes is within sight.

        Now if we could just settle on some sort of cross-Linux application wrapper, so that something like PortableApps.com could be possible for Linux, I'd be happy. There've been several projects attempting such a thing, but so far none has gained significant traction; the best option so far seems to be to use Windows programs and Wine. I can't tell you how many programs I use that I simply can't find for a modern Linux distro - they get abandoned for one reason or another, and without binary-level backwards compatibility, or someone competent and interested in porting them to each new release, they become practically impossible to run. Meanwhile I can still run those old DOS 2 programs pretty much anywhere with at most an emulation layer.

        • by Lennie ( 16154 )

          One way is to use a Linux container.

          Also look up: Docker

          • Docker is certainly the most promising candidate I've seen in a while, primarily due to its enterprise appeal and the momentum it may gain as such. I looked at it a while back, though, and I seem to remember it being a rather more specific solution than what we'd need for general application portability. It certainly could be a solid core to build off of, though.

            • by Lennie ( 16154 ) on Thursday June 12, 2014 @06:18AM (#47220669)

              I have a feeling that when a large number of projects on GitHub (or the new Docker Hub) include a Dockerfile, and maybe a file for orchestration (like an OpenStack Heat template) to make it even easier to deploy any project, things will really start to take off, even more than they already are, for open source and free software.

        • by lgw ( 121541 ) on Wednesday June 11, 2014 @03:17PM (#47216341) Journal

          Well, we need some evolution, one way or another. The "page file" is a relic of an ancient time, and needs to vanish from the kernel, along with the difficulty of dealing with potential page faults anywhere in your kernel code.

          I suspect they're unifying memory and local storage in a more fundamental way. It would sure make life easier if you could (in user space) just mark some memory as "persistent" when you allocate it, and let the OS worry about caching and performance, but doing that right isn't easy or obvious.

          As "disk" performance gets closer to RAM, new approaches become practical. Previous attempts to unify memory and disk went nowhere as disk was just too slow to take explicit file control away from devs. Previous attempts to do away with directory-based filesystems and go with a sea of tagged documents and a metadata database have crashed on the rocks of low disk performance. But those ideas are good in principle, they just weren't appropriate for actual hardware.

          Fast persistent memory changes what it's practical to do, and fanciful new approaches to the basics of OS design are suddenly no longer academic wankery.

          • Re: (Score:3, Insightful)

            by The123king ( 2395060 )
            The only reason RAM exists is because it's many orders of magnitude faster than "disk". Once non-volatile memory is up to the same speeds as volatile memory, RAM will cease to exist.
            • Re: (Score:3, Insightful)

              by budgenator ( 254554 )

              The only reason RAM exists is because it's many orders of magnitude faster than "disk". Once non-volatile memory is up to the same speeds as volatile memory, RAM will cease to exist.

              That's what they always said, and I suspect always will.

          • Re: (Score:3, Interesting)

            by Anonymous Coward

            Eh? Ever used an AS/400 (System i)? Only the biggest-selling minicomputer system of its time. OS/400 had "flat" addressing, treating disc addresses and memory as the same space. And don't give me that BS about performance - I used to run an entry-level model (9406 E35) supporting >250 green screens with 48 MB (yes, megabytes) of main memory, and still achieve sub-second response times at the terminals. Response times at the Windows PCs were a bit slower, but they were using screen-scraped adaptations of t

            • by Duhavid ( 677874 ) on Wednesday June 11, 2014 @10:31PM (#47219341)

              I was a sysadmin on an AS/400 back in the '90s. I am pretty sure we had an E35 somewhere in the cycle of upgrades. I know we started off with something "lower", but still a 35. I think it was a B35. I would not say it was fast, but it was fast enough.

              We had remote offices, SDLC lines, CSU/DSUs, and workstation controllers. I don't think we had 250 terminals, but we did have more than 100.
              The last upgrade we did was to a PowerPC-based CPU. Ran a tape, swapped a card, instantly faster. The field rep allowed me to do the card swap.

              It was a good machine. The HAL was for everything, not just the OS. When we did the upgrade I spoke of, we didn't have to recompile user apps; the tape loaded the new HAL, I expect.

          • by Uecker ( 1842596 )

            I suspect they're unifying memory and local storage in a more fundamental way. It would sure make life easier if you could (in user space) just mark some memory as "persistent" when you allocate it, and let the OS worry about caching and performance, but doing that right isn't easy or obvious.

            You create and mmap a file. There is your persistent memory.
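
            A minimal sketch of that in C, assuming a POSIX system (the file name and size here are arbitrary illustrations): stores into a MAP_SHARED mapping are written back to the file, so the "memory" outlives the process.

            ```c
            /* Hedged sketch: "persistent memory" as a memory-mapped file (POSIX).
             * The file name and size are arbitrary illustrations. */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                const size_t size = 4096;
                int fd = open("persistent.dat", O_RDWR | O_CREAT, 0644);
                if (fd < 0 || ftruncate(fd, size) < 0)
                    return 1;

                /* MAP_SHARED: stores are written back to the file,
                 * so the contents survive this process. */
                char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
                if (mem == MAP_FAILED)
                    return 1;

                printf("last run wrote: %.32s\n", mem);
                strcpy(mem, "hello from this run");

                msync(mem, size, MS_SYNC);  /* flush to backing store */
                munmap(mem, size);
                close(fd);
                return 0;
            }
            ```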

          • by AmiMoJo ( 196126 ) * on Thursday June 12, 2014 @07:18AM (#47220955) Homepage Journal

            Previous attempts to do away with directory-based filesystems and go with a sea of tagged documents and a metadata database have crashed on the rocks of low disk performance. But those ideas are good in principle, they just weren't appropriate for actual hardware.

            They were always a terrible idea because they don't scale in the human mind. For a music collection you can just about deal with artist name, album name, song name... But even when it comes to things like "genre", how many people can remember if a particular song they want to hear counts as pop, or rock, or soft rock, or maybe it was prog-rock, or is that "prog rock" or "progrock"?

            It gets worse for documents. With a folder system you can drill down. It serves as a memory aid. With tags you need to search and sift through search results unless you can remember the name of that particular thing you needed, or some other fairly unique identifier. I'd contend that tagging is more effort than organizing in folders too, especially if you want to change tags in bulk without separating collections of related documents accidentally.

            There are ways to reduce these problems with fuzzy search terms, hierarchical tags and the like, but they are all just lame attempts to polish a turd.

            • With any form of tagging, classification, or categorisation system 47% end up as "misc" and 63% get filed under "other".

            • by lgw ( 121541 )

              With a worthwhile system, you can still have the appearance of folders if you want that. That's a UI thing. But there's no reason for the layout on disk to mirror it. The mainframe architecture I developed for in the 90s worked just that way: no unix/windows-style file system ("files" were fixed partitions), but the user saw files while the disk saw efficiently-tiled file data, and a (simple, fast, non-relational) DB held all the metadata. Directories existed only as a UI affordance, not as a filesystem

    • Maybe it's not a completely new OS? IIRC they still own what's left of BeOS and HP-UX, along with having access to the BSDs. I agree it'd be dumb to start from scratch, but you could pilfer what you want from those sources and build something "new" from it without even needing to worry about the GPL.

    • by asmkm22 ( 1902712 ) on Wednesday June 11, 2014 @01:14PM (#47214771)
      What's the point of running *nix on it? If the architecture is so much different that they have to rewrite tons of OS code to support it, why not just build their own?
      • by perpenso ( 1613749 ) on Wednesday June 11, 2014 @01:28PM (#47214987)

        What's the point of running *nix on it? If the architecture is so much different that they have to rewrite tons of OS code to support it, why not just build their own?

        *nix is the fastest path to a stable and highly usable platform. Only a small portion of *nix interfaces with the architecture. They only have to rewrite that small portion.

        Plus with *nix you have a rather large base of application software to run as well.

        That said, could other parts of *nix, or apps, be reworked to take advantage of the architecture? Possibly. But such efforts do not need to be part of v1.0.0. They can be part of subsequent versions if and when profiling indicates an issue or opportunity.

      • Is it really all that different, though? It seems like the big difference is new hardware performing similar roles. It would be like needing a new OS to run an SSD.
        • A better analogy would be building a new OS to run a computer without any RAM present.
        • by gamanimatron ( 1327245 ) on Wednesday June 11, 2014 @04:33PM (#47217095) Journal

          When your 500GB "disk" is directly addressable on the system bus and has the same latency as RAM, some of the design decisions in existing *nix look a bit questionable. Example: Does the additional work of implementing virtual memory (fundamental to most kernels) still make sense? How necessary is a file system *at all*? Could it be replaced with some other method of indexing data?

          You certainly could just stick most of the storage in a ramdisk and run linux, but there might be massive performance gains to be had in the file (data?) serving and database spaces if the server software and the kernel it's running on are designed specifically for stable direct addressing of everything.
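
          To make that last idea concrete, here is a hedged toy sketch in C (my own invention, not anything HP has described): one big byte-addressable region with a small index at a fixed offset standing in for a filesystem. An mmap'ed file plays the role of the memristor store, and the record layout is entirely made up.

          ```c
          /* Hedged toy sketch (not HP's design): a tiny index over one big
           * byte-addressable region, standing in for a filesystem. An mmap'ed
           * file plays the role of the memristor store; the layout is made up. */
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <unistd.h>

          #define REGION_SIZE (1u << 20)   /* 1 MB "storage" region */
          #define MAX_OBJECTS 64

          struct object {                  /* one stored "document" */
              char     name[24];
              uint32_t offset;             /* where its bytes live in the region */
              uint32_t length;
          };

          struct index {                   /* lives at offset 0; replaces a filesystem */
              uint32_t      count;
              uint32_t      next_free;     /* bump allocator over the region */
              struct object objects[MAX_OBJECTS];
          };

          int main(void)
          {
              int fd = open("region.img", O_RDWR | O_CREAT, 0644);
              if (fd < 0 || ftruncate(fd, REGION_SIZE) < 0)
                  return 1;
              uint8_t *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
              if (base == MAP_FAILED)
                  return 1;

              struct index *idx = (struct index *)base;
              if (idx->next_free == 0)     /* first run: region is zero-filled */
                  idx->next_free = sizeof(struct index);

              /* "Create a file": claim space, record it in the index, store data. */
              const char data[] = "directly addressed bytes";
              if (idx->count >= MAX_OBJECTS ||
                  idx->next_free + sizeof data > REGION_SIZE)
                  return 1;
              struct object *obj = &idx->objects[idx->count++];
              snprintf(obj->name, sizeof obj->name, "note-%u", idx->count);
              obj->offset = idx->next_free;
              obj->length = sizeof data;
              memcpy(base + obj->offset, data, sizeof data);
              idx->next_free += sizeof data;

              printf("%s: %u bytes at offset %u\n", obj->name, obj->length, obj->offset);
              munmap(base, REGION_SIZE);
              close(fd);
              return 0;
          }
          ```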

    • Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.

      Obviously, it needs z/OS.

    • by Anonymous Coward on Wednesday June 11, 2014 @01:21PM (#47214879)

      "Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. Another team is working on a stripped-down version of Linux with similar aims; another team is working on an Android version, looking to a point at which the technology could trickle down to PCs and smartphones." RFTA.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday June 11, 2014 @01:23PM (#47214919) Journal
      I'd imagine that if you are building something that breaks binary compatibility and likely incorporates a fairly minimal set of hardware for which borrowing a BSD driver or something would be convenient (new system architecture, and aimed at big iron, so compatibility with mom and dad's scanner isn't an issue), you are in about as good a position as you could possibly be to discard some of the accumulated sins of the past.

      It's also quite possible that the 'new OS' bit will be something more akin to a hypervisor and abstraction layer (whether the level of abstraction is closest to your basic VM, more like an LPAR, or follows some of the more service-level stuff to provide 'SQL database', 'object storage', etc. is anyone's guess at present), and it simply wouldn't gain much from trying to cut and adapt an existing OS to size. What runs on top may well include "yeah, here's the POSIX environment from HP-UX" or "here's a Linux kernel modified to interact efficiently with the abstractions our OS supplies", since legacy code has massive inertia; but that won't be the 'new OS' itself.
    • by operagost ( 62405 ) on Wednesday June 11, 2014 @01:35PM (#47215113) Homepage Journal
      It's clearly a smokescreen for their secret plan of porting OpenVMS to it.
    • If they're starting from scratch, I hope they will design for security rigor from the start. I recommend Multics as a case study. I'm not saying to copy the architecture, but to learn from the intellectual approach. See http://www.multicians.org/hist... [multicians.org]
    • by meta-monkey ( 321000 ) on Wednesday June 11, 2014 @01:47PM (#47215277) Journal

      From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.

    • From the description, it sounds like a mainframe. Maybe it'll run z/OS!

    • by Anonymous Coward

      Sitting at HP Discover in Vegas. The head of HP Labs just confirmed it is based on Linux to maintain POSIX compatibility.

      The whole thing sounds exciting.

    • >>What's the point in running a brand new OS on it? Is HP-UX not good enough? Or the many other *NIX's? I'll put money on Linux being ported to it before it even ships to Joe Public

      Much as I like Unixes (way back using early Slackware distributions, now for 10 years on OS X), I do think that it is time for some real innovation. Unix dates from, what, 1970 or so? More than 40 years ago. We were all playing vinyl records for music back then. I think it would be good if a mainstream company (outside
  • Inspiring (Score:5, Insightful)

    by Anonymous Coward on Wednesday June 11, 2014 @01:01PM (#47214583)

    Finally! I'm so glad there's something to feel intrigued about in technology. I miss all the corporate labs doing amazing things.

    • Exactly. It's nice to see what was once a great company trying to do something new and interesting. Compare that to chasing the consulting racket like IBM, milking the enterprise customers -- which doesn't seem like it can sustain a company as large as HP for the long term.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday June 11, 2014 @01:26PM (#47214949) Journal

      Finally! I'm so glad there's something to feel intrigued about in technology. I miss all the corporate labs doing amazing things.

      Unfortunately, while three quarters of the lab are working on that project, the other 25% are working on a way to make it rely on proprietary consumables and require 'FPU head cleaning' with tedious frequency.

    • by Osgeld ( 1900440 )

      Well, it's a nice set of buzzwords and such, but it smells of vapor to me.

  • New OS? (Score:2, Insightful)

    There is probably a major problem in using "it" with Linux. I wonder what the problem is....
    • Along with the new O/S, they are also working on getting both Linux and (oddly) Android running on it.

      If you RTFA, you'd see that they'd like to restructure the O/S to take full advantage of the system's planned giganto memory capacity, instead of being built around shuffling data on and off disk.

      • Re:No, no problem. (Score:4, Insightful)

        by bluefoxlucid ( 723572 ) on Wednesday June 11, 2014 @01:30PM (#47215017) Homepage Journal

        The article tells me it's bullshit. Applications aren't written to wait for the memory bus; they're written to ask the kernel for resources, and handle that by waiting or operating asynchronously. If they wait, then they just block until the kernel returns--they don't go, "Oh, it's going to be a while, so I'll execute getSomeTea()..." There's nothing in applications to deal with timing.

        From an OS perspective, execute-in-place has been a thing for years. Linux run from NAND uses XIP, which is why some JFFS2 configurations compress and some don't. Many small-RAM embedded implementations don't compress, instead using MMIO to map the JFFS2 file system at a physical memory address and jumping into it directly. That means Linux loads an mmap()ed binary into the VMA by creating a page table entry that points to the MMIO page physically associated with the NAND, not with any real RAM.
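
        The mmap()ed-binary point is easy to see on an ordinary desktop, too: a running program's executable pages are mapped from its file and faulted in on demand, not copied wholesale into RAM up front. A small Linux-specific sketch that just prints this process's own executable, file-backed mappings:

        ```c
        /* Print this process's executable mappings (Linux-specific).
         * The binary itself shows up mapped from its path on disk:
         * demand paging, not a wholesale copy into RAM. */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *maps = fopen("/proc/self/maps", "r");
            if (!maps)
                return 1;

            char line[512];
            while (fgets(line, sizeof line, maps))
                if (strstr(line, "r-xp"))   /* readable, executable, private */
                    fputs(line, stdout);

            fclose(maps);
            return 0;
        }
        ```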

    • Read the article, and you'll get your answer.
    • by deKernel ( 65640 )

      If I had to bet, I would think the MMU will have very different behavior, so that alone might cause a drastic change that would necessitate a "new" OS.

  • Hail Mary (Score:5, Interesting)

    by Ralph Wiggam ( 22354 ) on Wednesday June 11, 2014 @01:05PM (#47214645) Homepage

    It’s a bold strategy, Cotton. Let’s see if it pays off for them.

    If this doesn't work out, I can't see HP staying in business as an independent company.

  • Where have you been? It's alright we know where you've been!

  • Now instead of RTFM [wikipedia.org] we can all RATM [wikipedia.org].

  • by sirwired ( 27582 ) on Wednesday June 11, 2014 @01:14PM (#47214761)

    The article yammers on and on about how the O/S will be built around memory-driven I/O instead of file-system-based I/O. However, IBM's i/OS (a.k.a. OS/400) has been built on memory-mapped I/O from the beginning (circa 1988). (And it has a DB-driven "filesystem" that Microsoft has been unable to ship despite about 25 years of failure.)

    I know it's not quite the same thing, but I cannot imagine that this new O/S will somehow eliminate the need for flash and/or disk. I don't see them managing to get the memristor cost down enough to entirely replace disk/flash. If they had actually shipped some of the things before now, I could maybe believe it, but they haven't.

  • I can't wait for the marketing campaign. How ironic would it be if Pink Floyd licensed "Welcome to The Machine" for the media blitz?
    • There are all kinds of possibilities. Machines of Loving Grace, Rage Against the Machine, NIN's Pretty Hate Machine album. There's a roller derby player going by Pretty Skate Machine.
    • I can't wait for the marketing campaign. How ironic would it be if Pink Floyd licensed "Welcome to The Machine" for the media blitz?

      Not as ironic as the campaign for Windows 95, which used The Rolling Stones' "Start Me Up." Recall that the lyrics included the phrase "You make a grown man cry."

    • Probably better than using the theme music from Person of Interest [wikipedia.org]. Naming anything "The Machine" while that show is still going seems like poor marketing to me. Unless they're shopping it to the NSA.

  • by BoRegardless ( 721219 ) on Wednesday June 11, 2014 @01:15PM (#47214787)

    Well, Meg Whitman had the guts to say "Find them some money" when HP Labs proposed The Machine. I wish HP all the success.

    It is about time some corporation other than Apple stepped up to the plate and jump-started mega-improvements in major devices.

    My first time-sharing "minicomputer" (it was not mini-sized), a desktop engineering computer (using mag-strips, pre-HP45), and then the HP35, 41, 45, and 75 were all incredible computing devices for their day.

  • When a person wants to do something such as run Microsoft Word, the computer’s central processor will issue a command to copy the program and a document from the slow disk it had been sitting on and bring it temporarily into the high-speed memory known as DRAM that sits near the computer’s core, helping ensure that Word and the file you’re working on will run fast. A problem with this architecture, according to computing experts, is that DRAM and the Flash memory used in computers seem unable to keep pace with the increase in data use.

    The author frames the problem as: to access data, the computer goes to the slow disk and pulls the data into fast memory so it can be operated on. Then the article goes on to say that memory can't keep up with the demand. That seems backwards to me. Isn't the problem they're trying to solve that spinning disks have not had their data access speed increase at the pace of the rest of the computer's components, not memory?

    • by Nemyst ( 1383049 )
      I may be wrong, but that quote sounds like they're saying DRAM/Flash hasn't been able to keep up with the amount/size of data. The Word example's not the greatest, but in many computational fields you'll need hundreds of gigabytes of data to fit in RAM, which can get rather complicated.
  • by Anonymous Coward on Wednesday June 11, 2014 @01:41PM (#47215179)

    The new computer does not run on electricity. It runs on a new fuel cell that requires ink.

  • The new computer is meant to solve a coming crisis due to limitations around DRAM and Flash.

    Would someone like to elaborate on this "coming crisis" that memristors magically solve?

    I can think of plenty of limitations (in the present) to DRAM and flash that merely throwing money at the problem can't solve. I can also think of a few good uses for viable memristor technology (instant-wake hibernating-as-the-default-state computers as the obvious first use). I can't, however, think of any "crisis" that a
    • Would someone like to elaborate on this "coming crisis" that memristors magically solve?

      Only if they're more resistant to cosmic rays than transistors.

  • Good news... (Score:5, Insightful)

    by ndykman ( 659315 ) on Wednesday June 11, 2014 @01:48PM (#47215283)

    While HP Labs may not be what it was, it is good to see that HP finally has a CEO who will give them the funding they need to go for the big ideas. We need more research and development funding, period. The government needs to increase funding for the NSF and other organizations. And, yes, big companies need to start making long-term investments. Microsoft Research is growing. It seems HP Labs is growing again.

    Let's hope other big players step up too. I'm tired of money being thrown at yet another mobile application and having that held up as a paragon of innovation. People are critical of HP investing in this while Facebook throws $19B of assets at a messaging application? What's wrong with this picture?

    • I'm all for more funding for researching new cutting-edge technology, but Whitman is going about it the wrong way. HP is laying off remote workers instead of the "dead weight" that routinely performs more poorly than their peers. What people don't understand is that remote workers at HP usually are stellar employees who had to relocate due to some life event. Otherwise the possibility of remote work isn't even entertained. To cut the remote workers first, HP is taking themselves out before the competit
      • I disagree with your comment regarding "remote workers". I know of some folks in this position that are essentially giving work second or third priority. Some who won't come in when they're needed because it's "working from home day". Others call in to have workers physically located in the plant do their hands-on work for them. Meg is correct in requiring remote workers to return to the office. While some are more productive, there are MANY scamming the system and doing nearly nothing, rece
  • Will the output be limited to a single number?
    • The question is...will it be a Social Security Number or the number 42?

      Either way, you're going to need memristors when you're processing that much data.

  • by WindBourne ( 631190 ) on Wednesday June 11, 2014 @02:26PM (#47215699) Journal
    HP made their massive profits by controlling their IP and making everything in-house. In this case, they have outsourced a great deal of this work. As such, it will be in China within 2 years. At that point, Whitman will lose everything.

    As somebody who used to work for HP, I am saddened by this. They have great tech, but Whitman's run for short-term profits is destroying the company.
    • Even if there are Chinese clones, anything that makes people want to buy more computers is good for HP. They'd get a chunk of the sales.
  • by mpicpp ( 3454017 ) on Wednesday June 11, 2014 @02:35PM (#47215791)
    I remember HP having visions of replacing x86 with a new architecture, and then AMD did x86-64. HP should know by now that a totally new hardware platform and a totally new operating system isn't going to fly very far. Why not replace Ethernet and TCP/IP while they are at it....
    • by Prune ( 557140 )
      Is this post a joke? This change is far, far deeper than the changes of instruction set and CPU architecture that comprise the difference between Itanium and x86. This is about making fundamentally more powerful hardware at the most basic level, to break the physical limits approached by current applied technology.

      And that's just one problem with your post. The other is the criticism of different CPU architectures and instruction sets, pointing out Itanium's failure and forgetting the enormous success of
  • So I am guessing this is planned for corporate servers?

    Or will everyone be playing Crysis 3 on Windows 10, The Machine edition?

  • What kind of ink will this computer take?

  • What about that 3D printer blurb ad with some dumb-looking blonde on it? Or did those people lie also?
  • Data General killed itself inventing a New Machine with a soul [wikipedia.org].
  • Great name, assuming that their goal is to see how much pain a user can endure before going insane.

    http://princessbride.wikia.com... [wikia.com]

  • Some more details (Score:5, Informative)

    by tk2x ( 247295 ) on Wednesday June 11, 2014 @05:43PM (#47217665)

    I'm sitting in the conference room where this was just announced at the HP Discover conference. The idea is to use photonics for interconnects, so that the limitations of copper don't require physical proximity to memory. And they want to use doubly-negatively-charged oxygen atoms (ions) for data storage. The concept is to partner with universities to do some fundamental research and make major changes in OS design, so that a machine can scale processor access to 160 PB of memory storage in microseconds.

    None of this comprises fundamentally new ideas, but they are working hard to actually make it happen, which is pretty cool.

  • A single addressing space that eliminates the distinction between memory and bulk storage (disk). Where have we seen this before? [wikipedia.org]

    Multics implemented a single level store for data access, discarding the clear distinction between files (called segments in Multics) and process memory. The memory of a process consisted solely of segments which were mapped into its address space. To read or write to them, the process simply used normal CPU instructions, and the operating system took care of making sure that all

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...