HP Unveils 'The Machine,' a New Computer Architecture

pacopico writes: HP Labs is trying to make a comeback. According to Businessweek, HP is building something called The Machine. It's a type of computer architecture that will use memristors for memory and silicon photonics for interconnects. Their plan is to ship within the next few years. As for The Machine's software, HP plans to build a new operating system to run on the novel hardware. The new computer is meant to solve a coming crisis due to limitations around DRAM and Flash. About three-quarters of HP Labs personnel are working on this project.
  • by maliqua ( 1316471 ) on Wednesday June 11, 2014 @02:01PM (#47214573)

    As someone who has worked extensively with HP-UX:

    No, it's not good enough.

    As for other Unixes, well, HP likes to sing its own song even if it's off key and makes no sense.

  • Inspiring (Score:5, Insightful)

    by Anonymous Coward on Wednesday June 11, 2014 @02:01PM (#47214583)

    Finally! I'm so glad there's something to feel intrigued about in technology. I miss all the corporate labs doing amazing things.

  • New OS? (Score:2, Insightful)

    by should_be_linear ( 779431 ) on Wednesday June 11, 2014 @02:04PM (#47214629)
    There is probably some major problem with using Linux for "it"; I wonder what the problem is....
  • by Anonymous Coward on Wednesday June 11, 2014 @02:09PM (#47214693)

    "New from scratch" generally means mostly pilfered from BSDs and other sources then repackaged obfuscated and closed.

  • by asmkm22 ( 1902712 ) on Wednesday June 11, 2014 @02:14PM (#47214771)
    What's the point of running *nix on it? If the architecture is so much different that they have to rewrite tons of OS code to support it, why not just build their own?
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday June 11, 2014 @02:23PM (#47214919) Journal
    I'd imagine that if you are building something that breaks binary compatibility and likely incorporates a fairly minimal set of hardware for which borrowing a BSD driver or something would be convenient (new system architecture, and aimed at big iron, so compatibility with mom and dad's scanner isn't an issue), you are in about as good a position as you could possibly be to discard some of the accumulated sins of the past.

    It's also quite possible that the 'new OS' bit will be something more akin to a hypervisor and abstraction layer (whether the level of abstraction is closest to your basic VM, more like an LPAR, or follows some of the more service-level stuff to provide 'SQL database', 'Object storage', etc. is anyone's guess at present), and it simply wouldn't gain much from trying to cut an existing OS to size and adapt it. What runs on top may well include "yeah, here's the POSIX environment from HP-UX" or "here's a Linux kernel modified to interact efficiently with the abstractions our OS supplies", since legacy code has massive inertia; but that won't be the 'new OS' itself.
  • by perpenso ( 1613749 ) on Wednesday June 11, 2014 @02:28PM (#47214987)

    What's the point of running *nix on it? If the architecture is so much different that they have to rewrite tons of OS code to support it, why not just build their own?

    *nix is the fastest path to a stable and highly usable platform. Only a small portion of *nix interfaces with the architecture. They only have to rewrite that small portion.

    Plus with *nix you have a rather large base of application software to run as well.

    That said, could other parts of *nix, or the apps, be reworked to take advantage of the architecture? Possibly. But such efforts do not need to be part of v1.0.0. They can be part of subsequent versions if and when profiling indicates an issue or an opportunity.

  • Re:No, no problem. (Score:4, Insightful)

    by bluefoxlucid ( 723572 ) on Wednesday June 11, 2014 @02:30PM (#47215017) Homepage Journal

    The article tells me it's bullshit. Applications aren't written to wait for the memory bus; they're written to ask the kernel for resources, and handle that by waiting or operating asynchronously. If they wait, then they just block until the kernel returns--they don't go, "Oh, it's going to be a while, so I'll execute getSomeTea()..." There's nothing in applications to deal with timing.

    From an OS perspective, execute-in-place has been a thing for years. Linux run from NAND uses XIP, which is why some JFFS2 configurations compress and some don't. Many implementations don't compress on small-RAM embedded systems, instead using MMIO to map the JFFS2 file system at a physical memory address and jump into it directly. That means Linux loads an mmap()ed binary into a VMA by creating a page table entry that points to the MMIO page associated physically with the NAND, not to any real RAM.
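
    To make the pattern concrete, here is a minimal user-space sketch, assuming only a POSIX system; with kernel XIP the page table entry points at flash MMIO rather than DRAM, but the program-visible idea is the same: map the object, then use it in place.

    ```c
    /* Sketch of "map, then access in place", assuming a POSIX system.
     * With kernel XIP the page table entry points at the flash device's
     * MMIO window instead of DRAM; from user space the pattern looks the
     * same: mmap(2) the object, then plain loads (or, for PROT_EXEC
     * mappings of code, plain jumps), with no read(2) copy. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

        /* Map the file straight into our address space: no read(2) into
         * a buffer; pages are faulted in on first touch. */
        const unsigned char *p =
            mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Access the contents in place through the mapping. */
        unsigned sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];
        printf("%lld bytes mapped in place, checksum %u\n",
               (long long)st.st_size, sum);

        munmap((void *)p, st.st_size);
        close(fd);
        return 0;
    }
    ```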

  • by meta-monkey ( 321000 ) on Wednesday June 11, 2014 @02:47PM (#47215277) Journal

    From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.
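
    As a toy sketch of that single-level-store idea (assuming POSIX, with an ordinary file standing in for the memristor store), a structure can simply live in a mapped persistent region and survive restarts, with no serialization and no paging:

    ```c
    /* Toy single-level-store illustration, assuming POSIX.  The file
     * state.bin and the MAP_SHARED mapping are hypothetical stand-ins
     * for a byte-addressable memristor store: the struct lives in the
     * mapped region, survives restarts, and is updated by plain stores,
     * with no serialization, no read/write path, and no page file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct persistent_state {
        long runs;                      /* survives process exit */
    };

    int main(void)
    {
        int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, sizeof(struct persistent_state)) < 0) {
            perror("ftruncate");
            return 1;
        }

        struct persistent_state *s =
            mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (s == MAP_FAILED) { perror("mmap"); return 1; }

        s->runs++;                      /* an ordinary store is the "write" */
        printf("run number %ld\n", s->runs);

        msync(s, sizeof *s, MS_SYNC);   /* real NVM would not need this step */
        munmap(s, sizeof *s);
        close(fd);
        return 0;
    }
    ```

    Run it twice and it prints 1, then 2; the file system question raised above is exactly the question of what replaces state.bin when everything is memory.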

  • Good news... (Score:5, Insightful)

    by ndykman ( 659315 ) on Wednesday June 11, 2014 @02:48PM (#47215283)

    While HP Labs may not be what it was, it is good to see that HP finally has a CEO who will give them the funding they need to go for the big ideas. We need more research and development funding, period. The government needs to increase funding for the NSF and other organizations. And, yes, big companies need to start making long-term investments. Microsoft Research is growing. It seems HP Labs is growing again.

    Let's hope other big players step up too. I'm tired of money being thrown at yet another mobile application and having that held up as a paragon of innovation. People are critical of HP investing in this while Facebook throws $19B of assets at a messaging application? What's wrong with this picture?

  • by Immerman ( 2627577 ) on Wednesday June 11, 2014 @03:05PM (#47215489)

    I doubt it; forever is a long time. But I imagine most OSes for centuries to come will have BSD or Linux in their ancestry. It's simply a matter of efficient allocation of resources: a modern OS is a massive, complicated system, so why reinvent the wheel when you can adopt extremely flexible existing technology for free? Certainly there may be room for other OSes, but only if you're doing something fundamentally new, and probably initially simple. Otherwise, why waste the resources building something from scratch when you could instead spend those resources refining or replacing the specific bits of a 'nix that *almost* does what you want?

    And actually I find the prospect heartening. In the consumer market early on we had a wide variety of OSes, almost one for each machine, and all of them were embarrassingly simple by modern standards and highly incompatible with each other. Then the PC and DOS took over, and it was... well, not *good*, but adequate. And the proliferation of DOS, and later Windows, opened the consumer world to the easy exchange of software and data, rather than being unable to share your C64 stuff with the CP/M user down the street. Obviously all the non-PC users missed out, but they had become a small minority. The only problem was that Microsoft was an expansive monopolistic tyrant, and any time it expanded into new markets it did everything in its power to crush any competition, destroying many good products and companies and leaving us with barely adequate MS products across a wide swath of the lucrative business software market. And of course they ruthlessly defended their core OS market, which was so often the key to crushing their competitors.

    Then Linux grew up, and today we are beginning to have a vibrantly competitive OS market once again. True, it's mostly Linux-based, but Linux has become so flexible that various distros, especially specialty stuff, can bear little resemblance to each other - and yet software built for one distro can generally be recompiled for another with only minimal porting effort. A world of many varied and competing, yet mostly interoperating OSes is within sight.

    Now if we could just settle on some sort of cross-Linux application wrapper so that something like PortableApps.com could be possible for Linux, I'd be happy. There've been several projects attempting such a thing, but so far none has gained significant traction; the best option so far seems to be to use Windows programs and Wine. I can't tell you how many programs I use that I simply can't find for a modern Linux distro. They get abandoned for one reason or another, and without binary-level backwards compatibility, or someone competent and interested in porting them to each new release, they become practically impossible to run. Meanwhile, I can still run those old DOS 2 programs pretty much anywhere with at most an emulation layer.

  • by WindBourne ( 631190 ) on Wednesday June 11, 2014 @03:26PM (#47215699) Journal
    HP made their massive profits by controlling their IP and making everything in-house. In this case, they have outsourced a great deal of this work. As such, it will be in China within two years. At that point, Whitman will lose everything.

    As somebody who used to work for HP, I am saddened by this. They have great tech, but Whitman's run for short-term profits is destroying the company.
  • by lgw ( 121541 ) on Wednesday June 11, 2014 @04:17PM (#47216341) Journal

    Well, we need some evolution, one way or another. The "page file" is a relic of an ancient time, and needs to vanish from the kernel, along with the difficulty of dealing with potential page faults anywhere in your kernel code.

    I suspect they're unifying memory and local storage in a more fundamental way. It would sure make life easier if you could (in user space) just mark some memory as "persistent" when you allocate it, and let the OS worry about caching and performance, but doing that right isn't easy or obvious.

    As "disk" performance gets closer to RAM, new approaches become practical. Previous attempts to unify memory and disk went nowhere as disk was just too slow to take explicit file control away from devs. Previous attempts to do away with directory-based filesystems and go with a sea of tagged documents and a metadata database have crashed on the rocks of low disk performance. But those ideas are good in principle, they just weren't appropriate for actual hardware.

    Fast persistent memory changes what it's practical to do, and fanciful new approaches to the basics of OS design are suddenly no longer academic wankery.
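
    A sketch of what that user-space interface might look like, with a hypothetical pmem_alloc() and a file-backed mapping standing in for real non-volatile memory:

    ```c
    /* Hedged sketch of the interface imagined above: allocate memory
     * that is simply marked persistent, and let the OS worry about
     * caching and placement.  Nothing like this ships today; pmem_alloc
     * and heap.pmem are hypothetical names, and the file-backed mapping
     * is only a stand-in for byte-addressable non-volatile memory. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PMEM_FILE "heap.pmem"       /* stand-in for the NVM region */

    /* Hypothetical: return `size` bytes of persistent memory; successive
     * runs see the same bytes again. */
    static void *pmem_alloc(size_t size)
    {
        int fd = open(PMEM_FILE, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, (off_t)size) < 0) {
            close(fd);
            return NULL;
        }
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);                      /* the mapping outlives the fd */
        return p == MAP_FAILED ? NULL : p;
    }

    int main(void)
    {
        /* The application just asks for persistent memory and uses it. */
        long *counter = pmem_alloc(sizeof *counter);
        if (!counter) { perror("pmem_alloc"); return 1; }

        printf("value from last run: %ld\n", (*counter)++);
        return 0;
    }
    ```

    Run twice, it prints 0 and then 1. The hard parts alluded to above (ordering, crash consistency, what "free" means for persistent data) are exactly what a new OS would have to get right.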

  • by The123king ( 2395060 ) on Wednesday June 11, 2014 @05:57PM (#47217299)
    The only reason RAM exists is that it's many orders of magnitude faster than "disk". Once non-volatile memory is up to the same speeds as volatile memory, RAM will cease to exist.
  • by budgenator ( 254554 ) on Wednesday June 11, 2014 @09:54PM (#47218947) Journal

    The only reason RAM exists is that it's many orders of magnitude faster than "disk". Once non-volatile memory is up to the same speeds as volatile memory, RAM will cease to exist.

    That's what they always said, and I suspect always will.

  • by Lennie ( 16154 ) on Thursday June 12, 2014 @07:18AM (#47220669)

    I have a feeling that when a large number of projects on GitHub (or the new Docker Hub) include a Dockerfile, and maybe a file for orchestration (like an OpenStack Heat template) that makes it even easier to deploy any project, things will really start to take off for open source and free software, even more than they are now.
