
Dell Considers Bundling Virtualization on Mobos

Posted by Zonk
from the wave-of-the-future dept.
castrox writes "Ars Technica is reporting that Dell may be considering bundling virtualization on some of their motherboards. No more dual boot or VMs inside the running OS? 'Any way you slice it, though, putting the hypervisor in a chunk of flash and letting it handle loading the OS is the way forward, especially for servers and probably even for enterprise desktops. Boot times, power consumption, security, and flexibility are all reasons to do this ... The big question is: which hypervisor will Dell bundle with its machines? Vance suggests hypervisors from XenSource and VMware as two options, but I think that VMware is the most likely candidate since it seems to be the x86 virtualization solution of choice for the moment. However, if Dell doesn't try too hard to lock it down, this system could easily be modified in an aftermarket fashion to include almost any hypervisor that could fit on the flash chip.'"
  • Overwhelming Support (Score:3, Interesting)

    by Doc Ruby (173196) on Thursday August 09, 2007 @02:09PM (#20172637) Homepage Journal
    Dell's gonna have a hell of a time supporting these complex features while it's closing down its call centers [google.com].
    • Re: (Score:3, Funny)

      by Kadin2048 (468275) *
      Was anyone with a clue actually calling Dell's call centers anyway?

      The only reason I've ever called a manufacturer's tech support line in years has been to get an RMA. And it's generally just irritating when they insist on taking me through their little script before they'll admit defeat and return the piece of junk.

      The purpose of those call centers is probably mostly for "cupholder calls," and less so for support on their higher end products, which is where the virtualization hardware would be (at least in
      • by Forge (2456) <kevinforge@@@gmail...com> on Thursday August 09, 2007 @03:34PM (#20173735) Homepage Journal
        Close.

        A few tips on calling Dell tech support if you are a competent engineer who diagnosed the problem before reporting it.

        1. For a home PC the techs are so incompetent that it's easier to just lie about the nature of the problem. E.g., if your hard drive is on the fritz, making rattly sounds and losing data, just say "The drive is completely dead. When I connect it the BIOS doesn't even admit that it's there."

        2. Gold support is better than economy or even silver, but not for the reasons on dell.com. It's better because they connect you to the most competent support guys almost immediately when you call the gold support line. Competent engineers know when they are speaking to an equal and will dispatch the required parts immediately. They also send out "just-in-case parts".

        3. Call late at night if your warranty allows it. The brightest tech support guys in Texas know that the graveyard shift is the best time to work: less traffic on the commute, more pay, and more time available for non-work-related tasks. Your shortest and most fruitful calls will be at 2:00 AM.

        4. Don't be afraid to hang up. I once had an external tape drive (PV 110T) that was bursting tapes whenever I initiated a backup. The tech support guy insisted that I must reboot the server so I could see if the drive shows up in the BIOS before he could go any further. I hung up, called back later, and got a brighter support guy who dispatched a replacement drive in around 5 minutes.

        • #2 is the most important, I find. Dell's non-Gold support is worthless. On the other hand, their Gold support is pretty darn good. Needless to say, every system we buy (mostly laptops) is bought with Gold support and Complete-Care. The former gets me to techs who speak some form of English (a southern accent is the worst I get) and the latter covers the occasional "I spilled soda on my keyboard" errors. Which, considering the nature of the users I support, happen with alarming frequency
        • by no1nose (993082)
          That is all very good advice. I work for a state government and we tend to get very good support from Dell. I think that item #4 (Don't be afraid to hang up) is among the best advice. It works on other customer care centers, too.

          It is absurd (but true) that you can call a given company 5 times and get 5 different answers from the various phone-drones on the other end.
          • by leenks (906881)
            Why is this absurd? If you call 5 times and get 5 different people of course you are going to get different answers.

            FWIW I'm in the UK, and my organisation has bought Dell servers with 3yr bronze support, and we've never had any problems. An engineer turns up the next working day to do the swaps, and that's the end of the problem (everything from a failed fan unit in a disk array through to complete motherboard, RAM, CPU, and PSU replacement).

            Maybe it depends what lines you buy from too?
        • Admittedly, I haven't had cause to call Dell, but this works well for my ISP:

          1. Be honest. I know it's unusual advice, but if you attempt to bullshit your way through something, you may piss off the tech if they know what you're talking about -- or worse, they might believe you and skip a crucial step you didn't think you had to do.
          2. Be polite. Some of the following suggestions may require you to say something sort of condescending, so try your damnedest not to sound that way. And it goes without saying -- do
          • by Kadin2048 (468275) *
            I envy your optimism. My experiences have only reinforced my cynicism, however.

            I always start my (thankfully frequent, usually RMA-related) tech-support calls with "hey, I'm on a bad connection, I might get disconnected...". Really, this is just my polite way of saying 'if you turn out to be dumber than a bag of hammers, I'm just going to hang up and call back in twenty and see if I can get someone better than you.' It's possible that I should just be up-front about this, but I figure why make enemies, even
            • if you turn out to be dumber than a bag of hammers, I'm just going to hang up and call back in twenty and see if I can get someone better than you.

              Well, you can be up front about it without being an asshole -- and he might actually say "You know what, you're right, I can't handle this -- lemme get my supervisor."

              Or you can specifically ask for the supervisor, etc... Point is, my goal is to get the problem solved, and if the first tech I call can't help me, I probably want the next tier up.

              Asking flat-ou

      • by JazzLad (935151)

        I assume corporations have direct access to Dell to process RMAs and warranty work, request on-site service, etc., without going through a callcenter drone.


        Boy, I wish! At least their academic accounts don't seem to. Last time I had to RMA one for the university I worked for, I had to sit on hold for a spell & then promise a nice banana to get my RMA.

        Oh, well.
      • by Lockejaw (955650)

        Was anyone with a clue actually calling Dell's call centers anyway?
        I don't care if anyone with a clue is calling. I really only care whether anyone with a clue is answering!
      • by jimicus (737525)
        Certainly true with desktop support from practically any tier 1 OEM.

        However, Dell's server support is a different kettle of fish entirely. Certainly in the UK, as soon as they know you're calling about a server with a support contract they connect you straight to a call centre in Ireland which is staffed by people with at least a modicum of intelligence and the ability to speak English clearly. Probably because there's more money in servers, and more to be lost by pissing off the bloke who's almost certai
    • by Jack9 (11421)
      Customers will just be calling Walmart shortly. Closing call centers and storefronts is just good business given the new opportunity to sell out of Walmart.
    • by QuantumRiff (120817) on Thursday August 09, 2007 @04:01PM (#20174091)
      The Roseburg, OR call center closure really pissed off the town.. They gave Dell a tax exemption, saving them $5mil over 5 years.. They also spent $1mil on other "incentives" and infrastructure upgrades to attract them to the area. As soon as that tax exemption was over, they closed down the doors.. Just before, they made some of the best techs there go overseas and train their replacements.. The employees were told they were opening up an "additional" call center, not moving theirs.. Apparently, they also were a crappy tenant and trashed the building they were in...

      I don't think Dell is going to be selling many more PC's in southern Oregon for a while...
      • by Lockejaw (955650)
        In response to closures like that, I've heard of cities adding a stipulation that says the company has to pay fees, back taxes, etc. if they close up and move out.
        • by polaris20 (893532)
          That would definitely be a good idea, as Dell would certainly deserve to be charged back taxes if that's all true.
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Well it looks like it's only one call center that was closed [forbes.com]:

      He said the company announced plans in May to reduce employment worldwide by 10 percent. He said the Roseburg location is the only such center in the United States to close.

      And also from the next paragraph it seems that the reason was obvious:

      Frink said the closure has nothing to do with a lawsuit filed by employees of the Roseburg center in February, claiming Dell violated federal and state wage and hour laws.

    • by _KiTA_ (241027)
      The Dell Oregon call center was a consumer sales call center. With the new Walmart deal and some consolidation of other call centers, it was unfortunately made redundant.

      Kinda sucks.
  • Dell considers bundling virtualization on mofos

    or

    Dell considers bundling virtualization on hobos

    not pretty either way.

  • In what way is this functionally different than the same hypervisor being installed on a bootable USB flash drive/IDE-attached CompactFlash card/[insert other stupid-simple method of booting from flash]?

    • by BuR4N (512430)
      "In what way is this functionally different than the same hypervisor being installed on a bootable USB flash drive/IDE-attached CompactFlash card"

      It's more secure having the actual memory embedded inside the machine instead of outside in a port, accessible to anyone who has physical access to your office.
      • by jmorris42 (1458) *
        > It's more secure having the actual memory embedded inside the machine instead of outside
        > in a port, accessible to anyone who has physical access to your office.

        So? CF to IDE bridge taped down in a drive bay. Flash to IDE header gadget plugged direct to an IDE header. They even have em that plug direct to USB headers on the MoBo now. Give em a while and they will have em to plug direct to SATA, assuming they don't already and I just didn't see em last time I was looking at stuff like that.

        Poin
        • Yes, but it's a marketing feature, not something that is of any real importance
        • by Ironsides (739422)
          1) Cost. They would have to design the mobos and test them.
          2) The IDE header is not going to be used in professional servers. For one, they don't have IDE anymore; they have SATA or SCSI.
          3) The USB headers are not going to have as high an uptime as something Dell could build onto the motherboard (in theory, supposing Dell doesn't screw up). This matters because what most server buyers need is reliability for servers that run 24/7/365.25. Adding in what you suggested, the first thing to f
          • Re: (Score:3, Insightful)

            by adolf (21054)
            3) The USB headers are not going to have as high an uptime as something Dell could build onto the motherboard (in theory, supposing Dell doesn't screw up). This matters because what most server buyers need is reliability for servers that run 24/7/365.25. Adding in what you suggested, the first thing to fail would most likely be either the flash or the adapter.

            I take issue with everything you say here.

            There is no qualitative reason why USB should not have, as you say, "as high of an uptime" as
            • by Ironsides (739422)
              Gah, typo. Meant to say USB adapter, not USB header. As in the compact flash to USB adapter.
              Also, I was talking about what the GGP was saying about a Flash to IDE, which would be a CONSUMER FLASH CARD with a CONSUMER IDE ADAPTER. It was with this following sentence in mind that I wrote (3).

              So? CF to IDE bridge taped down in a drive bay. Flash to IDE header gadget plugged direct to an IDE header. They even have em that plug direct to USB headers on the MoBo now.

              Both of which would be the most likely
              • by adolf (21054)
                I have a CF card mounted on an expansion card bracket at the back of the case. A simple thing, really: PCB with a 4-pin power connector, CF slot, 40-pin IDE connector, and a couple of LEDs for status, all fastened to a bracket so that the card protrudes neatly through a slot at the back of the case.

                It's definitely a "consumer" adapter -- I think I paid $8, total, to have it delivered to Ohio from Hong Kong. But like most mass-produced electronic items in this millennium, the soldering is quite good, and
                • by Ironsides (739422)
                  A professional-grade CF to IDE adapter wouldn't exist for servers, partly because servers don't use IDE. By the way, in this case I am talking about servers coming from companies that build servers to order, not some computer custom-built by an individual to be a server.

                  Now, as to what a CF-to-whatever interface would need, that would be a bit more than you describe.

                  Let's see: a bit of redundancy, designed and tested to be in use most of the time, temperature-extreme testing, guaranteed thro
                  • by adolf (21054)
                    I guess I'll play along.

                    By your definition of "server," it seems we only have three such built-to-order machines in use here at the shop. They're all ProLiant ML330s of various generations, custom ordered from Compaq or HP. The oldest one has SCSI RAID, the newest one has IDE RAID. All include at least one additional IDE port for the CD-ROM drive.

                    So I guess that some servers do use IDE, since these particular ones all seem to be serving just fine.

                    "Ah," I hear you say, "but those machines are ancient
      • by couchslug (175151)
        "Its more secure having the actually memory embedded inside the machine instead on the outside in a port, accessible for anyone that have physicall access to your office."

        The same pieces could easily be inside the case. Not all USB ports are external. Of course, SATA CF adapters have been available for some time:

        http://www.fastsilicon.com/storage-reviews/addonics-adsahdcf-sata-cf-adapter-review-6.html?Itemid=27 [fastsilicon.com]

        By the way, anyone have links to tutorials for installing a hypervisor to such a setup?
      • Why not just "embed" it in the first 20 megs or so of the hard drive? (Or 100 megs, or 1 gig, given the size of modern storage...)

        The only advantage I see to doing it with flash is that they could lock it down, and also, you could theoretically hot-swap SATA (or USB) drives, each with an OS on it (and maybe a "saved image" from the virtualizer, like hibernating). Even if you don't actually physically hot-swap them, you could spin down the drive you're not using.

        Of course, if it was me doing this, I'd just g
    • by Chirs (87576)
      It's vendor-supported.
      • by badfish99 (826052)
        Anything is vendor-supported if I pay for vendor support. It doesn't have to be embedded in a flash chip.

        The advantage of this is that it is vendor-supported by a vendor of Dell's choice. Presumably they then give Dell a kickback. OK, that's an advantage for Dell, not for the purchaser.
    • by Albanach (527650) on Thursday August 09, 2007 @02:27PM (#20172865) Homepage

      In what way is this functionally different than the same hypervisor being installed on a bootable USB flash drive/IDE-attached CompactFlash card/[insert other stupid-simple method of booting from flash]?
      The difference is that it's a supported setup from a major manufacturer. That means that when you pay for 24x7x365 support you are not faced with being told that you've modified the hardware and they can't support your setup. Indeed, if your flash card dies a sudden death, the Dell engineer can be there within four hours and should actually be carrying a spare.

      For a hobbyist at home I doubt there's much of a difference at all, but for folk paying big $$$ for enterprise solutions, this is probably very welcome.
      • by Znork (31774)
        If you were paying big $$$ for enterprise support, would you get a server with GRUB or LILO embedded on the motherboard?

        Would you buy one with the kernel and initrds on flash installed on the motherboard?

        Personally I wouldn't; Dell has no competence in those areas, and even should they try to build it, they'd end up constantly trailing the OS vendors, introducing random bugs, and being far less integrated and standardized than the mainline products.

        I see little difference in the hypervisor area; hard
    • Well, as I said in this post [slashdot.org], not much. The only things I can think of are that it doesn't rely on any external devices and would be directly supported by Dell. It would be a real boon to corporate IT departments using virtualization to consolidate servers, since IT managers are often loath to use any such configuration that isn't officially vendor-supported.
    • I'm guessing that you might get a slight advantage not having to wait for the BIOS to reach a point where it has USB functioning - and possibly the ability to read the chip faster off the board than over USB. Just WAGs on my part. I personally don't get the big deal over doing it this way as compared to the way a hypervisor loads now to run on bare metal. It might take a touch longer to boot - but so what? I'm not bouncing my servers that often anyway. And on the desktop? That's where I really struggl
    • by Burz (138833) on Thursday August 09, 2007 @02:56PM (#20173213) Journal
      Presumably having Dell's hypervisor load instantly at power-up could prevent other virtualizers from running, including hypervisor-based rootkits like Blue Pill.
      • Presumably having Dell's hypervisor load instantly at power-up could prevent other virtualizers from running, including hypervisor-based rootkits like Blue Pill.

        Not if it's really doing its job.

        A virtual machine should be able to virtualize another layer of similar virtual machines - including instances of itself. Otherwise there's something defective about the virtualization.
        • by WeblionX (675030)
          So you're saying it's hypervisors all the way down?
        • There are certain chunks of hardware, actual CPU instructions, etc which have been introduced recently to make virtualization more efficient.

          However, I don't think it would do very well against something like Blue Pill, because that could just as easily implement a softer virtualizer -- it would just appear to run a little slower.
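As an aside to the detection question above: on Linux, a cooperative hypervisor announces itself to its guests through a CPUID bit that the kernel surfaces as the "hypervisor" flag. A minimal shell check follows; the hedge is that a stealthy rootkit in the Blue Pill mould would deliberately not set this bit, which is exactly the asymmetry being argued here.

```shell
# Check the CPUID "hypervisor" flag that well-behaved hypervisors set
# for their guests. A malicious hypervisor can simply hide it, so a
# negative result proves nothing.
if grep -qw hypervisor /proc/cpuinfo; then
    echo "hypervisor flag set: running as a guest"
else
    echo "no hypervisor flag: bare metal, or a hypervisor hiding itself"
fi
```

Either way the check prints one line; only the positive result is meaningful.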
    • Support and TCO.
      If I have a Dell provided chip on a Dell motherboard which goes out, they will fix it. If I have a Mickey-Mouse setup with a USB flash device, you can bet they are going to try and blame that for my woes first. And, guess who is on the hook for fixing it if it goes south? Moreover, the difference in cost is going to be slight. This chip will probably raise the overall price of the motherboard by a couple hundred, at most. The time I spend futzing around with getting an external solutio
    • by Hatta (162192)
      In what way is this functionally different than the same hypervisor being installed on a bootable USB flash drive/IDE-attached CompactFlash card/[insert other stupid-simple method of booting from flash]?

      Is there such a thing? How would one do this?
      • by adolf (21054)
        Yes. It's easy.

        Anything which can boot and run from an IDE disk can also run from a Compact Flash card, with the right adapter (Google for one). I've got things ranging from an old version of Slackware running on a flash-based 386 laptop, to a diskless Windows XP machine, which use this trick.

        You see, CF cards inherently know how to act just like a regular IDE disk drive. The adapters are completely passive, and exist merely to supply power to the card and convert the small pin layout of a CF card
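To make the parent's point concrete, here is a minimal sketch of imaging a CF card that will sit behind such an adapter. The device name /dev/sdX is a placeholder for the card in a USB reader; to keep the sketch safely runnable it copies between scratch files rather than touching a real device.

```shell
# dd copies a boot image byte-for-byte; behind a passive CF-to-IDE
# adapter the card then presents itself to the BIOS as a plain IDE disk.
# Stand-ins are used below so nothing real gets overwritten:
IMG=$(mktemp)    # stand-in for your bootable image file
CARD=$(mktemp)   # stand-in for the CF card, e.g. /dev/sdX via a reader
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null   # fake 4 MB image
dd if="$IMG" of="$CARD" bs=1M 2>/dev/null             # raw copy to "card"
cmp -s "$IMG" "$CARD" && echo "image copied verbatim"
rm -f "$IMG" "$CARD"
```

On a real card you would substitute the device node for $CARD and double-check it first, since dd will happily overwrite the wrong disk.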
  • by Anonymous Coward
    How is adding more layers going to make anything faster?
    It isn't like Vista will be loading fewer drivers because of the extra layer.
    • by WyrdOne (96731)
      This is not targeted at the Consumer market. This *is* marketed at the software and developer markets. Typically those already running VMWare's ESX products or similar tech.

      Basically it means faster startup time and possibly faster performance for VM servers.
      • by hedwards (940851)
        That all depends upon specifically how things are implemented. Hardware RAID isn't necessarily faster than software RAID, and hardware virtualizers haven't always been faster than software ones either.

        If they get it right, then it should be at least competitive. Plus with some luck it should have some type of enhanced security over what software can do.
    • by Mattsson (105422)
      Not all improvements are there to produce more speed.
      Sometimes, an improvement will give better functionality at the cost of a little speed.
      And with the speed we have in our PCs today, it does seem more rational to concentrate on improving functionality and reliability rather than speed.
    • "How is adding more layers going to make anything faster?"

      "Faster" is not the goal. Better machine utilization is. In the Windows PC world sysadmins know that loading multiple functions all running on the same machine is inviting trouble and can crash Windows so they spread their servers out. This allows the admin to consolidate the servers back into one machine by running multiple copies of the Windows OS on one server. He gets the stability gain of running one task on a box biox he stops wasting so ma
      • by jimicus (737525)
        Even with a Unix-based OS, there's something to be said for separating processes between virtual systems.

        It improves security - an exploit leaves one virtual server (and hence one service) vulnerable, not everything.

        It improves reliability - a service which is known to have knock-on effects if it screws up can have those knock-on effects limited to just one virtual server.

        It also makes scaling individual services and migrating between hardware far easier - if you haven't yet had to go down the SAN route, up
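The one-service-per-guest idea above can be sketched as a Xen guest definition. Everything here (the name, volume, and bridge) is a made-up illustration, not a tested configuration:

```
# Hypothetical /etc/xen/web01.cfg: the web frontend lives alone in its
# own domain, so an exploit or crash stays confined to this guest.
name     = "web01"
memory   = 256
vcpus    = 1
disk     = ['phy:/dev/vg0/web01,xvda,w']
vif      = ['bridge=xenbr0']
on_crash = 'restart'
```

Each service gets its own file like this, which is also what makes migrating one service to different hardware a matter of moving one config plus one volume.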
  • by dagar (84678) on Thursday August 09, 2007 @02:28PM (#20172889)
    IBM is already doing this on their iSeries (AS/400). In order to manage it you have to have a Hardware Management Console (an x86 xSeries machine running Linux and their management software). I really think that they have done a good job of the virtualization, it also lets IBM throttle back the CPU. We have a 1000CPW (IBM's performance index) machine that with the Power5 1.5Ghz processor is limited to 43% utilization. In order to get all 100% of the CPU (2400CPW), we would have to pay through the nose.
    • by kpharmer (452893)
      I'm not going to use the right terminology here (since it changes quarterly) but...

      this is just one pricing option: you can buy everything up front, or you can pay more to have them put in 'emergency' resources - that can be added later if you need it.

      This latter scenario can be good if you want to avoid overbuying but still have resources available in case you wildly underestimated what you'd need.

  • There seem to be a lot more options for "virtualization" lately than VMWare, but never having needed to use multiple OS's at one time, I'm clueless as to the details of how these all work. Are they taking advantage of some new functionality on Intel/AMD chips?

    Is there some sort of overview for this stuff?
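On the chip-functionality question: yes, recent Intel and AMD parts add hardware assists (Intel VT-x and AMD-V). On Linux they are easy to check for, with the caveat that a BIOS can mask the flags even on capable silicon:

```shell
# VT-x advertises the "vmx" CPU flag, AMD-V advertises "svm"; either one
# lets a hypervisor trap privileged guest instructions in hardware.
if grep -qwE 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization flags present"
else
    echo "no vmx/svm flags (older CPU, or disabled in the BIOS)"
fi
```

Without those flags, tools like VMware fall back to slower software techniques (binary translation); with them, the hypervisor can run guests much closer to native speed.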

  • TPM (Score:1, Interesting)

    by Anonymous Coward
    Amusingly, this + a mechanism for telling the hypervisor what programs to trust and how, was the original end goal of the whole TPM/palladium movement..
  • I personally love how the poster of the article invents a hypothetical security problem with a hypothetical and nonexistent hardware solution, at which point he/she discusses the details of a potential hypothetical hack.
  • by tji (74570) on Thursday August 09, 2007 @02:43PM (#20173073)
    As others mentioned, similar things can be done now -- an IDE/Flash boot into a minimal hypervisor Linux for Xen or KVM. That would also allow some flexibility, to maybe run a few things directly on the hardware. I would be very interested in an approach like this for my home Linux server.

    For larger enterprise uses, the really simple hypervisor is nice. Just slap another box in there, and it is quickly added to your compute cluster. If they do it right, that system could even net-boot and auto-install the latest hypervisor image when it's first added. Factor in VMWare's "VMotion" stuff, where VMs can be moved among compute nodes in a cluster, and that simple compute node, along with a big NAS, is really slick.
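The net-boot idea in the parent could look something like the following PXELINUX entry; the file names and memory split are illustrative guesses, not a tested configuration:

```
# Hypothetical pxelinux.cfg/default: a freshly racked node boots a
# minimal Xen dom0 over the network, then fetches the current
# hypervisor image before joining the cluster.
DEFAULT xen
LABEL xen
    KERNEL mboot.c32
    APPEND xen.gz dom0_mem=512M --- vmlinuz-dom0 console=tty0 --- initrd-dom0.img
```

The multiboot loader (mboot.c32) chain-loads the hypervisor first and the dom0 kernel and initrd after it, which is the same layering a flash-resident hypervisor would bake onto the board.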
  • Virtualisation, I have no doubt, is extremely useful in certain applications. I, however, have no use for it on any PC I own or work on. I exclusively use Linux and I don't want Windows or OS X or anything else running alongside it. I *WANT* my OS to have full control over the machine: it's faster, it's more flexible, and there's less to go wrong (not to mention, who's to say a hypervisor couldn't be hacked by a virus somehow?). I don't want some virtual hardware locked into the BIOS that may or may not have feat
    • I suppose a hypervisor doesn't need or take control of hardware components the way an OS would, but even so, I'd be concerned that a virus, if it could somehow get into the flash ROM (or be compulsorily included there by the US National Security Agency), might be undetectable to OS-based virus scanning, as the boot ROM doesn't appear as a mountable volume and is never checked....
    • by dpilot (134227)
      Virtualization is not just for multiple OS's.

      One use you might be interested in is a security barrier. The base system boots, but very little really runs on it. Instead you start guest images, and the stuff runs under the guests. Compromise a guest and you haven't compromised the machine. In fact, one thing you might run on the host is an Intrusion Detection System that monitors the guests and shuts down any that might go rogue. Better yet, you could "freeze" the rogue by ceasing to schedule CPU cycles to i
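The "cease scheduling CPU cycles" idea maps onto Xen's classic management tool. The domain name "rogue1" is hypothetical, and the sketch is guarded so it is a harmless no-op on machines without Xen installed:

```shell
# Pausing a domain stops all CPU scheduling for it while leaving its
# RAM intact -- handy for freezing a suspected-rogue guest for forensics.
freeze_domain() {
    if command -v xm >/dev/null 2>&1; then
        xm pause "$1"       # no further CPU time until "xm unpause"
    else
        echo "xm not present; would have paused domain $1"
    fi
}
freeze_domain rogue1
```

An IDS running on the host could call something like this the moment a guest trips an alarm, then inspect the frozen memory at leisure.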
      • Compromise a guest and you haven't compromised the machine.

        What outside the "guest" is of any use to a desktop user?

        I'm with the OP, I don't want Windoze or OSX so I don't want a non free VM getting between me and my OS or my OS and hardware. I don't have boot or power management problems with my OS, so the VM offers me nothing.

        • by dpilot (134227)
          Run your user account inside a guest, and at least the base OS won't get compromised, and you won't need to reinstall. Run your browser and/or email inside a guest inside your account, and you won't have to worry about virii or web nasties compromising your precious code and data. It's all about damage limitation/confinement.

          I don't want a non-free VM, either. I'm figuring that right now Linux has so darned many virtualization options that whenever I have the right hardware, I can just pick one.

          This also pr
          • by Sancho (17056)
            Maybe the people who make operating systems should fix these problems WITHIN THE OS. Operating systems are supposed to do this anyway!
            • by dpilot (134227)
              They should. I just consider it another layer. By the OpenBSD philosophy, you don't need a firewall. I try and run my systems that way, but I use a firewall, anyway. I'm not sure exactly how many layers I'd like to have, I guess it depends on how expensive they are. But I do know that I want more than 1 layer, at least.
    • Re: (Score:3, Insightful)

      by EvilMagnus (32878)
      Virtualization can be really useful to make sure you're making use of all available resources.

      Consider a development environment. You might have ten developers, each with their own server. For most of the time, most of the capabilities of those development boxes are being unused, but they're still taking up space and power in your datacenter.

      If you could virtualize those 10 dev boxes down to two or three bigger boxes, you could:
      - save on space and power in your data center
      - ensure you're using your availabl
    • Re: (Score:2, Interesting)

      by Uruz 7 (986742)
      Aren't you being a bit selfish? If you don't want Windows or Mac then don't install them. It's likely that your BIOS has support for tons of things which you are not using nor forced to use. And since you're a Linux user, I'm sure you're aware of all the crap that you'll probably never have to enable in the kernel but it's there if you want it.

      I'm not really sure what you mean by slippery slope either. Slippery slope to what? More features? I also don't think this is for the desktop market. I couldn'
      • by Viol8 (599362)
        >I'm not really sure what you mean by slippery slope either. Slippery slope to what? More features? I

        Completely undetectable viruses and worms, remote disablement of PC hardware, frankly anything you want to do with the machine if the hypervisor is compromised somehow, since you won't ever detect it in the OS. An OS is called an Operating System because it operates the system. If it's little more than some sock puppet on a hypervisor, then what's its purpose other than a glorified scheduler?
    • by jma05 (897351)
      Suit yourself. I run Linux 95% of the time. But I find a VM very useful. Some of my hardware is Windows-only or just a pain to set up on Linux. So I use Windows in a VM with USB support and it saves me the trouble of worrying about Linux compatibility for occasional-use devices. There are many niche tools that are Windows-only. I don't know the technical implications of having the motherboard manage VT, but I am wondering if it makes it possible to provide better access to graphics cards from the VM. That could mean a s
    • by khb (266593)
      Clearly you don't debug Operating Systems for a living ;> Recall that the first VMs were on the IBM mainframes so that the OS developers wouldn't crash the machine on each other.

      Similarly for debugging or otherwise doing risky things with one's OS/configuration. Having a VM makes it a lot faster and easier to recover or to examine a troublesome system.

      And even if you only want to run Linux, there are many different distros and kernels to choose from. If you are developing software to be portable, being a
      • by Viol8 (599362)
        "Clearly you don't debug Operating Systems for a living"

        Which bit of "Virtualisation I have no doubt is extremely useful in certain applications." didn't you understand? If you're developing OSes for a living, I doubt you use bog-standard off-the-shelf kit.

        "If you are developing software to be portable"

        Developing portable software is simple - it's called static linking. Something a lot of idiots calling themselves developers should remember.
  • I think PS3s already ship with a built-in hypervisor to manage installing guest OSes in VMs on the console. Ostensibly it's a feature, but doing so has given Sony enough control to prevent access to accelerated graphics, so people can't use the console to play games they've downloaded and are instead forced to buy them. There's certainly precedent for this, and we're sure to see a lot more of it in the future. Hopefully the PC market is competitive enough that Dell won't be restricting their own hypervisor
  • reminds me of ... (Score:5, Insightful)

    by Anonymous Coward on Thursday August 09, 2007 @02:50PM (#20173149)
    DRM (Score:3, Insightful)
    by Frank T. Lofaro Jr. (142215) on Tuesday June 07, @05:12PM (#12751680)
    (http://www.linux.com/)

    They are doing this for DRM.

    Their Hypervisor will enforce DRM, so even linux can't override it.

    They'll make it so all device drivers must be signed to go into the
    Hypervisor which will be the only thing with any I/O privs that aren't
    virtualized.

    They'll make it so new hardware has closed interfaces and can only be
    supported by a driver at the Hypervisor level.

    Any drivers in any OS level won't be able to circumvent the DRM, since
    they'll just THINK they are talking to hardware, but will get virtual
    hardware instead - and the Hypervisor won't let it read any protected
    content through the virtual I/O, it will blank it out (e.g. all zero
    bytes from the "soundcard") or something similar.

    The drivers designed for the Hypervisor won't work at any higher level,
    since they'll need to do a cryptographic handshake with the hardware to
    verify it is "real", and the hardware will also monitor bus activity so
    it'll know if any extraneous activity is occurring (as it would if it was
    being virtualized).

    Everything will have a standard interface to the O/S, so Linux will still
    run but be very limited and slowed down - since only Windows will be
    allowed "preferred" access to hardware, other O/S will be deliberately
    crippled.

    They'll say you can still run Linux.

    Hardware manufacturers won't release specs, they'll say use the Hypervisor
    and you can still use Linux.

    You'll still need to buy Windows to use any hardware - Linux won't even
    boot on the raw hardware.

    MS doesn't care if Linux isn't killed - the above allows them lock in - no
    windows - your PC won't boot - since nothing but the Hypervisor will know
    how to talk to the IDE card, etc.

    What about manufacturers that want to support open interfaces, etc?
    Microsoft will deny them a key which they will need to talk to the
    Hypervisor - and the Hypervisor will refuse to talk to them.

    Support anything other than solely the Hypervisor and you can't use the
    Hypervisor. No Windows - lose too many sales.

    And they can say other O/S's are still allowed.

    They'll just not be able to give you freedom to use your hardware as you
    see fit (DRM, need to pay more to get software to unlock other features
    on your hardware), only Windows will run well, and you need a Windows
    license and Hypervisor for every PC or else it is unbootable.
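    The blanking mechanism described above can be sketched in a few lines. This is purely illustrative Python; the class and method names are invented for the example and don't correspond to any real hypervisor API:

```python
# Illustrative sketch: a hypervisor-level virtual device that blanks
# "protected" content before the guest driver ever sees it.
# All names here are invented for the example.

class VirtualSoundcard:
    def __init__(self):
        self.protected_regions = []  # byte ranges the DRM policy protects

    def mark_protected(self, start, end):
        self.protected_regions.append((start, end))

    def read(self, data, offset):
        # The guest driver believes this is a DMA read from real hardware.
        out = bytearray(data)
        for start, end in self.protected_regions:
            for i in range(max(start - offset, 0), min(end - offset, len(out))):
                out[i] = 0  # zero out protected bytes, per the parent's scenario
        return bytes(out)

card = VirtualSoundcard()
card.mark_protected(4, 8)
print(card.read(b"ABCDEFGHIJ", 0))  # bytes 4..7 come back zeroed
```

    The guest OS has no way to tell these zeroes from silence on the wire, which is the crux of the parent's argument.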
    • Reality check (Score:4, Insightful)

      by Wesley Felter (138342) <wesley@felter.org> on Thursday August 09, 2007 @03:43PM (#20173841) Homepage
      Let's be clear; Dell is talking about servers with built-in hypervisors. Extrapolating these plans to desktop PCs is just unfounded speculation.

      Their Hypervisor will enforce DRM, so even linux can't override it.

      Servers don't care about DRM.

      They'll make it so all device drivers must be signed to go into the
      Hypervisor which will be the only thing with any I/O privs that aren't
      virtualized.


      OK, this is true. ESX requires special drivers.

      They'll make it so new hardware has closed interfaces and can only be
      supported by a driver at the Hypervisor level.


      On the contrary; Dell has been driving companies like Broadcom and Adaptec to open up and offer open source drivers. AFAIK the only reason we have the tg3 driver is because Dell told Broadcom to provide Linux drivers.
    • by Pitawg (85077)
      Agreed. Completely.

      Being closer to a theorist though, I am looking at the Hypervisor taking part in the new unconstitutional legal system, where the hardware will also provide a virtual snoop. GWBOS will boot from the network, no local files needed, and potential for mass observation.

      You thought Sony's root kit was something? Try the hardwired version in the hardware.

      You can call people crazy for this kind of conjecture, but now it is all "legal" for the moment. What executive orders or classified "requests
  • by querist (97166) on Thursday August 09, 2007 @02:52PM (#20173173) Homepage
    This frightens me on so many levels that it is difficult to know where to start. Unless that hypervisor is burned into a non-rewritable form of storage (e.g. ROM), it will be subverted.

    As has been demonstrated at Black Hat by the illustrious Ms. Rutkowska (and as is fairly obvious to anyone familiar with hypervisors), a hypervisor sits below the OS and can be impervious to the OS's probing, but it still lies between the OS and the hardware.

    Properly implemented, this could be a very good thing. With no disrespect intended toward Dell, I suspect that the first several implementations (at least) will leave the resulting systems vulnerable to subversion, and this subversion would be difficult, at best, to detect.

    This is an interesting concept, and it could be used for "good", but as the saying goes "the devil is in the details". The idea is good, it is the potential implementation that worries me.

    Full Disclosure: I have a Ph.D. (2006) in InfoSec.
    • [...]I suspect that the first several implementations (at least) will leave the resulting systems vulnerable to subversion, and this subversion would be difficult, at best, to detect.
      I know! Whatever will happen to my CVS servers?
    • Re: (Score:3, Insightful)

      by Wesley Felter (138342)
      So where are all the ESX exploits?
      • by miffo.swe (547642)
        I work with VMware pretty much daily and it does have its fair share of quirks and bugs. Sometimes drivers (on the host side) stop working; host machines won't come back after a stop/start until the host has been reset, sometimes five times. That suggests to me that bugs aren't absent, and some of these bugs can probably be used for exploits. My strong suspicion is that VMware isn't all that safe, but for now it's much easier to break into the Windows machines running as guests directly instead.
    • by charlesnw (843045)
      Um. Huh? What? Why will this be subverted? You mean at the factory? How is this any different than other virtualization solutions? The problem with being too focused on security and theory (which, seeing that you just got your Ph.D., means you have been for several years) is that you tend to forget real-world details. No system is 100% secure. We know that. So what is the point of bringing this up? Virtualization is a very useful technology in a whole lot of areas. Especially security. Makes it much easier to
      • by querist (97166)
        >the problem with being focused .... you tend to forget real world details.

        I work full-time in industry in InfoSec. Please try to avoid such baseless attacks in an attempt to support your flawed reasoning. Also, I worked full-time WHILE pursuing my Ph.D., so I was fully immersed in real-world InfoSec during and after my doctoral studies.

        I am not spreading FUD. If you read the entire post, you would have seen the reasoning. Current rootkit detection and other malware detection relies on the operating sys
    • by CTho9305 (264265)
      Unless that hypervisor is burned into a non-rewritable form of storage (e.g. ROM), it will be subverted.

      As it has been demonstrated at Black Hat by the illustrious Ms. Rutowska, (as well as being fairly obvious to anyone familiar with hypervisors) a hypervisor is below the OS and can be impervious to the OS's probing, but it still lies between the OS and the hardware.


      I think trusted computing takes care of that for you. The Trusted Platform Module will give you a cryptographic hash of all running software;
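      The measurement chain the parent mentions works by "extending" a Platform Configuration Register: each boot component hashes the next one into a running value before handing off control. A minimal sketch of the extend operation (real TPMs do this in hardware; this only shows the hash chaining):

```python
import hashlib

def pcr_extend(pcr, component):
    # PCR_new = SHA-1(PCR_old || SHA-1(component)).
    # TPM 1.2 uses SHA-1; TPM 2.0 banks typically use SHA-256. Sketch only.
    measurement = hashlib.sha1(component).digest()
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20  # PCRs start zeroed at platform reset
for component in [b"firmware", b"hypervisor", b"os-loader"]:
    pcr = pcr_extend(pcr, component)

# Changing any measured component (or the order) yields a completely
# different final value, which is what remote attestation verifies.
print(pcr.hex())
```

      Because the PCR can only ever be extended, never set directly, a tampered hypervisor can't fake the measurement of a clean one.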
  • For Vista, the only OS options will be the more expensive editions. Neither Vista Basic nor Premium is allowed to run in any kind of VM. I guess that will limit Dell's usage for home users.
  • by Sloppy (14984) on Thursday August 09, 2007 @03:34PM (#20173739) Homepage Journal

    It's easy to see how moving more stuff from disk to flash is "slicker" and can make things load a little quicker (but seriously: how much? I doubt transferring hypervisors, kernels, or boot managers (e.g. GRUB) from disk is a major factor in boot times). But what's so special about hypervisors? Forget making this "solution" so specific. Just build a few dozen megabytes of disk-like (bootable) flash into the board, and let the user decide whether they just want to use it for a hypervisor, or move a whole bunch more stuff in there in an effort to get their modern machine to boot as fast as an Amiga.

    The one thing it occurs to me such an answer would really help with is working around a certain (dumb) Linux limitation. Booting off EVMS is tricky (or at least it was, last time I looked). Move your boot off-disk, and you can EVMS your whole disk.

    And what's this about "security"? The article doesn't explain why it mentions security, and that's no surprise, because there's no reason it would be more secure. As others have pointed out, "security" is obviously being used as a codeword for something very, very different (i.e. having the machine serve someone else's interests (e.g. the MPAA's) at the expense of the user's).

    • by imgunby (705676)
      (but seriously: how much? I doubt transferring hypervisors, kernels, or boot managers (e.g. grub) from disk is a major factor in boot times).


      I'm not sure about how it would affect overall boot time, but as to the how much... milliseconds (a disk seek) compared to microseconds (a flash read). It's a considerable speedup per access, but again, I don't think it would dramatically improve boot times.
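      The latency gap is real (milliseconds for a disk seek versus microseconds for a flash read), but a quick back-of-the-envelope check, using invented round numbers rather than measurements, suggests the overall boot-time win is modest:

```python
# Back-of-the-envelope: how much latency could booting from flash save?
# All figures are assumed round numbers, not measurements.

disk_seek_s = 0.010      # ~10 ms average seek on a 2007-era hard disk
flash_access_s = 0.0001  # ~0.1 ms random read from flash (assumed)
n_reads = 50             # assumed small random reads during early boot

saved_s = n_reads * (disk_seek_s - flash_access_s)
print(f"seek latency saved: ~{saved_s:.2f} s")
```

      Roughly half a second under these assumptions: real, but nowhere near the difference between a modern boot and an Amiga's.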

  • Dell Considers Bundling Virtualization on Motherboards

    There, fixed that for you. Asshat.
  • Forcing me (the computer's owner) to give up control of the lowest level of my computer. At which point they [Computer makers + Media corporations + MS] will be free to insert every kind of phone-home rootkit, DRM, "trusted" computing and other shit they want. Of course, they will because it's in their financial interests to be able to force you and me to pay any price they want, no matter how extortionate it may be. And since they've forced me out of the bottom-most level, there's nothing I can do to get r
  • I am surprised no one has called out these 3 things:

    1. Microsoft's license does not allow DRM content (like playing a DVD) in a virtual machine. Unless Dell can get a different license from MS.
    2. Virtualisation still comes with a performance cost: 3% up to 50%. Not good for your benchmarks. Unless you think a Pentium II 450MHz is still fast enough.
    3. Drivers. Forget DirectX 9c or DirectX 10. Forget Vista Aero.

    On a company box there is no problem running in a hypervisor, since none of these 3 points matter there. Pe
