Dell Considers Bundling Virtualization on Mobos

castrox writes "Ars Technica is reporting that Dell may be considering bundling virtualization on some of their motherboards. No more dual boot or VMs inside the running OS? 'Any way you slice it, though, putting the hypervisor in a chunk of flash and letting it handle loading the OS is the way forward, especially for servers and probably even for enterprise desktops. Boot times, power consumption, security, and flexibility are all reasons to do this ... The big question is: which hypervisor will Dell bundle with its machines? Vance suggests hypervisors from XenSource and VMware as two options, but I think that VMware is the most likely candidate since it seems to be the x86 virtualization solution of choice for the moment. However, if Dell doesn't try too hard to lock it down, this system could easily be modified in an aftermarket fashion to include almost any hypervisor that could fit on the flash chip.'"
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Thursday August 09, 2007 @02:22PM (#20172813)
    How is adding more layers going to make anything faster?
    It isn't like Vista will be loading fewer drivers because of the extra layer.
  • by Albanach ( 527650 ) on Thursday August 09, 2007 @02:27PM (#20172865) Homepage

    In what way is this functionally different than the same hypervisor being installed on a bootable USB flash drive/IDE-attached CompactFlash card/[insert other stupid-simple method of booting from flash]?
    The difference is that it's a supported setup from a major manufacturer. That means that when you pay for 24x7x365 support, you are not faced with being told that you've modified the hardware and they can't support your setup. Indeed, if your flash card dies a sudden death, the Dell engineer can be there within four hours and should actually be carrying a spare.

    For a hobbyist at home I doubt there's much of a difference at all, but for folk paying big $$$ for enterprise solutions, this is probably very welcome.
  • reminds me of ... (Score:5, Insightful)

    by Anonymous Coward on Thursday August 09, 2007 @02:50PM (#20173149)
    DRM (Score:3, Insightful)
    by Frank T. Lofaro Jr. (142215) on Tuesday June 07, @05:12PM (#12751680)
    (http://www.linux.com/)

    They are doing this for DRM.

    Their Hypervisor will enforce DRM, so even linux can't override it.

    They'll make it so all device drivers must be signed to go into the
    Hypervisor which will be the only thing with any I/O privs that aren't
    virtualized.

    They'll make it so new hardware has closed interfaces and can only be
    supported by a driver at the Hypervisor level.

    Any drivers in any OS level won't be able to circumvent the DRM, since
    they'll just THINK they are talking to hardware, but will get virtual
    hardware instead - and the Hypervisor won't let it read any protected
    content through the virtual I/O, it will blank it out (e.g. all zero
    bytes from the "soundcard") or something similar.
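    The mechanism being speculated about here — a hypervisor that presents virtual devices and blanks "protected" streams — can be caricatured in a few lines. The following is purely a toy illustration of the parent's speculation, not any real hypervisor or driver API:

    ```python
    # Toy caricature of the speculated DRM path: a "virtual soundcard" that
    # passes audio buffers through unless they are flagged protected, in
    # which case the guest OS just sees silence (all-zero bytes). Purely
    # illustrative; no real hypervisor interface works this way verbatim.

    class VirtualSoundcard:
        def read_buffer(self, buffer: bytes, protected: bool) -> bytes:
            # Unprotected audio passes through untouched.
            if not protected:
                return buffer
            # "Protected" content is blanked: same length, all zero bytes.
            return bytes(len(buffer))

    card = VirtualSoundcard()
    print(card.read_buffer(b"\x01\x02\x03", protected=False))  # b'\x01\x02\x03'
    print(card.read_buffer(b"\x01\x02\x03", protected=True))   # b'\x00\x00\x00'
    ```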

    The drivers designed for the Hypervisor won't work at any higher level,
    since they'll need to do a cryptographic handshake with the hardware to
    verify it is "real", and the hardware will also monitor bus activity so
    it'll know if any extraneous activity is occurring (as it would if it was
    being virtualized).

    Everything will have a standard interface to the O/S, so Linux will still
    run but be very limited and slowed down - since only Windows will be
    allowed "preferred" access to hardware, other O/S will be deliberately
    crippled.

    They'll say you can still run Linux.

    Hardware manufacturers won't release specs, they'll say use the Hypervisor
    and you can still use Linux.

    You'll still need to buy Windows to use any hardware - Linux won't even
    boot on the raw hardware.

    MS doesn't care if Linux isn't killed - the above gives them lock-in: no
    Windows, and your PC won't boot, since nothing but the Hypervisor will know
    how to talk to the IDE controller, etc.

    What about manufacturers that want to support open interfaces, etc?
    Microsoft will deny them a key which they will need to talk to the
    Hypervisor - and the Hypervisor will refuse to talk to them.

    Support anything other than solely the Hypervisor and you can't use the
    Hypervisor. No Windows - lose too many sales.

    And they can say other O/S's are still allowed.

    They'll just not be able to give you freedom to use your hardware as you
    see fit (DRM, need to pay more to get software to unlock other features
    on your hardware), only Windows will run well, and you need a Windows
    license and Hypervisor for every PC or else it is unbootable.
  • by Anonymous Coward on Thursday August 09, 2007 @03:30PM (#20173685)
    Dell considers bungling virtualization on mobos
  • by EvilMagnus ( 32878 ) on Thursday August 09, 2007 @03:41PM (#20173819)
    Virtualization can be really useful to make sure you're making use of all available resources.

    Consider a development environment. You might have ten developers, each with their own server. For most of the time, most of the capabilities of those development boxes are being unused, but they're still taking up space and power in your datacenter.

    If you could virtualize those 10 dev boxes down to two or three bigger boxes, you could:
    - save on space and power in your data center
    - ensure you're using your available resources more efficiently (the cpus and RAM aren't idle most of the time; they're actually being used)
    - make it easier to 'add another box' to the mix if you get a new hire. Setting up a new dedicated (virtual) development server takes a matter of minutes, and can all be done in software for no additional cost. This is especially true if you keep all your server images and data on a shared network storage device (or hook the host OS box up to a SAN).

    There's the increased risk of downtime from hardware failures, but buy the right boxes for the host OS and that's not a problem.

    Dell's solution, if it works, would be really neat. It would probably simplify the act of virtualization even more, and means *none* of the host CPU or RAM is taken up running the VM server. It's all available for guest OS use.
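    The consolidation math in the comment above can be sketched as a small bin-packing exercise. All the numbers (ten VMs, per-host capacity, the 20% headroom) are illustrative assumptions, not figures from the article:

    ```python
    # Sketch of consolidating mostly-idle dev boxes onto a few hosts using
    # first-fit-decreasing bin packing. Loads are average CPU demand as a
    # percentage of one physical box; all numbers are made up for illustration.

    def pack_vms(vm_loads, host_capacity):
        """Pack VM loads onto hosts, first-fit-decreasing; returns list of hosts."""
        hosts = []
        for load in sorted(vm_loads, reverse=True):
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)
                    break
            else:
                # No existing host has room; bring up a new one.
                hosts.append([load])
        return hosts

    # Ten dev boxes that sit idle most of the time (loads in percent of one box).
    vm_loads = [15, 10, 25, 5, 20, 10, 30, 15, 5, 10]
    hosts = pack_vms(vm_loads, host_capacity=80)  # leave 20% headroom per host
    print(len(hosts))  # → 2
    ```

    First-fit-decreasing is a simple heuristic, not optimal packing, but it makes the comment's point: ten boxes' worth of idle capacity collapses onto a couple of well-utilized hosts.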
  • Reality check (Score:4, Insightful)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday August 09, 2007 @03:43PM (#20173841) Homepage
    Let's be clear: Dell is talking about servers with built-in hypervisors. Extrapolating these plans to desktop PCs is just unfounded speculation.

    Their Hypervisor will enforce DRM, so even linux can't override it.

    Servers don't care about DRM.

    They'll make it so all device drivers must be signed to go into the
    Hypervisor which will be the only thing with any I/O privs that aren't
    virtualized.


    OK, this is true. ESX requires special drivers.

    They'll make it so new hardware has closed interfaces and can only be
    supported by a driver at the Hypervisor level.


    On the contrary; Dell has been driving companies like Broadcom and Adaptec to open up and offer open source drivers. AFAIK the only reason we have the tg3 driver is because Dell told Broadcom to provide Linux drivers.
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday August 09, 2007 @03:49PM (#20173915) Homepage
    So where are all the ESX exploits?
  • by adolf ( 21054 ) <flodadolf@gmail.com> on Thursday August 09, 2007 @04:13PM (#20174223) Journal
    3) The USB headers are not going to have as high of an uptime compared to something Dell could build onto the motherboard (in theory, supposing Dell doesn't screw up). This is required because what most server buyers need is reliability for servers that run 24/7/365.25. Adding in what you suggested, the first thing to fail would most likely be either the flash or the adapter.

    I take issue with everything you say here.

    There is no qualitative reason why USB should not have, as you say, "as high of an uptime" as anything else which plugs into a computer. In fact, the opposite is likely to be true: USB, having finally grown into something that generally doesn't suck, has been tested and revised for over a decade, and is far more likely to be resolutely reliable than any newly-developed interface technology which has not been so rigorously abused. It's a single point of failure, sure, but it shares that disadvantage with SCSI, SATA, PCI Express, and all other likely candidates for connection.

    I would further like to submit that the first thing to fail in any flash-based installation in a personal computer will be either the flash chip itself, its interface chip (the "adapter"), or one of the supporting components (resistors, capacitors - that sort of stuff).

    Finally, I'd like to speculate that all Dell will be doing is installing a flash device onto a USB bus. The hardware and software to accomplish this were finished years ago, and thus long ago entered the category of being free (as in beer) for Dell (particularly their marketing departments) to take advantage of.
