Dell Considers Bundling Virtualization on Mobos
castrox writes "Ars Technica is reporting that Dell may be considering bundling virtualization on some of their motherboards. No more dual boot or VMs inside the running OS? 'Any way you slice it, though, putting the hypervisor in a chunk of flash and letting it handle loading the OS is the way forward, especially for servers and probably even for enterprise desktops. Boot times, power consumption, security, and flexibility are all reasons to do this ... The big question is: which hypervisor will Dell bundle with its machines? Vance suggests hypervisors from XenSource and VMware as two options, but I think that VMware is the most likely candidate since it seems to be the x86 virtualization solution of choice for the moment. However, if Dell doesn't try too hard to lock it down, this system could easily be modified in an aftermarket fashion to include almost any hypervisor that could fit on the flash chip.'"
Probably a dumb question... (Score:1, Insightful)
It isn't like Vista will be loading fewer drivers because of the extra layer.
Re:Yes, but: So what? (Score:5, Insightful)
can be there within four hours and should actually be carrying a spare.
For a hobbyist at home I doubt there's much of a difference at all, but for folk paying big $$$ for enterprise solutions, this is probably very welcome.
reminds me of ... (Score:5, Insightful)
by Frank T. Lofaro Jr. (142215) on Tuesday June 07, @05:12PM (#12751680)
(http://www.linux.com/)
They are doing this for DRM.
Their Hypervisor will enforce DRM, so even Linux can't override it.
They'll make it so all device drivers must be signed to go into the
Hypervisor which will be the only thing with any I/O privs that aren't
virtualized.
They'll make it so new hardware has closed interfaces and can only be
supported by a driver at the Hypervisor level.
Any drivers in any OS level won't be able to circumvent the DRM, since
they'll just THINK they are talking to hardware, but will get virtual
hardware instead - and the Hypervisor won't let it read any protected
content through the virtual I/O, it will blank it out (e.g. all zero
bytes from the "soundcard") or something similar.
The drivers designed for the Hypervisor won't work in any higher level,
since they'll need to do a cryptographic handshake with the hardware to
verify it is "real" and the hardware will also monitor bus activity so
it'll know if any extraneous activity is occurring (as it would if it were
being virtualized).
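A toy sketch of the blanking scheme the parent describes (every name here is hypothetical, invented for illustration; no real hypervisor exposes this API):

```python
# Hypothetical sketch: a hypervisor-side virtual device that blanks
# protected content before the guest OS driver ever sees it.

class VirtualSoundcard:
    """Guest drivers think they're reading real hardware; they get this."""

    def __init__(self, real_read):
        # real_read(n) -> (data, is_protected), reading the actual device
        self.real_read = real_read

    def read(self, nbytes):
        data, is_protected = self.real_read(nbytes)
        if is_protected:
            # DRM policy: hand the guest zero bytes instead of real samples
            return b"\x00" * nbytes
        return data

# Demo: a fake "real" device whose output is flagged as protected.
def fake_hw_read(n):
    return (b"\xAB" * n, True)

card = VirtualSoundcard(fake_hw_read)
print(card.read(4))  # b'\x00\x00\x00\x00'
```

The guest's driver can't tell the difference: from inside the VM, the "soundcard" simply produced silence.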
Everything will have a standard interface to the O/S, so Linux will still
run but be very limited and slowed down - since only Windows will be
allowed "preferred" access to hardware, other O/S will be deliberately
crippled.
They'll say you can still run Linux.
Hardware manufacturers won't release specs, they'll say use the Hypervisor
and you can still use Linux.
You'll still need to buy Windows to use any hardware - Linux won't even
boot on the raw hardware.
MS doesn't care if Linux isn't killed - the above gives them lock-in: no
Windows, and your PC won't boot, since nothing but the Hypervisor will know
how to talk to the IDE controller, etc.
What about manufacturers that want to support open interfaces, etc?
Microsoft will deny them a key which they will need to talk to the
Hypervisor - and the Hypervisor will refuse to talk to them.
Support anything other than solely the Hypervisor and you can't use the
Hypervisor. No Windows - lose too many sales.
And they can say other O/S's are still allowed.
You just won't have the freedom to use your hardware as you see fit (DRM,
plus paying extra for software to unlock features already on your hardware),
only Windows will run well, and you'll need a Windows license and the
Hypervisor for every PC or else it is unbootable.
Re:Top two possible misspellings: (Score:1, Insightful)
Re:I don't want a hypervisor thanks (Score:3, Insightful)
Consider a development environment. You might have ten developers, each with their own server. Most of the time, most of the capacity of those development boxes goes unused, but they're still taking up space and power in your datacenter.
If you could virtualize those 10 dev boxes down to two or three bigger boxes, you could:
- save on space and power in your data center
- ensure you're using your available resources more efficiently (the cpus and RAM aren't idle most of the time; they're actually being used)
- make it easier to 'add another box' to the mix if you get a new hire. Setting up a new dedicated (virtual) development server takes a matter of minutes, and can all be done in software for no additional cost. This is especially true if you keep all your server images and data on a shared network storage device (or hook the host OS box up to a SAN).
There's the increased risk of downtime from hardware failures, but buy the right boxes for the host OS and that's not a problem.
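The parent's consolidation argument is really just arithmetic. Here's a back-of-the-envelope sketch; every number in it is an assumption made up for illustration, not anything from the article:

```python
import math

# All figures below are illustrative assumptions, not real measurements.
dev_boxes = 10
avg_cpu_util = 0.15   # dev boxes sit mostly idle
headroom = 0.30       # keep 30% spare capacity for compile bursts

total_demand = dev_boxes * avg_cpu_util      # 1.5 "boxes" worth of work
effective_per_host = 1.0 * (1 - headroom)    # one host, minus headroom
hosts_needed = math.ceil(total_demand / effective_per_host)

print(hosts_needed)  # 3 hosts instead of 10 boxes
```

Even with generous headroom, lightly loaded machines consolidate several-to-one, which is where the space and power savings come from.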
Dell's solution, if it works, would be really neat. It would probably simplify the act of virtualization even more, and mean that *none* of the host CPU or RAM is taken up running the VM server. It's all available for guest OS use.
Reality check (Score:4, Insightful)
Their Hypervisor will enforce DRM, so even linux can't override it.
Servers don't care about DRM.
They'll make it so all device drivers must be signed to go into the
Hypervisor which will be the only thing with any I/O privs that aren't
virtualized.
OK, this is true. ESX requires special drivers.
They'll make it so new hardware has closed interfaces and can only be
supported by a driver at the Hypervisor level.
On the contrary; Dell has been driving companies like Broadcom and Adaptec to open up and offer open source drivers. AFAIK the only reason we have the tg3 driver is because Dell told Broadcom to provide Linux drivers.
Re:Please, do not make this the only option (Score:3, Insightful)
Re:Yes, but: So what? (Score:3, Insightful)
I take issue with everything you say here.
There is no qualitative reason why USB should not have, as you say, "as high of an uptime" as anything else which plugs into a computer. In fact, the opposite is likely to be true: USB, having finally grown into something that generally doesn't suck, has been tested and revised for over a decade, and is far more likely to be resolutely reliable than any newly-developed interface technology which has not been so rigorously abused. It's a single point of failure, sure, but it shares that disadvantage with SCSI, SATA, PCI Express, and all other likely candidates for connection.
I would further like to submit that the first thing to fail in any flash-based installation in a personal computer will be either the flash chip itself, its interface chip (a la "adapter"), or one of the supporting components (resistors, capacitors - that sort of stuff).
Finally, I'd like to speculate that all Dell will be doing is installing a flash device onto a USB bus. The hardware and software to accomplish this were finished years ago, and thus long ago entered the category of being free (as in beer) for Dell (particularly their marketing departments) to take advantage of.