
Reasonable Hardware For Home VM Experimentation? 272

cayenne8 writes "I want to experiment at home with setting up multiple VMs and installing software such as Oracle's RAC. While I'm most interested at this time in trying things with Linux and Xen, I'd also like to experiment with things such as VMware and other applications (yes, maybe even a Windows 'box' in a VM). My main question is, what hardware should I try to get? While I have some money to spend, I don't want, or need, to be laying out serious bread on server-room-class hardware. Are there some used boxes, say on eBay, to look for? Are there any good solutions in new consumer-level hardware, from a vendor like Dell, that would be strong enough? I'd be interested in maybe getting some bare-bones boxes from NewEgg or TigerDirect even. What kind of box(es) would I need? Would a quad-core processor in one box be enough? Are there cheap blade servers out there I could get and wire up? Is there a relatively cheap shared-disk setup I could buy or put together? I'd like to have something big and strong enough to run at least a 3-node Oracle RAC as an example, running ASM and OCFS."
This discussion has been archived. No new comments can be posted.

  • by AltGrendel ( 175092 ) <ag-slashdot@exit 0 . us> on Sunday March 22, 2009 @05:50PM (#27292223) Homepage
    Just check the BIOS to make sure that you can set the MB for virtualization.
  • by TinBromide ( 921574 ) on Sunday March 22, 2009 @05:53PM (#27292243)
    I ran my first virtual machine on an Athlon 2200+ with 768 megs of RAM. If it can run Windows 7, you can run a VM or three (depending on how heavy you want to get). Essentially, take your computer and subtract the cycles and RAM required to run the host OS and background programs; that's the hardware you have left over to run the guests. If the guest OS was compatible with your original hardware, chances are it'll work just fine in a VM.
  • Memory (Score:5, Informative)

    by David Gerard ( 12369 ) <> on Sunday March 22, 2009 @05:55PM (#27292267) Homepage
    64-bit Linux host and as absolutely much memory as you can possibly install.
  • by Deleriux ( 709637 ) on Sunday March 22, 2009 @05:55PM (#27292273)

    I personally use qemu-kvm and I'm quite happy with it. That's running on a dual-core machine with 2G of RAM (probably not enough RAM though!).

    For the KVM stuff you need chips that support Intel's VT or AMD's AMD-V, so your processor is the most important aspect. A quad core would probably be suitable too if you can buy that.

    For just experimentation it's a fantastic alternative to VMware (I personally got sick of having to recompile the module every time my kernel got updated).

    On my box I've had about 6 CentOS VMs running at once, but frankly they were not doing much most of the time. Ultimately it's going to boil down to how much load you inflict on the VMs underneath; my experience with it has not been very load-heavy, so I could probably stretch to 9 VMs on my hardware, which is probably on the lower end of the consumer range these days.

    The most important bits are your CPU and RAM. If you're after something low-spec you can do a dual core with 2G of RAM, but you could easily beef that up to a quad core with 8G of RAM to give you something you can throw more at.

    Oh, and QEMU without KVM is painfully slow - I wouldn't suggest it at all.
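The KVM prerequisite mentioned above (Intel VT or AMD-V) can be checked from a running Linux system before buying anything; this is a minimal sketch of the usual flag check:

```shell
# Check whether the CPU advertises hardware virtualization support:
# 'vmx' = Intel VT-x, 'svm' = AMD-V. No output means KVM will not work
# (or the feature is disabled in the BIOS).
grep -o -E 'vmx|svm' /proc/cpuinfo | sort -u
```

Note that some boards ship with the feature disabled, so an empty result is worth re-checking in the BIOS before ruling the CPU out.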

  • What I use. (Score:4, Informative)

    My VM server rig is decidedly low-end compared to many I've seen, but it certainly gets the job done. I custom built the box, mostly from components bought on NewEgg; it has a dual-core AMD64 chip (soon to be upgraded to a quad-core), 3 GB RAM, and about 500 GB total drive space between two IDE (yeah, I know, will upgrade to SATA at some point) drives.

    The machine runs Ubuntu Server with VMware Server 2. I can easily run several Debian and Ubuntu VPS nodes on it under light load, and I use it for experimentation with virtual LANs and dedicated-purpose VMs. I periodically power up a Windows Server 2003 VM, which uses a lot more resources, but it's still fine for testing purposes.
  • by rackserverdeals ( 1503561 ) on Sunday March 22, 2009 @06:00PM (#27292329) Homepage Journal

    You can find lots of used servers on eBay that you can mess around with. Sun's v20z servers are pretty cheap and have a decent amount of power.

    A lot of the stuff I've run across is rack mounted, and keep in mind that rack-mounted servers are loud in most cases. So it may not be the best thing to play around with in your home or office.

    You don't really need any special CPU to mess around with virtualization; you won't get "full" virtualization, but I don't think that will stop you.

    I'm currently running a number of VMs on my desktop using Sun's VirtualBox (xVM), or whatever they're calling it now. Even within some of the Solaris VMs I'm running Solaris containers, so I'm doing virtualization upon virtualization, and my processor doesn't have virtualization technology support.

    If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. In Newegg's power search there is an option to filter by CPUs that support virtualization technology.

    If your primary focus is Oracle RAC, you may want to look at Oracle VM, which is Xen-based.

  • ESX Whiteboxing info (Score:2, Informative)

    by MartijnL ( 785261 ) on Sunday March 22, 2009 @06:05PM (#27292383)
    ESX whiteboxing information can be found in a number of places.
  • by Anonymous Coward on Sunday March 22, 2009 @06:06PM (#27292395)
    I get along quite happily running 5 or 6 VMs on a Dell Vostro slimline desktop - Core 2 Quad, 8GB RAM, and a 10k RPM disk - that cost me no more than £400 six months ago, and that's using Microsoft's Hyper-V Server (free download; it runs as the hypervisor itself, so there's no Windows Server instance underneath it - don't mistake 'Hyper-V Server' for 'Hyper-V for Windows Server 2008', they are not the same :) ).
  • Re:8 core Mac Pro (Score:5, Informative)

    by jackharrer ( 972403 ) on Sunday March 22, 2009 @06:13PM (#27292457)

    I hope you're joking... That's waaaay too expensive.

    I can run up to 4 VMs on my laptop (Lenovo T60) with 3GB of RAM and a Core 2 Duo 2GHz without any problems. Often I need to work on 3 machines (a design one plus a cluster for testing) and it all works really well together. The problem is that the disk subsystem sucks, so I suggest you invest in some RAID, but processor- or memory-wise it's enough. If you run Linux, you can run more of them, as they use less memory and processor time. Just stay away from GUIs, as X uses an abysmal amount of processing power in a remote VM for anything more than 800x600.

    You don't really need anything very expensive - most commodity hardware nowadays runs VMware Server easily. It's also free, so even sweeter. Just choose a processor that supports virtualisation, as that speeds everything up a lot.

  • by cthornhill ( 950065 ) on Sunday March 22, 2009 @06:26PM (#27292583)
    I strongly advise you to do your homework before spending money on non-server-class hardware (or before selecting a server, for that matter). VMware runs on a lot of hardware, but it also fails badly on a lot of consumer-grade motherboards. There are some lists (white-box hardware compatibility lists) you can check. After spending some time on name-brand server HW and on white-box gear, I can tell you that the name-brand gear is a lot more compatible, easier to work with, and worth the money (if you have it). If you are doing casual stuff and don't mind the considerable pain you will have to go through to get patches and select disk systems and other components, consumer gear will let you play a bit. As for doing anything serious with more than one VM on a box - not likely. Xen is a commitment, as is VMware or any other VM system. It is going to eat the box if you do anything other than dabble in it, and you are going to spend some real money if you intend to do much with VMware (think $3K-$5K to get very serious). Running a VM is easy. Running multiple servers, backups, external disk systems, etc. is real work and costs real money for all the extra stuff you will need. If you stick to Linux you can save a bunch, but if you intend to do any real work with MS servers, you are going to need several licenses, plus iSCSI targets, backup tools, etc. You won't actually learn too much before you go to that level that you can't learn with VMware Workstation (a great product, but not anything like a production server environment). You can get your feet wet for nothing but time with most of these tools, but you can't get real, in-depth experience with what it takes to run a production cluster, replication, remote storage, live replication, and all the rest unless you actually set up a production-like system - that means real servers (white-box or name-brand) and lots of hardware.

    You won't be able to see much with less than 8 cores and 16GB, plus some local RAID and iSCSI network targets. You can get started, but if you are going to spend money, I really think you should either start to buy gear that builds towards a real server environment or stick to home systems and maybe run VMware Workstation or some other standalone VM just to play with it. User Mode Linux (not very popular today) or some Xen setups for personal use would give you some understanding of VM concepts, but not a lot of basis for real production issues (at least they did not for me, and I was a pretty heavy development user). Production VM deployments have a lot of issues that all take real in-depth study, and lots of resources (iron) to get right. On the other hand, you can get a Supermicro, Dell, or HP server with dual quad-core Xeons for less than $4000 with some nice disk. Get 4 or 5 containers under a VM, set up replication to another server and a remote iSCSI disk, and then you have enough to start to actually do real learning on. Of course the license fees will be way more than the hardware costs if you are using MS tools and VMware. ESXi is OK, but unless you are going to go deep and do it all the hard way (hack the OS) you can't do a lot with the free version. With Xen, if all you want is to run a couple of versions of Linux, just get a quad-core box and have some fun; it doesn't really give you much production knowledge, but you will have some interesting tests you can try. What I am really saying is: with only 4 cores you can do some useful things to support development, and you might make a nice personal server for your private web sites, but you don't have enough iron to experience the real issues of production VM management. If you are going past what a developer (or tester) does and looking at an operations-type environment, you will need 8 to 16 cores on multiple boxes. That is a lot more than a home user typically wants to spend.

    IMO you also can't really expect to be really good on more than one system unless you work with it day in and day out.
  • Much better solution (Score:5, Informative)

    by codepunk ( 167897 ) on Sunday March 22, 2009 @06:26PM (#27292589)

    Amazon EC2 is what I use for stuff like this - both Windows and Linux boxes, everything available at the push of a button. I also use it a lot for development: fire up a machine, load, and go.

  • by itomato ( 91092 ) on Sunday March 22, 2009 @06:27PM (#27292595)
    Reading 'cayenne8', I can't help but imagine a V8 Porsche, and because I'm a car guy, for good or bad, this shifts the focus of my comment toward resources, specifically what is available, versus what is acceptable or tolerable.

    Let's say you're a one-man Lab, incorporating all the SA, Developer, and Midware functions into your 'play' with this environment. How much time will each environment spend heavily plowing into loads?

    If your intent is to deploy RAC in a multitude of scenarios, in short order, with a minimum of intervention, you may be able to get away with $1500 to $2500 worth of NewEgg parts (think high throughput: RAID, max RAM, short access times, etc.) and the virtualization technology of your choice. Personally, I find VirtualBox capable of everything I need as far as virtualization and deployment go; however, you need to be able to leverage 'fencing', which likely puts you into VMware territory.
    Fortunately, VMWare Server is 'free', and CentOS and OpenSuSE support some of the more advanced features of HA on Linux. Then again, if we're looking at resources as a major factor, then Redhat and Novell might be worth looking at, as they both offer 60 to 90-day evaluation licenses for their Enterprise Linux products, which may offer a prettier and more 'honest' (per the documentation and common expectations) implementation of their respective HA features than the freely-available, and in some cases, in-flux versions of the same software.

    As far as RAC goes, take a look at the requirements for RAC per Oracle's installation guidelines, and size/spec from there. I believe you can get away with 16GB total if you have the capability to size the VMs' memory access, or otherwise configure the amount of addressable memory, or put up with (or hack) Oracle's RAC installation pre-flight checks. There is also valuable documentation available on your chosen OS vendor's site, which may even be Oracle, who knows.

    You may be hell-bent on performance, however, and you may be looking for the ultimate grasp of technological perfection, as it exists at Sun Mar 22 17:29:59 EDT 2009. In this case, you may want to look at Xen, which is available on Solaris as their 'xVM' technology, as well as on various Linuxes and BSDs.
    On the other hand, you may be a Mac guy, with a decked-out Octo-core Xeon Mac Pro, where you have the option of Parallels and Virtual PC and something else, in addition to Sun's VirtualBox mentioned above.

    Ultimately, things to keep in mind may be shared disk requirements, fencing options, and VM disk and memory access.
  • Two machines (Score:5, Informative)

    by digitalhermit ( 113459 ) on Sunday March 22, 2009 @06:29PM (#27292609) Homepage

    You can do Oracle with just a single machine running multiple VMs; however, if you really want to get serious, you should consider building two physical machines. On each machine, create a virtual machine or two with 1-2G of RAM. For the shared disk, use DRBD volumes between the two.

    My test RAC cluster has two AMD X2 64-bit systems with two gigabit NICs each. CompUSA has a similar machine for about $212 on sale this week. Newegg prices are similar. You'll need to add a couple of extra gigabit NICs and some more storage. It still should cost under $400 each.

    On each physical system I used CentOS 5.2 with Xen. I created LVM volumes on the physical machines as the root volumes, and also carved out a separate volume to back the shared storage. Then I carved out a Xen virtual machine on each host with 1.5G of RAM. I put the DRBD network on one pair of NICs; the other pair was used for the network and heartbeat (virtual ethernet devices).
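    A two-node DRBD resource like the one described above might look roughly like this (a sketch only - the resource name, hostnames, volume paths, and addresses are hypothetical, and the syntax follows the DRBD 8.x config style):

```
# /etc/drbd.conf sketch for the shared RAC volume (all names/IPs hypothetical)
resource racshared {
    protocol C;                        # synchronous replication
    on rac1 {
        device    /dev/drbd0;
        disk      /dev/vg0/racshared;  # the LVM volume carved out above
        address   192.168.10.1:7788;   # DRBD traffic on the dedicated NIC pair
        meta-disk internal;
    }
    on rac2 {
        device    /dev/drbd0;
        disk      /dev/vg0/racshared;
        address   192.168.10.2:7788;
        meta-disk internal;
    }
}
```

    Keeping the DRBD replication traffic on its own NIC pair, as the comment describes, avoids having resync traffic compete with the cluster interconnect.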

  • by MeanMF ( 631837 ) on Sunday March 22, 2009 @06:36PM (#27292663) Homepage
    Choosing the "dual processor" option in a VM isn't necessarily a good idea, especially if you have a lot of VMs running. It means that whenever the VM needs physical CPU time, it has to wait until two cores free up. And when it does get CPU time, it will always use two cores, even if it's not doing anything with the second one. So if there is a lot of competition for CPU, or if you're running a dual-processor VM on a dual-core host, it can actually cause things to run much slower than if all of the VMs were set to single-processor.
  • by codepunk ( 167897 ) on Sunday March 22, 2009 @06:54PM (#27292823)

    Well, yes, it will not teach you how to plug an Ethernet interface into a switch. However, the poster in this case said he wants to run a VM environment but money is limited. In this case, if he wants to play with big boxes, configuration testing, etc., there is no better option available to him than EC2.

  • by marynya ( 735459 ) on Sunday March 22, 2009 @07:15PM (#27292969)
    The main requirement is enough RAM for two operating systems plus some extra for the virtualization system. The CPU is less important. I run Windows XP Pro as a virtual system on a Linux host with VMware Workstation 6. It is a 5-year-old Athlon 3000+ box with 1 GB of RAM. I allocate 512 MB to Windows, which is about the minimum for XP. Current Linux distributions need at least 256 MB and VMware is something of a memory hog itself so 1 GB is about the minimum RAM for this setup. Windows is perhaps just a smidgen slower than it would be if running natively on the same hardware but the difference is minimal. It does not have much effect on the speed of Linux apps running simultaneously. Things bog down fast if you try to run more than one virtual system simultaneously but VMware is good at using multiple processors for this. I did some work which involved running up to 6 instances of FreeBSD simultaneously on an 8-core Xeon system with 4 GB RAM. Up to 6 it did not slow down much. Over 6 it got sludgy. Have fun! Mike
  • by johnthorensen ( 539527 ) on Sunday March 22, 2009 @07:16PM (#27292985)
    The biggest thing that you have to watch out for with VMWare ESXi is the hardware compatibility list. You will run into trouble with two major components: RAID controllers and network adapters.

    The network adapter solution is simple: buy the most plain-jane Intel PCI or PCIe adapter that you can find. Examples of ones that are known to work right out of the box are the Intel PWLA8391 GT (single-port PCI) and the Intel EXPI9402PT (dual-port PCIe). I own both of these and can personally confirm operation with the latest version of VMware ESXi.

    The drive controller situation is both complicated and -- more importantly -- expensive. Overall, Adaptec seems to be the most well-supported controller hardware out there. I have tried LSI controllers, but they often don't play well with desktop boards. Unfortunately for experimenters, the built-in RAID on practically every Intel motherboard is completely unsupported in RAID mode. Obviously no enterprise environment would be using on-board RAID like that, but it would be nice to have for experimentation.

    Which brings me to my favorite storage solution for ESXi: Openfiler. Openfiler is an open-source NAS/SAN solution based on rPath Linux. It turns any supported PC into a storage appliance, and can share its storage in a plethora of ways. In the case of a virtualization effort, it has two major things going for it: it supports any storage controller that Linux supports, and it supports iSCSI and NFS.

    If, say, you do have a machine sitting there with Intel on-board RAID, you can install Openfiler there. While the hardware might not work under ESXi, it'll work great for Openfiler. Even better, Openfiler also supports Linux software RAID which can be superior when it comes to disaster recovery (no need to have a specific controller card to see your data). With this in mind, you'll be able to get Openfiler running on just about any hunk of shit box you have sitting around.

    Once you have Openfiler set up, you can take the next step in virtualization-on-the-cheap: installing ESXi on a USB flash drive. There are a number of tutorials on the web for this (just google 'ESXi USB flash install'), but the basic process amounts to extracting the drive image from the ESXi installation archive and simply writing it to flash with dd (on Linux) or physdiskwrite (on Windows). Once this is done, you can plug the flash drive into nearly *any* recent x86 hardware and it will boot ESXi. A really neat feature that you get along with this is the ability to substitute hardware with ease, and upgrade to later versions of ESXi simply by swapping the flash drive.
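The flash-write step might look like the following (a sketch only - the image filename and target device are hypothetical, and dd will silently overwrite whatever device you point it at, so triple-check the device node first):

```shell
# Write an extracted ESXi image to a USB stick (hypothetical file/device names).
# /dev/sdX must be the flash drive itself, not a partition on it.
dd if=esxi-image.dd of=/dev/sdX bs=1M
sync   # flush buffers before pulling the drive
```

On Windows, physdiskwrite plays the same role as dd here.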

    Once you have ESXi installed, create an iSCSI volume on your Openfiler box. Then, use the VMWare management software to connect the ESXi box to your Openfiler iSCSI volume. You can then create virtual disks and machines from the actual USB-flash-booted VMWare host, all of which will be stored on your Openfiler machine. You may also want to try experimenting with NFS instead of iSCSI. There are a couple proponents of this out there that say under certain circumstances it's even faster than iSCSI. It also makes backing up your virtual machines a little simpler since an NFS share is generally easier to get to than iSCSI from most machines. Another cool aspect of the Openfiler-based configuration is that you will get access to another whiz-bang feature of VMWare called vMotion. Since the VMs and their disks are stored centrally, you can actually move the VM execution from one ESXi box to another - on the fly.

    In all, this is a great way to get your feet wet in virtualization, because you can have a pretty sophisticated setup with very basic commodity hardware. If you want to go the extra mile and get really fancy, put a dedicated gigabit NIC (or two, bonded) in each box and enable jumbo frames; the SAN will be more than fast enough for most anything you'd like to do.

    Good luck!
  • My hints (Score:5, Informative)

    by kosmosik ( 654958 ) <[kos] [at] []> on Sunday March 22, 2009 @07:22PM (#27293037) Homepage

    Well you don't clearly state what you wish to accomplish nor how much money you have so it is hard to answer. But maybe such setup will be OK.

    Build yourself custom PCs.

    Storage server:
    - a good, big enclosure which can fit a large number of drives
    - a moderate 64-bit AMD processor (really any - you will not be doing any serious processing on the storage server)
    - any amount of RAM (really, 1 or 2 gigs will be enough)
    - mobo with good SATA AHCI support (for RAID) and NIC (any - for management) onboard
    - one 1Gb PCI-* NIC with two ports
    - 6x SATA2 NCQ HDD (any size you need) dedicated for working in RAID - software based (dmraid) RAID1+0 array configuration

    Virtualization servers (2 or more):
    - you need the virtualization servers to have the same config
    - any decent enclosure you can get
    - the fastest 64bit AMD processor you can get preferably tri or quad core (it will do the processing for guests) with VT extensions
    - as much RAM as you can get/fit into the machine
    - mobo with VT support, one (any - for management) NIC onboard
    - one 1Gb PCI-* NIC with two ports
    - one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or disk-on-chip module

    Network switch and cables:
    - any managed 1Gb switch with VLAN and EtherChannel support, HP are quite good and not as expensive as Cisco
    - good CAT6 FTP patchcords

    General notes for hardware:
    - make sure all of the PC hardware is *well* supported by Linux since you will be using Linux :)
    - get better (quality-wise) components if you can - good enclosures, power supplies, drives, etc. - since it is a semi-server setup you don't want it to fail for some stupid reason

    Network setup:
    - make two VLANS - one for storage, other for management
    - plug onboard NICs into management VLAN
    - plug HBA NICs into storage VLAN
    - configure ports for EtherChannel and use bonding on your machines for greater throughput

    Software used:
    - for storage server just use Linux
    - for virtualization servers use Citrix XenServer5 (it is free, has nice management options, supports shared storage and live motion) or vanilla Xen on Linux, don't bother with VMWare Server, VMware ESX and Microsoft solutions are expensive

    Storage server setup:
    - install any Linux distro you like (CentOS would not be a bad choice)
    - use 64bit version
    - use dmraid for RAID and LVM for volume management
    - share your storage via iSCSI (iSCSI Enterprise Target is in my opinion best choice)

    Virtualization servers setup:
    - install XenServer5 (or any distro with Xen - CentOS won't be bad)
    - use interface bonding
    - don't use local storage for VMs - use the storage network instead

    Well, here it is: quite a powerful and cheap virtualization solution for you.
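    The iSCSI export step on the storage server might look roughly like this with iSCSI Enterprise Target (a sketch only - the IQN, volume group, and volume names are hypothetical):

```
# /etc/ietd.conf sketch (hypothetical names) - export one LVM volume over iSCSI
Target iqn.2009-03.lab.storage:vmstore
    # blockio bypasses the page cache; the LVM volume sits on the
    # software RAID1+0 array described above
    Lun 0 Path=/dev/vg_storage/lv_vmstore,Type=blockio
```

    The virtualization servers would then log in to this target over the storage VLAN and use it as their shared VM storage, which is what makes live motion between hosts possible.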

  • by koko775 ( 617640 ) on Sunday March 22, 2009 @07:22PM (#27293039)

    Don't get an abit motherboard, or at least don't get their Intel P35-based boards. I can't speak to the rest of their stuff, but putting my Abit IP35-based computer to sleep and waking it back up actually *disables* the VM extensions, either freezing upon waking if any were running, or ensuring none start until I power off (reset doesn't cut it).

    Other than that, I recommend a Core 2 Quad with lots and lots of RAM, and an array of 1TB SATA drives to RAID.

    Also of note: Windows 7 doesn't let you use a real hard drive partition; it needs a hard disk file, at least on KVM, which is pretty awesome.

  • by BagOBones ( 574735 ) on Sunday March 22, 2009 @08:02PM (#27293351)

    The biggest problem I see with those getting into virtualization is that they think that virtualizing things makes them magically need fewer resources.

    You can share CPU time as most apps will not drive the CPU 100%, having said that it is often best to have as many cores as you can afford.

    Do not over-allocate your RAM: have as much physical RAM as the total you allocate to the VMs. If you overcommit, you will take a huge performance hit.

    Sparse disk is a fairly new feature found only in some VM systems; you will need lots of disk for all of the VMs. You will also probably want to run them on different LUNs or disk groups so you don't get lots of thrashing on the drives.

    If you are only running 1 or 2 VMs as a test then really all you need is to up the ram a little and make sure the host meets the minimum specs of the VM applications.

  • by boyter ( 964910 ) on Sunday March 22, 2009 @08:17PM (#27293471) Homepage

    I just did this myself. I ended up just shooting for cheap hardware, on the theory that if it breaks in 2 years I can just replace it. I have a quad-core Phenom with 8 gigs of RAM and two 750-gig drives. Chucked VMware on it and haven't had any issues running about 8 or so VMs on it. It also serves up media using TVersity and is a network share dump as well.

    The biggest issue I have had so far is disk drive performance. If you are planning on running multiple concurrent VMs, then go for as many HDDs as you can. Stick the most load-intensive ones on separate drives and you will really see the benefits.

  • Meh (Score:3, Informative)

    by jav1231 ( 539129 ) on Sunday March 22, 2009 @08:27PM (#27293541)
    I was running VM's back in the days of DOS. First with Taskview and later with DESQview running 4 concurrent DOS v5 sessions on a single-core 8088! And if they slowed down I'd just push the turbo button and go from 4.77Mhz to 8Mhz! oooWEEEE! That's right! And I'd tote that 45lbs IBM-XT all the way to the snow! And I LIKED IT!
  • by Sycraft-fu ( 314770 ) on Sunday March 22, 2009 @08:40PM (#27293621)

    The main thing you need for VMs is memory. There isn't really any good way for VMs to share memory, they each need their own. So decide what you want to give each system, and make sure you've got that much on the host plus like 1GB for the host OS and VM software. Good news is RAM is cheap. You should be able to pick up plenty for not much money. If you get a system based on a 975X or P35 chipset, you should be able to drop 8GB of RAM in it. Ought to be more than plenty. Those are cheap and plentiful these days too. Plus, they use DDR2 RAM, which is currently the cheapest. An Intel DP35DP motherboard might be a good choice.

    As for a processor, it kinda depends on how hard the VMs will be working; CPU they can share. So if they are mostly sitting idle - say, a web server serving up static pages - you can get away with not a whole lot of CPU power. If you want them all working all the time, you need more. A Core 2 Duo will probably do just fine if that's what you've got or you need to keep the cost as low as possible. However, this is a case where a quad core would make more sense, so that's a good way to go if you can - doubly so if it's the same price. Say you can get a 2.4GHz Core 2 Quad for the same price as a 3.0GHz Core 2 Duo: while for a desktop you'd probably want the Duo, get the Quad in this case. Might look at the Q6600 or Q8300. Both are under $200 and would do a real nice job. Note that the Q8300 is going to need a P35 board; the Q6600 will work on a 975 board.

    Disks are a real big "it depends." VMs can be set to grow as they need more space, so you can in theory have a bunch of VMs sharing one small disk along with the OS. However, that can lead to performance problems. Hard drives suck at random access, and if a bunch of VMs get going on one at the same time, that's what you'll get. So ideally you'd have one VM per hard disk. In reality that's probably overkill unless you've got lots of disks lying around. However, if your VMs will do heavy disk access, you might want to consider getting 2 drives for them, since drives are cheap. Either way, the best idea is to have them preallocate all the space they need for their virtual drives. You get better performance that way, even though it wastes drive space - but again, drives are cheap. Maybe start off with one drive for the VMs, and if you find they are getting bogged down, buy another and move them over. They are just files on the drive, so they're easy to move.

    Those are the biggest factors to think about. Get a quad core, a good amount of RAM, and enough disk space, and you should get great performance. If you need to save money, don't feel like a dual core won't work fine. Really, the only thing not to cheap out on is RAM. You need to have enough; virtual memory is WAY too slow. So if you want 4 VMs with 1GB each, have not less than 5GB in the system.
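The RAM rule of thumb above (each VM's allocation plus roughly 1GB for the host OS and VM software) is simple enough to sketch as arithmetic; the numbers below are the commenter's example, not a specification:

```shell
# Rough RAM budget: N VMs at RAM_PER_VM_GB each, plus ~1 GB host overhead.
N_VMS=4
RAM_PER_VM_GB=1
HOST_OVERHEAD_GB=1
TOTAL_GB=$((N_VMS * RAM_PER_VM_GB + HOST_OVERHEAD_GB))
echo "${TOTAL_GB} GB minimum"   # 4 VMs at 1 GB each -> 5 GB minimum
```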

    Supposing you do have plenty of cash and want to further increase performance one other thing you can look at is NICs. VMs don't do a great job of sharing NICs presently. VMWare is actually working on that, but right now you get ideal performance with one NIC per VM. Not normally a big deal but if your VMs do lots of traffic it can matter. So if you want, get more NICs. One of those multi-port NIC cards works just as well. This really isn't all that necessary, but you can do it if you are after the best performance.

  • by Anonymous Coward on Sunday March 22, 2009 @08:42PM (#27293647)

    I'm doing this now, running a company infrastructure on Xen3.2 and 2 non-server class machines. These are Gigabyte and MSI Core 2 Duo motherboards running at 2.6 and 3Ghz. Each with 4GB of RAM, dual GE NICs, RAID1 drives. Nothing special.

    Application Systems are:
      - enterprise email/calendaring/IM
      - CRM
      - document management, file/print
      - project management
      - VPN
      - internal website / wiki
      - VoIP/PBX
      - Monitoring, PKI to manage VPN credentials
      - LDAP for authentication across all these systems.

    Each VM can be migrated to the alternate host-server with minimal downtime (sub 1-second).
    Backups are rdiff-backup based - complete VM backups take less than 2 minutes. Most of the machines are only 10GB disk images. DMS is 20GB since we're a document heavy enterprise.

    Total CPU is hardly ever over 20% utilized, basically, only during backups. Because Linux grabs available RAM for disk buffers, it is all used, but everything easily fits on a single 4GB RAM box with excellent performance. This is nice so system upgrades don't impact running systems, but most of the work can still be performed during work hours. Having 2 boxes lets me perform system upgrades without any risk to the "production" system.

    I'm running 64-bit Ubuntu for all.

    I tried VMware: it wouldn't load ESXi on my hardware, and VMware Server is "too heavy." For some of our customers VMware is the best solution, but they are Windows shops.

    I use VirtualBox on a laptop, but it isn't ready for enterprise use. Another year or so and it will be stable enough. 3 of my partners also use VBox on their laptops. It's easier to set up a VM with Linux than to fight with cygwin.

    Xen and Linux go together nicely. I plan to bring up a few Windows VMs - I've read they work fine under Xen3.2, but haven't had time to try them yet.
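    For anyone new to Xen 3.x: a paravirtualized guest is defined by a small config file of Python-style assignments. The names, paths, and sizes below are made-up placeholders for illustration, not this poster's actual setup:

```python
# Hypothetical /etc/xen/vm1.cfg for a paravirtualized Linux guest under Xen 3.x.
# Xen's xm config files use Python assignment syntax.

name   = "vm1"                      # domain name shown by 'xm list'
memory = 512                        # RAM for the guest, in MB
vcpus  = 1                          # virtual CPUs
kernel = "/boot/vmlinuz-2.6-xen"    # dom0-side guest kernel (placeholder path)
disk   = ["tap:aio:/srv/xen/vm1.img,xvda,w"]  # file-backed disk image
vif    = ["bridge=xenbr0"]          # attach to the dom0 network bridge
```

    With both hosts seeing the same storage, moving a guest like this to the other box is then a matter of `xm migrate --live vm1 <otherhost>`, which is where the sub-second downtime figure above comes from.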

    I've blogged about most of what I've learned along the way: systems, applications, Xen, and other virtualization issues.

  • Re:Memory (Score:2, Informative)

    by tautog ( 46259 ) on Sunday March 22, 2009 @08:55PM (#27293735)

    Mod up the parent. Tons of RAM is the most critical component, and a 64-bit host should be mandatory.

    Second, a multi-core processor (don't care which, pick your poison) makes things feel snappier.

    Lastly, multiple monitors are really nice. Find a card with multiple digital outputs; a couple of decent LCDs make for a crisp, fast, and pleasant display. Spend a little jack here: I picked up a set of Viewsonic 19" widescreens recently for about $320 (for two). Go for high-res, it's worth it.

    For reference, I'm running a dual-core Intel (E2140, I think) with 4GB of RAM. Ubuntu 8.10 runs VirtualBox loaded with Win2k8 Server and WinXP Pro very nicely. I'm debating whether to add another 4GB or to build a SATA array for my data and VM images.

  • by adisakp ( 705706 ) on Sunday March 22, 2009 @09:38PM (#27293993) Journal
    Just FYI, to make paragraphs in a Slashdot comment, use the HTML "p" tag. You can also use the "br" tag to insert line breaks.
  • Re:8 core Mac Pro (Score:3, Informative)

    by chipset ( 639011 ) on Sunday March 22, 2009 @10:14PM (#27294205) Homepage

    I sold my Dual Processor G5 for $1800 after owning it for 2 years. I paid $2000 for it. That's a pretty low cost of ownership, if you ask me.

  • by monkeySauce ( 562927 ) on Sunday March 22, 2009 @10:40PM (#27294329) Journal

    I'm also using QEMU+KVM. I built an inexpensive server with parts from Newegg. Total cost was around $500, about 9 months ago.

    I second the knock against VMware (free Server edition). I used it for a bit and got sick of it not working after kernel upgrades, or not working because it complained about its config being incomplete/invalid, etc. Xen, VirtualBox, or QEMU are all better products, IMO.

    I would say that the number of guest VMs you can run concurrently really depends only on your RAM, while their performance will depend on your CPU and what you have them doing.

    I have an Athlon64 X2 and 8GB RAM and am running 10 guest VMs (8 Linux, 1 FreeBSD, 1 Windows) with lots of RAM to spare and CPU under VERY low load. I overspent on RAM so I could easily add more VMs. Some of my guests run servers/services that a few other people connect to, but most are testing or development machines that aren't really doing much unless I connect to them, and I can only do so much at one time, so it works well even with just a dual-core CPU.

    No surprise, but the biggest resource hog is Windows. It uses the most CPU time of any of the guests, even though it isn't serving or doing anything (or at least it shouldn't be) most of the time. If you want to virtualize multiple Windows installations, I would put the most money toward your CPU.

  • by 427_ci_505 ( 1009677 ) on Sunday March 22, 2009 @11:10PM (#27294493)

    To add some support for this, for basic VM needs:

    I have a Core 2 Duo (T9300 on a Lenovo Thinkpad) laptop that runs 3 instances of linux at the same time.

    The host is a 64 bit linux, and the VMs are 32 and 64 bit linux guests.
    I've done basic text editing and messing around in the 64-bit guest while playing music, watching YouTube, IMing, and w/e on the 64-bit host and compiling Linux on the 32-bit guest. Didn't break a sweat, and used about 1.2GB of RAM total (I have 4GB installed). So for basic tinkering any newish machine should have no problems.

    This was with VMWare Player, btw.

  • by Kamokazi ( 1080091 ) on Sunday March 22, 2009 @11:18PM (#27294535)

    Yep, you don't need power or server-class hardware if this is just a test setup.

    Now everything is entirely dependent on your setup, but your biggest factor is going to be RAM. Unless you are running SQL or something else CPU intensive, RAM will be your limiting factor on how many VMs you can run in most cases.

    The most cost-effective solution, I think, would be to build some whitebox AMD Athlon 64 X2 systems from Newegg and load them down with 8GB of RAM; that should run you about $350-400 each. One of those systems could run several VMs. If you think one of them might need some CPU horsepower, you should be able to build a Core 2 E8200 system for about $100 more.

    I would also suggest building an iSCSI storage box to house the VMs and their data; Openfiler does a great job of this. iSCSI is a technology that works very well with virtualization and its failover capability. For a system like this, you just need a couple of big hard drives, a mobo capable of RAID, and 1GB of RAM.

  • by Anonymous Coward on Sunday March 22, 2009 @11:58PM (#27294741)

    I'm also running qemu-kvm on a 2GB Turion laptop (hp dv2412ca - $799 1.5 years ago). I'm very very happy with it. Here's my setup:

    - debian with 2.6.28 kernel (native)
    - windows 2000 server (vm with 512MB)
    - windows xp pro (32 bit) (vm with 384MB)
    - windows xp pro (64 bit) (vm with 512MB)

    Performance is awesome even though the laptop's CPU only runs at 1.7GHz. Also, I was very happy to have them all appear as separate hosts on my home network; all my other computers (including the networked printer) see them as individual, independent hosts: (debian) (windows 2000 server) (xp pro 32 bit) (xp pro 64 bit)

    Stability is great too - the windows 2000 server has gone months without a reboot.

    If I happen to need more RAM for one reason or another, I tend to shut down the 64 bit XP VM.

    Good luck!

    If you do go with a laptop, you have to be careful about the CPU (maybe that's not the case anymore?). I almost bought a TK-53 instead of a TL-58 AMD CPU, and the TK-53, even though it's the same generation, doesn't have virtualization support! Gah!
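    An easy way to avoid that surprise on Linux is to check for the `vmx` (Intel VT-x) or `svm` (AMD-V) feature flag in `/proc/cpuinfo`. A small sketch, with an abbreviated made-up flags line standing in for the real file:

```python
# Detect hardware virtualization support by scanning CPU feature flags.
# On a real Linux box you would read this text from /proc/cpuinfo.

def has_hw_virt(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# Illustrative sample line, roughly what an AMD chip with AMD-V would show:
sample = "flags : fpu tsc msr pae cx8 cmov mmx sse sse2 ht lm svm"
print(has_hw_virt(sample))  # svm
```

    A CPU without either flag (like the TK-53 mentioned above) can still run software-only virtualization, but Xen HVM guests and KVM need the hardware support.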

  • by Anonymous Coward on Monday March 23, 2009 @12:04AM (#27294773)

    The basics of virtualization come down to the number of cores, the amount of memory, and the number of spindles, though if you've read the latest SSD reviews on AnandTech you can replace spindles with Intel X25-M or X25-E drives. In a virtual environment random reads/writes are FAR more common than sequential access, so you want either a high spindle count or fast SSDs, depending on your budget.

    A quad-core system with 4-8GB or more (it all depends on how much memory you want to allocate per VM) and either a lot of SATA drives or one or more Intel SSDs will be very fast.

    Most VM hosts that I've seen miss the boat on disk throughput but deliver on CPU and RAM. Given those requirements you can find a quad-core processor + motherboard for ~$200 (I bought mine for $180), 8GB of RAM (PC2-6400) for ~$60-80, and one Intel SSD for $325. You should be extremely happy with the performance. If you need bulk storage, I'd suggest buying some SATA drives in addition.

  • Re:8 core Mac Pro (Score:3, Informative)

    I routinely run three or four guest VMs (Debian/Ubuntu/Win2K) concurrently on my laptop: dual-core AMD64, 3GB RAM, 200GB drive (Toshiba). You must be doing something wrong.
  • Re:8 core Mac Pro (Score:1, Informative)

    by masshuu ( 1260516 ) on Monday March 23, 2009 @01:24AM (#27295077)

    I don't see why that's a problem. Back when my desktop was running with 1 core and 2 gigs of RAM, I ran 4-5 virtual machines at the same time (not a lot of RAM for them, but Linux doesn't need a lot of RAM).

    Now that I have 8 gigs and both cores usable, I don't worry about much.
