Reasonable Hardware For Home VM Experimentation?

cayenne8 writes "I want to experiment at home with setting up multiple VMs and installing software such as Oracle's RAC. While I'm most interested at this time in trying things with Linux and Xen, I'd also like to experiment with things such as VMWare and other applications (yes, maybe even a Windows 'box' in a VM). My main question is: what hardware should I try to get? While I have some money to spend, I don't want, or need, to be laying out serious bread on server-room-class hardware. Are there some used boxes, say on eBay, to look for? Are there any good consumer-level options from someone like Dell that would be strong enough? I'd even be interested in getting some bare-bones boxes from NewEgg or TigerDirect. What kind of box(es) would I need? Would a quad-core processor in one box be enough? Are there cheap blade servers out there I could get and wire up? Is there a relatively cheap shared-disk setup I could buy or put together? I'd like to have something big and strong enough to run at least a 3-node Oracle RAC, for example, with ASM and OCFS."
  • by MacColossus ( 932054 ) on Sunday March 22, 2009 @06:48PM (#27292207) Journal
    8-core Mac Pro: Xeon-based, easy access to multiple drive bays, dual gigabit Ethernet, etc. Runs Linux, Windows, and Mac OS X.
    • Re:8 core Mac Pro (Score:5, Informative)

      by jackharrer ( 972403 ) on Sunday March 22, 2009 @07:13PM (#27292457)

      I hope you're joking... That's waaaay too expensive.

      I can run up to 4 VMs on my laptop (Lenovo T60) with 3GB of RAM and a 2GHz Core 2 Duo without any problems. Often I need to work on 3 machines (one for design plus a cluster for testing) and it all works really well together. The problem is that the disk subsystem sucks, so I suggest you invest in some RAID, but processor- and memory-wise it's enough. If you run Linux guests, you can run more of them, as they use less memory and are lighter on the processor too. Just stay away from the GUI, as X uses an abysmal amount of processing power in a remote VM for anything more than 800x600.

      You don't really need anything very expensive - most commodity hardware nowadays runs VMWare Server easily. It's also free, so even sweeter. Just choose a processor that supports virtualisation, as that speeds everything up a lot.

      • Expensive relative to what? An arbitrary machine with the same specs? I doubt that.

        Expensive relative to what a home user (even one who wants to run VMs) actually needs? Absolutely, you can get away with a machine for a few hundred dollars and get plenty of cpu power and ram.

      • Basically, as long as each virtual node isn't doing any WORK, you don't need any special hardware. The same goes even if they are doing some work, just not a lot. We have 5 Linux Xen VMs in production on a 1600MHz Celeron with 768MB of RAM; it works fine, no problems.

        The CPU is almost irrelevant - you'll need whatever CPU you'd need to do all the things you're doing, plus some overhead, but it's not like it falls apart.

        RAM is the only critical thing. You need at least 96MB for the host and 24MB for each additional live Xen VM, as I recall (that's probably not precisely right), but you'll naturally be swapping a ton if you do that. A more reasonable VM has 128-256MB of RAM itself, so you need that much for each active VM. But again, that's only for each one running at a time.

        Or if you are going to swap a bunch, get better disks :)
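        To make those memory numbers concrete, here is a minimal sketch of a paravirtualized guest definition in the classic /etc/xen config format; the name, volume path, and 256MB figure are placeholders, not recommendations:

            # /etc/xen/vm1.cfg -- hypothetical minimal domU definition
            name       = "vm1"
            memory     = 256                          # MB given to this guest
            vcpus      = 1
            disk       = ['phy:/dev/vg0/vm1,xvda,w']  # LVM-backed virtual disk
            vif        = ['bridge=xenbr0']            # bridged network interface
            bootloader = '/usr/bin/pygrub'            # boot the guest's own kernel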

        In any case, I definitely wouldn't climb the price curve of equipment to do this; don't buy anything on the bleeding edge - look at arstechnica and just max the RAM on a value box - or maybe upgrade the MB to something that takes more RAM.

        Used commodity computer equipment is usually not cost-effective compared to the cheap end of what's still available new. But pay attention to the price point where it's cheaper to buy (and power, while they're on) TWO value boxes than to pump the one box you've been speccing up even higher.


  • by BadAnalogyGuy ( 945258 ) on Sunday March 22, 2009 @06:50PM (#27292221)

    I ran into this same situation and found the best cost/performance setup was a Beowulf cluster of netbooks.

    You get the cumulative power of those Atom processors and have a huge memory pool to run the VMs within.

  • by AltGrendel ( 175092 ) on Sunday March 22, 2009 @06:50PM (#27292223) Homepage
    Just check the BIOS to make sure that you can enable virtualization on the motherboard.
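    A quick generic check from a Linux shell shows whether the CPU itself advertises the extensions (the flag can be present even when the BIOS has it switched off, so check both):

        # Non-zero output means the CPU reports VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo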
    • Re: (Score:3, Informative)

      by koko775 ( 617640 )

      Don't get an Abit motherboard, or at least don't get their Intel P35-based boards. I can't speak to the rest of their stuff, but putting my Abit IP35-based computer to sleep and waking it back up actually *disables* the VM extensions, either freezing upon waking if any were running, or ensuring none start until I power off (reset doesn't cut it).

      Other than that, I recommend a Core 2 Quad with lots and lots of RAM, and an array of 1TB SATA drives to RAID.

      Also of note: Windows 7 doesn't let you use a real har

    • Re: (Score:2, Informative)

      by 427_ci_505 ( 1009677 )

      To add some support for this, for basic VM needs:

      I have a Core 2 Duo (T9300 on a Lenovo Thinkpad) laptop that runs 3 instances of linux at the same time.

      The host is a 64 bit linux, and the VMs are 32 and 64 bit linux guests.
      I've done basic text editing and messing around in the 64-bit guest, while playing music, watching YouTube, IMing and whatever on the 64-bit host, and compiling Linux on the 32-bit guest. It didn't break a sweat, and used about 1.2GB of RAM total (I have 4GB installed). So for basic tinkering any new-ish machine sho

  • by TinBromide ( 921574 ) on Sunday March 22, 2009 @06:53PM (#27292243)
    I ran my first virtual machine on an Athlon 2200+ with 768 megs of RAM. If it can run Windows 7, you can run a VM or 3 (depending on how heavy you want to get). Essentially, take your computer and subtract the cycles and RAM required to run the host OS and background programs; that's the hardware you have left over to run the guest. If the guest OS was compatible with your original hardware, chances are it'll work just fine in a VM.
    • Re: (Score:3, Informative)

      by Kamokazi ( 1080091 )

      Yep, you don't need powerful or server-class hardware if this is just a test setup.

      Now everything is entirely dependent on your setup, but your biggest factor is going to be RAM. Unless you are running SQL or something else CPU-intensive, RAM will be your limiting factor on how many VMs you can run in most cases.

      The most cost-effective solution, I think, would be to build some whitebox AMD 64 X2 systems from Newegg and load them down with 8GB of RAM... should run you about $350-400 each. One of those systems c

  • Dell XPS Studio (Score:5, Interesting)

    by drsmithy ( 35869 ) on Sunday March 22, 2009 @06:54PM (#27292247)

    Dell currently have the Studio XPS (2.66GHz Core i7, 3GB RAM, 500GB HDD) going for US$800 - for a basic home virtualisation server, it's hard to go past, especially if you spend another US$80 or so to bump the RAM up to 9GB. I can't imagine you could build it yourself for a whole lot less (depending on how you value your time, of course).

    (Damn, sometimes I wish I lived in the US. Stuff is just so bloody cheap there.)

    • by dbIII ( 701233 )

      I can't imagine you could build it yourself for a whole lot less (depending on how you value your time, of course).

      It's not as if you even have to set jumpers now - do you really need more than an hour?

      As for really cheap, there's bound to be somewhere near you that does direct imports from Asia. Iwill buries Dell in quality every time, and at the more expensive end, Supermicro boards come directly from there anyway, even though it's a US company.

      • Re: (Score:3, Insightful)

        by kenh ( 9056 )

        What the original poster said was that it would be difficult to save money by building it yourself, and he's right.

        An i7 920 CPU is between $250-300.

        An i7 x58 MB is $200-300.

        A comparable video card is $100.

        Six Gigs of DDR3 RAM is $100-200.

        A 500GB SATA HD plus DVD-RW is another $100.

        Chassis is $75+.

        Power Supply is $50+

        Keyboard/mouse are another $15-50.

        If you want the Vista OS license, that's another $100-200, depending on version/source.

        I'd put the DIY cost at $225 (CPU/cooler) + $200 (MB) + $100 (Video) + $15

    • You can get a pretty beefy Dell PowerEdge server with a quad core processor for less than $800. Look at the Small Business section under Tower Servers. I was actually thinking about picking one up for this same reason just the other week!
      • by drsmithy ( 35869 )

        You can get a pretty beefy Dell PowerEdge server with a quad core processor for less than $800.

        It won't be as fast as the Core i7 in the XPS studio, however, especially for virtualisation. It's also not going to have the same RAM capacity (4 slots vs 6).

        (This equation may change in the next week or so when the i7-based Xeons are "officially" released.)

        • Perhaps. I was referring more to the added parallelism for running multiple operating systems. You can get one with a single Quad Core fairly cheap. If you're really adventurous you can configure it with a second processor for a little more cash. The base model T605 starts at $700 though I'd be inclined to go for a little higher end model. A quick look at the 2900 III configured with 2 Quad Core CPUs and 12 GB ram is about $2600. I don't know what the original poster's price range is but I've spent that mu
    • If you are getting a Dell, look at the PowerEdge or some of the Precision lines (PowerEdge is the server line and Precision is the workstation line). The servers look ugly but are extremely cheap compared to a workstation - what you lose is audio, video and stuff like that, but instead you'll get Linux-supported RAID, a solid chassis and overall better value for money for virtualization applications. Oh, and you apparently get US-based customer support by phone, if that is important to you.
      Try and look for an offer, and D
  • Memory (Score:5, Informative)

    by David Gerard ( 12369 ) on Sunday March 22, 2009 @06:55PM (#27292267) Homepage
    64-bit Linux host and absolutely as much memory as you can possibly install.
    • Re: (Score:2, Informative)

      by tautog ( 46259 )

      Mod up the parent. Tons of RAM is the most critical component, and a 64-bit host should be mandatory.

      Second, a multi-core processor (don't care which, pick your poison) makes things feel snappier.

      Lastly, multiple monitors are really nice. Find a card with multiple digital outputs; with a couple of decent LCDs it makes for a crisp, fast, and pleasant display. Spend a little jack here - I picked up a pair of Viewsonic 19" widescreens recently for about $320 (for two). Go for high-res; it's worth it.

      For reference, I'm r

  • by Deleriux ( 709637 ) on Sunday March 22, 2009 @06:55PM (#27292273)

    I personally use qemu-kvm and I'm quite happy with it. That's running on a dual-core machine with 2GB of RAM (probably not enough RAM, though!).

    For the KVM stuff you need a chip which supports Intel's VT or AMD's AMD-V, so your processor is the most important aspect. A quad core would probably be suitable too if you can buy that.

    For just experimentation it's a fantastic alternative to VMWare (I personally got sick of having to recompile the module every time my kernel got updated).

    On my box I've had about 6 CentOS VMs running at once, but frankly they were not doing much most of the time. Ultimately it's going to boil down to how much load you inflict on the VMs underneath; my experience with it has not been very load-heavy, so I could probably stretch to 9 VMs on my hardware, which is probably on the lower end of the consumer range these days.

    The most important bits are your CPU and RAM. If you're after something low-spec you can do a dual core with 2GB of RAM, but you could easily beef that up to a quad core with 8GB of RAM to give you something you can throw more at.

    Oh, and QEMU without KVM is painfully slow - I wouldn't suggest it at all.
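    For anyone who hasn't tried it, bringing up a guest is only a couple of commands. A sketch, with the image and ISO names as placeholders (on some distros the binary is kvm, or qemu-system-x86_64 with -enable-kvm):

        # Create a 10GB qcow2 disk image, then boot the installer ISO in a
        # KVM-accelerated guest with 1GB of RAM and 2 virtual CPUs
        qemu-img create -f qcow2 centos.img 10G
        qemu-kvm -m 1024 -smp 2 -hda centos.img -cdrom CentOS-5.3.iso -boot d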

    • Re: (Score:3, Informative)

      by monkeySauce ( 562927 )

      I'm also using QEMU+KVM. I built an inexpensive server with parts from Newegg. Total cost was around $500, about 9 months ago.

      I second the knock against VMWare (free server edition). I used it for a bit and got sick of it not working after kernel upgrades, or not working because it complained about its config being incomplete/invalid, etc. Xen, VirtualBox or QEMU are all better products, IMO.

      I would say that the number of guest VM's you can run concurrently really only depends on your RAM, where performanc

    • The most important bits are your CPU and RAM.

      I do a lot of my work in VMs and can tell you that the most important things, in order are:

      1) RAM
      2) RAM
      3) Disk speed

      You need a 64-bit CPU and you want it to be dual-core at least, but other than that it's basically a minor issue. All CPUs are 64-bit now except the netbook Atom. VMs generally run, say, 10% slower in terms of CPU speed, if that, so just choose a CPU slightly beefier than what you'd need if you weren't using VMs.

      Why is RAM so important? Because IO is a bottleneck normally and with VMs you are ju

    • Re: (Score:2, Informative)

      by Anonymous Coward

      I'm also running qemu-kvm on a 2GB Turion laptop (hp dv2412ca - $799 1.5 years ago). I'm very very happy with it. Here's my setup:

      - debian with 2.6.28 kernel (native)
      - windows 2000 server (vm with 512MB)
      - windows xp pro (32 bit) (vm with 384MB)
      - windows xp pro (64 bit) (vm with 512MB)

      Performance is awesome even though the laptop's CPU only runs at 1.7GHz. Also I was very happy to have them all appearing as separate hosts on my home network - all my other computers (including the networked printer) see

  • by ya really ( 1257084 ) on Sunday March 22, 2009 @06:57PM (#27292293)
    I currently run VMware Workstation with an Intel Q6600. VMware has a setting to choose whether to use one or two of the cores. Generally, for Linux VMs, one core is enough (unless you decide on a GUI). If one goes for Windows Vista/7, two is better for performance, but one works okay for XP.

    RAM is dirt cheap right now on Newegg as well. I have 8GB of Corsair DDR2 RAM I got for 50 dollars after rebates. Non-GUI, you can get by with 384-512MB of RAM, but otherwise I'd go with at least 1024MB or more.

    The nicer part of VMware Workstation is that it now supports DirectX 9.0c (but with only shader model 2; they're still working on 3). Expect a 10 or so percent drop in performance for gaming, though, depending on how many resources you allocate.

    Your needs look a bit bigger than mine (mostly trashing VMs and running test software before doing something crazy to the actual box). A bigger CPU such as a Xeon might be more to your liking, since you can have 2 of them for a total of 8 cores (leading to lots of VMs).
    • Pretty much the base for my ESX lab: Q6600 to support SMP guests and 64-bit OSes, whitebox config (Asus P5-something-or-other, 8GB RAM, small SATA drive for the ESX install and local VMFS storage) and an iSCSI / NFS server for testing VMotion and such. It's ironic, but when my colleagues and I do testing for consulting contracts, we have better lab environments in our basements than the companies for which we're doing the work. It's actually faster to mock up a design or implementation by RDPing to home and doing the wor
    • by MeanMF ( 631837 ) on Sunday March 22, 2009 @07:36PM (#27292663) Homepage
      Choosing the "dual processor" option in a VM isn't necessarily a good idea, especially if you have a lot of VMs running. It means that whenever the VM needs physical CPU time, it has to wait until two cores free up. And when it does get CPU time, it will always use two cores, even if it's not doing anything with the second one. So if there is a lot of competition for CPU, or if you're running a dual-processor VM on a dual-core host, it can actually cause things to run much slower than if all of the VMs were set to single-processor.
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Not necessarily. Look up "relaxed co-scheduling." It's been in there since around 2006. (Another reason why VMware outperforms the others.)

    • by Joe U ( 443617 )

      I'm also using a Q6600, Vista x64 host, 4GB (soon to be 8) of RAM and a RAID 10 array. Quad core is great for VMWare and the Q6600 is an inexpensive workhorse. Go with quad processors for VMs (Xeon for your workload); this is one case where the extra cores will be of use.

  • What I use. (Score:4, Informative)

    by ( 1195047 ) on Sunday March 22, 2009 @06:58PM (#27292305) Homepage Journal
    My VM server rig is decidedly low-end compared to many I've seen, but it certainly gets the job done. I custom built the box, mostly from components bought on NewEgg; it has a dual-core AMD64 chip (soon to be upgraded to a quad-core), 3 GB RAM, and about 500 GB total drive space between two IDE (yeah, I know, will upgrade to SATA at some point) drives.

    The machine runs Ubuntu Server with VMWare Server 2. I can easily run several Debian and Ubuntu VPS nodes on it under light load, and I use it for experimentation with virtual LANs and dedicated-purpose VMs. I periodically power up a Windows Server 2003 VM, which uses a lot more resources, but it's still fine for testing purposes.
    • I signed up to check out how well KVM/qemu supported security testing of virtual machines for a network security class I'm taking. The box I'm running the VMs on looks like this. Dual core AMD X2/64 CPU:

      [dave@bend ~/]# cat /proc/cpuinfo
      processor : 0
      vendor_id : AuthenticAMD
      cpu family : 15
      model : 67
      model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
      stepping : 3
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall n

  • Is there a relatively cheap shared disk setup I could buy or put together? I'd like to have something big and strong enough to do at least a 3 node Oracle RAC for an example, running ASM, and OCFS."

    Er, if you're running VMs, you inherently have "cheap shared disk" - the disk in the host that any of the VMs can access. :)

  • by rackserverdeals ( 1503561 ) on Sunday March 22, 2009 @07:00PM (#27292329) Homepage Journal

    You can find lots of used servers on eBay that you can mess around with. Sun's v20z servers are pretty cheap and have a decent amount of power.

    A lot of the stuff I've run across is rack mounted, and keep in mind that rack-mounted servers are loud in most cases. So it may not be the best thing to play around with in your home or office.

    You don't really need any special CPU to mess around with virtualization; you won't get "full" (hardware-assisted) virtualization, but I don't think that will stop you.

    I'm currently running a number of VMs on my desktop using Sun's VirtualBox (xVM, or whatever they're calling it now). Even within some of the Solaris VMs I'm running Solaris containers, so I'm doing virtualization upon virtualization, and my processor doesn't have virtualization technology support.

    If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. Using Newegg's power search there is an option to filter by CPUs that support virtualization technology.

    If your primary focus is Oracle RAC, you may want to look at Oracle VM, which is Xen-based.

    • by CAIMLAS ( 41445 )

      If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. Using Newegg's power search there is an option to filter by CPUs that support virtualization technology.

      You can do that with "desktop" class CPUs, too - just fine. The only substantial difference between the Opteron and Phenom 1 and 2 is the ability to have multiple CPUs; a Phenom, or even an Athlon 64 X2, or I believe an Intel Core or Intel Core Duo, will do the job just fine. They all (IIRC) have VT extensions.

  • by roc97007 ( 608802 ) on Sunday March 22, 2009 @07:00PM (#27292337) Journal

    You can run virtual instances on practically anything. I use VMWare Workstation on an older AMD Athlon 3200+ (the machine on which I'm typing this) and get acceptable performance if I only have one instance booted at a time. You're not going to be playing video-intensive games on the instance, but it'll work fine.

    I maintain a few websites (my blog, a gallery, couple other things) on an old server class machine in the garage. Companies often scrap servers after the 3 year warranty expires, or they've finished depreciating (depending on individual business rules) and they're often fast enough to make reasonable virtual servers. Often you can pick them up at a scrap sale or surplus store, or, if your company has an IT department, get permission to snag a machine that's about to be scrapped.

    I recently brought up VMWare's free bare-metal hypervisor, ESXi, and was surprised at how easy it was to set up and create instances. VMWare has a free physical-to-virtual converter you might want to experiment with. It works great with Windows, but is kinda hit-and-miss with Linux.

  • ESX Whiteboxing info (Score:2, Informative)

    by MartijnL ( 785261 )
    ESX whiteboxing information can be found in a number of places; the community whitebox HCL lists are a good place to start.
  • Not that much (Score:3, Interesting)

    by Beached ( 52204 ) on Sunday March 22, 2009 @07:05PM (#27292393) Homepage

    You can do it "well" on a dual core with 4GB of ram. Even less, but with todays prices you can get a system for a couple hundred if you watch for sales. RAM is you biggest killer that you will notice. Then again, with quad cores with VM assistance going for under $200CDN, thats relatively cheap. If you're worried about HD performance a couple 500GB drives striped will give you over 100MB/s of read speed a relatively small investment.

  • We have about 4 machines, each with 2 quad-core CPUs and 64GB of RAM, running ESX and hosting about 100 VMs (many Linux and Windows)... and we still have about 50% of resources free.

    So grab one quad-core machine with lots of RAM (for Oracle RAC+ASM+DB you will need at least about 4GB for the 3 RAC nodes; the more the better).

    As this is for testing, I would buy a plain quad-core PC with 6 to 8GB of RAM and install 64-bit Linux with Xen or VMware ESXi.

    If you have more money, you can buy more RAM or even CPU, bu

  • by cthornhill ( 950065 ) on Sunday March 22, 2009 @07:26PM (#27292583)
    I strongly advise you to do your homework before spending money on non-server-class hardware (or before selecting a server, for that matter). VMWare runs on a lot of hardware, but it also fails badly on a lot of consumer-grade motherboards. There are some lists (white box hardware lists) you can check. After spending some time on name-brand server HW and on white box gear, I can tell you that the name-brand server gear is a lot more compatible, easier to work with, and worth the money (if you have it).

    If you are doing casual stuff and don't mind the considerable pain you will have to go through to get patches and select disk systems and other components, consumer gear will let you play a bit. As for doing anything serious with more than one VM on a box - not likely. Xen is a commitment, as is VMWare or any other VM system. It is going to eat the box if you do anything other than dabble in it, and you are going to spend some real money if you intend to do much with VMWare (think $3K - $5K to get very serious). Running a VM is easy. Running multiple servers, backups, external disk systems, etc. is real work and costs real money for all the extra stuff you will need. If you stick to Linux you can save a bunch, but if you intend to do any real work with MS servers, you are going to need several licenses, iSCSI targets, backup tools, etc.

    You won't actually learn too much before you go to that level that you can't learn with VMWare Workstation (a great product, but not anything like a production server environment). You can get your feet wet for nothing but time with most of these tools, but you can't get real, in-depth experience with what it takes to run a production cluster, replication, remote storage, live replication, and all the rest of the things you need for real production unless you actually set up a production-like system - that means real servers (white box or name brand) and lots of hardware. You won't be able to see much with less than 8 cores and 16GB, plus some local RAID and iSCSI network targets. You can get started, but if you are going to spend money, I really think you should either buy gear that builds towards a real server environment, or stick to home systems and run VMWare Workstation or some other standalone VM just to play with it. User Mode Linux (not very popular today) or some Xen sets for personal use would give you some understanding of VM concepts, but not a lot of basis for real production issues (at least they did not for me, and I was a pretty heavy development user). Production VM deployments have a lot of issues that all take real in-depth study, and lots of resources (iron) to get right.

    On the other hand, you can get a Supermicro, Dell, or HP server with dual quad-core Xeons for less than $4000 with some nice disk. Get 4 or 5 containers under a VM, set up replication to another server and a remote iSCSI disk, and then you have enough to actually start doing real learning. Of course the license fees will be way more than the hardware costs if you are using MS tools and VMWare. ESXi is OK, but unless you are going to go deep and do it all the hard way (hack the OS) you can't do a lot with the free version. With Xen, if all you want is to run a couple versions of Linux, just get a quad-core box and have some fun... it doesn't really give you much production knowledge, but you will have some interesting tests you can try.

    What I am really saying is: with only 4 cores you can do some useful things to support development, and you might make a nice personal server for your private web sites, but you don't have enough iron to experience the real issues of production VM management. If you are going past what a developer (or a tester) does and looking at an operations-type environment, you will need 8 to 16 cores on multiple boxes. This is a lot more than a home user typically wants to spend. IMO you also can't really expect to be really good on more than one system unless you do it day in a
  • Much better solution (Score:5, Informative)

    by codepunk ( 167897 ) on Sunday March 22, 2009 @07:26PM (#27292589)

    Amazon EC2 is what I use for stuff like this; both Windows and Linux boxes, everything available at the push of a button. I also use it a lot for development: fire up a machine, load, and go.

  • by itomato ( 91092 ) on Sunday March 22, 2009 @07:27PM (#27292595)
    Reading 'cayenne8', I can't help but imagine a V8 Porsche, and because I'm a car guy, for good or bad, this shifts the focus of my comment toward resources, specifically what is available, versus what is acceptable or tolerable.

    Let's say you're a one-man Lab, incorporating all the SA, Developer, and Midware functions into your 'play' with this environment. How much time will each environment spend heavily plowing into loads?

    If your intent is to deploy RAC in a multitude of scenarios, in short order, with a minimum of intervention, you may be able to get away with $1500 to $2500 worth of NewEgg parts (think high throughput - RAID, max RAM, short access times, etc.) and the virtualization technology of your choice. Personally, I find VirtualBox capable of everything I need as far as virtualization and deployment goes; however, you need to be able to leverage 'fencing', which likely puts you into VMWare territory.
    Fortunately, VMWare Server is 'free', and CentOS and OpenSuSE support some of the more advanced features of HA on Linux. Then again, if we're looking at resources as a major factor, then Redhat and Novell might be worth looking at, as they both offer 60 to 90-day evaluation licenses for their Enterprise Linux products, which may offer a prettier and more 'honest' (per the documentation and common expectations) implementation of their respective HA features than the freely-available and, in some cases, in-flux versions of the same software.

    As far as RAC goes, take a look at the requirements for RAC per Oracle's installation guidelines and size/spec from there. I believe you can get away with 16GB total if you have the capability to size the VMs' memory access, or otherwise configure the amount of addressable memory, or put up with (or hack) Oracle's RAC installation pre-flight. There is also valuable documentation available on your chosen OS vendor's site, which may even be Oracle, who knows.

    You may be hell-bent on performance, however, and you may be looking for the ultimate grasp of technological perfection, as it exists at Sun Mar 22 17:29:59 EDT 2009. In this case, you may want to look at Xen, which is available on Solaris as their 'xVM' technology, as well as on various Linuxes and BSDs.
    On the other hand, you may be a Mac guy, with a decked-out Octo-core Xeon Mac Pro, where you have the option of Parallels and Virtual PC and something else, in addition to Sun's VirtualBox mentioned above.

    Ultimately, things to keep in mind may be shared disk requirements, fencing options, and VM disk and memory access.
    • The only difficulty with putting RAC on "a" machine is that the configuration of the networking tends to be the major PITA.

      All that will get sidestepped by going virtual. If you don't expect to be maintaining the hardware, it is not likely to matter.

      • by itomato ( 91092 )

        That's why I use VirtualBox; the ability to set up bridged and NATed networks is easy and reliable.

        With Xen or the like, there's quite a bit more that needs to be accounted for, and now that I think about it, multiple physical NICs may not be a bad idea for this one-box lab.

  • Two machines (Score:5, Informative)

    by digitalhermit ( 113459 ) on Sunday March 22, 2009 @07:29PM (#27292609) Homepage

    You can do Oracle with just a single machine running multiple VMs; however, if you really want to get serious, you should consider building two physical machines. On each machine, create a virtual machine or two with 1-2GB of RAM. For the shared disk, use DRBD volumes between the two.

    My test RAC cluster has two AMD X2 64-bit systems with two gigabit NICs each. CompUSA has a similar machine for about $212 on sale this week. Newegg prices are similar. You'll need to add a couple of extra gigabit NICs and some more storage. It should still cost under $400 each.

    On each physical system I used CentOS 5.2 with Xen. I created LVM volumes on the physical machines as the root volumes, and also carved out a separate volume to back the shared storage. Then I carved out a Xen virtual machine on each, with 1.5GB each. I put the DRBD network on one pair of NICs; the other pair was used for the network and heartbeat (virtual ethernet devices).
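    For reference, a minimal sketch of the DRBD resource backing that shared volume (drbd 8.x syntax); the hostnames, addresses, and volume paths are placeholders:

        # /etc/drbd.conf -- hypothetical resource for the RAC shared disk
        resource rac-shared {
          protocol C;                    # synchronous replication
          on node1 {
            device    /dev/drbd0;
            disk      /dev/vg0/shared;   # the LVM volume carved out above
            address;
            meta-disk internal;
          }
          on node2 {
            device    /dev/drbd0;
            disk      /dev/vg0/shared;
            address;
            meta-disk internal;
          }
        }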

  • VirtualBox is fairly good even on mediocre hardware. The more RAM and CPU the better, but you don't need a quad-core with 8 gigs of RAM just to run a virtualizer. Heck, you don't even need a dual core for that. Do make sure you have lots of RAM though (I have ~2 gigs, and ~2 gigs swap as well, though Linux never uses it anyway). YMMV, so don't use this info for anything mission-critical.

  • by Minwee ( 522556 ) on Sunday March 22, 2009 @07:37PM (#27292675) Homepage
    The difference between 'server class' hardware and your beige box PC is that the more expensive 'server' is a lot more reliable and has extra remote access and hardware monitoring features. That's about it. If all you want is to run virtual machines in a test environment, just get a desktop with a hefty CPU and a whole whack of RAM and you're set. A good 'gaming' machine without the video card would be fine. You don't need to spend extra for a 'server'.
  • Really for getting started you just need memory. Everything else is just a convenience in terms of performance and won't really buy you more functionality.

    I run XP as my host OS with just 2GB of physical RAM, and then do development in a 768MB Linux partition under that using VMWare Workstation. You can do the same thing for free with Xen or VMWare Player or Server.

    When 2-4GB is not enough, then either upgrade your workstation to a 64-bit OS and throw in as much memory as you can fit/afford, or bring up ano

  • by CAIMLAS ( 41445 ) on Sunday March 22, 2009 @07:53PM (#27292803) Homepage

    You say you want to go "cheap", that you don't want to spend too much money, yadda yadda... and then you go on to mention things like "cheap" shared disk and "cheap" blade servers?

    What you realistically need and what you want are two different things.

    I'd suggest a cheap quad-core AMD Phenom II system with 8GB or so of RAM. Nothing too fancy. That's assuming you're going to be running a Windows host OS or VMWare ESX; more RAM will be needed for a Windows host OS, obviously.

    The absolute lowest-end hardware you'd want to look at is an AMD Athlon 64 X2 or Intel Core (IIRC) based system. In other words, you want/need the VT support, or it'll be a purely emulated environment, and substantially slower than native (30%?), not just marginally (10%?).

    I recommend AMD hardware because it's got a better price/performance point, and because, unlike the other stuff in the "reasonable midprice" range for Intel, it's got the memory controller/north bridge integrated into the CPU (for newer-gen stuff). I'd say go Phenom or Phenom II without any hesitation.

    With a CPU like that, there's no reason you couldn't build a full system for around $450-500, sans storage. You could probably find a suitable "starter"/deal system for $300 from TigerDirect that'll do the job just fine with a little more RAM and another drive.

    For disk, just go with a SATA RAID card (LSI are good) and three 1TB disks. That's about as cheap as you'll get and still have room to work.

  • Really... it all boils down to your monthly utility fees and what you are willing to pay.

    You can pick up one-off servers being ditched by corporations for next to nothing (if you degauss the drives and certify that you will destroy them if you ever stop using them, you may get the drives as well - otherwise it will probably be sans hard drives).

    I picked up a test platform: two dual-core 2.66GHz 64-bit Xeons, 16GB RAM, 8 hot-swap U320 72GB drives, battery-backed caching RAID controller, DVD, floppy, 2 x

  • But the question is, how many VMs do you plan on running at once?

    I installed a 2-node RAC environment on VMware using my laptop, which was a 2GHz Intel Core 2 Duo with 2GB of RAM.

    So you don't need something super powerful if you don't plan on leaving them all running 24x7 and just start up the ones you are playing with at the time. A quad-core system with at least 4GB of RAM and lots of disk should be plenty.

    I would stay away from running any of your environments on external USB

  • I've been thinking of maxing out my 8-core Mac Pro with 32GB RAM and gobs of disk space, and installing XenServer or VMware ESX Server and booting via rEFIt. Another option I'm considering is picking up a new box altogether. I have my eye on a Dell server with two quad-core AMD CPUs. With a 300GB disk and 32GB RAM, it goes for just over $3k. Add in a few SAS drives and you're around $4k, but you have a highly capable system that can run more VMs than you probably need.
  • For modeling something like RAC, a dual-core anything with tons of RAM would be necessary.

    However, the devil's advocate in me is saying to not go virtual with this project unless you have some speedy-fast fiber channel SAN at your disposal. Reason being: you aren't going to see the same performance in the VMs as you would with physical hardware. Especially with the database backend that is constantly thrashing your drives depending on load.

    • Also, I forgot to add, VMs defeat the purpose of clustering if the physical hardware fails. Meaning: one physical machine down, n nodes in the cluster down.
  • by marynya ( 735459 ) on Sunday March 22, 2009 @08:15PM (#27292969)
    The main requirement is enough RAM for two operating systems plus some extra for the virtualization system. The CPU is less important. I run Windows XP Pro as a virtual system on a Linux host with VMware Workstation 6. It is a 5-year-old Athlon 3000+ box with 1 GB of RAM. I allocate 512 MB to Windows, which is about the minimum for XP. Current Linux distributions need at least 256 MB and VMware is something of a memory hog itself so 1 GB is about the minimum RAM for this setup. Windows is perhaps just a smidgen slower than it would be if running natively on the same hardware but the difference is minimal. It does not have much effect on the speed of Linux apps running simultaneously. Things bog down fast if you try to run more than one virtual system simultaneously but VMware is good at using multiple processors for this. I did some work which involved running up to 6 instances of FreeBSD simultaneously on an 8-core Xeon system with 4 GB RAM. Up to 6 it did not slow down much. Over 6 it got sludgy. Have fun! Mike
  • Buy all the memory you can afford. Then buy some more.

    Virtualization is a memory pig. Cool, fun to play with, but still a memory pig.


  • by johnthorensen ( 539527 ) on Sunday March 22, 2009 @08:16PM (#27292985)
    The biggest thing that you have to watch out for with VMWare ESXi is the hardware compatibility list. You will run into trouble with two major components: RAID controllers and network adapters.

    The network adapter solution is simple: buy the most plain-jane Intel PCI or PCIe adapter that you can find. Examples of ones that are known to work right out of the box are the Intel PWLA8391 GT (single-port PCI) and the Intel EXPI9402PT (dual-port PCIe). I own both of these and can personally confirm operation with the latest version of VMWare ESXi.

    The drive controller situation is both complicated and -- more importantly -- expensive. Overall, Adaptec seems to be the most well-supported controller hardware out there. I have tried LSI controllers, but they often don't play well with desktop boards. Unfortunately for experimenters, the built-in RAID on practically every Intel motherboard is completely unsupported in RAID mode. Obviously no enterprise environment would be using on-board RAID like that, but it would be nice to have for experimentation.

    Which brings me to my favorite storage solution for ESXi: Openfiler. Openfiler is an open-source NAS/SAN solution based on rPath Linux. It turns any supported PC into a storage appliance, and can share its storage in a plethora of ways. In the case of a virtualization effort, it has two major things going for it: it supports any storage controller that Linux supports, and it supports iSCSI and NFS.

    If, say, you do have a machine sitting there with Intel on-board RAID, you can install Openfiler there. While the hardware might not work under ESXi, it'll work great for Openfiler. Even better, Openfiler also supports Linux software RAID which can be superior when it comes to disaster recovery (no need to have a specific controller card to see your data). With this in mind, you'll be able to get Openfiler running on just about any hunk of shit box you have sitting around.

    Once you have Openfiler set up, you can take the next step in virtualization-on-the-cheap: installing ESXi on a USB flash drive. There are a number of tutorials on the web for this (just google 'ESXi USB flash install'), but the basic process amounts to extracting the drive image from the ESXi installation archive and simply writing it to flash with dd (on Linux) or physdiskwrite (on Windows). Once this is done, you can plug the flash drive into nearly *any* recent x86 hardware and it will boot ESXi. A really neat feature that you get along with this is the ability to substitute hardware with ease, and upgrade to later versions of ESXi simply by swapping the flash drive.
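    The dd step itself is as plain as it sounds. A sketch with the image and device names as placeholders - double-check the device, since this overwrites it:

        # Write the extracted ESXi installer image to the USB stick at /dev/sdX
        dd if=esxi-image.dd of=/dev/sdX bs=1M
        sync   # flush writes before unplugging the stick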

    Once you have ESXi installed, create an iSCSI volume on your Openfiler box. Then, use the VMWare management software to connect the ESXi box to your Openfiler iSCSI volume. You can then create virtual disks and machines from the actual USB-flash-booted VMWare host, all of which will be stored on your Openfiler machine. You may also want to try experimenting with NFS instead of iSCSI. There are a couple proponents of this out there that say under certain circumstances it's even faster than iSCSI. It also makes backing up your virtual machines a little simpler since an NFS share is generally easier to get to than iSCSI from most machines. Another cool aspect of the Openfiler-based configuration is that you will get access to another whiz-bang feature of VMWare called vMotion. Since the VMs and their disks are stored centrally, you can actually move the VM execution from one ESXi box to another - on the fly.

    In all, this is a great way to get your feet wet in virtualization, because you can have a pretty sophisticated setup with very basic commodity hardware. If you want to go the extra mile and get really fancy, put a dedicated gigabit NIC (or two, bonded) in each box and enable jumbo frames; the SAN will be more than fast enough for most anything you'd like to do.

    Good luck!
    • You could also check out FreeNAS as an alternative to Openfiler; it just depends on whether you want to run Linux or FreeBSD on your NAS.

      Second, the cheap crap they put on motherboards and call 'RAID' is generally nothing of the sort. It's almost always handled by the CPU itself, either via the driver or the CPU's System Management Mode, and as such is no better than using the software RAID provided by your OS. In most cases it's better to use the software RAID, as it's made to work with your OS in the most efficien

      • Good comments, but vMotion most certainly does work with ESXi. Yes you need Virtual Center, but ESX is not a prerequisite.

        In the long term, I believe VMWare will see greater uptake of ESXi vs. ESX, since it is a lot thinner and plays better in a dense environment.
      • Also, I should mention that the reason I dragged the whole Intel RAID thing into the mix is that ESX/ESXi does not support software RAID, so if you want RAID you *have* to have some sort of hardware solution (even if the processing is done on the host CPU). So, for those who are experimenting, it would be nice if the Intel RAID were supported, since it's free for the having on most recent boards.

        For the reasons detailed above in both our posts, it would be even nicer if ESX/ESXi supported software RAID. However, given i
  • I've bought a small Shuttle K45 system, adding my own Intel chip and extras in there. Cost about $450 for my setup. About to put VMWare Server on it. I'll let you know how it works out.

  • My hints (Score:5, Informative)

    by kosmosik ( 654958 ) on Sunday March 22, 2009 @08:22PM (#27293037) Homepage

    Well, you don't clearly state what you wish to accomplish nor how much money you have, so it is hard to answer. But maybe such a setup will be OK.

    Build yourself custom PCs.

    Storage server:
    - good and big enclosure which can fit a large amount of drives
    - moderate 64-bit AMD processor (really any - you will not be doing any serious processing on the storage server)
    - any amount of RAM (really, 1 or 2 gigs will be enough)
    - mobo with good SATA AHCI support (for RAID) and NIC (any - for management) onboard
    - one 1Gb PCI-* NIC with two ports
    - 6x SATA2 NCQ HDD (any size you need) dedicated to working in RAID - software-based (dmraid) RAID1+0 array configuration

    Virtualization servers (2 or more):
    - you need the virtualization servers to have the same config
    - any decent enclosure you can get
    - the fastest 64-bit AMD processor you can get, preferably tri- or quad-core, with VT extensions (it will do the processing for guests)
    - as much RAM as you can get/fit into the machine
    - mobo with VT support, one (any - for management) NIC onboard
    - one 1Gb PCI-* NIC with two ports
    - one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or disk-on-chip module

    Network switch and cables:
    - any managed 1Gb switch with VLAN and EtherChannel support, HP are quite good and not as expensive as Cisco
    - good CAT6 FTP patchcords

    General notes for hardware:
    - make sure all of the PC hardware is *well* supported by Linux since you will be using Linux :)
    - if you can, get better (quality-wise) components - good enclosures, power supplies, drives etc. - since it is a semi-server setup you don't want it to fail for some stupid reason

    Network setup:
    - make two VLANS - one for storage, other for management
    - plug onboard NICs into management VLAN
    - plug HBA NICs into storage VLAN
    - configure ports for EtherChannel and use bonding on your machines for greater throughput
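    For the bonding side on Linux, a rough sketch in the CentOS style; interface names, addresses, and the mode are placeholders (802.3ad pairs with an EtherChannel/LACP switch config):

        # /etc/modprobe.conf -- load the bonding driver for bond0
        alias bond0 bonding
        options bond0 mode=802.3ad miimon=100

        # /etc/sysconfig/network-scripts/ifcfg-bond0

        # /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for the second port)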

    Software used:
    - for storage server just use Linux
    - for virtualization servers use Citrix XenServer 5 (it is free, has nice management options, supports shared storage and live motion) or vanilla Xen on Linux; don't bother with VMWare Server, and VMware ESX and Microsoft solutions are expensive

    Storage server setup:
    - install any Linux distro you like (CentOS would not be a bad choice)
    - use 64bit version
    - use dmraid for RAID and LVM for volume management
    - share your storage via iSCSI (iSCSI Enterprise Target is in my opinion the best choice; see the config sketch after these lists)

    Virtualization servers setup:
    - install XenServer5 (or any distro with Xen - CentOS won't be bad)
    - use interface bonding
    - don't use local storage for VMs - use the storage network instead

    Well, here it is: quite a powerful and cheap virtualization solution for you.
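    To give a flavor of the storage-server side, exporting an LVM volume with iSCSI Enterprise Target takes only a couple of lines of config. A minimal sketch, assuming the dmraid/LVM setup above; the IQN and volume path are placeholders:

        # /etc/ietd.conf -- hypothetical export of one LVM volume to the
        # virtualization servers on the storage VLAN
        Target
            Lun 0 Path=/dev/vg0/vmstore,Type=blockio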

  • I not only run this at home, but at lots of small-business customers. It has a 3GHz Pentium D (dual-core, 64-bit). Get 2 large SATA drives (500GB or more) and 2GB or more of ECC memory. The starting price is $400, but by the time you get the memory and disk upgraded it is about $600, or $800 with onsite maintenance. A big benefit for me for home use was that it is *quiet*: it has a single large (and therefore quiet) fan with ducting to draw air over the CPU heatsink. Look for it in the "small business section" of Dell.


  • You should have at least 2 of these:
    - an AMD-V or Intel VT-capable motherboard and processor combo (for those interested in running Hyper-V or other enhanced VM setups) - I prefer the Intel-branded boards for my setups, never let me down...
    - at least 4GB of RAM per box
    - a cheap SATA drive, 100GB maybe?
    - 2 or more Intel Pro 1000 NICs (can get them for about 35 bucks on Newegg)

    You should also get a storage box: any box with a P4 or similar should work for this. Set up Openfiler or FreeNAS. If you are playing with VMs, shared storag
  • Don't go overkill. (Score:3, Interesting)

    by GiMP ( 10923 ) on Sunday March 22, 2009 @08:44PM (#27293205)

    I run a VPS hosting company; my job is to research, set up, and maintain a cluster/grid of servers running Xen with hundreds of guests (virtual machines). For testing, and even for deployment, we've used machines as simple as a single-core AMD 3800 with 80GB disks in RAID-1 and 1GB of RAM. These aren't the most profitable machines, as they can only support as many virtual machines as can pay for the electricity and square footage, but they work perfectly fine for up to approximately 12 guests. I do highly recommend a dual-processor or dual-core system, though.

    If you want to know how much you can stress a system: for highly dense numbers of guests, I try not to load more than 15 guests and 2GB of RAM per CPU core. Of course, if you plan to have a low density of guests (say, one guest per core), you'll need to adjust accordingly.

    I found that for my home office, where I often have pretty excessive needs such as installing multiple operating systems and performing multiple large compiles at the same time, a dual quad-core system with 16GB of RAM is overkill. Right now, I'm using a single quad-core workstation with 8GB of RAM and it works pretty well for me, and is probably still a bit more than I need.

  • by BagOBones ( 574735 ) on Sunday March 22, 2009 @09:02PM (#27293351)

    The biggest problem I see with those getting into virtualization is that they think that virtualizing things makes them magically need fewer resources.

    You can share CPU time, as most apps will not drive the CPU to 100%; having said that, it is often best to have as many cores as you can afford.

    Do not over-allocate your RAM: have at least as much physical RAM as the total you allocate to the VMs, because if you overlap you will get a huge performance hit.

    Sparse disk is a fairly new feature found only in some VM systems, so you will need lots of disk for all of the VMs; you will also probably want to run them on different LUNs or disk groups so you don't get lots of thrashing on the drives.

    If you are only running 1 or 2 VMs as a test then really all you need is to up the ram a little and make sure the host meets the minimum specs of the VM applications.

  • If you're just installing stuff to run as a hobby and not pushing any major data sets through the system, you really only need to worry about RAM and disk capacity (for storing the VM files which will house the OS and programs). Just get as much RAM as you can so you can give each VM its own normal amount of RAM (500MB-4GB) depending on which applications are in the VMs. You probably want each VM to have at least 10GB of disk space, so calculate that into your overall disk capacity requirements. Your hardware in the end wi
  • Speaking as an Oracle DBA who has done a little of this, I can tell you to get a lot of RAM. I would say that a motherboard that can be expanded to at least 8GB is the way to go. You might get by with only 4GB for a while, but you will eventually want more, given the relative cost of RAM.

    Oracle is always RAM-hungry, and VMs multiply that.

  • by boyter ( 964910 )

    I just did this myself. I ended up just shooting for cheap hardware, on the theory that if it breaks in 2 years I can just replace it. I have a quad-core Phenom with 8 gigs of RAM and two 750GB drives. Chucked VMWare on it and haven't had any issues running about 8 or so VMs on it. It also serves up media using TVersity and is a network share dump as well.

    The biggest issue I have had so far is disk performance. If you are planning on running multiple concurrent VMs, then go for as many HDDs as you c

  • The only crucial prerequisite for VMs is having enough extra RAM for the overhead of the host OS to run nicely. The host OS should use spare ram over and above that to cache disk access, which boosts VM performance.

    The second rule of thumb is don't blow money on top spec hardware.

    DDR2 RAM is cheap; load it up. This is the only real fun-killer if you don't have enough. All other advice here is non-essential; any non-dinosaur box is fine for fiddling with VMs.

    An interesting note: a discrete graphics c
  • Meh (Score:3, Informative)

    by jav1231 ( 539129 ) on Sunday March 22, 2009 @09:27PM (#27293541)
    I was running VMs back in the days of DOS. First with Taskview and later with DESQview, running 4 concurrent DOS 5 sessions on a single-core 8088! And if they slowed down I'd just push the turbo button and go from 4.77MHz to 8MHz! oooWEEEE! That's right! And I'd tote that 45lb IBM-XT all the way to the snow! And I LIKED IT!
  • by Sycraft-fu ( 314770 ) on Sunday March 22, 2009 @09:40PM (#27293621)

    The main thing you need for VMs is memory. There isn't really any good way for VMs to share memory; they each need their own. So decide what you want to give each system, and make sure you've got that much on the host, plus about 1GB for the host OS and VM software. The good news is RAM is cheap; you should be able to pick up plenty for not much money. If you get a system based on a 975X or P35 chipset, you should be able to drop 8GB of RAM in it, which ought to be more than plenty. Those are cheap and plentiful these days too. Plus, they use DDR2 RAM, which is currently the cheapest. An Intel DP35DP motherboard might be a good choice.

    As for a processor, it kinda depends on how hard the VMs will be working; CPU they can share. So if they are mostly sitting idle, like say a web server serving up static pages, you can get away with not a whole lot of CPU power. If you want them all to be working all the time, you need more. A Core 2 Duo will probably do just fine if that's what you've got or you need to keep the cost as low as possible. However, this is a case where a quad core would make more sense, so that's a good way to go if you can. Goes double if it's the same price: say you can get a 2.4GHz Core 2 Quad for the same price as a 3.0GHz Core 2 Duo. While for a desktop you'd probably want the Duo, get the Quad in this case. Might look at the Q6600 or Q8300; both are under $200 and would do a real nice job. Note that the Q8300 is going to need a P35 board; the Q6600 will work on a 975 board.

    Disks are a real big "it depends." VMs can be set to grow as they need more space, so you can in theory have a bunch of VMs sharing one small disk along with the OS. However, that can lead to performance problems. Hard drives suck at random access, and if a bunch of VMs get going on one at the same time, that's what you'll get. So ideally you'd have one VM per hard disk. In reality that's probably overkill unless you've got lots of disks lying around, but if your VMs will be doing heavy disk access, you might want to consider getting 2 drives for them, since drives are cheap. Either way, the best idea is to have them preallocate all the space they need for their virtual drives. You get better performance that way, even though it wastes drive space - but again, drives are cheap. Maybe start off with one drive for the VMs, and if you find they are getting bogged down, buy another and move them over. They are just files on the drive, so they're easy to move.

    Those are the biggest factors to think about. If you get a quad core, a good amount of RAM, and enough disk space, you should get great performance. If you need to save money, don't feel like a dual core won't work fine. Really the only thing not to cheap out on is RAM. You need to have enough; virtual memory is WAY too slow. So if you want 4 VMs with 1GB each, have not less than 5GB in the system.

    Supposing you do have plenty of cash and want to further increase performance, one other thing you can look at is NICs. VMs don't do a great job of sharing NICs presently. VMWare is actually working on that, but right now you get ideal performance with one NIC per VM. Not normally a big deal, but if your VMs do lots of traffic it can matter. So if you want, get more NICs. One of those multi-port NIC cards works just as well. This really isn't all that necessary, but you can do it if you are after the best performance.

  • by Anonymous Coward

    I'm doing this now, running a company infrastructure on Xen 3.2 and 2 non-server-class machines. These are Gigabyte and MSI Core 2 Duo motherboards running at 2.6 and 3GHz, each with 4GB of RAM, dual GigE NICs, and RAID1 drives. Nothing special.

    Application Systems are:
    - enterprise email/calendaring/IM
    - CRM
    - document management, file/print
    - project management
    - VPN
    - internal website / wiki
    - VoIP/PBX
    - Monitoring, PKI to manage VPN credentials
    - LD

  • For the best performance in virtualization, buy a Nehalem CPU: a Core i7 with 4 cores. We are using those at work and the benchmarks are amazing.

    If you don't have much money, buy the low-end quad core, the Core i7 920, throw in 8GB of memory, and you will have plenty of power and memory to run a couple of 1-vCPU VMs; the performance is pretty good. If you have money, buy a dual Core i7 setup with 8 or 16GB; then you have something that smokes. VMWare ESXi is pretty cool. I believe they have in the works a version optimize

  • You don't need a lot of cores for VM hosts. But you do need lots of RAM, since each VM can take a huge chunk.

    So, essentially, you don't need anything "special" hardware-wise to use VMs. And I recommend using Linux + VirtualBox.
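    For a taste of how lightweight that is, creating and booting a guest from the command line is just a few VBoxManage calls. A sketch only; the VM name and sizes are placeholders, and flag spellings vary a little between VirtualBox versions:

        # Register a new VM, give it 1GB of RAM and a NAT NIC, then run it headless
        VBoxManage createvm --name testvm --register
        VBoxManage modifyvm testvm --memory 1024 --nic1 nat
        VBoxManage startvm testvm --type headless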

  • If you want to run Linux processes with isolation from your physical machine, install an OpenVZ-enabled kernel plus the OpenVZ packages. It nicely isolates processes running inside each container, and there is minimal virtualisation overhead (so you don't need a bigger machine).

    Also, the container root filesystem is an ordinary directory on your host. This means you can put multiple containers into a large filesystem and they share the available space, you can back up or copy containers trivially, and you can e
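    The day-to-day container workflow is just a few vzctl commands. A sketch, with the container ID and OS template name as placeholders:

        # Create container 101 from an OS template, start it, and get a shell inside
        vzctl create 101 --ostemplate centos-5-x86_64
        vzctl start 101
        vzctl enter 101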

  • Both Nehalem and Barcelona (Phenom) are quad-core and, most importantly, support EPT and NPT respectively. This feature has a significant impact on virtualization performance.

    If you want to run 4 VMs, you'll probably want to have a fair bit of memory. 4GB would be good, 8GB would be better.

  • Memory is a bit more key than processor speed, IMO. Any recent Core 2 Duo should be more than adequate to run a VM. You'll find you can run Vista and XP very comfortably in 1GB and 512MB respectively. If you plan on running multiple VMs at the same time, you will definitely need 3-4GB of RAM.

    If you have need for each VM to have access to specific hardware like a DVDRW or whatnot, you can either connect and disconnect it as needed to each VM, or if you need it on both at the same time, you'll want a box that you c
  • by spinkham ( 56603 ) on Sunday March 22, 2009 @11:54PM (#27294395)

    Memory, and lots of it. Nothing else will help as much for running multiple VMs.
    Memory is dirt cheap; I recently bought 8 gigs of ECC RAM for ~100 USD. Of course, over 3-4 gigs you need a 64-bit OS. I use Ubuntu 64, but I know others who use Vista 64 to good effect.

    At least 2 cores; 3 or 4 doesn't hurt either. There's great value in both AMD and Intel at the moment: Intel owns the top end, but at the low end or midrange AMD tends to have the better value.

    If possible, get a separate drive for at least your main OS, and run the VMs off their own drive. More spindles == more IO. I run 6 drives in my box: one for the OS, four in RAID 5 for my home dir (for speed, capacity, and safety), and one drive bay I swap out for a spare I keep offsite that holds my backups. Linux software RAID is great for this use, and with modern multi-core processors you won't notice the overhead.
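    Setting up that kind of Linux software RAID is a one-liner with mdadm. A sketch; the device names and the 4-disk RAID-5 layout are placeholders:

        # Build a 4-disk RAID-5 array out of sdb1..sde1 for the home dir / VM store
        mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1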

    If you can only afford maxing out one thing though, make it the memory.

  • Take your minimum disk and RAM requirements for a single server and multiply by how many VMs you want; these are your minimum disk and RAM requirements for the host. There is no minimum CPU speed - slower will be slower and faster will be faster, but a test rig won't fail to work just because you have a single core.

    I'm running a small cluster of load-balanced LAMP VMs on my laptop; 128MB / 3GB each, sharing a single 1GHz CPU :P

  • I recently built a box with a quad-core AMD 64-bit processor, 4GB of RAM, and a nice NVIDIA graphics card for under $500, and I use KVM/QEMU as a hypervisor, running 64-bit Vista under 64-bit Linux. Works great.

    Just about any machine you can buy these days can do full virtual. If not, get your money back!

  • by Anonymous Coward

    The basics of virtualization come down to the number of cores, the amount of memory, and the number of spindles - though if you've read the latest SSD reviews on AnandTech, you can replace spindles with Intel X25-M or X25-E drives. In a virtual environment, random reads/writes are FAR more common than sequential read/write access, so you want either a high spindle count or fast SSD drives, depending on your budget.

    A quad-core system with 4-8 gb or more (depends all on how much memory you want to
