Reasonable Hardware For Home VM Experimentation?
cayenne8 writes "I want to experiment at home with setting up multiple VMs and installing software such as Oracle's RAC. While I'm most interested at this time in trying things with Linux and Xen, I'd also like to experiment with things such as VMWare and other applications (yes, maybe even a Windows 'box' in a VM). My main question is: what hardware should I try to get? While I have some money to spend, I don't want, or need, to be laying out serious bread on server-room-class hardware. Are there some used boxes, say on eBay, to look for? Are there any good new consumer-level options from a vendor like Dell that would be strong enough? I'd be interested in maybe getting some bare-bones boxes from NewEgg or TigerDirect even. What kind of box(es) would I need? Would a quad-core processor in one box be enough? Are there cheap blade servers out there I could get and wire up? Is there a relatively cheap shared-disk setup I could buy or put together? I'd like to have something big and strong enough to do at least a 3-node Oracle RAC, for example, running ASM and OCFS."
8 core Mac Pro (Score:5, Funny)
Re:8 core Mac Pro (Score:5, Informative)
I hope you're joking... That's waaaay too expensive.
I can run up to 4 VMs on my laptop (Lenovo T60) with 3GB of RAM and a 2GHz Core 2 Duo without any problems. Often I need to work on 3 machines (one design box plus a cluster for testing) and it all works really well together. The problem is that the disk subsystem sucks, so I suggest you invest in some RAID, but processor- and memory-wise it's enough. If you run Linux, you can run more of them, as they use less memory and are also lighter on the processor. Just stay away from the GUI, as X uses an abysmal amount of processing power in a remote VM for anything more than 800x600.
You don't really need anything very expensive - most commodity hardware nowadays runs VMWare Server easily. It's also free, so even sweeter. Just choose a processor that supports virtualisation, as that speeds everything up a lot.
Re: (Score:2)
Expensive relative to what? An arbitrary machine with the same specs? I doubt that.
Expensive relative to what a home user (even one who wants to run VMs) actually needs? Absolutely. You can get away with a machine for a few hundred dollars and still get plenty of CPU power and RAM.
Parent++; you basically don't need special hardware (Score:4, Interesting)
Basically, as long as each virtual node isn't doing any WORK, you don't need any special hardware. Even if they are doing some work, you're fine as long as it's not a lot. We have 5 Linux Xen VMs in production on a 1600MHz Celeron with 768MB of RAM; it works fine, no problems.
The CPU is almost irrelevant - you'll need whatever CPU you'd need to do all the things you're doing, plus some overhead, but it's not like it falls apart.
RAM is the only critical thing. You need at least 96MB for the host and 24MB for each additional live Xen VM, as I recall (that's probably not precisely right), but you'll naturally be swapping a ton if you run that lean. A more reasonable VM has 128MB-256MB of RAM itself, so you need that much for each active VM. But again, that's only for each one running at a time.
Or if you are going to swap a bunch, get better disks :)
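To make the RAM sizing concrete: it's just one line in a Xen guest config. A minimal sketch of a PV guest config file (the name, paths, and sizes here are only examples):

    # /etc/xen/testvm1.cfg - minimal Xen PV guest (example values)
    name       = "testvm1"
    memory     = 256                              # MB for this guest
    vcpus      = 1
    disk       = ['phy:/dev/vg0/testvm1,xvda,w']  # an LVM volume on the host
    vif        = ['bridge=xenbr0']
    bootloader = '/usr/bin/pygrub'

Bump the memory line up or down per guest; nothing else has to change.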
In any case, I definitely wouldn't climb the price curve of equipment to do this; don't buy anything on the bleeding edge. Look at Ars Technica and just max the RAM on a value box, or maybe upgrade the motherboard to something that takes more RAM.
Used commodity computer equipment is usually not cost-effective compared to the cheap end of what's still available new. But pay attention to the price point where it's cheaper to buy (and power, while they're on) TWO value boxes than to pump the one box you've been considering up even higher.
Re:8 core Mac Pro (Score:4, Interesting)
1 is a web server (linux) for dev testing.
1 is a photo album server (winxp) for sharing my pics with friends and family.
1 is a VM (winxp) I dedicate for downloading stuff off the net (BT, IRC).
1 is a VM for browsing sites and connecting to work. This one erases everything when I shut it down, in case I get any crap-ware from browsing.
The only thing preventing me from running more would be that my laptop only handles 3GB of memory and 4 VMs plus my host applications get close to reaching that limit. And swapping sucks.
Re:8 core Mac Pro (Score:5, Funny)
Wow... Your computing experience sounds like a real pain in the ass.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
I sold my Dual Processor G5 for $1800 after owning it for 2 years. I paid $2000 for it. That's a pretty low cost of ownership, if you ask me.
Re:8 core Mac Pro (Score:4, Interesting)
How about... (Score:5, Funny)
...something like this? [xkcd.com]
Great question! (Score:5, Funny)
I ran into this same situation and found the best cost/performance setup was a Beowulf cluster of netbooks.
You get the cumulative power of those Atom processors and have a huge memory pool to run the VMs within.
Re: (Score:2)
Just about any Dual core and up. (Score:5, Informative)
Re: (Score:3, Informative)
Don't get an abit motherboard, or at least don't get their Intel P35-based boards. I can't speak to the rest of their stuff, but putting my Abit IP35-based computer to sleep and waking it back up actually *disables* the VM extensions, either freezing upon waking if any were running, or ensuring none start until I power off (reset doesn't cut it).
Other than that, I recommend a Core 2 Quad with lots and lots of RAM, and an array of 1TB SATA drives to RAID.
Also of note: Windows 7 doesn't let you use a real har
Re: (Score:2, Informative)
To add some support for this, for basic VM needs:
I have a Core 2 Duo (T9300 on a Lenovo Thinkpad) laptop that runs 3 instances of linux at the same time.
The host is a 64 bit linux, and the VMs are 32 and 64 bit linux guests.
I've done basic text editing and messing around in 64-guest, while playing music and watching youtube and IMing and w/e on 64-host and compiling linux on 32-guest. Didn't break a sweat, and used about 1.2GB of Ram total (I have 4 installed). So for basic tinkering any new-ish machine sho
Re: (Score:2, Troll)
I run about 3-4 different VM's on a dual core with 4 gigs of ram on any given day.
My dual core 2006 Gateway laptop with 2G ram did this [youtube.com] - almost every version of Windows running at once on top of Ubuntu 8.04 with eye candy. It's not a 64-bit machine, either, so I've known for a while that fairly low-end computers can run virtualization software fairly well.
need special hardware? (Score:5, Informative)
Re: (Score:3, Informative)
Yep, you don't need a lot of power or server-class hardware if this is just a test setup.
Now, everything is entirely dependent on your setup, but your biggest factor is going to be RAM. Unless you are running SQL or something else CPU-intensive, RAM will be the limiting factor on how many VMs you can run in most cases.
The most cost-effective solution, I think, would be to build some whitebox AMD 64 X2 systems from Newegg and load them down with 8GB of RAM...should run you about $350-400 each. One of those systems c
Re: (Score:2)
Some time ago, you could run RAC on twin-tailed firewire... Now I can't find the article.
Re: (Score:2)
One here [oracle.com] and another here [puschitz.com]. Both are for older versions (3&4) of RHEL but the same principles apply.
As someone who works with Oracle RAC and RHEL regularly, I'd recommend skipping the shared physical disk completely and using NFS instead. You could (and we do in testing) run the NFS server virtualised as well.
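If you go that route, note that Oracle is picky about NFS mount options for datafiles. Something like this fstab entry is roughly the shape of it (the server name and paths are examples; check Oracle's docs for the exact options they certify for your version):

    # /etc/fstab - NFS mount for RAC datafiles (example host/paths)
    nfsserver:/export/racdata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,actimeo=0  0 0

The actimeo=0 bit matters: RAC nodes must not cache attributes on shared datafiles.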
Dell XPS Studio (Score:5, Interesting)
Dell currently have the Studio XPS (2.66Ghz Core i7, 3G RAM, 500G HDD) going for US$800 - for a basic home virtualisation server, it's hard to go past, especially if you spend another US$80 or so to bump the RAM up to 9GB. I can't imagine you could build it yourself for a whole lot less (depending on how you value your time, of course).
(Damn, sometimes I wish I lived in the US. Stuff is just so bloody cheap there.)
Re: (Score:2)
It's not as if you even have to set jumpers now - do you really need more than an hour?
As for really cheap, there's bound to be somewhere near you that does direct imports from Asia. Iwill buries Dell in quality every time and at the more expensive end Supermicro boards come directly from there anyway even if it is a US company.
Re: (Score:3, Insightful)
What the original poster said was that it would be difficult to save money by building it yourself, and he's right.
An i7 920 CPU is between $250-300.
An i7 x58 MB is $200-300.
A comparable video card is $100.
Six Gigs of DDR3 RAM is $100-200.
500 Gig SATA HD plus DVD-RW is another $100
Chassis is $75+.
Power Supply is $50+
Keyboard/mouse are another $15-50.
If you want the Vista OS license, that's another $100-200, depending on version/source.
I'd put the DIY cost at $225 (CPU/cooler) + $200 (MB) + $100 (Video) + $15
Re: (Score:2)
Re: (Score:2)
You can get a pretty beefy Dell PowerEdge server with a quad core processor for less than $800.
It won't be as fast as the Core i7 in the XPS studio, however, especially for virtualisation. It's also not going to have the same RAM capacity (4 slots vs 6).
(This equation may change in the next week or so when the i7-based Xeons are "officially" released.)
Re: (Score:2)
Re: (Score:2)
Try and look for an offer, and D
Memory (Score:5, Informative)
Re: (Score:2, Informative)
Mod up the parent. Tons of RAM is the most critical component, and a 64-bit host should be mandatory.
Second, a multi-core processor (don't care which, pick your poison) makes things feel snappier.
Lastly, multiple monitors are really nice. Find a card with multiple digital outputs; a couple of decent LCDs make for a crisp, fast, and pleasant display. Spend a little jack here - I picked up a set of Viewsonic 19" widescreens recently for about $320 (for two). Go for high-res, it's worth it.
For reference, I'm r
Depends how many VMs you're running. (Score:5, Informative)
I personally use qemu-kvm and I'm quite happy with it. That's running on a dual-core machine with 2GB of RAM (probably not enough RAM, though!).
For the KVM stuff you need a chip that supports Intel's VT or AMD's AMD-V, so your processor is the most important aspect. A quad core would probably be suitable too if you can buy that.
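An easy way to check whether a given box has the extensions (on Linux):

    # non-zero output means the CPU advertises VT (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo

Keep in mind some BIOSes ship with the feature disabled even when the CPU supports it.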
For just experimentation it's a fantastic alternative to VMWare (I personally got sick of having to recompile the module every time my kernel got updated).
On my box I've had about 6 CentOS VMs running at once, but frankly they were not doing much most of the time. Ultimately it's going to boil down to how much load you inflict on the VMs underneath; my experience has not been very load-heavy, so I could probably stretch to 9 VMs on my hardware, which is probably on the lower end of the consumer range these days.
The most important bits are your CPU and RAM. If you're after something low-spec you can do a dual core with 2GB of RAM, but you could easily beef that up to a quad core with 8GB of RAM to give you something you can throw more at.
Oh, and Qemu without KVM is painfully slow - I wouldn't suggest it at all.
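For the curious, getting a guest going under qemu-kvm is a one-liner. A sketch (the image name and sizes are just examples; depending on your distro the binary may be called qemu-kvm or kvm instead):

    # boot a 512MB guest from a disk image with KVM acceleration
    qemu-system-x86_64 -enable-kvm -m 512 -smp 1 -hda centos.img -net nic -net user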
Re: (Score:3, Informative)
I'm also using QEMU+KVM. I built an inexpensive server with parts from Newegg. Total cost was around $500, about 9 months ago.
I second the knock against VMWare (free server edition). I used it for a bit and got sick of it not working after kernel upgrades, or not working because it complained about its config being incomplete/invalid, etc. Xen, VirtualBox, or QEMU are all better products, IMO.
I would say that the number of guest VM's you can run concurrently really only depends on your RAM, where performanc
Re: (Score:2)
The most important bits are your CPU and RAM.
I do a lot of my work in VMs and can tell you that the most important things, in order, are:
1) RAM
2) RAM
3) Disk speed
You need a 64-bit CPU and you want it to be dual-core at least. But other than that it's basically a minor issue. All CPUs are 64-bit now except netbook Atom. VMs generally run say 10% slower in terms of CPU speed, if that, so just choose a CPU slightly beefier than what you'd need if you weren't using VMs.
Why is RAM so important? Because IO is a bottleneck normally and with VMs you are ju
Re: (Score:2, Informative)
I'm also running qemu-kvm on a 2GB Turion laptop (hp dv2412ca - $799 1.5 years ago). I'm very very happy with it. Here's my setup:
- debian with 2.6.28 kernel (native)
- windows 2000 server (vm with 512MB)
- windows xp pro (32 bit) (vm with 384MB)
- windows xp pro (64 bit) (vm with 512MB)
Performance is awesome even though the laptop's CPU only runs at 1.7GHz. Also I was very happy to have them all appearing as separate hosts on my home network - all my other computers (including the networked printer) see
Recommend a Quad Core CPU (Score:3, Interesting)
RAM is dirt cheap right now on Newegg as well. I have 8GB of Corsair DDR2 RAM I got for 50 dollars after rebates. Without a GUI you can get by with 384-512MB of RAM per VM, but otherwise I'd go with at least 1GB or more.
The nicer part of VMware Workstation is that it now supports DirectX 9.0c (but only with shader model 2; they're still working on 3). Expect a 10 percent or so performance drop for gaming, though, depending on how many resources you allocate.
Your needs look a bit bigger than mine (mostly trashing VMs and running test software before doing something crazy to the actual box). A bigger CPU such as a Xeon might be more to your liking, since you can have 2 of them for a total of 8 cores (leading to lots of VMs).
Re: (Score:2)
Re:Recommend a Quad Core CPU (Score:5, Informative)
Re: (Score:3, Interesting)
Not necessarily. Look up "relaxed co-scheduling." It's been in there since around 2006. (Another reason why VMware outperforms the others.)
Re: (Score:2)
Re: (Score:2)
I'm also using a Q6600, a Vista x64 host, 4GB (soon to be 8) of RAM, and a RAID 10 array. Quad core is great for VMWare and the Q6600 is an inexpensive workhorse. Go with quad processors for VMs (Xeon for your workload); this is one case where the extra cores will be of use.
What I use. (Score:4, Informative)
The machine runs Ubuntu Server with VMWare Server 2. I can easily run several Debian and Ubuntu VPS nodes on it under light load, and I use it for experimentation with virtual LANs and dedicated-purpose VMs. I periodically power up a Windows Server 2003 VM, which uses a lot more resources, but it's still fine for testing purposes.
Me too (sort of) (Score:2)
Shared disk (Score:2)
Is there a relatively cheap shared-disk setup I could buy or put together? I'd like to have something big and strong enough to do at least a 3-node Oracle RAC, for example, running ASM and OCFS."
Er, if you're running VMs, you inherently have "cheap shared disk" - the disk in the host that any of the VMs can access. :)
Lots of deals on eBay (Score:5, Informative)
You can find lots of used servers on eBay that you can mess around with. Sun's v20z servers are pretty cheap and have a decent amount of power.
A lot of the stuff I've run across is rack mounted and keep in mind that rack mounted servers are loud [rackserverdeals.com] in most cases. So it may not be the best thing to play around with in your home or office.
You don't really need any special CPU to mess around with virtualization; you won't get "full" virtualization, but I don't think that will stop you. For more info, check out this page [sandervanvugt.nl].
I'm currently running a number of VMs on my desktop using Sun's VirtualBox (xVM), or whatever they're calling it now. Even within some of the Solaris VMs I'm running Solaris containers, so I'm doing virtualization upon virtualization, and my processor doesn't have virtualization technology support.
If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. Using Newegg's power search [newegg.com] there is an option to filter by CPUs that support virtualization technology.
If your primary focus is Oracle RAC, you may want to look at Oracle VM [oracle.com], which is Xen-based.
Re: (Score:2)
If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. Using Newegg's power search [newegg.com] there is an option to filter by CPUs that support virtualization technology.
You can do that with "desktop" class CPUs, too - just fine. The only substantial difference between the Opteron and the Phenom 1 and 2 is the ability to have multiple CPUs; a Phenom, or even an Athlon 64 X2, or I believe an Intel Core or Intel Core Duo, will do the job just fine. They all (IIRC) have VT extensions.
Used or scrapped server-class machine (Score:5, Interesting)
You can run virtual instances on practically anything. I use VMWare Workstation on an older AMD Athlon 3200+ (the machine on which I'm typing this) and get acceptable performance if I only have one instance booted at a time. You're not going to be playing video-intensive games on the instance, but it'll work fine
I maintain a few websites (my blog, a gallery, couple other things) on an old server class machine in the garage. Companies often scrap servers after the 3 year warranty expires, or they've finished depreciating (depending on individual business rules) and they're often fast enough to make reasonable virtual servers. Often you can pick them up at a scrap sale or surplus store, or, if your company has an IT department, get permission to snag a machine that's about to be scrapped.
I recently brought up VMWare's free bare-metal hypervisor ESXi and was surprised at how easy it was to set up and create instances. VMWare has a free Physical-to-Virtual converter you might want to experiment with. It works great with Windows, but is kinda hit-and-miss with Linux.
ESX Whiteboxing info (Score:2, Informative)
Not that much (Score:3, Interesting)
You can do it "well" on a dual core with 4GB of ram. Even less, but with todays prices you can get a system for a couple hundred if you watch for sales. RAM is you biggest killer that you will notice. Then again, with quad cores with VM assistance going for under $200CDN, thats relatively cheap. If you're worried about HD performance a couple 500GB drives striped will give you over 100MB/s of read speed a relatively small investment.
some cpu and lots of RAM (Score:2, Interesting)
We have about 4 machines running ESX, each with 2 quad-core CPUs and 64GB of RAM per ESX node, hosting about 100 machines (many Linux and Windows)... and we still have about 50% of resources free.
So grab one quad-core machine with lots of RAM (for Oracle RAC+ASM+DB you will need at least about 4GB for the 3 RAC nodes; the more the better).
As this is for testing, I would buy a plain quad-core PC with 6 to 8GB of RAM and install a 64-bit Linux with Xen or VMware ESXi.
If you have more money, you can buy more RAM or even CPU, bu
Do your homework before purchasing White Box HW (Score:3, Informative)
Re: (Score:2)
Wow. That's the first time I've seen a /. comment that completely filled my screen. Thank you.
Also tl;dr.
Re:Do your homework before purchasing White Box HW (Score:5, Funny)
and if you're not careful, VMWare apparently makes the Enter key inoperable :)
Re: (Score:3, Informative)
Re: (Score:2)
Regarding replication: VMotion, Storage VMotion, etc. all require VirtualCenter, which (runs only on Windows and) is not free.
That being said, ESX is great. I paid for it before it was free and still pay for support.
Didn't know there was a free PtoV either.. I will have to recheck that.
Much better solution (Score:5, Informative)
Amazon EC2 is what I use for stuff like this; both Windows and Linux boxes are available at the push of a button. I also use it a lot for development: fire up a machine, load, and go.
Re: (Score:3, Informative)
Well, yes, it will not teach you how to plug an ethernet interface into a switch. However, the poster in this case said he wants to run a VM environment but money is limited. In this case, if he wants to play with big boxes, configuration testing, etc., there is no better option available to him than EC2.
Re: (Score:2, Insightful)
Actually, if he's looking to play around with Oracle RAC, he's looking at virtualization technology to do that without having to buy multiple servers. In that case, Amazon EC2 will be a good idea.
If he's more interested in playing with Xen than RAC, then no.
What does your budget allow? (Score:3, Informative)
Let's say you're a one-man lab, incorporating all the SA, developer, and midware functions into your 'play' with this environment. How much time will each environment spend under heavy load?
If your intent is to deploy RAC in a multitude of scenarios, in short order, with a minimum of intervention, you may be able to get away with $1500 to $2500 worth of NewEgg parts (think high throughput - RAID, max RAM, short access times, etc.) and the virtualization technology of your choice. Personally, I find VirtualBox capable of everything I need as far as virtualization and deployment goes; however, you need to be able to leverage 'fencing', which likely puts you into VMWare territory.
Fortunately, VMWare Server is 'free', and CentOS and OpenSuSE support some of the more advanced features of HA on Linux. Then again, if we're looking at resources as a major factor, then Redhat and Novell might be worth looking at, as they both offer 60 to 90-day evaluation licenses for their Enterprise Linux products, which may offer a prettier and more 'honest' (per the documentation and common expectations) implementation of their respective HA features than the freely-available, and in some cases, in-flux versions of the same software.
As far as RAC goes, take a look at the requirements for RAC per Oracle's installation guidelines, and size/spec from there. I believe you can get away with 16GB total, if you have the capability to size the VMs' memory access, or otherwise configure the amount of addressable memory, or put up with (or hack) Oracle's RAC installation pre-flight. There is also valuable documentation available on your chosen OS vendor's site, which may even be Oracle, who knows... [oracle.com]
You may be hell-bent on performance, however, and you may be looking for the ultimate grasp of technological perfection, as it exists at Sun Mar 22 17:29:59 EDT 2009. In this case, you may want to look at Xen, which is available on Solaris as their 'xVM' technology, as well as on various Linuxes and BSDs.
On the other hand, you may be a Mac guy, with a decked-out Octo-core Xeon Mac Pro, where you have the option of Parallels and Virtual PC and something else, in addition to Sun's VirtualBox mentioned above.
Ultimately, things to keep in mind may be shared disk requirements, fencing options, and VM disk and memory access.
YMMV
Re: (Score:2)
The only difficulty with putting RAC on "a" machine is that the configuration of the networking tends to be the major PITA.
All that will get sidestepped by going virtual. If you don't expect to be maintaining the hardware, it is not likely to matter.
Re: (Score:2)
That's why I use VirtualBox. The ability to set up bridged and NAT'ed networks is easy and reliable.
With Xen or the like, there's quite a bit more that needs to be accounted for, and now that I think about it, multiple physical NICs may not be a bad idea for this one-box lab.
Two machines (Score:5, Informative)
You can do Oracle with just a single machine running multiple VMs; however, if you really want to get serious, you should consider building two physical machines. On each machine, create a VM or two with 1-2GB of RAM. For the shared disk, use DRBD volumes between the two.
My test RAC cluster has two AMD X2 64-bit systems with two gigabit NICs each. CompUSA has a similar machine for about $212 on sale this week. Newegg prices are similar. You'll need to add a couple of extra gigabit NICs and some more storage. It should still cost under $400 each.
On each physical system I used CentOS 5.2 with Xen. I created LVMs on the physical machines as the root volumes, and also carved out a separate volume to back the shared volume. Then I carved out a Xen virtual machine on each host with 1.5GB of RAM. I put the DRBD network on one pair of NICs. The other pair was used for the network and heartbeat (virtual ethernet devices).
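The DRBD side is just a resource definition shared by the two hosts. A rough sketch of what I mean (hostnames, devices, and addresses are examples; you only want allow-two-primaries if both nodes mount the volume concurrently through a cluster filesystem like OCFS2):

    # /etc/drbd.conf - shared RAC volume (example values)
    resource racdisk {
      protocol C;                      # synchronous replication
      net { allow-two-primaries; }     # both nodes active (cluster FS required!)
      on nodea {
        device    /dev/drbd0;
        disk      /dev/vg0/racdisk;    # the LVM volume carved out above
        address   192.168.10.1:7789;   # on the dedicated DRBD NIC pair
        meta-disk internal;
      }
      on nodeb {
        device    /dev/drbd0;
        disk      /dev/vg0/racdisk;
        address   192.168.10.2:7789;
        meta-disk internal;
      }
    }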
VirtualBox (Score:2)
VirtualBox is fairly good even on mediocre hardware. The more RAM and CPU the better, but you don't need a quad-core with 8 gigs of RAM just to run a virtualizer. Heck, you don't even need a dual core for that. Do make sure you have lots of RAM though (I have ~2 gigs, and ~2 gigs swap as well, though Linux never uses it anyway). YMMV, so don't use this info for anything mission-critical.
Don't bother with 'server' hardware (Score:4, Insightful)
All you need is memory (Score:2)
Really for getting started you just need memory. Everything else is just a convenience in terms of performance and won't really buy you more functionality.
I run XP as my host OS with just 2GB of physical RAM, and then do development in a 768MB Linux partition under that using VMWare Workstation. You can do the same thing for free with Xen or VMWare Player or Server.
When 2-4GB is not enough, then either upgrade your workstation to a 64-bit OS and throw in as much memory as you can fit/afford, or bring up ano
Cheese on a bread budget (Score:3, Interesting)
You say you want to go "cheap", that you don't want to spend too much money, yadda yadda... and then you go on to mention things like "cheap" shared disk and "cheap" blade servers?
What you realistically need and want are two different things.
I'd suggest a cheap quad-core AMD Phenom II system with 8GB or so of RAM. Nothing too fancy. That assumes you're going to be running a Windows host OS or VMWare ESX. More RAM will be needed for a Windows host OS, obviously.
Absolute lowest-end hardware you'd want to look at getting is an AMD Athlon 64 x2 or Intel Core (IIRC) based system. In other words, you want/need the VT support, or it'll be purely an emulated environment, and substantially slower than native (30%?), not just marginally (10%?).
I recommend AMD hardware because it's got a better price/performance point, and because unlike the other stuff in the "reasonable midprice" range for Intel, it's got the memory controller/north bridge integrated into the CPU (for newer gen stuff). I'd say go Phenom or Phenom II without any hesitation.
With a CPU like this [newegg.com], there's no reason you couldn't build a full system for around $450-500, sans storage. You could probably find a suitable "starter"/deal system for $300 from TigerDirect that'll do the job just fine with a little more RAM and another drive.
For disk, just go with a SATA RAID card (LSI are good) and three 1TB disks. That's about as cheap as you'll get and still have room to work.
It depends on how much you want to spend on power. (Score:2)
Really.. it all boils down to your monthly utility fees and what you are willing to pay...
You can pick up one-off servers being ditched by corporations for next to nothing. If you degauss the drives and certify that you will destroy them if you ever stop using them, you may get the drives as well; otherwise, the machine will probably come sans hard drives.
I picked up a test platform, Two Dual Core 2.66Ghz 64 Bit Xeons, 16GB RAM, 8 hot swap U320 72GB Drives, battery backed raid caching controller, dvd, floppy, 2 x
I've setup Oracle RAC on VMware (Score:2)
But the question is, how many VM's do you plan on running at once?
I installed a 2-node RAC environment on VMware using my laptop, which was a 2GHz Intel Core 2 Duo with 2GB of RAM. (Instructions here [blogspot.com])
So you don't need something super powerful if you don't plan on leaving them all running 24x7; just start up the ones you are playing with at the time. A quad-core system with at least 4GB of RAM and lots of disk should be plenty.
I would stay away from running any of your environments on external USB
I'm thinking of doing the same thing myself (Score:2)
hmmm (Score:2)
For modeling something like RAC, a dual-core anything with tons of RAM would be necessary.
However, the devil's advocate in me is saying to not go virtual with this project unless you have some speedy-fast fiber channel SAN at your disposal. Reason being: you aren't going to see the same performance in the VMs as you would with physical hardware. Especially with the database backend that is constantly thrashing your drives depending on load.
Re: (Score:2)
You just need enough RAM (Score:3, Informative)
Memory, memory and more memory (Score:2)
Buy all the memory you can afford. Then buy some more.
Virtualization is a memory pig. Cool, fun to play with, but still a memory pig.
...laura
Openfiler + USB Flash is a great way to do ESXi. (Score:3, Informative)
The network adapter solution is simple: buy the most plain-jane Intel PCI or PCIe adapter that you can find. Examples of ones that are known to work right out of the box are the Intel PWLA8391 GT [newegg.com] (single-port PCI) and the Intel EXPI9402PT [newegg.com] (dual-port PCIe). I own both of these and can personally confirm operation with the latest version of VMWare ESXi.
The drive controller situation is both complicated and -- more importantly -- expensive. Overall, Adaptec seems to be the most well-supported controller hardware out there. I have tried LSI controllers, but they often don't play well with desktop boards. Unfortunately for experimenters, the built-in RAID on practically every Intel motherboard is completely unsupported in RAID mode. Obviously no enterprise environment would be using on-board RAID like that, but it would be nice to have for experimentation.
Which brings me to my favorite storage solution for ESXi: Openfiler [openfiler.com]. Openfiler is an open-source NAS/SAN solution based on rPath Linux. It turns any supported PC into a storage appliance, and can share its storage in a plethora of ways. In the case of a virtualization effort, it has two major things going for it: it supports any storage controller that Linux supports, and it supports iSCSI and NFS.
If, say, you do have a machine sitting there with Intel on-board RAID, you can install Openfiler there. While the hardware might not work under ESXi, it'll work great for Openfiler. Even better, Openfiler also supports Linux software RAID which can be superior when it comes to disaster recovery (no need to have a specific controller card to see your data). With this in mind, you'll be able to get Openfiler running on just about any hunk of shit box you have sitting around.
Once you have Openfiler set up, you can take the next step in virtualization-on-the-cheap: installing ESXi on a USB flash drive. There are a number of tutorials on the web for this (just google 'ESXi USB flash install'), but the basic process amounts to extracting the drive image from the ESXi installation archive and simply writing it to flash with dd (on Linux) or physdiskwrite (on Windows). Once this is done, you can plug the flash drive into nearly *any* recent x86 hardware and it will boot ESXi. A really neat feature that you get along with this is the ability to substitute hardware with ease, and upgrade to later versions of ESXi simply by swapping the flash drive.
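The dd step itself looks something like this (the image filename is an example, and /dev/sdX stands in for your flash drive - triple-check which device it is before writing):

    # write the extracted ESXi image to the USB stick, then flush
    dd if=esxi-image.dd of=/dev/sdX bs=1M
    sync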
Once you have ESXi installed, create an iSCSI volume on your Openfiler box. Then, use the VMWare management software to connect the ESXi box to your Openfiler iSCSI volume. You can then create virtual disks and machines from the actual USB-flash-booted VMWare host, all of which will be stored on your Openfiler machine. You may also want to try experimenting with NFS instead of iSCSI. There are a couple proponents of this out there that say under certain circumstances it's even faster than iSCSI. It also makes backing up your virtual machines a little simpler since an NFS share is generally easier to get to than iSCSI from most machines. Another cool aspect of the Openfiler-based configuration is that you will get access to another whiz-bang feature of VMWare called vMotion. Since the VMs and their disks are stored centrally, you can actually move the VM execution from one ESXi box to another - on the fly.
In all, this is a great way to get your feet wet in virtualization because you can have a pretty sophisticated setup with very basic commodity hardware. If you want to go the extra mile and get really fancy, put a dedicated gigabit NIC (or two, bonded) in each box and enable jumbo frames; the SAN will be more than fast enough for most anything you'd like to do.
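Enabling jumbo frames on the dedicated NIC is a one-liner on Linux (the interface name is an example; the switch and everything else on the storage network have to support them too):

    # raise the MTU on the SAN-facing interface
    ifconfig eth1 mtu 9000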
Good luck!
Re:Openfiler + USB Flash is a great way to do ESXi (Score:2)
You could also checkout FreeNAS as an alternative to Openfiler, just depends on if you want to run Linux or FreeBSD on your NAS.
Second, the cheap crap they put on motherboards and call 'RAID' is generally nothing of the sort. It's almost always handled by the CPU itself, either via the driver or the System Management Mode of the CPU, and as such is no better than using the software RAID provided by your OS. In most cases it's better to use the software RAID, as it's made to work with your OS in the most efficien
Re: (Score:2)
In the long term, I believe that VMWare will see greater uptake of ESXi vs. ESX, since it is a lot thinner and plays better in a dense environment.
Re: (Score:2)
For the reasons detailed above in both our posts, it would be even nicer if ESX/ESXi supported software RAID. However, given i
Generic server from Shuttle (Score:2)
I've bought a small Shuttle K45 system, adding my own Intel chip and extras in there. Cost about $450 for my setup. About to put VMWare Server on it. I'll let you know how it works out.
My hints (Score:5, Informative)
Well, you don't clearly state what you wish to accomplish nor how much money you have, so it is hard to answer. But maybe a setup like this will be OK.
Build yourself custom PCs.
Storage server:
- good and big enclosure which can fit a large amount of drives
- moderate 64-bit AMD processor (really any - you will not be doing any serious processing on the storage server)
- any amount of RAM (really, 1 or 2 gigs will be enough)
- mobo with good SATA AHCI support (for RAID) and NIC (any - for management) onboard
- one 1Gb PCI-* NIC with two ports
- 6x SATA2 NCQ HDD (any size you need) dedicated for working in RAID - software based (dmraid) RAID1+0 array configuration
Virtualization servers (2 or more):
- you need the virtualization servers to have the same config
- any decent enclosure you can get
- the fastest 64bit AMD processor you can get preferably tri or quad core (it will do the processing for guests) with VT extensions
- as much RAM as you can get/fit into the machine
- mobo with VT support, one (any - for management) NIC onboard
- one 1Gb PCI-* NIC with two ports
- one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or disk-on-chip module
Network switch and cables:
- any managed 1Gb switch with VLAN and EtherChannel support, HP are quite good and not as expensive as Cisco
- good CAT6 FTP patchcords
General notes for hardware:
- make sure all of the PC hardware is *well* supported by Linux, since you will be using Linux :)
- get better (quality-wise) components if you can - good enclosures, power supplies, drives, etc. Since it is a semi-server setup, you don't want it to fail for some stupid reason.
Network setup:
- make two VLANS - one for storage, other for management
- plug onboard NICs into management VLAN
- plug HBA NICs into storage VLAN
- configure ports for EtherChannel and use bonding on your machines for greater throughput
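On the Linux side, the bonding setup amounts to something like this (interface names, address, and mode are examples; 802.3ad is what matches an EtherChannel/LACP port group on the switch):

    # load the bonding driver in LACP mode and enslave the two storage NICs
    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 192.168.20.11 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2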
Software used:
- for storage server just use Linux
- for virtualization servers use Citrix XenServer 5 (it is free, has nice management options, supports shared storage and live motion) or vanilla Xen on Linux; don't bother with VMWare Server, and VMware ESX and the Microsoft solutions are expensive
Storage server setup:
- install any Linux distro you like (CentOS would not be a bad choice)
- use 64bit version
- use dmraid for RAID and LVM for volume management
- share your storage via iSCSI (iSCSI Enterprise Target is, in my opinion, the best choice; see the sketch below)
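The IET config for exporting a volume is only a couple of lines. A sketch (the IQN and path are examples):

    # /etc/ietd.conf - export an LVM volume as an iSCSI LUN
    Target iqn.2009-03.lab.storage:rac.shared
        Lun 0 Path=/dev/vg0/racshared,Type=blockio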
Virtualization servers setup:
- install XenServer5 (or any distro with Xen - CentOS won't be bad)
- use interface bonding
- don't use local storage for VMs - use the storage network instead
Well, here it is: quite a powerful and cheap virtualization solution for you.
Dell 440SC (Score:2)
I not only run this at home, but at lots of small business customers. It has a 3GHz Pentium D (dual core, 64-bit). Get 2 large SATA drives (500GB or more) and 2GB or more of ECC memory. The starting price is $400, but by the time you get the memory and disk upgraded, it is about $600, or $800 with onsite maintenance. A big benefit for me for home use was that it is *quiet*. It has a single large (and therefore quiet) fan with ducting to draw air over the CPU heatsink. Look for it in the "small business section" of Dell.
Dra
Good White Box Setup for VM Test Lab (Score:2)
Don't go overkill. (Score:3, Interesting)
I run a VPS hosting company; my job is to research, set up, and maintain a cluster/grid of servers running Xen with hundreds of guests (virtual machines). For testing and even for deployment, we've used machines as simple as a single-core AMD 3800 with 80GB disks in RAID-1 and 1GB of RAM. These aren't the most profitable machines, as they can only support as many virtual machines as can pay for the electricity and square footage, but they work perfectly fine for up to approximately 12 guests. I do highly recommend a dual-processor or dual-core system, though.
If you want to know how much you can stress a system, for highly-dense numbers of guests, I try not to load more than 15 guests and 2GB of RAM per CPU core. Of course, if you plan to have a low-density of guests (say one guest per core), you'll need to adjust accordingly.
I found that for my home office, where I often have pretty excessive needs such as installing multiple operating systems and performing multiple large compiles at the same time, a dual quad-core system with 16GB of RAM is overkill. Right now, I'm using a single quad-core workstation with 8GB of RAM and it works pretty well for me, and is probably still a bit more than I need.
You can share CPU time but you need disk and ram (Score:3, Informative)
The biggest problem I see with those getting into virtualization is that they think that virtualizing things makes them magically need fewer resources.
You can share CPU time, since most apps will not drive the CPU to 100%; that said, it is often best to have as many cores as you can afford.
Do not over-allocate your RAM: ideally, have as much physical RAM as the total you allocate to the VMs. If you overcommit, you will take a huge performance hit.
Sparse disk is a fairly new feature found only in some VM systems, so you will need lots of disk for all of the VMs. You will also probably want to run them on different LUNs or disk groups so you don't get lots of thrashing on the drives.
If you are only running 1 or 2 VMs as a test then really all you need is to up the ram a little and make sure the host meets the minimum specs of the VM applications.
If you are only... (Score:2)
Oracle and VM's (Score:2)
Speaking as an Oracle DBA who has done a little of this, I can tell you to get a lot of RAM. I would say that a motherboard that can be expanded to at least 8GB is the way to go. You might get by with only 4GB for a while, but you will eventually want more, given the relative cost of RAM.
Oracle is always RAM hungry, and VM's multiply that.
Having just done this (Score:2, Informative)
I just did this myself. I ended up just shooting for cheap hardware on the theory that if it breaks in 2 years I can just replace it. I have a quad-core Phenom with 8GB of RAM and two 750GB drives. I chucked VMWare on it and haven't had any issues running about 8 or so VMs on it. It also serves up media using TVersity and is a network share dump as well.
The biggest issue I have had so far is disk drive performance. If you are planning on running multiple concurrent VMs, then go for as many HDDs as you c
Formula for best results. (Score:2)
The second rule of thumb is don't blow money on top spec hardware.
DDR2 RAM is cheap, load it up. This is the only real fun killer if you don't have enough, all other advice here is non-essential, any non-dinosaur box is fine for fiddling with VMs.
An interesting note a discrete graphics c
Meh (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Desqview required a 286.
["Wooooosh." Just so nobody else has to do it.)
Takes less than you'd think (Score:3, Informative)
The main thing you need for VMs is memory. There isn't really any good way for VMs to share memory; they each need their own. So decide what you want to give each system, and make sure you've got that much on the host, plus like 1GB for the host OS and VM software. The good news is RAM is cheap. You should be able to pick up plenty for not much money. If you get a system based on a 975X or P35 chipset, you should be able to drop 8GB of RAM in it, which ought to be more than plenty. Those are cheap and plentiful these days too. Plus, they use DDR2 RAM, which is currently the cheapest. An Intel DP35DP motherboard might be a good choice.
As for a processor, it kinda depends on how hard the VMs will be working. That they can share. So if they are mostly sitting idle, like say a web server serving up static pages, you can get away with not a whole lot of CPU power. If you want them all to be working all the time, you need more. A Core 2 Duo will probably do just fine if that's what you've got or you need to keep the cost as low as possible. However, this is a case where a quad core would make more sense, so that's a good way to go if you can. Goes double if it's the same price. Say you can get a 2.4GHz Core 2 Quad for the same price as a 3.0GHz Core 2 Duo: while for a desktop you'd probably want the duo, get the quad in this case. Might look at the Q6600 or Q8300. Both are under $200 and would do a real nice job. Note that the Q8300 is going to need a P35 board; the Q6600 will work on a 975 board.
Disks are a real big "it depends." VMs can be set to grow as they need more space, so you can in theory have a bunch of VMs sharing one small disk, along with the OS. However, that can lead to performance problems. Hard drives suck at random access, and if a bunch of VMs get going on one at the same time, that's what you'll get. So ideally you'd have one VM per hard disk. In reality, that's probably overkill unless you've got lots of disks lying around. However, if your VMs will be doing heavy disk access, you might want to consider getting 2 drives for them, since drives are cheap. Either way, the best idea is to have them preallocate all the space they need for their virtual drives. You get better performance that way, even though it wastes drive space; but again, drives are cheap. Maybe start off with one drive for the VMs, and if you find they are getting bogged down, buy another and move them over. They are just files on the drive, so they're easy to move.
Those are the biggest factors to think about. If you get a quad core, a good amount of RAM, and enough disk space, you should get great performance. If you need to save money, don't feel like a dual core won't work fine. Really, the only thing not to cheap out on is RAM. You need to have enough; virtual memory is WAY too slow. So if you want 4 VMs with 1GB each, have no less than 5GB in the system.
Supposing you do have plenty of cash and want to further increase performance one other thing you can look at is NICs. VMs don't do a great job of sharing NICs presently. VMWare is actually working on that, but right now you get ideal performance with one NIC per VM. Not normally a big deal but if your VMs do lots of traffic it can matter. So if you want, get more NICs. One of those multi-port NIC cards works just as well. This really isn't all that necessary, but you can do it if you are after the best performance.
I'm doing this now ... (Score:2, Informative)
I'm doing this now, running a company infrastructure on Xen 3.2 and 2 non-server-class machines. These are Gigabyte and MSI Core 2 Duo motherboards running at 2.6 and 3GHz, each with 4GB of RAM, dual GigE NICs, and RAID1 drives. Nothing special.
Application Systems are:
- enterprise email/calendaring/IM
- CRM
- document management, file/print
- project management
- VPN
- internal website / wiki
- VoIP/PBX
- Monitoring, PKI to manage VPN credentials
- LD
Nehalem Box (core i7) + VMWare ESXi (Score:2)
For the best performance in virtualization, buy a Nehalem CPU: a Core i7 with 4 cores. We are using those at work and the benchmarks are amazing.
If you don't have much money, buy the low-end quad core, the Core i7 920, throw in 8GB of memory, and you will have plenty of power and memory to run a couple of 1-vCPU VMs; the performance is pretty good. If you have money, buy a dual Core i7 setup with 8 or 16GB, and then you have something that smokes. VMWare ESXi is pretty cool. I believe they have in the works a version optimize
RAM, RAM, RAM (Score:2)
You don't need a lot of cores for VM hosts. But you do need lots of RAM, since each VM can take a huge chunk.
So, essentially, you don't need any "special" hardware to use VMs. And I recommend using Linux + VirtualBox. http://www.virtualbox.org/ [virtualbox.org]
Use OpenVZ (Score:2)
If you want to run Linux processes with isolation from your physical machine, install an OpenVZ-enabled kernel plus the OpenVZ packages. It nicely isolates processes running inside each container, and there is minimal virtualisation overhead (so you don't need a bigger machine).
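Getting a container going is just a few vzctl commands. A sketch (the container ID, template name, and address are examples):

    # create, configure, and start a CentOS container
    vzctl create 101 --ostemplate centos-5-x86
    vzctl set 101 --ipadd 192.168.0.101 --hostname test1 --save
    vzctl start 101
    vzctl enter 101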
Also the container root filesystem is an ordinary directory on your host. This means you can put multiple containers into a large filesystem and they share the available space, you can backup or copy containers trivially, and you can e
Nehalem or Barcelona + lots of RAM (Score:2)
Both Nehalem and Barcelona (Phenom) are quad-core and most importantly, support EPT and NPT respectively. This feature has significant impact on virtualization performance.
If you want to run 4 VMs, you'll probably want to have a fair bit of memory. 4GB would be good, 8GB would be better.
Any dual-core with 3-4 GB will be fine (Score:2)
If you have need for each VM to have access to specific hardware like a DVDRW or whatnot, you can either connect and disconnect it as needed to each VM, or if you need it on both at the same time, you'll want a box that you c
Memory, memory, memory. (Score:3, Interesting)
Memory, and lots of it. Nothing else will help as much for running multiple VMs.
Memory is dirt cheap; I recently bought 8 gigs of ECC RAM for ~100 USD. Of course, over 3-4 gigs you need a 64-bit OS. I use Ubuntu 64, but I know others who use Vista 64 to good effect.
At least 2 cores; 3 or 4 doesn't hurt either. There's great value in both AMD and Intel at the moment: Intel owns the top end, but at the low end or midrange AMD tends to have the better value.
If possible, get a separate drive for at least your main OS, and run the VMs off their own drive. More spindles == more IO. I run 6 drives in my box: one for the OS, 4 in RAID 5 for my home dir for speed, capacity, and safety, and one drive bay I swap out for a spare I keep offsite that holds my backups. Linux software RAID is great for this use, and with modern multi-core processors you won't notice the overhead.
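For the curious, building an array like that with Linux software RAID is a single mdadm command. A sketch (device names are examples):

    # create a 4-disk RAID5 array for the home dir
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1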
If you can only afford maxing out one thing though, make it the memory.
Simple (Score:2)
Take your minimum disk and RAM requirements for a single server, multiply by how many VMs you want, these are your minimum disk and RAM requirements for the host. There is no minimum CPU speed, slower will be slower and faster will be faster, but a test rig won't fail to work just because you have a single core.
I'm running a small cluster of load-balanced LAMP VMs on my laptop; 128MB / 3GB each, sharing a single 1GHz CPU :P
Cheap! (Score:2)
Just about any machine you can buy these days can do full virtual. If not, get your money back!
cores, memory, spindles (Score:2, Informative)
The basics of virtualization come down to the number of cores, the amount of memory, and the number of spindles, though if you've read through the latest reviews of SSDs on Anandtech you can replace spindles with Intel X25-M or X25-E drives. In a virtual environment, random reads/writes are FAR more common than sequential read/write access, so you either want a high spindle count or fast SSD drives, depending on your budget.
A quad-core system with 4-8 gb or more (depends all on how much memory you want to