Red Hat Acquiring Cloud Storage Company Gluster
Julie188 writes "One of the more interesting aspects of Red Hat's acquisition of virtual storage vendor Gluster on Tuesday is how it drags Red Hat into bed with its cloud competitor OpenStack. Red Hat made waves in the open source community over the summer when one of its executives threw punches at OpenStack's community, saying it amounted to little more than a bunch of press releases. In July, Gluster contributed its Connector for OpenStack, which enables features such as live migration of VMs, instant boot of VMs, and movement of VMs between clouds on a GlusterFS environment. While Fedora has already said that its upcoming Fedora 16 will support OpenStack, Fedora is a community distro and not beholden to Red Hat. Red Hat, however, promised today that it would continue to support and maintain Gluster's contribution to OpenStack. It didn't promise to quit the smack talk."
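Live migration on GlusterFS works because both hypervisor hosts see the same disk image through the shared volume, so only memory and CPU state have to move. As a rough illustration of the mechanics (not the Gluster Connector's actual code), here is a minimal libvirt-python sketch; the host names, VM name, and mount path are hypothetical, and it assumes the GlusterFS volume is mounted at the same path on both hosts.

```python
# Hypothetical sketch: live-migrate a running VM whose disk image lives on a
# shared GlusterFS mount (e.g. /mnt/glustervol on both hosts). This is the
# underlying libvirt mechanics, not the Gluster Connector itself.
import libvirt

SRC_URI = "qemu+ssh://host1/system"   # assumed source hypervisor
DST_URI = "qemu+ssh://host2/system"   # assumed destination hypervisor
VM_NAME = "webserver01"               # hypothetical VM name

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(VM_NAME)

# VIR_MIGRATE_LIVE keeps the guest running during the copy; because the disk
# already sits on the shared GlusterFS volume, only memory/CPU state moves.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```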
Awesome (Score:2)
Re: (Score:2)
This is great news; Red Hat will keep it open source. I'm glad Oracle didn't get their hands on it and commercialize it like they did MySQL (the commercial plugins in 5.5.16 are what I'm referencing).
I much prefer Redhat's approach.
I couldn't agree more; they have a track record of doing the right thing.
Re: (Score:3, Informative)
Best part of acquisition: Gluster fsck
Unfortunately not, it would seem, according to this. [gluster.com]
As your volume size grows beyond 32 TB, fsck (filesystem check) downtime becomes a huge problem. GlusterFS has no fsck. It heals itself transparently with very little impact on performance.
Re: (Score:3)
So OpenStack is a hypervisor-independent private cloud API (see the sketch after this comment for what that means in practice). Its corporate backers include Rackspace, NASA, and Dell. There is a similar competing product called CloudStack, from Citrix. The Citrix CloudStack team has integrated a number of OpenStack components into their own product, and has contributed code back to OpenStack as well.
As far as I know, RHEV does not compete with either of those products head-on. RHEV is for managing KVM (and maybe Xen) hypervisors. It is primarily a management frontend
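To make the "hypervisor-independent API" point above concrete, here is a minimal sketch of a client talking to OpenStack's identity and compute services over plain HTTP. The endpoint URL, tenant, and credentials are made up, and the response layout follows the Keystone v2.0 format of that era.

```python
# Hypothetical sketch of what "hypervisor-independent cloud API" means in
# practice: the same HTTP calls work no matter which hypervisor runs
# underneath. Endpoint URL, tenant, and credentials here are made up.
import json
import requests

KEYSTONE = "http://cloud.example.com:5000/v2.0"   # assumed identity endpoint

# Authenticate against Keystone and get a scoped token.
resp = requests.post(
    KEYSTONE + "/tokens",
    headers={"Content-Type": "application/json"},
    data=json.dumps({
        "auth": {
            "tenantName": "demo",
            "passwordCredentials": {"username": "demo", "password": "secret"},
        }
    }),
)
access = resp.json()["access"]
token = access["token"]["id"]

# Find the compute (Nova) endpoint in the service catalog.
nova = next(s for s in access["serviceCatalog"] if s["type"] == "compute")
compute_url = nova["endpoints"][0]["publicURL"]

# List servers -- the caller never sees whether KVM, Xen, etc. runs them.
servers = requests.get(
    compute_url + "/servers", headers={"X-Auth-Token": token}
).json()["servers"]
for s in servers:
    print(s["id"], s["name"])
```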
Re: (Score:1)
There is also Deltacloud (Aeolus, etc.). Deltacloud aims to manage "clouds" with different backends, like libvirt for Xen, KVM, LXC, VMware, etc.
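For comparison, a minimal sketch of listing instances through Deltacloud's single REST API, which stays the same whichever backend driver is configured; it assumes a deltacloudd server on its default local port, and the credentials are placeholders for whatever the chosen backend expects.

```python
# Hypothetical sketch: list instances through Deltacloud's REST API.
# Assumes a local `deltacloudd` on its default port (3001); the basic-auth
# credentials are passed through to whichever backend driver is configured.
import requests

API = "http://localhost:3001/api"

resp = requests.get(
    API + "/instances",
    auth=("apiuser", "apisecret"),           # backend-specific credentials
    headers={"Accept": "application/json"},  # ask for JSON instead of XML
)
# The same fields come back whether the backend is libvirt, EC2, etc.
for inst in resp.json().get("instances", []):
    print(inst.get("id"), inst.get("state"))
```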
Re: (Score:1)
Not for long... http://ovirt.org/. Kick-off workshop in November.
Summary not asinine enough (Score:1)
Could you try a little harder to gin up some phony controversy around Fedora?
Less insane support? (Score:2, Interesting)
Maybe it will become part of the RHEL distro now, instead of the insane support contracts they had: $800/node per year for 5 email support calls. For a FS that works better with more nodes... we quickly went running when they told us the costs. That kind of support pricing doesn't work well on a cluster.
Re: (Score:1)
We got a quote for two Gluster servers, replicated. The answer was 'no support on Ubuntu'; we'd have to switch to their ISO install, and pay $8500/yr for support.
Re: (Score:2)
Re: (Score:1)
There are some OpenStack RPMs in Fedora (see http://fedoraproject.org/wiki/OpenStack [fedoraproject.org]), so they have started packaging it.
Re: (Score:2)
I can appreciate the resistance on the vague "cloud" subject, but the criticism of virtualization is strange. You're talking about virtualization robbing the enterprise of CPU cycles when, in today's world of servers starting at 8 cores and going up, the average CPU utilization is something like 2% or less. So it's the bare metal servers that are robbing the enterprise, by using budget to buy 98% of something that they don't need (or seldom need). This is disregarding the major boon of virtualization to end
Re: (Score:2)
Well, this is all true. You have to know your workload.
There are workloads where performance minus 15% is still twice the performance the workload needs. Each year going by, the number of workloads for which that is true grows. A lot. It's not that hard to make a database do 30,000 IOPS in a VMware environment, presupposing the right network and storage to support that. 30,000 IOPS covers a hella-lot of workloads (the vast majority of all corporate workloads), but certainly not all workloads, and let's be honest
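For scale, here is a back-of-the-envelope spindle count for that 30,000 IOPS figure; the per-disk IOPS, the 70/30 read/write split, and the RAID-10 write penalty are generic rules of thumb I'm assuming, not numbers from the thread or from any vendor.

```python
# Back-of-envelope: how many spinning disks would 30,000 IOPS take?
# Assumptions: ~175 IOPS per 15k RPM disk, 70/30 read/write mix, and a
# RAID-10 write penalty of 2 (each write costs two backend I/Os).
def spindles_needed(target_iops, read_frac=0.7, write_penalty=2, disk_iops=175):
    backend_iops = (target_iops * read_frac
                    + target_iops * (1 - read_frac) * write_penalty)
    return int(-(-backend_iops // disk_iops))  # round up to whole disks

print(spindles_needed(30000))  # -> 223 disks, which is why caching matters
```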
Re: (Score:2)
Our physical hardware deployment time, from ordering? Probably measured in months. A VM? Minutes.
Virtualisation these days robs you of less than 1% CPU and not much RAM; 50 VMs take a lot less hardware/space/power/cooling than 50 physical hosts, and in fact caching advantages mean they'll usually perform a lot better than 50 physical hosts too.
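A rough consolidation calculation using the numbers floated in this thread (the ~2% average utilization from above, the <1% virtualization tax, plus an assumed 60% planning headroom):

```python
# Rough consolidation math with the numbers from this thread: 50 workloads
# at ~2% average CPU utilization, ~1% virtualization overhead, and a 60%
# target host utilization so peaks don't stack up. All assumptions.
workloads = 50
avg_util = 0.02        # average CPU utilization per workload
virt_overhead = 1.01   # ~1% CPU tax per VM
headroom = 0.60        # don't plan hosts past 60% busy

demand = workloads * avg_util * virt_overhead  # host-equivalents of real work
hosts = max(1, round(demand / headroom))

print("CPU demand: %.2f host-equivalents" % demand)                      # ~1.01
print("Hosts needed: %d (vs. %d bare-metal boxes)" % (hosts, workloads))  # 2
```

Which lands on exactly the "one (or two) virtualization hosts" scenario the next comment describes.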
Re: (Score:2)
Yes; you can start doing things like using Fusion-io as a storage cache aggregation point; that would be prohibitively expensive to do on 50 physical hosts, but if you do it on just one (or two) virtualization hosts, it hardly costs anything. Cached read IOPS can jump into the 100,000 range.
Likewise with IO fabric. When 40-gigabit Ethernet fabric comes out, we will be able to upgrade a few fat hosts affordably enough, but that's nonsense talk for the 50-physical-hosts use case.
We're already 100% 10GE to all our E