
Red Hat Strips Down For Docker

angry tapir writes: Reacting to the surging popularity of the Docker container technology, Red Hat has customized a version of its Linux distribution to run Docker containers. Red Hat Enterprise Linux 7 Atomic Host strips away all the utilities in the stock Red Hat Enterprise Linux (RHEL) distribution that aren't needed to run Docker containers. Removing unneeded components saves storage space, shortens update and boot times, and leaves fewer potential entry points for attackers. (Product page is here.)
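For those who haven't used Docker: day-to-day use on an Atomic host looks the same as on stock RHEL; the host is simply smaller. A minimal sketch of driving the Docker CLI from Python is below (the image name is a made-up placeholder, and it assumes the docker daemon is already running):

    # Minimal sketch: pull and run a container by shelling out to the Docker CLI.
    # The image name is a placeholder; assumes the docker daemon is running.
    import subprocess

    IMAGE = "registry.example.com/myapp:latest"  # hypothetical image name

    subprocess.run(["docker", "pull", IMAGE], check=True)
    subprocess.run(
        ["docker", "run", "--detach", "--publish", "8080:8080", IMAGE],
        check=True,
    )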

  • I know! (Score:4, Funny)

    by Anonymous Coward on Thursday March 05, 2015 @07:29PM (#49192639)

    I know I know! They also took out the Linux kernel, leaving only systemd.

    • by Zarjazz ( 36278 )

      "Removing unneeded components .." like systemd .. oh damn, someone beat me to that joke already.

  • by turkeydance ( 1266624 ) on Thursday March 05, 2015 @07:38PM (#49192693)
  • Containers.. (Score:5, Informative)

    by BrookHarty ( 9119 ) on Thursday March 05, 2015 @08:12PM (#49192913) Journal

    I've used Debian vservers in the past, and now LXC. RHEL 7's LXC integration is amazing. I use KVM as my hypervisor of choice, so I'm already using Virtual Machine Manager, and now I can manage my LXC hosts with VMM as well; it's a really nice touch.

    What really interests me is LXD: LXC containers wrapped up in something properly isolated that I can just move. Right now I'm stuck zipping up and copying LXC directories if I want to move a container (there's a rough sketch of that at the end of this comment). I tend to use stripped-down OS containers, because I want the app, TCP, SSH and NRPE installed, so I can make sure the service is monitored, and I use SSH for remote management.

    Docker tends to be aimed at enterprise usage: if you have lots of single-application appliances that you roll out and tear down, Docker is a great idea.
    That's a different use case, so I don't need Docker, but Docker is built on LXC, so I get the added benefit of Red Hat support for the underlying stack (and CentOS 7 support).

    I'm running an IT shop, so my servers run for years and I need to be able to manage and support them. LXC containers are the perfect middle ground for me. LXD is the only thing I'm missing: moving file-based containers around.

    So I'm happy Docker is pushing the technology, because the stack it runs on benefits from it too.

    BTW, I wish Red Hat would support LXC VMs on its RHEV (oVirt) platform; then I could consolidate even more VMs into single VMs. Guests with bridged MACs get filtered by the IP-spoofing rules. It's a bit silly that Red Hat pushes LXC on 7 but doesn't test LXC on its own virtualization platform.
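    Until LXD lands, that zip-and-move step can be scripted. A rough sketch, assuming the default /var/lib/lxc layout and made-up container and host names (nothing Red Hat ships):

        # Rough sketch: archive a stopped LXC container and copy it to another host.
        # Container name, paths and destination are illustrative assumptions.
        import subprocess

        CONTAINER = "web01"                                  # hypothetical container
        LXC_PATH = "/var/lib/lxc"                            # default LXC path
        DEST = "root@other-host.example.com:/var/lib/lxc/"   # hypothetical target
        archive = "/tmp/%s.tar.gz" % CONTAINER

        # Stop the container so the filesystem is quiescent before archiving.
        subprocess.run(["lxc-stop", "-n", CONTAINER], check=True)

        # Preserve numeric owners and xattrs; the rootfs holds files for many uids.
        subprocess.run(
            ["tar", "--numeric-owner", "--xattrs", "-czf", archive,
             "-C", LXC_PATH, CONTAINER],
            check=True,
        )

        # Copy it over; unpack under /var/lib/lxc on the far side and lxc-start it.
        subprocess.run(["scp", archive, DEST], check=True)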

    • I'm using WebVirtMgr for KVM (libvirt), but it doesn't do LXC, though libvirt does. Proxmox does both, but I don't want to pay for it (at my scale it doesn't make sense). What else is out there that can handle both KVM and LXC, and hopefully even LXD? Although if I want that, I'll probably just use a KVM.

    • Juju can orchestrate both LXC and KVM on several different cloud environments. Juju uses a slightly different paradigm than Docker, building on top of cloud images rather than an image-based workflow. It surprises me that Docker gets so much attention in this space. I have used both and still prefer Juju for its flexibility. With Juju I can nest LXC inside Amazon instances, or use LXC on my laptop to make it appear as a cloud environment.

      A quick google search turns up a document on this ve

    • by Lennie ( 16154 )

      I don't think Docker is aimed at the enterprise; it's aimed at making it easier to deploy applications.

      Let's take a really complicated cloud application... the OpenStack services.

      Docker can be used to deploy OpenStack in three minutes:

      https://www.youtube.com/watch?... [youtube.com]

  • not everyone knows Docker is yet another piece of cloud wankery

    • Re: (Score:2, Interesting)

      by solios ( 53048 )

      Indeed. I'm too busy struggling to stay almost not quite embarrassingly behind on front-end buzzword compliance, and now this? I'd have no idea what it was if I wasn't friends with a devops specialist. Ditto Chef, Hadoop, and a few other extremely specific buzzword compliant "concepts" tech writers whisper about in worshipful tones.

      I kinda miss the era in which a general computing proficiency was possible. Specialization used to be for insects.

      • by Tom ( 822 )

        I kinda miss the era in which a general computing proficiency was possible. Specialization used to be for insects.

        It still is. But when you have millions of people working in IT, instead of thousands, there's space for insects. Doesn't mean you have to become one.

        To any new technology that people worship I say: Give me one hour on the Internet, then I'll know what I need to know about it and you can worry about the implementation details if you like it so much.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Funny thing is, most people working in the DevOps field are generalists (myself included). People with a mix of infra and development backgrounds, with a broad range of skills across multiple disciplines, and maybe one or two "deep" skills. I spent years bouncing between system administration and development and couldn't make my mind up which direction I wanted to take because everything was interesting and I wanted to play with ALL the things. Then DevOps became more than an obscure buzzword and I found my

          • by Tom ( 822 ) on Friday March 06, 2015 @06:21AM (#49195123) Homepage Journal

            I'd be interested to see which distro can get their image down to the smallest (functional) size.

            LFS, of course. Or any other non-distro approach. What do you need a distro for if all you want is the kernel and basic system functions? It's not so difficult to start with zero and get to a shell prompt. Been there, done that.

            The really interesting approach would be to have a deployment distro - a way to add packages to such an image from outside, without having all the packaging crap and its dependencies on the image itself.

            I think what you really want is a build system that can install to the image.
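            yum's --installroot option is close to that already: the package manager runs on the build host and installs into a target tree that becomes the image, so rpm/yum never have to exist inside it. A rough sketch, with made-up paths and an illustrative package list:

                # Rough sketch: populate an image tree from outside it, using the
                # build host's yum. The path and package list are assumptions.
                import subprocess

                ROOTFS = "/srv/images/minimal-rootfs"               # hypothetical tree
                PACKAGES = ["bash", "coreutils", "openssh-server"]  # illustrative set

                # yum runs on the build host; --installroot points it at the image
                # tree, so the packaging tools never land inside the image itself.
                subprocess.run(
                    ["yum", "-y", "--installroot=" + ROOTFS, "--releasever=7",
                     "install"] + PACKAGES,
                    check=True,
                )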

          • [...]

            I'd be interested to see which distro can get their image down to the smallest (functional) size. Strip the OS down to just the absolute minimum required to boot it up, then leave it up to the Docker image creators to decide what services to enable. It's a great way to minimize attack vectors, keep image size down and make the container nice and lightweight.

            A few years ago, for a special-purpose box, I gutted a Slackware install, modified the disk scheduler in the kernel and removed every driver and every module my hardware didn't use. My memory is foggy on the numbers, but I believe the install itself was under a handful of GB (with my development toolchain and libraries) and booted to runlevel 3 using somewhere between 64-128 MB RAM (I think it was actually in the 32 MB range, but that sounds too small for me to be confident about it) and pa

        • by solios ( 53048 )

          That's a fair point.

          Still, there's plenty of room for the /. editors to pad the copy with a brief explanation of whatever the thing is - like how 2/3 of any article about North Korea or the Iranian nuclear program is boilerplate that people who follow the subject have read dozens of times already. The people who know what the thing is skip over those parts and newbies don't have to go somewhere else for an explanation.

  • by coofercat ( 719737 ) on Friday March 06, 2015 @09:12AM (#49195627) Homepage Journal

    I don't get it... what's this for? Is it for the host running the containers, or for the containers themselves?

    I set up a bit of Docker goodness at work because I needed to do some stuff in RHEL 5, 6 and 7 more or less simultaneously. I found getting the base image of a RHEL system into a container annoyingly hard: first of all, you somehow have to know what all the bajillions of 'base' packages are that you're going to need. Then you make your container and spin it up to a bash prompt. Great, all looking good, right? Wrong. For any other packages you want to install you need an RPM repo, but Red Hat only gives you Satellite, for which you need a client license. You'll need one of those for every container you ever create; that can't be right, can it?

    Maybe I'm completely missing the Chosen Path here, but getting Docker up and running in an enterprise setting seems remarkably fiddly. That said, being able to spin up a considerably smaller container would be very welcome. I'm not sure a stripped-down host to run them on excites me all that much, but whatever it takes to get the bloat out of distributions is fine with me.
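    FWIW, one way around the base-image problem is to build a rootfs tree on an already-subscribed RHEL box (e.g. with yum --installroot) and feed it to docker import. A hedged sketch, with made-up path and tag:

        # Hedged sketch: turn a rootfs tree built on a subscribed host into a
        # Docker base image via `docker import`. Path and tag are illustrative.
        import subprocess

        ROOTFS = "/srv/images/rhel-rootfs"   # hypothetical rootfs built elsewhere
        TAG = "internal/rhel-base:latest"    # hypothetical image tag

        # Stream the tree as a tar archive into `docker import`, which creates a
        # single-layer base image from it.
        tar = subprocess.Popen(["tar", "-C", ROOTFS, "-c", "."],
                               stdout=subprocess.PIPE)
        subprocess.run(["docker", "import", "-", TAG], stdin=tar.stdout, check=True)
        tar.stdout.close()
        tar.wait()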
