Goodbye Docker on CentOS, Hello Ubuntu (linux-toys.com)
71 points by rusher81572 on Jan 17, 2016 | 41 comments


I would rather say "goodbye Docker", period. Having used it a bit, I have concluded that it is just not a very good or mature tool. Once you scratch under the surface, things cease to be easy or stop working at all. It makes you wonder how it's been around this long without anybody stumbling upon an issue that you uncover 30 minutes into the tool's use.

Docker does have its uses, but in the majority of cases you are better off using native OS package and dependency management (RPM/Yum in the case of Red Hat-based distros). One very obvious thing is that package managers usually track versions and dependencies and allow install actions to happen based on the version delta. With Docker you just replace the whole environment, which is fine, unless some of the data context lives outside of Docker.

I think the majority of Docker uses are of a class "I can't figure out how to manage dependencies for a given OS, so I am going to skirt the issue by using Docker".


You say that as if it's a bad thing. Isn't the evolution of development about making things more intuitive? I love the 10,000-line man page as much as the next dev, except I don't. Docker solves a problem: standardizing scalability and microservices without the need to think, while developing the application, about whether it runs on the same or different machines. Honestly, I think it could be better. I wouldn't mind if the volume concept were easier to wrap my head around. I would like the cleanup process to be less worrisome. But they have built an awesome solution, and it's open source. What more can you ask for?


Have you tried Chef, Puppet, Ansible etc?


Well... that might be a reason why it's popular. I think the very snappy boot-up process for a container is also very appealing. For example, t2.micro instances on AWS seem to take at least 30 seconds to boot, which is frustratingly slow if you're changing stuff often but only touching application/service dependencies.


It's not "docker or VMs", it's "docker or other container frameworks". Compared to those, Docker seems to come with a ludicrous list of caveats and workarounds.


Good point. I haven't kept up with the maturity status of other container frameworks though so I'm not sure how prevalent stuff like Rocket is in the wild.


Having worked for some time with configuration management and backend development, and having seen the growing pains that follow any non-trivial use of it (obscure declarative languages/DSLs, tangled messes of declarative vs. scripted configuration, inter-dependencies that are not obvious until you hit a corner case, etc.), I can say that Docker has been a breeze, even when I run into a breaking bug in a new release.

It's easier to get our devs to wrap all of their dependencies in a container, it's easier to just deploy the container, and it's easier to separate the "state" and "data" that should be kept from the parts of the application that are throwaway.

Docker helps a lot in getting away from the snowflake-machine mentality: everything is ephemeral, so think carefully about what you really need to persist and treat those cases accordingly.

I do understand a lot of the criticism about Docker, but having worked with almost every mainstream configuration management tool (CFEngine, Puppet, Chef, and Ansible) I can say I truly prefer only having to care about Docker (or containers in general; I'm taking a look at other solutions at the moment) to the tangled mess that every single one of them becomes later on.

And I'm sorry, but package managers alone don't cut it for the vast majority of deployments; you still have to manage configuration files, environment variables, and all of the other ugly mess.

How long have you worked with a scale of hundreds to thousands of automated servers?


> Having worked for some time with configuration management and backend development, and having seen the growing pains that follow any non-trivial use of it (obscure declarative languages/DSLs, tangled messes of declarative vs. scripted configuration, inter-dependencies that are not obvious until you hit a corner case, etc.), I can say that Docker has been a breeze, even when I run into a breaking bug in a new release.

I don't really see how the Docker model is better. I wouldn't trust a person who wrote a terrible Puppet file to work on a Docker container.

I can understand that from a DevOps point of view it's more convenient; however, you can easily just build a VM image with Puppet and deploy that to your billions of servers.


"It makes you wonder how it's been this long and nobody stumbled upon an issue that you uncover 30 minutes into tool's use."

Either you're doing something wrong or everybody's extremely lucky. If you've used Docker for 30 minutes I would bet on the former :)


Or everyone got used to it and doesn't consider it an issue.

I believe that the filesystem layering feature in Docker is an anti-feature. It depends on unstable kernel features to work properly and doesn't really address the caching issues. Dependencies usually form a tree, not a linear chain as presented in a Dockerfile.


Exactly!

My company is using Docker extensively. It has made deployment and maintaining different versions very smooth.


>I can't figure out how to manage dependencies for a given OS

It's actually: "Distro-provided dependencies are two to three years old and I don't want to deal with backports, PPAs, or whatever to get a fresh stable version of Python, Node, or anything else."
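
For instance, getting a newer Python on Ubuntu is a couple of commands once you add a third-party PPA (a sketch; the PPA name here is just one well-known example, and you'd want to vet any third-party repo first):

    # Add a PPA carrying newer Python builds, then install it alongside
    # the distro's default Python (illustrative PPA and version):
    sudo add-apt-repository ppa:fkrull/deadsnakes
    sudo apt-get update
    sudo apt-get install python3.5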


By default, Docker will use the AUFS storage backend if available, and then fall back to devicemapper on loopback.

RHEL, CentOS, and Fedora do not ship the AUFS kernel module because it is not part of the mainline Linux kernel and is unlikely to be included in the future, and these distros have an "upstream first, no out-of-tree bits" policy. Instead, they recommend using devicemapper on LVM [1][2].

The same advice is provided in the official Docker documentation [3]:

> Docker hosts running the devicemapper storage driver default to a configuration mode known as loop-lvm... The mode is designed to work out-of-the-box with no additional configuration. However, production deployments should not run under loop-lvm mode... The preferred configuration for production deployments is direct lvm.
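
In practice, direct-lvm means giving Docker a dedicated thin pool and pointing the daemon at it. A rough sketch (/dev/xvdb is a placeholder for whatever spare block device you have; the pool sizes are illustrative):

    # Create an LVM thin pool on a spare block device:
    pvcreate /dev/xvdb
    vgcreate docker /dev/xvdb
    lvcreate --wipesignatures y -n thinpool docker -l 95%VG
    lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
    lvconvert -y --zero n -c 512K \
      --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

    # Point the daemon at the pool instead of the loopback default:
    docker daemon --storage-driver=devicemapper \
      --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool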

You might consider using CentOS Atomic Host, which comes preconfigured with LVM thin pools.

OverlayFS is also an alternative, but it can be problematic. It only implements a subset of the POSIX standard [4], which can cause some programs to fail.

[1] http://www.projectatomic.io/blog/2015/06/notes-on-fedora-cen... [2] https://access.redhat.com/documentation/en/red-hat-enterpris... [3] https://docs.docker.com/engine/userguide/storagedriver/devic... [4] https://docs.docker.com/engine/userguide/storagedriver/overl...


Yeah, I'm a bit surprised that the author didn't try direct LVM for their device. The speed up is noticeable.


The author of the original blog post is conflating the base image of a Docker image with the host OS on which the Docker daemon is running. They are completely orthogonal issues. Yes, there have been plenty of complaints about devicemapper performance and space-reclamation issues. By all means, switch the host OS from CentOS/Fedora to Ubuntu if it alleviates the problems. The base image is a completely different matter. There is no reason to switch it from CentOS/Fedora to Ubuntu just because you changed the host OS. That is the point of the filesystem isolation Docker provides.


> The base-image is a completely different matter. There is no reason to switch from CentOS/Fedora to Ubuntu just because you changed the host OS

Yes, there is. There is a known issue using AUFS (which Ubuntu uses for Docker) with CentOS/Fedora images:

https://github.com/docker/docker/issues/6980

To make it easier, I just changed the base image from CentOS/Fedora to Ubuntu so I do not have to worry about it.


You can find some notes on tuning devicemapper here:

https://jpetazzo.github.io/assets/2015-03-03-not-so-deep-div...

If you've got a new enough kernel though (i.e. 3.18+), you're best off using Overlay for your storage driver. It's fast and doesn't require a lot of tuning.
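
Switching is mostly a kernel check plus a daemon flag (a sketch for the Docker of this era, where the flag is passed to 'docker daemon'; note that each driver keeps its own image store, so previously pulled images won't be visible after the switch):

    # overlay wants kernel 3.18+:
    uname -r

    # Run the daemon with the overlay driver, then confirm it took:
    docker daemon --storage-driver=overlay
    docker info | grep 'Storage Driver'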


Last I checked, OverlayFS did not play well with tools like Yum/Pip. Has this been fixed?


> you're best off using Overlay for your storage driver.

As long as you don't mind gifting root to your container tenants.


I strongly DO NOT recommend using devicemapper as the storage backend in Docker. Every time we have tried it, and every customer who has tried it, has failed in the medium to long term, in bad ways. It became so painful that we literally blacklist devicemapper as a supported filesystem in the Discourse installer.

We waited a year for this to "stabilize" but it never did.


I haven't used AUFS or overlay(fs), only devicemapper (thin provisioning) and Btrfs. Btrfs is faster than devicemapper, but even creating and removing containers seems slower than it ought to be, considering how little delta there is. Creating and removing with 'btrfs sub crea/del' directly is much faster than 'docker create' or 'docker rm', so I'm not really sure where the delays are.
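
A crude way to see the gap (a sketch; the subvolume path and image name are just illustrative, and the docker timings assume the image is already pulled):

    # Raw btrfs subvolume churn:
    time btrfs subvolume create /mnt/test-subvol
    time btrfs subvolume delete /mnt/test-subvol

    # Versus Docker doing nominally the same thing underneath:
    time docker create --name test busybox
    time docker rm test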


Good comment. Did you try on CentOS or Ubuntu?


Fedora Cloud Atomic 23 on an Intel NUC.


Never had performance issues with the Docker device-mapper graph driver on Fedora 23. No tweaking was necessary. I imagine CentOS is similar although it may not use the latest and greatest features.

There is not enough information in this blog post to say exactly what the problem is, but switching distros may be overkill here.


For some background: https://developerblog.redhat.com/2014/09/30/overview-storage...

Overlay(fs) is likely going to be the way forward.


From the OP: "I enjoyed its minimal install to create a light environment, intuitive installation process, and it's package manager."

CentOS is neither minimal nor light, and neither is Ubuntu. Both distributions target convenience and ease of use, which means a lot of generally unnecessary features/services are enabled by default. The main reasons to use CentOS are compatibility with proprietary software built for RHEL; for Ubuntu, familiarity with its desktop version; or, for either, the option of buying enterprise support.

If the OP is primarily looking for minimal and light, he should look at pretty much any other major Linux distribution, like Debian proper, Slackware, or Gentoo, before CentOS or Ubuntu.


The minimal CentOS install is pretty lightweight. It's not a tiny distro by any means, but for keeping w/in the RHEL/CentOS realm, it's really pretty good. It's the only version that I install on servers, just to keep the excess cruft out (no X, no Gnome, no extra services, etc...).


Ubuntu has "Ubuntu Server", which is Ubuntu without X, GNOME, and other desktop services.

In addition, there is Ubuntu Minimal (https://help.ubuntu.com/community/Installation/MinimalCD) which is the most minimal you can get in Ubuntu.


I just did an install of CentOS with the minimal setting, and I am amazed by the crap that is loaded up vs. not loaded. Don't need sound, don't need IPv6, don't need wireless. Meanwhile the base install is missing stuff like nano, wget, network-related tools like traceroute, and several other packages. Thankfully, Bluetooth and CUPS weren't in the list this time.
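
At least the missing bits are a one-liner away (assuming the usual package names):

    yum install -y nano wget traceroute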


There's still a whack of services that should be disabled on a server, like rpc/nfs, bluetooth, avahi, etc. (depending on 5.x/6.x/7.x), even if you go with a minimal install. Bloat aside, they can be security risks.
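
On 7.x that's a few systemctl calls (a sketch; unit names vary by release, and some may not be present on a given install):

    # Stop and disable services a headless server rarely needs:
    systemctl stop avahi-daemon bluetooth rpcbind
    systemctl disable avahi-daemon bluetooth rpcbind

    # On 6.x and earlier the equivalent is chkconfig/service, e.g.:
    # chkconfig avahi-daemon off && service avahi-daemon stop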


Yeah, this surprised me too, given the stated interest in a light-weight, server (no GUI) platform.

But we see this a lot - people jump onto Ubuntu despite not having an interest in running Unity, Mir, upstart, etc., or in a slightly fancier installer, or in a different release schedule for stable / long-term-support versions.

I suspect it's simply that, for many people unfamiliar with the various GNU/Linux distros, Ubuntu is the one they've heard of most often. And while most people marvel at the sophisticated package management system, few ponder why the package files don't have a .ubu suffix.


He installed Ubuntu Server, which does not have a graphical interface. Being "heard of more often" translates into the most up-to-date package manager, a well-known and long-term supported system, and wider software compatibility than any other distro, all great qualities for a server.


The package manager itself actually comes upstream from Debian; Ubuntu just normally puts fragile GUIs around it. Most of the positives of Ubuntu come from Debian. The Ubuntu-specific stuff, on the other hand, sometimes breaks core functionality, like booting off software RAID being broken in 12.x, which had worked fine in Debian for at least a decade.

Generally speaking, a server environment is not the right place to run bleeding-edge software. Debian's conservatism is quite sensible when it comes to production server environments, and traditionally (before the whole systemd debacle) they were pretty good at prioritizing stability over the feature of the week demanded by more desktop-centric folks. There's Ubuntu and other derivatives to cater to those specific needs without compromising the integrity of the distribution in general. And if you have a specific application that requires a more recent version, you can always make use of a 3rd-party repository (that you'd of course have carefully vetted first), a binary package, or just roll your own from source. It makes much more sense to take those extra steps for specific applications than to take a blanket approach of rolling out new code all over your production environment.


Yes, you're right wrt the GUI. I'd kind of tangentially gone off on the 'I need a better GNU/Linux distro - I'll use a derivative of Debian' angle. I had a look for a package list for the current Ubuntu Server LTS (now 20 months old) but couldn't find one. I expect it includes upstart, given the vintage. Nonetheless.

Could you please explain this reasoning:

  > Being "heard of more often" translates into the most up-to-date package manager
I think you're agreeing with me when I say Ubuntu is heard of more often, and you say it's well-known.

Support -- have you compared the length of support for Debian stable and Ubuntu LTS releases? Did you conclude that Ubuntu has long-term support and Debian does not?

Can you also explain what you mean by:

  > [Ubuntu has] wider software compatibility than any other distro
I'd be curious what software runs on Ubuntu that doesn't run on Debian, CentOS/RHEL, etc.


Yup!


I had a different takeaway from that particular line. Maybe I'm understanding it wrong.

For many years I would craft a system from the Ubuntu minimal install. It appears CentOS has a similar installer. [0] With Ubuntu's, one could create a pretty good minimal install.

[0] https://www.centos.org/download/


That was my intention.


I've had different trouble with Docker on CentOS 7. Last November, I created a POC Kubernetes cluster on top of the shipping CentOS 7 Docker infrastructure (1.8 at the time), and then beat the hell out of it. After a week or so, I lost a node to XFS file system corruption in the Docker image tree. The only solution I could find that worked was uninstalling Docker, wiping the Docker image tree in /var, and then reinstalling Docker. Kubernetes would then resume distributing containers to the node. Every node died in this manner at least once during the POC. With Kubernetes managing the containers, it wasn't a disaster - just really annoying.
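
For anyone hitting the same thing, the recovery boiled down to something like this (a sketch of the procedure above; it assumes the default image tree location and destroys all local images and containers on the node):

    systemctl stop docker
    yum remove -y docker
    rm -rf /var/lib/docker    # wipe the corrupted image tree
    yum install -y docker
    systemctl start docker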


Any chance you can link to a "getting started" docker on Ubuntu writeup?


This one was posted on HN a few days ago. Not specifically Ubuntu-related, but includes Ubuntu. Also aimed at EC2 rather than DO, however one imagines the ideas behind a fully portable ersatz VM approach are, well, fully portable.

https://news.ycombinator.com/item?id=10890233


I should write one that complements that post. Good idea =)



