If you’re in the cloud or datacenter space, chances are you’ve been hearing the terms “containers” and “Docker” non-stop for the past few years. Ever stop and wonder what this is all about?
The purpose of this tutorial is to get your feet wet with Docker and to solidify your understanding of how Docker and containers work.
Getting Started with Docker
Getting Docker to work on your server is quite simple. Packages are available in both deb and rpm formats, and many distributions already ship Docker in their official repositories, so it usually only needs to be installed with your package manager.
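For example, on a Debian or Ubuntu system the installation might look something like this (a rough sketch; package names vary by distribution, and Docker’s own repository ships the package as docker-ce rather than docker.io):
$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo systemctl enable --now docker
Once you have Docker successfully installed on your system, test it by running the hello-world image: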
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
03f4658f8b78: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
…
After a successful installation, it is time to get our hands dirty and run our first container. But what is a container, and how is it different from a virtual machine?
Linux containers package applications or programs in a way that keeps them isolated from the host system they run on. A container behaves like its own machine: to the outside eye, it looks like an independent server. Here is where the difference comes into play: unlike a virtual machine, a container does not create and run an entire operating system inside a virtualized instance; it ships only the individual components the application needs in order to operate. As you can imagine, this means less overhead, less disk space, and better performance. To put this into perspective, with an ideal container setup you can run as much as 4 to 6 times the number of server application instances as you could with Xen or KVM VMs on the same hardware. You get a lot more application bang for your server buck.
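To get a rough feel for how little overhead a container adds, you can time how long it takes to start one, run it, and throw it away again, reusing the hello-world image we already pulled (an illustration rather than a benchmark; results depend entirely on your hardware and on the image already being cached locally):
$ time docker run --rm hello-world
This typically finishes in about a second, while booting a full virtual machine takes far longer.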
Running a container with Docker
Alright. So now, we have Docker installed, and we understand what containers are…
It’s now time to run our first container with Docker! For our first container we will use Ubuntu, so let’s pull its image from the registry:
$ docker pull ubuntu
If you get a permission error, don’t forget to use sudo. The pull command will fetch the Ubuntu image from Docker’s official repository servers.
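If you would rather not prefix every Docker command with sudo, you can add your user to the docker group instead (a common setup step; keep in mind that membership in this group effectively grants root-equivalent access to the host):
$ sudo usermod -aG docker $USER
Log out and back in (or start a new login shell) for the group change to take effect.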
Once the download is done, we can start a container from this image. To get an interactive shell inside the container, run:
$ docker run -i -t ubuntu /bin/bash
Alternatively, you can run a single command in a fresh container by passing it after the image name. We can verify the image works by listing the container’s root directory:
$ docker run ubuntu ls -l
total 48
drwxr-xr-x 2 root root 4096 Mar 2 16:20 bin
drwxr-xr-x 5 root root 360 Mar 18 09:47 dev
drwxr-xr-x 13 root root 4096 Mar 18 09:47 etc
drwxr-xr-x 2 root root 4096 Mar 2 16:20 home
drwxr-xr-x 5 root root 4096 Mar 2 16:20 lib
……
The ls -l part is executed inside the container, so the listing you see is the container’s own root filesystem rather than the host’s.
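Note that every docker run creates a new container, and stopped containers stick around until you remove them. A few housekeeping commands worth knowing (a quick sketch; the IDs and names shown by docker ps will differ on your system):
$ docker ps -a                  # list all containers, including stopped ones
$ docker rm <container-id>      # remove a stopped container
$ docker run --rm ubuntu ls -l  # --rm cleans the container up automatically when it exits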
There you have it! Docker is installed on your server, and you have created your first container. The possibilities are endless from here.
Docker and containers are a great tool for developers and webmasters! With Docker you can test your applications, containerize your platform, and increase productivity.
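As a next step, you might want to package an application of your own into an image. Below is a minimal, hypothetical Dockerfile sketch (the app.py script and the python base image are placeholders for illustration; substitute whatever your application actually needs):
FROM python:3
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
Build and run it with:
$ docker build -t my-app .
$ docker run --rm my-app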
Special thanks to Nathan from NFP Hosting for sponsoring this tutorial.
Portainer is a simple management UI for docker. All you have to do is:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Read more here: https://github.com/portainer/portainer
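Once the container is up, the web UI should be reachable on port 9000 of the host, which is what the -p 9000:9000 mapping in the command above exposes.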
Thank you so much, never knew what I was missing.
Can Docker be deployed from within a KKM or OpenVZ VPS, or must it be deployed at the higher physical server level? I ask because LEB VPSs are so affordable. If not, is there a market for low end Docker access? I’d love to deploy cloud-based apps from the ease of Docker.
Thoughts?
I meant KVM.
You can do it within a KVM, but you’re basically adding overhead.
I’m not sure that the overhead really matters when using docker. The entire use case is for keeping the same environment across your applications for dev teams and even production without having to spend tons of time in setup. This is in addition to benefits already mentioned.
I would assume that when using KVMs, you’d already have a unified environment for the shared tasks with a common disk image, but I see your point.
Well then you have to transfer your entire kvm image between all your devs etc instead of a single dockerfile that only downloads changes.
KVM is more than good enough for Docker. Some Xen-based servers are also good (like EC2). OpenVZ won’t be much of a suitable option, I guess.
Why wouldn’t openvz be a suitable option to use a docker container in? Your comment adds no real value other than biased speculation.
The problem is that typically OpenVZ hosting services run a much older version of the kernel (typically 2.X) that either doesn’t support, or doesn’t fully support the namespaces and/or nested namespaces.
Since OpenVZ kernel 042stab105.4 it is possible to run Docker inside containers.
https://openvz.org/Docker_inside_CT
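You can check which kernel your container is actually running (and whether it is at least that version) with:
$ uname -r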
You’d have to be running a _very_ old kernel version for this to be the case. In fact I had issues running Debian 9 and Ubuntu 17 templates on kernels older than this, so you wouldn’t even be able to use the latest operating systems with any provider running a kernel this old.
Although you are right `some` use cases may be limited.
Yes – which only supports older versions of Docker and doesn’t support creation of bridged networks, which in turn limits the kinds of topologies you can use. And the VFS backend doesn’t support copy-on-write (COW).
The biggest issue is the older version of docker – which in turn means that every tutorial will require some adjustment to get working (which is fine for someone who has worked with docker for a while – not so good for someone new to it). For all these reasons, KVM is ‘better’ as a tool to ‘get your feet wet with Docker and Containers’.
The article compares docker to virtualization such as Xen or KVM. Why? This is not what docker is meant to be. Using docker inside some virtualization is completely acceptable and in some cases required.
Although you might be right, some sources to back these claims would really help the newbies out there (like me).
Sure, on the meta description on good ole’ google it says ”
Docker – Build, Ship, and Run Any App, Anywhere
https://www.docker.com/
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.”
– https://www.docker.com/what-docker
“Docker containers are lightweight by design and ideal for enabling microservices application development. Accelerate development, deployment and rollback of tens or hundreds of containers composed as a single application. Whether building new microservices or transitioning monoliths to smaller services, simple to use tools make it easy to compose, deploy and maintain complex applications.”
“Integrate modern methodologies and automate development pipelines with the integration of Docker and DevOps. The isolated nature of containers make it conducive to rapidly changing environments by eliminating app conflicts and increasing developer productivity. Docker enables a true separation of concerns to accelerate the adoption of DevOps processes.”
“Cloud migration, multi-cloud or hybrid cloud infrastructure require frictionless portability of applications. Docker packages applications and their dependencies together into an isolated container making them portable to any infrastructure. Eliminate the “works on my machine” problem once and for all. Docker certified infrastructure ensures the containerized applications work consistently.”
LXD is better if you want a VM-like environment that’s secure by default.