Docker
Applications Era
In today's world, we are all surrounded by apps and websites. We use our smartphones and computers to browse the internet and consume web services through mobile apps or browsers. All this web-based data is coming from somewhere far away, from computers located in some datacenter. We generally call them servers; these are the physical machines we see racked up in a data center with all those flashing lights and cables.
If we take examples like Amazon, Google, Netflix, or Goibibo, all these businesses run on applications; we could even say their applications are their business. This makes a very important point: we cannot separate their business from their applications.
Applications need compute resources to run, and those come from the servers where they are hosted. In the old days, when we did not have virtualization or cloud computing, we used to run applications directly on physical servers. So, if I wanted to host an application on 10 web servers, I needed ten physical servers behind a load balancer serving the web traffic.
These servers are very expensive, and we need to do a lot of maintenance on them:
• We need to procure a server, a process where we place an order for the purchase.
• There is capital expenditure (CapEx) required.
• There is operational expenditure (OpEx): cooling, power, and admins to maintain that server farm.
So, if I want to increase capacity and add more servers, I need to spend money and time on the process described above. This is very common, because a business starts with a very small user base and user traffic keeps growing if the business is doing well.
We deploy one application per server because we want our applications to be isolated. For example, if we need a web app, a DB app, and a few backend apps, we may end up with multiple physical systems, each running a single instance of one app.
So, every time we needed a new app to run, we bought servers, installed an OS, and set up the app on them. And most of the time nobody knew the performance requirements of the new application! This meant IT had to make guesses when choosing the model and size of servers to buy. As a result, IT did the only reasonable thing - it bought big, fast servers with lots of resiliency. After all, the last thing anyone wanted - including the business - was under-powered servers. Most of the time these physical servers' compute resources would sit under-utilized, running at as little as 5-10% of their potential capacity. A tragic waste of company capital and resources.
Virtualization Revolution
VMware gave the world the virtual machine, and everything changed after that. Now we could run multiple applications, each isolated in its own OS, on the same physical server.
In the virtualization chapter, we discussed the benefits and features of virtualization and the hypervisor architecture.
Problems with Hypervisor Architecture
Now we know that every VM has its own OS, and that is a problem. An OS needs a fair amount of resources: CPU, memory, storage, and so on. We also have to maintain OS licenses and nurse each OS regularly with patching, upgrades, and config changes. We only wanted to host an application, but we have collected a good amount of fat on our infrastructure; we are wasting OpEx and CapEx here. Now think about shipping a VM from one place to another. It sounds like a great idea: if we could bundle everything into a VM image and ship it, the other person wouldn't need to set up the VM from scratch and could run it directly from the image. We did exactly this in the Vagrant chapter, where we downloaded a preinstalled VM and just ran it.
But these images are heavy and bulky, as they contain the OS along with the app, and booting them is slow. So even though a VM image is portable, it's not convenient to ship a VM every time.
What if we could ship an application bundled with all its dependencies and libraries in an image, without the OS? That sounds like it solves a big problem, and that's exactly what containers are.
Think about setting up an application on a VM or a physical machine. We need to set up the OS, install dependencies, deploy the application, and make some config changes in the OS. We follow a series of steps to do all this, like setting up a LAMP stack. If we could bundle all of this into one container and ship it, admins wouldn't need to do any setup on the target; all we would need to do is pull a container image and run it.
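To make that pull-and-run idea concrete, here is a minimal sketch, jumping ahead to the Docker tooling introduced later in this post. It uses the Docker SDK for Python (the docker package), which is my own choice rather than something this post prescribes, assumes the Docker Engine is installed and running locally, and uses the public nginx image purely as an example:

import docker

# Connect to the local Docker Engine (assumes the daemon is running).
client = docker.from_env()

# Pull a ready-made application image; no OS install or app setup is needed on this host.
client.images.pull("nginx", tag="alpine")

# Run the application from the image, mapping container port 80 to host port 8080.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
)
print(container.short_id, container.status)

# Clean up the example container.
container.stop()
container.remove()

In practice an admin would get the same effect with docker pull and docker run on the command line; the SDK is used here only to keep the example self-contained.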
Containers
If virtual machines are hardware virtualization, then containers are OS virtualization. We don't need a full OS inside the container to install our application. Applications inside containers depend on the kernel of the host OS where they are running. So, if I host a Java application inside a container, it will use the Java libraries and config files from the container's own data, but for compute resources it relies on the host OS kernel. A container is like any other process that runs on an operating system, except that it is isolated: its processes, files, libraries, and configurations are contained within the boundaries of the container. Containers also have their own process tree and networking; every container gets an IP address and a port on which the application inside it is running. This may sound like a virtual machine, but it's not; remember, a VM has its own OS and a container does not.
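As a small, hedged illustration of "shared kernel, isolated everything else" (again using the Python Docker SDK by assumption, on a Linux host with the Docker Engine running), the snippet below shows a container reporting the same kernel version as the host while still getting its own IP address:

import platform
import docker

client = docker.from_env()

# The kernel version reported inside the container matches the host kernel,
# because containers share the host OS kernel rather than booting their own.
container_kernel = client.containers.run("alpine", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", container_kernel.decode().strip())

# Each container still gets its own network identity (IP address).
box = client.containers.run("alpine", "sleep 30", detach=True)
box.reload()  # refresh attributes so network settings are populated
print("container IP:", box.attrs["NetworkSettings"]["IPAddress"])
box.stop()
box.remove()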
Containers are very lightweight, because a container holds just the libraries and the application. That means fewer compute resources are consumed, which leaves more capacity to run additional containers. So in terms of resources we are also saving CapEx and OpEx. Containers are not a new technology; they have been around us in different forms and implementations. But Docker has taken them to a whole new level when it comes to building, shipping, and managing containers.
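One rough way to see the lightweight claim in numbers, using the same assumed Python SDK: pull the small alpine image and print its size, which comes to a few megabytes versus the gigabytes a full VM image with its own OS would need.

import docker

client = docker.from_env()

# A minimal Linux userland image is only a few megabytes,
# while a full VM image with its own OS typically runs to gigabytes.
image = client.images.pull("alpine", tag="latest")
size_mb = image.attrs["Size"] / (1024 * 1024)
print(f"alpine image size: {size_mb:.1f} MB")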
Docker
Docker, Inc. started its life as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, the dotCloud platform leveraged Linux containers. To help them create and manage these containers they built an internal tool that they nicknamed "Docker". And that's how Docker was born! In 2013 the dotCloud PaaS business was struggling and the company needed a new lease of life. To help with this they hired Ben Golub as the new CEO, rebranded the company as "Docker, Inc.", got rid of the dotCloud PaaS platform, and started a new journey with a mission to bring Docker and containers to the world.
Docker relies on Linux kernel features, such as namespaces and cgroups, to ensure resource isolation and to package an application along with its dependencies. This packaging of the dependencies enables an application to run as expected across different Linux operating systems. It's this portability that has piqued the interest of developers and systems administrators alike.
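As an illustrative sketch only (the parameter names come from the Python Docker SDK, not from this post): the memory limit below is enforced through cgroups, and the ps output demonstrates the container's private PID namespace, where the command we start shows up as PID 1.

import docker

client = docker.from_env()

# cgroups: cap this container's memory at 64 MB.
# PID namespace: inside the container, `ps` sees only the container's own
# processes, so the command we start appears as PID 1.
output = client.containers.run(
    "alpine",
    "ps -o pid,comm",
    mem_limit="64m",
    remove=True,
)
print(output.decode())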
But when somebody says “Docker” they can be referring to any of at least three things:
- Docker, Inc. the company
- Docker the container runtime and orchestration technology
- Docker the open source project
When most people talk about Docker, they are generally referring to the Docker Engine. The Docker Engine runs and orchestrates containers. For now, we can think of the Docker Engine like a hypervisor: in the same way that hypervisor technology runs virtual machines, the Docker Engine is the core container runtime that runs containers.
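The docker command line, and the Python SDK used in the sketches above, are simply clients of the Docker Engine's API. As a final hedged example, assuming a local daemon, that same API can report the engine version and what it is currently running:

import docker

client = docker.from_env()

# Query the engine itself, just as `docker info` does.
info = client.info()
print("engine version:", info["ServerVersion"])
print("containers:    ", info["Containers"])
print("running:       ", info["ContainersRunning"])

# List the containers the engine is currently running.
for c in client.containers.list():
    print(c.short_id, c.image.tags, c.status)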
There are many other Docker technologies that integrate with the Docker Engine to automate, orchestrate, or manage Docker containers.