Containers - a simple overview
In my time outside of the workforce I have been trying to learn new skills to keep moving forward in the cloud. In the first week or so, I ran across an issue (something like error code 2) when trying to publish a container image to GCP. After a bit of Google-Fu to understand what might be going on, I found myself wondering how much I could learn about containers/Docker/Kubernetes and all things cloud native while I had the time.
All that to say, my learning path is underway to know more about containers. Yep, there are many paths to investigate for sure, but Docker seems like the best choice right now. It leverages Windows Subsystem for Linux (WSL) and runs mostly without incident (unless I do something I shouldn’t) on my Surface Pro 3. In addition, it will lead me right into Kubernetes, so I can keep learning and getting smarter about the things that make the cloud such an interesting place.
Containers, Services, and Apps… what now?
There are tons of people online who know more about containers than I do - but they do not live behind this blog - so… it’s all me, at least for now, and I am curious about how this all works. Let’s see if I am getting the gist of it.
A container is pretty much the next evolution of the virtual machine… to grossly oversimplify.
A server, when I started out doing this stuff, was a large box that sat in a rack. It ran Windows and likely an application for the business. Each application - say, the ERP software and the CRM software that helped your company do the things - had its own server.
The expense to run the server (and likely the licenses for the software) was capitalized - meaning, spend money now and amortize it over the life of the thing you purchased - with licensing maybe being the exception (pay us every year and we will make sure you get updates/support).
Then the IT industry pivoted slightly, and ways to get more servers onto less hardware came about. VMware - at least where I learned virtualization - was the thing to use. You could take a few physical servers (or lots of physical servers, depending on how big you were) and move your 1:1 applications onto virtual machines running side by side on these new physical servers. This brought the cost per physical server up (and the cost per workload down)… buy hugely powerful hardware and run many virtual machines on that hardware, thus getting more bang for the server buck.
This allowed multiple servers to share the processor, disk, and memory hardware of the physical host, which improved utilization - no more buying a whole box for one app and watching it sit at 10% utilization because it was bigger than that app alone ever needed.
Tip - remember server hardware for 1:1 physical deployments costs money - and performance is huge… so buy the biggest possible server hardware for your shared calculator app to ensure that nobody underperforms… again, I’m grossly oversimplifying.
Virtualized environments allowed one host to run many “servers” that behave and are managed like their 1:1 physical counterparts - just ask the licensing people - there can still be a considerable cost for that.
A containerized environment moves this virtualization and resource sharing up from the hardware into the OS. It allows your applications to share one copy of the OS and run in small packages. By doing this, App1 and App2 can run on the same hardware and OS - sharing all the things up to that point, but still remaining isolated from each other - and if App1 needs App2, there will be network communication and other things needed to make that happen.
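To make that concrete, here is a minimal sketch using the Docker CLI - the names app1, app2, and demo-net are placeholders I made up, and the nginx and alpine images are just convenient stand-ins for two real applications:

```bash
# Create a user-defined bridge network so the two containers can find each other by name
docker network create demo-net

# Two "apps" on the same hardware and the same OS kernel
docker run -d --name app1 --network demo-net nginx
docker run -d --name app2 --network demo-net alpine sleep 3600

# Each container keeps its own filesystem and process space, so the only
# way app2 can reach app1 is over the network
docker exec app2 wget -qO- http://app1
```

The point of that last line is that app2 never sees app1’s files or processes - just a name on a shared network, which is exactly the isolation described above.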
If I have learned anything about this recently, it’s that containers offer a way to segment services from each other on shared physical hardware and a shared OS… The costs of shared hardware, power, and cooling are still there, but like virtualization before, they are split among many “customers”. Unlike the previous virtualization work, the OS is also shared by the “customer” applications. This keeps the hardware cost per use low and can potentially reduce the cost of server OS licensing as well.
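A quick way to see the “one shared OS” part for yourself (this is just me poking at it from inside WSL - your kernel version will differ) is to compare what the host and a couple of containers report:

```bash
# The kernel on the host (in my case, the WSL 2 kernel)
uname -r

# Two different distro userlands reporting the exact same kernel,
# because containers share the host's kernel instead of booting their own
docker run --rm alpine uname -r
docker run --rm ubuntu uname -r
```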
This, to me, means that the biggest cost going forward that isn’t shared is development cost, plus a modest amount of maintenance on the systems end. And if the virtualization idea of Pets vs Cattle - where a misbehaving VM was shot (ok… removed and reprovisioned as new) - can be carried over to containers, which I think it can, then a container that goes a bit wonky can be removed and have a new one spun up without admin intervention.
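At the single-host Docker level, the closest thing I know of to that hands-off recovery is a restart policy - a full orchestrator like Kubernetes goes further and replaces the whole container. Here is a tiny sketch; “flaky” is a made-up container that deliberately dies every few seconds:

```bash
# A container that "goes wonky" (exits with an error) every 5 seconds
docker run -d --name flaky --restart unless-stopped alpine sh -c 'sleep 5; exit 1'

# Docker quietly stands it back up each time it dies - no admin intervention
docker inspect -f '{{ .RestartCount }}' flaky

# When the experiment is over, take it down for real
docker rm -f flaky
```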
That’s what I know today… in a simplified form… MOAR CONTAINERS!
This is what I have learned in my short study time - before this, these were concepts I understood something about, but seeing how they work up close is even better.