What one little whale could mean for the evolution of Windows and Linux
Caption: The Docker Whale
If you’re a Windows systems administrator or IT pro, chances are you’ve seen this happy little fellow pop up on Twitter, Ars Technica or TechEd and dismissed it. “Docker, eh? Sounds like App-V all over again,” you might even have opined into your coffee. However, recent news out of Redmond hints that Docker is coming to Windows in a strong way, perhaps even integrated into the operating system with the next version of Windows Server.
Disclaimer: though I am under NDA for some things Microsoft, I have no special knowledge about this topic. All of the resources and people quoted here are freely available online, with links provided in the References section below.
A Quick Docker Primer
Docker is a technology that allows an application developer to ‘containerize’ an application: to package it, together with its dependencies, into a portable container which can be docked and executed anywhere, seamlessly alongside other applications, without installation dependencies causing interference. The key point of differentiation is that the application and its dependencies are virtualized, rather than the underlying OS, as in traditional virtualization. The big benefits of abstracting away the OS are portability and a significant reduction in overhead.
Instead of handing out an install script and binaries, developers can provide a fully configured Docker image, which runs on any system with the Docker engine installed, including OS X, Windows and pretty much any distribution of Linux.
Rather than the packager or administrator devising a complex installation script to detect currently installed binaries and prerequisites and then install the missing pieces, the application developer simply references the needed base images, which are all retrieved on first demand to compose the complete environment. This can be thought of as ‘declarative packaging’.
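As a sketch of what declarative packaging looks like in practice, here is a minimal Dockerfile. The image name, the nginx example and the file paths are purely illustrative, not tied to any particular application discussed here:

```dockerfile
# Hypothetical Dockerfile: everything the app needs is declared here, and
# Docker assembles the environment from these layers on first run.
FROM ubuntu:14.04

# Dependencies are declared up front, not probed for on the target system.
RUN apt-get update && apt-get install -y nginx

# Ship the application's own files inside the image.
COPY site/ /usr/share/nginx/html/

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Anyone with the Docker engine can then build and run this with `docker build` and `docker run`, with no installation script in sight.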
Now, compare this to the traditional approach used today, in which someone (normally the System Center Configuration Manager administrator) must install every program on a clean virtual machine, testing the installation process step by step, and painstakingly build a deployment model to deliver varying prerequisites to endpoint systems based on a variety of possible configurations. We can describe this approach to application delivery as ‘procedural packaging’.
It is not unusual for a packaging effort to consume anywhere from four to 32 hours per application, and that assumes access to the best tools (AdminStudio) and an experienced packager. If a novice attempts the same, the time spent packaging each application could easily double or more.
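For contrast, a procedural install script has to interrogate the target system itself. The sketch below is purely illustrative (the prerequisite name is made up), but it shows the probe-then-install pattern a packager ends up writing and testing by hand for every possible configuration:

```shell
#!/bin/sh
# Illustrative procedural packaging: probe the target system for each
# prerequisite, then install whatever is missing before the app itself.
need() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 (install step would run here)"
  fi
}

# Every prerequisite must be enumerated and checked by the packager.
need sh
need vcredist-2013-x64   # made-up prerequisite name
```

Multiply those checks by every prerequisite, OS version and already-installed-software combination, and the four-to-32-hour figure starts to look plausible.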
Immediately, the benefits of a Docker approach to application packaging should be apparent. But doesn’t this all feel rather familiar?
Docker: not Microsoft’s First Experience with Virtualization/Containerization
Microsoft purchased Softricity in the mid-2000s, largely for its talent and its excellent SoftGrid product, which was renamed App-V and bundled and distributed for years as part of the Microsoft Desktop Optimization Pack.
If it has been a while since you last investigated App-V, here’s a quick refresher: App-V uses a method of monitored program installation known as ‘sequencing’. An administrator installs an application on a clean VM, App-V monitors the initial installation and first program launch, and all of the program’s changes to the known base computing environment are then wrapped in a ‘bubble’ for later distribution. This bubble can be streamed down to an end client, allowing programs to follow a user as they move from computer to computer and launch almost immediately on demand, using a clever process that separates the bits needed for program launch from the fully loaded application.
It is very easy to draw the comparison between Docker and App-V, as both are built with the twin goals of simplifying program installation and application delivery in mind. In a previous blog post, A Primer on Docker from the Windows Perspective, I also drew the direct comparison between the two (be sure to read that if you want even more information on how Docker works).
The similarities between Docker and App-V don’t stop with distribution; both products also use abstraction and isolation to allow normally incompatible applications to exist simultaneously. For example, it is possible to use App-V to deploy two separate versions of Microsoft Office to a computer at the same time; very useful if your organization makes use of a critical line-of-business Office extension which does not work in a newer Office version. Simply sequence the needed legacy app with App-V and deploy it alongside your newer Office distribution.
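The same side-by-side trick maps naturally onto containers. As a sketch (the image names, versions and ports here are all hypothetical), a Docker Compose file can pin two versions of the same application and run both on one host at once, each isolated in its own container:

```yaml
# Hypothetical docker-compose.yml: two versions of one line-of-business
# app, isolated from each other, published on different host ports.
reporting-legacy:
  image: mycorp/reporting:1.0    # the old version the LOB extension needs
  ports:
    - "8081:80"

reporting-current:
  image: mycorp/reporting:2.0    # the current version for everyone else
  ports:
    - "8082:80"
```

Neither version’s dependencies can interfere with the other’s, which is exactly the isolation App-V’s ‘bubble’ provides on the desktop.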
Building containerization into the new OS
It’s no secret that Microsoft has been working on an experimental research operating system known as Drawbridge, built from the ground up with a focus on containerizing and sandboxing processes from one another. Here’s the description Microsoft gives on its own site about the topic, from back in 2011.
Drawbridge is a research prototype of a new form of virtualization for application sandboxing. Drawbridge combines two core technologies: First, a picoprocess, which is a process-based isolation container with a minimal kernel API surface. Second, a library OS, which is a version of Windows enlightened to run efficiently within a picoprocess. Drawbridge combines two ideas from the literature, the picoprocess and the library OS, to provide a new form of computing, which retains the benefits of secure isolation, persistent compatibility, and execution continuity, but with drastically lower resource overheads.
From this alone, we can draw the conclusion that the research from Drawbridge could lead to Windows environments and Services running within their own containers.
When we add to this the interesting details that Microsoft Distinguished Engineer Jeffrey Snover shared recently in his TechDays Online1 talk ‘Our Server Journey’ about the “deep refactoring” currently in progress within Windows Server, work that has pushed its release into 2016, a clearer picture emerges. Particularly when we consider the following image, which Mr. Snover shared.
Caption: an image of the block layout of Server vNext, showing ‘Server’ operating within a Container layer, presumably running on the bare-metal hypervisor
Does that look to you like elements of Windows Server itself running within containers? We can’t be certain yet, but I think it is coming.
What I think this means
One of the big gains of Docker coming to Windows and Azure is that once a Linux program is containerized, it can run anywhere that you run Docker. This means that long-term holdout applications and services which can only run on Linux, or are inefficiently emulated on Windows, could now execute just as they would on Linux, from within Windows. We’ve already learned from Mark Russinovich that a native Docker client is coming to Windows as well2.
However, there is a catch: in order to run Linux Docker images on your Windows system, you need a Linux kernel at version 3.8 or later. To quote Ben Armstrong, Microsoft Program Manager for Virtualization3:
Fortunately there is a great technology that lets you run Linux 3.8 anywhere, it’s called virtualization! – ref 1 (4:00)
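That is, in fact, how the interim story works today: a tool like Boot2Docker keeps a small Linux VM running the Docker daemon, and the Docker client on your workstation is simply pointed at it. As an illustration, these are the environment settings Boot2Docker’s `shellinit` command emits (the address and paths shown are its defaults):

```shell
# Environment configuration for the Docker client: it talks to the daemon
# running inside the Linux VM over TLS at this address.
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=$HOME/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
```

Every subsequent `docker` command on the workstation then transparently executes against the Linux VM.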
Yes, if Docker support is built into Windows at a low level, we could very well see a future version of the OS making use of Hyper-V and containerization to composite an environment in which both Linux and Windows binaries execute without issue, all from within the familiar Windows environment.
The next big growth in Windows could come from Linux. What a strange world we live in these days.
I plan to revisit this topic in a few weeks, with an outline of how to package (sorry, old habits die hard) containerize a traditional application like 7-Zip or Adobe Reader, to make use of it on Windows.