tl;dr You want to gain flexibility by leveraging the K8s offerings of multiple hyperscalers (“cloud providers”) for your application. In return, you take on a good deal of inflexibility, because you now have to ensure your application behaves the same on all these K8s target platforms.
I remember my days as a Java developer, when we had to ensure our applications (quite often pre-Spring applications on Tomcat, but no longer full-blown J2EE on WAS) could run on different target operating systems.1
Wow! We found out that we had to be very defensive when programming anything I/O-related (file paths, line breaks, Unicode, whatnot). Our integration tests usually ended in manual testing, thanks to a lack of servers for real test automation.
Which, more than once, ended in troubleshooting problems in production (sic).
Thanks to such nasty experiences, I’m pretty cautious about rolling out applications on multiple target platforms. In many ways, software development and operating paradigms have changed since then.
And for the better: many teams work in an interdisciplinary fashion and with a scientific approach (DevSecGitXYZOps). There’s K8s, the Reactive Manifesto and Reactive Programming, DDD, Serverless, and EDAs, to name a buzzing few.
Nevertheless, I stumbled upon this basic problem when talking to customers. And I was surprised to see that a lot of architects and developers had never even thought about it.2
The thing is that although the K8s offerings claim to be the same, under the hood they are not.
Suppose you want to deploy your “Service A” to multiple hyperscalers, and perhaps even to the DIY K8s in your enterprise’s data center.
If you look at the release information in this example (picture above), you might be surprised that the K8s versions of your target platforms are not the same:
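Even before touching any cluster, such version skew can be made visible with plain shell tooling. The following sketch compares three hypothetical K8s version strings (placeholders, not real release data from any provider):

```shell
#!/bin/sh
# Hypothetical K8s server versions reported by three target platforms
# (invented placeholder values, not actual release data)
versions="1.21.9
1.20.11
1.22.4"

# sort -V orders version strings semantically (1.9 < 1.10), not lexically
oldest=$(printf '%s\n' "$versions" | sort -V | head -n 1)
newest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)

echo "oldest=$oldest newest=$newest"
# prints: oldest=1.20.11 newest=1.22.4
```

The gap between `oldest` and `newest` tells you which API deprecations and feature gates your service must tolerate simultaneously.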
But it gets worse.
K8s is a platform for container orchestration. And as we know, a running container basically isn’t much more than a restricted Linux process3.
Did you consider that you not only need to ensure application consistency across various K8s distributions, but also need to take into account things like…
- different base operating systems…
- different base/OS libraries…
- different container runtimes…
- different components for K8s…
- different versions/patches for all components…
- different … (you name it)?
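Several of these axes surface at once in the output of `kubectl get nodes -o wide` (OS image, kernel version, container runtime). To stay runnable without a live cluster, the sketch below works on a hard-coded, hypothetical copy of such output; node names and versions are invented:

```shell
#!/bin/sh
# Hypothetical, trimmed output of `kubectl get nodes -o wide` from two
# target platforms: name, OS image, kernel, container runtime (invented data)
nodes="node-a Ubuntu-20.04 5.4.0 containerd://1.5.9
node-b SomeOS-2512 4.19.123 docker://19.3.14"

# Print node name plus the container runtime column to spot divergence
printf '%s\n' "$nodes" | awk '{print $1, $4}'
# prints:
# node-a containerd://1.5.9
# node-b docker://19.3.14
```

Two platforms, two different container runtimes: exactly the kind of under-the-hood difference the list above is about.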
At this point, some bring in the argument that a control plane product (one that allows distributing workloads across multiple K8s clusters and controlling them in a unified way) will solve this problem.
Honestly, such a product is not intended to solve your problem. It can neither streamline the underlying K8s offerings nor tweak the software stack K8s itself runs on.4
It gets worse still: whenever the hyperscaler changes any component in the stack that makes up its K8s offering (think security fixes/patches), you cannot assume your service will behave the same. You need to re-test it.
So the target platform variety that was intended to give you flexibility becomes a moving target. It evolves into a Hydra that causes you more work (thus, inflexibility) the more target platforms you plan to address.
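One way to at least make the moving target visible is to record a fingerprint of the platform components your service was last tested against and compare it on every deployment. The component names and versions below are hypothetical placeholders:

```shell
#!/bin/sh
# Hash a list of "component=version" pairs into a platform fingerprint.
# Sorting first makes the fingerprint independent of argument order.
fingerprint() {
  printf '%s\n' "$@" | sort | sha256sum | cut -d' ' -f1
}

# Inventory at test time vs. after the hyperscaler patched the runtime
# (invented component versions for illustration)
tested=$(fingerprint "kubelet=1.21.9" "containerd=1.5.9" "kernel=5.4.0")
current=$(fingerprint "kubelet=1.21.9" "containerd=1.5.10" "kernel=5.4.0")

if [ "$tested" != "$current" ]; then
  echo "platform drifted: re-test required"
fi
# prints: platform drifted: re-test required
```

This doesn’t remove the re-testing burden, but it turns silent platform changes into an explicit, automatable signal.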
This is what I call the Moving Target Platform Dilemma.
In another article, I’ll describe ways to deal with it (TBD as of 2022-06-30).
Like it? Hate it? Leave your comments in the box below.
- Namely RHEL (4/5) and Windows 2000 AS/Windows Server 2003 (R2).
- You might have experienced such moments when asking “have you considered this?” and your audience stares at you with wide eyes and frozen-for-a-minute faces. 😀
- (see above example from a great book that deserves an update)
- Actually, the promise “we support all K8s” made by some vendors sounds a bit dodgy to me: my understanding of good support is that, in case of problems, the software vendor runs environments in the backend that make it easy to reproduce errors. The sheer number of certified K8s distributions alone makes this a challenging task, not to speak of doing integration testing for the control plane software itself.