Why Kubernetes Will Disappear

Over the last few months, I’ve been listening to and involved in conversations about Kubernetes (k8s), trying to identify the common topics that create debate over whether it’s a “good” or “bad” idea. There are sensible points of view on both sides.

Thoughts

Everyone’s got thoughts, so during these conversations I’m looking for trends and points that resonate, even where I don’t personally agree. I’ve seen some good thinking that I strongly agree with and nonetheless have reservations about.

I’ve grouped the thoughts and opinions by role in the organization:

The Developer

The view in developer teams is characterized by two beliefs.

One set of people I talk to believe that having a generic, reliable platform to deploy software to is “good”. They’re not wrong. They see the potential, but there’s a yin to this yang.

The other belief, held perhaps by those who’ve had to deal with production problems (especially ones of their own making, where there’s no one else to blame), is that simplicity is the prime directive for workability.

This is close to my own view: complication is intrinsically a killer, in and of itself an exponential risk to your chances of success. From this perspective, k8s is “bad” because the complication will absorb more than all of your energy. Unless you have deep pockets and a dedicated platform team, time, budget and stakeholder patience will run out before meaningful value can be delivered.

Operations

I sense the operations view might be the most grounded. After all, these are the people who tend to be up at stupid o’clock, dealing with the fallout of the cans that architecture and delivery teams kicked down the road under pressure from senior stakeholders. The buck stops at operations. It’s rarely of their making and there’s often too little of a feedback loop to achieve workability.

In that situation, a generic platform that maintains a healthy separation of workloads from infrastructure is “good” because it creates a clearer separation of root causes and helps operations push back. Building a standard around the way we package, run and monitor workloads is pain relief. Simultaneously, there’s an acknowledgement that complicated systems are “bad”: they’re a recurring nightmare to keep going and, critically, they create nebulous, multi-layered nests of unclarity that can comfortably obscure thundering security risks for undefined periods.

The problem is understandability when a cluster or application isn’t behaving as expected. Being able to comprehend it is like reading The Matrix code and seeing “the woman in the red dress”: a swirling maelstrom of intricate, verbose, interlaced yaml that leads you down an Alice-in-Wonderland rabbit hole of master and worker, control-plane and data-plane behaviors. Sure, it’s declarative, but it can feel like a riddle, wrapped in a mystery, inside an enigma.
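
To make the verbosity concrete, here’s a minimal sketch of what even a trivial workload demands: a single-container Deployment. Every name and label in it is purely illustrative, and a real setup would typically layer Services, Ingresses, ConfigMaps and RBAC on top.

```yaml
# Illustrative only: a minimal Deployment running one nginx container.
# Even this "hello world" needs nested labels, selectors and templates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name
  labels:
    app: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```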

Integration

By now it’s clear that Kubernetes is big. It’s both complex and complicated. That’s one thing you and I can surely agree on. If your team can understand and manage that, then it’s probably going to be “good” for you. Using managed services such as GKE or EKS means you’ll be able to externalize a proportion of it, but a rump of the cognitive load will remain in your court.

If you’ve got a full-time platform team of a dozen people dedicated to customizing, building and running Kubernetes, you’ll do pretty well. But here’s the thing: customizing and building a complex platform and its services adds no specific value to your organization. It’s an externality, and as such it will ultimately be externalized. We know this because that’s exactly what cloud is: externalizing the hard problems of running reliable, fault-tolerant, generic infrastructure.

Out of Sight

Infrastructure is the endangered species of software delivery teams. Where once there were racks of computers in locked rooms with impressive and mysterious blinking lights and lots of whirring fans, now a co-working space and a laptop are all you need to conduct an orchestra of thousands.

That very need is what has driven externalization: building infrastructure was too hard, too slow and too complicated, constrained by the basic physics of office and data centre space and the mechanics of buying, racking, networking and tending to machines whilst handling failures with grace.

And this is why I think Kubernetes will disappear. It’s so generic that there’s no reason to do it yourself. Few organizations operate at a scale where it makes sense to run their own data centres, and the practical friction of running Kubernetes creates a similar dynamic. Like reliable infrastructure, it’s too hard, too slow or too expensive to justify doing really well yourself, but there probably is value in paying for it as a service from a cloud provider.

Becoming Commodity

In the end, precisely because it’s generic and because building and running a customized and complex platform is an undifferentiated hard problem, it can and will be commoditized. If you remember what Maven did for the Java world, you’ll understand that accepting a little “opinionation” delivers a lot of productivity.

There will always be exceptions, but I think they’ll prove the rule: by conquering the mainstream, Kubernetes will, for the majority of software delivery teams, quietly slip below the waterline of commodity.