Fireside Chat With Kelsey Hightower

Join Kelsey Hightower, Bruno Andrade, and our host Jim Shilts for this 45-minute fireside chat.

We will have an interactive discussion and take Q&A from the audience. The main topics we are going to cover are:

– Application Context (10 min)
– Integrations into Pipelines (10 min)
– Application Management Workflows (10 min)
– A quick demo of how Shipa addresses each topic, following that topic’s discussion
– Audience Q&A (10-15 min)

You can register here.

Shipa Framework For Developers

...the central focus of these services is still Kubernetes itself, and now a new breed of abstraction has begun to appear that goes one step above these managed Kubernetes offerings to bring the focus back to the application itself. One such recently launched company, Shipa, delivers a cloud native application management framework built to manage the full application lifecycle in an application-centric fashion.

The New Stack recently featured Shipa in an article that provides excellent insight into Shipa’s product launch and how it could change the way cloud-native applications are deployed and managed.

Read the full article here.

Rethinking Ops

Looking back at my years working with infrastructure and going through its changes, I believe it’s time we start to rethink Operations, because this model of Ops as cluster or infrastructure admins clearly does not scale. Developers will always out-demand Ops’ capacity to supply. Either your headcount is out of control or your ability to innovate and deliver is severely hamstrung. Operations becomes an interrupt-driven function where we’re just fighting fires as they happen. Ops as masters of production usually devolves into Ops becoming human incident routers, trying to figure out what team or person can help resolve problems because, being responsible for everything, they don’t have the insight to fix anything themselves.

The idea of “Ops lock-in” can be a major problem: your own Ops team, unable to support the kind of innovation you’re trying to do, ends up slowing it down.

My vision for the future of Operations is taking combined engineering to its logical conclusion. Just as with QA, Ops capabilities should be embedded within development teams. The reality is you can’t be an effective software engineer today without some Ops skills, and I think every role should be working towards automating itself out of a job. Specifically, we should enable developers to self-service through a continuous operations platform, empowering them to deploy and operate their services with minimal Ops intervention.

With this, Ops become force multipliers. We move away from the reactive, interrupt-driven model where Ops are masters of production responsible for everything. Instead, we make dev teams responsible for their services but provide the tools they need to actually own their systems end-to-end — from the code on their laptops to operating it in production.

Enabling developers to self-service means treating Ops as a product team. The infrastructure automation, deployment automation, configuration management, logging, monitoring, and production tools — these are all products and it’s these products that allow teams to fully own their services. This leads to empowerment.

Products enable ownership. We stop treating Ops as the masters of production responsible for everything and push that responsibility onto dev teams. They are the experts for their services and best equipped to deal with problems that arise, and we provide the tools they need to diagnose and resolve those problems on their own.

I believe the near future is exciting, and I look forward to seeing how we keep bridging the gap between Devs and Ops while helping organizations transition to a more effective model, one that delivers value faster while reducing toil.

Why Kubernetes Will Disappear

Over the last few months, I’ve been listening to and involved in conversations about Kubernetes (k8s), trying to identify the common topics that create debate on whether it’s a “good” or “bad” idea. There are sensible points of view on both sides of the debate.

Thoughts

Everyone’s got thoughts, so during these conversations I’m looking for trends and points that resonate, even where I don’t personally agree. I’ve seen some good thinking that I strongly agree with and nonetheless have reservations about.

I’ve divided thoughts and opinions across different roles in the organization:

The Developer

The view in developer teams is characterized by two beliefs.

One set of people I talk to believe that having a generic, reliable platform to deploy software to is “good”. They’re not wrong. They see the potential, but there’s a yin to this yang.

The other belief, held perhaps by those who’ve had to deal with production problems (especially ones of their own making, where there’s no one else to blame), is that simplicity is the prime directive for workability.

This is close to my own thoughts: that complication is intrinsically a killer — in and of itself an exponential risk to your chances of success. From this perspective, k8s is “bad” because the complication will absorb more than all of your energy. Unless you have deep pockets and a dedicated platform team, time, budget and stakeholder patience will run out before meaningful value can be delivered.

Operations

I sense the operations view might be the most grounded. After all, these are the people who tend to be up at stupid o’clock, dealing with the fallout of the cans that architecture and delivery teams kicked down the road under pressure from senior stakeholders. The buck stops at operations. It’s rarely of their making and there’s often too little of a feedback loop to achieve workability.

In that situation, a generic platform that maintains healthy separation of workloads from infrastructure is “good” because it creates a clearer separation of root causes and helps to push back. Building standards around the way we package, run and monitor workloads is pain-relief. Simultaneously, there’s an acknowledgement that complicated systems are “bad”: they’re a recurring nightmare to keep going and, critically, create nebulous, multi-layered nests of unclarity that can comfortably obscure thundering security risks for undefined periods.

The problem is understandability when a cluster or application isn’t behaving as expected. Being able to comprehend it is like reading The Matrix code and seeing “the woman in the red dress”: a swirling maelstrom of intricate, verbose, interlaced yaml that drives you down an Alice-in-Wonderland rabbit hole of master and worker, control plane and data plane behaviors. Sure, it’s declarative, but it can feel like a riddle, wrapped in a mystery, inside an enigma.
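To make the “verbose, interlaced” point concrete, here is a minimal sketch of the yaml behind a single stateless service; the names, labels and image are purely illustrative, not taken from any real system. Even in this trivial case the Deployment’s pod labels, its selector and the Service’s selector all have to agree, and a mismatch in any one of them fails silently:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # illustrative name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web               # must match the pod template labels below
      template:
        metadata:
          labels:
            app: web             # ...and the Service selector further down
        spec:
          containers:
            - name: web
              image: nginx:1.25  # any container image would do
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web                 # quietly selects nothing if the labels drift
      ports:
        - port: 80
          targetPort: 80

And that is before ingress, configuration, secrets, resource limits, probes and RBAC enter the picture.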

Integration

By now it’s clear that Kubernetes is big. It’s both complex and complicated. That’s one thing you and I can surely agree on. If your team can understand and manage that, then it’s probably going to be “good” for you. Using managed services such as GKE or EKS means you’ll be able to externalize a proportion of it, but a rump of the cognitive load will remain in your court.

If you’ve got a full time platform team of a dozen people dedicated to customizing, building and running Kubernetes you’ll do pretty well. But here’s the thing: customizing and building a complex platform and services adds no specific value to your organization. It’s an externality and as such ultimately will be externalized. We know this because that’s exactly what cloud is: externalizing the hard problems of running reliable, fault-tolerant generic infrastructure.

Out of Sight

Infrastructure is the endangered species of software delivery teams. Where once there were racks of computers in locked rooms with impressive and mysterious blinking lights and lots of whirring fans, now a co-working space and a laptop are all you need to conduct an orchestra of thousands.

That very need is what has driven externalization. Building infrastructure was too hard, too slow and too complicated, constrained by the basic physics of office and data centre space and the mechanics of buying, racking, networking and tending to machines whilst handling failures with grace.

And this is why I think Kubernetes will disappear. It’s so generic that there’s no reason to do it yourself. Few organizations operate on a scale where it makes sense to run datacenters. The practical friction of running Kubernetes creates a similar dynamic. Like reliable infrastructure, it’s too hard, slow or expensive to justify doing it really well yourself, but there probably is value in paying for that as a service from a cloud provider.

Becoming Commodity

In the end, precisely because it’s generic and because building and running a customized and complex platform is an undifferentiated hard problem, it can and will be commoditized. If you remember what Maven did for the Java world, you’ll understand that accepting a little “opinionation” delivers a lot of productivity.

There will always be exceptions, but I think they’ll prove the rule: with Kubernetes conquering the mainstream, for the majority of software delivery teams it will quietly slip below the waterline of commodity.