Category: Development

Shipa 1.2 Release

Shipa 1.2 is now GA

Shipa (https://www.shipa.io), the full lifecycle application-centric framework for Kubernetes and multi-cluster portability, just got better! Version 1.2 is now available, and we are excited to share these key new features and improvements with the Shipa community.

Shipa creates the guardrails, compliance, and controls for your Kubernetes and OpenShift applications, while at the same time helping eliminate the YAML files, Helm charts, and custom scripts that are most likely piling up and slowing things down for your developers.

Get Started

Shipa version 1.2 includes improvements to:

    • Multi-cloud incl. AKS, EKS, OKE, GKE, IKS & OpenShift
    • Multi-tenancy – an improved, more detailed multi-tenancy model

Shipa 1.2's key new features include:

    • Network Policies Map
    • Integration with Istio – incl. canary rollouts
    • Vault integration
    • Integration with Private Registries – incl. JFrog
Shipa for Kubernetes multi-cloud portability

New in 1.2:

Network Policies Map

Shipa 1.2 takes the user experience to the next level by giving organizations a visual translation of standard Kubernetes network policies, reflecting the simple abstraction Shipa provides when restricting or allowing traffic flow between applications. Shipa users can set rules for the application and have an automated visualization of all application policies displayed in the Shipa UI.

The map captures the complexities that are configured under the hood in a rich diagram, allowing you to achieve specific networking rules without understanding how pods or namespaces selectors work in the complex world of Kubernetes. Users can continue to think in an app-centric way and not be burdened with learning how to set up infrastructure objects.
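For reference, the kind of object Shipa manages under the hood is a standard Kubernetes NetworkPolicy. A minimal sketch (names and labels are illustrative, not Shipa's actual generated output) that allows ingress to an application only from pods labeled app=frontend:

```yaml
# Illustrative only: the sort of NetworkPolicy that Shipa abstracts away.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend        # hypothetical name
  namespace: my-pool          # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: my-app             # the application being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only this app may reach my-app
```

With Shipa, users express the equivalent rule at the application level and never touch the pod or namespace selectors shown above.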

The chart animation shows how traffic moves between all of the graphed nodes, giving users an exact representation of incoming and outgoing network flow.

The map is an excellent tool for developers to quickly understand how applications are configured, and it can be used for a wide range of purposes. For example, from a security standpoint, this feature offers a quick view of whether the application is open to the world or only to a specific set of applications/pools; this makes it easier for developers to match their policies to internal business requirements. The feature also helps developers from a debugging perspective. Because the chart shows how all applications are connected (or not), developers can quickly see whether a given bug/issue stems from an infrastructure misconfiguration or from a codebase error.

Integration with Istio

Istio is an open-source service mesh developed by a collaboration between Google, IBM, and Lyft. It coordinates communication between services, providing service discovery, load balancing, security, recovery, telemetry, policy enforcement capabilities, and more.

Shipa users can now leverage their existing Istio ingress controller for their deployed applications.

Shipa simplifies using a service mesh by abstracting the complexities away, empowering users to define service-to-service communication policies.

Canary rollouts

Shipa users can leverage Istio for traffic routing rules, including canary rollouts based on percentage traffic splits. Canary rollouts allow you to test a new version of a service by sending a small amount of traffic to it. If the test is successful, you can gradually increase the traffic sent to the new version until all traffic has been moved. If anything goes wrong along the way, you can abort the rollout and return the traffic to the old version.
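Under the hood, a percentage-based split like this is typically expressed as an Istio VirtualService. A minimal sketch (the service name and subset labels are illustrative) sending 90% of traffic to the stable version and 10% to the canary:

```yaml
# Illustrative Istio VirtualService for a 90/10 canary split.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: v1        # stable version
          weight: 90
        - destination:
            host: my-service
            subset: v2        # canary version
          weight: 10
```

Gradually promoting the canary means shifting the weights (90/10 → 50/50 → 0/100); aborting the rollout means returning the canary's weight to 0.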

Metrics

Istio generates a set of service metrics based on the four golden monitoring signals (latency, traffic, errors, and saturation). With these metrics available, Shipa users can take advantage of out-of-the-box integrations with their existing APM solutions and incident management tools, making it easier to solve problems quickly and build more resilient applications.

CNAME & HTTPS

Shipa integrates with cert-manager, and with a single command, Shipa automatically generates certificates for your CNAME.

By running “shipa cname-add {appname} {cname}” and pointing your DNS to the Istio gateway endpoint, Shipa takes care of everything else.

Shipa also allows certificates to be added manually through the “shipa certificate-add” command.

Vault integration

Users can now inject secrets from their HashiCorp Vault into their Kubernetes applications deployed using Shipa.

As many organizations migrate to the cloud, a significant concern has been how best to secure data. Vault is secret-store software used to safely store, manage, and control access to secrets (tokens, passwords, certificates, and API keys) on Kubernetes clusters.

For both security and user-experience reasons, Shipa users manage their secrets directly in Vault. Shipa provides a streamlined user experience that lets users pass all necessary Vault annotations through shipa.yaml; these annotations are used by the Kubernetes Vault sidecar to inject secrets into your app.
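As a sketch, the annotations in question are the standard Vault Agent Injector annotations applied to the application pod; the exact shipa.yaml layout may differ, and the role name and secret path below are illustrative:

```yaml
# Standard Vault Agent Injector annotations that shipa.yaml can pass
# through to the application pod (role and secret path are illustrative).
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "my-app-role"
  vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-app/db"
```

With these in place, the Vault sidecar renders the secret into the pod's filesystem at deploy time, so the application never handles Vault credentials directly.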

Integration with Private Registries

At Shipa, we believe that integration is essential for Continuous Delivery; for that reason, Shipa integrates with your current stack and tools in minutes.

Shipa now provides the ability to deploy applications with Docker images stored in private registries. This feature uses an image URL, Docker username, and password/access token to gain access.

Shipa offers full support for JFrog Artifactory, Docker Hub, Amazon ECR, Azure Container Registry, Google GCR, Nexus Repository, and more.

Try Shipa today

Shipa is easy to install and get started with

Faster and safer application deployments on Kubernetes with Shipa and Oracle Kubernetes Engine (OKE)

Shipa’s application management framework, integrated into OKE, provides an out-of-the-box way for organizations to build, deploy and operate the full life-cycle of Kubernetes applications. With Shipa and OKE, organizations can make up for lost time and start getting value out of Kubernetes immediately.

In this webcast, you will learn how Shipa and OKE:

  • Provide developers an application-centric view they need so they never have to think about Kubernetes objects again.
  • Allow platform engineers to put up guardrails and centrally control deployments across multiple clusters, monitor application performance, and implement network policies and security configurations from a centralized dashboard.
  • Eliminate the need for custom Helm charts, Terraform scripts and YAML, reducing months of work that has to be done before pushing your first application into production.

Try Shipa today!

Full lifecycle application-centric framework for Kubernetes, so everyone can focus on applications

Making Kubernetes Disappear with Shipa

In this excellent YouTube video – Marcel Dempers aka That DevOps Guy explains how Shipa (https://www.shipa.io) makes Kubernetes disappear so developers can focus on coding while providing the controls the DevOps team needs.

With Shipa, not a single YAML file is needed to deploy an application across multiple clouds. See for yourself…

Video Description

Subscribe to show your support! https://goo.gl/1Ty1Q2 

Today we’re taking a look at a new platform called Shipa. Shipa is a full lifecycle application-centric framework that runs on top of Kubernetes so that everyone can focus on applications. 

Checkout https://www.shipa.io/ for more on the platform 

Checkout the source code below 👇🏽 and follow along 🤓 

Also if you want to support the channel further, become a member 😎 https://marceldempers.dev/join 

Checkout “That DevOps Community” too https://marceldempers.dev/community 

Source Code 🧐 https://github.com/marcel-dempers/doc… 

Kubernetes in the Cloud: https://www.youtube.com/playlist?list…

If you are new to Kubernetes, check out my getting started playlist on Kubernetes below 🙂 Kubernetes Guide for Beginners: https://www.youtube.com/playlist?list…

Kubernetes Monitoring Guide: https://www.youtube.com/playlist?list…

Kubernetes Secret Management Guide: https://www.youtube.com/playlist?list…

More about Marcel Dempers aka That DevOps Guy:

I am a Solutions Architect, and my passions are platform architecture, distributed systems engineering, microservices, containers, and cloud-native technology. I’m a DevOps evangelist and encourage the use of automation technology and open source to help folks become autonomous.

I want to build up a platform where I can share everything I’ve learnt about software engineering and architecture.

Also, there are a ton of things I want to learn, so this is going to be a relaxed, vlog environment of me learning new things and taking you all on a journey, with weekly to bi-weekly video uploads.

I’ll be building software, and documenting things on my GitHub as I go along. Come learn with me! Subscribe! 🙂

https://www.youtube.com/c/MarcelDempers

———-

More videos like “Making Kubernetes Disappear” on the Shipa YouTube channel

Shipa Integration with CircleCI

Kubernetes can bring a wide collection of advantages to a development organization. Properly leveraging Kubernetes can greatly improve productivity, empower you to better utilize your cloud spend, improve application stability and reliability, and more. On the flip side, if you are not properly leveraging Kubernetes, your would-be benefits become drawbacks. As a developer, this can become especially frustrating when you are focused on delivering quality code, fast. The learning curve and management of the object-centric application architecture, scripting and integrations into multiple CI systems and pipelines, and managing infrastructure can all make you less productive. According to a survey conducted by Tidelift and The New Stack, just 32% of a developer’s time is spent writing new code or improving existing code. The other 68% is spent in meetings, code maintenance, testing, security issues, and more.

“Respondents spend 35% of their time managing code, including code maintenance (19%), testing (12%) and responding to security issues (4%).”

Chris Grams

What if developers were empowered to take full advantage of the benefits of Kubernetes while avoiding the associated pitfalls? A new integration between CircleCI and Shipa may offer exactly that. CircleCI is dedicated to maximizing speed and configurability with customizable pipelines. Shipa is focused on simplifying Kubernetes so that developers can spend more time doing what they do best. The partnership and integrations between both solutions allow developers to leverage Kubernetes and all of the associated benefits without changing the way they work. Your platform engineering team is able to manage, secure, and deliver a powerful Kubernetes platform for the entire development organization to benefit from.

In the video above (https://www.youtube.com/watch?v=DvW13w_2HOs), Shipa founder Bruno Andrade demonstrates the CircleCI and Shipa integration. Using a simple Ruby app, a developer can deploy to Kubernetes without creating a single Kubernetes object or its related YAML files (a major pain point for most developers deploying to Kubernetes). With any Git repository, a developer can code, check in, and watch CircleCI and Shipa do the rest. Shipa picks up the deployment from CircleCI and abstracts the entire Kubernetes deployment process from the developer’s point of view.

With the application already running in a GKE cluster connected to Shipa, a developer can add a quick update to the application and check it into a Git repository. From there, the CircleCI pipeline immediately picks up the change, delivers the updated bits to Shipa, and Shipa manages the deployment to the GKE cluster.

NO MORE YAML!

As a developer, you will not need to create anything related to Kubernetes. In fact, I feel confident that even someone who is just starting on their Kubernetes journey, with a very basic understanding of it, can get started easily and speed up the adoption process. The deployment layer is completely abstracted, allowing a platform engineering team to manage a robust Kubernetes environment, including all relevant security scans, without slowing down the development team.

Finally, the video also covers additional benefits from the Shipa and CircleCI integration including historical application information, consumption in the cluster, the entire lifecycle, successful and failed deployments, and the ability to roll back to a different version of the application, again, into Kubernetes, without really needing to know how it is done.

It should also be noted that, although the video shows Google Cloud and GitHub in this instance, you are not actually tied to a cloud provider or a Git repository. You can leverage this integration in any single or hybrid type of environment with the provider of your choice. Another great benefit to this powerful partnership between Shipa and CircleCI.

https://www.shipa.io/

https://circleci.com/

 

See for yourself

Install and deploy your applications on Kubernetes with minimal infrastructure overhead. With the integration of Shipa and CircleCI across workflows, developers can deploy and manage applications on Kubernetes without the need to create or manage objects and YAML files.

Deploying Applications on Kubernetes

Developing and deploying applications to Kubernetes locally with Shipa and Minikube

In a previous article, we discussed why we frequently hear that developers are not that keen on Kubernetes. You can read it here.

In summary, while developers certainly see the value of Kubernetes, they want to continue focusing on their application code and updates and not be impacted by the company’s Kubernetes initiative, which is quite fair.

I’m sure that developers, platform engineers, and DevOps engineers have all explored available solutions to mitigate the amount of infrastructure-related work that Kubernetes adds to the developer’s workload. While there are a few options available, developers quickly discover that these tools bring additional difficulties, such as:

  • Integrating their development workflow into the overall organization’s structure and requirements is a challenge.
  • When using these tools, it’s hard for the developer to comply with security, resource utilization, and more.
  • It’s not always easy to migrate locally developed applications to Test and Production clusters. It ends up requiring some level of YAML and object manipulation to make their apps work on different clusters.
  • It’s challenging to have a “production-like” environment locally.
  • And more…

While developers certainly see the value of Kubernetes, they want to have the capability to continue focusing on their application

To address the challenges, developers we have spoken with say that they need a solution that:

  • Allows developers to focus on code only and remove the need to create and maintain objects and YAML files
  • Makes application deployment on Kubernetes locally easy so they can quickly test their applications and updates.
  • Facilitates moving the applications from their local environment to other clusters, e.g., Test, Production, etc.
  • Empowers them to leverage a production-like environment locally, where they can work with the same settings required around application performance, monitoring, security, and more.

To help achieve this, I am detailing below how to implement Shipa and Minikube, which will give you both a local Kubernetes cluster and Shipa’s application framework.

Installing Minikube

To install Minikube, you just need to follow step 1 described in the following link: 

https://minikube.sigs.k8s.io/docs/start/

Installing Virtualbox

We will be using Virtualbox as the driver for our Minikube. 

Virtualbox provides packages for the different operating systems, which you can download from the following link:

https://www.virtualbox.org/wiki/Downloads

Starting a Cluster

Once you install both tools, it’s now time for you to get a cluster running, which you can do using the following command:

minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox

The command above will create a Kubernetes cluster version 1.18 with 5GB of memory and 20GB of disk. Even though you can adjust this as needed based on the resources you have available, keep in mind the amount of resources you need to run Kubernetes and your apps when resizing this.

Running the command above will give you an output similar to the one below:

 minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox
* minikube v1.14.2 on Darwin 10.15.6
* Using the virtualbox driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=2, Memory=5120MB, Disk=20480MB) ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.12 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" by default

To make sure your cluster started successfully, you can run the following command:

 kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m39s   v1.18.2

Installing Shipa

With your local cluster running, you can now install Shipa.

Shipa can be downloaded and installed in your local cluster as a Helm chart. You can download Shipa’s Helm chart using the following command:

git clone https://github.com/shipa-corp/helm-chart.git

Once the download is complete, you can access Shipa’s Helm chart by simply entering the following:

cd helm-chart/

Inside the folder, you will now apply the resource limits to the services created by Shipa using the following command:

 kubectl apply -f limits.yaml
limitrange/limits created

With the above completed, you will now update the chart dependencies using the following command:

 helm dep up
load.go:112: Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
Saving 2 charts
Downloading docker-registry from repo https://kubernetes-charts.storage.googleapis.com
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

Now it’s time to install Shipa in your local cluster. You can do so by running the Helm command below:

helm install shipa . \
--timeout=15m \
--set=metrics.image=gcr.io/shipa-1000/metrics:30m \
--set=auth.adminUser=admin@shipa.io \
--set=auth.adminPassword=shipa2020 \
--set=shipaCore.serviceType=ClusterIP \
--set=shipaCore.ip=10.100.10.20 \
--set=service.nginx.serviceType=ClusterIP \
--set=service.nginx.clusterIP=10.100.10.10

The install process should take a few minutes, which can vary depending on the amount of memory allocated to your local Kubernetes cluster. One easy way to identify when Shipa’s install is complete is to make sure the shipa-init-job-x job is marked as completed and the dashboard-web-x pods are created and running, which you can check with kubectl get pods.

Once running, you should now add routes for Shipa’s ingress, which can be done with the commands below:

Route for NGNIX:

 sudo route -n add -host -net 10.100.10.10/32 $(minikube ip )
Password:
add net 10.100.10.10: gateway 192.168.99.106

Route for Traefik:

 sudo route -n add -host -net 10.100.10.20/32 $(minikube ip )
add net 10.100.10.20: gateway 192.168.99.106

With Shipa installed and routes in place, you will need to download Shipa’s CLI to your local machine. Shipa’s CLI is available for different operating systems, and download links can be found here:

https://learn.shipa.io/docs/downloading-the-shipa-client

With Shipa’s CLI in place, the last step is to add your local instance of Shipa to your CLI as a Shipa target, which you can do by using the command below:

 shipa target-add -s shipa-v11 10.100.10.10
New target shipa-v11 -> https://10.100.10.10:8081 added to target list and defined as the current target

With your local Shipa instance added as a target, you can use the login command:

 shipa login
Email: admin@shipa.io
Password: 
Successfully logged in!

The Email and Password used above are the ones used in the Helm install command.

With the login complete, you can now find the address to Shipa’s dashboard by using the following command:

 shipa app list
+-------------+-----------+--------------------------------------------+
| Application | Units     | Address                                    |
+-------------+-----------+--------------------------------------------+
| dashboard   | 1 started | http://dashboard.10.100.10.20.shipa.cloud  |
+-------------+-----------+--------------------------------------------+

If you access the address displayed above, you will see Shipa’s dashboard:

The login credentials are the same ones you set up when you installed Shipa using the Helm install command and used to log in through the CLI.

Deploying a Sample Application

With Shipa and Kubernetes in place, we can now deploy our first application.
There are two ways of deploying applications on Shipa, and both are covered below:

  1. Using a pre-built image
  2. Deploying from source

Using a pre-built image

It’s possible that there is already a Docker image in place, and you want that image to be deployed to Kubernetes by Shipa. If that’s the case, you can follow the steps below:

Create an application on Shipa:

 shipa app create go-hello -t admin
App "go-hello" has been created!
Use app-info to check the status of the app and its units.
Your repository for "go-hello" project is "git@10.100.10.10:go-hello.git"

The command above creates an application framework that Shipa then uses to deploy your application and, once deployed, gives you an application-context view and operational control. Once you execute the command above, you will be able to see your application both in the Shipa dashboard and through the Shipa CLI:

View from the dashboard:

Deploy the image to Kubernetes through Shipa:

When deploying, you should use the app deploy command, as shown in the example below:

 shipa app deploy -a go-hello -i gcr.io/cosimages-206514/golang-shipa@sha256:054d98bcdc2a53f1694c0742ea87a2fcd6fc4cda8a293f1ffb780bbf15072a2b

The image used above is a sample Golang application that you can also use as a test. Once the deployment process is complete, you can see the application in a running state in Shipa’s dashboard:

From there, you can see your application endpoint URL, monitoring, metrics, and more.

Deploying from source

You also have the option to deploy your application directly from source, saving you the time of having to build and manage Dockerfiles, images, and more.

When deploying from source, you can deploy from source located on your local machine, directly from your CI pipeline, or from your local IDE. For the sake of simplicity, in this document we will deploy from source located on your local machine.

Compared to deploying from an image, the first difference when deploying from source is that you need to enable language support (or a platform, as it is called inside Shipa) for your application. Since we will use a Ruby sample application, we first enable the Ruby platform on Shipa:

 shipa platform add ruby

Once the process is complete, we can then create the framework for our Ruby application:

 shipa app create ruby-app1 ruby -t admin
App "ruby-app1" has been created!
Use app-info to check the status of the app and its units.
Your repository for "ruby-app1" project is "git@10.100.10.10:ruby-app1.git"

The command above sets the application name and the application platform, which, in our case, is Ruby.

You can find detailed information about application management on Shipa through the following link:

https://learn.shipa.io/docs/application

For our sample Ruby application, you can download the source code from the following Git repository:

 git clone https://github.com/shipa-corp/ruby-sample.git

Now, you can then deploy the Ruby source code by using the command below:

 shipa app deploy -a ruby-app1 -f ruby-sample/

The command above will build the image required to run the Ruby application and deploy it to Kubernetes using Shipa. Once the deployment process is complete, the same way as before, you can see your application available both through the Shipa CLI and dashboard:

There is just so much more you can do with Shipa. Still, hopefully this helped you learn how to deploy and test your applications locally on Kubernetes using Shipa as your application framework, without having to create images, objects, YAML files, and more, so you can speed up your development process.

In the next blog entry, we will discuss how to move these applications from your local environment to other clusters that you may have using Shipa.

Are Developers Not That Keen On Kubernetes?

Are developers not that keen on Kubernetes?

Should application developers learn Kubernetes? Let’s ask an even deeper question: should application developers even be aware of Kubernetes in their infrastructure?

I frequently hear this question being asked by DevOps, Platform Engineering, and Development teams. Of course, this is a discussion that brings very different views from different people and can result in a very long debate.

Kubernetes, without a doubt, provides far more functionality than the average developer needs. While Kubernetes is robust and provides dozens of types of objects (around 50 the last time I checked), developers don’t care how many replicas of their service are running, what roles it has, or if it’s running via StatefulSets; all they care about is getting an HTTPS endpoint that they can use to deliver their product to their users.

When it comes to Kubernetes, even small changes can have significant ripple effects. As a result, even if developers are experienced with Kubernetes, operators may be reluctant to give them access to a cluster. 

To try and mitigate that concern, we’ve seen organizations spend anywhere between 1 and 2 years trying to build an intermediary layer between the application and Kubernetes, to:

  • Allow Platform Engineering and DevOps to maintain control of the cluster.
  • Limit and manage the number of clusters available.
  • Abstract Kubernetes away from the Developers, so they can simply push code to GitHub, and the rest is taken care of for them.

This may appear unnecessary when there are only a handful of developers with only a few applications or services deployed; however, the story quickly changes as an organization’s number of clusters, applications, and services in Kubernetes begins to scale. The development team is generally the first to feel frustrated by the growing complexity, greatly increasing the chance that inexperienced and experienced developers alike become distracted, less productive, and more prone to mistakes. Developers do need to deal with infrastructure more these days, so the focus should be on simplifying, not complicating.

The way we see it, just as Docker turned complex tools such as cgroups into a user-friendly product, the same should happen with Kubernetes: turn it into a user-friendly application management framework.

Considering this, we decided to build Shipa to do precisely that; grow Kubernetes into a user-friendly application management framework. Shipa’s goal is to allow developers to focus on their application code while empowering DevOps and Platform engineers to better manage and control their clusters and infrastructure.

Shipa makes deployment, management, and controls of applications easy. Shipa does not treat Kubernetes as a first-class citizen; Shipa reserves that title for the applications and the teams that develop and control them. Doing so allows the developer not to worry about ConfigMaps, ingress rules, PVs, PVCs, etc. in his/her day-to-day. Even if DevOps and Platform engineering teams decide tomorrow to move from one Kubernetes cluster to another or across different providers, the way applications are deployed, operated, and controlled will not be impacted.

Software is getting complicated, and business requirements are evolving rapidly. The easier we make it for developers to deploy their applications, and for DevOps and Platform Engineering teams to build controls and guardrails around them, the more value they will deliver, faster and more securely.

Fireside Chat With Kelsey Hightower

Watch Kelsey Hightower, Bruno Andrade, and our host Jim Shilts for this 45 min fireside chat.

We had an interactive discussion with Q&A from the audience. The main topics we covered were:

– Application Context (10 min)
– Integrations into Pipelines (10 min)
– Application Management Workflows (10 min)
– Quick demo of how Shipa addresses this after each topic.
– Audience Q&A (10-15 min)

You can watch the recording here

 

Why Kubernetes Will Disappear

Over the last few months, I’ve been listening to and involved in conversations about Kubernetes (k8s), trying to identify the common topics that create debate on whether it’s a “good” or “bad” idea. There are sensible points of view on both sides of the debate.

Thoughts

Everyone’s got thoughts, so during these conversations I’m looking for trends and points that resonate, even where I don’t personally agree. I’ve seen some good thinking that I both strongly agree with and nonetheless have reservations about.

I’ve divided thoughts and opinions across different roles in the organization:

By conquering the mainstream, Kubernetes will, for the majority of software delivery teams, quietly slip below the waterline of commodity.

The Developer

The view in developer teams is characterized by two beliefs.

One set of people I talk to believe that having a generic, reliable platform to deploy software to is “good”. They’re not wrong. They see the potential, but there’s a yin to this yang.

The other belief, held perhaps by those who’ve had to deal with production problems (especially ones of their own making, where there’s no one else to blame), is that simplicity is the prime directive for workability.

This is close to my own thoughts: that complication is intrinsically a killer — in and of itself an exponential risk to your chances of success. From this perspective, k8s is “bad” because the complicatedness will absorb more than all of your energy. Unless you have deep pockets and a dedicated platform team, time, budget and stakeholder patience will run out before meaningful value can be delivered.

Operations

The problem is understandability when applications aren’t behaving as expected.

I sense the operations view might be the most grounded. After all, these are the people who tend to be up at stupid o’clock, dealing with the fallout of the cans that architecture and delivery teams kicked down the road under pressure from senior stakeholders. The buck stops at operations. It’s rarely of their making and there’s often too little of a feedback loop to achieve workability.

In that situation, a generic platform that maintains a healthy separation of workloads from infrastructure is “good” because it creates a clearer separation of root causes and helps teams push back. Building standards around the way we package, run, and monitor workloads is pain relief. Simultaneously, there’s an acknowledgement that complicated systems are “bad”: they’re a recurring nightmare to keep going and, critically, create nebulous, multi-layered nests of unclarity that can comfortably obscure thundering security risks for undefined periods.

The problem is understandability when a cluster or application isn’t behaving as expected. Being able to comprehend it is like reading The Matrix code and seeing “the woman in the red dress”. A swirling maelstrom of intricate, verbose, interlaced YAML that leads down an Alice-in-Wonderland-like rabbit hole of master and worker control- and data-plane behaviors. Sure, it’s declarative, but it can feel like a riddle, wrapped in a mystery, inside an enigma.

Integration

If you’ve got a full time platform team of a dozen people dedicated to customizing, building and running Kubernetes you’ll do pretty well, but…

By now it’s clear that Kubernetes is big. It’s both complex and complicated. That’s one thing you and I can surely agree on. If your team can understand and manage that, then it’s probably going to be “good” for you. Using managed services such as GKE or EKS means you’ll be able to externalize a proportion, but a rump of the cognitive load will remain in your court.

If you’ve got a full time platform team of a dozen people dedicated to customizing, building and running Kubernetes you’ll do pretty well. But here’s the thing: customizing and building a complex platform and services adds no specific value to your organization. It’s an externality and as such ultimately will be externalized. We know this because that’s exactly what cloud is: externalizing the hard problems of running reliable, fault-tolerant generic infrastructure.

Out of Sight

Infrastructure is the endangered species of software delivery teams. Where once there were racks of computers in locked rooms with impressive and mysterious blinking lights and lots of whirring fans, now a co-working space and a laptop are all you need to conduct an orchestra of thousands.

That very need is what has driven externalization. Building infrastructure was too hard, too slow and too complicated. Constrained by the basic physics of office and data centre space and the mechanics of buying, racking, networking and tending to machines whilst handling failures with grace.

And this is why I think Kubernetes will disappear. It’s so generic that there’s no reason to do it yourself. Few organizations operate on a scale where it makes sense to run datacenters. The practical friction of running Kubernetes creates a similar dynamic. Like reliable infrastructure, it’s too hard, slow or expensive to justify doing it really well yourself, but there probably is value in paying for that as a service from a cloud provider.

Becoming Commodity

In the end, precisely because it’s generic and because building and running a customized and complex platform is an undifferentiated hard problem, it can and will be commoditized. If you remember what Maven did for the Java world, you’ll understand that accepting a little “opinionation” delivers a lot of productivity.

There will always be exceptions, but I think they’ll prove the rule: by conquering the mainstream, Kubernetes will, for the majority of software delivery teams, quietly slip below the waterline of commodity.