Shipa Integration with CircleCI

Kubernetes can bring a wide collection of advantages to a development organization. Properly leveraging Kubernetes can greatly improve productivity, help you better utilize your cloud spend, improve application stability and reliability, and more. On the flip side, if you are not properly leveraging Kubernetes, those would-be benefits become drawbacks. As a developer, this can be especially frustrating when you are focused on delivering quality code, fast. The learning curve of the object-centric application architecture, the scripting and integration work across multiple CI systems and pipelines, and the burden of managing infrastructure can all make you less productive. According to a survey conducted by Tidelift and The New Stack, just 32% of a developer’s time is spent writing new code or improving existing code. The other 68% goes to meetings, code maintenance, testing, security issues, and more.

“Respondents spend 35% of their time managing code, including code maintenance (19%), testing (12%) and responding to security issues (4%).”

Chris Grams

What if developers were empowered to take full advantage of the benefits of Kubernetes while avoiding the associated pitfalls? A new integration between CircleCI and Shipa may offer exactly that. CircleCI is dedicated to maximizing speed and configurability with customizable pipelines. Shipa is focused on simplifying Kubernetes so that developers can spend more time doing what they do best. The partnership and integration between the two solutions allow developers to leverage Kubernetes and all of its associated benefits without changing the way they work. Your platform engineering team can manage, secure, and deliver a powerful Kubernetes platform for the entire development organization to benefit from.

In the video above (https://www.youtube.com/watch?v=DvW13w_2HOs), Shipa founder Bruno Andrade demonstrates the CircleCI and Shipa integration. Using a simple Ruby app, a developer can deploy to Kubernetes without creating a single Kubernetes object or any related YAML files (a major pain point for most developers deploying to Kubernetes). With any Git repository, a developer can code, check in, and watch CircleCI and Shipa do the rest. Shipa picks up the deployment from CircleCI and abstracts the entire Kubernetes deployment process from the developer’s point of view.

With the application already running in a GKE cluster connected to Shipa, a developer can add a quick update to the application and check it into a Git repository. From there, the CircleCI pipeline immediately picks up the change, delivers the updated bits to Shipa, and Shipa manages the deployment to the GKE cluster.
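
The video doesn’t dwell on the pipeline configuration itself, but as a rough sketch, a minimal .circleci/config.yml for this flow might look like the following. Treat it as an assumption-laden illustration: the CLI install URL, the app name, and the authentication setup are hypothetical, not Shipa’s documented configuration.

version: 2.1

jobs:
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # Install the Shipa CLI. This install URL is an assumption; check
      # Shipa's download docs for the current method.
      - run: curl -fsSL https://storage.googleapis.com/shipa-client/install.sh | bash
      # Deploy the checked-out source to an existing Shipa application
      # (hypothetical name). Authentication against your Shipa target is
      # assumed to be handled via project environment variables or a
      # prior step.
      - run: shipa app deploy -a ruby-app1 -f .

workflows:
  build-and-deploy:
    jobs:
      - deploy

Because Shipa owns the Kubernetes side of the deployment, the pipeline itself stays free of kubectl calls and YAML object manipulation.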

NO MORE YAML!

As a developer, you will not need to create anything related to Kubernetes. In fact, I feel confident that even someone who is just starting on their Kubernetes journey, with a very basic understanding of it, can get started easily and speed up the adoption process. The deployment layer is completely abstracted, allowing a platform engineering team to manage a robust Kubernetes environment, including all relevant security scans, without slowing down the development team.

Finally, the video also covers additional benefits of the Shipa and CircleCI integration, including historical application information, resource consumption in the cluster, the entire application lifecycle, successful and failed deployments, and the ability to roll back to a previous version of the application on Kubernetes, again without needing to know how it is done under the hood.

It should also be noted that, although the video shows Google Cloud and GitHub, you are not tied to a particular cloud provider or Git repository. You can leverage this integration in any single-provider or hybrid environment with the vendors of your choice, which is another great benefit of this powerful partnership between Shipa and CircleCI.

https://www.shipa.io/

https://circleci.com/


See for yourself

Install and deploy your applications on Kubernetes with minimal infrastructure overhead. With the integration of Shipa and CircleCI across workflows, developers can deploy and manage applications on Kubernetes without the need to create or manage objects and YAML files.

Deploying Applications on Kubernetes

Developing and deploying applications to Kubernetes locally with Shipa and Minikube

In a previous article, we discussed why we frequently hear that developers are not that keen on Kubernetes. You can read it here.

In summary, while developers certainly see the value of Kubernetes, they want to continue focusing on their application code and updates and not be impacted by the company’s Kubernetes initiative, which is quite fair.

I’m sure that developers, platform engineers, and DevOps engineers have all explored available solutions to mitigate the amount of infrastructure-related work that Kubernetes adds to the developer’s workload. While there are a few options available, developers quickly discover that these tools bring additional difficulties, such as:

  • Integrating their development workflow into the overall organization’s structure and requirements is a challenge.
  • When using these tools, it’s hard for developers to comply with security requirements, resource-utilization policies, and other organizational standards.
  • It’s not always easy to migrate locally developed applications to Test and Production clusters; it ends up requiring some level of YAML and object manipulation to make apps work on different clusters.
  • It’s challenging to have a “production-like” environment locally.
  • And more…

While developers certainly see the value of Kubernetes, they want the capability to continue focusing on their application.

To address these challenges, the developers we have spoken with say that they need a solution that:

  • Allows them to focus on code alone, removing the need to create and maintain objects and YAML files.
  • Makes it easy to deploy applications on Kubernetes locally, so they can quickly test their applications and updates.
  • Facilitates moving applications from their local environment to other clusters, e.g., Test, Production, etc.
  • Empowers them to leverage a production-like environment locally, where they can work with the same settings required around application performance, monitoring, security, and more.

To help achieve this, I detail below how to set up Shipa and Minikube, which will give you both a local Kubernetes cluster and Shipa’s application framework.

Installing Minikube

To install Minikube, you just need to follow step 1 described in the following link: 

https://minikube.sigs.k8s.io/docs/start/
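
If you prefer a package manager, the install is typically a one-liner; on macOS with Homebrew, for example:

 brew install minikube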

Installing VirtualBox

We will be using VirtualBox as the driver for Minikube.

VirtualBox provides packages for the different operating systems, which you can download from the following link:

https://www.virtualbox.org/wiki/Downloads

Starting a Cluster

Once you install both tools, it’s now time for you to get a cluster running, which you can do using the following command:

minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox

The command above creates a Kubernetes cluster running version 1.18.2, with 5GB of memory and a 20GB disk. You can adjust these values based on the resources you have available, but keep in mind how many resources Kubernetes itself and your apps will need when resizing.

Running the command above will give you an output similar to the one below:

 minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox
* minikube v1.14.2 on Darwin 10.15.6
* Using the virtualbox driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=2, Memory=5120MB, Disk=20480MB) ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.12 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" by default

To make sure your cluster started successfully, you can run the following command:

 kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m39s   v1.18.2

Installing Shipa

With your local cluster running, you can now install Shipa.

Shipa can be downloaded and installed in your local cluster as a Helm chart. You can download Shipa’s Helm chart using the following command:

git clone https://github.com/shipa-corp/helm-chart.git

Once the download is complete, you can access Shipa’s Helm chart by simply entering the following:

cd helm-chart/

Inside the folder, you will now apply the resource limits to the services created by Shipa using the following command:

 kubectl apply -f limits.yaml
limitrange/limits created
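
For context, limits.yaml defines a Kubernetes LimitRange named limits, as the output above shows. The file shipped in the chart is authoritative, but the general shape of such a LimitRange looks roughly like this (the values below are illustrative, not the chart’s actual numbers):

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    # Illustrative defaults only; Shipa's limits.yaml ships its own values.
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi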

With the above completed, you will now update the chart dependencies using the following command:

 helm dep up
load.go:112: Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
Saving 2 charts
Downloading docker-registry from repo https://kubernetes-charts.storage.googleapis.com
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

Now it’s time to install Shipa in your local cluster, which you can do by running the Helm command below:

helm install shipa . \
--timeout=15m \
--set=metrics.image=gcr.io/shipa-1000/metrics:30m \
--set=auth.adminUser=admin@shipa.io \
--set=auth.adminPassword=shipa2020 \
--set=shipaCore.serviceType=ClusterIP \
--set=shipaCore.ip=10.100.10.20 \
--set=service.nginx.serviceType=ClusterIP \
--set=service.nginx.clusterIP=10.100.10.10

The install process should take a few minutes and can vary depending on the amount of memory allocated to your local Kubernetes cluster. One easy way to identify when Shipa’s install is complete is to make sure you see the shipa-init-job-x marked as completed and the dashboard-web-x pods created and running. You can check this using the following command:
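
# Assumes Shipa was installed into your current namespace (the default
# for the Helm command above); look for shipa-init-job-x showing
# Completed and the dashboard-web-x pods showing Running.
 kubectl get pods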

Once running, you should now add routes for Shipa’s ingress, which can be done with the commands below:

Route for NGINX:

 sudo route -n add -net 10.100.10.10/32 $(minikube ip)
Password:
add net 10.100.10.10: gateway 192.168.99.106

Route for Traefik:

 sudo route -n add -net 10.100.10.20/32 $(minikube ip)
add net 10.100.10.20: gateway 192.168.99.106
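
The route commands above use the macOS (BSD) route syntax, matching the Darwin environment shown earlier. On Linux, assuming iproute2 is available, the equivalents would be something like:

 sudo ip route add 10.100.10.10/32 via $(minikube ip)
 sudo ip route add 10.100.10.20/32 via $(minikube ip)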

With Shipa installed and routes in place, you will need to download Shipa’s CLI to your local machine. Shipa’s CLI is available for different operating systems, and download links can be found here:

https://learn.shipa.io/docs/downloading-the-shipa-client

With Shipa’s CLI in place, the last step is to add your local instance of Shipa to your CLI as a Shipa target, which you can do by using the command below:

 shipa target-add -s shipa-v11 10.100.10.10
New target shipa-v11 -> https://10.100.10.10:8081 added to target list and defined as the current target

With your local Shipa instance added as a target, you can use the login command:

 shipa login
Email: admin@shipa.io
Password: 
Successfully logged in!

The email and password used above are the ones set in the Helm install command.

With the login complete, you can now find the address to Shipa’s dashboard by using the following command:

 shipa app list
+-------------+-----------+--------------------------------------------+
| Application | Units     | Address                                    |
+-------------+-----------+--------------------------------------------+
| dashboard   | 1 started | http://dashboard.10.100.10.20.shipa.cloud  |
+-------------+-----------+--------------------------------------------+

If you access the address displayed above, you will see Shipa’s dashboard:

The login credentials are the same ones you set when installing Shipa through the Helm install command, and the same ones you used to log in through the CLI.

Deploying a Sample Application

With Shipa and Kubernetes in place, we can now deploy our first application.

There are two ways of deploying applications on Shipa, and both are covered below:

  1. Using a pre-built image
  2. Deploying from source

Using a pre-built image

It’s possible that you already have a Docker image in place and want Shipa to deploy that image to Kubernetes. If that’s the case, you can follow the steps below:

Create an application on Shipa:

 shipa app create go-hello -t admin
App "go-hello" has been created!
Use app-info to check the status of the app and its units.
Your repository for "go-hello" project is "git@10.100.10.10:go-hello.git"

The command above creates an application framework that Shipa will use to deploy your application and, once deployed, to give you an application-level context view and operational controls. After you execute the command, you will be able to see your application both in the Shipa dashboard and through the Shipa CLI:

View from the dashboard:

Deploy the image to Kubernetes through Shipa:

When deploying, you should use the command app deploy, as shown in the example below:

 shipa app deploy -a go-hello -i gcr.io/cosimages-206514/golang-shipa@sha256:054d98bcdc2a53f1694c0742ea87a2fcd6fc4cda8a293f1ffb780bbf15072a2b

The image used above is a sample Golang application that you can also use as a test. Once the deployment process is complete, you can see the application in a running state in Shipa’s dashboard:

From there, you can see your application endpoint URL, monitoring, metrics, and more.
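
You can also pull those details from the CLI with the app-info command mentioned in the create output; assuming it takes the same -a flag as app deploy, that would be:

 shipa app info -a go-hello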

Deploying from source

You also have the option to deploy your application directly from source, which saves you the time of having to build and manage Dockerfiles, images, and more.

When deploying from source, you can deploy from source located on your local machine, directly from your CI pipeline, or from your local IDE. For the sake of simplicity, in this document we will deploy from source located on your local machine.

Compared to deploying from an image, the first difference when deploying from source is that you need to enable language support (or a platform, as it is called inside Shipa) for your application. Since we will use a Ruby sample application, we should enable the Ruby platform on Shipa:

 shipa platform add ruby

Once the process is complete, we can then create the framework for our Ruby application:

 shipa app create ruby-app1 ruby -t admin
App "ruby-app1" has been created!
Use app-info to check the status of the app and its units.
Your repository for "ruby-app1" project is "git@10.100.10.10:ruby-app1.git"

The command above sets both the application name and the application platform, which, in our case, is Ruby.

You can find detailed information about application management on Shipa through the following link:

https://learn.shipa.io/docs/application

For our sample Ruby application, you can download the source code from the following Git repository:

 git clone https://github.com/shipa-corp/ruby-sample.git

Now, you can then deploy the Ruby source code by using the command below:

 shipa app deploy -a ruby-app1 -f ruby-sample/

The command above will build the image required to run the Ruby application and deploy it to Kubernetes using Shipa. Once the deployment process is complete, you can, just as before, see your application both through the Shipa CLI and the dashboard:

There is so much more you can do with Shipa. Still, hopefully this helped you learn how to deploy and test your applications locally on Kubernetes, using Shipa as your application framework, without having to create images, objects, YAML files, and the rest, so you can speed up your development process.

In the next blog entry, we will discuss how to move these applications from your local environment to other clusters that you may have using Shipa.

Are Developers Not That Keen On Kubernetes?


Should application developers learn Kubernetes? Let’s ask an even deeper question: should application developers even be aware of Kubernetes in their infrastructure?

I frequently hear this question being asked by DevOps, Platform Engineering, and Development teams. Of course, this is a discussion that brings very different views from different people and can result in a very long debate.

Kubernetes, without a doubt, provides far more functionality than the average developer needs. While Kubernetes is robust and provides dozens of object types (around 50 the last time I checked), developers don’t care how many replicas of their service are running, what roles it has, or whether it runs via StatefulSets; all they care about is getting an HTTPS endpoint they can use to deliver their product to their users.

When it comes to Kubernetes, even small changes can have significant ripple effects. As a result, even if developers are experienced with Kubernetes, operators may be reluctant to give them access to a cluster. 

To try and mitigate that concern, we’ve seen organizations spend anywhere between 1 and 2 years trying to build an intermediary layer between the application and Kubernetes, to:

  • Allow Platform Engineering and DevOps to maintain control of the cluster.
  • Limit and manage the number of clusters available.
  • Abstract Kubernetes away from the Developers, so they can simply push code to GitHub, and the rest is taken care of for them.

This may appear unnecessary when there are only a handful of developers with only a few applications or services deployed; however, the story quickly changes as an organization’s number of clusters, applications, and services in Kubernetes begins to scale. The development team is generally the first to feel frustrated by the growing complexity, which greatly increases the chance of inexperienced AND experienced developers alike becoming distracted, less productive, and more prone to mistakes. Developers do need to deal with infrastructure more these days, so the focus should be on simplifying, not complicating.

The way we see it, just as Docker turned complex tools such as cgroups into user-friendly products, the same should happen with Kubernetes: it should be turned into a user-friendly application management framework.

Considering this, we decided to build Shipa to do precisely that: turn Kubernetes into a user-friendly application management framework. Shipa’s goal is to allow developers to focus on their application code while empowering DevOps and Platform engineers to better manage and control their clusters and infrastructure.

Shipa makes the deployment, management, and control of applications easy. Shipa does not treat Kubernetes as a first-class citizen; it reserves that title for the applications and the teams that develop and control them. Doing so frees developers from worrying about ConfigMaps, ingress rules, PVs, PVCs, and the like in their day-to-day work. Even if DevOps and Platform engineering teams decide tomorrow to move from one Kubernetes cluster to another, or across different providers, the way applications are deployed, operated, and controlled will not be affected.

Software is getting complicated, and business requirements are evolving rapidly. The easier we make it for developers to deploy their applications, and for DevOps and Platform Engineering teams to build controls and guardrails around them, the more value they will deliver, faster and more securely.

Fireside Chat With Kelsey Hightower

Watch Kelsey Hightower, Bruno Andrade, and our host Jim Shilts in this 45-minute fireside chat.

We had an interactive discussion with Q&A from the audience. The main topics we covered were:

– Application Context (10 min)
– Integrations into Pipelines (10 min)
– Application Management Workflows (10 min)
– Quick demo of how Shipa addresses this after each topic.
– Audience Q&A (10-15 min)

You can watch the recording here


Shipa Framework For Developers

...the central focus of these services is still Kubernetes itself, and now a new breed of abstraction has begun to appear which goes one step above these managed Kubernetes offerings to bring the focus back to the application itself. One such recently-launched company, Shipa, delivers a cloud native application management framework built to manage the full application lifecycle in an application-centric fashion.

The New Stack recently featured Shipa in an article that provides excellent insight into Shipa’s product launch and how it can change the way cloud-native applications are deployed and managed.

Read the full article here.



Rethinking Ops

Looking back at my years working with infrastructure and going through its changes, I believe it’s time we start to rethink Operations, because this model of Ops as cluster or infrastructure admins clearly does not scale. Developers will always out-demand Ops’ capacity to supply. Either your headcount is out of control or your ability to innovate and deliver is severely hamstrung. Operations becomes an interrupt-driven function where we’re just fighting fires as they happen. Ops as masters of production usually devolves into Ops becoming human incident routers, trying to figure out which team or person can help resolve problems because, being responsible for everything, they don’t have the insight to fix things themselves.

The idea of “Ops lock-in” can be a major problem: an Ops team that isn’t able to support the kind of innovation you’re trying to deliver ends up slowing that innovation down.

My vision for the future of Operations is taking combined engineering to its logical conclusion. Just as with QA, Ops capabilities should be embedded within development teams. The reality is you can’t be an effective software engineer today without some Ops skills, and I think every role should be working towards automating itself out of a job. Specifically, we should look at enabling developers to self-service through a continuous operations platform, empowering them to deploy and operate their services with minimal Ops intervention.

With this, Ops become force multipliers. We move away from the reactive, interrupt-driven model where Ops are masters of production responsible for everything. Instead, we make dev teams responsible for their services but provide the tools they need to actually own their systems end-to-end — from the code on their laptops to operating it in production.

Enabling developers to self-service means treating Ops as a product team. The infrastructure automation, deployment automation, configuration management, logging, monitoring, and production tools — these are all products and it’s these products that allow teams to fully own their services. This leads to empowerment.

Products enable ownership. We move away from Ops as masters of production responsible for everything and push that responsibility onto dev teams. They are the experts for their services. They are best equipped to deal with problems that arise but we provide the tools they need to diagnose and resolve those problems on their own.

I believe the near future is exciting, and I look forward to seeing how we further bridge the gap between Devs and Ops while helping organizations transition to a more effective model, one that delivers value faster while reducing toil.

Why Kubernetes Will Disappear

Over the last few months, I’ve been listening to and involved in conversations about Kubernetes (k8s), trying to identify the common topics that create debate about whether it’s a “good” or “bad” idea. There are sensible points of view on both sides of the debate.

Thoughts

Everyone’s got thoughts, so during these conversations I’m looking for trends and points that resonate, even where I don’t personally agree. I’ve seen some good thinking that I both strongly agree with and nonetheless have reservations about.

I’ve divided thoughts and opinions across different roles in the organization:

By conquering the mainstream, Kubernetes will, for the majority of software delivery teams, quietly slip below the waterline of commodity.

The Developer

The view in developer teams is characterized by two beliefs.

One set of people I talk to believe that having a generic, reliable platform to deploy software to is “good”. They’re not wrong. They see the potential, but there’s a yin to this yang.

The other belief, held perhaps by those who’ve had to deal with production problems (especially ones of their own making, where there’s no one else to blame), is that simplicity is the prime directive for workability.

This is close to my own thoughts: complication is intrinsically a killer, in and of itself an exponential risk to your chances of success. From this perspective, k8s is “bad” because the complicatedness will absorb more than all of your energy. Unless you have deep pockets and a dedicated platform team, time, budget, and stakeholder patience will run out before meaningful value can be delivered.

Operations

The problem is understandability when applications aren’t behaving as expected.

I sense the operations view might be the most grounded. After all, these are the people who tend to be up at stupid o’clock, dealing with the fallout of the cans that architecture and delivery teams kicked down the road under pressure from senior stakeholders. The buck stops at operations. It’s rarely of their making and there’s often too little of a feedback loop to achieve workability.

In that situation, a generic platform that maintains a healthy separation of workloads from infrastructure is “good” because it creates a clearer separation of root causes and helps Ops push back. Building standards around the way we package, run, and monitor workloads is pain relief. Simultaneously, there’s an acknowledgement that complicated systems are “bad”: they’re a recurring nightmare to keep going and, critically, they create nebulous, multi-layered nests of unclarity that can comfortably obscure thundering security risks for undefined periods.

The problem is understandability when a cluster or application isn’t behaving as expected. Being able to comprehend it is like reading The Matrix code and seeing “the woman in the red dress”: a swirling maelstrom of intricate, verbose, interlaced YAML that drives an Alice-in-Wonderland-like rabbit hole of master and worker, control plane and data plane behaviors. Sure, it’s declarative, but it can feel like a riddle, wrapped in a mystery, inside an enigma.

Integration

If you’ve got a full time platform team of a dozen people dedicated to customizing, building and running Kubernetes you’ll do pretty well, but…

By now it’s clear that Kubernetes is big. It’s both complex and complicated. That’s one thing you and I can surely agree on. If your team can understand and manage that, then it’s probably going to be “good” for you. Using managed services such as GKE or EKS means you’ll be able to externalize a proportion of the work, but a rump of the cognitive load will remain in your court.

If you’ve got a full-time platform team of a dozen people dedicated to customizing, building, and running Kubernetes, you’ll do pretty well. But here’s the thing: customizing and building a complex platform and its services adds no specific value to your organization. It’s an externality, and as such it will ultimately be externalized. We know this because that’s exactly what cloud is: externalizing the hard problems of running reliable, fault-tolerant, generic infrastructure.

Out of Sight

Infrastructure is the endangered species of software delivery teams. Where once there were racks of computers in locked rooms with impressive and mysterious blinking lights and lots of whirring fans, now a co-working space and a laptop are all you need to conduct an orchestra of thousands.

That very need is what has driven externalization. Building infrastructure was too hard, too slow, and too complicated, constrained by the basic physics of office and data centre space and the mechanics of buying, racking, networking, and tending to machines while handling failures with grace.

And this is why I think Kubernetes will disappear. It’s so generic that there’s no reason to do it yourself. Few organizations operate at a scale where it makes sense to run their own datacenters, and the practical friction of running Kubernetes creates a similar dynamic. Like reliable infrastructure, it’s too hard, too slow, or too expensive to justify doing really well yourself, but there probably is value in paying for it as a service from a cloud provider.

Becoming Commodity

In the end, precisely because it’s generic and because building and running a customized and complex platform is an undifferentiated hard problem, it can and will be commoditized. If you remember what Maven did for the Java world, you’ll understand that accepting a little “opinionation” delivers a lot of productivity.

There will always be exceptions, but I think they’ll prove the rule: by conquering the mainstream, Kubernetes will, for the majority of software delivery teams, quietly slip below the waterline of commodity.