GitOps meets AppOps

How do you avoid building a GitOoooops environment while delivering security and a great developer experience, and utilizing your existing infrastructure?

Terraform meets AppOps

How do you take Terraform one step further to provision apps, enforce governance, and integrate with your Internal Developer Platform (IDP) strategy?

Implementing an Internal Developer Platform

You want to deliver a solution that developers will use, that security will like, and that reduces the burden on the DevOps or Platform Engineering team. So let’s discuss the different approaches we see teams taking when building their Internal Developer Platforms (IDPs).

7 Reasons Why Your Internal Developer Platform will Fail

According to a recent global survey by Stripe, developers responded that only 68.4% of their time is productive, which means developers could be nearly 32% more productive than they are today.

Untangling Network Policies on K8s

Network Policy is a critical part of building a robust developer platform, but the learning curve to address complex real-world policies is not tiny. It is painful to get the YAML syntax right. There are many subtleties in the behavior of the network policy specification (e.g., default allow/deny, wildcarding, rules combination, etc.). Even an experienced Kubernetes YAML-wrangler can still easily tie their brain in knots working through an advanced network policy use case.

The rise of the developer platform

An engineer engaged in purely non-innovative activity destroys nearly $600K in employer market value. On the other hand, the average engineer, working on a combination of maintenance and innovation activities, adds $855K in market value to their employer.

A real-world application deployment on Kubernetes

Bruno Andrade

CEO and Founder, Shipa Corp

We see people talking more and more about Kubernetes these days, and if I have to guess, these conversations will continue to grow. Still, the reality is that most enterprise companies are just starting to explore Kubernetes, or they are at the very early stages of scaling it.

As you deploy production-grade apps on Kubernetes, both developers and DevOps teams realize that operationalizing applications on Kubernetes can be way more complicated than expected. Even further, that complexity grows as you start deploying distributed apps across multiple Kubernetes clusters, set up network policies, RBAC, and more.

It has been nothing short of amazing to see Shipa’s wide adoption, with teams everywhere using it as the default application management layer for Kubernetes. With that in mind, we wanted to show how developers can deploy a more complex application while DevOps teams enforce security and governance, without dealing with many of the complexities that traditionally come hand-in-hand with Kubernetes.

For this example, we will use a Cinema application. You can find the source code for the app here:  https://github.com/shipa-corp/cinema-application

* For this example, we assume you already have an instance of Shipa running. You can find detailed information on how to install Shipa here: https://www.shipa.io/getting-started/

This is the high-level architecture for this application:

I know there is a lot in this picture, so let’s break things down by role:

For DevOps:

  • We have defined 4 Shipa frameworks: cinema-ui, api-gateway, cinema-services, and payment-services
  • We broke this into multiple frameworks because:
    • We wanted to show how DevOps teams can enforce different security levels for the different services deployed by the developers.
    • This provides isolation between services deployed using the different frameworks.
    • While most of the frameworks are hosted on a GKE cluster, one of the frameworks (cinema-ui) is on an AKS cluster. Shipa can also help you deploy apps and services across multiple clusters and providers (on-prem or in the cloud).

For Developers:

  • We are using NodeJS for the different services.
  • Services connect to an external MongoDB instance where:
    • Movies services retrieve a list of movies.
    • Cinema Catalog retrieves a list of theaters.
    • Payment service connects to a third-party service (Stripe) to perform payment operations.
    • Notification receives transaction details and sends them to users by email (a fake emailing process for this example).
    • Booking connects to both the Payment and Notification services to register the purchase of a movie.
    • The API Gateway service provides centralized communication with the different services and can be called by external services/devices.
    • The UI service provides a user interface that communicates through our API Gateway for booking movies.

Try this on Shipa Cloud

Create a free account now and try this example on Shipa Cloud

Creating the MongoDB Databases

Some of the services require a database to persist/read information exposed later on by the API. Therefore, having access to a MongoDB service is needed.

Assuming you already have a MongoDB instance running, here are the steps you can follow to create the required structure for our cinema services:

Movies DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "movies" DB
use movies

# Create a user for the DB
db.createUser(
  {
    user: "shipau",
    pwd: "shipapass",
    roles: [
      { role: 'userAdmin', db: 'movies' },
      { role: 'dbAdmin', db: 'movies' },
      { role: 'readWrite', db: 'movies' }
    ]
  }
)

# Exit the current user and log in with the recently created one
exit

mongo -u shipau <IP:PORT>/movies

# Insert some DB records
db.movies.insertMany([
  {"id" : "1", "title" : "Assasins Creed", "runtime" : 115, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 6, poster: 'assasins-creed.jpg'},
  {"id" : "2", "title" : "Gladiator", "runtime" : 124, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 13, poster: 'gladiator.jpg'},
  {"id" : "3", "title" : "xXx: Reactivado", "runtime" : 107, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 20, poster: 'reactive.jpg'},
  {"id" : "4", "title" : "Resident Evil: Capitulo Final", "runtime" : 107, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 27, poster: 'resident-evil.jpg'},
  {"id" : "5", "title" : "Moana: Un Mar de Aventuras", "runtime" : 114, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2016, "releaseMonth" : 12, "releaseDay" : 2, poster: 'moana.jpg'}
])

Cinemas DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "cinemas" DB
use cinemas

# Create a user for the DB
db.createUser(
  {
    user: "shipau",
    pwd: "shipapass",
    roles: [
      { role: 'userAdmin', db: 'cinemas' },
      { role: 'dbAdmin', db: 'cinemas' },
      { role: 'readWrite', db: 'cinemas' }
    ]
  }
)

# Exit the current user and log in with the recently created one
exit

mongo -u shipau <IP:PORT>/cinemas

# Insert some DB records (run mongoimport from your system shell, not the mongo shell)
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/countries.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/states.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/cities.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/cinemas.json
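If you prefer, the four imports above can be generated with a small loop. This sketch only prints the commands so you can review them before running; the path is the same placeholder used in this post:

```shell
# Print one mongoimport command per mock file (review, then run or pipe to sh)
for f in countries states cities cinemas; do
  echo "mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/$f.json"
done > import-commands.txt
```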

Booking DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "booking" DB
use booking

# Create a user for the DB
db.createUser(
  {
    user: "shipau",
    pwd: "shipapass",
    roles: [
      { role: 'userAdmin', db: 'booking' },
      { role: 'dbAdmin', db: 'booking' },
      { role: 'readWrite', db: 'booking' }
    ]
  }
)

# Exit the current user and log in with the recently created one
exit

Creating the Shipa Frameworks

Creating the frameworks on Shipa is easy, and you can use the commands below in combination with the framework template files provided as part of the Git repo:

$ shipa framework add cinema-ui.yaml

$ shipa framework add cinema-services.yaml

$ shipa framework add cinema-payment.yaml

Once created, you can bind these frameworks to either a single cluster or multiple clusters. In our case, we attached them to a GKE and an AKS cluster.

You can find more information on how to connect clusters and bind frameworks to them here: https://learn.shipa.io/docs/connecting-clusters


Deploying the Services

Cinema Services

Now that we have our MongoDB setup and the Shipa frameworks ready and bound to Kubernetes clusters, we can start creating and deploying our different cinema services.

As we create and deploy some of the cinema services, we will also be setting up required ENV variables to connect to databases and third-party services (Stripe).

* To make the process easier, we will use pre-built Docker images, but the Dockerfiles are available in the Git repo in case you want to recreate the images.

Movies Service

$ shipa app create movies-service -t shipa-admin-team -o cinema-services

$ shipa env set -a movies-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=movies

$ shipa app deploy -a movies-service -i gcr.io/cosimages-206514/movies-service@sha256:da99b1f332c0f07dfee7c71fc4d6d09cf6a26299594b6d1ae1d82d57968b3c57

Cinema Catalog Service

$ shipa app create cinema-catalog -t shipa-admin-team -o cinema-services

$ shipa env set -a cinema-catalog DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=cinemas

$ shipa app deploy -a cinema-catalog -i gcr.io/cosimages-206514/cinema-catalog-service@sha256:6613440a460e9f1e6e75ec91d8686c1aa11844b3e7c5413e241c807ce9829498

Notifications Service

$ shipa app create notification-service -t shipa-admin-team -o cinema-services

$ shipa app deploy -a notification-service -i gcr.io/cosimages-206514/notification-service@sha256:ca71c0decb3e9194474b9ea121ab0a3432b57beb07f9297fa1233f8f3d6a2118

Payment Service

$ shipa app create payment-service -t shipa-admin-team -o cinema-payment

$ shipa env set -a payment-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=booking STRIPE_SECRET=your_secret STRIPE_PUBLIC=your_token

$ shipa app deploy -a payment-service -i gcr.io/cosimages-206514/payment-service@sha256:b1c311b37fb6c74ef862e93288aa869f014e7b77e755a4d71481fe5689204d31

Booking Service

$ shipa app create booking-service -t shipa-admin-team -o cinema-services

$ shipa env set -a booking-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=booking NOTIFICATION_API_HOST=<Notification-app-internal-dns>:3000 PAYMENT_API_HOST=<Payment-app-internal-dns>:3000

$ shipa app deploy -a booking-service -i gcr.io/cosimages-206514/booking-service@sha256:e639bfc7c0926be16f6b59214cc0753d47b168e342bea5e2d34d8f47dbdded27

Note that you will need to supply the internal endpoints for the Notification and Payment services in the Booking service’s ENV variables so that Booking can communicate with both. You can find each endpoint on the respective service’s application page:

For my notification-service application:
Shipa for Kubernetes - notification service
For my payment-service application:

When setting the ENV variables above, we want to use the Internal DNS for both services.

API Gateway Service

$ shipa app create api-gateway -t shipa-admin-team -o cinema-services

$ shipa env set -a api-gateway API_BOOKING=booking_internal_dns API_MOVIES=movies_internal_dns API_CINEMA=cinema_catalog_internal_dns

$ shipa app deploy -a api-gateway -i gcr.io/cosimages-206514/api-gateway@sha256:ea7e9efe455f634ab3d9abaae361717c771a9bf4ab8881d005d3cf4b10195e1a

As with the previous service, the API Gateway requires you to add the Internal DNS endpoints for the Movies and Cinema Catalog services, which you can find on the service’s page.

UI Service

The UI app is a frontend application built in React that interacts with the API client-side (through the browser), so it requires the API Gateway to be open to incoming traffic through the endpoint exposed by Shipa.

$ shipa app create ui-service -t shipa-admin-team -o cinema-ui

$ shipa env set -a ui-service REACT_APP_API_SERVER=api-gateway-endpoint

$ shipa app deploy -a ui-service -i gcr.io/cosimages-206514/ui-cinemas@sha256:81f61cf1368b65e90a70637f9aa1c25ed741495d391cbbcea807d031e0c2a5e3

You can see that the UI service needs the API Gateway endpoint to communicate with it as part of the ENV variables that you are setting above.

Suppose you follow my example where the UI service is deployed through a framework hosted in a different cluster. In that case, you should use the Endpoint (external) URL provided in the API Gateway service page:

If you decided to have all frameworks in the same cluster, then you can use the Internal DNS address of your API Gateway service:

Once the services are all deployed, you can access our Cinema web page to start booking movies using the UI service Endpoint.


Network Policies

One of the optional steps you can take is to define network policies to protect the services we just deployed. Defining network policies on Kubernetes can be quite tricky, but Shipa has made it easy.

* For network policies to work on Shipa, make sure Network Policies are enabled in the Kubernetes cluster hosting the Payment service and its Shipa framework.

Let’s set up network policies for the Payment service as an example since our payment service can be considered a critical app that needs to be secured.

To do so, we can go to the Payment service application page and click on Network Policies:

Once on the page, click on the “+” button to create a policy.

Since our Booking service is the one sending and receiving data to our Payment service, we can then build a policy that will allow our Payment service to receive data (ingress) only from Booking and only through port 3000:

For now, we can leave our egress with allow-all, but you can also specify egress rules to make sure data is sent to only allowed apps or endpoints:

Shipa for Kubernetes - allow all
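Under the hood, a rule like the one above corresponds to a standard Kubernetes NetworkPolicy. As a rough sketch of what gets enforced (the policy name and pod labels here are hypothetical; Shipa generates and applies the real object for you):

```shell
# Write out a hand-written Kubernetes equivalent of the Shipa policy above,
# purely to illustrate what is being enforced on the cluster.
cat <<'EOF' > payment-ingress-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-allow-booking        # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: payment-service           # hypothetical label on the Payment pods
  policyTypes:
  - Ingress                          # Egress is omitted, so egress stays allow-all
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: booking-service       # hypothetical label on the Booking pods
    ports:
    - protocol: TCP
      port: 3000                     # only the Booking service, only port 3000
EOF
```

Leaving Egress out of policyTypes is what keeps outbound traffic at allow-all, matching the configuration described above.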

Once complete, Shipa will restart the Payment service to ensure the network policies are applied. You can see this using Shipa’s network map:

You can define additional policies as you desire, but this is an example of how easy it is to control policies through Shipa.

In the end, we:

  • Deployed a complex application on Kubernetes without actually knowing Kubernetes or, as developers, even having to touch kubectl!
  • Secured our services across multiple clusters and vectors, including security scans, network policies, RBAC, and more.

In the next post, we will show you how to connect this into your CI Pipeline and operate it through GitOps.

Try this on Shipa Cloud

Create a free account and deploy this cinema application using Shipa Cloud!


GitOps in Kubernetes, the easy way–with GitHub Actions and Shipa

Bruno Andrade

Founder and CEO, Shipa Corp

What is GitOps?

Putting it simply, it is how you do DevOps in Git. You store and manage your deployments using Git repositories as the version control system for tracking changes in your applications or, as everyone likes to say, “Git as a single source of truth.” It gets a bit more complicated when you start to talk GitOps in Kubernetes.

The challenge is that GitOps in Kubernetes is directly tied to tools that still require you to build and manage things like Helm charts, Kustomize, or similar approaches. This means your developers and DevOps teams will always have to manage these charts, the variables they use, the changes made to them, and more. On top of that, post-deployment operation capabilities for applications are minimal with these tools, and enforcing governance and controls is challenging.

We want to introduce you to an application-centric way of doing GitOps in Kubernetes. We will be using Shipa and GitHub Actions (You can use any Git repo or CI tool of choice). 

In the end, that’s what we want to deliver:

Shipa for Kubernetes - GitOps

In this example, we will construct a GitOps-in-Kubernetes workflow using GitHub, GitHub Actions, and Shipa. By the end, you will have built a GitOps workflow without the need to learn, build, and maintain Helm charts and other Kubernetes-related objects.

 

Requirements

In this example, we assume you already have a Kubernetes cluster running and can access that cluster with kubectl. We will be using a cluster in GKE (Google Kubernetes Engine), but you can use any other cluster you’d like.

 

Installing Shipa

Installing Shipa on Kubernetes is as easy as 1, 2, 3:

  1. Create a namespace for Shipa’s components:
    $ kubectl create namespace shipa-system
  2. Set an email address and password to log in to Shipa:
    $ cat > values.override.yaml << EOF
    auth:
      adminUser: myemail@example.com
      adminPassword: mySecretAdminPassw0rd
    EOF
  3. Add Shipa’s Helm repository and deploy Shipa:
    $ helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
    $ helm install shipa shipa-charts/shipa -n shipa-system --timeout=1000s -f values.override.yaml

You can watch Shipa being deployed by listing all the pods in the shipa-system and shipa namespaces:

$ kubectl get pods --all-namespaces -w | grep shipa

Once Shipa’s components are up and running, install the Shipa CLI in your system:

$ curl -s https://storage.googleapis.com/shipa-client/install.sh | bash

You can verify that Shipa is installed successfully by running shipa version.

 

Adding a Target to Shipa

Before you start interacting with Shipa using the CLI, you need to configure a target (which tells the CLI where to find Shipa’s backend in your Kubernetes cluster). To configure a target, you first need to obtain the IP address (or DNS name) of Shipa’s server. To do that, run the following:

$ export SHIPA_HOST=$(kubectl --namespace=shipa-system get svc shipa-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

$ if [[ -z $SHIPA_HOST ]]; then
export SHIPA_HOST=$(kubectl --namespace=shipa-system get svc shipa-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
fi

$ shipa target-add shipa $SHIPA_HOST -s

The code above obtains the IP address or DNS name of the Load Balancer that is serving traffic to the Shipa server. The output of the shipa target-add command will look similar to:

New target shipa -> https://xxx.xxx.xxx.xxx:8081 added to target list and defined as the current target
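If you want to see the IP-or-hostname fallback from the snippet above in isolation, here is a runnable sketch with stub values (LB_IP and LB_HOSTNAME stand in for the two kubectl jsonpath queries):

```shell
# Stub values standing in for the two kubectl jsonpath queries:
LB_IP=""                       # ...ingress[0].ip came back empty
LB_HOSTNAME="lb.example.com"   # ...ingress[0].hostname

SHIPA_HOST="$LB_IP"
if [[ -z "$SHIPA_HOST" ]]; then
  SHIPA_HOST="$LB_HOSTNAME"    # fall back to the DNS name some cloud LBs expose
fi
echo "$SHIPA_HOST"
```

Some providers (e.g., AWS ELB) expose only a hostname for the load balancer, which is why the second query is needed at all.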
 
Accessing the Shipa Dashboard

After a few minutes, you should be able to access the dashboard. Copy the shipa target address without the port and paste it into your Web Browser address bar, setting the port to 8080 (i.e., http://xxx.xxx.xxx.xxx:8080). You should see the following:


Click on the Go to my Dashboard link. Once on the Dashboard, input the email address and password you set earlier.

Once you log in, this is how your Dashboard should look:

Shipa for Kubernetes - Dashboard

You can find more information on installing Shipa here https://learn.shipa.io/docs/installing-shipa.

 
Creating an Application on Shipa

Now we will create an application on Shipa, which is where we will deploy our code. You can create an application on Shipa using either the CLI or Dashboard. In this example, we will use the Shipa CLI:

$ shipa login
$ shipa app create gitops -t shipa-admin-team -o shipa-framework

 

CI/CD Pipeline with GitHub Actions

To demonstrate the full pipeline, I am using a web application called DevOps Toolkit, originally developed by (and forked from) our friend Viktor Farcic. You can find the Git repo here https://github.com/brunoa19/devops-toolkit.

We will be using GitHub Actions to help us build the image, push it to Google Container Registry, and trigger the deployment using Shipa, so let’s look at how the pipeline is constructed.

GitHub Secrets

Let’s create the required secrets! To get started, go to the “Settings” tab in your project and click “Secrets” in the sidebar. Then click “New repository secret.”

You will need to create the following secrets:

    • GKE_PROJECT: The name of your project where your GKE cluster is located in your Google cloud account
    • GKE_SA_KEY: The service account used for the project with the Base64 encoded JSON service account key. More info available at https://github.com/GoogleCloudPlatform/github-actions/tree/docs/service-account-key/setup-gcloud#inputs 
    • SHIPA_APP: The name of the application we created on Shipa in the steps before that we will use to deploy our application (gitops in our case)
    • SHIPA_USER: The username you use to access Shipa
    • SHIPA_PASS: The password you use to access Shipa
    • SHIPA_SERVER: The IP of your Shipa instance (without HTTPS and port number, just the IP)

Using GitHub secrets in your workflow is relatively straightforward. Each secret is exposed through the “secrets” context (referenced as ${{ secrets.NAME }}), which means we can easily use them when creating our config file.
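For example, the GKE_SA_KEY secret expects the Base64-encoded JSON of your service-account key. One way to produce that value (the file name here is a placeholder; use the key you downloaded from Google Cloud):

```shell
# Throwaway stand-in for the JSON key downloaded from Google Cloud
# ("sa-key.json" is a placeholder name):
printf '{"type":"service_account","project_id":"my-gcp-project"}' > sa-key.json

# Encode to a single Base64 line and paste the result into the GKE_SA_KEY
# secret (on macOS, use `base64 -i sa-key.json` instead of `-w 0`):
base64 -w 0 sa-key.json > sa-key.b64
```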

Pipeline Settings
name: DevOps Toolkit - Prod

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  IMAGE: static-site

In this example, our actions run when there is either a push or a pull request on our “main” branch. Our pipeline config file is stored inside the .github/workflows directory, in a file called shipa-ci.yml.

The steps below build our DevOps Toolkit image using the Dockerfile present in the repository and, once built, store the image in our Google Container Registry.

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest

    steps:
    - name: ACTIONS_ALLOW_UNSECURE_COMMANDS
      id: ACTIONS_ALLOW_UNSECURE_COMMANDS
      run: echo 'ACTIONS_ALLOW_UNSECURE_COMMANDS=true' >> $GITHUB_ENV

    - name: Checkout
      uses: actions/checkout@v2

    # Setup gcloud CLI
    - uses: GoogleCloudPlatform/github-actions/setup-gcloud@0.1.3
      with:
        service_account_key: ${{ secrets.GKE_SA_KEY }}
        project_id: ${{ secrets.GKE_PROJECT }}

    # Configure Docker to use the gcloud command-line tool as a credential
    # helper for authentication
    - run: |-
        gcloud --quiet auth configure-docker

    # Install Hugo
    - run: |-
        wget https://github.com/gohugoio/hugo/releases/download/v0.55.4/hugo_0.55.4_Linux-64bit.deb
        sudo dpkg -i hugo_0.55.4_Linux-64bit.deb

    # Build Hugo and Docker image
    - name: Build
      run: |-
        make build
        docker build \
          --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
          --build-arg GITHUB_SHA="$GITHUB_SHA" \
          --build-arg GITHUB_REF="$GITHUB_REF" \
          .

The step below sets up the Shipa CLI:

    # Setup Shipa CLI
    - run: |-
        sudo wget https://storage.googleapis.com/shipa-client/1.2.0/shipa_linux_amd64
        sudo chmod +x shipa_linux_amd64 && mv -v shipa_linux_amd64 shipa
        ./shipa target add shipa ${{ secrets.SHIPA_SERVER }} -s
        echo ${{ secrets.SHIPA_PASS }} | ./shipa login ${{ secrets.SHIPA_USER }}

The final step in our pipeline triggers the deployment using Shipa

    # Deploy the Docker image to the cluster through Shipa
    - name: Deploy
      run: |-
        ./shipa app deploy -a ${{ secrets.SHIPA_APP }} -i gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA --step-interval=1m --step-weight=10 --steps=6

A few things to notice during the deployment:

    • We are deploying to the app we created on Shipa in the previous steps, configured via the SHIPA_APP secret
    • We are deploying using a canary rollout. If you want to run a straight deployment, just remove the --step-interval, --step-weight, and --steps flags
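To make the canary flags concrete, here is a small sketch of how the canary’s traffic share grows under --steps=6, --step-weight=10, and --step-interval=1m (illustrative only; Shipa manages the actual traffic shifting and final promotion):

```shell
# Each step adds --step-weight percent of traffic, once per --step-interval.
weight=0
for step in 1 2 3 4 5 6; do          # --steps=6
  weight=$((weight + 10))            # --step-weight=10
  echo "after ${step}m: canary serves ${weight}% of traffic"
done
```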

As we push code to the main branch, we can see that GitHub Actions starts doing its job, all the way to triggering the deployment through Shipa.

As the application is deployed, we have complete observability using Shipa.

Shipa for Kubernetes - observability

We can also have a complete overview of the application’s object and network dependency map:

We can see the detailed lifecycle of the application, with information and logs associated with every action taken:

Shipa for Kubernetes - lifecycle

Much more can be done with Shipa to operationalize Kubernetes and GitOps, such as security scans, RBAC, network policies, and more for operating your applications. With Shipa, you can not only deploy your apps, but you can manage them as well.

Another great point to mention is that we have done this all without creating Helm charts, Kustomize, deployment objects, services, etc. Shipa makes GitOps in Kubernetes much more dynamic and application-focused.

Thanks for reading this post. I hope you have a clearer picture of an application-centric GitOps model.

In the next post, we will cover how you can build an enterprise-level GitOps workflow. Stay tuned!


Talking Shipa – “What’s New in 1.2?”

Shipa is excited to launch our new webcast series, Talking Shipa. To kick the series off, we sat down with Shipa Founder and CEO Bruno Andrade to discuss the release of the Shipa Application Management Framework for Kubernetes, version 1.2, which includes application observability, a network policies map, and more.

In this video, Bruno spends a few minutes with us to talk about the new features and improvements that are packed into this new release.

Version 1.2 includes quite a few new features and improvements, but the items we focus on with Bruno include:

Improvements to:

    • Multi-cluster, incl. AKS, EKS, OKE, GKE, IKS & OpenShift (3:15)
    • Multi-tenancy: an improved, more detailed multi-tenancy model (3:15)

New features including:

    • Application Observability (5:26)
    • Network Policies Map (9:40)
    • Integration with Istio, incl. canary rollouts (9:40)
    • Vault integration (13:00)
    • Integration with Private Registries, incl. JFrog (14:20)
    • and more!

The video below is time coded in the description to help you navigate to the topics that interest you most.

It is free to get started with Shipa. Follow the button below for details regarding how to download and install Shipa 1.2:


Get started with Shipa today!


Operationalizing Kubernetes


Organizations have now seen the value of building microservices. They deliver applications as discrete functional parts, each of which can be delivered as a container or service and managed separately. But for every application, there are more parts to manage than ever before, especially at scale, and that’s where many turn to an orchestrator or a Kubernetes framework for help. While Kubernetes is one of the most popular container orchestration projects on GitHub, many organizations are still intimidated by its complexity.

Kubernetes solves many problems by providing an extensible, declarative platform that automates containers’ management for high availability, resiliency, and scale. But Kubernetes is a big, complex, fast-moving, and sometimes confusing platform that requires users to develop new skills, organizations to invest in building a new team to manage Kubernetes, platform teams to create a framework/interface for their developers on top of Kubernetes, and more. Often, this leads to slow adoption, increased costs, and a long list of frustrations.

As these enterprises start adopting Kubernetes, one key ask for their DevOps and Platform Engineering teams is to “operationalize” Kubernetes. While powerful, Kubernetes is a platform that, at scale, can easily blur the lines between developers and operations teams, introduce challenges around security and controls, quickly devolve into container and service sprawl, lead to unnecessary resource consumption, degrade developer experience and speed, and more. Overcoming these challenges and getting your organization to a state where it is comfortable with Kubernetes, and where your software development and delivery teams can scale, is not easy.

As we build Shipa, we talk to many organizations going through that “operationalization” process. Since our roadmap is heavily driven by user input and use cases, we wanted to share some of the common issues they face and how we see users leveraging Shipa to overcome them and ultimately “operationalize Kubernetes”:


1. Building and automating compliance

It can be a tremendous job to ensure that a cluster’s desired configuration is set and maintained, especially when you provide Kubernetes as the interface for your developers to deploy and manage their applications directly.

In addition, enforcing compliance across networking, resource utilization, permissions for multiple teams, use of external services, security scans, and more can be overwhelming, because these components are often completely independent from one another and managed by different teams. Trying to build an automated compliance model on top of all this ends up being a daunting task for the DevOps or Platform Engineering team, and it only leads to more time spent building something that is not core to the business.

Kubernetes Framework: To tackle that problem, we designed Shipa to leverage what we call Frameworks. The DevOps and Platform Engineering teams can use frameworks to build and automate compliance around the items below:

Kubernetes Framework

The team’s focus is now on building the compliance framework that is applied automatically to applications and services deployed through it, rather than on building a myriad of custom scripts, Helm charts, and Kustomize overlays, hiring more people to build and manage them, and so on.

Through the framework, the team can now focus on what should be enforced at the application or service level. And because a framework can be bound to any Kubernetes cluster, the team can finally focus on business requirements rather than building policies for the local cluster, Rego policy files, and more, all of which can change drastically if you move to a different cluster provider, cluster version, or policy enforcement model.

With the frameworks created, you can then attach these frameworks to different clusters:

Shipa for Kubernetes - Kubernetes Framework

You can create multiple frameworks per cluster, and each framework can have a completely different configuration, so you can tackle use-cases such as Dev, QA, and Prod environments, different projects, and more where each may require a different level of RBAC, Network policy, and other compliance requirements.

2. Managing workload configuration deployed on the cluster

One common concern we see from DevOps and Platform Engineering teams is the number of resources being consumed by applications and services deployed by the different development teams they support. We talk to companies that say they could easily reduce their compute utilization by ~30% or even ~40% if they could automatically enforce resource utilization for deployed applications and services. Still, they report issues identifying who owns a specific app or service, wasting time managing and maintaining a huge number of YAML files, and more.

Thinking about these issues, we created a component called Plans. As a DevOps or Platform engineer, you can create multiple plans, each setting a specific amount of memory, CPU, and swap that applications can consume, and attach a plan to a specific framework:

Shipa for Kubernetes

If you attach a plan to a framework, the plan's limits are enforced automatically every time an application is deployed through that framework, and through Shipa you can monitor closely how much of those resources the apps or services actually use, who owns them, and more. If you need to adjust the plan for a specific app or service, you can attach a new plan to it directly, with no changes to YAML files, Helm charts, or Kustomize scripts.
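For comparison, the raw Kubernetes object a plan roughly stands in for is a `LimitRange`, which a team would otherwise write and maintain per namespace. The values and names below are illustrative assumptions, not an actual Shipa plan definition:

```yaml
# Hand-rolled equivalent of a "small" plan: default CPU/memory
# requests and limits applied to any container that sets none.
apiVersion: v1
kind: LimitRange
metadata:
  name: small-plan-equivalent   # illustrative name
  namespace: team-a-dev
spec:
  limits:
    - type: Container
      default:              # limit used when the container specifies none
        cpu: "500m"
        memory: 512Mi
      defaultRequest:       # request used when the container specifies none
        cpu: "250m"
        memory: 256Mi
```

Swapping a plan through Shipa replaces editing objects like this across every affected namespace and cluster.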

3. Multi-tenant and multi-clusters

As you scale out the clusters your company needs to run everything your developers deploy, it becomes harder to manage and control what is deployed, where, and how across different clusters and, sometimes, different providers.

At this point, most people agree that multi-tenancy is not an easy thing to manage on Kubernetes, especially at scale and across multiple clusters and cluster providers.

Thinking about that, Shipa incorporates multi-tenancy at the framework level, making it straightforward to enforce roles and permissions for teams and users.

Shipa for Kubernetes - multi-tenancy

With Shipa, Kubernetes clusters are no longer the interface your developers use to develop and deploy apps and services, which lets Shipa enforce multi-tenancy at the right level for the different teams and users within your organization:

Shipa for Kubernetes - permission list

In addition to RBAC and detailed permission options at the framework level, Shipa provides an extra layer of multi-tenancy through its frameworks: each framework creates a namespace in your cluster, so apps and services deployed through different Shipa frameworks are isolated in separate namespaces across your clusters.
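To make the namespace-level isolation concrete, here is the plain-Kubernetes RBAC a platform team would otherwise hand-write for each team and namespace. Again, the names (`team-a`, `app-deployer`) are illustrative assumptions:

```yaml
# Hand-rolled equivalent of framework-level tenancy: a Role scoped
# to one namespace, bound to one team's group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer          # illustrative name
  namespace: team-a-dev
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-deployers
  namespace: team-a-dev
subjects:
  - kind: Group
    name: team-a              # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```

Managing tenancy at the framework level means this per-team, per-namespace, per-cluster boilerplate is generated and kept consistent for you.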

4. Security and agility

The pressure is on for organizations to innovate at an unprecedented pace, which makes it easy to treat security as an afterthought. Many organizations assume container and microservice security simply means scanning their images and code, but security goes well beyond that and should be built into your workflow.

Shipa for Kubernetes - workflow configuration

To help your organization implement best practices, we embedded security into the framework workflow, including security scanning, network policies, registry control, RBAC, and more. Embedding that workflow in your CI/CD pipeline is a powerful way to enforce security while preserving velocity for your developers.
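A pipeline integration of this kind might look like the following GitHub Actions fragment. This is a sketch only: the workflow shape is standard, but the app name, image path, and the exact Shipa CLI flags are assumptions not verified against the Shipa CLI reference.

```yaml
# Illustrative CI step: the pipeline builds and pushes an image, then
# hands deployment to Shipa, where the framework's security checks
# (scanning, network policy, RBAC) are applied automatically.
- name: Deploy through Shipa
  run: |
    # App name, registry, and flags below are hypothetical examples.
    shipa app deploy -a payments-api \
      -i registry.example.com/payments-api:${{ github.sha }}
```

The key point is that the pipeline stays a thin trigger; the compliance logic lives in the framework rather than in per-pipeline scripts.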

Because Shipa's framework takes this holistic approach to security and ties into your CI/CD pipelines, developers can focus on delivering application code and updates while the framework enforces the various security settings automatically. Shipa's dashboard then gives you security reports, deployment information, application ownership, application lifecycle, logs, and more. Combined, this becomes a powerful way to support your applications post-deployment while giving developers better observability and velocity.

Shipa for Kubernetes - observability

It’s terrific to see Kubernetes and its community growing, with more companies adopting it and scaling their deployments. Here at Shipa, we believe a structured workflow built on a Kubernetes framework can help these organizations adopt Kubernetes and microservices architectures at scale, at the pace they need.

Resources:

Shipa Website: https://www.shipa.io

Shipa Documentation: https://learn.shipa.io

Getting Started with Shipa: https://learn.shipa.io/docs/getting-started-with-shipa

Try Shipa today!

Full lifecycle application-centric framework for Kubernetes