Category: Development

Deploying a real-world application on Kubernetes

Bruno Andrade

CEO and Founder, Shipa Corp

We see people talking more and more about Kubernetes these days, and if I have to guess, these conversations will continue to grow. Still, the reality is that most enterprise companies are just starting to explore Kubernetes, or they are at the very early stages of scaling it.

As you deploy production-grade apps on Kubernetes, both developers and DevOps teams realize that operationalizing applications on Kubernetes can be far more complicated than expected. That complexity only grows as you start deploying distributed apps across multiple Kubernetes clusters, setting up network policies, RBAC, and more.

It has been nothing short of amazing to see Shipa’s wide adoption, with teams everywhere starting to use it as the default application management layer for Kubernetes. With that in mind, we wanted to show how developers can deploy a more complex application while empowering DevOps teams to enforce security and governance, without dealing with many of the complexities that traditionally come hand-in-hand with Kubernetes.

For this example, we will use a Cinema application. You can find the source code for the app here:  https://github.com/shipa-corp/cinema-application

* For this example, we assume you already have an instance of Shipa running. You can find detailed information on how to install Shipa here: https://www.shipa.io/getting-started/

This is the high-level architecture for this application:

I know there is a lot going on in this diagram, so let’s break things down by role:

For DevOps:

  • We have defined 4 Shipa frameworks: cinema-ui, api-gateway, cinema-services, and payment-services
  • We broke this into multiple frameworks because:
    • We wanted to show how DevOps teams can enforce different security levels for the different services deployed by the developers.
    • This provides isolation between services deployed using the different frameworks.
    • While most of the frameworks are hosted on a GKE cluster, one of the frameworks (cinema-ui) is on an AKS cluster. Shipa can also help you deploy apps and services across multiple clusters and providers (on-prem or in the cloud).

For Developers:

  • We are using NodeJS for the different services.
  • Services connect to an external MongoDB instance where:
    • Movies services retrieve a list of movies.
    • Cinema Catalog retrieves a list of theaters.
    • Payment service connects to a third-party service (Stripe) to perform payment operations.
    • Notification receives transaction details and sends them to users by email (fake emailing process for this example)
    • Booking connects to both Payment and Notification services to register the purchase of a movie.
    • API Gateway service provides centralized communication with the different services and can be called by external services/devices
    • UI service provides a user interface that communicates through our API gateway for booking movies.

Creating the MongoDB Databases

Some of the services require a database to persist and read information exposed later on by the API, so you will need access to a MongoDB service.
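
If you do not already have a MongoDB instance available, one quick way to stand one up locally for this walkthrough is with Docker. This is just a sketch; the container name, version, and credentials below are placeholders you can change:

# Run a throwaway MongoDB instance with a root user (placeholder credentials)
docker run -d --name cinema-mongo -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=adminpass \
  mongo:4.4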

Assuming you already have a MongoDB instance running, here are the steps you can follow to create the required structure for our cinema services:

Movies DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "movies" DB
use movies

# Create a user for the DB
db.createUser(
{
user: "shipau",
pwd: "shipapass"
roles: [
{ role: 'userAdmin', db: 'movies' },
{ role: 'dbAdmin', db: 'movies' },
{ role: 'readWrite', db: 'movies' }
]}
)

# Exit the current user and log in with the recently created one
exit

mongo -u shipau <IP:PORT>/movies

# Insert some DB records
db.movies.insertMany([
{"id" : "1", "title" : "Assasins Creed", "runtime" : 115, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 6 , poster: 'assasins-creed.jpg'}
{"id" : "2", "title" : "Gladiator", "runtime" : 124, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 13, poster: 'gladiator.jpg' }
{"id" : "3", "title" : "xXx: Reactivado", "runtime" : 107, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 20, poster: 'reactive.jpg' }
{"id" : "4", "title" : "Resident Evil: Capitulo Final", "runtime" : 107, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2017, "releaseMonth" : 1, "releaseDay" : 27, poster: 'resident-evil.jpg' }
{"id" : "5", "title" : "Moana: Un Mar de Aventuras", "runtime" : 114, "format" : "IMAX", "plot" : "Lorem ipsum dolor sit amet", "releaseYear" : 2016, "releaseMonth" : 12, "releaseDay" : 2, poster: 'moana.jpg' }
])

Cinemas DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "cinemas" DB
use cinemas

# Create a user for the DB
db.createUser(
{
user: "shipau",
pwd: "shipapass",
roles: [
{ role: 'userAdmin', db: 'cinemas' },
{ role: 'dbAdmin', db: 'cinemas' },
{ role: 'readWrite', db: 'cinemas' }
]}
)

# Exit the current user and log in with the recently created one
exit

mongo -u shipau <IP:PORT>/cinemas

# Insert some DB records
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/countries.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/states.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/cities.json
mongoimport --jsonArray --db cinemas --collection docs --file ~/your_path/cinema-catalog-service/src/mock/cinemas.json

Booking DB

mongo -u <user> <IP:PORT>

# Verify if databases already exist
show dbs

# Create your "movies" DB
use booking

# Create a user for the DB
db.createUser(
{
user: "shipau",
pwd: "shipapass",
roles: [
{ role: 'userAdmin', db: 'booking' },
{ role: 'dbAdmin', db: 'booking' },
{ role: 'readWrite', db: 'booking' }
]}
)

# Exit the current user and log in with the recently created one
exit

Creating the Shipa Frameworks

Creating the frameworks on Shipa is easy, and you can use the commands below in combination with the framework template files provided as part of the Git repo:

$ shipa framework add cinema-ui.yaml

$ shipa framework add cinema-services.yaml

$ shipa framework add cinema-payment.yaml

Once created, you can bind these frameworks to either a single cluster or multiple clusters. In our case, we attached them to a GKE and an AKS cluster.

You can find more information on how to connect clusters and bind frameworks to them here: https://learn.shipa.io/docs/connecting-clusters

Cinema Services

Now that we have our MongoDB setup and the Shipa frameworks ready and bound to Kubernetes clusters, we can start creating and deploying our different cinema services.

As we create and deploy some of the cinema services, we will also be setting up required ENV variables to connect to databases and third-party services (Stripe).

* To make the process easier, we will use pre-built Docker images, but the Dockerfiles are available in the Git repo in case you want to recreate the images

Movies Service

$ shipa app create movies-service -t shipa-admin-team -o cinema-services

$ shipa env set -a movies-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=movies

$ shipa app deploy -a movies-service -i gcr.io/cosimages-206514/movies-service@sha256:da99b1f332c0f07dfee7c71fc4d6d09cf6a26299594b6d1ae1d82d57968b3c57

Cinema Catalog Service

$ shipa app create cinema-catalog -t shipa-admin-team -o cinema-services

$ shipa env set -a cinema-catalog DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=cinemas

$ shipa app deploy -a cinema-catalog -i gcr.io/cosimages-206514/cinema-catalog-service@sha256:6613440a460e9f1e6e75ec91d8686c1aa11844b3e7c5413e241c807ce9829498

Notifications Service

$ shipa app create notification-service -t shipa-admin-team -o cinema-services

$ shipa app deploy -a notification-service -i gcr.io/cosimages-206514/notification-service@sha256:ca71c0decb3e9194474b9ea121ab0a3432b57beb07f9297fa1233f8f3d6a2118

Payment Service

$ shipa app create payment-service -t shipa-admin-team -o cinema-payment

$ shipa env set -a payment-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=booking STRIPE_SECRET=your_secret STRIPE_PUBLIC=your_token

$ shipa app deploy -a payment-service -i gcr.io/cosimages-206514/payment-service@sha256:b1c311b37fb6c74ef862e93288aa869f014e7b77e755a4d71481fe5689204d31

Booking Service

$ shipa app create booking-service -t shipa-admin-team -o cinema-services

$ shipa env set -a booking-service DB_SERVER=<MongoDB IP:PORT> DB_USER=shipau DB_PASS=shipapass DB=booking STRIPE_SECRET=your_stripe_secret STRIPE_PUBLIC=your_stripe_token

$ shipa app deploy -a booking-service -i gcr.io/cosimages-206514/booking-service@sha256:e639bfc7c0926be16f6b59214cc0753d47b168e342bea5e2d34d8f47dbdded27

Note that you will need to supply the internal endpoints of the Notification and Payment services in the Booking service’s ENV variables so that Booking can communicate with both. You can find them on the application page of each service:

For my notification-service application:
Shipa for Kubernetes - notification service
For my payment-service application:

When setting the ENV variables above, we want to use the Internal DNS for both services.
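
A sketch of that command is shown below. Note that API_PAYMENT and API_NOTIFICATION are placeholder variable names for this example; check the Booking service source in the repo for the exact names it expects:

$ shipa env set -a booking-service API_PAYMENT=<payment-service internal DNS> API_NOTIFICATION=<notification-service internal DNS>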

API Gateway Service

$ shipa app create api-gateway -t shipa-admin-team -o cinema-services

$ shipa env set -a api-gateway API_BOOKING=booking_internal_dns API_MOVIES=movies_internal_dns API_CINEMA=cinema_catalog_internal_dns

$ shipa app deploy -a api-gateway -i gcr.io/cosimages-206514/api-gateway@sha256:ea7e9efe455f634ab3d9abaae361717c771a9bf4ab8881d005d3cf4b10195e1a

As with the previous service, the API Gateway requires you to add the Internal DNS endpoints for the Movies and Cinema Catalog services, which you can find on the service’s page.

UI Service

The UI app is a frontend application built in React that interacts with the API client-side (through the browser), so it requires the API Gateway to be open to incoming traffic on the endpoint exposed by Shipa.

$ shipa app create ui-service -t shipa-admin-team -o cinema-ui

$ shipa env set -a ui-service REACT_APP_API_SERVER=api-gateway-endpoint

$ shipa app deploy -a ui-service -i gcr.io/cosimages-206514/ui-cinemas@sha256:81f61cf1368b65e90a70637f9aa1c25ed741495d391cbbcea807d031e0c2a5e3

You can see that the UI service needs the API Gateway endpoint to communicate with it as part of the ENV variables that you are setting above.

If you follow my example, where the UI service is deployed through a framework hosted on a different cluster, you should use the Endpoint (external) URL provided on the API Gateway service page:

If you decide to have all frameworks in the same cluster, you can instead use the Internal DNS address of your API Gateway service:

Once the services are all deployed, you can access our Cinema web page to start booking movies using the UI service Endpoint.
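
As a quick sanity check before opening the browser, you can hit both endpoints with curl and confirm they respond; the URLs below are placeholders for the endpoints shown on each application’s page:

$ curl -I http://<ui-service-endpoint>
$ curl -I http://<api-gateway-endpoint>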

Network Policies

One of the optional steps you can take is to define network policies to protect the services we just deployed. Defining network policies on Kubernetes can be quite tricky, but Shipa has made it easy.

* For network policies to work on Shipa, you have to make sure Network Policies are enabled in your Kubernetes cluster where the Payment service and Shipa framework are hosted

Let’s set up network policies for the Payment service as an example since our payment service can be considered a critical app that needs to be secured.

To do so, we can go to the Payment service application page and click on Network Policies:

Once on the page, click on the “+” button to create a policy.

Since our Booking service is the one sending data to and receiving data from our Payment service, we can build a policy that allows our Payment service to receive data (ingress) only from Booking and only through port 3000:

For now, we can leave our egress with allow-all, but you can also specify egress rules to make sure data is sent to only allowed apps or endpoints:

Shipa for Kubernetes - allow all

Once complete, Shipa will restart the Payment service to ensure the network policies are applied. You can see this using Shipa’s network map:

You can define additional policies as you desire, but this is an example of how easy it is to control policies through Shipa.

In the end, we:

  • Deployed a complex application on Kubernetes without actually knowing Kubernetes or even having to touch kubectl as a developer!
  • Secured our services across multiple clusters and vectors, including security scans, network policies, RBAC, and more.

In the next post, we will show you how to connect this to your CI pipeline and operate it through GitOps.

GitOps in Kubernetes, the easy way – with GitHub Actions and Shipa

Bruno Andrade

Founder and CEO, Shipa Corp

What is GitOps?

Put simply, it is how you do DevOps with Git. You store and manage your deployments using Git repositories as the version control system for tracking changes in your applications or, as everyone likes to say, “Git as a single source of truth.” It gets a bit more complicated when you start to talk about GitOps in Kubernetes.

The challenge is that GitOps in Kubernetes is directly tied to tools that still require you to build and manage things like Helm charts, Kustomize, or other similar approaches. This means that your developers and DevOps teams will always have to manage these charts, the variables they use, the changes made to them, and more. Not to mention that these tools offer minimal post-deployment operation capabilities for the application, and enforcing governance and controls remains challenging.

We want to introduce you to an application-centric way of doing GitOps in Kubernetes. We will be using Shipa and GitHub Actions (you can use any Git repo or CI tool of your choice).

In the end, this is what we want to deliver:

Shipa for Kubernetes - GitOps

In this example, we will learn how to construct a GitOps-in-Kubernetes workflow using GitHub, GitHub Actions, and Shipa. By the end, you will have built a GitOps workflow without needing to learn, build, or maintain Helm charts and other Kubernetes-related objects.

 

Requirements

In this example, we assume you already have a Kubernetes cluster running and can access that cluster with kubectl. We will be using a cluster in GKE (Google Kubernetes Engine), but you can use any other cluster you’d like.
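
One quick way to confirm that kubectl can reach your cluster before moving on:

$ kubectl cluster-info
$ kubectl get nodes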

 

Installing Shipa

Installing Shipa on Kubernetes is as easy as 1, 2, 3:

  1. Create a namespace for Shipa’s components:
    $ kubectl create namespace shipa-system
  2. Set an email address and password to log in to Shipa:
    $ cat > values.override.yaml << EOF
    auth:
      adminUser: myemail@example.com
      adminPassword: mySecretAdminPassw0rd
    EOF
  3. Add Shipa’s Helm repository and deploy Shipa:
    $ helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
    $ helm install shipa shipa-charts/shipa -n shipa-system --timeout=1000s -f values.override.yaml

You can watch Shipa being deployed by listing all the pods in the shipa-system and shipa namespaces:

$ kubectl get pods --all-namespaces -w | grep shipa

Once Shipa’s components are up and running, install the Shipa CLI in your system:

$ curl -s https://storage.googleapis.com/shipa-client/install.sh | bash

You can verify that Shipa is installed successfully by running shipa version.

 

Adding a Target to Shipa

Before you start interacting with Shipa using the CLI, you need to configure a target (which tells the CLI where to find Shipa’s backend in your Kubernetes cluster). To configure a target, you first need to obtain the IP address (or DNS name) of Shipa’s server. To do that, run the following:

$ export SHIPA_HOST=$(kubectl --namespace=shipa-system get svc shipa-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

$ if [[ -z $SHIPA_HOST ]]; then
export SHIPA_HOST=$(kubectl --namespace=shipa-system get svc shipa-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
fi

$ shipa target-add shipa $SHIPA_HOST -s

The code above obtains the IP address (or DNS name) of the Load Balancer that is serving traffic to the Shipa server. The output of the shipa target-add command will look similar to:

New target shipa -> https://xxx.xxx.xxx.xxx:8081 added to target list and defined as the current target
 
Accessing the Shipa Dashboard

After a few minutes, you should be able to access the dashboard. Copy the shipa target address without the port and paste it into your Web Browser address bar, setting the port to 8080 (i.e., http://xxx.xxx.xxx.xxx:8080). You should see the following:


Click on the Go to my Dashboard link. Once on the Dashboard, input the email address and password you set earlier.

Once you log in, this is how your Dashboard should look:

Shipa for Kubernetes - Dashboard

You can find more information on installing Shipa here https://learn.shipa.io/docs/installing-shipa.

 
Creating an Application on Shipa

Now we will create an application on Shipa, which is where we will deploy our code. You can create an application on Shipa using either the CLI or Dashboard. In this example, we will use the Shipa CLI:

$ shipa login
$ shipa app create gitops -t shipa-admin-team -o shipa-framework
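
If you want to confirm the application was created before wiring up the pipeline, you can list your apps; the gitops app will show up without an address until its first deployment:

$ shipa app list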

 

CI/CD Pipeline with GitHub Actions

To demonstrate the full pipeline, I am using a web application called DevOps Toolkit, originally developed by (and forked from) our friend Viktor Farcic. You can find the Git repo here https://github.com/brunoa19/devops-toolkit.

We will be using GitHub Actions to help us build the image, push it to Google Container Registry, and trigger the deployment using Shipa, so let’s look at how the pipeline is constructed.

GitHub Secrets

Let’s create the required secrets! To get started, go to the “Settings” tab in your project and click “Secrets” in the sidebar. Then click on “New repository secret.”

You will need to create the following secrets:

    • GKE_PROJECT: The name of your project where your GKE cluster is located in your Google cloud account
    • GKE_SA_KEY: The Base64-encoded JSON key of the service account used for the project. More info is available at https://github.com/GoogleCloudPlatform/github-actions/tree/docs/service-account-key/setup-gcloud#inputs
    • SHIPA_APP: The name of the application we created on Shipa in the steps before that we will use to deploy our application (gitops in our case)
    • SHIPA_USER: The username you use to access Shipa
    • SHIPA_PASS: The password you use to access Shipa
    • SHIPA_SERVER: The IP of your Shipa instance (without HTTPS and port number, just the IP)

Using GitHub secrets in your workflow is relatively straightforward. Each secret is referenced with the “secrets.” prefix (for example, ${{ secrets.SHIPA_APP }}), which means we can easily use them in our config file.

Pipeline Settings

name: DevOps Toolkit - Prod

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  IMAGE: static-site

In this example, our actions run when there is either a push or a pull request on our “main” branch. Our pipeline config file is stored inside the .github/workflows directory, in a file called shipa-ci.yml.

The steps below build our DevOps Toolkit image using the Dockerfile present in the repository and, once built, store the image in our Google Container Registry.

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest

    steps:
    - name: ACTIONS_ALLOW_UNSECURE_COMMANDS
      id: ACTIONS_ALLOW_UNSECURE_COMMANDS
      run: echo 'ACTIONS_ALLOW_UNSECURE_COMMANDS=true' >> $GITHUB_ENV

    - name: Checkout
      uses: actions/checkout@v2

    # Setup gcloud CLI
    - uses: GoogleCloudPlatform/github-actions/setup-gcloud@0.1.3
      with:
        service_account_key: ${{ secrets.GKE_SA_KEY }}
        project_id: ${{ secrets.GKE_PROJECT }}

    # Configure Docker to use the gcloud command-line tool as a credential
    # helper for authentication
    - run: |-
        gcloud --quiet auth configure-docker

    # Install Hugo
    - run: |-
        wget https://github.com/gohugoio/hugo/releases/download/v0.55.4/hugo_0.55.4_Linux-64bit.deb
        sudo dpkg -i hugo_0.55.4_Linux-64bit.deb

    # Build Hugo and Docker image
    - name: Build
      run: |-
        make build
        docker build \
          --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
          --build-arg GITHUB_SHA="$GITHUB_SHA" \
          --build-arg GITHUB_REF="$GITHUB_REF" \
          .

The step below sets up the Shipa CLI:

    # Setup Shipa CLI
    - run: |-
        sudo wget https://storage.googleapis.com/shipa-client/1.2.0/shipa_linux_amd64
        sudo chmod +x shipa_linux_amd64 && mv -v shipa_linux_amd64 shipa
        ./shipa target add shipa ${{ secrets.SHIPA_SERVER }} -s
        echo ${{ secrets.SHIPA_PASS }} | ./shipa login ${{ secrets.SHIPA_USER }}

The final step in our pipeline triggers the deployment using Shipa:

    # Deploy the Docker image to the cluster through Shipa
    - name: Deploy
      run: |-
        ./shipa app deploy -a ${{ secrets.SHIPA_APP }} -i gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA --step-interval=1m --step-weight=10 --steps=6

A few things to notice during the deployment:

    • We are deploying to the app we created on Shipa in the previous steps, referenced through the SHIPA_APP secret
    • We are deploying using a canary rollout. If you want to run a straight deployment, just remove the --step-interval, --step-weight, and --steps flags (see the example below)
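
For reference, a plain (non-canary) deployment step would use the same command with only the app and image arguments:

./shipa app deploy -a ${{ secrets.SHIPA_APP }} -i gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA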

As we push code to the main branch, we can see that GitHub Actions starts doing its job, all the way to triggering the deployment through Shipa.

As the application is deployed, we have complete observability using Shipa.

Shipa for Kubernetes - observability

We can also have a complete overview of the application’s object and network dependency map:

We can see the detailed lifecycle of the application, with information and logs associated with every action taken:

Shipa for Kubernetes - lifecycle

Much more can be done with Shipa to operationalize Kubernetes and GitOps, such as security scans, RBAC, and network policies for operating your applications. With Shipa, you can not only deploy your apps, but manage them as well.

Another great point to mention is that we have done this all without creating Helm charts, Kustomize, deployment objects, services, etc. Shipa makes GitOps in Kubernetes much more dynamic and application-focused.

Thanks for reading this post. I hope you have a clearer picture of an application-centric GitOps model.

In the next post, we will cover how you can build an enterprise-level GitOps workflow. Stay tuned!


Talking Shipa – “What’s New in 1.2?”

Shipa is excited to launch our new webcast series, Talking Shipa. To kick this series off, we sat down with Shipa Founder and CEO, Bruno Andrade, to discuss the release of the Shipa Application Management Framework for Kubernetes, version 1.2, which includes application observability, a network policies map, and more.

In this video, Bruno spends a few minutes with us to talk about the new features and improvements that are packed into this new release.

Version 1.2 includes quite a few new features and improvements, but the items we focus on with Bruno include:

Improvements to:

    • Multi-cluster incl. AKS, EKS, OKE, GKE, IKS & OpenShift (3:15)
    • Multi-tenancy – an improved, detailed multi-tenancy model (3:15)

New features including:

    • Application Observability (5:26)
    • Network Policies Map (9:40)
    • Integration with Istio – incl. canary rollouts (9:40)
    • Vault integration (13:00)
    • Integration with Private Registries – incl. JFrog (14:20)
    • and more!

The video below is time coded (see the timestamps above) to help you navigate to the topics that interest you most.

It is free to get started with Shipa. Follow the button below for details regarding how to download and install Shipa 1.2:


Get started with Shipa today!


Full lifecycle developer-centric application automation for Kubernetes

Operationalizing Kubernetes

Organizations have now seen the value of building microservices. They deliver applications as discrete functional parts, each of which can be delivered as a container or service and managed separately. But for every application, there are more parts to manage than ever before, especially at scale, and that’s where many turn to an orchestrator or a Kubernetes framework for help. While Kubernetes is one of the most popular container orchestration projects on GitHub, many organizations are still intimidated by its complexity.

Kubernetes solves many problems by providing an extensible, declarative platform that automates container management for high availability, resiliency, and scale. But Kubernetes is a big, complex, fast-moving, and sometimes confusing platform that requires users to develop new skills, organizations to invest in building a new team to manage Kubernetes, platform teams to create a framework/interface for their developers on top of Kubernetes, and more. Often, this leads to slow adoption, increased costs, and a long list of frustrations.

As these enterprises start adopting Kubernetes, one key ask for their DevOps and Platform Engineering teams is to “operationalize” Kubernetes. While powerful, Kubernetes is a platform that, at scale, can easily blur the lines between developers and operations teams, introduce challenges around security and controls, quickly devolve into container and service sprawl, lead to unnecessary resource consumption, degrade developer experience and speed, and more. Overcoming these challenges and getting your organization to a state where it is comfortable with Kubernetes and where your software development and delivery teams can scale is not easy.

As we build Shipa, we talk to many organizations going through that “operationalization” process. As our roadmap is heavily driven by user input and use cases, we wanted to share some of the common issues faced and how we see users leveraging Shipa to overcome them and ultimately “operationalize Kubernetes”:


1. Building and automating compliance

It can be a tremendous job to ensure that a cluster’s desired configuration is set and maintained, especially when you provide Kubernetes as the interface for your developers to deploy and manage their applications directly.

In addition, enforcing compliance across networking, resource utilization, permissions for multiple teams, utilization of external services, security scans, and more can be overwhelming because these components are, many times, completely independent from one another and managed by different teams. Trying to build an automated compliance model on top of all this ends up being a daunting task for the DevOps or Platform Engineering team. It only leads to more time spent on building something that is not core to the business.

Kubernetes Framework: To tackle that problem, we designed Shipa to leverage what we call Frameworks. The DevOps and Platform Engineering teams can use frameworks to build and automate compliance around the items below:

Kubernetes Framework

The team’s focus is now on building the compliance framework that is applied automatically to applications and services deployed through it, rather than building a myriad of custom scripts, Helm charts, and Kustomize overlays, hiring more people to build and manage them, and so on.

Through the framework, the team focuses on what should be enforced at the application or service level. Because the framework can be bound to any Kubernetes cluster, the team can finally concentrate on the business requirements rather than on local cluster policies, Rego policy files, and the like, which can change drastically if you move to a different cluster provider, cluster version, or policy enforcement model.

With the frameworks created, you can then attach these frameworks to different clusters:

Shipa for Kubernetes - Kubernetes Framework

You can create multiple frameworks per cluster, and each framework can have a completely different configuration, so you can tackle use-cases such as Dev, QA, and Prod environments, different projects, and more where each may require a different level of RBAC, Network policy, and other compliance requirements.

2. Managing workload configuration deployed on the cluster

One common concern we hear from DevOps and Platform Engineering teams is the amount of resources consumed by applications and services deployed by the different development teams they support. We talk to companies that say they could easily reduce their compute utilization by 30% or even 40% if they could automatically enforce resource limits for the applications and services deployed. Still, they report issues identifying who owns a specific app or service, time wasted managing and maintaining a huge number of YAML files, and more.

Thinking about these issues, we created a component called Plans. As a DevOps or Platform engineer, you can create multiple plans where you can set a specific amount of memory, CPU, and Swap that can be consumed by applications and attach that plan to a specific framework:

Shipa for Kubernetes

If you attach a plan to a framework, every time an application is deployed using that framework, the plan limits will automatically be enforced, and through Shipa, you can monitor closely how much of these resources the apps or services are actually using, who owns them, and more. If you need to adjust the plan for a specific app or service at any time, you can directly attach a new plan to it, without the need to change YAML files, Helm charts, or Kustomize scripts.

3. Multi-tenant and multi-clusters

As you scale the clusters required by your company to run all the services your developers are deploying, it becomes more difficult to manage and control what’s deployed, where, and how across the different clusters and sometimes, different providers.

At this point, most people agree that multi-tenancy is not an easy thing to manage on Kubernetes, especially at scale and across multiple clusters and cluster providers.

Thinking about that, Shipa incorporated the concept of multi-tenancy at the framework level, which makes it incredibly easy for you to enforce roles and permissions for teams and users.

Shipa for Kubernetes - multi-tenancy

When using Shipa, Kubernetes clusters are no longer the interface given to your developers for development and deployment of apps and services, so Shipa facilitates the enforcement of multi-tenancy at the right level for the different teams and users within your organization:

Shipa for Kubernetes - permission list

In addition to RBAC and detailed permission options at the framework level, Shipa also provides an enhanced multi-tenancy level through its frameworks, which creates a namespace in your cluster for each framework, so apps and services deployed across different Shipa frameworks will be isolated by different namespaces across your clusters.

4. Security and agility

The pressure is on for organizations to continue to innovate at a pace never seen before, making it easy to treat security as an afterthought. Many organizations think container and microservice security can be simply interpreted as security scanning of your images and code. Still, security goes well beyond that and should be incorporated into your workflow.

Shipa for Kubernetes - workflow configuration

To ensure that your organization implements best practices, we embedded security as part of the framework workflow, incorporating security scanning, network policies, registry control, RBAC, and more. That workflow embedded into your CI/CD pipeline can be a powerful way to enforce security while enabling velocity for your developers.

We embedded a holistic approach to security into Shipa’s framework so that it can be tied to your CI/CD pipelines, allowing developers to simply focus on delivering application code and updates while Shipa’s framework enforces the different security settings automatically. With Shipa’s dashboard, you can observe security reports, deployment information, application ownership, application lifecycle, logs, and more. Combined, this becomes a powerful way for you to support your application post-deployment while enabling developers with better observability and velocity.

Shipa for Kubernetes - observability

It’s terrific to see Kubernetes and its community growing and more companies adopting it and scaling their deployments. Here at Shipa, we believe that a structured workflow leveraging a Kubernetes framework can help these organizations to adopt Kubernetes and microservices architecture at scale, at the pace they are looking for.

Resources:

Shipa Website: https://www.shipa.io

Shipa Documentation: https://learn.shipa.io

Getting Started with Shipa: https://learn.shipa.io/docs/getting-started-with-shipa

Try Shipa today!

Full lifecycle application-centric framework for Kubernetes

Framework for Kubernetes – Shipa 1.2 Release

Shipa 1.2 is now GA

Shipa (https://www.shipa.io), the full lifecycle application-centric framework for Kubernetes and multi-cluster portability, just got better! Version 1.2 is now available, and we are excited to share these key new features and improvements with the Shipa community.

Shipa creates the guardrails, compliance, and controls for your Kubernetes and OpenShift applications while helping eliminate all the YAML files, Helm charts, and custom scripts that are most likely piling up and slowing things down for your developers.


Shipa version 1.2 includes improvements to:

    • Multi-cloud incl. AKS, EKS, OKE, GKE, IKS & OpenShift
    • Multi-tenancy – improved detailed multi tenancy model

Shipa 1.2 key new features include:

    • Network Policies Map
    • Integration with Istio – incl. canary rollouts
    • Vault integration
    • Integration with Private Registries – incl. JFrog
Shipa for Kubernetes multi-cloud portability

New in 1.2:

Network Policies Map

Shipa 1.2 brings user experience to the next level by empowering organizations with a visual translation of standard Kubernetes network policies, representing the simple abstraction level that Shipa provides when restricting or allowing traffic flow between applications. Shipa users can set rules for the application and have an automated visualization of all application policies displayed on the Shipa UI.

The map captures the complexities configured under the hood in a rich diagram, allowing you to achieve specific networking rules without understanding how pod or namespace selectors work in the complex world of Kubernetes. Users can continue to think in an app-centric way and not be burdened with learning how to set up infrastructure objects.

The chart animation shows how traffic moves between all of the graphed nodes, so users have an exact representation of the incoming or outgoing network flow.

The map is an excellent tool for developers to quickly understand how applications are configured, and it can be used for a wide range of purposes. For example, from a security standpoint, this feature offers a quick view of whether the application is open to the world or to a specific set of applications/pools; this makes it easier for developers to match their policies to internal business requirements. This feature can also help developers from a debugging perspective. Because the chart shows how all applications are connected (or not), developers can quickly see whether a certain bug or issue stems from an infrastructure misconfiguration or from a codebase error.

Integration with Istio

Istio is an open-source service mesh developed by a collaboration between Google, IBM, and Lyft. It coordinates communication between services, providing service discovery, load balancing, security, recovery, telemetry, policy enforcement capabilities, and more.

Shipa users can now leverage their existing Istio ingress controller for their deployed applications.

Shipa simplifies using a service mesh by abstracting the complexities away, empowering users to define service communication policies.

Canary rollouts

Shipa users can leverage Istio for traffic routing rules, including canary rollouts based on percentage traffic splits. Canary rollouts allow you to test a new version of a service by sending a small amount of traffic to the new version. If the test is successful, you can gradually increase the traffic sent to the new version until all traffic is moved. If anything goes wrong along the way, you can abort the rollout and return the traffic to the old version.
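
For example, a canary rollout can be driven entirely from the CLI using the flags shown in the GitOps pipeline earlier in this post; the app name and image below are placeholders:

$ shipa app deploy -a <app-name> -i <image> --steps=6 --step-weight=10 --step-interval=1m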

Metrics

Istio generates a set of service metrics based on the four golden monitoring signals (latency, traffic, errors, and saturation). With all of these metrics available, Shipa users can take advantage of out-of-the-box integrations with their existing APM solutions and incident management tools. By doing that, Shipa makes it easier to solve problems quickly and build more resilient applications.

CNAME & HTTPS

Shipa integrates with cert-manager, and with one single command, Shipa automatically generates certificates for your CNAME.

By running “shipa cname-add {appname} {cname}” and routing your DNS to the Istio gateway endpoint, Shipa takes care of everything else.

Shipa also allows certificates to be added manually through the “shipa certificate-add” command.
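
As a sketch, the cname-add command from above would look like this, where myapp and shop.example.com are placeholders for your own app name and domain:

$ shipa cname-add myapp shop.example.com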

Vault integration

Users can now inject secrets from their HashiCorp Vault into their Kubernetes applications deployed using Shipa.

As many organizations migrate to the cloud, a significant concern has been how to best secure data. Vault is secret store software; it is used to securely store, manage, and control access to secrets (tokens, passwords, certificates, and API keys) on Kubernetes clusters.

For safety and user experience reasons, Shipa users manage their secrets directly in Vault. Shipa enables the user to pass all necessary Vault annotations through shipa.yaml; these annotations are used by the Vault sidecar to inject secrets into your app.

shipa.yaml

security:
  vault:
    annotations:
      vault.hashicorp.com/agent-inject: true
      vault.hashicorp.com/role: "internal-app"
      vault.hashicorp.com/agent-inject-secret-database-config.txt: "internal/data/database/config"

Integration with Private Registries

At Shipa, we believe that integration is essential for Continuous Delivery; for that reason, Shipa integrates with your current stack and tools in minutes.

Shipa now provides the ability to deploy applications with Docker images stored in private registries. This feature uses an image URL, Docker username, and password/access token to gain access.

Shipa offers full support for JFrog Artifactory, Docker Hub, Amazon ECR, Azure Container Registry, Google GCR, Nexus Repository, and more.

Try Shipa today

Shipa is easy to install and get started

OKE deployments on Kubernetes – Faster and Safer with Shipa

OKE Deployments just got faster and safer using Shipa! Shipa’s application management framework, integrated into OKE, provides an out-of-the-box way for organizations to build, deploy and operate the full life-cycle of Kubernetes applications. With Shipa and OKE, organizations can make up for lost time and start getting value out of Kubernetes immediately.

In this webcast, you will learn how Shipa and OKE:

  • Provide developers an application-centric view they need so they never have to think about Kubernetes objects again.
  • Allow platform engineers to put up guardrails and centrally control deployments across multiple clusters, monitor application performance, and implement network policies and security configurations from a centralized dashboard.
  • Eliminate the need for custom Helm charts, Terraform scripts and YAML, reducing months of work that has to be done before pushing your first application into production.

Watch the full video discussion:

“OKE Deployments on Kubernetes – Faster and Safer with Shipa”

Try Shipa today!

Full lifecycle application-centric framework for Kubernetes, so everyone can focus on applications

Making Kubernetes Disappear with Shipa

In this excellent YouTube video, Marcel Dempers, aka That DevOps Guy, explains how Shipa (https://www.shipa.io) makes Kubernetes disappear so developers can focus on coding while providing the controls the DevOps team needs.

With Shipa, not a single YAML file is needed to deploy an application across multiple clouds. See for yourself…

Video Description

Subscribe to show your support! https://goo.gl/1Ty1Q2 

Today we’re taking a look at a new platform called Shipa. Shipa is a full lifecycle application-centric framework that runs on top of Kubernetes so that everyone can focus on applications. 

Checkout https://www.shipa.io/ for more on the platform 

Checkout the source code below 👇🏽 and follow along 🤓 

Also if you want to support the channel further, become a member 😎 https://marceldempers.dev/join 

Checkout “That DevOps Community” too https://marceldempers.dev/community 

Source Code 🧐 https://github.com/marcel-dempers/doc… 

Kubernetes in the Cloud: https://www.youtube.com/playlist?list…

If you are new to Kubernetes, check out my getting started playlist on Kubernetes below 🙂 Kubernetes Guide for Beginners: https://www.youtube.com/playlist?list…

Kubernetes Monitoring Guide: https://www.youtube.com/playlist?list…

Kubernetes Secret Management Guide: https://www.youtube.com/playlist?list…

More about Marcel Dempers aka That DevOps Guy:

I am a Solutions Architect and my passions are platform architecture, distributed systems engineering, micro-services, containers, and cloud native technology. I’m a DevOps evangelist and encourage the use of automation technology and open source to help folks become autonomous.

I want to build up a platform where I can share everything I’ve learnt about software engineering and architecture.

Also, there are a ton of things I want to learn, so this is going to be a relaxed, vlog environment of me learning new things and taking you all on a journey, with weekly to bi-weekly video uploads.

I’ll be building software, and documenting things on my GitHub as I go along. Come learn with me! Subscribe! 🙂

https://www.youtube.com/c/MarcelDempers

———-

More videos like “Making Kubernetes Disappear” on the Shipa YouTube channel

Shipa Integration with CircleCI

Kubernetes can bring a wide collection of advantages to a development organization. Properly leveraging Kubernetes can greatly improve productivity, empower you to better utilize your cloud spend, improve application stability and reliability, and more. On the flip side, if you are not properly leveraging Kubernetes, your would-be benefits become drawbacks. As a developer, this can become especially frustrating when you are focused on delivering quality code, fast. The learning curve and management of the object-centric application architecture, scripting and integrations into multiple CI systems and pipelines, and managing infrastructure can all make you less productive. According to a survey conducted by Tidelift and The New Stack, just 32% of a developer’s time is spent writing new code or improving existing code. The other 68% is spent in meetings, code maintenance, testing, security issues, and more.

“Respondents spend 35% of their time managing code, including code maintenance (19%), testing (12%) and responding to security issues (4%).”

Chris Grams

What if developers were empowered to take full advantage of the benefits of Kubernetes while avoiding the associated pitfalls? A new integration between CircleCI and Shipa may offer exactly that. CircleCI is dedicated to maximizing speed and configurability with customizable pipelines. Shipa is focused on simplifying Kubernetes so that developers can spend more time doing what they do best. The partnership and integrations between both solutions allow developers to leverage Kubernetes and all of its associated benefits without changing the way they work. Your platform engineering team is able to manage, secure, and deliver a powerful Kubernetes platform for the entire development organization to benefit from.

In the video above (https://www.youtube.com/watch?v=DvW13w_2HOs), Shipa founder Bruno Andrade demonstrates the CircleCI and Shipa integration. Using a simple Ruby app, a developer can deploy to Kubernetes without creating a single Kubernetes object or its related YAML files (a major pain point most developers have when deploying to Kubernetes). With any Git repository, a developer can code, check in, and watch CircleCI and Shipa do the rest. Shipa picks up the deployment from CircleCI and abstracts the entire Kubernetes deployment process from the developer’s point of view.

With the application already running in a GKE cluster connected to Shipa, a developer can add a quick update to the application and check it into a Git repository. From there, the CircleCI pipeline immediately picks up the change, delivers the updated bits to Shipa, and Shipa manages the deployment to the GKE cluster.

NO MORE YAML!

As a developer, you will not need to create anything related to Kubernetes. In fact, I feel confident that even someone who is just starting on their Kubernetes journey, with a very basic understanding of it, can get started easily and speed up the adoption process. The deployment layer is completely abstracted, allowing a platform engineering team to manage a robust Kubernetes environment, including all relevant security scans, without slowing down the development team.

Finally, the video also covers additional benefits of the Shipa and CircleCI integration, including historical application information, resource consumption in the cluster, the entire lifecycle, successful and failed deployments, and the ability to roll back to a different version of the application on Kubernetes, again without really needing to know how it is done.

It should also be noted that, although the video shows Google Cloud and GitHub in this instance, you are not actually tied to a cloud provider or a Git repository. You can leverage this integration in any single or hybrid type of environment with the provider of your choice. Another great benefit to this powerful partnership between Shipa and CircleCI.

https://www.shipa.io/

https://circleci.com/

 

See for yourself

Install and deploy your applications on Kubernetes with minimal infrastructure overhead. With the integration of Shipa and CircleCI across workflows, developers can deploy and manage applications on Kubernetes without the need to create or manage objects and YAML files.

Deploying Applications on Kubernetes

Developing and deploying applications to Kubernetes locally with Shipa and Minikube

In a previous article, we discussed why we frequently hear that developers are not that keen on Kubernetes. You can read it here.

In summary, while developers certainly see the value of Kubernetes, they want to continue focusing on their application code and updates and not be impacted by the company’s Kubernetes initiative, which is quite fair.

I’m sure that developers, platform engineers, and DevOps engineers have all explored available solutions to mitigate the amount of infrastructure-related work that Kubernetes adds to the developer’s workload. While there are a few options available, developers quickly discover that these tools bring additional difficulties, such as:

  • Integrating their development workflow into the overall organization’s structure and requirements is a challenge.
  • When using these tools, it’s hard for the developer to comply with security, resource utilization, and more.
  • It’s not always easy to migrate locally developed applications to Test and Production clusters. It ends up requiring some level of YAML and object manipulation to make their apps work on different clusters.
  • It’s challenging to have a “production-like” environment locally.
  • And more…

While developers certainly see the value of Kubernetes, they want to have the capability to continue focusing on their application.

To address the challenges, developers we have spoken with say that they need a solution that:

  • Allows developers to focus on code only and removes the need to create and maintain objects and YAML files.
  • Makes application deployment on Kubernetes locally easy so they can quickly test their applications and updates.
  • Facilitates moving the applications from their local environment to other clusters, e.g., Test, Production, etc.
  • Empowers them to leverage a production-like environment locally, where they can work with the same settings required around application performance, monitoring, security, and more.

To help achieve this, I am detailing below how to implement Shipa and Minikube, which will give you both a local Kubernetes cluster and Shipa’s application framework.

Installing Minikube

To install Minikube, you just need to follow step 1 described in the following link: 

https://minikube.sigs.k8s.io/docs/start/

Installing VirtualBox

We will be using VirtualBox as the driver for Minikube.

VirtualBox provides packages for the different operating systems, which you can download from the following link:

https://www.virtualbox.org/wiki/Downloads

Starting a Cluster

Once you install both tools, it’s now time for you to get a cluster running, which you can do using the following command:

minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox

The command above will create a Kubernetes cluster version 1.18 with 5GB of memory and 20GB of disk. Even though you can adjust this as needed based on the resources you have available, keep in mind the amount of resources you need to run Kubernetes and your apps when resizing this.

Running the command above will give you an output similar to the one below:

 minikube start --kubernetes-version='v1.18.2' --memory='5gb' --disk-size='20gb' --driver=virtualbox
* minikube v1.14.2 on Darwin 10.15.6
* Using the virtualbox driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=2, Memory=5120MB, Disk=20480MB) ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.12 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" by default

To make sure your cluster started successfully, you can run the following command:

 kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m39s   v1.18.2

Installing Shipa

With your local cluster running, you can now install Shipa.

Shipa can be downloaded and installed in your local cluster as a Helm chart. You can download Shipa’s Helm chart using the following command:

git clone https://github.com/shipa-corp/helm-chart.git

Once the download is complete, you can access Shipa’s Helm chart by simply entering the following:

cd helm-chart/

Inside the folder, you will now apply the resource limits to the services created by Shipa using the following command:

 kubectl apply -f limits.yaml
limitrange/limits created

With the above completed, you will now update the chart dependencies using the following command:

 helm dep up
load.go:112: Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
Saving 2 charts
Downloading docker-registry from repo https://kubernetes-charts.storage.googleapis.com
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

Now it’s time for you to install Shipa in your local cluster. You can do so by running the Helm command below:

helm install shipa . \
--timeout=15m \
--set=metrics.image=gcr.io/shipa-1000/metrics:30m \
--set=auth.adminUser=admin@shipa.io \
--set=auth.adminPassword=shipa2020 \
--set=shipaCore.serviceType=ClusterIP \
--set=shipaCore.ip=10.100.10.20 \
--set=service.nginx.serviceType=ClusterIP \
--set=service.nginx.clusterIP=10.100.10.10

The install process should take a few minutes, which can vary depending on the amount of memory allocated to your local Kubernetes cluster. One easy way to identify when Shipa’s install is complete is to make sure you see the shipa-init-job-x pod marked as completed and the dashboard-web-x pods created and running. You can check it using the following command:
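
# One way to watch the pods come up (pod names may vary slightly between chart versions;
# adjust the filter or namespace if you installed the chart elsewhere)
kubectl get pods --all-namespaces -w | grep -E 'shipa|dashboard'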

Once running, you should now add routes for Shipa’s ingress, which can be done with the commands below:

Route for NGINX:

 sudo route -n add -host -net 10.100.10.10/32 $(minikube ip )
Password:
add net 10.100.10.10: gateway 192.168.99.106

Route for Traefik:

 sudo route -n add -host -net 10.100.10.20/32 $(minikube ip )
add net 10.100.10.20: gateway 192.168.99.106

With Shipa installed and routes in place, you will need to download Shipa’s CLI to your local machine. Shipa’s CLI is available for different operating systems, and download links can be found here:

https://learn.shipa.io/docs/downloading-the-shipa-client

With Shipa’s CLI in place, the last step is to add your local instance of Shipa to your CLI as a Shipa target, which you can do by using the command below:

 shipa target-add -s shipa-v11 10.100.10.10
New target shipa-v11 -> https://10.100.10.10:8081 added to target list and defined as the current target

With your local Shipa instance added as a target, you can use the login command:

 shipa login
Email: admin@shipa.io
Password: 
Successfully logged in!

The email and password used above are the ones set in the Helm install command.

With the login complete, you can now find the address to Shipa’s dashboard by using the following command:

 shipa app list
+-------------+-----------+-------------------------------------------+
| Application | Units     | Address                                   |
+-------------+-----------+-------------------------------------------+
| dashboard   | 1 started | http://dashboard.10.100.10.20.shipa.cloud |
+-------------+-----------+-------------------------------------------+

If you access the address displayed above, you will see Shipa’s dashboard:

The login credentials are the same ones you set when installing Shipa through the Helm install command and the same ones you used to log in through the CLI.

Deploying a Sample Application

With Shipa and Kubernetes in place, we can now deploy our first application.

There are two ways of deploying applications on Shipa, and both are covered below:

  1. Using a pre-built image
  2. Deploying from source

Using a pre-built image

It’s possible that there is already a Docker image in place, and you want that image to be deployed to Kubernetes by Shipa. If that’s the case, you can follow the steps below:

Create an application on Shipa:

 shipa app create go-hello -t admin
App "go-hello" has been created!
Use app-info to check the status of the app and its units.
Your repository for "go-hello" project is "git@10.100.10.10:go-hello.git"

The command above creates an application framework that Shipa will use to deploy your application and, once deployed, give you an application-context view and operational controls. Once you execute the command, you will be able to see your application both in the Shipa dashboard and through the Shipa CLI:
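
View from the CLI, using the app-info command suggested in the output above (a sketch; flags may vary slightly between CLI versions):

 shipa app-info -a go-hello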

View from the dashboard:

Deploy the image to Kubernetes through Shipa:

When deploying, you should use the command app deploy, as shown in the example below:

 shipa app deploy -a go-hello -i gcr.io/cosimages-206514/golang-shipa@sha256:054d98bcdc2a53f1694c0742ea87a2fcd6fc4cda8a293f1ffb780bbf15072a2b

The image used above is a sample Golang application that you can also use as a test. Once the deployment process is complete, you can see the application in a running state in Shipa’s dashboard:

From there, you can see your application endpoint URL, monitoring, metrics, and more.

Deploying from source

You also have the option to deploy your application directly from source, which saves you the time of having to build and manage Dockerfiles, images, and more.

When deploying from source, you can deploy from source located on your local machine, directly from your CI pipeline, or from your local IDE. For the sake of simplicity, in this document we will deploy from source located on your local machine.

Compared to deploying from an image, the first difference when deploying from source is that you need to enable the language support (or platform, as it is called inside Shipa) for your application. Since we will use a Ruby sample application, we should enable the Ruby platform on Shipa:

 shipa platform add ruby

Once the process is complete, we can then create the framework for our Ruby application:

 shipa app create ruby-app1 ruby -t admin
App "ruby-app1" has been created!
Use app-info to check the status of the app and its units.
Your repository for "ruby-app1" project is "git@10.100.10.10:ruby-app1.git"

The command above sets the application name and the application platform, which, in our case, is Ruby.

You can find detailed information about application management on Shipa through the following link:

https://learn.shipa.io/docs/application

For our sample Ruby application, you can download the source code from the following Git repository:

 git clone https://github.com/shipa-corp/ruby-sample.git

Now, you can then deploy the Ruby source code by using the command below:

 shipa app deploy -a ruby-app1 -f ruby-sample/

The command above will build the image required to run the Ruby application and deploy it to Kubernetes using Shipa. Once the deployment process is complete, the same way as before, you can see your application available both through the Shipa CLI and dashboard:

There is so much more you can do with Shipa. Still, hopefully this helped you learn how to deploy and test your applications locally on Kubernetes, using Shipa as your application framework, without having to create images, objects, YAML files, and more, so you can speed up your development process.

In the next blog entry, we will discuss how to move these applications from your local environment to other clusters that you may have using Shipa.

Are Developers Not That Keen On Kubernetes?

Should application developers learn Kubernetes? Let’s ask an even deeper question; should application developers even be aware of Kubernetes in their infrastructure?

I frequently hear this question being asked by DevOps, Platform Engineering, and Development teams. Of course, this is a discussion that brings very different views from different people and can result in a very long debate.

Kubernetes, without a doubt, provides far more functionality than the average developer needs. While Kubernetes is robust and provides dozens of types of objects (around 50 the last time I checked), developers don’t care how many replicas of their service are running, what roles it has, or if it’s running via StatefulSets; all they care about is getting an HTTPS endpoint that they can use to deliver their product to their users.


When it comes to Kubernetes, even small changes can have significant ripple effects. As a result, even if developers are experienced with Kubernetes, operators may be reluctant to give them access to a cluster. 

To try and mitigate that concern, we’ve seen organizations spend anywhere between 1 and 2 years trying to build an intermediary layer between the application and Kubernetes to:

  • Allow Platform Engineering and DevOps to maintain control of the cluster.
  • Limit and manage the number of clusters available.
  • Abstract Kubernetes away from the Developers, so they can simply push code to GitHub, and the rest is taken care of for them.

This may appear unnecessary when only a handful of developers deploy only a few applications or services; however, the story quickly changes as an organization’s number of clusters, applications, and services in Kubernetes begins to scale. The development team is generally the first to feel frustrated by the growing complexity, which greatly increases the chance of inexperienced AND experienced developers becoming distracted, less productive, and more prone to mistakes. Developers need to deal with infrastructure more these days, so the focus should be on simplifying, not complicating.

The way we see it, just as Docker turned complex tools such as cgroups into user-friendly products, the same should happen with Kubernetes: turn it into a user-friendly application management framework.

Considering this, we decided to build Shipa to do precisely that: grow Kubernetes into a user-friendly application management framework. Shipa’s goal is to allow developers to focus on their application code while empowering DevOps and Platform engineers to better manage and control their clusters and infrastructure.

Shipa makes the deployment, management, and control of applications easy. Shipa does not treat Kubernetes as a first-class citizen; Shipa reserves that title for the applications and the teams that develop and control them. Doing so allows developers not to worry about ConfigMaps, ingress rules, PVs, PVCs, etc. in their day-to-day work. Even if DevOps and Platform Engineering teams decide tomorrow to move from one Kubernetes cluster to another or across different providers, the way applications are deployed, operated, and controlled will not be impacted.

Software is getting complicated, and business requirements are evolving rapidly. The easier we make it for developers to deploy their applications and for DevOps and Platform Engineering teams to build controls and guardrails around them, the more value they will deliver, faster and more securely.

Try Shipa