16 posts tagged with "Kubernetes"

Why Fast Feedback Loops Matter When Working with Kubernetes

· 9 min read

Debugging and Testing Made Easy with mirrord

Introduction

In software development, fast, iterative feedback loops — known as dev loops — are essential for rapid prototyping and innovation. However, the complexity and disparity between local and remote execution environments can make this process challenging. Instead of staying in a flow state and quickly moving from idea to prototype, developers often have to deal with adjacent issues: building, deploying, and managing CI/CD pipelines.

Overlaying Kubernetes on top of an already complex process can turn development into a slow and painful experience. This complexity is why so many attempts have been made to solve this problem. Today, we will explore one of the most promising solutions.

The key to fast development cycles is reducing feedback time. The quicker you can test and debug, the faster you can iterate and improve.

Who Is This Content For?

This blog is for developers, DevOps engineers, and tech leads who work with Kubernetes and are tired of slow development cycles. If you’re looking for ways to improve your productivity and streamline your workflow, this is for you.

The Pain of Slow Dev Loops

Working with Kubernetes typically involves:

  • Creating Containers: This step can be time-consuming, especially when dealing with complex applications.
  • Pushing Images: Uploading images to a registry can take a significant amount of time, depending on the size of the image and the network speed.
  • Deploying to the Cluster: Waiting for the Kubernetes cluster to pull the image and start the containers adds to the delay.
  • Waiting for Feedback: Once the application is running, developers need to wait for logs and test results, which can take several minutes or more.

Each of these steps contributes to a slow development cycle, making it difficult to quickly test and debug changes. This can be particularly frustrating when trying to resolve critical issues or implement new features under tight deadlines.

By the end of this blog, you’ll learn strategies to speed up your development loop, reduce waiting times, and make debugging and testing more efficient.

Fast Feedback = Rapid Prototyping

What if we could integrate development loops so that the inner loop (code, test, build/run) seamlessly extends into the outer loop (build, test, scan, deploy, release)?

This would result in much faster feedback and iteration, leading to increased innovation. However, the development environment on my laptop or in a remote containerized service like Gitpod differs from the execution environment where the deployment is running.

What are the options?

  • Recreate the Kubernetes environment locally?
    Viable for some workloads.
  • What about recreating a cloud environment locally?
    Possible with LocalStack for AWS or Testcontainers, but only to some extent.

The lowest common denominator

An interesting question to ask would be: is there something that both environments share? They do share the same container, but that’s not what I develop; it’s just a packaging mechanism. Digging deeper… how about a process 💡?

A process executing code runs on my machine the same way a process executing code runs in a containerized environment in the cloud.

Execution context matters

When executing code locally, everything needs to be set up on the local machine, including environment variables, file access, and networking. This setup ensures that the code can run as expected within the local environment. However, the context changes significantly when moving to a remote environment, such as a public or private cloud.

In a remote environment, which often includes a Kubernetes cluster running in the cloud, the infrastructure is more complex. Environment variables might be managed through cloud services or orchestration tools, file access could involve distributed storage systems, and networking might include virtual networks, load balancers, and security groups. Additionally, the context may include various integrated services such as databases, message queues, and other networking components.

Photo by Shubham Dhage on Unsplash

A piece of code that works perfectly on a local machine might fail in the cloud due to missing environment variables, incorrect network configurations, or issues accessing remote databases and queues. This discrepancy forces developers to replicate the remote environment locally or debug issues that only appear in the cloud, slowing down the development cycle.

Testing Locally with Remote Environment

So far we have established that in order to combine the two development loops and make prototyping faster, we need to run our local process in the context of its remote environment.

This is the recipe for success: run my local process in the context of a remote environment.

Here is where mirrord can help!

Mirrord

Does exactly what we need:

mirrord is an open-source tool that lets developers run local processes in the context of their cloud environment. It makes it incredibly easy to test your code on a cloud environment (e.g. staging) without actually going through the hassle of Dockerization, CI, or deployment, and without disrupting the environment by deploying untested code.

https://mirrord.dev/docs/overview/introduction/

Running App In Local Context

A simple Node.js app connects to an Azure Storage Account and displays file content. As a developer, my task is to test this app. Let's run it locally by executing these two justfile recipes:

start_server:
    nodemon server.js

browser: start_server
    browser-sync start --proxy "localhost:3000" --files "server.js" "public/**/*"
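
With just installed, you can quickly list the available recipes before running them:

# Show all recipes defined in the justfile
just --list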

If you want to learn more about justfile, see my post "Master Command Orchestration: Transform Your Projects with Just" on itnext.io.

My app works, but there is a connectivity error.

Right, that’s not going to help us test a new version of my app. With mirrord, we can connect my local process to the remote execution environment, where my app is running in a pod.

This results in the traffic, environment variables, and file operations being mirrored into my locally running process. Whew, that’s a mouthful. Let’s run mirrord:

# Run mirrord on deployment, this resolves to a single pod
mirrord:
    @mirrord exec --target-namespace devops-team \
      --target deployment/foo-app-deployment \
      nodemon server.js
✗ just mirrord
New mirrord version available: 3.106.0. To update, run: `"curl -fsSL https://raw.githubusercontent.com/metalbear-co/mirrord/main/scripts/install.sh | bash"`.
To disable version checks, set env variable MIRRORD_CHECK_VERSION to 'false'.
When targeting multi-pod deployments, mirrord impersonates the first pod in the deployment.
Support for multi-pod impersonation requires the mirrord operator, which is part of mirrord for Teams.
You can get started with mirrord for Teams at this link: https://mirrord.dev/docs/overview/teams/
* Running binary "nodemon" with arguments: ["server.js"].
* mirrord will target: deployment/foo-app-deployment, no configuration file was loaded
* operator: the operator will be used if possible
* env: all environment variables will be fetched
* fs: file operations will default to read only from the remote
* incoming: incoming traffic will be mirrored
* outgoing: forwarding is enabled on TCP and UDP
* dns: DNS will be resolved remotely
⠖ mirrord exec
✓ Update to 3.106.0 available
✓ ready to launch process
✓ layer extracted
✓ operator not found
✓ agent pod created
✓ pod is ready
✓ config summary
[nodemon] 3.1.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node server.js`
Listing all containers and their first blob:
Server running on port 3000
Container: sample-blob

Notice that now all environment variables are fetched from the remote execution environment (the Kubernetes cluster).

* env: all environment variables will be fetched

Now we should be able to navigate to localhost:3000 and see that the connection was successful:

What we have achieved is that calls from our local process behave as if they were made by the remote process, and calls to the remote process are mirrored to our local process.

How does mirrord work?

Simplifying a bit: mirrord operates in an agentless mode, using kubectl to connect to a cluster and create a temporary pod for mirroring. The key components are the mirrordAgent, a Rust binary running as a Kubernetes job that proxies the local process by sniffing network traffic and accessing the file system of the target pod, and the mirrordLayer, a dynamic library that hooks into the local process to relay its file system and network operations to the mirrordAgent.

source: author, based on https://mirrord.dev/docs/reference/architecture/

Real world is messier than that

Now we can iterate rapidly on a new version of my app and see how it behaves in the actual remote execution environment. This is, however, a simple example; there are scenarios where we might want to:

  • redirect or steal all the traffic instead of mirroring it
  • redirect only a subset of calls, or filter out Kubernetes health checks
  • redirect database writes to a local db to avoid duplicated remote writes
  • run a new app or a tool in the remote context using targetless mode

mirrord supports all of these scenarios, and more are being developed.
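
For example, traffic stealing with an HTTP header filter can be enabled through mirrord's configuration file (by default .mirrord/mirrord.json, picked up automatically by mirrord exec). This is a minimal sketch based on the configuration schema documented at mirrord.dev; the header name is only an illustration:

{
  "feature": {
    "env": true,
    "fs": "read",
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-debug-session: yes"
        }
      }
    }
  }
}

With a filter like this, only requests carrying the matching header are stolen by the local process, while all other traffic continues to reach the deployed pod undisturbed.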

Closing Thoughts

Developing containerized applications for Kubernetes is hard. There are many options to help with the process, including tools like telepresence, skaffold, okteto, garden, and many more.

This tells us two things: first, there is no standardized way of supporting developers in their efforts; and second, the space is still growing and maturing. mirrord has a unique place in this ecosystem. The blend of very little to no friction and strong defaults makes it an attractive solution for cloud-native development.

By seamlessly mirroring traffic, environment variables, and file operations from the remote environment to the local process, it eliminates the common pain points in Kubernetes development. This approach not only speeds up the development cycle but also ensures that your local environment is as close to production as possible, reducing the risk of unexpected issues during deployment.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

How to Build Cloud Native Platforms with Kubernetes

· 10 min read

header-image

Developer Portals, GitOps, Best Practices

Introduction

Platform Engineering focuses on empowering developers and organizations by creating and maintaining internal software products known as platforms. In this blog, we will explore what platforms are, why they are important, and uncover best practices for creating and maintaining well-architected platforms.

This content will be valuable for Platform Engineers, Architects, DevOps Specialists, or anyone curious about how platforms can drive innovation and efficiency in development.

Understanding Platform Categories: Mapping the Terrain

Similar to DevOps, Platform Engineering struggles to define a platform concisely. A good way to understand platforms is to list the various kinds of platforms and their characteristics.

Platform Types

  • Business as a Platform: Consider Uber, the entire product is a platform that connects users and drivers. This platform creates an ecosystem where businesses operate, users engage, and interactions happen seamlessly.

  • Domain-Specific Platforms: These platforms provide cross-cutting functionality for other applications. An example could be a geolocation API that is consumed by a web frontend, a mobile app, and other services.

  • Domain-Agnostic Platforms: These platforms serve as foundational building blocks for developers, offering essential tools like database management, cloud storage, and user authentication. Cloud platforms like AWS or Azure provide the infrastructure and services that countless digital products rely on and are a good mental model to have when designing our own cloud native platform.

The platform landscape is vast and varied. From business-centric models to specialized domain platforms and versatile tools for developers, each plays a pivotal role in the digital ecosystem.

In this blog, we will focus on domain-agnostic platforms providing infrastructure.

The Case for Cloud-Native Platforms

Cloud-native is about how applications are created and deployed, not where. — Priyanka Sharma

Cloud-native platforms provide a foundation that allows applications to be designed with flexibility, largely making them environment agnostic. A well-architected platform offers several key benefits:

  • Simplified infrastructure management: Infrastructure provisioning and management are abstracted in a way that enables developers to move faster without compromising security and compliance requirements.

  • Increased development efficiency: A well-designed platform should increase developer productivity, improving metrics such as time to first commit, incident resolution time, and time to onboard new developers.

  • Built-in scalability and reliability: A successful platform brings to the table elements that are not part of core development efforts but are crucial for product success: observability, scalability, automated rollbacks, integrated authentication, and more.

Building Blocks of Cloud-Native Platforms

Self-service portal

A self-service portal is a user-friendly interface that allows users to access and manage resources independently, empowering developers and users to create, configure, and deploy resources without IT support. This streamlines workflows, accelerates project timelines, and enhances productivity.

Examples like Backstage, developed by Spotify, and Port provide customizable interfaces for managing developer tools and services, ensuring efficient and consistent interactions. These portals embody the essence of self-service, enabling quick, autonomous actions that reduce bottlenecks and foster agility in development processes.

Programmatic APIs

Programmatic APIs are the backbone of cloud-native platforms, enabling seamless interaction with platform services and functionalities. These APIs allow developers to automate tasks, integrate different services, and build complex workflows, enhancing efficiency and consistency across environments.

APIs provide programmatic access to essential platform features, allowing developers to automate repetitive tasks and streamline operations. They support various transport mechanisms such as REST, HTTP, and gRPC, offering flexibility in how services communicate. For instance, an API based on the Kubernetes Resource Model enables developers to manage containerized applications, while AWS SDKs facilitate interactions with a wide range of cloud resources. By leveraging programmatic APIs, platforms ensure that developers can efficiently build, deploy, and manage applications, driving productivity and innovation.

Automated Workflows

Automated workflows are crucial for provisioning and deployment processes in cloud-native platforms. They ensure tasks are executed consistently and efficiently, minimizing human error and enhancing productivity.

Key to these workflows are CI/CD pipelines, which automate the build, test, and deployment stages of application development. Tools like Argo CD and Flux enable GitOps practices, where infrastructure and application updates are managed through Git repositories. By leveraging automated workflows, platforms can ensure rapid, reliable deployments, maintain consistency across environments, and accelerate the overall development process.
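
To make this concrete, here is a minimal Argo CD Application manifest that syncs a folder from a Git repository into a cluster; the repository URL, paths, and names are placeholders, not part of any specific demo:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git
    targetRevision: main
    path: apps/my-service/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # remove resources that disappeared from Git
      selfHeal: true   # revert manual drift in the cluster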

Monitoring and Observability

Monitoring and observability tools provide crucial insights into the performance and health of cloud-native platforms. These tools help detect issues early, understand system behavior, and ensure applications run smoothly.

Prominent tools include Prometheus for collecting and querying metrics, Grafana for visualizing data and creating dashboards, and OpenTelemetry for tracing and observability. Together, they enable proactive management of resources, quick resolution of issues, and comprehensive visibility into system performance. By integrating these tools, platforms can maintain high availability and performance, ensuring a seamless user experience.

Security and Governance Controls

Integrated security and governance controls are vital for maintaining compliance and protecting sensitive data in cloud-native platforms. These controls ensure that platform operations adhere to security policies and regulatory requirements.

Tools like OPA GateKeeper, Kyverno, and Falco play a crucial role in enforcing security policies, managing configurations, and detecting anomalies. OPA GateKeeper and Kyverno help in policy enforcement and compliance, while Falco specializes in runtime security and intrusion detection. By incorporating these tools, platforms can ensure robust security, maintain compliance, and mitigate risks effectively.
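
As a concrete illustration, a small Kyverno ClusterPolicy can require that every Deployment carries a team label; the label name here is an example, not a prescription:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Every Deployment must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value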

Ever Evolving

The only constant in technology is change. — Marc Benioff

Developer platforms are constantly evolving to meet the changing needs of developers and users alike. This continuous evolution ensures that platforms remain relevant, efficient, and capable of supporting the latest innovations and best practices. By staying adaptable and forward-thinking, platforms can provide the tools and features necessary to drive ongoing success and innovation.

Embracing Kubernetes Resource Model APIs

An Application Programming Interface (API) is a set of rules and protocols for building and interacting with software.

The Kubernetes Resource Model API is the industry standard for managing resources in a cloud-native environment. Kubernetes acts as a universal control plane, continuously reconciling the desired state with the actual state of the system. Standardizing on this model offers several key benefits:

  1. Industry-Wide Standardization: Kubernetes has become the de facto standard for cloud-native infrastructure management. Its API-driven approach is widely adopted, ensuring compatibility and ease of integration with various tools and services.

  2. Universal Control Plane: Kubernetes serves as a universal control plane, providing a centralized management interface for infrastructure and applications. This centralization simplifies operations and enforces consistency across environments.

  3. Continuous Reconciliation: The Kubernetes API supports declarative management, where the desired state of resources is defined, and Kubernetes continuously reconciles this state. This automated reconciliation reduces manual intervention and ensures system reliability.

  4. Separation of Concerns: Platform engineers can configure infrastructure and policies, while developers interact with higher-level APIs. This separation enhances automation and self-service capabilities, empowering developers without compromising security or compliance.

  5. Scalability and Extensibility: Supporting transport mechanisms like REST, HTTP, and gRPC, the Kubernetes API is adaptable and scalable. It integrates seamlessly with a wide range of tools, facilitating the growth and evolution of the platform.

By leveraging Kubernetes Resource Model APIs, organizations can build robust, scalable, and efficient platforms that meet the dynamic needs of modern development environments.
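
In practice, these higher-level APIs often take the form of custom resources. A hypothetical Crossplane-style claim, where the platform team owns the composition behind it, might look like this (the API group and fields are invented for illustration):

apiVersion: platform.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
  namespace: team-a
spec:
  parameters:
    version: "15"      # desired engine version
    storageGB: 20      # abstracted sizing knob
  compositionSelector:
    matchLabels:
      provider: aws    # platform team maps this to concrete cloud resources

A developer applies only this claim; the details of subnets, parameter groups, and credentials stay behind the platform's API boundary.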

Platform as a Product: A New Perspective

Adopting a product approach to platform engineering is crucial for creating successful internal platforms. This means focusing on delivering continuous value to users — developers and the organization. It involves understanding user needs, designing and testing solutions, implementing them, and gathering feedback for continuous improvement.

Cloud hyperscalers like AWS, Google Cloud, and Microsoft Azure exemplify this approach. They have built user-centric platforms that are constantly updated with new features, driven by user feedback and accessible via standardized APIs. This ensures they remain relevant and valuable.

For internal platforms, roles such as product owners and project managers are essential. They help ensure the platform evolves in response to developer needs, maintaining usability and effectiveness. By treating your internal platform as a product, you create a sustainable resource tailored to your organization’s unique needs.

Platform as a Product

Delivering Value Through Cloud-Native Platforms

In our demo video, we showcase how to build a platform that embodies key cloud-native principles. This practical example demonstrates the immense value that a well-architected cloud-native platform can deliver. Here’s a brief overview of what you can expect:

  • Empowering Developers: See how the platform provides developers with the tools and autonomy they need to innovate and deliver faster.

  • Cloud-Native Principles: Watch as we leverage containerization, microservices, and other cloud-native practices to build a robust, scalable platform.

  • API-Driven Approach: Discover how using programmatic APIs streamlines operations, enhances automation, and ensures seamless integration between services.

  • GitOps Workflow: Learn how the platform employs GitOps practices to manage infrastructure as code, enabling more efficient and reliable deployments.

Watch the video to see these principles in action and understand how they come together to create a powerful, developer-centric platform.

Essential Tools

In the demo, you can see a range of tools that form the backbone of cloud-native platforms, each serving a critical role. From Kubernetes as the control plane orchestrator to GitHub for managing API calls via pull requests, these tools collectively ensure efficient, scalable, and secure infrastructure management.

Recap: Platform Components in Action

Let’s recap what we’ve learned about using Kubernetes as a control plane for infrastructure provisioning:

  1. Self-Service Portal: The Developer accesses the IDP Portal for a unified UI experience to manage applications and infrastructure.

  2. Push Changes: The Developer pushes changes to the GitOps Repository via a pull request.

  3. Approval and Merge: The Platform Engineer reviews, approves, and merges the pull request, updating configurations.

  4. Sync Changes: The GitOps Repository syncs the changes to ArgoCD.

  5. Deploy Changes: ArgoCD deploys the changes to the Kubernetes API.

  6. Reconcile Infrastructure: The Kubernetes API reconciles the infrastructure via Crossplane.

  7. Provision Infrastructure: Crossplane provisions the infrastructure via various providers.

This sequence ensures a streamlined, automated process for managing and provisioning infrastructure using Kubernetes and GitOps principles.

platform-components

Closing Thoughts

Cloud-native platforms are revolutionizing how we develop and manage applications by providing robust, scalable, and secure environments. They empower developers with self-service portals, streamline operations with programmatic APIs, and ensure reliability through automated workflows and comprehensive monitoring tools. By embracing these platforms, organizations can accelerate innovation, enhance productivity, and maintain high standards of security and compliance.

Treating platforms as products ensures continuous improvement and alignment with user needs, making them indispensable tools in today’s fast-paced tech landscape. Whether you’re a Platform Engineer, Architect, or DevOps Specialist, leveraging cloud-native platforms can drive significant value, fostering a culture of efficiency and agility. Stay ahead of the curve, explore the potential of cloud-native platforms, and watch your organization thrive.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

How to Simplify Kubernetes Deployments with Kluctl: A Beginner's Guide

· 9 min read

intro-to-kluctl

Introduction

Kubernetes has revolutionized the way we manage containerized applications, providing a robust and scalable platform for deploying, scaling, and operating these applications. However, despite its powerful capabilities, managing Kubernetes deployments can sometimes be challenging. Popular tools like Helm and Kustomize have become the standard for many teams, but they might not always meet every need.

This is where Kluctl steps in. Kluctl is a deployment tool for Kubernetes that aims to combine the strengths of Helm and Kustomize while addressing their limitations. It provides a more flexible and declarative approach to Kubernetes deployments, making it an excellent choice for those seeking an alternative.

In this blog, we'll explore Kluctl's unique features and how it can streamline your Kubernetes deployment process. Whether you're an experienced Kubernetes user or just getting started, this guide will provide valuable insights into why Kluctl might be the tool you've been looking for.

An interactive version of this blog is available on killercoda.com:

interactive scenario

What is Kluctl?

Kluctl is a modern deployment tool designed specifically for Kubernetes. It aims to simplify and enhance the deployment process by combining the best aspects of Helm and Kustomize, while also addressing some of their shortcomings. With Kluctl, you can manage complex Kubernetes deployments more efficiently and with greater flexibility.

Key Features of Kluctl

  1. Declarative Configuration: Kluctl allows you to define your deployments declaratively using YAML files. This approach ensures that your deployments are consistent and reproducible.

  2. GitOps Ready: Kluctl integrates seamlessly with GitOps workflows, enabling you to manage your deployments via Git. This integration supports continuous deployment practices and makes it easier to track changes and rollbacks.

  3. Flexible and Modular: Kluctl supports modular configurations, making it easy to reuse and share components across different projects. This modularity reduces duplication and enhances maintainability.

  4. Validation and Diffing: One of Kluctl's standout features is its built-in validation and diffing capabilities. Before applying changes, Kluctl shows you what changes will be made, allowing you to review and approve them (see the example below). This feature helps prevent accidental misconfigurations and ensures deployments are accurate.
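
For example, once a project defines a dev target (as ours will later in this post), you can ask for a dry-run diff before deploying anything:

# Show what would change on the dev target without applying it
kluctl diff -t dev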

source: https://kluctl.io/

Why Choose Kluctl?

  • Enhanced Flexibility: Kluctl provides a higher degree of flexibility compared to traditional tools like Helm and Kustomize. It enables you to customize and manage your deployments in a way that best fits your workflow and organizational needs.

  • Improved Collaboration: By leveraging GitOps, Kluctl enhances collaboration within teams. All deployment configurations are stored in Git, making it easy for team members to review, suggest changes, and track the history of deployments.

  • Reduced Complexity: Kluctl simplifies the deployment process, especially for complex applications. Its modular approach allows you to break down deployments into manageable components, making it easier to understand and maintain your Kubernetes configurations.

In summary, Kluctl is a powerful tool that enhances the Kubernetes deployment experience. Its declarative nature, seamless GitOps integration, and advanced features make it an excellent choice for teams looking to improve their deployment workflows.

Installing Kluctl

Getting started with Kluctl is straightforward. The following steps will guide you through the installation process, allowing you to set up Kluctl on your local machine and prepare it for managing your Kubernetes deployments.

Step 1: Install Kluctl CLI

First, you need to install the Kluctl command-line interface (CLI). The CLI is the primary tool you'll use to interact with Kluctl.

To install the Kluctl CLI, run the following command:

curl -sSL https://github.com/kluctl/kluctl/releases/latest/download/install.sh | bash

This script downloads and installs the latest version of Kluctl. After the installation is complete, verify that Kluctl has been installed correctly by checking its version:

kluctl version

You should see output indicating the installed version of Kluctl, confirming that the installation was successful.

Step 2: Set Up a Kubernetes Cluster

Before you can use Kluctl, you need to have a Kubernetes cluster up and running. Kluctl interacts with your cluster to manage deployments, so it's essential to ensure that you have a functioning Kubernetes environment. If you haven't set up a Kubernetes cluster yet, you can refer to my previous blogs for detailed instructions on setting up clusters using various tools and services like Minikube, Kind, GKE, EKS, or AKS.
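
If you just need a throwaway local cluster to follow along, kind is usually the quickest option (the cluster name below is arbitrary):

# Create a local Kubernetes cluster in Docker and point kubectl at it
kind create cluster --name kluctl-demo
kubectl cluster-info --context kind-kluctl-demo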

Kluctl in action

Next, we will set up a basic kluctl project. To start using kluctl, define a .kluctl.yaml file in the root of your project with the targets you want to deploy to.

Let's create a folder for our project and create a .kluctl.yaml file in it.

mkdir kluctl-project && cd kluctl-project

cat <<EOF > .kluctl.yaml
discriminator: "kluctl-demo-{{ target.name }}"

targets:
  - name: dev
    context: kubernetes-admin@kubernetes
    args:
      environment: dev
  - name: prod
    context: kubernetes-admin@kubernetes
    args:
      environment: prod

args:
  - name: environment
EOF

This file defines two targets, dev and prod, that will deploy to the same Kubernetes cluster.

We can use the args section to define the arguments that we will use in our YAML files to template them. For example, {{ args.environment }} would output dev or prod depending on the target we are deploying to.
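
As a quick illustration of how such an argument can be used, a templated manifest in the project could reference it directly (a hypothetical ConfigMap, not part of this tutorial):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  ENVIRONMENT: "{{ args.environment }}"   # renders as "dev" or "prod" per target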

Create Deployment

Next, we will create a kustomize deployment for the redis application. Under the hood, kluctl uses kustomize to manage the Kubernetes manifests. kustomize is a tool that lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

💡 We are following the Basic Project Setup Introduction tutorial from the kluctl documentation.

Let's create a deployment.yaml where we will define elements that kluctl will use to deploy the application.

cat <<EOF > deployment.yaml
deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"
EOF

Now we need to create the redis deployment folder.

mkdir redis && cd redis

Since we are using kustomize, we need to create a kustomization.yaml file.

cat <<EOF > kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
EOF

And now we can create the service.yaml and deployment.yaml files.

cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
          readinessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          livenessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          volumeMounts:
            - mountPath: /data
              name: redis-data
          resources:
            limits:
              memory: 256Mi
              cpu: 125m
            requests:
              cpu: 70m
              memory: 200Mi
      volumes:
        - name: redis-data
          emptyDir: {}
EOF

cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
EOF

Deploy the app

Next, we will deploy the redis application to the dev target. First, we need to change to the root of the kluctl-project repository and initialize a git repository there.

cd /root/kluctl-project && \
git init && \
git add . && \
git commit -m "Initial commit"

Now we can deploy the application to dev environment.

kluctl deploy --yes -t dev

💡 Notice that we are using the --yes flag to avoid the confirmation prompt. This is useful for the scenario, but in real life you should always review the changes before applying them.

Handling Changes

Next, we will introduce changes to our setup and see how kluctl handles them. Let's see what we have deployed so far by executing the tree command.

.
|-- deployment.yaml
|-- kustomization.yaml
`-- redis
|-- deployment.yaml
|-- kustomization.yaml
`-- service.yaml

1 directory, 5 files

💡 Notice this resembles a typical kustomize directory structure.

One of the superpowers of kluctl is how transparently it handles changes. Let's modify the redis deployment and see what happens.

yq -i eval '.spec.replicas = 2' redis/deployment.yaml

Now let's deploy the changes to the dev target.

kluctl deploy --yes -t dev

Remember that at the beginning we added custom labels to each deployment. Let's see if the labels were correctly applied.

kubectl get deployments -A --show-labels

Templating

Next, we will use the templating capabilities of kluctl to deploy the same application to a different namespace. At the beginning of the workshop, we defined two different environments: prod and dev. This setup works out of the box for multiple targets (clusters); however, in our case we have a single target (cluster) and want to deploy different targets to different namespaces.

Let's start by deleting the existing resources and modifying some files.

💡 It is possible to migrate the resources to a different namespace using the kluctl prune command. However, in this case, we will delete the old resources and recreate them in new namespaces.

kluctl delete --yes -t dev

In order to differentiate between the two environments, we will need to adjust the discriminator field in the .kluctl.yaml file.

yq e '.discriminator = "kluctl-demo-{{ target.name }}-{{ args.environment }}"' -i .kluctl.yaml

We also need to create a namespace folder with a namespace YAML file and reference it from our deployment.yaml file.

First create the namespace folder.

mkdir namespace

Now we can add the namespace folder to the deployment.yaml file.

💡 Notice the use of barrier: true in the deployment.yaml file. This tells kluctl to apply the resources in the order they are defined in the file and to wait for the resources before the barrier to be ready before applying the next ones.

cat <<EOF > deployment.yaml
deployments:
  - path: namespace
  - barrier: true
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"

overrideNamespace: kluctl-demo-{{ args.environment }}
EOF

Now let's create the namespace YAML file.

cat <<EOF > ./namespace/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kluctl-demo-{{ args.environment }}
EOF

Test the deployment

We will test if our setup works by deploying the redis application to the dev and prod namespaces. Deploying the resources to the dev namespace:

kluctl deploy --yes -t dev

And to the prod namespace:

kluctl deploy --yes -t prod

Let's check if everything deployed as expected:

kubectl get pods,svc -n kluctl-demo-dev
kubectl get pods,svc -n kluctl-demo-prod

Closing thoughts

That's it! We have seen the basic capabilities of kluctl.

We have barely scratched the surface of kluctl's capabilities. You can use it to deploy to multiple clusters, namespaces, and even different environments.

The mix of Jinja2-based templating and kustomize architecture makes it a really flexible tool for complex deployments.

Next Steps


Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium

GitOpsify Cloud Infrastructure with Crossplane and Flux

· 9 min read

In this article we are going to learn how to automate the provisioning of cloud resources via Crossplane and combine it with GitOps practices.

You will most benefit from this blog if you are a Platform or DevOps Engineer, Infrastructure Architect or Operations Specialist.

If you are new to GitOps, read more about it in my blog GitOps with Kubernetes.

Let's set the stage by imagining the following context. We are working as part of a Platform Team in a large organization. Our goal is to help Development Teams onboard and get up to speed with our Cloud Infrastructure. Here are a few base requirements:

  • Platform Team doesn't have resources to deal with every request individually, so there must be a very high degree of automation
  • Company policy is to adopt the principle of least privilege. We should expose the cloud resources only when needed with the lowest permissions necessary.
  • Developers are not interested in managing the cloud; they should only consume cloud resources without even needing to log in to a cloud console.
  • New Teams should get their own set of cloud resources when on-boarding to the Platform.
  • It should be easy to provision new cloud resources on demand.

Initial Architecture

The requirements lead us to an initial architecture proposal with the following high-level solution strategy.

  • create template repositories for various types of workloads (using Backstage Software Templates would be helpful)
  • once a new Team is onboarded and creates its first repository from a template, a CI pipeline is triggered and deploys common infrastructure components by adding the repository as a Source to the Flux infrastructure repo
  • once a Team wants to create more cloud infrastructure, they can place the Crossplane claim YAMLs in the designated folder in their repository
  • adjustments to this process are easily implemented using Crossplane Compositions

In a real-world scenario we would also manage Crossplane using Flux, but for demo purposes we are focusing only on the application level.

The developer experience should be similar to this:

TeamBootstrap

Tools and Implementation

Knowing the requirements and initial architecture, we can start selecting the tools. For our example, the tools we will use are Flux and Crossplane.

We are going to use Flux as a GitOps engine, but the same could be achieved with ArgoCD or Rancher Fleet.

Let's look at the architecture and use cases that both tools support.

Flux Architecture Overview

Flux exposes several components in the form of Kubernetes CRDs and controllers that help express a workflow with the GitOps model. Below is a short description of the three major components; each has a corresponding CRD.

flux-architecture source: https://github.com/fluxcd/flux2

  1. Source Controller: Its main role is to provide a standardized API to manage the sources of Kubernetes deployments: Git and Helm repositories.

    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: podinfo
      namespace: default
    spec:
      interval: 1m
      url: https://github.com/stefanprodan/podinfo
  2. Kustomize Controller: This is the CD part of the workflow. Where source controllers specify sources of data, this controller specifies what artifacts to run from a repository.

    This controller can work with kustomization files, but also with plain Kubernetes manifests.

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: webapp
      namespace: apps
    spec:
      interval: 5m
      path: "./deploy"
      sourceRef:
        kind: GitRepository
        name: webapp
        namespace: shared
  3. Helm Controller: This operator helps manage Helm chart releases containing Kubernetes manifests and deploys them onto a cluster.

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: backend
      namespace: default
    spec:
      interval: 5m
      chart:
        spec:
          chart: podinfo
          version: ">=4.0.0 <5.0.0"
          sourceRef:
            kind: HelmRepository
            name: podinfo
            namespace: default
          interval: 1m
      upgrade:
        remediation:
          remediateLastFailure: true
      test:
        enable: true
      values:
        service:
          grpcService: backend
        resources:
          requests:
            cpu: 100m
            memory: 64Mi

Crossplane Architecture Overview

Let’s look at what the Crossplane component model looks like. A word of warning: if you are new to Kubernetes, this might be overwhelming, but there is value in making the effort to understand it. The below diagram shows the Crossplane component model and its basic interactions.

Crossplane-architecture Source: Author based on Crossplane.io

Learn more about Crossplane in my blog "Infrastructure as Code: the next big shift is here"

Demo

If you want to follow along with the demo, clone this repository, it contains all the scripts to run the demo code.

Prerequisites

In this demo, we are going to show how to use Flux and Crossplane to provision an EC2 instance directly from a new GitHub repository. This simulates a new team onboarding to our Platform.

To follow along, you will need AWS CLI configured on your local machine.

Once you obtain credentials, configure a default profile for the AWS CLI following this tutorial.
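
Setting up a default profile typically comes down to running aws configure and pasting the credentials; the values below are placeholders:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: eu-central-1
# Default output format [None]: json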

Locally, you will need:

  • Docker Desktop or other container run time
  • WSL2 if using Windows
  • kubectl

Run make in the root folder of the project; this will:

If you are running on a Mac, use make setup_mac instead of make.

  • Install kind (Kubernetes IN Docker) if not already installed
  • Create kind cluster called crossplane-cluster and swap context to it
  • Install crossplane using helm
  • Install crossplane CLI if not already installed
  • Install flux CLI if not already installed
  • Install AWS provider on the cluster
  • Create a temporary file with AWS credentials based on default CLI profile
  • Create a secret with AWS credentials in crossplane-system namespace
  • Configure AWS provider to use the secret for provisioning the infrastructure
  • Remove the temporary file with credentials so it's not accidentally checked into the repository

The following tools need to be installed manually:

IMPORTANT: The demo code will create a small EC2 Instance in the eu-central-1 region. The instance and underlying infrastructure will be removed as part of the demo, but please make sure all the resources were successfully removed, and in case of any disruptions in the demo flow, be ready to remove the resources manually.

Setup Flux Repository

  • create a new kind cluster with make; this will install Crossplane with the AWS provider and configure a secret to access the selected AWS account

    The Flux CLI was installed as part of the Makefile script, but optionally you can configure shell completion for the CLI: . <(flux completion zsh)

    Refer to the Flux documentation page for more installation options

  • create an access token in GitHub with full repo permissions. github-token
  • export variables for your GitHub user and the newly created token
    • export GITHUB_TOKEN=<token copied from GitHub>
    • export GITHUB_USER=<your user name>
  • use flux to bootstrap a new GitHub repository so flux can manage itself and underlying infrastructure

    Flux will look for GITHUB_USER and GITHUB_TOKEN variables and once found will create a private repository on GitHub where Flux infrastructure will be tracked.

    flux bootstrap github \
      --owner=${GITHUB_USER} \
      --repository=flux-infra \
      --path=clusters/crossplane-cluster \
      --personal

Setup Crossplane EC2 Composition

Now we will install a Crossplane Composition that defines what cloud resources to create when someone asks for an EC2 claim.

  • setup Crossplane composition and definition for creating EC2 instances

    • kubectl crossplane install configuration piotrzan/crossplane-ec2-instance:v1
  • fork repository with the EC2 claims

    • gh repo fork https://github.com/Piotr1215/crossplane-ec2

      answer YES when prompted whether to clone the repository

Clone Flux Infra Repository

  • clone the flux infra repository created in your personal repos

    git clone git@github.com:${GITHUB_USER}/flux-infra.git

    cd flux-infra

Add Source

  • add source repository to tell Flux what to observe and synchronize

    Flux will register this repository and every 30 seconds check for changes.

  • execute below command in the flux-infra repository, it will add a Git Source

    flux create source git crossplane-demo \
    --url=https://github.com/${GITHUB_USER}/crossplane-ec2.git \
    --branch=master \
    --interval=30s \
    --export > clusters/crossplane-cluster/demo-source.yaml
  • the previous command created a file in the clusters/crossplane-cluster subfolder; commit the file

    • git add .
    • git commit -m "Adding Source Repository"
    • git push
  • execute kubectl get gitrepositories.source.toolkit.fluxcd.io -A to see active Git Repositories sources in Flux

Create Flux Kustomization

  • set up a watch on the AWS managed resources; for now, there should be none

    watch kubectl get managed

  • create a Flux Kustomization to watch a specific folder in the repository with the Crossplane EC2 claim

    flux create kustomization crossplane-demo \
    --target-namespace=default \
    --source=crossplane-demo \
    --path="./ec2-claim" \
    --prune=true \
    --interval=1m \
    --export > clusters/crossplane-cluster/crossplane-demo.yaml
    • git add .
    • git commit -m "Adding EC2 Instance"
    • git push
  • after a minute or so you should see a new EC2 Instance being synchronized with Crossplane and resources in AWS

new-instance-sync

Let's take a step back and make sure we understand all the resources and repositories used.

repos

The first repository we have created is what Flux uses to manage itself on the cluster as well as other repositories. In order to tell Flux about a repository with Crossplane EC2 claims, we have created a GitSource YAML file that points to the HTTPS address of the repository with the EC2 claims.

The EC2 claims repository contains a folder where plain Kubernetes manifest files are located. In order to tell Flux what files to observe, we have created a Kustomization and linked it with GitSource via its name. Kustomization points to the folder containing K8s manifests.
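
Putting both pieces together, the two exported files in clusters/crossplane-cluster should look roughly like this (exact apiVersion values depend on your Flux release):

# demo-source.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/<GITHUB_USER>/crossplane-ec2.git

# crossplane-demo.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  interval: 1m
  path: ./ec2-claim
  prune: true
  sourceRef:
    kind: GitRepository
    name: crossplane-demo
  targetNamespace: default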

Cleanup

  • to cleanup the EC2 Instance and underlying infrastructure, remove the claim-aws.yaml demo from the crossplane-ec2 repository
    • rm ec2-claim/claim-aws.yaml
    • git add .
    • git commit -m "EC2 instance removed"
  • after a commit or when the timer lapses, Flux will synchronize and Crossplane will pick up the removed artifact and delete the cloud resources

    the ec2-claim folder must be present in the repo after the claim yaml is removed, otherwise Flux cannot reconcile

Manual Cleanup

In case you cannot use the repository, it's possible to clean up the resources by deleting them from Flux.

  • deleting the Flux Kustomization with flux delete kustomization crossplane-demo will remove all the resources from the cluster and AWS
  • to clean up the EC2 Instance and underlying infrastructure, remove the EC2 claim from the cluster: kubectl delete VirtualMachineInstance sample-ec2

Cluster Cleanup

  • wait until watch kubectl get managed output doesn't contain any AWS resources
  • delete the cluster with make cleanup
  • optionally remove the flux-infra repository

Summary

GitOps with Flux or Argo CD and Crossplane offers a very powerful and flexible model for Platform builders. In this demo, we focused on the application side, with Kubernetes clusters deployed some other way: with Crossplane, Fleet, Cluster API, etc.

What we achieved on top of using Crossplane’s Resource Model is that we no longer interact with kubectl directly to manage resources, but rather delegate this activity to Flux. Crossplane still runs on the cluster and reconciles all resources. In other words, we've moved the API surface from kubectl to Git.

Kubernetes Cookbook

· 14 min read

intro-pic

Overview

This documentation assumes basic knowledge of Kubernetes and kubectl. To learn or refresh container orchestration concepts, please refer to the official documentation:

How to Create Kubernetes Homelab

· 7 min read

Photo by Clay Banks on Unsplash

How to create Kubernetes home lab on an old laptop

Introduction

I’ve recently “discovered” an old laptop forgotten somewhere in the depths of my basement and decided to create a mini home lab with it where I could play around with Kubernetes.