
15 posts tagged with "kubernetes"


How to Build Cloud Native Platforms with Kubernetes

· 10 min read


Developer Portals, GitOps, Best Practices


Platform Engineering focuses on empowering developers and organizations by creating and maintaining internal software products known as platforms. In this blog, we will explore what platforms are, why they are important, and uncover best practices for creating and maintaining well-architected platforms.

This content will be valuable for Platform Engineers, Architects, DevOps Specialists, or anyone curious about how platforms can drive innovation and efficiency in development.

Understanding Platform Categories: Mapping the Terrain

Similar to DevOps, Platform Engineering struggles to define a platform concisely. A good way to understand what a platform is, is to list the various kinds of platforms and their characteristics.

Platform Types

  • Business as a Platform: Consider Uber, the entire product is a platform that connects users and drivers. This platform creates an ecosystem where businesses operate, users engage, and interactions happen seamlessly.

  • Domain-Specific Platforms: These platforms provide cross-cutting functionality for other applications. An example could be a geolocation API that is consumed by web frontend, mobile app and other services.

  • Domain-Agnostic Platforms: These platforms serve as foundational building blocks for developers, offering essential tools like database management, cloud storage, and user authentication. Cloud platforms like AWS or Azure provide the infrastructure and services that countless digital products rely on and are a good mental model to have when designing our own cloud native platform.

The platform landscape is vast and varied. From business-centric models to specialized domain platforms and versatile tools for developers, each plays a pivotal role in the digital ecosystem.

In this blog, we will focus on domain-agnostic platforms providing infrastructure.

The Case for Cloud-Native Platforms

Cloud-native is about how applications are created and deployed, not where. — Priyanka Sharma

Cloud-native platforms provide a foundation that allows applications to be designed with flexibility, largely making them environment agnostic. A well-architected platform offers several key benefits:

  • Simplified infrastructure management: Infrastructure provisioning and management are abstracted in a way that enables developers to move faster without compromising security and compliance requirements.

  • Increased development efficiency: A well-designed platform should increase developer productivity, improving metrics such as time to first commit, incidents-to-resolution, and time to onboard new developers.

  • Built-in scalability and reliability: A successful platform brings to the table elements that are not part of core development efforts but are crucial for product success: observability, scalability, automated rollbacks, integrated authentication, and more.

Building Blocks of Cloud-Native Platforms

Self-service portal

A self-service portal is a user-friendly interface that allows users to access and manage resources independently, empowering developers and users to create, configure, and deploy resources without IT support. This streamlines workflows, accelerates project timelines, and enhances productivity.

Examples like Backstage, developed by Spotify, and Port provide customizable interfaces for managing developer tools and services, ensuring efficient and consistent interactions. These portals embody the essence of self-service, enabling quick, autonomous actions that reduce bottlenecks and foster agility in development processes.

Programmatic APIs

Programmatic APIs are the backbone of cloud-native platforms, enabling seamless interaction with platform services and functionalities. These APIs allow developers to automate tasks, integrate different services, and build complex workflows, enhancing efficiency and consistency across environments.

APIs provide programmatic access to essential platform features, allowing developers to automate repetitive tasks and streamline operations. They support various transport mechanisms such as REST, HTTP, and gRPC, offering flexibility in how services communicate. For instance, an API based on the Kubernetes Resource Model enables developers to manage containerized applications, while AWS SDKs facilitate interactions with a wide range of cloud resources. By leveraging programmatic APIs, platforms ensure that developers can efficiently build, deploy, and manage applications, driving productivity and innovation.

Automated Workflows

Automated workflows are crucial for provisioning and deployment processes in cloud-native platforms. They ensure tasks are executed consistently and efficiently, minimizing human error and enhancing productivity.

Key to these workflows are CI/CD pipelines, which automate the build, test, and deployment stages of application development. Tools like Argo CD and Flux enable GitOps practices, where infrastructure and application updates are managed through Git repositories. By leveraging automated workflows, platforms can ensure rapid, reliable deployments, maintain consistency across environments, and accelerate the overall development process.

Monitoring and Observability

Monitoring and observability tools provide crucial insights into the performance and health of cloud-native platforms. These tools help detect issues early, understand system behavior, and ensure applications run smoothly.

Prominent tools include Prometheus for collecting and querying metrics, Grafana for visualizing data and creating dashboards, and OpenTelemetry for tracing and observability. Together, they enable proactive management of resources, quick resolution of issues, and comprehensive visibility into system performance. By integrating these tools, platforms can maintain high availability and performance, ensuring a seamless user experience.

Security and Governance Controls

Integrated security and governance controls are vital for maintaining compliance and protecting sensitive data in cloud-native platforms. These controls ensure that platform operations adhere to security policies and regulatory requirements.

Tools like OPA GateKeeper, Kyverno, and Falco play a crucial role in enforcing security policies, managing configurations, and detecting anomalies. OPA GateKeeper and Kyverno help in policy enforcement and compliance, while Falco specializes in runtime security and intrusion detection. By incorporating these tools, platforms can ensure robust security, maintain compliance, and mitigate risks effectively.
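To make policy enforcement concrete, here is a sketch of a Kyverno ClusterPolicy that requires a `team` label on Deployments. The policy name and label key are illustrative, not taken from any specific platform setup:

```yaml
# Illustrative Kyverno policy: reject Deployments missing a "team" label.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "The label 'team' is required on Deployments."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```

Policies like this let the platform team encode governance once, while developers get immediate feedback at admission time instead of during an audit.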

Ever Evolving

The only constant in technology is change. — Marc Benioff

Developer platforms are constantly evolving to meet the changing needs of developers and users alike. This continuous evolution ensures that platforms remain relevant, efficient, and capable of supporting the latest innovations and best practices. By staying adaptable and forward-thinking, platforms can provide the tools and features necessary to drive ongoing success and innovation.

Embracing Kubernetes Resource Model APIs

An Application Programming Interface (API) is a set of rules and protocols for building and interacting with software.

The Kubernetes Resource Model API is the industry standard for managing resources in a cloud-native environment. Kubernetes acts as a universal control plane, continuously reconciling the desired state with the actual state of the system. Standardizing on this model offers several key benefits:

  1. Industry-Wide Standardization: Kubernetes has become the de facto standard for cloud-native infrastructure management. Its API-driven approach is widely adopted, ensuring compatibility and ease of integration with various tools and services.

  2. Universal Control Plane: Kubernetes serves as a universal control plane, providing a centralized management interface for infrastructure and applications. This centralization simplifies operations and enforces consistency across environments.

  3. Continuous Reconciliation: The Kubernetes API supports declarative management, where the desired state of resources is defined, and Kubernetes continuously reconciles this state. This automated reconciliation reduces manual intervention and ensures system reliability.

  4. Separation of Concerns: Platform engineers can configure infrastructure and policies, while developers interact with higher-level APIs. This separation enhances automation and self-service capabilities, empowering developers without compromising security or compliance.

  5. Scalability and Extensibility: Supporting transport mechanisms like REST, HTTP, and gRPC, the Kubernetes API is adaptable and scalable. It integrates seamlessly with a wide range of tools, facilitating the growth and evolution of the platform.

By leveraging Kubernetes Resource Model APIs, organizations can build robust, scalable, and efficient platforms that meet the dynamic needs of modern development environments​.
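As a minimal sketch of the declarative, continuously reconciled model described above: you declare the desired state, and Kubernetes drives the actual state toward it without manual intervention.

```yaml
# Desired state: three replicas of a web server.
# Kubernetes continuously reconciles toward this declaration;
# if a pod dies, a replacement is scheduled automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

The same pattern extends beyond workloads: with projects like Crossplane, databases and networks become declarative resources reconciled by the same control plane.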

Platform as a Product: A New Perspective

Adopting a product approach to platform engineering is crucial for creating successful internal platforms. This means focusing on delivering continuous value to users — developers and the organization. It involves understanding user needs, designing and testing solutions, implementing them, and gathering feedback for continuous improvement.

Cloud hyperscalers like AWS, Google Cloud, and Microsoft Azure exemplify this approach. They have built user-centric platforms that are constantly updated with new features, driven by user feedback and accessible via standardized APIs. This ensures they remain relevant and valuable.

For internal platforms, roles such as product owners and project managers are essential. They help ensure the platform evolves in response to developer needs, maintaining usability and effectiveness. By treating your internal platform as a product, you create a sustainable resource tailored to your organization’s unique needs.

Platform as a Product

Delivering Value Through Cloud-Native Platforms

In our demo video, we showcase how to build a platform that embodies key cloud-native principles. This practical example demonstrates the immense value that a well-architected cloud-native platform can deliver. Here’s a brief overview of what you can expect:

  • Empowering Developers: See how the platform provides developers with the tools and autonomy they need to innovate and deliver faster.

  • Cloud-Native Principles: Watch as we leverage containerization, microservices, and other cloud-native practices to build a robust, scalable platform.

  • API-Driven Approach: Discover how using programmatic APIs streamlines operations, enhances automation, and ensures seamless integration between services.

  • GitOps Workflow: Learn how the platform employs GitOps practices to manage infrastructure as code, enabling more efficient and reliable deployments.

Watch the video to see these principles in action and understand how they come together to create a powerful, developer-centric platform.

Essential Tools

In the demo, you can see a range of tools that form the backbone of cloud-native platforms, each serving a critical role. From Kubernetes as the control plane orchestrator to GitHub for managing API calls via pull requests, these tools collectively ensure efficient, scalable, and secure infrastructure management.

Recap: Platform Components in Action

Let’s recap what we’ve learned about using Kubernetes as a control plane for infrastructure provisioning:

  1. Self-Service Portal: The Developer accesses the IDP Portal for a unified UI experience to manage applications and infrastructure.

  2. Push Changes: The Developer pushes changes to the GitOps Repository via a pull request.

  3. Approval and Merge: The Platform Engineer reviews, approves, and merges the pull request, updating configurations.

  4. Sync Changes: The GitOps Repository syncs the changes to ArgoCD.

  5. Deploy Changes: ArgoCD deploys the changes to the Kubernetes API.

  6. Reconcile Infrastructure: The Kubernetes API reconciles the infrastructure via Crossplane.

  7. Provision Infrastructure: Crossplane provisions the infrastructure via various providers.

This sequence ensures a streamlined, automated process for managing and provisioning infrastructure using Kubernetes and GitOps principles.
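The GitOps sync-and-deploy steps above are typically driven by an Argo CD Application resource. A rough sketch is shown below; the repository URL and path are illustrative, not the actual demo repository:

```yaml
# Illustrative Argo CD Application: sync a Git path to the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-repo.git  # hypothetical repo
    targetRevision: main
    path: infrastructure
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band changes
```

With automated sync enabled, merging the pull request is the deployment action: Argo CD notices the new commit and applies it.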


Closing Thoughts

Cloud-native platforms are revolutionizing how we develop and manage applications by providing robust, scalable, and secure environments. They empower developers with self-service portals, streamline operations with programmatic APIs, and ensure reliability through automated workflows and comprehensive monitoring tools. By embracing these platforms, organizations can accelerate innovation, enhance productivity, and maintain high standards of security and compliance.

Treating platforms as products ensures continuous improvement and alignment with user needs, making them indispensable tools in today’s fast-paced tech landscape. Whether you’re a Platform Engineer, Architect, or DevOps Specialist, leveraging cloud-native platforms can drive significant value, fostering a culture of efficiency and agility. Stay ahead of the curve, explore the potential of cloud-native platforms, and watch your organization thrive.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

How to Simplify Kubernetes Deployments with Kluctl: A Beginner's Guide

· 9 min read



Kubernetes has revolutionized the way we manage containerized applications, providing a robust and scalable platform for deploying, scaling, and operating these applications. However, despite its powerful capabilities, managing Kubernetes deployments can sometimes be challenging. Popular tools like Helm and Kustomize have become the standard for many teams, but they might not always meet every need.

This is where Kluctl steps in. Kluctl is a deployment tool for Kubernetes that aims to combine the strengths of Helm and Kustomize while addressing their limitations. It provides a more flexible and declarative approach to Kubernetes deployments, making it an excellent choice for those seeking an alternative.

In this blog, we'll explore Kluctl's unique features and how it can streamline your Kubernetes deployment process. Whether you're an experienced Kubernetes user or just getting started, this guide will provide valuable insights into why Kluctl might be the tool you've been looking for.

An interactive version of this blog is available as an interactive scenario.

What is Kluctl?

Kluctl is a modern deployment tool designed specifically for Kubernetes. It aims to simplify and enhance the deployment process by combining the best aspects of Helm and Kustomize, while also addressing some of their shortcomings. With Kluctl, you can manage complex Kubernetes deployments more efficiently and with greater flexibility.

Key Features of Kluctl

  1. Declarative Configuration: Kluctl allows you to define your deployments declaratively using YAML files. This approach ensures that your deployments are consistent and reproducible.

  2. GitOps Ready: Kluctl integrates seamlessly with GitOps workflows, enabling you to manage your deployments via Git. This integration supports continuous deployment practices and makes it easier to track changes and rollbacks.

  3. Flexible and Modular: Kluctl supports modular configurations, making it easy to reuse and share components across different projects. This modularity reduces duplication and enhances maintainability.

  4. Validation and Diffing: One of Kluctl's standout features is its built-in validation and diffing capabilities. Before applying changes, Kluctl shows you what changes will be made, allowing you to review and approve them. This feature helps prevent accidental misconfigurations and ensures deployments are accurate.


Why Choose Kluctl?

  • Enhanced Flexibility: Kluctl provides a higher degree of flexibility compared to traditional tools like Helm and Kustomize. It enables you to customize and manage your deployments in a way that best fits your workflow and organizational needs.

  • Improved Collaboration: By leveraging GitOps, Kluctl enhances collaboration within teams. All deployment configurations are stored in Git, making it easy for team members to review, suggest changes, and track the history of deployments.

  • Reduced Complexity: Kluctl simplifies the deployment process, especially for complex applications. Its modular approach allows you to break down deployments into manageable components, making it easier to understand and maintain your Kubernetes configurations.

In summary, Kluctl is a powerful tool that enhances the Kubernetes deployment experience. Its declarative nature, seamless GitOps integration, and advanced features make it an excellent choice for teams looking to improve their deployment workflows.

Installing Kluctl

Getting started with Kluctl is straightforward. The following steps will guide you through the installation process, allowing you to set up Kluctl on your local machine and prepare it for managing your Kubernetes deployments.

Step 1: Install Kluctl CLI

First, you need to install the Kluctl command-line interface (CLI). The CLI is the primary tool you'll use to interact with Kluctl.

To install the Kluctl CLI, run the following command:

curl -sSL | bash

This script downloads and installs the latest version of Kluctl. After the installation is complete, verify that Kluctl has been installed correctly by checking its version:

kluctl version

You should see output indicating the installed version of Kluctl, confirming that the installation was successful.

Step 2: Set Up a Kubernetes Cluster

Before you can use Kluctl, you need to have a Kubernetes cluster up and running. Kluctl interacts with your cluster to manage deployments, so it's essential to ensure that you have a functioning Kubernetes environment. If you haven't set up a Kubernetes cluster yet, you can refer to my previous blogs for detailed instructions on setting up clusters using various tools and services like Minikube, Kind, GKE, EKS, or AKS.

Kluctl in action

Next, we will set up a basic kluctl project. To start using kluctl, define a .kluctl.yaml file in the root of your project with the targets you want to deploy to.

Let's create a folder for our project and create a .kluctl.yaml file in it.

mkdir kluctl-project && cd kluctl-project

cat <<EOF > .kluctl.yaml
discriminator: "kluctl-demo-{{ target.name }}"

targets:
  - name: dev
    context: kubernetes-admin@kubernetes
    args:
      environment: dev
  - name: prod
    context: kubernetes-admin@kubernetes
    args:
      environment: prod

args:
  - name: environment
EOF
This file defines two targets, dev and prod, that will deploy to the same Kubernetes cluster.

We can use the args section to define the arguments that we will use in our YAML files to template them. For example {{ args.environment }} would output dev or prod depending on the target we are deploying to.
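To make the templating concrete, here is a sketch of a manifest that uses the environment argument. The ConfigMap itself is illustrative and not part of the tutorial project:

```yaml
# Hypothetical templated manifest; kluctl renders the jinja2
# expressions per target before applying the resource.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  environment: "{{ args.environment }}"  # rendered as "dev" or "prod"
```

Because rendering happens per target, the same source tree produces distinct, environment-specific manifests without duplicating files.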

Create Deployment

Next, we will create a kustomize deployment for the redis application. Under the hood, kluctl uses kustomize to manage the Kubernetes manifests. kustomize is a tool that lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

💡 We are following the Basic Project Setup Introduction tutorial from the kluctl documentation.

Let's create a deployment.yaml where we will define elements that kluctl will use to deploy the application.

cat <<EOF > deployment.yaml
deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"
EOF

Now we need to create the redis deployment folder.

mkdir redis && cd redis

Since we are using kustomize we need to create a kustomization.yaml file.

cat <<EOF > kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
EOF

And now we can create the service.yaml and deployment.yaml files.

cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
          readinessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          livenessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          volumeMounts:
            - mountPath: /data
              name: redis-data
          resources:
            limits:
              memory: 256Mi
              cpu: 125m
            requests:
              cpu: 70m
              memory: 200Mi
      volumes:
        - name: redis-data
          emptyDir: {}
EOF

cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
EOF

Deploy the app

Next, we will deploy the redis application to the dev target. First, we need to change to the root of the kluctl-project repository and initialize a git repository there.

cd /root/kluctl-project && \
git init && \
git add . && \
git commit -m "Initial commit"

Now we can deploy the application to the dev environment.

kluctl deploy --yes -t dev

💡 Notice that we are using the --yes flag to avoid the confirmation prompt. This is useful for the scenario, but in real life you should always review the changes before applying them.

Handling Changes

Next, we will introduce changes to our setup and see how kluctl handles them. Let's see what we have deployed so far by executing the tree command.

.
|-- deployment.yaml
|-- kustomization.yaml
`-- redis
    |-- deployment.yaml
    |-- kustomization.yaml
    `-- service.yaml

1 directory, 5 files

💡 Notice this resembles a typical kustomize directory structure.

One of the superpowers of kluctl is how transparently it handles changes. Let's modify the redis deployment and see how kluctl handles it.

yq -i eval '.spec.replicas = 2' redis/deployment.yaml

Now let's deploy the changes to the dev target.

kluctl deploy --yes -t dev

Remember that at the beginning we added custom labels to each deployment. Let's see if the labels were applied correctly.

kubectl get deployments -A --show-labels


Next, we will use the templating capabilities of kluctl to deploy the same application to a different namespace. At the beginning of the workshop, we defined two different environments: prod and dev. This setup works out of the box for multiple targets (clusters); however, in our case we want a single target (cluster) and to deploy different targets to different namespaces.

Let's start by deleting the existing resources and modifying some files.

💡 It is possible to migrate the resources to a different namespace using the kluctl prune command. However, in this case we will delete the old resources and recreate them in new namespaces.

kluctl delete --yes -t dev

In order to differentiate between the two environments, we will need to adjust the discriminator field in the .kluctl.yaml file.

yq e '.discriminator = "kluctl-demo-{{ target.name }}-{{ args.environment }}"' -i .kluctl.yaml

We also need to create a namespace folder with a namespace YAML file and add it to our deployment configuration.

First create the namespace folder.

mkdir namespace

Now we can add the namespace folder to the deployment.yaml file.

💡 Notice the use of barrier: true in the deployment.yaml file. This tells kluctl to apply the resources in the order they are defined and to wait for the resources before the barrier to become ready before applying the next ones.

cat <<EOF > deployment.yaml
deployments:
  - path: namespace
  - barrier: true
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"

overrideNamespace: kluctl-demo-{{ args.environment }}
EOF

Now let's create the namespace YAML file.

cat <<EOF > ./namespace/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kluctl-demo-{{ args.environment }}
EOF

Test the deployment

We will test if our setup works by deploying the redis application to the dev and prod namespaces. Deploying the resources to the dev namespace:

kluctl deploy --yes -t dev

And to the prod namespace:

kluctl deploy --yes -t prod

Let's check if everything deployed as expected:

kubectl get pods,svc -n kluctl-demo-dev
kubectl get pods,svc -n kluctl-demo-prod

Closing thoughts

That's it! We have seen the basic capabilities of kluctl.

We have barely scratched the surface of kluctl's capabilities. You can use it to deploy to multiple clusters, namespaces, and even different environments.

The mix of templating capabilities based on jinja2 and the kustomize architecture makes it a really flexible tool for complex deployments.

Next Steps

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium

GitOpsify Cloud Infrastructure with Crossplane and Flux

· 9 min read

In this article we are going to learn how to automate the provisioning of cloud resources via Crossplane and combine it with GitOps practices.

You will most benefit from this blog if you are a Platform or DevOps Engineer, Infrastructure Architect or Operations Specialist.

If you are new to GitOps, read more about it in my blog GitOps with Kubernetes

Let's set the stage by imagining the following context. We are working as part of a Platform Team in a large organization. Our goal is to help Development Teams get up to speed with using our Cloud Infrastructure. Here are a few base requirements:

  • The Platform Team doesn't have the resources to deal with every request individually, so there must be a very high degree of automation
  • Company policy is to adopt the principle of least privilege. We should expose cloud resources only when needed, with the lowest permissions necessary.
  • Developers are not interested in managing the cloud; they should only consume cloud resources without even needing to log in to a cloud console.
  • New Teams should get their own set of cloud resources when onboarding to the Platform.
  • It should be easy to provision new cloud resources on demand.

Initial Architecture

The requirements lead us to an initial architecture proposal with the following high-level solution strategy.

  • create template repositories for various types of workloads (using Backstage Software Templates would be helpful)
  • once a new Team is onboarded and creates its first repository from a template, a CI pipeline will trigger and deploy common infrastructure components by adding the repository as a Source to the Flux infrastructure repo
  • once a Team wants to create more cloud infrastructure, they can place the Crossplane claim YAMLs in the designated folder in their repository
  • adjustments to this process are easily implemented using Crossplane Compositions

In a real-world scenario we would also manage Crossplane using Flux, but for demo purposes we are focusing only on the application level.

The developer experience should be similar to this:


Tools and Implementation

Knowing the requirements and initial architecture, we can start selecting the tools. For our example, the tools we will use are Flux and Crossplane.

We are going to use Flux as a GitOps engine, but the same could be achieved with ArgoCD or Rancher Fleet.

Let's look at the architecture and use cases that both tools support.

Flux Architecture Overview

Flux exposes several components in the form of Kubernetes CRDs and controllers that help express a workflow with the GitOps model. Below is a short description of the three major components; each has its corresponding CRDs.


  1. Source Controller: Its main role is to provide a standardized API for managing the sources of Kubernetes deployments: Git and Helm repositories.

    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: podinfo
      namespace: default
    spec:
      interval: 1m
  2. Kustomize Controller: This is the CD part of the workflow. Where source controllers specify the sources of data, this controller specifies which artifacts to apply from a repository.

    This controller can work with kustomization files, but also with plain Kubernetes manifests.

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: webapp
      namespace: apps
    spec:
      interval: 5m
      path: "./deploy"
      sourceRef:
        kind: GitRepository
        name: webapp
        namespace: shared
  3. Helm Controller: This operator helps manage Helm chart releases containing Kubernetes manifests and deploy them onto a cluster.

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: backend
      namespace: default
    spec:
      interval: 5m
      chart:
        spec:
          chart: podinfo
          version: ">=4.0.0 <5.0.0"
          sourceRef:
            kind: HelmRepository
            name: podinfo
            namespace: default
          interval: 1m
      upgrade:
        remediation:
          remediateLastFailure: true
      test:
        enable: true
      values:
        service:
          grpcService: backend
        resources:
          requests:
            cpu: 100m
            memory: 64Mi

Crossplane Architecture Overview

Let’s look at how the Crossplane component model is structured. A word of warning: if you are new to Kubernetes this might be overwhelming, but there is value in making the effort to understand it. The diagram below shows the Crossplane component model and its basic interactions.

Crossplane architecture (source: author)

Learn more about Crossplane in my blog "Infrastructure as Code: the next big shift is here"


If you want to follow along with the demo, clone this repository; it contains all the scripts to run the demo code.


In this demo, we are going to show how to use Flux and Crossplane to provision an EC2 instance directly from a new GitHub repository. This simulates a new team onboarding to our Platform.

To follow along, you will need AWS CLI configured on your local machine.

Once you obtain credentials, configure a default profile for the AWS CLI by following this tutorial.

Locally, you will need the following installed:

  • Docker Desktop or another container runtime
  • WSL2 if using Windows
  • kubectl

Run make in the root folder of the project; this will:

If you are running on a Mac, use make setup_mac instead of make.

  • Install kind (Kubernetes IN Docker) if not already installed
  • Create kind cluster called crossplane-cluster and swap context to it
  • Install crossplane using helm
  • Install crossplane CLI if not already installed
  • Install flux CLI if not already installed
  • Install AWS provider on the cluster
  • Create a temporary file with AWS credentials based on default CLI profile
  • Create a secret with AWS credentials in crossplane-system namespace
  • Configure AWS provider to use the secret for provisioning the infrastructure
  • Remove the temporary file with credentials so it's not accidentally checked in the repository

The following tools need to be installed manually.

IMPORTANT: The demo code will create a small EC2 instance in the eu-central-1 region. The instance and underlying infrastructure will be removed as part of the demo, but please make sure all the resources were successfully removed, and in case of any disruptions in the demo flow, be ready to remove the resources manually.

Setup Flux Repository

  • create a new kind cluster with make; this will install Crossplane with the AWS provider and configure a secret to access the selected AWS account

    The Flux CLI was installed as part of the Makefile script, but optionally you can configure shell completion for the CLI: . <(flux completion zsh)

    Refer to the Flux documentation page for more installation options

  • create an access token in GitHub with full repo permissions
  • export variables for your GitHub user and the newly created token
    • export GITHUB_TOKEN=<token copied from GitHub>
    • export GITHUB_USER=<your user name>
  • use flux to bootstrap a new GitHub repository so flux can manage itself and underlying infrastructure

    Flux will look for GITHUB_USER and GITHUB_TOKEN variables and once found will create a private repository on GitHub where Flux infrastructure will be tracked.

    flux bootstrap github \
    --owner=${GITHUB_USER} \
    --repository=flux-infra \
    --path=clusters/crossplane-cluster

Setup Crossplane EC2 Composition

Now we will install a Crossplane Composition that defines what cloud resources to create when someone asks for an EC2 claim.

  • setup Crossplane composition and definition for creating EC2 instances

    • kubectl crossplane install configuration piotrzan/crossplane-ec2-instance:v1
  • fork repository with the EC2 claims

    • gh repo fork

      answer YES when prompted whether to clone the repository

Clone Flux Infra Repository

  • clone the flux-infra repository created in your personal repos

    git clone https://github.com/${GITHUB_USER}/flux-infra.git

    cd flux-infra

Add Source

  • add a source repository to tell Flux what to observe and synchronize

    Flux will register this repository and every 30 seconds check for changes.

  • execute the below command in the flux-infra repository; it will add a Git Source

    flux create source git crossplane-demo \
    --url=https://github.com/${GITHUB_USER}/crossplane-ec2.git \
    --branch=master \
    --interval=30s \
    --export > clusters/crossplane-cluster/demo-source.yaml
  • the previous command created a file in the clusters/crossplane-cluster subfolder; commit the file

    • git add .
    • git commit -m "Adding Source Repository"
    • git push
  • execute kubectl get gitrepositories -A to see active Git Repository sources in Flux
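For reference, the exported demo-source.yaml should look roughly like this (assuming the v1beta1 source API current at the time; the field values simply mirror the flags passed above):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  # poll the repository every 30 seconds
  interval: 30s
  ref:
    branch: master
  url: https://github.com/${GITHUB_USER}/crossplane-ec2.git
```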

Create Flux Kustomization

  • set up a watch on the AWS managed resources; for now there should be none

    watch kubectl get managed

  • create a Flux Kustomization to watch a specific folder in the repository with the Crossplane EC2 claim

    flux create kustomization crossplane-demo \
    --target-namespace=default \
    --source=crossplane-demo \
    --path="./ec2-claim" \
    --prune=true \
    --interval=1m \
    --export > clusters/crossplane-cluster/crossplane-demo.yaml
    • git add .
    • git commit -m "Adding EC2 Instance"
    • git push
  • after a minute or so you should see a new EC2 instance being synchronized by Crossplane and the resources appearing in AWS
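Similarly to the source, the exported crossplane-demo.yaml is a plain manifest mirroring the flags above; a rough sketch, assuming the v1beta2 kustomize API (the exact apiVersion depends on your Flux version):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  # reconcile the ec2-claim folder from the crossplane-demo GitRepository
  interval: 1m0s
  path: ./ec2-claim
  prune: true          # delete cluster resources when files are removed from Git
  sourceRef:
    kind: GitRepository
    name: crossplane-demo
  targetNamespace: default
```

Note that prune: true is what later makes the cleanup work: removing the claim file from Git removes the claim from the cluster.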


Let's take a step back and make sure we understand all the resources and repositories used.


The first repository we created is what Flux uses to manage itself on the cluster as well as other repositories. To tell Flux about the repository with Crossplane EC2 claims, we created a GitSource YAML file that points to the HTTPS address of the repository with the EC2 claims.

The EC2 claims repository contains a folder where plain Kubernetes manifest files are located. To tell Flux which files to observe, we created a Kustomization and linked it with the GitSource via its name. The Kustomization points to the folder containing the K8s manifests.
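To make the linkage concrete, a claim file in the ec2-claim folder is just an ordinary manifest. A hypothetical sketch follows; the real apiVersion and spec fields are dictated by the XRD installed with the configuration package, so treat the group and spec fields here as placeholders:

```yaml
apiVersion: example.org/v1alpha1   # placeholder; the actual group comes from the installed XRD
kind: VirtualMachineInstance
metadata:
  name: sample-ec2
  namespace: default
spec:
  # standard Crossplane claim field: select a composition by label
  compositionSelector:
    matchLabels:
      provider: aws
```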


  • to clean up the EC2 instance and underlying infrastructure, remove the claim-aws.yaml demo file from the crossplane-ec2 repository
    • rm ec2-claim/claim-aws.yaml
    • git add .
    • git commit -m "EC2 instance removed"
    • git push
  • after the commit is pushed, or once the sync interval elapses, Flux will synchronize, and Crossplane will pick up the removed artefact and delete the cloud resources

    note: the ec2-claim folder must still be present in the repo after the claim YAML is removed, otherwise Flux cannot reconcile

Manual Cleanup

In case you cannot use the repository, it's possible to clean up the resources by deleting them via Flux and kubectl directly.

  • deleting the Flux Kustomization with flux delete kustomization crossplane-demo will remove all the resources from the cluster and AWS
  • to clean up the EC2 instance and underlying infrastructure, remove the EC2 claim from the cluster: kubectl delete VirtualMachineInstance sample-ec2

Cluster Cleanup

  • wait until watch kubectl get managed output doesn't contain any AWS resources
  • delete the cluster with make cleanup
  • optionally remove the flux-infra repository


GitOps with Flux or Argo CD and Crossplane offers a very powerful and flexible model for platform builders. In this demo, we focused on the application side, with Kubernetes clusters deployed some other way: with Crossplane, Fleet, Cluster API, etc.

What we achieved on top of using Crossplane's Resource Model is that we no longer interact with kubectl directly to manage resources, but rather delegate this activity to Flux. Crossplane still runs on the cluster and reconciles all resources. In other words, we've moved the API surface from kubectl to Git.

Kubernetes Cookbook

· 14 min read



This documentation assumes basic knowledge of Kubernetes and kubectl. To learn or refresh container orchestration concepts, please refer to the official documentation:

How to Create Kubernetes Homelab

· 7 min read

Photo by Clay Banks on Unsplash

How to create Kubernetes home lab on an old laptop


I’ve recently “discovered” an old laptop forgotten somewhere in the depths of my basement and decided to create a mini home lab with it where I could play around with Kubernetes.

Cloud Native Developer Workflow

· 7 min read

Photo by Hack Capital on Unsplash

Cloud Native - Developer Workflow

Software Development Lifecycle with Kubernetes and Docker


Software development tooling and processes have evolved rapidly in the last decade to meet the growing needs of developers. On top of mastering, often, a few programming languages and paradigms, software developers must learn to navigate an increasingly complex landscape of tools and processes.