
Typing Simulation in Neovim with typeit.nvim

· 4 min read

Photo by Andrew Seaman on Unsplash

Neovim plugin useful for technical presentations from a terminal

Introduction

As a developer who frequently gives technical presentations and demos from the terminal, I’ve always been drawn to Neovim’s extensibility. It’s a powerful feature, especially for those of us creating tutorials, demos, or presentations. That’s why I created typeit.nvim.

Over the years, I’ve found that live coding during presentations can be risky — typos, mistakes, and the pressure of an audience can sometimes lead to less-than-smooth demonstrations. On the other hand, pre-recorded videos or static code snippets often lack the dynamism that keeps an audience engaged. I needed something in between, and that’s where the idea for typeit.nvim was born.

I leveraged Neovim’s extensibility to create a plugin that allows presenters to simulate typing in real-time, complete with customizable typing speed. Whether you’re creating tutorials, giving live demos, or just want to add some flair to your coding screencasts, typeit.nvim can help bring your code to life.

In this blog post, I’ll guide you through the features, installation, configuration, and usage of typeit.nvim. By the end, you'll have a new tool in your Neovim arsenal for creating more engaging and realistic coding demonstrations, enhancing your Neovim experience.

Prerequisites

Before diving into the installation and setup of typeit.nvim, ensure you have the following:

  • Neovim version 0.9.0 or higher
  • A plugin manager such as packer.nvim or lazy.nvim

Installation

You can install typeit.nvim using various plugin managers. Below are the instructions for two popular options:

Using packer.nvim

use 'Piotr1215/typeit.nvim'

Using lazy.nvim

{
  'Piotr1215/typeit.nvim',
  config = function()
    require('typeit').setup({
      -- Your configuration here
    })
  end
}

Configuration

After installation, you can configure typeit.nvim globally using the setup function. Here’s a basic example:

require('typeit').setup({
  default_speed = 30,    -- Default typing speed (milliseconds)
  default_pause = 'line' -- Default pause behavior ('line' or 'paragraph')
})

Usage

Vim Commands

typeit.nvim provides several commands for simulating typing in Neovim:

  • :SimulateTyping [file_path] [speed]: Simulate typing from a file
  • :SimulateTypingWithPauses [file_path] [speed] [pause_at]: Simulate typing with pauses (‘line’ or ‘paragraph’)
  • :StopTyping: Stop the current typing simulation

Simulating Typing from a File

To simulate typing the contents of a file:

  1. Open a new empty buffer: :enew
  2. Use the SimulateTyping command:
:SimulateTyping ~/example.txt 30

This command simulates typing the contents of example.txt at a speed of 30 milliseconds per character.

Simulating Typing with Pauses

To simulate typing with pauses between lines or paragraphs:

  1. Open a new empty buffer: :enew
  2. Use the SimulateTypingWithPauses command:
:SimulateTypingWithPauses ~/example.txt 50 line

This command pauses after each line at a typing speed of 50 milliseconds per character. For paragraph pauses, use:

:SimulateTypingWithPauses ~/example.txt 50 paragraph

Simulating Custom Text Typing

You can also simulate typing custom text directly in Neovim:

  1. Open a new empty buffer: :enew
  2. Enter command mode and type your text in quotes:
:call luaeval("require('typeit').simulate_typing(_A[1], _A[2])", ["This is a custom text being typed out.", 40])

This command simulates typing “This is a custom text being typed out.” at a speed of 40 milliseconds per character.

For custom text with pauses:

:call luaeval("require('typeit').simulate_typing_with_pauses(_A[1], _A[2], _A[3])", ["Line 1\nLine 2\nLine 3", "line", 30])

This simulates typing the given lines with pauses after each line at a speed of 30 milliseconds per character.

Stopping the Simulation

To stop the typing simulation at any point, use:

:StopTyping

Alternatively, you can use Ctrl+C to interrupt the typing simulation.

Custom Keybindings

Set up custom keybindings for typeit.nvim commands:

vim.api.nvim_set_keymap('n', '<leader>st', ':SimulateTyping<CR>', { noremap = true, silent = true })
vim.api.nvim_set_keymap('n', '<leader>sp', ':SimulateTypingWithPauses<CR>', { noremap = true, silent = true })
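
You can also bind a specific file and typing speed to a key for a rehearsed demo. A minimal sketch, with a hypothetical demo file path:

vim.api.nvim_set_keymap('n', '<leader>sd', ':SimulateTyping ~/demos/demo.sh 25<CR>', { noremap = true, silent = true })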

Conclusion

typeit.nvim is a versatile plugin that brings dynamic typing simulations to Neovim, making it perfect for live demos, tutorials, and presentations. By integrating this plugin into your workflow, you can create more engaging content and showcase your coding skills in real-time.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

📺 Subscribe to my YouTube Channel

From Silos to Synergy: Cloud Infrastructure Management in the Age of Platform Teams

· 7 min read

Introduction

Software and the infrastructure it needs have a complicated relationship. On one hand, a web app will need somewhere to run, but where it runs and how it gets there might vary.

Infrastructure and the software it serves can have different lifecycles and ownership. A microservice with its own database is a good example of shared lifecycle and ownership. The lifecycles diverge when the same microservice requires a message broker like Kafka. And an example of different lifecycle and ownership? A microservice might need a key-value store while a centralized security team manages it.

The resulting architecture is typically more complicated than a hello-world IaC example.

Applications/Infrastructure with different lifecycles and ownership

Why does it matter?

Photo by Oyemike Princewill on Unsplash

Platform Teams are responsible for maintaining all the shared infrastructure. An excellent book, “Team Topologies”, claims:

“Platform teams produce and maintain the infrastructure and services that all of your teams use to communicate with each other and perform tasks. Examples of these kinds of services include your internal security tools, remote work applications, cloud storage solutions, and internal network design.”​

I would like to argue that it depends: on lifecycle and ownership. Remember the phrase “You build it, you run it” attributed to Werner Vogels, the CTO of Amazon? Death to silos and long live two-pizza teams. It’s all great, if not always practical. Moreover, blindly following this rule might get us in trouble.

Silos are not bad

On the contrary, silos are amazing when implemented right. Can you imagine being involved in the decisions about how to run your EKS cluster or Google Cloud Run?

Well, it’s your cluster and your container, so … run it. Thankfully, that’s not how it works. Cloud providers offer self-service options and otherwise get out of the way. For all intents and purposes, they are silos where we throw our workloads over the fence and that’s it.

Cloud hyperscalers came up with guidelines that capture differences in ownership and lifecycle. They are called shared responsibility models.

Rethinking Ownership in Tech

The tech industry’s approach to ownership and lifecycle management continues to evolve. While Werner Vogels’ “You build it, you run it” philosophy pushed for end-to-end ownership, reality often demands more nuance.

As Sam Newman points out in “Building Microservices”:

“Microservices give us options in terms of how we implement our systems, but they don’t dictate the organizational structures we use.”

This applies to ownership models as well.

Different components typically have different lifecycles and ownership needs. A rapidly iterating microservice might be fully owned by a product team, while a shared database could be managed by a platform team. The key is finding the right balance for each organization’s unique needs.

“Good architecture allows major decisions to be deferred.” — Robert C. Martin

This flexibility in architecture should extend to our ownership models, allowing them to adapt as our systems and organizations grow and change.

Architecture Recommendations

Photo by Alex Wong on Unsplash

In modern software development and infrastructure management, various techniques and tools have emerged to address the challenges of different lifecycle and ownership models. These include service discovery portals, self-service platforms, GitOps practices, Infrastructure as Code (IaC), and internal developer platforms (IDPs).

While these techniques are valuable, the underlying organizational architecture is crucial for their effective implementation. Here is an overview of organizational architecture models addressing each lifecycle/ownership permutation.

Lifecycle         | Ownership     | Architecture/Organization Model
------------------|---------------|----------------------------------
Application-bound | Developers    | "You build it, you run it" model
Application-bound | External team | Embedded platform engineers
Shared            | Developers    | Complicated subsystem team
Shared            | External team | Platform team approach

Based on the analysis of lifecycle and ownership patterns, I recommend the following architectural approaches:

1. “You build it, you run it” model

Context: Application-bound infrastructure owned by developers

This model, popularized by Amazon, empowers development teams with full responsibility for their services, including infrastructure. It promotes:

  • End-to-end ownership and accountability
  • Rapid iteration and deployment
  • Deep understanding of both application and infrastructure needs

Implementation often involves extensive use of cloud services, Infrastructure as Code, and robust CI/CD pipelines. Teams in this model benefit from self-service platforms and comprehensive monitoring tools.

2. Embedded platform engineers

Context: Application-bound infrastructure owned by an external team

This approach bridges the gap between specialized infrastructure knowledge and application-specific needs. Key aspects include:

  • Close collaboration between platform experts and development teams
  • Tailored infrastructure solutions that align with application requirements
  • Knowledge transfer and upskilling of development teams

Successful implementation often involves creating service catalogs, implementing GitOps practices, and establishing clear communication channels between platform engineers and developers.

3. Complicated subsystem team

Context: Shared infrastructure owned by developers

This model, derived from Team Topologies, is suitable for managing complex, shared components that require deep expertise. Characteristics include:

  • Focused team of specialists managing a critical, shared subsystem
  • Clear interfaces and APIs for other teams to interact with the subsystem
  • Continuous evolution and optimization of the shared component

Implementation might involve creating comprehensive documentation, establishing service level objectives (SLOs), and developing self-service interfaces for other teams to utilize the subsystem.

4. Platform team approach

Context: Shared infrastructure owned by an external team

This model centralizes the management of shared infrastructure, providing a foundation for other teams to build upon. Key features include:

  • Dedicated team focusing on creating and maintaining shared infrastructure
  • Emphasis on creating self-service capabilities for development teams
  • Standardization of infrastructure practices across the organization

Successful platform teams frequently implement internal developer platforms (IDPs), use Infrastructure as Code for managing resources, and create service discovery portals to make their offerings easily accessible to development teams.

Closing Thoughts

As we’ve explored throughout this discussion, the relationship between software and infrastructure is complex and multifaceted. The architectural approaches we’ve outlined — from “You build it, you run it” to Platform teams — each align with specific lifecycle and ownership patterns, providing a foundation for effective infrastructure management.

It’s crucial to remember that there’s no one-size-fits-all solution. The choice of approach should be based on your organization’s specific needs, culture, and technical landscape. Factors such as team size, technical expertise, regulatory requirements, and business objectives all play a role in determining the most suitable architecture.

Moreover, it’s entirely possible — and often beneficial — for these models to coexist within the same organization. Different components or services may require different approaches. A critical, shared database might be best managed by a complicated subsystem team, while a customer-facing microservice could thrive under a “You build it, you run it” model.

The key is to remain flexible and open to evolution. As your organization grows and changes, so too should your architectural approach. Regular reassessment of your infrastructure management strategies can help ensure they continue to serve your needs effectively.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

5 Must-Have Command Line AI Tools

· 10 min read

Terminal Friendly AI Projects

Introduction

Artificial Intelligence (AI) is not just a buzzword; it’s a transformative force reshaping industries across the globe. The U.S. AI market alone is projected to reach approximately $594 billion by 2032, growing at a robust CAGR of 19% from 2023. This staggering growth underscores AI’s pivotal role in driving innovation and efficiency.

If you’re not leveraging AI in your workflows yet, you might be missing out on significant opportunities. AI is rapidly becoming a critical component in staying competitive, and those who adopt AI tools now are positioning themselves at the forefront of technological advancement.

In this blog, I would like to show you 5 tools that improved my productivity. You don’t need to be a software developer or IT professional to take advantage of the same efficiency boost.

Let’s look at some statistics

The U.S. AI market is expected to reach approximately $594 billion by 2032, with a CAGR of 19% from 2023​ (Statistics and Facts for 2024 CompTIA)​.

Approximately 34% of companies are currently using AI, with an additional 42% exploring AI technologies. This highlights a significant interest and ongoing integration of AI in business operations​ (Statistics and Facts for 2024 CompTIA)​.

AI is projected to create 12 million more jobs than it will replace by 2025. The demand for AI specialists is anticipated to rise, with 97 million positions needed in the industry by that time​ (Statistics and Facts for 2024 CompTIA)​.

Why the Terminal?

You are probably familiar with ChatGPT or Claude web interfaces and those are great first steps to try out generative AI. However, those web UIs have important limitations; they are generic and not tailored to specific needs. While convenient, they lack the flexibility to integrate seamlessly with custom workflows and automate repetitive tasks.

The command line is a powerful interface that offers more control, efficiency and flexibility than graphical interfaces. It allows for scripting, automation, and quick access to powerful tools without the overhead of a graphical interface.

AI is revolutionizing the way we interact with technology. By integrating AI with command line tools, we can automate complex tasks, gain deeper insights from data, and improve overall productivity.

It’s Easier Than You Think

Using AI tools in the terminal is straightforward. Many tools provide simple installation commands and detailed documentation to help you get started quickly.

Command line tools often offer more granular control over their operation, allowing you to customize your workflows to suit your specific needs.

Better Automation

Terminal-based AI tools excel at automation. They can be easily integrated into shell scripts, scheduled with cron jobs, and used in combination with other command line utilities to create powerful automated workflows.

Tools

Before we jump into the tooling overview, let’s make sure we are on the same page, defining what’s what in the terminal universe. Put simply:

Terminal Related Definitions

Ollama

A command-line tool that lets you run AI models (such as Meta’s Llama) locally, enabling seamless and secure interactions with various LLMs directly from your terminal. Chat or interact with AI models through APIs on your local machine or on a remote server in your home network or somewhere else.

  • ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language…

🎥 See ollama in action on asciinema: https://asciinema.org/a/664871
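
Getting started is as simple as pulling a model and chatting with it. A minimal sketch (the model name is just an example; ollama also exposes a local REST API on port 11434 by default):

# Pull a model and chat with it in the terminal
ollama pull llama3
ollama run llama3 "Explain what a Kubernetes operator does in two sentences."

# The same model is reachable over the local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'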

Fabric

An open-source framework designed to augment human capabilities using AI. It offers a modular system for solving specific problems through a crowdsourced set of AI prompts, known as Patterns.

Fabric enables users to integrate AI into their daily tasks seamlessly, from summarizing content to generating social media posts. It emphasizes usability with a command-line interface and clear Markdown-based patterns. The tool also supports creating personal AI infrastructures, making it a versatile choice for both simple and complex AI applications.

  • danielmiessler/fabric: fabric is an open-source framework for augmenting humans using AI…

However, we can do better than this and let fabric speak for itself using its --agents flag:

echo "research the https://github.com/danielmiessler/fabric repository and summarize what the tool does" \
| fabric --agents | summarize

This gives us the following response:

ONE SENTENCE SUMMARY:
The fabric tool enhances productivity by automating tasks, integrating with software, and offering features to streamline user workflows.

MAIN POINTS:
1. The fabric tool is designed to improve productivity and streamline tasks.
2. It focuses on automation and simplification of repetitive tasks.
3. Offers seamless integration with other software or services.
4. Provides a suite of features aimed at enhancing process efficiency.
5. Direct consultation of the GitHub repository is recommended for full potential leverage.
6. The repository contains a comprehensive README file for detailed insights.
7. Additional documentation is available, prepared by the repository’s owner.
8. In-depth insights into the tool’s purpose and features are provided.
9. Setup instructions and practical use cases are included in the documentation.
10. Effective utilization requires exploring the GitHub repository for maximum utility.

TAKEAWAYS:
1. The fabric tool is pivotal for enhancing productivity through automation and integration.
2. Direct exploration of the GitHub repository is essential for understanding its full capabilities.
3. The README file and additional documentation are key resources for users.
4. The tool offers significant benefits in streamlining and improving efficiency of tasks.
5. Understanding and applying the tool’s features requires consulting the provided documentation.

Extract Transcript from YouTube Videos

Fabric can also extract transcripts from YouTube videos and pass them through any prompt(s). Let’s extract some ideas from one of my recent videos:

yt --transcript https://www.youtube.com/watch\?v\=EK_ivK8HlNo | create_micro_summary

ONE SENTENCE SUMMARY:
- Kubernetes development challenges are mitigated by MirrorD for faster feedback loops and seamless remote environment testing.

MAIN POINTS:
- Kubernetes excels in production but complicates development and testing.
- Fast feedback loops are crucial for efficient Kubernetes development.
- MirrorD enables local processes to run in a remote Kubernetes context.

TAKEAWAYS:
- Development on Kubernetes requires innovative tools for efficiency.
- MirrorD shortens the feedback loop significantly.
- Local and remote environment synchronization is key for developers.

The summary is pretty spot on!!

Shell Automation

There are so many commands with flags and options that it’s impossible to remember them all… looking at you, ffmpeg. To make this easier for myself, I have created a tool that helps me with various commands.

  • Piotr1215/aicmdtools: GoAI is a Go library and command line for interacting with OpenAI…

🎥 Check out this asciinema recording to see what it can do: https://asciinema.org/a/TFJABWbNochPWuuBFhiHDHBLF

Terminal Chat

Terminal chat apps are similar to ollama but allow for talking to various LLM providers such as OpenAI, Claude and more. Here are two I’ve experimented with and can recommend.

However, I usually use AI chats through neovim; more on that later in the bonus section!

  • sigoden/aichat: All-in-one AI CLI tool that integrates 20+ AI platforms, including OpenAI…

  • ekkinox/yai: Your AI powered terminal assistant.

Aider — Programming Support

If you are a programmer, aider is absolutely amazing. While it’s not perfect, it’s the closest I know to GitHub Copilot (with chat function) and it’s completely free!
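
Getting started is as simple as pointing aider at the files you want to change. A minimal sketch, with placeholder file names and assuming an OpenAI key in your environment:

# Interactive pair-programming session on selected files
export OPENAI_API_KEY=your-key-here
aider server.js public/index.html

# Or apply a one-off change non-interactively
aider --message "add input validation to the /upload endpoint" server.js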

Let’s see if fabric can help us capture the main strengths of aider. Let’s paste the content of aider’s repo README.md and let fabric summarize it.

xclip -o -sel clipboard | summarize

ONE SENTENCE SUMMARY:
Aider is an AI-powered pair programming tool that enhances coding productivity by integrating with local git repositories and various LLMs.

MAIN POINTS:
1. Aider facilitates pair programming with AI, specifically designed for local git repositories.
2. Compatible with leading LLMs like GPT-4o and Claude 3 Opus, offering versatile AI assistance.
3. Installation and setup are straightforward, requiring only a few commands to start.
4. Supports editing multiple files simultaneously for comprehensive code modifications.
5. Automatically commits changes to git with appropriate commit messages, streamlining version control.
6. Compatible with a wide range of programming languages, including Python, JavaScript, and more.
7. Utilizes a complete map of your git repository for better context understanding in larger projects.
8. Allows for voice commands and adding images or URLs in chat for enhanced interaction.
9. Achieved the top score on SWE Bench, indicating superior performance in solving real GitHub issues.
10. Offers extensive documentation, tutorials, and a supportive Discord community for users.

TAKEAWAYS:
1. Aider significantly boosts coding efficiency by automating tasks and providing intelligent suggestions.
2. Its compatibility with major LLMs ensures a flexible and powerful coding assistant experience.
3. The tool’s ability to understand and navigate large codebases makes it suitable for complex projects.
4. Community feedback highlights Aider’s impact on productivity and its user-friendly design.
5. Aider’s recognition in benchmarks underscores its effectiveness in addressing real-world coding challenges.

Bonus for NeoVim Nerds

If you happen to use the best editor known to mankind… neovim btw, you are in for a treat. Neovim’s plugin ecosystem, thanks to adopting Lua as the plugin programming language, is very strong and versatile. Here are two plugins that I use almost daily when coding, creating documentation or chatting with LLMs.

Gen.nvim

Lets us use local ollama models as a Neovim copilot.
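
A minimal setup sketch, assuming ollama is already running locally with a pulled model (the model name and keymap are just examples):

-- lua config for gen.nvim
require('gen').setup({
  model = 'mistral',   -- any model pulled with `ollama pull`
  host = 'localhost',
  port = '11434',
})
-- send the current selection or line to the model
vim.keymap.set({ 'n', 'v' }, '<leader>]', ':Gen<CR>', { desc = 'Prompt gen.nvim' })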

Gp.nvim

Provides a rich chat experience and copilot-like functionality in the editor.
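
A minimal setup sketch, assuming an OpenAI API key exported in your shell:

-- lua config for gp.nvim
require('gp').setup({
  openai_api_key = os.getenv('OPENAI_API_KEY'),
})
-- :GpChatNew opens a fresh chat buffer, :GpChatToggle brings the last one back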

Ok, we went through a lot of tools, let’s summarize:

Tool             | Category                | Description                                                     | URL
-----------------|-------------------------|-----------------------------------------------------------------|-------------------------
Ollama           | Local AI Models         | Run AI models locally and interact with them through terminal.  | Ollama GitHub
Fabric           | AI Framework            | Modular framework for solving problems using AI prompts.        | Fabric GitHub
Shell Automation | Command Line Automation | Tool to simplify various commands and automate tasks.           | Shell Automation GitHub
AIChat           | Terminal Chat           | Integrates multiple AI platforms for chat via terminal.         | AIChat GitHub
Yai              | Terminal Assistant      | AI-powered assistant for terminal commands and tasks.           | Yai GitHub
Aider            | AI Pair Programming     | AI tool for pair programming with local git integration.        | Aider GitHub
gen.nvim         | Neovim Plugin           | Generate text using LLMs with customizable prompts in Neovim.   | gen.nvim GitHub
gp.nvim          | Neovim Plugin           | ChatGPT sessions and copilot functionality in Neovim.           | gp.nvim GitHub

Closing Thoughts

Most of the tools mentioned can work with proprietary models such as those from OpenAI and Anthropic (Claude), but also with open-source models like the ones provided by ollama.

Integrating AI with command line tools not only boosts productivity but also transforms how we interact with technology. The tools mentioned here, from Ollama to Fabric, offer powerful capabilities right at your fingertips, enhancing automation, insight, and efficiency.

Ready to supercharge your terminal? Let me know which tool is your favourite. Did I miss some that you use and find valuable?

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

Why Fast Feedback Loops Matter When Working with Kubernetes

· 9 min read

Debugging and Testing Made Easy with mirrord

Introduction

In software development, fast, iterative feedback loops — known as dev loops — are essential for rapid prototyping and innovation. However, the complexity and disparity between local and remote execution environments can make this process challenging. Instead of staying in a flow state and quickly moving from idea to prototype, developers often have to deal with adjacent issues: building, deploying, and managing CI/CD pipelines.

Overlaying Kubernetes on top of an already complex process can turn development into a slow and painful experience. This complexity is why so many attempts have been made to solve this problem. Today, we will explore one of the most promising solutions.

The key to fast development cycles is reducing feedback time. The quicker you can test and debug, the faster you can iterate and improve.

Who Is This Content For?

This blog is for developers, DevOps engineers, and tech leads who work with Kubernetes and are tired of slow development cycles. If you’re looking for ways to improve your productivity and streamline your workflow, this is for you.

The Pain of Slow Dev Loops

Working with Kubernetes typically involves:

  • Creating Containers: This step can be time-consuming, especially when dealing with complex applications.
  • Pushing Images: Uploading images to a registry can take a significant amount of time, depending on the size of the image and the network speed.
  • Deploying to the Cluster: Waiting for the Kubernetes cluster to pull the image and start the containers adds to the delay.
  • Waiting for Feedback: Once the application is running, developers need to wait for logs and test results, which can take several minutes or more.

Each of these steps contributes to a slow development cycle, making it difficult to quickly test and debug changes. This can be particularly frustrating when trying to resolve critical issues or implement new features under tight deadlines.

By the end of this blog, you’ll learn strategies to speed up your development loop, reduce waiting times, and make debugging and testing more efficient.

Fast Feedback = Rapid Prototyping

What if we could integrate development loops so that the inner loop (code, test, build/run) seamlessly extends into the outer loop (build, test, scan, deploy, release)?

This would result in much faster feedback and iteration, leading to increased innovation. However, the development environment on my laptop or in a remote containerized service like Gitpod differs from the execution environment where the deployment is running.

What are the options?

  • Recreate the Kubernetes environment locally?
    Viable for some workloads.
  • What about recreating a cloud environment locally?
    Possible with LocalStack with AWS or TestContainers, but only to some extent.

The lowest common denominator

An interesting question to ask would be: is there something that both environments share? They do share the same container, but that’s not what I develop; it’s just a packaging mechanism. Digging deeper… how about a process 💡?

A process executing code runs on my machine the same way a process executing code runs in a containerized environment in the cloud.

Execution context matters

When executing code locally, everything needs to be set up on the local machine, including environment variables, file access, and networking. This setup ensures that the code can run as expected within the local environment. However, the context changes significantly when moving to a remote environment, such as a public or private cloud.

In a remote environment, which often includes a Kubernetes cluster running in the cloud, the infrastructure is more complex. Environment variables might be managed through cloud services or orchestration tools, file access could involve distributed storage systems, and networking might include virtual networks, load balancers, and security groups. Additionally, the context may include various integrated services such as databases, message queues, and other networking components.

Photo by Shubham Dhage on Unsplash

A piece of code that works perfectly on a local machine might fail in the cloud due to missing environment variables, incorrect network configurations, or issues accessing remote databases and queues. This discrepancy forces developers to replicate the remote environment locally or debug issues that only appear in the cloud, slowing down the development cycle.

Testing Locally with Remote Environment

So far we have established that in order to combine the two development loops and make prototyping faster, we need to run our local process in the context of its remote environment.

This is the recipe for success: run my local process in the context of a remote environment.

Here is where mirrord can help!

Mirrord

Does exactly what we need:

mirrord is an open-source tool that lets developers run local processes in the context of their cloud environment. It makes it incredibly easy to test your code on a cloud environment (e.g. staging) without actually going through the hassle of Dockerization, CI, or deployment, and without disrupting the environment by deploying untested code

https://mirrord.dev/docs/overview/introduction/

Running App In Local Context

A simple nodejs app connects to an Azure Storage Account and displays file content. As a developer, my task is to test my app. Let’s run it locally by executing these two justfile recipes:

start_server:
    nodemon server.js

browser: start_server
    browser-sync start --proxy "localhost:3000" --files "server.js" "public/**/*"

If you want to learn more about justfile:

Master Command Orchestration

Transform Your Projects with Just

itnext.io

My app works, but there is a connectivity error.

Right, that’s not going to help us test a new version of my app. With mirrord, we can connect my local process to a remote execution environment and my app running in a pod.

This results in the traffic, environment variables, and file operations being mirrored into my locally running process. Whew, that’s a mouthful. Let’s run mirrord:

# Run mirrord on deployment, this resolves to a single pod
mirrord:
    @mirrord exec --target-namespace devops-team \
      --target deployment/foo-app-deployment \
      nodemon server.js
✗ just mirrord
New mirrord version available: 3.106.0. To update, run: `"curl -fsSL https://raw.githubusercontent.com/metalbear-co/mirrord/main/scripts/install.sh | bash"`.
To disable version checks, set env variable MIRRORD_CHECK_VERSION to 'false'.
When targeting multi-pod deployments, mirrord impersonates the first pod in the deployment.
Support for multi-pod impersonation requires the mirrord operator, which is part of mirrord for Teams.
You can get started with mirrord for Teams at this link: https://mirrord.dev/docs/overview/teams/
* Running binary "nodemon" with arguments: ["server.js"].
* mirrord will target: deployment/foo-app-deployment, no configuration file was loaded
* operator: the operator will be used if possible
* env: all environment variables will be fetched
* fs: file operations will default to read only from the remote
* incoming: incoming traffic will be mirrored
* outgoing: forwarding is enabled on TCP and UDP
* dns: DNS will be resolved remotely
⠖ mirrord exec
✓ Update to 3.106.0 available
✓ ready to launch process
✓ layer extracted
✓ operator not found
✓ agent pod created
✓ pod is ready
✓ config summary [nodemon] 3.1.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node server.js`
Listing all containers and their first blob:
Server running on port 3000
Container: sample-blob

Notice that now all environment variables are fetched from the remote execution environment (Kubernetes cluster).

* env: all environment variables will be fetched

Now we should be able to navigate to localhost:3000 and see that the connection was successful:

What we have achieved is that calls from our local process are as if they were made by the remote process and calls to the remote process are mirrored to our local process.

How does mirrord work

Simplifying, mirrord operates in an agentless mode, using kubectl to connect to a cluster and create a temporary pod for mirroring. The key components include the mirrordAgent, a Rust binary running as a Kubernetes job that proxies local processes by sniffing network traffic and accessing the file system of the target pod. The mirrordLayer, a dynamic library, hooks into the local process to relay file system and network operations to the mirrordAgent.

source: author, based on https://mirrord.dev/docs/reference/architecture/

Real world is messier than that

Now we can iterate rapidly on a new version of my app and see how it behaves in the actual remote execution environment. This is, however, a simple example; there might be scenarios where we want to:

  • redirect or steal all the traffic instead of mirroring it
  • redirect only a subset of calls or filter out Kubernetes health checks
  • redirect database writes to a local db to avoid duplicated remote writes
  • run a new app or a tool in the remote context using targetless mode

Mirrord supports all those scenarios and more are being developed.
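
These behaviours are driven by a config file passed to mirrord exec with the config-file flag. A sketch of what stealing only the requests that carry a debug header might look like (the header filter value is just an example):

{
  "target": {
    "path": "deployment/foo-app-deployment",
    "namespace": "devops-team"
  },
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-debug: true"
        }
      }
    }
  }
}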

Closing Thoughts

Developing containerized applications for Kubernetes is hard. There are many options to help with the process, including tools like telepresence, skaffold, okteto, garden, and many more.

This tells us two things: first, there is no standardized way of supporting developers in their efforts; and second, the space is still growing and maturing. mirrord has a unique place in this ecosystem. The blend of very little to no friction and strong defaults makes it an attractive solution for cloud-native development.

By seamlessly mirroring traffic, environment variables, and file operations from the remote environment to the local process, it eliminates the common pain points in Kubernetes development. This approach not only speeds up the development cycle but also ensures that your local environment is as close to production as possible, reducing the risk of unexpected issues during deployment.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

Why is Building Pipelines Different from Software Development?

· 10 min read

It Doesn’t Have to Be! Simplify Your CI/CD Workflow with Dagger

Introduction

CI/CD pipelines are essential for automating the process of software integration and deployment, ensuring that code changes are automatically tested, integrated, and deployed to production with minimal manual intervention or ideally in a fully automated way.

However, building and managing pipelines is not easy. In this blog, we will look into addressing some of the most common pain points of pipeline development and see how to improve.

This content will be valuable for Developers, Architects, DevOps Specialists, or anyone curious about how to improve building and maintaining CI/CD pipelines.

Challenges of building and running a CI/CD pipeline

The main challenge comes from the fact that CI/CD pipeline development and lifecycle management are treated differently from software development practices.

Nowadays, pipelines are mostly written in YAML. Large configuration files instruct pipeline runners hosted on services like GitLab or GitHub how to interpret a pipeline workflow file and what actions should happen.

👉 Read more about YAML file structure using Azure DevOps as an example.

Individual actions are wrapped with YAML tasks or steps which are in turn often bash scripts executed in the runner’s environment. Here is an example build job with multiple steps. Actions are used to call specialized steps and by default, the runner will execute commands provided in the run block.

build:
  runs-on: ubuntu-22.04
  steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ env.PYTHON_VERSION }}

    - name: Setup Hatch
      run: pipx install hatch==1.7.0

    - name: Set Default PyPI Project Version
      if: env.PYPI_VERSION == ''
      run: echo "PYPI_VERSION=v0.0.0+$(date -d@$(git show -s --format=%ct) +%Y%m%d%H%M%S)-$(git rev-parse --short=12 HEAD)" >> $GITHUB_ENV

    - name: Set PyPI Project Version
      run: hatch version ${{ env.PYPI_VERSION }}

    - name: Build Sdist and Wheel
      run: hatch build

    - name: Upload Sdist and Wheel to GitHub
      uses: actions/upload-artifact@v4
      with:
        name: dist
        path: "dist/*"
        if-no-files-found: error
        retention-days: 1

The main issue is that while the commands can be executed locally, there is no guarantee that the same commands will execute in the same way in the runner’s environment.

In other words, we cannot guarantee full reproducibility of the pipeline.

Since reproducibility does not work, testing happens in the runner’s environment. The only way to debug the workflow is to add print statements or log to a file and dig through the runner’s log file to see what went wrong (spoiler alert: most of the time it’s a missing comma or something equally trivial).

This typically results in a commit history like this, where the pipeline doesn’t work but the only way to be sure it will is to let it run and check the errors.

What if pipelines could be… just code

The good news is that they can! Pipelines can be just code working in a standardized way; who knows, maybe even using docker under the hood, but more on that later.

Why would we want pipelines to be written in a programming language?

  • creating and running tests with actual testing frameworks
  • locally debugging and testing each pipeline step
  • pipeline code versioning and releases
  • linting and code support (including various copilots) inside an IDE or text editor
  • using all the programming language’s facilities: async calls, functions, data structures, and more

What’s the deal with docker

I mentioned earlier that having docker could be nice. This is because when running pipelines we are bound to the proprietary runners offered by 3rd-party vendors such as GitHub and GitLab; tools like Jenkins likewise mandate their own syntax and integration points.

Wouldn’t it be cool if we could run the whole pipeline anywhere with reproducibility guarantees? For this to happen we need a standard way of executing pipeline jobs and steps/tasks. Here is where docker comes in.

Instead of running all pipeline steps in a proprietary runner format, we just need to run one step to hook into the runner’s execution environment and let the containerized environment do the rest.

What is dagger and how it can help

Sadly, dagger has nothing to do with beautiful blades but more to do with DAGs (directed acyclic graphs).

So less of a:

Photo by Jimmy Chang on Unsplash

And more like:

Source: dagger website https://dagger.io/

Transform your Messy CI Scripts into Clean Code

Powerful, programmable open source CI/CD engine that runs your pipelines in containers — pre-push on your local machine and/or post-push in CI

Let’s convert a pipeline

We are going to convert an actual Python pipeline from one of my projects, killercoda-cli, which is a simple Python CLI that helps with writing killercoda.com scenarios. The goal is to convert just the right amount of steps and introduce dagger gradually to the project.

The pipeline builds the CLI, runs tests, and enables a manual push to the PyPI registry.

https://github.com/Piotr1215/killercoda-cli/blob/main/.github/workflows/ci.yml

First things first; prerequisites

Before we start, we need to install dagger CLI and docker (podman and nerdctl would work too).

I’m using Linux, so curl will do just fine (with modified path to save the binary to):

curl -L https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh

The installation script instructs me how to add completion; executing

dagger completion zsh > /usr/local/share/zsh/site-functions/_dagger

and after reloading zshrc tab completion works just fine:

Adding dagger to the project

Running dagger init --sdk=python pulled the dagger image, and created a dagger directory and a dagger.json file.

dagger
├── pyproject.toml
├── requirements.lock
├── sdk
│ ├── codegen
│ │ ├── pyproject.toml
│ │ ├── requirements.lock
│ │ └── src
│ │ └── codegen
│ │ ├── cli.py
│ │ ├── generator.py
│ │ ├── __init__.py
│ │ └── __main__.py
│ ├── LICENSE
│ ├── pyproject.toml
│ ├── README.md
│ └── src
│ └── dagger
│ ├── client
│ │ ├── base.py
│ │ ├── _core.py
│ │ ├── gen.py
│ │ ├── _guards.py
│ │ ├── __init__.py
│ │ └── _session.py
│ ├── _config.py
│ ├── _connection.py
│ ├── _engine
│ │ ├── conn.py
│ │ ├── download.py
│ │ ├── __init__.py
│ │ ├── progress.py
│ │ ├── session.py
│ │ └── _version.py
│ ├── _exceptions.py
│ ├── __init__.py
│ ├── log.py
│ ├── _managers.py
│ ├── mod
│ │ ├── _arguments.py
│ │ ├── cli.py
│ │ ├── _converter.py
│ │ ├── _exceptions.py
│ │ ├── __init__.py
│ │ ├── _module.py
│ │ ├── _resolver.py
│ │ ├── _types.py
│ │ └── _utils.py
│ ├── py.typed
│ └── telemetry
│ ├── attributes.py
│ └── __init__.py
└── src
└── main
└── __init__.py

12 directories, 42 files
{
  "name": "killercoda-cli",
  "sdk": "python",
  "source": "dagger",
  "engineVersion": "v0.11.7"
}

Build Environment Container

Let’s add a function to src/main/__init__.py to create a development environment to build and test our project.

This function builds an environment with all the dependencies required by my application.
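
The function itself isn’t reproduced here, but a minimal sketch of what it might look like with the dagger Python SDK (the base image and install commands are assumptions about the project):

import dagger
from dagger import dag, function, object_type


@object_type
class KillercodaCli:
    @function
    def build_env(self, source: dagger.Directory) -> dagger.Container:
        """Build a container with the project source and its dependencies."""
        return (
            dag.container()
            .from_("python:3.11-slim")       # base image is an assumption
            .with_directory("/src", source)  # mount the project source
            .with_workdir("/src")
            .with_exec(["pip", "install", "hatch"])
            .with_exec(["pip", "install", "-e", "."])
        )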

We can run it with dagger call build-env --source=. and build an image.

💡Notice the kebab-case naming convention in the CLI, build_env becomes build-env

Earlier, we discussed local debugging and testing. Well, this is a container, so we should be able to drop into it with a shell!

➜ dagger call build-env --source=. terminal --cmd=bash
root@grt1fshu1uc6c:/src# ls -lah
total 2.1M
drwxr-xr-x 14 root root 4.0K Jun 13 16:21 .
drwxr-xr-x 1 root root 4.0K Jun 13 16:23 ..
-rw-rw-r-- 2 root root 1.3M May 26 20:39 .aider.chat.history.md
-rw-rw-r-- 2 root root 23K May 26 20:39 .aider.input.history
drwxr-xr-x 2 root root 4.0K May 26 20:39 .aider.tags.cache.v3
-rw-r--r-- 2 root root 52K Jun 13 14:00 .coverage
-rw-rw-r-- 2 root root 116 Feb 10 19:36 .coveragerc
drwxrwxr-x 9 root root 4.0K Jun 13 16:21 .git
drwxrwxr-x 3 root root 4.0K Feb 10 11:44 .github
-rw-rw-r-- 2 root root 3.4K Feb 10 11:34 .gitignore
drwxr-xr-x 6 root root 4.0K Jun 13 12:55 .pytest_cache
drwxrwxr-x 4 root root 4.0K Jun 13 13:58 .venv-test
-rw-rw-r-- 2 root root 1.1K Feb 10 11:39 LICENSE.txt
-rw-rw-r-- 2 root root 5.4K Jun 13 16:17 README.md
drwx------ 2 root root 4.0K Jun 13 14:51 _media
drwxrwxr-x 2 root root 4.0K May 31 11:38 assets
-rw-rw-r-- 2 root root 0 Jun 13 14:44 costam.log
-rw-rw-r-- 2 root root 6.4K May 26 14:42 coverage.xml
drwxr-xr-x 4 root root 4.0K Jun 13 14:45 dagger
-rw-r--r-- 2 root root 102 Jun 13 12:40 dagger.json
drwxrwxr-x 2 root root 4.0K Feb 10 20:21 dist
drwxrwxr-x 3 root root 4.0K May 31 11:32 killercoda_cli
-rw-rw-r-- 2 root root 686K Jun 13 14:47 output.txt
-rw-rw-r-- 2 root root 2.5K Jun 13 14:15 pyproject.toml
drwxrwxr-x 2 root root 4.0K May 31 11:08 temp_template
drwxrwxr-x 4 root root 4.0K Jun 13 14:03 tests
root@grt1fshu1uc6c:/src#

Being able to inspect the container locally is a game changer. Remember, the same container will run on a remote runner VM.

Running Tests

One more function for running tests; notice how we execute the build environment function before running the tests.
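
Again, the original isn’t shown here, but a sketch of such a test function, added to the same @object_type class as build_env (the pytest invocation is an assumption):

    @function
    async def test(self, source: dagger.Directory) -> str:
        """Run the test suite inside the build environment and return its output."""
        return await (
            self.build_env(source)        # reuse the dependency image from above
            .with_exec(["pytest", "-v"])
            .stdout()
        )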

➜ dagger call test --source=.
Current directory: /tmp/test_generate_assets/killercoda-assets
Source directory: /tmp/test_generate_assets/killercoda-assets
Generating assets from template: https://github.com/Piotr1215/cookiecutter-killercoda-assets
Output directory: /tmp/test_generate_assets
Assets generated successfully.

Tests pass, however the output is a bit sparse. Earlier we set up tab completion, so let’s see if there are any flags that can help us: dagger call test --source=. <TAB>

➜ dagger call test --source=. --debug
--debug -- show debug logs and full verbosity
--json -- Present result as JSON
--mod -- Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir")
--output -- Path in the host to save the result to
--progress -- progress output format (auto, plain, tty)
--silent -- disable terminal UI and progress output
--verbose -- increase verbosity (use -vv or -vvv for more)

The debug option gives us full run logs with max verbosity, great!

Build & Publish

The last two functions are publish and build. Publish is going to use the awesome ttl.sh service, which allows for publishing short-lived images (max 24h) and is great for testing. Build will perform a multistaged build and get our application ready for deployment.
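
A sketch of what the publish function might look like, reusing the build environment and pushing a short-lived, randomly tagged image to ttl.sh:

    @function
    async def publish(self, source: dagger.Directory) -> str:
        """Publish a short-lived test image to ttl.sh and return its address."""
        import random

        return await self.build_env(source).publish(
            f"ttl.sh/killercoda-cli-{random.randrange(10 ** 8)}"
        )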

➜ dagger call publish --source=.
ttl.sh/killercoda-cli-15186746@sha256:6d2e22543154fff996e3d829450953981c60aba6f161477e5aa3e00d3faaa2cb

Integrate with GitHub Actions

The integration with GitHub Actions is easy: just add the action to the YAML workflow and call any dagger function:

- name: Hello
  uses: dagger/dagger-for-github@v5
  with:
    verb: call
    args: call publish --source=.

Closing Thoughts

Integrating dagger into my GitHub Actions workflow wasn’t super easy, but it was way easier than dealing with pure YAML. The ability to adapt only parts of the workflow is great: no need for all-or-nothing rewrites; small, incremental steps are just fine.

From a high level, the below diagram shows working with dagger locally and in a remote environment.

dagger workflow

A big win, in my opinion, is that if I decide to move to GitLab tomorrow, I can do it much more easily. Instead of migrating the whole pipeline, I keep it as is and migrate only the entry point.

With the right amount of abstractions and clever reuse of the docker engine, dagger is a very strong choice for any pipeline. The community can collaborate on functions and capture best practices, which are available on daggerverse.

Give dagger a try and let me know what your experiences are.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

How to Build Cloud Native Platforms with Kubernetes

· 10 min read


Developer Portals, GitOps, Best Practices

Introduction

Platform Engineering focuses on empowering developers and organizations by creating and maintaining internal software products known as platforms. In this blog, we will explore what platforms are, why they are important, and uncover best practices for creating and maintaining well-architected platforms.

This content will be valuable for Platform Engineers, Architects, DevOps Specialists, or anyone curious about how platforms can drive innovation and efficiency in development.

Understanding Platform Categories: Mapping the Terrain

Similar to DevOps, Platform Engineering struggles to define a platform concisely. A good way to understand what a platform is, is to list various kinds of platforms and their characteristics.

Platform Types

  • Business as a Platform: Consider Uber, the entire product is a platform that connects users and drivers. This platform creates an ecosystem where businesses operate, users engage, and interactions happen seamlessly.

  • Domain-Specific Platforms: These platforms provide cross-cutting functionality for other applications. An example could be a geolocation API that is consumed by web frontend, mobile app and other services.

  • Domain-Agnostic Platforms: These platforms serve as foundational building blocks for developers, offering essential tools like database management, cloud storage, and user authentication. Cloud platforms like AWS or Azure provide the infrastructure and services that countless digital products rely on and are a good mental model to have when designing our own cloud native platform.

The platform landscape is vast and varied. From business-centric models to specialized domain platforms and versatile tools for developers, each plays a pivotal role in the digital ecosystem.

In this blog, we will focus on domain-agnostic platforms providing infrastructure.

The Case for Cloud-Native Platforms

Cloud-native is about how applications are created and deployed, not where. — Priyanka Sharma

Cloud-native platforms provide a foundation that allows applications to be designed with flexibility, largely making them environment agnostic. A well-architected platform offers several key benefits:

  • Simplified infrastructure management: Infrastructure provisioning and management are abstracted in a way that enables developers to move faster without compromising security and compliance requirements.

  • Increased development efficiency: A well-designed platform should increase developer productivity, improving metrics like time to first commit, incidents-to-resolution, or time to onboard new developers.

  • Built-in scalability and reliability: A successful platform brings to the table elements that are not part of core development efforts but are crucial for product success: observability, scalability, automated rollbacks, integrated authentication, and more.

Building Blocks of Cloud-Native Platforms

Self-service portal

A self-service portal is a user-friendly interface that allows users to access and manage resources independently, empowering developers and users to create, configure, and deploy resources without IT support. This streamlines workflows, accelerates project timelines, and enhances productivity.

Examples like Backstage, developed by Spotify, and Port provide customizable interfaces for managing developer tools and services, ensuring efficient and consistent interactions. These portals embody the essence of self-service, enabling quick, autonomous actions that reduce bottlenecks and foster agility in development processes.

Programmatic APIs

Programmatic APIs are the backbone of cloud-native platforms, enabling seamless interaction with platform services and functionalities. These APIs allow developers to automate tasks, integrate different services, and build complex workflows, enhancing efficiency and consistency across environments.

APIs provide programmatic access to essential platform features, allowing developers to automate repetitive tasks and streamline operations. They support various transport mechanisms such as REST, HTTP, and gRPC, offering flexibility in how services communicate. For instance, an API based on the Kubernetes Resource Model enables developers to manage containerized applications, while AWS SDKs facilitate interactions with a wide range of cloud resources. By leveraging programmatic APIs, platforms ensure that developers can efficiently build, deploy, and manage applications, driving productivity and innovation.

Automated Workflows

Automated workflows are crucial for provisioning and deployment processes in cloud-native platforms. They ensure tasks are executed consistently and efficiently, minimizing human error and enhancing productivity.

Key to these workflows are CI/CD pipelines, which automate the build, test, and deployment stages of application development. Tools like Argo CD and Flux enable GitOps practices, where infrastructure and application updates are managed through Git repositories. By leveraging automated workflows, platforms can ensure rapid, reliable deployments, maintain consistency across environments, and accelerate the overall development process.
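
To make this concrete, here is a sketch of an Argo CD Application that keeps a folder in a Git repository continuously synced to a cluster namespace (the repository URL, paths, and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-gitops
    targetRevision: main
    path: infrastructure
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true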

Monitoring and Observability

Monitoring and observability tools provide crucial insights into the performance and health of cloud-native platforms. These tools help detect issues early, understand system behavior, and ensure applications run smoothly.

Prominent tools include Prometheus for collecting and querying metrics, Grafana for visualizing data and creating dashboards, and OpenTelemetry for tracing and observability. Together, they enable proactive management of resources, quick resolution of issues, and comprehensive visibility into system performance. By integrating these tools, platforms can maintain high availability and performance, ensuring a seamless user experience.

Security and Governance Controls

Integrated security and governance controls are vital for maintaining compliance and protecting sensitive data in cloud-native platforms. These controls ensure that platform operations adhere to security policies and regulatory requirements.

Tools like OPA GateKeeper, Kyverno, and Falco play a crucial role in enforcing security policies, managing configurations, and detecting anomalies. OPA GateKeeper and Kyverno help in policy enforcement and compliance, while Falco specializes in runtime security and intrusion detection. By incorporating these tools, platforms can ensure robust security, maintain compliance, and mitigate risks effectively.
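
As an illustration, a minimal Kyverno policy sketch that enforces a simple governance rule, requiring every namespace to carry a team label:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Namespace
      validate:
        message: "Every namespace must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"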

Ever Evolving

The only constant in technology is change. — Marc Benioff

Developer platforms are constantly evolving to meet the changing needs of developers and users alike. This continuous evolution ensures that platforms remain relevant, efficient, and capable of supporting the latest innovations and best practices. By staying adaptable and forward-thinking, platforms can provide the tools and features necessary to drive ongoing success and innovation.

Embracing Kubernetes Resource Model APIs

An Application Programming Interface (API) is a set of rules and protocols for building and interacting with software.

The Kubernetes Resource Model API is the industry standard for managing resources in a cloud-native environment. Kubernetes acts as a universal control plane, continuously reconciling the desired state with the actual state of the system. Standardizing on this model offers several key benefits:

  1. Industry-Wide Standardization: Kubernetes has become the de facto standard for cloud-native infrastructure management. Its API-driven approach is widely adopted, ensuring compatibility and ease of integration with various tools and services.

  2. Universal Control Plane: Kubernetes serves as a universal control plane, providing a centralized management interface for infrastructure and applications. This centralization simplifies operations and enforces consistency across environments.

  3. Continuous Reconciliation: The Kubernetes API supports declarative management, where the desired state of resources is defined, and Kubernetes continuously reconciles this state. This automated reconciliation reduces manual intervention and ensures system reliability.

  4. Separation of Concerns: Platform engineers can configure infrastructure and policies, while developers interact with higher-level APIs. This separation enhances automation and self-service capabilities, empowering developers without compromising security or compliance.

  5. Scalability and Extensibility: Supporting transport mechanisms like REST, HTTP, and gRPC, the Kubernetes API is adaptable and scalable. It integrates seamlessly with a wide range of tools, facilitating the growth and evolution of the platform.

By leveraging Kubernetes Resource Model APIs, organizations can build robust, scalable, and efficient platforms that meet the dynamic needs of modern development environments​.

Platform as a Product: A New Perspective

Adopting a product approach to platform engineering is crucial for creating successful internal platforms. This means focusing on delivering continuous value to users — developers and the organization. It involves understanding user needs, designing and testing solutions, implementing them, and gathering feedback for continuous improvement.

Cloud hyperscalers like AWS, Google Cloud, and Microsoft Azure exemplify this approach. They have built user-centric platforms that are constantly updated with new features, driven by user feedback and accessible via standardized APIs. This ensures they remain relevant and valuable.

For internal platforms, roles such as product owners and project managers are essential. They help ensure the platform evolves in response to developer needs, maintaining usability and effectiveness. By treating your internal platform as a product, you create a sustainable resource tailored to your organization’s unique needs.

Platform as a Product

Delivering Value Through Cloud-Native Platforms

In our demo video, we showcase how to build a platform that embodies key cloud-native principles. This practical example demonstrates the immense value that a well-architected cloud-native platform can deliver. Here’s a brief overview of what you can expect:

  • Empowering Developers: See how the platform provides developers with the tools and autonomy they need to innovate and deliver faster.

  • Cloud-Native Principles: Watch as we leverage containerization, microservices, and other cloud-native practices to build a robust, scalable platform.

  • API-Driven Approach: Discover how using programmatic APIs streamlines operations, enhances automation, and ensures seamless integration between services.

  • GitOps Workflow: Learn how the platform employs GitOps practices to manage infrastructure as code, enabling more efficient and reliable deployments.

Watch the video to see these principles in action and understand how they come together to create a powerful, developer-centric platform.

Essential Tools

In the demo, you can see a range of tools that form the backbone of cloud-native platforms, each serving a critical role. From Kubernetes as the control plane orchestrator to GitHub for managing API calls via pull requests, these tools collectively ensure efficient, scalable, and secure infrastructure management.

Recap: Platform Components in Action

Let’s recap what we’ve learned about using Kubernetes as a control plane for infrastructure provisioning:

  1. Self-Service Portal: The Developer accesses the IDP Portal for a unified UI experience to manage applications and infrastructure.

  2. Push Changes: The Developer pushes changes to the GitOps Repository via a pull request.

  3. Approval and Merge: The Platform Engineer reviews, approves, and merges the pull request, updating configurations.

  4. Sync Changes: The GitOps Repository syncs the changes to ArgoCD.

  5. Deploy Changes: ArgoCD deploys the changes to the Kubernetes API.

  6. Reconcile Infrastructure: The Kubernetes API reconciles the infrastructure via Crossplane.

  7. Provision Infrastructure: Crossplane provisions the infrastructure via various providers.

This sequence ensures a streamlined, automated process for managing and provisioning infrastructure using Kubernetes and GitOps principles.
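To make the sync step more tangible, an Argo CD Application that watches the GitOps repository and syncs it automatically might look roughly like this; the repository URL and path are placeholders.

kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-claims                                      # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-repo.git  # placeholder repository
    targetRevision: main
    path: claims
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF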

platform-components

Closing Thoughts

Cloud-native platforms are revolutionizing how we develop and manage applications by providing robust, scalable, and secure environments. They empower developers with self-service portals, streamline operations with programmatic APIs, and ensure reliability through automated workflows and comprehensive monitoring tools. By embracing these platforms, organizations can accelerate innovation, enhance productivity, and maintain high standards of security and compliance.

Treating platforms as products ensures continuous improvement and alignment with user needs, making them indispensable tools in today’s fast-paced tech landscape. Whether you’re a Platform Engineer, Architect, or DevOps Specialist, leveraging cloud-native platforms can drive significant value, fostering a culture of efficiency and agility. Stay ahead of the curve, explore the potential of cloud-native platforms, and watch your organization thrive.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

Master Command Orchestration with Justfile

· 8 min read

image

Introducing Just

This content is also available as an interactive workshop on killercoda.com

It's like a Makefile, just better

Every project needs to orchestrate commands, whether for testing, building, creating components, provisioning infrastructure, and more. This is typically done via a Makefile or bash scripts. The problem with make is that it is designed as a tool to *build* C source code; it *can* run commands, but that's not its purpose. This means that when using a Makefile we take on the whole unnecessary baggage of the build part.

Bash scripts are a bit better, but once more scripts accumulate, managing them and their dependencies becomes a nightmare.

There is a tool that combines the best of both worlds: just is similar to make, but focused on command orchestration.

Installation

Next we will install just and set up a simple justfile to see how it works.

curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh \
| bash -s -- --to /usr/local/bin

We can confirm that the installation was successful by running:

just --version

Setting up sample project

Create a directory for the sample project

mkdir -p ./just-example && cd ./just-example

Create a justfile in the project directory

cat << 'EOF' > justfile
# This is a comment
hello:
    echo "Hello, World!"
EOF

Running the first command

Now we can run the hello command.

> 💡 By default just will run the first recipe in the justfile if we don't specify one.

just

Basic Syntax

Next we will learn the basic commands and syntax of just.

> 💡 Commands in a justfile are called recipes.

We can use @ to suppress printing the command to the terminal. This is useful when we want to show only command output and not the command itself.

Suppressing command

cat << 'EOF' >> justfile
suppress_command:
    @echo "Only this is printed"
full_command:
    echo "Both command and output are printed"
EOF

Now we can run the suppress_command and full_command recipes:

just suppress_command
just full_command

Running faulty recipes will fail early

Recipes fail early if any of their commands returns a non-zero exit code.

cat << 'EOF' >> justfile
fail_recipe:
    @ls /non-existing-dir
    @echo "This is never printed"
EOF

just fail_recipe

Recipes can have dependencies.

cat << 'EOF' >> justfile
dependency:
    @echo "This is the dependency"
dependent: dependency
    @echo "This is the dependent"
EOF

just dependent

Running multiple recipes

cat << 'EOF' >> justfile
recipe1:
    @echo "This is recipe1"
recipe2:
    @echo "This is recipe2"
EOF

just recipe1 recipe2

Default recipe

If we don't specify a recipe, just will run the first recipe in the justfile. We can specify a default recipe by using default as the recipe name.

{
echo "default:"
echo " just --list"
echo ""
cat justfile
} | sponge justfile

just

Notice that running just this time simply printed the list of recipes.

> 💡 Comments on top of recipes are used as descriptions when running just --list.

Real-life example

There are many more features in just that we can explore. Next we will look at an example of a production-ready justfile.

Let's start by cloning a repository with the justfile and checking all the recipes in it.

cd ../
git clone https://github.com/Piotr1215/crossplane-box.git
cd crossplane-box

just

📓 This justfile contains a set of recipes that help you manage your Crossplane installation. You can find more information about the recipes in the Crossplane Box Blog.

Settings

Just supports a set of settings that you can use to customize the behavior of the runner. For example:

set export
set shell := ["bash", "-uc"]

This tells just to export all variables and to use bash -uc for every shell execution. The -c flag tells bash to run commands passed as a string, and -u treats unset variables as errors.

Built-in Functions

Just provides a set of built-in functions. For example:

yaml := justfile_directory() + "/yaml"
apps := justfile_directory() + "/apps"

This tells just to define two variables, yaml and apps, that point to directories relative to the justfile location, regardless of the working directory from which just is invoked.
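These variables can then be referenced inside recipes with the {{...}} syntax; for example, a hypothetical recipe applying everything under the yaml directory might look like:

# apply all manifests from the yaml directory (recipe name is illustrative)
apply_yaml:
    @kubectl apply -f {{yaml}}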

It is also easy to detect the operating system and conditionally execute different commands:

browse  := if os() == "linux" { "xdg-open " } else { "open" }
copy    := if os() == "linux" { "xsel -ib" } else { "pbcopy" }
replace := if os() == "linux" { "sed -i" } else { "sed -i '' -e" }

Here we can see that the browse, copy, and replace variables are defined based on the operating system. They can be used later in recipes like this:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d | {{copy}}

Recipe Parameters

Just allows you to define parameters for the recipes. For example:

# setup kind cluster
setup_kind cluster_name='control-plane':
    #!/usr/bin/env bash
    set -euo pipefail
    echo "Creating kind cluster - {{cluster_name}}"
    envsubst < kind-config.yaml | kind create cluster --config - --wait 3m
    kind get kubeconfig --name {{cluster_name}}
    kubectl config use-context kind-{{cluster_name}}

Here we can see that the setup_kind recipe takes a parameter cluster_name which has a default value of control-plane.
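The parameter can be overridden on the command line; assuming kind, envsubst, and the repository's kind-config.yaml are available, both of these would work:

just setup_kind              # uses the default value 'control-plane'
just setup_kind demo-cluster # passes 'demo-cluster' as cluster_name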

Control Flow Recipes

It's very easy to string together various recipes in a sequence. For example:

# * setup kind cluster with crossplane, ArgoCD and launch argocd in browser
setup: _replace_repo_user setup_kind setup_crossplane setup_argo launch_argo

Here we can see that the setup recipe is a sequence of other recipes. This is useful when all commands are well tested and we want to quickly execute them in a sequence.

Advanced Features

Next we will look at some advanced features of just. Just offers a lot of flexibility and power to define and execute recipes. It is continuously being improved and has a very active community. Here are some advanced features that helped me to write complex recipes:

Using Shell Recipes

just allows you to define recipes in any shell language. This is very useful when you need to write complex shell scripts. For example:

# setup kind cluster
setup_kind cluster_name='control-plane':
    #!/usr/bin/env bash
    set -euo pipefail
    echo "Creating kind cluster - {{cluster_name}}"
    envsubst < kind-config.yaml | kind create cluster --config - --wait 3m
    kind get kubeconfig --name {{cluster_name}}
    kubectl config use-context kind-{{cluster_name}}

Notice the #!/usr/bin/env bash shebang at the beginning of the recipe. This means that the whole recipe is executed in a single bash subshell, so variables set on one line are available on the following lines.

Commands Evaluation

Variables' values are evaluated at runtime. This means that you can use backtick expressions to capture command output, for example:

date_suffix                      := `echo test_$(date +%F)`

This defines a date_suffix variable that expands to test_ followed by the current date and can be used in recipes. Let's try it out:

cat <<EOF >> justfile
date_suffix := \`echo test_\$(date +%F)\`
add_suffix:
    echo "Adding date suffix: {{date_suffix}}"
EOF
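With the recipe appended, running it prints the suffix with today's date:

just add_suffix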

Just Scripts

Adding the #!/usr/bin/env -S just --justfile shebang to a script allows calling just recipes directly, as if they were scripts. This is very useful for system-wide scripts. For example, I use this alias to call a just recipe that sets up my kind cluster with crossplane.

alias uxp="just ~/dev/dotfiles/scripts/uxp-setup/setup_infra"

This allows me to call uxp from anywhere in the system and it will execute the setup_infra recipe.
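As a rough sketch of such a self-executing just script (the file path, recipe, and cluster name are hypothetical, and ~/bin is assumed to exist and be on your PATH):

cat << 'EOF' > ~/bin/cluster-up
#!/usr/bin/env -S just --justfile

# create a local kind cluster
up:
    kind create cluster --name dev-cluster
EOF
chmod +x ~/bin/cluster-up

~/bin/cluster-up up   # runs the 'up' recipe directly, no alias needed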

Interactive Mode

Just has an interactive mode that allows you to select recipes from the list using the --choose flag. Another alias I like to use is:

.j: aliased to just --justfile ~/dev/dotfiles/scripts/uxp-setup/justfile --working-directory ~/dev/dotfiles/scripts/uxp-setup --choose

Closing Thoughts

We have just scratched the surface of what just can do; read more about its features in the just documentation. This powerful tool can orchestrate a wide range of commands for various tasks, offering flexibility and simplicity.

The combination of just's command orchestration and shell-like syntax makes it a versatile tool for managing complex workflows.

Next Steps

Happy orchestrating! 🚀

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

How to Simplify Kubernetes Deployments with Kluctl: A Beginner's Guide

· 9 min read

intro-to-kluctl

Introduction

Kubernetes has revolutionized the way we manage containerized applications, providing a robust and scalable platform for deploying, scaling, and operating these applications. However, despite its powerful capabilities, managing Kubernetes deployments can sometimes be challenging. Popular tools like Helm and Kustomize have become the standard for many teams, but they might not always meet every need.

This is where Kluctl steps in. Kluctl is a deployment tool for Kubernetes that aims to combine the strengths of Helm and Kustomize while addressing their limitations. It provides a more flexible and declarative approach to Kubernetes deployments, making it an excellent choice for those seeking an alternative.

In this blog, we'll explore Kluctl's unique features and how it can streamline your Kubernetes deployment process. Whether you're an experienced Kubernetes user or just getting started, this guide will provide valuable insights into why Kluctl might be the tool you've been looking for.

An interactive version of this blog is available on killercoda.com:

interactive scenario

What is Kluctl?

Kluctl is a modern deployment tool designed specifically for Kubernetes. It aims to simplify and enhance the deployment process by combining the best aspects of Helm and Kustomize, while also addressing some of their shortcomings. With Kluctl, you can manage complex Kubernetes deployments more efficiently and with greater flexibility.

Key Features of Kluctl

  1. Declarative Configuration: Kluctl allows you to define your deployments declaratively using YAML files. This approach ensures that your deployments are consistent and reproducible.

  2. GitOps Ready: Kluctl integrates seamlessly with GitOps workflows, enabling you to manage your deployments via Git. This integration supports continuous deployment practices and makes it easier to track changes and rollbacks.

  3. Flexible and Modular: Kluctl supports modular configurations, making it easy to reuse and share components across different projects. This modularity reduces duplication and enhances maintainability.

  4. Validation and Diffing: One of Kluctl's standout features is its built-in validation and diffing capabilities. Before applying changes, Kluctl shows you what changes will be made, allowing you to review and approve them. This feature helps prevent accidental misconfigurations and ensures deployments are accurate.

source: https://kluctl.io/

Why Choose Kluctl?

  • Enhanced Flexibility: Kluctl provides a higher degree of flexibility compared to traditional tools like Helm and Kustomize. It enables you to customize and manage your deployments in a way that best fits your workflow and organizational needs.

  • Improved Collaboration: By leveraging GitOps, Kluctl enhances collaboration within teams. All deployment configurations are stored in Git, making it easy for team members to review, suggest changes, and track the history of deployments.

  • Reduced Complexity: Kluctl simplifies the deployment process, especially for complex applications. Its modular approach allows you to break down deployments into manageable components, making it easier to understand and maintain your Kubernetes configurations.

In summary, Kluctl is a powerful tool that enhances the Kubernetes deployment experience. Its declarative nature, seamless GitOps integration, and advanced features make it an excellent choice for teams looking to improve their deployment workflows.

Installing Kluctl

Getting started with Kluctl is straightforward. The following steps will guide you through the installation process, allowing you to set up Kluctl on your local machine and prepare it for managing your Kubernetes deployments.

Step 1: Install Kluctl CLI

First, you need to install the Kluctl command-line interface (CLI). The CLI is the primary tool you'll use to interact with Kluctl.

To install the Kluctl CLI, run the following command:

curl -sSL https://github.com/kluctl/kluctl/releases/latest/download/install.sh | bash

This script downloads and installs the latest version of Kluctl. After the installation is complete, verify that Kluctl has been installed correctly by checking its version:

kluctl version

You should see output indicating the installed version of Kluctl, confirming that the installation was successful.

Step 2: Set Up a Kubernetes Cluster

Before you can use Kluctl, you need to have a Kubernetes cluster up and running. Kluctl interacts with your cluster to manage deployments, so it's essential to ensure that you have a functioning Kubernetes environment. If you haven't set up a Kubernetes cluster yet, you can refer to my previous blogs for detailed instructions on setting up clusters using various tools and services like Minikube, Kind, GKE, EKS, or AKS.
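If you don't have a cluster available, a local kind cluster is enough to follow along (assuming kind is installed); just make sure the context name you reference in .kluctl.yaml below matches your cluster's kubeconfig context.

kind create cluster --name kluctl-demo
kubectl config current-context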

Kluctl in action

Next we will set up a basic kluctl project. To start using kluctl, define a .kluctl.yaml file in the root of your project with the targets you want to deploy to.

Let's create a folder for our project and create a .kluctl.yaml file in it.

mkdir kluctl-project && cd kluctl-project

cat <<EOF > .kluctl.yaml
discriminator: "kluctl-demo-{{ target.name }}"

targets:
  - name: dev
    context: kubernetes-admin@kubernetes
    args:
      environment: dev
  - name: prod
    context: kubernetes-admin@kubernetes
    args:
      environment: prod

args:
  - name: environment
EOF

This file defines two targets, dev and prod, that will deploy to the same Kubernetes cluster.

We can use the args section to define the arguments that we will use in our YAML files to template them. For example {{ args.environment }} would output dev or prod depending on the target we are deploying to.

Create Deployment

Next we will create a kustomize deployment for the redis application. Under the hood kluctl uses kustomize to manage the Kubernetes manifests. kustomize is a tool that lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

> 💡 We are following the Basic Project Setup Introduction tutorial from the kluctl documentation.

Let's create a deployment.yaml where we will define elements that kluctl will use to deploy the application.

cat <<EOF > deployment.yaml
deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"
EOF

Now we need to create the redis deployment folder.

mkdir redis && cd redis

Since we are using kustomize we need to create a kustomization.yaml file.

cat <<EOF > kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
EOF

And now we can create the service.yaml and deployment.yaml files.

cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
          readinessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          livenessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          volumeMounts:
            - mountPath: /data
              name: redis-data
          resources:
            limits:
              memory: 256Mi
              cpu: 125m
            requests:
              cpu: 70m
              memory: 200Mi
      volumes:
        - name: redis-data
          emptyDir: {}
EOF

cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
EOF

Deploy the app

Next, we will deploy the redis application to the dev target. First, we need to change to the root of the kluctl-project repository and initialize a git repository there.

cd /root/kluctl-project && \
git init && \
git add . && \
git commit -m "Initial commit"

Now we can deploy the application to dev environment.

kluctl deploy --yes -t dev

💡 Notice that we are using the --yes flag to avoid the confirmation prompt. This is useful for the scenario, but in real life you should always review the changes before applying them.

Handling Changes

Next we will introduce changes to our setup and see how kluctl handles them. Let's see what we have deployed so far by executing the tree command.

.
|-- deployment.yaml
|-- kustomization.yaml
`-- redis
    |-- deployment.yaml
    |-- kustomization.yaml
    `-- service.yaml

1 directory, 5 files

💡 Notice this resembles a typical kustomize directory structure.

One of the superpowers of kluctl is how transparently it handles changes. Let's modify the redis deployment and see what happens.

yq -i eval '.spec.replicas = 2' redis/deployment.yaml
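Before deploying, we can preview the pending change set; kluctl's diff command renders the same change overview without touching the cluster:

kluctl diff -t dev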

Now let's deploy the changes to the dev target.

kluctl deploy --yes -t dev

Remember that at the beginning we added custom labels to each deployment. Let's check whether the labels were applied correctly.

kubectl get deployments -A --show-labels

Templating

Next, we will use the templating capabilities of kluctl to deploy the same application to a different namespace. At the beginning of the workshop we defined two environments: prod and dev. This setup works out of the box for multiple targets (clusters); in our case, however, we want a single target (cluster) and we want to deploy different targets to different namespaces.

Let's start by deleting the existing resources and modifying some files.

> 💡 It is possible to migrate the resources to a different namespace using the kluctl prune command. However, in this case, we will delete the old resources and recreate them in new namespaces.

kluctl delete --yes -t dev

In order to differentiate between the two environments, we will need to adjust the discriminator field in the .kluctl.yaml file.

yq e '.discriminator = "kluctl-demo-{{ target.name }}-{{ args.environment }}"' -i .kluctl.yaml

We also need to create a namespace folder with a namespace YAML file and reference it in our deployment.yaml file.

First create the namespace folder.

mkdir namespace

Now we can add the namespace folder to the deployment.yaml file.

> 💡 Notice the use of barrier: true in the deployment.yaml file. This tells kluctl to apply the resources in the order they are defined in the file and to wait for the resources before the barrier to be ready before applying the next ones.

cat <<EOF > deployment.yaml
deployments:
  - path: namespace
  - barrier: true
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"

overrideNamespace: kluctl-demo-{{ args.environment }}
EOF

Now let's create the namespace YAML file.

cat <<EOF > ./namespace/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kluctl-demo-{{ args.environment }}
EOF

Test the deployment

We will test if our setup works by deploying the redis application to the dev and prod namespaces. Deploying the resources to the dev namespace:

kluctl deploy --yes -t dev

And to the prod namespace:

kluctl deploy --yes -t prod

Let's check if everything deployed as expected:

kubectl get pods,svc -n kluctl-demo-dev
kubectl get pods,svc -n kluctl-demo-prod
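When you are done experimenting, both targets can be cleaned up again with kluctl delete:

kluctl delete --yes -t dev
kluctl delete --yes -t prod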

Closing thoughts

That's it! We have seen the basic capabilities of kluctl, but we have barely scratched the surface. You can use it to deploy to multiple clusters, namespaces, and even different environments.

The mix of jinja2-based templating and a kustomize-based architecture makes it a really flexible tool for complex deployments.

Next Steps


Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium

Build Your Own Kubernetes Co-Pilot: Harness AI for Reliable Cluster Management

· 8 min read

kubernetes-copilot

Have you ever felt frustrated by nonsensical AI outputs and hallucinations? If yes, this blog is going to be helpful for new or seasoned Kubernetes users who want to explore how AI can help manage Kubernetes resources more reliably.

What are AI hallucinations?

In a nutshell, AI hallucination occurs when a large language model (LLM) generates misleading or incorrect information in response to a prompt. This can happen due to various factors such as insufficient or flawed training data, overfitting, unrecognized idioms or slang, and adversarial inputs. These hallucinations manifest when the AI, aiming to produce coherent responses, makes errors that range from subtle factual inaccuracies to nonsensical or surreal outputs, similar to how humans might perceive patterns in random visuals.

In the context of Kubernetes, these aren't just minor nuisances; they can lead to significant operational blunders. In this blog, we explore how to enhance the reliability of AI responses, mitigate the risks of hallucinations, and manage Kubernetes resources using AI!

How can AI be helpful in managing Kubernetes resources?

Before we start exploring the technical setup, let's answer the question how can AI be helpful in managing Kubernetes resources? Imagine an AI assistant that can help you create, fix, and validate Kubernetes resources in a conversational manner. You might ask it to create a new deployment, fix a broken service, or validate a YAML file. If you are learning Kubernetes, this assistant can be a great learning tool to help you explore the cluster and clarify Kubernetes concepts.

Kubernetes helps manage cloud applications, but its YAML configurations can be tricky. When working with AI tooling, we've all faced those moments when AI tools, designed to ease this burden, instead contribute to it by generating nonsensical outputs, a phenomenon we refer to as "AI hallucinations".

Problem Statement

Let's state the issue we do have with AI in the context of Kubernetes:

  • 🤖 AI faces issues with consistency and reliability when dealing with large YAML files.
  • 🧠 AIs can have "hallucinations," generating illogical outputs that become more problematic as the input size increases.
  • 📈 This inconsistency makes working with AI models non-deterministic and error-prone.

Goals

Our main goal is to increase reliability and consistency in AI responses. We use two main techniques to achieve this:

  • 🛠️ Function calling to bind API routes as tools available for the AI Assistant to communicate with a Kubernetes cluster
  • 🔍 Internet search APIs to provide accurate and relevant information about Kubernetes

Implementation Plan

The following steps outline the plan to achieve our goals:

  • 💼 Use Flowise to implement the logic flow so that the AI Assistant can help with managing and troubleshooting a Kubernetes cluster on our behalf.
  • 🛠️ Create a simple Flask API that exposes functions for the AI Assistant to enable it to interact with the Kubernetes cluster.
  • 💻 Use function calling to bind the API routes as tools available for the AI Assistant which enables communication with a local Kind cluster with Kubernetes running.
  • 💬 Test the AI Assistant with various scenarios to ensure it can handle different Kubernetes configurations and provide accurate responses.

Assistant in Action

To follow along, you can clone the repository from GitHub, install prerequisites and follow the instructions.

Step 1: Setup the AI Assistant

In Flowise, create a new assistant. Notice that I'm using OpenAI's latest model, but for testing purposes you can select less powerful models or any open-source model. The quality of responses will be affected, but it will still work.

Here are the instructions that the assistant will follow:

You are a helpful Kubernetes Assistant specializing in helping build, fixing and validating various kubernetes resources yaml files.
Start by greeting the user and introducing yourself as a helpful and friendly Kubernetes Assistant.

If the user asks for help with creating or validating yaml files, do the following:

- if the files are correct proceed with the next steps, if not propose fixes and correct the file yourself
- if user asks for information about the kubernetes cluster use the get_config function and provide relevant information
- ask the user to submit one yaml file at a time or create one yaml file yourself if the user asks you to create one
- send the YAML content and only the YAML content to the create_yaml function
- immediately after use the tool cleanup_events to clean any old events
- ask the user if they would like to see the validation results and inform them that it takes some time for the resources to be installed on the cluster
- if the user responds yes, use the tool check_events to see if everything is correct
- if the validation passes, ask the user if they want to submit another YAML file
- if the validation fails, propose a new corrected YAML to the user and ask if the user would like to submit it for validation
- repeat the whole process with new YAML files

Your secondary function is to assist the user in finding information related to Kubernetes. Example categories:

- for questions about kubernetes concepts such as pods, deployments, secrets, etc, use brave search API on https://kubernetes.io/docs/concepts/
- for generic Kubernetes questions use brave search API on kubernetes docs: https://kubernetes.io/docs/home/
- for questions regarding kubernetes releases and features use brave search API on kubernetes releases documentation: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG. If you are asked for details about a specific release, select one of the releases, otherwise use the latest stable release.

Step 2: Flask API

The server.py file defines API routes that wrap the kubectl commands.

ℹ️ The Flask server is a naive implementation for demonstration purposes only. In a real-life scenario, we wouldn't call kubectl directly from the server but rather use a client library like kubernetes or client-go.

Step 3: Expose local URL to the internet

In order to enable the OpenAI assistant to use the functions, we must expose the locally running Flask server to the internet. A nice tool for this is ngrok. You can download it from the ngrok website and follow the instructions to expose the local URL.
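Assuming the Flask server listens on its default port 5000, exposing it can be as simple as:

ngrok http 5000

ngrok then prints a public forwarding URL, which can be used as the base URL for the assistant's tools.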

Step 4: Function calling

Now we can create functions for each API route. Those are:

  • get_config - returns the current Kubernetes configuration
  • create_yaml - creates a new Kubernetes resource from a YAML file
  • check_events - checks the status of the Kubernetes resources

For each of those routes we create a function that calls the API and returns the response. Here is how the function looks in Flowise:

function-in-flowise

Step 5: Use brave search API

The secondary function of our assistant is to assist the user in finding information related to Kubernetes. We can use the brave search API to achieve this.

Step 6: Testing

Now since we have the whole flow available, let's test the assistant.

flow

Let's start by asking what is the cluster we are running on:

what-cluster

Here the assistant used the get_config function to get the current Kubernetes configuration and correctly identified the cluster.

Now let's ask the assistant to create a new nginx based ingress:

nginx-deployment

Notice how the assistant correctly selected the create_yaml function to create the ingress and then used the check_events function after asking if we would like to see the output. It's also interesting that it has found a different event that was not related to the nginx ingress and classified it as unrelated to our request.

Now, let's submit a broken deployment and see if the assistant can fix it:

broken-nginx

In this case we have submitted a broken deployment and the assistant has correctly identified the issue and even proposed a fix.

Lastly, let's check if the assistant can help us understand some Kubernetes concepts:

concepts-search

Here the assistant has used the brave search API to find information about the Kubernetes resource model and provided a link to the source.

Closing Thoughts

We have successfully demonstrated that using function calling and carefully crafted prompt instructions, we can increase the reliability and usefulness of AI assistants in managing Kubernetes resources. This approach can be further extended to other use cases and AI models.

Here are a few use cases where this approach can be useful:

  • 🤖 improved learning experience
  • 📈 help increase Kubernetes adoption
  • 🌐 virtual Kubernetes assistant


Next Steps

Give it a try, build your own AI powered Kubernetes management today:

  • Clone the Repository: Visit GitHub to get the necessary files.
  • Set Up Your Assistant: Follow the instructions to set up the prerequisites and start building your Kubernetes Co-Pilot.
  • Engage with the Community: Share your experiences and solutions; the setup is very much a proof of concept and can be improved in many ways.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium