
5 Must-Have Command Line AI Tools

· 10 min read

Terminal Friendly AI Projects

Introduction

Artificial Intelligence (AI) is not just a buzzword; it’s a transformative force reshaping industries across the globe. The U.S. AI market alone is projected to reach approximately $594 billion by 2032, growing at a robust CAGR of 19% from 2023. This staggering growth underscores AI’s pivotal role in driving innovation and efficiency.

If you’re not leveraging AI in your workflows yet, you might be missing out on significant opportunities. AI is rapidly becoming a critical component in staying competitive, and those who adopt AI tools now are positioning themselves at the forefront of technological advancement.

In this blog, I would like to show you 5 tools that improved my productivity. You don’t need to be a software developer or IT professional to take advantage of the same efficiency boost.

Let’s look at some statistics

The U.S. AI market is expected to reach approximately $594 billion by 2032, with a CAGR of 19% from 2023 (Statistics and Facts for 2024, CompTIA).

Approximately 34% of companies are currently using AI, with an additional 42% exploring AI technologies. This highlights a significant interest and ongoing integration of AI in business operations (Statistics and Facts for 2024, CompTIA).

AI is projected to create 12 million more jobs than it will replace by 2025. The demand for AI specialists is anticipated to rise, with 97 million positions needed in the industry by that time (Statistics and Facts for 2024, CompTIA).

Why the Terminal?

You are probably familiar with ChatGPT or Claude web interfaces and those are great first steps to try out generative AI. However, those web UIs have important limitations; they are generic and not tailored to specific needs. While convenient, they lack the flexibility to integrate seamlessly with custom workflows and automate repetitive tasks.

The command line is a powerful interface that offers more control, efficiency and flexibility than graphical interfaces. It allows for scripting, automation, and quick access to powerful tools without the overhead of a graphical interface.

AI is revolutionizing the way we interact with technology. By integrating AI with command line tools, we can automate complex tasks, gain deeper insights from data, and improve overall productivity.

It’s Easier Than You Think

Using AI tools in the terminal is straightforward. Many tools provide simple installation commands and detailed documentation to help you get started quickly.

Command line tools often offer more granular control over their operation, allowing you to customize your workflows to suit your specific needs.

Better Automation

Terminal-based AI tools excel at automation. They can be easily integrated into shell scripts, scheduled with cron jobs, and used in combination with other command line utilities to create powerful automated workflows.
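
For instance, a cron entry can pipe fresh data through the summarize helper shown later in this post (a sketch; the log path and output location are illustrative):

# Every morning at 6:00, summarize the last 200 lines of an application log
0 6 * * * tail -n 200 /var/log/myapp.log | summarize > /tmp/log-summary.txt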

Tools

Before we jump into the tooling overview, let’s make sure we are on the same page, defining what’s what in the terminal universe. Put simply:

Terminal Related Definitions

Ollama

A command-line tool that allows you to run AI models locally, enabling seamless and secure interactions with various LLMs directly from your terminal. Chat or interact with AI models through APIs on your local machine, a server on your home network, or elsewhere.

  • ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language…

🎥 See ollama in action on asciinema: https://asciinema.org/a/664871
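
Getting started is a single command once ollama is installed, and the local REST API (port 11434 by default) makes it easy to script against:

# Pull a model and chat with it interactively
ollama run llama3

# Or call the local API from a script
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Explain symlinks in one sentence", "stream": false}'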

Fabric

An open-source framework designed to augment human capabilities using AI. It offers a modular system for solving specific problems through a crowdsourced set of AI prompts, known as Patterns.

Fabric enables users to integrate AI into their daily tasks seamlessly, from summarizing content to generating social media posts. It emphasizes usability with a command-line interface and clear Markdown-based patterns. The tool also supports creating personal AI infrastructures, making it a versatile choice for both simple and complex AI applications.

  • danielmiessler/fabric: fabric is an open-source framework for augmenting humans using AI…

However, we can do better than this and let fabric speak for itself using its --agents flag:

echo "research the https://github.com/danielmiessler/fabric repository and summarize what the tool does" \
| fabric --agents | summarize

This gives us the following response:

ONE SENTENCE SUMMARY:
The fabric tool enhances productivity by automating tasks, integrating with software, and offering features to streamline user workflows.

MAIN POINTS:
1. The fabric tool is designed to improve productivity and streamline tasks.
2. It focuses on automation and simplification of repetitive tasks.
3. Offers seamless integration with other software or services.
4. Provides a suite of features aimed at enhancing process efficiency.
5. Direct consultation of the GitHub repository is recommended for full potential leverage.
6. The repository contains a comprehensive README file for detailed insights.
7. Additional documentation is available, prepared by the repository’s owner.
8. In-depth insights into the tool’s purpose and features are provided.
9. Setup instructions and practical use cases are included in the documentation.
10. Effective utilization requires exploring the GitHub repository for maximum utility.

TAKEAWAYS:
1. The fabric tool is pivotal for enhancing productivity through automation and integration.
2. Direct exploration of the GitHub repository is essential for understanding its full capabilities.
3. The README file and additional documentation are key resources for users.
4. The tool offers significant benefits in streamlining and improving efficiency of tasks.
5. Understanding and applying the tool’s features requires consulting the provided documentation.

Extract Transcript from YouTube Videos

Fabric can also extract transcripts from YouTube videos and pass them through any prompt(s). Let’s extract some ideas from one of my recent videos:

yt --transcript https://www.youtube.com/watch\?v\=EK_ivK8HlNo | create_micro_summary

ONE SENTENCE SUMMARY:
- Kubernetes development challenges are mitigated by MirrorD for faster feedback loops and seamless remote environment testing.

MAIN POINTS:
- Kubernetes excels in production but complicates development and testing.
- Fast feedback loops are crucial for efficient Kubernetes development.
- MirrorD enables local processes to run in a remote Kubernetes context.

TAKEAWAYS:
- Development on Kubernetes requires innovative tools for efficiency.
- MirrorD shortens the feedback loop significantly.
- Local and remote environment synchronization is key for developers.

The summary is pretty spot on!!

Shell Automation

There are so many commands with flags and options that it’s impossible to remember them all… looking at you, ffmpeg. To make this easier for myself, I have created a tool that helps me with various commands.

  • Piotr1215/aicmdtools: GoAI is a Go library and command line for interacting with OpenAI…

🎥 Check out this asciinema recording to see what it can do: https://asciinema.org/a/TFJABWbNochPWuuBFhiHDHBLF
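
The idea is simple: describe the command you need in plain English and get a ready-to-run invocation back. A hypothetical session (the command name and flags are illustrative; check the repository’s README):

# Ask for a command in natural language
aicmd "extract audio from video.mp4 and save it as mp3"
# => ffmpeg -i video.mp4 -vn -acodec libmp3lame audio.mp3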

Terminal Chat

Terminal chat apps are similar to ollama but let you talk to various LLM providers such as OpenAI, Claude, and more. Here are two I’ve experimented with and can recommend.

However, I usually use AI chats through neovim, more on it later in a bonus section!

  • sigoden/aichat: All-in-one AI CLI tool that integrates 20+ AI platforms, including OpenAI…

  • ekkinox/yai: Your AI powered terminal assistant.
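
For a quick taste, aichat supports both one-shot questions and an interactive REPL (a minimal sketch; providers and models are set in its config file):

# One-shot question
aichat "How do I list all listening TCP ports on Linux?"

# Start an interactive session
aichat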

Aider — Programming Support

If you are a programmer, aider is absolutely amazing. While it’s not perfect, it’s the closest I know to GitHub Copilot (with chat function) and it’s completely free!
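
A typical session starts aider inside a git repository and points it at the files to change (a minimal sketch; the file name and request are illustrative):

cd my-project
aider app.py
# In the chat that opens: "add a /health endpoint that returns 200 OK"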

Let’s see if fabric can help us capture the main strengths of aider. Let’s paste the content of aider’s repo README.md and let fabric summarize it.

xclip -o -sel clipboard | summarize

ONE SENTENCE SUMMARY:
Aider is an AI-powered pair programming tool that enhances coding productivity by integrating with local git repositories and various LLMs.

MAIN POINTS:
1. Aider facilitates pair programming with AI, specifically designed for local git repositories.
2. Compatible with leading LLMs like GPT-4o and Claude 3 Opus, offering versatile AI assistance.
3. Installation and setup are straightforward, requiring only a few commands to start.
4. Supports editing multiple files simultaneously for comprehensive code modifications.
5. Automatically commits changes to git with appropriate commit messages, streamlining version control.
6. Compatible with a wide range of programming languages, including Python, JavaScript, and more.
7. Utilizes a complete map of your git repository for better context understanding in larger projects.
8. Allows for voice commands and adding images or URLs in chat for enhanced interaction.
9. Achieved the top score on SWE Bench, indicating superior performance in solving real GitHub issues.
10. Offers extensive documentation, tutorials, and a supportive Discord community for users.

TAKEAWAYS:
1. Aider significantly boosts coding efficiency by automating tasks and providing intelligent suggestions.
2. Its compatibility with major LLMs ensures a flexible and powerful coding assistant experience.
3. The tool’s ability to understand and navigate large codebases makes it suitable for complex projects.
4. Community feedback highlights Aider’s impact on productivity and its user-friendly design.
5. Aider’s recognition in benchmarks underscores its effectiveness in addressing real-world coding challenges.

Bonus for NeoVim Nerds

If you happen to use the best editor known to mankind… neovim btw, you are in for a treat. The Neovim plugin ecosystem, thanks to adopting Lua as the plugin programming language, is very strong and versatile. Here are two plugins that I use almost daily when coding, creating documentation, or chatting with LLMs.

Gen.nvim

Lets us use local ollama models as a Neovim copilot.

Gp.nvim

Provides a rich chat experience and copilot-like functionality in the editor.

Ok, we went through a lot of tools, let’s summarize:

| Tool | Category | Description | URL |
| --- | --- | --- | --- |
| Ollama | Local AI Models | Run AI models locally and interact with them through the terminal. | Ollama GitHub |
| Fabric | AI Framework | Modular framework for solving problems using AI prompts. | Fabric GitHub |
| Shell Automation | Command Line Automation | Tool to simplify various commands and automate tasks. | Shell Automation GitHub |
| AIChat | Terminal Chat | Integrates multiple AI platforms for chat via the terminal. | AIChat GitHub |
| Yai | Terminal Assistant | AI-powered assistant for terminal commands and tasks. | Yai GitHub |
| Aider | AI Pair Programming | AI tool for pair programming with local git integration. | Aider GitHub |
| gen.nvim | Neovim Plugin | Generate text using LLMs with customizable prompts in Neovim. | gen.nvim GitHub |
| gp.nvim | Neovim Plugin | ChatGPT sessions and copilot functionality in Neovim. | gp.nvim GitHub |

Closing Thoughts

Most of the tools mentioned can work with proprietary models such as OpenAI’s or Claude, but also with open-source models like the ones served by ollama.

Integrating AI with command line tools not only boosts productivity but also transforms how we interact with technology. The tools mentioned here, from Ollama to Fabric, offer powerful capabilities right at your fingertips, enhancing automation, insight, and efficiency.

Ready to supercharge your terminal? Let me know which tool is your favourite. Did I miss any that you use and find valuable?

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my Website

📺 Subscribe to my YouTube Channel

Build Your Own Kubernetes Co-Pilot: Harness AI for Reliable Cluster Management

· 8 min read

kubernetes-copilot

Have you ever felt frustrated by nonsensical AI outputs and hallucinations? If yes, this blog is going to be helpful for new or seasoned Kubernetes users who want to explore how AI can help manage Kubernetes resources more reliably.

What are AI hallucinations?

In a nutshell, AI hallucination occurs when a large language model (LLM) generates misleading or incorrect information in response to a prompt. This can happen due to various factors such as insufficient or flawed training data, overfitting, unrecognized idioms or slang, and adversarial inputs. These hallucinations manifest when the AI, aiming to produce coherent responses, makes errors that range from subtle factual inaccuracies to nonsensical or surreal outputs, similar to how humans might perceive patterns in random visuals.

In the context of Kubernetes, these aren't just minor nuisances; they can lead to significant operational blunders. In this blog, we explore how to enhance the reliability of AI responses, mitigate the risks of hallucinations, and manage Kubernetes resources using AI!

How can AI be helpful in managing Kubernetes resources?

Before we start exploring the technical setup, let's answer the question: how can AI be helpful in managing Kubernetes resources? Imagine an AI assistant that can help you create, fix, and validate Kubernetes resources in a conversational manner. You might ask it to create a new deployment, fix a broken service, or validate a YAML file. If you are learning Kubernetes, this assistant can be a great learning tool to help you explore the cluster and clarify Kubernetes concepts.

Kubernetes helps manage cloud applications, but its YAML configurations can be tricky. When working with AI tooling, we've all faced those moments when AI tools, designed to ease this burden, instead contribute to it by generating nonsensical outputs; a phenomenon we refer to as "AI hallucinations".

Problem Statement

Let's state the issues we have with AI in the context of Kubernetes:

  • 🤖 AI faces issues with consistency and reliability when dealing with large YAML files.
  • 🧠 AIs can have "hallucinations," generating illogical outputs that become more problematic as the input size increases.
  • 📈 This inconsistency makes working with AI models non-deterministic and error-prone.

Goals

Our main goal is to increase reliability and consistency in AI responses. We use two main techniques to achieve this:

  • 🛠️ Function calling to bind API routes as tools available for the AI Assistant to communicate with a Kubernetes cluster
  • 🔍 Internet search APIs to provide accurate and relevant information about Kubernetes

Implementation Plan

The following steps outline the plan to achieve our goals:

  • 💼 Use Flowise to implement the logic flow so that the AI Assistant can help with managing and troubleshooting a Kubernetes cluster on our behalf.
  • 🛠️ Create a simple Flask API that exposes functions for the AI Assistant to enable it to interact with the Kubernetes cluster.
  • 💻 Use function calling to bind the API routes as tools available for the AI Assistant which enables communication with a local Kind cluster with Kubernetes running.
  • 💬 Test the AI Assistant with various scenarios to ensure it can handle different Kubernetes configurations and provide accurate responses.

Assistant in Action

To follow along, you can clone the repository from GitHub, install prerequisites and follow the instructions.
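
If you want a disposable playground, a local Kind cluster works well (a sketch; the cluster name is illustrative):

# Create a local cluster and verify kubectl can reach it
kind create cluster --name ai-assistant
kubectl cluster-info --context kind-ai-assistant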

Step 1: Setup the AI Assistant

In Flowise, create a new assistant. Notice that I'm using OpenAI's latest model, but for testing purposes you can select a less powerful model or any open-source model. The quality of responses will be affected, but it will still work.

Here are the instructions that the assistant will follow:

You are a helpful Kubernetes Assistant specializing in helping build, fixing and validating various kubernetes resources yaml files.
Start by greeting the user and introducing yourself as a helpful and friendly Kubernetes Assistant.

If the user asks for help with creating or validating yaml files, do the following:

- if the files are correct proceed with the next steps, if not propose fixes and correct the file yourself
- if user asks for information about the kubernetes cluster use the get_config function and provide relevant information
- ask the user to submit one yaml file at a time or create one yaml file yourself if the user asks you to create one
- send the YAML content and only the YAML content to the create_yaml function
- immediately after use the tool cleanup_events to clean any old events
- ask the user if they would like to see the validation results and inform them that it takes some time for the resources to be installed on the cluster
- if the user responds yes, use the tool check_events to see if everything is correct
- if the validation passes, ask the user if they want to submit another YAML file
- if the validation fails, propose a new corrected YAML to the user and ask if the user would like to submit it for validation
- repeat the whole process with new YAML files

Your secondary function is to assist the user in finding information related to Kubernetes. Example categories:

- for questions about kubernetes concepts such as pods, deployments, secrets, etc, use brave search API on https://kubernetes.io/docs/concepts/
- for generic Kubernetes questions use brave search API on kubernetes docs: https://kubernetes.io/docs/home/
- for questions regarding kubernetes releases and features use brave search API on the kubernetes releases documentation: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG. If you are asked for details about a specific release, select one of the releases, otherwise use the latest stable release.

Step 2: Flask API

The server.py file defines API routes that wrap the kubectl commands.

ℹ️ The Flask server is a naive implementation for demonstration purposes only. In a real-life scenario, we wouldn't call kubectl directly from the server but rather use a client library like kubernetes or client-go.
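
Before wiring the server into the assistant, you can sanity-check the routes directly (a sketch; the route names are assumed to mirror the function names introduced below, and the port assumes Flask's default):

# Read the current cluster configuration
curl http://localhost:5000/get_config

# Submit a YAML manifest, then check cluster events
curl -X POST http://localhost:5000/create_yaml --data-binary @deployment.yaml
curl http://localhost:5000/check_events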

Step 3: Expose local URL to the internet

To enable the OpenAI assistant to use the functions, we must expose the locally running Flask server to the internet. A nice tool for this is ngrok. You can download it from here and follow the instructions to expose the local URL.
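
With ngrok installed, a single command creates the public tunnel (assuming the Flask server listens on the default port 5000):

ngrok http 5000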

Step 4: Function calling

Now we can create functions for each API route. Those are:

  • get_config - returns the current Kubernetes configuration
  • create_yaml - creates a new Kubernetes resource from a YAML file
  • check_events - checks the status of the Kubernetes resources

For each of those routes we create a function that calls the API and returns the response. Here is how the function looks in Flowise:

function-in-flowise

Step 5: Use brave search API

The secondary function of our assistant is to help the user find information related to Kubernetes. We can use the Brave search API to achieve this.

Step 6: Testing

Now that we have the whole flow available, let's test the assistant.

flow

Let's start by asking what is the cluster we are running on:

what-cluster

Here the assistant used the get_config function to get the current Kubernetes configuration and correctly identified the cluster.

Now let's ask the assistant to create a new nginx-based ingress:

nginx-deployment

Notice how the assistant correctly selected the create_yaml function to create the ingress and then used the check_events function after asking if we would like to see the output. It's also interesting that it has found a different event that was not related to the nginx ingress and classified it as unrelated to our request.

Now, let's submit a broken deployment and see if the assistant can fix it:

broken-nginx

In this case we have submitted a broken deployment and the assistant has correctly identified the issue and even proposed a fix.

Lastly, let's check if the assistant can help us understand some Kubernetes concepts:

concepts-search

Here the assistant has used the brave search API to find information about the Kubernetes resource model and provided a link to the source.

Closing Thoughts

We have successfully demonstrated that using function calling and carefully crafted prompt instructions, we can increase the reliability and usefulness of AI assistants in managing Kubernetes resources. This approach can be further extended to other use cases and AI models.

Here are a few use cases where this approach can be useful:

  • 🤖 improved learning experience
  • 📈 help increase Kubernetes adoption
  • 🌐 virtual Kubernetes assistant


Next Steps

Give it a try and build your own AI-powered Kubernetes co-pilot today:

  • Clone the Repository: Visit GitHub to get the necessary files.
  • Set Up Your Assistant: Follow the instructions to set up the prerequisites and start building your Kubernetes Co-Pilot.
  • Engage with the Community: Share your experiences and solutions; the setup is very much a proof of concept and can be improved in many ways.

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium

Development with AI: the GAG Stack

· 5 min read

gag-stack

Introduction

Are developers going to be replaced by AI? What is the future of software development? Those questions are asked again and again as the software development landscape is evolving rapidly.

Viewpoints are polarized and generate heated debates and discussions. There is enough debate to fill a book, but in this article, I would like to explore practical applications of AI in software development. We are operating under the assumption that AI is here to stay and evolve, but at the end of the day, it is a tool that can be used to enhance our capabilities.

The GAG Stack

The GAG Stack is a bit of a tongue-in-cheek term that I came up with to describe a workflow that I have been experimenting with. It stands for GPT Pilot, Aider, and GitHub Copilot: three AI tools that map well onto the stages of software development.

Communication and collaboration between people are at the heart of software development. For as long as this stays the case, AI tools will be used to help us model this process. This is how it could look using the GAG Stack:

gag-stack-flow

We will still have to gather requirements, design, refine, test, retest, fix bugs, debug, and deploy. The paradigm doesn't change much; the tools, however, do. They have evolved to help us with the process.

Example Workflow

Let's take a look at how the GAG Stack could be used in practice. We will use a simple example of building a to-do list app.

Setup the environment

I'm using Neovim and Linux for my development workflow; yours might be different. Refer to each tool's installation instructions to set them up on your machine.

For me, gpt-pilot runs via docker-compose, aider is installed via pip, and GitHub Copilot runs as a Neovim plugin.
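
The rough installation steps look like this (a sketch; check each project's README for current instructions):

# aider is distributed as a Python package
pip install aider-chat

# gpt-pilot runs from its cloned repository via docker-compose
git clone https://github.com/Pythagora-io/gpt-pilot && cd gpt-pilot
docker compose up -d

# GitHub Copilot is installed as a Neovim plugin via your plugin manager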

Design and Refinement

We start by gathering requirements for our to-do list app. We want to have a simple app that allows us to add, remove and edit tasks. Let's start by providing this concept to GPT Pilot.

The Docker image has only Node installed, so we are going to use it. It should be simple to add new tools to the image or use a local setup. Here is the initial prompt for a simple todo app:

gpt-pilot-prompt

The main value of this tool is the ability to refine and iterate on the design. GPT Pilot will ask for specifications and generate an initial scaffolding:

architecture-questions

As a result of this back and forth, GPT Pilot generated the app in a local folder (mounted via a volume in docker-compose):

~/gpt-pilot-workspace/minimal-todo-app🔒 [ v16.15.1]
➜ tree -L 3 -I node_modules
.
├── app.js
├── package.json
└── package-lock.json

0 directories, 3 files

After a few iterations, we have a simple app running:

app-running

with the following code:

// Require Express and Body-parser modules
const express = require("express");
const bodyParser = require("body-parser");

// Initialize a new Express application
const app = express();

// Configure the application to use Body-parser's JSON and urlencoded middleware
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

// Start the server
const port = process.env.PORT || 3002;

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

Feature development

Now we can use Aider to help us with the development of the app. Aider is a development accelerator that can help with code modifications and feature development.

Aider interface:

➜ aider
Aider v0.27.0
Model: gpt-4-1106-preview using udiff edit format
Git repo: .git with 4 files
Repo-map: using 1024 tokens
Use /help to see in-chat commands, run with --help to see cmd line args

Now we can generate a feature for adding a new TODO item:

add-todo

We can keep iterating by adding new features and testing. For example:

iteration-features
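
After each iteration, a quick smoke test from the terminal verifies the new endpoints (a sketch; the /todos route shape is assumed from the prompts above, and the port comes from the generated code):

# Add a TODO item, then list all items
curl -X POST http://localhost:3002/todos -H "Content-Type: application/json" -d '{"task": "write blog post"}'
curl http://localhost:3002/todos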

Code Iteration

Finally, we can use GitHub Copilot to help us with the code iteration. GitHub Copilot is an autocompletion aid that can provide suggestions.

For example, here I want to log the GET request to the console, so I start typing:

autocompletion

And get autocomplete suggestions:

Conclusion

Obviously, the GAG stack is not the only set of tools, and the ones I've chosen might or might not have something to do with the resulting acronym. There is Devin, which claims to be the first AI software engineer, and its open-source equivalent, Devika. There is Codeium, a free Copilot alternative. There are many other tools in this category, and the landscape is evolving rapidly.

Keen readers might have noticed that the underlying models used are OpenAI's GPT-3 and GPT-4. However, this is not a requirement. The tools can work with both local and remote models, paid and free. The choice of the model is up to the user.

So, are developers going to be replaced by AI? Are doomers or accelerationists right?

dommers-optimists

I think the answer is more nuanced. AI tools are here to stay, and they will be used to enhance our capabilities. The GAG stack is just one example of how AI can be utilized to assist us with software development.

As long as software development relies on human communication and creative collaboration, we will be talking about augmenting software development with AI rather than replacing it.