Run Local AI Agents Like a Pro – Part 1: Setting Up the Environment



🧭 About This Series: Run Local AI Agents Like a Pro
In this 3-part blog series, we walk through everything you need to create a powerful local AI agent environment — no cloud APIs required. You'll go from setting up your development environment to running intelligent agents on your laptop.
Here’s what’s coming:
🔧 Part 1 – Setting Up the Environment (this post)
We set up WSL2, Ubuntu 24.04, Python, Docker, and a modern shell experience to create a stable, repeatable base for AI tools.
🤖 Part 2 – Running LLMs Locally with Ollama
Install and serve open-source models like LLaMA 3, Mistral, and Phi-3 on your machine with Ollama. This tool simplifies model serving to a single CLI command and supports both REST APIs and chat interfaces.
🧠 Part 3 – Building Local AI Agents with Goose
Connect your local LLMs with a powerful agent runtime using Goose. You’ll create agents that can summarize documents, generate code, and even automate tasks — all locally.
This series helps you unlock:
- End-to-end privacy
- Full control over your tools
- Offline capability
- Extensible agent design
Let’s get started.
Open-source AI tooling has never been more powerful — or more accessible. But before you run LLMs like LLaMA 3, Mistral, or Phi-3 on your laptop, you need a dependable local environment.
That’s what this post is about: laying the foundation.
We’ll walk through setting up a clean, fast, and reproducible local AI environment using WSL2, Ubuntu 24.04, and a few smart tools.
🧱 Why WSL2 + Ubuntu?
If you're on Windows, WSL2 (Windows Subsystem for Linux) is a must-have:
- Run native Linux tooling
- Install dev environments in seconds
- Seamlessly integrate with VS Code and Docker
- GPU acceleration support
Ubuntu 24.04 is the latest LTS release — stable, fast, and widely supported.
If you're on macOS or Linux, you can skip WSL and use the same setup directly in your terminal.
🛠️ Step-by-Step: Install WSL2 + Ubuntu 24.04
✅ Step 1: Enable WSL and Virtualization
Run this in PowerShell (as Administrator):
wsl --install -d Ubuntu-24.04
If Ubuntu-24.04 isn’t available yet, list the online distributions and install the default Ubuntu image instead:
wsl --list --online
wsl --install -d Ubuntu
After the install, reboot if prompted. Then launch Ubuntu from the Start menu.
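Before moving on, it’s worth confirming that the distro registered and is actually running on WSL2. Both commands below are standard wsl.exe flags; run them in PowerShell:

```powershell
# List installed distros and the WSL version each one uses
wsl --list --verbose

# If Ubuntu shows up as VERSION 1, convert it to WSL2
wsl --set-version Ubuntu-24.04 2
```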
⚡ Automate Setup with a PowerShell Script
Manually installing Python, Docker, and Git is tedious. Instead, use the wsl2-bootstraper script:
irm https://raw.githubusercontent.com/ramkrishs/wsl2-bootstraper/main/setup-wsl.ps1 | iex
This script:
- Creates a new UNIX user
- Sets up Git (name/email)
- Installs Python via pyenv
- Installs Docker + Docker CLI
- Sets up Oh My Zsh
- Enables systemd for background services
You can choose which components to install interactively.
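If you’d rather not pipe a remote script straight into iex, you can save it locally, read it, and then run it. This is the same URL as above, just downloaded first:

```powershell
# Download the bootstrap script so you can review it before executing
irm https://raw.githubusercontent.com/ramkrishs/wsl2-bootstraper/main/setup-wsl.ps1 -OutFile setup-wsl.ps1

# Inspect it, then run the local copy
Get-Content .\setup-wsl.ps1
.\setup-wsl.ps1
```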
🐍 Python: Why Use Pyenv?
Instead of relying on Ubuntu’s default Python, pyenv lets you:
- Install multiple Python versions
- Easily switch between them
- Avoid version conflicts
Once provisioned, confirm Python:
python --version
pyenv versions
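For example, here’s how you might pin a specific interpreter for your agent projects. The version number is only an example; pick whatever release you need:

```bash
# Install a specific Python release and make it the default for your user
pyenv install 3.12.4
pyenv global 3.12.4

# Per-project override: writes a .python-version file in the current directory
pyenv local 3.12.4
python --version
```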
🐳 Docker + WSL: Local Containers Made Easy
Docker runs inside WSL2 using a socket binding. This allows you to:
- Use the docker CLI directly from Ubuntu
- Avoid switching between Windows and Linux Docker setups
- Pull and run LLM containers easily
Check your install:
docker run hello-world
If successful, you're ready to run containerized apps like Ollama or LangChain servers.
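As a small preview of Part 2, once Docker works you can already pull an LLM-serving container. The official ollama/ollama image exposes Ollama’s API on port 11434, and a named volume keeps downloaded models across restarts (Part 2 covers Ollama itself in more depth):

```bash
# Run Ollama in a container, persisting models in a named volume
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull and chat with a model inside the container
docker exec -it ollama ollama run llama3
```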
🧠 Oh My Zsh: Shell Productivity Boost
If you’re new to Zsh:
- It’s like Bash, but better
- With Oh My Zsh themes and plugins, you get autocompletion, Git hints, and syntax highlighting
Try:
zsh
To switch permanently (set Zsh as your default shell):
chsh -s $(which zsh)
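A minimal plugin setup might look like this, assuming Oh My Zsh is already installed and you clone the two community plugins into its custom plugins directory first:

```bash
# Install two popular community plugins into Oh My Zsh's custom plugin folder
git clone https://github.com/zsh-users/zsh-autosuggestions \
  ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting \
  ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

# Then enable them in ~/.zshrc and reload the shell:
#   plugins=(git zsh-autosuggestions zsh-syntax-highlighting)
source ~/.zshrc
```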
🚀 Optional: Enable GPU for AI Acceleration
You can accelerate inference using your local NVIDIA GPU.
Steps:
- Update WSL:
wsl --update
- Shut it down:
wsl --shutdown
- Install the NVIDIA driver with WSL2/CUDA support on the Windows side
- Test in Ubuntu:
nvidia-smi
If your GPU shows up, you're ready for accelerated local inference.
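If you also want containers to see the GPU, you may need the NVIDIA Container Toolkit inside Ubuntu, depending on how Docker was installed. A quick sanity check looks like this; the CUDA image tag is only an example and may need updating to a current one from Docker Hub:

```bash
# Run nvidia-smi inside a CUDA base image (requires GPU-enabled Docker)
# Note: the image tag is an example; check Docker Hub for current tags
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```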
🧪 Verify Your Setup
Here’s a checklist to validate everything:
# Run Zsh
zsh
# Check Python and pyenv
python --version
pyenv versions
# Docker test
docker run hello-world
# Git config
git config --global user.name
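Or wrap the checks in a small script so you can re-run them any time. This quick-check.sh is a hypothetical helper, not part of the bootstrap repo:

```bash
#!/usr/bin/env bash
# quick-check.sh: one-shot sanity check of the environment from this post
set -u

check() {
  local label=$1; shift
  printf '%-24s' "$label"
  if "$@" >/dev/null 2>&1; then echo "OK"; else echo "MISSING"; fi
}

check "zsh"            command -v zsh
check "python"         python --version
check "pyenv"          pyenv --version
check "docker daemon"  docker info
check "git user.name"  git config --global user.name
check "nvidia-smi"     nvidia-smi   # optional, GPU setups only
```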
Everything working? You’re now ready to run LLMs locally.
What’s Next?
In Part 2: Running LLMs Locally with Ollama, we'll install Ollama, a local runtime to serve LLaMA 3, Mistral, and more in seconds.
You'll go from "bare metal" to chatting with an LLM locally in under 10 minutes.
Stay tuned.