Run LLMs Offline, Your Way.

PyLlamaUI is a lightweight, privacy-focused desktop GUI for running Large Language Models locally on your machine. No internet connection required after setup, and no data collection.

[Screenshot: the PyLlamaUI interface]

A Modern, Private, and Cross-Platform Experience

Engineered for performance, privacy, and a great developer experience.

Fully Offline & Private

Your data, your models, your machine. No internet connection is needed after setup.

Truly Cross-Platform

Linux-first development with full support for Windows. Use it on your favorite OS.

Modern Python Backend

Built with Python 3.10+ around a simple REST API, keeping the backend robust and easy to maintain.
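
PyLlamaUI's own endpoint names and payloads aren't documented in this overview, but the sketch below shows the kind of small REST surface such a backend could expose. It uses FastAPI; the /chat route, the ChatRequest shape, and the stubbed reply are all illustrative assumptions, not PyLlamaUI's actual API.

```python
# Hypothetical sketch of a minimal local REST backend (FastAPI).
# The /chat route and ChatRequest model are illustrative only; they
# are not PyLlamaUI's documented API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="local-llm-backend")

class ChatRequest(BaseModel):
    model: str
    prompt: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # A real backend would forward req.prompt to the local inference
    # engine (e.g., Ollama); a stub keeps this sketch self-contained.
    return {"model": req.model, "reply": f"(stub reply to: {req.prompt!r})"}

# Run locally with: uvicorn app:app --port 8000
```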

Ollama Powered

Runs model inference natively through Ollama, supporting models such as TinyLlama, Gemma, and DeepSeek-Coder.
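
Under the hood, talking to Ollama is a plain HTTP call to its documented REST API on localhost. The sketch below shows a one-shot completion request; the tinyllama model name is just an example, and this is a minimal illustration rather than PyLlamaUI's internal client.

```python
# Minimal sketch: one-shot completion via Ollama's local REST API.
# Assumes Ollama is running on its default port (11434) and that the
# example model ("tinyllama") has already been pulled.
import requests

def generate(prompt: str, model: str = "tinyllama") -> str:
    """Return a single, non-streamed completion from Ollama."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("Explain what an LLM is in one sentence."))
```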

Seamless Integration

Integrates with VS Code via the Webview API for a native editor experience.

Lightweight & Fast

Optimized for performance with minimal resource overhead, even on older hardware.

Get Started in Seconds

Install PyLlamaUI on your favorite platform.

Windows Installer

Download the .exe installer for a simple setup experience.

Download for Windows (.exe)

Simple Three-Step Setup

1

Install PyLlamaUI

Choose your platform and run the installer. No complex configuration required.

2

Connect to Ollama

Ensure Ollama is running and pull your desired model (e.g., ollama pull llama3). A quick connectivity check is sketched just after these steps.

3

Start Chatting

Launch PyLlamaUI, select your model, and begin your first offline conversation.
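
If step 2 gives you trouble, the sketch below is one way to confirm the Ollama server is reachable and see which models are already pulled. It assumes Ollama's default port and uses its documented /api/tags endpoint; it is a diagnostic aid, not part of PyLlamaUI.

```python
# Pre-flight check for step 2: is the local Ollama server reachable,
# and which models are already pulled? Assumes Ollama's default port.
import requests

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    resp = requests.get(f"{base_url}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

try:
    models = list_local_models()
    print("Ollama is up. Local models:", ", ".join(models) or "(none pulled yet)")
except requests.ConnectionError:
    print("Ollama is not reachable; start it with `ollama serve` and retry.")
```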

Built for the Community

PyLlamaUI is 100% free and open-source. We welcome contributions, bug reports, and feature requests. Help us build the best offline AI tool.