PyLlamaUI is a lightweight, privacy-focused desktop GUI for running Large Language Models locally on your machine. No internet required, no data collection.
Engineered for performance, privacy, and a great developer experience.
Your data, your models, your machine. No internet connection is needed after setup.
Linux-first development with full support for Windows. Use it on your favorite OS.
Built with Python 3.10+ and communicates with the local model runtime over a simple REST API, keeping resource usage low and performance predictable.
Natively supports model inference via Ollama, including models like TinyLLaMA, Gemma, and DeepSeek-Coder.
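Since inference goes through Ollama's local REST API, a conversation never leaves your machine. A minimal sketch of querying that API from Python, assuming a default Ollama install on localhost:11434 (the helper names are illustrative, not part of PyLlamaUI):

```python
import json
import urllib.request

# Default local Ollama endpoint; no internet connection is involved.
# "tinyllama" is illustrative -- use any model you have pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming request body for Ollama's generate API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running, this returns the model's completion:
# print(generate("tinyllama", "Explain local inference in one sentence."))
```

Setting "stream" to False returns the whole completion in one JSON object; streaming token-by-token is also supported by the same endpoint.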
Integrates with VS Code via the Webview API for a native editor experience.
Optimized for performance with minimal resource overhead, even on older hardware.
Install PyLlamaUI on your favorite platform.
On Windows, download the .exe installer for a simple setup experience.
Choose your platform and run the installer. No complex configuration required.
Ensure Ollama is running and pull your desired model (e.g., ollama pull llama3).
Launch PyLlamaUI, select your model, and begin your first offline conversation.
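The steps above (pull a model, select it, start chatting) map onto Ollama's chat endpoint. A minimal sketch of one conversational turn, assuming a default Ollama install serving llama3 on localhost:11434; the helper names are ours, not PyLlamaUI's:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

def build_chat_request(model: str, history: list) -> dict:
    """Build a non-streaming chat request from the running message history."""
    return {"model": model, "messages": history, "stream": False}

def chat_once(model: str, history: list, user_text: str) -> tuple:
    """Send one user turn to the local Ollama server.

    Returns (assistant_reply, updated_history) so repeated calls keep
    the full conversation context.
    """
    history = history + [{"role": "user", "content": user_text}]
    body = json.dumps(build_chat_request(model, history)).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]["content"]
    return reply, history + [{"role": "assistant", "content": reply}]

# With Ollama running locally:
# reply, history = chat_once("llama3", [], "Hello! What can you do offline?")
```

Because the history list is resent with every request, the model sees the whole conversation each turn, which is how multi-turn chat works over this stateless API.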
PyLlamaUI is 100% free and open-source. We welcome contributions, bug reports, and feature requests. Help us build the best offline AI tool.