Local model discovery, launch controls, runtime readiness handling, and live model details.
Streaming chat with Markdown, code blocks, math rendering, reasoning panels, and prompt revision tools.
File attachment support with configurable limits and structured context chunking.
Logs, GPU monitoring, active process visibility, and analytics inside the same application shell.
Administrator controls for settings, model registry governance, user management, and benchmarks.
Chat history portability with export and import support for ongoing local AI workflows.
LLM Controller
5.x RC1
About LLM Controller
LLM Controller CE is a local-first AI control platform for running, managing, and evaluating language models on your own hardware.
It combines model launch controls, a modern chat workspace, file-aware conversations, live logs, GPU monitoring, analytics, and benchmarking inside one browser-based interface.
Built for practical local use, it keeps runtime visibility, conversation workflows, and administrator controls in the same daily workspace.
Key Features
Model Management – scan, launch, stop, and govern your local model library.
Modern Chat Workflows – streaming output, Markdown, code, and math rendering, plus stop, regenerate, and prompt-editing controls.
File-Aware AI – attach supported text and code files directly to prompts.
Observability – live logs, NVIDIA GPU telemetry, process visibility, and analytics.
Benchmarking – built-in CE evaluation tools for comparing eligible models on your own hardware.
LLM Controller CE (Community Edition) – Local AI, done right.