LLM Controller
Beta 4
About LLM Controller
LLM Controller is a local-first dashboard for running and managing Large Language Models on your own hardware.
Launch, switch, and monitor models with zero cloud dependencies, full privacy, and real-time insight into performance and GPU usage.
Built on llama-server, it detects NVIDIA GPUs and supports advanced multi-GPU and dual-model setups without manual configuration.
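The document does not describe how GPU detection works internally; a minimal sketch, assuming the dashboard wraps `nvidia-smi`'s CSV query mode (a real nvidia-smi invocation, though the function names here are illustrative, not LLM Controller's API):

```python
import csv
import io
import subprocess

def parse_smi_csv(text):
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`
    output into a list of (gpu_name, total_memory_mib) tuples."""
    gpus = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) < 2:
            continue
        name = row[0].strip()
        # memory.total is reported like "24564 MiB"
        total_mib = int(row[1].strip().split()[0])
        gpus.append((name, total_mib))
    return gpus

def detect_gpus():
    """Shell out to nvidia-smi; returns [] when no NVIDIA driver is present."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return parse_smi_csv(out)
```

A multi-GPU box would yield one tuple per device, which is the information a controller needs to pick a card (or split a model across several) without manual configuration.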
Key Features
- Model Management: scan, launch, stop, and switch models quickly.
- Live Analytics: logs, nvidia-smi stats, throughput, and latency metrics.
- Modern Chat UI: streaming output, Markdown, code, math, and titles.
- Local & Private: fully self-hosted. No cloud, no data sharing.
- Actively Developed: built to evolve with new models and features.
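The throughput and latency figures above are standard streaming-generation metrics; a sketch of how they can be derived from per-token timestamps (the class and method names are hypothetical, not LLM Controller's internals):

```python
class GenerationStats:
    """Accumulates per-token timestamps during a streamed generation and
    derives two common dashboard metrics: throughput (tokens/second)
    and time-to-first-token latency."""

    def __init__(self, start_time):
        self.start = start_time      # when the request was sent
        self.token_times = []        # timestamp of each streamed token

    def on_token(self, now):
        self.token_times.append(now)

    def throughput(self):
        """Tokens per second over the whole generation so far."""
        if not self.token_times:
            return 0.0
        elapsed = self.token_times[-1] - self.start
        return len(self.token_times) / elapsed if elapsed > 0 else 0.0

    def first_token_latency(self):
        """Seconds from request start to the first streamed token."""
        return self.token_times[0] - self.start if self.token_times else None
```

In practice `on_token` would be called from the streaming callback with `time.monotonic()`, and the dashboard would poll the two read methods to update its charts.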
LLM Controller CE (Community Edition)
Local AI, done right.