Local AI Dashboard (Beta 4)
Run LLMs on your hardware. Live logs. Live analytics. No cloud.

🖥️ Multi-GPU Optimized

Automatically detects NVIDIA GPUs, splits large models across multiple GPUs, and supports dual-model setups (a main chat model plus a lightweight helper).
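This is not the dashboard's own code, but a minimal sketch of how NVIDIA GPU detection can work under the hood, by querying the standard nvidia-smi CLI; the function name and the returned fields are illustrative assumptions.

```python
import subprocess

def detect_nvidia_gpus():
    """List NVIDIA GPUs by querying the nvidia-smi CLI (returns [] if unavailable)."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no NVIDIA driver, or nvidia-smi not on PATH
    gpus = []
    for line in out.strip().splitlines():
        # Each CSV row looks like: "0, NVIDIA GeForce RTX 4090, 24564"
        index, name, mem_mib = (field.strip() for field in line.split(",", 2))
        gpus.append({"index": int(index), "name": name, "memory_mib": int(mem_mib)})
    return gpus

if __name__ == "__main__":
    for gpu in detect_nvidia_gpus():
        print(f"GPU {gpu['index']}: {gpu['name']} ({gpu['memory_mib']} MiB)")
```

A list like this is enough to decide how to split a large model across GPUs or to reserve a smaller card for the lightweight helper model.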

👤 Modern Dashboard

Streaming chat UI with Markdown and math rendering, plus live server logs, an NVIDIA SMI (GPU) view, and performance analytics built in.

⚙️ Model Management

Scan your model library, start, stop, or switch models instantly, and tune runtime settings without editing config files.

🔒 Runs Locally

No cloud required. Run on your own hardware with full privacy and LAN-first control.

🧩 CE + Pro Roadmap

Community Edition focuses on a solid local experience. Pro will add multi-server orchestration and scale-out features.

🐧 Linux & Windows Support

Built to run on real machines: Windows and Linux deployments with broad NVIDIA hardware support.

🛣️ See the Roadmap
📓 Read the Blog

Get Notified at Launch

No spam. Just CE release updates and the important stuff.