LLM Controller Community Edition is a self-hosted browser app for chatting with local GGUF models through a configured llama-server executable.

Chat first
The main experience is the chat UI. Signed-in users create chats, send prompts to the currently loaded main model, stream responses, attach supported files, and manage their own history.
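Streamed responses of this kind are typically delivered as server-sent events that the chat UI appends token by token. As a rough sketch only (assuming chunks shaped like llama-server's `/completion` stream, `data: {"content": "...", "stop": false}` — the field names are an assumption about this app's backend), a client-side parser might look like:

```python
import json

def parse_sse_chunk(line: str):
    """Parse one server-sent-event line from a streamed completion.

    Assumes llama-server-style chunks: 'data: {"content": "...", "stop": false}'.
    Returns the token text, or None for non-data lines (comments, keep-alives).
    """
    line = line.strip()
    if not line.startswith("data: "):
        return None
    payload = json.loads(line[len("data: "):])
    return payload.get("content", "")

# Accumulate tokens exactly as a chat UI would while a response streams in.
chunks = [
    'data: {"content": "Hel", "stop": false}',
    ': keep-alive',
    'data: {"content": "lo", "stop": true}',
]
text = "".join(t for t in (parse_sse_chunk(c) for c in chunks) if t)
```

A real client would read these lines from the HTTP response as they arrive and re-render the chat bubble after each chunk.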
A drawer bar keeps model status, live logs, GPU monitoring, analytics, chat import/export, benchmarks, and admin settings close to the chat so operators can monitor and manage the local runtime.
Runtime visibility
Logs, model status, GPU telemetry, analytics, and chat history tools stay close to the chat without taking over the page.
Administration
Admins manage the local model registry, model launch controls, users, password policy, auth email settings, benchmarks, and destructive chat tools.
Installation & Settings
The installer and Admin Settings drawer define the database connection, runtime defaults, attachment limits, SMTP settings, and local prerequisites.
LLM Controller launches the configured llama-server executable against local GGUF model files. Admins choose which scanned models are enabled, which one is loaded as the main chat model, and whether a separate title model process runs.
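As an illustration of what launching looks like, a controller could assemble an argv list and hand it to a subprocess. The flag names below follow llama.cpp's llama-server CLI (`-m`, `--port`, `-c`, `-ngl`); the helper name, defaults, and paths are hypothetical, not this app's actual configuration:

```python
from pathlib import Path

def build_launch_argv(server_bin: str, model: Path, port: int = 8080,
                      ctx_size: int = 4096, gpu_layers: int = 0) -> list[str]:
    """Assemble the argv a controller might use to spawn llama-server.

    Flag names follow llama.cpp's llama-server CLI (-m, --port, -c, -ngl);
    the defaults here are illustrative, not the app's actual settings.
    """
    if model.suffix != ".gguf":
        raise ValueError(f"expected a GGUF model file, got {model.name}")
    return [
        server_bin,
        "-m", str(model),          # path to the local GGUF model file
        "--port", str(port),       # HTTP port the chat UI talks to
        "-c", str(ctx_size),       # context window in tokens
        "-ngl", str(gpu_layers),   # layers to offload to the GPU
    ]

# Hypothetical paths for illustration only.
argv = build_launch_argv("/usr/local/bin/llama-server",
                         Path("models/llama-3-8b.q4_k_m.gguf"), gpu_layers=33)
# The controller would pass argv to subprocess.Popen(argv) and watch the
# process for the model-status and live-log drawers.
```

Keeping the command construction separate from process management makes it easy to show the exact launch line in an admin UI before anything is spawned.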
MySQL stores accounts, app settings, login attempt records, and benchmark metadata/results. A local SQLite database stores chat sessions, chat turns, response metadata, usage metrics, and version/variant state.
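To make the chat-side store concrete, here is a minimal sketch of what a sessions-and-turns SQLite schema could look like. All table and column names are illustrative assumptions, not the app's actual schema:

```python
import sqlite3

# Minimal sketch of a chat-side SQLite store: sessions own ordered turns.
# Table and column names are illustrative, not the app's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chat_sessions (
    id    INTEGER PRIMARY KEY,
    user  TEXT NOT NULL,
    title TEXT
);
CREATE TABLE chat_turns (
    id         INTEGER PRIMARY KEY,
    session_id INTEGER NOT NULL REFERENCES chat_sessions(id),
    role       TEXT NOT NULL CHECK (role IN ('user', 'assistant')),
    content    TEXT NOT NULL,
    tokens     INTEGER              -- per-turn usage metric
);
""")
conn.execute("INSERT INTO chat_sessions (user, title) VALUES (?, ?)",
             ("alice", "First chat"))
conn.execute("INSERT INTO chat_turns (session_id, role, content, tokens) "
             "VALUES (1, 'user', 'Hello', 2)")
turn_count = conn.execute("SELECT COUNT(*) FROM chat_turns").fetchone()[0]
```

Splitting durable account data (MySQL) from high-churn chat data (SQLite) keeps the per-install chat store portable, which fits the import/export tooling mentioned above.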