Glossary of Core Terms
| Term | Simple Definition | Why It Matters |
| --- | --- | --- |
| LLM (Large Language Model) | The Brain. The core model that understands, generates, and responds to human language. | Determines the quality and speed of your AI’s conversation. |
| Quantization (GGUF / Q4_K_M) | The Compression. Shrinks the size and memory needed for an LLM so it can run efficiently on consumer RAM/VRAM. | Allows high-quality models to run quickly on consumer hardware. |
| Ollama | The Engine. The easiest tool for downloading, running, and managing local LLMs. | Serves models over a local API that the various frontends connect to. |
| Docker | The Package. A tool that bundles up software (like Open WebUI) and its dependencies into a single, neat package, simplifying installation. | Prevents software conflicts and simplifies installation. |
| WSL (Windows Subsystem for Linux) | The Bridge. A compatibility layer for running a Linux OS within Windows. | Essential for Windows users who want to use the high-performance CUDA images with an NVIDIA GPU. |
| RAG (Retrieval-Augmented Generation) | The Memory. The technique that allows the AI to search your private documents or notes to inform its answers, giving it long-term context. | Key component for Project C’s ability to remember and access files. |
| VMC (Virtual Motion Capture) Protocol | The Lip Sync Signal. An OSC-based protocol that streams pose and facial blendshape data, connecting the voice (TTS) to the visual avatar (VRM) so the avatar knows how to move its mouth. | Essential for bringing your avatar to life in Project C. |
| Tailscale (Tailnet) | The Secure Link. A security tool that creates a private, encrypted network connection between your home computer and your remote devices. | Allows you to securely access your local AI from anywhere without risky port forwarding on your router. |
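To make the Quantization row concrete, here is a toy Python sketch of the core idea: map floating-point weights onto 16 integer levels (4 bits each) plus one scale/offset pair, trading a little precision for a much smaller footprint. This is an illustration only; GGUF’s Q4_K_M uses a more elaborate block-wise scheme, and every name below is invented for the example.

```python
import random

def quantize_4bit(weights):
    # Map each float onto one of 16 levels (0..15), storing only
    # the integer codes plus a scale and offset for reconstruction.
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # avoid divide-by-zero for constant input
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize_4bit(codes, scale, lo):
    # Reconstruct approximate float weights from the 4-bit codes.
    return [c * scale + lo for c in codes]

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1024)]
codes, scale, lo = quantize_4bit(weights)
restored = dequantize_4bit(codes, scale, lo)

# 4 bits per weight instead of 32: roughly 8x smaller,
# at the cost of a bounded rounding error per weight.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"max reconstruction error: {max_err:.4f}")
```

The payoff is the same one the table describes: a model that would not fit in consumer RAM/VRAM at full precision can run after quantization, with only a small loss in output quality.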
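The RAG row can likewise be sketched: store your notes, rank them by similarity to the question, and prepend the best match to the LLM prompt as context. Real pipelines use a neural embedding model and a vector database; the bag-of-words `embed` and `retrieve` helpers below are hypothetical stand-ins for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": a count of each word.
    # Real RAG uses a neural embedding model instead.
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=1):
    # Rank stored notes by similarity to the question; the top-k
    # become the "memory" handed to the LLM alongside the question.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

notes = [
    "The backup server lives in the garage closet.",
    "Grandma's lasagna needs two hours at 180 C.",
]
context = retrieve("Where is the backup server?", notes)[0]
prompt = f"Context: {context}\nQuestion: Where is the backup server?"
print(prompt)
```

This is the mechanism behind Project C’s long-term memory: the model never has to hold your documents in its weights, because the retrieval step fetches the relevant passage at answer time.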
