To install OpenCode on your computer, choose one of the supported installation methods (the install script, a package manager, or an OS-specific installer), then run OpenCode once to connect a model provider and initialize your workflow. The quickest path on macOS or Linux is the official install script: run curl -fsSL https://opencode.ai/install | bash, verify the binary is on your PATH with opencode --version, and start it with opencode. If you prefer a package manager, you can install with Homebrew (brew install anomalyco/tap/opencode) or through the Node ecosystem (npm install -g opencode-ai, or the equivalent global install with bun, pnpm, or yarn). On Windows, you can install with Chocolatey (choco install opencode) or Scoop (scoop install opencode) and then run opencode from a terminal.
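For quick reference, here are the commands described above, grouped by platform (package names and taps are as listed in this section; availability can vary by package-manager version):

```bash
# macOS / Linux: official install script, then verify and launch
curl -fsSL https://opencode.ai/install | bash
opencode --version
opencode

# macOS / Linux: package managers
brew install anomalyco/tap/opencode
npm install -g opencode-ai        # or the equivalent bun/pnpm/yarn global install

# Windows
choco install opencode
scoop install opencode
```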
After the binary is installed, the “real” setup step is provider configuration. When you start OpenCode, you typically run /connect inside the TUI and pick the provider you want to use. For many providers this means pasting an API key; for others it is an OAuth-style login. OpenCode stores credentials locally so you don’t have to re-enter keys every time, and you can then select models via /models and set defaults in your config (for example, opencode.json or opencode.jsonc). The config system is designed for practical developer workflows: you can keep a global default model for everything but override it in a specific repo, which is useful when one project needs a larger context window or different latency/cost behavior. If you’re working in an organization, OpenCode can also ingest baseline defaults from a .well-known/opencode endpoint and let your personal and project configs override them. This reduces “it works on my machine” drift, because tool behavior stays aligned across laptops without manually copying config snippets around.
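As a rough illustration of the repo-level override described above, a project config might look like the sketch below. The key name and the model string are illustrative assumptions rather than a verbatim schema; check the official config reference for the exact fields your OpenCode version supports.

```jsonc
// opencode.jsonc at the repo root (a project-level override; key names are illustrative)
{
  // Assumed key: "model" picks the default provider/model pair for this repo,
  // e.g. something with a larger context window than your global default.
  "model": "anthropic/claude-sonnet-4"
}
```

The practical effect is the precedence chain this paragraph describes: the organization baseline from .well-known/opencode, then your personal global config, then the project file, with the most specific layer winning.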
Once OpenCode is installed and authenticated, a good first-run pattern is: cd into a real project, run opencode, and ask it to do something small and verifiable (for example, “explain how the build works” or “add a unit test for X”). Keep the first task short so you can validate the model connection, file reading, and the edit workflow in one pass. If you want stronger long-term context, such as “remember our service conventions” or “apply the same architectural constraints every time,” you can add retrieval. A simple option is storing a CONTRIBUTING.md or AGENTS.md in the repo and referencing it in prompts. A scalable option is embedding those docs into a vector database such as Milvus or Zilliz Cloud and retrieving relevant snippets automatically based on the task, as in the sketch below. That way, OpenCode stays fast and interactive, while Milvus or Zilliz Cloud supplies high-signal context as the codebase and documentation grow.
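Here is a minimal sketch of that retrieval step using Milvus Lite via pymilvus. The embed() helper, the collection name, and the file paths are placeholder assumptions; in practice you would use a real embedding model and your own chunking strategy for the docs.

```python
# Minimal sketch: index repo convention docs in Milvus Lite and retrieve
# task-relevant snippets. Assumes pymilvus >= 2.4; embed() is a toy
# placeholder standing in for a real embedding model.
import hashlib
from pymilvus import MilvusClient

DIM = 128

def embed(text: str) -> list[float]:
    # Placeholder: deterministic hash-based pseudo-embedding.
    # Swap in a real embedding model before using this for retrieval quality.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in (digest * (DIM // len(digest) + 1))[:DIM]]

client = MilvusClient("./opencode_docs.db")  # local Milvus Lite database file
client.create_collection(collection_name="agent_docs", dimension=DIM)

# Index the convention docs once (or whenever they change).
chunks = open("AGENTS.md").read().split("\n\n")
client.insert(
    collection_name="agent_docs",
    data=[{"id": i, "vector": embed(c), "text": c} for i, c in enumerate(chunks)],
)

# At prompt time, pull the few most relevant snippets for the current task.
task = "add a unit test for the payment service"
hits = client.search(
    collection_name="agent_docs",
    data=[embed(task)],
    limit=3,
    output_fields=["text"],
)
context = "\n".join(hit["entity"]["text"] for hit in hits[0])
print(context)  # supply this as extra context alongside the OpenCode prompt
```

The search result gives you a handful of relevant snippets you can paste (or inject via your own tooling) into the OpenCode prompt, so the agent sees the conventions that matter for the current task without you loading entire documents into context.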