# Repo Hub — Complete Documentation > Give your AI coding assistant the full picture. Repo Hub lets your AI see across all your repositories as one workspace. Configure agents, skills, MCPs, and workflows in a single hub.yaml file. --- # Getting Started > Get up and running with Repo Hub in minutes. Repo Hub gives your AI coding agent everything it needs to ship real features — context across your codebase, structured agent workflows, and infrastructure access. Follow these steps to set up your first workspace. ## Prerequisites - **Node.js** 18+ installed - **Git** configured with SSH access to your repositories - **Cursor**, **Kiro**, or any AI-powered editor installed ## Quick Start Every command can be run with `npx @arvoretech/hub `. To use the shorter `hub` alias throughout this guide (and all other docs), install it globally first: ```bash npm i -g @arvoretech/hub ``` ### 1. Initialize a new hub ```bash hub init my-hub cd my-hub ``` This creates a new directory with a `hub.yaml` configuration file. The file includes a schema reference that enables autocomplete, validation, and hover docs in your editor automatically (see [Configuration](/docs/configuration#editor-support-autocomplete--validation) for details). ### 2. Add your repositories ```bash hub add-repo git@github.com:company/api.git --tech nestjs hub add-repo git@github.com:company/frontend.git --tech nextjs ``` Each repository is declared in `hub.yaml` with its tech stack, commands, and environment configuration. ### 3. Set up the workspace ```bash hub setup ``` This clones all repos, starts infrastructure services (databases, caches), and installs dependencies. ### 4. Generate editor configuration ```bash hub generate --editor cursor # or hub generate --editor kiro # or hub generate --editor claude-code ``` This reads `hub.yaml` and produces editor-specific files: - **Cursor**: `.cursor/rules/orchestrator.mdc`, `.cursor/agents/*.md`, `.cursor/mcp.json`, `.gitignore` + `.cursorignore` - **Kiro**: `.kiro/steering/orchestrator.md`, `.kiro/steering/agent-*.md`, `.kiro/settings/mcp.json`, `AGENTS.md`, `.gitignore` - **Claude Code**: `CLAUDE.md`, `.claude/agents/`, `.mcp.json`, `.gitignore` ### 5. Open and start building Open the project in your editor. Your AI now sees all repos and follows the generated workflow. ## How the Context Pattern Works The magic is in two lines of config: ```bash # .gitignore — repos are excluded from the hub's git api frontend # .cursorignore — but included for AI context !api/ !frontend/ ``` Your AI sees all repos as one workspace. Each repo keeps its own git history. Zero migration, zero overhead. ## What Happens When You Generate When you run `hub generate --editor cursor`, the CLI: 1. Reads your `hub.yaml` declaring the full pipeline 2. Generates an orchestrator rule with workflow instructions 3. Creates agent definitions for each pipeline step 4. Configures MCP connections for infrastructure access 5. Sets up `.gitignore` and `.cursorignore` for the context pattern **The editor is the runtime.** There is no daemon, no server, no separate process. Your AI editor executes the workflow by following the generated rules. ## Next Steps - [Configuration](/docs/configuration) — Learn the full `hub.yaml` schema - [Agent Orchestration](/docs/agents) — Understand how agents collaborate - [Skills](/docs/skills) — Package domain knowledge for agents - [MCPs](/docs/mcps) — Connect AI to your infrastructure --- # Configuration > Overview of the hub.yaml schema. 
The `hub.yaml` file is the single source of truth for your multi-repo workspace. It declares everything — repositories, tools, services, environments, MCPs, integrations, and the development workflow. ## Editor Support (Autocomplete & Validation) Repo Hub provides a [JSON Schema](https://json-schema.org/) for `hub.yaml` that enables autocomplete, inline validation, hover documentation, and error highlighting in any editor with YAML language server support. ### How it works When you run `hub init`, the generated `hub.yaml` includes a schema comment at the top: ```yaml # yaml-language-server: $schema=https://raw.githubusercontent.com/arvoreeducacao/rhm/main/schemas/hub.schema.json name: my-company # ... ``` This single line gives you: - **Autocomplete** for all keys and enum values (tech stacks, MCP packages, pipeline steps, actions) - **Validation** with real-time errors in the Problems panel - **Hover documentation** describing every field - **Required field checking** (`name` and `repos` are required) ### Prerequisites | Editor | What you need | |--------|--------------| | **Cursor** | Works out of the box (YAML support is built-in) | | **VS Code** | Install the [YAML extension by Red Hat](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) | | **Kiro** | Works out of the box | | **Neovim** | Configure [yaml-language-server](https://github.com/redhat-developer/yaml-language-server) via LSP | ### Adding to an existing hub.yaml If you created your `hub.yaml` before this feature, just add the comment to the first line: ```yaml # yaml-language-server: $schema=https://raw.githubusercontent.com/arvoreeducacao/rhm/main/schemas/hub.schema.json ``` That's it. No extension to install, no config to change. --- ## Full Example ```yaml name: my-company tools: node: "22.18.0" pnpm: "10.26.0" repos: - name: api path: ./api url: git@github.com:company/api.git tech: nestjs env_file: .env commands: install: pnpm install dev: pnpm dev build: pnpm build skills: [backend-nestjs] - name: backend path: ./backend url: git@github.com:company/backend.git tech: elixir tools: erlang: "27.3.3" elixir: "1.18.3-otp-27" - name: frontend path: ./frontend url: git@github.com:company/frontend.git tech: nextjs env_file: .env.local skills: [frontend-nextjs] services: - name: postgres image: postgres:16 port: 5432 - name: redis image: redis:7-alpine port: 6379 env: profiles: local: description: "Local environment - Docker services" staging: aws_profile: my-company-stg secrets: api: api-staging-secret prod: aws_profile: my-company-prd secrets: api: api-prod-secret overrides: local: api: DATABASE_URL: "postgres://localhost:5432/mydb" mcps: - name: postgresql package: "@arvoretech/postgresql-mcp" - name: playwright package: "@playwright/mcp" integrations: github: pr_branch_pattern: "{linear_id}-{slug}" slack: channels: prs: "#eng-prs" hooks: pre_tool_use: - type: command command: "./hooks/block-dangerous.sh" matcher: "rm -rf|drop table" after_file_edit: - type: command command: "./hooks/format.sh" session_start: - type: command command: "./hooks/init.sh" commands: review: ./commands/review.md deploy: ./commands/deploy.md workflow: prompt: prepend: | Always respond in Portuguese. sections: after_repositories: | The API uses a custom auth middleware. See api/docs/auth.md. 
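  # The pipeline below is what the orchestrator follows: steps run in order,
  # each delegating to one or more agents; `parallel: true` runs a step's agents concurrently.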
pipeline: - step: refinement agent: refinement - step: coding agents: [coding-backend, coding-frontend] parallel: true - step: review agent: code-reviewer - step: deliver actions: [create-pr, notify-slack] ``` ## Schema Sections Each section of `hub.yaml` has its own documentation page with full schema reference, examples, and CLI commands: | Section | Description | |---------|-------------| | [`repos`](/docs/repos) | Repositories, commands, skills, and tech stack | | [`tools`](/docs/tools) | Tool version management via mise | | [`env`](/docs/environment) | Environment profiles, AWS secrets, and overrides | | [`services`](/docs/services) | Docker Compose services for local development | | [`mcps`](/docs/mcps) | Model Context Protocol server connections | | [`integrations`](/docs/integrations) | Linear, GitHub, Slack, and Playwright | | [`workflow`](/docs/workflow) | Agent pipeline, orchestration, and prompt customization | | [`hooks`](/docs/hooks) | Editor hooks for automation (Cursor + Claude Code + Kiro) | | [`commands`](/docs/commands) | Custom slash commands (Cursor only) | ## Top-Level Fields | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | Hub workspace name | | `description` | string | No | Human-readable description | | `version` | string | No | Configuration version | | `hooks` | object | No | Editor hook definitions keyed by event name | | `commands` | object | No | Named command files (Cursor only) | | `commands_dir` | string | No | Directory of command `.md` files to auto-discover | For the full CLI command reference, see [CLI Reference](/docs/cli). --- # CLI Reference > Complete reference for all hub CLI commands. Install globally with `npm i -g @arvoretech/hub`, then run commands as `hub `. You can also use `npx @arvoretech/hub ` without installing. ## Setup & Init ### `hub init [name]` Create a new hub workspace: ```bash hub init my-hub ``` Creates a directory with `hub.yaml`, `.gitignore`, `.cursorignore`, and `README.md`. ### `hub setup` Full workspace setup — clone repos, start services, install tools, generate env files, install dependencies: ```bash hub setup hub setup --skip-services hub setup --skip-install hub setup --skip-tools ``` | Flag | Description | |------|-------------| | `--skip-services` | Skip Docker service startup | | `--skip-install` | Skip dependency installation | | `--skip-tools` | Skip tool installation via mise | ### `hub doctor` Check required dependencies and tool versions: ```bash hub doctor ``` Checks: git, docker, node (required), pnpm, mise, gh, aws (recommended), plus any tools declared in `hub.yaml`. 
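Putting it together, a typical first run chains these setup commands. A minimal bootstrap sketch, assuming Node.js and git are already installed (the repository URL and `--tech` value are placeholders):

```bash
# Bootstrap a new workspace end to end (URL and --tech are placeholders)
hub init my-hub && cd my-hub
hub add-repo git@github.com:company/api.git --tech nestjs
hub doctor                     # verify git, docker, node, and any tools declared in hub.yaml
hub setup --skip-services      # clone repos and install dependencies; start Docker services later
hub generate --editor cursor   # emit editor rules, agents, and MCP config
```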
### `hub generate` Generate editor configuration files from `hub.yaml`: ```bash hub generate --editor cursor # Cursor IDE (default) hub generate --editor claude-code # Claude Code (CLAUDE.md) hub generate --editor kiro # Kiro IDE ``` Produces: - **Cursor**: `.cursor/rules/orchestrator.mdc`, `.cursor/agents/`, `.cursor/skills/`, `.cursor/commands/`, `.cursor/hooks.json`, `.cursor/mcp.json`, `.gitignore`, `.cursorignore` - **Claude Code**: `CLAUDE.md`, `.claude/agents/`, `.claude/skills/`, `.claude/settings.json` (with hooks), `.mcp.json`, `.gitignore` - **Kiro**: `.kiro/steering/orchestrator.md`, `.kiro/steering/agent-*.md`, `.kiro/skills/`, `.kiro/settings/mcp.json`, `AGENTS.md`, `.gitignore` ## Repositories ### `hub add-repo ` Add a repository to the hub: ```bash hub add-repo git@github.com:company/api.git hub add-repo git@github.com:company/api.git --name api --tech nestjs ``` | Flag | Description | |------|-------------| | `-n, --name ` | Repository name (defaults to repo name from URL) | | `-t, --tech ` | Technology stack (nestjs, nextjs, elixir, react, etc.) | ### `hub pull` Pull latest changes in all repositories: ```bash hub pull ``` ### `hub status` Show git status for all repositories: ```bash hub status ``` Shows: current branch, number of changed files, commits ahead/behind upstream. ### `hub exec ` Execute a command in all repositories: ```bash hub exec "git checkout main" hub exec "git stash" hub exec "pnpm outdated" ``` ## Environment ### `hub env [profile]` Generate environment files from hub.yaml profiles: ```bash hub env local # Local Docker services (no AWS) hub env staging # Staging secrets from AWS hub env prod # Production secrets (careful!) ``` Default profile: `local`. See [Environment](/docs/environment) for full documentation. ## Services ### `hub services [action]` Manage Docker development services: ```bash hub services up # Start services hub services down # Stop services hub services restart # Restart services hub services ps # Show status hub services logs # Follow all logs hub services logs mysql # Follow specific service logs hub services clean # Stop and remove volumes ``` See [Services](/docs/services) for configuration details. ## Tools ### `hub tools [subcommand]` Manage development tool versions via mise: ```bash hub tools generate # Generate .mise.toml files hub tools install # Install tools hub tools install --generate # Generate + install hub tools check # Verify versions match ``` See [Tools](/docs/tools) for configuration details. ## Directory ### `hub directory [query]` Browse the curated Repo Hub directory of skills, agents, hooks, and commands: ```bash hub directory # Open the full directory hub dir # Short alias hub directory "aws" # Search for "aws" hub directory -t skill # Filter by type hub directory -t hook "session" # Combine type filter + search ``` | Flag | Description | |------|-------------| | `-t, --type ` | Filter by type: `skill`, `agent`, `hook`, `command` | Each resource type also has its own `find` subcommand: ```bash hub skills find react # Browse skills hub agents find aws # Browse agents hub hooks find # Browse hooks hub commands find # Browse commands ``` ## Skills ### `hub skills [subcommand]` Manage agent skills. Browse curated skills in the [Directory](/directory). 
```bash hub skills list # List installed skills hub skills find react # Browse curated skills in the directory hub skills add backend-nestjs # Install from registry (by name) hub skills add vercel-labs/agent-skills/react-best-practices # Specific skill from GitHub hub skills add vercel-labs/agent-skills # All skills from a GitHub repo hub skills add vercel-labs/agent-skills --list # List remote skills hub skills add git@github.com:company/skills.git # Install from git URL hub skills add ./my-local-skills # Install from local path hub skills add backend-nestjs --global # Install globally hub skills remove backend-nestjs # Remove a skill ``` | Flag | Description | |------|-------------| | `-s, --skill ` | Install a specific skill only (for repo sources) | | `-g, --global` | Install to/remove from global `~/.cursor/skills/` | | `-r, --repo ` | Override registry repository | | `-l, --list` | List available skills without installing | See [Skills](/docs/skills) for more details. ## Agents ### `hub agents [subcommand]` Manage agent definitions: ```bash hub agents list # List installed agents hub agents add debugger # Install from registry (by name) hub agents add arvoreeducacao/rhm # Install from GitHub repo hub agents add debugger --global # Install globally hub agents remove debugger # Remove an agent hub agents sync # Install all agents from hub.yaml pipeline hub agents sync --force # Re-install all, even if they exist ``` | Flag | Description | |------|-------------| | `-a, --agent ` | Install a specific agent only | | `-g, --global` | Install to/remove from global `~/.cursor/agents/` | | `-r, --repo ` | Override registry repository | | `-f, --force` | Re-install even if already installed (sync only) | See [Agents](/docs/agents) for more details. ## Hooks ### `hub hooks [subcommand]` Manage editor hooks: ```bash hub hooks list # List installed hooks hub hooks find # Browse curated hooks in the directory hub hooks add format-on-save # Install from registry hub hooks add company/shared-hooks # Install from GitHub repo hub hooks add ./my-hooks # Install from local path hub hooks remove format-on-save # Remove a hook ``` | Flag | Description | |------|-------------| | `--hook ` | Install a specific hook only (for repo sources) | | `-r, --repo ` | Override registry repository | See [Hooks](/docs/hooks) for more details. ## Commands ### `hub commands [subcommand]` Manage slash commands (Cursor only): ```bash hub commands list # List installed commands hub commands find # Browse curated commands in the directory hub commands add review # Install from registry hub commands add company/shared-cmds # Install from GitHub repo hub commands add ./my-commands # Install from local path hub commands remove review # Remove a command ``` | Flag | Description | |------|-------------| | `-c, --command ` | Install a specific command only (for repo sources) | | `-r, --repo ` | Override registry repository | See [Commands](/docs/commands) for more details. 
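Because the resource commands above all follow the same pattern, bringing a freshly cloned hub up to date is usually a short sequence. A sketch, assuming `hub.yaml` already references these resources (the specific names are illustrative):

```bash
# Install the shared resources an existing hub expects (names are illustrative)
hub agents sync                       # every agent referenced in the hub.yaml pipeline
hub skills add backend-nestjs         # a registry skill, by name
hub hooks add format-on-save          # a registry hook
hub commands add review               # a registry slash command (Cursor only)
hub skills list && hub agents list    # confirm what was installed
```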
## Memory ### `hub memory [subcommand]` Manage team memories — persistent knowledge base for AI context: ```bash hub memory list # List all memories hub memory list --category decisions # Filter by category hub memory list --status archived # Show archived hub memory add decisions "Use PostgreSQL" # Create a memory hub memory add conventions "API naming" --tags "rest,api" --author joao hub memory add gotchas "Sentry v8 leak" --content "Sentry SDK v8 causes memory leak with NestJS interceptors" hub memory archive 2024-06-01-use-postgresql # Soft-delete hub memory remove 2024-06-01-use-postgresql # Permanent delete ``` | Flag | Description | |------|-------------| | `-c, --category ` | Filter by category | | `-s, --status ` | Filter by status (active, archived, superseded) | | `-t, --tags ` | Comma-separated tags | | `-a, --author ` | Author name | | `--content ` | Memory content (otherwise opens template) | Categories: `decisions`, `conventions`, `incidents`, `domain`, `gotchas`. The `@arvoretech/memory-mcp` provides semantic search via MCP so AI agents can query your team's knowledge base on-demand. See [Memory](/docs/memory) for full documentation. ## Registry ### `hub registry [subcommand]` Browse skills, agents, hooks, and commands available in the registry: ```bash hub registry list # List everything hub registry list --type hook # List hooks only hub registry search "nestjs" # Search by keyword hub registry search --type skill # Filter by type hub registry search --type command # Filter by type ``` | Flag | Description | |------|-------------| | `-t, --type ` | Filter by type: `skill`, `agent`, `hook`, `command` | | `-r, --repo ` | Override registry repository | ## Worktrees ### `hub worktree [subcommand]` Manage git worktrees for parallel work: ```bash hub worktree add feature-login # Create worktree + copy envs hub worktree list # List all worktrees hub worktree remove feature-login # Remove a worktree hub worktree copy-envs # Copy env files to current dir hub worktree copy-envs feature-login # Copy to specific worktree ``` See [Worktrees](/docs/worktrees) for more details. ## Update ### `hub update` Update the hub CLI to the latest version: ```bash hub update # Update to latest hub update --check # Check for updates without installing ``` Automatically detects your package manager (pnpm, npm, or yarn) and runs the appropriate global install command. | Flag | Description | |------|-------------| | `--check` | Only check for updates, don't install | --- # Repositories > Declare and manage repositories in hub.yaml. The `repos` section declares all repositories in your workspace. Each repo keeps its own git history while being visible to AI agents as part of a unified workspace. ## Schema ```yaml repos: - name: api path: ./api url: git@github.com:company/api.git tech: nestjs description: "REST API and GraphQL gateway" env_file: .env commands: install: pnpm install dev: pnpm dev build: pnpm build lint: pnpm lint test: pnpm test skills: [backend-nestjs] tools: node: "22.18.0" ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | Unique identifier for the repo | | `path` | string | Yes | Local path relative to hub root | | `url` | string | Yes | Git clone URL | | `tech` | string | No | Technology stack (nestjs, nextjs, elixir, react, etc.) 
| | `description` | string | No | Human-readable description | | `env_file` | string | No | Path to environment file relative to repo root | | `commands` | object | No | Named commands | | `skills` | string[] | No | [Skills](/docs/skills) to attach to this repo | | `tools` | object | No | Per-repo [tool versions](/docs/tools) (merged with global) | ## Commands The `commands` object maps command names to shell commands. These are used by `hub setup` (for `install`) and available to agents: ```yaml commands: install: pnpm install dev: pnpm dev build: pnpm build lint: pnpm lint test: pnpm vitest run migrate: pnpm drizzle-kit push ``` Built-in command names: `install`, `dev`, `build`, `lint`, `test`. You can add custom names. ## Adding Repositories ```bash # Add from URL (name inferred from repo) hub add-repo git@github.com:company/api.git # With explicit name and tech hub add-repo git@github.com:company/api.git --name api --tech nestjs ``` This updates `hub.yaml`, `.gitignore`, and `.cursorignore` automatically. ## Git Operations ```bash # Pull latest in all repos hub pull # Show git status for all repos hub status # Run any command across all repos hub exec "git checkout main" hub exec "git stash" ``` ## How the Context Pattern Works The magic is in two files: ```bash # .gitignore — repos are excluded from the hub's git api frontend backend # .cursorignore — but included for AI context !api/ !frontend/ !backend/ ``` Your AI sees all repos as one workspace. Each repo keeps its own git history. `hub generate` creates these files automatically. --- # Tools > Manage development tool versions via mise. The `tools` section declares development tool versions for your workspace. Repo Hub uses [mise](https://mise.jdx.dev) to install and manage tools like Node.js, pnpm, Erlang, Elixir, Ruby, and more. ## Schema ### Global Tools Declare tools at the hub level — these are shared by all repositories: ```yaml tools: node: "22.18.0" pnpm: "10.26.0" direnv: "2.33.0" ``` ### Per-Repo Tools Override or add tools for specific repositories. 
Per-repo tools are **merged** with global tools (repo values win on conflicts): ```yaml tools: node: "22.18.0" pnpm: "10.26.0" repos: - name: api path: ./api # inherits node 22.18.0 and pnpm 10.26.0 - name: backend path: ./backend tools: erlang: "27.3.3" elixir: "1.18.3-otp-27" # also inherits node 22.18.0 and pnpm 10.26.0 ``` ### Mise Settings Pass settings to mise (e.g., enable experimental features): ```yaml mise_settings: experimental: true ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `tools` (top-level) | object | No | Global tool versions shared by all repos | | `tools` (per-repo) | object | No | Repo-specific tools, merged with global | | `mise_settings` | object | No | Settings passed to `.mise.toml` `[settings]` section | ## CLI Commands ### `hub tools generate` Generates `.mise.toml` files from `hub.yaml`: ```bash hub tools generate ``` This creates: - A root `.mise.toml` with global tools - A `.mise.toml` in each repo that has per-repo tools (with global tools merged in) ### `hub tools install` Installs all tools defined in `hub.yaml` using mise: ```bash # Install from existing .mise.toml files hub tools install # Generate .mise.toml files first, then install hub tools install --generate ``` ### `hub tools check` Verifies that installed tool versions match what's declared in `hub.yaml`: ```bash hub tools check ``` Output: ``` Hub tools: ✓ node 22.18.0 ✓ pnpm 10.26.0 ⚠ direnv: 2.32.0 (expected 2.33.0) ▸ backend ✓ erlang 27.3.3 ✗ elixir: not found (expected 1.18.3-otp-27) Fix with: hub tools install --generate ``` ### `hub doctor` The doctor command also checks tool versions from `hub.yaml` alongside required dependencies: ```bash hub doctor ``` ## Integration with Setup `hub setup` automatically installs tools if `tools` is defined in `hub.yaml`. Skip with `--skip-tools`: ```bash # Full setup (includes tool installation) hub setup # Skip tools hub setup --skip-tools ``` ## Generated Files The generated `.mise.toml` follows the standard mise format: ```toml [tools] node = "22.18.0" pnpm = "10.26.0" erlang = "27.3.3" elixir = "1.18.3-otp-27" [settings] experimental = true ``` ## Prerequisites - [mise](https://mise.jdx.dev) must be installed: `curl https://mise.run | sh` or `brew install mise` - mise must be activated in your shell: ```bash eval "$(mise activate zsh)" # zsh eval "$(mise activate bash)" # bash ``` --- # Environment > Manage environment variables, AWS secrets, and profiles. The `env` section manages environment variables across all repositories. It supports multiple profiles (local, staging, prod), AWS Secrets Manager integration, and per-repo overrides. ## Schema ```yaml env: profiles: local: description: "Local environment - uses Docker services" services: [mysql, redis, elasticsearch] staging: description: "Staging environment" aws_profile: my-company-stg secrets: api: api-staging-secret backend: backend-staging-secret prod: description: "Production - READ ONLY" aws_profile: my-company-prd secrets: api: api-prod-secret backend: backend-prod-secret overrides: local: api: NODE_ENV: development PORT: "3001" DATABASE_URL: "mysql://root:root@localhost:3306/mydb" REDIS_URL: "redis://localhost:6379" frontend: NODE_ENV: development NEXT_PUBLIC_API_URL: "http://localhost:3001" staging: api: NODE_ENV: development PORT: "3001" REDIS_URL: "redis://localhost:6379" prod: api: NODE_ENV: development PORT: "3001" ``` ## Profiles Each profile represents a target environment. 
Profiles are defined under `env.profiles`: | Field | Type | Required | Description | |-------|------|----------|-------------| | `description` | string | No | Human-readable description | | `services` | string[] | No | Docker services required for this profile | | `aws_profile` | string | No | AWS CLI profile name for authentication | | `secrets` | object | No | Map of repo name to AWS Secrets Manager secret name | | `build_database_url` | object | No | Build DATABASE_URL from secret fields | ## How It Works When you run `hub env `, the CLI: 1. **Authenticates** with AWS using the profile's `aws_profile` 2. **Fetches secrets** from AWS Secrets Manager for each repo 3. **Writes** all secret key-value pairs to the repo's `env_file` 4. **Applies overrides** on top (overrides always win) 5. **Merges** with existing env file content (existing vars preserved unless overwritten) For the `local` profile, AWS secrets are skipped — only overrides are applied. ## CLI Commands ```bash # Generate env files for local development (no AWS needed) hub env local # Generate env files with staging secrets from AWS hub env staging # Generate env files with production secrets (careful!) hub env prod ``` ## Overrides Static environment variable overrides per profile per repo. Structure: `overrides[profile][repoName][VAR_NAME] = value`. Overrides are applied **after** AWS secrets, so they always take precedence. Common uses: - Setting local connection strings (`localhost` instead of remote hosts) - Development-specific flags (`NODE_ENV=development`) - Disabling production-only features locally (empty SQS queue URLs, etc.) ```yaml overrides: staging: api: NODE_ENV: development PORT: "3001" REDIS_URL: "redis://localhost:6379" AUTOMATIC_ASSESSMENT_SQS_QUEUE_URL: "" ``` ## Secret with Profile Override By default, secrets are fetched using the profile's `aws_profile`. But sometimes a repo needs secrets from a **different** AWS account (e.g., staging uses prod API keys). Use the object form: ```yaml env: profiles: staging: aws_profile: my-company-stg secrets: # Simple form — uses my-company-stg backend: backend-staging-secret # Object form — uses a different profile api: secret: api-prod-secret profile: my-company-prd ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `secret` | string | Yes | AWS Secrets Manager secret name | | `profile` | string | No | AWS profile override (defaults to the profile's `aws_profile`) | ## Building DATABASE_URL Constructs a `DATABASE_URL` from fields in an AWS secret. Useful when the database credentials are stored as individual fields rather than a connection string. ```yaml env: profiles: staging: aws_profile: my-company-stg build_database_url: api: from_secret: backend-staging-secret profile: my-company-stg ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `from_secret` | string | Yes | Secret name containing DB credentials | | `profile` | string | No | AWS profile override | | `vars` | object | No | Custom field name mapping | | `template` | string | No | Custom URL template | ### Default Field Names The CLI looks for: `DB_USERNAME`, `DB_PASSWORD`, `DB_HOSTNAME`, `DB_PORT`, `DB_NAME`. 
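As a sketch of how those defaults resolve, assume `backend-staging-secret` stores individual credential fields (the values below are made up):

```bash
# Hypothetical secret contents in AWS Secrets Manager (key/value form):
#   DB_USERNAME=app  DB_PASSWORD=s3cret  DB_HOSTNAME=db.internal  DB_PORT=3306  DB_NAME=myapp
#
# After `hub env staging`, the repo's env_file gains a URL built from the default template
# (see "Custom URL Template" below):
DATABASE_URL=mysql://app:s3cret@db.internal:3306/myapp
```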
### Custom Field Mapping If your secret uses different field names: ```yaml build_database_url: api: from_secret: rds-credentials vars: user: username password: password host: endpoint port: port database: dbname ``` ### Custom URL Template Default template: `mysql://{user}:{password}@{host}:{port}/{database}` ```yaml build_database_url: api: from_secret: rds-credentials template: "postgresql://{user}:{password}@{host}:{port}/{database}?sslmode=require" ``` The resulting `DATABASE_URL` and `IDENTITY_DATABASE_URL` are both written to the env file. ## Integration with Setup `hub setup` generates local env files automatically during setup (using the `local` overrides). --- # Services > Manage Docker development services. The `services` section declares infrastructure services (databases, caches, search engines) that run locally via Docker Compose. ## Schema ```yaml services: - name: mysql image: mysql:8.0 port: 3306 env: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: myapp - name: postgres image: postgres:16 port: 5432 env: POSTGRES_PASSWORD: postgres - name: redis image: redis:7-alpine port: 6379 - name: elasticsearch image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0 port: 9200 env: discovery.type: single-node xpack.security.enabled: "false" - name: qdrant image: qdrant/qdrant:latest port: 6333 ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | Service name | | `image` | string | Yes | Docker image | | `port` | number | No | Single exposed port | | `ports` | number[] | No | Multiple exposed ports | | `env` | object | No | Environment variables passed to the container | ## CLI Commands ### `hub services up` Start all services in the background: ```bash hub services up ``` This generates a `docker-compose.yml` from `hub.yaml` (if it doesn't exist) and runs `docker compose up -d`. ### `hub services down` Stop all services: ```bash hub services down ``` ### `hub services ps` Show status of running services: ```bash hub services ps ``` ### `hub services logs` Follow logs from all services, or a specific one: ```bash # All services hub services logs # Specific service hub services logs mysql ``` ### `hub services restart` Restart all services: ```bash hub services restart ``` ### `hub services clean` Stop services **and remove volumes** (resets all data): ```bash hub services clean ``` ## Generated docker-compose.yml The CLI generates a `docker-compose.yml` from the `services` section. Example output: ```yaml services: mysql: image: mysql:8.0 restart: unless-stopped ports: - "3306:3306" environment: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: myapp volumes: - mysql_data:/var/lib/mysql redis: image: redis:7-alpine restart: unless-stopped ports: - "6379:6379" volumes: - redis_data:/var/lib/redis volumes: mysql_data: redis_data: ``` Volumes and `restart: unless-stopped` are automatically added by the CLI. The volume mount path is inferred from the image name (e.g. `mysql` -> `/var/lib/mysql`, `redis` -> `/var/lib/redis`, `postgres` -> `/var/lib/postgresql/data`). ## Integration with Setup `hub setup` starts services automatically. Skip with `--skip-services`: ```bash hub setup --skip-services ``` --- # Integrations > Connect to Linear, GitHub, Slack, and Playwright. The `integrations` section configures external services used by the orchestrator agent during the development pipeline — task management, pull requests, notifications, and browser testing. 
## Schema ```yaml integrations: linear: team: Engineering labels: [feature, ai-generated] link_pattern: "https://linear.app/my-company/issue/{id}/{slug}" github: org: my-company pr_branch_pattern: "{linear_id}-{slug}" slack: channels: prs: "#eng-prs" releases: "#releases" templates: pr_created: "New PR: {pr_url} for {task_id}" playwright: base_url: "http://localhost:3000" ``` ## Linear Task management integration. The orchestrator creates and updates tasks in Linear. | Field | Type | Description | |-------|------|-------------| | `team` | string | Linear team name for task creation | | `labels` | string[] | Default labels applied to new tasks | | `link_pattern` | string | URL pattern for task links (`{id}` and `{slug}` are replaced) | When a user requests a feature without a task, the orchestrator automatically creates one in Linear and provides the link. ## GitHub Pull request and repository integration. | Field | Type | Description | |-------|------|-------------| | `org` | string | GitHub organization name | | `pr_branch_pattern` | string | Branch naming pattern for PRs | Available variables in `pr_branch_pattern`: - `{linear_id}` — Linear task ID (e.g., `ENG-123`) - `{slug}` — URL-friendly task title - `{task_id}` — Generic task ID ## Slack Team notifications via Slack. | Field | Type | Description | |-------|------|-------------| | `channels` | object | Map of purpose to channel name | | `templates` | object | Message templates with `{variable}` placeholders | ### Channels Define channels by purpose: ```yaml channels: prs: "#eng-prs" # PR notifications releases: "#releases" # Release notifications alerts: "#eng-alerts" # Error alerts ``` ### Templates Define message templates with placeholders: ```yaml templates: pr_created: "PR created: {pr_url} for {task_id}" deploy_done: "Deployed {version} to {environment}" ``` Available variables: `{pr_url}`, `{task_id}`, `{repo_name}`, `{branch}`, `{version}`, `{environment}`. ## Playwright Browser automation configuration for QA agents. | Field | Type | Description | |-------|------|-------------| | `base_url` | string | Base URL for the web application under test | ```yaml playwright: base_url: "http://localhost:3000" ``` The QA agent uses this to navigate the application during end-to-end testing. ## How Integrations Work Integrations are **purely declarative** — there is no CLI code that calls Linear, Slack, or GitHub APIs. Instead, `hub generate` reads the `integrations` section and **injects instructions into the orchestrator rule** (`.cursor/rules/orchestrator.mdc` or `AGENTS.md`). The generated orchestrator rule will contain sections like: ```markdown ## Task Management If the user doesn't have a task, create one using the Linear MCP. Add it to the **Engineering** team. ## Delivery Details ### Pull Requests For each repository with changes, push the branch and create a PR. Branch naming pattern: `{linear_id}-{slug}` ### Slack Notifications - **prs**: Post to `#eng-prs` ``` At runtime, the **AI agent reads these instructions** and uses the corresponding MCPs to execute them: 1. **Linear MCP** — Creates tasks, updates status 2. **GitHub MCP** (or `gh` CLI) — Creates branches and PRs 3. **Slack MCP** — Posts notifications to channels 4. **Playwright MCP** — Runs browser tests So the flow is: ``` hub.yaml (integrations) → hub generate → orchestrator rule (text) → AI agent → MCPs (execution) ``` This means you need the corresponding MCPs configured in the `mcps` section for integrations to actually work at runtime. 
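As a sketch, an `mcps` block that backs the Linear, Slack, and Playwright integrations above might look like this (the Slack package name and token variable are placeholders; use whichever Slack MCP your team prefers):

```yaml
mcps:
  - name: linear
    url: "https://mcp.linear.app/sse"
  - name: slack
    package: "@your-org/slack-mcp"            # placeholder: any Slack MCP works here
    env:
      SLACK_BOT_TOKEN: "${env:SLACK_BOT_TOKEN}"
  - name: playwright
    package: "@playwright/mcp"
  # GitHub PRs can go through a GitHub MCP or the gh CLI, per "How Integrations Work" above
```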
--- # Workflow > Define the agent pipeline for end-to-end development. The `workflow` section defines the development pipeline that the orchestrator agent follows. Each step calls one or more specialized [agents](/docs/agents) in order. ## Schema ```yaml workflow: task_folder: "./tasks/{task_id}/" pipeline: - step: refinement agent: refinement output: refinement.md - step: coding agents: - agent: coding-backend output: code-backend.md when: repos_include(api, backend) - agent: coding-frontend output: code-frontend.md when: repos_include(frontend) parallel: true - step: review agent: code-reviewer output: code-review.md - step: qa agents: [qa-backend, qa-frontend] parallel: true tools: [playwright] - step: deliver actions: [create-pr, notify-slack, update-linear] ``` ## Pipeline Steps | Field | Type | Required | Description | |-------|------|----------|-------------| | `step` | string | Yes | Step name (used as section header) | | `agent` | string | No | Single agent to execute | | `agents` | array | No | Multiple agents (string names or objects) | | `parallel` | boolean | No | Run agents in parallel (default: sequential) | | `output` | string | No | Document filename for agent output | | `tools` | string[] | No | MCP tools available to this step | | `when` | string | No | Condition for executing this step | | `actions` | string[] | No | Built-in actions to execute | ## Task Folder All task documents are stored in the `task_folder`. Default: `./tasks/{task_id}/` ``` ./tasks/ENG-123/ ├── refinement.md ├── code-backend.md ├── code-frontend.md ├── code-review.md ├── qa-backend.md └── qa-frontend.md ``` ## Agent Steps ### Single Agent ```yaml - step: refinement agent: refinement output: refinement.md ``` ### Multiple Agents (Parallel) ```yaml - step: coding agents: [coding-backend, coding-frontend] parallel: true ``` ### Multiple Agents (with conditions) ```yaml - step: coding agents: - agent: coding-backend output: code-backend.md when: repos_include(api) - agent: coding-frontend output: code-frontend.md when: repos_include(frontend) parallel: true ``` ## Delivery Actions The `actions` field triggers built-in delivery behaviors: | Action | Description | |--------|-------------| | `create-pr` | Create pull requests for each repo with changes | | `notify-slack` | Send notification to the configured Slack channel | | `notify-slack-prs` | Send PR links to the Slack PRs channel | | `update-linear` | Update the Linear task status | | `update-linear-status` | Update the Linear task to Review status | ```yaml - step: deliver actions: [create-pr, notify-slack-prs, update-linear-status] ``` ## How It Runs 1. `hub generate` reads the pipeline and produces orchestrator instructions 2. You open the project in your AI editor 3. The orchestrator agent reads the generated rule and follows the pipeline 4. Each step calls a sub-agent (via the editor's agent mode) 5. Agents write output documents to the task folder 6. The orchestrator validates between steps — asking the user for approval when needed **The editor is the runtime.** No daemon, no server, no separate process. ## Enforcing the Pipeline By default, the orchestrator follows the pipeline but may occasionally take shortcuts for simple tasks. Set `enforce_workflow: true` to add strict enforcement rules to the generated prompt: ```yaml workflow: enforce_workflow: true pipeline: - step: refinement agent: refinement # ... 
``` When enabled, the orchestrator will: - **Never skip a step**, even for small or "obvious" changes - **Always execute steps in order** — no reordering, merging, or ad-hoc parallelization - **Always delegate to the designated agent** instead of attempting the work itself - **Always validate outputs** before moving to the next step - **Ask for confirmation** if the user explicitly requests skipping a step This is recommended for teams that want a consistent, auditable development process on every task. ## Prompt Customization You can inject custom content into the generated orchestrator prompt using the `prompt` field. This lets you add company-specific instructions, guidelines, or context without editing generated files. ```yaml workflow: prompt: prepend: | Always respond in Portuguese. Follow the coding standards documented at docs/standards.md. append: | When in doubt, ask the user before proceeding. sections: after_repositories: | The API uses a custom auth middleware. See api/docs/auth.md for details. after_pipeline: | Always run linting before committing changes. coding_guidelines: | Use functional programming patterns where possible. Prefer composition over inheritance. ``` | Field | Description | |-------|-------------| | `prepend` | Injected right after the orchestrator introduction | | `append` | Added at the very end of the generated prompt | | `sections.after_repositories` | Injected after the repositories list | | `sections.after_pipeline` | Injected after the pipeline section | | `sections.after_delivery` | Injected after the delivery section | | `sections.` | Any other key becomes a new `## Section Name` at the end | Custom section keys are converted to title case (e.g., `coding_guidelines` becomes `## Coding Guidelines`). ## Validation Flow Between steps, the orchestrator: 1. **Reads the output document** from the previous step 2. **Checks for unanswered questions** — asks the user one at a time 3. **Validates completeness** — ensures all required sections are filled 4. **Gets user approval** before proceeding to the next step If any validation agent (review, QA) leaves comments, the orchestrator calls the relevant coding agents again to address them before proceeding. ## Related - **[Hooks](/docs/hooks)** — Automate workflows with editor lifecycle hooks (format on save, block dangerous commands, session init) - **[Commands](/docs/commands)** — Create reusable slash commands for your AI editor --- # Agent Orchestration > How specialized AI agents collaborate in structured pipelines. Repo Hub's agent system transforms your AI editor into a full development team. Specialized agents collaborate in a structured pipeline, each handling a specific part of the development lifecycle. ## How It Works When you run `hub generate`, the CLI produces agent definitions from your `hub.yaml` workflow. The orchestrator agent reads the pipeline and delegates to sub-agents in order. ``` Developer: "Implement user profile editing" | v Orchestrator -----> Creates task in Linear | v Refinement Agent -> Collects requirements, defines contracts | v Coding Agents ----> Implement backend + frontend in parallel | v Review Agent -----> Reviews code against the requirements | v QA Agent ---------> Runs E2E tests with Playwright | v Orchestrator -----> Creates PR, posts in Slack, updates Linear | v Developer: "PR is ready, tested, and notified" ``` **The editor is the runtime.** There is no daemon or separate process. 
The orchestrator agent follows the pipeline by calling sub-agents through your AI editor's agent mode. > **Kiro note:** Kiro does not support sub-agents. When generating for Kiro, each agent definition becomes a steering file (`.kiro/steering/agent-*.md`) with `inclusion: auto`. The orchestrator instructs the AI to follow each steering file's guidelines sequentially instead of spawning sub-agents. The result is the same structured pipeline, executed by a single agent that switches roles at each step. ## Built-in Agents ### Orchestrator The main coordinator. It reads the pipeline, manages task state, and delegates to sub-agents in order. **Responsibilities:** - Create tasks in project management tools (Linear, Jira) - Call sub-agents following the pipeline order - Validate outputs between steps - Create PRs and send notifications when done ### Refinement Agent Gathers requirements and defines technical contracts before any code is written. **Outputs:** - `tasks//refinement.md` with requirements, scope, API contracts, and edge cases ### Coding Agents Implement the actual code. You can have separate agents per tech stack: - `coding-backend` — Backend implementation - `coding-frontend` — Frontend implementation **Outputs:** - `tasks//code-backend-*.md` with implementation details - `tasks//code-frontend-*.md` with implementation details ### Code Review Agent Reviews all implementations against the refinement requirements. **Outputs:** - `tasks//code-review.md` with review comments and approval status ### QA Agent Runs tests and validates the implementation. **Outputs:** - `tasks//qa-backend.md` or `qa-frontend.md` with test results ### Debugger Agent Investigates bugs and production issues. Can coordinate with infrastructure MCPs (AWS, Kubernetes) to gather context. ## Customizing Agents Agents are defined as Markdown files with structured instructions. You can customize them by: 1. Modifying the generated agent files in `.cursor/agents/` 2. Adding custom agents in your `hub.yaml` workflow 3. Creating new agent templates in the `agents/` directory ## Discovering Agents from the Registry Before installing, you can browse all available agents in the [Registry](/docs/registry): ```bash hub registry list --type agent ``` Or search for a specific agent: ```bash hub registry search "debug" --type agent hub registry search "review" --type agent ``` This queries the registry repository and shows each agent's name and description, so you can decide which ones to install. ## Managing Agents with the CLI ### `hub agents list` List installed agents (project and global): ```bash hub agents list ``` ### `hub agents add ` Install agents from the registry, GitHub, a git URL, or a local path: ```bash # From the registry (by name) hub agents add debugger # From a GitHub repo hub agents add arvoreeducacao/rhm # Install globally hub agents add debugger --global ``` | Flag | Description | |------|-------------| | `-a, --agent ` | Install a specific agent only | | `-g, --global` | Install to global `~/.cursor/agents/` | | `-r, --repo ` | Override registry repository | ### `hub agents sync` Install all agents referenced in your `hub.yaml` workflow pipeline from the registry in a single command: ```bash hub agents sync ``` It reads the pipeline steps, collects every agent name, and downloads the ones that aren't already installed locally. 
Existing agents are skipped unless you pass `--force`: ```bash hub agents sync --force hub agents sync --global ``` | Flag | Description | |------|-------------| | `-g, --global` | Install to global `~/.cursor/agents/` | | `-r, --repo ` | Override registry repository | | `-f, --force` | Re-install even if the agent already exists locally | ### `hub agents remove ` Remove an agent: ```bash hub agents remove debugger hub agents remove debugger --global ``` ## Pipeline Configuration Pipelines are defined in `hub.yaml` under the `workflow` key: ```yaml workflow: pipeline: - step: refinement agent: refinement - step: coding agents: [coding-backend, coding-frontend] parallel: true - step: review agent: code-reviewer - step: qa agent: qa-frontend tools: [playwright] - step: deliver actions: [create-pr, notify-slack] ``` Steps run sequentially by default. Use `parallel: true` to run multiple agents simultaneously. --- # Skills > Package domain knowledge as reusable skills for AI agents. Skills are packaged domain knowledge that agents consult when working on specific repositories or tasks. They encode project conventions, patterns, and best practices in a format AI agents can follow. Skills follow the [Agent Skills](https://agentskills.io/) open standard, which works across multiple AI tools including Cursor, Claude Code, Kiro, Windsurf, and many others. ## What is a Skill? A skill is a directory with a `SKILL.md` file that contains structured instructions about a specific technology, framework, or domain: ``` skills/backend-nestjs/ ├── SKILL.md # Main instructions (required) ├── references/ # Supporting docs (optional) └── scripts/ # Helper scripts (optional) ``` ## Using Skills Reference skills per-repo in your `hub.yaml`: ```yaml repos: - name: api path: ./api tech: nestjs skills: [backend-nestjs] - name: frontend path: ./frontend tech: nextjs skills: [frontend-nextjs] ``` When an agent works on a repo, it reads the associated skills first. This ensures consistent code quality and adherence to project conventions without manual instruction. ## Built-in Skills Repo Hub ships with several built-in skills: | Skill | Description | |-------|-------------| | `backend-nestjs` | NestJS development patterns, testing, and conventions | | `backend-elixir` | Elixir/Phoenix patterns with Ecto and GraphQL | | `frontend-nextjs` | Next.js App Router patterns with React and Tailwind | | `frontend-react` | Legacy React patterns with styled-components | | `database-mysql` | MySQL schema exploration and query patterns | | `aws` | AWS infrastructure patterns | | `kubernetes` | Kubernetes/EKS operations | | `qa-test-planner` | Test planning and QA patterns | ## Installing from the Community You can install any skill from any GitHub repository that follows the Agent Skills standard. Browse curated skills in the [Directory](/directory). 
### Install a specific skill from a repo Use the `owner/repo/skill-name` format to download a single skill via GitHub API — no cloning required: ```bash hub skills add vercel-labs/agent-skills/react-best-practices hub skills add vercel-labs/agent-skills/web-design-guidelines hub skills add anthropics/skills/frontend-design hub skills add supabase/agent-skills/supabase-postgres-best-practices hub skills add obra/superpowers/systematic-debugging ``` ### Install all skills from a repo ```bash hub skills add vercel-labs/agent-skills ``` ### List available skills from a remote repo ```bash hub skills add vercel-labs/agent-skills --list ``` ### Browse the community directory ```bash hub skills find hub skills find react ``` This opens the [Repo Hub Directory](/directory), a curated collection of verified skills. ## Discovering Skills from the Registry Before installing, you can browse all available skills in the [Registry](/docs/registry): ```bash hub registry list --type skill ``` Or search for a specific skill: ```bash hub registry search "nestjs" --type skill hub registry search "frontend" --type skill ``` This queries the registry repository and shows each skill's name and description, so you can decide which ones to install. ## Installing from the Registry Repo Hub maintains its own registry of skills. Install by name: ```bash hub skills add backend-nestjs hub skills add frontend-nextjs ``` The registry is the Repo Hub repository itself. You can override it with the `HUB_REGISTRY` environment variable or the `--repo` flag: ```bash hub skills add backend-nestjs --repo myorg/my-skills-repo ``` ## CLI Commands ### `hub skills list` List installed skills (project and global): ```bash hub skills list ``` ### `hub skills add ` Install skills from the registry, GitHub, a git URL, or a local path: ```bash # From the registry (by name) hub skills add backend-nestjs # Specific skill from a GitHub repo (downloads only that skill) hub skills add vercel-labs/agent-skills/react-best-practices # All skills from a GitHub repo hub skills add vercel-labs/agent-skills # From git URL hub skills add git@github.com:company/ai-skills.git # From a local path hub skills add ./my-local-skills # Install globally (shared across all projects) hub skills add backend-nestjs --global # List remote skills without installing hub skills add vercel-labs/agent-skills --list ``` | Flag | Description | |------|-------------| | `-s, --skill ` | Install a specific skill only (for repo sources) | | `-g, --global` | Install to global `~/.cursor/skills/` | | `-r, --repo ` | Override registry repository | | `-l, --list` | List available skills without installing | ### `hub skills find [query]` Browse curated skills in the Repo Hub directory: ```bash hub skills find hub skills find react ``` ### `hub skills remove ` Remove a skill: ```bash hub skills remove backend-nestjs hub skills remove backend-nestjs --global ``` ## Creating Custom Skills A skill follows the Agent Skills standard: ```markdown --- name: my-skill description: What this skill does and when to use it --- # My Skill ## When to Use Describe the scenarios where this skill should be activated. ## Instructions ### Section 1 Detailed instructions... ### Section 2 More instructions... ## Examples Code examples showing correct patterns. ``` ### Best Practices 1. **Be specific** — Include exact patterns, not vague guidelines 2. **Show examples** — Real code is better than descriptions 3. **Keep it focused** — One skill per technology/domain 4. 
**Include anti-patterns** — Show what to avoid 5. **Stay under 500 lines** — Agents work better with concise instructions ## Storage Skills are stored in two locations: - **Project skills**: `skills/` directory in your hub root - **Global skills**: `~/.cursor/skills/` (shared across all projects) When `hub generate` runs: - **Cursor**: Skills are copied to `.cursor/skills/` as directories - **Claude Code**: Skills are copied to `.claude/skills/` as directories (native skills support) Both editors discover skills automatically from their respective directories. --- # MCPs > Connect AI to real infrastructure through Model Context Protocol servers. AI can't debug production if it can't see production. MCP (Model Context Protocol) servers give your AI agent direct access to databases, monitoring, secrets, and testing tools. ## What are MCPs? MCPs are lightweight servers that expose specific capabilities to AI agents through a standardized protocol. They act as bridges between your AI editor and your infrastructure. ## Available MCP Servers Repo Hub's MCP servers are maintained at [arvore-mcp-servers](https://github.com/arvoreeducacao/arvore-mcp-servers). | MCP | What it gives AI | |-----|-----------------| | `@arvoretech/mysql-mcp` | Read-only database queries | | `@arvoretech/postgresql-mcp` | Read-only database queries | | `@arvoretech/aws-secrets-manager-mcp` | Secret management | | `@arvoretech/datadog-mcp` | Metrics, logs, traces | | `@arvoretech/npm-registry-mcp` | Package security checks | | `@arvoretech/tempmail-mcp` | Temporary email for E2E tests | | `@arvoretech/memory-mcp` | Team memory with semantic search | | `@arvoretech/launchdarkly-mcp` | Feature flag management | ## Common MCPs (Practical Examples) Repo Hub is OSS and doesn't assume you have any specific MCP configured. Pick the MCPs that match your stack and the kind of work you want agents to do. 
| MCP (example) | What it unlocks | Example use case | |---------------|------------------|------------------| | Linear MCP | Issue lifecycle automation | Create a ticket, link a PR, and move status during the pipeline | | Slack MCP | Team notifications | Post a PR link to `#eng-prs` and status updates to `#releases` | | Notion MCP | Documentation automation | Generate/update runbooks, incident notes, or product docs | | Datadog MCP | Production debugging | Correlate error logs with traces to find root cause | | AWS Secrets Manager MCP | Runtime secrets access | Resolve API keys and connection strings without committing them | | Kubernetes MCP | Cluster debugging | Inspect pods/events to diagnose deployment failures | | Database MCP (MySQL/Postgres) | Schema + data visibility (read-only) | Validate columns/relationships before writing code or migrations | | ClickHouse MCP | Analytics validation | Verify an event pipeline by querying the warehouse | | npm Registry MCP | Dependency safety | Check adoption and security signals before adding a package | | SonarQube MCP | Static analysis feedback | Surface issues directly in the workflow and link to PR findings | | Playwright MCP | Browser automation | Run smoke flows, take screenshots, and verify UI behavior | | TempMail MCP | Email-based flows | Test signup/magic-link flows end-to-end without real inboxes | | Context7 MCP | Up-to-date docs | Pull framework/library docs into the agent context before coding | | Figma MCP | Design-to-code context | Read component specs and spacing before implementing UI | | GitHub MCP | Repo and PR context | Read PR metadata, issues, and check results to automate reviews | | Jina (web content) MCP | Fast web content retrieval | Pull an article or docs page into structured text for analysis | If you need multiple database connections, you can declare the same MCP server multiple times with different `name` values and different env/config (e.g. `postgresql-identity`, `postgresql-billing`). ## Configuration Declare MCPs in your `hub.yaml`: ```yaml mcps: # npm package — runs via npx - name: postgresql package: "@arvoretech/postgresql-mcp" env: PG_HOST: localhost PG_PORT: "5432" PG_DATABASE: myapp # npm package — no extra config needed - name: datadog package: "@arvoretech/datadog-mcp" # npm package - name: playwright package: "@playwright/mcp" # SSE URL — connects to a running server - name: linear url: "https://mcp.linear.app/sse" # Docker image — runs in a container - name: custom-tool image: "company/custom-mcp:latest" env: API_KEY: "${env:CUSTOM_TOOL_API_KEY}" ``` | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | string | Yes | MCP identifier (used as key in generated mcp.json) | | `package` | string | No* | npm package name (runs via `npx -y `) | | `url` | string | No* | SSE URL for remote MCP servers | | `image` | string | No* | Docker image (runs via `docker run -i --rm `) | | `env` | object | No | Environment variables passed to the MCP process | *One of `package`, `url`, or `image` is required. When you run `hub generate`, these are written to `.cursor/mcp.json` (Cursor), `.mcp.json` (Claude Code), or `.kiro/settings/mcp.json` (Kiro), making them available to all agents. ## How Agents Use MCPs Agents interact with MCPs through tool calls. 
For example: - **Database MCP**: Agent queries the schema to understand table relationships before writing migrations - **Datadog MCP**: Debugger agent searches logs and traces to identify the root cause of a production error - **Playwright MCP**: QA agent navigates the web app, fills forms, and takes screenshots to verify UI changes - **npm Registry MCP**: Coding agent checks package download counts and security signals before adding dependencies ## Secret Environment Variables Many MCPs need API keys, tokens, or credentials. **Never hardcode secrets** in `hub.yaml` — use the `${env:VAR_NAME}` syntax to reference environment variables from your machine: ```yaml mcps: - name: datadog package: "@arvoretech/datadog-mcp" env: DATADOG_API_KEY: "${env:DATADOG_API_KEY}" DATADOG_APP_KEY: "${env:DATADOG_APP_KEY}" DATADOG_SITE: "${env:DATADOG_SITE}" - name: linear url: "https://mcp.linear.app/sse" env: LINEAR_API_KEY: "${env:LINEAR_API_KEY}" - name: postgresql package: "@arvoretech/postgresql-mcp" env: PG_HOST: localhost PG_PORT: "5432" PG_DATABASE: myapp PG_PASSWORD: "${env:PG_PASSWORD}" ``` When `hub generate` runs, `${env:VAR_NAME}` is written as-is to the generated MCP config file. The editor (Cursor, Claude Code, or Kiro) resolves the reference at runtime, reading the value from your local environment. This means: - `hub.yaml` can be safely committed to git — no secrets in the repo - Each developer sets their own keys in their shell profile (`.zshrc`, `.bashrc`, etc.) or a `.env` file - The same config works across the team without sharing credentials ### Setting up your environment Add the variables to your shell profile: ```bash export DATADOG_API_KEY="your-actual-key" export DATADOG_APP_KEY="your-actual-key" export LINEAR_API_KEY="lin_api_..." ``` Or use a tool like [direnv](https://direnv.net/) with a `.envrc` file (added to `.gitignore`). ### When to use plain values vs `${env:}` | Value | Use | |-------|-----| | `localhost`, `5432`, `myapp` | Plain value — not a secret | | API keys, tokens, passwords | `${env:VAR_NAME}` — always | | Internal URLs | Plain value, unless they contain auth tokens | ## Security - **Database MCPs are read-only** — Agents can query but cannot modify data - **Secrets are resolved at runtime** — No credentials stored in generated files (use `${env:VAR}`) - **MCPs run locally** — They connect to your infrastructure from your machine ## Creating Custom MCPs You can create custom MCP servers following the [MCP specification](https://modelcontextprotocol.io). A basic MCP server exposes: 1. **Tools** — Functions the AI can call 2. **Resources** — Data the AI can read 3. **Prompts** — Templates for common tasks Reference your custom MCP in `hub.yaml`: ```yaml mcps: - name: my-custom-mcp package: "@company/my-mcp" env: API_URL: "https://api.internal.company.com" API_KEY: "${env:MY_CUSTOM_MCP_API_KEY}" ``` --- # Worktrees > Manage git worktrees for parallel development sessions. Worktrees let you work on multiple features simultaneously in separate Cursor windows — each with its own branch, but sharing the same hub configuration and environment files. ## Why Worktrees? In a multi-repo hub, switching branches is disruptive. You'd need to switch branches in every repo, potentially lose running processes, and reset your dev environment. With worktrees, you create a full copy of the hub that uses a separate branch. Environment files are copied automatically. 
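Under the hood this builds on standard `git worktree`. A rough, illustrative equivalent of what the CLI automates (branch and worktree names are examples; the actual implementation may differ):

```bash
# Roughly what `hub worktree add feature-login` automates (illustrative only)
git worktree add ~/.cursor/worktrees/repo-hub/feature-login -b feature-login
hub worktree copy-envs feature-login    # copy env files from the main hub into the new worktree
cursor ~/.cursor/worktrees/repo-hub/feature-login
```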
---

# Worktrees

> Manage git worktrees for parallel development sessions.

Worktrees let you work on multiple features simultaneously in separate Cursor windows — each with its own branch, but sharing the same hub configuration and environment files.

## Why Worktrees?

In a multi-repo hub, switching branches is disruptive. You'd need to switch branches in every repo, potentially lose running processes, and reset your dev environment.

With worktrees, you create a full copy of the hub that uses a separate branch. Environment files are copied automatically.

## CLI Commands

### `hub worktree add <name>`

Create a new worktree and copy environment files:

```bash
hub worktree add feature-login
```

This:

1. Creates a git worktree at `~/.cursor/worktrees/repo-hub/<name>`
2. Copies all environment files from the current hub to the worktree
3. Prints the path to open in Cursor

```
Worktree created at: ~/.cursor/worktrees/repo-hub/feature-login

Open in Cursor: cursor ~/.cursor/worktrees/repo-hub/feature-login
```

### `hub worktree list`

List all active worktrees:

```bash
hub worktree list
```

### `hub worktree remove <name>`

Remove a worktree:

```bash
hub worktree remove feature-login
```

### `hub worktree copy-envs [name]`

Copy environment files from the main hub to a worktree. Useful after running `hub env staging` in the main hub:

```bash
# Copy to a specific worktree
hub worktree copy-envs feature-login

# Copy to the current directory (if inside a worktree)
hub worktree copy-envs
```

## Workflow

```bash
# 1. Create a worktree for your feature
hub worktree add feature-user-profiles

# 2. Open it in a new Cursor window
cursor ~/.cursor/worktrees/repo-hub/feature-user-profiles

# 3. Work on your feature independently
#    (main hub stays on its current branch)

# 4. When env files change in main, sync them
hub worktree copy-envs feature-user-profiles

# 5. When done, remove the worktree
hub worktree remove feature-user-profiles
```

## How It Works

Git worktrees share the same `.git` directory, so:

- All branches and history are shared
- Commits in a worktree are visible from the main hub
- Each worktree can be on a different branch independently
- Environment files are NOT shared (they're copied at creation time)
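Because the checkouts share one `.git`, you can confirm this with plain git. The paths, hashes, and branch names below are only illustrative:

```bash
# List every checkout that shares this repository's history
git worktree list
# /path/to/repo-hub                                    abc1234 [main]
# /Users/you/.cursor/worktrees/repo-hub/feature-login  def5678 [feature-login]

# Commits made in the worktree branch are immediately visible from the main hub
git log --oneline -3 feature-login
```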
---

# Philosophy

> How we think about AI-powered development and the future of engineering teams.

## We had to go from 30 to 10. AI made sure we didn't skip a beat.

At [Arvore](https://arvore.com.br), external circumstances forced us to reduce our engineering team from 30 to 10. That kind of change can break a company.

It didn't break us. We invested heavily in AI tooling, structured our workflows around it, and discovered something unexpected: a smaller team with the right setup doesn't just survive — it ships faster. Features that used to take 3 to 6 months now go live in weeks.

This isn't a story about replacing people with AI. It's about what becomes possible when you give a talented team the right tools.

## We're hiring. But the role has changed.

We want more engineers. We're actively growing the team. But the profile we look for today is different from what it was two years ago.

We call them **product engineers**.

A product engineer knows the product deeply — the business logic, the user flows, the edge cases. They also know the technology — the architecture, the security constraints, the performance implications. What defines them is not how fast they type code. It's how well they **direct AI, review its output, and make the decisions that matter** — security, architecture, product judgment.

AI writes most of the code. Product engineers make sure it's correct, secure, and solves the right problem. That takes skill, experience, and deep context. It's harder than writing code, not easier.

## We want people who can flow with this

The tools change fast. Editors, models, frameworks — what's best today might not be best tomorrow. We don't optimize for a specific tool. We optimize for a way of working:

- **Structured over ad-hoc** — Defined pipelines beat improvised prompting
- **Context over cleverness** — The best prompt in the world fails without the right context
- **Review over speed** — Shipping fast means nothing if you ship broken
- **Adaptable over specialized** — The best engineers learn new tools quickly rather than clinging to old ones

We want engineers who can thrive in this environment regardless of whether the editor is Cursor, Claude Code, Windsurf, or something that doesn't exist yet.

If that sounds like you, we wrote a role overview here: [Product Engineer (Hiring) →](/docs/product-engineer)

## The investment math

AI tooling is extraordinarily cost-effective, but that's not why we use it. We use it because it makes our team better.

- **One Claude Opus 4.6 subscription** gives every engineer access to the most capable reasoning model available
- **One Cursor license per engineer** pays for itself in the first week
- **Repo Hub** is free, open source, and ties everything together

The compound effect is what matters. Every skill we write, every agent we refine, every MCP we connect — it benefits the entire team instantly. The setup gets better every week.

## We're not early adopters. We're operators.

This isn't a demo or a proof of concept. We run a real company on this stack, shipping real software to real users every week.

Our current setup:

- **9 repositories** managed as a single AI-aware workspace
- **11 specialized agents** collaborating in structured pipelines
- **19 MCP connections** giving AI access to databases, monitoring, secrets, and testing tools
- **Claude Opus 4.6** as our primary model for complex reasoning and code generation

This is production infrastructure. And we're open-sourcing it because we believe every team should have access to this approach.

## What we believe

1. **AI doesn't replace engineers. It changes what engineering means.** The best engineers will be the ones who know how to direct AI, not the ones who type the fastest.
2. **Context is everything.** An AI that can't see your database schema, your API contracts, and your deploy patterns is flying blind. Repo Hub exists to solve this.
3. **Structure beats improvisation.** Ad-hoc prompting produces ad-hoc results. Structured agent pipelines with defined roles, skills, and review steps produce consistent, high-quality output.
4. **The editor is the runtime.** We don't need another platform, another server, another dashboard. The AI editor is where the work happens. It should be where the workflow lives.
5. **Open source wins.** We're open-sourcing Repo Hub because we believe this approach should be accessible to every team, not locked behind a SaaS paywall.

## What's next

We want to standardize the agent configurations, skills, and workflows that power our development — and make them tool-agnostic. Not tied to Cursor, not tied to Claude, not tied to any single vendor.

Repo Hub is the first step. The CLI, the YAML schema, the agent templates, the MCP connections — all of it is designed to be adopted, extended, and contributed to by any team.

[Get Started →](/docs/getting-started)

---

# Why We Built This

> The problem we faced at Arvore and how Repo Hub solved it.

## The problem

AI coding assistants are blind. They see one repository at a time.

At Arvore, we have 9 repositories — backend APIs in NestJS and Elixir, frontends in Next.js and React, shared libraries, infrastructure configs. They all depend on each other.
When we asked an AI assistant to implement a feature, it didn't know:

- That the API contract changed in the backend yesterday
- What the database schema looks like
- How the frontend consumes the endpoint it's building
- What our deploy patterns and testing conventions are

We spent more time **explaining context** than the AI spent writing code. Copy-pasting schemas, describing file structures, re-explaining the same conventions over and over.

## Then the team got smaller

External circumstances forced us to reduce our engineering team from 30 to 10. Not by choice — it was a reality we had to face.

With a third of the team and the same product ambitions, we couldn't afford to keep working the old way. Features that took 3 to 6 months were no longer viable on that timeline. We needed a fundamentally different approach.

That's when we went all-in on AI-powered development. Not as a nice-to-have, but as a survival strategy.

## What we tried first

We tried all the standard approaches:

- **Long system prompts** — Hit context limits, became stale, hard to maintain
- **README-driven context** — Too generic, agents ignored most of it
- **Manual file references** — Tedious, easy to forget critical files
- **Separate AI tools per repo** — No cross-repo awareness, fragmented workflows

None of them solved the fundamental problem: the AI couldn't see the full picture.

## The insight

The breakthrough was simple. Two lines of config:

```bash
# .gitignore — repos excluded from hub's git
api
frontend

# .cursorignore — but included for AI context
!api/
!frontend/
```

Git sees separate repositories. The AI sees one workspace. Each repo keeps its own history. Zero migration.

But context alone wasn't enough. We needed the AI to **follow a process**, not just answer questions. So we built the agent orchestration layer:

1. **One YAML file** declares everything — repos, services, MCPs, workflow pipeline
2. **A CLI** generates editor configs from that YAML
3. **The editor becomes the runtime** — the orchestrator agent follows the pipeline, calling sub-agents in order

No daemon. No server. No platform. The AI editor runs the entire workflow.

## What happened

With 10 engineers and Repo Hub, we didn't just maintain our pace — we accelerated.

**Before:**

- 30 engineers
- 3-6 months per feature
- Manual context management
- Ad-hoc AI usage with inconsistent results
- Most time spent on coordination, not building

**After:**

- 10 product engineers
- Weeks per feature
- Automatic cross-repo context
- Structured agent pipelines with consistent output
- Most time spent on review, security, and product decisions

The role naturally evolved. Our engineers stopped spending time writing boilerplate and started focusing on what requires human judgment: security, architecture, code review, and product decisions. We started calling them product engineers — because that's what they are.

## We're growing again

The lesson wasn't "you need fewer people." The lesson was: **with the right tools and the right process, every engineer on your team becomes dramatically more effective.**

We're actively hiring. We want more product engineers who can work this way — people who understand product deeply, know technology well, and can direct AI to build at a pace that wasn't possible before.

Role details: [Product Engineer (Hiring) →](/docs/product-engineer)

The tools will keep changing. Editors, models, providers — none of it is permanent.
What matters is the discipline: structured workflows, encoded knowledge, connected infrastructure, and human review at every step.

## Why we open-sourced it

We built Repo Hub to solve our own problem. But the problem isn't unique to Arvore. Every team using AI coding assistants hits the same wall: context fragmentation, workflow inconsistency, manual overhead.

The solution shouldn't be proprietary. Repo Hub is MIT licensed and designed for adoption. If your team manages multiple repositories and uses AI for development, this framework is for you.

[Read our Philosophy →](/docs/philosophy) · [Get Started →](/docs/getting-started)

---

# Best Practices

> Lessons learned from running AI-powered development in production.

These are the practices we've refined at Arvore through months of real production usage. They're opinionated — they reflect what works for us with Repo Hub, Claude Opus 4.6, and Cursor.

## Choose the right model

Not all models are equal for development work. Our stack:

- **Claude Opus 4.6** for complex tasks — multi-file refactors, architecture decisions, agent orchestration, code review. It's the most capable model for sustained, multi-step reasoning across large codebases.
- **Faster models** for quick, scoped tasks — simple bug fixes, test generation, single-file changes. Cheaper and lower latency when the task doesn't need deep reasoning.

The orchestrator chooses the model per step. Refinement and review need the best model. A quick lint fix doesn't.

**Key insight:** the model matters less than the context you give it. Opus 4.6 with poor context produces worse results than a smaller model with excellent context. Repo Hub exists to solve the context problem.

## Structure your workflow

Ad-hoc prompting produces ad-hoc results. The biggest productivity gain isn't a better model — it's a **structured pipeline**:

```yaml
workflow:
  pipeline:
    - step: refinement
      agent: refinement
    - step: coding
      agents: [coding-backend, coding-frontend]
      parallel: true
    - step: review
      agent: code-reviewer
    - step: qa
      agent: qa-frontend
      tools: [playwright]
    - step: deliver
      actions: [create-pr, notify-slack]
```

Every feature goes through the same steps. The refinement agent collects requirements before any code is written. The review agent checks against those requirements. The QA agent tests with real browser automation.

This consistency is what makes "weeks instead of months" possible. Not magic — process.

## Encode knowledge in skills

The single biggest source of errors in AI-generated code is **not knowing the conventions**. An AI that generates a NestJS service without knowing your error handling pattern, your testing conventions, or your database access layer will produce code that works but doesn't fit.

Skills solve this:

```
skills/backend-nestjs/SKILL.md
├── Project structure
├── Testing patterns (Vitest, not Jest)
├── Database access (TypeORM conventions)
├── Error handling (custom exception filters)
└── API response format
```

When a coding agent starts working on a repo with `skills: [backend-nestjs]`, it reads the skill first. The result is code that matches your existing codebase from the first attempt.

**Write skills for every framework and convention in your stack.** This is the highest-ROI activity for any team using AI development.
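As a rough illustration of what such a SKILL.md can contain — the headings echo the outline above, but the specific rules are made-up examples, not Arvore's actual conventions:

```markdown
# Skill: backend-nestjs

## Testing patterns
- Use Vitest, not Jest. Keep test files next to the source as `*.spec.ts`.

## Database access
- Go through TypeORM repositories; no raw SQL inside services.

## Error handling
- Throw domain exceptions and let the custom exception filters map them to HTTP responses.

## API response format
- Successful responses return `{ data, meta }`; errors return `{ error: { code, message } }`.
```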
## Connect to real infrastructure

An AI that can't see your database schema is guessing at column names. An AI that can't see your logs is guessing at error causes. MCPs remove the guessing:

| MCP | What it gives AI |
|-----|------------------|
| Database MCP | Read-only queries to understand schema and data |
| Datadog MCP | Metrics, logs, and traces for debugging |
| Playwright MCP | Browser automation for E2E testing |
| npm Registry MCP | Package security and adoption signals |

The debugger agent with access to Datadog logs can identify a root cause in minutes instead of hours. The QA agent with Playwright can verify UI changes without manual testing.

**Every piece of infrastructure your team uses should be accessible to AI through MCPs.**

## Review everything

AI writes the code. Humans review it. This is non-negotiable.

Our product engineers spend most of their time on:

1. **Product judgment** — Does this solve the actual user problem? Are there edge cases the AI missed? Should we do this now, and for which user segment?
2. **Architecture decisions** — Should we add this dependency? Is this the right abstraction? Will this scale?
3. **Code review** — Does this implementation match the requirements? Are there security implications? Is the error handling correct?

The code review agent catches the obvious issues. The human catches the subtle ones. Both are essential.

## Train your team

The framework is only as good as the people using it. Product engineers need to know:

- **How to write effective refinement docs** — Clear requirements produce better code
- **When to intervene vs. let the pipeline run** — Not every step needs human input
- **How to write and maintain skills** — The team's knowledge should be encoded, not tribal
- **How to debug agent behavior** — When an agent produces poor output, the fix is usually better context, not a better prompt

We invest in training because the compound returns are enormous. One engineer who masters the workflow produces more output than five who don't.

## Start small, expand gradually

Don't try to automate everything on day one.

1. Start with **cross-repo context** — the `.gitignore` / `.cursorignore` pattern
2. Add **one or two MCPs** — database and browser automation
3. Define **a simple 3-step pipeline** — refinement, coding, review
4. Write **skills for your primary framework**
5. Expand from there based on what bottlenecks you hit

The full setup (11 agents, 19 MCPs, 9 repos) took us months to refine. But the first version — 2 repos, 3 agents, 2 MCPs — was running in a day. A starting point along those lines is sketched below.
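For example, a first `hub.yaml` under those constraints might look roughly like this — repo names, URLs, and tech stacks are placeholders; see [Configuration](/docs/configuration) for the full schema:

```yaml
name: my-company

repos:
  - name: api
    path: ./api
    url: git@github.com:company/api.git
    tech: nestjs
    skills: [backend-nestjs]
  - name: frontend
    path: ./frontend
    url: git@github.com:company/frontend.git
    tech: nextjs

# One or two MCPs to start: database visibility and browser automation
mcps:
  - name: postgresql
    package: "@arvoretech/postgresql-mcp"
  - name: playwright
    package: "@playwright/mcp"

# A simple pipeline: refinement, coding, review
workflow:
  pipeline:
    - step: refinement
      agent: refinement
    - step: coding
      agents: [coding-backend, coding-frontend]
      parallel: true
    - step: review
      agent: code-reviewer
```

From there, add QA steps, delivery actions, more MCPs, and more skills as bottlenecks show up.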
[Get Started →](/docs/getting-started)

---

# Product Engineer

> We're hiring product engineers to build with AI-first workflows at Arvore.

![Product Engineer at Arvore](https://attachments.gupy.io/production/companies/1668/career/2537/images/2025-01-23_00-40_imageUrl_2_0.jpg)

We build real product for real schools across Brazil and the United States. We ship fast, but we don't ship reckless. We use AI heavily, and we expect humans to bring product judgment, engineering rigor, and ownership.

For company benefits, values, and programs, see [Careers](/careers).

## Role context

Our stack is centered around Node.js + NestJS, using REST APIs for backend-to-frontend communication:

- **Backend**: Node.js, NestJS, TypeScript, REST APIs
- **Web**: Next.js
- **Mobile**: React Native
- **Datastores**: MySQL, PostgreSQL, Redis, Elasticsearch
- **Observability**: Datadog
- **Async processing**: SQS

If you don't have Node.js or NestJS experience, that's ok. You can open PRs in your first days, and within ~3 months you should feel comfortable contributing.

## Product engineer: skills and what you'll do

We're looking for product engineers who can thrive in an AI-first workflow: you will direct AI to do the heavy lifting, then apply deep product and engineering judgment to make sure the result is correct, secure, maintainable, and aligned with real user needs.

### What you will do

- Own outcomes end-to-end: requirements, delivery, and ongoing improvements.
- Collaborate closely with product, design, and other engineers to make good trade-offs.
- Write clear requirements and edge cases so agents can produce high-quality code.
- Lead product judgment: decide what should be built, for whom, and why, based on business goals and user reality.
- Review AI-generated code with rigor: correctness, security, performance, and maintainability.
- Make architecture decisions (APIs, data models, boundaries) and document them when needed.
- Build and evolve REST APIs, web flows (Next.js), and mobile flows (React Native).
- Use observability (Datadog) to diagnose issues and improve reliability.
- Work with relational databases (MySQL/Postgres): schema, queries, migrations, and performance.
- Use queues (SQS) and caching (Redis) where appropriate for async work and scalability.

### What we look for

- Strong product sense: you can reason about real users, edge cases, and business logic.
- Strong engineering fundamentals: testing, code review, API design, data modeling, performance.
- Comfort with TypeScript and modern backend patterns (or the ability to ramp up fast).
- Experience shipping production systems and taking responsibility for quality.
- Clear written communication and the ability to work asynchronously (remote-first for tech).
- Curiosity and adaptability: tools change fast; the way of working matters more than the editor.

### How you can stand out

- You can turn ambiguous problems into crisp requirements and testable acceptance criteria.
- You can spot gaps in AI output quickly and steer it back with better context.
- You can improve team leverage by encoding conventions into reusable skills and workflows.

---

# Roadmap

> What works today and what's next.
## What Works Today

- **Core concept** — The gitignore/cursorignore pattern for cross-repo AI context
- **hub.yaml schema** — Repos, commands, tools, services, env profiles, MCPs, integrations, workflow pipeline
- **CLI commands** — `init`, `add-repo`, `setup`, `generate`, `env`, `services`, `tools`, `skills`, `registry`, `pull`, `status`, `exec`, `worktree`, `doctor`
- **Generator targets** — Cursor (`.cursor/rules/`, `.cursor/agents/`, `.cursor/mcp.json`), Claude Code (`CLAUDE.md`), Kiro (`.kiro/steering/`, `.kiro/settings/mcp.json`), AGENTS.md
- **Agent templates** — 8 agents (orchestrator, refinement, coding-backend, coding-frontend, code-reviewer, qa-backend, qa-frontend, debugger)
- **Skills system** — Install from git repos, project and global scopes, per-repo assignment
- **Tool management** — Declare versions in hub.yaml, generate `.mise.toml`, install via mise
- **Environment management** — Profiles, AWS Secrets Manager, per-repo overrides, per-repo profile override, DATABASE_URL building
- **Docker services** — Generate `docker-compose.yml`, manage lifecycle (up/down/logs/clean)
- **Worktrees** — Create parallel workspaces with env file copying
- **MCP servers** — 7 open-source MCPs (MySQL, PostgreSQL, AWS Secrets, Datadog, npm registry, tempmail, Playwright)

## Short Term

- **Workflow conditions** — Runtime evaluation of `when` expressions
- **Hub registry** — Community hub for browsing and installing skills, agents, and workflow templates
- **Windsurf adapter** — `generate --editor windsurf`
- **Copilot Workspace adapter** — `generate --editor copilot`

## Medium Term

- **Workflow execution tracking** — Track pipeline progress per task
- **Agent performance metrics** — Which agents succeed, fail, or need human intervention
- **Multi-hub** — Compose multiple hubs (platform team + feature team)
- **Plugin system** — Third-party generators, MCP wrappers, custom pipeline steps

## Long Term

- **Visual pipeline builder** — Web UI for designing and monitoring workflows
- **Self-improving skills** — Agents propose skill updates based on recurring patterns
- **Cross-team orchestration** — Hub-to-hub communication for platform/product split

## How to Contribute

Pick any area and open a PR. For larger items, open an issue first to discuss the approach.

**Easiest high-impact contributions:**

1. **Editor adapters** — Windsurf, Copilot Workspace, or any AI editor
2. **Skills** — Write a SKILL.md for your framework/language (Go, Python/Django, Java/Spring, Vue, Svelte)
3. **Agent improvements** — Better prompts, new roles, workflow patterns
4. **Documentation** — Guides, tutorials, videos

---