Repo Hub is a configuration file that teaches your AI coding assistant how your company builds software — which repos to touch, how to test, where to deploy, and who to notify.
Built by Arvore · 10 engineers · Months → Weeks
Imagine this
You type: "Add profile editing to the user settings page"
The AI asks clarifying questions about the requirements
It writes the backend API and the frontend UI — in separate repos, in parallel
Another AI agent reviews the code against the original requirements
Another runs automated tests in a real browser
PRs are created, your team gets notified on Slack
You review the PR. That's it.
This is not a demo. It's how we build software every day at Arvore.
What is Repo Hub, really?
Think of it like a docker-compose for AI development. Instead of defining containers, you define your repositories, your tools, and your development workflow. The AI follows it automatically.
# Which repositories your AI can work on
repos:
  - name: api
    url: git@github.com:company/api.git
    tech: nestjs
  - name: frontend
    url: git@github.com:company/frontend.git
    tech: nextjs

# Tools the AI can use (databases, monitoring, browser testing...)
mcps:
  - name: postgresql
  - name: playwright
  - name: datadog

# The step-by-step workflow the AI follows
workflow:
  pipeline:
    - step: refinement  # Understand requirements
    - step: coding      # Write the code
    - step: review      # Review against requirements
    - step: qa          # Run tests
    - step: deliver     # Create PRs, notify Slack

One file. One CLI command. Your AI knows the rest.
Why AI assistants fail without this
They only see one repo
Your AI edits the frontend but doesn't know the API contract changed yesterday. It writes code against an outdated assumption. You fix it manually.
Repo Hub shows all repos at once.
No process, just prompts
You prompt → it codes → you check → you prompt again → it codes more. You're the project manager, reviewer, and QA — all in one.
Repo Hub defines a pipeline it follows.
Can't use your tools
It can't check if the production database schema matches what it's coding against. Can't look at Datadog when debugging. You copy-paste everything.
Repo Hub connects AI to your infra.
How it actually works
You write hub.yaml
Declare your repos, tools, and workflow. Takes 5 minutes.
Run the CLI
hub generate reads the YAML and creates instructions your editor understands.
Ask for a feature
Open your editor and describe what you need. The AI follows the pipeline you defined.
Your code editor is the runtime. There's no server to deploy, no daemon to run. The AI agent in your editor reads the generated config and follows the pipeline automatically.
The pipeline
Each step is handled by a specialized AI role. Like a team where one person gathers requirements, another codes, another reviews, and another tests.
Key concepts (the jargon, explained)
Agents
Specialized AI roles. Like team members — one knows how to refine requirements, one writes backend code, one reviews, one tests. Each has its own instructions and focus area.
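To make that concrete, here is a sketch of how roles might be declared in hub.yaml. The `agents` key and every field name below are illustrative assumptions, not the documented schema:

```yaml
# Hypothetical agent definitions -- key and field names are assumptions
agents:
  - name: refiner
    role: Turn a one-line request into concrete requirements
  - name: backend-dev
    role: Implement API changes
    repos: [api]          # scoped to a single repo
  - name: reviewer
    role: Check the diff against the refined requirements
```

The point is the shape, not the syntax: each agent gets its own instructions and a narrow focus, like a job description for one member of the team.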
MCPs
Plugins that connect AI to your tools. A database MCP lets AI query your schema. A Datadog MCP lets it read logs. A Playwright MCP lets it click through your app and test it.
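For illustration, MCP entries might carry connection details along these lines. Everything beyond the `name` field is an assumption about the schema, and the environment variable names are placeholders:

```yaml
# Hypothetical MCP configuration -- connection fields are assumptions
mcps:
  - name: postgresql
    env:
      DATABASE_URL: ${DATABASE_URL}   # lets the AI inspect the live schema
  - name: playwright                  # drives a real browser for QA runs
  - name: datadog
    env:
      DD_API_KEY: ${DD_API_KEY}       # read logs and metrics while debugging
```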
Skills
Cheat sheets for your AI. Written documentation that teaches the AI your team's coding patterns, naming conventions, and architecture decisions. Like onboarding a new developer.
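A skill could be as simple as a markdown file the config points at. Sketched here with assumed field names and illustrative paths:

```yaml
# Hypothetical skills section -- paths and fields are illustrative
skills:
  - name: api-conventions
    path: docs/skills/api-conventions.md  # e.g. "DTOs live in src/dto"
  - name: ui-patterns
    path: docs/skills/ui-patterns.md      # naming, folder layout, components
```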
Hub Workspace
A folder that contains all your repos. Not a monorepo — each repo keeps its own git history, branches, and PRs. The workspace just lets AI see everything at once.
In production at Arvore
9 repos · 11 AI roles · 19 tool connections · 10x output
Real company · Real software · Shipping every week
One config file. One CLI command. Your AI ships the rest.
$ npx @arvoretech/hub init my-project
$ npx @arvoretech/hub add-repo git@github.com:co/api.git
$ npx @arvoretech/hub setup
$ npx @arvoretech/hub generate --editor cursor
Done! Open in Cursor and start building.