Define your repos, tools, and workflow. Your AI coding assistant handles refinement, coding, review, testing, and PRs — automatically.
npx @arvoretech/hub init
Built by Arvore · 10 engineers · Months → Weeks
Imagine this
You type: "Add profile editing to the user settings page"
AI refines the requirements with you
Writes backend API and frontend UI across repos, in parallel
A reviewer agent checks the code against the spec
A QA agent runs automated tests in a real browser
PRs are opened, Slack gets notified
You review the PR. That's it.
This is not a demo. It's how we build software every day at Arvore.
What is Repo Hub, really?
Like docker-compose for AI development. Define your repositories, tools, and pipeline. The AI follows it end-to-end.
repos:
  - name: api
    url: git@github.com:company/api.git
    tech: nestjs
  - name: frontend
    url: git@github.com:company/frontend.git
    tech: nextjs
mcps:
  - name: postgresql
  - name: playwright
  - name: datadog
workflow:
  pipeline:
    - step: refinement
    - step: coding
      parallel: true
    - step: review
    - step: qa
    - step: deliver
      actions: [create-pr, notify-slack]

import { defineConfig, repo, mcp } from "@arvoretech/hub/config";

export default defineConfig({
  repos: [
    repo.nestjs("api", "git@github.com:company/api.git"),
    repo.nextjs("frontend", "git@github.com:company/frontend.git"),
  ],
  mcps: [
    mcp.postgresql("main-db"),
    mcp.playwright(),
    mcp.datadog(),
  ],
  workflow: {
    pipeline: [
      { step: "refinement" },
      { step: "coding", parallel: true },
      { step: "review" },
      { step: "qa" },
      { step: "deliver", actions: ["create-pr", "notify-slack"] },
    ],
  },
});

One config file. One CLI command. Your AI knows the rest.
Why AI assistants fail without this
They only see one repo
Your AI edits the frontend but doesn't know the API changed. It codes against stale assumptions.
Repo Hub gives AI full context across repos.
No process, just prompts
You prompt, it codes, you check, you prompt again. You're the PM, reviewer, and QA — all at once.
Repo Hub defines a pipeline it follows.
Can't use your tools
It can't check your database schema or read Datadog logs. You end up copy-pasting context.
Repo Hub connects AI to your infra.
How it actually works
You write your config
Declare your repos, tools, and workflow in YAML or TypeScript. Takes 5 minutes.
Run the CLI
hub generate reads your config and creates instructions your editor understands.
Ask for a feature
Open your editor and describe what you need. The AI follows the pipeline you defined.
Your code editor is the runtime. There's no server to deploy, no daemon to run. The AI agent in your editor reads the generated config and follows the pipeline automatically.
The pipeline
The pipeline splits into parallel agents and converges back: backend and frontend agents code in parallel, QA tests both, then delivery fans out to PRs, Slack, and Linear at once.
Key concepts (the jargon, explained)
Agents
Specialized AI roles. Like team members — one knows how to refine requirements, one writes backend code, one reviews, one tests. Each has its own instructions and focus area.
MCPs
Plugins that connect AI to your tools. A database MCP lets AI query your schema. A Datadog MCP lets it read logs. A Playwright MCP lets it click through your app and test it.
Skills
Cheat sheets for your AI. Written documentation that teaches the AI your team's coding patterns, naming conventions, and architecture decisions. Like onboarding a new developer.
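A skill is just a document the AI reads before it codes. A sketch of what one might contain — the file name, location, and every convention listed here are illustrative, not a format Repo Hub prescribes:

```markdown
<!-- skills/api-conventions.md — hypothetical name and contents -->
# API conventions

- Controllers stay thin; business logic lives in services.
- Every endpoint returns `{ data, meta }`; errors use the shared error filter.
- New modules follow the folder layout of the existing `users` module.
```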
Hub Workspace
A folder that contains all your repos. Not a monorepo — each repo keeps its own git history, branches, and PRs. The workspace just lets AI see everything at once.
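Using the repos from the config above, a workspace might look like this — the directory and config file names are assumed for illustration, not documented output:

```
my-project/                  # the Hub Workspace
├── hub.config.ts            # (or YAML) the Repo Hub config
├── api/                     # clone of company/api — keeps its own .git
└── frontend/                # clone of company/frontend — keeps its own .git
```

Each repo stays an independent clone, so branches and PRs work exactly as they do today; the AI simply sees the sibling folders as shared context.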
In production at Arvore
9 repos · 11 AI roles · 19 tool connections · 10x output
Real company · Real software · Shipping every week
One config file. One CLI command. Your AI handles the rest.
$ npx @arvoretech/hub init my-project
$ npx @arvoretech/hub setup
$ npx @arvoretech/hub generate --editor cursor
Done. Open your editor and start building.