Arvore Repo Hub

Memory

Team memory captures accumulated knowledge over time. Unlike skills (which are prescriptive patterns), memories are descriptive facts the team has learned — decisions made, conventions adopted, incidents resolved, and domain knowledge documented.

The @arvoretech/memory-mcp server provides semantic search via MCP tools, so AI agents can search your team’s knowledge base on demand during conversations.

Quick Start

Add the memory section to your hub.yaml:

memory:
  path: ./memories
  categories:
    - decisions
    - conventions
    - incidents
    - domain
    - gotchas
  auto_capture: true

Create your first memory:

hub memory add decisions "Use PostgreSQL for all services" \
  --content "We chose PostgreSQL over MongoDB because we need ACID transactions and complex joins." \
  --tags "database,architecture" \
  --author joao.barros

Run hub generate to inject memories into your editor config.

Categories

| Category | Purpose | Examples |
|---|---|---|
| decisions | Architectural Decision Records | Database choice, auth strategy, API design |
| conventions | Coding standards and preferences | Naming, file structure, PR process |
| incidents | Past bugs and their root causes | Outages, memory leaks, data issues |
| domain | Business domain knowledge | Glossary, product concepts, user flows |
| gotchas | Known issues and workarounds | Library bugs, env quirks, deploy caveats |

Memory File Format

Memories are markdown files with YAML frontmatter, stored in memories/<category>/:

---
title: Use PostgreSQL for all services
category: decisions
date: 2024-06-01
author: joao.barros
tags: [database, architecture]
status: active
---

## Context
We needed to choose between PostgreSQL and MongoDB for our main database.

## Decision
PostgreSQL, because we need ACID transactions and complex joins across entities.

## Consequences
- Migrations managed by Ecto (Elixir) and Prisma (NestJS)
- No dynamic schema flexibility
- Need to manage connection pools carefully
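The frontmatter-plus-body layout is easy to process with standard tooling. As a minimal sketch, here is a toy Python parser for the flat `key: value` frontmatter shown above (`parse_memory` is a hypothetical helper, not part of the hub; a real implementation would use a proper YAML library):

```python
def parse_memory(text: str) -> tuple[dict, str]:
    """Split a memory file into frontmatter fields and markdown body."""
    # A file looks like: ---\n<frontmatter>---\n<body>
    _, frontmatter, body = text.split("---\n", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        # Handle inline lists like `tags: [database, architecture]`
        if value.startswith("[") and value.endswith("]"):
            value = [v.strip() for v in value[1:-1].split(",")]
        meta[key.strip()] = value
    return meta, body.strip()

sample = """---
title: Use PostgreSQL for all services
category: decisions
date: 2024-06-01
status: active
tags: [database, architecture]
---

## Context
We needed ACID transactions and complex joins.
"""

meta, body = parse_memory(sample)
print(meta["category"])  # decisions
print(meta["tags"])      # ['database', 'architecture']
```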

Status Values

  • active — Included in searches and prompt injection (default)
  • superseded — Replaced by a newer decision, kept for history
  • archived — Soft-deleted, excluded from active searches
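The status field decides which memories the tooling shows. A toy sketch of the filtering rule (the `Memory` type here is illustrative, not the hub's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    title: str
    status: str  # "active" | "superseded" | "archived"

def filter_by_status(memories: list[Memory], status: str = "active") -> list[Memory]:
    # Default view shows only active memories, mirroring `hub memory list`;
    # pass status="archived" to mirror `--status archived`.
    return [m for m in memories if m.status == status]

mems = [
    Memory("Use PostgreSQL for all services", "active"),
    Memory("Use MongoDB for catalog", "superseded"),
    Memory("Legacy auth flow", "archived"),
]
print([m.title for m in filter_by_status(mems)])
print([m.title for m in filter_by_status(mems, "archived")])
```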

Semantic Search via MCP

The @arvoretech/memory-mcp server provides semantic search using local embeddings. Add it to your hub.yaml:

mcps:
  - name: team-memory
    package: "@arvoretech/memory-mcp"
    env:
      MEMORY_PATH: ./memories

This gives AI agents access to these tools during conversations:

| Tool | Description |
|---|---|
| search_memories | Semantic search across all memories |
| get_memory | Get full content of a specific memory |
| add_memory | Create a new memory (AI can capture learnings) |
| list_memories | List memories with optional filters |
| archive_memory | Soft-delete a memory |
| remove_memory | Permanently delete a memory |
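Under the hood, each tool is invoked through MCP's standard `tools/call` JSON-RPC method. A sketch of what a search request from an agent might look like (the argument names inside `arguments` are assumptions about this server's schema, not documented API):

```python
import json

# Hypothetical JSON-RPC payload an MCP client would send to invoke
# the search_memories tool; only the tools/call envelope is standard MCP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_memories",
        "arguments": {
            "query": "why did we pick PostgreSQL?",  # assumed parameter
            "limit": 5,                               # assumed parameter
        },
    },
}
print(json.dumps(request, indent=2))
```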

The MCP uses the paraphrase-multilingual-MiniLM-L12-v2 model by default, supporting Portuguese and English queries. Vectors are stored in a local LanceDB database (memories/.lancedb/) with cosine similarity search and metadata filtering.
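Cosine similarity scores the query embedding against each stored memory embedding. A toy illustration with 3-dimensional vectors (real MiniLM embeddings have 384 dimensions, and LanceDB does this ranking internally):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # 1.0 means the vectors point the same way; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" keyed by memory id.
query = [0.9, 0.1, 0.0]
memory_db = {
    "postgres-decision": [0.8, 0.2, 0.1],
    "deploy-gotcha": [0.0, 0.1, 0.9],
}
ranked = sorted(memory_db, key=lambda k: cosine_similarity(query, memory_db[k]),
                reverse=True)
print(ranked[0])  # postgres-decision
```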

Custom Model

Override the embedding model via environment variable:

mcps:
  - name: team-memory
    package: "@arvoretech/memory-mcp"
    env:
      MEMORY_PATH: ./memories
      MEMORY_EMBEDDING_MODEL: Xenova/all-MiniLM-L6-v2  # Smaller, English-only

Auto-Capture

When auto_capture: true, the orchestrator is instructed to extract learnings from completed tasks and save them as new memories. After each task delivery, the AI may create entries for:

  • Architectural decisions made during the task
  • New conventions discovered
  • Bugs found and their root causes
  • Domain knowledge clarified
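Auto-captured entries are ordinary memory files. A sketch of what writing one could look like (`capture_memory` is a hypothetical helper for illustration, not the hub's API; slug generation here is deliberately naive):

```python
from datetime import date
from pathlib import Path

def capture_memory(root: Path, category: str, title: str, body: str,
                   author: str, tags: list[str]) -> Path:
    """Write a markdown memory with YAML frontmatter under <root>/<category>/."""
    slug = title.lower().replace(" ", "-")
    path = root / category / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    frontmatter = (
        "---\n"
        f"title: {title}\n"
        f"category: {category}\n"
        f"date: {date.today().isoformat()}\n"
        f"author: {author}\n"
        f"tags: [{', '.join(tags)}]\n"
        "status: active\n"
        "---\n\n"
    )
    path.write_text(frontmatter + body + "\n")
    return path
```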

CLI Commands

hub memory list                           # List all active memories
hub memory list --category decisions      # Filter by category
hub memory list --status archived         # Show archived memories
hub memory add <category> "<title>"       # Create a memory
hub memory archive <id>                   # Soft-delete
hub memory remove <id>                    # Permanent delete

Configuration Reference

| Property | Type | Default | Description |
|---|---|---|---|
| path | string | ./memories | Directory for memory files |
| categories | string[] | all 5 | Which categories to use |
| auto_capture | boolean | false | Auto-extract learnings from tasks |
| embedding_model | string | multilingual MiniLM | HuggingFace model for MCP |

Consolidation

Chat sessions contain valuable knowledge — decisions, patterns, gotchas, domain insights — that gets lost in history. hub consolidate extracts that knowledge automatically.

How it works

  1. Reads chat history from Kiro, Claude Code, and OpenCode
  2. Normalizes messages into a unified format (user/assistant only, no tool calls)
  3. Sorts by most recent, skips already-processed sessions
  4. Writes a compacted batch to .hub/consolidation/
  5. Spawns your editor’s CLI with a knowledge extraction prompt
  6. The agent reads the batch, checks existing memories for duplicates, and writes new files to ./memories/{category}/
  7. Marks processed sessions in .hub/consolidation-state.json
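Steps 3 and 7 hinge on the state file. A sketch of how the skip-already-processed logic could work (the `{"processed": [...]}` schema and the `id`/`updated_at` session fields are assumptions; the real file format may differ):

```python
import json
from pathlib import Path

def load_processed(state_file: Path) -> set[str]:
    # Session IDs recorded by a previous run (step 7).
    if state_file.exists():
        return set(json.loads(state_file.read_text()).get("processed", []))
    return set()

def select_sessions(sessions: list[dict], state_file: Path,
                    last: int = 20) -> list[dict]:
    # Step 3: keep unprocessed sessions, newest first, capped at `last`.
    done = load_processed(state_file)
    fresh = [s for s in sessions if s["id"] not in done]
    fresh.sort(key=lambda s: s["updated_at"], reverse=True)
    return fresh[:last]
```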

Usage

hub consolidate                    # Process last 20 sessions
hub consolidate --last 50          # Last 50
hub consolidate --since 2026-03-01 # Since a date
hub consolidate --editor kiro      # Only Kiro sessions
hub consolidate --cli claude       # Use Claude CLI for extraction
hub consolidate --dry-run          # Preview without extracting
hub consolidate --reset            # Reprocess everything

Supported editors

| Editor | Chat location | Format |
|---|---|---|
| Kiro | ~/Library/Application Support/Kiro/.../workspace-sessions/ | JSON |
| Claude Code | ~/.claude/projects/ | JSONL |
| OpenCode | ~/.local/share/opencode/storage/ | JSON |
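All three formats reduce to the same shape after normalization (step 2). A sketch for the JSONL case (the `role`/`text` field names are assumptions about the on-disk schema, chosen for illustration):

```python
import json

def normalize_jsonl(raw: str) -> list[dict]:
    # Keep only user/assistant text turns; drop tool records and blank lines.
    messages = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("role") in ("user", "assistant") and rec.get("text"):
            messages.append({"role": rec["role"], "text": rec["text"]})
    return messages

raw = "\n".join([
    json.dumps({"role": "user", "text": "Why Postgres?"}),
    json.dumps({"role": "tool", "name": "bash", "output": "..."}),
    json.dumps({"role": "assistant", "text": "ACID transactions and joins."}),
])
print(normalize_jsonl(raw))
```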

No extra dependencies

The command uses the CLI of whatever editor you already have installed (kiro-cli, claude, or opencode). No API keys, no SDK, no external service. Same pattern as agent-teams-lead spawning teammates.

Scheduling consolidation

Run hub consolidate automatically on a schedule so memories stay fresh.

macOS (launchd)

Create ~/Library/LaunchAgents/com.hub.consolidate.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.hub.consolidate</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>cd /path/to/your/hub && npx @arvoretech/hub consolidate --last 30</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>9</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
    <key>StandardOutPath</key>
    <string>/tmp/hub-consolidate.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/hub-consolidate.log</string>
</dict>
</plist>

Load it:

launchctl load ~/Library/LaunchAgents/com.hub.consolidate.plist

Linux (cron)

crontab -e

Add a line to run daily at 9am:

0 9 * * * cd /path/to/your/hub && npx @arvoretech/hub consolidate --last 30 >> /tmp/hub-consolidate.log 2>&1

Linux (systemd timer)

Create ~/.config/systemd/user/hub-consolidate.service:

[Unit]
Description=Hub memory consolidation

[Service]
Type=oneshot
WorkingDirectory=/path/to/your/hub
ExecStart=/usr/bin/npx @arvoretech/hub consolidate --last 30

Create ~/.config/systemd/user/hub-consolidate.timer:

[Unit]
Description=Run hub consolidate daily

[Timer]
OnCalendar=*-*-* 09:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it:

systemctl --user enable --now hub-consolidate.timer

Windows (Task Scheduler)

$action = New-ScheduledTaskAction -Execute "cmd.exe" `
  -Argument "/c cd /d C:\path\to\your\hub && npx @arvoretech/hub consolidate --last 30"
$trigger = New-ScheduledTaskTrigger -Daily -At 9am
Register-ScheduledTask -TaskName "HubConsolidate" -Action $action -Trigger $trigger