# Memory
Team memory captures accumulated knowledge over time. Unlike skills (which are prescriptive patterns), memories are descriptive facts the team has learned — decisions made, conventions adopted, incidents resolved, and domain knowledge documented.
The `@arvoretech/memory-mcp` package provides semantic search via MCP tools, so AI agents can search your team's knowledge base on demand during conversations.
## Quick Start

Add the memory section to your `hub.yaml`:
```yaml
memory:
  path: ./memories
  categories:
    - decisions
    - conventions
    - incidents
    - domain
    - gotchas
  auto_capture: true
```
Create your first memory:
```bash
hub memory add decisions "Use PostgreSQL for all services" \
  --content "We chose PostgreSQL over MongoDB because we need ACID transactions and complex joins." \
  --tags "database,architecture" \
  --author joao.barros
```
Run `hub generate` to inject memories into your editor config.
## Categories
| Category | Purpose | Examples |
|---|---|---|
| `decisions` | Architectural Decision Records | Database choice, auth strategy, API design |
| `conventions` | Coding standards and preferences | Naming, file structure, PR process |
| `incidents` | Past bugs and their root causes | Outages, memory leaks, data issues |
| `domain` | Business domain knowledge | Glossary, product concepts, user flows |
| `gotchas` | Known issues and workarounds | Library bugs, env quirks, deploy caveats |
## Memory File Format

Memories are Markdown files with YAML frontmatter, stored in `memories/<category>/`:
```markdown
---
title: Use PostgreSQL for all services
category: decisions
date: 2024-06-01
author: joao.barros
tags: [database, architecture]
status: active
---

## Context

We needed to choose between PostgreSQL and MongoDB for our main database.

## Decision

PostgreSQL, because we need ACID transactions and complex joins across entities.

## Consequences

- Migrations managed by Ecto (Elixir) and Prisma (NestJS)
- No dynamic schema flexibility
- Need to manage connection pools carefully
```
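The frontmatter above is simple enough to read without a full YAML parser. The following TypeScript sketch (illustrative only, not the hub's actual implementation) handles the `key: value` and `[a, b]` shapes used in these files:

```typescript
// Illustrative memory-file parser. Assumes the flat "key: value"
// and "[a, b]" frontmatter shapes shown in the example above.
interface MemoryFile {
  meta: Record<string, string | string[]>;
  body: string;
}

function parseMemoryFile(text: string): MemoryFile {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: text };

  const meta: Record<string, string | string[]> = {};
  for (const line of match[1].split("\n")) {
    const i = line.indexOf(":");
    if (i === -1) continue;
    const key = line.slice(0, i).trim();
    const raw = line.slice(i + 1).trim();
    // "[a, b]" becomes a string array; everything else stays a string.
    meta[key] = raw.startsWith("[")
      ? raw.slice(1, -1).split(",").map((s) => s.trim())
      : raw;
  }
  return { meta, body: match[2].trim() };
}
```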
### Status Values

- `active`: included in searches and prompt injection (default)
- `superseded`: replaced by a newer decision, kept for history
- `archived`: soft-deleted, excluded from active searches
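Tools that inject or search memories would typically filter on this field; a hypothetical helper (not part of the hub API):

```typescript
// Hypothetical filter over the status values listed above:
// "active" is always searchable, "superseded" only on request,
// "archived" never.
type Status = "active" | "superseded" | "archived";

interface MemoryEntry {
  title: string;
  status: Status;
}

function searchable(memories: MemoryEntry[], includeSuperseded = false): MemoryEntry[] {
  return memories.filter(
    (m) => m.status === "active" || (includeSuperseded && m.status === "superseded"),
  );
}
```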
## Semantic Search via MCP

The `@arvoretech/memory-mcp` server provides semantic search using local embeddings. Add it to your `hub.yaml`:
```yaml
mcps:
  - name: team-memory
    package: "@arvoretech/memory-mcp"
    env:
      MEMORY_PATH: ./memories
```
This gives AI agents access to these tools during conversations:
| Tool | Description |
|---|---|
| `search_memories` | Semantic search across all memories |
| `get_memory` | Get the full content of a specific memory |
| `add_memory` | Create a new memory (AI can capture learnings) |
| `list_memories` | List memories with optional filters |
| `archive_memory` | Soft-delete a memory |
| `remove_memory` | Permanently delete a memory |
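Under the hood these are ordinary MCP tool calls. For example, an agent invoking `search_memories` sends a JSON-RPC request along these lines (the argument names here are illustrative; consult the server's tool schema for the exact parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_memories",
    "arguments": {
      "query": "why did we pick PostgreSQL?",
      "category": "decisions"
    }
  }
}
```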
The MCP server uses the `paraphrase-multilingual-MiniLM-L12-v2` model by default, supporting Portuguese and English queries. Vectors are stored in a local LanceDB database (`memories/.lancedb/`) with cosine-similarity search and metadata filtering.
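Ranking a query against stored vectors comes down to cosine similarity, which LanceDB computes natively; a standalone sketch of the metric itself:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// 1 means identical direction, 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```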
### Custom Model
Override the embedding model via environment variable:
```yaml
mcps:
  - name: team-memory
    package: "@arvoretech/memory-mcp"
    env:
      MEMORY_PATH: ./memories
      MEMORY_EMBEDDING_MODEL: Xenova/all-MiniLM-L6-v2 # Smaller, English-only
```
## Auto-Capture

When `auto_capture: true` is set, the orchestrator is instructed to extract learnings from completed tasks and save them as new memories. After each task delivery, the AI may create entries for:
- Architectural decisions made during the task
- New conventions discovered
- Bugs found and their root causes
- Domain knowledge clarified
## CLI Commands
```bash
hub memory list                        # List all active memories
hub memory list --category decisions   # Filter by category
hub memory list --status archived      # Show archived memories
hub memory add <category> "<title>"    # Create a memory
hub memory archive <id>                # Soft-delete
hub memory remove <id>                 # Permanent delete
```
## Configuration Reference
| Property | Type | Default | Description |
|---|---|---|---|
| `path` | string | `./memories` | Directory for memory files |
| `categories` | string[] | all 5 | Which categories to use |
| `auto_capture` | boolean | `false` | Auto-extract learnings from tasks |
| `embedding_model` | string | multilingual MiniLM | Hugging Face model used by the MCP server |
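Putting the table together, a fully specified `memory` section might look like the following (the `embedding_model` value reuses the model named in the Custom Model section; nesting is assumed to mirror the Quick Start example):

```yaml
memory:
  path: ./memories
  categories: [decisions, conventions, incidents, domain, gotchas]
  auto_capture: true
  embedding_model: Xenova/all-MiniLM-L6-v2
```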
## Consolidation

Chat sessions contain valuable knowledge (decisions, patterns, gotchas, domain insights) that gets lost in history. `hub consolidate` extracts that knowledge automatically.
### How it works
- Reads chat history from Kiro, Claude Code, and OpenCode
- Normalizes messages into a unified format (user/assistant only, no tool calls)
- Sorts by most recent and skips already-processed sessions
- Writes a compacted batch to `.hub/consolidation/`
- Spawns your editor's CLI with a knowledge-extraction prompt
- The agent reads the batch, checks existing memories for duplicates, and writes new files to `./memories/{category}/`
- Marks processed sessions in `.hub/consolidation-state.json`
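The skip-already-processed step is effectively a set difference against the session IDs recorded in `.hub/consolidation-state.json`; schematically (the session shape and state layout are assumptions for illustration, not the hub's internal format):

```typescript
// Schematic only: pick the N most recent sessions that have not
// already been consolidated, per the recorded set of processed IDs.
interface Session {
  id: string;
  endedAt: number; // epoch millis
}

function selectUnprocessed(
  sessions: Session[],
  processedIds: Set<string>,
  limit: number,
): Session[] {
  return sessions
    .filter((s) => !processedIds.has(s.id))
    .sort((a, b) => b.endedAt - a.endedAt) // most recent first
    .slice(0, limit);
}
```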
### Usage
```bash
hub consolidate                        # Process last 20 sessions
hub consolidate --last 50              # Last 50
hub consolidate --since 2026-03-01     # Since a date
hub consolidate --editor kiro          # Only Kiro sessions
hub consolidate --cli claude           # Use Claude CLI for extraction
hub consolidate --dry-run              # Preview without extracting
hub consolidate --reset                # Reprocess everything
```
### Supported editors
| Editor | Chat location | Format |
|---|---|---|
| Kiro | `~/Library/Application Support/Kiro/.../workspace-sessions/` | JSON |
| Claude Code | `~/.claude/projects/` | JSONL |
| OpenCode | `~/.local/share/opencode/storage/` | JSON |
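Conceptually, the normalization step collapses these editor-specific records into plain user/assistant turns; a simplified sketch (the raw message shape is an assumption for illustration, not the editors' real schemas):

```typescript
// Simplified normalization: keep only user/assistant turns with
// content, dropping tool calls and empty messages.
interface RawMessage {
  role: string;
  content?: string;
}

interface Turn {
  role: "user" | "assistant";
  content: string;
}

function normalize(messages: RawMessage[]): Turn[] {
  return messages
    .filter(
      (m): m is Required<RawMessage> =>
        (m.role === "user" || m.role === "assistant") && !!m.content,
    )
    .map((m) => ({ role: m.role as Turn["role"], content: m.content }));
}
```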
### No extra dependencies

The command uses the CLI of whatever editor you already have installed (`kiro-cli`, `claude`, or `opencode`). No API keys, no SDK, no external service. This is the same pattern `agent-teams-lead` uses to spawn teammates.
### Scheduling consolidation

Run `hub consolidate` automatically on a schedule so memories stay fresh.
#### macOS (launchd)

Create `~/Library/LaunchAgents/com.hub.consolidate.plist`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.hub.consolidate</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>-c</string>
    <string>cd /path/to/your/hub && npx @arvoretech/hub consolidate --last 30</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>9</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
  <key>StandardOutPath</key>
  <string>/tmp/hub-consolidate.log</string>
  <key>StandardErrorPath</key>
  <string>/tmp/hub-consolidate.log</string>
</dict>
</plist>
```
Load it:
```bash
launchctl load ~/Library/LaunchAgents/com.hub.consolidate.plist
```
#### Linux (cron)

```bash
crontab -e
```
Add a line to run daily at 9am:
```
0 9 * * * cd /path/to/your/hub && npx @arvoretech/hub consolidate --last 30 >> /tmp/hub-consolidate.log 2>&1
```
#### Linux (systemd timer)

Create `~/.config/systemd/user/hub-consolidate.service`:

```ini
[Unit]
Description=Hub memory consolidation

[Service]
Type=oneshot
WorkingDirectory=/path/to/your/hub
ExecStart=/usr/bin/npx @arvoretech/hub consolidate --last 30
```

Create `~/.config/systemd/user/hub-consolidate.timer`:

```ini
[Unit]
Description=Run hub consolidate daily

[Timer]
OnCalendar=*-*-* 09:00:00
Persistent=true

[Install]
WantedBy=timers.target
```
Enable it:

```bash
systemctl --user enable --now hub-consolidate.timer
```
#### Windows (Task Scheduler)

```powershell
$action = New-ScheduledTaskAction -Execute "cmd.exe" `
  -Argument "/c cd /d C:\path\to\your\hub && npx @arvoretech/hub consolidate --last 30"
$trigger = New-ScheduledTaskTrigger -Daily -At 9am
Register-ScheduledTask -TaskName "HubConsolidate" -Action $action -Trigger $trigger
```