Codebolt memory is not a fixed conversation summary. It is an active engine layer that can ingest chats, code, prompts, tasks, events, and relationships; create new memory structures; evolve context rules; and inject the right memory into each agent run under budget.
Most agent memory products start with a fixed shape: summaries, preferences, key points, maybe mistakes. Codebolt ships memory primitives. You can decide what memory means for a project: unfinished tasks, negative patterns, positive patterns, object relationships, code facts, prompt behavior, agent habits, customer context, or a structure no one anticipated when the agent started.
The important part is not only that agents remember. It is that the memory substrate can grow, restructure, react, and decide what should enter context.
Memory grows as Codebolt is used. Agent chats, code changes, prompts, tool traces, tasks, decisions, failures, and relationships can be ingested into durable stores and converted into useful structures such as positive patterns, negative patterns, object relationships, unresolved work, and project facts.
usage keeps producing signal
├─ chats
├─ code
├─ prompts
├─ tools
├─ tasks
└─ events
↓
memory keeps compounding
The memory shape is not predefined by Codebolt. If you do not want conversation summaries, skip them. If you only want unfinished tasks, create that memory. If you need object graphs, failure patterns, or domain-specific state, define those instead.
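As a sketch of what a project-defined memory shape could look like, here is a minimal unfinished-tasks definition. The `MemoryDefinition` type and every field name are illustrative assumptions, not the actual Codebolt API:

```typescript
// Hypothetical memory definition; all names are illustrative, not Codebolt's API.
type MemoryDefinition<T> = {
  name: string;
  // A validator acting as the memory's schema.
  schema: (record: unknown) => record is T;
};

interface UnfinishedTask {
  taskId: string;
  description: string;
  blockedOn?: string;
}

// A project that only cares about unfinished work defines exactly that
// shape and nothing else -- no conversation summaries required.
const unfinishedTasks: MemoryDefinition<UnfinishedTask> = {
  name: "unfinished-tasks",
  schema: (r): r is UnfinishedTask =>
    typeof r === "object" && r !== null &&
    "taskId" in r && "description" in r,
};

const candidate = { taskId: "T-42", description: "migrate auth flow" };
console.log(unfinishedTasks.schema(candidate)); // true
```

The point of the sketch is that the schema lives with the project, so an object-graph or failure-pattern memory would just be another definition alongside this one.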
Agents can change memory structures at runtime. They can create new memory definitions, adjust ingestion, improve retrieval pipelines, and add Context Assembly rules that control what future agents receive.
Memory does not blindly flood the prompt. The Context Assembly Engine selects and injects memory according to scope variables, context rules, retrieval pipelines, contribution rules, and token budgets.
Memory work can run in the background and react to the system. Hooks, events, schedules, task outcomes, conversation lifecycle events, and custom plugins can trigger ingestion, updates, consolidation, and new derived memory.
event happens
↓
hook / ingestion pipeline
↓
update memory
↓
CAE changes future context
memory is not passive storage
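The event-to-memory flow above can be sketched as a hook table: a predicate matches an event, and an ingestion step derives structured memory from it. Everything here (event kinds, store layout, hook shape) is a hypothetical illustration, not Codebolt's real interfaces:

```typescript
// Illustrative event-driven memory update; all names are hypothetical.
type AppEvent = { kind: string; payload: Record<string, unknown> };
type MemoryStore = Map<string, unknown[]>;

const store: MemoryStore = new Map();

// A "hook" is a predicate plus an ingestion step that turns the raw
// event into a structured memory record.
const hooks = [
  {
    matches: (e: AppEvent) => e.kind === "task.failed",
    ingest: (e: AppEvent, s: MemoryStore) => {
      const patterns = s.get("negative-patterns") ?? [];
      patterns.push({ source: e.payload.taskId, reason: e.payload.reason });
      s.set("negative-patterns", patterns);
    },
  },
];

function onEvent(e: AppEvent) {
  for (const h of hooks) if (h.matches(e)) h.ingest(e, store);
}

// A failed task becomes a negative-pattern memory without any user prompt.
onEvent({ kind: "task.failed", payload: { taskId: "T-7", reason: "timeout" } });
console.log(store.get("negative-patterns")?.length); // 1
```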
Codebolt separates memory into primitives: storage types, ingestion pipelines, retrieval pipelines, context rules, contribution rules, and budgets. Together they let a project define the memory it actually needs instead of accepting a fixed vendor schema.
YOU CAN DEFINE:
├─ unfinished tasks memory
├─ negative pattern memory
├─ positive pattern memory
├─ object relationship memory
├─ codebase fact memory
├─ prompt behavior memory
└─ domain-specific memory
USING PRIMITIVES:
├─ episodic logs
├─ persistent memory definitions
├─ vector search
├─ knowledge graph
├─ key-value state
├─ ingestion pipelines
├─ retrieval pipelines
└─ context assembly rules
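As a sketch of how such primitives compose, a retrieval pipeline might rank episodic entries against a query and then attach key-value facts as a second contribution. The keyword-overlap score below stands in for real vector search, and every name is a hypothetical placeholder:

```typescript
// Hypothetical retrieval pipeline over two primitive stores.
// Keyword overlap stands in for real vector search; names are illustrative.
const episodicLog = [
  "agent retried flaky test three times",
  "user corrected the API base URL",
  "deploy failed on missing env var",
];
const keyValueState = new Map([["project.language", "typescript"]]);

// Toy relevance score: count query words that appear in the document.
function score(query: string, doc: string): number {
  const q = new Set(query.toLowerCase().split(/\s+/));
  return doc.toLowerCase().split(/\s+/).filter(w => q.has(w)).length;
}

// The pipeline: rank episodic entries against the query, then append
// always-relevant key-value facts as a second contribution.
function retrieve(query: string, topK: number): string[] {
  const ranked = [...episodicLog]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, topK);
  const facts = [...keyValueState].map(([k, v]) => `${k}=${v}`);
  return [...ranked, ...facts];
}

console.log(retrieve("why did the deploy failed", 1));
// ["deploy failed on missing env var", "project.language=typescript"]
```

Swapping the scorer for an embedding search or the fact source for a knowledge-graph walk would not change the pipeline's shape, which is the point of keeping the primitives separate.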
The memory system can consume agent chat, code, prompts, tool results, task state, project files, application events, and plugin events. Over time that raw usage becomes structured project intelligence.
A project can add new persistent memory definitions for the exact shape it needs: unfinished tasks, issue clusters, component relationships, repeated user corrections, agent mistakes, winning strategies, or business-specific entities.
Because memory structures are engine primitives, agents and plugins can update schemas, ingestion behavior, and retrieval pipelines while work is happening. The structure of memory can evolve with the problem.
Context Assembly rules determine which memories are injected for a given agent, task, phase, thread, project, user, or swarm. Memory is selected and budgeted before it enters the prompt.
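A minimal sketch of scope-based selection, assuming a hypothetical rule shape (the scope fields, rule format, and memory names are all illustrative):

```typescript
// Illustrative context-assembly rules keyed on scope variables.
type Scope = { agent: string; phase: string };
type Rule = { when: (s: Scope) => boolean; inject: string[] };

const rules: Rule[] = [
  { when: s => s.phase === "debugging",
    inject: ["negative-patterns", "codebase-facts"] },
  { when: s => s.phase === "planning",
    inject: ["unfinished-tasks"] },
];

// Only memories whose rules match the current scope are candidates
// for the prompt; everything else stays in the store.
function selectMemories(scope: Scope): string[] {
  return rules.filter(r => r.when(scope)).flatMap(r => r.inject);
}

console.log(selectMemories({ agent: "fixer", phase: "debugging" }));
// ["negative-patterns", "codebase-facts"]
```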
Hooks, events, schedules, and lifecycle triggers can run memory work without a user prompt. Memory can consolidate, update relationships, record failures, and prepare future context while agents continue working.
Scalable memory creates a second problem: the more the system learns, the less any single agent can read. Codebolt solves that through Context Assembly. Memory can grow in the background, but each agent receives only the slice selected by rules, retrieval pipelines, contribution logic, and token budgets.
memory keeps growing:
├─ conversations
├─ code facts
├─ object graph
├─ positive patterns
├─ negative patterns
└─ unfinished tasks
CAE decides:
├─ which memory applies
├─ which retrieval pipeline to run
├─ where context is placed
└─ how much budget it receives
agent gets focused context,
not the whole database
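The budgeting step above can be illustrated with a toy pass that greedily keeps the highest-relevance items within a token budget. This is a simplification for intuition, not the engine's actual algorithm:

```typescript
// Toy budget pass: keep highest-relevance items that fit the token budget.
// A stand-in for the real CAE budgeting logic, not its implementation.
type Item = { text: string; relevance: number; tokens: number };

function fitBudget(items: Item[], budget: number): Item[] {
  const picked: Item[] = [];
  let used = 0;
  // Greedy by relevance: skip anything that would overflow the budget.
  for (const it of [...items].sort((a, b) => b.relevance - a.relevance)) {
    if (used + it.tokens <= budget) {
      picked.push(it);
      used += it.tokens;
    }
  }
  return picked;
}

const slice = fitBudget(
  [
    { text: "unfinished task T-42", relevance: 0.9, tokens: 40 },
    { text: "full conversation log", relevance: 0.3, tokens: 900 },
    { text: "negative pattern: flaky retries", relevance: 0.7, tokens: 60 },
  ],
  120,
);
console.log(slice.map(i => i.text));
// the 900-token log is dropped; the two focused memories fit
```

The agent sees the two focused memories; the oversized conversation log never reaches the prompt, which is the "focused context, not the whole database" guarantee in miniature.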