Multiple agent types. 675+ skills. From no-code remix to full-code custom.
Every agent in Codebolt is extensible. Remix agents let you customize without code. Flow agents give you visual composition. Custom agents give you full SDK access. All agents share the same 675+ skill library and capability system.
Remix agents inherit from a base agent and add custom instructions in YAML frontmatter + Markdown body. Stored as .md files in .codebolt/agents/remix/ — git-friendly, versionable, mergeable. Declare supported languages, frameworks, routing rules, and LLM model fallback chains. No code required.
.codebolt/agents/remix/my-agent.md:
---
name: Python Testing Expert
baseAgent: code-generation
metadata:
  agent_routing:
    supportedlanguages: [python]
  defaultagentllm:
    modelorder: [gpt-4, claude, local]
---
You are a Python testing expert.
Write pytest tests with fixtures...
Build agent workflows visually using the LiteGraph-based flow editor. Create .agentflow files with nodes, connections, and data flow. The runtime service executes the flow graph. Combine multiple agent capabilities into complex workflows without writing orchestration code.
┌─────────┐    ┌──────────┐    ┌─────────┐
│ Analyze │───→│ Generate │───→│ Review  │
│  Code   │    │  Tests   │    │ Output  │
└─────────┘    └──────────┘    └─────────┘
     │                              │
     └────────── feedback ──────────┘
Visual editor. .agentflow files.
Runtime executes the graph.
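As a sketch, an .agentflow graph for the pipeline above might serialize to a list of nodes plus the links between them, in the style of LiteGraph's JSON format. The field names and node types here are assumptions for illustration, not the exact Codebolt schema:

```typescript
// Hypothetical shape of an .agentflow graph (field names assumed,
// modeled on LiteGraph-style JSON serialization).
interface FlowNode {
  id: number;
  type: string;       // which agent capability this node invokes
  inputs: string[];
  outputs: string[];
}

interface FlowLink {
  from: number;       // source node id
  to: number;         // target node id
  label?: string;     // e.g. "feedback"
}

const analyzeGenerateReview: { nodes: FlowNode[]; links: FlowLink[] } = {
  nodes: [
    { id: 1, type: 'analyze-code',   inputs: [],           outputs: ['findings'] },
    { id: 2, type: 'generate-tests', inputs: ['findings'], outputs: ['tests'] },
    { id: 3, type: 'review-output',  inputs: ['tests'],    outputs: ['report'] },
  ],
  links: [
    { from: 1, to: 2 },
    { from: 2, to: 3 },
    { from: 3, to: 1, label: 'feedback' }, // review loops back to analysis
  ],
};
```

The runtime's job is then graph traversal: execute each node when its inputs are satisfied, and follow labeled links like the feedback edge for iterative loops.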
Write agents in JavaScript/TypeScript with complete control over the agentic loop. Agents run as child processes and connect through codeboltjs, with access to all 63+ SDK modules. Implement custom decision logic, tool selection, memory management, and error handling. Your agent is your program.
import { codebolt } from 'codeboltjs';

codebolt.onMessage(async (msg) => {
  // Step 1: Analyze
  const files = await codebolt.fs.readDir('/src');
  // Step 2: Decide (your logic)
  const plan = await codebolt.llm.chat({...});
  // Step 3: Execute
  const result = await codebolt.terminal.run('npm test');
  // Step 4: Report
  await codebolt.chat.sendMessage(result);
});
Agents running in external environments connect via HTTP/WebSocket endpoints. A remote agent has the same capabilities as a local one — full SDK access through the network. This enables distributed agent architectures where specialized agents live on specialized hardware.
Local Codebolt Instance
           │
┌──────────┴─────────────┐
│ Remote Agent (GPU VM)  │
│ → connects via WS      │
│ → full SDK access      │
│ → specialized hardware │
└────────────────────────┘
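A minimal sketch of what a remote agent's registration handshake could look like before opening its WebSocket connection. The endpoint path, message fields, and capability names are assumptions for illustration, not the documented Codebolt protocol:

```typescript
// Hypothetical registration payload for a remote agent connecting back
// to a local Codebolt instance. Endpoint and field names are assumed.
interface RemoteAgentConfig {
  host: string;           // local Codebolt instance, e.g. 'localhost:3001'
  agentId: string;
  capabilities: string[]; // hardware-specific skills this agent offers
}

function buildRegistration(cfg: RemoteAgentConfig) {
  return {
    url: `ws://${cfg.host}/agents/connect`, // assumed endpoint path
    hello: {
      type: 'register',
      agentId: cfg.agentId,
      capabilities: cfg.capabilities,
    },
  };
}

// A GPU-backed embedding worker advertising its specialized capabilities.
const gpuAgent = buildRegistration({
  host: 'localhost:3001',
  agentId: 'embedding-worker',
  capabilities: ['llm-embedding', 'vector-storage'],
});
```

Once registered, the remote process would exchange SDK calls and results over the same WebSocket, which is what gives it parity with a local agent.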
72 tool domains with 675+ individual tools. File operations (read, write, search, diff, create). Git (commit, push, pull, branch, merge, revert). Browser (navigate, screenshot, click, type, scroll). Terminal (execute, stream, manage PTY). LLM (chat, embed, count tokens). Vector DB (store, query, semantic search). Agent control (start, stop, list, spawn sub-agents). Memory & knowledge (persist, retrieve, graph queries). Skills are scoped by environment capabilities.
Agent declares needed capabilities:
→ browser-automation
→ llm-embedding
→ vector-storage
Environment checks:
→ Playwright available? ✓
→ ONNX.js available? ✓
→ Redis running? ✓
Agent gains access to:
675+ tools across 72 domains
Scoped by what's available
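The scoping step above amounts to an intersection: the agent's declared capabilities filtered by what the environment's probes actually found. A minimal sketch, with function and probe names that are illustrative rather than the Codebolt API:

```typescript
// Sketch of capability scoping: an agent only gains skills whose backing
// capability the environment provides. Names are illustrative.
type Capability = string;

function scopeCapabilities(
  declared: Capability[],
  environment: Map<Capability, boolean>, // capability → probe result
): Capability[] {
  return declared.filter((cap) => environment.get(cap) === true);
}

// Environment probe results, mirroring the checks above.
const env = new Map<Capability, boolean>([
  ['browser-automation', true],  // Playwright found
  ['llm-embedding', true],       // ONNX.js found
  ['vector-storage', false],     // Redis not running
]);

const granted = scopeCapabilities(
  ['browser-automation', 'llm-embedding', 'vector-storage'],
  env,
);
// granted → ['browser-automation', 'llm-embedding']
```

Tools backed by a missing capability simply never appear in the agent's toolset, so the agent degrades gracefully rather than failing at call time.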
Each agent can declare different LLM configurations for different roles. Use GPT-4 for planning, Claude for code generation, a local model for repetitive tasks. Define fallback chains — if the primary model is unavailable, automatically try alternatives. Per-agent, per-task, per-role model selection. 17+ cloud providers plus Ollama and LM Studio.
metadata:
  llm_role:
    - role: planning
      modelorder: [gpt-4, claude-opus]
    - role: coding
      modelorder: [claude-sonnet, gpt-4]
    - role: review
      modelorder: [local-llama, gemini]
Fallback: if primary unavailable,
try next in chain automatically.
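Fallback resolution can be sketched as a walk down modelorder that returns the first reachable model. The resolveModel function and the availability check are illustrative stand-ins, not the Codebolt API:

```typescript
// Sketch of fallback-chain resolution: try each model in declared order
// and use the first one available. isAvailable stands in for a real
// provider health check.
function resolveModel(
  modelorder: string[],
  isAvailable: (model: string) => boolean,
): string {
  for (const model of modelorder) {
    if (isAvailable(model)) return model;
  }
  throw new Error(`no model in chain available: ${modelorder.join(', ')}`);
}

// If gpt-4 is unreachable, the planning role falls back to claude-opus.
const planningModel = resolveModel(
  ['gpt-4', 'claude-opus'],
  (m) => m !== 'gpt-4',
);
// planningModel → 'claude-opus'
```

Because the chain is resolved per role, a planning outage can fall back independently of the coding or review roles.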