Modular Agents
Most agent frameworks treat the LLM as the complete behavior: prompt in, actions out.
A major problem with this naive architecture is the lack of modularity and composability. LLMs are general-purpose reasoning engines that can be applied to a wide variety of tasks, but they are poor at maintaining complex, long-term reasoning processes on their own, especially when those processes run in parallel or interleave with one another.
What Is An Agent?
Arbitrarily complex agents can be built from two ingredients: 1) the ability to recursively spawn agents dynamically (see Agents All The Way Down), and 2) an atomic agent architecture.
The first step in enhancing agent modularity is to isolate the LLM from the tools: the LLM interacts with the tools only through a structured tool framework, rather than directly generating actions. This expands the agent design space by adding many degrees of freedom:
- The LLM can be prompted with a static or dynamic (introspected) view of the tool framework.
- The LLM can generate one method call at a time, or generate larger code blocks that interact with multiple tools and contain complex logic.
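As a concrete illustration of the "introspected view" option, here is a minimal sketch of rendering a tool framework into a prompt fragment. The `@tool` decorator, `FileTools` class, and `render_tool_view` helper are hypothetical names for illustration, not part of any specific framework:

```python
import inspect

# Hypothetical sketch: build an introspected "view" of a tool framework
# that can be placed in the LLM prompt. All names here are illustrative.
def tool(fn):
    fn._is_tool = True
    return fn

class FileTools:
    @tool
    def read_file(self, path: str) -> str:
        """Return the contents of a file."""
        ...

    @tool
    def list_dir(self, path: str) -> list[str]:
        """List entries in a directory."""
        ...

def render_tool_view(framework: object) -> str:
    """Render each exported tool as 'name(signature): docstring'."""
    lines = []
    for name, member in inspect.getmembers(framework, callable):
        if getattr(member, "_is_tool", False):
            sig = inspect.signature(member)
            lines.append(f"{name}{sig}: {inspect.getdoc(member)}")
    return "\n".join(lines)

print(render_tool_view(FileTools()))
```

A static view would simply hard-code this text in the prompt; the introspected variant stays in sync as tools are added or changed.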
Colony's Approach: The LLM as a code generator
Colony treats the LLM as a code generator rather than the direct source of behavior. The LLM receives or introspects a "view" of a tool framework (the codebase) and generates code that interacts with those tools through the framework, instead of emitting actions directly. This lets Colony leverage the LLM's reasoning capabilities while the tool framework and AgentCapabilities provide structure, modularity, and reliability.
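The execution side of this model can be pictured as follows. This is a hypothetical sketch, not Colony's actual machinery: `run_generated_code` and the `tools` dict are illustrative names, and the "generated" block stands in for real LLM output:

```python
# Hypothetical sketch of the execute step: the LLM's output is a code
# string, and the agent runs it in a namespace that exposes only the
# structured tool framework, never raw actions.
def run_generated_code(code: str, tools: dict) -> dict:
    namespace = dict(tools)          # tools are the only ambient bindings
    exec(compile(code, "<llm>", "exec"), namespace)
    return namespace                 # bindings the code created, for inspection

# Pretend the LLM generated this block in response to a tool view:
generated = """
entries = list_dir("/tmp")
summary = f"{len(entries)} entries"
"""

result = run_generated_code(generated, {"list_dir": lambda path: ["a", "b"]})
print(result["summary"])  # "2 entries"
```

Because the namespace contains only the tool bindings, the generated code can contain arbitrary control flow while every side effect still goes through the framework.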
Colony's Approach: Intuition is a cross-cutting ingredient infused into all cognitive processes
The LLM provides raw reasoning power -- remarkable leaps of insight, but also confabulation, laziness, and drift. By design, Colony provides structure -- sequencing, verification, and correction that turn those intuitions into reliable behavior. Colony factors the most useful cognitive processes, general reasoning tasks, and multi-agent collaboration patterns out into AgentCapabilities, treating the LLM as the source of intuition infused into each of them. An LLM-powered ActionPolicy then composes (or weaves) these deliberative, reflective, and meta-cognitive processes that the LLM alone cannot sustain over long reasoning chains.
Any cognitive process in Colony is a pluggable AgentCapability. Planning, reflection, conflict resolution, memory consolidation, hypothesis evaluation, confidence tracking, multi-agent coordination protocols -- each is an AgentCapability that can be swapped, customized, or composed. The LLM provides the "intuition" that drives each AgentCapability, while the ActionPolicy structure provides the "consciousness" that composes or weaves those intuitions.
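One way to picture this contract is the sketch below. Every class and parameter name here is a hypothetical stand-in for Colony's real types; the point is the shape: capabilities export named actions, and a policy weaves them by asking the LLM which to run next:

```python
# Illustrative sketch of the pluggable-capability contract: each
# AgentCapability exposes named actions, and an ActionPolicy composes
# them by asking the LLM (the "intuition") which one to run next.
# All names are hypothetical stand-ins, not Colony's actual classes.
import asyncio
from typing import Awaitable, Callable

class AgentCapability:
    def actions(self) -> dict[str, Callable[..., Awaitable]]:
        raise NotImplementedError

class PlanningCapability(AgentCapability):
    def actions(self):
        return {"make_plan": self.make_plan}

    async def make_plan(self) -> str:
        return "plan: gather context, then act"

class ActionPolicy:
    def __init__(self, capabilities: list[AgentCapability], intuition):
        # Compose every capability's actions into one dispatch table.
        self.table = {}
        for cap in capabilities:
            self.table.update(cap.actions())
        self.intuition = intuition  # stand-in for an LLM call

    async def step(self) -> str:
        choice = self.intuition(list(self.table))  # "LLM" picks an action name
        return await self.table[choice]()

policy = ActionPolicy([PlanningCapability()], intuition=lambda names: names[0])
print(asyncio.run(policy.step()))  # "plan: gather context, then act"
```

Swapping or composing capabilities only changes the dispatch table; the weaving loop stays the same.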
Conscious vs. Subconscious Processes
Colony's AgentCapability system directly implements the conscious/subconscious distinction:
Conscious Processes
Capabilities export @action_executor methods -- deliberate actions that the ActionPolicy can choose to invoke during its reasoning loop. These are interleaved with LLM reasoning and directly alter agent behavior:
- Planning: create, revise, or backtrack plans
- Reflection: assess past actions and adjust strategy
- Memory retrieval: consciously search for relevant past experiences
- Tool use: invoke external tools to gather information
- Communication: send structured messages to other agents
The LLM planner decides which conscious process to invoke and when, based on current context and goals.
```python
class ReflectionCapability(AgentCapability):
    # Conscious: LLM planner selects this action during its reasoning loop
    @action_executor()
    async def reflect_on_progress(self, goal: str) -> Reflection:
        """Assess progress toward a goal and suggest strategy adjustments."""
        ...

    # Conscious: exposed to planner for deliberate memory search
    @action_executor()
    async def recall_similar_experiences(self, query: str) -> list[MemoryEntry]:
        """Search episodic memory for relevant past experiences."""
        ...
```
Subconscious Processes
Capabilities also run background processes that operate without LLM involvement:
- Memory consolidation: Periodically summarize and compress working memory into short-term and long-term stores
- Rehearsal: Strengthen important memories by replaying recent experiences
- Concept formation: Extract patterns from accumulated observations
- Decay and pruning: Reduce relevance of stale memories, remove duplicates
- Event monitoring: Watch for blackboard events that may require attention
These run as async tasks at different time scales, continuously or periodically, triggered by blackboard events or timer intervals. They keep the agent's cognitive infrastructure healthy without consuming LLM inference cycles.
```python
# Subconscious: memory consolidation runs via MemoryCapability subscriptions
stm = MemoryCapability(
    agent=agent,
    scope_id=MemoryScope.agent_stm(agent_id),
    ingestion_policy=MemoryIngestPolicy(
        subscriptions=[
            MemorySubscription(source_scope_id=MemoryScope.agent_working(agent_id)),
        ],
        transformer=SummarizingTransformer(agent=agent, prompt="..."),
    ),
    maintenance=MaintenanceConfig(decay_rate=0.01, prune_threshold=0.1),
)

# Subconscious: auto-capture agent behavior via hooks
MemoryProducerConfig(
    pointcut=Pointcut.pattern("ActionDispatcher.dispatch"),
    extractor=extract_action_from_dispatch,  # (ctx, result) -> (data, tags, metadata)
    ttl_seconds=3600,
)
```
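The timer-driven side of such subconscious tasks can be pictured as a plain asyncio loop. This is a hypothetical sketch: `maintenance_loop` and its store are illustrative, with `decay_rate` and `prune_threshold` mirroring the MemoryCapability maintenance settings above; Colony's real scheduling is driven by blackboard events and configured intervals:

```python
import asyncio

# Hypothetical sketch of a subconscious maintenance task: a periodic
# asyncio loop that decays and prunes memory relevance scores without
# any LLM call. A real loop would run indefinitely; `cycles` bounds it
# here so the sketch terminates.
async def maintenance_loop(store: dict[str, float],
                           decay_rate: float = 0.01,
                           prune_threshold: float = 0.1,
                           interval: float = 0.01,
                           cycles: int = 3) -> None:
    for _ in range(cycles):
        await asyncio.sleep(interval)             # timer-triggered, no LLM
        for key in list(store):
            store[key] -= decay_rate              # decay relevance
            if store[key] < prune_threshold:
                del store[key]                    # prune stale memories

memories = {"fresh": 0.9, "stale": 0.11}
asyncio.run(maintenance_loop(memories))
print(memories)  # "stale" drops below the threshold and is pruned
```

Several such loops at different intervals give the consolidation, rehearsal, and decay behaviors their different time scales.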
```mermaid
graph TB
    subgraph "Consciousness Layer (Policies)"
        AP[ActionPolicy<br/>Aspect Weaver]
        Plan[Planning Policy]
        Reflect[Reflection Policy]
        Conf[Confidence Policy]
        AP --> Plan
        AP --> Reflect
        AP --> Conf
    end
    subgraph "Intuition Layer (LLM)"
        LLM[LLM Inference<br/>Fast, associative]
    end
    Plan -->|"ask: what's next?"| LLM
    Reflect -->|"ask: how did that go?"| LLM
    Conf -->|"ask: how confident?"| LLM
    LLM -->|"intuitive response"| AP
```
```mermaid
graph TB
    subgraph "Subconscious (Background)"
        Consol[Memory Consolidation]
        Decay[Decay & Pruning]
        Monitor[Event Monitoring]
    end
    Consol -.->|"runs independently"| BB[Blackboard]
    Decay -.->|"runs independently"| BB
    Monitor -.->|"watches"| BB
```