AgentModule is the contract you implement to define agent behavior. It owns everything about how your agent thinks — state initialization, prompt construction, decision logic, and state reduction — but it never runs the loop itself. That’s the Engine’s job.

Generic type parameters

AgentModule is generic over three type variables:
class AgentModule(ABC, Generic[StateT, ObservationT, ActionT]):
    ...
Parameter     What it represents
------------  ----------------------------------------------------
StateT        Your typed state class (must extend StateSchema)
ObservationT  The observation dict the Engine passes to your hooks
ActionT       The action type your decide hook produces
A fully-typed declaration looks like:
class MyAgent(AgentModule[MyState, dict[str, Any], Action]):
    ...
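The pattern behind this declaration can be illustrated with plain typing machinery. The following is a stdlib-only sketch of the same generic-subclassing idea; ToyModule, ToyState, and ToyAgent are hypothetical stand-ins, not QitOS types:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Generic, TypeVar

StateT = TypeVar("StateT")
ObservationT = TypeVar("ObservationT")
ActionT = TypeVar("ActionT")


class ToyModule(ABC, Generic[StateT, ObservationT, ActionT]):
    """Stand-in for AgentModule: the three type variables bind the
    state, observation, and action types used by every hook."""

    @abstractmethod
    def init_state(self, task: str, **kwargs: Any) -> StateT: ...


@dataclass
class ToyState:
    task: str


# Subclassing pins all three parameters at once, so a type checker
# knows init_state returns ToyState, not a bare object.
class ToyAgent(ToyModule[ToyState, dict[str, Any], str]):
    def init_state(self, task: str, **kwargs: Any) -> ToyState:
        return ToyState(task=task)
```

Once the parameters are pinned, every hook signature in the subclass is checked against the same three types.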

The six hooks

AgentModule defines six hooks. Two are required (init_state and reduce). The rest are optional and have sensible defaults.

init_state (required)

Called once at the start of every run. Return your typed state object, initialized from the task string.
def init_state(self, task: str, **kwargs: Any) -> StateT:
    return MyState(task=task, max_steps=int(kwargs.get("max_steps", 10)))
Any keyword arguments passed to Engine.run() flow through here, so you can accept max_steps or other initial fields via kwargs.

reduce (required)

Called after every action cycle. Transform the previous state plus the new observation and decision into the next state. This is where your agent learns from what just happened.
def reduce(
    self,
    state: StateT,
    observation: ObservationT,
    decision: Decision[ActionT],
) -> StateT:
    ...
    return state
Always return state (or a new state object) from reduce. Returning None will cause a runtime error.
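To see the shape of a reducer in isolation, here is a stdlib-only sketch using plain dataclasses; DemoState and DemoDecision are hypothetical stand-ins for your state and the QitOS Decision type:

```python
from dataclasses import dataclass, replace
from typing import Any


@dataclass
class DemoDecision:
    rationale: str


@dataclass(frozen=True)
class DemoState:
    task: str
    step: int = 0
    log: tuple[str, ...] = ()


def reduce_state(
    state: DemoState, observation: dict[str, Any], decision: DemoDecision
) -> DemoState:
    # Fold the new observation and decision into the next state.
    # Returning a fresh object (never None) keeps each step's
    # state immutable and easy to inspect in traces.
    return replace(
        state,
        step=state.step + 1,
        log=state.log + (f"{decision.rationale}: {observation.get('result', '?')}",),
    )
```

Mutating and returning the same object, as the QitOS example above does, is equally valid; the contract is only that reduce returns a state.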

build_system_prompt (optional)

Return a dynamic system prompt string, or None to skip. Called before every decide step, so your prompt can react to current state.
def build_system_prompt(self, state: StateT) -> str | None:
    return render_prompt(SYSTEM_TEMPLATE, {"tools": self.tool_registry.get_tool_descriptions()})
The default implementation returns None.

prepare (optional)

Convert the current state into the model-ready text that becomes the user turn. Defaults to str(state). Override this to format a structured prompt from your state fields.
def prepare(self, state: StateT) -> str:
    return f"Task: {state.task}\nStep: {state.current_step}/{state.max_steps}"

decide (optional)

Override to provide your own decision logic before the Engine hands off to the LLM. Return None to let the Engine use its model decision path.
def decide(self, state: StateT, observation: ObservationT) -> Decision[ActionT] | None:
    if state.current_step == 0:
        return Decision.wait("inspect the coding context first")
    return None  # let the model decide

should_stop (optional)

An additional stop condition checked each step after reduce. Return True to halt the run. The Engine’s built-in stop criteria (budget, FinalResultCriteria, etc.) still apply on top of this.
def should_stop(self, state: StateT) -> bool:
    return state.final_result is not None
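Taken together, the hooks slot into a loop that the Engine drives. The following is a simplified, hedged sketch of that control flow (a stand-in, not the real Engine; it omits the model call and stop criteria beyond should_stop):

```python
from typing import Any, Callable


def run_loop(
    init_state: Callable[[str], Any],
    prepare: Callable[[Any], str],
    decide: Callable[[Any, dict], Any],
    reduce: Callable[[Any, dict, Any], Any],
    should_stop: Callable[[Any], bool],
    task: str,
    max_steps: int,
) -> Any:
    """Illustrative order of hook calls: init once, then
    prepare -> decide -> reduce -> should_stop each step."""
    state = init_state(task)
    for _ in range(max_steps):
        prompt = prepare(state)            # state -> model-ready text
        decision = decide(state, {"prompt": prompt})
        observation = {"echo": decision}   # stand-in for environment feedback
        state = reduce(state, observation, decision)
        if should_stop(state):             # checked after reduce, as documented
            break
    return state
```

The real Engine interleaves tool execution, tracing, and its built-in stop criteria, but the hook ordering shown here matches the contract described above.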

Minimal example

In QitOS, even the minimal public example is a real coding agent: it configures a model, mounts a workspace, and lets the Engine drive tool use and verification.
from dataclasses import dataclass, field
from typing import Any

from qitos import Action, AgentModule, Decision, StateSchema
from qitos.kit import REACT_SYSTEM_PROMPT, ReActTextParser, format_action, render_prompt
from qitos.kit.toolset import coding_tools
from qitos.models import OpenAICompatibleModel


@dataclass
class MinimalCodingState(StateSchema):
    scratchpad: list[str] = field(default_factory=list)
    target_file: str = "buggy_module.py"
    test_command: str = 'python -c "import buggy_module; assert buggy_module.add(20, 22) == 42"'


class MinimalCodingAgent(AgentModule[MinimalCodingState, dict[str, Any], Action]):
    def __init__(self, llm: OpenAICompatibleModel, workspace_root: str):
        super().__init__(
            toolset=[coding_tools(workspace_root=workspace_root, shell_timeout=20, include_notebook=False)],
            llm=llm,
            model_parser=ReActTextParser(),
        )

    def init_state(self, task: str, **kwargs: Any) -> MinimalCodingState:
        return MinimalCodingState(
            task=task,
            max_steps=int(kwargs.get("max_steps", 8)),
            target_file=str(kwargs.get("target_file", "buggy_module.py")),
            # Fall back to the dataclass default rather than str(None) when absent
            test_command=str(kwargs.get("test_command", MinimalCodingState.test_command)),
        )

    def build_system_prompt(self, state: MinimalCodingState) -> str | None:
        _ = state
        return render_prompt(
            REACT_SYSTEM_PROMPT,
            {"tool_schema": self.tool_registry.get_tool_descriptions()},
        )

    def reduce(
        self,
        state: MinimalCodingState,
        observation: dict[str, Any],
        decision: Decision[Action],
    ) -> MinimalCodingState:
        action_results = observation.get("action_results", [])
        if decision.rationale:
            state.scratchpad.append(f"Thought: {decision.rationale}")
        if decision.actions:
            state.scratchpad.append(f"Action: {format_action(decision.actions[0])}")
        if action_results:
            first = action_results[0]
            state.scratchpad.append(f"Observation: {first}")
            if isinstance(first, dict) and int(first.get("returncode", 1)) == 0:
                state.final_result = "Patch applied and verification passed."
        return state

Running an agent

Call agent.run() to execute a task. This is the primary entry point — it creates an Engine internally, runs the loop, and returns the result.
agent = MinimalCodingAgent(
    llm=build_model(),
    workspace_root="./playground/minimal_coding_agent",
)

# Returns final_result string by default
answer = agent.run(
    "Fix the bug in buggy_module.py and make the verification command pass.",
    workspace="./playground/minimal_coding_agent",
    max_steps=8,
)

# Pass return_state=True to get the full EngineResult
result = agent.run(
    "Fix the bug in buggy_module.py and make the verification command pass.",
    workspace="./playground/minimal_coding_agent",
    max_steps=8,
    return_state=True,
)
print(result.state.final_result)
print(result.state.stop_reason)
Key parameters for agent.run():
Parameter      Type                Description
-------------  ------------------  ----------------------------------------------------
task           str | Task          Plain-text objective or a structured Task object
max_steps      int | None          Override the step budget for this run
workspace      str | None          Working directory mounted into the host environment
return_state   bool                Return EngineResult instead of state.final_result
trace          bool | TraceWriter  True to auto-create a trace writer, False to disable
trace_logdir   str                 Directory for trace output (default: "./runs")
parser         Parser              Output parser matched to your prompt format
critics        list[Critic]        Critics evaluated after each step
stop_criteria  list[StopCriteria]  Additional stop conditions beyond the defaults
If you need finer control over the Engine — custom hooks, branch selectors, or a shared Engine instance — use agent.build_engine(**kwargs) and call engine.run(task) directly.

Constructor arguments

AgentModule.__init__ accepts these keyword arguments, all optional:
AgentModule(
    tool_registry=registry,   # ToolRegistry — tools available to the agent
    llm=model,                # LLM client used by the Engine's model runtime
    model_parser=parser,      # Parser that converts raw model output to Decision
    memory=memory,            # Memory implementation for cross-run recall
    history=history,          # History implementation for in-run message log
    **config,                 # Any extra keyword args stored in self.config
)
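The **config catch-all can be illustrated with a stdlib-only stand-in; ConfigCapture below is hypothetical, sketching only the pattern of named arguments becoming attributes while extras land in self.config:

```python
from typing import Any


class ConfigCapture:
    """Stand-in for AgentModule.__init__: known arguments become
    attributes; any extra keyword arguments are preserved verbatim
    in self.config for hooks to read later."""

    def __init__(self, llm: Any = None, memory: Any = None, **config: Any):
        self.llm = llm
        self.memory = memory
        self.config = config  # arbitrary extras, e.g. temperature, retries
```

This is why subclasses can pass project-specific settings through the constructor without AgentModule needing to know about them in advance.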