
Open the sub-agent dialog

Use the flow view actions menu to manage sub-agents inside a parent agent.
1. Open the dialog: in flow view, select Actions → Add Sub Agent.
2. Choose add or edit: search for an existing sub-agent, or create a new one. If you do not have edit permissions, the dialog opens in view-only mode.

[Image: Flow view Actions menu showing the Add Sub Agent option]

Add or reuse a sub-agent

When you open the dialog from a parent agent, you can:
  • Search existing sub-agents and add them to the flow.
  • Create a new sub-agent with its own prompt and settings.
  • Create a copy of a selected sub-agent to edit without impacting other parent agents.
Copy a sub-agent when it is shared across multiple parent agents and you need a version-specific change.
[Image: Sub-agent dialog with search results, create new, and create copy controls]

Sub-agent fields you can edit

Identity and behavior

  • Sub-agent name: the display name used in the flow.
  • Sub-agent name key: a unique, read-only identifier for owned sub-agents.
  • Description: used to explain intent and improve selection.
  • Agent type:
    • Assistant: selects tools and responds with reasoning.
    • Proxy: executes tools without conversational reasoning.
[Image: Sub-agent identity fields showing name, key, description, and Assistant or Proxy toggle]

LLM configuration (Assistant only)

  • LLM model: model used by this sub-agent.
  • Temperature: sampling temperature; higher values produce more varied, creative responses.

Visibility and messaging (Assistant only)

  • Visible senders: limit which agent messages are shown to this sub-agent.
  • Hidden senders: suppress messages from selected agents.
  • Hide tool calls: hide tool call metadata from the sub-agent.
  • Hide tool responses: hide tool outputs from the sub-agent.
[Image: Sub-agent visibility controls with visible senders, hidden senders, and hide tool call and response toggles]
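The four visibility settings above combine into a single filter applied to the conversation before each turn. As a minimal sketch, assuming a hypothetical `Message` shape and a `filter_visible` helper (not part of the product API), the allowlist and blocklist compose like this:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str    # agent that produced the message
    kind: str      # "text", "tool_call", or "tool_response"
    content: str

def filter_visible(messages, visible_senders=None, hidden_senders=(),
                   hide_tool_calls=False, hide_tool_responses=False):
    """Drop messages this sub-agent should not see.

    A non-empty visible_senders set acts as an allowlist;
    hidden_senders is applied as a blocklist on top of it.
    """
    out = []
    for m in messages:
        if visible_senders and m.sender not in visible_senders:
            continue
        if m.sender in hidden_senders:
            continue
        if hide_tool_calls and m.kind == "tool_call":
            continue
        if hide_tool_responses and m.kind == "tool_response":
            continue
        out.append(m)
    return out
```

With no settings configured, every message passes through; each toggle only ever removes messages, so the options can be combined freely.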

Memory configuration (Assistant only)

Configure how this sub-agent stores and retrieves long-term memory. Memory is scoped per tenant, per parent agent, and per conversation thread, and it controls both the messages the agent persists after each turn and the context that gets injected before its next turn.
  • Memory enabled: master toggle. When off, the sub-agent only sees the current conversation. No memories are stored or retrieved, no memory tools are registered, and no context is injected. The fields below appear only when this is on.

Storage

  • Store memories: when on, the sub-agent’s messages are persisted after each turn. When off, no new memories are created from this agent’s conversations, but it can still retrieve existing memories.
  • Store to scope: which scope new memories are written to.
    • Thread: short-term memory tied to the current conversation.
    • Agent: medium-term memory shared across threads for this parent agent.
    • Tenant: long-term memory shared across the organization.
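Since memory is scoped per tenant, per parent agent, and per conversation thread, the three storage scopes can be pictured as progressively narrower partition keys. A sketch, assuming a hypothetical `memory_scope_key` helper (the real storage layout is not documented here):

```python
def memory_scope_key(scope, tenant_id, parent_agent_id, thread_id):
    """Build the partition key a new memory is written under.

    Narrower scopes include the broader identifiers: a thread memory
    is visible only within that conversation, an agent memory is
    shared across threads for one parent agent, and a tenant memory
    is shared across the organization.
    """
    if scope == "thread":
        return (tenant_id, parent_agent_id, thread_id)
    if scope == "agent":
        return (tenant_id, parent_agent_id)
    if scope == "tenant":
        return (tenant_id,)
    raise ValueError(f"unknown scope: {scope}")
```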

Retrieval

  • Use memory tools: registers memory query tools (search, summary, recall) so the agent can explicitly look things up in its reasoning. When off, the agent cannot run memory queries directly, but automatic context injection still works.
  • Search scopes: which scopes this agent is allowed to search and inject from. Select any combination of Thread, Agent, and Tenant.
  • Inject memory context: when on, the system uses the latest user message to fetch relevant memories from the selected search scopes and prepends them as a system message before the agent’s next turn.
  • Memory context limit: how many memories to inject when context is auto-injected. Range 1–20. Higher values give the agent more recall but consume more of its context window.
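Automatic context injection, as described above, uses the latest user message as the search query and prepends the hits as a system message. A minimal sketch, where `search` stands in for the platform's memory lookup (its real signature is an assumption):

```python
def inject_memory_context(messages, search, scopes, limit):
    """Prepend relevant memories as a system message before the turn.

    `search(query, scopes, limit)` is a placeholder for the memory
    lookup; it is queried with the latest user message.
    """
    user_msgs = [m for m in messages if m["role"] == "user"]
    if not user_msgs:
        return messages
    hits = search(user_msgs[-1]["content"], scopes, limit)
    if not hits:
        return messages
    context = "Relevant memories:\n" + "\n".join(f"- {h}" for h in hits)
    return [{"role": "system", "content": context}] + messages
```

Note that injection works even when Use memory tools is off; the two retrieval paths are independent.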

Context window management

These fields control how the prompt sent to the LLM is compressed and trimmed before each turn. System prompts are never compressed or trimmed.
  • Allow compression: when on, large messages from other agents and tool responses are summarized once they exceed the threshold. If Use memory tools is also on, the full content is stored to memory before being summarized so the agent can recover it via search_conversation_memory(). User messages and the agent’s own protected messages are never compressed.
  • Compression threshold: the size in characters at which a message becomes a candidate for compression. Defaults to 2000 characters when left empty.
  • Context limit mode: how the size of the prompt window is bounded.
    • Messages: keep the last N non-system messages. Fastest and the default.
    • Tokens: keep messages up to N tokens. Uses tiktoken and falls back to characters if it is not available.
    • Chars: keep messages up to N characters (roughly tokens × 4).
  • Max active messages: shown when mode is Messages. The cap on non-system messages kept in context. Leave empty for no limit.
  • Max context tokens: shown when mode is Tokens. For example, 8000 for a GPT-4 class model. Leave empty for no limit.
  • Max context characters: shown when mode is Chars. Leave empty for no limit.
Tool call and tool response pairs are always kept together during trimming, so the agent never sees an orphaned response.
Before each agent turn the transform runs in this order: visibility filtering → compression → trimming → memory context injection. Storage of the agent’s outgoing messages happens after the turn completes.
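The guarantee that tool call and response pairs survive trimming together can be sketched in Messages mode. This is an illustration, assuming a hypothetical `pair_id` field linking a call to its response; the platform's internal representation may differ:

```python
def trim_messages(messages, max_active):
    """Keep the last `max_active` non-system messages.

    Tool calls and their responses are linked via `pair_id`; if a
    tool response survives the cut, its originating call is rescued
    so the agent never sees an orphaned response.
    """
    system = [m for m in messages if m["role"] == "system"]
    active = [m for m in messages if m["role"] != "system"]
    if max_active is None or len(active) <= max_active:
        return messages  # no limit, or already within budget
    kept = active[-max_active:]
    kept_pairs = {m.get("pair_id") for m in kept if m.get("pair_id")}
    rescued = [m for m in active[:-max_active]
               if m.get("pair_id") in kept_pairs]
    return system + rescued + kept
```

System prompts pass through untouched, matching the rule that they are never compressed or trimmed.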
For cost-sensitive agents, lower Compression threshold, set a tight Max context tokens or Max active messages, and turn Use memory tools on so the agent can pull back compressed details on demand instead of carrying them in every prompt.
If you turn Memory enabled off, the agent loses access to anything outside the current conversation, including any memories it has stored previously. Use Store memories: off with Memory enabled: on if you want a read-only memory consumer.
[Image: Sub-agent memory configuration panel showing storage, retrieval, and context window management fields]

Structured output (Assistant only)

Use structured output when a sub-agent response needs to be predictable JSON instead of free-form text. The schema builder lets you define the exact fields the assistant should return so later workflow steps can reference those values reliably. You can create the schema in two ways:
  • Build it manually by adding fields and properties.
  • Generate it from an existing JSON schema or a sample JSON response.
When you build the schema manually, you can define:
  • Field names and display names for the values the agent should return.
  • Types such as string, integer, number, boolean, object, array, and object array.
  • Required fields that must be present in the agent response.
  • Descriptions that tell the agent what each field means.
  • Options when a value should come from a known list.
  • Defaults, minimums, maximums, and length limits for supported field types.
  • Nested object properties for structured records such as addresses, line items, or extracted entities.
Use structured output when another step needs to map a sub-agent result into a tool input, rule condition, approval message, or notification.
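The schema the builder produces corresponds to a standard JSON Schema. As an illustration, a hypothetical order-extraction sub-agent (field names are invented for this example, not part of the product) might use a schema like the following, with a simple check for missing required fields:

```python
import json

# Hypothetical schema for an order-extraction sub-agent.
schema = {
    "type": "object",
    "required": ["customer_name", "total", "line_items"],
    "properties": {
        "customer_name": {"type": "string",
                          "description": "Full name on the order"},
        "total": {"type": "number", "minimum": 0},
        "status": {"type": "string",
                   "enum": ["new", "paid", "shipped"]},  # options list
        "line_items": {                                  # object array
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "sku": {"type": "string"},
                    "quantity": {"type": "integer", "minimum": 1},
                },
            },
        },
    },
}

def missing_required(response_json, schema):
    """Return required fields absent from the agent's JSON response."""
    data = json.loads(response_json)
    return [f for f in schema["required"] if f not in data]
```

Because the response is guaranteed to carry these named fields, a later workflow step can map `total` or `line_items` directly into a tool input or rule condition.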
[Image: Structured output schema builder for a sub-agent]

Save changes

Use Add Sub Agent or Update Sub Agent to save. Validation errors appear inline for required fields.