This page describes the intended high-level message-mapping model for LangGraph in Aion. It complements the lower-level adapter rules in LangGraph Message Mapping.

Overview

The default LangGraph message-mapping flow is:
  1. A distribution or client sends an A2A request into Aion Server.
  2. Aion maps conversational text into LangGraph state.
  3. Aion injects request-scoped Aion context at invocation time.
  4. Your graph writes to the response buffer, a2a_outbox, or normal LangGraph output.
  5. Aion turns that result back into an A2A Message or Task.
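Steps 2 and 5 of this flow can be sketched in plain Python; `map_inbound` and `resolve_outbound` are illustrative names for this sketch, not real Aion APIs:

```python
def map_inbound(a2a_text: str) -> dict:
    # Step 2: conversational text from the A2A request becomes LangGraph state.
    return {"messages": [{"role": "user", "content": a2a_text}]}


def resolve_outbound(result: dict) -> str:
    # Step 5: the last agent-authored message in the run result becomes
    # the reply text of the outbound A2A Message.
    return result["messages"][-1]["content"]
```

Steps 3 and 4 happen in between: Aion attaches request-scoped context at invocation time, and the graph produces output through one of the channels described below.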

Inbound Surfaces

There are four relevant surfaces when a LangGraph run starts:
  • state.messages: Model-facing transcript for ordinary LangGraph and LangChain logic
  • runtime.context: Planned Aion-aware request context, including thread, message, and event
  • a2a_inbox: Hybrid escape hatch for raw A2A task, message, and request metadata
  • config["configurable"]["thread_id"]: LangGraph’s own checkpoint thread identifier
The important distinction is that LangGraph’s thread_id is not the same thing as Aion’s inbound messaging thread or context identifier. thread_id belongs to LangGraph persistence. The inbound distribution context should instead surface through the Aion request context.
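The separation can be illustrated with plain dicts; the Aion side below is a stand-in for the planned request context object, not the real AionContext API:

```python
# LangGraph's checkpoint thread id lives in the run config.
config = {"configurable": {"thread_id": "lg-checkpoint-123"}}

# Aion's inbound messaging thread would surface via the request context
# instead (a stand-in dict here; the real object is the planned AionContext).
aion_context = {"thread": {"id": "aion-thread-456"}}

# The two identifiers are unrelated: one keys LangGraph persistence,
# the other identifies the inbound conversation.
langgraph_thread = config["configurable"]["thread_id"]
aion_thread = aion_context["thread"]["id"]
```

Conflating the two is the most common mapping mistake: reusing the inbound conversation id as the checkpoint `thread_id` couples persistence to transport details.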

Runtime Context vs Graph State

The fluent LangGraph SDK should prefer invocation-scoped context for transport metadata and routing:
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import add_messages
from langgraph.runtime import Runtime

from aion.langgraph import AionContext


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


async def node(state: State, runtime: Runtime[AionContext]) -> dict:
    # Request-scoped Aion context: inbound message and originating thread.
    inbound = runtime.context.message
    thread = runtime.context.thread

    if inbound and inbound.text == "/help":
        # Reply through the helper; no state update is needed.
        await thread.reply("Here is what I can do.")
        return {}

    return {}
That keeps LangGraph state focused on model-facing data while request routing, normalized event metadata, and outbound buffering stay scoped to the current turn. Hybrid authoring should still remain available:
from typing import Annotated, Optional, TypedDict

from aion.shared.types import A2AInbox
from langchain_core.messages import AnyMessage
from langgraph.graph import add_messages


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    a2a_inbox: Optional[A2AInbox]
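A node can then branch on the raw inbox when it is present. The sketch below uses a dict stand-in because the exact A2AInbox shape is defined by aion.shared.types; the "task" key is an assumed field name:

```python
from typing import Any, Optional


def classify_inbox(a2a_inbox: Optional[dict[str, Any]]) -> str:
    # Degrade gracefully when no raw A2A metadata was attached to state.
    if a2a_inbox is None:
        return "no-protocol-metadata"
    # "task" is an assumed key, standing in for the real A2AInbox attribute.
    return "task" if a2a_inbox.get("task") else "message-only"
```

This keeps protocol-aware logic opt-in: nodes that never read a2a_inbox behave exactly like ordinary LangGraph nodes.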

Outbound Resolution

Aion should resolve the outbound reply in this order:
  1. SDK-managed response buffer. Helper-emitted reply content and streamed output intended to become the durable reply live here first.
  2. a2a_outbox. Use this when you want full protocol-level control over the outbound A2A object.
  3. Framework-native fallback. If neither higher-precedence channel is used, Aion falls back to the current turn’s streamed chunks and then the last agent-authored AIMessage in LangGraph state.
This is why the fluent API should write through thread.reply(...), thread.post(...), and related helpers instead of forcing authors to choose between custom stream events and final message construction.
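The precedence order can be expressed as a small pure function; this is a sketch of the intended resolution logic, not Aion's actual implementation:

```python
from typing import Any, Optional


def resolve_reply(
    response_buffer: list[str],
    a2a_outbox: Optional[dict[str, Any]],
    streamed_chunks: list[str],
    last_ai_message: Optional[str],
) -> Any:
    # 1. The SDK-managed response buffer wins when helpers wrote to it.
    if response_buffer:
        return "".join(response_buffer)
    # 2. Otherwise an explicit a2a_outbox object is used verbatim.
    if a2a_outbox is not None:
        return a2a_outbox
    # 3. Framework-native fallback: streamed chunks, then the last AIMessage.
    if streamed_chunks:
        return "".join(streamed_chunks)
    return last_ai_message
```

Because the buffer sits first, helper calls like thread.reply(...) always win over whatever the graph happens to stream or leave in state.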

What This Means for Distribution Flows

For messaging distributions, the default target should remain inherited from the inbound request:
  • DM in, DM out
  • thread reply in, thread reply out
  • mention in shared conversation, reply in that same conversation
In the normal case, the graph should not need to rebuild that target manually. If an author wants to override it, that should become an explicit outbound action through the response buffer or a2a_outbox.
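An explicit override might then look like a node writing the outbox itself. The key names below ("target", "conversation_id", "text") are assumed shapes for illustration, not a confirmed Aion schema:

```python
def escalate_node(state: dict) -> dict:
    # Explicitly redirect the reply instead of inheriting the inbound target.
    # Field names here are illustrative, not a confirmed A2A outbox schema.
    return {
        "a2a_outbox": {
            "target": {"conversation_id": "ops-channel"},
            "text": "Escalating this request to the ops channel.",
        }
    }
```

Everything else keeps the inherited target, so overrides stay visible as deliberate, self-documenting actions in the graph.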