This page documents the intended user-facing behavior for a Slack distribution. The integration, trigger configuration, and framework helpers described here are design targets and work in progress rather than shipped functionality.
Slack distributions connect Slack workspaces, channels, threads, and direct messages to Aion agent workflows. The goal is to let developers configure what should count as an inbound trigger, normalize that trigger into a consistent A2A request shape, and then reply back into the same Slack context by default.

Overview

The default behavior is:
  1. Configure which Slack triggers should result in inbound agent requests.
  2. Normalize each selected Slack event into an A2A SendMessage request plus distribution and event metadata.
  3. Let the framework adapter determine the agent response using its normal default mapping rules.
  4. Deliver the selected response back to Slack in the same conversation context that produced the request.
That last step is the key behavior to preserve across platforms. If the inbound trigger was a DM, the default response goes back to that DM. If the inbound trigger was a thread reply, the default response goes back to that thread. If the inbound trigger was an app mention inside a channel, the default response stays in that channel context. For the lower-level transport contract, see Distribution and Event.
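A minimal sketch of steps 2 and 4, assuming illustrative field names (the actual request shape and metadata keys are still WIP and may differ):

```python
# Hypothetical sketch: normalize a Slack event into an A2A-style
# SendMessage request, then derive the default reply target from it.
# All field names here are illustrative assumptions, not a shipped API.

def normalize_slack_event(event: dict) -> dict:
    """Turn a raw Slack event into a SendMessage-shaped request."""
    return {
        "message": {
            "role": "user",
            "parts": [{"text": event.get("text", "")}],
        },
        "metadata": {
            "slack": {
                "channel": event["channel"],
                "thread_ts": event.get("thread_ts"),
                "event_type": event["type"],
            }
        },
    }


def default_reply_target(request: dict) -> dict:
    """Reply into the same conversation context that produced the request."""
    slack = request["metadata"]["slack"]
    return {
        "channel": slack["channel"],
        # A thread reply stays in its thread; a top-level message
        # (DM or channel) gets a top-level reply.
        "thread_ts": slack["thread_ts"],
    }
```

The point of the sketch is that the reply target is computed purely from the preserved inbound context, never from the agent's output.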

Default Request Loop

Configuration

WIP.

Message Mapping

Slack distributions should map inbound and outbound messages through the same shared transport contracts used by other messaging integrations, while still preserving Slack-specific context such as DM, channel, and thread identity.

Features

Mentions

Slack mentions are the most obvious “message as trigger” behavior. In the default flow, an app mention becomes an inbound message event with Slack context preserved in the request metadata.
A LangGraph graph should receive the mention as a normal user message in state.messages, while the full transport envelope remains available in a2a_inbox.
from typing import Annotated, Optional, TypedDict
from langchain_core.messages import AIMessage, BaseMessage
from langgraph.graph import add_messages

from aion.shared.types import A2AInbox


class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    a2a_inbox: Optional[A2AInbox]


def reply_to_mention(state: AgentState) -> dict:
    # Full transport envelope, including Slack context, available if needed.
    inbox = state.get("a2a_inbox")
    mention = state["messages"][-1]

    return {
        "messages": [
            AIMessage(
                content=f"I saw your Slack mention: {mention.content}"
            )
        ]
    }
The adapter should treat the last AIMessage as the default reply and send it back to the same Slack conversation.

Reactions

Reactions are better treated as activity events than normal user messages. They should still reach the agent through the same distribution boundary, but they are not expected to append a user text message by default.
Reaction events should be inspected from the raw inbound envelope rather than assumed to be a conversational text turn.
def on_reaction(state: AgentState) -> dict:
    inbox = state.get("a2a_inbox")
    if inbox is None:
        return {}

    event_type = inbox.message.metadata[
        "https://docs.aion.to/a2a/extensions/aion/event/1.0.0"
    ]["type"]

    if event_type == "to.aion.distribution.activity.1.0.0":
        return {"messages": [AIMessage(content="Thanks for the reaction.")]}

    return {}
The default response policy is still “reply in the same context” unless the graph overrides it.

Cards

Slack cards should be treated as a higher-level rendering of a generic card document rather than as a Slack-only response primitive. The intent is to let the framework emit a provider-neutral card file and let the distribution adapt it to Slack Block Kit.
from a2a.types import Message, Part, Role
from aion.shared.types import A2AOutbox


def send_card(state: AgentState) -> dict:
    # Assumes AgentState also declares an `a2a_outbox` field.
    card_document = """
    <Card title="Build completed">
      <Text>All checks passed.</Text>
      <Actions>
        <Button url="https://example.com/run/42">Open run</Button>
      </Actions>
    </Card>
    """.strip()

    return {
        "a2a_outbox": A2AOutbox(
            message=Message(
                role=Role.ROLE_AGENT,
                parts=[
                    Part(text="Build completed"),
                    Part(
                        raw=card_document,
                        filename="build-completed.card.jsx",
                        mediaType="application/vnd.aion.card+jsx",
                    ),
                ],
            )
        )
    }
The distribution should translate the card document into Block Kit while preserving the plain text part as a fallback.
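A sketch of that translation, assuming the card document has already been parsed into a simple dict (the parsed shape is hypothetical; the "header", "section", and "actions" block types are standard Block Kit):

```python
# Hypothetical sketch: map a parsed, provider-neutral card structure
# onto Slack Block Kit blocks. The parsed-card dict shape is an
# assumption; the Block Kit block types used here are real.

def card_to_blocks(card: dict) -> list[dict]:
    blocks = [
        {"type": "header", "text": {"type": "plain_text", "text": card["title"]}},
    ]
    for line in card.get("text", []):
        blocks.append(
            {"type": "section", "text": {"type": "mrkdwn", "text": line}}
        )
    buttons = [
        {
            "type": "button",
            "text": {"type": "plain_text", "text": b["label"]},
            "url": b["url"],
        }
        for b in card.get("buttons", [])
    ]
    if buttons:
        blocks.append({"type": "actions", "elements": buttons})
    return blocks
```

The plain text part from the message would be sent alongside as the notification fallback, which is why the card example above carries both parts.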

Streaming

Streaming should work without changing the conversation target. The distribution keeps the Slack reply anchor fixed and updates the same provider message as new chunks arrive.
1. Manual stream reply
from langchain_core.messages import AIMessageChunk
from langgraph.types import StreamWriter
from aion.langgraph import emit_message


def stream_reply(state: AgentState, writer: StreamWriter):
    emit_message(writer, AIMessageChunk(content="Let"))
    emit_message(writer, AIMessageChunk(content="'s walk through the answer "))
    emit_message(writer, AIMessageChunk(content="step by step."))
    return state
The distribution should post the Slack reply once and then edit that message as the stream advances.

2. LLM output, with LangGraph’s nostream tag

Use this pattern when the node calls an LLM as part of internal reasoning or shaping work and you do not want those raw token events to become the visible Slack reply stream.
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage


model = ChatOpenAI(model="gpt-4.1-mini", streaming=True)
internal_model = model.with_config({"tags": ["nostream"]})


async def stream_reply_from_model_without_token_stream(
    state: AgentState,
) -> dict:
    # LangGraph omits `messages`-mode token events for invocations
    # tagged with `nostream`.
    final = await internal_model.ainvoke(state["messages"])
    return {"messages": [AIMessage(content=final.content)]}
This is the safer default when the LLM call is intermediate work. The final AIMessage still becomes the Slack reply, but the model’s token-by-token events should not be forwarded as the live stream.

3. LLM output is the stream

Use this pattern when the model output itself is the message you want the user to see streaming in Slack. In that case, let LangGraph emit the model’s messages-mode token events and let Aion capture them as the live reply stream.
from langchain_openai import ChatOpenAI


model = ChatOpenAI(model="gpt-4.1-mini", streaming=True)


async def stream_reply_from_model(state: AgentState) -> dict:
    # No `nostream` tag here. These token events are the
    # user-visible Slack stream.
    final = await model.ainvoke(state["messages"])
    return {"messages": [final]}
In this variation, the model tokens are expected to be the user-visible reply. Aion should map those LangGraph stream events into incremental Slack message updates and then finalize the reply with the returned AIMessage.
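Across all three patterns, the delivery side reduces to the same post-once-then-edit loop. A sketch, where `client` stands in for a Slack Web API client exposing the real `chat_postMessage` and `chat_update` methods (the surrounding plumbing is assumed):

```python
# Sketch of the post-once-then-edit delivery loop that keeps the
# Slack reply anchor fixed while the stream advances.

def deliver_stream(client, channel: str, chunks) -> str:
    text = ""
    ts = None
    for chunk in chunks:
        text += chunk
        if ts is None:
            # First chunk: post the anchor message exactly once.
            ts = client.chat_postMessage(channel=channel, text=text)["ts"]
        else:
            # Later chunks: edit the same message in place.
            client.chat_update(channel=channel, ts=ts, text=text)
    return text
```

Because the anchor `ts` never changes, the conversation target stays fixed regardless of how many chunks arrive.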

DMs

DMs are the simplest Slack request loop. They should map to trajectory = "direct-message" and keep the same DM context by default on the way back out.
def on_slack_dm(state: AgentState) -> dict:
    inbound = state["messages"][-1]
    return {
        "messages": [
            AIMessage(content=f"Slack DM received: {inbound.content}")
        ]
    }