This page documents the intended user-facing behavior for a Slack distribution. The integration, trigger configuration, and framework helpers described here are design targets and work in progress rather than shipped functionality.

Slack distributions connect Slack workspaces, channels, threads, and direct messages to Aion agent workflows. The goal is to let developers configure what should count as an inbound trigger, normalize that trigger into a consistent A2A request shape, and then reply back into the same Slack context by default.
Overview
The default behavior is:

- Configure which Slack triggers should result in inbound agent requests.
- Normalize each selected Slack event into an A2A SendMessage request plus distribution and event metadata.
- Let the framework adapter determine the agent response using its normal default mapping rules.
- Deliver the selected response back to Slack in the same conversation context that produced the request.
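As a rough illustration of this loop, the sketch below normalizes a Slack event into an A2A-style `message/send` payload with the Slack context carried in metadata. The function name and the exact metadata keys are hypothetical, not part of a shipped API.

```python
# Hypothetical sketch: normalize a Slack event into an A2A-style
# SendMessage request. The metadata keys (distribution, event_type,
# channel, thread_ts) are illustrative assumptions.

def normalize_slack_event(event: dict) -> dict:
    """Turn a raw Slack event into an A2A-style message/send payload."""
    return {
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": event.get("text", "")}],
            },
            "metadata": {
                # Slack context is preserved so the reply can target the
                # same conversation (channel, thread, or DM) by default.
                "distribution": "slack",
                "event_type": event["type"],
                "channel": event["channel"],
                "thread_ts": event.get("thread_ts", event.get("ts")),
            },
        },
    }

request = normalize_slack_event(
    {"type": "app_mention", "channel": "C123", "ts": "1700000000.0001",
     "text": "<@U999> summarize this thread"}
)
```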
Default Request Loop
Configuration
WIP.

Message Mapping
Slack distributions should map inbound and outbound messages through the same shared transport contracts used by other messaging integrations, while still preserving Slack-specific context such as DM, channel, and thread identity.

Inbound

- Protocol-level request metadata and event identity are defined by Distribution, Event, and Distribution/Messaging.
- Framework-level request mapping is described in LangGraph Message Mapping and Google ADK Message Mapping.
Outbound

- Default response precedence is: SDK-managed response buffer first, explicit a2a_outbox second, and framework-native fallback third.
- Structured outbound parts are defined by Distribution/Messaging and Distribution/Cards.
- Framework-level response mapping is described in LangGraph Message Mapping and Google ADK Message Mapping.
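The outbound precedence above can be sketched as a simple selection function. The parameter names (`buffer`, `a2a_outbox`, `framework_fallback`) are illustrative stand-ins for the three sources, not real SDK identifiers.

```python
# Illustrative sketch of the stated response precedence:
# SDK-managed buffer > explicit a2a_outbox > framework-native fallback.

def select_response(buffer, a2a_outbox, framework_fallback):
    if buffer:
        # SDK-managed response buffer wins when anything was written to it.
        return ("buffer", buffer)
    if a2a_outbox:
        # Otherwise take the most recent explicitly queued outbound message.
        return ("a2a_outbox", a2a_outbox[-1])
    # Finally fall back to the framework's native response mapping.
    return ("framework", framework_fallback)

choice = select_response([], [{"text": "queued reply"}], "last AIMessage")
```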
Features
Mentions
Slack mentions are the most obvious “message as trigger” behavior. In the default flow, an app mention becomes an inbound message event with Slack context preserved in the request metadata.
A LangGraph graph should receive the mention as a normal user message in state.messages, while the full transport envelope remains available in a2a_inbox. The adapter should treat the last AIMessage as the default reply and send it back to the same Slack conversation.
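A minimal sketch of that mention flow, using plain dictionaries in place of real LangGraph state. The state.messages and a2a_inbox key names follow the text above; everything else (envelope layout, message shape) is an assumption.

```python
# Sketch with plain dicts standing in for LangGraph state: the mention
# arrives as the last user message, while the raw transport envelope
# stays available under a2a_inbox. Envelope layout is assumed.

def mention_node(state: dict) -> dict:
    user_text = state["messages"][-1]["content"]         # normalized mention text
    envelope = state["a2a_inbox"][-1]                    # full inbound envelope
    channel = envelope["params"]["metadata"]["channel"]  # Slack context survives
    reply = f"Got your mention in {channel}: {user_text}"
    # The adapter would treat this appended assistant message (the last
    # AIMessage in a real graph) as the default Slack reply.
    return {"messages": [{"role": "assistant", "content": reply}]}

state = {
    "messages": [{"role": "user", "content": "<@U999> status?"}],
    "a2a_inbox": [{"params": {"metadata": {"channel": "C42"}}}],
}
update = mention_node(state)
```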
Reactions

Reactions are better treated as activity events than normal user messages. They should still reach the agent through the same distribution boundary, but they are not expected to append a user text message by default.
Reaction events should be inspected from the raw inbound envelope rather than assumed to be a conversational text turn. The default response policy is still “reply in the same context” unless the graph overrides it.
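A sketch of that inspection, again with plain dictionaries standing in for graph state. The `raw_event` passthrough and the envelope layout are assumptions; the `reaction_added` field names mirror Slack's real reaction event payload.

```python
# Hypothetical sketch: a reaction event is read from the raw inbound
# envelope instead of being treated as a user text turn, so no user
# message is appended by default.

def handle_activity(state: dict) -> dict:
    envelope = state["a2a_inbox"][-1]
    meta = envelope["params"]["metadata"]
    if meta.get("event_type") == "reaction_added":
        raw = meta["raw_event"]  # assumed passthrough of the Slack event
        note = f"reaction :{raw['reaction']}: on {raw['item']['ts']}"
        # Record activity only; the graph may still choose to reply in
        # the same context by emitting an assistant message itself.
        return {"activity_log": [note]}
    return {}

event_state = {"a2a_inbox": [{"params": {"metadata": {
    "event_type": "reaction_added",
    "raw_event": {"reaction": "thumbsup", "item": {"ts": "1700.1"}},
}}}]}
update = handle_activity(event_state)
```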
Cards
Slack cards should be treated as a higher-level rendering of a generic card document rather than as a Slack-only response primitive. The intent is to let the framework emit a provider-neutral card file and let the distribution adapt it to Slack Block Kit.
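One way such an adapter could look. The generic card fields (`title`, `body`, `actions`) are an assumed schema, not a finalized contract, while the emitted shapes follow Slack Block Kit's header, section, and actions blocks.

```python
# Illustrative adapter from a provider-neutral card document to Slack
# Block Kit blocks. The input card schema is an assumption.

def card_to_blocks(card: dict) -> list:
    blocks = [
        # Block Kit header blocks require plain_text.
        {"type": "header",
         "text": {"type": "plain_text", "text": card["title"]}},
        # Section body rendered as Slack mrkdwn.
        {"type": "section",
         "text": {"type": "mrkdwn", "text": card["body"]}},
    ]
    if card.get("actions"):
        blocks.append({
            "type": "actions",
            "elements": [
                {"type": "button",
                 "text": {"type": "plain_text", "text": a["label"]},
                 "action_id": a["id"]}
                for a in card["actions"]
            ],
        })
    return blocks

blocks = card_to_blocks({
    "title": "Build status",
    "body": "*All checks passed.*",
    "actions": [{"label": "Rerun", "id": "rerun"}],
})
```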
Streaming
Streaming should work without changing the conversation target. The distribution keeps the Slack reply anchor fixed and updates the same provider message as new chunks arrive.
1. Manual stream reply

The distribution should post the Slack reply once and then edit that message as the stream advances.

2. LLM output, with LangGraph’s nostream tag

Use this pattern when the node calls an LLM as part of internal reasoning or shaping work and you do not want those raw token events to become the visible Slack reply stream. This is the safer default when the LLM call is intermediate work. The final AIMessage still becomes the Slack reply, but the model’s token-by-token events should not be forwarded as the live stream.

3. LLM output is the stream

Use this pattern when the model output itself is the message you want the user to see streaming in Slack. In that case, let LangGraph emit the model’s messages-mode token events and let Aion capture them as the live reply stream. In this variation, the model tokens are expected to be the user-visible reply. Aion should map those LangGraph stream events into incremental Slack message updates and then finalize the reply with the returned AIMessage.
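The manual variant above can be sketched as a post-once-then-edit loop. `chat.postMessage` and `chat.update` are real Slack Web API methods, but the client wrapper and wiring here are illustrative; a fake client is included so the sketch runs standalone.

```python
# Sketch of the "post once, then edit" streaming loop: the first chunk
# creates the anchor message, later chunks edit that same message.

def stream_reply(client, channel, thread_ts, chunks):
    ts = None
    text = ""
    for chunk in chunks:
        text += chunk
        if ts is None:
            # First chunk: create the anchor message in the same thread.
            resp = client.post_message(channel=channel,
                                       thread_ts=thread_ts, text=text)
            ts = resp["ts"]
        else:
            # Later chunks: edit the same provider message in place.
            client.update(channel=channel, ts=ts, text=text)
    return ts, text

class _FakeSlack:
    """Minimal stand-in so the sketch runs without a Slack token."""
    def __init__(self):
        self.calls = []
    def post_message(self, **kw):
        self.calls.append(("chat.postMessage", kw))
        return {"ts": "1700000000.0001"}
    def update(self, **kw):
        self.calls.append(("chat.update", kw))

client = _FakeSlack()
ts, final = stream_reply(client, "C123", "1699.5", ["Hel", "lo ", "world"])
```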
DMs

DMs are the simplest Slack request loop. They should map to trajectory = "direct-message" and keep the same DM context by default on the way back out.
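A minimal sketch of that mapping; the metadata keys are assumptions, and only `trajectory = "direct-message"` comes from the text above.

```python
# Hypothetical sketch: a Slack DM event maps to the "direct-message"
# trajectory and keeps the DM channel as the default reply target.

def dm_metadata(event: dict) -> dict:
    return {
        "distribution": "slack",
        "trajectory": "direct-message",
        # Replying to the same channel id routes back into the DM.
        "channel": event["channel"],
    }

meta = dm_metadata({"type": "message", "channel_type": "im",
                    "channel": "D456"})
```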