Last updated April 21, 2026
Chat
Chat is ContextDX's conversational AI — the thing that turns "I need a context map for our payment system" into an actual, structured board populated with real components and relationships. It's not a chatbot. It's an architecture partner that understands your system context, reads your sources, and executes multi-step workflows to build and evolve your boards.
Getting started with chat
- Open a board (the Master Board works fine for your first conversation)
- Attach at least one source to give chat real context to work with
- Open the chat panel and type a natural language request
- Watch the real-time streaming as nodes and edges appear on your board
- Review what was created and refine with follow-up messages
How it works
Chat runs multi-step workflows rather than single prompt-response exchanges. When you send a message, the system:
- Initializes — Resolves which workflow to run based on your intent, board state, and context
- Plans — Builds a prompt with board metadata, source context, and available tools
- Executes — The AI works through the workflow steps, calling tools to create or modify board elements
- Streams — Results flow back in real time so you see progress as it happens
- Finalizes — The conversation is summarized and the thread is persisted
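The five phases above can be sketched as a single function. This is an illustrative outline only, not ContextDX's actual API — every name here (`ChatRun`, `run_chat`, the step names) is a placeholder for whatever the real system uses.

```python
from dataclasses import dataclass, field

@dataclass
class ChatRun:
    intent: str = ""
    events: list = field(default_factory=list)  # streamed to the UI as they occur
    summary: str = ""

def run_chat(message: str) -> ChatRun:
    run = ChatRun()
    # 1. Initialize: resolve a workflow from the user's intent and board state
    run.intent = "build_board" if "map" in message else "discuss"
    # 2. Plan: assemble a prompt from board metadata, sources, and tools
    prompt = f"[board metadata] [source context] user: {message}"
    # 3. Execute: step through the workflow, invoking tools along the way
    for step in ("create_nodes", "create_edges"):
        # 4. Stream: emit one event per step so progress shows immediately
        run.events.append({"step": step, "status": "done"})
    # 5. Finalize: summarize the conversation and persist the thread
    run.summary = f"Handled '{run.intent}' in {len(run.events)} steps"
    return run
```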
Conversation rooms
Every interaction happens inside a conversation room — a space where human users and AI agents participate together. Each room tracks:
- Participants — who's in the conversation (humans and agents), with status like invited, joined, left, or removed
- Threads — separate conversation threads within the room, allowing branched discussions
- Messages — the full history, from human prompts to streaming agent responses to completion markers
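A room, then, is roughly participants plus named threads of messages. The sketch below is one plausible shape for that data model, assuming the participant statuses listed above; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class ParticipantStatus(Enum):
    INVITED = "invited"
    JOINED = "joined"
    LEFT = "left"
    REMOVED = "removed"

@dataclass
class Participant:
    name: str
    kind: str  # "human" or "agent"
    status: ParticipantStatus = ParticipantStatus.INVITED

@dataclass
class Thread:
    messages: list = field(default_factory=list)  # full history, in order

@dataclass
class ConversationRoom:
    participants: list = field(default_factory=list)
    threads: dict = field(default_factory=dict)  # thread id -> Thread

    def join(self, participant: Participant) -> None:
        participant.status = ParticipantStatus.JOINED
        self.participants.append(participant)

    def post(self, thread_id: str, message: str) -> None:
        # Branched discussions live in separate threads within the same room
        self.threads.setdefault(thread_id, Thread()).messages.append(message)
```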
The UI shows progress immediately — when you send a message, the response placeholder appears before the AI has written a single word. No "thinking…" spinner, no waiting in the dark.
Workflows
Each conversation runs on a workflow — a defined sequence of steps chat follows to handle your request. Workflows can be nested: a parent workflow can spawn child workflows for sub-tasks, each tracked independently.
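Nesting means each child run keeps its own state while staying linked to its parent. A minimal sketch of that parent/child tracking, with invented names, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowRun:
    name: str
    parent: object = None          # the spawning run, or None for a root workflow
    children: list = field(default_factory=list)
    status: str = "running"

    def spawn(self, name: str) -> "WorkflowRun":
        """Start a child workflow for a sub-task, tracked independently."""
        child = WorkflowRun(name=name, parent=self)
        self.children.append(child)
        return child
```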
How workflows execute
Starting context
Chat picks a starting posture based on your board:
| Starting posture | When it applies |
|---|---|
| Empty | New workspaces starting from scratch — chat begins with a blank canvas and builds from sources |
| Hydrated | Established workspaces with existing board data — chat has full context of current nodes, edges, and relationships |
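The table reduces to a simple check on whether the board already holds data. A sketch of that decision (the function name and the idea of passing node/edge lists are assumptions, not the real implementation):

```python
def starting_posture(nodes: list, edges: list) -> str:
    """'empty' for a blank canvas built from sources; 'hydrated' when the
    board already has nodes, edges, and relationships to use as context."""
    return "hydrated" if nodes or edges else "empty"
```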
What each step can do
The workflow runtime resolves the right workflow for your intent, loads board state, and begins step-by-step execution. Each step can:
- Call LLM APIs with assembled prompts
- Invoke tools (create nodes, modify edges, query sources)
- Branch to child workflows for complex sub-tasks
- Emit real-time events so you see progress as it happens
Workflow runs are persisted — you can inspect the execution trace of any chat conversation to see exactly which steps ran, what tools were called, and what decisions the AI made.
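Putting step execution and trace persistence together, one plausible shape for the runtime loop is below. Everything here is illustrative: the step/tool dictionaries and the `execute_workflow` name are assumptions about structure, not the real engine.

```python
def execute_workflow(steps: list, tools: dict, trace: list) -> list:
    """Run each step, invoke its tool calls, and append a record per step
    to `trace` so the run can be inspected after the fact."""
    for step in steps:
        record = {"step": step["name"], "tool_calls": []}
        for tool_name, args in step.get("calls", []):
            result = tools[tool_name](**args)  # e.g. create a node on the board
            record["tool_calls"].append({"tool": tool_name, "result": result})
        trace.append(record)  # persisted: the execution trace of this step
    return trace
```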
The workflow system is what makes chat more than a simple prompt-response chatbot. It plans multi-step work, branches into sub-tasks, and maintains state across turns. It's an execution engine, not a text generator.
Board orchestration
The Board Orchestrator is chat's primary workflow for board-related conversations. It supports four prompt modes:
One of these is general conversation mode. Use it for:
- Asking questions about your architecture
- Getting explanations of existing nodes and relationships
- General discussion about system design decisions
The orchestrator dynamically picks which tools are available based on board type, context, and the current turn's intent.
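That dynamic selection amounts to filtering a tool registry against what the current board and turn support. A hedged sketch, with invented tool names and capability flags:

```python
# Which capabilities each tool requires before it is offered to the AI.
# These tool names and flags are illustrative, not ContextDX's real registry.
TOOL_REQUIREMENTS = {
    "create_node":  {"writes_board"},
    "update_edge":  {"writes_board"},
    "query_source": {"has_sources"},
    "explain":      set(),  # always available
}

def available_tools(capabilities: set) -> list:
    """Return only the tools whose requirements the current context meets."""
    return [name for name, needs in TOOL_REQUIREMENTS.items()
            if needs <= capabilities]
```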
Real-time streaming
Chat streams events as work progresses. You'll see:
- Step traces — which workflow step is running and what it's doing
- Tool calls — each tool invocation (create node, update edge, query source) as it happens
- Agent messages — incremental response text streamed as it's generated
- Task completion — final status when the workflow completes
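A client consuming this stream folds those four event types into what you see on screen. The event shapes below are guesses at the wire format, shown only to make the flow concrete:

```python
def render_stream(events: list) -> list:
    """Fold a stream of chat events into displayable lines, accumulating
    incremental agent text into one message."""
    lines = []
    text = ""
    for ev in events:
        if ev["type"] == "step_trace":
            lines.append(f"step: {ev['name']}")
        elif ev["type"] == "tool_call":
            lines.append(f"tool: {ev['tool']}")
        elif ev["type"] == "agent_message":
            text += ev["delta"]  # response text arrives in chunks
        elif ev["type"] == "task_complete":
            lines.append(f"done: {ev['status']}")
    if text:
        lines.append(text)
    return lines
```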
This means you don't wait on a "thinking…" spinner — you watch your architecture take shape in real time.
Chat gets smarter the more context you give it. Attach governing sources to your board, fill in node descriptions, and your conversations will produce dramatically better results. Garbage in, garbage out — but context in, architecture out.