For as long as computers have done the math, analytics has been a conversation between a machine and a person. The person figures out what to ask. The machine runs the numbers. The person decides whether the answer makes sense and what to do about it on Monday morning. That back-and-forth has been the work of analytics for decades.
Most organizations have plenty of dashboards. The follow-up question, the second cut, the "so what do we do about it"—that work lives in analyst queues, Slack threads, and meeting notes.
At Tableau Conference 2026, Tableau introduced a way to pull that work into the conversation itself by adding a third voice: an agent.
Ask a question in plain English, and the agent will pull the data, notice what changed, and draft what to do next. Three new capabilities make that possible: Conversational Analytics, Tableau MCP, and Agentified Actions. All are shipping or in beta now. The agent does a little of the person’s job and a little of the machine’s.
Disney showed what that looks like from the keynote stage. Within three years, they said, their teams won't open dashboards at all. They'll talk to their data and act on what they hear. That's the third voice fully in the conversation: the agent shaping questions, pulling answers, and proposing actions while people focus on judgment.
Tableau organized its announcements around three pillars: Architecting Knowledge, Powering Decisions, and Agentifying Actions. Each one moves an organization closer to the picture Disney described, and each has its own role to play.
Architecting Knowledge: Giving the Agent Something to Work With
Before an agent can join the conversation, it has to know what your organization is talking about.
Composable Data Sources (GA in June) lets teams combine multiple published data sources into a single unified view, joining across sources without rebuilding dashboards every time the underlying structure changes. The catch: this only works if your published data sources are governed. If they aren't, the feature combines inconsistency faster than it fixes it.
The Auto Knowledge Graph (GA in July) is the engine underneath conversational analytics. It builds automatically from your data, learns over time, and can be edited to reflect how your organization actually talks about itself. The graph is what keeps natural-language querying from confidently returning the wrong answer, and it’s only as accurate as the curation behind it. Organizations that have treated their semantic layer as optional will find conversational analytics inherits every gap.
Powering Decisions: Letting the Agent Speak
Conversational Analytics is live. Teams ask Tableau questions in plain English and get governed answers back. No filters, no pivot tables, no analyst in the loop. The agent’s answers can vary from one run to the next, though, which means any metric a CFO has to sign off on should call on a deterministic definition rather than letting the agent generate the number.
Tableau MCP is also GA. It’s the integration layer that lets Tableau’s governed answers surface inside Claude, ChatGPT, Slack, Salesforce, and Teams. When analytics live in the tools people already use, the friction to using them mostly disappears. Once an answer can surface in any tool, access control becomes a design decision rather than a default.
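For teams planning an integration, registering an MCP server in a Claude Desktop-style client typically looks something like the sketch below. This is an illustration of the general MCP configuration pattern, not Tableau's documented setup: the package name, environment variable names, and values are all placeholders, so check the Tableau MCP documentation for the real ones.

```json
{
  "mcpServers": {
    "tableau": {
      "command": "npx",
      "args": ["-y", "tableau-mcp-server"],
      "env": {
        "TABLEAU_SERVER_URL": "https://your-server.example.com",
        "TABLEAU_SITE": "your-site-name",
        "TABLEAU_PAT_NAME": "your-pat-name",
        "TABLEAU_PAT_VALUE": "your-pat-secret"
      }
    }
  }
}
```

Note what a file like this implies: registration happens per client, and each client holds a credential. Every tool configured this way becomes part of your governance surface, which is why access control has to be a deliberate design decision rather than a default.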
Embedded Analytics, also live, lets teams go from concept to dashboard in minutes—work that used to take weeks. Describe what you want in Claude and get a working dashboard back. Real environments are messier than demo data, so plan a pilot before building timelines around the demo experience.
Agentifying Actions: Where the Agent Starts to Move Things
Answering questions is one thing. Acting on the answers is the bigger step. This is where the third voice stops describing the organization and starts changing it.
Agentified Actions (in beta) lets Tableau connect to the operational systems where work actually happens (your CRM, ERP, ticketing, inventory) and act inside them.
Auto Mode goes further. Agents monitor your data, surface what needs attention, and act on it, either with a human approval step or autonomously, depending on how you configure it. Dashboards update in real time.
If your organization runs on demand forecasting, inventory, service operations, or anything else where lagging indicators cost real money, this is the announcement worth getting into the roadmap. Autonomy is a leadership decision before it’s a configuration decision. Decide which actions an agent can take without a human in the loop before turning the feature on, not after.
Agent Health Monitor: The Most Important Thing Tableau Shipped
Tableau also shipped an Agent Health Monitor, and it may be the most strategically important thing they announced.
AI agents can give confident-sounding wrong answers. Adoption looks fine in week one. Three months in, teams have routed around the agent, and nobody can say why.
The Health Monitor gives leadership a single place to see what the agent is actually doing: adoption rates, accuracy, conversation quality, which data sources it’s pulling from, and a prioritized list of issues with suggested fixes you can approve in one click.
The third voice in the conversation is the one you can’t read body language from. The Health Monitor is how you tell whether it’s earning its seat—and whether your AI investment is delivering.
What TC26 Means for Your Data Analytics Strategy
The announcements at TC26 add up to something bigger than analytics. A third voice in the conversation is a third participant in your decisions. Once an agent is shaping questions, surfacing answers, and acting on them, the work of preparing your data is really the work of choosing what decisions you trust agents to handle.
The organizations that get the most out of TC26 will be the ones whose data foundation is ready for an agent to use.
It’s the kind of work we do at Coastal. If you’re sorting through what TC26 means for your Tableau roadmap, our Data Strategy Lab is built for that conversation. In the lab, we help you identify high-value use cases, shape the supporting data models, and build the frameworks that turn AI experimentation into dependable performance at scale.