Most organizations have learned to deploy AI. Fewer have built the capacity to run it. These four AI business trends explain why.
Leaders Are Committed. Results Are Lagging.
In March 2026, Coastal partnered with Oxford Economics on a survey of 800 U.S. business and technology leaders. Every respondent had at least one AI initiative in production at the time.
- 84% believe AI is making their organization more competitive
- 74% are increasing AI investment over the next year
- 46% say AI has not met their expectations
- Only 20% strongly agree that AI has delivered measurable business value
Platforms like Salesforce Agentforce have made deployment fast, but the day-to-day work of running AI scales faster than the teams managing it.
Four AI business trends showed up in the data: how organizations manage their data, whether AI fits how people work, how clearly the business problem is defined before technology gets picked, and who owns AI after launch.
Trend 1: Preparing Data Gets AI Launched. Managing Data Keeps It Working.
AI runs on data, and the need for clean, accessible data grows as more use cases go live. Stopping data work at launch creates problems later.
- 70% cite data access or quality as a top reason AI initiatives stall during setup
- 73% hit data problems after launch, while running AI in production
- 36% report data accuracy or availability problems have significantly limited a deployed AI capability, or forced them to terminate it
A year ago, leaders asked whether their data could support AI at all. Today they ask where it can support a specific use case and let deployment reveal where to invest next. Pilots also surface categories of data earlier systems didn’t have to handle. Emails, PDFs, and attachments were written for people to read; now they’re active inputs for language models.
The organizations running AI well treat data management as ongoing work. They analyze data quality during pilot design to set realistic targets, ship AI and improve the data in parallel instead of waiting for full readiness, and treat AI underperformance as a diagnostic for what data needs work.
Trend 2: Employees Are Ready for AI, But AI Isn’t Ready for Them.
Adoption stalls even in organizations with enthusiastic employees. The harder part is building AI that fits how people actually work and earning the trust that keeps them using it.
- 77% of organizations say employees are eager to work with AI
- 73% still face adoption problems
- 50% report users don’t trust AI outputs enough to act on them
- 46% report outputs that don’t fit how people work
- 68% at the largest organizations ($20B+) say user adoption is where AI initiatives stall or fail (versus 57% across the market)
Designing trust and fit upfront costs less than retrofitting them after launch.
The report’s “AI Quicksand” table flags three places where the marketing and the reality don’t line up:
- Chatbots. Chat works for open-ended exploration. Most daily work happens inside click-based workflows; typing into a chatbot adds extra steps.
- Agentic everything. Agents handle bounded, deterministic tasks well. Judgment calls, unpredictable inputs, and unpredictable costs are where they get into trouble.
- Turnkey AI. Every deployment needs strategy, workflow fit, and change management on top of the platform.
Adoption holds where teams establish process baselines before AI ships, pick the right mode for each job (chat, embedded summary, headless automation, or voice), and build feedback loops with users. Declining usage and overrides are the earliest signals adoption is breaking down.
Trend 3: AI Is Only as Strategic as the Problem It’s Solving.
Most organizations start AI projects with the platform in hand. The business problem comes later, if at all.
- 43% start with the technology
- 24% start with a vendor-recommended use case
- 7% never formally settle on what they’re trying to solve
- Only 26% begin with a clearly defined business problem
All four approaches get AI into production. The differences show up in confidence to scale: 75% of problem-first and 71% of technology-first organizations are confident; vendor-recommended drops to 55%, and never-defined to 40%. Among the 24% that took a vendor-recommended use case, 64% say the solution turned out less valuable than expected.
Problem-first organizations also tend to have the practices to sustain what they build: 73% more likely to have dedicated AI ownership, 56% more likely to have formal governance, and 65% more likely to have AI embedded in workflows.
Define the business outcome before evaluating any platform; that outcome dictates which platform fits. Measure success in both efficiency and business terms, since time savings alone rarely justify the next round of investment.
Trend 4: AI Champions Are Everywhere. AI Owners Are Not.
Most organizations have plenty of AI champions and very few AI owners (someone whose primary job is keeping AI working).
- 1 in 6 organizations (18%) have a dedicated AI or transformation team
- 5 in 6 put AI under IT, business leadership, or an external partner
- 58% cite internal team bandwidth as a top barrier to running AI
- Dedicated AI teams are 3.6x more likely to have autonomous AI in production
Results follow the same pattern. Organizations with dedicated AI teams report 87% measurable business value (versus 71% average), 76% have four or more AI initiatives in production (versus 40%), and 30% strongly agree AI met expectations (versus 21% business-led and 12% IT-led).
One CTO in pharmaceuticals put the ambiguity plainly: “The worst mistake was not assigning clear responsibility; everyone thought IT was in charge, while IT thought operations was in charge, which caused things to stagnate.”
Agents encounter new conditions, platforms ship new capabilities, data shifts, and governance expands as use cases multiply. Without someone whose primary job is keeping AI working, no one does it.
The fix is dedicated ownership: one person or team with the mandate, time, and authority to run AI as their primary job, with executive backing. Governance gets built alongside it: what data AI can use, what decisions it can make, what limits apply.
The Work Ahead
Four gaps recur across the survey: data that degrades as agents depend on it, adoption that stalls despite employees who want to use AI, initiatives that were never connected to a business problem worth solving, and AI programs where everyone is involved but no one is accountable.
Most organizations bought AI as a technology project: select a platform, run a pilot, expand deployment. Running it well requires the operating model AI actually demands. The work follows a sequence:
- Problem. Define the business outcome and how you’ll measure success before evaluating any platform.
- Governance. Build the basics before the next pilot: what data is eligible, what decisions AI can make autonomously, what limits apply. The rules evolve with experience.
- Data and workflows. Analyze data and map workflows during pilot design. The analysis sets realistic accuracy targets; the mapping decides where agents fit into how people do the work.
- Adoption. Design AI around how people work: where users meet AI, how it fits the workflow, and what feedback gets captured. Retrofitting costs time, money, and trust.
- Operations. Use a dedicated team to manage AI post-launch. Running AI grows with every use case: monitoring performance, managing data, extending governance, and coaching agents as conditions change.
Each step sets up the next. Skip one, and the cost shows up downstream (with interest).
This analysis incorporates findings from Coastal and Oxford Economics’ March 2026 AI Operations Survey of 800 U.S. business and technology leaders, all from organizations with at least one AI initiative in production today.


