This post is based on a webinar we hosted with Salesforce Ben and BBBSA’s CTO, Travis Gibson. You can watch the full recording here.
When Big Brothers Big Sisters of America (BBBSA) set out to use AI to improve mentor-mentee matching, they didn’t build a chatbot. They didn’t ask their match specialists to start writing prompts.
They built a button.
That single design decision reflects how we approached the whole project. Big Brothers Big Sisters’ match specialists were spending hours sifting through profiles, parsing interview notes, and relying on memory to find the connections that led to lasting mentor-mentee relationships. The goal was to reduce that burden by giving them better information, faster, inside the tools they already use.
The result is Smart Match: an AI-assisted matching tool built on Agentforce and embedded directly into BBBSA’s Salesforce workflow. Here’s what we learned building it.
1. Embed Agentforce in existing Salesforce workflows.
Chat interfaces are powerful, but for a team that already lives inside Salesforce, a button is faster—and more importantly, it’s familiar.
BBBSA deliberately chose to build a click-only interface for Smart Match. Specialists search for matches the same way they always have. The AI works behind the scenes, surfacing a ranked list with plain-language explanations—no prompt engineering required.
More than a UX preference, it was a change management strategy. Travis put it plainly: “If you just drop a solution on people, you lose trust.” Embedding AI into an existing workflow, rather than asking people to adopt a new one, dramatically lowers the barrier to first use—and first use is where habits form.
2. Use Agentforce to rank candidates, not just filter them.
Standard Salesforce filters are great at telling you who doesn’t fit. Hard rules on location, age range, or background check status can efficiently eliminate mismatches. But filters can’t tell you who the best fit is. They reduce the list; they don’t prioritize it.
Smart Match was built to close that gap.
Before this solution, match specialists were reading dozens of long-form interview profiles and relying on memory to surface connections—the career interests of a Big (mentor) lining up with the aspirations of a Little (mentee), or shared hobbies that might seem minor but drive match longevity. The mental load was significant, and analysis paralysis was a documented problem.
The architecture separates two jobs that are easy to conflate. Hard filters run first to eliminate anyone who doesn’t qualify. Then, Agentforce ranks the remaining candidates based on semantic similarity across both structured data and unstructured interview notes. It returns the top 20 matches in order, each with an AI-generated summary explaining why they scored well.
The specialist still makes the call. But they’re reviewing the best options instead of sifting through everything.
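The two-stage flow described above can be sketched in a few lines. This is an illustrative sketch only: the field names, the `toy_similarity` function, and the distance threshold are invented for the example, standing in for Agentforce's semantic comparison over structured data and interview notes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    background_check_passed: bool
    distance_miles: float
    score: float = 0.0     # filled in by the ranking stage
    rationale: str = ""    # plain-language explanation for the specialist

def hard_filter(candidates, max_distance=25.0):
    """Stage 1: rule-based elimination. No AI involved."""
    return [c for c in candidates
            if c.background_check_passed and c.distance_miles <= max_distance]

def rank(candidates, similarity_fn, top_n=20):
    """Stage 2: score the qualifying pool and return the best matches in order."""
    for c in candidates:
        c.score, c.rationale = similarity_fn(c)
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_n]

# Toy stand-in for the model's semantic scoring: closer scores higher.
def toy_similarity(candidate):
    score = 1.0 / (1.0 + candidate.distance_miles)
    return score, f"{candidate.name} lives {candidate.distance_miles:.0f} miles away"

pool = [
    Candidate("A", True, 5.0),
    Candidate("B", False, 2.0),   # eliminated: failed background check
    Candidate("C", True, 40.0),   # eliminated: too far
    Candidate("D", True, 12.0),
]
ranked = rank(hard_filter(pool), toy_similarity)
print([c.name for c in ranked])  # → ['A', 'D']
```

The key point is the boundary: filtering is deterministic and happens first, so the ranking stage only ever sees candidates who already qualify.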
3. Use hybrid search for cleaner, more reliable results.
One of the most practical architectural decisions in Smart Match: don’t ask AI to do what a rule-based system already does better.
Drive time between a mentor and a mentee is a critical factor—and, critically, “distance” means something different in Los Angeles than in rural Ohio. BBBSA integrated Google Maps APIs to calculate drive time by car, transit, cycling, and walking. That calculation is precise, repeatable, and has nothing to do with language models.
Once the hard logistics are handled, then Agentforce takes over—analyzing compatibility signals in qualitative data: shared hobbies, communication styles, long-term goals, values expressed in interviews.
We call this a “hybrid search” pattern: pre-filter on hard criteria, then let AI rank within the qualifying pool. The separation keeps the logic clean, and the explainability keeps the specialist confident.
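The logistics half of that pattern can be sketched as a mode-aware travel-time check. The function names and the stub times below are hypothetical; in production the numbers would come from a service such as the Google Maps Distance Matrix API, and the threshold would be a per-region setting rather than a constant.

```python
# Hypothetical sketch of the logistics pre-filter. A stub supplies travel
# times; a real system would query a distance-matrix service instead.
TRAVEL_MODES = ("driving", "transit", "bicycling", "walking")

def travel_minutes(origin, destination, mode):
    """Stub standing in for a distance-matrix lookup between two addresses."""
    fake_times = {"driving": 18, "transit": 45, "bicycling": 60, "walking": 120}
    return fake_times[mode]

def within_reach(origin, destination, max_minutes):
    """A pair qualifies if ANY travel mode connects them within the limit.
    'Distance' means different things in Los Angeles and rural Ohio, so
    max_minutes is configured per region, not hard-coded."""
    return any(travel_minutes(origin, destination, m) <= max_minutes
               for m in TRAVEL_MODES)

# Urban agency with a tight 30-minute window: driving (18 min) qualifies.
print(within_reach("Big's address", "Little's address", max_minutes=30))  # → True
```

Because this check is precise and repeatable, there is no reason to route it through a language model; the model's effort is reserved for the qualitative signals that rules cannot capture.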
4. Let users revert to build trust in AI recommendations.
AI doesn’t behave the same way every time—the same model that produces a great recommendation on Monday can produce an unexpected one on Tuesday—and ignoring that in your rollout plan is how you lose users.
BBBSA’s pilot addressed this directly: specialists could revert to the existing matching process at any time if Smart Match’s recommendations felt off. Having an easy exit actually made people more willing to try it.
Travis described the trust framework in three parts: transparency (specialists can see why a match was recommended, not just whether it was), control (the specialist always makes the final decision), and consistency (recommendations that align with what experienced specialists already know to be true). AI was positioned as a “second brain”—a tool to support judgment, not replace it.
The team also ran a test-and-tune cycle with subject-matter experts before launch—iterating on the architecture and surfacing feedback directly into the design. Agencies were involved from requirements gathering through testing. By the time Smart Match went live, specialists had already shaped it.
5. Define hard rules AI cannot override in Salesforce.
For BBBSA, youth safety is non-negotiable. There are hard requirements for background checks, fingerprinting, and verified preferences related to gender and youth safety that must be confirmed before any match can proceed. These are not areas where a model gets to exercise judgment.
In AI system design, these are your “red lines”—the boundaries where a model is explicitly barred from making a call. Enforcing them through standard system logic (not AI) ensures that safety is governed by rules, not judgment calls, even as you innovate everywhere else.
This is a broadly applicable principle. Every organization has its own non-negotiables: a compliance requirement, a regulatory constraint, a policy that cannot bend. Identifying those red lines before you build is what gives the rest of the system a foundation to stand on.
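As an illustration only (the rule names and field names below are invented), red lines can be enforced as deterministic checks that run before any model is consulted:

```python
# Illustrative only: red lines are plain boolean rules evaluated BEFORE any
# AI ranking runs. A candidate who fails never reaches the model at all.
RED_LINES = {
    "background_check_cleared": lambda c: c.get("background_check") == "cleared",
    "fingerprints_verified":    lambda c: c.get("fingerprints_verified") is True,
    "preferences_confirmed":    lambda c: c.get("preferences_confirmed") is True,
}

def passes_red_lines(candidate):
    """Return (ok, failed_rules). Deterministic: no model output can override it."""
    failed = [name for name, rule in RED_LINES.items() if not rule(candidate)]
    return (not failed, failed)

ok, failed = passes_red_lines({"background_check": "pending",
                               "fingerprints_verified": True,
                               "preferences_confirmed": True})
print(ok, failed)  # → False ['background_check_cleared']
```

The structure matters more than the specifics: because the rules live in ordinary system logic, they are auditable, testable, and immune to whatever the model happens to generate.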
Smart Match Results After 90 Days
After 90 days, BBBSA is seeing a 4% lift in early match longevity—how long a mentor-mentee pair stays together—on an already-high 95% baseline. Big-Little pairs report stronger closeness and shared interests, with lower signals of early relational friction, like awkwardness or disengagement. This matters because higher relationship quality scores reflect the conditions for the deep, long-lasting connections that lead to greater impact on the lives of participating kids.
Additionally, agencies’ internal weekly match meetings, which used to run for an hour, have been cut in half or eliminated altogether.
Why Match Quality Determines Outcomes at BBBSA
Youth who had a “Big” earn 15% more over their lifetime and are 20% more likely to enroll in college. Those outcomes trace back to match quality, which depends on specialists being able to do their best work.
What Coastal and BBBSA built is a tool that reduces mental load so the people doing deeply relational work have more time for the high-value interactions that change lives. The hours reclaimed from shorter match meetings go back to supporting matches in progress.
Travis summed it up well: “AI frees up specialists from administrative tasks, so they have more time for these high-value interactions.”
It’s a useful frame for any organization evaluating where AI belongs in their work.
If you’re interested in becoming a Big, visit bbbs.org.
Ready to explore what Agentforce could do in your Salesforce org? Meet with our team.


